Columns: qid (int64), question (string), date (string), metadata (sequence of 3 strings), response_j (string), response_k (string), __index_level_0__ (int64)
19,298,147
I always get this error for this code: ``` Traceback (most recent call last): File "E:/ankosh/trial13.py", line 14, in <module> if grades_mix[index_no]=="HM1": IndexError: list index out of range ``` I would really appreciate the help. ``` file_pointer=open("C:/python27/Doc/student_grades.txt", "r") read_grades=file_pointer.readline() my_list=[] while 0==0: grades_mix=read_grades.split() name_str=grades_mix[0] contained_list=[name_str,0,0,0,0,0.0] index_no=1 count_num=0 sum_float=0.0 avg_float=0.0 while 0==0: if grades_mix[index_no]=="HM1": index_no+=1 grade_num=int(grades_mix[index_no]) count_num+=1 sum_float+=grade_num contained_list[1]=grade_num elif grades_mix[index_no]=="HM2": index_no+=1 grade_num=int(grades_mix[index_no]) count_num+=1 sum_float+=grade_num contained_list[2]=grade_num elif grades_mix[index_no]=="HM3": index_no+=1 grade_num=int(grades_mix[index_no]) count_num+=1 sum_float+=grade_num contained_list[3]=grade_num elif grades_mix[index_no]=="HM4": index_no+=1 grade_num=int(grades_mix[index_no]) count_num+=1 sum_float+=grade_num contained_list[4]=grade_num index_no+=1 if count_num>0: avg_float=sum_float/count_num contained_list[5]=avg_float index_num=0 while index_num<len(my_list): if my_list[0]>name_str: break index_no+=1 my_list.insert(index_num, contained_list) read_grades=file_pointer.readline() file_pointer.close() print format ("Name","<10")+" | "+format("HM1"," >5")+" | "+format("HM2"," >5")+" | "+format("HM3"," >5")+" | "+format("HM4"," >5")+" | "+format("avg_float"," <10")+" | " for index_no in range(0, len(my_list)): print format(my_list[index_num][0], "<10") + " | " + \ format(my_list[index_num][1], " >5") + " | " + \ format(my_list[index_num][2], " >5") + " | " + \ format(my_list[index_num][3], " >5") + " | " + \ format(my_list[index_num][4], " >5") + " | " + \ format(my_list[index_num][5], " >10.2f") + " | " ```
2013/10/10
[ "https://Stackoverflow.com/questions/19298147", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2833577/" ]
You have, in abbreviated form: ``` while 0==0: if grades_mix[index_no]=="HM1": index_no+=1 ``` with no `break` statement to ever get out of the loop (incidentally, `while True` is the idiomatic spelling of `while 0==0`). Sooner or later `index_no` is going to run past the end of the list.
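A minimal sketch of how that inner loop could be bounded instead; it assumes `grades_mix` comes from `read_grades.split()` as in the question, and the sample line and the `grade_slots` lookup dict are made up for illustration:

```
grades_mix = "Alice HM1 90 HM2 85".split()              # sample line in the question's format
contained_list = [grades_mix[0], 0, 0, 0, 0, 0.0]
count_num, sum_float = 0, 0.0
grade_slots = {"HM1": 1, "HM2": 2, "HM3": 3, "HM4": 4}  # keyword -> slot in contained_list

index_no = 1
while index_no < len(grades_mix) - 1:   # leave room for the value after the keyword
    keyword = grades_mix[index_no]
    if keyword in grade_slots:
        grade_num = int(grades_mix[index_no + 1])
        contained_list[grade_slots[keyword]] = grade_num
        count_num += 1
        sum_float += grade_num
        index_no += 2                   # consume the keyword and its grade together
    else:
        index_no += 1
```

Because the loop condition checks `index_no` against `len(grades_mix)`, it terminates even on a line where a keyword has no grade after it.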
The error message ``` IndexError: list index out of range ``` indicates that you are using an index that does not exist in the list. Try to check the size of the list by printing out its length: ``` print "length of array grades_mix =", len(grades_mix) ``` If the length of the list is less than or equal to 1, then you will get an error at the first step when calling ``` grades_mix[index_no] ``` with ``` index_no=1 ``` It might be that your file contains empty lines or lines with a single word; you need to take care of that in your script.
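A small guard along those lines, as a sketch; it assumes the file layout from the question (a name followed by keyword/grade pairs) and an illustrative file path:

```
with open("student_grades.txt") as file_pointer:    # path is illustrative
    for read_grades in file_pointer:
        grades_mix = read_grades.split()
        print(len(grades_mix))          # see how many fields each line really has
        if len(grades_mix) < 2:         # blank line, or a name with no grades
            continue                    # skip it instead of indexing past the end
        name_str = grades_mix[0]        # now safe: the line has at least two fields
```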
2,987
2,408,677
I'm trying to implement the calculation of the Pearson correlation coefficient between two sets of data in PHP. I'm just trying to port the Python script that can be found at this URL <http://answers.oreilly.com/topic/1066-how-to-find-similar-users-with-python/> My implementation is the following: ``` class LB_Similarity_PearsonCorrelation implements LB_Similarity_Interface{ public function similarity($user1, $user2){ $sharedItem = array(); $pref1 = array(); $pref2 = array(); $result1 = $user1->fetchAllPreferences(); $result2 = $user2->fetchAllPreferences(); foreach($result1 as $pref){ $pref1[$pref->item_id] = $pref->rate; } foreach($result2 as $pref){ $pref2[$pref->item_id] = $pref->rate; } foreach ($pref1 as $item => $preferenza){ if(key_exists($item,$pref2)){ $sharedItem[$item] = 1; } } $n = count($sharedItem); if ($n == 0) return 0; $sum1 = 0;$sum2 = 0;$sumSq1 = 0;$sumSq2 = 0;$pSum = 0; foreach ($sharedItem as $item_id => $pre) { $sum1 += $pref1[$item_id]; $sum2 += $pref2[$item_id]; $sumSq1 += pow($pref1[$item_id],2); $sumSq2 += pow($pref2[$item_id],2); $pSum += $pref1[$item_id] * $pref2[$item_id]; } $num = $pSum - (($sum1 * $sum2) / $n); $den = sqrt(($sumSq1 - pow($sum1,2)/$n) * ($sumSq2 - pow($sum2,2)/$n)); if ($den == 0) return 0; return $num/$den; } } ``` A clarification to better understand the code: the method fetchAllPreferences returns a set of objects that are actually the items; I turn them into an array for ease of management. I'm not sure that this implementation is correct; in particular I have some doubts about the correctness of the calculation of the denominator. Any advice is welcome. Thanks in advance!
2010/03/09
[ "https://Stackoverflow.com/questions/2408677", "https://Stackoverflow.com", "https://Stackoverflow.com/users/76593/" ]
Your algorithm looks mathematically correct but numerically unstable. Finding the sum of squares explicitly is a recipe for disaster. What if you have numbers like `array(10000000001, 10000000002, 10000000003)`? A numerically stable one-pass algorithm for calculating the variance can be [found on Wikipedia](http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance), and the same principle can be applied to computing the covariance. Easier yet, if you don't care much about speed, you could just use two passes. Find the means in the first pass, then compute the variances and covariances using the textbook formula in the second pass.
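For illustration, here is the two-pass version in Python rather than PHP: a sketch with made-up names and fabricated sample data.

```
import math

def pearson(xs, ys):
    # Two-pass Pearson correlation: compute the means first,
    # then sum the centered products and squares.
    n = len(xs)
    if n == 0:
        return 0.0
    mean_x = sum(xs) / float(n)
    mean_y = sum(ys) / float(n)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    if var_x == 0 or var_y == 0:
        return 0.0
    return cov / math.sqrt(var_x * var_y)

print(pearson([1.0, 2.0, 3.0], [2.1, 3.9, 6.0]))   # ~0.999 on this fabricated data
```

Centering on the means first avoids the subtraction `$pSum - ($sum1 * $sum2) / $n` of two huge, nearly equal quantities, which is exactly where the one-pass textbook formula loses precision.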
Try my package here: <http://www.phpclasses.org/browse/package/5854.html>
2,988
17,550,136
My background is in more managed languages (C#, Python) but I am becoming more experienced in C/C++. I am familiar with why the member selection by object (.) and member selection through pointer (->) operators are different. In all cases I have encountered, using the incorrect one results in a compile error. If that is the case, then why were they not made into one operator? Is there a case where using either on the same object gives different, meaningful and useful results? This question was inspired by this answer: [Is this right way to call a function in c++?](https://stackoverflow.com/questions/17548007/is-this-right-way-to-call-a-function-in-c/17548054#17548054)
2013/07/09
[ "https://Stackoverflow.com/questions/17550136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518451/" ]
In C++ you can overload the `->` operator, which is used in pretty much all smart pointer implementations. However, some of those also have their own methods, e.g. to release a reference. ``` struct test { int x; }; std::shared_ptr<test> ptr(new test); // Write to member x of the allocated object ptr->x = 3; // Reset the shared pointer to point to a different object. // If there are no further shared_ptrs pointing to the previously allocated one, // it is deleted. ptr.reset(new test); ``` Additionally, it would be quite messy for the compiler to resolve operator `.` for something like multiple-level pointers, e.g. `test*** ptr`. With your logic, `ptr.x`, `(*ptr).x`, `(**ptr).x` and `(***ptr).x` would all be the same.
You cannot apply `->` to a reference to a basic type and you cannot apply `.` to a pointer, but you can apply both to a user-defined type and they will have different meanings. The simplest example is a smart pointer, like `std::shared_ptr`: ``` struct A { int x; }; std::shared_ptr<A> p(new A); p->x = 10; p.reset(); ```
2,990
72,623,017
As of May 30th, SMTP is no longer accepted. <https://support.google.com/accounts/answer/6010255?hl=en&ref_topic=7188673> What is the new way to make a simple Python emailer, rather than a full application with the "login with google" option? Not sure why I was asked for the code and error, given that I already diagnosed the issue and was asking for alternative methods. Here it is. It's a handy emailer that texts me to work out when I work at home. ``` import time import smtplib import random gmail_user = 'usernameImNotSharing@gmail.com' gmail_password = 'TheCorrectPassword' sent_from = gmail_user to = ['myphonenumber@vtext.com'] exercises = ['push ups', 'jumps in place', '20lb curls', 'tricep extensions', 'quarter mile runs'] levels = [1, 2, 3] level1 = ['10', '15', '16', '20', '1'] level2 = ['15', '30', '30', '40', '2'] level3 = ['20', '50', '48', '70', '4'] while True: if int(time.strftime('%H')) > 9: if int(time.strftime('%H')) < 23: abc = random.uniform(0, 1) picker = random.randint(0, 4) if abc < 0.3: level = level1 if 0.3 < abc and abc < 0.8: level = level2 if abc > 0.8: level = level3 exercise = exercises[picker] amount = level[picker] try: subject = f'Test' body = f'Do {amount} {exercise}' server = smtplib.SMTP_SSL('smtp.gmail.com', 465) server.ehlo() server.login(gmail_user, gmail_password) server.sendmail(sent_from, to, body) server.close() print('Email sent!') except Exception as error: print(error) time.sleep(random.randint(1500, 4800)) time.sleep(100) ``` error: > > (535, b'5.7.8 Username and Password not accepted. Learn more at\n5.7.8 <https://support.google.com/mail/?p=BadCredentials> jj1-20020a170903048100b00163247b64bfsm7655137plb.115 - gsmtp') > > > Solved below: SMTP is still accepted for app passwords. App password creation steps can be found here, but you must enable two-factor auth before app passwords can be created. <https://support.google.com/accounts/answer/185833> <https://myaccount.google.com/security>
2022/06/14
[ "https://Stackoverflow.com/questions/72623017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12339133/" ]
Correction: after May 30, 2022, sending the user's actual password is no longer accepted by Google's SMTP server. You should configure an [app password](https://support.google.com/accounts/answer/185833?hl=en#:%7E:text=An%20App%20Password%20is%20a,2%2DStep%20Verification%20turned%20on.); this works. Then replace the password in your code with the new app password. > > An App Password is a 16-digit passcode that gives a less secure app or device permission to access your Google Account. App Passwords can only be used with accounts that have 2-Step Verification turned on. > > > ``` gmail_user = 'usernameImNotSharing@gmail.com' gmail_password = 'AppsPassword' ``` Another option is to use [Xoauth2](https://developers.google.com/gmail/imap/xoauth2-protocol) [Quick fix for SMTP username and password not accepted error](https://www.youtube.com/watch?v=Y_u5KIeXiVI)
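A minimal end-to-end sketch with an app password in place; the addresses and the 16-character password below are placeholders:

```
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Test"
msg["From"] = "usernameImNotSharing@gmail.com"
msg["To"] = "myphonenumber@vtext.com"
msg.set_content("Do 20 push ups")

# Log in with the 16-character app password, not the account password.
with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login("usernameImNotSharing@gmail.com", "abcdefghijklmnop")
    server.send_message(msg)
```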
``` import smtplib host = "server.smtp.com" server = smtplib.SMTP(host) FROM = "testpython@test.com" TO = "bla@test.com" MSG = "Subject: Test email python\n\nBody of your message!" server.sendmail(FROM, TO, MSG) server.quit() print("Email sent") ```
2,993
65,526,849
I run through the following steps to attempt to start up an app for production: ``` -Setup a virtualenv for the python dependencies: virtualenv -p /usr/bin/python3.8 ~/app_env -Install pip dependencies: . ~/app_env/bin/activate && pip install -r ~/app/requirements.txt -Un-comment the lines for privilege dropping in uwsgi.ini and change the uid and gid to your account name -Login to root with sudo -s and re-source the env with source /home/usr/app_env/bin/activate -Set the courthouse to production mode by setting the environment variable with export PRODUCTION=1 -Start the app: cd /home/usr/app && ./start_script.sh ``` And I get the following error: ``` (app_env) root@usr-Spin-SP314-53N:/home/usr/Desktop/app# ./start.sh uwsgi: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory ``` I tried a few things, such as installing a newer libpcre version as mentioned [here](https://github.com/facebook/watchman/issues/522); I also tried the steps mentioned [here](https://stackoverflow.com/questions/43301339/pcre-issue-when-setting-up-wsgi-application/50087846), but that didn't work. Also, the environment I'm setting up doesn't use Anaconda but regular Python. I even tried `pip install uwsgi` in my virtual env, but it said the requirement was already satisfied. I'm not much of an expert when it comes to somewhat complex package management like this, so help with how to solve this would be greatly appreciated. Thanks. I'm on Ubuntu 20.04, using Python 3.8.
2021/01/01
[ "https://Stackoverflow.com/questions/65526849", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12027484/" ]
What solved it for me was apparently just reinstalling uWSGI, like in [this](https://stackoverflow.com/questions/21669354/rebuild-uwsgi-with-pcre-support?noredirect=1&lq=1) thread, in my virtual env, while forcing it to ignore the cache so it would pick up the pcre library I installed. In order: running this ``` uwsgi --version ``` was giving me this ``` uwsgi: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory ``` So I made sure I had the latest libpcre installed ``` sudo apt-get install libpcre3-dev ``` And then what linked it all together was this ``` pip install uwsgi -I --no-cache-dir ```
I tried to solve this error and even reinstalled uwsgi, but nothing worked; the following two lines solved my problem: ``` sudo find / -name libpcre.so.* ``` Then create the symlink, changing `/home/anaconda3/lib/libpcre.so.1` to the path that the find command above prints: ``` sudo ln -s /home/anaconda3/lib/libpcre.so.1 /lib ```
2,994
41,285,789
I came across a Python library whose docs start like this: > > Quickstart > > > Include foolib in your requirements.txt file. > > > AFAIK dependencies should be specified via `install_requires` in `setup.py`. Should I talk to the maintainer of the library and create a pull request for the docs?
2016/12/22
[ "https://Stackoverflow.com/questions/41285789", "https://Stackoverflow.com", "https://Stackoverflow.com/users/633961/" ]
Both are acceptable. The difference is that specifying something in your `install_requires` will auto-download / install that package when you install the package using setup.py. Having a `requirements.txt` makes it easier to see at a glance what the requirements are. I personally prefer seeing libraries with a `requirements.txt`, since I can install all those requirements with `pip` into my virtualenv and be able to update them quickly if needed.
Add your dependencies in a requirements file and then parse this file in the setup.py. This will help you to: * Easily install dependencies without installing the entire package through pip * Keep a single source for your dependencies * Keep every way of installing your package available (pip, easy_install, command line, etc.)
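A sketch of that pattern, assuming a flat `requirements.txt` with one requirement per line (pip-specific lines such as `-r` or `-e` are not handled here, and the package name is illustrative):

```
from setuptools import setup

def load_requirements(path="requirements.txt"):
    # One requirement per line; skip blanks and comments.
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.startswith("#")]

setup(
    name="foolib",          # illustrative package name
    version="0.1.0",
    install_requires=load_requirements(),
)
```

This way `pip install -r requirements.txt` and `python setup.py install` stay in sync from a single list.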
2,995
55,721,564
Here is the `smem` command I run on a Red Hat/CentOS Linux system. I want the output printed without the rows whose swap size is zero, while still keeping the heading columns. ``` smem -kt -c "pid user command swap" PID User Command Swap 7894 root /sbin/agetty --noclear tty1 0 9666 root ./nimbus /opt/nimsoft 0 7850 root /sbin/auditd 236.0K 7885 root /usr/sbin/irqbalance --fore 0 11205 root nimbus(hdb) 0 10701 root nimbus(spooler) 0 8446 trapsanalyzer1 /opt/traps/analyzerd/analyz 0 50316 apache /usr/sbin/httpd -DFOREGROUN 0 50310 apache /usr/sbin/httpd -DFOREGROUN 0 3971 root /usr/sbin/lvmetad -f 36.0K 63988 root su - 0 7905 ntp /usr/sbin/ntpd -u ntp:ntp - 4.0K 7876 dbus /usr/bin/dbus-daemon --syst 44.0K 9672 root nimbus(controller) 0 7888 root /usr/lib/systemd/systemd-lo 0 63990 root -bash 0 59978 postfix pickup -l -t unix -u 0 3977 root /usr/lib/systemd/systemd-ud 736.0K 9016 postfix qmgr -l -t unix -u 0 50303 root /usr/sbin/httpd -DFOREGROUN 0 3941 root /usr/lib/systemd/systemd-jo 52.0K 8199 root //usr/lib/vmware-caf/pme/bi 0 8598 daemon /opt/quest/sbin/.vasd -p /v 0 8131 root /usr/sbin/vmtoolsd 0 7881 root /usr/sbin/NetworkManager -- 8.0K 8364 root /opt/puppetlabs/puppet/bin/ 0 8616 daemon /opt/quest/sbin/.vasd -p /v 0 23290 root /usr/sbin/rsyslogd -n 3.8M 64091 root python /bin/smem -kt -c pid 0 7887 polkitd /usr/lib/polkit-1/polkitd - 0 8363 root /usr/bin/python2 -Es /usr/s 0 53606 root /usr/share/metricbeat/bin/m 0 24631 nagios /usr/local/ncpa/ncpa_passiv 0 24582 nagios /usr/local/ncpa/ncpa_listen 0 7886 root /opt/traps/bin/authorized 76.0K 7872 root /opt/traps/bin/pmd 12.0K 8374 root /opt/puppetlabs/puppet/bin/ 0 7883 root /opt/traps/bin/trapsd 64.0K ---------------------------------------------------- 54 10 5.1M ```
2019/04/17
[ "https://Stackoverflow.com/questions/55721564", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11096022/" ]
Like this?: ``` $ awk '$NF!=0' file PID User Command Swap 7850 root /sbin/auditd 236.0K ... 7883 root /opt/traps/bin/trapsd 64.0K ---------------------------------------------------- 54 10 5.1M ``` But instead of using the form `awk ... file` you'd probably like to `smem ... | awk '$NF!=0'`.
Could you please try the following; as an extra precaution it removes trailing space from the last field (in case it is there). ``` smem -kt -c "pid user command swap" | awk 'FNR==1{print;next} {sub(/[[:space:]]+$/,"")} $NF==0{next} 1' ```
2,996
35,282,609
I'm a new Python programmer working through the book *Automate the Boring Stuff with Python*. One of the end-of-chapter projects is to build a mad libs program. Based on what has been introduced so far, I think that the author intends for me to use regular expressions. Here is my code: ``` #! python3 # # madlibs.py - reads a text file and lets the user add their own text # anywhere the words ADJECTIVE, NOUN, ADVERB, or VERB appear in the text # file. import sys, re, copy # open text file, save text to variable if len(sys.argv) == 2: print('Opening text file...') textSource = open(sys.argv[1]) textContent = textSource.read() textSource.close() else: print('Usage: madlibs.py <textSource>') # locate instances of keywords keywordRegex = re.compile(r'ADJECTIVE|NOUN|ADVERB|VERB', re.I) matches = keywordRegex.findall(textContent) # prompt user to replace keywords with their own input answers = copy.copy(matches) for i in range(len(answers)): answers[i] = input() # create a new text file with the end result for i in range(len(matches)): findMatch = re.compile(matches[i]) textContent = findMatch.sub(answers[i], textContent) print(textContent) textEdited = open('madlibbed.txt', 'w') textEdited.write(textContent) textEdited.close() ``` The input I'm using for textSource is a text file that reads: > > This is the test source file. It has the keyword ADJECTIVE in it, as well as the keyword NOUN. Also, it has another instance of NOUN and then one of ADVERB. > > > My problem is that the findMatch.sub method is replacing both of the instances of NOUN at once. I understand that this is how the sub() method works, but I'm having trouble thinking of a simple way to work around it. How can I design this program so that it only targets and replaces one keyword at a time? I don't want all NOUNS to be replaced with the same word, but rather with different words corresponding to the order in which the user types them.
2016/02/09
[ "https://Stackoverflow.com/questions/35282609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5901044/" ]
This is covered in detail in [BashFAQ #004](http://mywiki.wooledge.org/BashFAQ/004). Notably, [use of `ls` for this purpose is an antipattern and should be avoided](http://mywiki.wooledge.org/ParsingLs). ``` shopt -s dotglob # if including hidden files is desired files=( "$dir"/* ) [[ -e $files || -L $files ]] && echo "Directory is not empty" ``` `[[ -e $files ]]` doesn't actually check if the entire array's contents exist; rather, it checks the first name returned -- which handles the case when no files match, wherein the glob expression itself is returned as the sole result. --- Notably: * This is **far faster** than invoking `ls`, which requires using `fork()` to spawn a subshell, `execve()` to replace that subshell with `/bin/ls`, the operating system's dynamic linker to load shared libraries used by the `ls` binary, etc, etc. [An exception to this is extremely large directories, of tens of thousands of files -- a case in which `ls` will also be slow; see the `find`-based solution below for those]. * This is **more correct** than invoking `ls`: The list of files returned by globbing is guaranteed to exactly match the literal names of files, whereas `ls` can munge names with hidden characters. If the first entry is a valid filename, `"${files[@]}"` can be safely iterated over with assurance that each returned value will be a name, and there's no need to worry about filesystems with literal newlines in their names inflating the count if the local `ls` implementation does not escape them. --- That said, an alternative approach is to use `find`, if you have one with the `-empty` extension (available both from GNU find and from modern BSDs including Mac OS): ``` [[ $(find -H "$dir" -maxdepth 0 -type d -empty) ]] || echo "Directory is not empty" ``` ...if *any* result is given, the directory is nonempty. While slower than globbing on directories which are not unusually large, this is faster than *either* `ls` or globbing for extremely large directories not present in the direntry cache, as it can return results without a full scan.
**Robust pure Bash** solutions: For background on ***why* a pure Bash solution with globbing is superior to using `ls`**, see **[Charles Duffy's helpful answer](https://stackoverflow.com/a/35282784/45375)**, which also contains a **`find`-based alternative**, which is **much faster and less memory-intensive with *large* directories**.[1] Also consider **anubhava's equally fast and memory-efficient [`stat`-based answer](https://stackoverflow.com/a/35284977/45375)**, which, however, requires distinct syntax forms on Linux and BSD/OSX. *Updated* to a simpler solution, gratefully adapted from [this answer](https://stackoverflow.com/a/43402951/45375). ``` # EXCLUDING hidden files and folders - note the *quoted* use of glob '*' if compgen -G '*' >/dev/null; then echo 'not empty' else echo 'empty, but may have hidden files/dirs.' fi ``` * `compgen -G` is normally used for tab completion, but it is useful in this case as well: + Note that `compgen -G` does its *own* globbing, so you must pass it the glob (filename pattern) *in quotes* for it to output all matches. In this particular case, even passing an *unquoted* pattern up front would work, but the difference is worth noting. + if nothing matches, `compgen -G` *always* produces *no* output (irrespective of the state of the `nullglob` option), and it indicates via its exit code whether at least 1 match was found, which is what the conditional takes advantage of (while suppressing any stdout output with `>/dev/null`). ``` # INCLUDING hidden files and folders - note the *unquoted* use of glob * if (shopt -s dotglob; compgen -G * >/dev/null); then echo 'not empty' else echo 'completely empty' fi ``` * `compgen -G` *never* matches *hidden* items (irrespective of the state of the `dotglob` option), so a workaround is needed to find hidden items too: + `(...)` creates a subshell for the conditional; that is, the commands executed in the subshell don't affect the current shell's environment, which allows us to set the `dotglob` option in a localized way. + `shopt -s dotglob` causes `*` to match hidden items too (except for `.` and `..`). + `compgen -G *` with *unquoted* `*`, thanks to *up-front* expansion by the shell, is either passed at least one filename, whether hidden or not (additional filenames are ignored) or the empty string, if neither hidden nor non-hidden items exists. In the former case the exit code is `0` (signaling success and therefore a nonempty directory), in the latter `1` (signaling a truly empty directory). --- [1] This answer originally *falsely* claimed to offer a Bash-only solution that is efficient with large directories, based on the following approach: `(shopt -s nullglob dotglob; for f in "$dir"/*; do exit 0; done; exit 1)`. This is NOT more efficient, because, internally, Bash *still* collects all matches in an array first before entering the loop - in other words: `for *` is *not* evaluated lazily.
2,997
13,283,253
I just upgraded to Django 1.4 and it has broken a couple of things, including messaging. Here's the error I get when trying to change an avatar: ``` 'User' object has no attribute 'message_set' Exception Location: /Users/nb/Desktop/myenv2/lib/python2.7/site-packages/django/utils/functional.py in inner, line 185 ``` Traceback: ``` File "/Users/nb/Desktop/myenv2/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response 111. response = callback(request, *callback_args, **callback_kwargs) File "/Users/nb/Desktop/myenv2/lib/python2.7/site-packages/django/contrib/auth/decorators.py" in _wrapped_view 20. return view_func(request, *args, **kwargs) File "/Users/nb/Desktop/spicestore/apps/avatar/views.py" in change 76. request.user.message_set.create( File "/Users/nb/Desktop/myenv2/lib/python2.7/site-packages/django/utils/functional.py" in inner 185. return func(self._wrapped, *args) Exception Type: AttributeError at /avatar/change/ Exception Value: 'User' object has no attribute 'message_set' ``` Also, messaging no longer works on the site. What are the changes in Django 1.4 that could be causing this, and has anyone overcome a similar issue?
2012/11/08
[ "https://Stackoverflow.com/questions/13283253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1328021/" ]
Django introduced a messages app in 1.2 ([release notes](https://docs.djangoproject.com/en/dev/releases/1.2/#messages-framework)), and deprecated the old user messages API. In Django 1.4, the old `message_set` API has been removed completely, so you'll have to update your code. If you follow the [messages docs](https://docs.djangoproject.com/en/dev/ref/contrib/messages/), you should find it pretty straightforward.
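For example, a view that previously called `request.user.message_set.create(...)` would now do something like this (a sketch; the view body, message text, and URL name are illustrative):

```
from django.contrib import messages
from django.shortcuts import redirect

def change(request):
    # ... update the avatar here ...
    messages.success(request, "Successfully updated your avatar.")
    return redirect("avatar_change")   # illustrative URL name
```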
What is in your `INSTALLED_APPS` in your `settings.py`? Do you have `'django.contrib.messages',` included there? Something like: ``` INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.humanize', ... ```
3,000
64,518,660
I am currently working with an API in Python and trying to retrieve previous institution IDs of certain authors. I have come to this point ```py my_auth.hist_names['affiliation'] ``` which outputs: ``` [{'@_fa': 'true', '@id': '60016491', '@href': 'http://api.elsevier.com/content/affiliation/affiliation_id/60016491'}, {'@_fa': 'true', '@id': '60023955', '@href': 'http://api.elsevier.com/content/affiliation/affiliation_id/60023955'}, {'@_fa': 'true', '@id': '109604360', '@href': 'http://api.elsevier.com/content/affiliation/affiliation_id/109604360'}, {'@_fa': 'true', '@id': '112377026', '@href': 'http://api.elsevier.com/content/affiliation/affiliation_id/112377026'}, {'@_fa': 'true', '@id': '112678642', '@href': 'http://api.elsevier.com/content/affiliation/affiliation_id/112678642'}, {'@_fa': 'true', '@id': '60031106', '@href': 'http://api.elsevier.com/content/affiliation/affiliation_id/60031106'}] ``` The type here is a list. I'd like to use this list as a dictionary to retrieve the `'@id'` values
2020/10/24
[ "https://Stackoverflow.com/questions/64518660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12758690/" ]
@Anton Sizikov, thanks for the reply, but my problem was a different one. Unlike DownloadPipelineArtifact@2, the DownloadBuildArtifacts@0 task did not work for me when entering the project name and pipeline: ``` - task: DownloadBuildArtifacts@0 inputs: buildType: 'current' project: 'Leaf' pipeline: 'Leaf' buildVersionToDownload: 'latest' branchName: 'refs/heads/develop' downloadType: 'specific' itemPattern: '**/*.exe' downloadPath: $(Build.ArtifactStagingDirectory) ``` By chance I decided to create the download task through the Azure interface, which changed the values of the "pipeline" and "project" fields (presumably to their IDs). Anyway, the task looked like this: ``` - task: DownloadBuildArtifacts@0 inputs: buildType: 'specific' project: '8c3c84b6-802b-4187-a1bb-b75ac9c7d48e' pipeline: '4' specificBuildWithTriggering: true buildVersionToDownload: 'latest' allowPartiallySucceededBuilds: true downloadType: 'specific' itemPattern: '**/*.exe' downloadPath: '$(System.ArtifactsDirectory)' ``` Now I can see the downloaded artifact using the dir command.
Based on your log I can see that your artifact was downloaded to the `$(Build.ArtifactStagingDirectory)` directory, which is `D:\a\1\a` in your case. Then you run the `dir` command there: ```sh Successfully downloaded artifacts to D:\a\1\a 2020-10-24T22:25:48.7993950Z Directory of D:\a\1\a 2020-10-24T22:25:48.7994230Z 2020-10-24T22:25:48.7994896Z 10/24/2020 10:24 PM <DIR> . 2020-10-24T22:25:48.7995491Z 10/24/2020 10:24 PM <DIR> .. 2020-10-24T22:25:48.7999544Z 0 File(s) 0 bytes 2020-10-24T22:25:48.8000346Z 2 Dir(s) 11,552,690,176 bytes free ``` [Download Build Artifacts task](https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/download-build-artifacts?view=azure-devops) works slightly differently compared to the old task. It puts the files into a `<downloadPath>/<artifact-name>` directory. You can see there you've got two of them in your expected path. This is [a known issue](https://github.com/Microsoft/azure-pipelines-tasks/issues/6739).
3,005
32,342,262
I am looking for a way to search a large string for a large number of equal-length substrings. My current method is basically this: ``` offset = 0 found = [] while offset < len(haystack): current_chunk = haystack[offset*8:offset*8+8] if current_chunk in needles: found.append(current_chunk) offset += 1 ``` This is painfully slow. Is there a better Python way of doing this?
2015/09/01
[ "https://Stackoverflow.com/questions/32342262", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2448592/" ]
More Pythonic, much faster: ``` for needle in needles: if needle in haystack: found.append(needle) ``` Edit: with some limited testing, here are the results. **This algorithm:** 0.000135183334351 **Your algorithm:** 0.984048128128 Much faster.
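If `needles` is a list, each `in` test is a linear scan; converting it to a set first makes membership O(1) on average while keeping the original chunk-by-chunk semantics. A sketch with fabricated data:

```
haystack = "abcdefgh" * 1000 + "zzzzzzzz"   # fabricated example data
needles = ["zzzzzzzz", "notthere"]
chunk_size = 8

needle_set = set(needles)                   # hashing makes each `in` test O(1) on average
found = []
for i in range(0, len(haystack), chunk_size):
    chunk = haystack[i:i + chunk_size]
    if chunk in needle_set:
        found.append(chunk)

print(found)                                # ['zzzzzzzz']
```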
I think you can break the work up across multiple cores and parallelize your search. Something along the lines of: ``` from functools import partial from multiprocessing import Pool text = "Your very long string" needles = ["abcdefgh", "ijklmnop"] # the fixed-length substrings to look for def chunks(l, n): """A generator function for chopping up a given list into chunks of length n.""" for i in xrange(0, len(l), n): yield l[i:i+n] def searchHaystack(haystack, needles): offset = 0 found = [] while offset * 8 < len(haystack): # bound by characters, not loop iterations current_chunk = haystack[offset*8:offset*8+8] if current_chunk in needles: found.append(current_chunk) offset += 1 return found # Build a pool of 8 processes pool = Pool(processes=8) # Fragment the string data into 8 chunks partitioned_text = list(chunks(text, len(text) / 8)) # map() takes a single iterable, so bind the needles argument with functools.partial; # the result is one list of found chunks per text fragment all_the_needles = pool.map(partial(searchHaystack, needles=needles), partitioned_text) ```
3,006
37,661,456
I would like to use Spark JDBC with Python. The first step was to add a jar: ``` %AddJar http://central.maven.org/maven2/org/apache/hive/hive-jdbc/2.0.0/hive-jdbc-2.0.0.jar -f ``` However, the response: ``` ERROR: Line magic function `%AddJar` not found. ``` How can I add JDBC jar files in a Python script?
2016/06/06
[ "https://Stackoverflow.com/questions/37661456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1033422/" ]
Presently, this is not possible from a Python notebook alone, but it is understood as an important requirement. What you can do until this is supported is, from the same Spark service instance as your Python notebook, create a Scala notebook and `%AddJar` from there. Then all Python notebooks of that same Spark service instance can access it. For Python notebooks that were active when you added the jar from the Scala notebook, you will need to restart their kernels. Note that this works for notebook instances on Jupyter 4+ but not necessarily for earlier IPython notebook instances; check the version from the Help -> About menu in a notebook. Any new notebook instances created recently will be on Jupyter 4+.
I don't think this is possible in the notebook's Python kernel, as `%AddJar` is a Scala-kernel magic function in the notebook. You would need to rely on the service provider to add this jar to the Python kernel. Another thing you could try is `sc.addJar()`, but I'm not sure how it would work. [Add jar to pyspark when using notebook](https://stackoverflow.com/questions/31677345/add-jar-to-pyspark-when-using-notebook) Thanks, Charles.
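On newer setups, another route is to hand the jar to Spark when the context is created. A sketch, assuming PySpark is available and the jar has already been downloaded to an illustrative local path:

```
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("jdbc-example")
        # Ship a local jar to the driver and the executors.
        .set("spark.jars", "/tmp/hive-jdbc-2.0.0.jar"))

sc = SparkContext(conf=conf)
```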
3,007
36,533,759
I have a file.dat of this type, but with a lot more data: ``` Apr 1 18:15 [n1_Cam_A_120213_O.fits]: 4101.77 1. -3.5612 3.561 -0.278635 4.707 6.448 #data1 0.03223 0. 0.05278 0.05278 0.00237 0.4393 0.4125 #error1 4088.9 1. -0.404974 0.405 -0.06538 5.819 0. #data2 0. 0. 0.01559 0.01559 0.00277 0.1717 0. #error2 4116.4 1. -0.225521 0.2255 -0.041111 5.153 0. #data3 0. 0. 0.01947 0.01947 0.00368 0.4748 0. #error3 4120.8 1. -0.382279 0.3823 -0.062194 5.774 0. #data4 0. 0. 0.01873 0.01873 0.00311 0.3565 0. #error4 Apr 1 18:15 [n1_Cam_B_120213_O.fits]: 4101.767 0.9999 -4.57791 4.578 -0.388646 0.03091 7.499 #data1 0.0293 0. 0.03447 0.03447 0.00243 0.00873 0.07529 #error1 4088.9 1. -0.211493 0.2115 -0.080003 2.483 0. 0. 0. 0.01091 0.01091 0.00327 0.1275 0. 4116.4 1. -0.237161 0.2372 -0.040493 5.502 0. 0. 0. 0.02052 0.02052 0.00231 0.5069 0. 4120.8 1. -0.320798 0.3208 -0.108827 2.769 0. 0. 0. 0.0167 0.0167 0.00404 0.1165 0. ``` The first row of each dataset contains a name.fits, even rows contain values, and odd rows (except the first) contain the errors of the values in the row before. Then comes a blank row and it starts again. What I need is to separate the information into different files in this way: ``` name1.fits data1[1] err1[1] data1[2] err1[2] data1[3] err1[3]... name2.fits data1[1] err1[1] data1[2] err1[2] data1[3] err1[3]... ``` So the next file would be ``` name1.fits data2[1] err2[1] data2[2] err2[2] data2[3] err2[3]... name2.fits data2[1] err2[1] data2[2] err2[2] data2[3] err2[3]... ``` Then the first new file of my data would look like: ``` n1_Cam_A_120213_O.fits 4101.77 0.03223 1. 0. -3.5612 0.05278 3.561 0.05278 -0.278635 0.00237 4.707 0.4393 6.448 0.4125 n1_Cam_B_120213_O.fits 4101.767 0.0293 0.9999 0. -4.57791 0.03447 4.578 0.03447 -0.388646 0.00243 0.03091 0.00873 7.499 0.07529 ``` Here is what I have so far: ``` with open('file.dat','r') as data, open('names.txt', 'a') as nam, open('values.txt', 'a') as val, open('errors.txt', 'a') as err: for lines in data.readlines(): cols = lines.split() if "fits" in lines: header = lines.split() nam.write(header[3]) elif float(cols[0]) > 1: #print cols[0] x=str(cols) val.write(x) elif float(cols[0]) < 1: #print cols[0] y=str(cols) err.write(y) ``` I'm just starting with Python. My idea was to separate names, values and errors into different files and then select the rows and columns that I need. But since I'll be dealing with hundreds of rows and files, I would like a more automatic approach. What I want is to read the first 3 rows and write file1, then rows 1,4,5 and write file2, then rows 1,6,7 and write file3, then rows 1,8,9 and write file4, then skip the blank row and read rows 11,12,13 and write file1, then rows 11,14,15 and write file2, and so forth (or something like that).
2016/04/10
[ "https://Stackoverflow.com/questions/36533759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6184340/" ]
Try to write **system("pause");** before **return 0;** at the end of your program and press ctrl + F5.
Try this: ``` std::cin.get(); if (oper == 1) ans = num1*num2; else if(oper == 2) ans = num1 / num2; else if(oper == 3) ans = num1 + num2; else if(oper == 4) ans = num1 - num2; std::cout << ans; std::cin.get();//this will block and prevent the console from closing until you press a key return 0; ```
3,010
57,257,751
I've created my application in Python and I want only one instance of the application to run at a time. So I have used the singleton approach: ```py from math import fmod from PyQt4 import QtCore, QtGui from PyQt4.QtCore import SIGNAL import tendo import pywinusb.hid as hid import sys import os import time import threading import UsbHidCB05Connect if __name__ == "__main__": just_one = tendo.singleton.SingleInstance() app = QtGui.QApplication(sys.argv) screen_UsbHidConnect = ConnectFunction() screen_UsbHidConnect.show() sys.exit(app.exec_()) ``` When using PyInstaller to convert it to an exe, I did not get any error, but when I tried to run the exe I got the error: "Failed to execute script mainUsbHidCB05v01" If in my code I comment out the: ``` import tendo ``` and ``` just_one = tendo.singleton.SingleInstance() ``` I convert the script to an exe, and the exe runs without any problem. But then I'm able to have more than one instance of the program running, and I don't want that. I'm using PyInstaller like: ``` pyinstaller --noconsole -F -i cr.ico mainUsbHidCB05v01.py ``` I have also tried PyInstaller without the -F option. The result is the same. Does anyone have any idea why the exe doesn't run with the singleton option in the code? Thanks.
2019/07/29
[ "https://Stackoverflow.com/questions/57257751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6078511/" ]
> > Hello, it's me again! So, I have found the solution. I searched a lot and found several different ways to make the program run only once (single instance). > > > > > In summary, it's possible to use a lock file, using the os library, but if the computer shuts down in a power failure, this file will keep locking your application when it comes back, since the app was not closed properly. There is another way, using the TENDO library to create a singleton; there are similar approaches, but each one uses some specific DLL, and when you use pyinstaller, adding/importing DLLs can be a little difficult. > > > > > Finally, there is a third way, which creates a socket connection on the PC and verifies whether some specific port is being used by the application. That works like a charm for me. The library is: <https://pypi.org/project/Socket-Singleton/> > > > > > A simple and workable script: > > > ``` from time import sleep from Socket_Singleton import Socket_Singleton #Socket_Singleton(address="127.0.0.1", port=1337, timeout=0, client=True, strict=True) Socket_Singleton() print("hello!") sleep(10) print("hello 2!") ``` > > I used it with my app and created an .EXE file using pyinstaller and it works very well. > > >
I had the same problem and I didn't find a way to use the single-instance feature of tendo. If you need a solution right now, you can create a file using the `os` library and write a flag in it that is 1 while the program is running and 0 otherwise, so you just have to check that flag at the beginning of your program. This is not the best way, but you can use it until you find a better solution. :)
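A sketch of that lock-file idea (the path is illustrative; note the caveat above that a hard crash can leave a stale lock behind):

```
import atexit
import os
import sys

LOCK = os.path.join(os.path.expanduser("~"), "myapp.lock")   # illustrative path

if os.path.exists(LOCK):
    print("Another instance appears to be running.")
    sys.exit(1)

with open(LOCK, "w") as f:
    f.write(str(os.getpid()))            # record which process owns the lock

atexit.register(os.remove, LOCK)         # best-effort cleanup on normal exit

# ... the rest of the application runs here ...
```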
3,011
5,871,621
I have a list of cities (simple csv file) and I want to populate the cities table while creating the City model. Class description: ``` class City(models.Model): city = models.CharField('city_name', max_length=50) class Meta: verbose_name =.... ........... def __unicode__(self): return self.city ``` Now, what I am looking for is how to do it only once, while creating the model (DB table). I'm trying to do it over here because it sounds logical to me (like seeding a table with a SQL script in MS SQL and others) EDIT: Ok, I guess I am asking the wrong thing... maybe this: how do I create a Python function that will take the csv file and transform it to JSON (again, in the model itself, while it is being built), and should I do it at all? Can anyone please help me with this?
2011/05/03
[ "https://Stackoverflow.com/questions/5871621", "https://Stackoverflow.com", "https://Stackoverflow.com/users/288219/" ]
We do something like this, usually. ``` import csv from my_django_app.forms import CityForm with open( "my file", "rb" ) as source: rdr = csv.DictReader( source ) for row in rdr: form= CityForm( row ) if form.is_valid(): form.save() else: print form.errors ``` (Note: a Django form takes its data dict as the first positional argument, not as keyword arguments.) This validates and loads the data. After the data is loaded, you can use `django-admin dumpdata` to preserve a JSON fixture from the loaded model.
[Providing initial data for models](http://docs.djangoproject.com/en/1.3/howto/initial-data/).
3,012
30,397,107
I want to build a simple tool that uses functions from an open source project from GitHub, SourceForge, etc. (e.g., a project such as <https://github.com/vishnubob/python-midi/>). I searched the documentation but could not find the right way to do this. (I assume I need to point PyCharm at the source somehow and "import" it.) I am utterly new to PyCharm and Python in general. This is just a test project. I am running PyCharm Pro 4.5 on OS X 10.10.3. PyCharm is up and running; I just need to get to these functions. Thanks so much.
2015/05/22
[ "https://Stackoverflow.com/questions/30397107", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4929051/" ]
Yes there is a convention. Use `bower` if a package exists. If it doesn't, download the repo into the `vendor` folder. Import the file in your `Brocfile.js` ``` app.import('vendor/path_to/main_js_file.js'); ```
Yes, use bower or place them in `vendor/`. Then register them in `ember-cli-build.js` Here's the documentation: <https://guides.emberjs.com/v2.14.0/addons-and-dependencies/managing-dependencies/>
3,013
4,589,696
I apologize up front for the dumbness of this question, but I can't figure it out and it's driving me crazy. In Ruby I can do: ``` irb(main):001:0> s = "\t\t\n" => "\t\t\n" irb(main):003:0> puts s => nil irb(main):004:0> puts s.inspect "\t\t\n" ``` Is there an equivalent of Ruby's `inspect` function in Python?
2011/01/04
[ "https://Stackoverflow.com/questions/4589696", "https://Stackoverflow.com", "https://Stackoverflow.com/users/561964/" ]
[`repr()`](http://docs.python.org/library/functions.html#repr): ``` >>> print repr('\t\t\n') '\t\t\n' ```
You can use `repr` or backticks; here I do exactly the same things as you did above. ``` >>> s = "\t\t\n" >>> s '\t\t\n' >>> print s >>> repr(s) "'\\t\\t\\n'" >>> print repr(s) '\t\t\n' >>> print `s` '\t\t\n' ```
3,014
3,049,101
I have a piece of code that behaves differently depending on whether I go through a dictionary to get conversion factors or whether I use them directly. The following piece of code will print `1.0 == 1.0 -> False` But if you replace `factors[units_from]` with `10.0` and `factors[units_to ]` with `1.0 / 2.54` it will print `1.0 == 1.0 -> True` ``` #!/usr/bin/env python base = 'cm' factors = { 'cm' : 1.0, 'mm' : 10.0, 'm' : 0.01, 'km' : 1.0e-5, 'in' : 1.0 / 2.54, 'ft' : 1.0 / 2.54 / 12.0, 'yd' : 1.0 / 2.54 / 12.0 / 3.0, 'mile' : 1.0 / 2.54 / 12.0 / 5280, 'lightyear' : 1.0 / 2.54 / 12.0 / 5280 / 5.87849981e12, } # convert 25.4 mm to inches val = 25.4 units_from = 'mm' units_to = 'in' base_value = val / factors[units_from] ret = base_value * factors[units_to ] print ret, '==', 1.0, '->', ret == 1.0 ``` Let me first say that I am pretty sure what is going on here. I have seen it before in C, just never in Python, but since Python is implemented in C we're seeing it. I know that floating point numbers can change values going from an extended-precision CPU register to memory and back. I know that comparing what should be two equal variables will return false if one of them was spilled to memory while the other stayed resident in a register. **Questions** * What is the best way to avoid problems like this?... In Python or in general. * Am I doing something completely wrong? **Side Note** This is obviously part of a stripped down example, but what I'm trying to do is come up with classes of length, volume, etc. that can compare against other objects of the same class but with different units. **Rhetorical Questions** * If this is a potentially dangerous problem, since it makes programs behave in a nondeterministic manner, should compilers warn or error when they detect that you're checking equality of floats? * Should compilers support an option to replace all float equality checks with a 'close enough' function? * Do compilers already do this and I just can't find the information?
2010/06/15
[ "https://Stackoverflow.com/questions/3049101", "https://Stackoverflow.com", "https://Stackoverflow.com/users/180464/" ]
Thanks for your responses. Most were very good and provided good links, so I'll just say that and answer my own question. Caspin posted this [link](http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/). He also mentioned that Google Test used ULP comparison, and when I looked at the Google code I saw that they mentioned the same exact link to cygnus-software. I wound up implementing some of the algorithms in C as a Python extension and then later found that I could do it in pure Python as well. The code is posted below. In the end, I will probably just wind up adding ULP differences to my bag of tricks. It was interesting to see how many floating-point values lie between what should be two equal numbers that never left memory. One of the articles or the Google code I read said that 4 was a good number... but here I was able to hit 10. ``` >>> f1 = 25.4 >>> f2 = f1 >>> >>> for i in xrange(1, 11): ... f2 /= 10.0 # to cm ... f2 *= (1.0 / 2.54) # to in ... f2 *= 25.4 # back to mm ... print 'after %2d loops there are %2d doubles between them' % (i, dulpdiff(f1, f2)) ... after 1 loops there are 1 doubles between them after 2 loops there are 2 doubles between them after 3 loops there are 3 doubles between them after 4 loops there are 4 doubles between them after 5 loops there are 6 doubles between them after 6 loops there are 7 doubles between them after 7 loops there are 8 doubles between them after 8 loops there are 10 doubles between them after 9 loops there are 10 doubles between them after 10 loops there are 10 doubles between them ``` --- Also interesting is how many floating-point values there are between equal numbers when one of them is written out as a string and read back in. ``` >>> # 0 degrees Fahrenheit is -32 / 1.8 degrees Celsius ... f = -32 / 1.8 >>> s = str(f) >>> s '-17.7777777778' >>> # floats between them... ... fulpdiff(f, float(s)) 0 >>> # doubles between them... ... dulpdiff(f, float(s)) 6255L ``` --- ``` import struct from functools import partial # (c) 2010 Eric L. Frederich # # Python implementation of algorithms detailed here... # from http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm def c_mem_cast(x, f=None, t=None): ''' do a c-style memory cast In Python... x = 12.34 y = c_mem_cast(x, 'd', 'l') ... should be equivalent to the following in c... double x = 12.34; long y = *(long*)&x; ''' return struct.unpack(t, struct.pack(f, x))[0] dbl_to_lng = partial(c_mem_cast, f='d', t='l') lng_to_dbl = partial(c_mem_cast, f='l', t='d') flt_to_int = partial(c_mem_cast, f='f', t='i') int_to_flt = partial(c_mem_cast, f='i', t='f') def ulp_diff_maker(converter, negative_zero): ''' Getting the ulp difference of floats and doubles is similar. The only difference is the offset and the converter. ''' def the_diff(a, b): # Make a integer lexicographically ordered as a twos-complement int ai = converter(a) if ai < 0: ai = negative_zero - ai # Make b integer lexicographically ordered as a twos-complement int bi = converter(b) if bi < 0: bi = negative_zero - bi return abs(ai - bi) return the_diff # double ULP difference dulpdiff = ulp_diff_maker(dbl_to_lng, 0x8000000000000000) # float ULP difference fulpdiff = ulp_diff_maker(flt_to_int, 0x80000000 ) # default to double ULP difference ulpdiff = dulpdiff ulpdiff.__doc__ = ''' Get the number of doubles between two doubles. ''' ```
> > What is the best way to avoid problems > like this?... In Python or in general. > > > What problem? You're working with physical measurements. Unless you have some *really* sophisticated equipment, the error in your measurements is going to be several orders of magnitude higher than floating-point epsilon. So why write code that depends on numbers being exact to 16 significant digits? > > Should compilers support an option to > replace all float equality checks with > a 'close enough' function? > > > If it did, you'd get some strange results: ``` >>> float.tolerance = 1e-8 # hypothetical "close enough" definition >>> a = 1.23456789 >>> b = 1.23456790 >>> c = 1.23456791 >>> a == b True >>> b == c True >>> a == c False ``` If you think it's hard enough to store floats in a dictionary now, try it with a non-transitive `==` operator! And performance would suck, because the only way to guarantee `x == y` → `hash(x) == hash(y)` would be for every float to have the same hash code. And that'd be inconsistent with ints.
3,015
45,078,474
I want to implement an import feature with required and optional parameters, so it runs like this: ``` python manage.py import --mode archive ``` where `--mode` is required and `archive` is required too. I'm using the argparse library. ``` class Command(BaseCommand): help = 'Import' def add_arguments(self, parser): parser.add_argument('--mode', required=True, ) parser.add_argument('archive', required=True, default=False, help='Make import archive events' ) ``` But I received this error: ``` TypeError: 'required' is an invalid argument for positionals ```
2017/07/13
[ "https://Stackoverflow.com/questions/45078474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7985656/" ]
You created a positional argument (no `--` option in front of the name). Positional arguments are *always* required. You can't use `required=True` for such options, just drop the `required`. Drop the `default` too; a required argument can't have a default value (it would never be used anyway): ``` parser.add_argument('archive', help='Make import archive events' ) ``` If you meant for `archive` to be a command-line switch, use `--archive` instead.
I think that `--mode archive` is supposed to mean "mode is archive", in other words `archive` is the *value* of the `--mode` argument, not a separate argument. If it were, it would have to be `--archive` which is not what you want. Just leave out the definition of `archive`.
3,025
15,768,136
In Python 2.7, I want to run: $ ./script.py initparms.py This is a trick to supply a parameter file to script.py, since initparms.py contains several Python variables, e.g. ``` Ldir = '/home/marzipan/jelly' LMaps = True # etc. ``` script.py contains: ``` X = __import__(sys.argv[1]) Ldir = X.Ldir LMaps = X.Lmaps # etc. ``` I want to do a bulk promotion of the variables in X so they are available to script.py, without spelling out each one in the code by hand. Things like ``` import __import__(sys.argv[1]) ``` or ``` from sys.argv[1] import * ``` don't work. Almost there perhaps... Any ideas? Thanks!
2013/04/02
[ "https://Stackoverflow.com/questions/15768136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1021819/" ]
You can use [`execfile`](http://docs.python.org/2/library/functions.html#execfile): ``` execfile(sys.argv[1]) ``` Of course, the usual warnings with `exec` or `eval` apply (your script has no way of knowing whether it is running trusted or untrusted code). My suggestion would be to not do what you're doing and instead use `configparser` and handle the configuration through that.
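A sketch of the `configparser` route, using the Python 2.7 module name `ConfigParser`; the file name, sections, and keys are illustrative stand-ins for the variables in initparms.py:

```
# initparms.ini -- the same settings as a declarative config file:
# [paths]
# ldir = /home/marzipan/jelly
# [flags]
# lmaps = true

import ConfigParser   # named configparser in Python 3

config = ConfigParser.ConfigParser()
config.read("initparms.ini")

Ldir = config.get("paths", "ldir")
LMaps = config.getboolean("flags", "lmaps")
```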
You could do something like this: ``` import os import imp import sys try: module_name = sys.argv[1] module_info = imp.find_module(module_name, [os.path.abspath(os.path.dirname(__file__))] + sys.path) module_properties = imp.load_module(module_name, *module_info) except ImportError: pass else: try: attrlist = module_properties.__all__ except AttributeError: attrlist = dir(module_properties) for attr in attrlist: if attr.startswith('__'): continue globals()[attr] = getattr(module_properties, attr) ``` A little complicated, but it gets the job done.
3,026
70,668,633
I am currently working on a Discord bot interacting with the Controlpanel API. (<https://documenter.getpostman.com/view/9044962/TzY69ub2#02b8da43-ab01-487d-b2f5-5f8699b509cd>) Now, I am getting a KeyError when listing a specific user. ``` headers = { 'Accept': 'application/json', 'Authorization': 'Bearer <censored>' } url = "https://<censored>" endpoint = f"/api/users/{user}" if __name__ == '__main__': data = requests.get(f'{url}{endpoint}', headers=headers).text for user in json.loads(data)['data']: embed = discord.Embed(title="Users") embed.add_field(name=user['id'], value=user['name']) await ctx.send(embed=embed) ``` That's the Python code. Error: ``` for user in json.loads(data)['data']: ``` KeyError: 'data' How can I fix this? Thank you!
2022/01/11
[ "https://Stackoverflow.com/questions/70668633", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17903752/" ]
A KeyError usually happens when the key doesn't exist (or there is a typo in it). In your case, I think you don't have the 'data' key in your response, and you should use something like: ``` data.json() ``` If you can post the complete response, it would be more convenient to give you some hints.
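A defensive sketch of that idea: parse once with `response.json()` and check for the key before looping. The URL and token are the question's own placeholders:

```
import requests

headers = {
    "Accept": "application/json",
    "Authorization": "Bearer <censored>",
}

response = requests.get("https://<censored>/api/users", headers=headers)
payload = response.json()                  # parse straight from the Response object

if "data" in payload:
    for user in payload["data"]:
        print(user["id"], user["name"])
else:
    # See what the API actually returned (an error body, a single object, ...)
    print(response.status_code, payload)
```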
The endpoint you're hitting does not return a list but a single object. You should use the generic endpoint : `{{url}}/api/users` Also I don't think you want to recreate your `embed` object for each user. ```py headers = { 'Authorization': 'Bearer <censored>' } url = 'https://<censored>' endpoint = '/api/users' if __name__ == '__main__': embed = discord.Embed(title="Users") for user in requests.get( f'{url}{endpoint}', headers=headers ).json()['data']: embed.add_field(name=user['id'], value=user['name']) await ctx.send(embed=embed) ``` Also I'm pretty sure you can't just `await` like that in `__main__`.
3,029
41,281,072
I would like to make a file with 3 main columns, but my current file has a different number of columns per row. An example of my file is like this: ``` BPIFB3,chr20;ENST00000375494.3 PXDN,chr2,ENST00000252804.4;ENST00000483018.1 RP11,chr2,ENST00000607956.1 RNF19B,chr1,ENST00000373456.7;ENST00000356990.5;ENST00000235150.4 ``` and here is what I want to make: ``` BPIFB3 chr20 ENST00000375494.3 PXDN chr2 ENST00000252804.4 PXDN chr2 ENST00000483018.1 RP11 chr2 ENST00000607956.1 RNF19B chr1 ENST00000373456.7 RNF19B chr1 ENST00000356990.5 RNF19B chr1 ENST00000235150.4 ``` In fact, if a row has more than 3 columns, then for each extra column I want to make a new row in which the first two columns are the same but the 3rd column is different (it is the extra column from the original file). I tried the following code in Python but did not get what I am looking for: ``` from collections import defaultdict with open('data.tbl') as f, open('out.tbl', 'w') as out: for line in f.split('\t'): if len(line) > 2: d[line[0]] = line[3] out.write(d.items) ```
2016/12/22
[ "https://Stackoverflow.com/questions/41281072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7310023/" ]
**If** your input is actually well-formed, you can do this: ``` for row in reader: for thing in row[2].split(';'): writer.writerow(row[:2]+[thing]) ``` But as it exists, your first row has malformed data that doesn't match the rest of your rows. So if that *is* an example of your data, then [you could try replacing](https://stackoverflow.com/a/41264619/344286) `;` with `,` before you feed it to the csv reader, and then you can do: ``` for thing in row[3:]: ``` instead. --- old answer to your previous question: You just want list slicing. Also, if it's a tab separated file you can just use the csv module. And you're importing `defaultdict`, which you're not using. ``` import csv with open('data.tbl') as f, open('out.tbl', 'w') as out: reader = csv.reader(f, delimiter='\t') writer = csv.writer(out, delimiter='\t') for row in reader: writer.writerow(row[:3]) ```
Try this: ``` import csv with open('data.tbl') as f, open('out.tbl', 'w') as out: reader = csv.reader(f, delimiter='\t') writer = csv.writer(out, delimiter='\t') for row in reader: if len(row) == 3: writer.writerow(row) else: n = len(row) writer.writerow(row[:3]) for j in range(3,n): writer.writerow([row[0], row[1], row[j]]) ```
3,030
17,517,718
I am working on a project which aims to fetch some data from some websites and then store it into a database. But these websites contain different charsets, such as UTF-8 and GBK. The fetched data is unicode, so I wonder when the right time is to convert it to a byte string. I convert to string immediately for now, but it seems that Python suggests keeping text as unicode as long as possible, and I can't figure out why, since we never know the charset of the unicode object.
2013/07/08
[ "https://Stackoverflow.com/questions/17517718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/674199/" ]
Your `start_time` (`Time.now.in_time_zone`) is populated on the page load/render, whereas the `updated_at` is re-populated when you save the model. Your times are only off by a few seconds, which indicates that it took the person a few seconds to submit the form. Based on the updated requirement of it being when a start button is clicked to submit a form, it sounds like you want to populate the time in the controller action. ``` def update # your code team.start_time = Time.now.in_time_zone('EST') # rest of your code saving and processing the return end ``` As for why the time is different, it should not be, unless you are in a different timezone. However, I did notice there is no timezone in the SQL generated. It is possible that the database is not timezone aware and things are being converted to UTC. What is your DB?
1. You set a static time in the view, which gets generated on page load. 2. Set the time in the controller when you save the object. Basic example: ``` @object = Object.new(params[:object].merge(:start_time => Time.now)) if @object.save redirect_to 'best side in da world' else render :new end ```
3,033
8,397,617
I am trying to get Python to emulate mouse clicks and then type a phrase into the pop-up window or text box that the mouse clicks into. 1) click a security box "run" link with the mouse 2) move inside a pop-up and enter different phrases with Python What would be the best way to control the mouse and keyboard in this function? This is on Windows. I just need to click "run" on a pop-up and then, in another pop-up box, enter a phrase on one text line, switch to another text line and enter another, much like password and user ID fields. Could someone point me in the right direction.
2011/12/06
[ "https://Stackoverflow.com/questions/8397617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1027427/" ]
To trigger an event simply call the relevant event function in jQuery on the element, with no handler function defined, like this: ``` $("#2").click(); ``` Or there is also `trigger()`, which accepts the event type as a parameter: ``` $("#2").trigger('click'); ``` However it's worth noting that `Id` attributes beginning with numbers are invalid, so you will most likely have to change your Ids for them to work properly. I've updated your fiddle to fix the IDs and show the above code working [**here**](http://jsfiddle.net/RoryMcCrossan/x9Hfj/1/)
You can use `trigger`: ``` $('#2').trigger('click'); ``` <http://api.jquery.com/trigger/>
3,034
51,209,598
Here is a sample of JSON data I created from a defaultdict in Python. ``` [{ "company": [ "ABCD" ], "fullname": [ "Bruce Lamont", "Ariel Zilist", "Bruce Lamont", "Bobby Ramirez" ], "position": [ " The Hesh", " Server", " HESH", " Production Assistant" ], "profile_url": [ "http://www.url1.com", "http://www.url2.com", "http://www.url3.com", "http://www.url4.com", ] }] ``` I realized I made a mistake creating such a list. `json.loads()` gives this error **Error** > > Expecting value: line 1 column 1 (char 0). > > > I want something like this. ``` [{ "company": [ "name": "THALIA HALL", "employee": { fullname: "emp_name", "position": "position", profile: "url" }, { fullname: "emp_name", "position": "position", profile: "url" } ] }] ``` How can I solve this problem? I need to do this in Python.
2018/07/06
[ "https://Stackoverflow.com/questions/51209598", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7655640/" ]
You are adding an extra trailing comma at the end of the profile_url array. The proper JSON should be ``` [{ "company": [ "ABCD" ], "fullname": [ "Bruce Lamont", "Ariel Zilist", "Bruce Lamont", "Bobby Ramirez" ], "position": [ " The Hesh", " Server", " HESH", " Production Assistant" ], "profile_url": [ "http://www.url1.com", "http://www.url2.com", "http://www.url3.com", "http://www.url4.com" ] }] ``` Use <https://jsonformatter.curiousconcept.com/> to check for JSON formatting errors next time.
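Once the trailing comma is removed, you can load the data and reshape it toward the nested structure you describe. A rough sketch follows; the target schema in the question is not itself valid JSON, so this is one possible interpretation, and `fixed_json_string` is assumed to hold the corrected text above:

```
import json

# fixed_json_string: the corrected JSON text (an assumption, see above)
data = json.loads(fixed_json_string)[0]
company = {
    "name": data["company"][0],
    "employee": [
        {"fullname": n, "position": p.strip(), "profile": u}
        for n, p, u in zip(data["fullname"], data["position"], data["profile_url"])
    ],
}
print(json.dumps([{"company": company}], indent=2))
```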
``` import json j = '[{"company":["ABCD"],"fullname":["Bruce Lamont","Ariel Zilist","Bruce Lamont","Bobby Ramirez"],"position":[" The Hesh"," Server"," HESH"," Production Assistant"],"profile_url":["http://www.url1.com","http://www.url2.com","http://www.url3.com","http://www.url4.com"]}]' json_obj = json.loads(j) ``` Now you have your JSON as a Python object and you can use it to build the CSV.
3,038
55,962,661
I am very new to python and have trouble printing the length of a variable I have created. I have tried len(), and I have tried converting to lists, arrays and tuples and so on and cannot get it to print the length correctly. ``` print(k1_idx_label0) len(k1_idx_label0) ``` And the output is --- ``` (array([ 0, 3, 7, 13, 20, 21, 23, 27, 29, 30, 32, 33, 36, 38, 40, 41, 42, 44, 45, 46, 48, 49, 54, 56, 57, 58, 62, 65, 68, 69, 70, 72, 76, 80, 82, 83, 84, 85, 88, 89, 92, 97, 103, 105, 109, 110, 111, 113, 115, 116, 117, 121, 122, 124, 126, 136, 137, 139, 140, 142, 143, 146, 148, 149, 150, 151, 153, 155, 156, 157, 158, 160, 161, 165, 166, 168, 173, 174, 175, 176, 177, 178, 180, 181, 182, 185, 186, 188, 191, 192, 193, 196, 199, 200, 203, 206, 207, 210, 211, 215, 218, 220, 225, 226, 227, 228, 232, 235, 236, 237, 238, 239, 241, 244, 249, 251, 252, 257, 258, 262, 264, 267, 272, 278, 282, 283, 285, 286, 289, 291, 297, 298, 299, 300, 301, 305, 307, 308, 309, 313, 315, 317, 318, 319, 326, 327, 329, 330, 331, 333, 335, 336, 340, 342, 347, 350, 351, 352, 354, 355, 356, 360, 361, 365, 375, 377, 378, 382, 383, 385, 386, 387, 390, 391, 392, 393, 394, 397, 398, 403, 405, 406, 407, 408, 409, 413, 414, 421, 426, 429, 430, 431, 435, 439, 443, 444, 445, 446, 447, 449, 452, 454, 455, 456, 457, 460, 462, 463, 464, 466, 468, 469, 471, 472, 473, 477, 478, 480, 482, 492, 493, 496, 501, 504, 506, 512, 517, 518, 519, 520, 521, 522, 523, 528, 529, 531, 533, 535, 536, 542, 543, 545, 547, 551, 555, 556, 558, 564, 565, 567, 568, 569], dtype=int64),) 1 ``` It keeps printing the length as 1 when there is clearly a lot more than that... any idea?
2019/05/03
[ "https://Stackoverflow.com/questions/55962661", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11338333/" ]
The tuple has just `1` element. If you want to know the size of that element inside the tuple: ``` len(k1_idx_label0[0]) ``` If you want to know the sizes of **all** elements in the tuple: ``` [len(e) for e in k1_idx_label0] ```
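A 1-tuple like this is typically what `numpy.where` returns, which is why `len` reports the number of arrays in the tuple rather than the number of indices. A quick sketch:

```
import numpy as np

labels = np.array([0, 1, 0, 0, 1])
idx = np.where(labels == 0)  # -> (array([0, 2, 3]),): a 1-tuple of arrays
print(len(idx))              # 1, the number of arrays in the tuple
print(len(idx[0]))           # 3, the count you actually want
```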
Try: ``` print(len(k1_idx_label0[0])) ```
3,039
24,890,259
**I'm not looking for a solution (I have two ;) ), but for insight to compare the strengths and weaknesses of each solution considering Python's internals. Thanks!** With a coworker, we wish to extract the difference between two successive list elements, for all elements. So, for the list: ``` [1,2,4] ``` the expected output is: ``` [1,2] ``` (since 2-1 = 1, and 4-2 = 2). We came up with two solutions and I am not sure how they compare. The first one is very C-like; it treats the list as a table and computes the difference between each pair of successive elements. ``` res = [] for i in range(0, len(a)-1): res.append(a[i+1] - a[i]) ``` The second one (for a list `l`) is, I think, more pythonic: ``` [j - i for i,j in zip(l[:-1], l[1:])] ``` Though, isn't it far less efficient to build two copies of the list just to extract the differences? How does Python handle this internally? Thanks for your insights!
2014/07/22
[ "https://Stackoverflow.com/questions/24890259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2481478/" ]
With a generator: ``` def diff_elements(lst): """ >>> list(diff_elements([])) [] >>> list(diff_elements([1])) [] >>> list(diff_elements([1, 2, 4, 7])) [1, 2, 3] """ as_iter = iter(lst) last = next(as_iter) for value in as_iter: yield value - last last = value ``` This has the nice properties of: 1. Being readable, and 2. Working on infinitely large data sets.
If I understood your question correctly, I suggest you use something like this: ``` diffList = lambda l: [(l[i] - l[i-1]) for i in range(1, len(l))] answer = diffList( [ 1,2,4] ) ``` This function will give you a list with the differences between all consecutive elements in the input list. It is similar to your first approach (and still somewhat pythonic), and it is more efficient than the second one.
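If the slice copies are the real concern, you can also pair each element with its successor without building two sliced lists, for example with `itertools.islice`, which only creates a shifted iterator. A sketch:

```
from itertools import islice

def differences(lst):
    # islice gives a shifted iterator instead of a sliced copy;
    # zip stops at the shorter side, yielding len(lst) - 1 pairs
    return [b - a for a, b in zip(lst, islice(lst, 1, None))]

print(differences([1, 2, 4]))  # [1, 2]
```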
3,042
12,805,699
Recently I came to understand the great advantage of using list comprehensions. I am working with several million points (x,y,z) stored in a special format, \*.las files. In Python there are two ways to work with this format: ``` Liblas module [http://www.liblas.org/tutorial/python.html][1] (C++/Python) laspy module [http://laspy.readthedocs.org/en/latest/tut_part_1.html][2] (pure Python) ``` I had several problems with liblas and I wish to test laspy. In liblas I can use a list comprehension as: ``` from liblas import file as lasfile f = lasfile.File(inFile,None,'r') # open LAS points = [(p.x,p.y) for p in f] # read in list comprehension ``` In laspy I cannot figure out how to do the same: ``` from laspy.file import File f = file.File(inFile, mode='r') f <laspy.file.File object at 0x0000000013939080> (f[0].X,f[0].Y) (30839973, 696447860) ``` I tried several combinations such as: ``` points = [(p.X,p.Y) for p in f] ``` but I get this message ``` Traceback (most recent call last): File "<interactive input>", line 1, in <module> AttributeError: Point instance has no attribute 'x' ``` I tried both uppercase and lowercase because Python is case-sensitive: ``` >>> [(p.x,p.y) for p in f] Traceback (most recent call last): File "<interactive input>", line 1, in <module> AttributeError: Point instance has no attribute 'x' >>> [(p.X,p.Y) for p in f] Traceback (most recent call last): File "<interactive input>", line 1, in <module> AttributeError: Point instance has no attribute 'X' ``` This is in the interactive prompt: ``` C:\Python27>python.exe Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win 32 Type "help", "copyright", "credits" or "license" for more information. >>> from laspy.file import File >>> inFile="C:\\04-las_clip_inside_area\\Ku_018_class.las" >>> f = File(inFile, None, 'r') >>> f <laspy.file.File object at 0x00000000024D5E10> >>> points = [(p.X,p.Y) for p in f] Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: Point instance has no attribute 'X' >>> ``` Printing `dir(p)` after the loop gives: ``` print dir(p) ['__doc__', '__init__', '__module__', 'make_nice', 'pack', 'packer', 'reader', 'unpacked'] ``` In a loop I always get the same error: ``` >>> for p in f: ... print dir(p) ... print p.X,p.Y ... ['__doc__', '__init__', '__module__', 'make_nice', 'pack', 'packer', 'reader', 'unpacked'] Traceback (most recent call last): File "<interactive input>", line 3, in <module> AttributeError: Point instance has no attribute 'X' ``` Using this code suggested by nneonneo ``` import numpy as np for p in f: ... points = np.array([f.X, f.Y]).T ``` I can store the points in an array ``` points array([[ 30839973, 696447860], [ 30839937, 696447890], [ 30839842, 696447832], ..., [ 30943795, 695999984], [ 30943695, 695999922], [ 30943960, 695999995]]) ``` but I am missing the way to create a list comprehension ``` points = [np.array(p.X,p.Y).T for p in f] Traceback (most recent call last): File "<interactive input>", line 1, in <module> AttributeError: Point instance has no attribute 'X' ``` Thanks in advance for the help. Gianni
2012/10/09
[ "https://Stackoverflow.com/questions/12805699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1493192/" ]
Python is case-sensitive. To me it looks like you are asking for the attribute `x`, but it should be an uppercase `X`.
Try ``` import numpy as np ... points = np.array([f.X, f.Y]).T ```
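Building on that, a list of (X, Y) tuples can also be had without iterating `Point` objects at all, by zipping the dimension arrays that laspy exposes on the file object (a sketch based on the `f.X` / `f.Y` attributes shown above):

```
# f.X and f.Y are whole dimension arrays, so pair them up directly
points = [(x, y) for x, y in zip(f.X, f.Y)]
```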
3,044
39,190,274
I am currently developing a Python application which I continually performance test, simply by recording the runtime of various parts. A lot of the code is related only to the testing environment and would not exist in the real-world application. I have these separated into functions, and at the moment I comment out these calls when testing. This requires me to remember which calls refer to test-only components (they are quite interleaved, so I cannot group the functionality). I was wondering if there was a better solution to this; the only idea I have had so far is the creation of a 'mode' boolean and the insertion of if-statements, though this feels needlessly messy. I was hoping there might be some more standardised testing method that I am unaware of. I am new to Python so I may have overlooked some simple solutions. Thank you in advance
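To make the 'mode' boolean idea concrete, here is a minimal sketch (the `TESTING` flag and the helper name are illustrative, not from the original code) that wraps test-only calls in a decorator so they become no-ops outside the test environment, avoiding if-statements at every call site:

```
TESTING = False  # illustrative flag; flip on in the test environment

def test_only(func):
    """Turn func into a no-op unless TESTING is enabled."""
    def wrapper(*args, **kwargs):
        if TESTING:
            return func(*args, **kwargs)
    return wrapper

@test_only
def record_timing(label, elapsed):  # hypothetical test-only helper
    print("%s took %.3fs" % (label, elapsed))
```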
2016/08/28
[ "https://Stackoverflow.com/questions/39190274", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2071737/" ]
That's because of > > Let `shiftCount` be the result of masking out all but the least significant 5 bits of `rnum`, that is, compute `rnum & 0x1F`. > > > which is how the `<<` operation is defined. See <http://www.ecma-international.org/ecma-262/6.0/#sec-left-shift-operator-runtime-semantics-evaluation> So according to it, `32 & 0x1F` equals 0, so `1 << 32` is equivalent to `1 << 0`, which is basically a no-op. Two consecutive shifts, by 31 and then by 1, on the other hand, literally perform the calculations.
JavaScript defines a left-shift by 32 to do nothing, presumably because it smacks up against the 32-bit boundary. You cannot actually shift anything more than 31 bits across. Your approach of first shifting 31 bits, then a final bit, works around JavaScript thinking that shifting so much doesn't make sense. Indeed, it's pointless to execute those calculations when you could just write `= 0` in the first place.
3,046
531,487
I'm looking for a python browser widget (along the lines of pyQT4's [QTextBrowser](http://doc.trolltech.com/3.3/qtextbrowser.html) class or [wxpython's HTML module](http://www.wxpython.org/docs/api/wx.html-module.html)) that has events for interaction with the DOM. For example, if I highlight an h1 node, the widget class should have a method that notifies me something was highlighted and what dom properties that node had (`<h1>`, contents of the tag, sibling and parent tags, etc). Ideally the widget module/class would give access to the DOM tree object itself so I can traverse it, modify it, and re-render the new tree. Does something like this exist? I've tried looking but I'm unfortunately not able to find it. Thanks in advance!
2009/02/10
[ "https://Stackoverflow.com/questions/531487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11110/" ]
It may not be ideal for your purposes, but you might want to take a look at the Python bindings to KHTML that are part of PyKDE. One place to start looking is the KHTMLPart class: <http://api.kde.org/pykde-4.2-api/khtml/KHTMLPart.html> Since the API for this class is based on the signals and slots paradigm used in Qt, you will need to connect various signals to slots in your own code to find out when parts of a document have been changed. There's also a DOM API, so it should also be possible to access DOM nodes for selected parts of the document. More information can be found here: <http://api.kde.org/pykde-4.2-api/khtml/index.html>
I would also love such a thing. I suspect one with Python bindings does not exist, but I would be really happy to be wrong about this. One option I recently looked at (but never tried) is the [Webkit](http://webkit.org/) browser. It has some bindings for Python, built against different toolkits (I use GTK). However, the APIs for the entire JavaScript engine are only available for C++, with no Python bindings, and I don't see any reason why these can't be bound for Python. It's a fairly huge task, I know, but it would be a universally useful project, so maybe worth the investment.
3,048
5,475,549
In the Ubuntu terminal, how do I loop a command like ``` python myscript.py ``` so that it runs every 15 minutes?
2011/03/29
[ "https://Stackoverflow.com/questions/5475549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are looking for [crontab](http://en.wikipedia.org/wiki/Crontab) rather than a loop.
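For example, run `crontab -e` and add a line like this (the script path is a placeholder) to run it every 15 minutes:

```
*/15 * * * * /usr/bin/python /path/to/myscript.py
```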
Sounds like you want to use something like cron instead, but... if you are sure you want something to run in the same terminal window every N minutes (or seconds, actually), you could use the 'watch' command; its -n interval is in seconds, so 900 gives 15 minutes. ``` watch -n 900 python myscript.py ```
3,050
50,910,136
I am new to Python and I currently have one text file that I sliced into two columns. I am looking for unique one-to-one relationships in the text file to determine new home buyers: **Main File** 1234 Address , Billy Joel Joe Martin, 45 Other Address 63 OtherOther Address, Joe Martin Billy Joel, 1234 Address ***I am looking for the unique one-to-one relationship (1234 Address and Billy Joel)*** **Current Steps/Goals:** > > 1. sliced text file into two lists based on the ',' > > > Looking to write something like this (I know this is very horribly laid out, but I am stuck on how to implement this): ``` addressListing= text file that is read leftLst = addressListing.split(",", 1)[0] rightLst = addressListing.split(".", 1) [1] for (x, y) in (leftLst, rightLst): if x in rightLst and y in leftLst: return x + y else: pass ``` The text file is not neat: it does not have only addresses on one side and only names on the other.
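One sketch of the pairing logic itself, assuming each line holds one address and one name separated by the first comma, and that addresses can be recognised by a leading digit (both assumptions, adjust to your data):

```
from collections import defaultdict

addr_to_names = defaultdict(set)
name_to_addrs = defaultdict(set)

with open('data.txt') as f:  # hypothetical file name
    for line in f:
        left, right = [p.strip() for p in line.split(',', 1)]
        # normalise to (address, name); assume addresses start with a digit
        addr, name = (left, right) if left[0].isdigit() else (right, left)
        addr_to_names[addr].add(name)
        name_to_addrs[name].add(addr)

buyers = []
for addr, names in addr_to_names.items():
    name = next(iter(names))
    if len(names) == 1 and len(name_to_addrs[name]) == 1:
        buyers.append((addr, name))  # e.g. ('1234 Address', 'Billy Joel')
```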
2018/06/18
[ "https://Stackoverflow.com/questions/50910136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9574850/" ]
Everything from the request is just a string. The modelbinder matches up keys in the request body with property names, and then attempts to coerce them to the appropriate type. If the property is not posted or is posted with an empty string, that will obviously fail when trying to convert to an int. As a result, you end up with the default value for the type. In the case of an `int` that's `0`, while the default value of `int?` is `null`. *Only after this binding process is complete* is the model then validated. Remember you're validating the *model*, not the post body. There's no reasonable way to validate the post body, since again, it's just a bunch of key-value pair strings. Therefore, in the case of an `int` property that's required, but not posted, the value is `0`, which is a perfectly valid value for an int, and the validation is satisfied. In the case of `int?`, the value is `null`, which is *not* a valid int, and thus fails validation. That is why the nullable is required, if you want to require a non-nullable type to have a value. It's the only way that an empty value can be differentiated from simply a "default" value. If you are using view models, as you should be, this should not be an issue. You can bind to a nullable int with a required attribute, and you will be assured that it *will* have a value, despite being nullable, if your model state is valid. Then, you can map that over to a straight int on your entity. That is the correct way to handle things.
> > non-nullable required types. > > > You do not. It is either required, in which case there is no sense in it being nullable, or it is not required, in which case nullable makes sense but it makes no sense to require it. Attributes always apply to the whole request. You are in a logical bind because you are trying to use them in a way they were not intended. If the value is optional, the user should actually submit a PATCH, not a PUT/POST.
3,056
1,250,779
I'm interested in hearing some discussion about class attributes in Python. For example, what is a good use case for class attributes? For the most part, I can not come up with a case where a class attribute is preferable to using a module level attribute. If this is true, then why have them around? The problem I have with them, is that it is almost too easy to clobber a class attribute value by mistake, and then your "global" value has turned into a local instance attribute. Feel free to comment on how you would handle the following situations: 1. Constant values used by a class and/or sub-classes. This may include "magic number" dictionary keys or list indexes that will never change, but possible need one-time initialization. 2. Default class attribute, that in rare occasions updated for a special instance of the class. 3. Global data structure used to represent an internal state of a class shared between all instances. 4. A class that initializes a number of default attributes, not influenced by constructor arguments. Some Related Posts: [Difference Between Class and Instance Attributes](https://stackoverflow.com/questions/207000/python-difference-between-class-and-instance-attributes)
2009/08/09
[ "https://Stackoverflow.com/questions/1250779", "https://Stackoverflow.com", "https://Stackoverflow.com/users/64313/" ]
Class attributes are often used to allow overriding defaults in subclasses. For example, BaseHTTPRequestHandler has class constants sys\_version and server\_version, the latter defaulting to `"BaseHTTP/" + __version__`. SimpleHTTPRequestHandler overrides server\_version to `"SimpleHTTP/" + __version__`.
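A stripped-down sketch of that override pattern (the names mirror the stdlib example above):

```
class BaseHandler(object):
    server_version = "BaseHTTP/0.6"    # class attribute: the default

class SimpleHandler(BaseHandler):
    server_version = "SimpleHTTP/0.6"  # subclass overrides the default

print(BaseHandler().server_version)    # BaseHTTP/0.6
print(SimpleHandler().server_version)  # SimpleHTTP/0.6
```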
Encapsulation is a good principle: when an attribute is inside the class it pertains to instead of being in the global scope, this gives additional information to people reading the code. In your situations 1-4, I would thus avoid globals as much as I can, and prefer using class attributes, which allow one to benefit from encapsulation.
3,062
161,872
What are some really useful but esoteric language features in Perl that you've actually been able to employ to do useful work? Guidelines: * Try to limit answers to the Perl core and not CPAN * Please give an example and a short description --- Hidden Features also found in other languages' Hidden Features: --------------------------------------------------------------- (These are all from [Corion's answer](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162257)) * [C](https://stackoverflow.com/questions/132241/hidden-features-of-c#) + Duff's Device + Portability and Standardness * [C#](https://stackoverflow.com/questions/9033/hidden-features-of-c) + Quotes for whitespace delimited lists and strings + Aliasable namespaces * [Java](https://stackoverflow.com/questions/15496/hidden-features-of-java) + Static Initalizers * [JavaScript](https://stackoverflow.com/questions/61088/hidden-features-of-javascript) + Functions are First Class citizens + Block scope and closure + Calling methods and accessors indirectly through a variable * [Ruby](https://stackoverflow.com/questions/63998/hidden-features-of-ruby) + Defining methods through code * [PHP](https://stackoverflow.com/questions/61401/hidden-features-of-php) + Pervasive online documentation + Magic methods + Symbolic references * [Python](https://stackoverflow.com/questions/101268/hidden-features-of-python) + One line value swapping + Ability to replace even core functions with your own functionality Other Hidden Features: ---------------------- Operators: * [The bool quasi-operator](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162094) * [The flip-flop operator](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162058) + Also used for [list construction](https://stackoverflow.com/questions/161872/hidden-features-of-perl#205627) * [The `++` and unary `-` operators work on strings](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162004) * [The repetition operator](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162075) * [The spaceship operator](https://stackoverflow.com/questions/161872/hidden-features-of-perl#161943) * [The || operator (and // operator) to select from a set of choices](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162239) * [The diamond operator](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162152) * [Special cases of the `m//` operator](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162249) * [The tilde-tilde "operator"](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162060) Quoting constructs: * [The qw operator](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163416) * [Letters can be used as quote delimiters in q{}-like constructs](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162094) * [Quoting mechanisms](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163374) Syntax and Names: * [There can be a space after a sigil](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162094) * [You can give subs numeric names with symbolic references](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162094) * [Legal trailing commas](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163416) * [Grouped Integer Literals](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162601) * [hash 
slices](https://stackoverflow.com/questions/161872/hidden-features-of-perl#168925) * [Populating keys of a hash from an array](https://stackoverflow.com/questions/161872/hidden-features-of-perl#195254) Modules, Pragmas, and command-line options: * [use strict and use warnings](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163440) * [Taint checking](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163440) * [Esoteric use of -n and -p](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162085) * [CPAN](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163541) * [`overload::constant`](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162601) * [IO::Handle module](https://stackoverflow.com/questions/161872/hidden-features-of-perl#164255) * [Safe compartments](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163725) * [Attributes](https://stackoverflow.com/questions/161872/hidden-features-of-perl#310083) Variables: * [Autovivification](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162357) * [The `$[` variable](https://stackoverflow.com/questions/161872/hidden-features-of-perl#161985) * [tie](https://stackoverflow.com/questions/161872/hidden-features-of-perl#168947) * [Dynamic Scoping](https://stackoverflow.com/questions/161872/hidden-features-of-perl#172118) * [Variable swapping with a single statement](https://stackoverflow.com/questions/161872/hidden-features-of-perl#205627) Loops and flow control: * [Magic goto](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163440) * [`for` on a single variable](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163481) * [continue clause](https://stackoverflow.com/questions/161872/hidden-features-of-perl#169592) * [Desperation mode](https://stackoverflow.com/questions/161872/hidden-features-of-perl#205104) Regular expressions: * [The `\G` anchor](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162565) * [`(?{})` and '(??{})` in regexes](https://stackoverflow.com/questions/161872/hidden-features-of-perl#161976) Other features: * [The debugger](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163440) * [Special code blocks such as BEGIN, CHECK, and END](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162206) * [The `DATA` block](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163700) * [New Block Operations](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162601) * [Source Filters](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162601) * [Signal Hooks](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162601) * [map](https://stackoverflow.com/questions/161872/hidden-features-of-perl#167309) ([twice](https://stackoverflow.com/questions/161872/hidden-features-of-perl#167809)) * [Wrapping built-in functions](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162842) * [The `eof` function](https://stackoverflow.com/questions/161872/hidden-features-of-perl/189883#189883) * [The `dbmopen` function](https://stackoverflow.com/questions/161872/hidden-features-of-perl#194796) * [Turning warnings into errors](https://stackoverflow.com/questions/161872/hidden-features-of-perl#205104) Other tricks, and meta-answers: * [cat files, decompressing gzips if needed](https://stackoverflow.com/questions/161872/hidden-features-of-perl#163532) * [Perl 
Tips](https://stackoverflow.com/questions/161872/hidden-features-of-perl#162271) --- **See Also:** * [Hidden features of C](https://stackoverflow.com/questions/132241/hidden-features-of-c) * [Hidden features of C#](https://stackoverflow.com/questions/9033/hidden-features-of-c) * [Hidden features of C++](https://stackoverflow.com/questions/75538/hidden-features-of-c) * [Hidden features of Java](https://stackoverflow.com/questions/15496/hidden-features-of-java) * [Hidden features of JavaScript](https://stackoverflow.com/questions/61088/hidden-features-of-javascript) * [Hidden features of Ruby](https://stackoverflow.com/questions/63998/hidden-features-of-ruby) * [Hidden features of PHP](https://stackoverflow.com/questions/61401/hidden-features-of-php) * [Hidden features of Python](https://stackoverflow.com/questions/101268/hidden-features-of-python) * [Hidden features of Clojure](https://stackoverflow.com/questions/2493996/hidden-features-of-clojure)
2008/10/02
[ "https://Stackoverflow.com/questions/161872", "https://Stackoverflow.com", "https://Stackoverflow.com/users/21632/" ]
The null filehandle [diamond operator](http://perldoc.perl.org/perlop.html#I%2fO-Operators) `<>` has its place in building command line tools. It acts like `<FH>` to read from a handle, except that it magically selects whichever is found first: command line filenames or STDIN. Taken from perlop: ``` while (<>) { ... # code for each line } ```
@Schwern mentioned turning warnings into errors by localizing `$SIG{__WARN__}`. You can do also do this (lexically) with `use warnings FATAL => "all";`. See `perldoc lexwarn`. On that note, since Perl 5.12, you've been able to say `perldoc foo` instead of the full `perldoc perlfoo`. Finally! :)
3,067
59,481,941
I am new to python and I am trying to disable TAB2 from widget notebook tkinter, through the support\_test.py file I have the command of the disable\_tab2 button, which should have the command to disable the option, but I get the error below: ``` Exception in Tkinter callback Traceback (most recent call last): File "C:\Users\Ryzen\AppData\Local\Programs\Python\Python37\lib\tkinter\__init__.p>y", line 1705, in __call__ return self.func(*args) File "c:\teste\teste_support.py", line 20, in desativa_tab2 w.TNotebook1_t0.configure(state='disabaled') File "C:\Users\Ryzen\AppData\Local\Programs\Python\Python37\lib\tkinter\__init__.py", line 1485, in configure return self._configure('configure', cnf, kw) File "C:\Users\Ryzen\AppData\Local\Programs\Python\Python37\lib\tkinter\__init__.py", line 1476, in _configure self.tk.call(_flatten((self._w, cmd)) + self._options(cnf)) _tkinter.TclError: unknown option "-state" ``` file teste.py ``` # -*- coding: utf-8 -*- import sys try: import Tkinter as tk except ImportError: import tkinter as tk try: import ttk py3 = False except ImportError: import tkinter.ttk as ttk py3 = True import teste_support def vp_start_gui(): '''Starting point when module is the main routine.''' global val, w, root root = tk.Tk() top = Toplevel1 (root) teste_support.init(root, top) root.mainloop() w = None def create_Toplevel1(root, *args, **kwargs): '''Starting point when module is imported by another program.''' global w, w_win, rt rt = root w = tk.Toplevel (root) top = Toplevel1 (w) teste_support.init(w, top, *args, **kwargs) return (w, top) def destroy_Toplevel1(): global w w.destroy() w = None class Toplevel1: def __init__(self, top=None): '''This class configures and populates the toplevel window. top is the toplevel containing window.''' _bgcolor = '#d9d9d9' # X11 color: 'gray85' _fgcolor = '#000000' # X11 color: 'black' _compcolor = '#d9d9d9' # X11 color: 'gray85' _ana1color = '#d9d9d9' # X11 color: 'gray85' _ana2color = '#ececec' # Closest X11 color: 'gray92' self.style = ttk.Style() if sys.platform == "win32": self.style.theme_use('winnative') self.style.configure('.',background=_bgcolor) self.style.configure('.',foreground=_fgcolor) self.style.configure('.',font="TkDefaultFont") self.style.map('.',background= [('selected', _compcolor), ('active',_ana2color)]) top.geometry("600x450+633+190") top.minsize(120, 1) top.maxsize(1924, 1061) top.resizable(1, 1) top.title("New Toplevel") top.configure(background="#d9d9d9") self.Button1 = tk.Button(top) self.Button1.place(relx=0.417, rely=0.044, height=24, width=47) self.Button1.configure(activebackground="#ececec") self.Button1.configure(activeforeground="#000000") self.Button1.configure(background="#d9d9d9") self.Button1.configure(command=teste_support.desativa_tab2) self.Button1.configure(disabledforeground="#a3a3a3") self.Button1.configure(foreground="#000000") self.Button1.configure(highlightbackground="#d9d9d9") self.Button1.configure(highlightcolor="black") self.Button1.configure(pady="0") self.Button1.configure(text='''Button''') self.style.configure('TNotebook.Tab', background=_bgcolor) self.style.configure('TNotebook.Tab', foreground=_fgcolor) self.style.map('TNotebook.Tab', background= [('selected', _compcolor), ('active',_ana2color)]) self.TNotebook1 = ttk.Notebook(top) self.TNotebook1.place(relx=0.067, rely=0.222, relheight=0.591 , relwidth=0.69) self.TNotebook1.configure(takefocus="") self.TNotebook1_t0 = tk.Frame(self.TNotebook1) self.TNotebook1.add(self.TNotebook1_t0, padding=3) self.TNotebook1.tab(0, text="Page 
1",compound="left",underline="-1",) self.TNotebook1_t0.configure(background="#d9d9d9") self.TNotebook1_t0.configure(highlightbackground="#d9d9d9") self.TNotebook1_t0.configure(highlightcolor="black") self.TNotebook1_t1 = tk.Frame(self.TNotebook1) self.TNotebook1.add(self.TNotebook1_t1, padding=3) self.TNotebook1.tab(1, text="Page 2",compound="left",underline="-1",) self.TNotebook1_t1.configure(background="#d9d9d9") self.TNotebook1_t1.configure(highlightbackground="#d9d9d9") self.TNotebook1_t1.configure(highlightcolor="black") self.Label1 = tk.Label(self.TNotebook1_t0) self.Label1.place(relx=0.195, rely=0.333, height=21, width=104) self.Label1.configure(background="#d9d9d9") self.Label1.configure(disabledforeground="#a3a3a3") self.Label1.configure(foreground="#000000") self.Label1.configure(text='''TAB 1''') self.Label1 = tk.Label(self.TNotebook1_t1) self.Label1.place(relx=0.263, rely=0.258, height=21, width=104) self.Label1.configure(activebackground="#f9f9f9") self.Label1.configure(activeforeground="black") self.Label1.configure(background="#d9d9d9") self.Label1.configure(disabledforeground="#a3a3a3") self.Label1.configure(foreground="#000000") self.Label1.configure(highlightbackground="#d9d9d9") self.Label1.configure(highlightcolor="black") self.Label1.configure(text='''TAB 2''') if __name__ == '__main__': vp_start_gui() ``` file teste\_suporte.py ``` # -*- coding: utf-8 -*- import sys try: import Tkinter as tk except ImportError: import tkinter as tk try: import ttk py3 = False except ImportError: import tkinter.ttk as ttk py3 = True def desativa_tab2(): global w w.TNotebook1_t0.configure(state='disabaled') print('teste_support.desativa_tab2') sys.stdout.flush() def init(top, gui, *args, **kwargs): global w, top_level, root w = gui top_level = top root = top def destroy_window(): # Function which closes the window. global top_level top_level.destroy() top_level = None if __name__ == '__main__': import teste teste.vp_start_gui() ```
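For reference, ttk disables a whole tab through the notebook's `tab()` method rather than through the inner frame's `configure()`, which is why the frame rejects the `-state` option. A sketch against the widget names used above (note that the misspelled 'disabaled' in the original would also need fixing):

```
def desativa_tab2():
    global w
    # address the notebook tab (index 1 = "Page 2"), not the inner frame;
    # valid tab states are 'normal', 'disabled' and 'hidden'
    w.TNotebook1.tab(1, state='disabled')
```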
2019/12/25
[ "https://Stackoverflow.com/questions/59481941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11596546/" ]
I found a way to do this. We used GatsbyJS for a project and it relies on a .env.production file for the env variables. I tried to pass them as `env:` to the github action, but that didn't work and they were ignored. Here is what I did. I base 64 encoded the .env.production file: ``` base64 -i .env.production ``` Added the output to an env variable in github action. Then in my action I do: ``` echo ${{ secrets.ENV_PRODUCTION_FILE }} | base64 -d > .env.production ``` This way the contents of my .env.production file ended being written to the machine that executes the github action.
Here is how to solve your actual problem of securely logging into an SSH server using a secret stored in GitHub Actions, named `GITHUB_ACTIONS_DEPLOY`. Let's call this "beep", because it will cause an audible bell on the server you login to. Maybe you use this literally ping a server in your house when somebody pushes code to your repo. ```yaml - name: Beep # if: github.ref == 'refs/heads/XXXX' # Maybe limit only this step to some branches run: | eval $(ssh-agent) ssh-add - <<< "$SSH_KEY" echo "* ssh-rsa XXX" >> /tmp/known_hosts # Get from your local ~/.ssh/known_hosts or from ssh-keyscan ssh -o UserKnownHostsFile=/tmp/known_hosts user@example.com "echo '\a'" env: SSH_KEY: ${{ secrets.PMT_GITHUB_ACTIONS_DEPLOY }} ``` If, actually you are using SSH as part of a rsync push task, here is how to do that: ```yaml - name: Publish if: github.ref == 'refs/heads/XXX' run: | eval $(ssh-agent) ssh-add - <<< "$SSH_KEY" echo "* ssh-rsa XXX" >> /tmp/known_hosts rsync $FROM user@server: env: SSH_KEY: ${{ secrets.GITHUB_ACTIONS_DEPLOY }} RSYNC_RSH: "ssh -o UserKnownHostsFile=/tmp/known_hosts" ```
3,077
44,150,069
I've implemented the "xor problem" with cntk (python). Currently it solves the problem only occasionally. How could I implement a more reliable network? I guess the problem gets solved whenever the starting random weights are near optimal. I have tried `binary_cross_entropy` as the loss function but it didn't improve. I tried `tanh` as the non-linear function but that it didn't work either. I have also tried many different combinations of parameters `learning_rate`, `minibatch_size` and `num_minibatches_to_train`. Please help. Thanks ``` # -*- coding: utf-8 -*- import numpy as np from cntk import * import random import pandas as pd input_dim = 2 output_dim = 1 def generate_random_data_sample(sample_size, feature_dim, num_classes): Y = [] X = [] for i in range(sample_size): if i % 4 == 0: Y.append([0]) X.append([1,1]) if i % 4 == 1: Y.append([0]) X.append([0,0]) if i % 4 == 2: Y.append([1]) X.append([1,0]) if i % 4 == 3: Y.append([1]) X.append([0,1]) return np.array(X,dtype=np.float32), np.array(Y,dtype=np.float32) def linear_layer(input_var, output_dim,scale=10): input_dim = input_var.shape[0] weight = parameter(shape=(input_dim, output_dim),init=uniform(scale=scale)) bias = parameter(shape=(output_dim)) return bias + times(input_var, weight) def dense_layer(input_var, output_dim, nonlinearity,scale=10): l = linear_layer(input_var, output_dim,scale=scale) return nonlinearity(l) feature = input(input_dim, np.float32) h1 = dense_layer(feature, 2, sigmoid,scale=10) z = dense_layer(h1, output_dim, sigmoid,scale=10) label=input(1,np.float32) loss = squared_error(z,label) eval_error = squared_error(z,label) learning_rate = 0.5 lr_schedule = learning_rate_schedule(learning_rate, UnitType.minibatch) learner = sgd(z.parameters, lr_schedule) trainer = Trainer(z, (loss, eval_error), [learner]) def print_training_progress(trainer, mb, frequency, verbose=1): training_loss, eval_error = "NA", "NA" if mb % frequency == 0: training_loss = trainer.previous_minibatch_loss_average eval_error = trainer.previous_minibatch_evaluation_average if verbose: print ("Minibatch: {0}, Loss: {1:.4f}, Error: {2:.2f}".format(mb, training_loss, eval_error)) return mb, training_loss, eval_error minibatch_size = 800 num_minibatches_to_train = 2000 training_progress_output_freq = 50 for i in range(0, num_minibatches_to_train): features, labels = generate_random_data_sample(minibatch_size, input_dim, output_dim) trainer.train_minibatch({feature : features, label : labels}) batchsize, loss, error = print_training_progress(trainer, i, training_progress_output_freq, verbose=1) out = z result = out.eval({feature : features}) a = pd.DataFrame(data=dict( query=[str(int(x[0]))+str(int(x[1])) for x in features], test=[int(l[0]) for l in labels], pred=[l[0] for l in result])) print(pd.DataFrame.drop_duplicates(a[["query","test","pred"]]).sort_values(by="test")) ```
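One knob worth sketching here: `init=uniform(scale=10)` starts the weights large enough to push the sigmoids deep into saturation, so convergence hinges on lucky seeds. A hedged variant of `linear_layer` using Glorot initialisation (assuming `glorot_uniform` is exported by `from cntk import *`, as `uniform` is; otherwise import it from `cntk.initializer`):

```
def linear_layer(input_var, output_dim):
    input_dim = input_var.shape[0]
    # glorot_uniform scales the initial weights to the layer size, keeping
    # the sigmoids out of their flat saturated regions, so training depends
    # far less on a lucky starting point than init=uniform(scale=10) does
    weight = parameter(shape=(input_dim, output_dim), init=glorot_uniform())
    bias = parameter(shape=(output_dim))
    return bias + times(input_var, weight)
```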
2017/05/24
[ "https://Stackoverflow.com/questions/44150069", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1635993/" ]
One option is to apply the [`TO_JSON_STRING` function](https://cloud.google.com/bigquery/docs/reference/standard-sql/functions-and-operators#to_json_string) to the results of your query. For example, ``` #standardSQL SELECT TO_JSON_STRING(t) FROM ( SELECT x, y FROM YourTable WHERE z = 10 ) AS t; ``` If you want all of the table's columns as JSON, you can use a simpler form: ``` #standardSQL SELECT TO_JSON_STRING(t) FROM YourTable AS t WHERE z = 10; ```
I'm using a service account to access the BigQuery REST API to get the response in JSON format. In order to use a service account, you will have to go to credentials (<https://console.cloud.google.com/apis/credentials>) and choose a project. [![Click on the credentials button](https://i.stack.imgur.com/qwwRQ.png)](https://i.stack.imgur.com/qwwRQ.png) You will get a drop down like this: [![Select Service Account from the options](https://i.stack.imgur.com/gGSlA.png)](https://i.stack.imgur.com/gGSlA.png) Create a Service account for your project and download the secret file in the JSON format. Keep the JSON file in your file system and set the path to it. Check below image to set the file path: [![Set the key as shown the image and path to the credentials image](https://i.stack.imgur.com/3D5MK.png)](https://i.stack.imgur.com/3D5MK.png) So, now all you have to do in is use JAVA client api to consume the Big Query REST API. Here's is a simple solution that I've been using for my project. ``` package com.example.bigquery; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.util.Arrays; import org.apache.log4j.Logger; import com.google.api.client.googleapis.auth.oauth2.GoogleCredential; import com.google.api.client.http.GenericUrl; import com.google.api.client.http.HttpContent; import com.google.api.client.http.HttpHeaders; import com.google.api.client.http.HttpRequest; import com.google.api.client.http.HttpRequestFactory; import com.google.api.client.http.HttpResponse; import com.google.api.client.http.HttpTransport; import com.google.api.client.http.javanet.NetHttpTransport; import com.google.api.client.http.json.JsonHttpContent; import com.google.api.client.json.JsonFactory; import com.google.api.client.json.jackson2.JacksonFactory; import com.google.common.io.CharStreams; public class BigQueryDemo { private static final String QUERY_URL_FORMAT = "https://www.googleapis.com/bigquery/v2/projects/%s/queries" + "?access_token=%s"; private static final String QUERY = "query"; private static final String QUERY_HACKER_NEWS_COMMENTS = "SELECT * FROM [bigquery-public-data:hacker_news.comments] LIMIT 1000"; private static final Logger logger = Logger.getLogger(BigQueryDemo.class); static GoogleCredential credential = null; static final HttpTransport HTTP_TRANSPORT = new NetHttpTransport(); static final JsonFactory JSON_FACTORY = new JacksonFactory(); static { // Authenticate requests using Google Application Default credentials. try { credential = GoogleCredential.getApplicationDefault(); credential = credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/bigquery")); credential.refreshToken(); } catch (IOException e) { e.printStackTrace(); } } public static void implicit() { String projectId = credential.getServiceAccountProjectId(); String accessToken = generateAccessToken(); // Set the content of the request. Dataset dataset = new Dataset().addLabel(QUERY, QUERY_HACKER_NEWS_COMMENTS); HttpContent content = new JsonHttpContent(JSON_FACTORY, dataset.getLabels()); // Send the request to the BigQuery API. 
GenericUrl url = new GenericUrl(String.format(QUERY_URL_FORMAT, projectId, accessToken)); logger.debug("URL: " + url.toString()); String responseJson = getQueryResult(content, url); logger.debug(responseJson); } private static String getQueryResult(HttpContent content, GenericUrl url) { String responseContent = null; HttpRequestFactory requestFactory = HTTP_TRANSPORT.createRequestFactory(); HttpRequest request = null; try { request = requestFactory.buildPostRequest(url, content); request.setParser(JSON_FACTORY.createJsonObjectParser()); request.setHeaders( new HttpHeaders().set("X-HTTP-Method-Override", "POST").setContentType("application/json")); HttpResponse response = request.execute(); InputStream is = response.getContent(); responseContent = CharStreams.toString(new InputStreamReader(is)); } catch (IOException e) { logger.error(e); } return responseContent; } private static String generateAccessToken() { String accessToken = null; if ((System.currentTimeMillis() > credential.getExpirationTimeMilliseconds())) { accessToken = credential.getRefreshToken(); } else { accessToken = credential.getAccessToken(); } System.out.println(accessToken); return accessToken; } } ``` Following is the Github link to the code: <https://github.com/vslala/BigQueryRestSample> It is just a demo project to fetch JSON data from the BQ REST API. Do not use it in your project directly. Let me know if you have any questions.
3,087
36,054,382
I have successfully run the code in C++, with the code below: ``` int countOnes(int num) { int count =0; while (num) { count ++; num = num & (num-1); } return count; } ``` but it didn't work in the Python version: ``` def countOnes(num): count = 0 while(num): count += 1 num = num&(num-1) return count ``` It seems to block (loop forever) when num = -1 (0xFFFFFFFF). Why does it work in C++ but not in Python?
2016/03/17
[ "https://Stackoverflow.com/questions/36054382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6075620/" ]
The reason this function works differently in the two languages is that they have different fundamental number types. In C++, `int` is indeed often a 32 bit integer in two's complement representation, though the language standard allows other representations. In Python, however, the standard number type has arbitrary precision. The continuation criterion for your loop is that `num` is non-zero. If it doesn't terminate, let's add some debugging to see what happens instead: ``` def countOnes(num): count = 0 while(num): count += 1 num = num&(num-1) print(num) # <-- inspect what happens to num return count ``` Let's see what we get for different inputs: ``` >>> countOnes(1) 0 1 >>> countOnes(7) 6 4 0 3 ``` For `-1`, however, things quickly go out of hand: ``` >>> countOnes(-1) -2 -4 -8 -16 -32 -64 -128 -256 -512 -1024 -2048 -4096 ... ``` `num` keeps decreasing. Since the number type has arbitrary size, the number will only keep growing more negative. To simulate the fixed precision of the C integer, you can limit `num` to 32 bits: ``` def countOnes(num): num = num & 0xffffffff # <-- Limit num to 32 bits count = 0 while(num): count += 1 num = num&(num-1) return count ``` With debugging output, the output for `countOnes(-1)` is now: ``` >>> countOnes(-1) 4294967294 4294967292 4294967288 4294967280 4294967264 4294967232 4294967168 4294967040 4294966784 4294966272 4294965248 4294963200 4294959104 4294950912 4294934528 4294901760 4294836224 4294705152 4294443008 4293918720 4292870144 4290772992 4286578688 4278190080 4261412864 4227858432 4160749568 4026531840 3758096384 3221225472 2147483648 0 32 ``` as desired :)
Python doesn't have "32-bit integers". Its integers are arbitrary (read:infinite) length. This means that -1 is *not* 0xffffffff, but is instead an infinitely long binary sequence of 1s.
3,088
68,116,542
I am trying to recode an existing python script to Java. It includes this following line: ``` r = requests.get('https://{}/redfish/v1/{}'.format(ip, query), auth=('ADMIN', 'ADMIN'), verify=False) ``` I don't have a lot of experience in Python and didn't write the script myself. So far I've only been able to figure out what it does, but not how to replicate it using Java. If anyone could point me in the right direction that would be awesome. Thanks!
2021/06/24
[ "https://Stackoverflow.com/questions/68116542", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8298497/" ]
First, read [this tutorial on the java HTTP client](https://openjdk.java.net/groups/net/httpclient/intro.html). (Note that it requires jdk11 or up). From there it should be fairly simple; that `.format()` thing is just replacing the `{}` with the provided ip and query parts. The auth part is more interesting. The verify part presumably means 'whatever, forget about SSL'. Between a password of 'admin' and 'disregard SSL issues', this code screams "You are about 2 weeks away from getting your box p0wned"; maybe you should be taking security a bit more seriously than this. At any rate, the equivalents in the java sphere are more complicated, because java intentionally does not mean for 'disable ssl' to be a casual throwaway move, unlike python which just hands you the bazooka, no questions asked. [Here is a tutorial on how to do basic http auth with the http client](https://www.baeldung.com/httpclient-4-basic-authentication). To shoot your foot off properly and ensure that the foot is fully dead, you need to make an SSL Context that does nothing and silently just accepts all certificates, even ones made by someone trying to hack your system. Then pass that for `.sslContext` to `HttpClient.builder()`. [Here is an example of someone firing this bazooka](https://gist.github.com/mingliangguo/c86e05a0f8a9019b281a63d151965ac7).
At first, you can use `String.format` for the formatting: ```java String url=String.format("https://%s/redfish/v1/%s",ip,query); ``` You could also use `MessageFormat` if you want to. For connecting, you can create a `URL` object, create a `URLConnection` (in your case `HttpsURLConnection`) from it, and open an `InputStream` for the response afterwards: ```java HttpsURLConnection con=(HttpsURLConnection)new URL(url).openConnection(); try(BufferedInputStream is=new BufferedInputStream(con.getInputStream())){ //... } ``` In order to do the authentication, you can take a look at [this tutorial](https://www.baeldung.com/java-http-url-connection): ```java String auth = "ADMIN:ADMIN"; byte[] encodedAuth = Base64.getEncoder().encode(auth.getBytes(StandardCharsets.UTF_8)); String authHeaderValue = "Basic " + new String(encodedAuth); //Get the HttpURLConnection con.setRequestProperty("Authorization", authHeaderValue); //Connect/open InputStream ``` If you really want to disable verification, you can create your own `HostnameVerifier` that allows everything, but this is strongly discouraged as it **allows man-in-the-middle attacks** because you basically **disable the security of HTTPS**: ```java con.setHostnameVerifier((hostname,sslSession)->true); ``` All combined, it could look like this: ```java String url=String.format("https://%s/redfish/v1/%s",ip,query); String auth = "ADMIN:ADMIN"; byte[] encodedAuth = Base64.getEncoder().encode(auth.getBytes(StandardCharsets.UTF_8)); String authHeaderValue = "Basic " + new String(encodedAuth); HttpsURLConnection con=(HttpsURLConnection)new URL(url).openConnection(); con.setRequestProperty("Authorization", authHeaderValue); con.setHostnameVerifier((hostname,sslSession)->true);//vulnerable to man-in-the-middle attacks try(BufferedInputStream is=new BufferedInputStream(con.getInputStream())){ //... } ```
3,090
44,962,225
I imported a table with the years that each coach served as the football coach. Some of the years listed look like this: "1903–1910, 1917, 1919" I am aiming for [1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 1917, 1919] In my original DataFrame this list is an object. I have tried: `x = "1903–1910, 1917, 1919"` `x[0].split('-')` `re.split(r'\s|-', x[0])` `x[0].replace('-', ' ').split(' ')` I keep getting: `['1903–1910']` What am I doing wrong? Why isn't python finding the hyphen?
2017/07/07
[ "https://Stackoverflow.com/questions/44962225", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8268427/" ]
The hyphen you see is not really a hyphen. It could be some other character, like an unicode en-dash which would look very similar. Try to copy-paste the actual character into the split string. Looking at the text you posted, here's the difference: ``` ➜ ~ echo '1903–1910' | xxd 00000000: 3139 3033 e280 9331 3931 300a 1903...1910. ➜ ~ echo '1903-1910' | xxd 00000000: 3139 3033 2d31 3931 300a 1903-1910. ``` The character in the first case is: <https://unicode-table.com/en/2013/>
Your character is not a hyphen, it's a dash: ``` >>> "–" == "-" False >>> x = "1903–1910, 1917, 1919" >>> x.split("–") ['1903', '1910, 1917, 1919'] ```
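To then get the fully expanded year list the question asks for, split on the en-dash and expand each range. A Python 3 sketch (`u'\u2013'` is the U+2013 en-dash character from the data):

```
def expand_years(s):
    years = []
    for part in s.split(','):
        bounds = part.strip().split(u'\u2013')  # u'\u2013' is the en-dash
        if len(bounds) == 2:
            years.extend(range(int(bounds[0]), int(bounds[1]) + 1))
        else:
            years.append(int(bounds[0]))
    return years

print(expand_years(u"1903\u20131910, 1917, 1919"))
# [1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 1917, 1919]
```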
3,091
34,902,486
There are many posts about the 'latin-1' codec, but those answers can't solve my problem; maybe it's my question, as I am just a Python rookie. When I used `cwd(dirname)` to change the directory on an FTP site, a UnicodeError occurred. Note that `dirname` includes Chinese characters; obviously those characters caused this error. I did some encoding and decoding according to the suggestions in past posts, but it didn't work. Could someone give me some advice on how to repair this error and make `cwd` work? Some code: ``` file = 'myhongze.jpg' dirname = './项目成员资料/zgcao/test-python/' site = '***.***.***.***' user = ('zhigang',getpass('Input Pwd:')) ftp = FTP(site) ftp.login(*user) ftp.cwd(dirname)# throw exception ``` --- Some tests: ```none u'./项目成员资料/zgcao/test-python/'.encode('utf-8') ``` Output: ```none b'./\xe9\xa1\xb9\xe7\x9b\xae\xe6\x88\x90\xe5\x91\x98\xe8\xb5\x84\xe6\x96\x99/zgcao/test-python/' ``` --- ```none u'./项目成员资料/zgcao/test-python/'.encode('utf-8').decode('cp1252') ``` Output: ```none UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 10: character maps to <undefined> ``` --- ```none u'./项目成员资料/zgcao/test-python/'.encode('utf-8').decode('latin-1') ``` Output: ```none './项ç\x9b®æ\x88\x90å\x91\x98èµ\x84æ\x96\x99/zgcao/test-python/' Using the result of decode('latin-1'), cwd still doesn't work. ``` --- Note that `项目成员资料` is shown as `ÏîÄ¿×é³ÉԱ˽È˿ռä` when I use `retrlines('LIST')`.
2016/01/20
[ "https://Stackoverflow.com/questions/34902486", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5816236/" ]
No need to edit ftplib source code. Just set `ftp.encoding` property in your code: ``` ftp.encoding = "UTF-8" ftp.cwd(dirname) ``` A similar question, about FTP output, rather then input: [List files with UTF-8 characters in the name in Python ftplib](https://stackoverflow.com/q/53091871/850848)
I solved this problem by editing `ftplib.py`. On my machine, it is under `C:\Users\<user>\AppData\Local\Programs\Python\Python36\Lib`. You just need to replace `encoding = "latin-1"` with `encoding = "utf-8"`
3,094
30,433,983
This is what I have in my `Procfile`: ``` web: gunicorn --pythonpath meraki meraki.wsgi ``` and when I do `foreman start`, I get this error: ``` gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3> ``` the reason, as far as I can see in the traceback, is: ``` ImportError: No module named wsgi ``` I did `import wsgi` in the shell and the import was successful, no errors. Why can't I start `foreman`? **Project Structure:** ``` meraki meraki //other apps meraki settings __init__.py celery.py views.py wsgi.py manage.py Procfile requirements requirements.txt ```
2015/05/25
[ "https://Stackoverflow.com/questions/30433983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4137194/" ]
You've confused yourself by following an unnecessarily complicated structure. You don't need that outer meraki directory, and your Procfile and requirements.txt should be in the same directory as manage.py. Then you can remove the pythonpath parameter and all should be well.
As Roseman said, it is an unnecessarily complicated structure. If you want to keep it that way, try ``` web: gunicorn --pythonpath /path/to/meraki meraki.wsgi ``` That is, the absolute path to the second `meraki` directory (out of 3), the one that contains the apps.
3,095
71,266,145
I wrote a python function called `plot_ts_ex` that takes two arguments `ts_file` and `ex_file` (and the file name for this function is `pism_plot_routine`). I want to run this function from a bash script from the terminal. When I don't use variables in the bash script and pass the function arguments (in this case `ts_file = ts_g10km_10ka_hy.nc` and `ex_file = ex_g10km_10ka_hy.nc`) directly, like this: ``` #!/bin/sh python -c 'import pism_plot_routine; pism_plot_routine.plot_ts_ex("ts_g10km_10ka_hy.nc", "ex_g10km_10ka_hy.nc")' ``` which is similar to [Run function from the command line](https://stackoverflow.com/questions/3987041/run-function-from-the-command-line), that works. But when I define variables for the input arguments, it doesn't work: ``` #!/bin/sh ts_name="ts_g10km_10ka_hy.nc" ex_name="ex_g10km_10ka_hy.nc" python -c 'import pism_plot_routine; pism_plot_routine.plot_ts_ex("$ts_name", "$ex_name")' ``` It gives the error: ``` FileNotFoundError: [Errno 2] No such file or directory: b'$ts_name' ``` Then I found a similar question [passing an argument to a python function from bash](https://stackoverflow.com/questions/47939713/passing-an-argument-to-a-python-function-from-bash/47943114#47943114?newreg=8641b85190ae44d7ad69a8b2b32f61f8) for a python function with only one argument and I tried ``` #!/bin/sh python -c 'import sys, pism_plot_routine; pism_plot_routine.plot_ts_ex(sys.argv[1])' "$ts_name" "$ex_name" ``` but that doesn't work. So how can I pass 2 arguments to a python function in a bash script using variables?
2022/02/25
[ "https://Stackoverflow.com/questions/71266145", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18308686/" ]
When you use single quotes the variables aren't going to be expanded; you should use double quotes instead:

```
#!/bin/sh
ts_name="ts_g10km_10ka_hy.nc"
ex_name="ex_g10km_10ka_hy.nc"
python -c "import pism_plot_routine; pism_plot_routine.plot_ts_ex('$ts_name', '$ex_name')"
```

---

You can also use sys.argv; the arguments are stored in a list, so `ts_name` is `sys.argv[1]` and `ex_name` is `sys.argv[2]`:

```
#!/bin/sh
ts_name="ts_g10km_10ka_hy.nc"
ex_name="ex_g10km_10ka_hy.nc"
python -c 'import sys, pism_plot_routine; pism_plot_routine.plot_ts_ex(sys.argv[1], sys.argv[2])' "$ts_name" "$ex_name"
```
You are giving the literal value `$ts_name` to python as a string; bash does not do anything with it. You need to close the `'`, so that it becomes a string in bash, and then open it again for it to become a string in python. The result will be something like this:

```
#!/bin/sh
ts_name="ts_g10km_10ka_hy.nc"
ex_name="ex_g10km_10ka_hy.nc"
python -c 'import pism_plot_routine; pism_plot_routine.plot_ts_ex("'$ts_name'", "'$ex_name'")'
```

For issues like this it is often nice to use a smaller piece of code to test things. I used `python3 -c 'print("${test}")'` to figure out what was being passed to python, without the bother of the `pism_plot` module.
3,096
36,915,188
I have a large csv file with 25 columns, that I want to read as a pandas dataframe. I am using `pandas.read_csv()`. The problem is that some rows have extra columns, something like that: ``` col1 col2 stringColumn ... col25 1 12 1 str1 3 ... 33657 2 3 str4 6 4 3 #<- that line has a problem 33658 1 32 blbla #<-some columns have missing data too ``` When I try to read it, I get the error ``` CParserError: Error tokenizing data. C error: Expected 25 fields in line 33657, saw 28 ``` The problem does not happen if the extra values appear in the first rows. For example if I add values to the third row of the same file it works fine ``` #that example works: col1 col2 stringColumn ... col25 1 12 1 str1 3 2 12 1 str1 3 3 12 1 str1 3 f 4 ... 33657 2 3 str4 6 4 3 #<- that line has a problem 33658 1 32 blbla #<-some columns have missing data too ``` My guess is that pandas checks the first (n) rows to determine the number of columns, and if you have extra columns after that it has a problem parsing it. Skipping the offending lines like suggested [here](https://stackoverflow.com/questions/18039057/python-pandas-error-tokenizing-data) is not an option, those lines contain valuable information. Does anybody know a way around this?
2016/04/28
[ "https://Stackoverflow.com/questions/36915188", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5841927/" ]
If you want Coded UI to launch your forms application on start of a test, use the method ``` ApplicationUnderTest.Launch("FORMS_APP_PATH"); ``` You can check the precise method details on MSDN. Update: To handle changing paths I created a new Forms solution and called it LabPlus. I then added a CodedUI test project to it. Inside the test project I added the reference of the LabPlus assembly. After that, I wrote following line in my CUI test method: ``` ApplicationUnderTest.Launch(System.Reflection.Assembly.GetAssembly(typeof(LabPlus.Form1)).Location); ``` I hope this answers your question :)
My fix: 1. add reference from test project to WinForm project 2. decorate test class with `[DeploymentItem('your-app.exe')]` attribute 3. add `ApplicationUnderTest.Launch("your-app.exe");` to the test method
3,097
37,009,587
I'm making a python app to automate some tasks in AutoCAD (drawing specific shapes in specific layers and checking the location of some circles). For the first part, drawing things, it was easy to use the AutoCAD Interop library, as you can easily put objects in the active document without doing anything in AutoCAD, not even loading any plugin. However, I can't find any way of using that same library to check the properties of objects in the document. What I need is a function that, when passed a layer name as its argument, returns a list of the centers of every circle in that layer. Now, it would be easy to do just by loading a plugin. But I need that info passed to a python program (one that loads the AutoCAD Interop library through pythonnet) and I don't know how to do it. So, summarizing, I need to:

* Learn how to use the AutoCAD Interop library to retrieve a drawing's info.

or

* Interface an AutoCAD plugin with an external app written in python.

Is what I intend to do possible? What would be the best approach?
2016/05/03
[ "https://Stackoverflow.com/questions/37009587", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6266184/" ]
You can iterate through the list of available resources using a method of the Assembly class:

```
var names = someInstance.GetType()
                        .GetTypeInfo()
                        .Assembly
                        .GetManifestResourceNames();
```

And then load a resource by its full name from the list above:

```
var stream = someAssembly.GetManifestResourceStream(name);
```

And then do whatever you want with the stream.
You need to get the Assembly in which you embedded the resource, then call GetManifestResourceStream. Here is a sample where the namespace YourProjectNameSpace has a folder MyFolder containing MyFile.json:

```
public class EndpointBuilder
{
     private static String _filePath = "YourProjectNameSpace.MyFolder.MyFile.json";

     public async Task<Endpoint> Build()
     {
        var assembly = this.GetType().GetTypeInfo().Assembly;
        var stream = assembly.GetManifestResourceStream(_filePath);

        var serializer = new JsonSerializer();
        using (var sr = new StreamReader(stream))
        using (var jsonTextReader = new JsonTextReader(sr))
        {
            Endpoint endpoint = serializer.Deserialize<Endpoint>(jsonTextReader);
            return endpoint;
        }
    }
}
```
3,098
718,040
<http://pypi.python.org/pypi/simplejson> I am just diving into the Python world and want to make a simple twitter application, which requires the installation of simplejson, but I'm not sure how to set it up and get it working. I am on a Windows system.
2009/04/04
[ "https://Stackoverflow.com/questions/718040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/32372/" ]
I would recommend [EasyInstall](http://pypi.python.org/pypi/setuptools#windows), a package management application for Python. Once you've installed EasyInstall, you should be able to go to a command window and type: ``` easy_install simplejson ``` This may require putting easy\_install.exe on your PATH first, I don't remember if the EasyInstall setup does this for you (something like `C:\Python25\Scripts`).
If you have Python 2.6 installed then you already have simplejson - just import `json`; it's the same thing.
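A common pattern at the time, shown here as a hedged sketch, was to prefer simplejson when installed and fall back to the 2.6+ stdlib module, since both expose the same dumps/loads API:

```
# Use simplejson if available, otherwise the stdlib json module.
try:
    import simplejson as json
except ImportError:
    import json  # Python 2.6+

print json.dumps({"status": "ok"})
```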
3,099
16,330,838
I would like to split a string by ':' and ' ' characters. However, I would like to ignore two spaces '  ' and two colons '::'. For example:

```
text = "s:11011 i:11010 ::110011 :110010 d:11000"
```

should split into

```
[s,11011,i,11010,:,110011, ,110010,d,11000]
```

After following the Regular Expressions HOWTO on the python website, I managed to come up with the following:

```
regx= re.compile('([\s:]|[^\s\s]|[^::])')
regx.split(text)
```

However this does not work as intended: it splits on the ':' and spaces, but it still includes the ':' and ' ' in the split.

```
[s,:,11011, ,i,:,11010, ,:,:,110011, , :,110010, ,d,:,11000]
```

How can I fix this? **EDIT:** In case of a double space, I only want one space to appear.
2013/05/02
[ "https://Stackoverflow.com/questions/16330838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1753000/" ]
Note this assumes that your data has format like `X:101010`: ``` >>> re.findall(r'(.+?):(.+?)\b ?',text) [('s', '11011'), ('i', '11010'), (':', '110011'), (' ', '110010'), ('d', '11000')] ``` Then `chain` them up: ``` >>> list(itertools.chain(*_)) ['s', '11011', 'i', '11010', ':', '110011', ' ', '110010', 'd', '11000'] ```
``` >>> text = "s:11011 i:11010 ::110011 :110010 d:11000" >>> [x for x in re.split(r":(:)?|\s(\s)?", text) if x] ['s', '11011', 'i', '11010', ':', '110011', ' ', '110010', 'd', '11000'] ```
3,109
65,856,151
I am using Anaconda and Python 3.8. Now some of my code needs to be run with Python 2, so I created a separate Python 2.7 environment in conda as shown below. After that, I installed Spyder, then launched Spyder, and Spyder shows I am still using Python 3.8. What do I have to do to use Python 2.7 in Spyder with the new environment? Thanks

```
conda create -n py27 python=2.7 ipykernel
conda activate py27
pip install spyder
```
2021/01/23
[ "https://Stackoverflow.com/questions/65856151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9983652/" ]
According to the documentation [here](https://docs.anaconda.com/anaconda/user-guide/tasks/switch-environment/), this should create a python2.7 virtual environment (29 April 2021) with spyder installed. I verified that spyder version 3.3.6 is python2.7 compatible ``` conda create -y -n py27 python=2.7 spyder=3.3.6 ``` However, I could not run `spyder` in the `py27` environment due to conflicts that `conda` failed to catch. The workaround shown by [asanganuwan](https://github.com/asanganuwan) on this [Spyder Github Issue](https://github.com/spyder-ide/spyder/issues/13510#issuecomment-754392635) page worked for me also > > Found a workaround to use Spyder on python 2.7. > > > > ``` > setup two virtual environments for Python 2.7 and 3.6. > Launce anaconda navigator and install spyder 3.3.6 on both the environments > Launch spyder on the environment with Python 3.6 > Preferences-->Python Interpreter --> set the Python path for 2.7 > Restart Spyder > Done! > > ``` > > So my recommendation is next run ``` conda create -y -n py36 python=3.6 spyder=3.3.6 conda activate py36 spyder ``` And follow the last three instructions from asanganuwan. Also you should use the `conda` package manager as much as possible since it is smarter with managing requirements. When I try to use `pip install spyder` after activating the environment, it warns of version conflicts and fails to start.
You can manage environments from Ananconda's Navigator. <https://docs.anaconda.com/anaconda/navigator/getting-started/#navigator-managing-environments>
3,114
8,638,880
I've come across an interesting behavior with Python 3 that I don't understand. I've understood that with the built-in immutable types like str, int, etc, that not only are two variables of the same value (both contain 'x') equal, they are literally the same object, which allows the use of the `is` operator. However, when I use the input() function, it seems to create a string object that is NOT the same object, but does have the same value. Here's my python interactive prompt of this: ``` $ python Python 3.2 (r32:88452, Feb 20 2011, 11:12:31) [GCC 4.2.1 (Apple Inc. build 5664)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> x = input() test >>> y = 'test' >>> x is y False >>> x == y True >>> id(x) 4301225744 >>> id(y) 4301225576 ``` Why is this?
2011/12/26
[ "https://Stackoverflow.com/questions/8638880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/206349/" ]
> I've understood that with the built-in immutable types like str, int, etc, that not only are two variables of the same value (both contain 'x') equal, they are literally the same object, which allows the use of the is operator.

This is your misconception: for `int`s and `long`s, that holds only for a small range of cached values; for strings, it may hold for identical literals within one module, but it is not guaranteed otherwise. But [there is a builtin function `intern()`](http://docs.python.org/library/functions.html#intern) which interns any given string.
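A small hedged demonstration of interning; note that in Python 3 (which the question uses) the function lives at `sys.intern`, while the link above describes the Python 2 builtin:

```
# Interned strings with equal values are guaranteed to be one object.
import sys

x = sys.intern(''.join(['te', 'st']))  # built at runtime, not a literal
y = sys.intern('test')
print(x == y)  # True
print(x is y)  # True: both point at the interned 'test' object
```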
This is the expected behavior.

```
x == y           # True because they have the same value
x is y           # False because x is not a reference to the same object as y
id(x) == id(y)   # False, for the same reason as above
```

But:

```
x = input()
y = x   # bind y to the same object that x references
y == x and x is y and id(x) == id(y)   # True
```
3,119
9,887,224
Must Apache or nginx be installed before I can run my PHP files in a browser? Django itself has a runserver for testing Python code. Is there any similar way to test PHP files?
2012/03/27
[ "https://Stackoverflow.com/questions/9887224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1188895/" ]
Your options are:

* Install a web server, as you said.
* Use the [built-in web server](http://php.net/manual/en/features.commandline.webserver.php), as JohnP suggested.
* Install php-cli, run your script from the console, save the output to an HTML file and open it in a browser.

Actually, you can't normally "run" PHP files in a browser. The browser can only send requests to a server and display the script's output.
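For the second option, a quick hedged example: the built-in development server (available since PHP 5.4) is started from the directory containing your scripts, and the port below is an arbitrary choice:

```
php -S localhost:8000
```

After that, opening `http://localhost:8000/yourscript.php` in the browser runs the script (the file name is a placeholder).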
Yes, you need something like nginx or Apache. Either install one of those (on, say, your local machine), **OR** see JohnP's comment - a new feature released recently.
3,127
2,765,664
I have got some code to pass a variable into a script from the command line. I can pass any value into `function` for the `var` arg. The problem is that when I put `function` into a class, the variable doesn't get read into `function`. The script is:

```
import sys, os

def function(var):
    print var

class function_call(object):
    def __init__(self, sysArgs):
        try:
            self.function = None
            self.args = []
            self.modulePath = sysArgs[0]
            self.moduleDir, tail = os.path.split(self.modulePath)
            self.moduleName, ext = os.path.splitext(tail)
            __import__(self.moduleName)
            self.module = sys.modules[self.moduleName]
            if len(sysArgs) > 1:
                self.functionName = sysArgs[1]
                self.function = self.module.__dict__[self.functionName]
                self.args = sysArgs[2:]
        except Exception, e:
            sys.stderr.write("%s %s\n" % ("PythonCall#__init__", e))

    def execute(self):
        try:
            if self.function:
                self.function(*self.args)
        except Exception, e:
            sys.stderr.write("%s %s\n" % ("PythonCall#execute", e))

if __name__=="__main__":
    function_call(sys.argv).execute()
```

This works by entering `./function <function> <arg1 arg2 ....>`. The problem is that I want to select the function I want that is in a class rather than just a function by itself. The code I have tried is the same except that `function(var):` is in a class. I was hoping for some ideas on how to modify my `function_call` class to accept this. If I want to pass in the value `Hello` I run the script like so -- `python function_call.py function Hello`. This then prints the `var` variable as `Hello`. By entering the variable on the command line I can then use this variable throughout the code. If the script was a bunch of functions I could just select the function using this code, but I would like to select the functions inside a particular class. Instead of `python function.py function hello` I could enter the class in as well, e.g. `python function.py A function hello`. Also, I have encountered problems saving the value for use outside the function. If anyone could solve this I would appreciate it very much.

`_________________________________________________________________________________`

Amended code. This is the code that works for me now.

```
class A:
    def __init__(self):
        self.project = sys.argv[2]

    def run(self, *sysArgs):
        pass

    def funct(self):
        print self.project

class function_call(object):
    def __init__(self, sysArgs):
        try:
            self.function = None
            self.args = []
            self.modulePath = sysArgs[0]
            self.moduleDir, tail = os.path.split(self.modulePath)
            self.moduleName, ext = os.path.splitext(tail)
            __import__(self.moduleName)
            self.module = sys.modules[self.moduleName]
            if len(sysArgs) > 1:
                self.functionName = sysArgs[1]
                self.function = getattr(A(), sysArgs[1])(*sysArgs[2:])
                self.args = sysArgs[2:]
        except Exception, e:
            sys.stderr.write("%s %s\n" % ("PythonCall#__init__", e))

    def execute(self):
        try:
            if self.function:
                self.function(*self.args)
        except Exception, e:
            sys.stderr.write("%s %s\n" % ("PythonCall#execute", e))

if __name__=="__main__":
    function_call(sys.argv).execute()
    inst_A = A()
    inst_A.funct()
```

Thanks for all the help.
2010/05/04
[ "https://Stackoverflow.com/questions/2765664", "https://Stackoverflow.com", "https://Stackoverflow.com/users/234435/" ]
you might find [`getattr`](http://docs.python.org/library/functions.html#getattr) useful: ``` >>> argv = ['function.py', 'run', 'Hello'] >>> class A: def run(self, *args): print(*args) >>> getattr(A(), argv[1])(*argv[2:]) Hello ```
It sounds like rather than: ``` self.function = self.module.__dict__[self.functionName] ``` you want to do something like (as @SilentGhost mentioned): ``` self.function = getattr(some_class, self.functionName) ``` The tricky thing with retrieving a method on a class (not an object instance) is that you are going to get back an unbound method. You will need to pass an instance of some\_class as the first argument when you call self.function. Alternately, if you are defining the class in question, you can use classmethod or [staticmethod](http://docs.python.org/library/functions.html#staticmethod) to make sure that some\_class.function\_you\_want\_to\_pick will return a bound function.
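As a hedged illustration of that last point (the class and method names here are invented for the example): with `staticmethod`, `getattr` on the class itself already returns something callable, so no instance is needed:

```
# Sketch: a staticmethod looked up via getattr on the class is
# directly callable, unlike a plain (unbound) method in Python 2.
class A(object):
    @staticmethod
    def greet(name):
        print "Hello, %s" % name

argv = ['function.py', 'greet', 'World']
func = getattr(A, argv[1])   # callable thanks to staticmethod
func(*argv[2:])              # prints: Hello, World
```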
3,129
65,661,996
How Do I pass values say `12,32,34` to formula `x+y+z` in python without assigning manually? I have tried using `**args` but the results is `None`. ``` def myFormula(*args): lambda x, y: x+y+z(*args) print(myFormula(1,2,3)) ```
2021/01/11
[ "https://Stackoverflow.com/questions/65661996", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14911024/" ]
Try this:

```py
formula = lambda x, y, z: x + y + z
print(formula(1, 2, 3))
```

There is no need to use `*args` there. Here is an example of using a function for a formula:

```py
# a = (v - u)/t
acceleration = lambda v, u, t: (v - u)/t
print(acceleration(23, 12, 5))
```
Just use `sum`: ``` print(sum([1, 2, 3])) ``` Output: ``` 6 ``` If you want a `def` try this: ``` def myFormula(*args): return sum(args) print(myFormula(1, 2, 3)) ``` Output: ``` 6 ```
3,130
55,929,577
I'm trying to run a python program in the online IDE SourceLair. I've written a line of code that simply prints hello, but I am embarrassed to say I can't figure out how to RUN the program. I have the console, web server, and terminal available on the IDE already pulled up. I just don't know how to start the program. I've tried it on Mac OSX and Chrome OS, and neither work. I don't know if anyone has experience with this IDE, but I can hope. Thanks!!
2019/04/30
[ "https://Stackoverflow.com/questions/55929577", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11434891/" ]
Can I ask why you are using SourceLair? Well, I just figured it out in about 2 minutes... it's the same as using any other editor for Python. All you have to do is run it in the terminal: `python (nameoffile).py`
Antonis from SourceLair here. In SourceLair, you get a fully featured terminal, plus a web server for running your Python applications. For simple files, as you correctly found out, all you have to do is save the file and run it through your terminal, using `python <your-file.py>`. If you want to run a complete web server, you can check out our server configuration guide here: <https://help.sourcelair.com/webserver/configure-your-web-server/> Happy hacking!
3,133
60,099,737
I have a compiled a dataframe that contains USGS streamflow data at several different streamgages. Now I want to create a Gantt chart similar to [this](https://stackoverflow.com/questions/31820578/how-to-plot-stacked-event-duration-gantt-charts-using-python-pandas). Currently, my data has columns as site names and a date index as rows. Here is a sample of my [data](https://drive.google.com/file/d/1KHokKsjAIuCS8lNVRJ9NQJzk0-5q6JYA/view?usp=sharing). The problem with the Gantt chart example I linked is that my data has gaps between the start and end dates that would normally define the horizontal time-lines. Many of the examples I found only account for the start and end date, but not missing values that may be in between. How do I account for the gaps where there is no data (blanks or nan in those slots for values) for some of the sites? First, I have a plot that shows where the missing data is. ``` import missingno as msno msno.bar(dfp) ``` [![Missing Streamflow Gage Data](https://i.stack.imgur.com/obRn8.png)](https://i.stack.imgur.com/obRn8.png) Now, I want time on the x-axis and a horizontal line on the y-axis that tracks when the sites contain data at those times. I know how to do this the brute force way, which would mean manually picking out the start and end dates where there is valid data (which I made up below). ``` from datetime import datetime import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as dt df=[('RIO GRANDE AT EMBUDO, NM','2015-7-22','2015-12-7'), ('RIO GRANDE AT EMBUDO, NM','2016-1-22','2016-8-5'), ('RIO GRANDE DEL RANCHO NEAR TALPA, NM','2014-12-10','2015-12-14'), ('RIO GRANDE DEL RANCHO NEAR TALPA, NM','2017-1-10','2017-11-25'), ('RIO GRANDE AT OTOWI BRIDGE, NM','2015-8-17','2017-8-21'), ('RIO GRANDE BLW TAOS JUNCTION BRIDGE NEAR TAOS, NM','2015-9-1','2016-6-1'), ('RIO GRANDE NEAR CERRO, NM','2016-1-2','2016-3-15'), ] df=pd.DataFrame(data=df) df.columns = ['A', 'Beg', 'End'] df['Beg'] = pd.to_datetime(df['Beg']) df['End'] = pd.to_datetime(df['End']) fig = plt.figure(figsize=(10,8)) ax = fig.add_subplot(111) ax = ax.xaxis_date() ax = plt.hlines(df['A'], dt.date2num(df['Beg']), dt.date2num(df['End'])) ``` [![enter image description here](https://i.stack.imgur.com/ZvJUV.png)](https://i.stack.imgur.com/ZvJUV.png) How do I make a figure (like the one shown above) with the dataframe I provided as an example? Ideally I want to avoid the brute force method. **Please note:** values of zero are considered valid data points. Thank you in advance for your feedback!
2020/02/06
[ "https://Stackoverflow.com/questions/60099737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10382580/" ]
This is happening because you have pd.MultiIndex column headers. I can tell you have MultiIndex column headers by the tuples in your column names in the pd.DataFrame.info() results. See the example below:

```
df = pd.DataFrame(np.random.randint(100,999,(5,5))) #create a dataframe
df.columns = pd.MultiIndex.from_arrays([['A','B','C','D','E'],['max','min','max','min','max']]) #create multi index column headers
type(df['A'] - df['E'])
```

Output:

```
pandas.core.frame.DataFrame
```

***Note*** the type of the return value, even though you are subtracting one column from another column. You expected a pd.Series, but this is returning a dataframe. You have a couple of options for solving this.

Option 1: use [`squeeze`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.squeeze.html)
-------------------------------------------------------------------------------------------------------------------

```
type((df['A'] - df['E']).squeeze())

pandas.core.series.Series
```

Option 2: flatten your column headers first
--------------------------------------------

```
df.columns = df.columns.map('_'.join)
type(df['A_max'] - df['E_max'])
```

Output:

```
pandas.core.series.Series
```

Now you can apply the .dt datetime accessor to your series. Knowing the `type` of the object you are working with is important.
Well, as @EdChum said above, `.dt` is a `pd.Series` accessor, not a `pd.DataFrame` attribute. If you want to get the date difference, use the `apply()` method of `pd.DataFrame`.
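To make the distinction concrete, a small hedged sketch (the column names are invented for illustration) showing `.dt` working once the subtraction yields a true Series:

```
# Subtracting two datetime Series gives a timedelta Series, and the
# .dt accessor works on a Series, not on a DataFrame.
import pandas as pd

df = pd.DataFrame({
    "start": pd.to_datetime(["2020-01-01", "2020-01-05"]),
    "end": pd.to_datetime(["2020-01-10", "2020-01-07"]),
})
delta = df["end"] - df["start"]   # a Series of Timedelta values
print(delta.dt.days)              # works because delta is a Series
```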
3,134
16,247,002
I have a script using DaemonRunner to create a daemon process with a pid file. The problem is that if someone tried to start it without stopping the currently running process, it will silently fail. What's the best way to detect an existing process and alert the user to stop it first? Is it as easy as checking the pidfile? My code is similar to this example: ``` #!/usr/bin/python import time from daemon import runner class App(): def __init__(self): self.stdin_path = '/dev/null' self.stdout_path = '/dev/tty' self.stderr_path = '/dev/tty' self.pidfile_path = '/tmp/foo.pid' self.pidfile_timeout = 5 def run(self): while True: print("Howdy! Gig'em! Whoop!") time.sleep(10) app = App() daemon_runner = runner.DaemonRunner(app) daemon_runner.do_action() ``` To see my actual code, look at investor.py in: <https://github.com/jgillick/LendingClubAutoInvestor>
2013/04/27
[ "https://Stackoverflow.com/questions/16247002", "https://Stackoverflow.com", "https://Stackoverflow.com/users/629195/" ]
Since DaemonRunner handles its own lockfile, it's wiser to refer to that one, to be sure you can't mess up. Maybe this block can help you with that: add `from lockfile import LockTimeout` to the beginning of the script and surround `daemon_runner.do_action()` like this:

```
try:
    daemon_runner.do_action()
except LockTimeout:
    print "Error: couldn't acquire lock"
    # you can exit here or try something else
```
This is the solution that I decided to use: ``` lockfile = runner.make_pidlockfile('/tmp/myapp.pid', 1) if lockfile.is_locked(): print 'It looks like a daemon is already running!' exit() app = App() daemon_runner = runner.DaemonRunner(app) daemon_runner.do_action() ``` Is this a best practice or is there a better way?
3,135
7,554,576
I am trying to execute a sample python program through the monkeyrunner command prompt and it is throwing an error:

```
Can't open specified script file
Usage: monkeyrunner [options] SCRIPT_FILE

 -s MonkeyServer IP Address.
 -p MonkeyServer TCP Port.
 -v MonkeyServer Logging level (ALL, FINEST, FINER, FINE, CONFIG, INFO, WARNING, SEVERE, OFF)
```

Exception in thread "main" java.lang.NullPointerException

Can anyone guide me on how to resolve this?
2011/09/26
[ "https://Stackoverflow.com/questions/7554576", "https://Stackoverflow.com", "https://Stackoverflow.com/users/946040/" ]
The script file should be given as a full path file name. Try the following: `monkeyrunner c:\test_script\first.py`
Under all Unix/Linux family OSes the shebang syntax can be used. Edit the first line of your script with the result of the following command:

```
which monkeyrunner
```

For example, if monkeyrunner (usually provided with the Android SDK) has been installed under /usr/local/bin/sdk, write:

```
#!/usr/local/bin/sdk/tools/monkeyrunner
```

or even use "env":

```
#!/usr/bin/env monkeyrunner
```

Then set your script file as executable:

```
chmod +x <script>
```

You can now launch your script from the shell.
3,136
21,302,971
I initially had `python 2.7.3`. I downloaded from the source and did `make install`, and after installing I ran `python`, but my system is still showing `2.7.3`. I didn't get any error while installing.
2014/01/23
[ "https://Stackoverflow.com/questions/21302971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3113427/" ]
Since you are on Ubuntu, I recommend following the steps outlined [here](http://heliumhq.com/docs/installing_python_2.7.5_on_ubuntu) to install a new Python version. The steps there are for Python 2.7.5 but should be equally applicable to Python 2.7.6.
The version you installed is probably in /usr/local/bin/python . Try calling it with the complete path. You may want to change your path settings or remove the previously installed version using your package manager if it was installed by the system.
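A quick hedged way to check this (the paths below are common defaults, not guaranteed on every system):

```
which -a python            # list every python found on the PATH, in order
/usr/local/bin/python -V   # likely the version you built from source
/usr/bin/python -V         # likely the distribution's packaged version
```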
3,146
59,987,601
Good morning! I am trying to remove duplicate rows from a csv file with pandas. I have 2 files, A.csv and B.csv. I want to delete all rows in A that exist in B. File A.csv:

```
Pedro,10,rojo
Mirta,15,azul
Jose,5,violeta
```

File B.csv:

```
Pedro,
ignacio,
fernando,
federico,
```

Output file output.csv:

```
Mirta,15,azul
Jose,5,violeta
```

I tried to join the files and then apply

```
cat A.csv B.csv > output.csv
```

and run this program in python:

```
import pandas as pd

df = pd.read_csv('output.csv')
df.drop_duplicates(inplace=True)
df.to_csv('final.csv', index=False)
```
2020/01/30
[ "https://Stackoverflow.com/questions/59987601", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12814128/" ]
Search based on Many-to-Many Relationship ========================================= Talking about articles and authors, when each article may have many authors, let's say you are going to search based on a `term` and find *Articles where the article name or article abstract contains the term or one of the authors of the article have the term in their first name or their last name.* **EF 6 - Many-To-May without entity class for Relationship** You can handle these cases in a Linq query using `Any`, the same way that you can handle in a SQL query using `EXISTS`: ``` Where(article=> article.Title.Contains(term) || article.Abstract.Contains(term) || article.Authors.Any(author => author.FirstName.Contains(term) || author.LastName.Contains(searchTerm))) ``` It doesn't exactly generate the following SQL Query, but the logic is quite similar to having the following in SQL: ``` FROM Articles WHERE (Articles.Title LIKE '%' + @Term + '%') OR (Articles.Abstract LIKE '%' + @Term + '%') OR EXISTS (SELECT * FROM Authors WHERE (Authors.FirstName LIKE '%' + @Term + '%') OR (Authors.LastName LIKE '%' + @Term + '%')) ``` **EF CORE - Many-To-May with entity class for Relationship** At the moment, Many-to-Many relationships without an entity class to represent the join table are not yet supported. You can handle these cases in a Linq query using `Any`, the same way that you can handle in a SQL query using `EXISTS` + `Join`: ``` .Where(article => article.Title.Contains(model.SearchTerm) || article.Abstract.Contains(model.SearchTerm) || article.ArticlesAuthors.Any(au => (au.Author.FirstName).Contains(model.SearchTerm) || (au.Author.LastName).Contains(model.SearchTerm))) ``` It doesn't exactly generate the following SQL Query, but the logic is quite similar to having the following in SQL: ``` FROM Articles WHERE (Articles.Title LIKE '%' + @Term + '%') OR (Articles.Abstract LIKE '%' + @Term + '%') OR EXISTS (SELECT * FROM ArticlesAuthors INNER JOIN Authors ON ArticlesAuthors.AuthorId = Authors.Id WHERE ((Authors.FirstName LIKE '%' + @Term + '%') OR (Authors.LastName LIKE '%'+ @Term + '%')) AND (Articles.Id = ArticlesAuthors.ArticleId)) ``` EF 6 - Example ============== The question is a bit cluttered including search sort and a lot of code and needs more focus. To make it more useful and more understandable for you and feature readers, I'll use a simpler model with fewer properties and easier to understand. As you can see in the EF diagram, the `ArticlesAuthors` table has not been shown in diagram because it's a many-to-many relationship containing just Id columns of other entities without any extra fields [![enter image description here](https://i.stack.imgur.com/kXiXd.png)](https://i.stack.imgur.com/kXiXd.png) ### Search Logic We want to find articles based on a `SerachTerm`, `PublishDateFrom` and `PublishDateTo`: * If the title or abstract of article contains the term, article should be part of the result. * If the combination of first name and last name of an author of the article contains the term, article should be part of the result. * If the publish date is greater than or equal to `PublishDateFrom`, article should be part of the result, also if the publish date is less than or equal to `PublishDateTo`, article should be part of the result. Here is a model for search: ``` public class ArticlesSearchModel { public string SearchTerm { get; set; } public DateTime? PublishDateFrom { get; set; } public DateTime? 
PublishDateTo { get; set; } } ``` Here is the code for search: > > Please note: `Inculde` doesn't have anything to do with search and > it's just for including the the related entities in output result. > > > ``` public class ArticlesBusinessLogic { public IEnumerable<Article> Search(ArticlesSearchModel model) { using (var db = new ArticlesDBEntities()) { var result = db.Articles.Include(x => x.Authors).AsQueryable(); if (model == null) return result.ToList(); if (!string.IsNullOrEmpty(model.SearchTerm)) result = result.Where(article => ( article.Title.Contains(model.SearchTerm) || article.Abstract.Contains(model.SearchTerm) || article.Authors.Any(author => (author.FirstName + " " + author.LastName).Contains(model.SearchTerm)) )); if (model.PublishDateFrom.HasValue) result = result.Where(x => x.PublishDate >= model.PublishDateFrom); if (model.PublishDateFrom.HasValue) result = result.Where(x => x.PublishDate <= model.PublishDateTo); return result.ToList(); } } } ``` EF CORE - Example ================= As I mentioned above, at the moment, Many-to-Many relationships without an entity class to represent the join table are not yet supported, so the model using EF CORE will be: [![enter image description here](https://i.stack.imgur.com/oUwmZ.png)](https://i.stack.imgur.com/oUwmZ.png) Here is the code for search: > > Please note: `Inculde` doesn't have anything to do with search and > it's just for including the the related entities in output result. > > > ``` public IEnumerable<Article> Search(ArticlesSearchModel model) { using (var db = new ArticlesDbContext()) { var result = db.Articles.Include(x=>x.ArticleAuthor) .ThenInclude(x=>x.Author) .AsQueryable(); if (model == null) return result; if (!string.IsNullOrEmpty(model.SearchTerm)) { result = result.Where(article => ( article.Title.Contains(model.SearchTerm) || article.Abstract.Contains(model.SearchTerm) || article.ArticleAuthor.Any(au => (au.Author.FirstName + " " + au.Author.LastName) .Contains(model.SearchTerm)) )); } if (model.PublishDateFrom.HasValue) { result = result.Where(x => x.PublishDate >= model.PublishDateFrom); } if (model.PublishDateFrom.HasValue) { result = result.Where(x => x.PublishDate <= model.PublishDateTo); } return result.ToList(); } } ```
You are doing a lot of things wrong:

1. You can not use `.ToString()` on classes or lists, so first you have to remove or change these lines. For example:

```cs
sort = sort.Where(s => OfficerIDs.ToString().Contains(searchString));
sort = sort.OrderBy(s => officerList.ToString()).ThenBy(s => s.EventDate);
sort = sort.OrderByDescending(s => officerList.ToString()).ThenBy(s => s.EventDate);
```

2. You are loading almost all the data from your database tables every time your page loads or your search or sorting changes. Of course, having paging makes this problem a little fuzzy here.

3. You are not using Entity Framework to load your relational data, so you can not write a query that loads what you need or what the user searched for (you are fetching data from the database in separate parts).

---

I know this is not what you are looking for, but honestly, I tried to answer your question and help you solve the problem and I ended up rewriting the whole thing... You should break your problem into smaller pieces and ask a more conceptual question.
3,147
3,870,312
I am trying to solve problem related to model inheritance in Django. I have four relevant models: `Order`, `OrderItem` which has ForeignKey to `Order` and then there is `Orderable` model which is model inheritance superclass to children models like `Fee`, `RentedProduct` etc. In python, it goes like this (posting only relevant parts): ``` class Orderable(models.Model): real_content_type = models.ForeignKey(ContentType, editable=False) objects = OrderableManager() available_types = [] def save(self, *args, **kwargs): """ Saves instance and stores information about concrete class. """ self.real_content_type = ContentType.objects.get_for_model(type(self)) super(Orderable, self).save(*args, **kwargs) def cast(self): """ Casts instance to the most concrete class in inheritance hierarchy possible. """ return self.real_content_type.get_object_for_this_type(pk=self.pk) @staticmethod def register_type(type): Orderable.available_types.append(type) @staticmethod def get_types(): return Orderable.available_types class RentedProduct(Orderable): """ Represent a product which is rented to be part of an order """ start_at = models.ForeignKey(Storage, related_name='starting_products', verbose_name=_('Start at')) real_start_at = models.ForeignKey(Storage, null=True, related_name='real_starting_products', verbose_name=_('Real start at')) finish_at = models.ForeignKey(Storage, related_name='finishing_products', verbose_name=_('Finish at')) real_finish_at = models.ForeignKey(Storage, null=True, related_name='real_finishing_products', verbose_name=_('Real finish at')) target = models.ForeignKey(Product, verbose_name=_('Product')) Orderable.register_type(RentedProduct) class OrderItem(BaseItem): unit_price = models.DecimalField(max_digits=8, decimal_places=2, verbose_name=_('Unit price')) count = models.PositiveIntegerField(default=0, verbose_name=_('Count')) order = models.ForeignKey('Order', related_name='items', verbose_name=_('Order')) discounts = models.ManyToManyField(DiscountDescription, related_name='order_items', through=OrderItemDiscounts, blank=True, verbose_name=_('Discounts')) target = models.ForeignKey(Orderable, related_name='ordered_items', verbose_name=_('Target')) class Meta: unique_together = ('order', 'target') ``` I would like to have an inline tied to Order model to enable editing OrderItems. Problem is, that the target field in OrderItem points to Orderable (not the concrete class which one can get by calling Orderable's `cast` method) and the form in inline is therefore not complete. Does anyone have an idea, how to create at least a bit user-friendly interface for this? Can it be solved by Django admin inlines only, or you would suggest creating special user interface? Thanks in advance for any tips.
2010/10/06
[ "https://Stackoverflow.com/questions/3870312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/303184/" ]
Try inheriting OrderItemInlineAdmin's form and defining your own form there. But fingers crossed for that.
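To flesh that hint out, here is a minimal hedged sketch of what such a custom inline form could look like; the class names, the import path, and the idea of relabelling choices via the question's `cast()` are my own illustration, not a tested solution:

```
# Sketch: give the OrderItem inline its own ModelForm so the 'target'
# choices can be labelled by their concrete Orderable subclass.
from django import forms
from django.contrib import admin
from myshop.models import OrderItem  # hypothetical app path

class OrderItemInlineForm(forms.ModelForm):
    class Meta:
        model = OrderItem

    def __init__(self, *args, **kwargs):
        super(OrderItemInlineForm, self).__init__(*args, **kwargs)
        # Label each Orderable choice by its casted (concrete) form.
        self.fields['target'].label_from_instance = \
            lambda obj: unicode(obj.cast())

class OrderItemInline(admin.TabularInline):
    model = OrderItem
    form = OrderItemInlineForm
```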
I'm looking for a solid answer to this very thing, but you should check out FeinCMS. They are doing this quite well. See, for example, the FeinCMS [inline editor](https://github.com/matthiask/feincms/blob/master/feincms/admin/item_editor.py). I need to figure out how to adapt this to my code.
3,148
34,910,115
I'd like to make this more efficient but I can't figure out how to turn this into a python list comprehension. ``` coupons = [] for source in sources: for coupon in source: if coupon.code_used not in coupons: coupons.append(coupon.code_used) ```
2016/01/20
[ "https://Stackoverflow.com/questions/34910115", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3977548/" ]
You cannot access the list you are currently creating, but if the order is not important you can use a `set`:

```
coupons = set(coupon.code_used for source in sources for coupon in source)
```
``` used_codes = set(coupon.code_used for source in sources for coupon in source) ```
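If the insertion order matters (the question's loop preserves it), a hedged order-preserving variant keeps a seen-set alongside the list:

```
# Order-preserving dedup: track seen codes in a set for O(1) lookups.
seen = set()
coupons = []
for source in sources:
    for coupon in source:
        if coupon.code_used not in seen:
            seen.add(coupon.code_used)
            coupons.append(coupon.code_used)
```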
3,149
40,674,526
I'm having trouble deploying my app to Heroku. I'm using the heroku_deploy.sh from the documentation and get:

```
Deploying Heroku Version 82d8ec66d98120ae24c89b88dc75e4d1c225461e
Traceback (most recent call last):
 File "<string>", line 1, in <module>
KeyError: 'source_blob'
Traceback (most recent call last):
 File "<string>", line 1, in <module>
KeyError: 'source_blob'
curl: no URL specified!
curl: try 'curl --help' or 'curl --manual' for more information
Traceback (most recent call last):
 File "<string>", line 1, in <module>
KeyError: 'output_stream_url'
curl: try 'curl --help' or 'curl --manual' for more information
```

I'm using a custom Docker image, but it has Python on it. Is there anything else I should make sure exists?
2016/11/18
[ "https://Stackoverflow.com/questions/40674526", "https://Stackoverflow.com", "https://Stackoverflow.com/users/67505/" ]
If you want to destroy the window when its closed just specifiy ``` closeAction : 'destroy' ``` instead of ``` closeAction : 'hide' ``` If doing so ExtJS destroys, and thus removes, all items completely. Additional, if specifying `destroy` as close action you will not need the additional listener (`onCardLayoutWindowBeforeHide`) to remove all items. If you create the window again it will be build from scratch (see [Sencha Fiddle](https://fiddle.sencha.com/#view/editor&fiddle/1knk)).
### Thanks to oberbics and Evan Trimboli I solve the problem, just because I assign a rownumberer like this: > > new Ext.grid.RowNumberer({width: 40}), > > > however, when I replace it with xtype config, it works well. > > {xtype: 'rownumberer'} > > > ``` Ext.define('MyWebServer.view.qualityassign.AllocateStrategy',{ extend : 'Ext.panel.Panel', xtype : 'allocate-strategy', layout : 'fit', requires: [ 'Ext.grid.column.Action', 'Ext.ProgressBarWidget', 'Ext.slider.Widget' ], reference : 'allocatestrategypanel', controller : 'allocatestrategy', viewModel : { data : { } }, listeners : { beforerender : 'onAllocateStrategyPanelBeforeRender', scope : 'controller' }, header : { xtype : 'container', html : '<p>Step 4 of 4 Choose allocate strategy</p>' }, initComponent : function() { var me = this; me.items = this.createItems(); me.callParent(); }, createItems : function() { var me = this; return [{ xtype : 'grid', reference : 'allocatestrategygrid', frame : true, viewConfig : { loadMask: true }, store : { type : 'allocate' }, dockedItems : [{ xtype : 'toolbar', dock : 'bottom', items : [{ xtype : 'toolbar', itemId : 'allocatestrategygrid-topbar', dock : 'top', items : [{ xtype : 'combo', reference : 'selectgroupcombo', fieldLabel : 'qualityinspectorgrp', labelWidth : 30, editable : false, triggerAction : 'all', valueField : 'sGroupGuid', displayField : 'sGroupName', forceSelection : true, store : { type : 'selectgroup' }, listeners : { select : 'onSelectGroupComboSelect', scope : 'controller' } }] }], columns : { xtype : 'gridcolumn', defaults: { align : 'center', width : 100, menuDisabled: true }, items : [ **new Ext.grid.RowNumberer({width: 40}),** { text : 'agentId', dataIndex : 'qualInspId' }, { text : 'agentName', dataIndex : 'qualInspName' }, { text : 'percent', xtype : 'widgetcolumn', width : 120, widget : { bind : '{record.percent}', xtype : 'progressbarwidget', textTpl: [ '{percent:number("0")}%' ] } }, { text : '', xtype : 'widgetcolumn', width : 120, widget : { xtype : 'numberfield', editable : false, bind : '{record.percent}', maxValue : 0.99, minValue : 0, step : 0.01, maxLength : 4, minLength : 1, autoFitErrors: false } }, { text : '', xtype : 'widgetcolumn', width : 120, flex : 1, widget : { xtype : 'sliderwidget', minValue : 0, maxValue : 1, bind : '{record.percent}', publishOnComplete : false, decimalPrecision : 2 } } ] } }]; } ```
3,155
5,445,166
I am developing a 3d shooter game that I would like to run on computers/phones/tablets and would like some help choosing which engine to use.

* I would like to write the application once and port it over to Android/iOS/Windows/Mac with ease.
* I would like to make the application streamable over the internet.
* The engine needs some physics (collision detection) as well as 3d rendering capabilities.
* I would prefer to use a scripting language such as Javascript or Python over Java or C++ (although I would be willing to learn these if it is the best option).
* My desire is to use an engine that is code-based and not GUI-based, an engine that is more like a library which I can import into my Python files (for instance) than an application which forces me to rely on its GUI to import assets and establish relationships between them.

> This desire stems from my recent experience with Unity3d and Blender. The way I had designed my code required me to write dozens of disorganized scripts to control various objects. I cannot help but think that if I had written my program in a series of python files I would be able to do a neater, faster job.

I'd appreciate any suggestions. The closest thing to what I want is Panda3d, but I had a difficult time working with textures, and I am not convinced that my application can be made easily available to mobile phone/device users. If there is a similar option that you can think of, I'd appreciate the tip.
2011/03/26
[ "https://Stackoverflow.com/questions/5445166", "https://Stackoverflow.com", "https://Stackoverflow.com/users/509895/" ]
You've mentioned iOS -- that pretty much limits you to going native or using web stack. Since native is not what you want (because that'd be different for each platform you mention), you can go JavaScript. The ideal thing for that would be WebGL, but support is still experimental and not available in phone systems. You can still use one of JS libraries built on top of 2D `<canvas>`. You can't expect great performance from that though. You can find examples here: <http://www.javascriptgaming.com/>
Well, I see you've checked Unity3D already, but I can't think of any other engines that work on PCs, phones, and via streaming internet that support 3D (for 2D check EXEN or others). I'm also pretty sure that you can use Unity in a code-based way, and it supports a couple of different languages, but for Unity to work you can't just import unity.dll (for example) into your code; you have to bring your code into Unity so that Unity can make it work on all these different platforms.
3,156
34,755,636
I am trying to show time series lines representing an effort amount using matplotlib and pandas. I've got my DFs all to overlay in one plot; however, when I do, python seems to strip the date from the x axis and insert some numbers. (I'm not sure where these come from, but at a guess, not all days contain the same data, so python has reverted to using an index id number.) If I plot any one of these on its own, it comes up with the date on the x-axis. Any hints or solutions to make the x axis show the date for the multiple plot would be much appreciated. This is the single figure plot with time axis: [![single figure plot with time axis](https://i.stack.imgur.com/P954I.png)](https://i.stack.imgur.com/P954I.png) The code I'm using to plot is

```
fig = pl.figure()
ax = fig.add_subplot(111)
ax.plot(b342,color='black')
ax.plot(b343,color='blue')
ax.plot(b344,color='red')
ax.plot(b345,color='green')
ax.plot(b346,color='pink')
ax.plot(fi,color='yellow')
plt.show()
```

This is the multiple plot fig with weird x axis: [![multiple plots without time axis](https://i.stack.imgur.com/kqLwz.png)](https://i.stack.imgur.com/kqLwz.png)
2016/01/12
[ "https://Stackoverflow.com/questions/34755636", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3509416/" ]
When you use `getClass().getResource(...)` you are loading a resource, not specifying a path to a file. In the case where the class loader loads classes from the file system, these essentially equate to the same thing, and it does actually work (though even then there's no technical reason it has to). When the class loader is loading classes by other mechanisms (and probably in all cases anyway), then it's important to pay attention to the Java [specifications for a resource](https://docs.oracle.com/javase/8/docs/technotes/guides/lang/resources.html). In particular, note: > > **Resources, names, and contexts** > > > A resource is identified by a string consisting of a sequence of > substrings, delimited by slashes (/), followed by a resource name. > ***Each substring must be a valid Java identifier.*** The resource name is of the form shortName or shortName.extension. Both shortName > and extension must be Java identifiers. > > > (My emphasis.) Since `..` is not a valid Java identifier, there's no guarantee of this resource being resolvable. It happens that the file system class loader resolves this in the way you expect, which is why it works in your IDE, but the implementation of `getResource(...)` in the jar class loader does not implement this in the way you are hoping. Try ``` FXMLLoader loader = new FXMLLoader(getClass().getResource("/sm/customer/CustomerHome.fxml")); ``` --- Using controller locations to load FXML: ---------------------------------------- Since you have organized your code so that each FXML is in the same package as its corresponding controller file (which I think is a sensible way to do things), you could also leverage this in loading the FXML: just load the FXML "relative to its controller": ``` FXMLLoader loader = new FXMLLoader(CustomerHomeCtrl.class.getResource("CustomerHome.fxml")); ``` This seems fairly natural in this setup, and the compiler will check that you have the package name for `CustomerHomeCtrl` correct at the point where you import the class. It also makes it easy to refactor: for example suppose you wanted to split `sm.admin` into multiple subpackages. In Eclipse you would create the subpackages, drag and drop the FXML and controllers to the appropriate subpackages, and the import statements would automatically be updated: there would be no further changes needed. In the case where the path is specified in the `getResource(...)`, all those would have to be changed by hand.
A bit late but this can maybe help someone. If you are using IntelliJ, your `resources` folder may not be marked as THE resources folder, which has the following icon: [![enter image description here](https://i.stack.imgur.com/LWBaw.png)](https://i.stack.imgur.com/LWBaw.png) This is the way I fixed it: [![enter image description here](https://i.stack.imgur.com/OdqfM.png)](https://i.stack.imgur.com/OdqfM.png)
3,159
14,722,788
I'm the author of [doctest](https://github.com/davidchambers/doctest), quick and dirty [doctests](http://docs.python.org/2/library/doctest.html) for JavaScript and CoffeeScript. I'd like to make the library less dirty by using a JavaScript parser rather than regular expressions to locate comments. I'd like to use [Esprima](http://esprima.org/) or [Acorn](https://github.com/marijnh/acorn) to do the following: 1. Create an AST 2. Walk the tree, and for each comment node: 1. Create an AST from the comment node's text 2. Replace the comment node in the main tree with this subtree **Input:** ``` !function() { // > toUsername("Jesper Nøhr") // "jespernhr" var toUsername = function(text) { return ('' + text).replace(/\W/g, '').toLowerCase() } }() ``` **Output:** ``` !function() { doctest.input(function() { return toUsername("Jesper Nøhr") }); doctest.output(4, function() { return "jespernhr" }); var toUsername = function(text) { return ('' + text).replace(/\W/g, '').toLowerCase() } }() ``` I don't know how to do this. Acorn provides a [walker](https://github.com/marijnh/acorn/blob/master/util/walk.js) which takes a node type and a function, and walks the tree invoking the function each time a node of the specified type is encountered. This seems promising, but doesn't apply to comments. With Esprima I can use `esprima.parse(input, {comment: true, loc: true}).comments` to get the comments, but I'm not sure how to update the tree.
2013/02/06
[ "https://Stackoverflow.com/questions/14722788", "https://Stackoverflow.com", "https://Stackoverflow.com/users/312785/" ]
Most AST-producing parsers throw away comments. I don't know what Esprima or Acorn do, but that might be the issue. In fact, Esprima lists comment capture as a current bug: <http://code.google.com/p/esprima/issues/detail?id=197> Acorn's code is right there in GitHub; it appears to throw comments away, too. So it looks like you get to fix either parser to capture the comments first, at which point your task should be straightforward, or you're stuck. Our DMS Software Reengineering Toolkit has JavaScript parsers that capture comments in the tree. It also has language *substring* parsers that can be used to parse the comment text into JavaScript ASTs of whatever type the comment represents (e.g., function declaration, expression, variable declaration, ...), and the support machinery to graft such new ASTs into the main tree. If you are going to manipulate ASTs, this substring capability is likely important: most parsers won't parse arbitrary language fragments; they are wired only to parse "whole programs". For DMS, there are no comment nodes to replace; there are comments associated with AST nodes, so the grafting process is a little trickier than just "replace comment nodes". Still pretty easy. I'll observe that most parsers (including these) read the source and break it into tokens by using or applying the equivalent of regular expressions. So, if you are already using regexes to locate comments (that means using them to locate \*non\*comments to throw away as well; e.g., you need to recognize string literals that contain comment-like text and ignore them), you are doing as well as the parsers would do anyway in terms of *finding* the comments. And if all you want to do is to replace them exactly with their content, echoing the source stream with the comment prefix/suffix /\* \*/ stripped will apparently do exactly what you want, so all this parsing machinery seems like overkill.
You can already use Esprima to achieve what you want:

1. Parse the code, get the comments (as an array).
2. Iterate over the comments, see if each is what you are interested in.
3. If you need to transform the comment, note its range. Collect all transformations.
4. Apply the transformations back-to-first so that the ranges are not shifted.

The trick here is not to change the AST. Simply apply the text changes as if you were doing a typical search-and-replace on the source string directly. Because the positions of the replacements might shift, you need to collect everything and then apply them starting from the last one. For an example of how to carry out such a transformation, take a look at my blog post ["From double-quotes to single-quotes"](http://ariya.ofilabs.com/2012/02/from-double-quotes-to-single-quotes.html) (it deals with string quotes but the principle remains the same). Last but not least, you might want to use a slightly higher-level utility such as [Rocambole](https://github.com/millermedeiros/rocambole).
3,160
48,548,878
I'm running Django 1.11 with Python 3.4 on Ubuntu 14.04.5 Moving my development code to the test server and running into some strange errors. Can anyone see what is wrong from the traceback? I'm very new to linux and have made the mistake of developing on a Windows machine on this first go around. I have since created a virtualbox copy of the test and production servers to develop on, but I'm hoping I can salvage what's up on the test server now. I think my app is looking in the correct directory for this environment, but I am a Django, Python and linux noob. Any direction would be very helpful. \*\*UPDATE: I added models.py and migration for relevant app. Also, I was using sqlite on dev machine and am using postgreSQL on test server (like a fool). Thanks! staff\_manager/models.py ``` # -*- coding: utf-8 -*- from __future__ import unicode_literals # Create your models here. from django.db import models from django.utils.encoding import python_2_unicode_compatible from smrt.settings import DATE_INPUT_FORMATS class OrganizationTitle(models.Model): def __str__(self): return "{}".format(self.organization_title_name) organization_title_name = models.CharField(max_length=150, unique=True) class ClassificationTitle(models.Model): def __str__(self): return "{}".format(self.classification_title_name) classification_title_name = models.CharField(max_length=150, unique=True) class WorkingTitle(models.Model): def __str__(self): return "{}".format(self.working_title_name) working_title_name = models.CharField(max_length=150, unique=True) class Category(models.Model): def __str__(self): return "{}".format(self.category_name) category_name = models.CharField(max_length=150, unique=True) class Department(models.Model): def __str__(self): return "{}".format(self.department_name) department_name = models.CharField(max_length=150, unique=True) class Employee(models.Model): first_name = models.CharField(max_length=150) last_name = models.CharField(max_length=150) org_title = models.ForeignKey(OrganizationTitle, blank=True, null=True, on_delete=models.SET_NULL) manager = models.ForeignKey('self', blank=True, null=True, on_delete=models.SET_NULL) manager_email = models.EmailField(max_length=50, blank=True, null=True) hire_date = models.DateField(blank=True, null=True) classification_title = models.ForeignKey(ClassificationTitle, blank=True, null=True, on_delete=models.SET_NULL) working_title = models.ForeignKey(WorkingTitle, blank=True, null=True, on_delete=models.SET_NULL) email_address = models.EmailField(max_length=250, blank=False, unique=True, error_messages={'unique': 'An account with this email exist.', 'required': 'Please provide an email address.'}) category = models.ForeignKey(Category, blank=True, null=True, on_delete=models.SET_NULL) is_substitute = models.BooleanField(default=False) department = models.ForeignKey(Department, blank=True, null=True, on_delete=models.SET_NULL) is_active = models.BooleanField(default=True) is_manager = models.BooleanField(default=False) class Meta: ordering = ('is_active', 'last_name',) def __str__(self): return "{}".format(self.first_name + ' ' + self.last_name) def __iter__(self): return iter([ self.email_address, self.last_name, self.first_name, self.org_title, self.manager, self.manager.email_address, self.hire_date, self.classification_title, self.working_title, self.email_address, self.category, self.is_substitute, self.department ]) def save(self, *args, **kwargs): for field_name in ['first_name', 'last_name']: val = getattr(self, field_name, False) if val: 
setattr(self, field_name, val.capitalize()) super(Employee, self).save(*args, **kwargs) ``` MIGRATION staff\_manager.0003\_auto\_20180131\_1756: ``` # -*- coding: utf-8 -*- # Generated by Django 1.11.7 on 2018-01-31 17:56 from __future__ import unicode_literals from django.db import migrations, models import django.db.models.deletion class Migration(migrations.Migration): dependencies = [ ('staff_manager', '0002_auto_20171127_2244'), ] operations = [ migrations.CreateModel( name='Category', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('category_name', models.CharField(max_length=150, unique=True)), ], ), migrations.CreateModel( name='ClassificationTitle', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('classification_title_name', models.CharField(max_length=150, unique=True)), ], ), migrations.CreateModel( name='Department', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('department_name', models.CharField(max_length=150, unique=True)), ], ), migrations.CreateModel( name='OrganizationTitle', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('organization_title_name', models.CharField(max_length=150, unique=True)), ], ), migrations.CreateModel( name='WorkingTitle', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('working_title_name', models.CharField(max_length=150, unique=True)), ], ), migrations.AlterField( model_name='employee', name='category', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='staff_manager.Category'), ), migrations.AlterField( model_name='employee', name='classification_title', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='staff_manager.ClassificationTitle'), ), migrations.AlterField( model_name='employee', name='department', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='staff_manager.Department'), ), migrations.AlterField( model_name='employee', name='email_address', field=models.EmailField(error_messages={'required': 'Please provide an email address.', 'unique': 'An account with this email exist.'}, max_length=250, unique=True), ), migrations.AlterField( model_name='employee', name='first_name', field=models.CharField(max_length=150), ), migrations.AlterField( model_name='employee', name='hire_date', field=models.DateField(blank=True, null=True), ), migrations.AlterField( model_name='employee', name='last_name', field=models.CharField(max_length=150), ), migrations.AlterField( model_name='employee', name='manager', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='staff_manager.Employee'), ), migrations.AlterField( model_name='employee', name='manager_email', field=models.EmailField(blank=True, max_length=50, null=True), ), migrations.AlterField( model_name='employee', name='org_title', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='staff_manager.OrganizationTitle'), ), migrations.AlterField( model_name='employee', name='working_title', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='staff_manager.WorkingTitle'), ), ] ``` TRACEBACK: ``` Operations to perform: Apply all 
migrations: admin, auth, contenttypes, csvimport, sessions, staff_manager Running migrations: Applying staff_manager.0003_auto_20180131_1756...Traceback (most recent call last): File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute return self.cursor.execute(sql, params) psycopg2.DataError: invalid input syntax for integer: "test" The above exception was the direct cause of the following exception: Traceback (most recent call last): File "manage.py", line 22, in <module> execute_from_command_line(sys.argv) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line utility.execute() File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/__init__.py", line 356, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/commands/migrate.py", line 204, in handle fake_initial=fake_initial, File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/migrations/executor.py", line 115, in migrate state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/migrations/executor.py", line 145, in _migrate_all_forwards state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/migrations/executor.py", line 244, in apply_migration state = migration.apply(state, schema_editor) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/migrations/migration.py", line 129, in apply operation.database_forwards(self.app_label, schema_editor, old_state, project_state) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/migrations/operations/fields.py", line 216, in database_forwards schema_editor.alter_field(from_model, from_field, to_field) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/base/schema.py", line 515, in alter_field old_db_params, new_db_params, strict) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/postgresql/schema.py", line 112, in _alter_field new_db_params, strict, File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/base/schema.py", line 684, in _alter_field params, File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/base/schema.py", line 120, in execute cursor.execute(sql, params) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/utils.py", line 80, in execute return super(CursorDebugWrapper, self).execute(sql, params) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute return self.cursor.execute(sql, params) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/utils.py", line 94, in __exit__ six.reraise(dj_exc_type, dj_exc_value, traceback) File 
"/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise raise value.with_traceback(tb) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute return self.cursor.execute(sql, params) django.db.utils.DataError: invalid input syntax for integer: "test" (django_env_1) www-root@Server:~/envs/django_env_1/smrt$ ^C (django_env_1) www-root@Server:~/envs/django_env_1/smrt$ django.db.utils.DataError: invalid input syntax for integer: "test"django.db.utils.DataError: invalid input syntax for integer: "test" ```
2018/01/31
[ "https://Stackoverflow.com/questions/48548878", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3541909/" ]
The issue is likely related to [this open bug](https://code.djangoproject.com/ticket/25012) in Django. You have some test data in one of the fields that you are now converting to a ForeignKey. For instance, maybe `department` used to be a `CharField` and you added an employee who has "test" as their `department` value. Now you're trying to change `department` from a CharField to a ForeignKey, and Django is trying to convert the previous value "test" into a relational value (an integer primary key) for the ForeignKey. I can think of a few good solutions: * If this is just a test database, simply reset your database and run the migration on a clean database * If you need to migrate the existing data, figure out which field has the "test" value, then try something similar to the solution given in the bug report: ``` from __future__ import unicode_literals from django.db import migrations class Migration(migrations.Migration): dependencies = [ ('documents', '0042_auto_19700101-0000'), ] operations = [ migrations.RunSQL('ALTER TABLE documents_document_tags ALTER tag_id TYPE varchar(32);'), ] ```
The simplest way that works for me is to change the foreign key to a character field, make migrations, and migrate. Then change the field back to a foreign key. This way, you will force a database alteration, which is very important.
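A minimal sketch of that two-step swap (model and field names are assumed from the question, not confirmed by this answer):

```python
from django.db import models

class Department(models.Model):
    department_name = models.CharField(max_length=150, unique=True)

class Employee(models.Model):
    # Step 1: temporarily make this a plain text column,
    # then run makemigrations + migrate.
    department = models.CharField(max_length=150, blank=True, null=True)

    # Step 2: swap it back to the relation below,
    # then run makemigrations + migrate again.
    # department = models.ForeignKey(Department, blank=True, null=True,
    #                                on_delete=models.SET_NULL)
```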
3,161
45,073,617
I am using AWS with RHEL 7. The default EC2 micro instance already has Python installed, but I encounter the error below when I try to install pip with yum. ``` sudo yum install pip ==================== Loaded plugins: amazon-id, rhui-lb, search-disabled-repos No package pip available. Error: Nothing to do ``` Can anyone advise on how to install pip with yum?
2017/07/13
[ "https://Stackoverflow.com/questions/45073617", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8035222/" ]
To install pip for Python 3.6 on Amazon Linux: there is no python36-pip package. If you install python34-pip, it will also install python34 and point to it. The best option that worked for me is the following: ``` #Download get-pip to current directory. It won't install anything, as of now curl -O https://bootstrap.pypa.io/get-pip.py #Use python3.6 to install pip python3.6 get-pip.py #this will install pip3 and pip3.6 ``` Based on your preference, if you would like to install it for all users, you may choose to run it as 'sudo'
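As a quick sanity check (a sketch, not part of the original answer), you can confirm which interpreter and pip version you ended up with:

```python
import sys
import pip

print(sys.executable)   # path of the interpreter this pip is bound to
print(pip.__version__)  # version string of the freshly installed pip
```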
The above answers seem to apply to python3, but I'm running an instance where the default Python is 2.7: ``` python --version Python 2.7.14 ``` I just tried python-pip, but it gave me pip for 2.6. To install pip for Python 2.7 I installed the package python27-pip: ``` sudo yum -y install python27-pip ``` That seemed to work for me.
3,168
53,093,487
How can I specify a multi-stage build within a `docker-compose.yml`? For each variant (e.g. dev, prod...) I have a multi-stage build with 2 docker files: * dev: `Dockerfile.base` + `Dockerfile.dev` * or prod: `Dockerfile.base` + `Dockerfile.prod` File `Dockerfile.base` (common for all variants): ``` FROM python:3.6 RUN apt-get update && apt-get upgrade -y RUN pip install pipenv pip COPY Pipfile ./ # some more common configuration... ``` File `Dockerfile.dev`: ``` FROM flaskapp:base RUN pipenv install --system --skip-lock --dev ENV FLASK_ENV development ENV FLASK_DEBUG 1 ``` File `Dockerfile.prod`: ``` FROM flaskapp:base RUN pipenv install --system --skip-lock ENV FLASK_ENV production ``` Without docker-compose, I can build as: ``` # Building dev docker build --tag flaskapp:base -f Dockerfile.base . docker build --tag flaskapp:dev -f Dockerfile.dev . # or building prod docker build --tag flaskapp:base -f Dockerfile.base . docker build --tag flaskapp:prod -f Dockerfile.prod . ``` According to the [compose-file doc](https://docs.docker.com/compose/compose-file/#build), I can specify a Dockerfile to build. ``` # docker-compose.yml version: '3' services: webapp: build: context: ./dir dockerfile: Dockerfile-alternate ``` But how can I specify 2 Dockerfiles in `docker-compose.yml` (for a multi-stage build)?
2018/10/31
[ "https://Stackoverflow.com/questions/53093487", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3858883/" ]
As mentioned in the comments, a multi-stage build involves a single Dockerfile to perform multiple stages. What you have is a common base image. You could convert these to a non-traditional multi-stage build with a syntax like (I say non-traditional because you do not perform any copying between the layers and instead use just the from line to pick from a prior stage): ``` FROM python:3.6 as base RUN apt-get update && apt-get upgrade -y RUN pip install pipenv pip COPY Pipfile ./ # some more common configuration... FROM base as dev RUN pipenv install --system --skip-lock --dev ENV FLASK_ENV development ENV FLASK_DEBUG 1 FROM base as prod RUN pipenv install --system --skip-lock ENV FLASK_ENV production ``` Then you can build one stage or another using the `--target` syntax to build, or a compose file like: ``` # docker-compose.yml version: '3.4' services: webapp: build: context: ./dir dockerfile: Dockerfile target: prod ``` The biggest downside is the current build engine will go through every stage until it reaches the target. Build caching can mean that's only a sub-second process. And BuildKit which is coming out of experimental in 18.09 and will need upstream support from docker-compose will be more intelligent about only running the needed commands to get your desired target built. All that said, I believe this is trying to fit a square peg in a round hole. The docker-compose developer is encouraging users to move away from doing the build within the compose file itself since it's not supported in swarm mode. Instead, the recommended solution is to perform builds with a CI/CD build server, and push those images to a registry. Then you can run the same compose file with `docker-compose` or `docker stack deploy` or even some k8s equivalents, without needing to redesign your workflow.
You can also combine multiple docker-compose files, each with a `dockerfile` key pointing to one of your existing Dockerfiles, and run `docker-compose -f docker-compose.yml -f docker-compose.prod.yml build`
3,178
37,871,964
I am calling a second python script that is written for the command line from within my script using ``` os.system('insert command line arguments here') ``` this works fine and runs the second script in the terminal. I would like this not to be output in the terminal and simply have access to the lists and variables that are being printed. Is this possible using os.system? Or, do I need to use something else?
2016/06/17
[ "https://Stackoverflow.com/questions/37871964", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1244051/" ]
You have a [circular import](http://effbot.org/zone/import-confusion.htm#circular-imports): `models.py` is importing `db` from core, and `core.py` is importing `User` from models You should move this line: ``` from users.models import User ``` to the bottom of `core.py`. That way when `models.py` tries to import `db` from `core`, it will be defined (since it is past that point)
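A minimal sketch of the suggested ordering (assuming `db` is a Flask-SQLAlchemy handle; the exact setup isn't shown in the question):

```python
# core.py
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # define db BEFORE anything imports it back from here

# ... app setup, helpers, etc. ...

# Import at the bottom: by the time this line runs, `db` already exists,
# so users.models can safely do `from core import db`.
from users.models import User
```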
With Django version 2 and above, it also works if you import the user model from `.models` using a relative import.
3,179
34,113,000
I have the following python Numpy function; it is able to take X, an array with an arbitrary number of columns and rows, and output a Y value predicted by a least squares function. What is the Math.Net equivalent for such a function? Here is the Python code: ``` newdataX = np.ones([dataX.shape[0],dataX.shape[1]+1]) newdataX[:,0:dataX.shape[1]]=dataX # build and save the model self.model_coefs, residuals, rank, s = np.linalg.lstsq(newdataX, dataY) ```
2015/12/06
[ "https://Stackoverflow.com/questions/34113000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3834415/" ]
I think you are looking for the functions on this page: <http://numerics.mathdotnet.com/api/MathNet.Numerics.LinearRegression/MultipleRegression.htm> You have a few options to solve it: * Normal Equations : `MultipleRegression.NormalEquations(x, y)` * QR Decomposition : `MultipleRegression.QR(x, y)` * SVD : `MultipleRegression.SVD(x, y)` Normal equations are faster but less numerically stable, while SVD is the most numerically stable but the slowest.
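For intuition, here is a small NumPy sketch (not Math.Net) showing that the normal-equations solution (what `NormalEquations` computes) matches `np.linalg.lstsq` on well-conditioned data:

```python
import numpy as np

# toy data: 5 samples, 2 features, plus the intercept column of ones
dataX = np.random.rand(5, 2)
dataY = np.random.rand(5)
newdataX = np.hstack([dataX, np.ones((dataX.shape[0], 1))])

coefs_lstsq, *_ = np.linalg.lstsq(newdataX, dataY, rcond=None)

# normal equations: solve (X^T X) b = X^T y
coefs_ne = np.linalg.solve(newdataX.T @ newdataX, newdataX.T @ dataY)

print(np.allclose(coefs_lstsq, coefs_ne))  # True
```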
You can call numpy from .NET using pythonnet (C# CODE BELOW IS COPIED FROM GITHUB): The only "funky" part right now with pythonnet is passing numpy arrays. It is possible to convert them to Python lists at the interface, though this reduces performance for some situations. <https://github.com/pythonnet/pythonnet/tree/develop> ``` static void Main(string[] args) { using (Py.GIL()) { dynamic np = Py.Import("numpy"); dynamic sin = np.sin; Console.WriteLine(np.cos(np.pi*2)); Console.WriteLine(sin(5)); double c = np.cos(5) + sin(5); Console.WriteLine(c); dynamic a = np.array(new List<float> { 1, 2, 3 }); dynamic b = np.array(new List<float> { 6, 5, 4 }, Py.kw("dtype", np.int32)); Console.WriteLine(a.dtype); Console.WriteLine(b.dtype); Console.WriteLine(a * b); Console.ReadKey(); } } ``` outputs: ``` 1.0 -0.958924274663 -0.6752620892 float64 int32 [ 6. 10. 12.] ``` Here is example using F# posted on github: <https://github.com/pythonnet/pythonnet/issues/112> ``` open Python.Runtime open FSharp.Interop.Dynamic open System.Collections.Generic [<EntryPoint>] let main argv = //set up for garbage collection? use gil = Py.GIL() //----- //NUMPY //import numpy let np = Py.Import("numpy") //call a numpy function dynamically let sinResult = np?sin(5) //make a python list the hard way let list = new Python.Runtime.PyList() list.Append( new PyFloat(4.0) ) list.Append( new PyFloat(5.0) ) //run the python list through np.array dynamically let a = np?array( list ) let sumA = np?sum(a) //again, but use a keyword to change the type let b = np?array( list, Py.kw("dtype", np?int32 ) ) let sumAB = np?add(a,b) let SeqToPyFloat ( aSeq : float seq ) = let list = new Python.Runtime.PyList() aSeq |> Seq.iter( fun x -> list.Append( new PyFloat(x))) list //Worth making some convenience functions (see below for why) let a2 = np?array( [|1.0;2.0;3.0|] |> SeqToPyFloat ) //-------------------- //Problematic cases: these run but don't give good results //make a np.array from a generic list let list2 = [|1;2;3|] |> ResizeArray let c = np?array( list2 ) printfn "%A" c //gives type not value in debugger //make a np.array from an array let d = np?array( [|1;2;3|] ) printfn "%A" d //gives type not value in debugger //use a np.array in a function let sumD = np?sum(d) //gives type not value in debugger //let sumCD = np?add(d,d) // this will crash //can't use primitive f# operators on the np.arrays without throwing an exception; seems //to work in c# https://github.com/tonyroberts/pythonnet //develop branch //let e = d + 1 //----- //NLTK //import nltk let nltk = Py.Import("nltk") let sentence = "I am happy" let tokens = nltk?word_tokenize(sentence) let tags = nltk?pos_tag(tokens) let taggedWords = nltk?corpus?brown?tagged_words() let taggedWordsNews = nltk?corpus?brown?tagged_words(Py.kw("categories", "news") ) printfn "%A" taggedWordsNews let tlp = nltk?sem?logic?LogicParser(Py.kw("type_check",true)) let parsed = tlp?parse("walk(angus)") printfn "%A" parsed?argument 0 // return an integer exit code ```
3,180
27,914,930
I'm trying to install the OpenStack python-novaclient using `pip install python-novaclient`. This task fails with: ``` netifaces.c:185:6 #error You need to add code for your platform ``` I have no idea what code it wants. Does anyone understand this?
2015/01/13
[ "https://Stackoverflow.com/questions/27914930", "https://Stackoverflow.com", "https://Stackoverflow.com/users/518012/" ]
This has to do with the order that libraries are imported in the netifaces setup.py and is fixed in version 0.10.3+ (which you need to install from source). Here's how to install 0.10.4 (the current latest release): ``` mkdir -p /tmp/install/netifaces/ cd /tmp/install/netifaces && wget -O "netifaces-0.10.4.tar.gz" "https://pypi.python.org/packages/source/n/netifaces/netifaces-0.10.4.tar.gz#md5=36da76e2cfadd24cc7510c2c0012eb1e" tar xvzf netifaces-0.10.4.tar.gz cd netifaces-0.10.4 && python setup.py install ```
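Once built, a quick check from Python that the module imports and works (using netifaces' documented `interfaces()` call):

```python
import netifaces

print(netifaces.interfaces())  # e.g. ['lo', 'eth0']
```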
I landed on this question while doing something similar: ``` pip install rackspace-novaclient ``` And this is what my error looked like: ``` Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-G5GwYu/netifaces/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Jugr2a-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-G5GwYu/netifaces ``` After reading the entire output logs, I realized I was missing "gcc" and just needed to install it. On CentOS 7, my fix was: ``` yum -y install gcc && pip install rackspace-novaclient ```
3,181
71,870,864
I'm writing a python tool with modules at different 'levels': * A low-level module, that can do everything, with a bit of work * A higher level module, with added "sugar" and helper functions I would like to be able to share function signatures from the low-level module to the higher one, so that intellisense works with both modules. > > In the following examples, I'm using the `width` and `height` parameters as placeholders for a pretty long list of arguments (around 30). > > > I could do everything explicitly. This works, the interface is what I want, and intellisense works; but it's very tedious, error prone and a nightmare to maintain: ```py # high level function, wraps/uses the low level one def create_rectangles(count, width=10, height=10): return [create_rectangle(width=width, height=height) for _ in range(count)] # low level function def create_rectangle(width=10, height=10): print(f"the rectangle is {width} wide and {height} high") create_rectangles(3, width=10, height=5) ``` I could create a class to hold the lower function's parameters. It's very readable, intellisense works, but the interface in clunky: ```py class RectOptions: def __init__(self, width=10, height=10) -> None: self.width = width self.height = height def create_rectangles(count, rectangle_options:RectOptions): return [create_rectangle(rectangle_options) for _ in range(count)] def create_rectangle(options:RectOptions): print(f"the rectangle is {options.width} wide and {options.height} high") # needing to create an instance for a function call feels clunky... create_rectangles(3, RectOptions(width=10, height=3)) ``` I could simply use `**kwargs`. It's concise and allows a good interface, but it breaks intellisense and is not very readable: ```py def create_rectangles(count, **kwargs): return [create_rectangle(**kwargs) for _ in range(count)] def create_rectangle(width, height): print(f"the rectangle is {width} wide and {height} high") create_rectangles(3, width=10, height=3) ``` What I would like is something that has the advantages of kwargs but with better readability/typing/intellisense support: ```py # pseudo-python class RectOptions: def __init__(self, width=10, height=10) -> None: self.width = width self.height = height # The '**' operator would add properties from rectangle_options to the function signature # We could even 'inherit' parameters from multiple sources, and error in case of conflict def create_rectangles(count, **rectangle_options:RectOptions): return [create_rectangle(rectangle_options) for idx in range(count)] def create_rectangle(options:RectOptions): print(f"the rectangle is {options.width} wide and {options.height} high") create_rectangles(3, width=10, height=3) ``` I could use code generation, but I'm not very familiar with that, and it seems like it would add a lot of complexity. While looking for a solution, I stumbled upon this [reddit post](https://www.reddit.com/r/Python/comments/8kmzfw/new_to_the_python_but_read_a_lot_of_codebases_and/). From what I understand, what I'm looking for is not currently possible, but I really hope I'm wrong about that I've tried the the [docstring\_expander](https://pypi.org/project/docstring-expander/) pip package, since it looks like it's meant to solve this problem, but it didn't do anything for me (I might be using it wrong...) I don't think this matters but just in case: I'm using vscode 1.59 and python 3.9.9
2022/04/14
[ "https://Stackoverflow.com/questions/71870864", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2486378/" ]
While `args` [won't be null](https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals/program-structure/main-command-line#command-line-arguments) (see the green **Tip** box in the link), it might be an array of length 0. In that case `args[0]` doesn't exist, because it refers to the first item of an array that has no items. If you are really setting it in "command line arguments" in Visual Studio and are really using debug mode - see [this answer](https://stackoverflow.com/a/54189058/939213). Basically - make sure it's all on "Any CPU". **EDIT** Change ``` string[] inputArray = input.Split(); ``` to ``` char[] inputArray = input.ToCharArray(); ```
How do you start the Main function (program)? Do you pass arguments to the Main function? If not, the length of your args array is 0 (you don't have any elements in that array).
3,186
56,695,227
I'm using `tf.estimator` API with TensorFlow 1.13 on Google AI Platform to build a DNN Binary Classifier. For some reason I don't get a `eval` graph but I do get a `training` graph. Here are two different methods for performing training. The first is the normal python method and the second is using GCP AI Platform in local mode. Notice in either method, the evaluation is simply a dot for what appears to be the final result. I was expecting a plot similar to training where it would be a curve. Lastly, I show the relevant model code for the performance metric. **Normal python notebook method:** [![enter image description here](https://i.stack.imgur.com/vAdF1.png)](https://i.stack.imgur.com/vAdF1.png) ``` %%bash #echo ${PYTHONPATH}:${PWD}/${MODEL_NAME} export PYTHONPATH=${PYTHONPATH}:${PWD}/${MODEL_NAME} python -m trainer.task \ --train_data_paths="${PWD}/samples/train_sounds*" \ --eval_data_paths=${PWD}/samples/valid_sounds.csv \ --output_dir=${PWD}/${TRAINING_DIR} \ --hidden_units="175" \ --train_steps=5000 --job-dir=./tmp ``` **Local gcloud (GCP) ai-platform method:** [![enter image description here](https://i.stack.imgur.com/ZPxVQ.png)](https://i.stack.imgur.com/ZPxVQ.png) ``` %%bash OUTPUT_DIR=${PWD}/${TRAINING_DIR} echo "OUTPUT_DIR=${OUTPUT_DIR}" echo "train_data_paths=${PWD}/${TRAINING_DATA_DIR}/train_sounds*" gcloud ai-platform local train \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ -- \ --train_data_paths="${PWD}/${TRAINING_DATA_DIR}/train_sounds*" \ --eval_data_paths=${PWD}/${TRAINING_DATA_DIR}/valid_sounds.csv \ --hidden_units="175" \ --train_steps=5000 \ --output_dir=${OUTPUT_DIR} ``` **The performance metric code** ``` estimator = tf.contrib.estimator.add_metrics(estimator, my_auc) ``` And ``` # This is from the tensorflow website for adding metrics for a DNNClassifier # https://www.tensorflow.org/api_docs/python/tf/metrics/auc def my_auc(features, labels, predictions): return { #'auc': tf.metrics.auc( labels, predictions['logistic'], weights=features['weight']) #'auc': tf.metrics.auc( labels, predictions['logistic'], weights=features[LABEL]) # 'auc': tf.metrics.auc( labels, predictions['logistic']) 'auc': tf.metrics.auc( labels, predictions['class_ids']), 'accuracy': tf.metrics.accuracy( labels, predictions['class_ids']) } ``` **The method used during train and evaluate** ``` eval_spec = tf.estimator.EvalSpec( input_fn = read_dataset( filename = args['eval_data_paths'], mode = tf.estimator.ModeKeys.EVAL, batch_size = args['eval_batch_size']), steps=100, throttle_secs=10, exporters = exporter) # addition of throttle_secs=10 above and this # below as a result of one of the suggested answers. # The result is that these mods do no print the final # evaluation graph much less the intermediate results tf.estimator.RunConfig(save_checkpoints_steps=10) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) ``` **The DNN binary classifier using tf.estimator** ``` estimator = tf.estimator.DNNClassifier( model_dir = model_dir, feature_columns = final_columns, hidden_units=hidden_units, n_classes=2) ``` **screenshot of file in model\_trained/eval dir.** Only this one file is in this directory. It is named model\_trained/eval/events.out.tfevents.1561296248.myhostname.local and looks like [![enter image description here](https://i.stack.imgur.com/QoC9O.png)](https://i.stack.imgur.com/QoC9O.png)
2019/06/20
[ "https://Stackoverflow.com/questions/56695227", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1008596/" ]
With the comment and suggestions as well as tweaking the parameters, here is the result which works for me. The code to start the tensorboard, train the model etc. Using ------- to denote a notebook cell --- ``` %%bash # clean model output dirs # This is so that the trained model is deleted output_dir=${PWD}/${TRAINING_DIR} echo ${output_dir} rm -rf ${output_dir} ``` --- ``` # start tensorboard def tb(logdir="logs", port=6006, open_tab=True, sleep=2): import subprocess proc = subprocess.Popen( "exec " + "tensorboard --logdir={0} --port={1}".format(logdir, port), shell=True) if open_tab: import time time.sleep(sleep) import webbrowser webbrowser.open("http://127.0.0.1:{}/".format(port)) return proc cwd = os.getcwd() output_dir=cwd + '/' + TRAINING_DIR print(output_dir) server1 = tb(logdir=output_dir) ``` --- ``` %%bash # The model run config is hard coded to checkpoint every 500 steps # #echo ${PYTHONPATH}:${PWD}/${MODEL_NAME} export PYTHONPATH=${PYTHONPATH}:${PWD}/${MODEL_NAME} python -m trainer.task \ --train_data_paths="${PWD}/samples/train_sounds*" \ --eval_data_paths=${PWD}/samples/valid_sounds.csv \ --output_dir=${PWD}/${TRAINING_DIR} \ --hidden_units="175" \ --train_batch_size=10 \ --eval_batch_size=100 \ --eval_steps=1000 \ --min_eval_frequency=15 \ --train_steps=20000 --job-dir=./tmp ``` The relevant model code ``` # This hard codes the checkpoints to be # every 500 training steps? estimator = tf.estimator.DNNClassifier( model_dir = model_dir, feature_columns = final_columns, hidden_units=hidden_units, config=tf.estimator.RunConfig(save_checkpoints_steps=500), n_classes=2) # trainspec to tell the estimator how to get training data train_spec = tf.estimator.TrainSpec( input_fn = read_dataset( filename = args['train_data_paths'], mode = tf.estimator.ModeKeys.TRAIN, # make sure you use the dataset api batch_size = args['train_batch_size']), max_steps = args['train_steps']) # max_steps allows a resume exporter = tf.estimator.LatestExporter(name = 'exporter', serving_input_receiver_fn = serving_input_fn) eval_spec = tf.estimator.EvalSpec( input_fn = read_dataset( filename = args['eval_data_paths'], mode = tf.estimator.ModeKeys.EVAL, batch_size = args['eval_batch_size']), steps=args['eval_steps'], throttle_secs = args['min_eval_frequency'], exporters = exporter) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) ``` The resultant graphs [![enter image description here](https://i.stack.imgur.com/Lr5w4.png)](https://i.stack.imgur.com/Lr5w4.png) [![enter image description here](https://i.stack.imgur.com/QjZVj.png)](https://i.stack.imgur.com/QjZVj.png)
In `estimator.train_and_evaluate()` you specify a `train_spec` and an `eval_spec`. The `eval_spec` often has a different input function (e.g. development evaluation dataset, non-shuffled) Every N steps, a checkpoint from the train process is saved, and the eval process loads those same weights and runs according to the `eval_spec`. Those eval summaries are logged under the step number of the checkpoint, so you are able to compare train vs test performance. In your case, evaluation produces only a single point on the graph for each call to evaluate. This point contains the average over the entire evaluation call. Take a look at [this](https://github.com/tensorflow/tensorflow/issues/18858) similar issue: I would modify `tf.estimator.EvalSpec` with `throttle_secs` small value (Default is 600) and `save_checkpoints_steps` in `tf.estimator.RunConfig` to an small value as well: `tf.estimator.RunConfig(save_checkpoints_steps=SOME_SMALL_VALUE_TO_VERIFY)` [![enter image description here](https://i.stack.imgur.com/2arND.png)](https://i.stack.imgur.com/2arND.png)
3,189
71,276,514
I had everything working fine, then out of nowhere I keep getting this ``` PS C:\Users\rygra\Documents\Ryan Projects\totalwine-product-details-scraper> ensurepip ensurepip : The term 'ensurepip' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + ensurepip + ~~~~~~~~~ + CategoryInfo : ObjectNotFound: (ensurepip:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException PS C:\Users\rygra\Documents\Ryan Projects\totalwine-product-details-scraper> py get-pip.py C:\Users\rygra\AppData\Local\Programs\Python\Python310\python.exe: can't open file 'C:\\Users\\rygra\\Documents\\Ryan Projects\\totalwine-product-details-scraper\\get-pip.py': [Errno 2] No such file or directory PS C:\Users\rygra\Documents\Ryan Projects\totalwine-product-details-scraper> ``` How do I resolve this issue?
2022/02/26
[ "https://Stackoverflow.com/questions/71276514", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18312732/" ]
Add the `android:exported` attribute in the `AndroidManifest.xml` file under `activity`, like the code below: ``` <activity android:name=".MainActivity" android:exported="true"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> ``` Add `android:exported="true"` for the launcher activity and `android:exported="false"` for the other activities.
Go to **AndroidManifest.xml** and add the following attribute to each activity: ``` <activity android:name=".MainActivity" android:exported="true" /> ```
3,190
12,755,804
I am trying to run a python script on my Mac. I am getting the error: > > ImportError: No module named opengl.opengl > > > I googled a bit and found that I was missing pyopengl, so I installed pip. I go to the directory pip-1.0 and then say > > sudo pip install pyopengl > > > and I believe it installs correctly, because I got this > > Successfully installed pyopengl Cleaning up... > > > at the end. I rerun the script but I am still getting the same error. Can someone tell me what I might be missing? Thanks!
2012/10/06
[ "https://Stackoverflow.com/questions/12755804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/592667/" ]
Is it on the PYTHONPATH? Maybe try something like this: ``` import sys sys.path.append("/path/to/opengl")  # the directory you installed into import opengl.opengl ``` Replace "/path/to/opengl" with the directory that you have installed in... Maybe also try what RocketDonkey is suggesting... I don't know, really...
Thanks guys! I figured it out. It was in fact a separate module which I needed to copy over to the "site-packages" location, and it worked fine. So in summary: there were no issues with the path, just that the appropriate module was not there.
3,191
11,713,871
I am playing with Heroku to test how good it is for Django apps. I created a simple project with two actions: 1. return simple hello world 2. generate image and send it as response I used `siege -c10 -t30s` to test both the Django dev server and gunicorn (both running on Heroku). These are my results: **Simple hello world** - django dev ``` Lifting the server siege... done. Transactions: 376 hits Availability: 100.00 % Elapsed time: 29.75 secs Data transferred: 0.00 MB Response time: 0.29 secs Transaction rate: 12.64 trans/sec Throughput: 0.00 MB/sec Concurrency: 3.65 Successful transactions: 376 Failed transactions: 0 Longest transaction: 0.50 Shortest transaction: 0.26 ``` - gunicorn ``` Lifting the server siege... done. Transactions: 357 hits Availability: 100.00 % Elapsed time: 29.27 secs Data transferred: 0.00 MB Response time: 0.27 secs Transaction rate: 12.20 trans/sec Throughput: 0.00 MB/sec Concurrency: 3.34 Successful transactions: 357 Failed transactions: 0 Longest transaction: 0.34 Shortest transaction: 0.26 ``` **generating images** - django dev ``` Lifting the server siege... done. Transactions: 144 hits Availability: 100.00 % Elapsed time: 29.91 secs Data transferred: 0.15 MB Response time: 1.52 secs Transaction rate: 4.81 trans/sec Throughput: 0.01 MB/sec Concurrency: 7.32 Successful transactions: 144 Failed transactions: 0 Longest transaction: 4.14 Shortest transaction: 1.13 ``` - gunicorn ``` Lifting the server siege... done. Transactions: 31 hits Availability: 100.00 % Elapsed time: 29.42 secs Data transferred: 0.05 MB Response time: 7.39 secs Transaction rate: 1.05 trans/sec Throughput: 0.00 MB/sec Concurrency: 7.78 Successful transactions: 31 Failed transactions: 0 Longest transaction: 9.13 Shortest transaction: 1.19 ``` I used - Django 1.4 - Gunicorn 0.14.6 - venv Why is gunicorn so slow? //UPDATE Both tests were running in the Heroku environment. dev server means the standard django server - it can be run by `python manage.py runserver`; it is described [here](https://devcenter.heroku.com/articles/django#using_a_different_wsgi_server).
2012/07/29
[ "https://Stackoverflow.com/questions/11713871", "https://Stackoverflow.com", "https://Stackoverflow.com/users/513686/" ]
Are the settings the same? The Django 1.4 dev server is multithreaded by default, and there is only one sync worker in gunicorn's default config.
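For what it's worth, a sketch of bumping the worker count via a gunicorn config file (the values here are illustrative, not tuned for Heroku):

```python
# gunicorn.conf.py -- pass with: gunicorn -c gunicorn.conf.py ...
workers = 4            # default is a single sync worker
worker_class = "sync"  # blocking worker; one request at a time per worker
```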
You're going to have to set up [application profiling](http://docs.python.org/library/profile.html) to gain some insight into where exactly the problem is located.
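For example, a minimal sketch with the standard library's cProfile around the slow path (the function name here is a hypothetical stand-in for your image-generation view logic):

```python
import cProfile
import pstats

def generate_image():  # hypothetical stand-in for the slow view code
    ...

cProfile.run("generate_image()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)  # show the top 10 offenders
```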
3,192
45,877,080
I'm trying to create a dropdown menu in HTML using info from a python script. I've gotten it to work thus far, however, the html dropdown displays all 4 values in the lists as 4 options. Current: **Option 1:** Red, Blue, Black Orange; **Option 2:** Red, Blue, Black, Orange etc. (Screenshot in link) [Current](https://i.stack.imgur.com/8w0tz.png) Desired: **Option 1:** Red **Option 2:** Blue etc. How do I make it so that the python list is separated? dropdown.py ``` from flask import Flask, render_template, request app = Flask(__name__) app.debug = True @app.route('/', methods=['GET']) def dropdown(): colours = ['Red', 'Blue', 'Black', 'Orange'] return render_template('test.html', colours=colours) if __name__ == "__main__": app.run() ``` test.html ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Dropdown</title> </head> <body> <select name= colours method="GET" action="/"> {% for colour in colours %} <option value= "{{colour}}" SELECTED>{{colours}}</option>" {% endfor %} </select> </body> </html> ```
2017/08/25
[ "https://Stackoverflow.com/questions/45877080", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8515504/" ]
You have a typo; replace `colours` with `colour`: ``` <option value= "{{colour}}" SELECTED>{{colours}}</option> ``` should become ``` <option value= "{{colour}}" SELECTED>{{ colour }}</option> <!-- ^^^^ --> ```
You need to use `{{colour}}` in both places (instead of `{{colours}}` in the second place): ``` <select name="colour" method="GET" action="/"> {% for colour in colours %} <option value="{{colour}}" SELECTED>{{colour}}</option>" {% endfor %} </select> ``` Note that using `selected` inside the loop will add `selected` attribute to all options and the last one will be selected, what you need to do is the following: ``` <select name="colour" method="GET" action="/"> <option value="{{colours[0]}}" selected>{{colours[0]}}</option> {% for colour in colours[1:] %} <option value="{{colour}}">{{colour}}</option> {% endfor %} </select> ```
3,194
37,691,552
I have the following code that is leveraging multiprocessing to iterate through a large list and find a match. How can I get all processes to stop once a match is found in any one process? I have seen examples, but none of them seem to fit into what I am doing here. ``` #!/usr/bin/env python3.5 import sys, itertools, multiprocessing, functools alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ12234567890!@#$%^&*?,()-=+[]/;" num_parts = 4 part_size = len(alphabet) // num_parts def do_job(first_bits): for x in itertools.product(first_bits, *itertools.repeat(alphabet, num_parts-1)): # CHECK FOR MATCH HERE print(''.join(x)) # EXIT ALL PROCESSES IF MATCH FOUND if __name__ == '__main__': pool = multiprocessing.Pool(processes=4) results = [] for i in range(num_parts): if i == num_parts - 1: first_bit = alphabet[part_size * i :] else: first_bit = alphabet[part_size * i : part_size * (i+1)] pool.apply_async(do_job, (first_bit,)) pool.close() pool.join() ``` Thanks for your time. **UPDATE 1:** I have implemented the changes suggested in the great approach by @ShadowRanger and it is nearly working the way I want it to. So I have added some logging to give an indication of progress and put a 'test' key in there to match. I want to be able to increase/decrease the iNumberOfProcessors independently of num_parts. At this stage when I have them both at 4 everything works as expected: 4 processes spin up (one extra for the console). When I change iNumberOfProcessors = 6, 6 processes spin up but only four of them have any CPU usage. So it appears 2 are idle. Whereas with my previous solution above, I was able to set the number of cores higher without increasing num_parts, and all of the processes would get used. [![enter image description here](https://i.stack.imgur.com/YjNj4.png)](https://i.stack.imgur.com/YjNj4.png) I am not sure about how to refactor this new approach to give me the same functionality. Can you have a look and give me some direction with the refactoring needed to be able to set iNumberOfProcessors and num_parts independently from each other and still have all processes used? Here is the updated code: ``` #!/usr/bin/env python3.5 import sys, itertools, multiprocessing, functools alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ12234567890!@#$%^&*?,()-=+[]/;" num_parts = 4 part_size = len(alphabet) // num_parts iProgressInterval = 10000 iNumberOfProcessors = 6 def do_job(first_bits): iAttemptNumber = 0 iLastProgressUpdate = 0 for x in itertools.product(first_bits, *itertools.repeat(alphabet, num_parts-1)): sKey = ''.join(x) iAttemptNumber = iAttemptNumber + 1 if iLastProgressUpdate + iProgressInterval <= iAttemptNumber: iLastProgressUpdate = iLastProgressUpdate + iProgressInterval print("Attempt#:", iAttemptNumber, "Key:", sKey) if sKey == 'test': print("KEY FOUND!! Attempt#:", iAttemptNumber, "Key:", sKey) return True def get_part(i): if i == num_parts - 1: first_bit = alphabet[part_size * i :] else: first_bit = alphabet[part_size * i : part_size * (i+1)] return first_bit if __name__ == '__main__': # with statement with Py3 multiprocessing.Pool terminates when block exits with multiprocessing.Pool(processes = iNumberOfProcessors) as pool: # Don't need special case for final block; slices can for gotmatch in pool.imap_unordered(do_job, map(get_part, range(num_parts))): if gotmatch: break else: print("No matches found") ``` **UPDATE 2:** Ok here is my attempt at trying @noxdafox's suggestion.
I have put together the following based on the link he provided with his suggestion. Unfortunately when I run it I get the error: ... line 322, in apply_async raise ValueError("Pool not running") ValueError: Pool not running Can anyone give me some direction on how to get this working? Basically the issue is that my first attempt did multiprocessing but did not support canceling all processes once a match was found. My second attempt (based on @ShadowRanger's suggestion) solved that problem, but broke the functionality of being able to scale the number of processes and the num_parts size independently, which is something my first attempt could do. My third attempt (based on @noxdafox's suggestion) throws the error outlined above. If anyone can give me some direction on how to maintain the functionality of my first attempt (being able to scale the number of processes and num_parts size independently), and add the functionality of canceling all processes once a match is found, it would be much appreciated. Thank you for your time. Here is the code from my third attempt based on @noxdafox's suggestion: ``` #!/usr/bin/env python3.5 import sys, itertools, multiprocessing, functools alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ12234567890!@#$%^&*?,()-=+[]/;" num_parts = 4 part_size = len(alphabet) // num_parts iProgressInterval = 10000 iNumberOfProcessors = 4 def find_match(first_bits): iAttemptNumber = 0 iLastProgressUpdate = 0 for x in itertools.product(first_bits, *itertools.repeat(alphabet, num_parts-1)): sKey = ''.join(x) iAttemptNumber = iAttemptNumber + 1 if iLastProgressUpdate + iProgressInterval <= iAttemptNumber: iLastProgressUpdate = iLastProgressUpdate + iProgressInterval print("Attempt#:", iAttemptNumber, "Key:", sKey) if sKey == 'test': print("KEY FOUND!! Attempt#:", iAttemptNumber, "Key:", sKey) return True def get_part(i): if i == num_parts - 1: first_bit = alphabet[part_size * i :] else: first_bit = alphabet[part_size * i : part_size * (i+1)] return first_bit def grouper(iterable, n, fillvalue=None): args = [iter(iterable)] * n return itertools.zip_longest(*args, fillvalue=fillvalue) class Worker(): def __init__(self, workers): self.workers = workers def callback(self, result): if result: self.pool.terminate() def do_job(self): print(self.workers) pool = multiprocessing.Pool(processes=self.workers) for part in grouper(alphabet, part_size): pool.apply_async(do_job, (part,), callback=self.callback) pool.close() pool.join() print("All Jobs Queued") if __name__ == '__main__': w = Worker(4) w.do_job() ```
2016/06/08
[ "https://Stackoverflow.com/questions/37691552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2109254/" ]
You can check [this question](https://stackoverflow.com/questions/33447055/python-multiprocess-pool-how-to-exit-the-script-when-one-of-the-worker-process/33450972#33450972) to see an implementation example solving your problem. This also works with a concurrent.futures pool. Just replace the `map` method with `apply_async` and iterate over your list from the caller. Something like this: ``` for part in grouper(alphabet, part_size): pool.apply_async(do_job, (part,), callback=self.callback) ``` [grouper recipe](https://stackoverflow.com/questions/434287/what-is-the-most-pythonic-way-to-iterate-over-a-list-in-chunks)
`multiprocessing` isn't really designed to cancel tasks, but you can simulate it for your particular case by using `pool.imap_unordered` and terminating the pool when you get a hit: ``` def do_job(first_bits): for x in itertools.product(first_bits, *itertools.repeat(alphabet, num_parts-1)): # CHECK FOR MATCH HERE print(''.join(x)) if match: return True # If we exit loop without a match, function implicitly returns falsy None for us # Factor out part getting to simplify imap_unordered use def get_part(i): if i == num_parts - 1: first_bit = alphabet[part_size * i :] else: first_bit = alphabet[part_size * i : part_size * (i+1)] return first_bit if __name__ == '__main__': # with statement with Py3 multiprocessing.Pool terminates when block exits with multiprocessing.Pool(processes=4) as pool: # Don't need special case for final block; slices can run past the end safely for gotmatch in pool.imap_unordered(do_job, map(get_part, range(num_parts))): if gotmatch: break else: print("No matches found") ``` This will run `do_job` for each part, returning results as fast as it can get them. When a worker returns `True`, the loop breaks, and the `with` statement for the `Pool` is exited, `terminate`-ing the `Pool` (dropping all work in progress). Note that while this works, it's kind of abusing `multiprocessing`; it won't handle canceling individual tasks without terminating the whole `Pool`. If you need more fine grained task cancellation, you'll want to look at [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future.cancel), but even there, it can only cancel undispatched tasks; once they're running, they can't be cancelled without terminating the `Executor` or using a side-band means of termination (having the task poll some interprocess object intermittently to determine if it should continue running).
3,195
21,995,255
I'm trying to use BeautifulSoup on my NAS (the model in the title), but I am not able to install it; in the `ipkg list` there isn't a package named BeautifulSoup. On my NAS I have this version of python: ``` Python 2.5.6 (r256:88840, Feb 16 2012, 08:51:29) [GCC 3.4.3 20041021 (prerelease)] on linux2 ``` So I think I have to use version 3 of BeautifulSoup, and I have two questions: 1) does anyone know how I can install it? 2) if I can't install this module, can I import the BeautifulSoup.py file directly in my script? If yes, how do I do that? Thanks
2014/02/24
[ "https://Stackoverflow.com/questions/21995255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/678833/" ]
In this case I suppose that you can't even install pip to manage your Python dependencies. One way of doing it would be to download the source from <http://www.crummy.com/software/BeautifulSoup/bs3/download//3.x/>, picking the tarball for your preferred version. Once done, unzip it, cd into the folder, and type: ``` $ python setup.py install ```
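For the second question: BeautifulSoup 3 ships as a single `BeautifulSoup.py` module, so a sketch of importing it directly from a copy placed next to your script (the module and class names are the real BS3 ones; the HTML is just a toy example):

```python
# BeautifulSoup.py sits in the same directory as this script
from BeautifulSoup import BeautifulSoup  # BS3-style import

soup = BeautifulSoup("<html><body><p>hello</p></body></html>")
print(soup.find("p").string)  # -> hello
```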
You can if you install python3 from the package manager: ``` ashton@NASty:~/bin/three/$ pip install beautifulsoup4 --user Requirement already satisfied: beautifulsoup4 in /volume1/@appstore/py3k/usr/local/lib/python3.5/site-packages (4.8.0) Requirement already satisfied: soupsieve>=1.2 in /volume1/@appstore/py3k/usr/local/lib/python3.5/site-packages (from beautifulsoup4) (1.9.2) ```
3,196
39,215,663
I think I have the same issue as [here on SO.](https://stackoverflow.com/questions/28323644/flask-sqlalchemy-backref-not-working) Using python 3.5, flask-sqlalchemy, and sqlite. I am trying to establish a one (User) to many (Post) relationship. ``` class User(db_blog.Model): id = db_blog.Column(db_blog.Integer, primary_key=True) nickname = db_blog.Column(db_blog.String(64), index=True, unique=True) email = db_blog.Column(db_blog.String(120), unique=True) posts = db_blog.relationship('Post', backref='author', lazy='dynamic') def __repr__(self): return '<User: {0}>'.format(self.nickname) class Post(db_blog.Model): id = db_blog.Column(db_blog.Integer, primary_key=True) body = db_blog.Column(db_blog.String(2500)) title = db_blog.Column(db_blog.String(140)) user_id = db_blog.Column(db_blog.Integer, db_blog.ForeignKey('user.id')) def __init__(self, body, title, **kwargs): self.body = body self.title = title def __repr__(self): return '<Title: {0}\nPost: {1}\nAuthor: {2}>'.format(self.title, self.body, self.author) ``` Author is None. ``` >>> u = User(nickname="John Doe", email="jdoe@email.com") >>> u <User: John Doe> >>> db_blog.session.add(u) >>> db_blog.session.commit() >>> u = User.query.get(1) >>> u <User: John Doe> >>> p = Post(body="Body of post", title="Title of Post", author=u) >>> p <Title: Title of Post Post: Body of post Author: None> #Here I expect- "Author: John Doe>" ``` I get the same result after session add/commit of the post and so can't find the post author.
2016/08/29
[ "https://Stackoverflow.com/questions/39215663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5795832/" ]
If you add the newly created `post` to the `posts` attribute of the user, it will work: ``` >>> u = User(nickname="John Doe", email="jdoe@email.com") >>> u <User: John Doe> >>> db_blog.session.add(u) >>> p = Post(body="Body of post", title="Title of Post") >>> p <Title: Title of Post Post: Body of post Author: None> >>> db_blog.session.add(p) >>> db_blog.session.commit() >>> u.posts.append(p) >>> p <Title: Title of Post Post: Body of post Author: <User: John Doe>> >>> p.author <User: John Doe> ``` The reason your code doesn't work as-is is that you've defined an `__init__` constructor for your `Post` class, and it doesn't consider the `author` field at all, so Python doesn't know what to do with that `author` parameter.
You added your own `__init__` to `Post`. While it accepts keyword arguments, it does nothing with them. You can either update it to use them ``` def __init__(self, body, title, **kwargs): self.body = body self.title = title for k, v in kwargs.items(): setattr(self, k, v) ``` Or, ideally, you can just remove the method and let SQLAlchemy handle it for you.
3,197
15,481,808
I'm trying to set the figure size with `fig1.set_size_inches(5.5,3)` in Python, but the plot produces a figure where the x label is not completely visible. The figure itself has the size I need, but it seems like the axes inside are too tall, and the x label just doesn't fit anymore. Here is my code: ``` fig1 = plt.figure() fig1.set_size_inches(5.5,4) fig1.set_dpi(300) ax = fig1.add_subplot(111) ax.grid(True,which='both') ax.hist(driveDistance,100) ax.set_xlabel('Driven Distance in km') ax.set_ylabel('Frequency') fig1.savefig('figure1_distance.png') ``` and here is the resulting file: ![image with 5.5x3 inch](https://i.stack.imgur.com/HMiQJ.png)
2013/03/18
[ "https://Stackoverflow.com/questions/15481808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1862909/" ]
You can tell the save method to take the artist of the x-label into consideration. This is done with `bbox_extra_artists` and a tight bounding box. The resulting code would be: ``` import matplotlib.pyplot as plt fig1 = plt.figure() fig1.set_size_inches(5.5,4) fig1.set_dpi(300) ax = fig1.add_subplot(111) ax.grid(True,which='both') ax.hist(driveDistance,100) xlabel = ax.set_xlabel('Driven Distance in km') ax.set_ylabel('Frequency') fig1.savefig('figure1_distance.png', bbox_extra_artists=[xlabel], bbox_inches='tight') ```
It works for me if I initialize the figure with the `figsize` and `dpi` as `kwargs`: ``` from numpy import random from matplotlib import pyplot as plt driveDistance = random.exponential(size=100) fig1 = plt.figure(figsize=(5.5,4),dpi=300) ax = fig1.add_subplot(111) ax.grid(True,which='both') ax.hist(driveDistance,100) ax.set_xlabel('Driven Distance in km') ax.set_ylabel('Frequency') fig1.savefig('figure1_distance.png') ``` ![driveDistance](https://i.stack.imgur.com/QhxE9.png)
3,198
42,903,036
I am trying to find any way possible to get a SharePoint list in Python. I was able to connect to SharePoint and get the XML data using Rest API via this video: <https://www.youtube.com/watch?v=dvFbVPDQYyk>... but not sure how to get the list data into python. The ultimate goal will be to get the SharePoint data and import into SSMS daily. Here is what I have so far.. ``` import requests from requests_ntlm import HttpNtlmAuth url='URL would go here' username='username would go here' password='password would go here' r=requests.get(url, auth=HttpNtlmAuth(username,password),verify=False) ``` I believe these would be the next steps. I really only need help getting the data from SharePoint in Excel/CSV format preferably and should be fine from there. But any recommendations would be helpful.. ``` #PARSE XML VIA REST API #PRINT INTO DATAFRAME AND CONVERT INTO CSV #IMPORT INTO SQL SERVER #EMAIL RESULTS ```
2017/03/20
[ "https://Stackoverflow.com/questions/42903036", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7418496/" ]
``` from shareplum import Site from requests_ntlm import HttpNtlmAuth server_url = "https://sharepoint.xxx.com/" site_url = server_url + "sites/org/" auth = HttpNtlmAuth('xxx\\user', 'pwd') site = Site(site_url, auth=auth, verify_ssl=False) sp_list = site.List('list name in my share point') data = sp_list.GetListItems('All Items', rowlimit=200) ```
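To get from there to the CSV the question asks for, a small sketch (assuming `data` is the list of dicts returned by `GetListItems` above):

```python
import pandas as pd

df = pd.DataFrame(data)  # one row per SharePoint list item
df.to_csv("sharepoint_list.csv", index=False)
```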
I know this doesn't directly answer your question (and you probably have an answer by now) but I would give the [SharePlum](https://pypi.org/project/SharePlum/) library a try. It should hopefully [simplify](https://shareplum.readthedocs.io/en/latest/index.html) the process you have for interacting with SharePoint. Also, I am not sure if you have a requirement to export the data into a csv but, you can [connect directly to SQL Server](https://stackoverflow.com/a/33787509/9350722) and insert your data more directly. I would have just added this into the comments but don't have a high enough reputation yet.
3,199
44,789,394
I have a spark cluster running in EMR. I also have a jupyter notebook running on a second EC2 machine. I would like to use spark on my EC2 instance through jupyter. I'm looking for references on how to configure spark to access the EMR cluster from EC2. Searching gives me only guides on how to setup spark on either EMR or EC2, but not how to access one from the other. I saw a similar question here: [Sending Commands from Jupyter/IPython running on EC2 to EMR cluster](https://stackoverflow.com/questions/35450586/sending-commands-from-jupyter-ipython-running-on-ec2-to-emr-cluster) However, the setup there uses a bootstrap action to setup zeppelin, and I'm not sure how to edit my hadoop configuration on EC2.
2017/06/27
[ "https://Stackoverflow.com/questions/44789394", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2781958/" ]
Here you can do it like the following, using the form ID.

```
<script>
function submitForm(){
   document.getElementById("myFrom_id").submit();// Form submission
}
</script>

<form class="functionsquestionform2 classtest" id="myFrom_id" action="frameworkplayground.php" method="POST">
    <input type="radio" name="functions_question2" value="1" /> Question
    <div id="div_man" onclick="return submitForm();">click me</div>
</form>
```

Please do some research and study basic HTML and JS on W3Schools. There are many ways, but this is the simple way that I prefer for beginners.

**Update: You can try the following, it works**

```
<form class="functionsquestionform2 classtest" id="myFrom_id" action="frameworkplayground.php" method="POST">
    <input type="radio" name="functions_question2" value="1" /> Question
    <div id="div_man" onclick='document.getElementById("myFrom_id").submit();'>click me</div>
</form>
```
```
<script type='text/javascript'>
    $(document).ready(function () {
        $("div.submitter").click(function(){
            $(this).parent().submit();
        });
    });
</script>
```

for this form:

```
<form class="functionsquestionform2 classtest" action="frameworkplayground.php" method="POST">
    <input type="radio" name="functions_question2" value="1"> Question
    <div class="submitter">click me</div>
</form>
```
3,204
14,601,426
I have an Ubuntu server which has a python script that runs a terminal command-based interface. I'm using plink to log in and immediately execute the python script:

```
plink.exe -ssh -l goomuckel -pw greenpepper#7 192.168.1.201 "python server.py"
```

However, I get the following message:

```
TERM environment variable not set.
```

So I created a sh script (server.sh) on the Ubuntu machine:

```
export TERM=xterm
python server.py
```

Using the following plink command:

```
plink.exe -ssh -l goomuckel -pw greenpepper#7 192.168.1.201 "sh server.sh"
```

Now I don't receive the warning anymore, and it seems that the python script is executed. But instead of showing the terminal interface I'm printing in the python script, only the following characters are printed:

```
←[H←[2J
```

The weird thing is, when logging in manually with Putty and executing the python script, everything works fine. I've tried adding the -t flag to plink, and then the script executes. However, I'm using colors for printing the terminal text, and the colors are printed as text rather than changing the colors of the text as observed in Putty. Any ideas what I can do?
2013/01/30
[ "https://Stackoverflow.com/questions/14601426", "https://Stackoverflow.com", "https://Stackoverflow.com/users/977063/" ]
You don't need to do this in a python script. You can simply modify `.profile` (a file that the system executes on every login) with the same expression you use in the python script:

```
export TERM=xterm
```

(if you use bash)

```
setenv TERM xterm
```

(for c-shell and similar)
I had the same problem and setting the TERM variable before the command eliminated that *TERM environment variable not set.* error message: ``` plink.exe -ssh -l goomuckel -pw greenpepper#7 192.168.1.201 "export TERM=xterm; python server.py" ``` This is handy if you can't modify the *.profile* file...
3,205
8,695,352
I am creating a django app; my project name is `domain_com` and the application name is gallery. The project is mapped to domain.com, so that works. Now when I create the urls.py with these redirects, it's giving me this error:

```
(r'^domain_com/(?P<page_name>[^/]+)/edit/$', 'domain_com.gallery.views.edit_page'),
(r'^domain_com/(?P<page_name>[^/]+)/save/$', 'domain_com.gallery.views.save_page'),
(r'^domain_com/(?P<page_name>[^/]+)/$', 'domain_com.gallery.views.view_page')
```

error:

```
Using the URLconf defined in domain_com.urls, Django tried these URL patterns, in this order:
^domain_com/(?P<page_name>[^/]+)/edit/$
^domain_com/(?P<page_name>[^/]+)/save/$
^domain_com/(?P<page_name>[^/]+)/$
The current URL, edit, didn't match any of these.
```

Any idea where the problem is? My initial install of django worked before creating the application, so I am sure it's the urls.py.

This is my apache config:

```
<VirtualHost *:80>
        ServerName www.domain.com
        ServerAlias domain.com
        DocumentRoot /var/www/www.domain.com/htdocs/
        ErrorLog /var/www/www.domain.com/logs/error.log
        CustomLog /var/www/www.domain.com/logs/access.log combined

        SetHandler mod_python
        PythonHandler django.core.handlers.modpython
        PythonPath sys.path+['/var/app/virtual/']
        SetEnv DJANGO_SETTINGS_MODULE domain_com.settings
        SetEnv PYTHON_EGG_CACHE /tmp

        <Location "/gallery/">
                SetHandler None
        </Location>

</VirtualHost>
```
2012/01/01
[ "https://Stackoverflow.com/questions/8695352", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3836677/" ]
I updated my answer; try this:

```
(r'^/edit/(?P<page_name>\w+)$', 'gallery.views.edit_page'),
(r'^/save/(?P<page_name>\w+)$', 'gallery.views.save_page'),
(r'^/(?P<page_name>\w+)$', 'gallery.views.view_page')
```

where this `urls.py` is in the root folder of your application. Then if you visit <http://domain.com/edit/page1>, it should work.
Set up both your main root urls to include the urls of your apps: <https://docs.djangoproject.com/en/dev/topics/http/urls/#including-other-urlconfs>
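A minimal sketch of what that can look like here, in the Django 1.x style of the era (module paths taken from the question; adjust them to your layout):

```
# domain_com/urls.py - the project URLconf delegates to the app
from django.conf.urls.defaults import patterns, include

urlpatterns = patterns('',
    (r'^', include('domain_com.gallery.urls')),
)

# domain_com/gallery/urls.py - the app URLconf owns its own patterns
from django.conf.urls.defaults import patterns

urlpatterns = patterns('domain_com.gallery.views',
    (r'^(?P<page_name>[^/]+)/edit/$', 'edit_page'),
    (r'^(?P<page_name>[^/]+)/save/$', 'save_page'),
    (r'^(?P<page_name>[^/]+)/$', 'view_page'),
)
```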
3,206
60,943,751
I am defining a pipeline using Jenkins Blue Ocean. I'm trying to do a simple python pep8 coding convention, but if I go inside the shell and type the command directly, it runs fine. But when the same command is executed in the pipeline, it is executed, but at the end 'script returned exit code 1' is displayed. Because of this error code, it does not go to the next step. Is there a workaround? ``` using credential github > git rev-parse --is-inside-work-tree # timeout=10 Fetching changes from the remote Git repository > git config remote.origin.url https://github.com/YunTaeIl/jenkins_retest.git # timeout=10 Cleaning workspace > git rev-parse --verify HEAD # timeout=10 Resetting working tree > git reset --hard # timeout=10 > git clean -fdx # timeout=10 Fetching without tags Fetching upstream changes from https://github.com/YunTaeIl/jenkins_retest.git > git --version # timeout=10 using GIT_ASKPASS to set credentials GitHub Access Token > git fetch --no-tags --progress -- https://github.com/YunTaeIl/jenkins_retest.git +refs/heads/master:refs/remotes/origin/master # timeout=10 Checking out Revision fe49ddf379732305a7a50f014ab4b25f9382c913 (master) > git config core.sparsecheckout # timeout=10 > git checkout -f fe49ddf379732305a7a50f014ab4b25f9382c913 # timeout=10 > git branch -a -v --no-abbrev # timeout=10 > git branch -D master # timeout=10 > git checkout -b master fe49ddf379732305a7a50f014ab4b25f9382c913 # timeout=10 Commit message: "Added Jenkinsfile" > git rev-list --no-walk bc12a035337857b29a4399f05d1d4442a2f0d04f # timeout=10 Cleaning workspace > git rev-parse --verify HEAD # timeout=10 Resetting working tree > git reset --hard # timeout=10 > git clean -fdx # timeout=10 + ls Jenkinsfile README.md jenkins-retest + python3.7 --version Python 3.7.3 + python3.7 -m flake8 jenkins-retest jenkins-retest/N801_py3.py:3:1: E302 expected 2 blank lines, found 0 jenkins-retest/N801_py3.py:6:1: E302 expected 2 blank lines, found 0 jenkins-retest/N801_py3.py:9:1: E302 expected 2 blank lines, found 0 jenkins-retest/N801_py3.py:12:1: E302 expected 2 blank lines, found 0 jenkins-retest/N801_py3.py:15:1: E302 expected 2 blank lines, found 0 jenkins-retest/N801_py3.py:18:1: E302 expected 2 blank lines, found 0 jenkins-retest/N801_py3.py:24:1: E303 too many blank lines (4) jenkins-retest/N801_py3.py:24:11: E999 SyntaxError: invalid syntax jenkins-retest/python_testfile.py:1:1: E999 SyntaxError: invalid syntax jenkins-retest/python_testfile.py:1:2: E228 missing whitespace around modulo operator jenkins-retest/python_testfile.py:3:1: E402 module level import not at top of file jenkins-retest/python_testfile.py:3:20: W291 trailing whitespace jenkins-retest/python_testfile.py:5:1: E302 expected 2 blank lines, found 1 jenkins-retest/python_testfile.py:8:1: E305 expected 2 blank lines after class or function definition, found 0 jenkins-retest/python_testfile.py:11:33: W291 trailing whitespace jenkins-retest/python_testfile.py:12:1: E402 module level import not at top of file jenkins-retest/python_testfile.py:12:19: W291 trailing whitespace jenkins-retest/python_testfile.py:14:4: E714 test for object identity should be 'is not' jenkins-retest/python_testfile.py:17:16: W291 trailing whitespace jenkins-retest/python_testfile.py:18:80: E501 line too long (95 > 79 characters) script returned exit code 1 ```
2020/03/31
[ "https://Stackoverflow.com/questions/60943751", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11992601/" ]
I'm surprised more people haven't run into this problem. Use `set +e` at the start of your shell step if you intend to ignore the exit code 1 of code run as a shell script (flake8 exits non-zero when it finds violations, and Jenkins treats any non-zero exit status as a failed step). Note that the step's status is still the exit code of the last command, so you may want to end the script with a command that succeeds.
I had the same problem with a batch script calling an executable whose return status was 1 in case of success and 0 in case of error. This was a problem because for Jenkins the success status code is 0 and any other status code means failure, so Jenkins stops the job with the following message: `script returned exit code 1`

My workaround: check the last error code and invert the return value of the script:

```
stages {
    stage("My stage") {
        steps {
            bat label: 'My batch script',
                script: '''
                    @echo off
                    return_1_if_success.exe // command which returns 1 in case of success, 0 otherwise
                    IF %ERRORLEVEL% EQU 1 (exit /B 0) ELSE (exit /B 1)'''
        }
    }
}
```

Explanation:

```
IF %ERRORLEVEL% EQU 1 (exit /B 0) ELSE (exit /B 1)
// if previous command returned 1 (meaning success for this command),
// then we exit with return code 0 (meaning success for Jenkins),
// otherwise we exit with return code 1 (meaning failure for Jenkins)
```

On Windows cmd, `%ERRORLEVEL%` holds the last error code encountered in a cmd.exe terminal or at a given point in a batch script. For PowerShell, you might want to check `$?` instead of `ERRORLEVEL`; I'll let you check the equivalent of this for other shells and platforms.
3,208
9,856,163
I have to parse a 1Gb XML file with a structure such as below and extract the text within the tags "Author" and "Content":

```
<Database>
<BlogPost>
<Date>MM/DD/YY</Date>
<Author>Last Name, Name</Author>
<Content>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas dictum dictum vehicula.</Content>
</BlogPost>

<BlogPost>
<Date>MM/DD/YY</Date>
<Author>Last Name, Name</Author>
<Content>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas dictum dictum vehicula.</Content>
</BlogPost>

[...]

<BlogPost>
<Date>MM/DD/YY</Date>
<Author>Last Name, Name</Author>
<Content>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas dictum dictum vehicula.</Content>
</BlogPost>
</Database>
```

So far I've tried two things: i) reading the whole file and going through it with .find(xmltag) and ii) parsing the xml file with lxml and iterparse(). I got the first option to work, but it is very slow. The second option I haven't managed to get off the ground.

Here's part of what I have:

```
for event, element in etree.iterparse(path_to_file, tag="BlogPost"):
    if element.tag == "BlogPost":
        print element.text
    else:
        print 'Finished'
```

The result of that is only blank spaces, with no text in them.

I must be doing something wrong, but I can't grasp it. Also, in case it wasn't obvious enough, I am quite new to python and it is the first time I'm using lxml. Please, help!
2012/03/24
[ "https://Stackoverflow.com/questions/9856163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/420622/" ]
```py
for event, element in etree.iterparse(path_to_file, tag="BlogPost"):
    for child in element:
        print(child.tag, child.text)
    element.clear()
```

The final `clear` will stop you from using too much memory.

[update:] to get "everything between ... as a string" I guess you want one of:

```py
for event, element in etree.iterparse(path_to_file, tag="BlogPost"):
    print(etree.tostring(element))
    element.clear()
```

or

```py
for event, element in etree.iterparse(path_to_file, tag="BlogPost"):
    print(''.join([etree.tostring(child) for child in element]))
    element.clear()
```

or perhaps even:

```py
for event, element in etree.iterparse(path_to_file, tag="BlogPost"):
    print(''.join([child.text for child in element]))
    element.clear()
```
I prefer [XPath](http://www.w3schools.com/xpath/xpath_syntax.asp) for such things: ```py In [1]: from lxml.etree import parse In [2]: tree = parse('/tmp/database.xml') In [3]: for post in tree.xpath('/Database/BlogPost'): ...: print 'Author:', post.xpath('Author')[0].text ...: print 'Content:', post.xpath('Content')[0].text ...: Author: Last Name, Name Content: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas dictum dictum vehicula. Author: Last Name, Name Content: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas dictum dictum vehicula. Author: Last Name, Name Content: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas dictum dictum vehicula. ``` I'm not sure if it's different in terms of processing big files, though. Comments about this would be appreciated. Doing it your way, ``` for event, element in etree.iterparse(path_to_file, tag="BlogPost"): for info in element.iter(): if info.tag in ('Author', 'Content'): print info.tag, ':', info.text ```
3,209
37,766,700
I am trying to transform the age column of a pandas dataframe by applying the apply function. How can I make the code below work, or is there a more pythonic way to do this?

```
cps=(cps.assign(Age_grp_T=cps['age'].apply(lambda x:{x>=71:'Tradionalists',
                                                     71>x>=52:'Baby Boomers',
                                                     52>x>=46:'Generation X',
                                                     46>x>=16:'Millennials'}.get(x, ' '))))
```
2016/06/11
[ "https://Stackoverflow.com/questions/37766700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4608730/" ]
I would use the [cut()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html) function for that:

```
In [663]: labels=[' ','Millennials','Generation X','Baby Boomers','Tradionalists']

In [664]: a['category'] = pd.cut(a['age'], bins=[1, 16,46,52,71, 200],labels=labels)

In [665]: a
Out[665]:
    age      category
0    29   Millennials
1    65  Baby Boomers
2    68  Baby Boomers
3    18   Millennials
4    29   Millennials
5    58  Baby Boomers
6    15
7    67  Baby Boomers
8    21   Millennials
9    17   Millennials
10   19   Millennials
11   39   Millennials
12   64  Baby Boomers
13   70  Baby Boomers
14   33   Millennials
15   27   Millennials
16   54  Baby Boomers
17   60  Baby Boomers
18   23   Millennials
19   65  Baby Boomers
20   63  Baby Boomers
21   36   Millennials
22   53  Baby Boomers
23   29   Millennials
24   66  Baby Boomers
```
I have found one more way to do this, but thanks MaxU, your answer works too:

```
cps=(cps.assign(Age_grp_T=np.where(cps['age']>=71,"Tradionalists",
                np.where(np.logical_and(71>cps['age'],cps['age']>=52),"Baby Boomers",
                np.where(np.logical_and(52>cps['age'],cps['age']>=46),"Generation X",
                np.where(np.logical_and(46>cps['age'],cps['age']>=16),"Millennials",-99))))
     ))
```

I wonder which one is more efficient?
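For what it's worth, a third option that avoids the nesting entirely is `np.select`, which takes parallel lists of conditions and choices. A sketch, assuming the same `cps` DataFrame:

```
import numpy as np

conditions = [
    cps['age'] >= 71,
    (71 > cps['age']) & (cps['age'] >= 52),
    (52 > cps['age']) & (cps['age'] >= 46),
    (46 > cps['age']) & (cps['age'] >= 16),
]
choices = ['Tradionalists', 'Baby Boomers', 'Generation X', 'Millennials']

# np.select picks the choice of the first matching condition per row
cps = cps.assign(Age_grp_T=np.select(conditions, choices, default=' '))
```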
3,212
69,695,016
I would like to install this library with pip: [ikpy library](https://pypi.org/project/ikpy/). However pip gives the error below:

```
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
IOError: [Errno 2] No such file or directory: '/tmp/pip-build-oYTjdr/ikpy/setup.py'
```

What I understand from the error is that pip cannot find setup.py, because the library has setup.cfg instead. I tried to upgrade pip and got a different error.

```
$ pip install --upgrade pip

Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-build-KCnfi9/pip/setup.py", line 7
    def read(rel_path: str) -> str:
                     ^
SyntaxError: invalid syntax
```

I also tried pip3:

```
$ pip3 install ikpy

ModuleNotFoundError: No module named 'pip._vendor.pkg_resources'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/bin/pip3", line 9, in <module>
    from pip import main
  File "/usr/lib/python3/dist-packages/pip/__init__.py", line 13, in <module>
    from pip.exceptions import InstallationError, CommandError, PipError
  File "/usr/lib/python3/dist-packages/pip/exceptions.py", line 6, in <module>
    from pip._vendor.six import iteritems
  File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 75, in <module>
    vendored("pkg_resources")
  File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 36, in vendored
    __import__(modulename, globals(), locals(), level=0)
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 668, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2927, in <module>
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2913, in _call_aside
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2952, in _initialize_master_working_set
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 956, in subscribe
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2952, in <lambda>
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2515, in activate
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2097, in declare_namespace
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2047, in _handle_ns
  File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2066, in _rebuild_mod_path
AttributeError: '_NamespacePath' object has no attribute 'sort'
```

What should I do?

**Python 3.7.10, python3-pip: (8.1.1-2ubuntu0.6).**

**Note:** I solved it with these commands; now it can be installed with pip3:

```
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py
pip3 install --upgrade setuptools
```
2021/10/24
[ "https://Stackoverflow.com/questions/69695016", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5384988/" ]
(*this message was created before the question was updated with pip3*) It's likely that the `pip` command you use is for Python 2. Can you try with `pip3` instead?
Upgrade your **`pip`**:

```
pip install --upgrade pip
```

then install **`ikpy`**:

```
pip install ikpy
```

Now it should be up and running ;-)

*Installed and checked now on: Ubuntu 20.04, Pip 21.3, Python 3.8.10*
3,213
68,384,553
I have used IDLE before, but never set it up. My problem is actually getting a .py file to work with. I don't know how to make one, and it isn't an option when using Save As on a text file (only "Text file" and "All files" are given as options).

I've attempted to open the .py files already in the Python folder, but when selecting "Edit with IDLE" I get prompted to pick what to open the file with, and then if I click Python nothing happens. What am I missing?
2021/07/14
[ "https://Stackoverflow.com/questions/68384553", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16450379/" ]
`shutil.make_archive` does not have a way to do what you want without copying files to another directory, which is inefficient. Instead you can use a compression library directly similar to the linked answer you provided. Note this doesn't handle name collisions! ```py import zipfile import os with zipfile.ZipFile('output.zip','w',zipfile.ZIP_DEFLATED,compresslevel=9) as z: for path,dirs,files in os.walk('dir_name'): for file in files: full = os.path.join(path,file) z.write(full,file) # write the file, but with just the file's name not full path # print the files in the zipfile with zipfile.ZipFile('output.zip') as z: for name in z.namelist(): print(name) ``` Given: ```none dir_name ├───dir1 │ ├───cat1 │ │ file1.txt │ │ file2.txt │ │ │ └───cat2 │ file3.txt │ └───dir2 └───cat3 file4.txt ``` Output: ```none file1.txt file2.txt file3.txt file4.txt ```
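If you do need to handle the collisions mentioned above, one simple sketch is to suffix duplicate names with a counter before writing (assuming the same `dir_name` layout):

```
import zipfile
import os

seen = {}  # original filename -> number of times seen so far
with zipfile.ZipFile('output.zip', 'w', zipfile.ZIP_DEFLATED) as z:
    for path, dirs, files in os.walk('dir_name'):
        for file in files:
            name = file
            if file in seen:
                seen[file] += 1
                root, ext = os.path.splitext(file)
                name = f'{root}_{seen[file]}{ext}'  # e.g. file1_1.txt
            else:
                seen[file] = 0
            z.write(os.path.join(path, file), name)
```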
```py
# The root directory to search for
path = r'dir_name/'

import os
import glob

# List all *.txt files in the root directory
file_paths = [file_path
              for root_path, _, _ in os.walk(path)
              for file_path in glob.glob(os.path.join(root_path, '*.txt'))]

import tempfile

# Create a temporary directory to copy your files into
with tempfile.TemporaryDirectory() as tmp:
    import shutil
    for file_path in file_paths:
        # Get the basename of the file
        basename = os.path.basename(file_path)
        # Copy the file to the temporary directory
        shutil.copyfile(file_path, os.path.join(tmp, basename))
    # Zip the temporary directory to the working directory
    shutil.make_archive('output', 'zip', tmp)
```

This will create an *output.zip* file in the current working directory. The temporary directory will be deleted when the end of the context manager is reached.
3,215
18,768,224
I have built a backend for an iOS app with Google App Engine running python 2.7. When I create objects I want the backend to give each one an ID which can be used by all clients as the one identifier to query. The method I use involves two puts, and is basically just left over from when I was using db instead of ndb. Is there a better way of inserting objects and giving them an easily accessible integer ID?

```
class Player(ndb.Model):
    playerID = ndb.IntegerProperty()
    username = db.StringProperty()

class createUserWithUsername(webapp2.RequestHandler):
    def get(self):
        username = self.request.get('username')
        playerArr = Player.query(Player._properties["username"] == username).fetch(1)
        if len(playerArr) > 0:
            self.error(500)
        else:
            newPlayer = Player()
            newPlayer.username = username
            key = newPlayer.put()
            newPlayer.playerID = key.id()
            newPlayer.put()

            # returning player as dict to client
```
2013/09/12
[ "https://Stackoverflow.com/questions/18768224", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1071332/" ]
The problem is that for the 3rd element, even though the `id` attribute is not there, `this.id` is not `undefined`; it is an empty string. So for the 3rd element `test1` gets an empty string as the value, but for `test2` the following `if` condition updates the value with the `id` data value.

One possible solution is to test the length of the `id` property instead of checking whether it is defined or not:

```
var result1 = 'Result1:',
    result2 = 'Result2:';
$('.test').each(function () {
    var test = $(this),
        testId1 = ($.trim(this.id || '').length ? this.id : test.data('id')),
        testId2 = (this.id !== undefined ? this.id : '');
    if (testId2 == '') {
        testId2 = test.data('id');
    }

    result1 += testId1 + '.';
    result2 += testId2 + '.';
});
$('#result1').html(result1);
$('#result2').html(result2);
```

Demo: [Fiddle](http://jsfiddle.net/arunpjohny/3wKdJ/)
Modify this line. ``` <div data-id="Test2" class="test">test</div> to <div id="Test2" class="test">test</div> ``` **[JsFiddle](http://jsfiddle.net/sudhAnsu63/VxLUk/)**
3,216
16,428,401
I have created an FTP client using ftplib. I am running the server on one of my Ubuntu virtual machines and the client on another. I want to connect to the server using ftplib and I'm doing it in the following way:

```
host = "IP address of the server"
port = "Port number of the server"
ftpc = FTP()
ftpc.connect(host, port)
```

I'm getting the following error:

```
Traceback (most recent call last):
  File "./client.py", line 54, in <module>
    ftpc.connect(host, port)
  File "/usr/lib/python2.7/ftplib.py", line 132, in connect
    self.sock = socket.create_connection((self.host, self.port), self.timeout)
  File "/usr/lib/python2.7/socket.py", line 571, in create_connection
    raise err
socket.error: [Errno 111] Connection refused
```

When I went through the Python docs, I could see ftplib used only with domain names, as in FTP("domain name"). Can I use an IP address instead of a domain name? In my case I am unable to comprehend the error. It would be great if anyone could help me out.

Also, if I use port 21 on my server, I'm getting socket error: Connection refused. How do I use port 21 for my FTP server?

Thank You.
2013/05/07
[ "https://Stackoverflow.com/questions/16428401", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2253736/" ]
It seems like you are trying to connect to an SFTP server using ftplib, which is giving you the Connection refused error. Try using pysftp instead of ftplib and see if it works.

On the virtual machine, test by typing the ftp and sftp commands on the console. You will get to know which server the machine is running, i.e. FTP or SFTP.
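If it does turn out to be SFTP, a minimal pysftp sketch (the host and credentials below are placeholders; depending on your setup you may also need to handle host-key checking via `pysftp.CnOpts`):

```
import pysftp

# Placeholder host/credentials - same values you tried with ftplib
with pysftp.Connection('192.168.1.201', username='user', password='secret') as sftp:
    print(sftp.listdir('.'))  # list the remote directory to verify the connection
```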
To solve the problem, I installed and configured vsftpd:

1. sudo apt install vsftpd (if it is not already installed)
2. sudo vim /etc/vsftpd.conf
3. set "listen=YES"
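You will likely also need to restart the service after editing the config, e.g. `sudo systemctl restart vsftpd` on a systemd-based Ubuntu, for the change to take effect.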
3,226
6,558,051
Being a python programmer for four years now (it doesn't mean much though) and moving to Objective-C, the one concept that is weird and "alien" to me is memory management. Luckily garbage collection exists, and I only intend to develop apps for Mac OS 10.6+, so in all my projects so far I have always set garbage collection to required. But here is my problem: when I use Instruments with the Alloc and Leaks tools I see leaked bytes popping up in the graph. ??? Very weird. What does Garbage Collection really do when it is required? The way I see it is that you can completely forget about retain, release, etc. But is that true?

Please provide examples where GC will help and where it won't (if any), so that I can understand what I am doing wrong.

Edit
----

I probably should have been more clear. The problem I want to solve is the fact that, even after GC is set up as required, Instruments still finds leaks. I also asked what GC does just so I could make sure that it does what I think, and that the problem is not in my code or in GC, but elsewhere. That "elsewhere" is what I want to find out.
2011/07/02
[ "https://Stackoverflow.com/questions/6558051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/778396/" ]
Make an array of `(i,j)` integer doublets covering all possible name-surname combinations. Shuffle the array. Then just loop through the array, and take names from it one by one. You'll get unique full names in random order (provided that your name and surname lists don't contain duplicates). **EDIT** Pseudocode, per request. First, do these setup steps: ``` indexList = emptyList for i = 1 to length(first_names) for j = 1 to length(last_names) indexList.push( (i,j) ) shuffle(indexList) ``` `push` here means adding an element to the end of the list. Now every time you need a new name, just take one index pair off `indexList`, and return the corresponding names from the name arrays: ``` function getName() (i, j) = indexList.pop() return concatenate(first_names[i], " ", last_names[j]) ``` `pop` here means remove an element from the end of the list and return it. It is unlikely that you'd have so many names that the `indexList` should take up too much memory.
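A minimal Python sketch of the pseudocode above (the sample name lists are placeholders):

```
import itertools
import random

names = ['Alice', 'Bob', 'Carol']        # placeholder first names
surnames = ['Smith', 'Jones', 'Brown']   # placeholder surnames

# All (i, j) combinations, shuffled once up front
index_list = list(itertools.product(range(len(names)), range(len(surnames))))
random.shuffle(index_list)

def get_name():
    i, j = index_list.pop()  # each pair is used exactly once
    return names[i] + ' ' + surnames[j]

print(get_name())
```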
Loop over the first and last names, check if you already generated that combination. In PHP: ``` $full_names = array(); foreach ($names as $first_name) { foreach ($surnames as $last_name) { $candidate = $first_name . " " . $last_name; if (!isset($full_names[$candidate])) { $full_names[$candidate] = true; } } } print_r(array_keys($full_names)); ``` Or same in Python: ``` print set([first + ' ' + last for first in names for last in surnames]) ```
3,227
10,722,976
I have a Python script that generates a CSV (data parsed from a website). Here is an example of the CSV file:

**File1.csv**

```
China;Beijing;Auralog Software Development (Deijing) Co. Ltd.;;;
United Kingdom;Oxford;Azad University (Ir) In Oxford Ltd;;;
Italy;Bari;Bari, The British School;;Yes;
China;Beijing;Beijing Foreign Enterprise Service Group Co Ltd;;;
China;Beijing;Beijing Ying Biao Human Resources Development Limited;;Yes;
China;Beijing;BeiwaiOnline BFSU;;;
Italy;Curno;Bergamo, Anderson House;;Yes;
```

**File2.csv**

```
China;Beijing;Auralog Software Development (Deijing) Co. Ltd.;;;
United Kingdom;Oxford;Azad University (Ir) In Oxford Ltd;;;
Italy;Bari;Bari, The British School;;Yes;
China;Beijing;Beijing Foreign Enterprise Service Group Co Ltd;;;
China;Beijing;Beijing Ying Biao Human Resources Development Limited;;Yes;
This;Is;A;New;Line;;
Italy;Curno;Bergamo, Anderson House;;Yes;
```

As you can see, *China;Beijing;BeiwaiOnline BFSU;;;* ==> This line from File1.csv is no longer present in File2.csv, and *This;Is;A;New;Line;;* ==> This line from File2.csv is new (it is not present in File1.csv).

I am looking for a way to compare these two CSV files (one important thing to know is that the order of the lines doesn't count ... they can be anywhere).

What I'd like to have is a script which can tell me:

- One new line: *This;Is;A;New;Line;;*
- One removed line: *China;Beijing;BeiwaiOnline BFSU;;;*

And so on ... ! I've tried but without any success:

```
#!/usr/bin/python
# -*- coding: utf-8 -*-
import csv

f1 = file('now.csv', 'r')
f2 = file('past.csv', 'r')
c1 = csv.reader(f1)
c2 = csv.reader(f2)

now = [row for row in c2]
past = [row for row in c1]

for row in now:
    #print row
    lol = past.index(row)
    print lol

f1.close()
f2.close()

_csv.Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?
```

Any idea of the best way to proceed? Thank you so much in advance ;)

EDIT:

```
import csv

f1 = file('now.csv', 'r')
f2 = file('past.csv', 'r')
c1 = csv.reader(f1)
c2 = csv.reader(f2)

s1 = set(c1)
s2 = set(c2)

lol = s1 - s2
print type(lol)
print lol
```

This seems to be a good idea, but:

```
Traceback (most recent call last):
  File "compare.py", line 20, in <module>
    s1 = set(c1)
TypeError: unhashable type: 'list'
```

**EDIT 2 (Please don't care about what is above):** **with your help, here is the script I'm writing:**

```
#!/usr/bin/python
# -*- coding: utf-8 -*-

import os
import csv

### COMPARISON THING ###

x=0
fichiers = os.listdir('/me/CSV')

for fichier in fichiers:
    if '.csv' in fichier:
        print('%s -----> %s' % (x,fichier))
        x=x+1

choice = raw_input("Which file do you want to compare with the new output ? ->>>")
past_file = fichiers[int(choice)]

print 'We gonna compare %s to our output' % past_file

s_now = frozenset(tuple(row) for row in csv.reader(open('/me/CSV/now.csv', 'r'), delimiter=';'))  ## OUR OUTPUT
s_past = frozenset(tuple(row) for row in csv.reader(open('/me/CSV/'+past_file, 'r'), delimiter=';'))  ## CHOOSEN ONE

added = [";".join(row) for row in s_now - s_past]    # in "now" but not in "past"
removed = [";".join(row) for row in s_past - s_now]  # in "past" but not in "now"

c = csv.writer(open("CHANGELOG.csv", "a"),delimiter=";" )
line = ['AD']
for item_added in added:
    line.append(item_added)
    c.writerow(['AD',item_added])
line = ['RM']
for item_removed in removed:
    line.append(item_removed)
c.writerow(line)
```

Two kinds of errors:

```
File "programcompare.py", line 21, in <genexpr>
    s_past = frozenset(tuple(row) for row in csv.reader(open('/me/CSV/'+past_file, 'r'), delimiter=';'))  ## CHOOSEN ONE
_csv.Error: line contains NULL byte
```

or

```
File "programcompare.py", line 21, in <genexpr>
    s_past = frozenset(tuple(row) for row in csv.reader(open('/me/CSV/'+past_file, 'r'), delimiter=';'))  ## CHOOSEN ONE
_csv.Error: newline inside string
```

It was working a few minutes ago but I've changed the CSV files to test with different data and here I am :-) Sorry, last question!
2012/05/23
[ "https://Stackoverflow.com/questions/10722976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323870/" ]
If your data is not prohibitively large, loading them into a [set (or frozenset)](http://docs.python.org/library/stdtypes.html#set) will be an easy approach: ``` s_now = frozenset(tuple(row) for row in csv.reader(open('now.csv', 'r'), delimiter=';')) s_past = frozenset(tuple(row) for row in csv.reader(open('past.csv', 'r'), delimiter=';')) ``` To get the list of entries that were added: ``` added = [";".join(row) for row in s_now - s_past] # in "now" but not in "past" # Or, simply "added = list(s_now - s_past)" to keep them as tuples. ``` similarly, list of entries that were removed: ``` removed = [";".join(row) for row in s_past - s_now] # in "past" but not in "now" ``` --- To address your updated question on why you're seeing `TypeError: unhashable type: 'list'`, the csv returns each entry as a `list` when iterated. `lists` are not [hashable](http://docs.python.org/glossary.html#term-hashable) and therefore cannot be inserted into a `set`. To address this, you'll need to convert the `list` entries into `tuple`s before adding the to the set. See previous section in my answer for an example of how this can be done. --- To address the additional errors you're seeing, they are both due to the content of your CSV files. **\_csv.Error: newline inside string** It looks like you have quote characters (`"`) somewhere in data which confuses the parser. I'm not familiar enough with the CSV module to tell you exactly what has gone wrong, not without having a peek at your data anyway. I did however manage to reproduce the error as such: ``` >>> [e for e in csv.reader(['hello;wo;"rld'], delimiter=";")] Traceback (most recent call last): File "<stdin>", line 1, in <module> _csv.Error: newline inside string ``` In this case, it can fixed by instructing the reader not to do any special processing with quotes (see [csv.QUOTE\_NONE](http://docs.python.org/library/csv.html#csv.QUOTE_NONE)). (Do note that this will disable the handling of quoted data whereby delimiters can appear within a quoted string without the string being split into separate entries.) ``` >>> [e for e in csv.reader(['hello;wo;"rld'], delimiter=";", quoting=csv.QUOTE_NONE)] [['hello', 'wo', '"rld']] ``` **\_csv.Error: line contains NULL byte** I'm guessing this might be down to the encoding of your CSV files. See the following questions: * [Python CSV error: line contains NULL byte](https://stackoverflow.com/questions/4166070/python-csv-error-line-contains-null-byte) * ["Line contains NULL byte" in CSV reader (Python)](https://stackoverflow.com/questions/7894856/line-contains-null-byte-in-csv-reader-python)
Read the csv files line by line into sets. Compare the sets. ``` >>> s1 = set('''China;Beijing;Auralog Software Development (Deijing) Co. Ltd.;;; ... United Kingdom;Oxford;Azad University (Ir) In Oxford Ltd;;; ... Italy;Bari;Bari, The British School;;Yes; ... China;Beijing;Beijing Foreign Enterprise Service Group Co Ltd;;; ... China;Beijing;Beijing Ying Biao Human Resources Development Limited;;Yes; ... China;Beijing;BeiwaiOnline BFSU;;; ... Italy;Curno;Bergamo, Anderson House;;Yes;'''.split('\n')) >>> s2 = set('''China;Beijing;Auralog Software Development (Deijing) Co. Ltd.;;; ... United Kingdom;Oxford;Azad University (Ir) In Oxford Ltd;;; ... Italy;Bari;Bari, The British School;;Yes; ... China;Beijing;Beijing Foreign Enterprise Service Group Co Ltd;;; ... China;Beijing;Beijing Ying Biao Human Resources Development Limited;;Yes; ... This;Is;A;New;Line;; ... Italy;Curno;Bergamo, Anderson House;;Yes;'''.split('\n')) >>> s1 - s2 set(['China;Beijing;BeiwaiOnline BFSU;;;']) >>> s2 - s1 set(['This;Is;A;New;Line;;']) ```
3,228
71,808,755
I am doing vehicle registration plate detection using YOLOv4 in colab. When I ran `!python convert_annotations.py` I got the following error:

```
Currently in subdirectory: validation
Converting annotations for class:  Vehicle registration plate
  0% 0/30 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "convert_annotations.py", line 63, in <module>
    coords = np.asarray([float(labels[1]), float(labels[2]), float(labels[3]), float(labels[4])])
ValueError: could not convert string to float: 'registration'
```

The `convert_annotations.py` file is as follows:

```
import os
import cv2
import numpy as np
from tqdm import tqdm
import argparse
import fileinput

# function that turns XMin, YMin, XMax, YMax coordinates to normalized yolo format
def convert(filename_str, coords):
    os.chdir("..")
    image = cv2.imread(filename_str + ".jpg")
    coords[2] -= coords[0]
    coords[3] -= coords[1]
    x_diff = int(coords[2]/2)
    y_diff = int(coords[3]/2)
    coords[0] = coords[0]+x_diff
    coords[1] = coords[1]+y_diff
    coords[0] /= int(image.shape[1])
    coords[1] /= int(image.shape[0])
    coords[2] /= int(image.shape[1])
    coords[3] /= int(image.shape[0])
    os.chdir("Label")
    return coords

ROOT_DIR = os.getcwd()

# create dict to map class names to numbers for yolo
classes = {}
with open("classes.txt", "r") as myFile:
    for num, line in enumerate(myFile, 0):
        line = line.rstrip("\n")
        classes[line] = num
    myFile.close()

# step into dataset directory
os.chdir(os.path.join("OID", "Dataset"))
DIRS = os.listdir(os.getcwd())

# for all train, validation and test folders
for DIR in DIRS:
    if os.path.isdir(DIR):
        os.chdir(DIR)
        print("Currently in subdirectory:", DIR)

        CLASS_DIRS = os.listdir(os.getcwd())
        # for all class folders step into directory to change annotations
        for CLASS_DIR in CLASS_DIRS:
            if os.path.isdir(CLASS_DIR):
                os.chdir(CLASS_DIR)
                print("Converting annotations for class: ", CLASS_DIR)

                # Step into Label folder where annotations are generated
                os.chdir("Label")

                for filename in tqdm(os.listdir(os.getcwd())):
                    filename_str = str.split(filename, ".")[0]
                    if filename.endswith(".txt"):
                        annotations = []
                        with open(filename) as f:
                            for line in f:
                                for class_type in classes:
                                    line = line.replace(class_type, str(classes.get(class_type)))
                                labels = line.split()
                                coords = np.asarray([float(labels[1]), float(labels[2]), float(labels[3]), float(labels[4])])
                                coords = convert(filename_str, coords)
                                labels[1], labels[2], labels[3], labels[4] = coords[0], coords[1], coords[2], coords[3]
                                newline = str(labels[0]) + " " + str(labels[1]) + " " + str(labels[2]) + " " + str(labels[3]) + " " + str(labels[4])
                                line = line.replace(line, newline)
                                annotations.append(line)
                            f.close()
                        os.chdir("..")
                        with open(filename, "w") as outfile:
                            for line in annotations:
                                outfile.write(line)
                                outfile.write("\n")
                            outfile.close()
                        os.chdir("Label")
                os.chdir("..")
        os.chdir("..")
os.chdir("..")
```

I am following a YouTube video to do this; in the video he didn't get this error, but I do. What could be the problem? Please help.
2022/04/09
[ "https://Stackoverflow.com/questions/71808755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11578411/" ]
Brian, the solution to your problem:

With your terminal, go to your directory, for example:

```
yolov4/OIDv4_ToolKit
```

To get 15 images for the training data:

```
python3 main.py downloader --classes Vehicle_registration_plate --type_csv train --limit 15
```

To get 3 images for the validation:

```
python3 main.py downloader --classes Vehicle_registration_plate --type_csv validation --limit 3
```

(If you want, you can set higher numbers, like 1500 images for the training data and 300 for the validation data.)

Within the root `OIDv4_ToolKit` folder open the file `classes.txt` and edit it to have the classes you just downloaded; type exactly:

```
Vehicle registration plate
```

Finally, the command to convert all labels to YOLOv4 format:

```
python3 convert_annotations.py
```

Normally, this solution works like a charm.
Remove the apostrophes in 'Vehicle registration plate' and run it. If the issue is not solved, then try running `pip install -r requirements.txt` before running this command.
3,229
68,967,823
I'm using a python-based software, which utilizes long multiline-strings to run. I would like to create a loop in which I change one value to e.g. 40 other values and create 40 new strings where everything is the same except the target value. I've achieved "what-I-want" as an output through this code: ``` for x in range(0, 40000, 40): for line in mystring.splitlines(keepends=True): line = line.replace("24000", str(x)) line = line.strip() print(line) ``` and it prints all 40 of the long strings, but how can I save them to new strings and use them in the software? I could also achieve 1 iteration but only 1, through this code: ``` for x in range(0,40000,40): mystring = mystring.split("\n") mystring[17] = "Constant = " + str(x) mystring = "\n".join(mystring) print(mystring) ``` Hope my question makes sense. I would just like 40 (in this example) new strings identical except the one value changing throughout. Thank you
2021/08/28
[ "https://Stackoverflow.com/questions/68967823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16776029/" ]
You're trying to find the ID on the response object itself. The response object has all kinds of data related to the HTTP response, but most of that is irrelevant to you. The actual data that comes in the response is within the `data` property of the response. So simply access that `data` variable, for example by destructuring it, and then use the native `find` function of arrays:

```
const { data } = await axios.get('http://localhost:4200/products');
return data.find((product) => product._id === id)
```

Note that you can't use Mongoose's findById function here. That is a function related to the database, but you're not interacting with the database here, even though the HTTP request to get the products you want internally gets them from some database related to the other service.
You can achieve this with the following code:

```
const element = response.data.find(e => e._id === id);
```
3,230
12,898,804
I try: ``` [True,True,False] and [True,True,True] ``` and get [True, True True] but ``` [True,True,True] and [True,True,False] ``` gives ``` [True,True,False] ``` Not too sure why it's giving those strange results, even after taking a look at some other python boolean comparison questions. Integer does the same (replace True -> 1 and False ->0 above and the results are the same). What am I missing? I obviously want ``` [True,True,False] and [True,True,True] ``` to evaluate to ``` [True,True,False] ```
2012/10/15
[ "https://Stackoverflow.com/questions/12898804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1729412/" ]
From the [Python documentation](http://docs.python.org/reference/expressions.html#boolean-operations): > > The expression x and y first evaluates x; if x is false, its value is returned; otherwise, y is evaluated and the resulting value is returned. > > > You're getting the second value returned. P.S. I had never seen this behavior before either, I had to look it up myself. My naive expectation was that a boolean expression would yield a boolean result.
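If what you actually want is an element-wise `and` of the two lists, a comprehension over `zip` does it:

```
a = [True, True, False]
b = [True, True, True]
print([x and y for x, y in zip(a, b)])  # [True, True, False]
```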
Python short-circuits its boolean operators and gives the last evaluated expression as the result. A populated list evaluates to true, so you get the value of the second list back.

Look at this: I just interchanged the positions of your first and second lists.

```
In [3]: [True,True,True] and [True, True, False]
Out[3]: [True, True, False]
```
3,233
54,857,129
Please enter a sentence: The quick brown fox jumps over the lazy dog.

Output: The brown jumps the dog

I've been doing some learning with strings in Python, but no matter what I do, I can't seem to write a program that will remove every 2nd word of a sentence, as in the example above.

```
word=(input ("enter setence"))
del word[::2]
print(word[char], end="")
Print("\n")
```

This was my closest attempt. At least I was able to write the sentence on the command prompt, but I was unable to get the required output.
2019/02/24
[ "https://Stackoverflow.com/questions/54857129", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11110810/" ]
``` string = 'The quick brown fox jumps over the lazy dog.' even_words = string.split(' ')[::2] ``` You split the original string using spaces, then you take every other word from it with the [::2] splice.
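To turn that back into a single string, join the surviving words:

```
print(' '.join(even_words))  # The brown jumps the dog.
```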
Try something like: `" ".join(word.split(" ")[::2])`
3,243
3,264,024
I am running Ubuntu and I can code in python without problems. I have tried to install pygame, and to do so, I did:

> sudo apt-get install python-pygame

When I go into the python IDLE and write:

> import pygame

I get:

> Traceback (most recent call last):
>
> File "", line 1, in
>
> ImportError: No module named pygame

What can I do to solve this problem? Am I forgetting something, or doing something wrong?
2010/07/16
[ "https://Stackoverflow.com/questions/3264024", "https://Stackoverflow.com", "https://Stackoverflow.com/users/388456/" ]
apt-get will install pygame for the registered and pygame-package-supported Python versions. Execute ``` ls -1 /usr/lib/python*/site-packages/pygame/__init__.pyc ``` to find out which. On my old debian system, that prints ``` /usr/lib/python2.4/site-packages/pygame/__init__.pyc /usr/lib/python2.5/site-packages/pygame/__init__.pyc ``` That means if I'm not using either Python 2.4 or Python 2.5, pygame will not be available. The Python version your IDLE is using should be displayed at the top (you can also see it with `import sys; print(sys.version)`). You can either manually install pygame or try to add the installed version with ``` import sys sys.path.append('/usr/lib/python2.5/site-packages/') import pygame ``` Depending on the complexity of the loaded module(pygame in your case), this might cause some problems later though.
If you don't want to download, unpack, and then install manually, you can use apt to install **setuptools**. After that you can use **easy\_install** (or **easy\_install-2.7**?) to install many python packages, including pygame, of course.
3,246
66,144,266
I am looking to sort my list (of class instances) into the order small - large - small. For example, if it were purely numeric and the list was [1,5,3,7,7,3,2], the sort would look like [1,3,7,7,5,3,2].

The basic class structure is:

```
class LaneData:
    def __init__(self):
        self.Name = "Random"
        self.laneWeight = 5
```

So essentially the sort function would work from the LaneData.laneWeight variable.

I found this answer [here](https://stackoverflow.com/questions/10482684/python-reorder-a-sorted-list-so-the-highest-value-is-in-the-middle) which I'm not entirely sure will work in this instance, using a class as the list element.

My second idea is like so (pseudo code below) [[inspired by]](https://stackoverflow.com/a/752330/8763997):

```
def sortByWeight(e):
    return e.getLaneWeight()

newList = []                                # create a new list to store
lanes.sort(key=sortByWeight)                # python default sort by our class attrib
newList = lanes[:len(lanes) // 2]           # get the first half of the sorted list (low-high)
lanes.sort(key=sortByWeight, reverse=True)  # reverse the sort
newList = newList + lanes[:len(lanes) // 2] # get the first half of the sorted list (high-low)
```

If possible I'd like to keep the sort small and efficient; I don't mind building a small algorithm for it if need be.

What are your thoughts, team?
2021/02/10
[ "https://Stackoverflow.com/questions/66144266", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8763997/" ]
Your solution works, but you sort the whole list twice: in ascending order first and in descending order afterwards. You could optimize it by:

* Looking for the index of the max, swapping the max with the element in the middle position, and finally sorting the two halves of the list separately: the first half in ascending order, the second half in descending order.
* Instead of doing a reverse sort of the second half, simply reversing it (with the `reverse` function). This is less complex (O(n) instead of O(n log(n))).
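A sketch of the first idea against the `LaneData` class from the question (assuming `lanes` is a list of `LaneData` instances):

```
def middle_sort(lanes):
    # Place the max in the middle, then sort the halves separately:
    # first half ascending, second half descending.
    if not lanes:
        return lanes
    k = max(range(len(lanes)), key=lambda i: lanes[i].laneWeight)
    mid = len(lanes) // 2
    lanes[k], lanes[mid] = lanes[mid], lanes[k]
    left = sorted(lanes[:mid], key=lambda lane: lane.laneWeight)
    right = sorted(lanes[mid + 1:], key=lambda lane: lane.laneWeight, reverse=True)
    return left + [lanes[mid]] + right
```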
Discussion
==========

Given that you can just use the `key` parameter, I would just ignore it for the time being. Your algorithm for a given sequence looks like:

```py
def middle_sort_flip_OP(seq, key=None):
    result = []
    length = len(seq)
    seq.sort(key=key)
    result = seq[:length // 2]
    seq.sort(key=key, reverse=True)
    result.extend(seq[:length // 2])
    return result

print(middle_sort_flip_OP([1, 5, 3, 7, 3, 2, 9, 8]))
# [1, 2, 3, 3, 9, 8, 7, 5]

print(middle_sort_flip_OP([1, 5, 3, 7, 3, 2, 8]))
# [1, 2, 3, 8, 7, 5]
```

The second sorting step is completely unnecessary (but has the same computational complexity as a simple reversing for the [Timsort](https://en.wikipedia.org/wiki/Timsort) algorithm implemented in [Python](https://github.com/python/cpython/blob/master/Objects/listobject.c)), since you can simply slice the sorted sequence backward (making sure to compute the correct offset for the "middle" element):

```
def middle_sort_flip(seq, key=None):
    length = len(seq)
    offset = length % 2
    seq.sort(key=key)
    return seq[:length // 2] + seq[:length // 2 - 1 + offset:-1]

print(middle_sort_flip([1, 5, 3, 7, 3, 2, 9, 8]))
# [1, 2, 3, 3, 9, 8, 7, 5]

print(middle_sort_flip([1, 5, 3, 7, 3, 2, 8]))
# [1, 2, 3, 8, 7, 5]
```

---

Another approach which is theoretically more efficient consists in ordering the left and right sides of the sequence separately. This is more efficient because each sorting step is `O(N/2 log N/2)` and, when combined, gives `O(N log N/2)` (instead of `O(N + N log N)`):

```
def middle_sort_half(seq, key=None):
    length = len(seq)
    return \
        sorted(seq[:length // 2], key=key) \
        + sorted(seq[length // 2:], key=key, reverse=True)
```

---

However, those approaches either give a largely unbalanced result where the whole right side is larger than the left side (`middle_sort_flip()`), or have a balancing which is dependent on the initial ordering of the input (`middle_sort_half()`).

A more balanced result can be obtained by extracting and recombining the odd and even subsequences. This is simple enough in Python thanks to the slicing operations and has the same asymptotic complexity as `middle_sort_flip()` but with much better balancing properties:

```
def middle_sort_mix(seq, key=None):
    length = len(seq)
    offset = length % 2
    seq.sort(key=key)
    result = [None] * length
    result[:length // 2] = seq[::2]
    result[length // 2 + offset:] = seq[-1 - offset::-2]
    return result

print(middle_sort_mix([1, 5, 3, 7, 3, 2, 9, 8]))
# [1, 3, 5, 8, 9, 7, 3, 2]

print(middle_sort_mix([1, 5, 3, 7, 3, 2, 8]))
# [1, 3, 5, 8, 7, 3, 2]
```

---

Benchmarks
==========

Speedwise they are all very similar when the `key` parameter is not used, because the execution time is dominated by the copying around:

```
import random

nums = [10 ** i for i in range(1, 7)]
funcs = middle_sort_flip_OP, middle_sort_flip, middle_sort_half, middle_sort_mix

print(nums)
# [10, 100, 1000, 10000, 100000, 1000000]

def gen_input(num):
    return list(range(num))

for num in nums:
    print(f"N = {num}")
    for func in funcs:
        seq = gen_input(num)
        random.shuffle(seq)
        print(f"{func.__name__:>24s}", end=" ")
        %timeit func(seq.copy())
    print()
```

```
...
N = 1000000
     middle_sort_flip_OP 542 ms ± 54.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
        middle_sort_flip 510 ms ± 49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
        middle_sort_half 546 ms ± 4.28 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
         middle_sort_mix 539 ms ± 63 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

---

On the other hand, when the `key` parameter is non-trivial, your approach makes a much larger number of function calls than the other two, which may result in a significant increase in execution time for `middle_sort_flip_OP()`:

```
def gen_input(num):
    return list(range(num))

def key(x):
    return x ** 2

for num in nums:
    print(f"N = {num}")
    for func in funcs:
        seq = gen_input(num)
        random.shuffle(seq)
        print(f"{func.__name__:>24s}", end=" ")
        %timeit func(seq.copy(), key=key)
    print()
```

```
...
N = 1000000
     middle_sort_flip_OP 1.33 s ± 16.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
        middle_sort_flip 1.09 s ± 23.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
        middle_sort_half 1.1 s ± 27.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
         middle_sort_mix 1.11 s ± 8.88 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

or, closer to your use-case:

```
class Container():
    def __init__(self, x):
        self.x = x

    def get_x(self):
        return self.x

def gen_input(num):
    return [Container(x) for x in range(num)]

def key(c):
    return c.get_x()

for num in nums:
    print(f"N = {num}")
    for func in funcs:
        seq = gen_input(num)
        random.shuffle(seq)
        print(f"{func.__name__:>24s}", end=" ")
        %timeit func(seq.copy(), key=key)
    print()
```

```
...
N = 1000000
     middle_sort_flip_OP 1.27 s ± 4.44 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
        middle_sort_flip 1.13 s ± 13.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
        middle_sort_half 1.24 s ± 12.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
         middle_sort_mix 1.16 s ± 8.07 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

which seems to be a bit less dramatic.
3,251
71,707,011
I'm trying to setup the Django AllAuth Twitter login. When the user authenticates with Twitter and is redirected to my website, Django AllAuth raises the Error "No access to private resources at api.twitter.com" and I'm pretty lost here.

I have the following settings in my settings.py:

```
SOCIALACCOUNT_PROVIDERS = {
    "twitter": {
        # From https://developer.twitter.com
        "APP": {
            "client_id": os.environ["TWITTER_API_KEY"],
            "secret": os.environ["TWITTER_API_SECRET"],
        }
    },
}
```

Stack Trace:

```
DEBUG Signing request <PreparedRequest [POST]> using client <Client client_key={consuner_key}, client_secret=****, resource_owner_key=None, resource_owner_secret=****, signature_method=HMAC-SHA1, signature_type=AUTH_HEADER, callback_uri=None, rsa_key=None, verifier=None, realm=None, encoding=utf-8, decoding=utf-8, nonce=None, timestamp=None>
DEBUG Including body in call to sign: False
DEBUG Collected params: [('oauth_callback', 'http://127.0.0.1:8000/accounts/twitter/login/callback/'), ('oauth_nonce', '107239631555922908281648822311'), ('oauth_timestamp', '1648822311'), ('oauth_version', '1.0'), ('oauth_signature_method', 'HMAC-SHA1'), ('oauth_consumer_key', '{consuner_key}')]
DEBUG Normalized params: oauth_callback=http%3A%2F%2F127.0.0.1%3A8000%2Faccounts%2Ftwitter%2Flogin%2Fcallback%2F&oauth_consumer_key={consuner_key}&oauth_nonce=107239631555922908281648822311&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1648822311&oauth_version=1.0
DEBUG Normalized URI: https://api.twitter.com/oauth/request_token
DEBUG Signing: signature base string: POST&https%3A%2F%2Fapi.twitter.com%2Foauth%2Frequest_token&oauth_callback%3Dhttp%253A%252F%252F127.0.0.1%253A8000%252Faccounts%252Ftwitter%252Flogin%252Fcallback%252F%26oauth_consumer_key%3D{consuner_key}%26oauth_nonce%3D107239631555922908281648822311%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1648822311%26oauth_version%3D1.0
DEBUG Signature: {signature}
DEBUG Encoding URI, headers and body to utf-8.
DEBUG Updated url: https://api.twitter.com/oauth/request_token?oauth_callback=http%3A%2F%2F127.0.0.1%3A8000%2Faccounts%2Ftwitter%2Flogin%2Fcallback%2F
DEBUG Updated headers: {b'User-Agent': b'python-requests/2.27.1', b'Accept-Encoding': b'gzip, deflate', b'Accept': b'*/*', b'Connection': b'keep-alive', b'Content-Length': b'0', b'Authorization': b'OAuth oauth_nonce="107239631555922908281648822311", oauth_timestamp="1648822311", oauth_version="1.0", oauth_signature_method="HMAC-SHA1", oauth_consumer_key="{consuner_key}", oauth_signature="{oauth_signature}"'}
DEBUG Updated body: None
DEBUG Starting new HTTPS connection (1): api.twitter.com:443
DEBUG https://api.twitter.com:443 "POST /oauth/request_token?oauth_callback=http%3A%2F%2F127.0.0.1%3A8000%2Faccounts%2Ftwitter%2Flogin%2Fcallback%2F HTTP/1.1" 200 129
[01/Apr/2022 14:11:52] "GET /accounts/twitter/login/ HTTP/1.1" 302 0
DEBUG Signing request <PreparedRequest [POST]> using client <Client client_key={consuner_key}, client_secret=****, resource_owner_key=dkDlygAAAAABa6NrAAABf-V328s, resource_owner_secret=****, signature_method=HMAC-SHA1, signature_type=AUTH_HEADER, callback_uri=None, rsa_key=None, verifier=None, realm=None, encoding=utf-8, decoding=utf-8, nonce=None, timestamp=None>
DEBUG Including body in call to sign: False
DEBUG Collected params: [('oauth_verifier', '{verifier_value}'), ('oauth_nonce', '23913555268131873461648822314'), ('oauth_timestamp', '1648822314'), ('oauth_version', '1.0'), ('oauth_signature_method', 'HMAC-SHA1'), ('oauth_consumer_key', '{consuner_key}'), ('oauth_token', 'dkDlygAAAAABa6NrAAABf-V328s')]
DEBUG Normalized params: oauth_consumer_key={consuner_key}&oauth_nonce=23913555268131873461648822314&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1648822314&oauth_token={oauth_token}&oauth_verifier={verifier_value}&oauth_version=1.0
DEBUG Normalized URI: https://api.twitter.com/oauth/access_token
DEBUG Signing: signature base string: POST&https%3A%2F%2Fapi.twitter.com%2Foauth%2Faccess_token&oauth_consumer_key%3D{consuner_key}%26oauth_nonce%3D23913555268131873461648822314%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1648822314%26oauth_token%3DdkDlygAAAAABa6NrAAABf-V328s%26oauth_verifier%3D{verifier_value}%26oauth_version%3D1.0
DEBUG Signature: 6Lpfmoe6tKAvi5x3cYg/3Jl7rzU=
DEBUG Encoding URI, headers and body to utf-8.
DEBUG Updated url: https://api.twitter.com/oauth/access_token?oauth_verifier={verifier_value}
DEBUG Updated headers: {b'User-Agent': b'python-requests/2.27.1', b'Accept-Encoding': b'gzip, deflate', b'Accept': b'*/*', b'Connection': b'keep-alive', b'Content-Length': b'0', b'Authorization': b'OAuth oauth_nonce="23913555268131873461648822314", oauth_timestamp="1648822314", oauth_version="1.0", oauth_signature_method="HMAC-SHA1", oauth_consumer_key="{consuner_key}", oauth_token="{oauth_token}"'}
DEBUG Updated body: None
DEBUG Starting new HTTPS connection (1): api.twitter.com:443
DEBUG https://api.twitter.com:443 "POST /oauth/access_token?oauth_verifier={verifier_value} HTTP/1.1" 200 172
DEBUG Signing request <PreparedRequest [GET]> using client <Client client_key={consuner_key}, client_secret=****, resource_owner_key=1508849183922569220-ptdyhtd6a5IEAeSWvM9iSZEYGKMzaf, resource_owner_secret=****, signature_method=HMAC-SHA1, signature_type=AUTH_HEADER, callback_uri=None, rsa_key=None, verifier=None, realm=None, encoding=utf-8, decoding=utf-8, nonce=None, timestamp=None>
DEBUG Including body in call to sign: False
DEBUG Collected params: [('oauth_nonce', '69076491240381283361648822315'), ('oauth_timestamp', '1648822315'), ('oauth_version', '1.0'), ('oauth_signature_method', 'HMAC-SHA1'), ('oauth_consumer_key', '{consuner_key}'), ('oauth_token', '1508849183922569220-ptdyhtd6a5IEAeSWvM9iSZEYGKMzaf')]
DEBUG Normalized params: oauth_consumer_key={consuner_key}&oauth_nonce=69076491240381283361648822315&oauth_signature_method=HMAC-SHA1&oauth_timestamp=1648822315&oauth_token=1508849183922569220-ptdyhtd6a5IEAeSWvM9iSZEYGKMzaf&oauth_version=1.0
DEBUG Normalized URI: https://api.twitter.com/1.1/account/verify_credentials.json
DEBUG Signing: signature base string: GET&https%3A%2F%2Fapi.twitter.com%2F1.1%2Faccount%2Fverify_credentials.json&oauth_consumer_key%3D{consuner_key}%26oauth_nonce%3D69076491240381283361648822315%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1648822315%26oauth_token%3D1508849183922569220-ptdyhtd6a5IEAeSWvM9iSZEYGKMzaf%26oauth_version%3D1.0
DEBUG Signature: fwWKf2KRDy3wGjJo1X6U8NHJveU=
DEBUG Encoding URI, headers and body to utf-8.
DEBUG Updated url: https://api.twitter.com/1.1/account/verify_credentials.json
DEBUG Updated headers: {b'User-Agent': b'python-requests/2.27.1', b'Accept-Encoding': b'gzip, deflate', b'Accept': b'*/*', b'Connection': b'keep-alive', b'Authorization': b'OAuth oauth_nonce="69076491240381283361648822315", oauth_timestamp="1648822315", oauth_version="1.0", oauth_signature_method="HMAC-SHA1", oauth_consumer_key="{consuner_key}", oauth_token="{token}", oauth_signature="{signature}"'}
DEBUG Updated body: None
DEBUG Starting new HTTPS connection (1): api.twitter.com:443
DEBUG https://api.twitter.com:443 "GET /1.1/account/verify_credentials.json HTTP/1.1" 403 270
[01/Apr/2022 14:11:58] "GET /accounts/twitter/login/callback/?oauth_token={token}&oauth_verifier={verifier_value} HTTP/1.1" 200 93
DEBUG Updated body: None
DEBUG Starting new HTTPS connection (1): api.twitter.com:443
DEBUG https://api.twitter.com:443 "GET /1.1/account/verify_credentials.json HTTP/1.1" 403 270
```

My app has the following privilege rights in the developer portal:

* OAuth1 Endpoints
* OAuth2 Endpoints
* User Email
* Read Tweets and Profiles

Any ideas why this could be happening? Thanks in advance!
2022/04/01
[ "https://Stackoverflow.com/questions/71707011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3608004/" ]
The reason this is happening is that your developer account doesn't have access to the v1.1 API. To gain access, you need to apply for the 'Elevated' API access level as described here: <https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api>

I was getting the exact same error as you, then tried again on an older dev account of mine that is grandfathered into Elevated, and everything started working. (Any dev account created before November 2021 automatically has Elevated access.)

Update: Elevated access was approved for my account in about ten hours, so it seems straightforward to get.
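If you want to confirm it's an access-level problem rather than an allauth misconfiguration, here's a minimal sketch that replays the failing `verify_credentials` call outside Django, using `requests_oauthlib` (the same stack that signs the request in your debug log). All four credential strings are placeholders you'd substitute with your own values:

```python
# Minimal sketch: replay the v1.1 call allauth makes, outside Django.
# All four credential values below are placeholders -- substitute your own.
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    client_key="TWITTER_API_KEY",           # consumer key
    client_secret="TWITTER_API_SECRET",     # consumer secret
    resource_owner_key="ACCESS_TOKEN",      # user's oauth_token
    resource_owner_secret="ACCESS_SECRET",  # user's oauth_token_secret
)

resp = session.get("https://api.twitter.com/1.1/account/verify_credentials.json")
# A 403 here with valid credentials points at the app's access level
# rather than the allauth configuration; a 200 means both are fine.
print(resp.status_code, resp.text)
```

If this call starts returning 200 once Elevated access is granted, the allauth flow should complete without any code changes.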
I am still having a different problem after going through the process of creating the headers and hashing, but I used this library, which streamlines the OAuth1 (and, I believe, the OAuth2) process: you work with just the consumer key and secret, with no need to build nonces or timestamps yourself. The docs also walk through some Twitter API examples that helped me with another API. Posting in case this helps someone else. <https://docs.authlib.org/en/latest/client/oauth1.html>
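For reference, here is a rough sketch of what Authlib's OAuth1 flow from those docs looks like against Twitter's endpoints. The redirect URI, credential values, and verifier are placeholders, and the exact method names should be double-checked against the Authlib docs linked above:

```python
# Rough sketch of Authlib's OAuth1 flow against Twitter (method names per
# the Authlib docs linked above; all credential values are placeholders).
from authlib.integrations.requests_client import OAuth1Session

client = OAuth1Session(
    "TWITTER_API_KEY", "TWITTER_API_SECRET",
    redirect_uri="http://127.0.0.1:8000/accounts/twitter/login/callback/",
)

# Step 1: get a temporary request token (nonce/timestamp handled for you).
request_token = client.fetch_request_token("https://api.twitter.com/oauth/request_token")

# Step 2: send the user to Twitter to authorize the app.
url = client.create_authorization_url(
    "https://api.twitter.com/oauth/authenticate", request_token["oauth_token"]
)
print("Visit:", url)

# Step 3: after the redirect back, trade the verifier for an access token.
token = client.fetch_access_token(
    "https://api.twitter.com/oauth/access_token", verifier="VERIFIER_FROM_CALLBACK"
)
print(token)
```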
3,252
28,399,335
I have an error in a script I wrote a few months ago. It worked very well on a Raspberry Pi, but now on an Orange Pi I get this:

```
>>> import paramiko
>>> transport = paramiko.Transport("192.168.2.2", 22)
>>> transport.connect(username = "orangepi", password = "my_pass")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 978, in connect
    self.start_client()
  File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 406, in start_client
    raise e
paramiko.ssh_exception.SSHException: Incompatible ssh server (no acceptable macs)
```

I can connect from the console with ssh without any problem. Does anybody have an idea?
2015/02/08
[ "https://Stackoverflow.com/questions/28399335", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3314648/" ]
You should check whether any of these MAC algorithms are available on your SSH server (in `sshd_config`, under the `MACs` key):

* HMAC-SHA1
* HMAC-MD5
* HMAC-SHA1-96
* HMAC-MD5-96

At least one of them is **needed** for Paramiko to connect to your SSH server.
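As a quick way to see the MAC algorithms your local Paramiko build will offer (so you can compare them against the server's `MACs` line), here is a small sketch. It relies on `Transport.get_security_options()`, whose `digests` attribute holds the MAC names; wrapping an unconnected socket avoids any network traffic:

```python
import socket

import paramiko

# Wrap an unconnected socket so nothing is sent over the network;
# we only want to inspect the algorithms this client would propose.
transport = paramiko.Transport(socket.socket())
opts = transport.get_security_options()
print("Client-side MACs:", opts.digests)
```

If none of the printed names appear in the server's `MACs` list, you get exactly the "no acceptable macs" exception from the question.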
On your **remote** server, edit `/etc/ssh/sshd_config` and **add a `MACs` line or append to the existing one**, with one or more of `hmac-sha1,hmac-md5,hmac-sha1-96,hmac-md5-96` (values are comma-separated), for example:

```
MACs hmac-sha1
```

Now **restart sshd**: `sudo systemctl restart ssh`.
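Once sshd has been restarted, a quick way to confirm the fix is to retry the original connection from Python. This is just the snippet from the question wrapped in error handling, with the host and credentials as placeholders:

```python
import paramiko

# Placeholders -- substitute your own host and credentials.
transport = paramiko.Transport(("192.168.2.2", 22))
try:
    transport.connect(username="orangepi", password="my_pass")
    print("MAC negotiation succeeded")
except paramiko.ssh_exception.SSHException as exc:
    # Still "no acceptable macs"? The MACs line didn't take effect --
    # re-check sshd_config and that sshd was actually restarted.
    print("Handshake failed:", exc)
finally:
    transport.close()
```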
3,253