AttributeError when creating tkinter PhotoImage object with PIL ImageTk

I am trying to place an image resized with PIL in a tkinter PhotoImage object:

````
import tkinter as tk  # I use Python 3
from PIL import Image, ImageTk

master = tk.Tk()
img = Image.open(file_name)
image_resized = img.resize((200, 200))
photoimg = ImageTk.PhotoImage(image_resized)
````

However, when I later try to call

````
photoimg.put("#000000", (0, 0))
````

I get an

````
AttributeError: 'PhotoImage' object has no attribute 'put'
````

While this:

````
photoimg = tk.PhotoImage(file=file_name)
photoimg.put("#000000", (0, 0))
````

does not raise an error. What am I doing wrong?
`ImageTk.PhotoImage`, as in `PIL.ImageTk.PhotoImage`, is not the same class as `tk.PhotoImage` (`tkinter.PhotoImage`); they just have the same name.

Here are the `ImageTk.PhotoImage` docs: <a href="http://pillow.readthedocs.io/en/3.1.x/reference/ImageTk.html#PIL.ImageTk.PhotoImage" rel="nofollow">http://pillow.readthedocs.io/en/3.1.x/reference/ImageTk.html#PIL.ImageTk.PhotoImage</a> As you can see, there is no `put` method in it.

But `tk.PhotoImage` does have it: <a href="http://epydoc.sourceforge.net/stdlib/Tkinter.PhotoImage-class.html" rel="nofollow">http://epydoc.sourceforge.net/stdlib/Tkinter.PhotoImage-class.html</a>
Drawing on python and pycharm

I am a beginner on Python. I draw a square with this code:

````
import turtle

square = turtle.Turtle()
print(square)
for i in range(4):
    square.fd(100)
    square.lt(90)
turtle.mainloop()
````

However, there is another code for drawing a square in the book. Apparently I tried to copy the exact same thing, but it did not work out. Can someone help me to figure out the problem?

````
def drawSquare(t, sz):
    """Make turtle t draw a square of sz."""
    for i in range(4):
        t.forward(sz)
        t.left(90)

turtle.mainloop()
````
You need to call the function so it will start:

````
import turtle

def drawSquare(t, size):
    for i in range(4):
        t.forward(size)
        t.left(90)
    turtle.mainloop()

drawSquare(turtle.Turtle(), 100)
````
Work with a row in a pandas dataframe without incurring chain indexing (not copying, just indexing)

My data is organized in a dataframe:

````
import pandas as pd
import numpy as np

data = {'Col1': [4, 5, 6, 7], 'Col2': [10, 20, 30, 40], 'Col3': [100, 50, -30, -50], 'Col4': ['AAA', 'BBB', 'AAA', 'CCC']}
df = pd.DataFrame(data=data, index=['R1', 'R2', 'R3', 'R4'])
````

Which looks like this (only much bigger):

````
    Col1  Col2  Col3 Col4
R1     4    10   100  AAA
R2     5    20    50  BBB
R3     6    30   -30  AAA
R4     7    40   -50  CCC
````

My algorithm loops through this table's rows and performs a set of operations. For cleanness/lazyness sake, I would like to work on a single row at each iteration, without typing `df.loc['row index', 'column name']` to get each cell value. I have tried to follow the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow">right style</a>, using for example:

````
row_of_interest = df.loc['R2', :]
````

However, I still get the warning when I do:

````
row_of_interest['Col2'] = row_of_interest['Col2'] + 1000

SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame
````

And it is not working (as I intended); it is making a copy:

````
print df
    Col1  Col2  Col3 Col4
R1     4    10   100  AAA
R2     5    20    50  BBB
R3     6    30   -30  AAA
R4     7    40   -50  CCC
````

Any advice on the proper way to do it? Or should I just stick to working with the data frame directly?
Edit 1: Using the replies provided, the warning is removed from the code, but the original dataframe is not modified: the "row of interest" `Series` is a copy, not part of the original dataframe. For example:

````
import pandas as pd
import numpy as np

data = {'Col1': [4, 5, 6, 7], 'Col2': [10, 20, 30, 40], 'Col3': [100, 50, -30, -50], 'Col4': ['AAA', 'BBB', 'AAA', 'CCC']}
df = pd.DataFrame(data=data, index=['R1', 'R2', 'R3', 'R4'])

row_of_interest = df.loc['R2']
row_of_interest.is_copy = False
new_cell_value = row_of_interest['Col2'] + 1000
row_of_interest['Col2'] = new_cell_value

print row_of_interest
Col1      5
Col2   1020
Col3     50
Col4    BBB
Name: R2, dtype: object

print df
    Col1  Col2  Col3 Col4
R1     4    10   100  AAA
R2     5    20    50  BBB
R3     6    30   -30  AAA
R4     7    40   -50  CCC
````

Edit 2: This is an example of the functionality I would like to replicate. In python, a list of lists looks like:

````
a = [[1, 2, 3], [4, 5, 6]]
````

Now I can create a "label":

````
b = a[0]
````

And if I change an entry in b:

````
b[0] = 7
````

Both a and b change:

````
print a, b
[[7, 2, 3], [4, 5, 6]] [7, 2, 3]
````

Can this behavior be replicated between a pandas dataframe, labeling one of its rows a pandas series?
This should work:

````
row_of_interest = df.loc['R2', :]
row_of_interest.is_copy = False
row_of_interest['Col2'] = row_of_interest['Col2'] + 1000
````

Setting `.is_copy = False` is the trick.

Edit 2:

````
import pandas as pd
import numpy as np

data = {'Col1': [4, 5, 6, 7], 'Col2': [10, 20, 30, 40], 'Col3': [100, 50, -30, -50], 'Col4': ['AAA', 'BBB', 'AAA', 'CCC']}
df = pd.DataFrame(data=data, index=['R1', 'R2', 'R3', 'R4'])

row_of_interest = df.loc['R2']
row_of_interest.is_copy = False
new_cell_value = row_of_interest['Col2'] + 1000
row_of_interest['Col2'] = new_cell_value

print row_of_interest

df.loc['R2'] = row_of_interest
print df
````

df:

````
    Col1  Col2  Col3 Col4
R1     4    10   100  AAA
R2     5  1020    50  BBB
R3     6    30   -30  AAA
R4     7    40   -50  CCC
````
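If the goal (per Edit 2 of the question) is for the change to propagate back to the original DataFrame, writing through `df.loc` with a row/column pair is the usual alternative. A minimal sketch, separate from the answer above, using a small frame mirroring the question's example:

```python
import pandas as pd

# Small frame mirroring the question's example data
df = pd.DataFrame({'Col1': [4, 5], 'Col2': [10, 20]}, index=['R1', 'R2'])

# Assigning through df.loc updates the frame in place,
# with no SettingWithCopyWarning
df.loc['R2', 'Col2'] += 1000
print(df.loc['R2', 'Col2'])  # 1020
```

The trade-off is that you keep typing the row label, but the write is unambiguous: pandas knows it is operating on the frame itself, not on an intermediate copy.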
How can I create a mesh that can be changed without invalidating native sets in Abaqus?

The node sets I create "by feature edge" in Abaqus are invalidated if I change the mesh. What are my options to prevent this from happening? I am asking because I am trying to write a Python file in which I change the mesh as a parameter. That will not be possible if changing the mesh invalidates the node sets.
As usual, there is more than one way. One technique, if you know the coordinates of some point on or near the edge(s) of interest, is to use the EdgeArray findAt() method, followed by the Edge getNodes() method to return the Node objects, and then define a new set from them. You can use the following code for inspiration for other, more complex methods you might dream up:

````
# Tested on Abaqus/CAE 6.12
# Assumes access from the Part-level (Assembly-level is similar):
p = mdb.models['Model-1'].parts['Part-1']  # the Part object
e = p.edges  # an EdgeArray of Edge objects in the Part

# Look for your edge at the specified coords, then get the nodes:
e1 = e.findAt(coordinates=(0.5, 0.0, 0.0))  # the Edge object of interest
e1_nodes = e1.getNodes()  # a list of Node objects

# Specify the new node set:
e1_nset = p.SetFromNodeLabels(name='Nset-1', nodeLabels=[node.label for node in e1_nodes])
````
Issue when importing GDAL: ImportError, Library not loaded, Image not found

Since yesterday I struggle to import some libraries such as GDAL (or iris) and I always get the same type of output:

````
&gt;&gt;&gt; import gdal
Traceback (most recent call last):
  File "<stdin&gt;", line 1, in <module&gt;
  File "gdal.py", line 28, in <module&gt;
    _gdal = swig_import_helper()
  File "gdal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_gdal', fp, pathname, description)
ImportError: dlopen(./_gdal.so, 2): Library not loaded: @rpath/libicui18n.56.dylib
  Referenced from: /Users/zoran/anaconda/lib/libgdal.20.dylib
  Reason: image not found
````

I searched in my files and found:

- 1 file containing `libicui18n`
- 2 files containing `_gdal.so`

````
/Users/zoran/anaconda/pkgs/icu-54.1-0/lib/libicui18n.54.1.dylib
/Users/zoran/anaconda/lib/python2.7/site-packages/osgeo/_gdal.so
/Library/Frameworks/GDAL.framework/Versions/2.1/Python/2.7/site-packages/osgeo/_gdal.so
````

This morning I could import gdal without problem, and suddenly (I do not know what I did) it was totally impossible. I tried to:

- uninstall/install gdal
- uninstall/install anaconda and install gdal again
- create different new environments (in python2 and python3) and install only gdal

I do not know what this `libicui18n.56.dylib` is, neither `libgdal.20.dylib`. When I type `otool -L` with the paths above I get:

````
libicui18n.54.dylib (compatibility version 54.0.0, current version 54.1.0)
@loader_path/./libicuuc.54.dylib (compatibility version 54.0.0, current version 54.1.0)
@loader_path/./libicudata.54.dylib (compatibility version 54.0.0, current version 54.1.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.0.0)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.4.0)
/usr/lib/libgcc_s.1.dylib (compatibility version 1.0.0, current version 1.0.0)
@rpath/libgdal.1.dylib (compatibility version 20.0.0, current version 20.5.0)
/usr/lib/libc++.1.dylib (compatibility version 1.0.0, current version 120.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)
/Library/Frameworks/GDAL.framework/Versions/2.1/GDAL (compatibility version 22.0.0, current version 22.1.0)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 56.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 169.3.0)
````

When I type `conda info`:

````
platform : osx-64
conda version : 4.2.9
conda is private : False
conda-env version : 4.2.9
conda-build version : 2.0.2
python version : 2.7.12.final.0
requests version : 2.11.1
root environment : /Users/zoran/anaconda (writable)
default environment : /Users/zoran/anaconda
envs directories : /Users/zoran/anaconda/envs
package cache : /Users/zoran/anaconda/pkgs
channel URLs : https://conda.anaconda.org/anaconda/osx-64/
               https://conda.anaconda.org/anaconda/noarch/
               https://conda.anaconda.org/scitools/osx-64/
               https://conda.anaconda.org/scitools/noarch/
               https://conda.anaconda.org/conda-forge/osx-64/
               https://conda.anaconda.org/conda-forge/noarch/
               https://repo.continuum.io/pkgs/free/osx-64/
               https://repo.continuum.io/pkgs/free/noarch/
               https://repo.continuum.io/pkgs/pro/osx-64/
               https://repo.continuum.io/pkgs/pro/noarch/
config file : /Users/zoran/.condarc
offline mode : False
````

I am wondering if somehow the libraries are saved in the wrong directory? I have seen many similar issues, but no trick to fix the problem. Thanks for helping.
I found a solution to my problem <a href="https://github.com/conda-forge/gdal-feedstock/issues/111" rel="nofollow">here</a>. Thank you for the clear explanation by "ocefpaf":

<blockquote>
Your problem seems like the usual mismatch between conda-forge and defaults. Can you try the following instructions (if you do want to use conda-forge's gdal, of course):

- Make sure you have the latest conda to take advantage of the channel preference feature. You can do that by issuing conda update conda in the root env of your conda installation.
- Edit your .condarc file and place conda-forge on top of defaults. The .condarc usually lives in your home directory. See mine below. (Note that the more channels you have, the more likely you are to face issues. I recommend having only defaults and conda-forge.)
- Issue the following commands to check if you will get the correct installation:
</blockquote>

````
conda create --yes -n TEST_GDAL python=3.5 gdal
source activate TEST_GDAL
python -c "from osgeo import gdal; print(gdal.__version__)"
````

<blockquote>
If you get 2.1.1 you got a successful installation of the latest version from conda-forge. We always recommend users to work with envs as in the example above, but you do not need to use Python 3.5 (conda-forge has 3.4 and 2.7 also) and you do not need to name the env TEST_GDAL. And here is my .condarc file:
</blockquote>

````
&gt; cat .condarc
channels:
  - conda-forge
  - defaults
show_channel_urls: true
````
Python: How to develop a between_time similar method when on pandas 0.9.0?

I am stuck on pandas 0.9.0, as I am working under Python 2.5, hence I have no <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.between_time.html" rel="nofollow">between_time</a> method available. I have a DataFrame of dates and would like to filter all the dates that are between certain hours, e.g. between `08:00` and `09:00`, for all the dates within the DataFrame `df`:

````
import pandas as pd
import numpy as np
import datetime

dates = pd.date_range(start="08/01/2009", end="08/01/2012", freq="10min")
df = pd.DataFrame(np.random.rand(len(dates), 1)*1500, index=dates, columns=['Power'])
````

How can I develop a method that provides the same functionality as the `between_time` method?

N.B.: The original problem I am trying to accomplish is under <a href="http://stackoverflow.com/questions/40117702/python-filter-dataframe-in-pandas-by-hour-day-and-month-grouped-by-year">Python: Filter DataFrame in Pandas by hour, day and month grouped by year</a>.
<strong>UPDATE:</strong> try to use:

````
df.ix[df.index.indexer_between_time('08:00', '09:50')]
````

<strong>OLD answer:</strong> I am not sure that it will work on Pandas 0.9.0, but it is worth a try:

````
df[(df.index.hour &gt;= 8) &amp; (df.index.hour <= 9)]
````

PS: please be aware - it is not the same as `between_time`, as it checks only hours, while `between_time` is able to check <strong>time</strong>, like `df.between_time('08:01:15', '09:13:28')`.

<strong>Hint</strong>: download the source code for a newer version of Pandas and take a look at the definition of the `indexer_between_time()` function in `pandas/tseries/index.py` - you can clone it for your needs.
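Independent of the pandas version, the underlying idea is just a time-of-day comparison, which can be sketched with the standard library alone (illustrative only; the timestamps here are made up to mimic the question's 10-minute frequency):

```python
from datetime import datetime, time

# Hypothetical timestamps at 10-minute intervals over one day
stamps = [datetime(2009, 8, 1, h, m)
          for h in range(24) for m in range(0, 60, 10)]

# Keep only stamps whose time-of-day falls in [08:00, 09:00]
start, end = time(8, 0), time(9, 0)
between = [t for t in stamps if start <= t.time() <= end]

print(len(between))  # 7: 08:00, 08:10, ..., 08:50 and 09:00
```

Applying the same predicate to a DatetimeIndex (via its `hour`/`minute` attributes) is essentially what the hour-mask in the old answer does, minus the minute-level precision.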
Why is my Python script not running via command line?

Thanks!

````
def hello(a, b):
    print "hello and that is your sum:"
    sum = a + b
    print sum

import sys

if __name__ == "__main__":
    hello(sys.argv[2])
````

It does not work for me. I appreciate the help!!! Thanks!
Without seeing your error message it is hard to say exactly what the problem is, but a few things jump out:

- No indentation after `if __name__ == "__main__":`
- You are only passing one argument into the hello function, and it requires two
- The sys module is not visible in the scope outside the hello function

Probably more, but again, we need the error output. Here is what you might want:

````
import sys

def hello(a, b):
    print "hello and that is your sum:"
    sum = a + b
    print sum

if __name__ == "__main__":
    hello(int(sys.argv[1]), int(sys.argv[2]))
````
Python find and replace dialog from Rapid GUI Programming error

When building the find and replace dialog from "Rapid GUI Programming with Python and Qt" (Chapter 07), by Prentice Hall (Mark Summerfield), I get the following error:

````
import ui_findandreplacedlg
ImportError: No module named ui_findandreplacedlg
````

Depending on which version of python I run, I also get:

````
File "findandreplacedlg.py", line 7, in <module&gt;
    ui_findandreplacedlg.Ui_FindAndReplaceDlg):
AttributeError: 'module' object has no attribute 'Ui_FindAndReplaceDlg'
````

I got the source code from their website, and it errors on the same line in the same way. I have searched the errata on their webpage with no mention whatsoever. Does anyone know what the solution is?
The code in question can be found here - <a href="https://github.com/suzp1984/pyqt5-book-code/blob/master/chap07/ui_findandreplacedlg.py" rel="nofollow">https://github.com/suzp1984/pyqt5-book-code/blob/master/chap07/ui_findandreplacedlg.py</a>

If that file is in the same directory as the code you are trying to run, just do:

````
import ui_findandreplacedlg
````
Database Connect Error: Centos 6 / Apache 2.4 / Postgres 9.4 / Django 1.9 / mod_wsgi 3.5 / python 2.7

I am trying to get my website up and running. Everything seems to work fine, but when I go to a page with a database write, I get this:

````
[Wed Oct 19 09:53:12.319824 2016] [mpm_prefork:notice] [pid 12411] AH00173: SIGHUP received. Attempting to restart
[Wed Oct 19 09:53:13.001121 2016] [ssl:warn] [pid 12411] AH01909: sXXX-XXX-XXX-XXX.secureserver.net:443:0 server certificate does NOT include an ID which matches the server name
[Wed Oct 19 09:53:13.003578 2016] [mpm_prefork:notice] [pid 12411] AH00163: Apache/2.4.18 (Unix) OpenSSL/1.0.1e-fips mod_bwlimited/1.4 mod_wsgi/3.5 Python/2.7.6 configured -- resuming normal operations
[Wed Oct 19 09:53:13.003590 2016] [core:notice] [pid 12411] AH00094: Command line: '/usr/local/apache/bin/httpd'
(XID fsf92m) Database Connect Error: Access denied for user 'leechprotect'@'localhost' (using password: YES)
[Wed Oct 19 09:53:17.637487 2016] [mpm_prefork:notice] [pid 12411] AH00169: caught SIGTERM, shutting down
````

This line shows that a user "leechprotect" cannot connect:

````
(XID fsf92m) Database Connect Error: Access denied for user 'leechprotect'@'localhost' (using password: YES)
````

However, I do not have a user called leechprotect. leechprotect is a default user on MySQL (I am guessing), because MySQL is installed as the default database on my dedicated server.

My Django settings.py file:

````
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'prelaunch_db',
        'USER': 'postgres_user',
        'PASSWORD': 'XXXXXXXXXXXXXXX',
        'HOST': 'localhost',
        'PORT': '',
    }
}
````

I already know my database and entire site work on my test server at home. I think it might be interference between MySQL and PostgreSQL. Any help much appreciated.

EDIT (After disabling leech protection):

````
[Wed Oct 19 11:40:24.000919 2016] [ssl:warn] [pid 14754] AH01909: sXXX-XXX-XXX-XXX.secureserver.net:443:0 server certificate does NOT include an ID which matches the server name
[Wed Oct 19 11:40:24.001851 2016] [suexec:notice] [pid 14754] AH01232: suEXEC mechanism enabled (wrapper: /usr/local/apache/bin/suexec)
[Wed Oct 19 11:40:24.001887 2016] [:notice] [pid 14754] ModSecurity for Apache/2.9.0 (http://www.modsecurity.org/) configured
[Wed Oct 19 11:40:24.001892 2016] [:notice] [pid 14754] ModSecurity: APR compiled version="1.5.2"; loaded version="1.5.2"
[Wed Oct 19 11:40:24.001897 2016] [:notice] [pid 14754] ModSecurity: PCRE compiled version="8.38"; loaded version="8.38 2015-11-23"
[Wed Oct 19 11:40:24.001900 2016] [:notice] [pid 14754] ModSecurity: LUA compiled version="Lua 5.1"
[Wed Oct 19 11:40:24.001903 2016] [:notice] [pid 14754] ModSecurity: LIBXML compiled version="2.9.2"
[Wed Oct 19 11:40:24.001905 2016] [:notice] [pid 14754] ModSecurity: Status engine is currently disabled, enable it by set SecStatusEngine to On
[Wed Oct 19 11:40:25.001596 2016] [ssl:warn] [pid 14755] AH01909: sXXX-XXX-XXX-XXX.secureserver.net:443:0 server certificate does NOT include an ID which matches the server name
[Wed Oct 19 11:40:25.004276 2016] [mpm_prefork:notice] [pid 14755] AH00163: Apache/2.4.18 (Unix) OpenSSL/1.0.1e-fips mod_bwlimited/1.4 mod_wsgi/3.5 Python/2.7.6 configured -- resuming normal operations
[Wed Oct 19 11:40:25.004294 2016] [core:notice] [pid 14755] AH00094: Command line: '/usr/local/apache/bin/httpd -D SSL'
(XID 6jmrjj) Database Connect Error: Access denied for user 'leechprotect'@'localhost' (using password: YES)
[Wed Oct 19 11:40:31.847492 2016] [mpm_prefork:notice] [pid 14755] AH00169: caught SIGTERM, shutting down
````

EDIT 2: I found that Apache comes preconfigured on cPanel with a rewrite function. These lines are in the httpd.conf file:

````
RewriteEngine on
RewriteMap LeechProtect prg:/usr/local/cpanel/bin/leechprotect
Mutex file:/usr/local/apache/logs rewrite-map
````

I tried to comment out these lines, but cPanel just regenerates the default file. I looked up how to edit it and I found:

````
[root@sXXX-XXX-XXX-XXX]# /usr/local/cpanel/bin/apache_conf_distiller --update
````

From what I see, anything written outside the tag will be permanently saved when running the above command. This got rid of the Database error problem, but I still get a 500 server error, and all other error log messages are the same.
MySQL and PostgreSQL both do not come with a user called 'leechprotect'. But a Google search points out that this username <a href="https://confluence2.cpanel.net/display/1152Docs/Leech+Protect" rel="nofollow">is related to cPanel</a> - it might be worth reading that to understand what is going on. Afterwards, you might consider deactivating it for your project directory.
Python - reduce complexity using sets

I am using `url_analysis` tools from the `spotify` `API` (wrapper `spotipy`, as `sp.`) to process tracks, using the following code:

````
def loudness_drops(track_ids):

    names = set()
    tids = set()
    tracks_with_drop_name = set()
    tracks_with_drop_id = set()

    for id_ in track_ids:
        track_id = sp.track(id_)['uri']
        tids.add(track_id)
        track_name = sp.track(id_)['name']
        names.add(track_name)
        # get audio features
        features = sp.audio_features(tids)
        # and then audio analysis id
        urls = {x['analysis_url'] for x in features if x}
        print len(urls)
        # fetch analysis data
        for url in urls:
            # print len(urls)
            analysis = sp._get(url)
            # extract loudness sections from analysis
            x = [_['start'] for _ in analysis['segments']]
            print len(x)
            l = [_['loudness_max'] for _ in analysis['segments']]
            print len(l)
            # get max and min values
            min_l = min(l)
            max_l = max(l)
            # normalize stream
            norm_l = [(_ - min_l)/(max_l - min_l) for _ in l]
            # define silence as a value below 0.1
            silence = [l[i] for i in range(len(l)) if norm_l[i] < 0.1]
            # more than one silence means one of them happens in the middle of the track
            if len(silence) &gt; 1:
                tracks_with_drop_name.add(track_name)
                tracks_with_drop_id.add(track_id)
    return tracks_with_drop_id
````

The code works, but if the number of songs I `search` is set to, say, `limit=20`, the time it takes to process all the `audio segments` `x` and `l` makes the process too expensive, e.g.: `time.time()` prints `452.175742149`.

<strong>QUESTION</strong>: how can I drastically reduce complexity here?

I have tried to use `sets` instead of `lists`, but working with `set` `objects` prohibits `indexing`.

EDIT: 10 `urls`:

````
[u'https://api.spotify.com/v1/audio-analysis/5H40slc7OnTLMbXV6E780Z', u'https://api.spotify.com/v1/audio-analysis/72G49GsqYeWV6QVAqp4vl0', u'https://api.spotify.com/v1/audio-analysis/6jvFK4v3oLMPfm6g030H0g', u'https://api.spotify.com/v1/audio-analysis/351LyEn9dxRxgkl28GwQtl', u'https://api.spotify.com/v1/audio-analysis/4cRnjBH13wSYMOfOF17Ddn', u'https://api.spotify.com/v1/audio-analysis/2To3PTOTGJUtRsK3nQemP4', u'https://api.spotify.com/v1/audio-analysis/4xPRxqV9qCVeKLQ31NxhYz', u'https://api.spotify.com/v1/audio-analysis/1G1MtHxrVngvGWSQ7Fj4Oj', u'https://api.spotify.com/v1/audio-analysis/3du9aoP5vPGW1h70mIoicK', u'https://api.spotify.com/v1/audio-analysis/6VIIBKYJAKMBNQreG33lBF']
````
This is what I see, not knowing much about spotify:

````
for id_ in track_ids:
    # this runs N times, where N = len(track_ids)
    tids.add(track_id)  # tids contains all track_ids processed until now
    # in the end: len(tids) == N
    features = sp.audio_features(tids)
    # features contains features of all tracks processed until now
    # in the end, I guess: len(features) == N * num_features_per_track
    urls = {x['analysis_url'] for x in features if x}
    # very probably: len(urls) == len(features)
    for url in urls:
        # for the first track, this processes features of the first track only
        # for the second track, this processes features of 1st and 2nd
        # etc.
        # in the end, this loop repeats N * N * num_features_per_track times
````

You should not process any url twice. And you do, because you keep all tracks in `tids` and then, for each track, you process everything in `tids`, which turns the complexity of this into O(n<sup>2</sup>).

In general, always look for loops inside loops when trying to reduce complexity.

I believe in this case this should work, if `audio_features` expects a set of ids:

````
# replace this:
features = sp.audio_features(tids)
# with:
features = sp.audio_features({track_id})
````
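The same point can be shown with a self-contained sketch (no Spotify calls; the ids are made up): do the expensive per-item work exactly once by tracking what has already been handled, instead of re-processing an ever-growing collection on every iteration:

```python
# Stand-in track ids; 'a' and 'b' repeat
track_ids = ['a', 'b', 'a', 'c', 'b']

seen = set()
processed = []
for tid in track_ids:
    if tid in seen:
        continue           # already handled -- O(1) set membership test
    seen.add(tid)
    processed.append(tid)  # stand-in for the expensive per-track analysis

print(processed)  # ['a', 'b', 'c']
```

This keeps the loop O(n): the set gives constant-time lookups, and each id triggers the heavy work at most once.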
shapely is_valid for polygons in 3D

I am trying to validate some polygons that are on planes with `is_valid`, but I get `Too few points in geometry component at or near point` for polygons where the z is not constant. Is there a way to validate these other polygons? Here is an example:

````
from shapely.geometry import Polygon

poly1 = Polygon([(0, 0), (1, 1), (1, 0)])
print(poly1.is_valid)
# True

# z=1
poly2 = Polygon([(0, 0, 1), (1, 1, 1), (1, 0, 1)])
print(poly2.is_valid)
# True

# x=1
poly3 = Polygon([(1, 0, 0), (1, 1, 1), (1, 1, 0)])
print(poly3.is_valid)
# Too few points in geometry component at or near point 1 0 0
# False
````
The problem is that `shapely` in fact ignores the z coordinate. So, as far as shapely can tell, you are building a polygon with the points `[(1, 0), (1, 1), (1, 1)]`, which are not enough to build a polygon.

See this other SO question for more information: <a href="http://stackoverflow.com/questions/39317261/python-polygon-does-not-close-shapely/39347117#39347117">python-polygon-does-not-close-shapely</a>

IMHO, shapely should not allow three-dimensional coordinates, because it brings this kind of confusion.
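One workaround (my own suggestion, not part of the answer above) is to drop the axis that is constant across the planar polygon and validate the resulting 2-D polygon instead. A dependency-free sketch of the projection step, using the question's `poly3` points:

```python
def project_2d(points_3d):
    """Drop the axis that is constant across all points, if any."""
    for axis in range(3):
        vals = {p[axis] for p in points_3d}
        if len(vals) == 1:  # this axis is constant -> drop it
            return [tuple(v for i, v in enumerate(p) if i != axis)
                    for p in points_3d]
    return [p[:2] for p in points_3d]  # fall back to keeping x, y

# The x=1 plane polygon from the question
poly3 = [(1, 0, 0), (1, 1, 1), (1, 1, 0)]
print(project_2d(poly3))  # [(0, 0), (1, 1), (1, 0)] -- a usable 2-D triangle
```

The projected points can then be passed to `Polygon(...).is_valid` as usual. For polygons on tilted planes (no constant axis) a proper change of basis would be needed; this sketch only handles the axis-aligned cases from the question.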
Mark as unseen on Gmail (imaplib)

I am trying to mark email as unseen on the Gmail server. I am using this command:

````
res, data = mailbox.uid('STORE', uid, '-FLAGS', '(\Seen)')
````

Everything goes OK, but when I check it using a web browser, it is still marked as seen. When I check flags, here is what I get:

````
b'46 (FLAGS (-FLAGS \\Seen))'
````

I have seen multiple questions on this issue, but none of the proposed solutions work. Just to mention that I am appending this email using:

````
mailbox.append(db_email.folder, "-FLAGS \Seen", time.mktime(db_email.date.timetuple()), mail.as_bytes())
````

But the flag parameter `-FLAGS \Seen` does not have any effect, since it is the same when I do not pass the flag argument. Also, I have double-checked the `uid` for the given mail folder and it matches the appropriate email.
It appears you have misunderstood flags on APPEND a bit. By doing `APPEND folder (-FLAGS \Seen)`, you have actually created a message with two flags: the standard `\Seen` flag and a nonstandard `-FLAGS` flag.

To create a message without the \Seen flag, just use `()` as your flag list for `APPEND`.

`-FLAGS` is a subcommand to STORE, saying to remove these flags from the current list. Conversely, `+FLAGS` adds these flags to the current list. The plain `FLAGS` overwrites the current list.

Also, if you do remove the `\Seen` flag over an IMAP connection, it can take some time to show up in the GMail WebUI. You may need to refresh or switch folders to get the changes to render.

NB: You are not protecting your backslashes. `\S` is not a legal escape sequence, so it will be passed through, but you should either use a double backslash (`'\\Seen'`) or a raw string (`r'\Seen'`).
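The backslash point is easy to verify in plain Python, independent of any IMAP server:

```python
# '\S' is not a recognized escape sequence, so Python currently passes it
# through unchanged -- but newer Pythons warn about it, so be explicit.
implicit = '\Seen'    # fragile: relies on the unrecognized-escape behaviour
explicit = '\\Seen'   # doubled backslash
raw = r'\Seen'        # raw string

print(implicit == explicit == raw)  # True
```

All three spell the same five-character string, but only the last two make the intent unambiguous to both Python and the reader.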
Reading Database Queries into a Specific format in Python

Hi, I am connecting to a sqlite database using python and fetching some results. However, to input these results to another file, I need them to be in the following format:

````
x = {(1, 1): 1, (1, 2): 0, (2, 1): 1, (2, 2): 0, (3, 1): 0, (3, 2): 1, (4, 1): 0, (4, 2): 1}
````

My database table has only two columns (id (integer) and task (integer)). So I run the query "select * from allocation", and the result I get needs to be formatted as above. For instance, the allocation table is as follows:

````
id | task
1  | 1
2  | 1
3  | 2
4  | 2
````

Please help.
In this code, the commented lines at the top indicate what is needed to access the sqlite database. Since I did not want to build and populate such a database, I created the object <strong>C</strong> to emulate its approximate behaviour. I used <strong>defaultdict</strong> because I do not know how many possible combinations of id's and tasks are involved. However, this means that only non-zero occurrences are represented in the final dictionary.

````
#~ import sqlite3
#~ conn = sqlite3.connect(some_database)
#~ c = conn.cursor()
#~ c.execute('Select id, task from aTable')

class C:
    def __init__(self, iterated):
        self.iterated = iterated
    def fetchone(self):
        for _ in iter(list(self.iterated)):
            yield _

c = C([['1', '1'], ['2', '1'], ['3', '2'], ['4', '2']])

from collections import defaultdict

counts = defaultdict(int)
for row in c.fetchone():
    print(row)
    id, task = row
    counts[(id, task)] += 1
print(counts)
````

Here is the output:

````
['1', '1']
['2', '1']
['3', '2']
['4', '2']
defaultdict(<class 'int'&gt;, {('4', '2'): 1, ('2', '1'): 1, ('1', '1'): 1, ('3', '2'): 1})
````
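With a real (in-memory) SQLite database, the exact dictionary from the question -- including the zero entries -- can be built directly. This is a sketch of my own, assuming the `allocation` table described in the question:

```python
import sqlite3

# In-memory stand-in for the question's database
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE allocation (id INTEGER, task INTEGER)')
cur.executemany('INSERT INTO allocation VALUES (?, ?)',
                [(1, 1), (2, 1), (3, 2), (4, 2)])

rows = cur.execute('SELECT id, task FROM allocation').fetchall()
ids = {r[0] for r in rows}
tasks = {r[1] for r in rows}
assigned = set(rows)

# 1 where the (id, task) pair exists in the table, 0 everywhere else
x = {(i, t): int((i, t) in assigned) for i in ids for t in tasks}
print(x)
```

Enumerating the full cross product of ids and tasks is what produces the explicit `(1, 2): 0`-style entries that the defaultdict approach above leaves out.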
How to return JSON from Python REST API

I have a Python API that receives data from a mysql select query. The data looks like this:

````
| val | type | status |
|-----|------|--------|
| 90  | 1    | a      |
````

That data was received well in python. Now I want to present that data as JSON to my REST client - how?

Here is my python code:

````
def somefunction(self, by, identifier):
    # validate args
    procedure = 'mysproc' + str(by)
    try:
        with self.connection.cursor() as cursor:
            cursor.callproc(procedure, [str(identifier)])
            self.connection.commit()
            result = cursor.fetchone()
            print("+++ Result: " + str(result) + " +")
    except:
        result = "Request Failed"
        raise
    finally:
        self.DestroyConnection()
    return json.dumps(result)
````

With that, my client is receiving:

````
"[90, 1, "a"]"
````

Question: is there a way for me to receive it as proper JSON? Like:

````
{'val': 90, 'type': 1, 'status': "a"}
````
You will first need to get the mysql query to return a dict object instead of a list. If your library is MySQLdb, then this answer: <a href="http://stackoverflow.com/questions/4147707/python-mysqldb-sqlite-result-as-dictionary">Python - mysqlDB, sqlite result as dictionary</a> is what you need.

Here is a link to the docs for MySQLdb: <a href="http://www.mikusa.com/python-mysql-docs/docs/MySQLdb.connections.html" rel="nofollow">http://www.mikusa.com/python-mysql-docs/docs/MySQLdb.connections.html</a>

I think if you pass in the cursor class you want to use when you create your cursor, the result of fetchone will be a dictionary:

````
with self.connection.cursor(MySQLdb.cursors.DictCursor) as cursor:
````

Running `json.dumps(result)` on a dictionary will give the output you are looking for.
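If switching cursor classes is not an option, the same result can be had with any DB-API driver by zipping `cursor.description` (whose entries start with the column name) against the fetched row. A sketch using the stdlib `sqlite3` module in place of MySQL, with the question's sample row:

```python
import json
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE t (val INTEGER, type INTEGER, status TEXT)")
cur.execute("INSERT INTO t VALUES (90, 1, 'a')")

cur.execute("SELECT val, type, status FROM t")
row = cur.fetchone()

# cursor.description holds one 7-tuple per column; item [0] is the name
columns = [d[0] for d in cur.description]
payload = json.dumps(dict(zip(columns, row)))
print(payload)  # {"val": 90, "type": 1, "status": "a"}
```

Because it only relies on the DB-API `description` attribute, the same two lines work unchanged with MySQLdb, psycopg2, and other compliant drivers.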
Python - Return integer value for list enumeration

Is there a cleaner way to get an integer value of the position of a list item than in this code:

````
a = ['m', 'rt', 'paaq', 'panc']
loc = [i for i, x in enumerate(a) if x == 'rt']
loc_str = str(loc).strip('[]')
loc_int = int(loc_str)
id_list = a[loc_int + 1:]
print id_list
````

Returns all items after 'rt' as a list: `['paaq', 'panc']`
Yes, use `list.index()`:

````
a = ['m', 'rt', 'paaq', 'panc']
id_list = a[a.index('rt') + 1:]
assert id_list == ['paaq', 'panc']
````

Or, to minimally change your program:

````
a = ['m', 'rt', 'paaq', 'panc']
loc_int = a.index('rt')
id_list = a[loc_int + 1:]
print id_list
````

References:

- <a href="https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange" rel="nofollow">https://docs.python.org/2/library/stdtypes.html#sequence-types-str-unicode-list-tuple-bytearray-buffer-xrange</a>
- <a href="https://docs.python.org/2/tutorial/datastructures.html#more-on-lists" rel="nofollow">https://docs.python.org/2/tutorial/datastructures.html#more-on-lists</a>
Concatenate string using format

I have some code similar to the following:

````
test_1 = 'bob'
test_2 = 'jeff'
test_1 = test_1 + "-" + test_2 + "\n"
````

Output:

````
bob-jeff\n
````

I would like to have the same functionality, but using the `.format` method. This is what I have so far:

````
test_1 = "{}{}{}\n".format(test_1, "-", test_2)
````

Which produces the same output, but is there a better/more efficient way of using `.format` in this case?
`''.join` is probably fast enough and efficient:

````
'-'.join((test_1, test_2))
````

You can measure different methods using the `timeit` module; that can tell you which is fastest. This is an example of how `timeit` can be used:

````
&gt;&gt;&gt; import timeit
&gt;&gt;&gt; timeit.timeit('"-".join(str(n) for n in range(100))', number=10000)
0.8187260627746582
````
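For completeness, all three approaches from the question and this answer build the identical string, which is easy to confirm before worrying about speed:

```python
test_1, test_2 = 'bob', 'jeff'

by_concat = test_1 + '-' + test_2 + '\n'        # plain + concatenation
by_format = '{}-{}\n'.format(test_1, test_2)    # str.format
by_join = '-'.join((test_1, test_2)) + '\n'     # str.join

print(by_concat == by_format == by_join)  # True
```

With equivalence established, `timeit` comparisons like the one above are measuring pure performance differences rather than behavioural ones.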
Theano's function() reports that my `givens` value is not needed for the graph

Sorry for not posting entire snippets -- the code is very big and spread out, so hopefully this can illustrate my issue. I have these:

````
train = theano.function([X], output, updates=update_G, givens={train_mode: np.cast['int32'](1)})
````

and

````
test = theano.function([X], output, updates=update_G, givens={train_mode: np.cast['int32'](0)})
````

To my understanding, `givens` would input the value of `train_mode` (i.e. `1`/`0`) wherever it is needed to compute the output. The `output` is computed along the lines of this:

````
network2 = Net2()

# This is sort of a dummy variable so I do not get a NameError when this
# is called before `theano.function()` is called. Not sure if this is the
# right way to do this.
train_mode = T.iscalar('train_mode')

output = loss(network1.get_outputs(network2.get_outputs(X, train_mode=train_mode)), something).mean()

class Net2():
    def get_outputs(self, x, train_mode):
        from theano.ifelse import ifelse
        import theano.tensor as T
        my_flag = ifelse(T.eq(train_mode, 1), 1, 0)
        return something if my_flag else something_else
````

So `train_mode` is used as an argument in one of the nested functions, and I use it to tell between `train` and `test`, as I would like to handle them slightly differently.

However, when I try to run this, I get this error:

````
theano.compile.function_module.UnusedInputError: theano.function was asked to create a
function computing outputs given certain inputs, but the provided input variable at
index 1 is not part of the computational graph needed to compute the outputs:
<TensorType(int32, scalar)&gt;.
To make this error into a warning, you can pass the parameter
on_unused_input='warn' to theano.function. To disable it completely, use
on_unused_input='ignore'.
````

If I delete the `givens` parameter, the error disappears, so to my understanding Theano believes that my `train_mode` is not necessary to compute the `function()`. I can use `on_unused_input='ignore'` as per their suggestion, but that would just ignore my `train_mode` if they think it is unused. Am I going about this the wrong way? I basically just want to train a neural network with dropout, but not use dropout when evaluating.
Why do you use the "=" sign? I think it makes `train_mode` unreadable my code works well by writing: `givens = {train_mode: 1}`
Concatenating multiple data frames of different length I have 88 different dataFrames of different lengths which I need to concatenate They are all located in one directory and I used the following python script to produce a single data frame Here is what I tried

````import os
import pandas as pd

path = 'GTFS/'
files = os.listdir(path)
files_txt = [os.path.join(path, i) for i in files if i.endswith('.tsv')]

## Change it into dataframe
dfs = [pd.DataFrame.from_csv(x, sep='\t')[[6]] for x in files_txt]

## Concatenate it
merged = pd.concat(dfs, axis=1)
````

Since each of those data frames is of a different length or shape it is throwing me the following error message

````ValueError: Shape of passed values is (88, 57914), indices imply (88, 57905)
````

My aim is to concatenate column-wise into a single data frame with 88 columns as my input is 88 separate data frames from which I need to use the 7th column as in my script Any solutions or suggestions would be great in this case for concatenating data frames Thank you
The key is to make a `list` of different data-frames and then concatenate the list instead of individual concatenation I created 10 `df` filled with random length data of one column and saved to `csv` files to simulate your data ````import pandas as pd import numpy as np from random import randint #generate 10 df and save to seperate csv files for i in range(1 11): dfi = pd DataFrame({'a':np arange(randint(2 11))}) csv_file = "file{0} csv" format(i) dfi to_csv(csv_file sep='\t') print "saving file" csv_file ```` Then we read those 10 `csv` files into separate data-frames and save to a `list` ````#read previously saved csv files into 10 seperate df # and add to list frames = [] for x in range(1 10): csv_file = "file{0} csv" format(x) newdf = pd DataFrame from_csv(csv_file sep='\t') frames append(newdf) ```` Finally we concatenate the `list` ````#concatenate frames list result = pd concat(frames axis=1) print result ```` The result is 10 frames of variable length concatenated column wise into single `df` ````saving file file1 csv saving file file2 csv saving file file3 csv saving file file4 csv saving file file5 csv saving file file6 csv saving file file7 csv saving file file8 csv saving file file9 csv saving file file10 csv a a a a a a a a a 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 1 0 2 2 0 2 0 2 0 2 0 2 0 2 0 2 0 2 2 0 3 3 0 3 0 3 0 3 0 3 0 NaN 3 0 3 NaN 4 4 0 4 0 4 0 4 0 4 0 NaN NaN 4 NaN 5 5 0 5 0 5 0 5 0 5 0 NaN NaN 5 NaN 6 6 0 6 0 6 0 6 0 6 0 NaN NaN 6 NaN 7 NaN 7 0 7 0 7 0 7 0 NaN NaN 7 NaN 8 NaN 8 0 NaN NaN 8 0 NaN NaN 8 NaN 9 NaN NaN NaN NaN 9 0 NaN NaN 9 NaN 10 NaN NaN NaN NaN NaN NaN NaN 10 NaN ```` Hope this is what you are looking for A good example on merge join and concatenate can be found <a href="http://pandas pydata org/pandas-docs/stable/merging html" rel="nofollow">here</a>
insert element in the start of the numpy array I have array ````x=[ 0 30153836 0 30376881 0 29115761 0 29074261 0 28676876 ] ```` I want to insert -1 to the start of the array to be like ```` x= [-1 0 30153836 0 30376881 0 29115761 0 29074261 0 28676876] ```` I tried : ````np insert(x 0 -1 axis=0) ```` but it did not do any change any idea how to do that ?
`np insert` does not modify the array in place it returns a new array so you have to assign the result back (for a flat array the `axis` param can also simply be omitted):

````x = np.array([0, 0, 0, 0])
x = np.insert(x, 0, -1)
x
````

That will give:

````array([-1, 0, 0, 0, 0])
````
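A short runnable check of both points: `np insert` always returns a copy, so the result must be assigned (the array values here are taken from the question):

```python
import numpy as np

x = np.array([0.30153836, 0.30376881, 0.29115761])

np.insert(x, 0, -1)      # returns a new array; x itself is unchanged
assert x[0] != -1

x = np.insert(x, 0, -1)  # assign the result to keep it
print(x[0])              # -1.0
```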
Reformat JSON file? I have two JSON files File A: ```` "features": [ { "attributes": { "NAME": "R T CO" "LTYPE": 64 "QUAD15M": "279933" "OBJECTID": 225 "SHAPE LEN": 828 21510830520401 } "geometry": { "paths": [ [ [ -99 818614674337155 27 782542677671653 ] [ -99 816056346719051 27 782590806976135 ] ] ] } } ```` File B: ```` "features": [ { "geometry": { "type": "MultiLineString" "coordinates": [ [ [ -99 773315512624 27 808875128096 ] [ -99 771397939251 27 809512259374 ] ] ] } "type": "Feature" "properties": { "LTYPE": 64 "SHAPE LEN": 662 3800009247 "NAME": "1586" "OBJECTID": 204 "QUAD15M": "279933" } } ```` I would like File B to be reformatted to look like File A Change "properties" to "attributes" "coordinates" to "paths" and remove both "type": "MultiLineString" and "type": "Feature" What is the best way to do this via python? Is there a way to also reorder the "attributes" key value pairs to look like File A? It is a rather large dataset and I would like to iterate through the entire file
What about: ````cat <file&gt; | python -m json tool ```` This will reformat the contents of the file into a uniform human readable format If you really need to change the names of the fields you could use sed ````cat <file&gt; | sed -e 's/"properties"/"attributes"/' ```` This may be sufficient for your use case If you have something that requires more nuanced parsing you will have to read up on how to manage the JSON through an ORM library
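Since the question asks for python, here is a minimal stdlib sketch of the renaming. The field names come from the question; the sample feature is a trimmed-down stand-in for File B:

```python
import json

def convert_feature(feature):
    """Rewrite one File-B feature into the File-A layout"""
    geometry = dict(feature['geometry'])
    geometry['paths'] = geometry.pop('coordinates')  # "coordinates" -> "paths"
    geometry.pop('type', None)                       # drop "type": "MultiLineString"
    return {
        'attributes': feature['properties'],         # "properties" -> "attributes"
        'geometry': geometry,
        # "type": "Feature" is simply not copied over
    }

feature_b = {
    'geometry': {'type': 'MultiLineString', 'coordinates': [[[-99.77, 27.80]]]},
    'type': 'Feature',
    'properties': {'NAME': '1586', 'LTYPE': 64},
}
print(json.dumps(convert_feature(feature_b), indent=1))
```

For a whole file you would `json.load` it, map `convert_feature` over `data['features']`, and `json.dump` the result.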
Search for a combination in dataframe to change cell value I want to replace values in a column if a combination of values in two columns is valid Let us say I have the following `DataFrame` ````df = pd DataFrame([ ['Texas 1' '111' '222' '333'] ['Texas 1' '444' '555' '666'] ['Texas 2' '777' '888' '999'] ]) 0 1 2 3 0 Texas 1 111 222 333 1 Texas 1 444 555 666 2 Texas 2 777 888 999 ```` And if I want to replace the value in `column 2` if `column 0 = Texas 1` and the value of `column 2 = 222` I am doing the following: ````df ix[ (df Column 0=='Texas 1')&amp;(df Column 2 =='222') Column 2] = "Success" ```` That works fine for a few combinations The part where I am lost is how to do this for over 300 combinations? I thought maybe I could use a `dict` and store the key which would be `'Success'` or whatever other value And the list could be the combination Kind of like this ````a["Success"] = [Texas 1 222] &gt;&gt;&gt; a {"Success": [Texas 1 222]} ```` But I am not sure how to do that in a `DataFrame`
You have all almost all your code just create `dictionary` or `list` and iterate over it and you are done ````import pandas as pd combinations = [['key1' 'key2' 'message']] combinations append(['Texas 1' '222' 'triple two']) combinations append(['Texas 1' '555' 'triple five']) df = pd DataFrame([ ['Texas 1' '111' '222' '333'] ['Texas 1' '444' '555' '666'] ['Texas 2' '777' '888' '999'] ]) for c in combinations: df ix[(df[0] == c[0]) &amp; (df[2] == c[1]) 1] = c[2] ```` Output: ```` 0 1 2 3 0 Texas 1 triple two 222 333 1 Texas 1 triple five 555 666 2 Texas 2 777 888 999 ````
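If the 300+ combinations live in a `dict` keyed by the replacement value, as the question suggests, the same loop works by iterating over `items()`. A sketch with made-up combinations (using `.loc` instead of the older `.ix`, and writing into the matched column 2 as in the question):

```python
import pandas as pd

df = pd.DataFrame([
    ['Texas 1', '111', '222', '333'],
    ['Texas 1', '444', '555', '666'],
    ['Texas 2', '777', '888', '999'],
])

# {replacement: [value in column 0, value in column 2]}
combos = {'Success': ['Texas 1', '222'],
          'Other':   ['Texas 2', '888']}

for label, (c0, c2) in combos.items():
    df.loc[(df[0] == c0) & (df[2] == c2), 2] = label

print(df)
```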
Is it possible to pass a single user input (an int) into an argument to match 2 ints in python? Basically I am trying to write a tic tac toe game in python and I am new to the language At the moment I am trying to get the user to input an int which I will then pass into an argument which requires two ints of the same number (as the grid will be a square) to make a grid for the game If you look in the code below I have hard coded in the arguments (grid_maker(6 6)) but is there a way I can assign the user input into h and w so the grid will be the size the user requests? (The user can input any number they wish e g 20 and make a 20 by 20 grid but they still only need 3 in a row the code is more for the practice rather than an efficient game) On a side note would this way be recommended as I will need to check if someone has gotten 3 Xs or Os in a row ````class GameBoard: def printBoard(): print('Welcome to my tic tac toe game') ("Commented gridInput out as it results in an error ") #gridInput = int(input('Please enter a number between 5-10 to set the grid dimensions for the game board\n')) def grid_maker(h w): grid = [[" | " for _ in range(w)] for _ in range(h)] return grid print ('\n' join(' ' join(row) for row in grid_maker(6 6))) def print_grid(grid): for row in grid: for e in row: print (e) ````
I am not sure what exactly you want to do but I guess there are multiple solutions to it I will just give one possible solution: ````def grid_maker(h w=None): if w is None: w=h grid = [[" | " for _ in range(w)] for _ in range(h)] return grid ````
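A quick sketch of how a single user-entered number can feed both dimensions and how the grid prints, reusing the answer's `grid_maker` (the hard-coded `"6"` stands in for the `input()` call):

```python
def grid_maker(h, w=None):
    if w is None:
        w = h
    return [[" | " for _ in range(w)] for _ in range(h)]

size = int("6")          # stands in for int(input('Please enter a number...'))
grid = grid_maker(size)  # one number fills both height and width
print('\n'.join(' '.join(row) for row in grid))
```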
Python - dividing a user input to display a range of answers I am having problems with a Python question The question is to write a function that shows all integers that are cleanly divisble by 13 in the range of (1:x) where x is a user input I am new to Python and am struggling with this question I need to have a user input which Python then divides by 13 and displays the answer(s) So if a user inputs '27' the answers would be '13' and '26' My code so far is: ```` x = int(raw_input('Enter your Number Here: ')) def divide(x): cond = True while cond: x % 13 == 0 print x else: cond = False print 'Your number us not divisble by 13' divide(x) ````
`x % 13 == 0` by itself does nothing; it evaluates to True or False but you then ignore that result If you want to do something with it you need to use it in an if condition Note also that indentation is important - the else needs to be lined up with the if There is no need for `while` at all because nothing can change within the loop

````if x % 13 == 0:
    print x
else:
    print 'Your number is not divisible by 13'
````
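For the range part of the question (all multiples of 13 in 1..x), no divisibility test is even needed because `range` can step by 13 directly. A minimal sketch (Python 3 `print` rather than the question's Python 2 style):

```python
def multiples_of_13(x):
    """All integers in 1..x that divide cleanly by 13"""
    return list(range(13, x + 1, 13))

print(multiples_of_13(27))  # [13, 26]
```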
iteration counter for GCD I have the following code for calculating the GCD of two numbers:

````def gcd(m, n):
    r = m % n
    while r != 0:
        m = n
        n = r
        r = m % n
    return n

print("\n", "gcd(10, 35) = ", gcd(10, 35))
print("\n", "gcd(735, 175) = ", gcd(735, 175))
print("\n", "gcd(735, 350) = ", gcd(735, 350))
````

I would like to count the number of iterations that the algorithm has to go through before finding the GCD I am having trouble making a for loop to determine the number of iterations
````def gcd(m, n):
    r = m % n
    counter = 0
    while r != 0:
        m = n
        n = r
        r = m % n
        counter += 1
    return n, counter
````
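Checking the counter version against the question's own test calls (the expected pairs below were worked through by hand):

```python
def gcd(m, n):
    r = m % n
    counter = 0
    while r != 0:
        m = n
        n = r
        r = m % n
        counter += 1
    return n, counter

print(gcd(10, 35))    # (5, 2)
print(gcd(735, 175))  # (35, 1)
print(gcd(735, 350))  # (35, 1)
```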
AttributeError: 'module' object has no attribute 'io' in caffe I am trying to do a gender recognition program below is the code ````import caffe import os import numpy as np import sys import cv2 import time #Models root folder models_path = " /models" #Loading the mean image mean_filename=os path join(models_path ' /mean binaryproto') proto_data = open(mean_filename "rb") read() a = caffe io caffe_pb2 BlobProto FromString(proto_data) mean_image = caffe io blobproto_to_array(a)[0] #Loading the gender network gender_net_pretrained=os path join(models_path ' /gender_net caffemodel') gender_net_model_file=os path join(models_path ' /deploy_gender prototxt') gender_net = caffe Classifier(gender_net_model_file gender_net_pretrained) #Reshaping mean input image mean_image = np transpose(mean_image (2 1 0)) #Gender labels gender_list=['Male' 'Female'] #cv2 Haar Face detector face_cascade=cv2 CascadeClassifier(os path join (models_path 'haarcascade_frontalface_default xml')) #Getting prediction from live camera cap = cv2 VideoCapture(0) while True: ret frame = cap read() if ret is True: start_time = time time() frame_gray = cv2 cvtColor(frame cv2 COLOR_BGR2GRAY) rects = face_cascade detectMultiScale(frame_gray 1 3 5) #Finding the largest face if len(rects) &gt;= 1: rect_area = [rects[i][2]*rects[i][3] for i in xrange(len(rects))] rect = rects[np argmax(rect_area)] x y w h = rect cv2 rectangle(frame (x y) (x+w y+h) (255 0 0) 2) roi_color = frame[y:y+h x:x+w] #Resizing the face image crop = cv2 resize(roi_color (256 256)) #Subtraction from mean file #input_image = crop -mean_image input_image = rect #Getting the prediction start_prediction = time time() prediction = gender_net predict([input_image]) gender = gender_list[prediction[0] argmax()] print("Time taken by DeepNet model: {}") format(time time()-start_prediction) print prediction gender cv2 putText(frame gender (x y) cv2 FONT_HERSHEY_SIMPLEX 1 (0 255 0) 2) print("Total Time taken to process: {}") format(time 
time()-start_time) #Showing output cv2 imshow("Gender Detection" frame) cv2 waitKey(1) #Delete objects cap release() cv2 destroyAllWindows() ```` When I am running it I am getting an error: ````a = caffe io caffe_pb2 BlobProto FromString(proto_data) AttributeError: 'module' object has no attribute 'io' ```` How can I solve it? I am using the cnn_gender_age_prediction model I want to make a real time gender recognition program using python and the cnn_gender_age model
`io` is a module in the `caffe` package Basically when you type `import caffe` it will not automatically import all modules in the `caffe` package including `io` There are two solutions First one: import `caffe io` manually

````import caffe
import caffe.io
````

Second one: update to the latest caffe version in which you should find this line in `__init__ py` under the `python/caffe` directory:

````from . import io
````
Python Arguments and Passing Floats in Arguments I have run into a couple of issues using arguments within a python script Can i please get some help or direction to get this code functional? Thank you in advance First issue: I am unable to specify multiple arguments at once For example I am able to pass a single argument fine: ````$ /my_arg_scenario py -a Argument_A $ /my_arg_scenario py -c Argument_C $ /my_arg_scenario py -d Argument_D ```` However I am looking for a way to pass multiple arguments in any position Is there a way I can accomplish this? For example I would like the below to occur: ```` /my_arg_scenario py -a -c -d Argument_A Argument_C Argument_D # OR /my_arg_scenario py -c -a Argument_C Argument_A ```` Second Issue: I am trying to pass both whole numbers and floats in the -b argument But when I pass a float/decimal I get the below error Is there a way I can pass both a float and whole number? This works: ````$ /my_arg_scenario py -b 5 The number provided is: 5 ```` But this does NOT: ````$ /my_arg_scenario py -b 5 50 Traceback (most recent call last): File " /my_arg_scenario py" line 18 in <module&gt; if int(sys argv[2]) not in range(0 11): ValueError: invalid literal for int() with base 10: '5 50' ```` Below is my testable code: ````#!/usr/local/bin/python3 5 import sys script_options = ['-a' '-b' '-c' '-d'] manual_flag = '' build_flag = '' if len(sys argv) &gt; 1: if sys argv[1] in script_options: pass else: print('\n\t\tParameter "' sys argv[1] '" is an invalid argument \n') sys exit() if sys argv[1] == '-a': print('Argument_A') sys exit() elif sys argv[1] == '-b': if int(sys argv[2]) not in range(0 11): print('Invalid interval Please select a value bewteen 1-5s ') sys exit() else: print('The number provided is: ' (sys argv[2])) elif sys argv[1] == '-c': manual_flag = 'Argument_C' print(manual_flag) elif sys argv[1] == '-d': build_flag ='Argument_D' print(build_flag) else: pass ````
<s>You did not actually provide the code you are using (aside from incidentally in the traceback) </s>(<strong>Update:</strong> Code added later) but the answer is: Stop messing around with parsing `sys argv` manually and use <a href="https://docs python org/3/library/argparse html" rel="nofollow">the `argparse` module</a> (or `docopt` or something that does not involve rolling your own switch parsing) ````import argparse parser = argparse ArgumentParser() parser add_argument('-a' action='store_true') parser add_argument('-b' metavar='INTERVAL' type=int choices=range(11)) parser add_argument('-c' action='store_true') parser add_argument('-d' action='store_true') args = parser parse_args() if args a: print('Argument_A') if args b is not None: print('The number provided is:' args b) if args c: print('Argument_C') if args d: print('Argument_D') ```` If you want to accept `int` or `float` the easiest solution is to just make `type=float` and use a consistent type (but the `range` check must be done outside the parsing step) If you must allow both `ast literal_eval` or a homegrown `argparse` type conversion function are options Since you want a range check too (which `range` will not handle properly for `float` values that are not equal to `int` values) roll a type checker: ````def int_or_float(minval=None maxval=None): def checker(val): try: val = int(val) except ValueError: val = float(val) if minval is not None and val < minval: raise argparse ArgumentTypeError('%r must be &gt;= %r' % (val minval)) if maxval is not None and val &gt; maxval: raise argparse ArgumentTypeError('%r must be <= %r' % (val maxval)) return val return checker ```` Then use it by replacing the definition for `-b` with: ````# Might want int_or_float(0 10) depending on range exclusivity rules parser add_argument('-b' metavar='INTERVAL' type=int_or_float(0 11)) ````
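A quick sanity check that both of the OP's issues go away once argparse does the parsing: flags can be combined in any order and `'5.50'` parses cleanly. This is a stripped-down parser with a simplified `int_or_float` (no range check), just to demonstrate the behaviour:

```python
import argparse

def int_or_float(val):
    """Accept '5' as int and '5.50' as float"""
    try:
        return int(val)
    except ValueError:
        return float(val)

parser = argparse.ArgumentParser()
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', type=int_or_float)
parser.add_argument('-c', action='store_true')

# order of the flags no longer matters
args = parser.parse_args(['-c', '-b', '5.50', '-a'])
print(args.a, args.b, args.c)  # True 5.5 True
```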
How do I schedule a job in Django? I have to schedule a job using <a href="https://pypi python org/pypi/schedule" rel="nofollow">Schedule</a> on my <a href="https://www djangoproject com/" rel="nofollow">django</a> web application ````def new_job(request): print("I am working ") file=schedulesdb objects filter (user=request user f_name__icontains ="mp4") last() file_initiated = str(f_name) os startfile(f_name_initiated) ```` I need to do it with filtered time in db ````GIVEN DATETIME = schedulesdb objects datetimes('request_time' 'second') last() schedule GIVEN DATETIME do(job) ````
Django is a web framework It receives a request does whatever processing is necessary and sends out a response It does not have any persistent process that could keep track of time and run scheduled tasks so there is no good way to do it using just Django That said Celery (<a href="http://www celeryproject org/" rel="nofollow">http://www celeryproject org/</a>) is a python framework specifically built to run tasks both scheduled and on-demand It also integrates with Django ORM with minimal configuration I suggest you look into it You could of course write your own external script that would use schedule module that you mentioned You would need to implement a way to write shedule objects into the database and then you could have your script read and execute them Is your "scheduledb" model already implemented?
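To make the Celery suggestion concrete, here is a minimal sketch of a periodic task under Celery 4.x. This is a config sketch, not a drop-in solution: the broker URL, the module name `tasks` and the daily schedule are all assumptions, and the question's `request`-based DB filtering would need to move into the task body since a scheduled task has no request:

```python
from celery import Celery
from celery.schedules import crontab

# broker URL is an assumption; use whatever broker you actually run
app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def new_job():
    # the schedulesdb lookup from the question would go here
    print("I am working")

# celery beat will fire the task every day at 07:30
app.conf.beat_schedule = {
    'run-new-job': {
        'task': 'tasks.new_job',
        'schedule': crontab(hour=7, minute=30),
    },
}
```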
Keeping 'key' column when using groupby with transform in pandas Finding a normalized dataframe removes the column being used to group by so that it cannot be used in subsequent groupby operations for example (edit: updated): ```` df = pd DataFrame({'a':[1 1 2 3 2 3] 'b':[0 1 2 3 4 5]}) a b 0 1 0 1 1 1 2 2 2 3 3 3 4 2 4 5 3 5 df groupby('a') transform(lambda x: x) b 0 0 1 1 2 2 3 3 4 4 5 5 ```` Now with most operations on groups the 'missing' column becomes a new index (which can then be adjusted using `reset_index` or set `as_index=False`) but when using transform it just disappears leaving the original index and a new dataset without the key Edit: here is a one liner of what I would like to be able to do ```` df groupby('a') transform(lambda x: x+1) groupby('a') mean() KeyError 'a' ```` In the example from the <a href="http://pandas pydata org/pandas-docs/stable/groupby html" rel="nofollow">pandas docs</a> a function is used to split based on the index which appears to avoid this issue entirely Alternatively it would always be possible just to add the column after the groupby/transform but surely there is a better way? Update: It looks like reset_index/as_index are intended only for functions that reduce each group to a single row There seem to be a couple options from answers
that is bizarre! I tricked it like this

````df.groupby(df.a.values).transform(lambda x: x)
````

<a href="https://i.stack.imgur.com/XcYxq.png" rel="nofollow"><img src="https://i.stack.imgur.com/XcYxq.png" alt="enter image description here"></a>
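Another option that avoids the trick entirely: transform only the value column and reattach it, so the key column survives because it was never fed to `transform`. A sketch with the question's data:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2, 3, 2, 3], 'b': [0, 1, 2, 3, 4, 5]})

# transform just 'b', then reattach it alongside the untouched 'a'
out = df.assign(b=df.groupby('a')['b'].transform(lambda x: x + 1))

# 'a' is still present, so the one-liner from the question now works
print(out.groupby('a').mean())
```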
python/pandas/sklearn: getting closest matches from pairwise_distances I have a dataframe and am trying to get the closest matches using mahalanobis distance across three categories like:

````from io import StringIO
from sklearn import metrics
import pandas as pd

stringdata = StringIO(u"""pid,ratio1,pct1,rsp
0,2.9,26.7,95.073615
1,11.6,29.6,96.963660
2,0.7,37.9,97.750412
3,2.7,27.9,102.750412
4,1.2,19.9,93.750412
5,0.2,22.1,96.750412
""")

stats = ['ratio1', 'pct1', 'rsp']

df = pd.read_csv(stringdata)
d = metrics.pairwise.pairwise_distances(df[stats].as_matrix(), metric='mahalanobis')

print(df)
print(d)
````

Where that `pid` column is a unique identifier What I need to do is take that `ndarray` returned by the `pairwise_distances` call and update the original dataframe so each row has some kind of list of its closest N matches (so `pid` 0 might have an ordered list by distance of like 2 1 5 3 4 or whatever it actually is) but I am totally stumped how this is done in python
````from io import StringIO

import numpy as np
import pandas as pd
from sklearn import metrics

stringdata = StringIO(u"""pid,ratio1,pct1,rsp
0,2.9,26.7,95.073615
1,11.6,29.6,96.963660
2,0.7,37.9,97.750412
3,2.7,27.9,102.750412
4,1.2,19.9,93.750412
5,0.2,22.1,96.750412
""")

stats = ['ratio1', 'pct1', 'rsp']
df = pd.read_csv(stringdata)

dist = metrics.pairwise.pairwise_distances(df[stats].as_matrix(), metric='mahalanobis')
dist = pd.DataFrame(dist)
ranks = np.argsort(dist, axis=1)
df["rankcol"] = ranks.apply(lambda row: ' '.join(map(str, row)), axis=1)
df
````
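One caveat worth noting: with any metric the distance from a row to itself is 0, so column 0 of the `argsort` result is always the row's own index. Slice it off to keep only the true matches. A sketch with a small made-up distance matrix (the real one would come from `pairwise_distances`):

```python
import numpy as np

# hypothetical 4x4 pairwise distance matrix (zeros on the diagonal)
d = np.array([
    [0.0, 2.0, 9.0, 4.0],
    [2.0, 0.0, 5.0, 7.0],
    [9.0, 5.0, 0.0, 1.0],
    [4.0, 7.0, 1.0, 0.0],
])

ranks = np.argsort(d, axis=1)
# column 0 is always the row's own index (distance 0), so drop it
closest = ranks[:, 1:]
print(closest[0])  # row 0's neighbours ordered by distance: [1 3 2]
```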
Obey the Testing Goat - Traceback So I am going through this book called "Obey the Testing Goat" and I am running into an issue in the sixth chapter while learning Python It says that I should be able to run the functional_tests we have set up throughout the chapter and previous one with no errors; however I keep getting a Traceback that I do not know how to fix ````Traceback (most recent call last): File "C:\Users\YaYa\superlists\functional_tests\tests py" line 54 in test_can_start_a_list_and_retrieve_it_later self check_for_row_in_list_table('1: Buy peacock feathers') File "C:\Users\YaYa\superlists\functional_tests\tests py" line 15 in check_for_row_in_list_table table = self browser find_element_by_id('id_list_table') File "C:\Users\YaYa\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver py" line 269 in find_element_by_id return self find_element(by=By ID value=id_) File "C:\Users\YaYa\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver py" line 752 in find_element 'value': value})['value'] File "C:\Users\YaYa\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\webdriver py" line 236 in execute self error_handler check_response(response) File "C:\Users\YaYa\AppData\Local\Programs\Python\Python35-32\lib\site-packages\selenium\webdriver\remote\errorhandler py" line 192 in check_response raise exception_class(message screen stacktrace) selenium common exceptions NoSuchElementException: Message: Unable to locate element: {"method":"id" "selector":"id_list_table"} Stacktrace: at FirefoxDriver prototype findElementInternal_ (file:///C:/Users/YaYa/AppData/Local/Temp/tmp869pyxau/extensions/fxdriver@googlecode com/components/driver-component js:10770) at fxdriver Timer prototype setTimeout/< notify (file:///C:/Users/YaYa/AppData/Local/Temp/tmp869pyxau/extensions/fxdriver@googlecode com/components/driver-component js:625) ```` <a href="https://gist 
github com/yuyu23/8e5cce6c55e9ec3a771396048058a489" rel="nofollow">I have created a GIST in case anyone's interested in looking at the files that I have worked on throughout the chapters</a> You can also access the chapter for this book right <a href="http://www obeythetestinggoat com/book/chapter_06 html#_one_more_view_to_handle_adding_items_to_an_existing_list" rel="nofollow">here</a> I really do not know what the problem is (I am not good at Python AT ALL and tried running pdb but I do not even know what half of it means) and no one that I know and that I have asked has any information on what I can do to fix it Thanks in advance! EDIT: Here is the test_can_start_a_list_and_retrieve_it_later - just a note in case it is needed but the def test_can line number is 19 ````def test_can_start_a_list_and_retrieve_it_later(self): # Edith has heard about a cool new online to-do app She goes # to check out its homepage self browser get(self live_server_url) # She notices the page title and header mention to-do lists self assertIn('To-Do' self browser title) header_text = self browser find_element_by_tag_name('h1') text self assertIn('To-Do' header_text) # She is invited to enter a to-do item straight away inputbox = self browser find_element_by_id('id_new_item') self assertEqual( inputbox get_attribute('placeholder') 'Enter a to-do item' ) # She types "Buy peacock feathers" into a text box (Edith's hobby # is tying fly-fishing lures) inputbox send_keys('Buy peacock feathers') # When she hits enter the page updates and now the page lists # "1: Buy peacock feathers" as an item in a to-do list inputbox send_keys(Keys ENTER) edith_list_url = self browser current_url self assertRegex(edith_list_url '/lists/ +') self check_for_row_in_list_table('1: Buy peacock feathers') # There is still a text box inviting her to add another item She # enters "Use peacock feathers to make a fly" (Edith is very methodical) inputbox = self browser find_element_by_id('id_new_item') inputbox 
send_keys('Use peacock feathers to make a fly') inputbox send_keys(Keys ENTER) # The page updates again and now shows both items on her list self check_for_row_in_list_table('1: Buy peacock feathers') self check_for_row_in_list_table('2: Use peacock feathers to make a fly') # Now a new user Francis comes along to the site ##We use a new browser session to make sure that no information ##of Edith's is coming through from cookies etc self browser quit() self browser = webdriver Firefox() #Francis visits the home page There is no sign of Edith's #list self browser get(self live_server_url) page_text = self browser find_element_by_tag_name('body') text self assertNotIn('Buy peacock feathers' page_text) self assertNotIn('make a fly' page_text) #Francis starts a new list by entering a new item He #is less interesting than Edith inputbox = self browser find_element_by_id('id_new_item') inputbox send_keys('Buy milk') inputbox send_keys(Keys ENTER) #Francis gets his own unique URL francis_list_url = self browser current_url self assertRegex(francis_list_url '/lists/ +') self assertNotEqual(francis_list_url edith_list_url) #Again there is no trace of Edith's list page_text = self browser find_element_by_tag_name('body') text self assertNotIn('Buy peacock feathers' page_text) self assertIn('Buy milk' page_text) self fail('Finish the test!') # Satisfied they both go back to sleep ```` EDIT 2: Here is the check_for_row_in_list_table Note that this starts on line 14 of the document ````def check_for_row_in_list_table(self row_text): table = self browser find_element_by_id('id_list_table') rows = table find_elements_by_tag_name('tr') self assertIn(row_text [row text for row in rows]) ````
Found the error in my work I was apparently missing an s in list html ````<form method="POST" action="/lists/{{ list id }}/add_item"&gt; ````
Django templates: why does __call__ magic method breaks the rendering of a non-model object? Today I faced a strange issue on one of my development I reproduced it with a very minimal example Have a look at these 2 dummy classes (non Django model subclasses): ````class DummyClassA(object): def __init__(self name): self name = name def __repr__(self): return 'Dummy1 object called ' self name class DummyClassB(object): """Same as ClassA with the __call__ method added""" def __init__(self name): self name = name def __repr__(self): return 'Dummy2 object called ' self name def __call__(self *args **kwargs): return "bar" ```` They are identical but the second have a special `__call__()` method I want to display instances of these 2 objects in a view using the builtin Django template engine: ````class MyView(TemplateView): template_name = 'myapp/home html' def get_context_data(self **kwargs): ctx = super(MyView self) get_context_data(**kwargs) list1 = [ DummyClassA(name="John") DummyClassA(name="Jack") ] list2 = [ DummyClassB(name="Albert") DummyClassB(name="Elmer") ] ctx update({ 'list1': list1 'list2': list2 }) return ctx ```` and the corresponding template: ```` <h1&gt;Objects repr</h1&gt; <ul&gt; {% for element in list1 %} <li&gt;{{ element }}</li&gt; {% endfor %} </ul&gt; <ul&gt; {% for element in list2 %} <li&gt;{{ element }}</li&gt; {% endfor %} </ul&gt; <h1&gt;Access members</h1&gt; <ul&gt; {% for element in list1 %} <li&gt;{{ element name }}</li&gt; {% endfor %} </ul&gt; <ul&gt; {% for element in list2 %} <li&gt;{{ element name }}</li&gt; {% endfor %} </ul&gt; ```` I obtain this result: <a href="https://i stack imgur com/kL9lZ png" rel="nofollow"><img src="https://i stack imgur com/kL9lZ png" alt="html result"></a> When displaying instances of the second class (`{{ element }}`) the `__call__` method is executed instead of `__repr__()` and when I want to access a member of the class it returns nothing I do not understand why defining the `__call__()` change the 
way Django template engine will handle those instances I imagine this is not a bug but mostly a feature but I am curious why `__call__()` is run in such case And why I cannot get the value of `element name` in the 2nd list ?
Because that is what the template language is designed to do As <a href="https://docs djangoproject com/en/1 10/ref/templates/language/#variables" rel="nofollow">the docs state</a>: <blockquote> If the resulting value [of looking up a variable] is callable it is called with no arguments The result of the call becomes the template value </blockquote> Without this there would be no way of calling methods in templates since the template syntax does not allow using parentheses
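If you need the template engine to leave such objects alone, Django checks for a `do_not_call_in_templates` attribute on the callable; setting it truthy makes `{{ element }}` render the object itself (via `str`) instead of calling it. A minimal sketch as a plain class mirroring the question's `DummyClassB` (runnable without Django, the attribute only takes effect inside the template engine):

```python
class DummyClassB(object):
    # tells Django's template variable resolution not to call instances
    do_not_call_in_templates = True

    def __init__(self, name):
        self.name = name

    def __str__(self):
        return 'Dummy2 object called ' + self.name

    def __call__(self, *args, **kwargs):
        return "bar"

print(str(DummyClassB('Albert')))  # Dummy2 object called Albert
```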
Import error installing tlslite I am trying to install tlslite After installing the module I tried to test it and I received this error:

````from checker import Checker
ImportError: No module named checker
````

I have checked my pip module list and checker is installed Any idea? Thanks!
Assuming you installed tlslite correctly try this: ````&gt;&gt;&gt; from tlslite checker import Checker ```` If that does not work check that tlslite is in your site packages
Percentage data from csv to excel I read some data in percentage form (11.00%) from a csv file I copy them into an excel file and I want to represent them in a chart The problem is that when I copy them into excel the data is automatically converted to string type and I cannot represent it in the chart correctly I tried a few methods but nothing worked

````f = open("any.csv")
wb = openpyxl.load_workbook("any.xlsx")
ws = wb.get_sheet_by_name('Sheet1')
reader = csv.reader(f, delimiter=',')
for row in ws.iter_rows(row_offset=0):
    for i in reader:
        ws.append(i[2:])
        # code here
````

I tried using:

````for row in ws.iter_rows(row_offset=0):
    for i in reader:
        ws.append(float(i[2:]))
````

and I receive this error: TypeError: float() argument must be a string or a number I have tried using:

````for i in reader:
    i[2:] = [float(x) for x in i[2:]]
    ws.append(i)
````

and I got this error: ValueError: invalid literal for float(): 11.1% I use the i[2:] because the first and second column contain some text that does not need to be converted
`i[2:]` is a list so you cannot `float()` the slice itself strip the `%` from each value and convert element by element before you append:

````for row in ws.iter_rows(row_offset=0):
    for i in reader:
        ws.append([float(x.rstrip('%')) for x in i[2:]])
        # code here
````

Convert the values before you append them
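The conversion itself can be checked in isolation: strip the `%` and, if Excel should treat the cell as a percentage, divide by 100. The sample row below is made up:

```python
def pct_to_float(s):
    """Convert a string like '11.1%' to the fraction 0.111"""
    return float(s.strip().rstrip('%')) / 100.0

row = ['label', 'more text', '11.1%', '5%', '100%']
values = [pct_to_float(x) for x in row[2:]]
print(values)
```

With the cells holding real numbers (and optionally a percent number format on the worksheet), the chart can read them normally.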
Python read dedicated rows from csv file need some help to read dedicated rows in python The txt file content is defined as the following:

````A;Maria;1.5;20.0;FFM;
B;2016;20;1;2017;20;1;
````

I read the file in python as defined below:

````import csv

with open('C:/0001.txt', newline='') as csvfile:
    filereader = csv.reader(csvfile, delimiter=';')
    for row in filereader:
        print('; '.join(row))
````

What I am not sure about is how I can read the first row based on the first character

<blockquote> A </blockquote>

and fill every value into an own function Then the second row identified based on

<blockquote> B </blockquote>

fill every value into an own function etc Thanks
Thanks for the answers I defined a class and want to fill every value into the dedicated function:

````class Class(object):
    def __init__(self, name, years, age, town):
        self.name = name
        self.years = years
        self.age = age
        self.town = town

    def GetName(self):
        return self.name

    def GetYears(self):
        return self.years

    def GetAge(self):
        return self.age

    def GetTown(self):
        return self.town

    def __str__(self):
        return "%s %s %s %s" % (self.name, self.years, self.age, self.town)
````

So my file reader should load the file read a line and fill the dedicated values into the function as shown below I am just not sure how to call the reader for the first row based on A and then fill the function:

````import csv

with open('C:/0001.txt', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=';')
    for row in spamreader:
        FirstRow = row[0]
        if FirstRow == 'A':
            **Fill Maria into GetName
            Fill 1.5 into GetYears
            Fill 20.0 into GetAge
            Fill FFM into GetTown**
````
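As a hedged sketch of that dispatch idea: read each row, check `row[0]`, and construct the object from the remaining fields. `Record` stands in for the asker's `Class`, the sample data mirrors the question's file (I assume the numeric fields are decimals), and `StringIO` replaces the file on disk:

```python
import csv
from io import StringIO

class Record(object):  # stands in for the asker's Class
    def __init__(self, name, years, age, town):
        self.name = name
        self.years = years
        self.age = age
        self.town = town

data = StringIO("A;Maria;1.5;20.0;FFM;\nB;2016;20;1;2017;20;1;\n")

records = []
for row in csv.reader(data, delimiter=';'):
    if row and row[0] == 'A':
        # positions 1-4 hold name, years, age, town on an 'A' line
        records.append(Record(row[1], float(row[2]), float(row[3]), row[4]))
    # an elif row[0] == 'B' branch would build whatever the B lines represent

print(records[0].name)  # Maria
```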
How do I return a nonflat numpy array selecting elements given a set of conditions? I have a multidimensional array say of shape (4 3) that looks like ````a = np array([(1 2 3) (4 5 6) (7 8 9) (10 11 12)]) ```` If I have a list of fixed conditions ````conditions = [True False False True] ```` How can I return the list ````array([(1 2 3) (10 11 12)]) ```` Using `np extract` returns ````&gt;&gt;&gt; np extract(conditions a) array([1 4]) ```` which only returns the first element along each nested array as opposed to the array itself I was not sure if or how I could do this with `np where` Any help is much appreciated thanks!
Let us define your variables:

````
&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; a = np.array([(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)])
&gt;&gt;&gt; conditions = [True, False, False, True]
````

Now let us select the elements that you want:

````
&gt;&gt;&gt; a[np.array(conditions)]
array([[ 1,  2,  3],
       [10, 11, 12]])
````

<h3>Aside</h3>

Note that the simpler `a[conditions]` has some ambiguity:

````
&gt;&gt;&gt; a[conditions]
-c:1: FutureWarning: in the future, boolean array-likes will be handled as a boolean array index
array([[4, 5, 6],
       [1, 2, 3],
       [1, 2, 3],
       [4, 5, 6]])
````

As you can see, `conditions` are treated here as (integer-like) index values, which is not what we wanted.
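If you also need the row indices that matched, `np.flatnonzero` on the converted mask gives them alongside the selected rows — a small sketch:

```python
import numpy as np

a = np.array([(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)])
conditions = [True, False, False, True]

mask = np.array(conditions)
rows = np.flatnonzero(mask)  # indices of the True entries
selected = a[mask]           # the matching rows themselves
print(rows, selected)
```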
AWS Lambda sending HTTP request This is likely a question with an easy answer but i cannot seem to figure it out Background: I have a python Lambda function to pick up changes in a DB then using HTTP post the changes in json to a URL I am using urllib2 sort of like this: ````# this runs inside a loop in reality my error handling is much better request = urllib2 Request(url) request add_header('Content-type' 'application/json') try: response = urllib2 urlopen(request json_message) except: response = "Failed!" ```` It seems from the logs either the call to send the messages is skipped entirely or times-out while waiting for a response Is there a permission setting I am missing the outbound rules in AWS appear to be right [Edit] - The VPC applied to this lambda does have internet access and the security groups applied appear to allow internet access [/Edit] I have tested the code locally (connected to the same data source) and it works flawlessly It appears the other questions related to posting from a lambda is related to node js and usually because the url is wrong In this case I am using a requestb in url that i know is working as it works when running locally
If you have deployed your Lambda function inside your VPC it does not obtain a public IP address even if it is deployed into a subnet with a route to an Internet Gateway It only obtains a private IP address and thus can not communicate to the public Internet by itself To communicate to the public Internet Lambda functions deployed inside your VPC need to be done so in a private subnet which has a <a href="http://docs aws amazon com/AmazonVPC/latest/UserGuide/VPC_Route_Tables html#route-tables-nat" rel="nofollow">route</a> to either a <a href="https://aws amazon com/blogs/aws/new-managed-nat-network-address-translation-gateway-for-aws/" rel="nofollow">NAT Gateway</a> or a self-managed <a href="http://docs aws amazon com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance html" rel="nofollow">NAT instance</a>
Is there a way to refer two attributes of an object in a list? I want to know who is the taller athlete (object) in a list of objects If I want to print it I tried to write this: ````print ("The greater height is" max(x height for x in athletes_list) "meters ") ```` It shows the height of the taller athlete but I do not know how to get his name by this way putting all commands in print's body Is there any way to do this? I know its possible by creating a for like this: ````for i in athletes_list: if i height==max(x height for x in athletes_list): print ("The taller athlete is" i name "with" i height "meters ") ```` Is it possible to get both informations only in print's body? Sorry for bad english
Reread your question; the answer is still yes. Use the `format` method of strings:

````
print("The taller athlete is {0.name} with {0.height} meters.".format(
    max(athletes_list, key=lambda a: a.height)))
````
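A runnable sketch of the whole idea, using a hypothetical `Athlete` class (the class name and constructor here are assumptions, not from the question):

```python
# Hypothetical Athlete class, for illustration only
class Athlete:
    def __init__(self, name, height):
        self.name = name
        self.height = height

athletes_list = [Athlete("Ana", 1.70), Athlete("Bob", 1.92), Athlete("Cai", 1.85)]

# max with a key function scans the list once and returns the object itself
tallest = max(athletes_list, key=lambda a: a.height)
print("The taller athlete is {0.name} with {0.height} meters.".format(tallest))
```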
Attribute error for PersonalInfoForm I am a little confused why is 'clickjacking middleware` <them>trowing</them> Attribute Error on my form I am making a simple application for collecting labor or user information and I am facing a small problem can someone please help me and clarify what is wrong in this code <a href="http://dpaste com/373PYDK" rel="nofollow">Dpaste</a> from my Traceback this is my view ````class PersonalInfoView(FormView): """TODO: CreateView for PersonalInfoForm return: TODO """ template_name = 'apply_to/apply_now html' form_class = PersonalInfoForm success_url = 'success/' def get(self form *args **kwargs): """TODO: define get request return: TODO """ self object = None form_class = self get_form_class() form = self get_form(form_class) return self render_to_response( self get_context_data(form=form)) def post(self form *args **kwargs): """TODO: Post request for PersonalInfoForm return: TODO """ self object = None form_class = self get_form_class() form = self get_form(form_class) if form is_valid(): return self form_valid(form) else: return self form_class(form) def form_valid(self form *args **kwargs): """TODO: Validate form return: TODO """ self object = form save() return HttpResponseRedirect(self get_success_url()) def form_invalid(self form *args **kwargs): """TODO: handle invalid form request return: TODO """ return self render_to_response( self get_context_data(form=form)) ```` Urls ````"""superjobs URL Configuration the `urlpatterns` list routes URLs to views For more information please see: https://docs djangoproject com/en/1 8/topics/http/urls/ examples: function views 1 Add an import: from my_app import views 2 Add a URL to urlpatterns: url(r'^$' views home name='home') class-based views 1 Add an import: from other_app views import Home 2 Add a URL to urlpatterns: url(r'^$' Home as_view() name='home') including another URLconf 1 Add a URL to urlpatterns: url(r'^blog/' include('blog urls')) """ from django conf urls import include 
url from django contrib import admin from django views generic import TemplateView from labor_apply_app views import PersonalInfoView urlpatterns = [ url(r'^admin/' include(admin site urls)) # django-contrib-flatpages # url(r'^apply_to/' include('labor_apply_app urls')) url(r'^$' 'labor_apply_app views index' name='index') url(r'^apply_now/$' PersonalInfoView as_view()) url(r'^success/$' TemplateView as_view()) # Django Allauth url(r'^accounts/' include('allauth urls')) ] ````
Your traceback is showing that you have not used the view above at all, but the form. Presumably you have assigned the wrong thing in urls py

<strong>Edit</strong>

Actually, the problem is that your post method, when the form is not valid, returns the form itself and not an HttpResponse — `return self.form_class(form)` instantiates the form class instead of calling `self.form_invalid(form)`. However, you should <strong>not</strong> be defining any of these methods; you are just replicating what the class-based views are already supposed to be doing for you. Make your view actually inherit from CreateView and remove all those method definitions completely.
Setting Image background for a line plot in matplotlib I am trying to set a background image to a line plot that I have done in matplotlib While importing the image and using zorder argument also I am getting two seperate images in place of a single combined image Please suggest me a way out My code is -- <div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override">`import quandl import pandas as pd import sys os import matplotlib pyplot as plt import seaborn as sns import numpy as np import itertools def flip(items ncol): return itertools chain(*[items[i::ncol] for i in range(ncol)]) df = pd read_pickle('never pickle') rows = list(df index) countries = ['USA' 'CHN' 'JPN' 'DEU' 'GBR' 'FRA' 'IND' 'ITA' 'BRA' 'CAN' 'RUS'] x = range(len(rows)) df = df pct_change() fig ax = plt subplots(1) for country in countries: ax plot(x df[country] label=country) plt xticks(x rows size='small' rotation=75) #legend = ax legend(loc='upper left' shadow=True) plt legend(bbox_to_anchor=(1 05 1) loc=2 borderaxespad=0 ) plt show(1) plt figure(2) i am = plt imread('world png') ax1 = plt imshow(i am zorder=1) ax1 = df iloc[: :] plot(zorder=2) handles labels = ax1 get_legend_handles_labels() plt legend(flip(handles 2) flip(labels 2) loc=9 ncol=12) plt show()```` </div> </div> So in the figure(2) I am facing problem and getting two separate plots
You are creating two separate figures in your code The first one with `fig ax = plt subplots(1)` and the second with `plt figure(2)` If you delete that second figure you should be getting closer to your goal
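A minimal sketch of drawing both on the same `Axes`: the background goes in with `imshow` at a low `zorder`, the lines on top (the random background array and the `extent` values here are stand-ins for the real `world png` image):

```python
import io
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(1)
background = np.random.rand(50, 100, 3)  # stand-in for plt.imread('world.png')
x = np.linspace(0, 10, 50)

# extent stretches the image across the line plot's data coordinates
ax.imshow(background, extent=[0, 10, -1, 1], aspect='auto', zorder=1)
ax.plot(x, np.sin(x), zorder=2, label='USA')
ax.legend(loc='upper right')
fig.savefig(io.BytesIO(), format='png')  # render without writing a file
```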
'DataFrame' object is not callable I am trying to create a heatmap using Python on Pycharms I have this code: ````import numpy as np import pandas as pd import matplotlib matplotlib use('agg') import matplotlib pyplot as plt data1 = pd read_csv(FILE") freqMap = {} for line in data1: for item in line: if not item in freqMap: freqMap[item] = {} for other_item in line: if not other_item in freqMap: freqMap[other_item] = {} freqMap[item][other_item] = freqMap[item] get(other_item 0) 1 freqMap[other_item][item] = freqMap[other_item] get(item 0) 1 df = data1[freqMap] T fillna(0) print(df) ```` My data is stored into a CSV file Each row represents a sequence of products that are associated by a Consumer Transaction The typically Basket Market Analysis: ````99 32 35 45 56 58 7 72 99 45 51 56 58 62 72 17 55 56 58 62 21 99 35 21 99 44 56 58 7 72 72 17 99 35 45 56 7 56 62 72 21 91 99 35 99 35 55 56 58 62 72 99 35 51 55 58 7 21 99 56 58 62 72 21 55 56 58 21 99 35 99 35 62 7 17 21 62 72 21 99 35 58 56 62 72 99 32 35 72 17 99 55 56 58 ```` When I execute the code I am getting the following error: ````Traceback (most recent call last): File "C:/Users/tst/PycharmProjects/untitled1/tes py" line 22 in <module&gt; df = data1[freqMap] T fillna(0) File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame py" line 1997 in __getitem__ return self _getitem_column(key) File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\frame py" line 2004 in _getitem_column return self _get_item_cache(key) File "C:\Users\tst\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\generic py" line 1348 in _get_item_cache res = cache get(item) TypeError: unhashable type: 'dict' ```` How can I solve this problem? Many thanks!
You are reading a csv file, but it has no header, the delimiter is a space, not a comma, and there are a variable number of columns. So that is three mistakes in your first line. And `data1` is a DataFrame; `freqMap` is a dictionary that is completely unrelated, so it makes no sense to do `data1[freqMap]`. I suggest you step through this line by line in jupyter or a python interpreter; then you can see what each line actually does and experiment.
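For reference, a sketch of reading such a headerless, space-separated file with ragged rows (sample rows taken from the question; padding short rows with NaN requires passing explicit column names):

```python
import io
import pandas as pd

raw = """99 32 35 45 56 58 7 72
99 45 51 56 58 62 72
17 55 56 58 62
21 99 35"""

# the widest row determines how many columns to declare
max_cols = max(len(line.split()) for line in raw.splitlines())
df = pd.read_csv(io.StringIO(raw), sep=' ', header=None, names=range(max_cols))
print(df.shape)  # short rows are padded with NaN
```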
Python- request post login credentials for website So I am trying to write this python script and add it to my Windows Task Scheduler to be executed every time I log on my Work Machine The script should open a webpage and post my login info ````import webbrowser import os url = 'www example com' webbrowser open(url) import requests url = 'www example com' values = ["'username': username" "'password': 'somepass'"] are = requests post(url data=values) print r content ```` When I run the script it opens my browser and lands on the page I want it to however Nothing is posted and I get these errors on my IDE; `````Traceback (most recent call last): File "C:\Users\user\Desktop\Scripts\myscript py" line 20 in <module&gt; are = requests post(url data=values) File "C:\Python27\lib\requests\api py" line 110 in post return request('post' url data=data json=json **kwargs) File "C:\Python27\lib\requests\api py" line 56 in request return session request(method=method url=url **kwargs) File "C:\Python27\lib\requests\sessions py" line 462 in request prep = self prepare_request(req) File "C:\Python27\lib\requests\sessions py" line 395 in prepare_request hooks=merge_hooks(request hooks self hooks) File "C:\Python27\lib\requests\models py" line 302 in prepare self prepare_body(data files json) File "C:\Python27\lib\requests\models py" line 462 in prepare_body body = self _encode_params(data) File "C:\Python27\lib\requests\models py" line 95 in _encode_params for k vs in to_key_val_list(data): ValueError: too many values to unpack*` ````
That is what your dict should look like:

````
values = {'username': 'username', 'password': 'somepass'}
````
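You can check the resulting form encoding without touching the network by preparing the request first (the URL is a placeholder):

```python
import requests

values = {'username': 'username', 'password': 'somepass'}
req = requests.Request('POST', 'http://www.example.com/login', data=values)
prepared = req.prepare()
print(prepared.body)  # form-encoded key=value pairs joined by '&'
```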
Is it possible to use FillBetweenItem to fill between two PlotCurveItem's in pyqtgraph? I am attempting to fill between two curves that were created using PlotCurveItem in pyqtgraph ```` phigh = self p2 addItem(pg PlotCurveItem(x y pen = 'k')) plow = self p2 addItem(pg PlotCurveItem(x yy pen = 'k')) pfill = pg FillBetweenItem(phigh plow brush = br) self p2 addItem(pfill) ```` The curve items are plotting properly however there is no fill
This fixed it:

````
phigh = pg.PlotCurveItem(x, y, pen='k')
plow = pg.PlotCurveItem(x, yy, pen='k')
pfill = pg.FillBetweenItem(phigh, plow, brush=br)
self.p2.addItem(phigh)
self.p2.addItem(plow)
self.p2.addItem(pfill)
````

The original code stored the return value of `addItem` (which is `None`) in `phigh`/`plow`, so `FillBetweenItem` never received the curves; create the curve items first, keep references to them, then add all three items to the plot.
PyBrain - out = fnn activateOnDataset(griddata) I have been adapting a neural network to classify images from PyBrain's tutorial: <a href="http://pybrain org/docs/tutorial/fnn html" rel="nofollow">http://pybrain org/docs/tutorial/fnn html</a> It is feed in image data in png form each image is assigned a particular class It works well until: ````out = fnn activateOnDataset(griddata) ```` The message it returns is: AssertionError: (3 2) Am pretty sure it is a problem with how I have declared the griddata dataset but I do not know exactly what? On the tutorial version it runs fine My code: ````from pybrain datasets import ClassificationDataSet from pybrain utilities import percentError from pybrain tools shortcuts import buildNetwork from pybrain supervised trainers import BackpropTrainer from pybrain structure modules import SoftmaxLayer from pylab import ion ioff figure draw contourf clf show hold plot from scipy import diag arange meshgrid where from numpy random import multivariate_normal import cv2 from pyroc import * #Creates cover type array based on color of pixels in roadmap coverType = [(255 225 104 3) #Road (254 254 253 0) #Other (254 254 254 3) #Road (253 254 253 0) #Other (253 225 158 0) #Other ] # have other cover type but sample amount included coverTypes = len(coverType) print coverTypes #to count #Creates dataset alldata = ClassificationDataSet(3 1 nb_classes=10) """Classifies Roadmap Sub-Images by type and loads matching Satellite Sub-Image with classification into dataset """ for eachFile in glob glob('Roadmap Sub-Images/*'): image = Image open(eachFile) fileName = eachFile newFileName = fileName replace("Roadmap Sub-Images" "Satellite Sub-Images") colors = image convert('RGB') getcolors() #Finds all colors in image and their frequency colors sort() #Sorts colors in image by their frequency colorMostFrequent = colors[-1][1] #Finds last element in array the most frequent color for eachColor in range(1 151): #151 number of element in CoverType array 
if colorMostFrequent[0] == coverType[eachColor][0] and colorMostFrequent[1] == coverType[eachColor][1] and colorMostFrequent[2] == coverType[eachColor][2]: print newFileName #Check new route image = cv2 imread(newFileName) meanImage = cv2 mean(image) #Take average color meanImageRGB = meanImage[:3] #Converts to RGB scale excluding "alpha" print meanImageRGB #Check RGB average colors alldata addSample(meanImageRGB coverType[eachColor][3]) tstdata trndata = alldata splitWithProportion( 0 25 ) trndata _convertToOneOfMany( ) tstdata _convertToOneOfMany( ) fnn = buildNetwork( trndata indim 5 trndata outdim outclass=SoftmaxLayer ) trainer = BackpropTrainer( fnn dataset=trndata momentum=0 1 verbose=True weightdecay=0 01) ticks = arange(-3 6 0 2) X Y = meshgrid(ticks ticks) #I think every thing is good to here problem with the griddata dataset I think? # need column vectors in dataset not arrays griddata = ClassificationDataSet(2 1 nb_classes=4) for i in xrange(X size): griddata addSample([X ravel()[i] Y ravel()[i]] [0]) griddata _convertToOneOfMany() # this is still needed to make the fnn feel comfy for i in range(20): trainer trainEpochs( 1 ) trnresult = percentError( trainer testOnClassData() trndata['class'] ) tstresult = percentError( trainer testOnClassData( dataset=tstdata ) tstdata['class'] ) print "epoch: %4d" % trainer totalepochs \ " train error: %5 2f%%" % trnresult \ " test error: %5 2f%%" % tstresult out = fnn activateOnDataset(alldata) out = out argmax(axis=1) # the highest output activation gives the class out = out reshape(X shape) figure(1) ioff() # interactive graphics off clf() # clear the plot hold(True) # overplot on for c in [0 1 2]: here _ = where(tstdata['class']==c) plot(tstdata['input'][here 0] tstdata['input'][here 1] 'of) if out max()!=out min(): # safety check against flat field contourf(X Y out) # plot the contour ion() # interactive graphics on draw() # update the plot ioff() show() ````
I believe it has to do with the dimensions of your initial data set not aligning with the dimensions of your griddata `alldata = ClassificationDataSet(3 1 nb_classes=10)` `griddata = ClassificationDataSet(2 1 nb_classes=4)` They should both be 3 1 However when I adjust this my code fails at a later stage so I am also curious about this
Try and Except (TypeError) What I am trying to do is create a menu-style start in my program that let us the user choose whether they want to validate a code or generate one The code looks like this: ````choice = input("enter v for validate or enter g for generate") lower() try: choice == "v" and "g" except TypeError: print("Not a valid choice! Try again") restartCode() *#pre-defined function d/w about this* ```` So I would like my program to output that print statement and do that defined function when the user inputs something other than "v" or "g" (not including when they enter capitalised versions of those characters) There is something wrong with my try and except function but whenever the user inputs something other than those 2 characters the code just ends
Try

````
choice = input("enter v for validate or enter g for generate").lower()

if (choice == "v") or (choice == "g"):
    # do something
else:
    print("Not a valid choice! Try again")
    restartCode()  # pre-defined function, d/w about this
````

However, if you really want to stick with try/except, you can store the desired inputs and compare against them. The error will be a KeyError instead of a TypeError:

````
choice = input("enter v for validate or enter g for generate").lower()

valid_choices = {'v': 1, 'g': 1}
try:
    valid_choices[choice]
    # do something
except KeyError:
    print("Not a valid choice! Try again")
    restartCode()  # pre-defined function, d/w about this
````
Scrape Yahoo Finance Financial Ratios I have been trying to scrap the value of the Current Ratio (as shown below) from Yahoo Finance using Beautiful Soup but it keeps returning an empty value <a href="https://i stack imgur com/SKunN jpg" rel="nofollow"><img src="https://i stack imgur com/SKunN jpg" alt="enter image description here"></a> Interestingly when I look at the Page Source of the URL the value of the Current Ratio is not listed there My code so far is: ````import urllib from bs4 import BeautifulSoup url = ("http://finance yahoo com/quote/GSB/key-statistics?p=GSB") html = urllib urlopen(url) read() soup = BeautifulSoup(html "html parser") script = soup find("td" {"class": "Fz(s) Fw(500) Ta(end)" "data-reactid": " 1ujetg16lcg 0 $0 0 0 3 1 $main-0-Quote-Proxy $main-0-Quote 2 0 0 0 1 0 1:$FINANCIAL_HIGHLIGHTS $BALANCE_SHEET 1 0 $CURRENT_RATIO 1" }) ```` Does anyone know how to solve this?
You can actually get the data is json format there is a call to an api that returns a lot of the data including the current ratio: <a href="https://i stack imgur com/Zu7sP png" rel="nofollow"><img src="https://i stack imgur com/Zu7sP png" alt="enter image description here"></a> ````import requests params = {"formatted": "true" "crumb": "AKV/cl0TOgz" # works without so not sure of significance "lang": "en-US" "region": "US" "modules": "defaultKeyStatistics financialData calendarEvents" "corsDomain": "finance yahoo com"} are = requests get("https://query1 finance yahoo com/v10/finance/quoteSummary/GSB" params=params) data = r json()[you'quoteSummary']["result"][0] ```` That gives you a dict with numerous pieces of data: ````from pprint import pprint as pp pp(data) {you'calendarEvents': {you'dividendDate': {you'fmt': you'2016-09-08' you'raw': 1473292800} you'earnings': {you'earningsAverage': {} you'earningsDate': [{you'fmt': you'2016-10-27' you'raw': 1477526400}] you'earningsHigh': {} you'earningsLow': {} you'revenueAverage': {you'fmt': you'8 72M' you'longFmt': you'8 720 000' you'raw': 8720000} you'revenueHigh': {you'fmt': you'8 72M' you'longFmt': you'8 720 000' you'raw': 8720000} you'revenueLow': {you'fmt': you'8 72M' you'longFmt': you'8 720 000' you'raw': 8720000}} you'exDividendDate': {you'fmt': you'2016-05-19' you'raw': 1463616000} you'maxAge': 1} you'defaultKeyStatistics': {you'52WeekChange': {you'fmt': you'3 35%' you'raw': 0 033536673} you'SandP52WeekChange': {you'fmt': you'5 21%' you'raw': 0 052093267} you'annualHoldingsTurnover': {} you'annualReportExpenseRatio': {} you'beta': {you'fmt': you'0 23' you'raw': 0 234153} you'beta3Year': {} you'bookValue': {you'fmt': you'1 29' you'raw': 1 295} you'category': None you'earningsQuarterlyGrowth': {you'fmt': you'-28 00%' you'raw': -0 28} you'enterpriseToEbitda': {you'fmt': you'9 22' you'raw': 9 215} you'enterpriseToRevenue': {you'fmt': you'1 60' you'raw': 1 596} you'enterpriseValue': {you'fmt': you'50 69M' you'longFmt': 
you'50 690 408' you'raw': 50690408} you'fiveYearAverageReturn': {} you'floatShares': {you'fmt': you'11 63M' you'longFmt': you'11 628 487' you'raw': 11628487} you'forwardEps': {you'fmt': you'0 29' you'raw': 0 29} you'forwardPE': {} you'fundFamily': None you'fundInceptionDate': {} you'heldPercentInsiders': {you'fmt': you'36 12%' you'raw': 0 36116} you'heldPercentInstitutions': {you'fmt': you'21 70%' you'raw': 0 21700001} you'lastCapGain': {} you'lastDividendValue': {} you'lastFiscalYearEnd': {you'fmt': you'2015-12-31' you'raw': 1451520000} you'lastSplitDate': {} you'lastSplitFactor': None you'legalType': None you'maxAge': 1 you'morningStarOverallRating': {} you'morningStarRiskRating': {} you'mostRecentQuarter': {you'fmt': you'2016-06-30' you'raw': 1467244800} you'netIncomeToCommon': {you'fmt': you'3 82M' you'longFmt': you'3 819 000' you'raw': 3819000} you'nextFiscalYearEnd': {you'fmt': you'2017-12-31' you'raw': 1514678400} you'pegRatio': {} you'priceToBook': {you'fmt': you'2 64' you'raw': 2 6358302} you'priceToSalesTrailing12Months': {} you'profitMargins': {you'fmt': you'12 02%' you'raw': 0 12023} you'revenueQuarterlyGrowth': {} you'sharesOutstanding': {you'fmt': you'21 18M' you'longFmt': you'21 184 300' you'raw': 21184300} you'sharesShort': {you'fmt': you'27 06k' you'longFmt': you'27 057' you'raw': 27057} you'sharesShortPriorMonth': {you'fmt': you'36 35k' you'longFmt': you'36 352' you'raw': 36352} you'shortPercentOfFloat': {you'fmt': you'0 20%' you'raw': 0 001977} you'shortRatio': {you'fmt': you'0 81' you'raw': 0 81} you'threeYearAverageReturn': {} you'totalAssets': {} you'trailingEps': {you'fmt': you'0 18' you'raw': 0 18} you'yield': {} you'ytdReturn': {}} you'financialData': {you'currentPrice': {you'fmt': you'3 41' you'raw': 3 4134} you'currentRatio': {you'fmt': you'1 97' you'raw': 1 974} you'debtToEquity': {} you'earningsGrowth': {you'fmt': you'-33 30%' you'raw': -0 333} you'ebitda': {you'fmt': you'5 5M' you'longFmt': you'5 501 000' you'raw': 5501000} 
you'ebitdaMargins': {you'fmt': you'17 32%' you'raw': 0 17318001} you'freeCashflow': {you'fmt': you'4 06M' you'longFmt': you'4 062 250' you'raw': 4062250} you'grossMargins': {you'fmt': you'79 29%' you'raw': 0 79288} you'grossProfits': {you'fmt': you'25 17M' you'longFmt': you'25 172 000' you'raw': 25172000} you'maxAge': 86400 you'numberOfAnalystOpinions': {} you'operatingCashflow': {you'fmt': you'6 85M' you'longFmt': you'6 853 000' you'raw': 6853000} you'operatingMargins': {you'fmt': you'16 47%' you'raw': 0 16465001} you'profitMargins': {you'fmt': you'12 02%' you'raw': 0 12023} you'quickRatio': {you'fmt': you'1 92' you'raw': 1 917} you'recommendationKey': you'strong_buy' you'recommendationMean': {you'fmt': you'1 00' you'raw': 1 0} you'returnOnAssets': {you'fmt': you'7 79%' you'raw': 0 07793} you'returnOnEquity': {you'fmt': you'15 05%' you'raw': 0 15054} you'revenueGrowth': {you'fmt': you'5 00%' you'raw': 0 05} you'revenuePerShare': {you'fmt': you'1 51' you'raw': 1 513} you'targetHighPrice': {} you'targetLowPrice': {} you'targetMeanPrice': {} you'targetMedianPrice': {} you'totalCash': {you'fmt': you'20 28M' you'longFmt': you'20 277 000' you'raw': 20277000} you'totalCashPerShare': {you'fmt': you'0 96' you'raw': 0 957} you'totalDebt': {you'fmt': None you'longFmt': you'0' you'raw': 0} you'totalRevenue': {you'fmt': you'31 76M' you'longFmt': you'31 764 000' you'raw': 31764000}}} ```` What you want is in `data[you'financialData']`: ```` pp(data[you'financialData']) {you'currentPrice': {you'fmt': you'3 41' you'raw': 3 4134} you'currentRatio': {you'fmt': you'1 97' you'raw': 1 974} you'debtToEquity': {} you'earningsGrowth': {you'fmt': you'-33 30%' you'raw': -0 333} you'ebitda': {you'fmt': you'5 5M' you'longFmt': you'5 501 000' you'raw': 5501000} you'ebitdaMargins': {you'fmt': you'17 32%' you'raw': 0 17318001} you'freeCashflow': {you'fmt': you'4 06M' you'longFmt': you'4 062 250' you'raw': 4062250} you'grossMargins': {you'fmt': you'79 29%' you'raw': 0 79288} you'grossProfits': 
{you'fmt': you'25 17M' you'longFmt': you'25 172 000' you'raw': 25172000} you'maxAge': 86400 you'numberOfAnalystOpinions': {} you'operatingCashflow': {you'fmt': you'6 85M' you'longFmt': you'6 853 000' you'raw': 6853000} you'operatingMargins': {you'fmt': you'16 47%' you'raw': 0 16465001} you'profitMargins': {you'fmt': you'12 02%' you'raw': 0 12023} you'quickRatio': {you'fmt': you'1 92' you'raw': 1 917} you'recommendationKey': you'strong_buy' you'recommendationMean': {you'fmt': you'1 00' you'raw': 1 0} you'returnOnAssets': {you'fmt': you'7 79%' you'raw': 0 07793} you'returnOnEquity': {you'fmt': you'15 05%' you'raw': 0 15054} you'revenueGrowth': {you'fmt': you'5 00%' you'raw': 0 05} you'revenuePerShare': {you'fmt': you'1 51' you'raw': 1 513} you'targetHighPrice': {} you'targetLowPrice': {} you'targetMeanPrice': {} you'targetMedianPrice': {} you'totalCash': {you'fmt': you'20 28M' you'longFmt': you'20 277 000' you'raw': 20277000} you'totalCashPerShare': {you'fmt': you'0 96' you'raw': 0 957} you'totalDebt': {you'fmt': None you'longFmt': you'0' you'raw': 0} you'totalRevenue': {you'fmt': you'31 76M' you'longFmt': you'31 764 000' you'raw': 31764000}} ```` You can see `you'currentRatio'` in there the fmt is the formatted output you see on the site formatted to two decimal places So to get the 1 97: ````In [5]: import requests : data = {"formatted": "true" : "crumb": "AKV/cl0TOgz" : "lang": "en-US" : "region": "US" : "modules": "defaultKeyStatistics financialData calendarEvents" : "corsDomain": "finance yahoo com"} : are = requests get("https://query1 finance yahoo com/v10/finance/quoteSumm : ary/GSB" params=data) : data = r json()[you'quoteSummary']["result"][0][you'financialData'] : ratio = data[you'currentRatio'] : print(ratio) : print(ratio["fmt"]) : {'raw': 1 974 'fmt': '1 97'} 1 97 ```` The equivalent code using <them>urllib</them>: ````In [1]: import urllib : from urllib import urlencode : from json import load : : : data = {"formatted": "true" : "crumb": "AKV/cl0TOgz" 
: "lang": "en-US" : "region": "US" : "modules": "defaultKeyStatistics financialData calendarEvents" : "corsDomain": "finance yahoo com"} : url = "https://query1 finance yahoo com/v10/finance/quoteSummary/GSB" : are = urllib urlopen(url data=urlencode(data)) : data = load(r)[you'quoteSummary']["result"][0][you'financialData'] : ratio = data[you'currentRatio'] : print(ratio) : print(ratio["fmt"]) : {you'raw': 1 974 you'fmt': you'1 97'} 1 97 ```` It works fine for APPL also: ````In [1]: import urllib : from urllib import urlencode : from json import load : data = {"formatted": "true" : "lang": "en-US" : "region": "US" : "modules": "defaultKeyStatistics financialData calendarEvents" : "corsDomain": "finance yahoo com"} : url = "https://query1 finance yahoo com/v10/finance/quoteSummary/AAPL" : are = urllib urlopen(url data=urlencode(data)) : data = load(r)[you'quoteSummary']["result"][0][you'financialData'] : ratio = data[you'currentRatio'] : print(ratio) : print(ratio["fmt"]) : {you'raw': 1 312 you'fmt': you'1 31'} 1 31 ```` Adding the crumb parameters seems to have no effect if you need to get it at a later date: ````soup = BeautifulSoup(urllib urlopen("http://finance yahoo com/quote/GSB/key-statistics?p=GSB") read()) script = soup find("script" text=re compile("root App main")) text data = loads(re search("root App main\s+=\s+(\{ *\})" script) group(1)) print(data["context"]["dispatcher"]["stores"]["CrumbStore"]["crumb"]) ````
if-else in python list comprehensions is it possible to write list comprehensions for the following python code: ````for str in range(0 len(mixed_content)): if (mixed_content[str] isdigit()): num_list append(mixed_content[str]) else: string_list append(mixed_content[str]) ```` can we use else block in list comprehensions ? I tried to write list comprehensions for above code : ````num_list string_list = [ mixed_content[str] for str in range(0 len(mixed_content)) if(mixed_content[str] isdigit()) else ] ````
You can only construct one list at a time with a list comprehension. You will want something like:

````
nums = [foo for foo in mixed_list if foo.isdigit()]
strings = [foo for foo in mixed_list if not foo.isdigit()]
````
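If the double scan bothers you, a plain loop fills both lists in a single pass; no list comprehension can build two lists at once:

```python
mixed_content = ['12', 'abc', '7', 'xy', '99']

num_list, string_list = [], []
for item in mixed_content:
    # pick the target list based on the condition, then append
    (num_list if item.isdigit() else string_list).append(item)

print(num_list, string_list)
```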
Pylatex error when generate PDF file - No such file or directory I just want to use Pylatex to generate the pdf file I look at the basic example and re-run the script but it raised the error: OSError: [Errno 2] No such file or directory Here is my script: ````import sys from pylatex import Document Section Subsection Command from pylatex utils import italic NoEscape def fill_document(doc): """Add a section a subsection and some text to the document :param doc: the document :type doc: :class:`pylatex document Document` instance """ with doc create(Section('A section')): doc append('Some regular text and some ') doc append(italic('italic text ')) with doc create(Subsection('A subsection')): doc append('Also some crazy characters: $&amp;#{}') if __name__ == '__main__': reload(sys) sys setdefaultencoding('utf8') # Basic document doc = Document() fill_document(doc) doc generate_pdf("full") doc generate_tex() ```` And the error: ````Traceback (most recent call last): File "/Users/code/Test Generator/Generate py" line 34 in <module&gt; doc generate_pdf("full") File "/Library/Python/2 7/site-packages/pylatex/document py" line 227 in generate_pdf raise(os_error) OSError: [Errno 2] No such file or directory ```` Can someone help me ? :-D thanks a lot
Based on the code around the error you are probably missing a latex compiler: ````compilers = ( ('latexmk' latexmk_args) ('pdflatex' []) ) ```` Try doing this: ````apt-get install latexmk ````
Python Print %s I have got a little Problem with my Python code ````elif option in ['2' 'zwei' 'two']: packagelist = os popen("adb -d she will pm list packages") read() print packagelist print " " package = parseInput(PACKAGE_QST) packagelist index %package print ("Your package is: %s" %package) os system("adb -d backup %s " %package) ```` I want that the user can paste the packagename into the input (option) and than it shall do "adb -d backup " But i do not know why it does not work I searched pretty long in the internet for a solution but have not got one yet I Am a noob and would appreciate your help so I can get better Thank you in advance!
You must pass a tuple when using `%` (note that plain parentheses are just grouping; the trailing comma is what makes it a tuple): ````# notice how package is wrapped in a one-element tuple (package,) print ("Your package is: %s" % (package,)) # and do the same when you call os system() os system ("adb -d backup %s" % (package,)) ````
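To make the tuple point concrete, here is a runnable sketch (written with normal punctuation); `(package)` is not a tuple, while `(package,)` with a trailing comma is, and the tuple form stays correct even if the value itself happens to be a tuple:

```python
package = "com.example.app"  # made-up package name for illustration

# A single value works with %:
msg1 = "Your package is: %s" % package

# (package) is NOT a tuple -- the parentheses are just grouping.
# (package,) with a trailing comma is, and is the safe general form:
msg2 = "Your package is: %s" % (package,)

# The same applies when building the shell command string:
cmd = "adb -d backup %s" % (package,)
```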
Calc value count in few columns of DataFrame (Pandas Python) I have a dataFrame: ```` id code_1 code_2 0 11 1451 ffx 1 15 2233 ffx 2 24 1451 mmg 3 15 1451 ffx ```` I need to get the number of each code value (for all code_1 values and all code_2 values) for each unique id For example: ```` id 1451 2233 ffx mmg 0 11 1 0 1 0 1 15 1 1 2 0 2 24 1 0 0 1 ```` I tried this code: ````y = data groupby('id') apply(lambda x: x[['code_1' 'code_2']] unstack() value_counts()) unstack() ```` But I think something is wrong because the number of columns in the result table is less than the number of variants of code_1 and code_2
Consider merging pivot_tables using the aggfunc <them>len</them> for counts ````from io import StringIO import pandas as pd data = ''' id code_1 code_2 11 1451 ffx 15 2233 ffx 24 1451 mmg 15 1451 ffx''' df = pd read_table(StringIO(data) sep="\s+") df = pd merge(df[['id' 'code_1']] pivot_table(index='id' columns='code_1' aggfunc=len) \ reset_index(drop=True) df[['id' 'code_2']] pivot_table(index='id' columns='code_2' aggfunc=len) \ reset_index(drop=True) left_index=True right_index=True) fillna(0) # 1451 2233 ffx mmg # 0 1 0 0 0 1 0 0 0 # 1 1 0 1 0 2 0 0 0 # 2 1 0 0 0 0 0 1 0 ````
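An alternative sketch, not from the answer above, that avoids the merge entirely: melt both code columns into one and count with `groupby`/`unstack` (assumes a reasonably recent pandas with `DataFrame.melt`):

```python
import pandas as pd

df = pd.DataFrame({'id': [11, 15, 24, 15],
                   'code_1': ['1451', '2233', '1451', '1451'],
                   'code_2': ['ffx', 'ffx', 'mmg', 'ffx']})

# Stack code_1 and code_2 into a single 'value' column, then count
# occurrences of each code per id; missing combinations become 0.
counts = (df.melt(id_vars='id', value_vars=['code_1', 'code_2'])
            .groupby(['id', 'value']).size()
            .unstack(fill_value=0)
            .reset_index())
```

This yields one column per distinct code across both source columns, which matches the expected table in the question.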
POST XML file with requests I am getting: ````<error>You have an error in your XML syntax ```` when I run this python script I just wrote (I am a newbie) ````import requests xml = """xxx xml""" headers = {'Content-Type':'text/xml'} r = requests post('https://example com/serverxml asp' data=xml) print (r content); ```` Here is the content of the xxx xml ````<xml> <API>4 0</API> <action>login</action> <password>xxxx</password> <license_number>xxxxx</license_number> <username>xxx@xyz com</username> <training>1</training> </xml> ```` I know that the xml is valid because I use the same xml for a perl script and the contents are being printed back Any help will be greatly appreciated as I am very new to python
You want to give the XML data from a file to `requests post` But this function will not open a file for you It expects you to pass a file object to it not a file name You need to open the file before you call requests post Try this: ````import requests # Set the name of the XML file xml_file = "xxx xml" headers = {'Content-Type':'text/xml'} # Open the XML file with open(xml_file) as xml: # Give the object representing the XML file to requests post r = requests post('https://example com/serverxml asp' data=xml) print (r content);
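For what it is worth, `requests post(data=)` accepts either an open file object or a plain string, so reading the file yourself also works. A small offline sketch of just the payload-building part (no network call, the file name here is made up):

```python
import os
import tempfile

xml_body = "<xml><action>login</action></xml>"

# Write a sample file so the sketch is self-contained.
with tempfile.NamedTemporaryFile('w', suffix='.xml', delete=False) as f:
    f.write(xml_body)
    path = f.name

# Option 1: pass an open file object (requests would stream it).
fileobj = open(path)

# Option 2: read the contents into a string yourself.
as_string = open(path).read()

fileobj.close()
os.unlink(path)
```

Either `fileobj` or `as_string` could then be handed to `data=` in the real call.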
Language file doesn't load automatically in Django I am using python3 and Django 1 10 for my application and I am kind of new to Django I am planning to have many languages for Django admin panel As I follow the rules in Django documentation I found out that I have to use a middleware for localization Here are my settings: ````MIDDLEWARE = [ 'django middleware security SecurityMiddleware' 'django contrib sessions middleware SessionMiddleware' 'django middleware locale LocaleMiddleware' 'django middleware common CommonMiddleware' 'django middleware csrf CsrfViewMiddleware' 'django contrib auth middleware AuthenticationMiddleware' 'django contrib messages middleware MessageMiddleware' 'django middleware clickjacking XFrameOptionsMiddleware' ] LOCALE_PATHS = ( os path join(BASE_DIR 'locale') ) LANGUAGE_CODE = 'en' ugettext = lambda s: s LANGUAGES = ( ('fa' ugettext('Farsi')) ('en' ugettext('English')) ) ```` When I go to admin `mylocal/en/admin` or `mylocal/fa/admin` the application language changes perfectly But my language file (` po`) always looks into `LANGUAGE_CODE` when I set `LANGUAGE_CODE='fa'` it will change to Farsi not automatically Now I just want my language files to load based on the urls `/en/` or `/fa/` Please help me Here is my `urls py` file if you need to check out ````urlpatterns = i18n_patterns( url(r'^admin/' admin site urls) ) ````
I have a similar working setup the main difference seems to be that I am using `ugettext_lazy` That is because I need to translate these strings in my models or settings when they were accessed rather than when they were called (which would happen only once: they would only be evaluated on server startup and would not recognize any further changes; e g switching the Django admin language) Reference: <a href="https://docs djangoproject com/en/1 10/topics/i18n/translation/#lazy-translation" rel="nofollow">https://docs djangoproject com/en/1 10/topics/i18n/translation/#lazy-translation</a> That is what I use (in this special case German is the default language and I am translating into English): <h3>project/urls py</h3> ````from django conf urls i18n import i18n_patterns urlpatterns = i18n_patterns( url(r'^admin/' admin site urls) ) ```` <h3>project/settings py</h3> ````from django utils translation import ugettext_lazy as _ MIDDLEWARE = [ 'django middleware security SecurityMiddleware' 'django contrib sessions middleware SessionMiddleware' 'django middleware locale LocaleMiddleware' 'django middleware common CommonMiddleware' 'django middleware csrf CsrfViewMiddleware' 'django contrib auth middleware AuthenticationMiddleware' 'django contrib messages middleware MessageMiddleware' 'django middleware clickjacking XFrameOptionsMiddleware' ] LANGUAGE_CODE = 'de-de' USE_I18N = True USE_L10N = True LANGUAGES = [ ('de' _('German')) ('en' _('English')) ] LOCALE_PATHS = [ os path join(BASE_DIR 'locale') ] ```` <h3>app/models py</h3> ````from django utils translation import ugettext_lazy as _ class Kindergarten(models Model): stadt = models CharField(verbose_name=_('Stadt')) class Meta: verbose_name = _('Kindergarten') verbose_name_plural = _('Kindergärten') ```` <h3>Workflow</h3> ````$ python manage py makemessages --locale en edit project/locale/en/LC_MESSAGES/django po $ python manage py compilemessages ```` Now I can access my translated Django admin (interface and models) via: - 
<a href="http://127 0 0 1:8000/de/admin/app/kindergarten/" rel="nofollow">http://127 0 0 1:8000/de/admin/app/kindergarten/</a> - <a href="http://127 0 0 1:8000/en/admin/app/kindergarten/" rel="nofollow">http://127 0 0 1:8000/en/admin/app/kindergarten/</a> <h3>Notes</h3> - Python 3 5 2 - Django 1 10 2
Matplotlib: Sharing axes when having 3 graphs 2 at the left and 1 at the right I have the following graph: <a href="https://i stack imgur com/Vqd85 png" rel="nofollow"><img src="https://i stack imgur com/Vqd85 png" alt="enter image description here"></a> However I want graphs 221 and 223 to share the same x axis I have the following code: ````self fig_part_1 = plt figure() self plots_part_1 = [ plt subplot(221) plt subplot(223) plt subplot(122) ] ```` How can I achieve that? In the end I do not want the x axis numbers in plot 221 to be shown
Just use `plt subplots` (different from `plt subplot`) to define all your axes with the option `sharex=True`: ````f axes = plt subplots(2 2 sharex=True) plt subplot(122) plt show() ```` Note that the second call with the larger subplot overlays the preceding array <a href="https://i stack imgur com/Y6SWD png" rel="nofollow">Example</a> (could not display image due to reputation )
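For the original layout (two plots on the left sharing x, one tall plot on the right), a sketch with `gridspec` may be cleaner; this assumes matplotlib 3.0+ for `Figure add_gridspec` and hides the upper-left tick labels as the question asked:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this sketch needs no display
import matplotlib.pyplot as plt

fig = plt.figure()
gs = fig.add_gridspec(2, 2)
ax_top = fig.add_subplot(gs[0, 0])                    # like subplot(221)
ax_bottom = fig.add_subplot(gs[1, 0], sharex=ax_top)  # like subplot(223)
ax_right = fig.add_subplot(gs[:, 1])                  # like subplot(122)

# Hide the x tick labels of the upper-left plot only.
plt.setp(ax_top.get_xticklabels(), visible=False)
```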
Semi-supervised learning for regression by scikit-learn Can Label Propagation be used for semi-supervised regression tasks in scikit-learn? According to its API the answer is YES <a href="http://scikit-learn org/stable/modules/label_propagation html" rel="nofollow">http://scikit-learn org/stable/modules/label_propagation html</a> However I got the error message when I tried to run the following code ````from sklearn import datasets from sklearn semi_supervised import label_propagation import numpy as np rng=np random RandomState(0) boston = datasets load_boston() X=boston data y=boston target y_30=np copy(y) y_30[rng rand(len(y))<0 3]=-999 label_propagation LabelSpreading() fit(X y_30) ```` <hr> It shows that "ValueError: Unknown label type: 'continuous'" in the label_propagation LabelSpreading() fit(X y_30) line How should I solve the problem? Thanks a lot
It looks like an error in the documentation; the code itself is clearly classification only (beginning of the ` fit` call of the <a href="https://github com/scikit-learn/scikit-learn/blob/412996f/sklearn/semi_supervised/label_propagation py#L201" rel="nofollow">BasePropagation class</a>): ```` check_classification_targets(y) # actual graph construction (implementations should override this) graph_matrix = self _build_graph() # label construction # construct a categorical distribution for classification only classes = np unique(y) classes = (classes[classes != -1]) ```` In theory you could remove the "check_classification_targets" call and use a "regression like" method but it will not be true regression since you will never "propagate" any value which is not encountered in the training set you will simply treat the regression value as the class identifier And you will be unable to use value "-1" since it is a codename for "unlabeled"
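You can see the check that fails directly: scikit-learn infers the target type with `type_of_target` (in `sklearn utils multiclass`), and Boston-style float targets come back as 'continuous' while integer labels do not. A small sketch:

```python
import numpy as np
from sklearn.utils.multiclass import type_of_target

# Regression-style float targets are detected as 'continuous' ...
y_reg = np.array([24.0, 21.6, 34.7, -999.0])
kind_reg = type_of_target(y_reg)

# ... while integer labels (including the -1 "unlabeled" marker)
# are a classification target.
y_cls = np.array([0, 1, 2, -1])
kind_cls = type_of_target(y_cls)
```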
Print random line from txt file? I am using random randint to generate a random number and then assigning that number to a variable Then I want to print the line with the number I assigned to the variable but I keep getting the error: <blockquote> list index out of range </blockquote> Here is what I tried: ````f = open("filename txt") lines = f readlines() rand_line = random randint(1 10) print lines[rand_line] ````
You want to use `random choice` ````import random with open(filename) as f: lines = f readlines() print(random choice(lines)) ````
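If the file is too large to hold in memory with `readlines()`, one random line can be picked in a single pass with reservoir sampling; a sketch (the helper name is made up, and it is seeded only by the default RNG):

```python
import random

def random_line(lines):
    """Pick one item uniformly from an iterable in a single pass."""
    choice = None
    for i, line in enumerate(lines, start=1):
        # Replace the kept line with probability 1/i.
        if random.randrange(i) == 0:
            choice = line
    return choice

sample = ["alpha\n", "beta\n", "gamma\n"]
picked = random_line(sample)
```

The same function works on an open file object, since iterating a file yields lines without loading them all at once.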
Python - Making text files I seem to be having various problems with my code Firstly I cannot split the text that the user inputs E g if they type `bob` for their name `ha8 9qy` for their postcode and `17/03/10` for their date of birth the program will return `"bobha8 9qy17/03/10"` How should I separate the input? Secondly I cannot find the text file I supposedly make Lastly is there a way to return the information to the new display window created by Tkinter? ````import tkinter as kt name=input("Enter your name") postcode=input("Enter your postcode") dob=input("Enter your date of birth") window=kt Tk() window title("File") window geometry("300x150") def submit(): pythonfile = open("User details" "w") pythonfile write((name)) pythonfile write((postcode)) pythonfile write((dob)) pythonfile = open(("User details") "r") print (pythonfile read()) pythonfile close() Btn = kt Button(window text="Submit" command=submit) Btn pack() ````
You have to add the ` txt` after the filename Also be sure that the file is in the same folder as the ` py` file Pay attention to caps and spaces `pythonfile = open("User details txt" "w")` If this doesn't work try adding `os chdir(os path dirname(os path abspath(sys argv[0])))` after the import for me it fixed the problem For the "spacing problem" try: `pythonfile write(name + '\n' + postcode + '\n' + dob)` Also when you create an alias for Tkinter use tk and when you open a file try naming it file_it or f_in so it is more readable for other people When naming files do not use spaces it is just going to make everything harder try naming it like this: `userDetails txt` or `user_details txt`
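A quick sketch of the write-then-read round trip with explicit newlines, using a temporary directory so it does not depend on the current working directory:

```python
import os
import tempfile

name, postcode, dob = "bob", "ha8 9qy", "17/03/10"

# Build the path explicitly instead of relying on the working directory.
path = os.path.join(tempfile.mkdtemp(), "user_details.txt")
with open(path, "w") as f:
    f.write(name + "\n" + postcode + "\n" + dob + "\n")

with open(path) as f:
    lines = f.read().splitlines()
```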
Report progress to QProgressBar using variable from an imported module I have a PyQT GUI application `progress_bar py` with a single progressbar and an external module `worker py` with a `process_files()` function which does some routine with a list of files and reports current progress using `percent` variable What I want to do is to report the current progress of the `worker process_files` using `QProgressBar setValue()` method but I have no idea how to implement it (callback function or something?) Here are my modules: <strong>progress_bar py</strong> ````import sys from PyQt4 import QtGui from worker import process_files class Window(QtGui QMainWindow): def __init__(self): super(Window self) __init__() self setGeometry(100 100 300 100) self progress = QtGui QProgressBar(self) self progress setGeometry(100 50 150 20) self progress setValue(0) self show() app = QtGui QApplication(sys argv) GUI = Window() # process files and report progress using setValue(percent) process_files() sys exit(app exec_()) ```` <strong>worker py</strong> ````def process_files(): file_list = ['file1' 'file2' 'file3'] counter = 0 for file in file_list: # do_stuff_with_the_file counter += 1 percent = 100 * counter / len(file_list) print percent ````
Make the `process_files` function a generator function that <em>yields</em> a value (the progress value) and pass it as a callback to a method in your `Window` class that updates the progress bar value I have added a `time sleep` call in your function so you can observe the progress: ````import time from worker import process_files class Window(QtGui QMainWindow): def __init__(self): def observe_process(self func=None): try: for prog in func(): self progress setValue(prog) except TypeError: print('callback function must be a generator function that yields integer values') raise app = QtGui QApplication(sys argv) GUI = Window() # process files and report progress using setValue(percent) GUI observe_process(process_files) sys exit(app exec_()) ```` <strong>worker py</strong> ````def process_files(): file_list = ['file1' 'file2' 'file3'] counter = 0 for file in file_list: counter += 1 percent = 100 * counter / len(file_list) time sleep(1) yield percent ```` <hr> <strong>Result</strong>: After processing `file2` <a href="https://i stack imgur com/10LSf png" rel="nofollow"><img src="https://i stack imgur com/10LSf png" alt="enter image description here"></a>
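The generator part of this pattern can be checked without Qt at all, since the GUI only consumes the yielded integers; a headless sketch (integer division used so the values are exact):

```python
def process_files(file_list):
    """Yield an integer progress percentage after each file."""
    counter = 0
    for _ in file_list:
        counter += 1
        yield 100 * counter // len(file_list)

# A progress bar (or anything else) just iterates over the generator.
progress = list(process_files(['file1', 'file2', 'file3']))
```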
Django get related objects ManyToMany relationships I have two models: ````class CartToys(models Model): name = models CharField(max_length=350) quantity = models IntegerField() class Cart(models Model): cart_item = models ManyToManyField(CartToys) ```` I want to get all toys related to this cart How can I do this?
You would use: ````cart = Cart objects first() objects = cart cart_item all() # this line returns all related objects for CartToys # and in reverse cart_toy = CartToys objects first() carts = cart_toy cart_set all() # this line returns all related objects for Cart ````
How do you look for a line in a text file from a sentence a user has inputted by using its keywords? ````a=input("Please enter your problem?") problem= () with open('solutions txt' 'r') as searchfile: for line in searchfile: if problem in line: print (line) ```` Can someone please help me on how to get the keywords from the string entered by the user Thanks I need help on how to look for some of the words the user entered into `a` and search for them in the text file and print the line
I assume your keywords are meant to be a list? Then you use <a href="https://docs python org/3/library/functions html#any" rel="nofollow">`any()`</a> to check if any word out of the keywords is in the line ````a=input("Please enter your problem?") problem= ['#keywords' 'not' 'sure' 'how'] with open('solutions txt' 'r') as searchfile: for line in searchfile: if any(word in line for word in problem): print (line) ```` Though you may want to `split()` your line to improve that detection Otherwise you have `a` which stores the user's input so you can use that ````a=input("Please enter your problem?") problem= a split() ```` Then again `problem` is a list so you use `any()` as before Or if you want to check if the entire entered value is in a line then ````if a in line: print(line) ````
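A self-contained sketch of the keyword match that also applies the suggested `split()`, comparing lower-cased word sets so partial-word false positives are avoided (the sample solutions and the helper name are made up):

```python
def matching_lines(lines, query):
    """Return the lines that share at least one whole word with the query."""
    keywords = set(query.lower().split())
    return [line for line in lines
            if keywords & set(line.lower().split())]

solutions = [
    "Restart the router to fix the connection",
    "Update the printer driver",
    "Clear the browser cache",
]
hits = matching_lines(solutions, "my printer will not print")
```

The same function works on an open file object, since iterating a file yields one line at a time.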
What do &= |= and ~ do in Pandas I frequently see code like this at work: ````overlap &= group['ADMSN_DT'] loc[i] <= group['epi_end'] loc[j] ```` My question is what do operators such as `&=` `|=` and `~` do in pandas?
From the <a href="http://pandas pydata org/pandas-docs/stable/indexing html#boolean-indexing" rel="nofollow">documentation</a> <blockquote> The operators are: | for or & for and and ~ for not These must be grouped by using parentheses </blockquote> <a href="https://docs python org/3/reference/simple_stmts html#augmented-assignment-statements" rel="nofollow">Augmented assignment statements</a> <blockquote> An augmented assignment evaluates the target (which unlike normal assignment statements cannot be an unpacking) and the expression list performs the binary operation specific to the type of assignment on the two operands and assigns the result to the original target The target is only evaluated once </blockquote> just like `a += 1` increments `a` `a &= b` compares `a` and `b` and assigns the result to `a` ````a = 1 b = 0 print(a & b) >>> 0 a &= b print(a) >>> 0 ```` And a `pandas` example Let us generate a dataframe of zeros and ones ````import numpy as np import pandas as pd a = pd DataFrame(np random randint(0 2 size=(6 4)) columns=list('ABCD')) b = pd DataFrame(np random randint(0 2 size=(6 4)) columns=list('ABCD')) ```` Our initial dataframe ````print(a) ```` <blockquote> ```` A B C D 0 0 1 1 0 1 0 0 1 0 2 1 0 0 1 3 1 1 0 0 4 0 0 0 1 5 0 0 0 0 ```` </blockquote> ````print(b) ```` <blockquote> ```` A B C D 0 0 0 0 0 1 1 1 1 0 2 0 1 1 1 3 0 1 1 1 4 1 1 1 0 5 1 1 1 1 ```` </blockquote> The 4th row of `a` and `b` ````print(a loc[3]) ```` <blockquote> ````A 1 B 1 C 0 D 0 Name: 3 dtype: int32 ```` </blockquote> ````print(b loc[3]) ```` <blockquote> ````A 0 B 1 C 1 D 1 Name: 3 dtype: int32 ```` </blockquote> Now evaluate and assign row 4 ````a loc[3] &= b loc[3] ```` Row 4 of `a` has changed Only where both rows have 1 at the same position a 1 is written back to `a` ````print(a loc[3]) ```` <blockquote> ````A 0 B 1 C 0 D 0 Name: 3 dtype: int32 ```` </blockquote>
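A short runnable sketch of all three operators on boolean masks, including the in-place `&=` narrowing pattern from the question:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10, 20, 30, 40]})

mask = df['a'] > 1      # element-wise comparison -> boolean Series
mask &= df['b'] < 40    # narrow the mask in place, like overlap &= ...
n_and = int(mask.sum())

# | keeps rows matching either condition; ~ inverts a mask.
n_or = int(((df['a'] == 1) | (df['b'] == 40)).sum())
n_not = int((~(df['a'] > 1)).sum())
```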
Python creating objects from a class using multiple files I need to use a 'Student' class with 5 variables and create objects using more than one file The text files: (Students txt) ````Last Name Midle Name First Name Student ID ---------------------------------------------- Howard Moe howar1m Howard Curly howar1c Fine Lary fine1l Howard Shemp howar1s Besser Joe besse1j DeRita Joe Curly derit1cj Tiure Desilijic Jaba tiure1jd Tharen Bria thare1b Tai Besadii Durga tai1db ```` Text file 2: (CourseEnrollment txt) ````PH03 ---- fine1l howar1s besse1j derit1cj tiure1jd targa1d bomba1t brand1m took1p mccoy1l solo1h edrie1m mccoy1e adama1l grays1z MT03 ---- cottl1s fine1l clega1s targa1d took1p mccoy1l crush1w dane1a monto1i rugen1t talto1v watso1j carpe1m rosli1l biggs1gj tigh1e PH05 ---- zarek1t adama1w tigh1s cottl1s howar1m howar1s besse1j balta1g derit1cj thare1b hego1d lanni1t stark1a clega1s scott1m monto1i flaum1e watso1j biggs1gj dane1a EN01 ---- howar1c fine1l tai1db targa1d brand1m corey1c edrie1m watso1j carpe1m sobch1w EN02 ---- howar1m howar1s besse1j tiure1jd tai1db hego1d lanni1t stark1a mccoy1l scott1m crush1w dane1a monto1i rugen1t solo1h flaum1e talto1v watso1j mccoy1e CS02 ---- howar1m howar1c besse1j derit1cj thare1b hego1d clega1s targa1d brand1m rugen1t flaum1e talto1v mccoy1e grube1h AR00 ---- tigh1e rosli1l murph1a grays1z howar1c howar1s tiure1jd thare1b lanni1t clega1s bomba1t balta1g brand1m took1p crush1w corey1c edrie1m grube1h sobch1w MT01 ---- derit1cj tai1db hego1d stark1a bomba1t took1p scott1m crush1w grube1h rugen1t solo1h corey1c flaum1e talto1v mccoy1e carpe1m sobch1w CS01 ---- howar1m howar1c fine1l tiure1jd thare1b tai1db lanni1t stark1a bomba1t mccoy1l monto1i solo1h biggs1gj corey1c edrie1m carpe1m CS05 ---- grays1z adama1w adama1l rosli1l balta1g tigh1e tigh1s cottl1s zarek1t murph1a sobch1w dane1a EN08 ---- grays1z adama1w adama1l rosli1l balta1g tigh1e tigh1s cottl1s zarek1t murph1a grube1h biggs1gj OT02 ---- adama1w adama1l 
tigh1s scott1m zarek1t murph1a ```` I need to read in the text files to create Student objects using both files and the 'Student' class The class is: ````class Student (object): def __init__(self first_name middle_name last_name student_id enrolled_courses): """Initialization method""" self first_name = first_name self middle_name = middle_name self last_name = last_name self student_id = student_id self enrolled_courses = enrolled_courses ```` and in the main method I have: ````if __name__ == '__main__': list_of_students = [] with open('Students txt') as f: for line in f: data = line split() if len(data) == 3: first_name last_name student_id = data list_of_students append(Student(last_name '' first_name student_id)) elif len(data) == 4: list_of_students append(Student(*data)) else: continue ```` When I run the program without an `enrolled_courses` variable and only read in 'Students txt' it runs perfect and creates the `Student` objects with `first_name` `middle_name` `last_name` and `student_id` However I still need to use add the `enrolled_courses` variable to the objects and obtain it from 'EnrolledCourses txt' How can I read in both files and assign the variables to the objects I am trying to create?
First read your course enrollment file and create a dictionary: key=student value=list of courses The format is strange but the code below has been tested and works (maybe not as robust as it should be though) Read line by line: course name first then the list of students Add each to the dictionary (the `defaultdict` object creates an empty list whenever a key does not exist yet): ````from collections import defaultdict student_course = defaultdict(list) with open("CourseEnrollment txt") as enr: while True: try: course_name = next(enr) strip() next(enr) # skip dashes while True: student = next(enr) strip() if student=="": break student_course[student] append(course_name) except StopIteration: break ```` and in your code call the `Student` constructor like this: ````list_of_students append(Student(last_name '' first_name student_id student_course[student_id])) ````
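The same parse loop can be exercised on an in-memory string via `io StringIO` instead of the real file; a sketch with a two-course sample in the file's format:

```python
import io
from collections import defaultdict

sample = """PH03
----
fine1l
howar1s

MT03
----
cottl1s
fine1l
"""

student_course = defaultdict(list)
enr = io.StringIO(sample)
while True:
    try:
        course_name = next(enr).strip()
        next(enr)  # skip the ---- underline
        while True:
            student = next(enr).strip()
            if student == "":  # blank line ends the course's roster
                break
            student_course[student].append(course_name)
    except StopIteration:  # end of the file/string
        break
```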
Python Homework score of each name pair I have got an exercise to do and I do not know why it is not working I really hope someone could help me Thank you in advance <blockquote> Calculate and display the score of each name pair (using the lists list_2D = ["John" "Kate" "Oli"] and ["Green" "Fletcher" "Nelson"]) The score of a name combination is calculated by taking the length of the concatenated (merged) string storing the first and last names and adding the number of vowels in it For example Oli Green has length 9 (counting space between first and last name) plus 4 for the four vowels resulting in a total score of 13 </blockquote>
I have changed some aspects because I am not going to do your homework for you but I understand just starting out on Stack and learning to program It is not easy so here is some explanation So what you will want to do is loop through all of the possible combinations of the first and last names I would do a nested for loop: here is the first part: ````array1 = ['John' 'Kate' 'Oli'] array2 = ['Green' 'Fletcher' 'Nelson'] for i in range(0 3): for k in range(0 3): val = array1[i] + " " + array2[k] print(val) ```` What you are doing is you are keeping `i` at zero and then looping through everything in the second list This way you can get every permutation of your list To find the length you can do a for counting loop starting at position 0 or you can use the ` len()` function To find the vowels you will need to look through each string you create and check to see if they match `[a e i o u]` You can do something like `if (val[k] == vowelList[j]): score += 1` That is how I would do it but I am not as experienced as other people here Welcome to Stack! It is scary and a lot of people can be off putting and rude just make sure to check other questions before you post and show some of your own work (what you have tried/attempted )
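Putting the pieces together, one possible (not the only) scoring function that reproduces the worked example from the question (Oli Green: length 9 plus 4 vowels = 13):

```python
def name_score(first, last):
    """Length of 'First Last' (space included) plus its vowel count."""
    full = first + " " + last
    vowels = sum(1 for ch in full.lower() if ch in "aeiou")
    return len(full) + vowels

first_names = ["John", "Kate", "Oli"]
last_names = ["Green", "Fletcher", "Nelson"]
scores = {f + " " + l: name_score(f, l)
          for f in first_names for l in last_names}
```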
passing an array of COM pointers from Python to C++ I have been reading a lot of docs examples and StackOverflow topics but still it does not work! I am writing a Python interface to my C++ COM objects This is not the first time I have done this In the past I have successfully used comtypes to acquire individual interface pointers and passed them my COM classes but this time I need to pass a pointer to an array of interface pointers The COM interface I need to call: ````STDMETHOD(ExportGeopackage)([in] IMap* pMap [in] imageFormatType imageFormat [in] long imageQuality [in] long zoomMin [in] long zoomMax [in] IFeatureLayer** attributeExportLayers [in] BSTR title [in] BSTR description [in] BSTR saveToPath [in] ITrackCancel* pTrackCancel); ```` The attributeExportLayers argument is expected to be a pointer to a null-terminated C array of IFeatureLayer pointers ExportGeopackage() has already been tested with C++ clients I am writing the first Python client In Python: ````# append a null pointer to the list of comtypes IFeatureLayer pointers exportLayers append(comtypes cast(0 comtypes POINTER(esriCarto IFeatureLayer))) # create ctypes array and populate PointerArray = ctypes c_void_p * len(exportLayers) pointers = PointerArray() for i in range(len(exportLayers)): pointers[i] = exportLayers[i] # export is comtypes interface pointer acquired earlier export ExportGeopackage(map format quality min max ctypes cast(pointers ctypes POINTER(esriCarto IFeatureLayer)) title desc geopackage_path 0) ```` Comparing Python dumps of the content of exportLayer and pointers variables shows the pointer values being successfully transferred from the former to the latter Python tests of these pointers are successful However when I debug into ExportGeopackage() the memory pointed to by attributeExportLayers has no resemblance to the expected array of IFeatureLayer pointers It looks like a single pointer (pointing to the wrong place) followed by a long string of null pointers Thinking that 
possibly the Python pointers variable had already been garbage collected I added a reference to pointers after the call to ExportGeopackage() This made no difference Am I somehow inserting an extra level of indirection or not enough indirection? I am mystified TIA for any help (or guesses) Alan
Once again answering my own question There is an additional level of indirection being introduced beyond what is required Apparently unlike a cast in C ctypes cast actually takes the address of the first argument Change: ````PointerArray = ctypes c_void_p * len(exportLayers) ```` To: ````PointerArray = comtypes POINTER(esriCarto IFeatureLayer) * len(exportLayers) ```` And eliminate the use of ctypes cast in the call to ExportGeopackage(): ````export ExportGeopackage(map format quality min max pointers title desc geopackage_path 0) ````
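The difference between an array of `c_void_p` and an array of typed pointers can be seen without any COM at all; a sketch with plain `c_int`, including a NULL slot standing in for the terminator:

```python
import ctypes

IntPtr = ctypes.POINTER(ctypes.c_int)

# Keep the c_int objects alive in a list so the pointers stay valid.
values = [ctypes.c_int(v) for v in (10, 20, 30)]

# One extra slot, left as NULL, plays the role of the terminator.
arr = (IntPtr * (len(values) + 1))()
for i, v in enumerate(values):
    arr[i] = ctypes.pointer(v)

first = arr[0].contents.value
terminated = not arr[len(values)]  # NULL pointers are falsy
```

Because the array type is `POINTER(c_int) * n` rather than `c_void_p * n`, no `ctypes cast` is needed when passing it to a function expecting that pointer type.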
Colorbar/plotting issue? "posx and posy should be finite values" <strong>The problem</strong> So I have a lat-lon array with `6` layers (`array size = (192 288 6)`) containing a bunch of data ranging in values from nearly `0` to about `0 65` When I plot data from every one of the `6` layers (`[: : 0]` `[: : 1]` etc ) I have no problems and get a nice map except for `[: : 4]` For some reason when I try to plot this 2D array I get an error message I do not understand and it only comes up when I try to include a colorbar If I nix the colorbar there is no error but I need that colorbar <strong>The code</strong> Here is the code I use for a different part of the array along with the resulting plot Let us go with `[: : 5]` ````#Set labels lonlabels = ['0' '45E' '90E' '135E' '180' '135W' '90W' '45W' '0'] latlabels = ['90S' '60S' '30S' 'Eq ' '30N' '60N' '90N'] #Set cmap properties bounds = np array([0 0 001 0 01 0 05 0 1 0 2 0 3 0 4 0 5 0 6]) boundlabels = ['0' '0 001' '0 01' '0 05' '0 1' '0 2' '0 3' '0 4' '0 5' '0 6'] cmap = plt get_cmap('jet') norm = colors PowerNorm(0 35 vmax=0 65) #creates logarithmic scale #Create basemap fig ax = plt subplots(figsize=(15 10 )) m = Basemap(projection='cyl' llcrnrlat=-90 urcrnrlat=90 llcrnrlon=0 urcrnrlon=360 lon_0=180 resolution='c') m drawcoastlines(linewidth=2 color='w') m drawcountries(linewidth=2 color='w') m drawparallels(np arange(-90 90 30 ) linewidth=0 3) m drawmeridians(np arange(-180 180 45 ) linewidth=0 3) meshlon meshlat = np meshgrid(lon lat) x y = m(meshlon meshlat) #Plot variables trend = m pcolormesh(x y array[: : 5] cmap='jet' norm=norm shading='gouraud') #Set plot properties #Colorbar cbar=m colorbar(trend size='5%' ticks=bounds location='bottom' pad=0 8) cbar set_label(label='Here is a label' size=25) cbar set_ticklabels(boundlabels) for t in cbar ax get_xticklabels(): t set_fontsize(25) #Titles & labels ax set_title('Here is a title for [: : 5]' fontsize=35) ax set_xlabel('Longitude' fontsize=25) ax 
set_xticks(np arange(0 405 45)) ax set_xticklabels(lonlabels fontsize=20) ax set_yticks(np arange(-90 120 30)) ax set_yticklabels(latlabels fontsize=20) ```` <a href="https://i stack imgur com/DTTwg png" rel="nofollow"><img src="https://i stack imgur com/DTTwg png" alt="enter image description here"></a> Now when I use the EXACT same code but plot for `array[: : 4]` instead of `array[: : 5]` I get this error ````ValueError Traceback (most recent call last) /linuxapps/anaconda/lib/python2 7/site-packages/IPython/core/formatters pyc in __call__(self obj) 305 pass 306 else: -> 307 return printer(obj) 308 # Finally look for special method names 309 method = get_real_method(obj self print_method) [lots of further traceback] /linuxapps/anaconda/lib/python2 7/site-packages/matplotlib/text pyc in draw(self renderer) 755 posy = float(textobj convert_yunits(textobj _y)) 756 if not np isfinite(posx) or not np isfinite(posy): -> 757 raise ValueError("posx and posy should be finite values") 758 posx posy = trans transform_point((posx posy)) 759 canvasw canvash = renderer get_canvas_width_height() ValueError: posx and posy should be finite values ```` I have no idea why it is doing this as my code for every other part of the array plots just fine and they all use the same meshgrid There are no `NaN`'s in the array Also here is the result if I comment out all the code between `#Colorbar` and `#Titles & labels` <a href="https://i stack imgur com/d7mSg png" rel="nofollow"><img src="https://i stack imgur com/d7mSg png" alt="enter image description here"></a> UPDATE: The problem also disappears when I include the colorbar code but change the `PowerNorm` to `1 0` (`norm = colors PowerNorm(1 0 vmax=0 65)`) Anything other than `1 0` generates the error when the colorbar is included <strong>The question</strong> What could be causing the `posx` & `posy` error message and how can I get rid of it so I can make this plot with the colorbar included? 
<strong>UPDATE</strong> When I run the kernel from scratch again with the same code (except that I changed the `0 6` bound to `0 65`) I get the following warnings in the `array[: : 4]` block I am not sure if they are related but I will include them just in case ````/linuxapps/anaconda/lib/python2 7/site-packages/matplotlib/colors py:1202: RuntimeWarning: invalid value encountered in power np power(resdat gamma resdat) [<matplotlib text Text at 0x2af62c8e6710> <matplotlib text Text at 0x2af62c8ffed0> <matplotlib text Text at 0x2af62cad8e90> <matplotlib text Text at 0x2af62cadd3d0> <matplotlib text Text at 0x2af62caddad0> <matplotlib text Text at 0x2af62cae7250> <matplotlib text Text at 0x2af62cacd050>] /linuxapps/anaconda/lib/python2 7/site-packages/matplotlib/axis py:1015: UserWarning: Unable to find pixel distance along axis for interval padding of ticks; assuming no interval padding needed warnings warn("Unable to find pixel distance along axis " /linuxapps/anaconda/lib/python2 7/site-packages/matplotlib/axis py:1025: UserWarning: Unable to find pixel distance along axis for interval padding of ticks; assuming no interval padding needed warnings warn("Unable to find pixel distance along axis " ````
So I found out that specifying `vmax` & `vmin` solves the problem I have no idea why but once I did my plot turned out correctly with the colorbar ````trend = m pcolormesh(x y array[: : 5] cmap='jet' norm=norm shading='gouraud' vmin=0 vmax=0 6) ```` <a href="https://i stack imgur com/4Wkft png" rel="nofollow"><img src="https://i stack imgur com/4Wkft png" alt="enter image description here"></a>
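The workaround can be checked on the norm object alone, without plotting: with `vmin`/`vmax` pinned, `PowerNorm` maps the data range onto finite values in [0, 1], so the colorbar tick positions stay finite. A small sketch:

```python
from matplotlib.colors import PowerNorm

# Same gamma as the question, but with the limits fixed explicitly.
norm = PowerNorm(0.35, vmin=0.0, vmax=0.65)

lo = float(norm(0.0))    # vmin maps to 0
hi = float(norm(0.65))   # vmax maps to 1
mid = float(norm(0.1))   # interior values land strictly between
```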
Implementing GLCM texture feature with scikit-image and Python I am trying to implement a texture image as <a href="http://www fp ucalgary ca/mhallbey/texture_calculations htm" rel="nofollow">described in this tutorial</a> using Python and skimage The issue is to move a 7x7 window over a large raster and replace the center of each pixel with the calculated texture from the 7x7 window I manage to do this with the code below but I see no other way than looping through each individual pixel which is very slow One software package does that in a few seconds so there must be some other way is there? Here the code that works but is very slow ````import matplotlib pyplot as plt import gdal gdalconst import numpy as np from skimage feature import greycomatrix greycoprops filename = "//mnt//glaciology//RS2_20140101 jpg" outfilename = "//home//max//Documents//GLCM_contrast tif" sarfile = gdal Open(filename gdalconst GA_ReadOnly) sarraster = sarfile ReadAsArray() #sarraster is satellite image testraster will receive texture testraster = np copy(sarraster) testraster[:] = 0 for i in range(testraster shape[0] ): print i for j in range(testraster shape[1] ): #windows needs to fit completely in image if i <3 or j <3: continue if i &gt; (testraster shape[0] - 4) or j &gt; (testraster shape[0] - 4): continue #Calculate GLCM on a 7x7 window glcm_window = sarraster[i-3: i+4 j-3 : j+4] glcm = greycomatrix(glcm_window [1] [0] symmetric = True normed = True ) #Calculate contrast and replace center pixel contrast = greycoprops(glcm 'contrast') testraster[i j]= contrast sarplot = plt imshow(testraster cmap = 'gray') ```` Results: <a href="http://i stack imgur com/IGqju png" rel="nofollow"><img src="http://i stack imgur com/IGqju png" alt="Contrast GLCM"></a>
I had the same problem, different data. Here is a script I wrote that uses parallel processing and a sliding-window approach:

````import gdal, osr
import numpy as np
from scipy.interpolate import RectBivariateSpline
from numpy.lib.stride_tricks import as_strided as ast
import dask.array as da
from joblib import Parallel, delayed, cpu_count
import os
from skimage.feature import greycomatrix, greycoprops

def im_resize(im, Nx, Ny):
    '''
    resize array by bivariate spline interpolation
    '''
    ny, nx = np.shape(im)
    xx = np.linspace(0, nx, Nx)
    yy = np.linspace(0, ny, Ny)
    try:
        im = da.from_array(im, chunks=1000)   # dask implementation
    except:
        pass
    newKernel = RectBivariateSpline(np.r_[:ny], np.r_[:nx], im)
    return newKernel(yy, xx)

def p_me(Z):
    '''
    loop to calculate greycoprops
    '''
    try:
        glcm = greycomatrix(Z, [5], [0], 256, symmetric=True, normed=True)
        cont = greycoprops(glcm, 'contrast')
        diss = greycoprops(glcm, 'dissimilarity')
        homo = greycoprops(glcm, 'homogeneity')
        eng = greycoprops(glcm, 'energy')
        corr = greycoprops(glcm, 'correlation')
        ASM = greycoprops(glcm, 'ASM')
        return (cont, diss, homo, eng, corr, ASM)
    except:
        return (0, 0, 0, 0, 0, 0)

def read_raster(in_raster):
    ds = gdal.Open(in_raster)
    data = ds.GetRasterBand(1).ReadAsArray()
    data[data<=0] = np.nan
    gt = ds.GetGeoTransform()
    xres = gt[1]
    yres = gt[5]
    # get the edge coordinates and add half the resolution
    # to go to center coordinates
    xmin = gt[0] + xres * 0.5
    xmax = gt[0] + (xres * ds.RasterXSize) - xres * 0.5
    ymin = gt[3] + (yres * ds.RasterYSize) + yres * 0.5
    ymax = gt[3] - yres * 0.5
    del ds
    # create a grid of xy coordinates in the original projection
    xx, yy = np.mgrid[xmin:xmax+xres:xres, ymax+yres:ymin:yres]
    return data, xx, yy, gt

def norm_shape(shap):
    '''
    Normalize numpy array shapes so they are always expressed as a tuple,
    even for one-dimensional shapes
    '''
    try:
        i = int(shap)
        return (i,)
    except TypeError:
        # shape was not a number
        pass
    try:
        t = tuple(shap)
        return t
    except TypeError:
        # shape was not iterable
        pass
    raise TypeError('shape must be an int or a tuple of ints')

def sliding_window(a, ws, ss=None, flatten=True):
    '''
    Source: http://www.johnvinyard.com/blog/?p=268#more-268

    Parameters:
        a  - an n-dimensional numpy array
        ws - an int (a is 1D) or tuple (a is 2D or greater) representing the size
             of each dimension of the window
        ss - an int (a is 1D) or tuple (a is 2D or greater) representing the
             amount to slide the window in each dimension.
             If not specified, it defaults to ws
        flatten - if True, all slices are flattened; otherwise, there is an
                  extra dimension for each dimension of the input

    Returns
        an array containing each n-dimensional window from a
    '''
    if None is ss:
        # ss was not provided, the windows will not overlap in any direction
        ss = ws
    ws = norm_shape(ws)
    ss = norm_shape(ss)
    # convert ws, ss, and a.shape to numpy arrays
    ws = np.array(ws)
    ss = np.array(ss)
    shap = np.array(a.shape)
    # ensure that ws, ss, and a.shape all have the same number of dimensions
    ls = [len(shap), len(ws), len(ss)]
    if 1 != len(set(ls)):
        raise ValueError(
            'a.shape, ws and ss must all have the same length. They were %s' % str(ls))
    # ensure that ws is smaller than a in every dimension
    if np.any(ws &gt; shap):
        raise ValueError(
            'ws cannot be larger than a in any dimension. '
            'a.shape was %s and ws was %s' % (str(a.shape), str(ws)))
    # how many slices will there be in each dimension?
    newshape = norm_shape(((shap - ws) // ss) + 1)
    # the shape of the strided array will be the number of slices in each dimension
    # plus the shape of the window (tuple addition)
    newshape += norm_shape(ws)
    # the strides tuple will be the array's strides multiplied by step size, plus
    # the array's strides (tuple addition)
    newstrides = norm_shape(np.array(a.strides) * ss) + a.strides
    a = ast(a, shape=newshape, strides=newstrides)
    if not flatten:
        return a
    # Collapse strided so that it has one more dimension than the window, i.e.
    # the new array is a flat list of slices
    meat = len(ws) if ws.shape else 0
    firstdim = (np.product(newshape[:-meat]),) if ws.shape else ()
    dim = firstdim + (newshape[-meat:])
    # remove any dimensions with size 1
    dim = filter(lambda i: i != 1, dim)
    return a.reshape(dim), newshape

def CreateRaster(xx, yy, std, gt, proj, driverName, outFile):
    '''
    Exports data to GTiff Raster
    '''
    std = np.squeeze(std)
    std[np.isinf(std)] = -99
    driver = gdal.GetDriverByName(driverName)
    rows, cols = np.shape(std)
    ds = driver.Create(outFile, cols, rows, 1, gdal.GDT_Float32)
    if proj is not None:
        ds.SetProjection(proj.ExportToWkt())
    ds.SetGeoTransform(gt)
    ss_band = ds.GetRasterBand(1)
    ss_band.WriteArray(std)
    ss_band.SetNoDataValue(-99)
    ss_band.FlushCache()
    ss_band.ComputeStatistics(False)
    del ds

# Stuff to change
if __name__ == '__main__':

    win_sizes = [7]
    for win_size in win_sizes[:]:
        in_raster =    # Path to input raster
        win = win_size
        meter = str(win/4)

        # Define output file names
        contFile =
        dissFile =
        homoFile =
        energyFile =
        corrFile =
        ASMFile =

        merge, xx, yy, gt = read_raster(in_raster)
        merge[np.isnan(merge)] = 0

        Z, ind = sliding_window(merge, (win, win), (win, win))
        Ny, Nx = np.shape(merge)

        w = Parallel(n_jobs=cpu_count(), verbose=0)(delayed(p_me)(Z[k]) for k in xrange(len(Z)))

        cont = [a[0] for a in w]
        diss = [a[1] for a in w]
        homo = [a[2] for a in w]
        eng = [a[3] for a in w]
        corr = [a[4] for a in w]
        ASM = [a[5] for a in w]

        # Reshape to match number of windows
        plt_cont = np.reshape(cont, (ind[0], ind[1]))
        plt_diss = np.reshape(diss, (ind[0], ind[1]))
        plt_homo = np.reshape(homo, (ind[0], ind[1]))
        plt_eng = np.reshape(eng, (ind[0], ind[1]))
        plt_corr = np.reshape(corr, (ind[0], ind[1]))
        plt_ASM = np.reshape(ASM, (ind[0], ind[1]))
        del cont, diss, homo, eng, corr, ASM

        # Resize images to receive texture and define filenames
        contrast = im_resize(plt_cont, Nx, Ny)
        contrast[merge==0] = np.nan
        dissimilarity = im_resize(plt_diss, Nx, Ny)
        dissimilarity[merge==0] = np.nan
        homogeneity = im_resize(plt_homo, Nx, Ny)
        homogeneity[merge==0] = np.nan
        energy = im_resize(plt_eng, Nx, Ny)
        energy[merge==0] = np.nan
        correlation = im_resize(plt_corr, Nx, Ny)
        correlation[merge==0] = np.nan
        ASM = im_resize(plt_ASM, Nx, Ny)
        ASM[merge==0] = np.nan
        del plt_cont, plt_diss, plt_homo, plt_eng, plt_corr, plt_ASM
        del w, Z, ind, Ny, Nx

        driverName = 'GTiff'
        epsg_code = 26949
        proj = osr.SpatialReference()
        proj.ImportFromEPSG(epsg_code)

        CreateRaster(xx, yy, contrast, gt, proj, driverName, contFile)
        CreateRaster(xx, yy, dissimilarity, gt, proj, driverName, dissFile)
        CreateRaster(xx, yy, homogeneity, gt, proj, driverName, homoFile)
        CreateRaster(xx, yy, energy, gt, proj, driverName, energyFile)
        CreateRaster(xx, yy, correlation, gt, proj, driverName, corrFile)
        CreateRaster(xx, yy, ASM, gt, proj, driverName, ASMFile)

        del contrast, merge, xx, yy, gt, meter, dissimilarity, homogeneity, energy, correlation, ASM
````

This script calculates GLCM properties for a defined window size, with no overlap between adjacent windows.
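The core trick above is the strided, non-overlapping windowing; a numpy-only sketch of the same idea (the names here are illustrative, not taken from the script):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def tile_windows(a, win):
    # split a 2-D array into non-overlapping win x win tiles, in row-major order
    h, w = a.shape
    nh, nw = h // win, w // win
    sh, sw = a.strides
    tiles = as_strided(a, shape=(nh, nw, win, win),
                       strides=(sh * win, sw * win, sh, sw))
    return tiles.reshape(nh * nw, win, win)

a = np.arange(16).reshape(4, 4)
tiles = tile_windows(a, 2)   # 4 tiles, each 2x2
```

Each tile can then be handed to `greycomatrix` independently, which is what makes the parallel map over windows possible.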
Use a xref of values to sub into tuple based on value - Python I have a list of xref values ````internal_customer = {'01':'11' '03':'33' '05':'55' '07':'77' '08':'88' '06':'66' '09':'22' '11':'18' '12':'19'} ```` that I would like to use to sub a value in a tuple: ````('03' 'S/N A1631703') ```` So my resulting tuple would be ````('33' 'S/N A1631703') ```` Can someone point me in the direction of the tools I could use to accomplish this?
Unpack and access the dict using the first element, presuming you have a list of tuples:

````internal_customer = {'01':'11', '03':'33', '05':'55', '07':'77', '08':'88', '06':'66', '09':'22', '11':'18', '12':'19'}

lst = [('03', 'S/N A1631703'), ('05', 'S/N A1631703')]

lst[:] = ((internal_customer[a], b) for a, b in lst)

print(lst)
````

Tuples are immutable, so there is no notion of mutating one; you have to create a new tuple comprising the new value from the dict and the existing second element. The `lst[:]` syntax at least allows you to modify the original list. You can of course just reassign the name or create a completely new list if you want to maintain the original.
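A runnable sketch of the same idea; the `get` fallback is an assumption about what should happen to codes missing from the xref (here they pass through unchanged):

```python
internal_customer = {'01': '11', '03': '33', '05': '55'}
records = [('03', 'S/N A1631703'), ('99', 'S/N B0000001')]

# build a new tuple per record, since tuples cannot be mutated in place
records[:] = [(internal_customer.get(code, code), sn) for code, sn in records]
```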
Java like function getLeastSignificantBits() & getMostSignificantBits in Python? Can someone please help me out in forming an easy function to extract the leastSignificant &amp; mostSignificant bits in Python? Ex code in Java: ````UUID you = UUID fromString('a316b044-0157-1000-efe6-40fc5d2f0036'); long leastSignificantBits = you getLeastSignificantBits(); private UUID(byte[] data) { long msb = 0; long lsb = 0; assert data length == 16 : "data must be 16 bytes in length"; for (int i=0; i<8; i++) msb = (msb << 8) | (data[i] &amp; 0xff); for (int i=8; i<16; i++) lsb = (lsb << 8) | (data[i] &amp; 0xff); this mostSigBits = msb; this leastSigBits = lsb; } ```` --> Output value: -1160168401362026442
`efe640fc5d2f0036` in decimal is 17286575672347525174. Subtract `0x10000000000000000` from it and you get `-1160168401362026442`:

````int("efe640fc5d2f0036", 16) - 0x10000000000000000
> -1160168401362026442
````

Note that this is only guesswork, but it seems to work with the sole test case you provided (fortunately it was negative). Call that reverse engineering.

Take the 2 last hex values (dash separated) and join them. I suppose the storage means the value is negative when the first digit is above 7, so subtract the next power of two if that is the case:

````def getLeastSignificantBits(s):
    hv = "".join(s.split("-")[-2:])
    v = int(hv, 16)
    if int(hv[0], 16) &gt; 7:
        # negative
        v = v - 0x10000000000000000
    return v

print(getLeastSignificantBits('a316b044-0157-1000-efe6-40fc5d2f0036'))
````

result:

````-1160168401362026442
````

EDIT: providing a method which takes the whole string and returns the lsb &amp; msb couple:

````def getLeastMostSignificantBits(s):
    sp = s.split("-")
    lsb_s = "".join(sp[-2:])
    lsb = int(lsb_s, 16)
    if int(lsb_s[0], 16) &gt; 7:
        # negative
        lsb = lsb - 0x10000000000000000
    msb_s = "".join(sp[:3])
    msb = int(msb_s, 16)
    if int(msb_s[0], 16) &gt; 7:
        # negative
        msb = msb - 0x10000000000000000
    return lsb, msb

print(getLeastMostSignificantBits('a316b044-0157-1000-efe6-40fc5d2f0036'))
````

result:

````(-1160168401362026442, -6694969989912915968)
````
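The guess can be cross-checked against the stdlib `uuid` module, which exposes the full 128-bit value; a sketch that reproduces Java's signed two's-complement 64-bit halves:

```python
import uuid

def java_sig_bits(s):
    # split the 128-bit UUID value into Java-style signed 64-bit halves
    n = uuid.UUID(s).int
    def to_signed(v):
        return v - (1 << 64) if v >= (1 << 63) else v
    return to_signed(n & ((1 << 64) - 1)), to_signed(n >> 64)

lsb, msb = java_sig_bits('a316b044-0157-1000-efe6-40fc5d2f0036')
```

Both numbers match the couple shown above.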
Python/Spyder: General Working Directory So far I have code that opens a text file manipulates it into a pandas data file then exports to excel I am sharing this code with other people and we all have the same working directory within Spyder All the code works fine the only lines I want to manipulate are the opening of the file and the exporting of the file ````with open(r'C:\Users\"my_name"\Desktop\data\file txt' 'r') as data_file: ```` <hr> The issue here is I want to set my working directory to just "\data" so that I can just write: ````with open(r'file txt' 'r') as data_file: ```` this way the people I send it to who also have "\data" as their working directory on their computer can just run the code and it will select the "file txt" that is in their data directory
The answer that you are technically looking for is using `os.chdir()`, as follows:

````import os
os.chdir('data')   # make the data folder the working directory

# THE REST OF THE CODE IS THE SAME
with open(r'file.txt', 'r') as data_file:
````

A safer answer, however, would be:

````def doTheThing(fName):
    return os.path.join(os.getcwd(), 'data', fName)

with open(doTheThing('file.txt'), 'r') as data_file:
````
selenium run chrome on raspberry pi
If you're seeing this, I guess you are looking to run Chromium on a Raspberry Pi with Selenium, like this: `Driver = webdriver.Chrome("path/to/chromedriver")`, or like this: `webdriver.Chrome()`
After hours and a whole night of debugging, I have concluded that you cannot, because there is no chromedriver build compatible with the Raspberry Pi's processor, even if you download the Linux 32-bit version. You can confirm this by running `path/to/chromedriver` in a terminal window; it will give you this error:

<blockquote>
  cannot execute binary file: Exec format error
</blockquote>

hope this helps anyone that wanted to do this :)
Numpy: How to vectorize parameters of a functional form of a function applied to a data set
Ultimately I want to remove all explicit loops in the code below to take advantage of numpy vectorization and function calls in C instead of python. Below is simplified for uses of numpy in python.

I have the following quadratic function:

````def quadratic_func(a, b, c, x):
    return a*x*x + b*x + c
````

I am trying to optimize choices of a, b, c given input data x and output data y of the same size (of course this should be done by linear regression, but humor me). Say len(x)=100. Easy to vectorize with scalars a, b, c to get back a result of length 100.

Let us say that we know a, b, c should be inside of [-10, 10], and I optimize by building a grid and picking the point with the min sum square error:

````a = np.arange(-10.0, 10.01, 2.0)
nodes = np.array(np.meshgrid(a, a, a)).T.reshape(-1, 3)   # 3-d cartesian product with array of nodes
````

For each of the 1331 nodes I would like to calculate all 1331 of the length-100 return values:

````res = []
x = np.random.uniform(-5.0, 5.0, 100)
for node in nodes:
    res.append(quadratic_func(*node, x=x))
````

How can I take advantage of broadcasting so as to get my list of 1331 items, each with 100 values that are the results of calling quadratic_func on x? Answer must use vectorization, broadcasting, etc. to get the orders of magnitude speed improvements I am looking for. Also, the answer must use calls to quadratic_func, or more generally my_func(*node, x=x).

In real life I am optimizing a non-linear function that is not even close to being convex and has many local minimums. It is a great functional form to use if I can get to the "right" local minimum - I already know how to do that but would like to get there faster!
One approach using a combination of <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">`broadcasting`</a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow">`np.einsum`</a> -

````np.einsum('ij,jk->ik', nodes, x**np.array([2, 1, 0])[:, None])
````

Another one using matrix-multiplication with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html" rel="nofollow">`np.dot`</a> -

````nodes.dot(x**np.array([2, 1, 0])[:, None])
````
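Both forms can be sanity-checked against the original loop; a quick sketch (the coarse grid is just to keep the check small):

```python
import numpy as np

def quadratic_func(a, b, c, x):
    return a * x * x + b * x + c

a = np.arange(-10.0, 10.01, 2.0)                   # 11 points -> 1331 nodes
nodes = np.array(np.meshgrid(a, a, a)).T.reshape(-1, 3)
x = np.random.uniform(-5.0, 5.0, 100)

powers = x ** np.array([2, 1, 0])[:, None]         # shape (3, 100): x**2, x, 1
vectorized = nodes.dot(powers)                     # shape (1331, 100)
looped = np.array([quadratic_func(*node, x) for node in nodes])
```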
Filter by a Reference Property in Appnengine I am doing a blog in appengine I want make a query to get the numbers of post by category So I need filter by a Reference Property in appengine Look my actual Code Those are my models : ````class Comment(db Model) : user = db ReferenceProperty(User) post = db ReferenceProperty(Blog) subject = db StringProperty(required = True) content = db TextProperty(required = True) date = db DateProperty(auto_now_add = True) last_modified = db DateProperty() status = db BooleanProperty(default = True) class Category(db Model): name = db StringProperty() date = db DateProperty(auto_now_add=True) class Blog(db Model) : subject = db StringProperty(required = True) content = db TextProperty(required = True) date = db DateProperty(auto_now_add = True) category = db ReferenceProperty(Category) user = db ReferenceProperty(User) last_modified = db DateProperty(auto_now = True) status = db BooleanProperty() likes = db IntegerProperty(default = 0) users_liked = db ListProperty(db Key default = []) dislikes = db IntegerProperty(default = 0) users_disliked = db ListProperty(db Key default = []) ```` And this is my query : ````def numcomments_all_category() : dic = {} category = get_category() for cat in category : dic[cat key() id()] = Comment all() filter("post category =" cat key()) ancestor(ancestor_key) count() return dic ```` But It seems that filter("post category =" cat key()) is not the correct way to do this
I have not used `db` in a while, but I think something like this will work:

````count = 0
# Get all blogs of the desired category
blogs = Blog.all().filter("category =", cat.key())
for blog in blogs:
    # For each blog, add the number of its comments
    count += Comment.all().filter("post =", blog.key()).count()
````
HTML select options from a python list I am writing a python cgi script to setup a Hadoop cluster I want to create an HTML select dropdown where the options are taken from a python list Is this possible?? I have looked around a lot Could not find any proper answer to this This is what i have found so far on another thread ````def makeSelect(name values): SEL = '<select name="{0}"&gt;\n{1}</select&gt;\n' OPT = '<option value="{0}"&gt;{0}</option&gt;\n' return SEL format(name '' join(OPT format(v) for v in values)) ```` I really need some help Please Thanks
You need to generate a list of "option"s and pass them over to your javascript to make the list:

````values = {"A": "One", "B": "Two", "C": "Three"}
options = []
for value in sorted(values.keys()):
    options.append("<option value='" + value + "'&gt;" + values[value] + "</option&gt;")
````

Then inject "options" into your html. Say in your "template.html" there is a line:

````var options = $python_list;
````

Then at the end of your python script:

````#### open the html template file, read it into 'content'
html_file = "template.html"
f = open(html_file, 'r')
content = f.read()
f.close()

#### replace the placeholder with your python-generated list
content = content.replace("$python_list", json.dumps(options))

#### write content into the final html
output_html_file = "index.html"
f = open(output_html_file, 'w')
f.write(content)
f.close()
````

In your "index.html" you should have one line after "var options = " that takes the list and generates the dropdown:

````$('#my_dropdown').append(options.join("")).selectmenu();
````

Alternatively, I suggest that you use your python to generate a json file (with `json.dumps()`), a file called "config.json" maybe, and your html/javascript file should read this json file to render the final page. So in your json file there should be something like:

{ "options": ["One", "Two", "Three"] }

And in your html section you could read the option values:

````d3.json("config.json", function(data) {
    var options = [];
    for (var i = 0; i < data.options.length; i++) {
        options.push("<option value='" + data.options[i] + "'&gt;" + data.options[i] + "</option&gt;");
    }
    $('#my_dropdown').append(options.join("")).selectmenu();
});
````
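If you would rather render the whole element server-side, the `makeSelect` idea from the question boils down to this (a sketch; escaping of option values is left out):

```python
def make_select(name, values):
    # server-side rendering of a <select> element from a Python list
    options = ''.join('<option value="{0}">{0}</option>'.format(v) for v in values)
    return '<select name="{0}">{1}</select>'.format(name, options)

html = make_select('nodes', ['master', 'worker1'])
```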
Plot decision boundaries of classifier ValueError: X has 2 features per sample; expecting 908430" Based on the scikit-learn document <a href="http://scikit-learn org/stable/auto_examples/svm/plot_iris html#sphx-glr-auto-examples-svm-plot-iris-py" rel="nofollow">http://scikit-learn org/stable/auto_examples/svm/plot_iris html#sphx-glr-auto-examples-svm-plot-iris-py</a> I try to plot a decision boundaries of the classifier but it sends a error message call "ValueError: X has 2 features per sample; expecting 908430" for this code "Z = clf predict(np c_[xx ravel() yy ravel()])" ````clf = SGDClassifier() fit(step2 index) X=step2 y=index h = 02 colors = "bry" x_min x_max = X[: 0] min() - 1 X[: 0] max() 1 y_min y_max = X[: 1] min() - 1 X[: 1] max() 1 xx yy = np meshgrid(np arange(x_min x_max h) np arange(y_min y_max h)) Z = clf predict(np c_[xx ravel() yy ravel()]) # Put the result into a color plot Z = Z reshape(xx shape) plt contourf(xx yy Z cmap=plt cm Paired) plt axis('off') # Plot also the training points plt scatter(X[: 0] X[: 1] c=y cmap=plt cm Paired) ```` the 'index' is a label which contain around [98579 X 1] label for the comment which include positive natural and negative ````array(['N' 'N' 'P' 'NEU' 'P' 'N'] dtype=object) ```` the 'step2' is the [98579 X 908430] numpy matrix which formed by the Countvectorizer function which is about the comment data ````<98579x908430 sparse matrix of type '<type 'numpy float64'&gt;' with 3168845 stored elements in Compressed Sparse Row format&gt; ````
The thing is, you <strong>cannot</strong> plot the decision boundary of a classifier for data which is not <strong>2-dimensional</strong>. Your data is clearly high-dimensional: it has 908430 dimensions (an NLP task, I assume). There is no way to plot the actual decision boundary for such a model. The example you are using is trained on <strong>2D data</strong> (reduced Iris), and that is <strong>the only reason</strong> why they were able to plot it.
JavaScript double backslash in WebSocket messages I am sending binary data over WebSocket to a Python application This binary data is decoded by calling `struct unpack("BH" data")` on it requiring a 4-long bytes object The problem I am currently facing is that all data contains duplicate backslashes even in `arraybuffer` mode and is therefor 16 bytes long I cannot detect the varying size because there is data appended to the back of it (irrelevant for this question) and even if I could I also could not find a way to strip the backslashes in Python How the data is sent: <pre class="lang-js prettyprint-override">`// this webSocket binaryType = "arraybuffer"; var message = "\\x05\\x00\\x00\\x00"; // escaping required var buffer = new Int8Array(message length); for (var i = 0; i < message length; i++) { buffer[i] = message charCodeAt(i); } this webSocket send(buffer buffer); ```` In comparison this is how said data looks when defined in Python: <pre class="lang-py prettyprint-override">`b'\x05\x00\x00\x00' ```` And this is how it looks as a received message: ````b'\\x05\\x00\\x00\\x00' ```` The ideal solution for this issue would be on the JavaScript-side because I cannot really change the implementation of the Python application without breaking Python-based clients
You should define the message as bytes and not as a string:

````var buffer = new Int8Array([5, 0, 0, 0]);
this.webSocket.send(buffer.buffer);
````
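On the Python side, `struct.unpack("BH", data)` expects exactly 4 bytes because native alignment inserts one pad byte between the `B` and the `H`; a quick check of what the 4 raw bytes decode to:

```python
import struct

# what the JavaScript client should send: 4 raw bytes, not an escaped string
data = b'\x05\x00\x00\x00'
size = struct.calcsize('BH')     # 4: B + one pad byte + H under native alignment
fields = struct.unpack('BH', data)
```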
Python regex : trimming special characters Is it possible to remove special characters using regex? I am attempting to trim: ````\n\t\t\t\t\t\t\t\t\t\tButte County High School\t\t\t\t\t\t\t\t\t ```` down to: ````Butte County High School ```` using ````regexform = re sub("[A-Z]+[a-z]+\s*" '' schoolstring) print regexform ````
You do not need regex for this simple task. Use <a href="https://docs.python.org/2/library/string.html#string.lstrip" rel="nofollow">`str.strip()`</a> instead. For example:

````&gt;&gt;&gt; my_string = '\t\t\t\t\t\t\t\t\t\tButte County High School\t\t\t\t\t\t\t\t\t'
&gt;&gt;&gt; my_string.strip()
'Butte County High School'
````

In case you must use `regex`, your expression should be:

````&gt;&gt;&gt; re.sub(r'[^A-Za-z0-9]\s+', '', my_string)
'Butte County High School'
````

It removes a non-alphanumeric character followed by whitespace, which strips the runs of tabs at either end but leaves the single spaces between words alone.
Convert multiple columns to one column I am looking to merge multiple columns to one column Here is my current dataset : ````Column A Column B Column C a1 b1 c1 b2 a2 e2 ```` I am looking for the following as output ````Column D a1 b1 c1 b2 a2 e2 ```` Is this possible ? Using R or Python ?
With the data in the format you provided, you could do this with:

````data.frame(ColumnD=c(t(df)))
  ColumnD
1      a1
2      b1
3      c1
4      b2
5      a2
6      e2
````

We transpose the data, then combine it by row.
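The question also allows Python; the same result with pandas, assuming the three columns are already in the desired left-to-right order:

```python
import pandas as pd

df = pd.DataFrame({'A': ['a1', 'b2'], 'B': ['b1', 'a2'], 'C': ['c1', 'e2']},
                  columns=['A', 'B', 'C'])
# ravel() reads the values row by row, mirroring the R t() trick
out = pd.DataFrame({'ColumnD': df.values.ravel()})
```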
Easier way to check if a string contains only one type of letter in python I have a string `'829383&amp;&amp;*&amp;@<<<<&gt;&gt;<&gt;GG'` I want a way to measure if a string has only one type of letter For example the string above would return True because it only has two Gs but this string `'829383&amp;&amp;*&amp;@<<<<&gt;&gt;<&gt;GGAa'` would not I have been iteratively going through the string having made it into an array I was hoping someone knew an easier way
Use `filter` with the `str.isalpha` function to keep only the letters, then create a set from them; its final length must be one or your condition is not met:

````v = "829383&amp;&amp;*&amp;@<<<<&gt;&gt;<&gt;GG"
print(len(set(filter(str.isalpha, v))) == 1)
````
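Wrapped up as a function and run against the two strings from the question (whether 'G' and 'g' should count as one type is left open there; lower-case the string first if they should):

```python
def one_letter_type(s):
    # True when the string's alphabetic characters are all the same character
    letters = set(filter(str.isalpha, s))
    return len(letters) == 1

a = one_letter_type('829383&&*&@<<<<>><>GG')    # only 'G'
b = one_letter_type('829383&&*&@<<<<>><>GGAa')  # 'G', 'A' and 'a'
```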
How to create a web crawler to get multiple pages from agoda with python3 I am new to here Recently I want to get data from Agoda and I got a problem that agoda com do not provide the url(or href) of "next page" So I have no idea to change page Now I only get the data from page 1 but I need the data from page2 page3 Is anyone help me I need some advise tools or others By the way I use python3 and win10 Please help me and thank you Below is my code presently ````import requests import pandas as pd import csv from bs4 import BeautifulSoup from pandas import Series DataFrame import unicodecsv def name1(): url="https://www agoda com/zh-tw/pages/agoda/default/DestinationSearchResult aspx?asq=%2bZePx52sg5H8gZw3pGCybdmU7lFjoXS%2baxz%2bUoF4%2bbAw3oLIKgWQqUpZ91GacaGdIGlJ%2bfxiotUg7cHef4W8WIrREFyK%2bHWl%2ftRKlV7J5kUcPb7NK6DnLacMaVs1qlGagsx8liTdosF5by%2fmvF3ZvJvZqOWnEqFCm0staf3OvDRiEYy%2bVBJyLXucnzzqZp%2fcBP3%2bKCFNOTA%2br9ARInL665pxj%2fA%2bylTfAGs1qJCjm9nxgYafyEWBFMPjt2sg351B&amp;city=18343&amp;cid=1732641&amp;tag=41460a09-3e65-d173-1233-629e2428d88e&amp;gclid=Cj0KEQjwvve_BRDmg9Kt9ufO15EBEiQAKoc6qlyYthgdt9CgZ7a6g6yijP42n6DsCUSZXvtfEJdYqiAaAvdW8P8HAQ&amp;tick=636119092231&amp;isdym=true&amp;searchterm=%E5%A2%BE%E4%B8%81&amp;pagetypeid=1&amp;origin=TW&amp;cid=1732641&amp;htmlLanguage=zh-tw&amp;checkIn=2016-10-20&amp;checkOut=2016-10-21&amp;los=1&amp;rooms=1&amp;adults=2&amp;children=0&amp;isFromSearchBox=true&amp;ckuid=1b070b17-86c2-4376-a4f5-d3b98fc9cf45" source_code=requests get(url) plain_text=source_code text soup=BeautifulSoup(plain_text "lxml") hotelname=soup find_all("h3" {"class":"hotel-name"}) f = csv writer(open("test csv" "w" newline='')) f writerow(["hotelname" "address"]) p = [] for N in hotelname: a=N string strip() f writerow([a]) ````
Examine closely in the browser development tools what happens when you click the next button. It has a click event that sends an XHR POST request with a lot of parameters. One of the parameters is `PageNumber`. Most values for the parameters are straightforward to get, except maybe `SearchMessageID`, which you have to find somewhere on the page or which is generated by javascript.
Yes or No answer from user with Validation and restart option? (py) At the moment the code below does not validate/output error messages when the user inputs something other than the two choices "y" and "n" because it is in a while loop ````again2=input("Would you like to calculate another GTIN-8 code? Type 'y' for Yes and 'n' for No ") lower() #** while again2 == "y": print("\nOK! Thanks for using this GTIN-8 calculator!\n\n") restart2() break #Break ends the while loop restart2() ```` I am struggling to think of ways that will allow me to respond with an output when they input neither of the choices given For example: ````if again2 != "y" or "n" print("Not a valid choice try again") #Here would be a statement that sends the program back to the line labelled with a ** ```` So when the user's input is not equal to "y" or "n" the program would return to the initial statement and ask the user to input again Any ideas that still supports an efficient code with as little lines as possible? Thanks!
````def get_choice(prompt="Enter y/n?", choices=["Y", "y", "n", "N"], error="Invalid choice"):
    while True:
        result = input(prompt)
        if result in choices:
            return result
        print(error)
````

is probably a nice generic way to approach this problem:

````result = get_choice("Enter A, B or C: ", choices=list("ABCabc"), error="That is not A or B or C")
````

You could of course make it case-insensitive, or add other types of criteria (e.g. must be an integer between 26 and 88).
Sending Function as Argument to Another Function I have came across this logic: ````def f2(f): def g(arg): return 2 * f(arg) return g def f1(arg): return arg 1 f_2 = f2(f1) print f_2(3) ```` From a first glance it may seem it is a very simple code But it takes some time to figure out what is going on here Sending a function as an argument to another function is unusual to me While it works I wonder if a technique like this should be avoided (since it does appear quite confusing at first)
The passing of functions to other functions is a common idiom in so-called functional programming languages like LISP, Scheme, Haskell, etc. Python is sometimes referred to as a "multi-paradigm language" because it has some features of functional languages (as well as of imperative/structured and object-oriented languages). So while it is considered an advanced technique, it is hardly uncommon to see it in Python.

Python even has a language keyword (`lambda`) to let you define short anonymous functions "in line" when calling a function, so you do not have to give them a name and define them elsewhere. It also has built-in functions like `map`, `filter` and `reduce`, which are explicitly designed to work with functions passed in to them; these are borrowed from the aforementioned functional languages. And a commonly-used language feature, decorators, is basically a function that takes a function and returns a function.
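A tiny illustration of the same idiom with the built-ins mentioned above (`twice` mirrors the `f2` from the question):

```python
def twice(f):
    # returns a new function that applies f and doubles the result
    def g(arg):
        return 2 * f(arg)
    return g

inc = lambda n: n + 1
double_inc = twice(inc)          # same shape as f_2 = f2(f1) in the question

doubled = list(map(double_inc, [1, 2, 3]))
evens = list(filter(lambda n: n % 2 == 0, [1, 2, 3, 4]))
```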
Django ImportError No module named x settings I have the following structure: ````mysite > manage py > mysite (again) > __init__ py > wsgi py > settings py etc > myapp > __init__ py > myscript py > models py etc ```` When I run a script from myapp (that does myapp-related things putting stuff in a the database for example) i need to do ````import django django setup() ```` in order to be able to `from models import MyModel` But if i do this in the myapp directory I get: ````Traceback (most recent call last): File "<stdin&gt;" line 1 in <module&gt; File "C:\Python27\lib\site-packages\django\__init__ py" line 22 in setup configure_logging(settings LOGGING_CONFIG settings LOGGING) File "C:\Python27\lib\site-packages\django\conf\__init__ py" line 53 in __getattr__ self _setup(name) File "C:\Python27\lib\site-packages\django\conf\__init__ py" line 41 in _setup self _wrapped = Settings(settings_module) File "C:\Python27\lib\site-packages\django\conf\__init__ py" line 97 in __init__ mod = importlib import_module(self SETTINGS_MODULE) File "C:\Python27\lib\importlib\__init__ py" line 37 in import_module __import__(name) ImportError: No module named mysite settings ```` Which I kind of understand since it is further up the directory tree and not in the same directory as e g `manage py` (the root I guess?) When I issue a python interpreter in mysite where `manage py` is located I do not get this error What should I do to be able to place my scripts in myapp and still be able to use `django setup()` from that dir?
You need to make sure the root of your project is on the Python path when you run the script. Something like this might help:

```
import os
import sys

projpath = os.path.dirname(__file__)
sys.path.append(os.path.join(projpath, '..'))
```
Design: Google OAuth using AngularJS and Flask I am building a web application using AngularJS on the frontend and Python with Flask on the server side I am trying to implement the OpenID/OAuth login feature following the documentation available on <a href="https://developers google com/identity/protocols/OpenIDConnect" rel="nofollow">google developers site</a> I began by building the server side code first(where everything worked fine when tested using Postman) and started to work on frontend Now I have a feeling that the design I followed for adding this feature is really bad Below is the problem I am facing: Following the Google OAuth documentation I have setup my server side flow somewhat like this: <strong>gAuth</strong>: It is a function in my Flask app that will be invoked by a GET request from Angular when the user clicks on the Google signIn button It sends a GET request to the Google authorization endpoint to fetch the authorization code The response to this request has got a text field which has an HTML code that shows the google login page I am returning this to Angular(because that is where this function has been invoked from) Then Angular renders this on the browser for the user to enter credentials Once the user enters credentials and authorizes my application Google sends the auth code to another function in my app <strong>requestToken</strong> using a GET request because that is the redirect_URI I am using for the Google OAuth At this point I am completely cut from Angular Now requestToken sends a POST request to token endpoint of Google using the auth code in order to exchange it for an id_token Now requestToken has the id_token and does what it is supposed to do with it I am returning a value of '1' from the requestToken function but that goes to the Google code because that is where the request to requestToken came from Now the problem is how can I let the frontend know that everything went fine with OAuth and user must be redirected to the 
homepage I want the return value of requestToken to go to Angular I want to ask if that is possible but my common sense says that it is not possible But I am wondering if there is a way by which I can send a request from Flask to Angular Any help is much appreciated
I gave up with manual implementation There is a ready to use lib: <a href="https://github com/sahat/satellizer" rel="nofollow">Satellizer</a> as well as server implementation (for Python example see the docs)
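If you do want to keep the manual flow: since Google calls your redirect URI directly, the usual trick is to have that Flask handler finish the token exchange and then redirect the browser back into the Angular app with a flag the frontend can read. A minimal sketch (route and flag names here are made up for illustration):

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.route('/requestToken')
def request_token():
    code = request.args.get('code')  # auth code appended by Google to the redirect URI
    # ... exchange `code` for an id_token against Google's token endpoint here ...
    # then send the browser back into the Angular app; the frontend reads the
    # query flag and routes the user to the homepage
    return redirect('/#/login?oauth=ok')
```

Angular never sees the return value of `requestToken` directly - it only sees the URL the browser ends up on, which is why the success state has to travel in the redirect (or in a cookie / token) rather than as a function result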
How to do a substring using pandas or numpy I am trying to do a substring on data from column "ORG" I only need the 2nd and 3rd character So for 413 I only need 13 I have tried the following: ````Attempt 1: dr2['unit'] = dr2[['ORG']][1:2] Attempt 2: dr2['unit'] = dr2[['ORG'] str[1:2] Attempt 3: dr2['unit'] = dr2[['ORG'] str([1:2]) ```` My dataframe: ```` REGION ORG 90 4 413 91 4 413 92 4 413 93 5 503 94 5 503 95 5 503 96 5 503 97 5 504 98 5 504 99 1 117 100 1 117 101 1 117 102 1 117 103 1 117 104 1 117 105 1 117 106 3 3 107 3 3 108 3 3 109 3 3 ```` Expected output: ```` REGION ORG UNIT 90 4 413 13 91 4 413 13 92 4 413 13 93 5 503 03 94 5 503 03 95 5 503 03 96 5 503 03 97 5 504 04 98 5 504 04 99 1 117 17 100 1 117 17 101 1 117 17 102 1 117 17 103 1 117 17 104 1 117 17 105 1 117 17 106 3 3 03 107 3 3 03 108 3 3 03 109 3 3 03 ```` thanks for any and all help!
Your square brackets are not matching and you can easily slice with `[-2:]` then <them>apply</them> `str zfill` with a width of 2 to pad the items in the new series: ````&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; ld = [{'REGION': '4' 'ORG': '413'} {'REGION': '4' 'ORG': '414'} {'REGION': '4' 'ORG': '3'}] &gt;&gt;&gt; df = pd DataFrame(ld) &gt;&gt;&gt; df ORG REGION 0 413 4 1 414 4 2 3 4 &gt;&gt;&gt; df['UNIT'] = df['ORG'] str[-2:] apply(str zfill args=(2 )) &gt;&gt;&gt; df ORG REGION UNIT 0 413 4 13 1 414 4 14 2 3 4 03 ````
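A minor variation (not from the answer above): `zfill` is also available directly on the string accessor, so the Python-level `apply` can be dropped and the whole thing stays vectorized:

```python
import pandas as pd

# slice the last two characters, then pad on .str - no apply needed
df = pd.DataFrame({'REGION': ['4', '5', '3'], 'ORG': ['413', '503', '3']})
df['UNIT'] = df['ORG'].str[-2:].str.zfill(2)
print(df['UNIT'].tolist())  # ['13', '03', '03']
```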
Python: Finding palindromes in a list ````def num_sequence (num1 num2): #This function takes the lower and upper bound and builds a list array = [] for i in range(num2 1): if i &gt;= num1: array append(i) return array def inverted_sequence (array): #This function takes the previous and list inverts the numbers in every element for i in range(len(array)): if array[i] &gt; 10: array[i] = str(array[i]) #Converts the i element of the array to a string array[i] = array[i][::-1] #Inverts the the position of the numbers in every element array[i] = int(array[i]) #Converts the i element of the array to an integer return array def main (): #Main program lower = int(input("Type the lower bound: ")) upper = int(input("Type the upper bound: ")) sequence = num_sequence(lower upper) inv_sequence = sequence[:] inv_sequence = inverted_sequence(inv_sequence) print (sequence) print (inv_sequence) """While loop inside the for loop that checks if the number is a palindrome if to check if it is a palindrome return True else return False""" pal_count = 0 seq_sum = [] for i in range(len(sequence)): if sequence[i] != inv_sequence[i]: while sequence[i] != inv_sequence[i]: seq_sum append(sequence[i] inv_sequence[i]) sequence = seq_sum inv_sequence = inverted_sequence(inv_sequence) print (seq_sum) if sequence[i] == inv_sequence[i]: pal_count *= 1 print (pal_count) main() ```` I am trying to make a program that finds all the palindromes in a range of numbers and if they are not palindromes reverse the number and add it to the original until it becomes a palindrome In my code I created a list of numbers with two inputs and name the list sequence inv_sequence is the previous list but with numbers of each element reversed when i try to add sequence[i] and inv_sequence[i] the program throws an error saying that the list is out of range I am testing the program with the lower bound being 5 and the upper bound being 15
I assume that you mean that you want the final list to contain the lowest palindrome in the sequence formed by the sum of the previous number in the sequence and the result of reversing the previous number in the sequence If so here is some code (not bulletproof): ````#checks if a number is a palindrome by comparing the string written fowards #with the string written backwards def is_pal(x): if str(x)==str(x)[::-1]: return True return False #iterates a number by adding and reversing until it is a palindrom def iter_pal(x): while not is_pal(x): x+=int(str(x)[::-1]) #adds the number's reverse to itself return x #main program def main(): #gets input lower = int(input("Type the lower bound: ")) upper = int(input("Type the upper bound: ")) #uses range to make the sequence of numbers sequence = list(range(lower upper+1)) #iterates through the list performing iter_pal for i in range(upper-lower+1): sequence[i]=iter_pal(sequence[i]) #prints the final sequence print(sequence) main() ```` I am not sure what you will do with the <a href="https://en wikipedia org/wiki/Lychrel_number" rel="nofollow">Lychrel numbers</a>
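As a quick sanity check, here is the reverse-and-add logic restated compactly and run with the question's own bounds (5 to 15):

```python
def is_pal(x):
    # palindrome test: the digits read the same both ways
    return str(x) == str(x)[::-1]

def iter_pal(x):
    # keep adding the reversed number until a palindrome shows up
    while not is_pal(x):
        x += int(str(x)[::-1])
    return x

print([iter_pal(n) for n in range(5, 16)])
# [5, 6, 7, 8, 9, 11, 11, 33, 44, 55, 66]
```

Single-digit numbers are already palindromes; 10 becomes 11 in one step, and 12 through 15 each reach a palindrome after a single add as well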
Error when using sympy's solver on polynomials with complex coefficients (4th deg) Trying to solve a 4th degree polynomial equation with sympy I arrived at some difficulties My code and the equation i am trying to solve: ````import sympy as sym from sympy import I sym init_printing() k = sym Symbol('k') t sigma k0 L V = sym symbols('t sigma k0 L V') x4 = ( -t**2 2*I * t / sigma**2 1/sigma**4) x3 = ( -2*I * t * k0 / sigma**2 - 2*k0 / sigma**4) x2 = ( L**2 k0 **2 / sigma **4 t**2 * V - 2 * I * t * V / sigma**2 -V/sigma**4) x1 = (2*I * V * k0 / sigma**2 2*k0 * V / sigma **4) x0 = (2*I*k0*t*V / sigma**2 - k0 **2 *V / sigma**4) expr = x4 * k**4 x3 * k**3 x2 * k**2 x1 * k x0 expr2 = expr subs({k0 :2 sigma : 2 L : 1 V:1}) sym solvers solve(expr2 k) ```` Output: ````Traceback (most recent call last): File "<ipython-input-4-e1ce7d8c9531&gt;" line 1 in <module&gt; sols = sym solvers solve(expr2 k) File "/usr/local/lib/python2 7/dist-packages/sympy/solvers /solvers py" line 1125 in solve solution = nfloat(solution exponent=False) File "/usr/local/lib/python2 7/dist-packages/sympy/core/function py" line 2465 in nfloat return type(expr)([nfloat(a n exponent) for a in expr]) File "/usr/local/lib/python2 7/dist-packages/sympy/core/function py" line 2499 in nfloat lambda x: isinstance(x Function))) File "/usr/local/lib/python2 7/dist-packages/sympy/core/basic py" line 1087 in xreplace value _ = self _xreplace(rule) File "/usr/local/lib/python2 7/dist-packages/sympy/core/basic py" line 1095 in _xreplace return rule[self] True File "/usr/local/lib/python2 7/dist-packages/sympy/core/rules py" line 59 in __getitem__ return self _transform(key) File "/usr/local/lib/python2 7/dist-packages/sympy/core/function py" line 2498 in <lambda&gt; lambda x: x func(*nfloat(x args n exponent)) File "/usr/local/lib/python2 7/dist-packages/sympy/core/function py" line 2465 in nfloat return type(expr)([nfloat(a n exponent) for a in expr]) File "/usr/local/lib/python2 
7/dist-packages/sympy/core/function py" line 2465 in nfloat return type(expr)([nfloat(a n exponent) for a in expr]) TypeError: __new__() takes exactly 3 arguments (2 given) ```` And I really cannot make anything out of it I am not so sure what is causing this I "tested" this solver for more compact polynomials and it worked well
Looks like you can work around the issue by using `solve(expr2 k rational=False)`
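For illustration, here is the same workaround on a smaller made-up polynomial with floating-point complex coefficients (the combination that seems to trip the `nfloat` step) - the returned roots still satisfy the equation numerically:

```python
import sympy as sym
from sympy import I

k = sym.Symbol('k')
# made-up quadratic, not the question's polynomial: float + complex coefficients
expr = (1.0 + 2*I) * k**2 + (0.5 - I) * k - 3.0
sols = sym.solve(expr, k, rational=False)
# every returned root should make the polynomial (numerically) zero
for s in sols:
    assert abs(complex(expr.subs(k, s).evalf())) < 1e-8
```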
django-pipeline throwing ValueError: the file could not be found When running `python manage py collectstatic --noinput` I am getting the following error: ````Post-processing 'jquery-ui-dist/jquery-ui css' failed! Traceback (most recent call last): File "manage_local py" line 10 in <module&gt; execute_from_command_line(sys argv) File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/core/management/__init__ py" line 367 in execute_from_command_line utility execute() File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/core/management/__init__ py" line 359 in execute self fetch_command(subcommand) run_from_argv(self argv) File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/core/management/base py" line 294 in run_from_argv self execute(*args **cmd_options) File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/core/management/base py" line 345 in execute output = self handle(*args **options) File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/contrib/staticfiles/management/commands/collectstatic py" line 193 in handle collected = self collect() File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/contrib/staticfiles/management/commands/collectstatic py" line 145 in collect raise processed File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/contrib/staticfiles/storage py" line 257 in post_process content = pattern sub(converter content) File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/contrib/staticfiles/storage py" line 187 in converter hashed_url = self url(unquote(target_name) force=True) File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/contrib/staticfiles/storage py" line 132 in url hashed_name 
= self stored_name(clean_name) File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/contrib/staticfiles/storage py" line 292 in stored_name cache_name = self clean_name(self hashed_name(name)) File "/Users/michaelbates/GoogleDrive/Development/inl/venv/lib/python3 4/site-packages/django/contrib/staticfiles/storage py" line 95 in hashed_name (clean_name self)) ValueError: The file 'jquery-ui-dist/"images/ui-icons_555555_256x240 png"' could not be found with <pipeline storage PipelineCachedStorage object at 0x1073e2c50&gt; ```` If I run `python manage py findstatic jquery-ui-dist/"images/ui-icons_555555_256x240 png"` I get: ````Found 'jquery-ui-dist/images/ui-icons_555555_256x240 png' here: /Users/michaelbates/GoogleDrive/Development/inl/node_modules/jquery-ui-dist/images/ui-icons_555555_256x240 png /Users/michaelbates/GoogleDrive/Development/inl/staticfiles/jquery-ui-dist/images/ui-icons_555555_256x240 png ```` Here are some relevant settings: ````STATICFILES_FINDERS = ( 'django contrib staticfiles finders FileSystemFinder' 'pipeline finders AppDirectoriesFinder' 'pipeline finders PipelineFinder' ) STATICFILES_STORAGE = 'pipeline storage PipelineCachedStorage' STATIC_ROOT = os path join(BASE_DIR 'staticfiles') STATICFILES_DIRS = ( os path join(BASE_DIR 'static') os path join(BASE_DIR 'node_modules') ) ```` My `PIPELINE` settings dict is huge so I will not post the entire thing but some parts of it are: ````PIPELINE = { 'STYLESHEETS': { 'pricing': { 'source_filenames': ( 'jquery-ui-dist/jquery-ui min css' ) 'output_filename': 'css/pricing min css' } } 'JS_COMPRESSOR': 'pipeline compressors yuglify YuglifyCompressor' 'CSS_COMPRESSOR': 'pipeline compressors yuglify YuglifyCompressor' 'COMPILERS': ( 'pipeline compilers sass SASSCompiler' ) } ```` I have tried changing the STATICFILES_FINDERS to the django-pipeline specific ones but it makes no difference Can anyone shed some light on why that png file cannot be found during collectstatic but can with findstatic?
Your problem is related to <a href="https://code djangoproject com/ticket/21080" rel="nofollow">this bug</a> on the Django project In short django-pipeline is post-processing the `url()` calls with Django's `CachedStaticFilesStorage` to append the md5 checksum to the filename (<a href="https://docs djangoproject com/en/1 10/ref/contrib/staticfiles/#manifeststaticfilesstorage" rel="nofollow">more details here</a>) and does not detect when it is inside a comment If you look at the header of the `jquery-ui css` (and similar) files there is a comment that starts with <blockquote> - To view and modify this theme visit [ ] </blockquote> Inside the URL on this line there is a parameter that is being interpreted as an `url()` call and generating the error you see To work around this issue you can simply remove the above line from `jquery-ui css` and `collectstatic` should work properly
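One quick way to strip that header line is with `sed` - demonstrated here on a tiny stand-in file (the URL in it is made up) since the exact jquery-ui header varies by version; in a real project you would run the `sed` line against `node_modules/jquery-ui-dist/jquery-ui css` instead:

```shell
# stand-in css file containing the offending comment (URL is invented)
cat > /tmp/demo.css <<'EOF'
/*! jQuery UI
 * To view and modify this theme, visit http://example.com/themeroller/?foo=url("x.png")
 */
.ui-widget { font-family: Arial; }
EOF
# drop the comment line that CachedStaticFilesStorage chokes on
sed -i '/To view and modify this theme/d' /tmp/demo.css
grep -c 'ui-widget' /tmp/demo.css   # the actual css rules are untouched
```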
How can I extend the unit selection logic in my RTS game to apply to multiple units? Currently if you left-click on the unit it becomes 'selected' (or 'de-selected') and a green square is drawn around it Then when you right-click somewhere on the screen the unit moves neatly into the square in the location that you clicked Also if you use the up down left or right keys it will scroll the screen ````import pygame import random pygame init() #Define mouse position mouse_position_x = 525 mouse_position_y = 315 # Define colors green = (0 255 0) brown = (150 75 0) #Define border position border_x = 0 border_y = 0 #Define character selection box def character_selection_box(): pygame draw line(screen green (character_location_x character_location_y) (character_location_x+character_width character_location_y) 2) # Top bar pygame draw line(screen green (character_location_x character_location_y+character_height) (character_location_x+character_width character_location_y+character_height) 2) # Bottom bar pygame draw line(screen green (character_location_x character_location_y) (character_location_x character_location_y+character_height) 2) # Left bar pygame draw line(screen green (character_location_x+character_width character_location_y) (character_location_x+character_width character_location_y+character_height+1) 2) # Right bar #Define round def assign_square(n): div = (n/35) rou = round(div) mul = (35*rou) return int(mul) #Set window screen_width = 981 screen_height = 700 game_screen_width = 800 game_screen_height = 700 screen_size = (screen_width screen_height) screen = pygame display set_mode(screen_size) pygame display set_caption("Warpath") #Set block character character_width = 35 character_height = 35 character_location_x = 525 character_location_y = 315 movement = 1 unit_selected = 0 #Load images orc_grunt_forward = pygame image load('orc_forward3 png') #(35 by 35 pixel image) character_image = orc_grunt_forward #Loop until the user clicks the close button 
shutdown = False #Set clock clock = pygame time Clock() #Set scroll limit scroll_x = 0 scroll_y = 0 # ---------- Main program loop ----------- while not shutdown: # --- Main event loop --- for event in pygame event get(): # --- If quit button pressed shutdown if event type == pygame QUIT: shutdown = True # --- If mouse button pressed elif event type == pygame MOUSEBUTTONDOWN: # If a mouse button is pressed mouse_position = pygame mouse get_pos() # Get mouse position button_type = pygame mouse get_pressed() # Check which button was pressed # --- If left click pressed and the curser was on a character select that character if button_type[0] == 1 and mouse_position[0] &gt;= character_location_x and mouse_position[0] <= character_location_x character_width and mouse_position[1] &gt;= character_location_y and mouse_position[1] <= character_location_y character_height: print("Unit selected" unit_selected) print(button_type) unit_selected = 1 unit_selected /= unit_selected #(Otherwise it will add up unit selected if you click more than once) int(unit_selected) # --- If right click pressed and a character was selected (and it is within the game screen) move the character to the location elif button_type[2] == 1 and unit_selected == 1 and mouse_position[0] &gt; 175: mouse_position_x *= 0 mouse_position_y *= 0 if mouse_position[0] &gt;= assign_square(mouse_position[0]): mouse_position_x = assign_square(mouse_position[0]) elif mouse_position[0] <= assign_square(mouse_position[0]): mouse_position_x -= 35 mouse_position_x = assign_square(mouse_position[0]) if mouse_position[1] &gt;= assign_square(mouse_position[1]): mouse_position_y = assign_square(mouse_position[1]) elif mouse_position[1] <= assign_square(mouse_position[1]): mouse_position_y -= 35 mouse_position_y = assign_square(mouse_position[1]) # --- If left click pressed and the curser was not on a character deselect the character elif button_type[0] == 1 and mouse_position[0] < character_location_x or mouse_position[0] 
&gt; character_location_x character_width or mouse_position[1] < character_location_y or mouse_position[1] &gt; character_location_y character_height: print("Unit not selected") print(button_type) unit_selected *= 0 int(unit_selected) # --- If key pressed scroll the screen elif event type == pygame KEYDOWN: if event key == pygame K_RIGHT and scroll_x &gt; -10: direction = "right" character_location_x -= 35 mouse_position_x -= 35 border_x -= 35 scroll_x -= 1 if event key == pygame K_LEFT and scroll_x < 10: direction = "left" character_location_x = 35 mouse_position_x = 35 border_x = 35 scroll_x = 1 if event key == pygame K_UP and scroll_y < 10: direction = "up" character_location_y = 35 mouse_position_y = 35 border_y = 35 scroll_y = 1 if event key == pygame K_DOWN and scroll_y &gt; -10: direction = "down" character_location_y -= 35 mouse_position_y -= 35 border_y -= 35 scroll_y -= 1 # --- Game logic --- # --- Set character movement if character_location_x < mouse_position_x: character_location_x = movement if character_location_x &gt; mouse_position_x: character_location_x -= movement if character_location_y < mouse_position_y: character_location_y = movement if character_location_y &gt; mouse_position_y: character_location_y -= movement # --- Drawing --- screen fill(brown) # Draw background screen blit(character_image (character_location_x character_location_y)) # Draw character if unit_selected == 1: character_selection_box() # Draw character selection box if unit is selected clock tick(30) pygame display flip() #Shutdown if shutdown == True: pygame quit() ```` The problem is that I cannot figure out how to extend this to multiple units - currently if I want to add more units I can only either manage to: a) Move them all at once or b) Paste the same code multiple times adjusting the character variable (not a robust / scalable solution) How can I adjust my code so that I have a scalable solution where: 1) I can select a single unit and move it without moving every 
unit at once 2) I can select multiple units by clicking on each one individually and move them all at once (not worrying about pathfinding right now) I also tried using classes to achieve this but it still felt like I was copying / pasting multiple functions rather than having a robust solution I have removed any code that does not concern the issue while keeping a functioning program Thanks
There are a few things to do: - Change the `character_*` variables to an object that holds all data about a unit - Create an array of units / characters That way each unit in the array can have a unique position velocity etc - Everywhere in the code where you check `character_*` change to for loops where you iterate over the characters array to check every unit - The next step should be adding functions like move / shoot to the character class to make the keypress events work for multiple units That should give you code where you can select multiple units (if they occupy the same spot) and move them independently of deselected units
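A minimal pygame-free sketch of those steps (all names here are invented): each unit owns its position, target and selection flag, and the click handlers just loop over a list, so adding a unit is one more list entry rather than another copy of the code:

```python
class Unit:
    def __init__(self, x, y, size=35):
        self.x, self.y, self.size = x, y, size
        self.target = (x, y)      # where this unit is walking to
        self.selected = False

    def contains(self, px, py):
        # is the click inside this unit's square?
        return self.x <= px <= self.x + self.size and self.y <= py <= self.y + self.size

    def step(self, speed=1):
        # move one step toward this unit's own target
        tx, ty = self.target
        if self.x < tx: self.x += speed
        elif self.x > tx: self.x -= speed
        if self.y < ty: self.y += speed
        elif self.y > ty: self.y -= speed

units = [Unit(525, 315), Unit(100, 100)]

def on_left_click(px, py):
    for u in units:
        if u.contains(px, py):
            u.selected = not u.selected   # click again to deselect

def on_right_click(px, py):
    for u in units:
        if u.selected:                    # only selected units get the move order
            u.target = (px, py)
```

The drawing section then loops the same way - blit every unit and draw the green selection box only when `u.selected` is set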
pandas groupby transform behaving differently with seemingly equivalent representations consider the `df` ````df = pd DataFrame(dict(A=['a' 'a'] B=[0 1])) ```` I expected the following two formulations to be equivalent <strong><them>formulation 1</them></strong> ````df groupby('A') transform(np mean) ```` <a href="https://i stack imgur com/euRkW png" rel="nofollow"><img src="https://i stack imgur com/euRkW png" alt="enter image description here"></a> <strong><them>formulation 2</them></strong> ````df groupby('A') transform(lambda x: np mean(x)) ```` <a href="https://i stack imgur com/18pe5 png" rel="nofollow"><img src="https://i stack imgur com/18pe5 png" alt="enter image description here"></a> I would consider the results from <strong><them>formulation 2</them></strong> incorrect But before I go crying <them>bug</them> maybe someone has a rational explanation for it
It looks like a bug to me: ````In [19]: df groupby('A') transform(lambda x: x sum()) Out[19]: B 0 1 1 1 In [20]: df groupby('A') transform(lambda x: len(x)) Out[20]: B 0 2 1 2 In [21]: df groupby('A') transform(lambda x: x sum()/len(x)) Out[21]: B 0 0 1 0 ```` PS Pandas version: 0 19 0
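It was indeed fixed later - on a recent pandas the two formulations agree because `transform` no longer casts the result back to the integer dtype of `B`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(dict(A=['a', 'a'], B=[0, 1]))
r1 = df.groupby('A').transform(np.mean)
r2 = df.groupby('A').transform(lambda x: np.mean(x))
print(r1['B'].tolist(), r2['B'].tolist())  # both [0.5, 0.5] on recent versions
```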
Getting combinations back from memoized subset-sum algorithm? I have been working on a pretty basic subset sum problem Given a sum (say s=6) and the numbers ((1:s) so [1 2 3 4 5]) I had to find the total number of combinations that totalled s (so: [1 5] [2 4] [1 2 3]) It was quite easy to satisify the requirements of the problem by doing a brute-force approach For my own learning I have been trying to understand how I can instead implement a memoizable algorithm that would allow me to calculate combinations for very large values of n (say 500) Obviously the brute-force approach becomes untenable quickly for values over 70 or so Digging around on the internet quite a bit I already found an algorithm <a href="http://www markandclick com/advance html#SubsetSum" rel="nofollow">here</a> that works quite well This is the code I am currently working with: ````def get_combos_wrapped(): cache = {} def get_combos(numbers idx total depth=0 which=''): dstr = '\t' * depth print("%scalled: idx=%s total=%s %s" % (dstr idx total which)) if (idx total) in cache: # print("cache hit: %s %s" % (idx total)) to_return = cache[(idx total)] del(cache[(idx total)]) return to_return depth = 1 if idx &gt;= len(numbers): to_return = 1 if total == 0 else 0 print("%sreturning %s" % (dstr to_return)) return to_return the_sum = get_combos(numbers idx 1 total depth) \ get_combos(numbers idx 1 total - numbers[idx] depth) print("%sreturning %s" % (dstr the_sum)) cache[(idx total)] = the_sum return the_sum return get_combos ```` Here is my problem The algorithm is still completely mysterious to me -- I just know that it returns the right totals I am wondering if anyone can indulge my ignorance and offer insight into understanding 1) how this works and 2) whether I would be able to use this algorithm to actually get back the unique number combinations given a value of s The following is some output I hacked together to help me visualize the program flow for s=6 though unfortunately it has not really 
helped Thank you very much for your help ````called: idx=0 total=6 called: idx=1 total=6 called: idx=2 total=6 called: idx=3 total=6 called: idx=4 total=6 called: idx=5 total=6 returning 0 called: idx=5 total=1 returning 0 returning 0 called: idx=4 total=2 called: idx=5 total=2 returning 0 called: idx=5 total=-3 returning 0 returning 0 returning 0 called: idx=3 total=3 called: idx=4 total=3 called: idx=5 total=3 returning 0 called: idx=5 total=-2 returning 0 returning 0 called: idx=4 total=-1 called: idx=5 total=-1 returning 0 called: idx=5 total=-6 returning 0 returning 0 returning 0 returning 0 called: idx=2 total=4 called: idx=3 total=4 called: idx=4 total=4 called: idx=5 total=4 returning 0 called: idx=5 total=-1 returning 0 returning 0 called: idx=4 total=0 called: idx=5 total=0 returning 1 called: idx=5 total=-5 returning 0 returning 1 returning 1 called: idx=3 total=1 called: idx=4 total=1 called: idx=5 total=1 returning 0 called: idx=5 total=-4 returning 0 returning 0 called: idx=4 total=-3 called: idx=5 total=-3 returning 0 called: idx=5 total=-8 returning 0 returning 0 returning 0 returning 1 returning 1 called: idx=1 total=5 called: idx=2 total=5 called: idx=3 total=5 called: idx=4 total=5 called: idx=5 total=5 returning 0 called: idx=5 total=0 returning 1 returning 1 called: idx=4 total=1 returning 1 called: idx=3 total=2 called: idx=4 total=2 called: idx=4 total=-2 called: idx=5 total=-2 returning 0 called: idx=5 total=-7 returning 0 returning 0 returning 0 returning 1 called: idx=2 total=3 called: idx=3 total=3 called: idx=3 total=0 called: idx=4 total=0 called: idx=4 total=-4 called: idx=5 total=-4 returning 0 called: idx=5 total=-9 returning 0 returning 0 returning 1 returning 1 returning 2 returning 3 3 ````
You may simplify your problem using <a href="https://docs python org/2/library/itertools html#itertools combinations" rel="nofollow">`itertools combinations()`</a> as: ````&gt;&gt;&gt; from itertools import combinations &gt;&gt;&gt; s = 6 &gt;&gt;&gt; my_list = range(1 s) # Value of 'my_list': # [1 2 3 4 5] &gt;&gt;&gt; my_combinations = [combinations(my_list i) for i in range(2 s)] # Value of 'my_combinations' (in actual will be containg <combinations&gt; objects): # [[(1 2) (1 3) (1 4) (1 5) (2 3) (2 4) (2 5) (3 4) (3 5) (4 5)] [(1 2 3) (1 2 4) (1 2 5) (1 3 4) (1 3 5) (1 4 5) (2 3 4) (2 3 5) (2 4 5) (3 4 5)] [(1 2 3 4) (1 2 3 5) (1 2 4 5) (1 3 4 5) (2 3 4 5)] [(1 2 3 4 5)]] &gt;&gt;&gt; my_required_set = [my_set for my_sublist in my_combinations for my_set in my_sublist if sum(my_set) == s] &gt;&gt;&gt; my_required_set [(1 5) (2 4) (1 2 3)] ````
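The same idea fits in a single comprehension if you just want the matching subsets (and their count via `len`); the helper name below is made up:

```python
from itertools import combinations

def subsets_summing_to(s):
    # all subsets of 1..s-1 (size 2 and up) whose elements total s
    nums = range(1, s)
    return [c for r in range(2, s) for c in combinations(nums, r) if sum(c) == s]

print(subsets_summing_to(6))  # [(1, 5), (2, 4), (1, 2, 3)]
```

Note this enumerates every subset so it is still exponential - fine for small `s`, but the memoized counting approach in the question is what scales toward `s` around 500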
Efficiently summing outer product for 1D NumPy arrays I have a function of the form <a href="https://i stack imgur com/MwLCs gif" rel="nofollow"><img src="https://i stack imgur com/MwLCs gif" alt="enter image description here"></a> One way to implement this function in numpy is to assemble a matrix to sum over: ````y = a*b - np sum(np outer(a*b b) axis=0) ```` Is there a better way to implement this function with numpy one that does not involve creating an NxN array?
You could use <a href="https://docs scipy org/doc/numpy/reference/generated/numpy einsum html" rel="nofollow">`np einsum`</a> - ````y = a*b - np einsum('i i j>j' a b b) ```` We can also perform `a*b` and feed to `einsum` - ````y = a*b - np einsum('i j>j' a*b b) ```` On the second approach we can save some runtime by storing `a*b` and reusing Runtime test - ````In [253]: a = np random rand(4000) In [254]: b = np random rand(4000) In [255]: %timeit np sum(np outer(a*b b) axis=0) 10 loops best of 3: 105 ms per loop In [256]: %timeit np einsum('i i j>j' a b b) 10 loops best of 3: 24 2 ms per loop In [257]: %timeit np einsum('i j>j' a*b b) 10 loops best of 3: 21 9 ms per loop ````
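For reference, the einsum subscripts written out are `'i,i,j->j'` (commas between operands, arrow before the output), and the two forms can be checked against each other on a small input:

```python
import numpy as np

a = np.random.rand(5)
b = np.random.rand(5)
# outer-product form: sum_i a_i*b_i*b_j, materializes an NxN intermediate
y_outer = a * b - np.sum(np.outer(a * b, b), axis=0)
# einsum form: same contraction without building the NxN array
y_einsum = a * b - np.einsum('i,i,j->j', a, b, b)
assert np.allclose(y_outer, y_einsum)
```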
Characters from listbox are still recorded even when deleted from listbox So my little game is programmed to have two characters fight each other One from the left side and one from the right side After they fight both should be deleted regardless of who wins or loses They are in fact deleted from listboxes but after you have two more charachters from each side fight those previous characters sometimes show up If you start with Zys and Rash fighting no other names are printed in the win and loss section besides theirs Only when you go backwards from Dant and Ilora does it work the way it should with each character making a place in either wins or loss only once If you start with some other characters they could be put in the wins and loss section more then once It is also possible for a character to be placed as a win or loss even if it has not been selected to fight The bottomline is each character gets to fight a character on the opposite side ONCE and after that it is placed and then deleted with no use in the later part of the program For some apparent reason it does not do that ````from tkinter import * from tkinter import ttk from tkinter import messagebox class Character: def __init__(self name attack defense health): self name = name self attack = attack self defense = defense self health = health self total = attack+defense+health #Left side characters Rash = Character("Rash" 42 50 80) Untss = Character("Untss" 15 54 100) Ilora = Character("Ilora" 60 35 80) #Both sides have totals of 165 168 and 175 #Right side characters Zys = Character("Zys" 12 97 83) Eentha = Character("Eentha" 55 17 90) Dant = Character("Dant" 73 28 88) def fight(): #Part of code that checks for wins and loss checks which has greater total stats and deletes from list box try: namel = "" namer="" left = lbox curselection()[0] right = rbox curselection()[0] totalleft = 0 totalright = 0 if left == 0: namel = "Rash" totalleft = Rash total elif left==1: namel = "Untss" totalleft = Untss 
total elif left==2: namel = "Ilora" totalleft = 60+35+80 if right == 0: namer = "Zys" totalright = Zys total elif right==1: namer = "Eentha" totalright = Eentha total elif right==2: namer = "Dant" totalright = Dant total lbox delete(lbox curselection()[0]) rbox delete(rbox curselection()[0]) print(namel) print(namer) if (totalleft&gt;totalright): #Checks if won or lost wins set(wins get()+"\n"+namel) loss set(loss get()+"\n"+namer) else: wins set(wins get()+"\n"+namer) loss set(loss get()+"\n"+namel) except IndexError: pass #The left listbox and its characters leftnames = ('Rash' 'Untss' 'Ilora') lnames = StringVar(value=leftnames) lbox = Listbox(mainframe listvariable=lnames exportselection=0 height=3) lbox grid(column=0 row=0) #Right listboxes characters rightnames = ('Zys' 'Eentha' 'Dant') rnames = StringVar(value=rightnames) rbox = Listbox(mainframe listvariable=rnames exportselection=0 height=3) rbox grid(column=1 row=0) #Shows users wins and lossses wins = StringVar() loss = StringVar() #Label that ttk Label(mainframe text="Wins" width=13) grid(column=2 row=0 sticky=N) ttk Label(mainframe text="Loss" width=13) grid(column=2 row=1 sticky=N) ttk Label(mainframe textvariable=wins) grid(column=2 row=0 sticky=(S E)) ttk Label(mainframe textvariable=loss) grid(column=2 row=1 sticky=(S E)) #Button for fighting fightbttn= ttk Button(mainframe text="Fight" command=fight) fightbttn grid(column=3 row=3 sticky=(E)) root mainloop() ```` This is only the part of the code that could relate to the problem not the code as a whole This is not the same question from yesterday just the same code I thought it would be more appropriate to work with the bugs one at a time as different problems so they could be more organized
The problem is that you always use `if left == 0: namel = "Rash"` even if `"Rash"` was already deleted from the listbox and now `left == 0` means `"Untss"` You have to get the selected name instead of the index ```` namel = lbox get(lbox curselection()[0]) namer = rbox get(rbox curselection()[0]) ```` and use it ```` if namel == "Rash": totalleft = Rash total ```` But you could use a dictionary to get the data ````left_characters = { "Rash": Character("Rash" 42 50 80) "Untss": Character("Untss" 15 54 100) "Ilora": Character("Ilora" 60 35 80) } right_characters = { "Zys": Character("Zys" 12 97 83) "Eentha": Character("Eentha" 55 17 90) "Dant": Character("Dant" 73 28 88) } leftnames = list(left_characters keys()) rightnames = list(right_characters keys()) ```` and then ````def fight(): try: namel = lbox get(lbox curselection()[0]) namer = rbox get(rbox curselection()[0]) print(namel) print(namer) totalleft = left_characters[namel] total totalright = right_characters[namer] total lbox delete(lbox curselection()[0]) rbox delete(rbox curselection()[0]) if totalleft &gt; totalright : #Checks if won or lost wins set(wins get()+"\n"+namel) loss set(loss get()+"\n"+namer) else: wins set(wins get()+"\n"+namer) loss set(loss get()+"\n"+namel) except IndexError as e: print("ERROR:" e) ```` If you add new characters to the dictionary then you do not have to change the code BY THE WAY: do not use `pass` in `except` because you will not see the error if there is something wrong with the code
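The index-goes-stale problem is easy to see without any tkinter at all - deleting an entry shifts every later entry down, so a saved index now names a different character:

```python
names = ['Rash', 'Untss', 'Ilora']
sel = 0                       # "Rash" is at index 0 and gets selected
del names[sel]                # Rash fights and is removed from the box
print(names[0])               # 'Untss' - index 0 now means someone else
```

which is why reading the name with `lbox get(...)` before deleting, and keying the character data by name in a dict, keeps the bookkeeping correct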
How to fix "TypeError: len() of unsized object" I am getting: <strong>TypeError: len() of unsized object</strong> after running the following script: ````from numpy import * v=array(input('Introduce un vector v: ')) you=array(input('Introduce un vector you: ')) nv= len(v) nu= len(you) diferenza= 0; i=0 if nv==nu: while i<nv: diferenza=diferenza ((v[i+1]-you[i+1]))**2 modulo= sqrt(diferenza) print('Distancia' v) else: print('Vectores de diferente dimensión') ```` How can I fix this?
Use the arrays' <a href="https://docs scipy org/doc/numpy/reference/generated/numpy ndarray size html" rel="nofollow">`size`</a> attribute instead: ````nv = v size nu = you size ```` <hr> You also probably want to use <a href="https://docs scipy org/doc/numpy/reference/generated/numpy fromstring html#numpy fromstring" rel="nofollow">`numpy fromstring`</a> to take and convert the input string into an array: ````&gt;&gt;&gt; v = np fromstring(input('enter the elements of the vector separated by comma: ') dtype=int sep=' ') enter the elements of the vector separated by comma: 1 2 3 &gt;&gt;&gt; v array([1 2 3]) &gt;&gt;&gt; len(v) 3 &gt;&gt;&gt; v size 3 ````
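The error itself comes from `np.array` wrapping the whole input string as a single element - that gives a 0-dimensional array, which has a `size` (1) but no length:

```python
import numpy as np

v = np.array('1 2 3')   # what np.array(input(...)) produces: a 0-d string array
assert v.ndim == 0 and v.size == 1
try:
    len(v)
except TypeError as e:
    print(e)            # len() of unsized object
```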
Why to use Lambda to send Function as Argument to Another Function While digging into `lambda` I defined this code below: ````def f2(number lambda_function): return lambda_function(number) def f1(number): return f2(number lambda x: x*2) number = 2 print f1(number) ```` While I do agree the code like this looks pretty cool I wonder why would not I just put it down using a more traditional approach like so: ````def f1(number): return number*2 number = 2 print f1(number) ```` I understand that some languages rely on the functional programming more than Python But in Python I end up with less lines and more readable friendly code if I just avoid `lambda` and the function programming tricks Can the code above be modified to illustrate a situation when a task could not be completed without using `lambda`? What is the purpose of `lambda` in Python? Can you show me the example of where `lambda` really "shines"? May be an example where the use of `lambda` simplified the code?
<a href="https://docs python org/3/tutorial/controlflow html#lambda-expressions" rel="nofollow">`lambda`</a> functions work like normal functions but are used in cases where they will be executed just once Then why write a whole body and define a named function? For example sorting a list of tuples: ````&gt;&gt;&gt; my_list = [(1 3) (4 8) (2 3) (1 6)] &gt;&gt;&gt; my_list sort(key=lambda x: (x[0] -x[1])) &gt;&gt;&gt; my_list [(1 6) (1 3) (2 3) (4 8)] ```` Here I am sorting `my_list` in increasing order of index `0` and then decreasing order of index `1` But `list sort()` accepts the value of `key` as a function Instead of defining the function and then passing that function over here it is cleaner to make a `lambda` call within it Since these can be written as an expression they are termed <a href="https://docs python org/3/tutorial/controlflow html#lambda-expressions" rel="nofollow">`Lambda Expression`</a> in the documentation instead of <them>Lambda Function</them> Also read: <a href="http://stackoverflow com/questions/890128/why-are-python-lambdas-useful">Why are Python lambdas useful?</a> In the example that you have given I do not think that using `lambda` there is of any use
Build a single list of element from bi-dimensional array list I am totally noob to python so please forgive my mistake and lack of vocabulary Long Story Short I have the following list of array : ````[url1 data1][url2 data2][url3 data3]etc ```` I want to build a simple list of element by only keeping the url So I am doing this : ```` if results: for row in results get('rows'): data append(row[:1]) print data ```` for this result: ````[[url1][url2][url3]etc ] ```` However I would like to have something like this : ````[[url1 url2 url3 etc ] ```` I have imported numpy and tried this but does not work : ````np array(data) tolist() ```` Any help ? thanks
When you iterate over `results get('rows')` each `row` is `[url1 data1]` so `row[:1]` gives the one-element list `[url1]` not the string `url1` - use `row[0]` instead or try `extend`
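A minimal runnable sketch (the `results` structure here is an assumption shaped like the question's data):

```python
# hypothetical sample shaped like the question's results
results = {'rows': [['url1', 'data1'], ['url2', 'data2'], ['url3', 'data3']]}

data = []
for row in results.get('rows'):
    data.append(row[0])   # row[0] is the url string; row[:1] would be a one-item list

print(data)  # ['url1', 'url2', 'url3']
```

This gives the flat list of urls the question asked for instead of a list of one-element lists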
Numbers separated by spaces in txt file into python list I am trying to convert a txt file containing lines of numbers separated by spaces into numbers separated by commas in lists where each line is a new list of these numbers using Python 3 E g txt file contains <blockquote> 1 2 3 4 5 6 7 8 9 10 </blockquote> and I want this in python: ````[1 2 3 4 5] [6 7 8 9 10] ```` I cannot seem to find a good solution I used numpy and obtained the list of lists but not comma separated e g : `[[1 2 3 4 5] [6 7 8 9 10]]` Here is the example code I have used that does not quite work: ````import numpy as np mymatrix = np loadtxt('file') ```` Grateful for any input! (ps I am a beginner but want to use the lists in a programme I am developing)
Following uses plain Python 3 (without NumPy) ````# open file with open('file txt') as fp: # 1 iterate over file line-by-line # 2 strip line of newline symbols # 3 split line by spaces into list (of number strings) # 4 convert number substrings to int values # 5 convert map object to list data = [list(map(int line strip() split(' '))) for line in fp] ```` This provides the result you are looking for: ````&gt;&gt;&gt; with open('data txt') as fp: data = [list(map(int line strip() split(' '))) for line in fp] &gt;&gt;&gt; print(data) [[1 2 3 4 5] [6 7 8 9 10]] ````
How can I manipulate strings in a slice of a pandas MultiIndex I have a `MultiIndex` like this: ```` metric sensor variable side foo Speed Left Left speed Right Right speed bar Speed Left Left_Speed Right Right_Speed baz Speed Left speed foo Support Left Left support Right Right support bar Support Left Left_support Right Right_support baz Support Left support ```` I am trying to apply a string mapping to a slice of this dataframe: ````df loc['baz' : 'Left'] metric map(lambda s: "Left_" s) ```` How can I apply this map to just the `baz-Left` rows and get back the resulting `DataFrame`? ```` metric sensor variable side foo Speed Left Left speed Right Right speed bar Speed Left Left_Speed Right Right_Speed baz Speed Left Left_speed foo Support Left Left support Right Right support bar Support Left Left_support Right Right_support baz Support Left Left_support ````
I found the following method but I think/hope there must be a more elegant way to achieve that: ````In [101]: index_saved = df index ```` Let us sort the index in order to get rid of the `KeyError: 'MultiIndex Slicing requires the index to be fully lexsorted tuple len (3) lexsort depth (0)'` error: ````In [102]: df = df sort_index() In [103]: df Out[103]: metric sensor variable side bar Speed Left Left_Speed Right Right_Speed Support Left Left_support Right Right_support baz Speed Left speed Support Left support foo Speed Left Left speed Right Right speed Support Left Left support Right Right support In [119]: df loc[pd IndexSlice['baz' : 'Left'] 'metric'] = \ : 'AAA__' df loc[pd IndexSlice['baz' : 'Left'] 'metric'] In [120]: df Out[120]: metric sensor variable side bar Speed Left Left_Speed Right Right_Speed Support Left Left_support Right Right_support baz Speed Left AAA__speed Support Left AAA__support foo Speed Left Left speed Right Right speed Support Left Left support Right Right support ```` Set back the old (saved) index: ````In [121]: df = df reindex(index_saved) In [122]: df Out[122]: metric sensor variable side foo Speed Left Left speed Right Right speed bar Speed Left Left_Speed Right Right_Speed baz Speed Left AAA__speed foo Support Left Left support Right Right support bar Support Left Left_support Right Right_support baz Support Left AAA__support ````
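An alternative sketch that needs no sorting or re-indexing is to build a boolean mask from the index levels (this assumes the levels are named `variable` and `side` as in the question; the small frame below is made up for the demo):

```python
import pandas as pd

# toy frame shaped like the question's MultiIndex
idx = pd.MultiIndex.from_tuples(
    [('foo', 'Left'), ('foo', 'Right'), ('baz', 'Left')],
    names=['variable', 'side'])
df = pd.DataFrame({'metric': ['Left speed', 'Right speed', 'speed']}, index=idx)

# select the baz/Left rows without lexsorting the index
mask = ((df.index.get_level_values('variable') == 'baz') &
        (df.index.get_level_values('side') == 'Left'))
df.loc[mask, 'metric'] = 'Left_' + df.loc[mask, 'metric']

print(df.loc[mask, 'metric'].iloc[0])  # Left_speed
```

Because `get_level_values` works on an unsorted `MultiIndex` this sidesteps the lexsort `KeyError` entirely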
Recursive functions : Inversing word I am trying to make a simple function that inverses a string using recursion this is what i tried : ```` def inverse(ch): if ch=='' : return '' else: return ch[len(ch)]+inverse(ch[1:len(ch)-1]) print inverse('hello') ```` And this is what i get : <blockquote> line 13 in inverse return ch[len(ch)]+inverse(ch[1:len(ch)-1]) IndexError: string index out of range </blockquote>
String indices run from `0` to `len(ch)-1` so `ch[len(ch)]` is out of range Check this: ````ch[len(ch)-1]+inverse(ch[0:len(ch)-1]) ````
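Putting it together the whole function becomes:

```python
def inverse(ch):
    # base case: the reverse of an empty string is an empty string
    if ch == '':
        return ''
    # last character first, then recurse on everything before it
    return ch[len(ch) - 1] + inverse(ch[0:len(ch) - 1])

print(inverse('hello'))  # olleh
```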
How to split files according to a field and edit content I am not sure if I can do this using unix commands or I need a more complicated code like python I have a big input file with 3 columns - id different sequences (second column) grouped in different groups (3rd column) ````Seq1 MVRWNARGQPVKEASQVFVSYIGVINCREVPISMEN Group1 Seq2 PSLFIAGWLFVSTGLRPNEYFTESRQGIPLITDRFDSLEQLDEFSRSF Group1 Seq3 HQAPAPAPTVISPPAPPTDTTLNLNGAPSNHLQGGNIWTTIGFAITVFLAVTGYSF Group20 ```` I would like: split this file according the group id and create separate files for each group; edit the info in each file adding a ">" sign in the beginning of the id; and then create a new row for the sequence ````Group1 txt file &gt;Seq1 MVRWNARGQPVKEASQVFVSYIGVINCREVPISMEN &gt;Seq2 PSLFIAGWLFVSTGLRPNEYFTESRQGIPLITDRFDSLEQLDEFSRSF Group20 txt file &gt;Seq3 HQAPAPAPTVISPPAPPTDTTLNLNGAPSNHLQGGNIWTTIGFAITVFLAVTGYSF ```` How can I do that?
This shell script should do the trick: ````#!/usr/bin/env bash filename="data txt" while read line; do id=$(echo "${line}" | awk '{print $1}') sequence=$(echo "${line}" | awk '{print $2}') group=$(echo "${line}" | awk '{print $3}') printf "&gt;${id}\n${sequence}\n" &gt;&gt; "${group} txt" done < "${filename}" ```` where `data txt` is the name of the file containing the original data Importantly since the script appends the Group-files should not exist prior to running it
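Since the question also allows Python here is a sketch of the same idea (the `data txt` file name matches the shell version; the sample records written at the top are made up and shortened for the demo):

```python
# write a small made-up input file shaped like the question's data
with open('data.txt', 'w') as fp:
    fp.write('Seq1 MVRWN Group1\nSeq2 PSLFI Group1\nSeq3 HQAPA Group20\n')

# append each record to its group file, prefixing the id with '>'
with open('data.txt') as fp:
    for line in fp:
        seq_id, sequence, group = line.split()
        with open(group + '.txt', 'a') as out:
            out.write('>{}\n{}\n'.format(seq_id, sequence))

print(open('Group1.txt').read())
```

As with the shell version the group files are opened in append mode so they should not exist beforehand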
Sorting list with dictionaries values (Maximum to Minimum) I have an array with loads of dictionaries in it However I want to sort dictionaries in a way where I have maximum value to a specific key in a dictionary For example I have a list that looks like this ````[ { "num_gurus": 40 "id": 119749 "code": null "name": "ART 198P" "short_name": "ART 198P" "title": "Directed Group Study" "department_long": null "full_name": "Directed Group Study" "department_short": "ART" } { "num_gurus": 3 "id": 119825 "code": null "name": "ASAMST 198P" "short_name": "ASAMST 198P" "title": "Supervised Group Study" "department_long": null "full_name": "Supervised Group Study" "department_short": "ASAMST" } { "num_gurus": 200 "id": 119904 "code": null "name": "AST 636" "short_name": "AST 636" "title": "Baudelaire: Art Poetry Modernity" "department_long": null "full_name": "Baudelaire: Art Poetry Modernity" "department_short": "AST" } ] ```` I want my output to sort my dictionaries where the value of a key attribute 'num_gurus' is maximum to minimum Expected output would be ````[ { "num_gurus": 200 "id": 119904 "code": null "name": "AST 636" "short_name": "AST 636" "title": "Baudelaire: Art Poetry Modernity" "department_long": null "full_name": "Baudelaire: Art Poetry Modernity" "department_short": "AST" } { "num_gurus": 40 "id": 119749 "code": null "name": "ART 198P" "short_name": "ART 198P" "title": "Directed Group Study" "department_long": null "full_name": "Directed Group Study" "department_short": "ART" } { "num_gurus": 3 "id": 119825 "code": null "name": "ASAMST 198P" "short_name": "ASAMST 198P" "title": "Supervised Group Study" "department_long": null "full_name": "Supervised Group Study" "department_short": "ASAMST" } ] ```` I have tried this so far ````for items in load_as_json: for key val in sorted(items['num_gurus'] iteritems() key=lambda (k v): (v k) reverse=True): print key val ```` This throws me an error and does not do what I actually want This is the error I got ````File "utils py" line 61 in GetPopularCoursesBasedOnGurus for key val in sorted(str(items['num_gurus']) iteritems() key=lambda (k v): (v k)): AttributeError: 'str' object has no attribute 'iteritems' ````
try this: ````my_list sort(key=lambda my_dict: my_dict["num_gurus"] reverse=True) ```` what this does is basically two things: - the `key` parameter expects a function (an anonymous lambda in python) and the list is sorted by the values that function returns `lambda my_dict: my_dict["num_gurus"]` returns the "num_gurus" item within each dictionary hence the list is sorted by those values - `reverse=True` by default the sort function sorts from min to max hence this simply reverses that also I find this very "unsafe" as you have no guarantee that a "num_gurus" key exists within your dictionaries or that every list item is a dictionary hence I would personally wrap this with some exception handler: `try` \ `except` read more here: <a href="https://docs python org/2 7/tutorial/errors html" rel="nofollow">https://docs python org/2 7/tutorial/errors html</a> remember better safe than sorry!
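Applied to a trimmed-down version of the data from the question:

```python
# trimmed-down records from the question
my_list = [{"num_gurus": 40, "name": "ART 198P"},
           {"num_gurus": 3, "name": "ASAMST 198P"},
           {"num_gurus": 200, "name": "AST 636"}]

# sort in place from the highest num_gurus to the lowest
my_list.sort(key=lambda my_dict: my_dict["num_gurus"], reverse=True)

print([d["num_gurus"] for d in my_list])  # [200, 40, 3]
```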
Pandas df to dictionary with values as python lists aggregated from a df column I have a pandas df containing 'features' for stocks which looks like this: <a href="https://i stack imgur com/fekQs png" rel="nofollow"><img src="https://i stack imgur com/fekQs png" alt="features for stocks previous to training neural net"></a> I am now trying to create a dictionary with <strong>unique sector</strong> as <strong>key</strong> and a <strong>python list of tickers</strong> for that unique sector as <strong>values</strong> so I end up having something that looks like this: ````{'consumer_discretionary': ['AAP' 'AMZN' 'AN' 'AZO' 'BBBY' 'BBY' 'BWA' 'KMX' 'CCL' 'CBS' 'CHTR' 'CMG' ```` etc I could iterate over the pandas df rows to create the dictionary but I prefer a more pythonic solution Thus far this code is a partial solution: ````df set_index('sector')['ticker'] to_dict() ```` Any feedback is appreciated <strong>UPDATE:</strong> The solution by @wrwrwr ````df set_index('ticker') groupby('sector') groups ```` partially works but it returns a <strong>pandas series</strong> as a the value instead of a <strong>python list</strong> Any ideas about how to transform the pandas series into a python list in the same line and w/o having to iterate the dictionary?
Would not `f set_index('ticker') groupby('sector') groups` be what you want? For example: ````f = DataFrame({ 'ticker': ('t1' 't2' 't3') 'sector': ('sa' 'sb' 'sb') 'name': ('n1' 'n2' 'n3')}) groups = f set_index('ticker') groupby('sector') groups # {'sa': Index(['t1']) 'sb': Index(['t2' 't3'])} ```` To ensure that they have the type you want: ````{k: list(v) for k v in f set_index('ticker') groupby('sector') groups items()} ```` or: ````f set_index('ticker') groupby('sector') apply(lambda g: list(g index)) to_dict() ````
Querying MySQL from multiple uWSGI workers returns mismatched rows I am running a query against a MySQL database from a Flask app being run with uWSGI with multiple workers I have noticed that sometimes when I query a resource by id the id of the returned row is different than the one I queried with I thought that query isolation meant that this was not possible However it appears that MySQL is getting the queries mixed up I am not able to reproduce this when not using uWSGI but this may just be because it is running on localhost rather than a server when testing the Flask server by itself Why is there a mismatch between the input id and the result id? ````from flask import Flask import pymysql cursor random class Database: def __init__(self user password host database): self connection = pymysql connect( user=user password=password host=host database=database cursorclass=pymysql cursors DictCursor ) def query(self sql **kwargs): with self connection cursor() as cursor: cursor execute(sql kwargs) return cursor app = Flask(__name__) database = Database('user' 'password' 'localhost' 'database') @app route('/resources/<path:id&gt;') def resource(id): item = database query( 'SELECT resources id FROM resources WHERE resources id = %(id)s' id=id ) fetchone() identifier = random random() print(identifier 'ID 1:' id) print(identifier 'ID 2:' item['id']) if int(item['id']) != int(id): print('Error found!!!') return 'Done' 200 if __name__ == '__main__': app run() ```` <pre class="lang-none prettyprint-override">`[pid: 2824|app: 0|req: 1/1] xxx xxx xxx xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/10 =&gt; generated 4 bytes in 6 msecs (HTTP/1 1 200) 2 headers in 78 bytes (1 switches on core 0) 0 687535338604848 ID 1: 11 0 687535338604848 ID 2: 11 [pid: 2821|app: 0|req: 1/2] xxx xxx xxx xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/11 =&gt; generated 4 bytes in 5 msecs (HTTP/1 1 200) 2 headers in 78 bytes (1 switches on core 
0) 0 9216930740141296 ID 1: 13 0 9216930740141296 ID 2: 13 [pid: 2823|app: 0|req: 1/3] xxx xxx xxx xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/13 =&gt; generated 4 bytes in 6 msecs (HTTP/1 1 200) 2 headers in 78 bytes (1 switches on core 0) 0 9053128320497649 ID 1: 12 0 9053128320497649 ID 2: 14 Error found!!! 0 794023616025622 ID 1: 15 0 794023616025622 ID 2: 15 [pid: 2824|app: 0|req: 2/4] xxx xxx xxx xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/15 =&gt; generated 4 bytes in 1 msecs (HTTP/1 1 200) 2 headers in 78 bytes (1 switches on core 0) [pid: 2822|app: 0|req: 1/5] xxx xxx xxx xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/12 =&gt; generated 4 bytes in 31 msecs (HTTP/1 1 200) 2 headers in 78 bytes (1 switches on core 0) 0 3608322871408709 ID 1: 14 0 3608322871408709 ID 2: 16 Error found!!! [pid: 2825|app: 0|req: 1/6] xxx xxx xxx xxx () {44 vars in 737 bytes} [Wed Oct 19 18:38:07 2016] GET /resources/14 =&gt; generated 4 bytes in 18 msecs (HTTP/1 1 200) 2 headers in 78 bytes (1 switches on core 0) 0 8346421078513786 ID 1: 16 0 8346421078513786 ID 2: 17 Error found!!! ````
<strong>For anyone else facing this issue I have found the following solution </strong> According to <a href="http://uwsgi-docs readthedocs io/en/latest/ThingsToKnow html" rel="nofollow">http://uwsgi-docs readthedocs io/en/latest/ThingsToKnow html</a> <blockquote> uWSGI tries to (ab)use the Copy On Write semantics of the fork() call whenever possible By default it will fork after having loaded your applications to share as much of their memory as possible If this behavior is undesirable for some reason use the lazy-apps option This will instruct uWSGI to load the applications after each worker’s fork() </blockquote> After taking a look at <a href="http://stackoverflow com/questions/22752521/uwsgi-flask-sqlalchemy-and-postgres-ssl-error-decryption-failed-or-bad-reco">uWSGI Flask sqlalchemy and postgres: SSL error: decryption failed or bad record mac</a> I realised my problem was to do with the fact that multiple processes were being created However because uWSGI loads all the processes from one master worker by default (and does not run the whole of the Flask application each time) it turns out that all the workers end up sharing a database connection (which does not end well!) The solution is to include the `lazy-apps` parameter which forces all the code to be run when each worker is created
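For reference a minimal uWSGI ini sketch with the option enabled (the module name and worker count here are assumptions not taken from the question):

```ini
[uwsgi]
module = app:app
master = true
processes = 4
; fork the workers before loading the app so each one opens its own DB connection
lazy-apps = true
```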