What is the major annual festival in Montevideo?
the annual Montevideo Carnaval
How do you keep GPG from asking for PinEntry? I am calling GPG from Python like this: ````subprocess.Popen(['gpg', '--list-packets', '--batch', '--no-tty'])````. You would think that `--batch` and `--no-tty` would stop it from popping open the PinEntry dialog in KDE. What am I missing?
This depends on the version of GnuPG you are using. - <strong>GnuPG 1</strong>: Use `--no-use-agent` to prevent GnuPG from asking the agent (which results in the pin entry dialog being opened). - <strong>GnuPG 2</strong>: There is no way to prevent the agent from being asked, but (at least starting with GnuPG 2.1) you can use <a href="https://www.gnupg.org/documentation/manuals/gnupg/gpg_002dpreset_002dpassphrase.html" rel="nofollow">`gpg-preset-passphrase`</a> to make sure `gpg-agent` already knows your passphrase and will not ask for it. At least on systems running Debian (and probably derivatives) it is hidden in `/usr/lib/gnupg2/gpg-preset-passphrase`.
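As a rough illustration of the GnuPG 2 approach, the snippet below presets a passphrase from Python before any later `gpg` call. It is only a sketch: the keygrip and passphrase values are placeholders, it assumes `allow-preset-passphrase` is enabled in `gpg-agent.conf`, and you would look up the real keygrip yourself (recent `gpg -K --with-keygrip` shows it).

````
import subprocess

# Hypothetical values -- substitute the real keygrip and a safer passphrase source.
KEYGRIP = "0123456789ABCDEF0123456789ABCDEF01234567"
PASSPHRASE = "secret"

# gpg-preset-passphrase reads the passphrase from stdin and hands it to the
# running gpg-agent (requires "allow-preset-passphrase" in gpg-agent.conf).
p = subprocess.Popen(
    ["/usr/lib/gnupg2/gpg-preset-passphrase", "--preset", KEYGRIP],
    stdin=subprocess.PIPE,
)
p.communicate((PASSPHRASE + "\n").encode())

# Subsequent gpg invocations in this agent session should no longer
# pop up a pinentry dialog.
````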
Creating numpy array with empty columns using genfromtxt. I am importing data using `numpy.genfromtxt` and I would like to add a field of values derived from some of those within the dataset. As this is a structured array, it seems like the most simple, efficient way of adding a new column to the array is by using `numpy.lib.recfunctions.append_fields()`. I found a good description of this library <a href="http://www.astropython.org/resource/2011/7/recfunctions" rel="nofollow">HERE</a>. Is there a way of doing this without copying the array, perhaps by forcing `genfromtxt` to create an empty column to which I can append derived values?
I was trying to make `genfromtxt` read this: ````11,12,13,14,15 21,22 31,32,33,34,35 41,42,43,,45```` using: ````import numpy as np print np.genfromtxt('tmp.txt', delimiter=',', filling_values='0')```` but it did not work. I had to change the input, adding commas to represent the empty columns: ````11,12,13,14,15 21,22,,, 31,32,33,34,35 41,42,43,,45```` then it worked, returning: ````[[ 11. 12. 13. 14. 15.] [ 21. 22. 0. 0. 0.] [ 31. 32. 33. 34. 35.] [ 41. 42. 43. 0. 45.]]````
Printing outlook calendar events list using pyExchange. I am working with Microsoft Outlook 2010 and with <a href="https://pyexchange.readthedocs.org/en/latest/#" rel="nofollow">pyExchange</a>. I am trying to list all the events scheduled between two dates, as mentioned <a href="https://pyexchange.readthedocs.org/en/latest/#listing-events" rel="nofollow">here</a> in the docs. My code snippet is as below: ````eventsList = service.calendar().list_events( start=timezone("Europe/London").localize(datetime(2015, 1, 12, 1, 0, 0)), end=timezone("Europe/London").localize(datetime(2015, 1, 14, 23, 0, 0))) print eventsList for event in eventsList: print "{start} {stop} - {subject}".format( start=event.start, stop=event.end, subject=event.subject )```` I have created events in my calendar manually using Outlook as well as by using pyExchange. But whenever I execute the code snippet above I only get the following traceback: ````<pyexchange.exchange2010.Exchange2010CalendarEventList object at 0x02056550> Traceback (most recent call last): File "C:\Users\p\Desktop\getEvent.py", line 41, in <module> for event in eventsList: TypeError: 'Exchange2010CalendarEventList' object is not iterable```` Any suggestion why this is happening and how to solve it? Thanks
It seems like the eventsList is not iterable, which means you cannot walk through it item per item. This would mean that the eventsList is not a list or string or any other iterable object. You have to iterate over its members instead: ````for event in eventsList.events: # do stuff```` PS: The doc is 'unfinished', better read the source.
Correct way to handle UTF-8 for outgoing email Anyone could explain what is the correct way to send this email on UTF-8 form? Destination received as human unreadable code Edit1: added more detail on the code which shows where uploaded_file variable come from Edit2: Added last section of the code ````import smtplib from email mime multipart import MIMEMultipart from email mime text import MIMEText def upload(upload_file): ftp = ftplib FTP('ftp domain com') ftp login("user" "pass") f = open(upload_file 'rb') ftp_server_response = ftp storbinary('STOR %s' %upload_file f) ftp_server_response_msg = ftp_server_response split("/" 5)[4] f close() ftp quit() os remove(upload_file) uploaded_filename = os path basename(upload_file) html = """\ <iframe src="https://example com/embed/{file_id}/{uploaded_file}" scrolling="no" frameborder="0" width="700" height="430" allowfullscreen="true" webkitallowfullscreen="true" mozallowfullscreen="true"&gt;</iframe&gt; """ format(file_id=ftp_server_response_msg uploaded_file=uploaded_filename) From = 'email@domain com' Recipient = 'email@domain com' # Credentials username = 'user01@domain com' password = 'password' server = smtplib SMTP('smtp domain com:587') email_msg = MIMEMultipart('alternative') email_msg['Subject'] = os path basename(upload_file) rsplit(" " 1)[0] email_msg['From'] = From email_msg['To'] = Recipient email_msg_part1 = MIMEText(html 'html') email_msg attach(email_msg_part1) server ehlo() server starttls() server login(username password) server sendmail(From Recipient email_msg as_string()) server quit() if __name__ == "__main__": pool = Pool(9) tasks = [] for root dirs filenames in os walk("/ext_hdd/download"): dirs[:] = [d for d in dirs if d not in exclude] for extension in file_extensions: for filename in fnmatch filter(filenames extension): match = os path join(root filename) file_size = os path getsize(match) if file_size &gt; 209715200: tasks append(pool apply_async(upload args=(match ))) else: pass for task in tasks: print task task get() pool close() pool join() ````
The quick answer might be because you have not specified the encoding on MIMEText and the subject header is not defined as UTF-8. Assuming all your strings are UTF-8 encoded, you should use: ````email_msg_part1 = MIMEText(html, 'html', "utf-8") email_msg['Subject'] = Header(os.path.basename(upload_file).rsplit(".", 1)[0], "utf-8")```` If this does not work, however, then you should concentrate on the source of `upload_file`. I presume that `upload_file` comes from a file listing. On Linux, filenames are not neutrally encoded like they are on Windows or enforced on OS X. This means that you can create a file with a UTF-8 encoded filename which will look corrupted to a program that is reading the filename as ISO-8859-15. It is possible that the files in `/ext_hdd/download` do not have UTF-8 filenames. You are then passing this non-UTF-8 encoded string for use where UTF-8 encoded strings should be used. To resolve this you should try to use Unicode strings where possible and let the mime library encode as it wishes. To get Unicode strings you need to decode encoded strings like filenames. An easy way to do this is to pass a Unicode string as the directory name to `os.walk()`: ````os.walk(u"/ext_hdd/download")```` This will use Python's locale to decode filenames where possible. Where it is not possible to decode, it will return the encoded filename, and you will then need to force an encoding on the string. Let us assume the encoding is actually Windows-1252. Add this to your `os.walk()` code: ````if isinstance(filename, str): filename = filename.decode("windows-1252")```` Then set the message parts as given at the top.
Descriptor protocol implementation of property(). The Python <a href="https://docs.python.org/3.5/howto/descriptor.html#properties" rel="nofollow">descriptor How-To</a> describes how one could implement `property()` in terms of descriptors. I do not understand the reason for the first if-block in the `__get__` method. Under what circumstances will `obj` be `None`? What is supposed to happen then? Why do the `__set__` and `__delete__` methods not check for that? The code is a bit lengthy but it is probably better to give the full code rather than just a snippet. The questionable line is marked. ````class Property(object): "Emulate PyProperty_Type() in Objects/descrobject.c" def __init__(self, fget=None, fset=None, fdel=None, doc=None): self.fget = fget self.fset = fset self.fdel = fdel if doc is None and fget is not None: doc = fget.__doc__ self.__doc__ = doc def __get__(self, obj, objtype=None): # =====>>> What is the reason for this if block? <<<===== if obj is None: return self if self.fget is None: raise AttributeError("unreadable attribute") return self.fget(obj) def __set__(self, obj, value): if self.fset is None: raise AttributeError("cannot set attribute") self.fset(obj, value) def __delete__(self, obj): if self.fdel is None: raise AttributeError("cannot delete attribute") self.fdel(obj) def getter(self, fget): return type(self)(fget, self.fset, self.fdel, self.__doc__) def setter(self, fset): return type(self)(self.fget, fset, self.fdel, self.__doc__) def deleter(self, fdel): return type(self)(self.fget, self.fset, fdel, self.__doc__)````
From the <a href="https://docs.python.org/3.5/howto/descriptor.html#invoking-descriptors" rel="nofollow">descriptor documentation</a>: <blockquote> The details of invocation depend on whether `obj` is an object or a class. </blockquote> Basically, instances call descriptors as `type(b).__dict__['x'].__get__(b, type(b))`, while classes call descriptors as `B.__dict__['x'].__get__(None, B)`. If `obj is None`, it means the getter was called from the class, not an instance. This machinery is used, for example, to implement <a href="https://docs.python.org/3.5/howto/descriptor.html#static-methods-and-class-methods" rel="nofollow">classmethods</a>. The `__set__` and `__delete__` methods do not check for `obj is None` because they can never be called like this; only `__get__` is invoked when accessed from the class. Doing `cls.prop = 2` or `del cls.prop` will directly overwrite or delete the property object without invoking `__set__` or `__delete__`.
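To see that branch in action, here is a minimal sketch contrasting class access with instance access; it reuses the `Property` class from the question, and the class name `C` is just for illustration.

````
class C(object):
    @Property              # the emulated property defined in the question
    def x(self):
        "doc for x"
        return 42

print(C.x)                 # obj is None: the Property descriptor itself is returned
print(C().x)               # obj is a C instance: fget runs and 42 is returned
````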
Selecting columns with condition on Pandas DataFrame. I have a dataframe looking like this: ```` col1 col2 0 something1 something1 1 something2 something3 2 something1 something1 3 something2 something3 4 something1 something2```` I am trying to filter all rows that have `something1` either on `col1` or `col2`. If I just need the condition logic on one column I can do it with `df[df.col1 == 'something1']`, but would there be a way to do it with multiple columns?
You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow">`all`</a> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow">`boolean indexing`</a>: ````print ((df == 'something1').all(1)) 0 True 1 False 2 True 3 False 4 False dtype: bool print (df[(df == 'something1').all(1)]) col1 col2 0 something1 something1 2 something1 something1```` EDIT: If you need to select only some columns you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.isin.html" rel="nofollow">`isin`</a> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow">`boolean indexing`</a> for selecting the desired `columns` and then use the subset `df[cols]`: ````print (df) col1 col2 col3 0 something1 something1 a 1 something2 something3 s 2 something1 something1 r 3 something2 something3 a 4 something1 something2 a cols = df.columns[df.columns.isin(['col1', 'col2'])] print (cols) Index(['col1', 'col2'], dtype='object') print (df[(df[cols] == 'something1').all(1)]) col1 col2 col3 0 something1 something1 a 2 something1 something1 r````
assigning a list in python ````pt=[2] pt[0]=raw_input()```` When I do this and give an input, suppose 1011, it says list indexing error - "list assignment index out of range". May I know why? I think I am not able to assign a list properly. How do I assign an array of 2 elements in python, then?
It is not clear what you are trying to do. My guess is that you are trying to do this: ````pt = [2] # Create array with two elements? for i in range(2): pt[i] = raw_input()```` Note that the first line does not create an array with two elements; it creates a list with one element: the number 2. You could try this instead, although there are more Pythonic ways to do it: ````pt = [None] * 2 for i in range(2): pt[i] = raw_input()````
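One of the "more Pythonic ways" alluded to above would be to build the list directly instead of pre-sizing it; a one-line sketch for the same two-element case: ````pt = [raw_input() for _ in range(2)]````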
Allow only update in Django rest api framework. I am working on a very simple application that has only 1 model with a couple of fields. I want these models to be created or deleted only through the /admin pages (through the standard Django admin framework) and allow the rest api framework to only alter these objects. Is there any simple way to make it happen?
Just create a viewset that uses the retrieve/update model mixins: ````from rest_framework import viewsets, mixins class FooViewSet(mixins.RetrieveModelMixin, mixins.UpdateModelMixin, viewsets.GenericViewSet): model = Foo queryset = Foo.objects.all() serializer_class = FooSerializer```` This will only give you API end points to retrieve or update an instance of your model.
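To expose that viewset you would still register it with a router as usual; a minimal sketch, where the `foos` URL prefix is arbitrary and `Foo`/`FooSerializer` are the placeholder names used above:

````
from django.conf.urls import include, url
from rest_framework import routers

router = routers.DefaultRouter()
router.register(r'foos', FooViewSet)   # 'foos' is just an illustrative prefix

urlpatterns = [
    url(r'^api/', include(router.urls)),
]
````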
Python/Tkinter: expanding fontsize dynamically to fill frame. I know you can get frame widgets to expand and fill all of the area available to them in their container via these commands: `frameName.pack(fill='both', expand=True)`. What would do the same for a text's font size? Currently my text is an attribute of a label widget, and the label widget's parent is `frameName`. I guess I could define my own function to call `labelName.config(fontsize=N)` to update the font size as the frame gets bigger, but I am not sure how to correlate them. This is what my program looks like right now: <img src="http://i.stack.imgur.com/ysnZK.png" alt=""> Each of those blocks is a frame widget. I would like the text to expand to fill up the frame in some capacity and respond to resizing of the window as well.
You can use tkFont.Font. When you initialize the label, set the font to a variable, such as: ````self.font = SOME_BASE_FONT self.labelName.config(font=self.font)```` Then you can use: ````self.font = tkFont.Font(size=PIXEL_HEIGHT)```` This you can scale to the height of the label. You can bind a `'<Configure>'` event to the widget and make your callback function adjust the label size: ````frameName.bind('<Configure>', self.resize) def resize(self, event): self.font = tkFont.Font(size=widget_height)```` For more info see the documentation here: <a href="http://www.tutorialspoint.com/python/tk_fonts.htm" rel="nofollow">http://www.tutorialspoint.com/python/tk_fonts.htm</a>
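Pulling those pieces together, a small runnable sketch might look like the following (Python 2 module names `Tkinter`/`tkFont`; the base size and the `// 5` scaling factor are arbitrary assumptions, not a recommendation):

````
import Tkinter as tk
import tkFont

root = tk.Tk()
frame = tk.Frame(root)
frame.pack(fill='both', expand=True)

font = tkFont.Font(size=12)
label = tk.Label(frame, text="Hello", font=font)
label.pack(fill='both', expand=True)

def resize(event):
    # Scale the font with the frame height reported by the <Configure> event.
    font.configure(size=max(8, event.height // 5))

frame.bind('<Configure>', resize)
root.mainloop()
````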
How do I install OpenCV 3 on CentOS 6.8? I am working on a CentOS cluster right now and have Python 2.7 installed. I have managed to get OpenCV 2.4 installed (using <a href="https://superuser.com/questions/678568/install-opencv-in-centos">these</a> helpful instructions) but it does not have all of the functionality of 3 (I need the connectedComponents function and a couple of others not available). Omitting the "checkout tags" step results in errors during "cmake". Something else to note: when I attempt to install the ffmpeg package it tells me no such package is available. Error: ````CMake Error at 3rdparty/ippicv/downloader.cmake:77 (message): ICV: Failed to download ICV package: ippicv_linux_20151201.tgz Status=6;"Could not resolve host name" Call Stack (most recent call first): 3rdparty/ippicv/downloader.cmake:110 (_icv_downloader) cmake/OpenCVFindIPP.cmake:237 (include)````
<blockquote> I have managed to get OpenCV 2.4 installed (using these helpful instructions) but it does not have all of the functionality of 3 (I need the connectedComponents function and a couple of others not available) </blockquote> Why do you not just download OpenCV 3 then? <blockquote> Something else to note: when I attempt to install the ffmpeg package it tells me no such package is available </blockquote> You can download the file yourself from <a href="http://sourceforge.net/projects/opencvlibrary/files/3rdparty/ippicv/" rel="nofollow">here</a> (the package that is not available for you). Then place it in the folder where it initially would have been downloaded to: `<your opencv build>/3rdparty/ippicv/`
path to strings in HTML How can I generate all the paths to text strings in a HTML document preferably using BeautifulSoup? I have f e this code: ````<DIV class="art-info"&gt;<SPAN class="time"&gt;<SPAN class="time-date" content="2012-02-28T14:46CET" itemprop="datePublished"&gt; 28 february 2012 </SPAN&gt; 14:46 </SPAN&gt;</DIV&gt;<DIV&gt; Something <P&gt;something else</P&gt;continuing </DIV&gt; ```` I would like to divide HTML code into paths to text strings like ````str1 &gt;&gt;&gt; <DIV class="art-info"&gt;<SPAN class="time"&gt;<SPAN class="time-date" content="2012-02-28T14:46CET" itemprop="datePublished"&gt;28 february 2012</SPAN&gt;</SPAN&gt;</DIV&gt; str2 &gt;&gt;&gt; <DIV class="art-info"&gt;<SPAN class="time"&gt;14:46</SPAN&gt;</DIV&gt; str3 &gt;&gt;&gt; <DIV&gt;Something continuing </DIV&gt; str4 &gt;&gt;&gt; <DIV&gt;<P&gt;something else</P&gt;</DIV&gt; ```` or ````str1 &gt;&gt;&gt; <DIV&gt;<SPAN&gt;<SPAN&gt;28 february 2012</SPAN&gt;</SPAN&gt;</DIV&gt; str2 &gt;&gt;&gt; <DIV&gt;<SPAN&gt;14:46</SPAN&gt;</DIV&gt; str3 &gt;&gt;&gt; <DIV&gt;Something continuing </DIV&gt; str4 &gt;&gt;&gt; <DIV&gt;<P&gt;something else</P&gt;</DIV&gt; ```` or ````str1 &gt;&gt;&gt; //div/span/span/28 february str2 &gt;&gt;&gt; //div/span/14:46 str3 &gt;&gt;&gt; //div/Something continuing str4 &gt;&gt;&gt; //div/p/something else ```` I have studied BeautifulSoup documentation but I cannot figure out how to do it Do you have any ideas?
````from bs4 import BeautifulSoup import re file = open("input") soup = BeautifulSoup(file) for t in soup(text=re.compile(".")): path = '/'.join(reversed([p.name for p in t.parentGenerator() if p])) print path + "/" + t.strip()```` <strong>Output</strong> ````[document]/html/body/div/span/span/28 february 2012 [document]/html/body/div/span/14:46 [document]/html/body/div/Something [document]/html/body/div/p/something else [document]/html/body/div/continuing````
How can I add an element at the top of an OrderedDict in python? I have this: ````d1 = OrderedDict([('a', '1'), ('b', '2')])```` If I do `d1.update({'c': '3'})` then I get `OrderedDict([('a', '1'), ('b', '2'), ('c', '3')])`, but I want this: ````[('c', '3'), ('a', '1'), ('b', '2')]```` without creating a new dictionary.
There is no built-in method for doing this in Python 2 If you need this you need to write a `prepend()` method/function that operates on the `OrderedDict` internals with O(1) complexity For Python 3 2 and later you can use the <a href="https://hg python org/cpython/file/3 2/Lib/collections py#l130">`move_to_end`</a><sup>1</sup> method The method accepts a `last` argument which indicates whether the element will be moved to the bottom (`last=True`) or the top (`last=False`) of the `OrderedDict` Finally if you want a quick dirty and <them>slow</them> solution you can just create a new `OrderedDict` from scratch Details for the four different solutions: <hr> <h1>Extend `OrderedDict` and add a new instance method</h1> ````from collections import OrderedDict class MyOrderedDict(OrderedDict): def prepend(self key value dict_setitem=dict __setitem__): root = self _OrderedDict__root first = root[1] if key in self: link = self _OrderedDict__map[key] link_prev link_next _ = link link_prev[1] = link_next link_next[0] = link_prev link[0] = root link[1] = first root[1] = first[0] = link else: root[1] = first[0] = self _OrderedDict__map[key] = [root first key] dict_setitem(self key value) ```` <strong>Demo:</strong> ````&gt;&gt;&gt; d = MyOrderedDict([('a' '1') ('b' '2')]) &gt;&gt;&gt; d MyOrderedDict([('a' '1') ('b' '2')]) &gt;&gt;&gt; d prepend('c' 100) &gt;&gt;&gt; d MyOrderedDict([('c' 100) ('a' '1') ('b' '2')]) &gt;&gt;&gt; d prepend('a' d['a']) &gt;&gt;&gt; d MyOrderedDict([('a' '1') ('c' 100) ('b' '2')]) &gt;&gt;&gt; d prepend( would' 200) &gt;&gt;&gt; d MyOrderedDict([( would' 200) ('a' '1') ('c' 100) ('b' '2')]) ```` <hr> <h1>Standalone function that manipulates `OrderedDict` objects</h1> This function does the same thing by accepting the dict object key and value I personally prefer the class: ````from collections import OrderedDict def ordered_dict_prepend(dct key value dict_setitem=dict __setitem__): root = dct _OrderedDict__root first = root[1] if key in dct: link = dct _OrderedDict__map[key] link_prev link_next _ = link link_prev[1] = link_next link_next[0] = link_prev link[0] = root link[1] = first root[1] = first[0] = link else: root[1] = first[0] = dct _OrderedDict__map[key] = [root first key] dict_setitem(dct key value) ```` <strong>Demo:</strong> ````&gt;&gt;&gt; d = OrderedDict([('a' '1') ('b' '2')]) &gt;&gt;&gt; ordered_dict_prepend(d 'c' 100) &gt;&gt;&gt; d OrderedDict([('c' 100) ('a' '1') ('b' '2')]) &gt;&gt;&gt; ordered_dict_prepend(d 'a' d['a']) &gt;&gt;&gt; d OrderedDict([('a' '1') ('c' 100) ('b' '2')]) &gt;&gt;&gt; ordered_dict_prepend(d would' 500) &gt;&gt;&gt; d OrderedDict([( would' 500) ('a' '1') ('c' 100) ('b' '2')]) ```` <hr> <h1>Use `OrderedDict move_to_end()` (Python >= 3 2)</h1> <sup>1</sup><a href="https://docs python org/3/whatsnew/3 2 html#collections">Python 3 2 introduced</a> the <a href="https://docs python org/3/library/collections html#collections OrderedDict move_to_end">`OrderedDict move_to_end()`</a> method Using it we can move an existing key to either end of the dictionary in O(1) time ````&gt;&gt;&gt; d1 = OrderedDict([('a' '1') ('b' '2')]) &gt;&gt;&gt; d1 update({'c':'3'}) &gt;&gt;&gt; d1 move_to_end('c' last=False) &gt;&gt;&gt; d1 OrderedDict([('c' '3') ('a' '1') ('b' '2')]) ```` If we need to insert an element and move it to the top all in one step we can directly use it to create a `prepend()` wrapper (not presented here) <hr> <h1>Create a new `OrderedDict` - slow!!!</h1> If you do not want to do that and <strong>performance is not an issue</strong> 
then easiest way is to create a new dict: ````from itertools import chain ifilterfalse from collections import OrderedDict def unique_everseen(iterable key=None): "List unique elements preserving order Remember all elements ever seen " # unique_everseen('AAAABBBCCDAABBB') -> A B C D # unique_everseen('ABBCcAD' str lower) -> A B C D seen = set() seen_add = seen add if key is None: for element in ifilterfalse(seen __contains__ iterable): seen_add(element) yield element else: for element in iterable: k = key(element) if k not in seen: seen_add(k) yield element d1 = OrderedDict([('a' '1') ('b' '2') ('c' 4)]) d2 = OrderedDict([('c' 3) ('e' 5)]) #dict containing items to be added at the front new_dic = OrderedDict((k d2 get(k d1 get(k))) for k in \ unique_everseen(chain(d2 d1))) print new_dic ```` <strong>output:</strong> ````OrderedDict([('c' 3) ('e' 5) ('a' '1') ('b' '2')]) ```` <hr>
Python: Reading individual elements of a file I am attempting to read in individual elements of a file In this example the first element of each line is to be the key of a dictionary The next five elements will be a corresponding value for said key in list form ````max_points = [25 25 50 25 100] assignments = ['hw ch 1' 'hw ch 2' 'quiz ' 'hw ch 3' 'test'] students = {'#Max': max_points} def load_records(students filename): #loads student records from a file in_file = open(filename "r") #run until break while True: #read line for each iteration in_line = in_file readline() #ends while True if not in_line: break #deletes line read in in_line = in_line[:-1] #initialize grades list grades = [0]*len(students['#Max']) #set name and grades name grades[0] grades[1] grades[2] grades[3] grades[4] = in_line split() #add names and grades to dictionary students[name] = grades print name students[name] filename = 'C:\Python27\Python_prgms\Grades_list txt' print load_records(students filename) ```` The method I have now is extremely caveman and I would like to know what the more elegant looping method would be I have been looking for a while but I cannot seem to find the correct method of iteration Help a brotha out
Another way of doing it: ````def load_records(students, filename): with open(filename) as f: for line in f: line = line.split() name = line[0] students[name] = map(int, line[1:]) print name, students[name]```` It seems a bit strange that the `students` dictionary contains both the scores and a parameter `#Max`, though - a key has two meanings: is it a student's name or a parameter's name? It might be better to separate them.
Starting and stopping python script but running for set amount of time I have a linear actuator hooked up to a raspberry pi that is turned on and off with a button push I want that actuator to move for a TOTAL prescribed amount of time Example: The total time I want the actuator to run is 5 seconds If a user pushes the button to start the actuator it begins moving Then after 3 seconds the user pushes the button again it stops Then if they push it again it starts moving and automatically stops after 5 seconds Here is the code I have for running the motor with the push button I just want to integrate the timing now Any ideas? ````from Adafruit_MotorHAT import Adafruit_MotorHAT Adafruit_DCMotor import RPi GPIO as GPIO import atexit from time import sleep from threading import Thread GPIO setmode(GPIO BCM) GPIO setup(23 GPIO IN) GPIO setup(24 GPIO IN) # create a default object no changes to I2C address or frequency mh = Adafruit_MotorHAT(addr=0x60) # recommended for auto-disabling motors on shutdown! def turnOffMotors(): mh getMotor(1) run(Adafruit_MotorHAT RELEASE) mh getMotor(2) run(Adafruit_MotorHAT RELEASE) mh getMotor(3) run(Adafruit_MotorHAT RELEASE) mh getMotor(4) run(Adafruit_MotorHAT RELEASE) atexit register(turnOffMotors) ################################# DC motor test! myMotor = mh getMotor(3) # set the speed to start from 0 (off) to 255 (max speed) myMotor setSpeed(255) myMotor run(Adafruit_MotorHAT FORWARD); # turn on motor myMotor run(Adafruit_MotorHAT RELEASE); snooziness = 5 stateOn = 0 stateOff = 0 while (True): if(GPIO input(23)== False) and (stateOn==0): myMotor run(Adafruit_MotorHAT FORWARD) sleep(snooziness) myMotor run(Adafruit_MotorHAT RELEASE) stateOn=1 if(GPIO input(23)== False) and (stateOn==1): myMotor run(Adafruit_MotorHAT RELEASE) stateOn=0 if(GPIO input(24)== False) and (stateOff==0): myMotor run(Adafruit_MotorHAT BACKWARD) sleep(snooziness) myMotor run(Adafruit_MotorHAT RELEASE) stateOff=1 if(GPIO input(24)== False) and (stateOff==1): myMotor run(Adafruit_MotorHAT RELEASE) stateOn=0 continue ````
Maybe something like: ````import time running = False stop_time = None while True: if buttonPushed(): if running: running = False stop_motor() stop_time = None else: running = True start_motor() stop_time = time.time() + 5 if stop_time is not None and time.time() > stop_time: running = False stop_motor() stop_time = None```` I think that demonstrates the logic I would use, although there is clearly some code redundancy there that could be tidied up.
Dict of dict issue in django I am experience strange issue while resulting the dict of dict in django template ````data = [{'hrs': 9 0 'ld': you'pname' 'dn': you'TS' 'ist': you'TS' 'act': you'OF 'date': datetime date(2011 9 19) 'id': 1119556} {'hrs': 9 0 'ld': you'pname' 'dn': you'TS' 'ist': you'TS' 'act': you'Ti' 'date': datetime date(2011 9 21) 'id': 1119558} {'hrs': 2 5 'ld': you'pname' 'dn': you'TS' 'ist': you'dmin' 'act': you'POC' 'date': datetime date(2011 9 20) 'id': 1119577} {'hrs': 0 5 'ld': you'pname' 'dn': you'SMgr' 'ist': you'SMgr' 'act': you'PL' 'date': datetime date(2011 9 20) 'id': 1119578} {'hrs': 2 0 'ld': you'pname' 'dn': you'SMgr' 'ist': you'TS' 'act': you'Mting' 'date': datetime date(2011 9 20) 'id': 1119579} {'hrs': 8 0 'ld': you'sname' 'dn': you'holiday' 'ist': you'holiday' 'act': you'PL' 'date': datetime date(2011 9 19) 'id': 1119455} {'hrs': 8 0 'ld': you'sname' 'dn': you'holiday' 'ist': you'holiday' 'act': you'PL' 'date': datetime date(2011 9 21) 'id': 1119457} {'hrs': 1 0 'ld': you'sname' 'dn': you'TS' 'ist': you'TS' 'act': you'OF 'date': datetime date(2011 9 20) 'id': 1119566} {'hrs': 7 0 'ld': you'sname' 'dn': you'PD' 'ist': you'PD' 'act': you'LOP' 'date': datetime date(2011 9 20) 'id': 1119567}] ```` Using this list of dict wrote below logic ````results = collections defaultdict(dict) for fetch in data: user = fetch['ld'] get_id = fetch['id'] adata = '%s/%s/%s/%s' % (fetch['dn'] fetch['ist'] fetch['act'] fetch['hrs']) row = results[user] row['user'] = user dt = str(fetch['date']) row[dt] = adata ```` this give me below output <strong>Results :</strong> ```` {you'sname': {'2011-09-21': you'holiday/holiday/PL/8 0' '2011-09-20': you'PD/PD/LOP/7 0' 'user': you'sname' '2011-09-19': you'holiday/holiday/PL/8 0'} you'pname': {'2011-09-21': you'TS/TS/Ti/9 0' '2011-09-20': you'SMgr/TS/Mting/2 0' 'user': you'pname' '2011-09-19': you'TS/TS/O/9 0'}} ```` <strong>In General i need to get below output but dict of dict print only date key value instead of of multiple key values on same date</strong> <strong>Expected Output :</strong> ````{you'sname': {'2011-09-21': you'holiday/holiday/PL/8 0' '2011-09-20': you'TS/TS/O/1 0' '2011-09-20': you'PD/PD/LOP/7 0' 'user': you'sname' '2011-09-19': you'holiday/holiday/PL/8 0'} you'pname': {'2011-09-21': you'TS/TS/Trickle/9 0' '2011-09-20': you'TS/dmin/POC/2 5' '2011-09-20': you'SMgr/Smgr/PL/0 5' '2011-09-20': you'SMgr/TS/Mting/2 0' 'user': you'pname' '2011-09-19': you'TechSupport/TechSupport/O/9 0'}} ```` one i am using below below template tag for to hash the table <strong>Template</strong> : ````@register filter def hash(object attr): gen_context = { 'object' : object } try: value = template Variable('object %s' % attr) resolve(gen_context) except template VariableDoesNotExist: value = ' ' return value ```` <strong>HTML :</strong> ````<table&gt; <thead&gt; <th&gt;S No</th&gt; <th&gt;Name</th&gt; {% for dates in week_dates %} <th&gt;{{dates}}</th&gt; {% endfor %} </thead&gt; <tr&gt; {% for fetch in data items %} <tr&gt; <td&gt;{{ forloop counter }}</td&gt; <td&gt;{{ fetch 0 }}</td&gt; {% for dates in week_dates %} <td&gt;{{ fetch 1|hash:dates }}</td&gt; {% endfor %} </tr&gt; {% endfor %} </table&gt; ```` Any help really appreciate it Please let me know if you need any more information
As is explained in the tutorial <strong>AT LEAST ONCE</strong>, dictionary keys are <strong>unique</strong>. If you want to have multiple values for a key then you will have to fake it with a sequence: ````{'foo': ['bar', 42]}````
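Applied to the grouping code in the question, one way to keep every entry for the same date is to store a list per date and append to it. This is only a sketch of that idea, reusing the variable names from the question (the template would then need to loop over each date's list):

````
import collections

# `data` is the list of dicts from the question.
results = collections.defaultdict(dict)
for fetch in data:
    user = fetch['ld']
    adata = '%s/%s/%s/%s' % (fetch['dn'], fetch['ist'], fetch['act'], fetch['hrs'])
    row = results[user]
    row['user'] = user
    dt = str(fetch['date'])
    # Keep a list of values per date instead of overwriting the key.
    row.setdefault(dt, []).append(adata)
````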
ArcPy/Python: How can I add a word into a string in front of a specific word of that string? I am working with ArcPy to edit large shapefiles. I am currently using the `UpdateCursor` function to find and update a type attribute based on the feature's name - in this case, changing buoys into lighted buoys based on the words light or lighted being in the feature's name. The basic block I am using is as follows: ````cursor = arcpy.UpdateCursor(navp, """TYPEC = 'LATERAL BUOY' AND NAME LIKE '%Lighted%' OR NAME LIKE '%Light%'""") for row in cursor: row.setValue('TYPEC', "LATERAL LIGHTED BUOY") cursor.updateRow(row)```` The issue is I would have to do this for every type of buoy in my data; I have over a dozen colors, lateral and non-lateral buoys to deal with. I know I could repeat this block for each one of my cases and get a workable script, but that would be messy and I am trying to learn how to make my code more elegant and efficient. Are there any ways I could simply drop "LIGHTED" in front of `BUOY` for all features selected by ````cursor = arcpy.UpdateCursor(navp, """TYPEC LIKE '%BUOY%' AND NAME LIKE '%Lighted%' OR NAME LIKE '%Light%'""")````? Thanks for your help
So I solved my own issue - I just had to review the built-in string formatting functions. I used replace() inside my UpdateCursor for loop to find and then replace the appropriate substrings. Here is my final code block: ````cursor = arcpy.UpdateCursor(navp, """TYPEC LIKE '%BUOY%' AND NAME LIKE '%Light%'""") for row in cursor: row.setValue('TYPEC', row.getValue("TYPEC").replace("BUOY", "LIGHTED BUOY")) cursor.updateRow(row) del cursor```` Short and sweet. Thanks for looking!
During what war was uranium-235 first used to create nuclear weapons?
World War II
Furniture and fittings were brought over from which house?
Carlton House
Install node.js using alternate install of PYTHON. My server is running CentOS 5.8 and uses PYTHON 2.4. I installed an alternate version of PYTHON, 2.7, to use to install node.js. I have followed several different tutorials to get to this point and need a little help to finish. I am in the node directory and used this command for configure: ````/usr/bin/env python2.7 ./configure```` When I ran the `make` command there was an error: ````File "../../tools/js2c.py", line 387 except Error as e: ^ SyntaxError: invalid syntax make[1]: *** [/root/node/out/Release/obj/gen/libraries.cc] Error 1 make[1]: Leaving directory `/root/node/out' make: *** [node] Error 2```` I believe that is because it is using the 2.4 version of python. How can I force the `make` and `make install` commands to use my alternate install of python 2.7? I am a complete beginner to linux commands.
I accomplished this by doing the following. Full process: - `yum update -y` - `yum -y groupinstall "Development Tools"` - installed git, following this (<a href="http://stackoverflow.com/a/8327476/888640">http://stackoverflow.com/a/8327476/888640</a>) - Installed alternate version of PYTHON - `wget http://www.python.org/ftp/python/2.7.2/Python-2.7.2.tgz` - `tar -xvzf Python-2.7.2.tgz` - `cd Python-2.7.2` - `./configure` - `make altinstall` - `cd` - use the correct python version - `mv /usr/bin/python /usr/bin/python.old` - `ln -s /usr/local/bin/python2.7 /usr/bin/python` - install node - `cd node` - `./configure` - `make` - `make install` - change back to normal version of python - `mv /usr/bin/python.old /usr/bin/python`
Python Array Reshaping Issue to Array with Shape (None, 192). I have this error and I am not sure how to reshape where there is a dimension with `None`: ````Exception: Error when checking : expected input_1 to have shape (None, 192) but got array with shape (192, 1)```` How do I reshape an array to (None, 192)? I have the array `accuracy` with shape `(12, 16)` and I did `accuracy.reshape(-1)`, which gives `(192,)`. However, this is not `(None, 192)`.
This is an error in the library code because `(None, 192)` is an invalid shape for a numpy array - the shape must be a tuple of integers. To investigate any further we will have to see the traceback and/or the code which is raising that exception.
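For reference, if the message comes from a framework that uses `None` as a free batch dimension (Keras-style input shapes read this way), reshaping the `(12, 16)` array from the question into a single row of 192 values would look like this - a sketch only; whether the caller then accepts it depends on what is actually raising the exception:

````
import numpy as np

accuracy = np.zeros((12, 16))      # stand-in for the real array
batch = accuracy.reshape(1, -1)    # shape (1, 192): one sample of 192 values
print(batch.shape)
````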
Django update one field using ModelForm. <strong>How do I update just one field in an instance using ModelForm if the POST request has only that one field as a parameter</strong>? ModelForm tries to override the fields that were not passed in the POST request with None, leading to loss of data. I have a model with 25 fields, say: ````class C(models.Model): a = models.CharField(max_length=128) b = models.CharField(max_length=128) x = models.IntegerField()```` and I have a desktop application that does POST requests in order to edit an instance of C through an exposed api method in views.py. In the api method I am using ModelForm to validate the fields as follows: ````form = CModelForm(request.POST, instance=c_instance) if form.is_valid(): form.save()```` When doing save(), django either complains that some other field cannot be null or (if all fields are optional) overwrites them with None. Does somebody know how to manage it? I would do all checks manually and update manually, but the model has a freakishly long list of fields.
You could use a subset of the fields in your ModelForm by specifying those fields as follows: ````class PartialAuthorForm(ModelForm): class Meta: model = Author fields = ('name', 'title')```` From the docs: <blockquote> If you specify fields or exclude when creating a form with ModelForm, then the fields that are not in the resulting form will not be set by the form's save() method </blockquote> <a href="https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#using-a-subset-of-fields-on-the-form">https://docs.djangoproject.com/en/dev/topics/forms/modelforms/#using-a-subset-of-fields-on-the-form</a>
Redirecting to an external domain in Flask. I am writing a very simple flask app (a URL shortener) that should be able to redirect certain requests to other arbitrary domains. However, I am running into problems with the redirection. This stripped-down version, for example, does not work: ````from app import app, db from flask import abort, redirect @app.route('/') def index(): return "Hello world" @app.route('/favicon.ico') def favicon(): abort(404) @app.route('/<slug>') def redirect(slug): return redirect('http://google.com/')```` Perhaps naively, I expected this to redirect to google.com, but instead the redirect seems to get "captured" by Flask and it tries to route the redirected URL back through the redirect handler (e.g. `redirect(slug="http://google.com/")`) until it eats all the stack space through recursion and errors out. I cannot seem to figure out why this would happen or how to work around it, but I would really appreciate a pointer in the right direction.
The problem is in this function: ````@app.route('/<slug>') def redirect(slug): return redirect('http://google.com/')```` You named the function `redirect()`, so in the scope of the function, when you call `return redirect(...)` this is interpreted as a recursive call, because the view function shadows Flask's function of the same name. And the URL that you pass is mapped to the `slug` argument in the recursive call. Change the name of the view function from `redirect()` to something else and your code will work just fine.
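A minimal corrected sketch of that view (the new function name `redirect_to_target` is arbitrary; `app` is the Flask app from the question):

````
from flask import redirect

@app.route('/<slug>')
def redirect_to_target(slug):
    # Flask's redirect() is no longer shadowed by the view name.
    return redirect('http://google.com/')
````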
How to efficiently find all fuzzy matches between a set of terms and a list of sentences? I have a list of sentences (e.g. "This is an example sentence") and a glossary of terms (e.g. "sentence", "example sentence") and need to find all the terms that match the sentence, with a cutoff on some Levenshtein ratio. How can I do it fast enough? Splitting sentences using FTS to find words that appear in terms and filtering terms by ratio works, but it is quite slow. Right now I am using sphinxsearch and python-Levenshtein; are there better tools? Would the reverse search - FTS matching terms in the sentence - be faster?
If speed is a real issue, and if your glossary of terms is not going to be updated often compared to the number of searches you want to do, you could look into something like a <a href="https://en.wikipedia.org/wiki/Levenshtein_automaton" rel="nofollow">Levenshtein Automaton</a>. I do not know of any python libraries that support it, but if you really need it you could implement it yourself. Finding all possible paths will require some dynamic programming. If you just need to get it done, just loop over the glossary and test each one against each word in the string. That should give you an answer in polynomial time. If you are on a multicore processor, you might get some speedup by doing it <a href="https://docs.python.org/dev/library/multiprocessing.html" rel="nofollow">in parallel</a>.
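A sketch of that brute-force loop, using the `python-Levenshtein` package already mentioned in the question (the 0.8 cutoff is an arbitrary assumption):

````
import Levenshtein

def fuzzy_matches(sentence, glossary, cutoff=0.8):
    """Return (term, word, ratio) triples whose ratio meets the cutoff."""
    matches = []
    for word in sentence.split():
        for term in glossary:
            ratio = Levenshtein.ratio(word.lower(), term.lower())
            if ratio >= cutoff:
                matches.append((term, word, ratio))
    return matches

print(fuzzy_matches("This is an example sentense", ["sentence", "example sentence"]))
````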
Is it better to save the length of a list that I use several times? I know about inlining, and from what I checked it is not done by Python's compiler. My question is: is there any optimization in Python's compiler which transforms ````print myList.__len__() for i in range(0, myList.__len__()): print i + myList.__len__()```` into ````l = myList.__len__() print l for i in range(0, l): print i + l```` So is it done by the compiler? If it is not: is it worth doing it myself? Bonus question (not so related): I like to have a lot of functions (better for readability IMHO); as there is no inlining in Python, is this something to avoid (lots of functions)?
No there is not You can check what Python does by compiling the code to byte-code using the `dis` module: ````&gt;&gt;&gt; def test(): print myList __len__() for i in range(0 myList __len__()): print i myList __len__() &gt;&gt;&gt; import dis &gt;&gt;&gt; dis dis(test) 2 0 LOAD_GLOBAL 0 (myList) 3 LOAD_ATTR 1 (__len__) 6 CALL_FUNCTION 0 9 PRINT_ITEM 10 PRINT_NEWLINE 3 11 SETUP_LOOP 44 (to 58) 14 LOAD_GLOBAL 2 (range) 17 LOAD_CONST 1 (0) 20 LOAD_GLOBAL 0 (myList) 23 LOAD_ATTR 1 (__len__) 26 CALL_FUNCTION 0 29 CALL_FUNCTION 2 32 GET_ITER &gt;&gt; 33 FOR_ITER 21 (to 57) 36 STORE_FAST 0 (i) 4 39 LOAD_FAST 0 (i) 42 LOAD_GLOBAL 0 (myList) 45 LOAD_ATTR 1 (__len__) 48 CALL_FUNCTION 0 51 BINARY_ADD 52 PRINT_ITEM 53 PRINT_NEWLINE 54 JUMP_ABSOLUTE 33 &gt;&gt; 57 POP_BLOCK &gt;&gt; 58 LOAD_CONST 0 (None) 61 RETURN_VALUE ```` As you can see the `__len__` attribute is looked up and called each time Python cannot know what a given method will return between calls the `__len__` method is no exception If python were to try to optimize that by <them>assuming</them> the value returned would be the same between calls you would run into countless different problems and we have not even tried to use multi-threading yet Note that you would be much better off using `len(myList)` and not call the `__len__()` hook directly: ````print len(myList) for i in xrange(len(myList): print i len(myList) ````
How to compare inner for iterator with outer for iterator. I have the template code set up as: ````{% for f in fs %} { name: '{{f.fname}}', data: [{% for items in frequencydata %} {% if {{forloop.parentloop.counter}}=={{forloop.counter}} %} {% for item in items %} {{ item }}, {% endfor %} {% endif %} {% endfor %} ] }, {% endfor %}```` I am getting a "Could not parse" error in ````{% if {{forloop.parentloop.counter}}=={{forloop.counter}} %}```` What is the correct method to do this? More info: I have two lists: ````fs = ['a', 'b', 'c'] frequencydata = [[[1990, 5], [1991, 6]], [[1990, 15], [1991, 16]], [[1990, 25], [1991, 26]]]```` The data corresponds between the lists based on position and I need to display them together. Thanks
Do not include the curly braces when using a variable inside a template tag: ````{% if forloop.parentloop.counter == forloop.counter %}```` Including `{{f.fname}}` does not make sense if fs is the list `['a', 'b', 'c']`; those strings do not have an `fname` attribute. Finally, including `{{ item }},` means that the final item in the array will have a trailing comma, which is invalid. You might find it easier to create the object in python, use <a href="https://docs.python.org/2/library/json.html" rel="nofollow">`json.dumps()`</a> to convert it to json, and pass it to your template.
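A sketch of that last suggestion, building the structure in the view and serializing it; the positional pairing of `fs` with `frequencydata` follows the question, and the variable names are only illustrative:

````
import json

fs = ['a', 'b', 'c']
frequencydata = [[[1990, 5], [1991, 6]],
                 [[1990, 15], [1991, 16]],
                 [[1990, 25], [1991, 26]]]

# Pair each name with its data by position.
series = [{'name': name, 'data': data}
          for name, data in zip(fs, frequencydata)]

chart_json = json.dumps(series)   # pass this string to the template context
````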
Slice pandas dataframe to get only one day's data. I am trying to select one specific day's data from a Pandas DataFrame: ````dfx = dfx.set_index('Time') df9 = dfx.between_time('2013-07-17 00:00', '2013-07-17 23:59')```` `between_time` just checks the time period. Is there a similar method that I could use to get data during the period of `2013-07-17 00:00 to 2013-07-17 23:59`?
Without setting the index, it worked fine as follows: ````df9 = df[(df['Time'] > '2013-07-17 00:00') & (df['Time'] < '2013-07-17 23:59')]````
Who is an example of someone that states that affirmative action is another form of discrimination?
Ward Connerly
When was the office of Priest Vicar created?
1970s
Order columns of DataFrame according to values. I have the following input: ````col1 col2 col3 1 4 0 0 12 2 2 12 4 3 2 1```` I want to sort the DataFrame according to the values in the columns, e.g. sorting it primarily by `df[df==0].count()` and secondarily by `df.sum()` would produce the output: ````col2 col3 col1 4 0 1 12 2 0 12 4 2 2 1 3```` `pd.DataFrame.sort()` takes a column object as argument, which does not apply here, so how can I achieve this?
Firstly, I think your zero count is increasing from right to left whereas your sum is decreasing, so I think you need to clarify that. You can get the number of zero rows simply by `(df == 0).sum()`. To sort by a single aggregate you can do something like: ````col_order = (df == 0).sum().sort(inplace=False).index df[col_order]```` This sorts the series of aggregates by its values, and the resulting index is the columns of `df` in the order you want. To sort on two sets of values would be more awkward/tricky, but you could do something like: ````aggs = pd.DataFrame({'zero_count': (df == 0).sum(), 'sum': df.sum()}) col_order = aggs.sort(['zero_count', 'sum'], inplace=False).index df[col_order]```` Note that the `sort` method takes an `ascending` parameter which takes either a Boolean or a list of Booleans of equal length to the number of columns you are sorting on, e.g. ````df.sort(['a', 'b'], ascending=[True, False])````
PyMongo group by multiple keys. With PyMongo, group by one key seems to be ok: ````results = collection.group(key={"scan_status": 0}, condition={'date': {'$gte': startdate}}, initial={"count": 0}, reduce=reducer)```` results: ````{u'count': 215339.0, u'scan_status': u'PENDING'} {u'count': 617263.0, u'scan_status': u'DONE'}```` but when I try to group by multiple keys I get an exception: ````results = collection.group(key={"scan_status": 0, "date": 0}, condition={'date': {'$gte': startdate}}, initial={"count": 0}, reduce=reducer)```` How can I group by multiple fields correctly?
If you are trying to count over two keys then while it is possible using ` group()` your better option is via <a href="http://docs mongodb org/manual/reference/method/db collection aggregate/" rel="nofollow"><strong>` aggregate()`</strong></a> This uses "native code operators" and not the JavaScript interpreted code as required by ` group()` to do the same basic "grouping" action as you are trying to achieve Particularly here is the <a href="http://docs mongodb org/manual/reference/operator/aggregation/group/" rel="nofollow"><strong>`$group`</strong></a> pipeline operator: ````result = collection aggregate([ # Matchn the documents possible { "$match": { "date": { "$gte": startdate } } } # Group the documents and "count" via $sum on the values { "$group": { "_id": { "scan_status": "$scan_status" "date": "$date" } "count": { "$sum": 1 } }} ]) ```` In fact you probably want something that reduces the "date" into a distinct period As in: ````result = collection aggregate([ # Matchn the documents possible { "$match": { "date": { "$gte": startdate } } } # Group the documents and "count" via $sum on the values { "$group": { "_id": { "scan_status": "$scan_status" "date": { "year": { "$year": "$date" } "month": { "$month" "$date" } "day": { "$dayOfMonth": "$date" } } } "count": { "$sum": 1 } }} ]) ```` Using the <a href="http://docs mongodb org/manual/reference/operator/aggregation-date/" rel="nofollow"><strong>Date Aggregation Operators</strong></a> as shown here Or perhaps with basic "date math": ````import datetime from datetime import date result = collection aggregate([ # Matchn the documents possible { "$match": { "date": { "$gte": startdate } } } # Group the documents and "count" via $sum on the values # use "epoch" "1970-01-01" as a base to convert to integer { "$group": { "_id": { "scan_status": "$scan_status" "date": { "$subtract": [ { "$subtract": [ "$date" date fromtimestamp(0) ] } { "$mod": [ { "$subtract": [ "$date" date fromtimestamp(0) ] } 1000 * 60 * 60 * 24 ]} ] } } "count": { "$sum": 1 } }} ]) ```` Which will return integer values from "epoch" time instead of a compisite value object But all of these options are better than ` group()` as they use native coded routines and perform their actions much faster than the JavaScript code you need to supply otherwise
How to handle call to __setattr__ from __init__? I have written a class that will be used to store parameters in a convenient way for pickling. It overloads `__setattr__` for convenient access. It also uses a list to remember the order in which attributes were added, so that the iteration order is predictable and constant. Here it is: ````class Parameters(object): def __init__(self): self._paramOrder = [] def __setattr__(self, name, value): self._paramOrder.append(name) object.__setattr__(self, name, value) def __delattr__(self, name): self._paramOrder.remove(name) object.__delattr__(self, name) def __iter__(self): for name in self._paramOrder: yield self.name def iteritems(self): for name in self._paramOrder: yield name, self.name```` The problem is that `__init__` calls my overloaded `__setattr__` in order to add the `_paramOrder` to the instance dictionary. Is there a way to handle this without adding a special case to `__setattr__`?
Yes, have it call `super(Parameters, self).__setattr__()` instead: ````class Parameters(object): def __init__(self): super(Parameters, self).__setattr__('paramOrder', []) # etc```` Or am I missing something? Another alternative is to just go straight to `__dict__`: ````class Parameters(object): def __init__(self): self.__dict__['paramOrder'] = [] # etc```` This should work because you are not overriding `__getattr__`, so you can read it without anything getting in the way.
In what year were Zucchet's seven out of ten counts acquitted?
null
How to select a node whose subelement includes some text in Python? I have an xml file like this: ````<data> <entry> <word>ABC</word> (this) </entry> <entry> <word>ABC</word> [not this] </entry> </data>```` I want to select nodes whose descendant includes "(" and move (.*) to the text of `<entry>`. That is: ````<data> <entry> (this) <word>ABC</word> </entry> <entry> <word>ABC</word> [not this] </entry> </data>```` I am using lxml, and I tried: ````import lxml.etree as ET data = ET.parse('sample.xml') for entry in data.iter('entry'): A = entry.xpath('.//*[text() = ".*(.*?)"]')```` But it does not work. "(" can appear as a tail of a node or as a text of a node.
if `(` in tail and move it to parent's text then ````In [67]: myxml="""<data&gt; : <entry&gt; : <word&gt;ABC</word&gt; (this) : </entry&gt; : <entry&gt; : <word&gt;ABC</word&gt; [not this] : </entry&gt; : </data&gt;""" In [68]: import StringIO re lxml etree as ET In [69]: f=StringIO StringIO(myxml) In [70]: data=ET parse(f) In [71]: print ET tostring(data) <data&gt; <entry&gt; <word&gt;ABC</word&gt; (this) </entry&gt; <entry&gt; <word&gt;ABC</word&gt; [not this] </entry&gt; </data&gt; In [72]: for elem in data findall("/entry/"): : if re match(" *\( *\) *" elem tail): : elem getparent() text=elem tail : elem tail=None : In [73]: print ET tostring(data) <data&gt; <entry&gt; (this) <word&gt;ABC</word&gt;</entry&gt; <entry&gt; <word&gt;ABC</word&gt; [not this] </entry&gt; </data&gt; ````
insert a tkinter progress bar in a List Comprehension in Python. Normally I use a simple loop to drive an indeterminate progress bar in tkinter, for example: ````self.pbar_ind = ttk.Progressbar(self, orient="horizontal", length=300, mode="indeterminate") new_point_in_list = list() for point in point_in_list: self.pbar_ind.step(1) self.update() if point > 2: new_point_in_list.append(point)```` Now I am using a List Comprehension to speed up my computation: ````new_point_in_list = [point for point in point_in_list if point > 2]```` I wish to know if it is possible to insert the Tkinter progress bar in the List Comprehension.
If you are concerned about performance, remove the call to `self.update()`. It will slow your loop down by up to <em>three orders of magnitude</em>. At the very least you should call it only every 1,000 iterations or so. In a quick test I can do 10,000 simple calculations that result in 1% of the values being appended to a list in about 0.0016 seconds. When I add a call to `update` in the loop, the time expands to 1.0148 seconds. You said in a comment you have 80 million rows to iterate over. My same code can process 80 million calculations in <strong>12 seconds</strong>, versus over <strong>2 hours</strong> when I add in a call to update. Converting your code to using a list comprehension will have a negligible effect compared to removing or reducing the calls to update.
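One way to act on that advice while keeping a responsive bar is to step and update only every N iterations; a sketch of the loop form, reusing the widgets and names from the question (N=1000 is an arbitrary choice, and the side effect cannot cleanly be expressed in a list comprehension anyway):

````
new_point_in_list = []
for i, point in enumerate(point_in_list):
    if i % 1000 == 0:           # throttle GUI work to every 1000th iteration
        self.pbar_ind.step(1)
        self.update()
    if point > 2:
        new_point_in_list.append(point)
````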
Updating Score Pygame Pong For whatever reason the score display in my pong game is not updating It just keeps saying "0" I checked to see if the score was actually being updated in the logic of the game and it is (printed it out to console) I have the text being "blitted" every time a new display is drawn so can anyone tell me why it will not update? ````import pygame import random pygame init() # Create colors BLACK = (0 0 0) WHITE = (255 255 255) # Create screen and set screen caption size = (700 500) screen = pygame display set_mode(size) pygame display set_caption("Pong") # Loop until user clicks close button done = False # Used to manage how fast the screen is updated clock = pygame time Clock() # Create player class class Player(): # Initialize player paddles def __init__(self x y color height width): self x = x self y = y self color = color self height = height self width = width self y_speed = 0 self score = 0 self font = pygame font SysFont('Calibri' 24 True False) self display_score = self font render("Score: %d " % (self score) True WHITE) # Updates with new position of paddle every frame def draw(self y): pygame draw rect(screen self color [self x y self height self width]) # Keeps paddle from going off screen def keepOnScreen(self): if self y < 0: self y = 0 elif self y &gt; 410: self y = 410 # Create Ball class class Ball(): # Initialize ball in the middle of the screen with no movement def __init__(self color height width): self x = 325 self y = random randrange(150 350) self color = color self height = height self width = width self y_speed = 0 self x_speed = 0 # Updates new position of ball every frame def draw(self x y): pygame draw rect(screen self color [x y self height self width]) # Create instances of both players and ball player1 = Player(50 100 WHITE 25 90) player2 = Player(625 100 WHITE 25 90) ball = Ball(WHITE 20 20) # --- Main Program Loop --- while not done: # --- Main event loop for event in pygame event get(): # User did something if event type == pygame QUIT: # If user clicked close done = True # We are done so we exit this loop if event type == pygame KEYDOWN: # Players utilize keyboard to move paddles if event key == pygame K_w: player1 y_speed = -6 if event key == pygame K_UP: player2 y_speed = -6 if event key == pygame K_s: player1 y_speed = 6 if event key == pygame K_DOWN: player2 y_speed = 6 if event key == pygame K_SPACE: # Starts the ball movement ball x_speed = 3 * random randrange(-1 1 2) ball y_speed = 3 * random randrange(-1 1 2) if event type == pygame KEYUP: if event key == pygame K_w: player1 y_speed = 0 if event key == pygame K_UP: player2 y_speed = 0 if event key == pygame K_s: player1 y_speed = 0 if event key == pygame K_DOWN: player2 y_speed = 0 # Calculate the movement of the players player1 y = player1 y_speed player2 y = player2 y_speed # Prevents paddles from going off-screen player1 keepOnScreen() player2 keepOnScreen() # Checks to see if ball has made contact with paddle then reverses direction of the ball # Had to give a 4 pixel buffer since the ball will not always exactly hit the same part of paddle in x direction if ball x <= player1 x 27 and (ball x &gt;= player1 x 23): if ball y &gt;= player1 y and (ball y <= player1 y 100): ball x_speed *= -1 if ball x &gt;= player2 x - 27 and (ball x <= player2 x - 23): if ball y &gt;= player2 y and (ball y <= player2 y 100): ball x_speed *= -1 # Checks to see if ball has made contact with top or bottom of screen if ball y <= 0 or ball y &gt;= 480: ball y_speed *= -1 # Calculates movement of 
the ball ball x = ball x_speed ball y = ball y_speed # Updates score if ball x < 0: player2 score = 1 ball __init__(WHITE 20 20) if ball x &gt; 700: player1 score = 1 ball __init__(WHITE 20 20) # Set background screen fill(BLACK) # Draw players and ball on screen player1 draw(player1 y) player2 draw(player2 y) ball draw(ball x ball y) screen blit(player1 display_score [0 0]) screen blit(player2 display_score [615 0]) # Update display pygame display flip() # Limit to 60 frames per second clock tick(60) # Close the window and quit pygame quit() ````
It looks like the problem is you are setting each player's `display_score` only in their `__init__` function: ````# Initialize player paddles def __init__(self, x, y, color, height, width): self.score = 0 self.display_score = self.font.render("Score: %d " % (self.score), True, WHITE)```` Because of the way you initialize the variable, changing the value of `player.score` does not change the value of `player.display_score`. <strong>Solution 1</strong> You could change the value of `display_score` when you change the player scores, which is easily done through a function call: ````def player_scores(player, ball): player.score += 1 player.display_score = player.font.render("Score: %d " % (player.score), True, WHITE) ball.__init__(WHITE, 20, 20)```` Then in your game loop: ````# Updates score if ball.x < 0: player_scores(player2, ball) if ball.x > 700: player_scores(player1, ball)```` <strong>Solution 2</strong> You could render the score text when you display it, rather than creating it on the players. In your game loop: ````screen.blit(player1.font.render("Score: %d " % (player1.score), True, WHITE), [0, 0]) screen.blit(player2.font.render("Score: %d " % (player2.score), True, WHITE), [615, 0])```` This makes the `display_score` variable unnecessary.
Converting scientific notation to human readable float How can I programmatically take a float like this: ````1 87491348956e-28 ```` and convert it into a float (likely a string) like this: ````0 000000000000000000000000000187491348956 ```` (I think that is the right number of zeros)
count the number of digits after the decimal point in the mantissa and add the absolute value of the exponent to get the output precision (11 + 28 = 39 here): ````'%.39f' % 1 87491348956e-28 ````
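A minimal sketch of that calculation, assuming the number is also available as a string so the mantissa digits can be counted (variable names are mine):

````
text = "1.87491348956e-28"                 # assumed string form of the input
mantissa, _, exponent = text.partition("e")
digits = len(mantissa.split(".")[1])       # 11 digits after the decimal point
precision = digits + abs(int(exponent))    # 11 + 28 = 39
print("%.*f" % (precision, float(text)))
# 0.000000000000000000000000000187491348956 (up to float rounding in the last digits)
````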
How to get the mean of a subset of rows after using groupby? I want to get the average of a particular subset of rows in one particular column in my dataframe I can use ````df['C'] iloc[2:9] mean() ```` to get the mean of just the particular rows I want from my original Dataframe but my problem is that I want to perform this operation after using the groupby operation I am building on ````df groupby(["A" "B"])['C'] mean() ```` whereby there are 11 values returned in 'C' once I group by columns A and B and I get the average of those 11 values I actually only want to get the average of the 3rd through 9th values though so ideally what I would want to do is ````df groupby(["A" "B"])['C'] iloc[2:9] mean() ```` This would return those 11 values from column C for every group of A B and then would find the mean of the 3rd through 9th values but I know I cannot do this The error suggests using the apply method but I cannot seem to figure it out Any help would be appreciated
Try this variant: ````for key grp in df groupby(["A" "B"]): print grp['C'] iloc[2:9] mean() ````
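If you would rather get it back as one grouped result instead of printing per group, the apply route hinted at by the error message would look roughly like this (a sketch, assuming columns A, B and C as in the question):

````
# one mean per (A, B) group, using only the 3rd through 9th values of C
result = df.groupby(["A", "B"])["C"].apply(lambda s: s.iloc[2:9].mean())
print(result)
````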
Where do the least amount of FA cup finals take place?
null
In what year did Captain Juan de Santiago start to pacify Guam?
null
Why Python datetime and JS Date does not match? I have this code that returns UTC offset from given date: ````&gt;&gt;&gt; import datetime &gt;&gt;&gt; import pytz &gt;&gt;&gt; cet = pytz timezone("Europe/Moscow") &gt;&gt;&gt; cet localize(datetime datetime(2000 6 1)) datetime datetime(2000 6 1 0 0 tzinfo=<DstTzInfo 'Europe/Moscow' MSD+4:00:00 DST&gt;) &gt;&gt;&gt; int(cet localize(datetime datetime(2000 6 1)) utcoffset() seconds/60) 240 ```` Ok do it in JS using this code ( <a href="http://jsfiddle net/nvn1fef0/" rel="nofollow">http://jsfiddle net/nvn1fef0/</a> ) ````new Date(2000 5 1) getTimezoneOffset(); // -180 ```` Maybe i doing something wrong? And how i can get `plus-minus` before offset (like in JS result)?
If you print the result for the following - ````print(cet localize(datetime datetime(2000 6 1)) utcoffset()) ```` You will notice that it gives a `datetime timedelta()` object which has both days as well as second So for timezones that are `UTC - <something&gt;` this actually gives days as `-1` and then the remaining in seconds Example - ````In [84]: cet = pytz timezone("America/Los_Angeles") In [87]: cet localize(datetime datetime(2000 6 1)) utcoffset() Out[87]: datetime timedelta(-1 61200) ```` To get the info about the actual offset you need to use both `days` as well as `seconds` using a code like (For the above timezone - `America/Los_Angeles`) - ````In [88]: int((cet localize(datetime datetime(2000 6 1)) utcoffset() days*60*60*24 cet localize(datetime datetime(2000 6 1)) utcoffset() seconds)/60) Out[88]: -420 ```` Also I believe when you are doing - `new Date(2000 5 1) getTimezoneOffset();` in javascript I think it is giving you the timezone offset from UTC for today's date rather than the date `2000/05/01` (Because for the date - `2000/05/01` the correct offset is what you are getting from python - `240` ) You may checkout <a href="https://github com/mde/timezone-js" rel="nofollow">`TimezoneJS`</a> for getting the timezone specific as well as date specific offsets etc
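A shorter way to get the signed offset in minutes, reusing cet and datetime from the question and assuming Python 2.7+ where timedelta.total_seconds() exists:

````
offset = cet.localize(datetime.datetime(2000, 6, 1)).utcoffset()
print(int(offset.total_seconds() / 60))   # 240 for Europe/Moscow, -420 for America/Los_Angeles
````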
GUI Label ListBox Button appear in different frame Hi i am trying to create a Python GUI with label textbox button and etc below are my code that is working fine Problem: Beside my Button the rest(label tb listbox) are all displayed in another frame Is there a way to open and display all in just 1 frame? ````from Tkinter import * import webbrowser import tkMessageBox import Tkinter as Tk def actionDirectory(): webbrowser open('C:\AgmPlots') def actionOpenFile(): print "Done" win = Tk Toplevel() #Open up connection and declare button and label frame = Tk Frame(master=win) grid(row=1 column=1) Lb1 = Listbox(win) Lb1 insert(1 "Python") Lb1 insert(2 "Perl") Lb1 insert(3 "C") Lb1 insert(4 "PHP") Lb1 insert(5 "JSP") Lb1 insert(6 "Ruby") Lb1 pack() button = Tk Button(master=frame text='Open Folder' command=actionDirectory) button1 = Tk Button(master=frame text='Open Text' command=actionOpenFile) button pack() button1 pack() win mainloop() ````
Sure The only reason you have an extra frame is because you explicitly created one Just do not do that Get rid of the `frame = …` line and you will get rid of the other window Use `master=win` instead of `master=frame` to put widgets into the main window instead of into the other frame Or if you want to put a frame inside the top-level window do not create a new Toplevel and put some things inside that but others inside the default one
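One way to read that, as a minimal sketch: a single root window with every widget parented to it (actionDirectory and actionOpenFile are assumed to be the functions from the question):

````
import Tkinter as Tk

root = Tk.Tk()                      # the one and only window

Lb1 = Tk.Listbox(master=root)
for i, lang in enumerate(("Python", "Perl", "C", "PHP", "JSP", "Ruby"), 1):
    Lb1.insert(i, lang)
Lb1.pack()

button = Tk.Button(master=root, text='Open Folder', command=actionDirectory)
button1 = Tk.Button(master=root, text='Open Text', command=actionOpenFile)
button.pack()
button1.pack()

root.mainloop()
````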
Parque Batlle is named in honor of who?
José Batlle y Ordóñez
When are property validations run in Google App Engine (GAE)? So I was reading <a href="http://code google com/appengine/articles/extending%5Fmodels html" rel="nofollow">the following</a> documentation on defining your own property types in GAE I noticed that I could also include a validate() method when extending a new Property This validate method will be called "when an assignment is made to a property to make sure that it is compatible with your assigned attributes" Fair enough but when exactly is that? My question is when exactly is this validate method called? Specifically is it called before or after it is put? If I create this entity in a transaction is validate called within the transaction or before the transaction? I am aware that optimally every Property should be "self contained" or at most it should only deal with the state of the entity is resides in But what would happen if you performed a Query in the validate method? Would it blow up if you did a Query within validate that was in a different entity group than your current transactions entity group?
Before put and during the transaction respectively (it may abort the transaction if validation fails of course) "When an assignment is made" to a property of your entity is when you write `theentity theproperty = somevalue` (or when you perform it implicitly) I believe that queries of unrelated entities during a transaction (in validate or otherwise) are non-transactional (and thus very iffy practice) but not forbidden -- but on this last point I am not sure
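A small sketch of where that assignment-time hook fires, using the old db API from the linked article (the property and model names here are made up):

````
from google.appengine.ext import db

class NonNegativeInt(db.IntegerProperty):
    def validate(self, value):
        value = super(NonNegativeInt, self).validate(value)
        if value is not None and value < 0:
            raise db.BadValueError('must be non-negative')
        return value

class Counter(db.Model):
    count = NonNegativeInt()

c = Counter()
c.count = -1   # validate() runs right here, before any put() or transaction
````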
Who was the founder of the Methodist movement?
John Wesley
'set' object has no attribute 'items' while iterating in python I have a python script which compares 2 scan reports and the finds what changed in the 2 scan results I have written a method which compares the port for each host with the old host in previous scan result I am using `collections defaultdict` just to have everything clean so that i do not create a empty set in case a host is not found in the new scan The method comparing port change looks like: ````def comp_ports(self filename): try: f = open(filename) self prev_report = pickle load(f) # NmapReport self old_port_dict = collections defaultdict(set) for s in self prev_report hosts: for x in s get_open_ports(): self old_port_dict[s address] add(x) self new_port_dict = collections defaultdict(set) for s in self report hosts: for x in s get_open_ports(): self new_port_dict[s address] add(x) hosts = sorted(set(self old_port_dict) | set(self new_port_dict)) scan_same = dict() scan_new = dict() scan_del = dict() prev_set = set(self prev_report hosts) new_set = set(self report hosts) scan_same = prev_set &amp; new_set scan_new = new_set - prev_set scan_del = prev_set - new_set print() print('-' * 10 'Same') for host ports in scan_same items(): print(host ':') for port in ports: print(':::' port[0] '/' port[1]) print() print('*' * 10 'Added') for host ports in scan_new() items(): print(host ':') for port in ports: print(':::' port[0] '/' port[1]) print() print('=' * 10 'Deleted') for host ports in scan_del() items(): print(host ':') for port in ports: print(':::' port[0] '/' port[1]) except Exception as l: print l raise ```` This raises a exception: ````'set' object has no attribute 'items' Traceback (most recent call last): File "portwatch py" line 316 in <module&gt; report comp_ports(config get('system' 'scan_directory') '/nmap-report-old pkl') File "portwatch py" line 159 in comp_ports for host ports in scan_same items(): AttributeError: 'set' object has no attribute 'items' ```` The scan result is being stored in old_scan_result pl and new_scan_result pl ; `prev_report hosts` is the pickle object to get the previous scan result &amp; `report hosts` is the pickle object to get the current scan result i am expecting output as : ````---------- Same 172 16 0 163 : ::: 80 / tcp ::: 22 / tcp 172 16 0 41 : ::: 80 / tcp *********** Added 172 16 0 163 : 172 16 0 41 : ::: 22 / tcp ========== Deleted 172 16 0 163 : 172 16 0 41 : ::: 666 / tcp ````
Basically your question boils down to ````i_think_this_is_a_dict = dict() i_think_this_is_a_dict = set([1 2 3]) &amp; set([3 4 5]) #but now it is a set i_think_this_is_a_dict items() ```` The thing is that after assigning the set intersection to it the variable does not refer to a dict anymore but to a set so it has no items() method
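A sketch of one way to keep the ports per host so that .items() works again, reusing the defaultdicts already built in comp_ports (the dict comprehensions need Python 2.7+):

````
prev_hosts = set(self.old_port_dict)
new_hosts = set(self.new_port_dict)

scan_same = {h: self.new_port_dict[h] for h in prev_hosts & new_hosts}
scan_new = {h: self.new_port_dict[h] for h in new_hosts - prev_hosts}
scan_del = {h: self.old_port_dict[h] for h in prev_hosts - new_hosts}

for host, ports in scan_same.items():        # a real dict now, so .items() exists
    print('%s :' % host)
    for port in ports:
        print('::: %s / %s' % (port[0], port[1]))
````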
Highlight Active Navigation: django 1 4 2 I am trying to set a base django template whose navigation elements will highlight when they are active I have tried the following posts to create a templatetag to append an active value to the markup: <a href="http://ilostmynotes blogspot com/2010/03/django-current-active-page-highlighting html" rel="nofollow">Blog Post</a> <a href="http://stackoverflow com/questions/340888/navigation-in-django">110j's answer to this Stack Overflow Question (not the accepted answer)</a> Both are suggesting using a templatetag to check if user is on selected page then use that to add an active value to the navigation in the template However I get an `AttributeError` saying that: `'str' object has no attribute 'path'` After searching I cannot figure out how to remedy this problem How do I achieve the most de-coupled method of highlighting active navigation on Django 1 4 2 python 2 7? Thank you very much for your input!
Old question but I had the same problem The problem is in the `settings py` file configuration Try adding this after `TEMPLATE_LOADERS`: ````TEMPLATE_CONTEXT_PROCESSORS = ('django core context_processors request' 'django contrib auth context_processors auth' 'django core context_processors static' ) ````
Fastq parser not taking empty sequence (and other edge cases) Python this is a continuation of <a href="http://stackoverflow com/questions/26351451/generator-not-working-to-split-string-by-particular-identifier-python-2">Generator not working to split string by particular identifier Python 2</a> however i modified the code completely and it is not the same format at all this is about edge cases ````Edge Cases: when sequence length is different than number of quality values when there is an empty sequence or entry when the number of lines with quality values is more than one ```` i cannot figure out how to work with the edge cases above If its an empty data file then I still want to output empty strings i am trying with these sequences right here for my input file: (Just a little background IDs are set by @ at beginning of line sequence characters are followed by the lines after until a line with is reached the next lines are going to have quality values (value ~= chr(char) ) this format is terrible and poorly thought out ````@m120204_092117_richard_c100250832550000001523001204251233_s1_p0/422/ccs CTGTTGCGGATTGTTTGGCTATGGCTAAAACCGATGAAGAAAAAGGAAATGCCAAAACCGTTTATAGCGATTGATCCAAGAAATCCAAAATAAAAGGACACAAAACAAACAAAATCAATTGAGTAAAACAGAAAGGCCATCAAGCAAGCGAGTGCTTGATAACTTAGATGACCCTACTGATCAAGAGGCCATAGAGCAATGTTTAGAGGGCTTGAGCGATAGTGAAAGGGCGCTAATTCTAGGAATTCAAACGACAAGCTGATGAAGTGGATCTGATTTATAGCGATCTAAGAAACCGTAAAACCTTTGATAACATGGCGGCTAAAGGTTATCCGTTGTTACCAATGGATTTCAAAAATGGCGGCGATATTGCCACTATTAACCGCTACTAATGTTGATGCGGACAAATAGCTAGCAGATAATCCTATTTATGCTTCCATAGAGCCTGATATTACCAAGCATACGAAACAGAAAAAACCATTAAGGATAAGAATTTAGAAGCTAAATTGGCTAAGGCTTTAGGTGGCAATAAACAAATGACGATAAAGAAAAAAGTAAAAAACCCACAGCAGAAACTAAAGCAGAAAGCAATAAGATAGACAAAGATGTCGCAGAAACTGCCAAAAATATCAGCGAAATCGCTCTTAAGAACAAAAAAGAAAAGAGTGGGATTTTGTAGATGAAAATGGTAATCCCATTGATGATAAAAAGAAAGAAGAAAAACAAGATGAAACAAGCCCTGTCAAACAGGCCTTTATAGGCAAGAGTGATCCCACATTTGTTTTTAGCGCAATACACCCCCATTGAAATCACTCTGACTTCTAAAGTAGATGCCACTCTCACAGGTATAGTGAGTGGGGTTGTAGCCAAAGATGTATGGAACATGAACGGCACTATGATCTTATTAAGACAAACGGCCACTAAGGTGTATGGGAATTATCAAAGCGTGAAAGGTGGCCACGCCTATTATGACTCGTTTAATGATAGTCTTTACTAAAGCCATTACGCCTGATGGGGTGGTGATACCTCTAGCAAACGCTCAAGCAGCAGGCATGCTGGGTGAAGCAGGCGGTAGATGGCTATGTGAATAATCACTTCATGAAGCGTATAGGCTTTGCTGTGATAGCAAGCGTGGTTAATAGCTTCTTGCAAACTGCACCTATCATAGCTCTAGATAAACTCATAGGCCTTGGCAAAGGCAGAAGTGAAAGGACACCTGAATTTAATTACGCTTTGGGTCAAGCTATCAATGGTAGTATGCAAAGTTCAGCTCAGATGTCTAATCAAATTCTAGGGCAACTGATGAATATCCCCCAAGTTTTTACAAAAATGAGGGCGATAGTATTAAGATTCTCACCATGGACGATATTGATTTTAGTGGTGTGTATGATGTTAAAATTGACCAACAAATCTGTGGTAGATGAAATTATCAAACAAAGCACCAAAAACTTTGTCTAGAGAACATGAAGAAATCACCACAGCCCCAAAGGTGGCAATTGATTCAAGAGAAAGGATAAAATATATTCATGTTATTAAACTCGGTTCTTTACAAAATAAAAAGACAAACCAACCTAGGCTCTTCTAGAGGA J(78=AEEC65HR+++*3327H00GD++++FF440 +-64444426ABAB<:=7888((/788P&gt;&gt;LAA8*+')3&amp;++=<////==<4&amp;<&gt;EFHGGIJ66P;;;9;;FE34KHKHP<<11;HK:57678NJ990((&amp;26&gt;PDDJE JL&gt;=@@88 8 +&gt;::J88ELF9 -5 45G+@@@NP==??<&gt;455F((<BB===;;EE;3&gt;<<;M=&gt;89PLLPP?&gt;KP8+7699&gt;A;ANO===J@'''B; ( HP?E@@AHGE77MNOO9=OO?&gt;98?DLIMPOG&gt;;=PRKB5H---3;MN&amp;&amp;&amp;&amp;&amp;F?B&gt;;99;8AA53)A<=;&gt;777:<&gt;;;8:LM==))6:@K M?6?::7 /4444=JK&gt;&gt;HNN=//16@--F@K;9<:6449@BADD;&gt;CD11JE55K;;;=&amp;&amp;%% 3644DL&amp;=:<877 3&gt;344:&gt;&gt;?44*+MN66PG==:;;?0 /AGLKF99&amp;&amp;5?&gt;+++JOP333333AC@EBBFBCJ&gt;&gt;HINPMNNCC&gt;&gt;++6:??3344&gt;B=<89:/000::K&gt;A=00@ +-/ #(LL@&gt;@I555K22221115666666477KML559- 
333?GGGKCCP:::PPNPPNP??PPPLLMNOKKFOP2Q&amp;&amp;P7777PM<<<=<6<HPOPPP44?=@=:?BB=89:<<DHI777777645545PPO((((((((C3P??PM0000@NOPJPPFGGL<<<NNGNKGGGGGEELKB'''(((((L===L<< *--MJ111?PO=788<8GG&gt;&gt;?JJL88 1CF))??=?M6667PPKAKM&amp;&amp;&amp;&amp;&amp;<?P43?OENPP''''&amp;5579ICIFRPPPPOP&gt;:&gt;&gt;&gt;P888PLPAJDPCCDMMD;9=FBADDJFD7;ALL? 06ID13 000DA4CFJC44 >ED99;44CJK?42FAB?=CLNO''PJI999&amp;77&amp;&amp;ERP&gt;<)))O==D677FP768PA=@@HEE ::NM&amp;&amp;&amp;&gt;OF'PO88H@A999P<:?IHL;;;GIIPPMMPPB7777PP&gt;&gt;&gt;&gt;KOPIIEEE<<CL%%5656AAAG<<DDFFGG%%N21778;M&amp;&amp;&gt;&gt;CCL::LKK6 711DGHHMIA@BAJ7&gt;%6700;;=@@?=;J55&gt;&gt;QP<<:&gt;MF;;RPL==JMMPPPQR@@P===;=BM99M&gt;&gt;PPOQGD44777PKKFP=<'''2215566&gt;CG&gt;&gt;HH<<PLJI800CE<<PPPMGNOPMJ&gt;&gt;GG***LCCC777 @AP&gt;&gt;AOPMFN99ENNMEPP&gt;&gt;&gt;&gt;&gt;&gt;CLPP??66OOKLLP=:&gt;&gt;KMBCPOPP@FKEI<<ML?&gt;EAF&gt;&gt;&gt;LDCD77JK=H&gt;BN==:=<<<:==JN 659???8K<:==<4))))))P98&gt;&gt;&gt;&gt;;967777N66@@@AMKKKIKPMG;;AD88HN&amp;&amp;LMIGJOJMGHPC&gt;@5D((((C?9--?8HGCDPNH7?9974;;AC&amp;ABH''#%:=NP: 9999=GJG&gt;&gt;=&gt;JG21''':9&gt;&gt;&gt;;;MP*****OKKKIE??55PPKJ21:K---///Q11//EN&amp;';;;;:=;00011;IP@@PP11?778JDDMM&gt;&gt;::KKLLKLNONOHDMPKLMIB&gt;&gt;?JP&gt;9;KJL====;8;;;L)))))E@=$$$# :: BPJK76B;;F5<<J::K @m120204_092117_richard_c100250832550000001523001204251233_s1_p0/904/ccs CTCTCTCATCACACACGAGGAGTGAAGAGAGAACCTCCTCTCCACACGTGGAGTGAGGAGATCCTCTCACACACGTGAGGTGTTGAGAGAGATACTCTCTCATCACCTCACGTGAGGAGTGAGAGAGAT {~~~~~sXNL&gt;&gt;||~~fVM~jtu~&amp;&amp;(uxy~f8YHh=<gA5 ''<O1A44N'`oK57(((G&amp;&amp;Q*Q66;"$$Df66E~Z\ZMO&gt;^;%L}~~~~~Q ~~~~x~@-LF9&gt;~MMqbV~ABBV=99mhIwGRR~ @different_number_of_seq_qual ATCG **! @this_should_work GGGG **** ```` The ones with an error I am trying to replace the seq and qual strings with empty strings ````seq qual = '' '' ```` Here is my code so far These edge cases are so difficult for me to figure out please help ````def read_fastq(input offset): """ Inputs a fastq file and reads each line at a time 'offset' parameter can be set to 33 (phred+33 encoding fastq) and 64 Yields a tuple in the format (ID comments for a sequence sequence [integer quality values]) Capable of reading empty sequences and empty files """ ID comment seq qual = None '' '' '' step = 1 #step is a variable that organizes the order fastq parsing #step= 1 scans for ID and comment line #step= 2 adds relevant lines to sequence string #step= 3 adds quality values to string for line in input: line = line strip() if step == 1 and line startswith('@'): #Step system from Nedda Saremi if ID is not None: qual = [ord(char)-offset for char in qual] #Converts from phred encoding to integer values sep = None if ' ' in ID: sep = ' ' if sep is not None: ID comment = ID split(sep 1) #Separates ID and comment by ' ' yield ID comment seq qual ID comment seq qual = None '' '' '' #Resets variable for next sequence ID = line[1:] step = 2 continue if step==2 and not line startswith('@') and not line startswith('+'): seq = seq line strip() continue if step == 2 and line startswith('+'): step = 3 continue while step == 3: #process the quality data if len(qual) == len(seq): #once the length of the quality seq and seq are the same end gathering data step = 1 continue if len(qual) < len(seq): qual = qual line strip() if len(qual) < len(seq): step = 3 continue if (len(qual) &gt; len(seq)): sys stderr write('\nError: ' ID ' sequence length not equal to quality values\n') comment seq qual= '' '' '' ID = line step = 1 continue break if ID is not None: #Section reserved for last entry in file if len(qual) 
&gt; 0: qual = [ord(char)-offset for char in qual] sep = None if ' ' in ID: sep = ' ' if sep is not None: ID comment = ID split(sep 1) if len(seq) == 0: ID comment seq qual= '' '' '' '' yield ID comment seq qual ```` my output is skipping the ID @m120204_092117_richard_c100250832550000001523001204251233_s1_p0/904/ccs and adding @**! when it should not be in the output ````@m120204_092117_richard_c100250832550000001523001204251233_s1_p0/422/ccs CTGTTGCGGATTGTTTGGCTATGGCTAAAACCGATGAAGAAAAAGGAAATGCCAAAACCGTTTATAGCGATTGATCCAAGAAATCCAAAATAAAAGGACACAAAACAAACAAAATCAATTGAGTAAAACAGAAAGGCCATCAAGCAAGCGAGTGCTTGATAACTTAGATGACCCTACTGATCAAGAGGCCATAGAGCAATGTTTAGAGGGCTTGAGCGATAGTGAAAGGGCGCTAATTCTAGGAATTCAAACGACAAGCTGATGAAGTGGATCTGATTTATAGCGATCTAAGAAACCGTAAAACCTTTGATAACATGGCGGCTAAAGGTTATCCGTTGTTACCAATGGATTTCAAAAATGGCGGCGATATTGCCACTATTAACCGCTACTAATGTTGATGCGGACAAATAGCTAGCAGATAATCCTATTTATGCTTCCATAGAGCCTGATATTACCAAGCATACGAAACAGAAAAAACCATTAAGGATAAGAATTTAGAAGCTAAATTGGCTAAGGCTTTAGGTGGCAATAAACAAATGACGATAAAGAAAAAAGTAAAAAACCCACAGCAGAAACTAAAGCAGAAAGCAATAAGATAGACAAAGATGTCGCAGAAACTGCCAAAAATATCAGCGAAATCGCTCTTAAGAACAAAAAAGAAAAGAGTGGGATTTTGTAGATGAAAATGGTAATCCCATTGATGATAAAAAGAAAGAAGAAAAACAAGATGAAACAAGCCCTGTCAAACAGGCCTTTATAGGCAAGAGTGATCCCACATTTGTTTTTAGCGCAATACACCCCCATTGAAATCACTCTGACTTCTAAAGTAGATGCCACTCTCACAGGTATAGTGAGTGGGGTTGTAGCCAAAGATGTATGGAACATGAACGGCACTATGATCTTATTAAGACAAACGGCCACTAAGGTGTATGGGAATTATCAAAGCGTGAAAGGTGGCCACGCCTATTATGACTCGTTTAATGATAGTCTTTACTAAAGCCATTACGCCTGATGGGGTGGTGATACCTCTAGCAAACGCTCAAGCAGCAGGCATGCTGGGTGAAGCAGGCGGTAGATGGCTATGTGAATAATCACTTCATGAAGCGTATAGGCTTTGCTGTGATAGCAAGCGTGGTTAATAGCTTCTTGCAAACTGCACCTATCATAGCTCTAGATAAACTCATAGGCCTTGGCAAAGGCAGAAGTGAAAGGACACCTGAATTTAATTACGCTTTGGGTCAAGCTATCAATGGTAGTATGCAAAGTTCAGCTCAGATGTCTAATCAAATTCTAGGGCAACTGATGAATATCCCCCAAGTTTTTACAAAAATGAGGGCGATAGTATTAAGATTCTCACCATGGACGATATTGATTTTAGTGGTGTGTATGATGTTAAAATTGACCAACAAATCTGTGGTAGATGAAATTATCAAACAAAGCACCAAAAACTTTGTCTAGAGAACATGAAGAAATCACCACAGCCCCAAAGGTGGCAATTGATTCAAGAGAAAGGATAAAATATATTCATGTTATTAAACTCGGTTCTTTACAAAATAAAAAGACAAACCAACCTAGGCTCTTCTAGAGGA J(78=AEEC65HR+++*3327H00GD++++FF440 +-64444426ABAB<:=7888((/788P&gt;&gt;LAA8*+')3&amp;++=<////==<4&amp;<&gt;EFHGGIJ66P;;;9;;FE34KHKHP<<11;HK:57678NJ990((&amp;26&gt;PDDJE JL&gt;=@@88 8 +&gt;::J88ELF9 -5 45G+@@@NP==??<&gt;455F((<BB===;;EE;3&gt;<<;M=&gt;89PLLPP?&gt;KP8+7699&gt;A;ANO===J@'''B; ( HP?E@@AHGE77MNOO9=OO?&gt;98?DLIMPOG&gt;;=PRKB5H---3;MN&amp;&amp;&amp;&amp;&amp;F?B&gt;;99;8AA53)A<=;&gt;777:<&gt;;;8:LM==))6:@K M?6?::7 /4444=JK&gt;&gt;HNN=//16@--F@K;9<:6449@BADD;&gt;CD11JE55K;;;=&amp;&amp;%% 3644DL&amp;=:<877 3&gt;344:&gt;&gt;?44*+MN66PG==:;;?0 /AGLKF99&amp;&amp;5?&gt;+++JOP333333AC@EBBFBCJ&gt;&gt;HINPMNNCC&gt;&gt;++6:??3344&gt;B=<89:/000::K&gt;A=00@ +-/ #(LL@&gt;@I555K22221115666666477KML559- 333?GGGKCCP:::PPNPPNP??PPPLLMNOKKFOP2Q&amp;&amp;P7777PM<<<=<6<HPOPPP44?=@=:?BB=89:<<DHI777777645545PPO((((((((C3P??PM0000@NOPJPPFGGL<<<NNGNKGGGGGEELKB'''(((((L===L<< *--MJ111?PO=788<8GG&gt;&gt;?JJL88 1CF))??=?M6667PPKAKM&amp;&amp;&amp;&amp;&amp;<?P43?OENPP''''&amp;5579ICIFRPPPPOP&gt;:&gt;&gt;&gt;P888PLPAJDPCCDMMD;9=FBADDJFD7;ALL? 
06ID13 000DA4CFJC44 >ED99;44CJK?42FAB?=CLNO''PJI999&amp;77&amp;&amp;ERP&gt;<)))O==D677FP768PA=@@HEE ::NM&amp;&amp;&amp;&gt;OF'PO88H@A999P<:?IHL;;;GIIPPMMPPB7777PP&gt;&gt;&gt;&gt;KOPIIEEE<<CL%%5656AAAG<<DDFFGG%%N21778;M&amp;&amp;&gt;&gt;CCL::LKK6 711DGHHMIA@BAJ7&gt;%6700;;=@@?=;J55&gt;&gt;QP<<:&gt;MF;;RPL==JMMPPPQR@@P===;=BM99M&gt;&gt;PPOQGD44777PKKFP=<'''2215566&gt;CG&gt;&gt;HH<<PLJI800CE<<PPPMGNOPMJ&gt;&gt;GG***LCCC777 @AP&gt;&gt;AOPMFN99ENNMEPP&gt;&gt;&gt;&gt;&gt;&gt;CLPP??66OOKLLP=:&gt;&gt;KMBCPOPP@FKEI<<ML?&gt;EAF&gt;&gt;&gt;LDCD77JK=H&gt;BN==:=<<<:==JN 659???8K<:==<4))))))P98&gt;&gt;&gt;&gt;;967777N66@@@AMKKKIKPMG;;AD88HN&amp;&amp;LMIGJOJMGHPC&gt;@5D((((C?9--?8HGCDPNH7?9974;;AC&amp;ABH''#%:=NP: 9999=GJG&gt;&gt;=&gt;JG21''':9&gt;&gt;&gt;;;MP*****OKKKIE??55PPKJ21:K---///Q11//EN&amp;';;;;:=;00011;IP@@PP11?778JDDMM&gt;&gt;::KKLLKLNONOHDMPKLMIB&gt;&gt;?JP&gt;9;KJL====;8;;;L)))))E@=$$$# :: BPJK76B;;F5<<J::K Error: different_number_of_seq_qual sequence length not equal to quality values @**! @this_should_work GGGG **** ````
Have you considered using one of the robust python packages that are available for dealing with this kind of data rather than writing a parser from scratch? In particular I would recommend checking out <a href="http://www-huber embl de/users/anders/HTSeq/doc/overview html" rel="nofollow">HTSeq</a>
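For example, a parse loop over the same data with HTSeq would look roughly like this (hedged: this follows HTSeq's documented FastqReader interface, and the filename is just a placeholder):

````
import HTSeq

for read in HTSeq.FastqReader("File1.fastq"):   # placeholder filename
    print(read.name)    # the @-line identifier
    print(read.seq)     # the sequence
    print(read.qual)    # qualities already decoded to integer values
````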
django mutlipleChoiceField how to validate? i have the form below : forms py : ````class enrolForm(forms Form): CHOICES = [('male' 'male') ('female' 'female')] students = forms MultipleChoiceField(choices = CHOICES) ```` views py : ```` form = enrolForm(request POST) if form is_valid(): print "creating student" exam = models Exam objects get(name=examName) #more code ```` it NEVER gets to ( `print "creating student"` ) the form is never valid why is that ? how do i validate such a form please help EDIT : it says in the form errors : `<ul class = "errorlist"&gt;<li&gt; students<ul class="errorlist"&gt;<li&gt;This field is required </li&gt;</ul&gt;</li&gt;</ul&gt;<br&gt;` EDIT: views py : ````def enrol(request examName): print "EXAm naaame" form = enrolForm() allStudents = models User objects filter(groups__name="students") myList = [] for i in allStudents: myList append((i i)) print allStudents '33333333333333333333' form fields['students'] choices = myList if request method == 'POST': form = enrolForm(request POST) if form is_valid(): exam = models Exam objects get(name=examName) for i in form cleaned_data['students']: user= models User objects get(username=i) exam = models Exam objects get(name= examName) models Exam_Student objects create(user exam) return render_to_response('enrol html' RequestContext(request {'form':form})) variables = RequestContext(request {'form':form}) return render_to_response("enrol html" variables) ````
What is wrong with using a standard ChoiceField? ````class enrolForm(forms Form): CHOICES = [('male' 'male') ('female' 'female')] students = forms ChoiceField(choices = CHOICES) ```` Also doing this to view the errors and letting us know would be helpful: ````if form is_valid(): print "creating student" exam = models Exam objects get(name=examName) else: print 'ERRORS:' form errors ````
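One more thing worth checking: in the view the POST-bound form is constructed with the class-level CHOICES, not the student list, so the posted values cannot validate against it. A hedged sketch of moving the dynamic choices into the form's __init__ instead (model names taken from the question):

````
class enrolForm(forms.Form):
    students = forms.MultipleChoiceField(choices=[])

    def __init__(self, *args, **kwargs):
        super(enrolForm, self).__init__(*args, **kwargs)
        allStudents = models.User.objects.filter(groups__name="students")
        self.fields['students'].choices = [(s.username, s.username) for s in allStudents]
````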
Google AppEngine libraries import I am currently looking around to find what is allowed and what is not in GAE Using the <a href="https://developers google com" rel="nofollow">Google's Developers website</a> I found that the _socket C library and the socket module are not allowed on GAE How did they disable these modules? Did they perform a complete rebuild of the python interpreter or did they develop their own (like pypy)?
You do not really need to rebuild the whole python interpreter just to disable modules you can (for example) delete the libraries or (as AppEngine did) have an import hook that checks every module being loaded against a whitelist of modules which are allowed to be loaded
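A toy sketch of the import-hook idea (this is not App Engine's actual code, just an illustration of the PEP 302 mechanism on Python 2, with a made-up whitelist):

````
import sys

class WhitelistImporter(object):
    allowed = set(['math', 'json', 're'])     # made-up whitelist

    def find_module(self, fullname, path=None):
        if fullname.split('.')[0] not in self.allowed:
            return self            # we will handle (and refuse) this import
        return None                # fall through to the normal import machinery

    def load_module(self, fullname):
        raise ImportError('module %r is not allowed in this sandbox' % fullname)

sys.meta_path.insert(0, WhitelistImporter())
````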
Google app engine handle html form post array My HTML code is as follows: ````<INPUT type="text" name="txt[]"&gt; <INPUT type="checkbox" name="chk[]"/&gt; ```` I get the value in PHP by ````<?php $chkbox = $_POST['chk']; $txtbox = $_POST['txt']; foreach($txtbox as $a =&gt; $b) echo "$chkbox[$a] - $txtbox[$a] <br /&gt;"; ?&gt; ```` How do I get the value in Google App Engine using Python?
You do not need that trick in Python You can have for example many fields with the same names: ````<INPUT type="text" name="txt"&gt; <INPUT type="text" name="txt"&gt; <INPUT type="text" name="txt"&gt; <INPUT type="checkbox" name="chk"&gt; <INPUT type="checkbox" name="chk"&gt; <INPUT type="checkbox" name="chk"&gt; ```` Then get a list of all posted values for those names and merge them using `zip()` Example for webapp (which uses webob as request wrapper): ````txt = self request POST getall('txt') chk = self request POST getall('chk') for txt_value chk_value in zip(txt chk): print '%s - %s<br /&gt;' % (txt_value chk_value) ````
Who was responsible for yet another definition of the philosophy in 1854?
null
preferred block size when reading/writing big binary files i need to read and write huge binary files is there a preferred or even optimal number of bytes (what i call `BLOCK_SIZE`) i should `read()` at a time? 1 byte is certainly to little; and i do not think reading 4GB into the RAM is a good idea either - is there a 'best' block size? or does that even depend on the file-system (i am on ext4)? what do i need to consider? <a href="https://docs python org/3 4/library/functions html#open" rel="nofollow">python's `open()`</a> even provides a `buffering` argument would i need to tweak that as well? this is sample code that just joins the two files `in-0 data` and `in-1 data` into `out data` (in real life there is more processing that is irrelevant to the question at hand) the `BLOCK_SIZE` is chosen equal to <a href="https://docs python org/3 4/library/io html#io DEFAULT_BUFFER_SIZE" rel="nofollow">`io DEFAULT_BUFFER_SIZE`</a> which seems to be the default for `buffering`: ````from pathlib import Path DATA_PATH = Path(__file__) parent / ' /data/' out_path = DATA_PATH / 'out data' in_paths = (DATA_PATH / 'in-0 data' DATA_PATH / 'in-1 data') BLOCK_SIZE = 8192 def process(data): pass with out_path open('wb') as out_file: for in_path in in_paths: with in_path open('rb') as in_file: while True: data = in_file read(BLOCK_SIZE) if not data: break process(data) out_file write(data) ````
Let the OS make the decision for you Use the mmap module: <a href="https://docs python org/3 4/library/mmap html" rel="nofollow">https://docs python org/3 4/library/mmap html</a> It uses your OS's underlying memory mapping mechanism for mapping the contents of a file into RAM Be aware that there is a 2GB file size limit if you are using 32-bit Python so be sure to use the 64-bit version if you decide to go this route For example: ````f1 = open('input_file' 'r+b') m1 = mmap mmap(f1 fileno() 0) f2 = open('out_file' 'a+b') # out_file must be &gt;0 bytes on windows m2 = mmap mmap(f2 fileno() 0) m2 resize(len(m1)) m2[:] = m1 # copy input_file to out_file m2 flush() # flush results ```` Note that you never had to call any read() functions and decide how many bytes to bring into RAM This example just copies one file into another but as you said in your example you can do whatever processing you need in between Note that while the entire file is mapped to an address space in RAM that does not mean it has actually been copied there It will be copied piecewise at the discretion of the OS
Why is PySide's exception handling extending this object's lifetime? <strong>tl;dr -- In a PySide application an object whose method throws an exception will remain alive even when all other references have been deleted Why? And what if anything should one do about this?</strong> In the course of building a simple CRUDish app using a Model-View-Presenter architecture with a PySide GUI I discovered some curious behavior In my case: - The interface is divided into multiple Views -- i e each tab page displaying a different aspect of data might be its own class of View - Views are instantiated first and in their initialization they instantiate their own Presenter keeping a normal reference to it - A Presenter receives a reference to the View it drives but stores this as a weak reference (`weakref ref`) to avoid circularity - No other strong references to a Presenter exist (Presenters can communicate indirectly with the `pypubsub` messaging library but this also stores only weak references to listeners and is not a factor in the MCVE below ) - Thus in normal operation when a View is deleted (e g when a tab is closed) its Presenter is subsequently deleted as its reference count becomes 0 However a Presenter of which a method has thrown an exception does not get deleted as expected The application continues to function because PySide employs <a href="http://stackoverflow com/questions/14493081/pyqt-event-handlers-snarf-exceptions">some magic</a> to catch exceptions The Presenter in question continues to receive and respond to any View events bound to it But when the View is deleted the exception-throwing Presenter remains alive until the whole application is closed An MCVE (<a href="http://pastebin com/CvVFnjAJ" rel="nofollow">link for readability</a>): ````import logging import sys import weakref from PySide import QtGui class InnerPresenter: def __init__(self view): self _view = weakref ref(view) self logger = logging getLogger('InnerPresenter') self logger debug('Initializing InnerPresenter (id:%s)' % id(self)) def __del__(self): self logger debug('Deleting InnerPresenter (id:%s)' % id(self)) @property def view(self): return self _view() def on_alert(self): self view show_alert() def on_raise_exception(self): raise Exception('From InnerPresenter (id:%s)' % id(self)) class OuterView(QtGui QMainWindow): def __init__(self *args **kwargs): super(OuterView self) __init__(*args **kwargs) self logger = logging getLogger('OuterView') # Menus menu_bar = self menuBar() test_menu = menu_bar addMenu('&amp;Test') self open_action = QtGui QAction('&amp;Open inner' self triggered=self on_open enabled=True) test_menu addAction(self open_action) self close_action = QtGui QAction('&amp;Close inner' self triggered=self on_close enabled=False) test_menu addAction(self close_action) def closeEvent(self event *args **kwargs): self logger debug('Exiting application') event accept() def on_open(self): self setCentralWidget(InnerView(self)) self open_action setEnabled(False) self close_action setEnabled(True) def on_close(self): self setCentralWidget(None) self open_action setEnabled(True) self close_action setEnabled(False) class InnerView(QtGui QWidget): def __init__(self *args **kwargs): super(InnerView self) __init__(*args **kwargs) self logger = logging getLogger('InnerView') self logger debug('Initializing InnerView (id:%s)' % id(self)) self presenter = InnerPresenter(self) # Layout layout = QtGui QHBoxLayout(self) alert_button = QtGui QPushButton('Alert!' 
self clicked=self presenter on_alert) layout addWidget(alert_button) raise_button = QtGui QPushButton('Raise exception!' self clicked=self presenter on_raise_exception) layout addWidget(raise_button) self setLayout(layout) def __del__(self): super(InnerView self) __del__() self logger debug('Deleting InnerView (id:%s)' % id(self)) def show_alert(self): QtGui QMessageBox(text='Here is an alert') exec_() if __name__ == '__main__': logging basicConfig(level=logging DEBUG) app = QtGui QApplication(sys argv) view = OuterView() view show() sys exit(app exec_()) ```` Open and close the inner view and you will see both view and presenter are deleted as expected Open the inner view click the button to trigger an exception on the presenter then close the inner view The view will be deleted but the presenter will not until the application exits <strong>Why?</strong> Presumably whatever it is that catches all exceptions on behalf of PySide is storing a reference to the object that threw it Why would it need to do that? <strong>How</strong> should I proceed (aside from writing code that never causes exceptions of course)? I have enough sense not to rely on `__del__` for resource management I get that I have no right to expect anything subsequent to a caught-but-not-really-handled exception to go ideally but this just strikes me as unnecessarily ugly How should I approach this in general?
The problem is `sys last_traceback` and `sys last_value` When a traceback is raised interactively and this seems to be what is emulated the last exception and its traceback are stored in `sys last_value` and `sys last_traceback` respectively Doing ````del sys last_value del sys last_traceback # for consistency see # https://docs python org/3/library/sys html#sys last_type del sys last_type ```` will free the memory It is worth noting that at most <em>one</em> exception and traceback pair can get cached This means that because you are sane and do not rely on `del` there is not a massive amount of damage to be done But if you want to reclaim the memory just delete those values
How do I deal with a logical expression in Python? let us say I got a logical expression in the format of ie `AvBv~C>D` It consists of boolean elements and operators like `(v ~ >)` (disjunction negation implication) I need to store those expressions and each element they consist of Also each element should have a description so I guess I will have to make a class that will represent them (with fields `representation` and `description` ie `element1 representation="A"` and `element1 description="This is element A"` but I am not sure whether this is the pythonic way maybe a 2D array with names and descriptions as columns would be a better idea since the names are all unique - Which data structure should I use to store such an expression? Note the fact I need to store elements and operators which are of different type and then be able to restore them as logical expressions and do operations on them - Should I create methods for recognition of each element and operator to deal with the logical operations or is there a better approach? Maybe use a parser like Lex-Yacc or some other library that deals with those? Forgive me if I am not too clear but I am coming from Java where I cannot store different types of elements in the same data structure
- Create a tree data structure that represents each element in the expression - You can indeed use a parser generator to produce the above data structures from a given string For example a conjunction can be represented as follows and a similar approach can be used for variables: ````class Node: operator = "AND" left_node = None right_node = None description = "text" value = None class Node: operator = "VAR" left_node = None right_node = None description = "text" value = "B" ```` You can then compose a tree out of these nodes For example: `A^B` can be represented as a `Node` with `AND` where the `left_node` is a `VAR` node (`value=A`) and the `right_node` is a `VAR` node as well (`value=B`)
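A small runnable sketch of that tree plus a walker over it, to show how the different node kinds can live in one structure (the operator names and fields are my own choice, not from any parsing library):

````
class Node(object):
    def __init__(self, operator, left=None, right=None, value=None, description=""):
        self.operator = operator      # "AND", "OR", "NOT", "IMPLIES" or "VAR"
        self.left = left
        self.right = right
        self.value = value            # variable name when operator == "VAR"
        self.description = description

def evaluate(node, env):
    if node.operator == "VAR":
        return env[node.value]
    if node.operator == "NOT":
        return not evaluate(node.left, env)
    if node.operator == "AND":
        return evaluate(node.left, env) and evaluate(node.right, env)
    if node.operator == "OR":
        return evaluate(node.left, env) or evaluate(node.right, env)
    if node.operator == "IMPLIES":
        return (not evaluate(node.left, env)) or evaluate(node.right, env)
    raise ValueError("unknown operator: %r" % node.operator)

# A v B with A false and B true
expr = Node("OR", Node("VAR", value="A", description="This is element A"),
                  Node("VAR", value="B", description="This is element B"))
print(evaluate(expr, {"A": False, "B": True}))   # True
````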
same scale of Y axis on different figures I try to plot different data with similar representations but slightly different behaviours and different origins on several figures So the min &amp; max of the Y axis is different between each figure and so is the scale e g here are some extracts of my batch plotting : <a href="http://i stack imgur com/Syzxw png" rel="nofollow"><img src="http://i stack imgur com/Syzxw png" alt="enter image description here"></a> <a href="http://i stack imgur com/hgHw8 png" rel="nofollow"><img src="http://i stack imgur com/hgHw8 png" alt="enter image description here"></a> Is there a simple way with matplotlib to constrain the same Y step on those different figures in order to have an easy visual interpretation while keeping an automatically determined Y min and Y max ? In other words I would like to have the same <em>metric</em> spacing between each Y-tick
you could use a <a href="http://matplotlib org/api/ticker_api html#matplotlib ticker MultipleLocator" rel="nofollow">`MultipleLocator`</a> from the <a href="http://matplotlib org/api/ticker_api html" rel="nofollow">`ticker`</a> module on both axes to define the tick spacings: ````import matplotlib pyplot as plt import matplotlib ticker as ticker fig=plt figure() ax1=fig add_subplot(211) ax2=fig add_subplot(212) ax1 set_ylim(0 100) ax2 set_ylim(40 70) # set ticks every 10 tickspacing = 10 ax1 yaxis set_major_locator(ticker MultipleLocator(base=tickspacing)) ax2 yaxis set_major_locator(ticker MultipleLocator(base=tickspacing)) plt show() ```` <a href="http://i stack imgur com/yASxx png" rel="nofollow"><img src="http://i stack imgur com/yASxx png" alt="enter image description here"></a> <strong>EDIT:</strong> It seems like your desired behaviour was different to how I interpreted your question Here is a function that will change the limits of the y axes to make sure `ymax-ymin` is the same for both subplots using the larger of the two `ylim` ranges to change the smaller one ````import matplotlib pyplot as plt import numpy as np fig=plt figure() ax1=fig add_subplot(211) ax2=fig add_subplot(212) ax1 set_ylim(40 50) ax2 set_ylim(40 70) def adjust_axes_limits(ax1 ax2): yrange1 = np ptp(ax1 get_ylim()) yrange2 = np ptp(ax2 get_ylim()) def change_limits(ax yr): new_ymin = ax get_ylim()[0] - yr/2 new_ymax = ax get_ylim()[1] yr/2 ax set_ylim(new_ymin new_ymax) if yrange1 &gt; yrange2: change_limits(ax2 yrange1-yrange2) elif yrange2 &gt; yrange1: change_limits(ax1 yrange2-yrange1) else: pass adjust_axes_limits(ax1 ax2) plt show() ```` Note that the first subplot here has expanded from `(40 50)` to `(30 60)` to match the y range of the second subplot <a href="http://i stack imgur com/wfv5e png" rel="nofollow"><img src="http://i stack imgur com/wfv5e png" alt="enter image description here"></a>
Using while loops to determine a text value from an enumerated file <h1>THE ERROR IS "Cannot assign to function call"</h1> So I have written a code that with an input of: ```` To BE or NoT tO be ```` will return: ```` 1 2 3 4 1 2 ```` this is what I will refer to as "compressed" Now I need to take that output and turn it back into the input (case does not matter) The code I have written (below) will not function when I input 2 and I do not understand why Please help open to suggestions I however struggle with the compression functions built into python such as zlib or gzip ````floop = 0 while floop ==0: choice = int(input("Press 1 to compress a text and 2 to decode a text:")) if choice == 1: tfile = int(input("Press:\n 1 to compress the File \n 2 to start again:")) if tfile ==1: print("Compressing file 1") file1 = open("File1 txt" "r") sent = file1 readlines() sent = str(sent) splitsent = sent lower() split()#Lowers the case and splits the sentence dl = [0] #Defines the DL variable for count v in enumerate (splitsent): #Enumerates the splitsent list if splitsent count(v) < 2: #Cycles through the splitsent list apending dl dl append(max(dl) 1) #Defines the unique variable set else: dl append(splitsent index(v) 1) #Adds the remainding values to the dl list dl remove(0)#Removes the starting zero print(sent "\n" dl)#Prints the output ec = open("Encoded txt" "w") dl = str(dl) ec write(dl) ec close() elif tfile == 2: print("Back to the beginning") else: print ("Invalid input") elif choice == 2: file1 = open("File1 txt" "r") sent = file1 readlines() sent = str(sent) lower() split() you=0 ofile = open("Encoded txt" "r") decode = ofile readlines() decode = str(decode) split() you = 0 decode(0) = sent(0) while you < len(decode): you=you+1 decode(you) = sent(you) ```` I would greatly appreciate the assistance or returning an output of text when the input of 2 is done so <h1>EDIT</h1> This is the input file (File1 txt) ````Karen spent every waking moment planning the perfect life Her mind soaked up luxury sun and freedom She could almost taste her homemade spaghetti sauce simmering on the stove in her quiet apartment The putrid odor of another inmate slapped Karen back to reality Only nine thousand days to go ```` and this is the Encoded txt file: ````[1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 10 22 23 24 25 26 7 27 28 10 29 30 7 31 32 33 34 35 36 37 38 43 44 45 46 47 48 43 49] ````
It looks like you are trying to assign to a function call: ````decode(0) = sent(0) ```` You might mean ````# notice changing ( to [ as you are doing decode(0) = as well as decode(you) = decode[you] = sent[you] ```` You also have: ```` while < len(decode): ```` Which is syntactically incorrect I believe you mean: ```` while you < len(decode): ```` However it would be better to say: ````l = len(decode) while you < l: ```` It sounds like your eventual target is: ````# A target output value output = [] while you < l: key = sent[you] # append the value found at the index of sent output append(decode[key]) ```` Or better yet: ````output = [decode[val] for val in sent] ````
Trouble with sqlite3 set_authorizer method I am trying to use the `Connection set_authorizer` method to only allow certain DB operations with a connection object (The documentation is <a href="http://docs python org/library/sqlite3 html#sqlite3 Connection set_authorizer" rel="nofollow">here</a>) I am using this code to test: ````import sqlite3 as sqlite def select_authorizer(sqltype arg1 arg2 dbname): print("Test") return sqlite SQLITE_OK #should allow all operations conn = sqlite connect(":memory:") conn execute("CREATE TABLE A (name integer PRIMARY KEY AUTOINCREMENT)") conn set_authorizer(select_authorizer) conn execute("SELECT * FROM A") fetchall() #should still work ```` This gives me a `sqlite3 DatabaseError: not authorized` without ever printing out "Test" I am guessing I may have set up my authorizer wrong and it is just failing to even call it (Though the error message sure does not communicate that) But according to the documentation this setup looks right EDIT: Changed `sqlite SQLITE_OKAY` to `sqlite SQLITE_OK` but since the method does not seem to be called at all not surprisingly that did not help
The authorizer callback takes <them>5</them> arguments but yours only accepts four: <blockquote> The first argument to the callback signifies what kind of operation is to be authorized The second and third argument will be arguments or None depending on the first argument The 4th argument is the name of the database (“main” “temp” etc ) if applicable The 5th argument is the name of the inner-most trigger or view that is responsible for the access attempt or None if this access attempt is directly from input SQL code </blockquote> Thus the signature <them>should</them> be: ````def select_authorizer(sqltype arg1 arg2 dbname source): ```` Generally when testing a callback like that testing is made easy by using a `*args` wildcard parameter: ````def select_authorizer(*args): print(args) return sqlite SQLITE_OK ```` The above callback prints out: ````(21 None None None None) (20 'A' 'name' 'main' None) ```` when I run your test `SELECT` See the <a href="https://www sqlite org/c3ref/set_authorizer html" rel="nofollow">C reference for SQLite `set_authorizer`</a> and the <a href="https://www sqlite org/c3ref/c_alter_table html" rel="nofollow">action codes reference</a> for the various constants used
How the function auto arima() in R determines d? What is the test used in auto arima() function in R to determine stationarity i e to determine the value of "d" Can that logic be implemented in python?
This <a href="https://www otexts org/fpp/8/7" rel="nofollow">link</a> says it is determined using repeated KPSS tests I see no reason why it could not be implemented in Python it would just need to be written Otherwise you could use <a href="http://rpy sourceforge net/rpy2/doc-dev/html/introduction html" rel="nofollow">rpy2</a> and just call `auto arima` from python ````from rpy2 import * import rpy2 robjects as RO RO r('library(forecast)') # use example WWWusage data RO r('fit <- auto arima(WWWusage)') ````
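The answer above calls auto arima through rpy2; a hedged sketch of the same repeated-KPSS idea in pure Python, assuming a statsmodels version that ships kpss in statsmodels.tsa.stattools:

````
import numpy as np
from statsmodels.tsa.stattools import kpss

def ndiffs(series, alpha=0.05, max_d=2):
    """Rough analogue of auto arima's choice of d via repeated KPSS tests"""
    x = np.asarray(series, dtype=float)
    for d in range(max_d + 1):
        p_value = kpss(x)[1]
        if p_value > alpha:        # cannot reject stationarity, stop differencing
            return d
        x = np.diff(x)
    return max_d
````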
Azure Python Web App Internal Server Error EDIT: The problem seems to be the importing of packages in my app All the packages are correctly installed and i can see them in my wwwroot with kudu But when i import them in the scripts i get the 500 error The WIERDEST thing is that the problem only occurs when i import the packages this way: ````from package import something ```` But not this way: ````import package ```` I also get the same error when i try to call a package function meaning i cannot access anything from the packages(?) It seems that there is an exception generated in web app but not in my local machine Any thoughts? <hr> I am trying to publish a Python Web App in Azure Web Apps but I keep failing I am using bottle as the web framework and the packages i installed along with their dependencies are: - Numpy - Scipy - Scikit-image I have configured the virtual env to match the web app (32bit/2 7) and i installed the packages using wheels as mentioned in this post: <a href="https://azure microsoft com/en-us/documentation/articles/web-sites-python-create-deploy-bottle-app/" rel="nofollow">https://azure microsoft com/en-us/documentation/articles/web-sites-python-create-deploy-bottle-app/</a> I am deploying the app via VS and the publish wizard from Azure SDK and everything works as expected When the app is up i get a 500 error and the console says these things: <a href="http://i stack imgur com/P9GgC jpg" rel="nofollow"><img src="http://i stack imgur com/P9GgC jpg" alt="enter image description here"></a> My web cofing is this: ```` <?xml version="1 0"?&gt; <!-- Generated web config for Microsoft Azure Remove this comment to prevent modifications being overwritten when publishing the project -> <configuration&gt; <system diagnostics&gt; <trace&gt; <listeners&gt; <add type="Microsoft WindowsAzure Diagnostics DiagnosticMonitorTraceListener Microsoft WindowsAzure Diagnostics Version=1 0 0 0 Culture=neutral PublicKeyToken=31b " name="AzureDiagnostics"&gt; <filter type="" /&gt; </add&gt; </listeners&gt; </trace&gt; </system diagnostics&gt; <appSettings&gt; <add key="WSGI_ALT_VIRTUALENV_HANDLER" value="app wsgi_app()" /&gt; <add key="WSGI_ALT_VIRTUALENV_ACTIVATE_THIS" value="D:\home\site\wwwroot\env\Scripts\activate_this py" /&gt; <add key="WSGI_HANDLER" value="ptvs_virtualenv_proxy get_virtualenv_handler()" /&gt; <add key="PYTHONPATH" value="D:\home\site\wwwroot" /&gt; </appSettings&gt; <system web&gt; <compilation debug="true" targetFramework="4 0" /&gt; </system web&gt; <system webServer&gt; <modules runAllManagedModulesForAllRequests="true" /&gt; <handlers&gt; <add name="Python FastCGI" path="handler fcgi" verb="*" modules="FastCgiModule" scriptProcessor="D:\Python27\python exe|D:\Python27\Scripts\wfastcgi py" resourceType="Unspecified" requireAccess="Script" /&gt; </handlers&gt; <rewrite&gt; <rules&gt; <rule name="Static Files" stopProcessing="true"&gt; <conditions&gt; <add input="true" pattern="false" /&gt; </conditions&gt; </rule&gt; <rule name="Configure Python" stopProcessing="true"&gt; <match url="( *)" ignoreCase="false" /&gt; <conditions&gt; <add input="{REQUEST_URI}" pattern="^/static/ *" ignoreCase="true" negate="true" /&gt; </conditions&gt; <action type="Rewrite" url="handler fcgi/{R:1}" appendQueryString="true" /&gt; </rule&gt; </rules&gt; </rewrite&gt; </system webServer&gt; </configuration&gt; ```` Any help why this might happening?
FINALLY i got it working The problem was the version of pip i was using to install the wheels After i upgraded to `pip 8 11 1` everything worked as expected
Remove all files in a directory Trying to remove all of the files in a certain directory gives me the follwing error: ````OSError: [Errno 2] No such file or directory: '/home/me/test/*' ```` The code I am running is: ````import os test = "/home/me/test/*" os remove(test) ````
os remove will only remove a single file In order to remove with wildcards you will need to write your own routine that handles this There are <a href="http://forums devshed com/python-programming-11/deleting-files-with-python-108565 html" rel="nofollow">quite a few suggested approaches</a> listed on this forum page
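A minimal sketch of such a routine, expanding the wildcard with glob and removing each plain file:

````
import glob
import os

for path in glob.glob("/home/me/test/*"):
    if os.path.isfile(path):    # skip sub-directories
        os.remove(path)
````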
Jquery $ get using selenium python I was trying make a jquery get request using selenium python from the web page but I am always getting None as the response Here is the code ```` from selenium import webdriver from selenium webdriver common keys import Keys import time driver = webdriver PhantomJS() driver get(url) time sleep(5) driver execute_script("ok = $ get("+get_url+")") #get_url same domain as url print driver execute_script("return ok responseText") driver close() ```` Please tell me what is going wrong here Also please tell me any other alternatives for doing this if this is not possible with selenium python
`$ get` is asynchronous You can attach a success (or failure) callback to the `$ get` call create a DOM element holding the response inside that callback and then use Selenium's WebDriverWait to wait for that element and read the response out of it
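A hedged sketch of that pattern (the element id is made up; driver and get_url are the variables from the question):

````
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.execute_script("""
    $.get(arguments[0], function(data) {
        var el = document.createElement('div');
        el.id = 'ajax-result';                 // made-up id to wait for
        el.textContent = data;
        document.body.appendChild(el);
    });
""", get_url)

element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'ajax-result')))
print(element.text)
````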
In what year were there hundred and 50 to 200 Iranian languages
null
Simplifying a short list comparison Here is the code It is being used to change a certain value stored in a list (of between 25 and 81 values) depending on the user input I am using a second list to decide what happens with only 4 values inside which were also the user's input Basically if the user chooses a certain value it changes to the next value in sequence ````if list1[value] == list2[3]: list[value] = list2[0] elif list1[value] == list2[0]: list[value] = list2[1] elif list1[value] == list2[1]: list[value] = list2[2] elif list1[value] == list2[2]: list[value] = list2[3] ```` My problem is I cannot find a way to make it simpler It looks quite long and ugly It also needs to be executed many times based on extra inputs so if the user chose the same variable twice it would change each time in sequence This seems like such a dumb problem I have been trying to think of how to simplify this for ages to a short for loop or something but for some reason fail every time I have tried something like this: ````e = value % 4 if list1[value] == list2[e]: list1[value] = list2[e 1] #This is not exactly what I had but something along these lines maybe in a for loop too etc ```` List 2 contains 4 string [colour1 colour2 colour3 colour4] List 1 contains these same strings but looped over and over until it hits the list limit specified by user Thanks for any help!
Instead of a list for the "cycle" set of data use a dictionary of `{value: nextvalue}` i e : ````cycler = {0: 1 1: 2 2: 3 3: 0} # replace 0 1 2 3 with the actual values if list1[value] in cycler: list1[value] = cycler[list1[value]] ```` EDIT: To build the cycler from a list of elements: ````cycler = {list2[i-1]: list2[i] for i in xrange(len(list2))} # Note that this works because `list2[-1]` indexes the last element ````
If I use QT For Windows will my application run great on Linux/Mac/Windows? I am under the impression that Python runs in the Triforce smoothly A program that runs in Windows will run in Linux Is this sentiment correct? Having said that if I create my application in QT For Windows will it run flawlessly in Linux/Mac as well? Thanks
Yes The code that you write using Qt will work on Windows Mac Linux/X11 embedded Linux Windows CE and Symbian without any change You can take a <a href="http://qt nokia com/products/platform/platforms" rel="nofollow">look here</a>
Google App Engine deployment: missing file I have a problem with google app engine It used to work but now I cannot figure out what is wrong ````python: cannot open file 'google_appengine/dev_appserver py': [Errno 2] No such file or directory ````
apologize for the delay in getting back to this 1) it happens when the command is executed outside the google_appengine directory e g ````abid@abid-webdev:~/Documents/GAE_projects/helloworld$ python google_appengine/dev_appserver py helloworld/ ```` <blockquote> python: cannot open file 'google_appengine/dev_appserver py': [Errno 2] No such file or directory </blockquote> 2) now when i run from the directory where i have the "google_appengine" folder and the project it works grand :) ````abid@abid-webdev:~/Documents/GAE_projects$ ls google_appengine helloworld abid@abid-webdev:~/Documents/GAE_projects$ python google_appengine/dev_appserver py helloworld/ ```` <blockquote> Allow dev_appserver to check for updates on startup? (Y/n): y dev_appserver will check for updates on startup To change this setting edit /home/abid/ appcfg_nag </blockquote> 3) another thing i noticed the google docs (https://developers google com/appengine/docs/python/gettingstartedpython27/helloworld) say to use this command ````google_appengine/dev_appserver py helloworld/ ```` whereas i used <blockquote> python google_appengine/dev_appserver py helloworld/ </blockquote> as suggested on udacity forums -> http://forums udacity com/questions/6003945/with-opensuse-121-the-python-app-refuses-to-work thanks all for your help cheers
Python Mock type only within module I am using mock for testing in Python I am trying to unit test a metaclass which overwrites the `__new__` method and then calls `type __new__(cls)` internally I do not want to actually call `type __new__` so I want to mock out `type` Of course I cannot patch `__builtin__ type` because it breaks object construction within the test So I really want to limit mocking `type` within the module under test Is this possible?
Yes You `patch` as close to where you are going to call your function as possible for just this sort of reason So in your test case only around the function (or whatever callable) under test you can patch `type` The documentation for <a href="http://www voidspace org uk/python/mock/patch html#mock patch" rel="nofollow">`patch`</a> has plenty of examples for doing this if you would like to peruse them Cheers
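A hedged sketch of what that can look like, with 'mymeta' standing in for the module that defines your metaclass (create=True is needed because the module itself defines no name called type, it just uses the builtin):

````
import mock

class FakeType(object):
    """Test double whose __new__ records the call instead of building a real class"""
    calls = []
    def __new__(cls, *args, **kwargs):
        FakeType.calls.append((args, kwargs))
        return 'fake-class'

with mock.patch('mymeta.type', FakeType, create=True):
    # code inside mymeta that does type.__new__(cls, name, bases, ns) now hits
    # FakeType.__new__; object construction everywhere else is untouched
    pass
````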
numpy 1 6 1 quadrature attributes for polynomial object not found I am using python 2 7 3 and numpy 1 6 1 I am trying to obtain the gauss quadrature points for legendre and hermite polynomials As per the <a href="http://docs scipy org/doc/numpy/reference/generated/numpy polynomial legendre leggauss html#numpy polynomial legendre leggauss" rel="nofollow">numpy documentation</a> I should be able to access the attribute by typing <blockquote> <blockquote> numpy polynomial legendre leggauss(1) </blockquote> </blockquote> But whenever I do this (even for hermite or hermite_e) I get an error <blockquote> <blockquote> AttributeError: 'module' object has no attribute 'leggauss' </blockquote> </blockquote> What is going on? How can it be missing such an important attribute? I am using Mac OS X 10 7 4 Below is the exact code I type into python: ````Python 2 7 3 (default Jul 12 2012 11:58:31) [GCC 4 2 1 Compatible Apple Clang 3 1 (tags/Apple/clang-318 0 61)] on darwin Type "help" "copyright" "credits" or "license" for more information &gt;&gt;&gt; import numpy &gt;&gt;&gt; numpy polynomial legendre leggauss(1) Traceback (most recent call last): File "<stdin&gt;" line 1 in <module&gt; AttributeError: 'module' object has no attribute 'leggauss' &gt;&gt;&gt; numpy version version '1 6 1' ````
If you click on <a href="https://github com/numpy/numpy/blob/master/numpy/polynomial/legendre py#L1662" rel="nofollow">source</a> in the documentation you will see the note that it was added in version 1 7 0 ````def leggauss(deg): """ Gauss-Legendre quadrature Computes the sample points and weights for Gauss-Legendre quadrature These sample points and weights will correctly integrate polynomials of degree :math:`2*deg - 1` or less over the interval :math:`[-1 1]` with the weight function :math:`f(x) = 1` Parameters ---------- deg : int Number of sample points and weights It must be &gt;= 1 Returns ------- x : ndarray 1-D ndarray containing the sample points y : ndarray 1-D ndarray containing the weights Notes ----- versionadded::1 7 0 ````
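So the fix is to upgrade numpy to 1.7.0 or later; in the meantime a quick guard makes the failure mode obvious:

````
import numpy
import numpy.polynomial.legendre as leg

print(numpy.__version__)
if hasattr(leg, 'leggauss'):      # only present in numpy >= 1.7.0
    points, weights = leg.leggauss(1)
else:
    raise ImportError('leggauss needs numpy >= 1.7.0, found %s' % numpy.__version__)
````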
How to fix Index out of range in python I have to extract two columns from a website I know how to read the rows and print them but I am having trouble extracting only the first and third column every time I try to attach a variable to a row and try printing the variable it gives me an Error but if I use print(row[2]) it works but at the end it would say index out of range I do not understand why? Here is what I did: ````import urllib request import csv with urllib request urlopen("http://archive ics uci edu/ml/machine-learning-databases/iris/iris data") as webpage: reader = csv reader(webpage read() decode('utf-8') splitlines()) for row in reader: cor = row[0] cors = row[2] print(cors) ````
At the very end of the data set there is an empty row which is being stored as an empty list You can check for this by using the following condition: ````import urllib request import csv with urllib request urlopen("http://archive ics uci edu/ml/machine-learning-databases/iris/iris data") as webpage: reader = csv reader(webpage read() decode('utf-8') splitlines()) for row in reader: if not row: continue cor = row[0] cors = row[2] print(cors) ````
Bluehost: Python/CGI versions I have been struggling with this problem for a few days now and cannot seem to find an answer anywhere I need to run the numpy package from Python2 7 and have thus installed Python 2 7 on my Bluehost account (as per the Bluehost instructions) Then I used python2 7 easy_install to install numpy in the correct site-packages folder Calling 'python' from the command line shows that Python2 7 is called and numpy can be imported without issues However when I call a python script from my site (i e using a CGI form) I see it calls python2 6 instead and cannot import numpy anymore I suspect there is a problem with my bashrc which is as follows: # bashrc ````# User specific aliases and functions alias mv='mv -i' alias rm='rm -i' alias cp='cp -i' # Source global definitions if [ -f /etc/bashrc ]; then /etc/bashrc fi # Python stuff export PATH=$HOME/python/Python-2 7 2/:$PATH export PYTHONPATH=$HOME/python/lib/python2 7/site-packages:$PYTHONPATH ```` Does anyone have a clue how to get CGI to call the Python v2 7? Cheers Hugh
Alright I figured it out The problem was with my python not the CGI configuration of the server Basically the first line of the program (e g "#!/usr/local/bin/python") points to the location of the executable used for that particular script (I thought it was just a comment!) As running Py2 7 2 on Bluehost requires having 2 versions (2 6 and 2 7) installed the latter version needs to be in this first line otherwise the script uses the 'default' 2 6 In short solution is to use "#!/home4/username/python/Python-2 7 2/python" instead
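To make the fix concrete this is roughly what the top of such a CGI script ends up looking like (the interpreter path below is the example from the answer; substitute the location of your own Python 2 7 build):
````#!/home4/username/python/Python-2.7.2/python
# -*- coding: utf-8 -*-
# Minimal CGI script; the first line above decides which interpreter the web server uses
import numpy

print "Content-Type: text/plain"
print
print "numpy version:", numpy.__version__
````
If this prints the expected version in the browser the interpreter selection is working; if it still falls back to 2 6 the first line is the place to check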
OSX - cannot find what is adding python to my path I have just upgraded to OSX Yosemite I am running `brew doctor` and among the zillions of warnings it generates is: ````Warning: /usr/local/share/python is not needed in PATH Formerly homebrew put Python scripts you installed via `pip` or `pip3` (or `easy_install`) into that directory above but now it can be removed from your PATH variable Python scripts will now install into /usr/local/bin You can delete anything except 'Extras' from the /usr/local/share/python (and /usr/local/share/python3) dir and install affected Python packages anew with `pip install --upgrade` ```` I have looked carefully everywhere and I cannot find what is adding /usr/local/share/python to my path My bash_profile contains no mention of python /private/etc/paths d contains only a file called "git" I did find an instance of /usr/local/share/python in /private/etc/paths but I deleted that but when I start a new terminal after that and rerun `brew doctor` I still get the warning Can anyone suggest how I can stop this? It is driving me insane!
I am not able to comment and while I do not have an answer this might help through process of elimination or potentially help your future problem solver
- What about `/etc/profile`; any sign in that file?
- Do you have a `~/ profile` it could be in?
- What do you get when you: `echo $PATH` any sign of said path?
- What about `export` or `declare` (with no options)?
- How are you setting/exporting your desired path now?
Probably not what you want to hear but I always do a fresh install rather than upgrade for fear of issues like this Doing so would most likely solve your issues; all zillion of them I know I have not had any issues or even a single warning from Homebrew since I moved to Yosemite
Round to two decimal places only if repeating python I was wondering if anybody knew of a quick way in python to check and see if a fraction gives a repeating decimal I have a small function that takes in two numbers and divides them If the quotient is a repeating decimal I would like to round to 2 decimal places and if the quotient is not repeating I would like to round to just one Example: 800/600 = 1 33333333333333 which would equal 1 33 900/600 = 1 5 would stay as 1 5 I know that I need to use the two statements for the two types of rounding ````output = "{: 2f}" format(float(num)) output = "{: }" format(float(num)) ```` but I am having trouble with the if statement to direct to one or the other Can anybody help with some insight?
<blockquote> repeating decimal </blockquote> There are only 10 fractions that can be written as some repeated digit - ` (0)` ` (1)` ` (9)` Thus if you only care about repeating pattern starting right after decimal point you only need to check against those cases All those numbers (and only them) give an integer if multiplied by 9 Thus if `(9 * numenator) % denominator == 0` you will print 2 digits You will probably want to exclude ` (0)` pattern though To do that test if your fraction is in fact an integer - `numenator % denominator == 0` Also check out <a href="https://docs python org/3 1/library/fractions html" rel="nofollow">fractions</a> module in case you have some wheels to reinvent Of course if you only have your number as a `float` there is some ambiguity about what numenator and denominator are because `float`s do not actually store rational numbers like `1/3` You can experiment with `fractions`'s ` limit_denominator()` to choose something that works for your case
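As a small sketch of that test assuming you still have the numerator and denominator as integers rather than only the float result:
````def format_quotient(numerator, denominator):
    value = float(numerator) / denominator
    # a single digit repeats right after the decimal point exactly when
    # 9 * numerator is divisible by the denominator (excluding the trivial
    # case where the quotient is already an integer)
    if numerator % denominator != 0 and (9 * numerator) % denominator == 0:
        return "{:.2f}".format(value)
    return "{}".format(value)

print(format_quotient(800, 600))  # 1.33
print(format_quotient(900, 600))  # 1.5
````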
Is there a way to 'search' in a pandas DataFrame for 2 specific numbers in columns and return the index if it exists? I have made a DataFrame using pandas and want to know if there is a way to 'search' for 2 specific numbers in the rows of specific columns and then return the row eg <img src="http://i stack imgur com/byrZs png" alt="eg"> search if a = 4 and b =6 return 1 (the index) or if b=5 and c =2 return 2 (the index) or search in a = 2 and b=1 return nothing (as this does not exist) Thanks!
````print(df loc[(df["A"] == 4) &amp; (df["B"] == 6)] index[0]) ```` In a function: ````def pairs(df k1 k2 a b): check = df loc[(df[k1] == a) &amp; (df[k2] == b)] return None if check empty else check index[0] ```` Running it on your df: ````In [5]: pairs(df "A" "B" 4 6) Out[5]: 1 In [6]: pairs(df "B" "C" 5 2 ) Out[6]: 2 In [7]: print(pairs(df "A" "B" 2 1)) None ```` If you want all indexes use index tolist: ````def pairs(df k1 k2 a b): check = df loc[(df[k1] == a) &amp; (df[k2] == b)] return None if check empty else check index tolist() ````
When did agrarian societies start to arise in Southeast Europe?
7th millennium BC
using python to send an image over TCP to c# server I am working on a project that uses a python script to send an image taken with a webcam to a c# webserver using sockets I am currently sending strings to the server from the python client using code like: ````info = bytearray("Text to send" "UTF-8") socket send(info) ```` to send data to the c# server and it functions perfectly for text and numbers I run into an issue when trying to encode the data read from the bmp into "UTF-8" as trying this returns an error of not being able to encode certain characters into UTF-8 I was wondering if anyone had any idea of a way to encode this that c# will be able to recognize or if there is a better way of trying to implement this process I am all ears A couple of options I have come up with would be to 1 - use something like google drive to save the image to or an FTP server and then have the c# server retrieve it from there or 2 - create a packet system containing the RGB values and recreating the image from those pixel values on the server Thanks for your help <strong>EDIT:</strong> I have tried sending the file this way ```` data = bytearray("123456789" "UTF-8") file = open("image bmp" "rb") data = file read() socket send(data) ```` and was able to successfully retreive the string "123456789" but the data after this is garbage I have also implemented sending the size of the file before sending the data and that size number is retrieved fine but the img data saves as a black bmp <strong>Edit 2 :</strong> Here is the server and client code I am using to try and recreate the image using a memory stream I the client code is using the process mentioned by hcalves <strong>Client</strong> ````if __name__ == "__main__": sock = socket socket() sock connect(("localhost" 50839)) with open("image bmp" "rb") as fd: buf = fd read(1024) while (buf): sock send(buf) buf = fd read(1024) sock close() ```` <strong>Server</strong> ````Socket client = server AcceptSocket(); NetworkStream stream = new NetworkStream(client); byte[] imgData = new byte[1024]; MemoryStream memstream = new MemoryStream(); stream Read(imgData 0 1024); int counter = 0; while (stream DataAvailable) { memstream Write(imgData 0 1024); stream Read(imgData 0 1024); counter = counter 1024; } memstream Seek(0 SeekOrigin Begin); using (Stream file = File OpenWrite("img bmp")) { byte[] buffer = new byte[8*1024]; int len; while ((len = memstream Read(buffer 0 buffer Length)) &gt; 0) { file Write(buffer 0 len); } } ````
You should not need more than this recipe: ````import socket if __name__ == "__main__": sock = socket socket() sock connect(("localhost" 50839)) with open("data bin" "rb") as fd: buf = fd read(1024) while (buf): sock send(buf) buf = fd read(1024) sock close() ```` For practical reasons you can treat `str` objects (the result of `fd read`) as raw data you do not need any further crazy encoding Just iterate the file and send over the socket Test by running this server which just echoes to stdout with `python server py &gt; data2 bin`: ````import socket import sys if __name__ == "__main__": sock = socket socket() sock bind(("localhost" 50839)) sock listen(1) client address = sock accept() buf = client recv(1024) while (buf): sys stdout write(buf) buf = client recv(1024) client close() sock close() ```` A checksum shows the binary data is sent correctly: ````% md5 data bin data2 bin MD5 (data bin) = 8b3280072275badf3e53a6f7aae0b8be MD5 (data2 bin) = 8b3280072275badf3e53a6f7aae0b8be ```` Your C# server should be able to accept this data as is If it does not work it is because your server is expecting something in particular not just raw data
What is a common Attic-based dialect?
Koine Greek
Another term for tagma is?
thorax
How to use docker for deployment and development? Suppose I have a python web app I can create docker file for installing all dependencies But then (or before it if I have requirements for pip) I have like two different goals For deployment I can just download all source code from git through ssh or tarballs and it would work But for a developer machine it would not work I would need then work on actual source code I know that I can 'mirror' any folder/files from host machine to docker container So ok I can then remove all source code that was downloaded when image was built and 'mirror' current source code that exists in developer machine But if developer machine do not have any source code downloaded with git clone it would not work So what to do in that case? I mean except the obvious - clone all repos on developer machine and then 'mirror' it? So what is the right way to use docker not only for deployment but for the development also?
In the development case I would just use docker's -v option to mount the current working copy into a well known location in the container and provide a small wrapper shell script that automates firing up the app in the container
Which physicist quoted the Quran in his address after receiving the Nobel Prize?
Abdus Salam
What place is she tied for in songwriting credits?
third
What did the Legend of Zelda require in certain instances to kill enemies?
Famicom microphone
How can I handle static files with Python webapp2 in Heroku? I am now migrating my small Google App Engine app to Heroku platform I do not actually use Bigtable and `webapp2` reduces my migration costs a lot Now I am stuck on handling the static files Are there any good practices? If so lead me there please Thanks in advance <strong>EDIT</strong> Well I am now using `paste` for my WSGI server And `paste StaticURLParser()` should be what I need to implement static file handler However I have no idea how to integrate it with `webapp2 WSGIApplication()` Could anyone help me? Maybe I need to override `webapp2 RequestHandler` class to load `paste StaticURLParser()` properly;
````import os
import webapp2
from paste import httpserver

class StaticFileHandler(webapp2 RequestHandler):
    u"""Static file handler"""
    def __init__(self):
        # I guess I need to override something here to load
        # `paste StaticURLParser()` properly
        pass

app = webapp2 WSGIApplication([(r'/static' StaticFileHandler)] debug=True)

def main():
    port = int(os environ get('PORT' 5000))
    httpserver serve(app host='0 0 0 0' port=port)

if __name__ == '__main__':
    main()
````
Any help would be appreciated!
Below is how I got this working I am guessing that relying on a cascade app is not the most efficient option but it works well enough for my needs ````from paste urlparser import StaticURLParser from paste cascade import Cascade from paste import httpserver import webapp2 import socket class HelloWorld(webapp2 RequestHandler): def get(self): self response write('Hello cruel world ') # Create the main app web_app = webapp2 WSGIApplication([ ('/' HelloWorld) ]) # Create an app to serve static files # Choose a directory separate from your source (e g "static/") so it is not dl'able static_app = StaticURLParser("static/") # Create a cascade that looks for static files first then tries the webapp app = Cascade([static_app web_app]) def main(): httpserver serve(app host=socket gethostname() port='8080') if __name__ == '__main__': main() ````
What district is the South fourth of Boston located in?
the 8th district
Print output of external command in realtime and have it in a string at the same time in python For example: ````#!/usr/bin/env python3 # cmd py import time for i in range(10): print("Count %d" % i) time sleep(1) ```` <hr> ````#!/usr/bin/env python3 import subprocess # useCmd py p = subprocess Popen([' /cmd py'] stdout=subprocess PIPE) out err = p communicate() out = out decode() print(out) ```` In useCmd py I can print out the output of cmd py but only after it is finished outputting How can I print out it in realtime and still have it stored in a string? (sort of like tee in bash )
If you do not have to deal with stdin you could avoid using <a href="https://docs python org/2/library/subprocess html#subprocess Popen communicate" rel="nofollow">communicate</a> that is blocking and read directly from the process stdout until your stdout ends: ````p = subprocess Popen(['python' 'cmd py'] stdout=subprocess PIPE) # out err = p communicate() while True: line = p stdout readline() if line != '': print line else: break ```` <a href="http://stackoverflow com/questions/2804543/read-subprocess-stdout-line-by-line">related</a>
Find index of min value in a matrix I have a 2-Dim array containing the residual sum of squares of a given fit (unimportant here) ````RSS[i j] = np sum((spectrum_theo - sp_exp_int) ** 2) ```` I would like to find the matrix element with the minimum value AND its position (i j) in the matrix Find the minimum element is OK: ````RSS_min = RSS[RSS != 0] min() ```` but for the index I have tried: ````ij_min = np where(RSS == RSS_min) ```` which gives me: ````ij_min = (array([3]) array([20])) ```` I would like to obtain instead: ij_min = (3 20) If I try : ````ij_min = RSS argmin() ```` I obtain: ````ij_min = 0 ```` which is a wrong result Does it exist a function in Scipy or elsewhere that can do it? I have searched on the web but I have found answers leading only with 1-Dim arrays not 2- or N-Dim Thanks!
Does this work for you ````import numpy as np array = np random rand((1000)) reshape(10 10 10) print np array(np where(array == array min())) flatten() ```` in the case of multiple minimums you could try something like ````import numpy as np array = np array([[1 1 2 3] [1 1 4 5]]) print zip(*np where(array == array min())) ````
ValueError resizing an ndarray I have a small python script and I always run into an error: ````ValueError: cannot resize an array references or is referenced by another array in this way Use the resize function ```` <strong>Code:</strong> ````points = comp findall('Points') # comp is a parsed xml diffvals = np arange(10 dtype=float) diffvals resize(len(points) 8) ```` But there are two things I do not understand: - I only get this error when I use debugger - I have another script with identical code and everything works fine I checked this with debugger all values data types and so on are identical (except the memory addresses of course) I have no idea what I could possibly do to resolve this
You cannot resize NumPy arrays that share data with another array in-place using the `resize` method by default Instead you can create a new resized array using the `np resize` function: ````np resize(a new_shape) ```` or you can disable reference checking using: ````a resize(new_shape refcheck=False) ```` The likely reason you are only seeing it with a debugger is that the debugger references the array to e g print it Also the debugger may not store references to temporary arrays before you assign them into a variable which probably explains why the other script works
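As a quick illustration of both options (plain NumPy unrelated to the XML parsing in the question):
````import numpy as np

a = np.arange(10, dtype=float)
b = np.resize(a, (4, 8))          # new array; the data is repeated to fill the shape
a.resize((4, 8), refcheck=False)  # in place; extra entries are zero-filled
````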
Using Pandas to read CSV containing some missing values I use Python 2 7 with Anaconda I have a csv file:
````  action_type   action_detail                 secs_elapsed
0  data          similar_listings              255 0
1  data          similar_listings              183 0
2  click         change_trip_characteristics   175570 0
3  NaN           NaN                           86 0
4  data          wishlist_content_update       1535 0
````
The file contains some missing values and data types of each column are not necessarily similar I used Pandas to load this csv
````for chunk in pd read_csv('the_file_name csv' chunksize=1000 dtype={'action_type': str 'action_detail': str 'secs_elapsed': str})
````
For each chunk I found that the data types of some rows do not match the instructions I gave to pd read_csv Let me show an example
````chunk ix[3 'action_type']
Out[1]: nan

type(chunk ix[3 'action_type'])
Out[2]: float
````
My questions are - I want all data types to follow my instructions how could I do that? - I also want to replace these missing values I have used `pandas fillna()` but it has no effect I think it is due to the data type Could you please give any hints for this? Thank you
Use `converters` instead of `dtype`: ````for chunk in pd read_csv('the_file_name csv' chunksize=1000 delim_whitespace=True converters={'action_type': str 'action_detail': str 'secs_elapsed': str}): &gt;&gt;&gt; type(chunk ix[3 'action_type']) str ```` Also for your file example you need to set `delim_whitespace=True` Unless the real file is comma separated
Unique User Profiles via URL Alright i have asked this question in the past However when I asked it I had a limited knowledge of python and app engine This is most likely why I failed to implement this in the past I am attempting to get each profile to be unique based on the username Any who before I dive in i will throw my profile handler up that only deals with currently logged in users at the moment: ````class Profile(MainHandler): def get(self): if self user: current_user = str(self user name) key = '' imgs = db GqlQuery("select * from Profile_Images WHERE name =:1" current_user) team_name = db GqlQuery("select * from Teams WHERE name =:1" current_user) team_images = db GqlQuery("select * from Teamimg WHERE user =:1" current_user) for clan in team_name: name1 = clan team_name_anycase for image in team_images: team_imagee = image key() if self user: for img in imgs: key = img key() self render('profile html' team_img = team_imagee team_name = name1 profile_image = key username = self user name email = self user email firstname = self user first_name last_name = self user last_name country = self user country) else: self redirect('/register') ```` This handler is mapped via ('profile' Profile) Any who what I understand thus far is that I need to pass the username through the URL and into the profile handler which then uses that username as identifier for pulling data from the db What I saw posted on stackoverflow was `('profile/<profile_id&gt;' Profile)` And I have been messing around with that for a bit but it seems that the trailing username (ex localhost:8080/profile/admin) is getting a 404 error I would assume that either my mapping is failing or the variable (i e the username) is failing to interact with the profile handler Could someone please help me out here? 
I was sure I had it and it failed <strong>YAML file:</strong> ````application: suitegamer version: 1 runtime: python27 api_version: 1 threadsafe: true handlers: - url: /static static_dir: static - url: /img static_dir: img - url: / * script: main app libraries: - name: jinja2 version: latest - name: PIL version: "1 1 7" ```` <strong>MainHandler:</strong> ````class MainPage(MainHandler): def get(self): if self user: self render('index html' username = self user name firstname = self user first_name) else: self render('index html') def post(self): username = self request get('username') lower() password = self request get('password') you = User login(username password) if you: self login(you) self redirect('/news_page') else: message = 'Invalid login ' self render('login html' error = message) ```` <strong>Mapping:</strong> ````app = webapp2 WSGIApplication([('/' MainPage) ('/logout' Logout) ('/img' GetImage) ('/register' Register) ('/welcome' Welcome) ('/news_page' News_Page) ('/profile' Profile) ('/edit_profile' Edit_Profile) ('/change_profile_image' Change_Profile_Image) ('/found_a_team' Found_Team) ('/team_main' Team_Main) ('/edit_team_main' Edit_Team_Main) ('/edit_team_image' Edit_Team_Image)] debug=True) ```` <strong>Testing Profile Handler:</strong> ````class Profile(MainHandler): def get(self profile_id): profile_id = 'admin' if self user: key = '' imgs = db GqlQuery("select * from Profile_Images WHERE name =:1" profile_id) team_name = db GqlQuery("select * from Teams WHERE name =:1" profile_id) team_images = db GqlQuery("select * from Teamimg WHERE user =:1" profile_id) for clan in team_name: name1 = clan team_name_anycase for image in team_images: team_imagee = image key() if self user: for img in imgs: key = img key() self render('profile html' team_img = team_imagee team_name = name1 profile_image = key username = self user name email = self user email firstname = self user first_name last_name = self user last_name country = self user country) else: self redirect('/register') ```` Image Handler: ````class GetImage(MainHandler): def get(self): img = db get(self request get("entity_id")) self response out write(img image) ```` Maps to ('/img' GetImage)
the mapping should look like this: ````('/profile/( *)/?' Profile) ```` and your Handlers get function should take a parameter which will be the profile id/name ````class Profile(MainHandler): def get(self profile_id): # do something with the profile id ```` i think you should give the docs a deeper look for example here: <a href="https://developers google com/appengine/docs/python/tools/webapp/running" rel="nofollow">https://developers google com/appengine/docs/python/tools/webapp/running</a>
Beyonce tied who for most number one singles by a female?
Mariah Carey
What is more important for a textual critic: quality or quantity?
Readings are approved or rejected by reason of the quality, and not the number, of their supporting witnesses
Python AttributeError: NoneType object has no attribute 'close' I am learning python and I wrote a script that copies the content of one text file to another Here is my code
````from sys import argv

out_file = open(argv[2] 'w') write(open(argv[1]) read())
out_file close()
````
I get the AttributeError listed in the title Why is it that when I call the write method on open(argv[2] 'w') the out_file is not assigned a File type? Thank you in advance
`out_file` is being assigned to the return value of the `write` method which is `None` Break the statement into two:
````out_file = open(argv[2] 'w')
out_file write(open(argv[1]) read())
out_file close()
````
And really it would be preferable to do this:
````with open(argv[1]) as in_file open(argv[2] 'w') as out_file:
    out_file write(in_file read())
````
Using the `with` statement means Python will automatically close `in_file` and `out_file` when execution leaves the `with` block
python mpi and shell subprocess: orte_error_log I have written a small code in `python 2 7` for launching 4 independent processes on the shell via `subprocess` using the library `mpi4py` I am getting ORTE_ERROR_LOG and I would like to understand where it is happening and why This is my code:
````#!/usr/bin/python
import subprocess
import re
import sys
from mpi4py import MPI

def main():
    root='base'
    comm = MPI COMM_WORLD
    if comm rank == 0:
        job = [root+str(i) for i in range(4)]
    else:
        job = None
    job = comm scatter(job root=0)
    cmd=" / /montepython/montepython/MontePython py -conf /config/default conf -p /config/XXXX param -o /chains/XXXX -N 10000 &gt; XXXX log"
    cmd_job = re sub(r"XXXX" job cmd)
    subprocess check_call(cmd_job shell=True)
    return

if __name__ == '__main__':
    main()
````
I am running with the command:
````mpirun -np 4 /run py
````
This is the error message that I get:
````[localhost:51087] [[51455 0] 0] ORTE_ERROR_LOG: Not found in file base/odls_base_default_fns c at line 1762
[localhost:51087] [[51455 0] 0] ORTE_ERROR_LOG: Not found in file orted/orted_comm c at line 916
[localhost:51087] [[51455 0] 0] ORTE_ERROR_LOG: Not found in file base/odls_base_default_fns c at line 1762
[localhost:51087] [[51455 0] 0] ORTE_ERROR_LOG: Not found in file orted/orted_comm c at line 916
--------------------------------------------------------------------------
A system call failed during shared memory initialization that should not have It is likely that your MPI job will now either abort or experience performance degradation
  Local host: localhost
  System call: open(2)
  Error: No such file or directory (errno 2)
--------------------------------------------------------------------------
````
I cannot understand where the error is happening `MontePython` by itself should not use `mpi` because it should be serial
<hr>
I asked for help to the openmpi user forum They told me that the problem is likely due to a bad interaction between the subprocess and MPI implementation I should change from `subprocess` to `spawn` but this function is not very well documented I am unsure how to proceed
<pre class="lang-none prettyprint-override">`[localhost:51087] [[51455 0] 0] ORTE_ERROR_LOG: Not found in file base/odls_base_default_fns c at line 1762 [localhost:51087] [[51455 0] 0] ORTE_ERROR_LOG: Not found in file orted/orted_comm c at line 916 [localhost:51087] [[51455 0] 0] ORTE_ERROR_LOG: Not found in file base/odls_base_default_fns c at line 1762 [localhost:51087] [[51455 0] 0] ORTE_ERROR_LOG: Not found in file orted/orted_comm c at line 916 ```` Those are occurring in the part of the ORTE framework that is responsible for launching and controlling the MPI processes It could be because there is not enough space in the temporary directory where Open MPI is storing the session information <pre class="lang-none prettyprint-override">`-------------------------------------------------------------------------- A system call failed during shared memory initialization that should not have It is likely that your MPI job will now either abort or experience performance degradation Local host: localhost System call: open(2) Error: No such file or directory (errno 2) -------------------------------------------------------------------------- ```` This error is coming from the module that implements shared-memory intranode communication in Open MPI The because is most likely that `tmpfs` is mounted in some non-standard place or is not mounted at all Without the shared-memory module the library will either use the slower TCP/IP module (if enabled which it is by default) or crash the application as no other means to communicate between will remain Both error messages have probably nothing to do with your program Try something simpler e g the canonical "Hello World!" example in order to confirm that Open MPI is functioning correctly
What event was organized by Chicago's Department of Cultural Affairs to celebrate Oliver music?
null
How to specify a full click in Python Tkinter The following python line will bind the method "click" to the event when the user <strong>presses</strong> the mouse button while the pointer is on the widget; no matter where the pointer is when she releases the button ````self bind('<Button-1&gt;' self click) ```` If I use "ButtonRelease" instead of "Button" in the code it seems that the method "click" will be called for the widget <strong>on which the mouse was pressed</strong> after the button release; no matter where you release it 1- Is not there a neat way to make it call the bound method only if the mouse button was released on my widget; no matter where it was pressed? 2- Is not there neat way to tell it to react only in case of a full click (press and release both on the same widget)?
The tkinter documentation does provide you info on that: <a href="http://www pythonware com/library/tkinter/introduction/events-and-bindings htm" rel="nofollow">http://www pythonware com/library/tkinter/introduction/events-and-bindings htm</a> You can do a binding on ````<ButtonRelease-1&gt; ````
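A small sketch of that binding; note that Tk delivers the release event to the widget that received the press so if you only want a "full" click you can additionally check that the pointer is still over the widget when the button comes up:
````import Tkinter as tk  # `import tkinter as tk` on Python 3

def click(event):
    w = event.widget
    # event.x / event.y are relative to the widget, so a full click means the
    # release landed within the widget's own width and height
    if 0 <= event.x < w.winfo_width() and 0 <= event.y < w.winfo_height():
        print("full click on %s" % w)

root = tk.Tk()
label = tk.Label(root, text="click me")
label.pack(padx=40, pady=40)
label.bind('<ButtonRelease-1>', click)
root.mainloop()
````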
Python - Class and function Doing a class exercise and finished with the rest except this one Any guidance is appreciated I have only included the part of the question where I am stuck to keep it short I have also attached my working Question as follows:
<blockquote>
Create a class with 1 variable in it holding its own properties Provide the following 3 methods:
getvariable1() - use the return keyword to return the value of property 1
setvariable1() - This should allow a new value to be specified for property 1 - additional parameter needed to accept input
printerfun() - to print values of the variables for the object
Create your own object of the class and call get &amp; set methods for the object created Use printerfun() method to check if the code works
</blockquote>
My working:
````class animal:
    horns = 2

    def printerfun(self):
        print getHorns()

    def getHorns(self): #do not get where I should call this
        return self horns

    def setHorns(horns):
        self horns = horns

animal_1 = animal()
F1 = raw_input('Please enter number of horns: ')
setHorns(F1)
````
Not sure what the question is but anyway You should write an `__init__` member function to create the initial member variables:
````class animal:
    def __init__(self):
        self horns = 2
````
Your code creates a <em>class variable</em> not a normal member variable Then change the horns with:
````animal_1 setHorns(F1)
````
Your code does not say which animal you want to change the variable to
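Putting those pieces together one possible version of the whole exercise could look like this (kept in Python 2 since the question uses `raw_input`; the `int()` conversion is an extra touch so the horn count is stored as a number):
````class animal(object):
    def __init__(self):
        self.horns = 2                 # property 1 with a default value

    def getHorns(self):
        return self.horns

    def setHorns(self, horns):
        self.horns = horns

    def printerfun(self):
        print(self.getHorns())

animal_1 = animal()
animal_1.setHorns(int(raw_input('Please enter number of horns: ')))
animal_1.printerfun()
````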
Scikit grid searching the parameters (not hyper parameters) Scikit's <a href="http://scikit-learn org/stable/modules/generated/sklearn grid_search GridSearchCV html" rel="nofollow">GridSearch</a> is perfect when I want to find the best hyper parameters I want to use the same philosophy to find the best set of parameters for a linear regression using an objective function across multiple folds How can I optimize the parameters (literally the betas and intercept) of a linear regression on multiple folds? <strong>Use case (simplified):</strong> I have a dataset that has three years worth of data I want to define what is the best linear regression that is "ok on all years" If I fit the linear regression on the full dataset I will get the one that reduces the least square error on all data Effectively I will minimize the error of <a href="http://i stack imgur com/iv6GV png" rel="nofollow"><img src="http://i stack imgur com/iv6GV png" alt="enter image description here"></a> However this (`min(error)`) is not my objective I can get a good result on this objective simply because the classifier did well on year 1 and 2 and that was good enough to compensate for year 3 What I effectively want to minimize is something along the lines of `min(max(error_year_1 error_year_2 error_year_3))` One hacky way about this is to make a function f(b0 b1 b2 year1 year2 year3) which returns the max of the error and then minimize that function using scipy <strong>Actual question:</strong> is there a way to do this in scikit?
It seems to me that scikit only offers direct api access to the <a href="http://scikit-learn org/stable/modules/model_evaluation html" rel="nofollow">scoring</a> which I believe will only see one fold at a time Not very beautiful but I think your best option will be to go over the `grid_scores_` `cv_validation_scores` of the `GridSearchCV` and manually fetch the set of params that minimized the max of the loss function you choose Do not think it saves the classifier though you will probably have to re-train another logit if you want to use it to make predictions
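A rough sketch of that manual pass assuming `grid` is an already-fitted `GridSearchCV` whose scorer returns the negative of the per-fold error (scikit's greater-is-better convention) so negating recovers the raw errors:
````import numpy as np

best_params = None
best_worst_error = np.inf
for candidate in grid.grid_scores_:  # one entry per parameter combination
    fold_errors = -np.asarray(candidate.cv_validation_scores)
    worst = fold_errors.max()        # the worst fold for this candidate
    if worst < best_worst_error:
        best_worst_error = worst
        best_params = candidate.parameters

print(best_params, best_worst_error)
````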
does google app engine display unicode differently in StringProperty v StringListProperty objs? I have a db StringProperty() mRegion that is set to some Korean text I see in my Dashboard that the value is visibly in Korean like this: 한국 : 충청남도 However when I take this field and add it into a string list property (db StringListProperty()) I end up with something like this: \ud55c\uad6d : \ucda9\uccad\ub0a8\ub3c4 I am having issues displaying this text on my client when I have this string list property value output to the client so it makes me wonder if something is wrong on the server end when the value is stored (as I would expect it to be readable Korean like the StringProperty) Does anyone know where I might be going wrong with this or if this second display is simply normal in string list objects and the problem is likely on my client end? Thanks Update with more detail of the issues: My client is an iphone app Basically I use the iPhone to get the user's gps location info using the reverse geocoder api I send this to app engine and save it This part appears to be working because for Korea I see the Korean characters The region name is obtained in summary like this: ````region = self request get('region') entry init(region) self mRegion = region ```` pretty straightforward (and it works) Where it breaks down is when I retrieve that data and send it back to the client To summarize: ````query = db GqlQuery("SELECT * FROM RegionData WHERE mLatitudeCenter &gt;= :1 and mLatitudeCenter <= :2" latmin latmax) for entry in query: output = entry mRegion ' ' self response out write(output) ```` When I take this and put it on a UILabel in the client it is garbled Also when I take the garbled value in the client and send it <them>back</them> to the server to look up a region it fails so that suggests to me that instead of sending the Korean text maybe it is transmitting the repr() characters or something If as you say it is just a matter of presentation and not the inherent data itself then perhaps it is something to do with the system font I am using to try to display this data? I had thought that somewhere I was missing the right call to encode() or decode() but not sure
It is quite possible that the admin interface displays the two differently yes In the latter case it is clearly doing a repr(s) while in the former it is just printing the string The admin interface's interface does not affect how your code works though - both Strings and StringLists are stored the same way in the datastore and will come back as Unicode strings for you to deal with as you wish I highly recommend reading <a href="http://www joelonsoftware com/articles/Unicode html" rel="nofollow">this Joel on Software post about unicode</a> In short you are dealing with two kinds of things: Binary data and unicode characters To confuse you Python exposes these both as strings - 'unicode strings' and 'raw strings' respectively but you should only treat the former as actual strings The datastore with its StringListProperty and StringProperty stores and returns Unicode strings Your framework should also be giving you Unicode strings and accepting Unicode strings back but some poorly designed frameworks do not What you need to do is check that you are using Unicode strings everywhere you deal with text that you explicitly call encode() to convert a Unicode string to a raw string and decode() to convert a raw string to a unicode string and that the character encoding on the returned response is set correctly and you are encoding your strings using the same encoding How you do that will depend on your framework Once you have done that if you still have trouble I would suggest writing some simple unit tests - storing data to the datastore and retrieving it and manipulating it then checking it equals what you expect - to pin down where the issue is
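A tiny Python 2 illustration of that round trip using the same Korean text from the question; the byte string is what travels over the wire and the unicode string is what the datastore hands back:
````raw = '\xed\x95\x9c\xea\xb5\xad'  # UTF-8 encoded bytes for the text "한국"
text = raw.decode('utf-8')        # raw string -> unicode string; repr() shows u'\ud55c\uad6d'
back = text.encode('utf-8')       # unicode string -> raw string, e.g. for the HTTP response
assert back == raw
````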
Does PyCharm 5 automate resolving directories? Does PyCharm 5 support auto-completion for resolving directories in HTML? If yes how can I use this feature? I need to resolve the unresolved references in the HTML of my project
If you mark the directories in your project as source root or template root that helps PyCharm know where to look Just right-click the folders in your project tree and check the 'Mark directory as' submenu If you are writing something in a framework like Django you should also ensure you set up the specific settings for that because it will help you out &amp; enable more features Here is the Pycharm docs for <a href="https://www jetbrains com/pycharm/help/configuring-content-roots html?search=source%20root" rel="nofollow">content roots</a> And this is the docs on <a href="https://www jetbrains com/pycharm/help/configuring-folders-within-a-content-root html?search=source%20root" rel="nofollow">marking directories</a> via your project structure that I have mentioned
Adding noise to an image in increments in Python Hi I am trying to add noise to a QR image that I create this is my code so far: ````import numpy import scipy import scipy misc import sys sys path append('M:/PythonMods') import qrcode if __name__ == "__main__": myqr = qrcode make("randomtexxxxxxxxxt") #myqr show() myqr save("M:/COMPUTINGSEMESTER2/myqr4 png") filename = 'myqr4 png' imagea = (scipy misc imread(filename)) astype(float) poissonNoise = numpy random poisson(50 imagea shape) astype(float) noisyImage = imagea poissonNoise ```` Please could someone advise me how I get it to show the noisy image? and how to save the image so I can test it? Any help really appreciated edit I tried adding this code to the program to get it to show the image: ````from PIL import Image myimage = Image open(noisyImage) myimage load() ```` But then got this error: ````Traceback (most recent call last): File "M:\COMPUTINGSEMESTER2\untitled4 py" line 28 in <module&gt; myimage = Image open(noisyImage) File "Q:\PythonXY273_MaPS-T v01\Python27\lib\site-packages\PIL\Image py" line 1958 in open prefix = fp read(16) AttributeError: 'numpy ndarray' object has no attribute 'read' ````
````scipy misc imsave('NoisyImage jpg' noisyImage) ````
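To also display and save the result here is a short sketch assuming `noisyImage` is the array computed in the question; PIL's `Image.fromarray` works on the array itself whereas `Image.open` in the edit expects a filename which is why it raised an error:
````import numpy
import scipy.misc
from PIL import Image

noisy = numpy.clip(noisyImage, 0, 255).astype(numpy.uint8)  # keep values in displayable range
scipy.misc.imsave('NoisyImage.jpg', noisy)                  # save to disk for later tests
Image.fromarray(noisy).show()                               # show the in-memory result
````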
Cannot open file with PyFITS I have got some ` fit` files containing images from a CCD camera and I cannot seem to open them using PyFITS I am a complete newbie with PyFITS so I do not know what (if any) options I have Here is what I am trying: ````import pyfits hdulist = pyfits open('apex5_1_90s fit') ```` Which returns the error: ````Traceback (most recent call last): File "\\uol le ac uk\root\staff\home\l\lvh8\Desktop Files\Prototype Data\spextract py" line 3 in <module&gt; hdulist = pyfits open('apex5_1_90s fit')# ignore_missing_end=True) File "C:\Python27\lib\site-packages\pyfits\hdu\hdulist py" line 118 in fitsopen return HDUList fromfile(name mode memmap save_backup **kwargs) File "C:\Python27\lib\site-packages\pyfits\hdu\hdulist py" line 250 in fromfile save_backup=save_backup **kwargs) File "C:\Python27\lib\site-packages\pyfits\hdu\hdulist py" line 803 in _readfrom hdu = _BaseHDU readfrom(ffo **kwargs) File "C:\Python27\lib\site-packages\pyfits\hdu\base py" line 299 in readfrom hdr = Header fromfile(fileobj endcard=not ignore_missing_end) File "C:\Python27\lib\site-packages\pyfits\header py" line 476 in fromfile raise IOError('Header missing END card ') IOError: Header missing END card ```` If on the other hand I try: ````import pyfits hdulist = pyfits open('apex5_1_90s fit' ignore_missing_end=True) ```` I get told: ````Error validating header for HDU #1 (note: PyFITS uses zero-based indexing) Header size is not multiple of 2880: 1920 There may be extra bytes after the last HDU or the file is corrupted ```` I do not really know what this means and I have no idea what I can do about it so any help would be greatly appreciated!
It seems that your file is corrupted or has some issues with its header Are you sure you have a valid FITS file? Based on the PyFITS documentation:
<blockquote>
Problem with the FITS format is that as old as it is there are many conventions that appear in files from certain sources that do not meet the FITS standard And yet they are so common-place that it is necessary to support them in any FITS readers CONTINUE cards are one such example ( )
If PyFITS is having trouble opening a file a good way to rule out whether not the problem is with PyFITS is to run the file through the <a href="http://heasarc gsfc nasa gov/docs/software/ftools/fitsverify/" rel="nofollow">fitsverify</a> If the file is malformatted fitsverify will output errors and warnings If fitsverify confirms no problems with a file and PyFITS is still having trouble opening it ( ) then it’s likely there is a bug in PyFITS
</blockquote>