Dataset columns: qid (int64, 469 to 74.7M), question (string, 36 to 37.8k chars), date (string, 10 chars), metadata (list), response_j (string, 5 to 31.5k chars), response_k (string, 10 to 31.6k chars).
42,512,141
I have written the following simple program which should print out all events detected by `pygame.event.get()`. ``` import pygame, sys from pygame.locals import * display = pygame.display.set_mode((300, 300)) pygame.init() while True: for event in pygame.event.get(): print(event) if event.type == QUIT: pygame.quit() sys.exit() ``` But when I run this I only have mouse events, and a KEYDOWN and KEYUP event when I hit caps-lock twice, being printed in terminal. When I use any other keys they only print to terminal as if I was writing in the terminal window. ``` <Event(4-MouseMotion {'pos': (102, 15), 'buttons': (0, 0, 0), 'rel': (-197, -284)})> <Event(2-KeyDown {'unicode': '', 'scancode': 0, 'key': 301, 'm od': 8192})> <Event(3-KeyUp {'key': 301, 'scancode': 0, 'mod': 0})> wasd ``` I am using Mac OSX 10.12.1, python 3.5.2, and pygame 1.9.4.dev0. I assume I'm missing something straight forward, but I found nothing similar online. Any help would be much appreciated.
2017/02/28
[ "https://Stackoverflow.com/questions/42512141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4191155/" ]
For anyone still struggling with this, the issue is documented on GitHub and has been fixed: <https://github.com/pygame/pygame/issues/203> Just uninstall pygame from your venv, then install the version below. ``` pip install -U https://github.com/pygame/pygame/archive/master.zip ``` I just tried this and can finally use key events in pygame.
First, I doubt this is the case, but pygame only registers input when the pygame window has focus, so keep that in mind. I don't have a direct answer to your question, sorry, but I do have my own workaround. Because I dislike the normal event system, I use `pygame.key.get_pressed()` (<https://www.pygame.org/docs/ref/key.html>), just because I think it looks better and is more readable. This is probably just a habit of mine, though.
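For reference, a minimal sketch of the polling approach described above (my own illustration, assuming a standard pygame window; the event queue still has to be serviced each frame so `get_pressed()` sees fresh state):

```
import pygame

pygame.init()
screen = pygame.display.set_mode((300, 300))

running = True
while running:
    # Keep the event queue alive so the key state stays current.
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()  # snapshot of every key's up/down state
    if keys[pygame.K_w]:
        print("w is held down")

pygame.quit()
```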
42,512,141
I have written the following simple program which should print out all events detected by `pygame.event.get()`. ``` import pygame, sys from pygame.locals import * display = pygame.display.set_mode((300, 300)) pygame.init() while True: for event in pygame.event.get(): print(event) if event.type == QUIT: pygame.quit() sys.exit() ``` But when I run this I only have mouse events, and a KEYDOWN and KEYUP event when I hit caps-lock twice, being printed in terminal. When I use any other keys they only print to terminal as if I was writing in the terminal window. ``` <Event(4-MouseMotion {'pos': (102, 15), 'buttons': (0, 0, 0), 'rel': (-197, -284)})> <Event(2-KeyDown {'unicode': '', 'scancode': 0, 'key': 301, 'm od': 8192})> <Event(3-KeyUp {'key': 301, 'scancode': 0, 'mod': 0})> wasd ``` I am using Mac OSX 10.12.1, python 3.5.2, and pygame 1.9.4.dev0. I assume I'm missing something straight forward, but I found nothing similar online. Any help would be much appreciated.
2017/02/28
[ "https://Stackoverflow.com/questions/42512141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4191155/" ]
For anyone still struggling with this, the issue is documented on GitHub and has been fixed: <https://github.com/pygame/pygame/issues/203> Just uninstall pygame from your venv, then install the version below. ``` pip install -U https://github.com/pygame/pygame/archive/master.zip ``` I just tried this and can finally use key events in pygame.
If you're working in a virtualenv, don't use the `virtualenv` command. Use `python3 -m venv`. Then install pygame (*e.g.* `pip3 install hg+http://bitbucket.org/pygame/pygame`). See [this thread](https://bitbucket.org/pygame/pygame/issues/203/window-does-not-get-focus-on-os-x-with#comment-32656108) for more details on this issue.
1,206,215
In python I can use os.getpid() and os.name() to get information about the Process ID and OS name. Is there something similar in C++? I tried GetProcessId() but was told that this is undeclared... I am using Cygwin under windows. Thank you
2009/07/30
[ "https://Stackoverflow.com/questions/1206215", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Standard C++ has no such functionality. You need to use OS specific features to get this. In your case, you need to look up POSIX/UNIX functions such as [getpid()](http://www.opengroup.org/onlinepubs/009695399/functions/getpid.html). Note that if you actually do want to call the Windows functions to get process ID etc, you should be using a C++ environment like [MinGW](http://www.mingw.org/), which allows you to build native Windows applications, rather than Cygwin, which is more aimed at porting POSIX apps to Windows.
To use [GetProcessId](http://msdn.microsoft.com/en-us/library/ms683215(VS.85).aspx) you need to include Windows.h and link to Kernel32.lib. See [Process and Thread Functions](http://msdn.microsoft.com/en-us/library/ms684847(VS.85).aspx) for more information. I use [MSYS/mingw](http://www.mingw.org/) instead of [cygwin](http://www.cygwin.com/). So, you may need the [w32api](http://cygwin.com/packages/w32api/) package installed.
1,206,215
In python I can use os.getpid() and os.name() to get information about the Process ID and OS name. Is there something similar in C++? I tried GetProcessId() but was told that this is undeclared... I am using Cygwin under windows. Thank you
2009/07/30
[ "https://Stackoverflow.com/questions/1206215", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Standard C++ has no such functionality. You need to use OS specific features to get this. In your case, you need to look up POSIX/UNIX functions such as [getpid()](http://www.opengroup.org/onlinepubs/009695399/functions/getpid.html). Note that if you actually do want to call the Windows functions to get process ID etc, you should be using a C++ environment like [MinGW](http://www.mingw.org/), which allows you to build native Windows applications, rather than Cygwin, which is more aimed at porting POSIX apps to Windows.
I recommend Hart's book "Win32 System Programming". Great discussion about how to manage processes, memory, files etc in Kernel32, if you're just starting to look at Windows programming. You can also get a free version of Visual Studio (<http://www.microsoft.com/express/>).
1,206,215
In python I can use os.getpid() and os.name() to get information about the Process ID and OS name. Is there something similar in C++? I tried GetProcessId() but was told that this is undeclared... I am using Cygwin under windows. Thank you
2009/07/30
[ "https://Stackoverflow.com/questions/1206215", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
To use [GetProcessId](http://msdn.microsoft.com/en-us/library/ms683215(VS.85).aspx) you need to include Windows.h and link to Kernel32.lib. See [Process and Thread Functions](http://msdn.microsoft.com/en-us/library/ms684847(VS.85).aspx) for more information. I use [MSYS/mingw](http://www.mingw.org/) instead of [cygwin](http://www.cygwin.com/). So, you may need the [w32api](http://cygwin.com/packages/w32api/) package installed.
I recommend Hart's book "Win32 System Programming". Great discussion about how to manage processes, memory, files etc in Kernel32, if you're just starting to look at Windows programming. You can also get a free version of Visual Studio (<http://www.microsoft.com/express/>).
24,435,697
Python 3.4: From reading some other SO questions it seems that if a `moduleName.py` file is outside of your current directory, if you want to import it you must add it to the path with `sys.path.insert(0, '/path/to/application/app/folder')`, otherwise an `import moduelName` statement results in this error: ``` ImportError: No module named moduleName ``` Does this imply that python automatically adds all other .py files in the same directory to the path? What's going on underneath the surface that allows you to import local files without appending the Python's path? And what does an `__init__.py` file do under the surface?
2014/06/26
[ "https://Stackoverflow.com/questions/24435697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3462076/" ]
Python adds the directory where the initial script resides as first item to [`sys.path`](https://docs.python.org/3/library/sys.html#sys.path): > > As initialized upon program startup, the first item of this list, `path[0]`, is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), `path[0]` is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of `PYTHONPATH`. > > > So what goes on underneath the surface is that Python appends (or rather, prepends) the 'local' directory to `sys.path` *for you*. This simply means that the directory the script lives in is the first port of call when searching for a module. `__init__.py` has nothing to do with all this. `__init__.py` is needed to make a directory a [(regular) package](https://docs.python.org/3/reference/import.html#packages); any such directory that is found on the Python module search path is treated as a module.
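As a quick illustration of the quoted behaviour (a small sketch, with a hypothetical file name `show_path.py`): save the snippet and run it with `python show_path.py`; the script's own directory shows up first on `sys.path`, which is why a sibling module imports without any path manipulation:

```
# show_path.py -- run as:  python show_path.py
import sys

print(sys.path[0])   # the directory containing this script
print(sys.path[:3])  # the script directory comes before PYTHONPATH entries
```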
I faced the same problem when running a Python script from IntelliJ IDEA. There is a script in ``` C:\Users\user\IdeaProjects\Meshtastic-python\meshtastic ``` It uses ``` from meshtastic import portnums_pb2, channel_pb2, config_pb2 ``` and fails. I realized that it looks for ``` C:\Users\user\IdeaProjects\Meshtastic-python\meshtastic\meshtastic ``` so I changed the **working directory** of this script in the **Run Configuration** from ``` C:\Users\user\IdeaProjects\Meshtastic-python\meshtastic ``` to ``` C:\Users\user\IdeaProjects\Meshtastic-python ``` so that it can find this module **underneath the working directory** during execution: ``` C:\Users\user\IdeaProjects\Meshtastic-python\meshtastic ```
29,333,578
From work i got a job to make a python script which will click for testing the product of a "secret application" for windows 8.1. The problem is that i can make it move the cursor but it can't click and i searched for win32 documentation on the internet but with no luck. Anyone who had this problem? This is the click code ``` def click(x,y): win32api.SetCursorPos((x, y)) #Left click win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,x,y,0,0) time.sleep(0.05) win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,x,y,0,0) ```
2015/03/29
[ "https://Stackoverflow.com/questions/29333578", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2984950/" ]
`body` is a string. You have to parse it as JSON first: ``` res.json(JSON.parse(body)._links.self); ```
This question is a little old, but the following also seems helpful. In request, you can pass `json: true` and the request library returns you the parsed JSON object. Replace the following line, > > > ``` > request('https://api.twitch.tv/kraken/streams/' + req.params.user, function ( error, response, body) { > > ``` > > with the one below > > > ``` > request({'url':`https://api.twitch.tv/kraken/streams/${req.params.user}`, 'json': true }, function ( error, response, body) { > > ``` > >
12,578,943
I'm writing a program to get a video feed from a web cam and display it in a Tkinter window. I wrote the following code which I ran on Ubuntu 12.04. ``` #!/usr/bin/env python import sys, os, gobject from Tkinter import * import pygst pygst.require("0.10") import gst # Goto GUI Class class Prototype(Frame): def __init__(self, parent): gobject.threads_init() Frame.__init__(self, parent) # Parent Object self.parent = parent self.parent.title("WebCam") self.parent.geometry("640x560+0+0") self.parent.resizable(width=FALSE, height=FALSE) # Video Box self.movie_window = Canvas(self, width=640, height=480, bg="black") self.movie_window.pack(side=TOP, expand=YES, fill=BOTH) # Buttons Box self.ButtonBox = Frame(self, relief=RAISED, borderwidth=1) self.ButtonBox.pack(side=BOTTOM, expand=YES, fill=BOTH) self.closeButton = Button(self.ButtonBox, text="Close", command=self.quit) self.closeButton.pack(side=RIGHT, padx=5, pady=5) gotoButton = Button(self.ButtonBox, text="Start", command=self.start_stop) gotoButton.pack(side=RIGHT, padx=5, pady=5) # Set up the gstreamer pipeline self.player = gst.parse_launch ("v4l2src ! video/x-raw-yuv,width=640,height=480 ! ffmpegcolorspace ! xvimagesink") bus = self.player.get_bus() bus.add_signal_watch() bus.enable_sync_message_emission() bus.connect("message", self.on_message) bus.connect("sync-message::element", self.on_sync_message) def start_stop(self): if self.gotoButton["text"] == "Start": self.gotoButton["text"] = "Stop" self.player.set_state(gst.STATE_PLAYING) else: self.player.set_state(gst.STATE_NULL) self.gotoButton["text"] = "Start" def on_message(self, bus, message): t = message.type if t == gst.MESSAGE_EOS: self.player.set_state(gst.STATE_NULL) self.button.set_label("Start") elif t == gst.MESSAGE_ERROR: err, debug = message.parse_error() print "Error: %s" % err, debug self.player.set_state(gst.STATE_NULL) self.button.set_label("Start") def on_sync_message(self, bus, message): if message.structure is None: return message_name = message.structure.get_name() if message_name == "prepare-xwindow-id": # Assign the viewport imagesink = message.src imagesink.set_property("force-aspect-ratio", True) imagesink.set_xwindow_id(self.movie_window.window.xid) def main(): root = Tk() app = Prototype(root) app.pack(expand=YES, fill=BOTH) root.mainloop() if __name__ == '__main__': main() ``` My problem is neither the ButtonBox nor the VideoBox show in the output window when the program is running. How can I fix this? I did look at other sites for possible solutions (for instance <http://pygstdocs.berlios.de/#projects> or [Way to play video files in Tkinter?](https://stackoverflow.com/questions/7227162/way-to-play-video-files-in-tkinter)) however they have very limited information on what their code means. After making the suggested alteration and a few others to get the buttons working, I realize that the display window is different from the main window when I run the program. Is there a way to get the video to display in the main window when using tkinter??
2012/09/25
[ "https://Stackoverflow.com/questions/12578943", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1696565/" ]
It looks like your Prototype class is a Tkinter Frame but you don't seem to have packed/placed it anywhere. ``` ... app = Prototype(root) app.pack(expand=YES, fill=BOTH) root.mainloop() ```
I finally came up with a solution to the question. I realised that the error was in the line `imagesink.set_xwindow_id(self.movie_window.window.xid)`, which I changed to `imagesink.set_xwindow_id(self.movie_window.winfo_id())`. The mistake is that I had used `window.xid`, which is an attribute of GTK widgets. In Tkinter, `winfo_id()` returns the window identifier for Tkinter widgets. For more information see <http://effbot.org/tkinterbook/widget.htm#Tkinter.Widget.winfo_id-method>
28,422,787
Using python 3, how would you change this code to print the sum of all numbers from 1 to 20? ``` n = 20 i=0 sum = 0 for i in range (1,n+1): sum =+ i i = i+1 print(sum) ```
2015/02/10
[ "https://Stackoverflow.com/questions/28422787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4548170/" ]
The simplest way I can think of is: ``` sum(range(1, 21)) # includes 20 ``` You can also use a loop: ``` s = 0 for i in range(21): s += i ```
``` n = 20 # this isn't needed, the for loop sets i: i = 0 sum = 0 for i in range (1, n+1): sum += i # Remove this line: i = i+1 # for i in range already increments i print(sum) ``` You shouldn't use the variable name `sum` because there is already a builtin function `sum` which you can even use instead.
24,213,905
I have account in Openshift. I use Django and Mysql in this account. <https://github.com/ogurchik/pullover/tree/master/wsgi/openshift>. I created models for a new table in the Mysql database. When I execute the command `python manage.py sqlall MY_APP`, it renders this log: ``` BEGIN; CREATE TABLE `books_publisher` ( `id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(30) NOT NULL, `address` varchar(50) NOT NULL, `city` varchar(60) NOT NULL, `state_province` varchar(30) NOT NULL, `country` varchar(50) NOT NULL, `website` varchar(200) NOT NULL ); ``` and etc. I think this log means what account's environment setup suitable. But when I execute command `python manage.py syncdb`, the log is: ``` Creating tables ... Installing custom SQL ... Installing indexes ... Installed 0 object(s) from 0 fixture(s) ``` But the database has nothing. How do I solve this problem? I have tried google'ing but I find nothing similar.
2014/06/13
[ "https://Stackoverflow.com/questions/24213905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2966342/" ]
Correct. The timestamp is a UNIX timestamp. That is - the number of whole seconds since Jan 1, 1970 UTC, not accounting for leap seconds. You can verify the timestamp using a site like [epochconverter.com](http://www.epochconverter.com/) ``` 1388613600 = 2014-01-01T22:00:00Z ``` Then you can check the time zone details at [timeanddate.com](http://www.timeanddate.com). * In January 2014, [Toronto was on EST](http://www.timeanddate.com/time/zone/canada/toronto), which is UTC-05:00. * [This calculation](http://www.timeanddate.com/worldclock/converted.html?iso=20140101T22&p1=0&p2=250) clearly verifies that 22:00 UTC is 5:00 PM EST.
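As a quick sanity check (my own sketch, not part of the original answer), Python's standard library gives the same conversion for that timestamp:

```
from datetime import datetime, timezone

ts = 1388613600
print(datetime.fromtimestamp(ts, tz=timezone.utc))
# 2014-01-01 22:00:00+00:00, i.e. 5:00 PM EST (UTC-05:00)
```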
As Marc B mentioned, `date('r', 1388613600)` returned a formatted version of the date including the timezone offset which was set to `+0000`. The output is in fact UTC. Thanks Marc!
32,046,360
I'm using wxpython with wx.Grid... I have a general grid with many columns -created with `SetColumn(self, column)` , I want to be able to show and hide specific columns based on user security permission. I read that `self.SetColMinimalAcceptableWidth(0)` might be useful? How do I use it on specific column? How do I restore the column to original size when I need to show it?
2015/08/17
[ "https://Stackoverflow.com/questions/32046360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2131325/" ]
The Grid manual has the following entry: HideCol(self, col) ``` Hides the specified column. To show the column later you need to call SetColSize with non-0 width or ShowCol to restore the previous column width. If the column is already hidden, this method doesn’t do anything. Parameters: col (int) – The column index. ``` So in the case of self.Mygrid: ``` self.Mygrid.HideCol(0) ``` would hide the first column.
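A minimal sketch of hiding and later restoring a column this way (my own example, assuming wxPython Phoenix, where `HideCol`/`ShowCol` are available on `wx.grid.Grid`):

```
import wx
import wx.grid

app = wx.App(False)
frame = wx.Frame(None, title="Grid demo")
grid = wx.grid.Grid(frame)
grid.CreateGrid(5, 3)  # 5 rows, 3 columns

grid.HideCol(0)   # hide the first column (e.g. the user lacks permission)
# ... later, when the column may be shown again ...
grid.ShowCol(0)   # restores the previous column width

frame.Show()
app.MainLoop()
```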
Under wxPython 2.8: ``` grid.SetColMinimalAcceptableWidth(0) grid.SetColSize(col, 0) grid.ForceRefresh() ```
15,904,973
Say I store a password in plain text in a variable called `passWd` as a string. How does Python release this variable once I discard it (for instance, with `del passWd` or `passWd = 'new random data'`)? Is the string stored as a byte array, meaning it can be overwritten in the memory location where it originally existed, or is it fixed in a memory area which can't be modified, so that when assigning a new value a new memory area is created and the old area is discarded but not overwritten with nulls? I'm questioning how Python implements the safety of memory areas and would like to know more about it, mainly because I'm curious :) From what I've gathered so far, using `del` (or `__del__`) causes the interpreter to not release the memory areas of that variable automatically, which can cause issues, and I'm also not sure that **del** is that thorough about deleting the values. But that's just from what I've gathered and not something in black and white :) The main reason for me asking is that I intend to write a hand-over application that gets a string, does some I/O, and passes it along to another subsystem (a bootloader for a Raspberry Pi, for instance), and the interface is written in Python (how odd that must sound in some people's ears..). I'm not worried that the data is compromised during the I/O calculations, but that a memory dump might occur in between the two subsystem handovers, or that if the system is frozen (say a hibernation) 20 minutes after boot and I removed the variable as fast as I could, somehow it's still in memory despite me doing `del passWd` :) (PS: I've asked on Superuser and they referred me here; I'm sorry for the poor grammar!)
2013/04/09
[ "https://Stackoverflow.com/questions/15904973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/929999/" ]
Unless you use custom-coded input methods to get the password, it will be in many more places than just your immutable string. So don't worry too much. The OS should take care that any data from your process is cleared before the memory is allocated to another process. This may of course fail if the page is copied to disk (swapped out or hibernated). Secure password entry is not easy. Maybe you can find a special library or module that handles this.
I finally went with two solutions. One is ld\_preload, to replace the functionality of Python's string handling at a lower level. The other option, which is a bit easier, was to develop my own C library that has more functionality than what Python offers through its standard string handling. Mainly, the C code has a shread() function that writes over the memory area where the string "was" stored, plus some other error checks. However, @Ber gave me a good enough answer to start developing my own solution, since (as he pointed out) there is no secure method in Python: Python stores strings in way too many places and relies on the OS (which, on its own, isn't a bad thing except when you don't trust the OS you are installing your relatively secure application on).
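For the prompt itself, the standard library's `getpass` module is one such helper (a small sketch; note it still returns an ordinary immutable `str`, so it does not address the memory-scrubbing concern discussed here):

```
import getpass

pwd = getpass.getpass("Password: ")  # read without echoing to the terminal
print("got", len(pwd), "characters")
```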
3,422,775
I have written a small Django App, that executes an interactive program based on user input and returns the output as the result. But for some reason, the subprocess hangs. On verification of the logs I found that a place where a '\n' has to be given as response to a challenge, the response seems to have never been made. Interestingly, if I run the same code from outside of Django, i.e either from a python module or from the interactive shell, subprocess works without a hitch. I am assuming some settings within the environment used by Django are the culprit here. Here are snippets of the code that I've written: ``` def runtests(test_name, selective=False, tests_file=''): if selective: run_cmd = ['runtest', '--runfromfile', tests_file, test_name] else: run_cmd = 'runtest %s' % (test_name) print 'Executing command .. ' print run_cmd p = subprocess.Popen(run_cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) return p.stdout.read() def result(request): test_name = request.GET['test_name'] if not test_name: return render_to_response('webrun/execute.html', {'error_flag':True}) in_file = os.path.abspath('webrun/log/%s_in.xml' % test_name) suites = dict([(field[len('suite_'):],value) for field,value in request.GET.items() if field.startswith('suite_')]) if suites: _from_dict_to_xml(suites, in_file, test_name) output = runtests(test_name, bool(suites), in_file) return render_to_response('webrun/result.html', {'output':output}) ``` I've tried replacing subprocess with the older os.system method. But even that hangs in the exact same place. Again, this runs too if I were execute same code out of Django.
2010/08/06
[ "https://Stackoverflow.com/questions/3422775", "https://Stackoverflow.com", "https://Stackoverflow.com/users/412888/" ]
This is because code is JITted on a per-method basis, so when you first try to invoke `CheckCrystal()`, .NET first tries to compile it, subsequently loading all required and not-yet-loaded assemblies. .NET allows you to intercept a moment when assembly resolution fails. To do so, subscribe to `AppDomain.AssemblyResolve` event.
You would probably want to handle the `AppDomain.AssemblyResolve` event. More information [here](http://msdn.microsoft.com/en-us/library/system.appdomain.assemblyresolve(VS.71).aspx). A quick and dirty example: ``` AppDomain.CurrentDomain.AssemblyResolve += CurrentDomain_AssemblyResolve; private static Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args) { if (args.Name == "CrystalReports") { PTrace.Error("Some dependences needed to run Crystal Reports are not available."); } // return the located assembly here, or throw an exception, etc. } ```
3,422,775
I have written a small Django App, that executes an interactive program based on user input and returns the output as the result. But for some reason, the subprocess hangs. On verification of the logs I found that a place where a '\n' has to be given as response to a challenge, the response seems to have never been made. Interestingly, if I run the same code from outside of Django, i.e either from a python module or from the interactive shell, subprocess works without a hitch. I am assuming some settings within the environment used by Django are the culprit here. Here are snippets of the code that I've written: ``` def runtests(test_name, selective=False, tests_file=''): if selective: run_cmd = ['runtest', '--runfromfile', tests_file, test_name] else: run_cmd = 'runtest %s' % (test_name) print 'Executing command .. ' print run_cmd p = subprocess.Popen(run_cmd, shell=False, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) return p.stdout.read() def result(request): test_name = request.GET['test_name'] if not test_name: return render_to_response('webrun/execute.html', {'error_flag':True}) in_file = os.path.abspath('webrun/log/%s_in.xml' % test_name) suites = dict([(field[len('suite_'):],value) for field,value in request.GET.items() if field.startswith('suite_')]) if suites: _from_dict_to_xml(suites, in_file, test_name) output = runtests(test_name, bool(suites), in_file) return render_to_response('webrun/result.html', {'output':output}) ``` I've tried replacing subprocess with the older os.system method. But even that hangs in the exact same place. Again, this runs too if I were execute same code out of Django.
2010/08/06
[ "https://Stackoverflow.com/questions/3422775", "https://Stackoverflow.com", "https://Stackoverflow.com/users/412888/" ]
This is because code is JITted on a per-method basis, so when you first try to invoke `CheckCrystal()`, .NET first tries to compile it, subsequently loading all required and not-yet-loaded assemblies. .NET allows you to intercept a moment when assembly resolution fails. To do so, subscribe to `AppDomain.AssemblyResolve` event.
> > Is like .Net knows that it will need the assembly before needing it. Is this true? > > > To improve startup performance the CLR lazily loads assemblies. Either manually load or handle [`AppDomain.AssemblyResolve`](http://msdn.microsoft.com/en-us/library/system.appdomain.assemblyresolve.aspx) event.
65,942,206
![My code](https://i.stack.imgur.com/QMrBx.png) ![the output](https://i.stack.imgur.com/r5kqL.png) Can anyone help me? I'm pretty new to Python and I'm trying to generate 10 files, each with increasingly harder questions. This code is for difficulty 2. I don't want the answers in difficulty 2 to be negative, so whenever I get a second number bigger than the first I swap the two. For some reason some of them still come out with the first number bigger than the second. I added the "it's less than" print statements for testing, and it will detect the fact that it's less than but won't do anything about it.
2021/01/28
[ "https://Stackoverflow.com/questions/65942206", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15100687/" ]
Your issue is that you're casting your random numbers to a string **before** comparing their mathematical values. You need to compare them as integers then cast them to strings.
I believe this is because you are comparing two strings, not two integers, which gives bad results for this type of program: `num1 = str(r.choice(numbers))` and `num2 = str(r.choice(numbers))`. Here you are storing strings, not integers, and then below this you are checking if `num1 <= num2`. Convert them to integers before comparing them and your code should work.
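To illustrate the difference (my own sketch, not from the answer): string comparison is lexicographic, character by character, so it can disagree with numeric order:

```
print("9" < "10")            # False: '9' > '1', so "9" sorts after "10" as text
print(9 < 10)                # True: integer comparison is numeric
print(int("9") < int("10"))  # True: convert before comparing
```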
38,657,109
I am using *Python 3.4*. I have a Python script `myscript.py` : ``` import sys def returnvalue(str) : if str == "hi" : return "yes" else : return "no" print("calling python function with parameters:") print(sys.argv[1]) str = sys.argv[1] res = returnvalue(str) target = open("file.txt", 'w') target.write(res) target.close() ``` I need to call this python script from the java class `PythonJava.java` ``` public class PythonJava { String arg1; public void setArg1(String arg1) { this.arg1 = arg1; } public void runPython() { //need to call myscript.py and also pass arg1 as its arguments. //and also myscript.py path is in C:\Demo\myscript.py } ``` and I am calling `runPython()` from another Java class by creating an object of `PythonJava` ``` obj.setArg1("hi"); ... obj.runPython(); ``` I have tried many ways but none of them are properly working. I used Jython and also ProcessBuilder but the script was not write into file.txt. Can you suggest a way to properly implement this?
2016/07/29
[ "https://Stackoverflow.com/questions/38657109", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6013429/" ]
Have you looked at these? They suggest different ways of doing this: [Call Python code from Java by passing parameters and results](https://stackoverflow.com/questions/27235286/call-python-code-from-java-by-passing-parameters-and-results) [How to call a python method from a java class?](https://stackoverflow.com/questions/9381906/how-to-call-a-python-method-from-a-java-class) In short one solution could be: ``` public void runPython() { //need to call myscript.py and also pass arg1 as its arguments. //and also myscript.py path is in C:\Demo\myscript.py String[] cmd = { "python", "C:/Demo/myscript.py", this.arg1, }; Runtime.getRuntime().exec(cmd); } ``` edit: just make sure you change the variable name from str to something else, as noted by cdarke Your python code (change str to something else, e.g. arg and specify a path for file): ``` def returnvalue(arg) : if arg == "hi" : return "yes" return "no" print("calling python function with parameters:") print(sys.argv[1]) arg = sys.argv[1] res = returnvalue(arg) print(res) with open("C:/path/to/where/you/want/file.txt", 'w') as target: # specify path or else it will be created where you run your java code target.write(res) ```
Calling Python from Java with an argument and printing the Python output in the Java console can be done with the simple method below: ``` String pathPython = "pathtopython\\script.py"; String [] cmd = new String[3]; cmd[0] = "python"; cmd[1] = pathPython; cmd[2] = arg1; Runtime r = Runtime.getRuntime(); Process p = r.exec(cmd); BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream())); while((s = in.readLine()) != null){ System.out.println(s); } ```
38,657,109
I am using *Python 3.4*. I have a Python script `myscript.py` : ``` import sys def returnvalue(str) : if str == "hi" : return "yes" else : return "no" print("calling python function with parameters:") print(sys.argv[1]) str = sys.argv[1] res = returnvalue(str) target = open("file.txt", 'w') target.write(res) target.close() ``` I need to call this python script from the java class `PythonJava.java` ``` public class PythonJava { String arg1; public void setArg1(String arg1) { this.arg1 = arg1; } public void runPython() { //need to call myscript.py and also pass arg1 as its arguments. //and also myscript.py path is in C:\Demo\myscript.py } ``` and I am calling `runPython()` from another Java class by creating an object of `PythonJava` ``` obj.setArg1("hi"); ... obj.runPython(); ``` I have tried many ways but none of them are properly working. I used Jython and also ProcessBuilder but the script was not write into file.txt. Can you suggest a way to properly implement this?
2016/07/29
[ "https://Stackoverflow.com/questions/38657109", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6013429/" ]
Have you looked at these? They suggest different ways of doing this: [Call Python code from Java by passing parameters and results](https://stackoverflow.com/questions/27235286/call-python-code-from-java-by-passing-parameters-and-results) [How to call a python method from a java class?](https://stackoverflow.com/questions/9381906/how-to-call-a-python-method-from-a-java-class) In short one solution could be: ``` public void runPython() { //need to call myscript.py and also pass arg1 as its arguments. //and also myscript.py path is in C:\Demo\myscript.py String[] cmd = { "python", "C:/Demo/myscript.py", this.arg1, }; Runtime.getRuntime().exec(cmd); } ``` edit: just make sure you change the variable name from str to something else, as noted by cdarke Your python code (change str to something else, e.g. arg and specify a path for file): ``` def returnvalue(arg) : if arg == "hi" : return "yes" return "no" print("calling python function with parameters:") print(sys.argv[1]) arg = sys.argv[1] res = returnvalue(arg) print(res) with open("C:/path/to/where/you/want/file.txt", 'w') as target: # specify path or else it will be created where you run your java code target.write(res) ```
Below is the python method with three sample arguments which can later be called through java. ``` #Sample python method with arguments import sys def getDataFromJava(arg1,arg2,arg3): arg1_val="Hi"+arg1 arg2_val=arg2 arg3_val=arg3 print(arg1_val) print(arg2_val) print(arg3_val) return arg1_val,arg2_val,arg3_val arg1 = sys.argv[1] arg2 = sys.argv[2] arg3 = sys.argv[3] getDataFromJava(arg1,arg2,arg3) ``` Below is the java code to invoke the above method with three sample arguments and also read console output of python script in java through InputStreamReader. ``` import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class Test1 { public static String s; public static void main(String[] args) throws IOException { String pathPython = "Path_to_the_file\\test.py"; String [] cmd = new String[5]; cmd[0] = "python"; cmd[1] = pathPython; cmd[2] = "arg1"; cmd[3] = "arg2"; cmd[4] = "arg3"; Runtime r = Runtime.getRuntime(); Process p = r.exec(cmd); BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream())); while((s=in.readLine()) != null){ System.out.println(s); } } } ```
50,268,691
I am trying to train my binary classifier over a huge data. Previously, I could accomplish training via using fit method of sklearn. But now, I have more data and I cannot cope with them. I am trying to fitting them partially but couldn't get rid of errors. How can I train my huge data incrementally? With applying my previous approach, I get an error about pipeline object. I have gone through the examples from [Incremental Learning](http://dask-ml.readthedocs.io/en/latest/incremental.html) but still running these code samples gives error. I will appreciate any help. ``` X,y = transform_to_dataset(training_data) clf = Pipeline([ ('vectorizer', DictVectorizer()), ('classifier', LogisticRegression())]) length=len(X)/2 clf.partial_fit(X[:length],y[:length],classes=np.array([0,1])) clf.partial_fit(X[length:],y[length:],classes=np.array([0,1])) ``` **ERROR** ``` AttributeError: 'Pipeline' object has no attribute 'partial_fit' ``` **TRYING GIVEN CODE SAMPLES:** ``` clf=SGDClassifier(alpha=.0001, loss='log', penalty='l2', n_jobs=-1, #shuffle=True, n_iter=10, verbose=1) length=len(X)/2 clf.partial_fit(X[:length],y[:length],classes=np.array([0,1])) clf.partial_fit(X[length:],y[length:],classes=np.array([0,1])) ``` **ERROR** ``` File "/home/kntgu/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 573, in check_X_y ensure_min_features, warn_on_dtype, estimator) File "/home/kntgu/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 433, in check_array array = np.array(array, dtype=dtype, order=order, copy=copy) TypeError: float() argument must be a string or a number ``` My dataset consists of some sentences with their part of speech tags and dependency relations. ``` Thanks NN 0 root to IN 3 case all DT 1 nmod who WP 5 nsubj volunteered VBD 3 acl:relcl . . 1 punct You PRP 3 nsubj will MD 3 aux remain VB 0 root as IN 5 case alternates NNS 3 obl . . 3 punct ```
2018/05/10
[ "https://Stackoverflow.com/questions/50268691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9079119/" ]
A `Pipeline` object from scikit-learn does not have the `partial_fit`, as seen in [the docs](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). The reason for this is that you can add any estimator you want to that `Pipeline` object, and not all of them implement the `partial_fit`. [Here is a list of the supported estimators](http://scikit-learn.org/stable/modules/scaling_strategies.html#incremental-learning). As you see, using `SGDClassifier` (without `Pipeline`), you don't get this "no attribute" error, because this specific estimator is supported. The error message you get for this one is probably due to text data. You can use the [LabelEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) to process the non-numeric columns.
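As an illustration of the incremental route, here is a minimal sketch (my own, with toy data; it swaps `DictVectorizer` for the stateless `FeatureHasher`, which needs no fitting and therefore works batch by batch with `partial_fit`):

```
import numpy as np
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier

# Toy dict-style features, e.g. token/POS attributes per sample.
X_dicts = [{"word": "Thanks", "pos": "NN"}, {"word": "to", "pos": "IN"},
           {"word": "You", "pos": "PRP"}, {"word": "remain", "pos": "VB"}]
y = np.array([0, 1, 0, 1])

hasher = FeatureHasher(n_features=2**10, input_type="dict")  # stateless, no fit needed
clf = SGDClassifier(loss="log_loss")  # use loss="log" on older scikit-learn versions

half = len(X_dicts) // 2
clf.partial_fit(hasher.transform(X_dicts[:half]), y[:half], classes=np.array([0, 1]))
clf.partial_fit(hasher.transform(X_dicts[half:]), y[half:])

print(clf.predict(hasher.transform(X_dicts)))
```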
I was facing the same problem, as `SGDClassifier` inside a pipeline doesn't support incremental learning (i.e. the partial\_fit param). There is a way we could do incremental learning using sklearn, but it is not with `partial_fit`; it is with `warm_start`. Two algorithms in sklearn, `LogisticRegression` and `RandomForest`, support warm\_start. Warm start is another way of doing incremental learning. Read [here](http://scikit-learn.org/dev/glossary.html#term-warm-start)
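A small sketch of the warm-start idea (my own illustration; note that, unlike `partial_fit`, each `fit` call still optimizes only on the batch it is given: `warm_start` just seeds it with the previous coefficients):

```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X1, y1 = rng.randn(100, 5), rng.randint(0, 2, 100)   # first chunk of data
X2, y2 = rng.randn(100, 5), rng.randint(0, 2, 100)   # later chunk

clf = LogisticRegression(warm_start=True, max_iter=50)
clf.fit(X1, y1)   # initial fit
clf.fit(X2, y2)   # re-fit starts from the previous solution instead of from scratch

print(clf.coef_)
```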
50,268,691
I am trying to train my binary classifier over a huge data. Previously, I could accomplish training via using fit method of sklearn. But now, I have more data and I cannot cope with them. I am trying to fitting them partially but couldn't get rid of errors. How can I train my huge data incrementally? With applying my previous approach, I get an error about pipeline object. I have gone through the examples from [Incremental Learning](http://dask-ml.readthedocs.io/en/latest/incremental.html) but still running these code samples gives error. I will appreciate any help. ``` X,y = transform_to_dataset(training_data) clf = Pipeline([ ('vectorizer', DictVectorizer()), ('classifier', LogisticRegression())]) length=len(X)/2 clf.partial_fit(X[:length],y[:length],classes=np.array([0,1])) clf.partial_fit(X[length:],y[length:],classes=np.array([0,1])) ``` **ERROR** ``` AttributeError: 'Pipeline' object has no attribute 'partial_fit' ``` **TRYING GIVEN CODE SAMPLES:** ``` clf=SGDClassifier(alpha=.0001, loss='log', penalty='l2', n_jobs=-1, #shuffle=True, n_iter=10, verbose=1) length=len(X)/2 clf.partial_fit(X[:length],y[:length],classes=np.array([0,1])) clf.partial_fit(X[length:],y[length:],classes=np.array([0,1])) ``` **ERROR** ``` File "/home/kntgu/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 573, in check_X_y ensure_min_features, warn_on_dtype, estimator) File "/home/kntgu/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 433, in check_array array = np.array(array, dtype=dtype, order=order, copy=copy) TypeError: float() argument must be a string or a number ``` My dataset consists of some sentences with their part of speech tags and dependency relations. ``` Thanks NN 0 root to IN 3 case all DT 1 nmod who WP 5 nsubj volunteered VBD 3 acl:relcl . . 1 punct You PRP 3 nsubj will MD 3 aux remain VB 0 root as IN 5 case alternates NNS 3 obl . . 3 punct ```
2018/05/10
[ "https://Stackoverflow.com/questions/50268691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9079119/" ]
A `Pipeline` object from scikit-learn does not have the `partial_fit`, as seen in [the docs](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html). The reason for this is that you can add any estimator you want to that `Pipeline` object, and not all of them implement the `partial_fit`. [Here is a list of the supported estimators](http://scikit-learn.org/stable/modules/scaling_strategies.html#incremental-learning). As you see, using `SGDClassifier` (without `Pipeline`), you don't get this "no attribute" error, because this specific estimator is supported. The error message you get for this one is probably due to text data. You can use the [LabelEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) to process the non-numeric columns.
Pipeline has no attribute partial\_fit because many models that can be assigned to a pipeline have no partial\_fit. My solution for this is to make a dictionary rather than a pipeline and save it with joblib. ``` from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer() from sklearn.feature_extraction.text import TfidfTransformer tfidf_transformer = TfidfTransformer() from sklearn.linear_model import SGDClassifier model=SGDClassifier(loss='hinge', penalty='l2',alpha=1e-3, random_state=42) tosave={ "model":model, "count":count_vect, "tfid":tfidf_transformer, } import joblib filename = 'package.sav' joblib.dump(tosave, filename) ``` Then use ``` import joblib filename = 'package.sav' pack=joblib.load(filename) pack['model'].partial_fit(X,Y) ```
50,268,691
I am trying to train my binary classifier over a huge data. Previously, I could accomplish training via using fit method of sklearn. But now, I have more data and I cannot cope with them. I am trying to fitting them partially but couldn't get rid of errors. How can I train my huge data incrementally? With applying my previous approach, I get an error about pipeline object. I have gone through the examples from [Incremental Learning](http://dask-ml.readthedocs.io/en/latest/incremental.html) but still running these code samples gives error. I will appreciate any help. ``` X,y = transform_to_dataset(training_data) clf = Pipeline([ ('vectorizer', DictVectorizer()), ('classifier', LogisticRegression())]) length=len(X)/2 clf.partial_fit(X[:length],y[:length],classes=np.array([0,1])) clf.partial_fit(X[length:],y[length:],classes=np.array([0,1])) ``` **ERROR** ``` AttributeError: 'Pipeline' object has no attribute 'partial_fit' ``` **TRYING GIVEN CODE SAMPLES:** ``` clf=SGDClassifier(alpha=.0001, loss='log', penalty='l2', n_jobs=-1, #shuffle=True, n_iter=10, verbose=1) length=len(X)/2 clf.partial_fit(X[:length],y[:length],classes=np.array([0,1])) clf.partial_fit(X[length:],y[length:],classes=np.array([0,1])) ``` **ERROR** ``` File "/home/kntgu/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 573, in check_X_y ensure_min_features, warn_on_dtype, estimator) File "/home/kntgu/anaconda2/lib/python2.7/site-packages/sklearn/utils/validation.py", line 433, in check_array array = np.array(array, dtype=dtype, order=order, copy=copy) TypeError: float() argument must be a string or a number ``` My dataset consists of some sentences with their part of speech tags and dependency relations. ``` Thanks NN 0 root to IN 3 case all DT 1 nmod who WP 5 nsubj volunteered VBD 3 acl:relcl . . 1 punct You PRP 3 nsubj will MD 3 aux remain VB 0 root as IN 5 case alternates NNS 3 obl . . 3 punct ```
2018/05/10
[ "https://Stackoverflow.com/questions/50268691", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9079119/" ]
I was facing the same problem, as `SGDClassifier` inside a pipeline doesn't support incremental learning (i.e. the partial\_fit param). There is a way we could do incremental learning using sklearn, but it is not with `partial_fit`; it is with `warm_start`. Two algorithms in sklearn, `LogisticRegression` and `RandomForest`, support warm\_start. Warm start is another way of doing incremental learning. Read [here](http://scikit-learn.org/dev/glossary.html#term-warm-start)
Pipeline has no attribute partial\_fit because many models that can be assigned to a pipeline have no partial\_fit. My solution for this is to make a dictionary rather than a pipeline and save it with joblib. ``` from sklearn.feature_extraction.text import CountVectorizer count_vect = CountVectorizer() from sklearn.feature_extraction.text import TfidfTransformer tfidf_transformer = TfidfTransformer() from sklearn.linear_model import SGDClassifier model=SGDClassifier(loss='hinge', penalty='l2',alpha=1e-3, random_state=42) tosave={ "model":model, "count":count_vect, "tfid":tfidf_transformer, } import joblib filename = 'package.sav' joblib.dump(tosave, filename) ``` Then use ``` import joblib filename = 'package.sav' pack=joblib.load(filename) pack['model'].partial_fit(X,Y) ```
15,930,203
I am using **zbarimg** to scan bar codes, and I want to redirect the output to a Python script. How can I redirect the output of the following command: ``` zbarimg code.png ``` to a Python script, and what should the script look like? I tried the following script: ``` #!/usr/local/bin/python s = raw_input() print s ``` I made it executable by issuing the following: ``` chmod +x in.py ``` Then I ran the following: ``` zbarimg code.png | in.py ``` I know it's wrong but I can't figure out anything else!
2013/04/10
[ "https://Stackoverflow.com/questions/15930203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1251851/" ]
Using the pipe operator `|` from the command line is correct, actually. Did it not work? You might need to explicitly specify the path for the Python script, as in ``` zbarimg code.png | ./in.py ``` and, as @dogbane says, reading from stdin with `sys.stdin.readlines()` is better than using `raw_input`
Use [`sys.stdin`](http://docs.python.org/2/library/sys.html#sys.stdin) to read from stdin in your python script. For example: ``` import sys data = sys.stdin.readlines() ```
15,930,203
I am using **zbarimg** to scan bar codes, and I want to redirect the output to a Python script. How can I redirect the output of the following command: ``` zbarimg code.png ``` to a Python script, and what should the script look like? I tried the following script: ``` #!/usr/local/bin/python s = raw_input() print s ``` I made it executable by issuing the following: ``` chmod +x in.py ``` Then I ran the following: ``` zbarimg code.png | in.py ``` I know it's wrong but I can't figure out anything else!
2013/04/10
[ "https://Stackoverflow.com/questions/15930203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1251851/" ]
Using the pipe operator `|` from the command line is correct, actually. Did it not work? You might need to explicitly specify the path for the Python script, as in ``` zbarimg code.png | ./in.py ``` and, as @dogbane says, reading from stdin with `sys.stdin.readlines()` is better than using `raw_input`
I had to invoke the python program command as `somecommand | python mypythonscript.py` instead of `somecommand | ./mypythonscript.py`. This worked for me. The latter produced errors. My purpose: Sum up the durations of all mp3 files by piping output of `soxi -D *mp3` into python: `soxi -D *mp3 | python sum_durations.py` --- Details: `soxi -D *mp3`produces: ``` 122.473016 139.533016 128.456009 307.802993 ... ``` sum\_durations.py script: ``` import sys import math data = sys.stdin.readlines() #print(data) sum = 0.0 for line in data: #print(line) sum += float(line) mins = math.floor(sum / 60) secs = math.floor(sum) % 60 print("total duration: " + str(mins) + ":" + str(secs)) ```
45,430,966
Why are functions considered a data type in Lua? You can assign functions to variables and pass them as arguments in Python too, but there is no function data type in Python.
2017/08/01
[ "https://Stackoverflow.com/questions/45430966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4931135/" ]
I think you are mistaken. If you take a look at <https://docs.python.org/2/reference/datamodel.html#types> you'll find that Python even has multiple function types. Callable types: * user-defined functions * user-defined methods * generator functions * built-in functions * built-in methods * ... There are further sections in the Python documentation that provide detail on the various types the interpreter supports. Why is there a function type? I guess because it makes sense in a typed language to have different types for different kinds of things. If you don't have different types, you don't need types at all. Having a function type is simply the logical consequence. How else would you classify a reference to a function?
Python does actually have a function type; it's just called `lambda`. In both of these programming languages, functions are first-class values, which is just a fancy way of saying you can pass them around to functions just like numbers or strings. That makes it possible to use [functional programming](https://en.wikipedia.org/wiki/Functional_programming) as a paradigm if it fits your purposes and the problem you are solving.
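A tiny illustration of functions as first-class values in Python (my own sketch, not part of either answer):

```
def shout(text):
    return text.upper()

def apply_twice(func, value):
    # func is just a value here: it can be passed in, stored, and called
    return func(func(value))

print(type(shout))               # <class 'function'>
print(apply_twice(shout, "hi"))  # HI
print((lambda x: x + 1)(41))     # 42; a lambda builds the same kind of object
```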
53,863,318
First, I was able to fix the ImportError. I figured out that it was because the Django version of pythonanywhere is not updated, So I upgraded Django on pythonanywhere from 1.x.x to 2.0.9. The error came out like this: > > ImportError at / > cannot import name 'path' > > > ``` django version: 1.x.x python version: 3.6.6 ``` and, unfortunately, my app gave me another error: > > OperationalError at / > no such column: blog\_post.published\_date > Request Method: GET > Request URL: http://*.pythonanywhere.com/ > Django Version: 2.0.9 > Exception Type: OperationalError > Exception Value: > > no such column: blog\_post.published\_date > Exception Location: /home/*/my-first-blog/myenv/lib/python3.6/site-packages/django/db/backends/sqlite3/base.py > in execute, line 303 > Python Executable: /usr/local/bin/uwsgi > Python Version: 3.6.6 > > > I thought this error occurred because of some database, so I tried `migrate` or `makemigrations` on pythonanywhere, but I could not fix it still. So, is there anyone who knows how to fix this database? **Here is my `model.py`:** ``` from django.conf import settings from django.db import models from django.utils import timezone class Post(models.Model): author = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) title = models.CharField(max_length=200) text = models.TextField() created_date = models.DateTimeField(default=timezone.now) published_date = models.DateTimeField(blank=True, null=True) def publish(self): self.published_date = timezone.now() self.save() def __str__(self): return self.title ``` **here is the output of `python manage.py showmigrations`:** ``` admin [X] 0001_initial [X] 0002_logentry_remove_auto_add auth [X] 0001_initial [X] 0002_alter_permission_name_max_length [X] 0003_alter_user_email_max_length [X] 0004_alter_user_username_opts [X] 0005_alter_user_last_login_null [X] 0006_require_contenttypes_0002 [X] 0007_alter_validators_add_error_messages [X] 0008_alter_user_username_max_length [X] 0009_alter_user_last_name_max_length blog [X] 0001_initial contenttypes [X] 0001_initial [X] 0002_remove_content_type_name sessions [X] 0001_initial ```
2018/12/20
[ "https://Stackoverflow.com/questions/53863318", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10665552/" ]
The problem as I see has to be with the database and django migrations. The `Post` object inside the blog has the attribute that django's trying to find. The migrations haven't been correctly applied to the database. Now considering the history of migrations, I do not know what's going wrong unless I can look around your database which I'm assuming is an sqlite. One way to resolve this if you're having a newly constructed database is to get rid of the database and do the following: * Delete all the migrations from `app/migrations` directory * `python manage.py makemigrations` * `python manage.py migrate` Also, try to avoid `sqlite` as much as possible. The same migrations that ran on an `sqlite` db might be erroneous on Postgres or MySQL database which are more production grade databases. **NOTE**: Please understand that this would lead to a complete data loss. Hence, try this only if you can afford to compromise on the existing/test data.
Don't forget to refresh your production server after every migration if you want the changes to take effect
11,226,252
Is there a way to loop in `while` if you start the script with `python -c`? This doesn't seem to be related to platform or python version... **Linux** ``` [mpenning@Hotcoffee ~]$ python -c "import os;while (True): os.system('ls')" File "<string>", line 1 import os;while (True): os.system('ls') ^ SyntaxError: invalid syntax [mpenning@Hotcoffee ~]$ [mpenning@Hotcoffee ~]$ python -V Python 2.6.6 [mpenning@Hotcoffee ~]$ uname -a Linux Hotcoffee 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux [mpenning@Hotcoffee ~]$ ``` **Windows** ``` C:\Users\mike_pennington>python -c "import os;while True: os.system('dir')" File "<string>", line 1 import os;while True: os.system('dir') ^ SyntaxError: invalid syntax C:\Users\mike_pennington>python -V Python 2.7.2 C:\Users\mike_pennington> ``` I have tried removing parenthesis in the `while` statement, but nothing seems to make this run.
2012/06/27
[ "https://Stackoverflow.com/questions/11226252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/667301/" ]
Multiline statements may not start after a statement-separating `;` in Python – otherwise, there might be ambiguities about the code blocks. Simply use line breaks instead of `;`. This "works" on Linux: ``` $ python -c "import os while True: os.system('ls')" ``` Not sure how to enter this on Windows, but why not simply write the commands to a `.py` file if it's more than one line?
Don't know about Windows, but if all you want is to be able to type in one-liners, you could consider line breaks inside quotes: ``` % python -c "import os; while (True): os.system('ls')" ```
11,226,252
Is there a way to loop in `while` if you start the script with `python -c`? This doesn't seem to be related to platform or python version... **Linux** ``` [mpenning@Hotcoffee ~]$ python -c "import os;while (True): os.system('ls')" File "<string>", line 1 import os;while (True): os.system('ls') ^ SyntaxError: invalid syntax [mpenning@Hotcoffee ~]$ [mpenning@Hotcoffee ~]$ python -V Python 2.6.6 [mpenning@Hotcoffee ~]$ uname -a Linux Hotcoffee 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux [mpenning@Hotcoffee ~]$ ``` **Windows** ``` C:\Users\mike_pennington>python -c "import os;while True: os.system('dir')" File "<string>", line 1 import os;while True: os.system('dir') ^ SyntaxError: invalid syntax C:\Users\mike_pennington>python -V Python 2.7.2 C:\Users\mike_pennington> ``` I have tried removing parenthesis in the `while` statement, but nothing seems to make this run.
2012/06/27
[ "https://Stackoverflow.com/questions/11226252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/667301/" ]
``` python -c $'import subprocess\nwhile True: subprocess.call(["ls"])' ``` would work (note the `$'...'` and the `\n`). But it could be that it only works under [bash](/questions/tagged/bash "show questions tagged 'bash'") - I am not sure...
Multiline statements may not start after a statement-separating `;` in Python – otherwise, there might be ambiguities about the code blocks. Simply use line breaks instead of `;`. This "works" on Linux: ``` $ python -c "import os while True: os.system('ls')" ``` Not sure how to enter this on Windows, but why not simply write the commands to a `.py` file if it's more than one line?
11,226,252
Is there a way to loop in `while` if you start the script with `python -c`? This doesn't seem to be related to platform or python version... **Linux** ``` [mpenning@Hotcoffee ~]$ python -c "import os;while (True): os.system('ls')" File "<string>", line 1 import os;while (True): os.system('ls') ^ SyntaxError: invalid syntax [mpenning@Hotcoffee ~]$ [mpenning@Hotcoffee ~]$ python -V Python 2.6.6 [mpenning@Hotcoffee ~]$ uname -a Linux Hotcoffee 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux [mpenning@Hotcoffee ~]$ ``` **Windows** ``` C:\Users\mike_pennington>python -c "import os;while True: os.system('dir')" File "<string>", line 1 import os;while True: os.system('dir') ^ SyntaxError: invalid syntax C:\Users\mike_pennington>python -V Python 2.7.2 C:\Users\mike_pennington> ``` I have tried removing parenthesis in the `while` statement, but nothing seems to make this run.
2012/06/27
[ "https://Stackoverflow.com/questions/11226252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/667301/" ]
Multiline statements may not start after a statement-separating `;` in Python – otherwise, there might be ambiguities about the code blocks. Simply use line breaks instead of `;`. This "works" on Linux: ``` $ python -c "import os while True: os.system('ls')" ``` Not sure how to enter this on Windows, but why not simply write the commands to a `.py` file if it's more than one line?
If you really must do this on Windows, you could use exec: ``` python -c "exec \"import os;\rwhile True:\r os.system('dir')\"" ``` (I substituted `dir` so it works on my Windows system)
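A closely related variant that I would expect to also work from a Unix shell (treat it as a sketch – I have not tried it on every shell) keeps everything on one physical line and lets Python itself expand the escape inside `exec`:

```
python -c "exec('import os\nwhile True: os.system(\'ls\')')"
```

Here the shell passes `\n` through as a literal backslash-n, and only the inner string literal handed to `exec` turns it into a real line break.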
11,226,252
Is there a way to loop in `while` if you start the script with `python -c`? This doesn't seem to be related to platform or python version... **Linux** ``` [mpenning@Hotcoffee ~]$ python -c "import os;while (True): os.system('ls')" File "<string>", line 1 import os;while (True): os.system('ls') ^ SyntaxError: invalid syntax [mpenning@Hotcoffee ~]$ [mpenning@Hotcoffee ~]$ python -V Python 2.6.6 [mpenning@Hotcoffee ~]$ uname -a Linux Hotcoffee 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux [mpenning@Hotcoffee ~]$ ``` **Windows** ``` C:\Users\mike_pennington>python -c "import os;while True: os.system('dir')" File "<string>", line 1 import os;while True: os.system('dir') ^ SyntaxError: invalid syntax C:\Users\mike_pennington>python -V Python 2.7.2 C:\Users\mike_pennington> ``` I have tried removing parenthesis in the `while` statement, but nothing seems to make this run.
2012/06/27
[ "https://Stackoverflow.com/questions/11226252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/667301/" ]
``` python -c $'import subprocess\nwhile True: subprocess.call(["ls"])' ``` would work (note the `$'...'` and the `\n`). But it could be that it only works under [bash](/questions/tagged/bash "show questions tagged 'bash'") - I am not sure...
Don't know about Windows, but if all you want is to be able to type in one-liners, you could consider a literal line break inside the quotes (the shell keeps reading the quoted string until the closing quote, so hit Enter after the `;` and finish the command on the next line):

```
% python -c "import os;
while (True): os.system('ls')"
```
11,226,252
Is there a way to loop in `while` if you start the script with `python -c`? This doesn't seem to be related to platform or python version... **Linux** ``` [mpenning@Hotcoffee ~]$ python -c "import os;while (True): os.system('ls')" File "<string>", line 1 import os;while (True): os.system('ls') ^ SyntaxError: invalid syntax [mpenning@Hotcoffee ~]$ [mpenning@Hotcoffee ~]$ python -V Python 2.6.6 [mpenning@Hotcoffee ~]$ uname -a Linux Hotcoffee 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux [mpenning@Hotcoffee ~]$ ``` **Windows** ``` C:\Users\mike_pennington>python -c "import os;while True: os.system('dir')" File "<string>", line 1 import os;while True: os.system('dir') ^ SyntaxError: invalid syntax C:\Users\mike_pennington>python -V Python 2.7.2 C:\Users\mike_pennington> ``` I have tried removing parenthesis in the `while` statement, but nothing seems to make this run.
2012/06/27
[ "https://Stackoverflow.com/questions/11226252", "https://Stackoverflow.com", "https://Stackoverflow.com/users/667301/" ]
``` python -c $'import subprocess\nwhile True: subprocess.call(["ls"])' ``` would work (note the `$'...'` and the `\n`). But it could be that it only works under [bash](/questions/tagged/bash "show questions tagged 'bash'") - I am not sure...
If you really must do this on Windows, you could use exec: ``` python -c "exec \"import os;\rwhile True:\r os.system('dir')\"" ``` (I substituted `dir` so it works on my Windows system)
52,119,496
I am trying to write code to solve this python exercise: **I must use** the 'math' library, sqrt and possibly pow functions. > > "The distance between two points x and y is the square root of the sum > of squared differences along each dimension of x and y. > > > "Create a function that takes two vectors and outputs the distance > between them. > > > x = (0,0) y = (1,1)" > > > So far I've tried this - which certainly hasn't worked. ``` x = (0,0) y = (1,1) (c1, c2) = x (c3, c4) = y math.sqrt(sum((c1,**2)(c2,**2)(c3,**2)(c4,**2))) ``` > > > ``` > File "<ipython-input-14-ac0f3dc1fdeb>", line 1 > math.sqrt(sum((c1,**2)(c2,**2)(c3,**2)(c4,**2))) > ^ > SyntaxError: invalid syntax > ``` > > ``` if c1 < c3: difference1 = c3-c1 print(difference1) ``` > > 1 > > > ... not even sure if that's the kind of calculation I should be working with. ``` def distance(x, y): ``` ummm... I expect the function starts by unpacking the tuples! But not sure how to write the rest of it, or cleanly. I'm a beginner programmer & not a mathematician so I may be wrong in more than one sense... This exercise is from this HarvardX course: ['Using Python for Research'](https://courses.edx.org/courses/course-v1:HarvardX+PH526x+2T2018/4bdcc373b7a944f8861a3f190c10edca/). It's OK to search for solutions via StackOverflow for learning on this course... not cheating to ask for pointers. Many thanks for any ideas! I will keep searching around.
2018/08/31
[ "https://Stackoverflow.com/questions/52119496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10265759/" ]
``` import math def distance (x,y): value= math.sqrt ((x[0]-y[0])**2 + (x[1] - y[1])**2) print (value) distance((0,0), (1,1)) ```
Thanks so much for those ideas! I figured it out. So happy. ``` for (a,b) in x,y: dis = math.sqrt((y[0] - x[0])**2 + (y[1] - x[1])**2) print(dis) ```
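For what it's worth, the `for (a,b) in x,y:` loop is not actually needed there – it just computes and prints the same value twice. A minimal version wrapped in a function, as the exercise asks (using only `math.sqrt`), could look like this:

```
import math

def distance(x, y):
    # x and y are 2-tuples such as (0, 0) and (1, 1)
    return math.sqrt((y[0] - x[0])**2 + (y[1] - x[1])**2)

print(distance((0, 0), (1, 1)))  # 1.4142135623730951
```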
52,119,496
I am trying to write code to solve this python exercise: **I must use** the 'math' library, sqrt and possibly pow functions. > > "The distance between two points x and y is the square root of the sum > of squared differences along each dimension of x and y. > > > "Create a function that takes two vectors and outputs the distance > between them. > > > x = (0,0) y = (1,1)" > > > So far I've tried this - which certainly hasn't worked. ``` x = (0,0) y = (1,1) (c1, c2) = x (c3, c4) = y math.sqrt(sum((c1,**2)(c2,**2)(c3,**2)(c4,**2))) ``` > > > ``` > File "<ipython-input-14-ac0f3dc1fdeb>", line 1 > math.sqrt(sum((c1,**2)(c2,**2)(c3,**2)(c4,**2))) > ^ > SyntaxError: invalid syntax > ``` > > ``` if c1 < c3: difference1 = c3-c1 print(difference1) ``` > > 1 > > > ... not even sure if that's the kind of calculation I should be working with. ``` def distance(x, y): ``` ummm... I expect the function starts by unpacking the tuples! But not sure how to write the rest of it, or cleanly. I'm a beginner programmer & not a mathematician so I may be wrong in more than one sense... This exercise is from this HarvardX course: ['Using Python for Research'](https://courses.edx.org/courses/course-v1:HarvardX+PH526x+2T2018/4bdcc373b7a944f8861a3f190c10edca/). It's OK to search for solutions via StackOverflow for learning on this course... not cheating to ask for pointers. Many thanks for any ideas! I will keep searching around.
2018/08/31
[ "https://Stackoverflow.com/questions/52119496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10265759/" ]
``` import math def distance (x,y): value= math.sqrt ((x[0]-y[0])**2 + (x[1] - y[1])**2) print (value) distance((0,0), (1,1)) ```
``` import math def distance(x1,x2,y1,y2): x=(x1,x2) y=(y1,y2) dis = math.sqrt((x[1]-x[0])**2 + (y[1] - y[0])**2) return dis print(distance(0,1,0,1)) ``` This works very well to answer your question.
64,260,105
I want to read all parquet files from an S3 bucket, including all those in the subdirectories (these are actually prefixes). Using wildcards (\*) in the S3 url only works for the files in the specified folder. For example, using this code will only read the parquet files below the `target/` folder. ``` df = spark.read.parquet("s3://bucket/target/*.parquet") df.show() ``` Let's say I have a structure like this in my s3 bucket: ``` "s3://bucket/target/2020/01/01/some-file.parquet" "s3://bucket/target/2020/01/02/some-file.parquet" ``` The above code will raise the exception: ``` pyspark.sql.utils.AnalysisException: 'Path does not exist: s3://mailswitch-extract-underwr-prod/target/*.parquet;' ``` **How can I read all the parquet files from the subdirectories of my s3 bucket?** To run my code, I am using AWS Glue 2.0 with Spark 2.4 and python 3.
2020/10/08
[ "https://Stackoverflow.com/questions/64260105", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1771155/" ]
If you want to read all parquet files below the target folder ``` "s3://bucket/target/2020/01/01/some-file.parquet" "s3://bucket/target/2020/01/02/some-file.parquet" ``` you can do ``` df = spark.read.parquet("bucket/target/*/*/*/*.parquet") ``` The downside is that you need to know the depth of your parquet files.
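As a side note (this does not apply to the Spark 2.4 / Glue 2.0 setup in the question, so take it as a pointer rather than a fix): Spark 3.0 and newer have a `recursiveFileLookup` reader option that avoids hard-coding the depth:

```
df = spark.read.option("recursiveFileLookup", "true").parquet("s3://bucket/target/")
```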
This worked for me: ``` df = spark.read.parquet("s3://your/path/here/some*wildcard") ```
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
I think the true reason for your problem is this: **the phantomjs that webdriver needs is not the one under the `selenium/webdriver` folder**. When you use anaconda to install this package, it's really confusing (at least for me). * First install it with `conda install -c conda-forge phantomjs` and test it with `phantomjs --version`. * Then you can find the true phantomjs executable in this folder: `"path = /${home_path}/anaconda3/envs/${env_name}/bin/phantomjs"`. To check that it is the right path, run `/${home_path}/anaconda3/envs/${env_name}/bin/phantomjs --version`; it should print the version information correctly. * Pass this path to `webdriver.PhantomJS(executable_path=path)` and it will be fixed. So there's no need to use `chmod` or put it in `/usr/local/bin` (the only benefit of doing that is that you can skip the `executable_path` parameter).
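If you are not sure where the conda-installed binary ended up, a small sketch (assuming the active environment's `bin` directory is on your `PATH`) is to let Python resolve it for you:

```
import shutil
from selenium import webdriver

path = shutil.which("phantomjs")   # the executable your shell would run, or None
print(path)                        # sanity-check that it points into your conda env
driver = webdriver.PhantomJS(executable_path=path)
```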
Strangely, for me it was fixed by putting phantomjs in `/usr/local/share` and adding some symbolic links. I followed [these steps](https://stackoverflow.com/questions/8778513/how-can-i-setup-run-phantomjs-on-ubuntu): * move the phantomjs folder to `/usr/local/share/`: + `sudo mv phantomjs-2.1.1-linux-x86_64.tar.bz2 /usr/local/share/.` * create the symbolic links: + `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs` + `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs` I'm no Linux expert so I don't know why this makes a difference. If anyone wants to pitch in, feel free.
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
Well I got this solved by the following CODE: ``` browser = webdriver.PhantomJS(executable_path = "/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs") ```
Strangely, for me it was fixed by putting phantomjs in `/usr/local/share` and adding some symbolic links. I followed [these steps](https://stackoverflow.com/questions/8778513/how-can-i-setup-run-phantomjs-on-ubuntu): * move the phantomjs folder to `/usr/local/share/`: + `sudo mv phantomjs-2.1.1-linux-x86_64.tar.bz2 /usr/local/share/.` * create the symbolic links: + `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs` + `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs` I'm no Linux expert so I don't know why this makes a difference. If anyone wants to pitch in, feel free.
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
I placed the phantomjs file into `/usr/local/bin` and it worked fine.
I think the true reason for your problem is this: **the phantomjs that webdriver needs is not the one under the `selenium/webdriver` folder**. When you use anaconda to install this package, it's really confusing (at least for me). * First install it with `conda install -c conda-forge phantomjs` and test it with `phantomjs --version`. * Then you can find the true phantomjs executable in this folder: `"path = /${home_path}/anaconda3/envs/${env_name}/bin/phantomjs"`. To check that it is the right path, run `/${home_path}/anaconda3/envs/${env_name}/bin/phantomjs --version`; it should print the version information correctly. * Pass this path to `webdriver.PhantomJS(executable_path=path)` and it will be fixed. So there's no need to use `chmod` or put it in `/usr/local/bin` (the only benefit of doing that is that you can skip the `executable_path` parameter).
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
> selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. This error occurs because the phantomjs binary does not have execute permission. Just add execute permission to `phantomjs-2.1.1-linux-x86_64/bin/phantomjs`: `chmod u+x phantomjs`
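If you prefer to check and repair the permission from Python rather than the shell, a minimal sketch (the path below is just a placeholder for wherever your binary actually lives) would be:

```
import os
import stat

path = "/path/to/phantomjs-2.1.1-linux-x86_64/bin/phantomjs"  # hypothetical location
if not os.access(path, os.X_OK):
    # equivalent to `chmod u+x phantomjs`
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
```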
Strangely, for me it was fixed by putting phantomjs in `/usr/local/share` and adding some symbolic links. I followed [these steps](https://stackoverflow.com/questions/8778513/how-can-i-setup-run-phantomjs-on-ubuntu): * move the phantomjs folder to `/usr/local/share/`: + `sudo mv phantomjs-2.1.1-linux-x86_64.tar.bz2 /usr/local/share/.` * create the symbolic links: + `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs` + `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs` I'm no Linux expert so I don't know why this makes a difference. If anyone wants to pitch in, feel free.
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
I placed the phantomjs file into `/usr/local/bin` and it worked fine.
I met this problem before with python + phantomjs. Solution: **Linux**: put phantomjs in `/usr/local/share`. **Windows**: put phantomjs in `/python/scripts`.
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
I placed the phantomjs file into `/usr/local/bin` and it worked fine.
``` executable_path = './phantomjs-2.1.1-linux-x86_64/bin/phantomjs' service_log_path = './log/ghostdriver.log' driver = webdriver.PhantomJS(executable_path=executable_path, service_log_path=service_log_path) ``` You can use either a relative or an absolute path.
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
I met this problem before with python + phantomjs. Solution: **Linux**: put phantomjs in `/usr/local/share`. **Windows**: put phantomjs in `/python/scripts`.
Strangely, for me it was fixed by putting phantomjs in `/usr/local/share` and adding some symbolic links. I followed [these steps](https://stackoverflow.com/questions/8778513/how-can-i-setup-run-phantomjs-on-ubuntu): * move the phantomjs folder to `/usr/local/share/`: + `sudo mv phantomjs-2.1.1-linux-x86_64.tar.bz2 /usr/local/share/.` * create the symbolic links: + `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs` + `sudo ln -s /usr/local/share/phantomjs-1.8.1-linux-x86_64 /usr/local/share/phantomjs` I'm no Linux expert so I don't know why this makes a difference. If anyone wants to pitch in, feel free.
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
I met this problem before with python + phantomjs. Solution: **Linux**: put phantomjs in `/usr/local/share`. **Windows**: put phantomjs in `/python/scripts`.
> selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. This error occurs because the phantomjs binary does not have execute permission. Just add execute permission to `phantomjs-2.1.1-linux-x86_64/bin/phantomjs`: `chmod u+x phantomjs`
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
I met this problem before with python + phantomjs. Solution: **Linux**: put phantomjs in `/usr/local/share`. **Windows**: put phantomjs in `/python/scripts`.
``` executable_path = './phantomjs-2.1.1-linux-x86_64/bin/phantomjs' service_log_path = './log/ghostdriver.log' driver = webdriver.PhantomJS(executable_path=executable_path, service_log_path=service_log_path) ``` You can use either a relative or an absolute path.
40,446,084
Running Selenium locally on flask. Im using the PhantomJS driver. I previously had a path error: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH. ``` But after finding out from another StackOverflow question, I learned that I have to pass the environment path as a parameter for PhantomJS. The path I have below is the path to the phantomJS folder in my virtual environment folder. ``` driver = webdriver.PhantomJS(executable_path='/Users/MyAcc/Documents/MYWEBAPP/venv/lib/python3.5/site-packages/selenium/webdriver/phantomjs') ``` However, I get a new error-code now: ``` selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable may have wrong permissions. ``` Here's what I get when I check the file permissions of the path. ``` total 40 drwxr-xr-x 7 USER staff 238 Nov 6 00:07 . drwxr-xr-x 17 USER staff 578 Nov 6 00:03 .. -rw-r--r--@ 1 USER staff 6148 Nov 6 00:07 .DS_Store -rw-r--r-- 1 USER staff 787 Oct 31 12:27 __init__.py drwxr-xr-x 5 USER staff 170 Oct 31 12:27 __pycache__ -rw-r--r-- 1 USER staff 2587 Oct 31 12:27 service.py -rw-r--r-- 1 USER staff 2934 Oct 31 12:27 webdriver.py ```
2016/11/06
[ "https://Stackoverflow.com/questions/40446084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7121239/" ]
Well I got this solved by the following CODE: ``` browser = webdriver.PhantomJS(executable_path = "/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs") ```
I think the true reason for your problem is this: **the phantomjs that webdriver needs is not the one under the `selenium/webdriver` folder**. When you use anaconda to install this package, it's really confusing (at least for me). * First install it with `conda install -c conda-forge phantomjs` and test it with `phantomjs --version`. * Then you can find the true phantomjs executable in this folder: `"path = /${home_path}/anaconda3/envs/${env_name}/bin/phantomjs"`. To check that it is the right path, run `/${home_path}/anaconda3/envs/${env_name}/bin/phantomjs --version`; it should print the version information correctly. * Pass this path to `webdriver.PhantomJS(executable_path=path)` and it will be fixed. So there's no need to use `chmod` or put it in `/usr/local/bin` (the only benefit of doing that is that you can skip the `executable_path` parameter).
58,460,780
**using python 3.7** Hi. I am trying to get the selected treeview item and want to print it once I click the left menu item. This is my treeview list. When I right-click, a menu appears with a "stop process" command. I am trying to get the selected item and print it, but it's giving me this error: ``` AttributeError: 'str' object has no attribute 'x' in treeview item ``` **Here is my tree list** [enter image description here](https://i.stack.imgur.com/EJRy1.png) **Here is my code** ``` self.popup_menu.add_command(label="stop process", command=lambda:self.delete_selected("<Button-3>")) self.tree.bind('<Button-3>', self.popup) def delete_selected(self, event): item = self.tree.identify('name','ID',event.x, event.y) print(item) def popup(self, event): """action in event of button 3 on tree view""" try: self.popup_menu.tk_popup(event.x_root, event.y_root, 0) finally: self.popup_menu.grab_release() ```
2019/10/19
[ "https://Stackoverflow.com/questions/58460780", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12241800/" ]
No, there is nothing like that, but there are tools that try to mimic this behavior, for example Lombok. Using the `@Data` annotation we get a constructor, getters, setters, `toString`, `equals` and `hashCode`. We can fine-tune it by using annotations like `@Getter`, `@NoArgsConstructor`, etc.
Neither Java nor Kotlin have anything similar to those Swift types you are talking about. Assignment *always* copies references to an object, rather than the object itself. What Kotlin's data classes do is that they create a `copy` method (among other things) that allows you to explicitly make a copy of an object, but you still have to actually call the method. ``` val b = a // b and a point to the same object, even if it is a data class ``` ``` val b = a.copy() // this is what you need to do to create a copy of a data class ``` Java assignment copies references, not objects, and the same is true for Kotlin. There is no way around this, because it is a feature of the language itself. Copy constructors and methods (like what Kotlin's data class gives you) are the closest thing you have to such a feature. To get something like this in Java without having to manually write the code everytime, you could look into Project Lombok.
58,460,780
**using python 3.7** Hi. I am trying to get the selected treeview item and want to print it once I click the left menu item. This is my treeview list. When I right-click, a menu appears with a "stop process" command. I am trying to get the selected item and print it, but it's giving me this error: ``` AttributeError: 'str' object has no attribute 'x' in treeview item ``` **Here is my tree list** [enter image description here](https://i.stack.imgur.com/EJRy1.png) **Here is my code** ``` self.popup_menu.add_command(label="stop process", command=lambda:self.delete_selected("<Button-3>")) self.tree.bind('<Button-3>', self.popup) def delete_selected(self, event): item = self.tree.identify('name','ID',event.x, event.y) print(item) def popup(self, event): """action in event of button 3 on tree view""" try: self.popup_menu.tk_popup(event.x_root, event.y_root, 0) finally: self.popup_menu.grab_release() ```
2019/10/19
[ "https://Stackoverflow.com/questions/58460780", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12241800/" ]
No, there is nothing like that, but there are tools that try to mimic this behavior, for example Lombok. Using the `@Data` annotation we get a constructor, getters, setters, `toString`, `equals` and `hashCode`. We can fine-tune it by using annotations like `@Getter`, `@NoArgsConstructor`, etc.
Starting with [Java 14](https://blogs.oracle.com/javamagazine/records-come-to-java) you will have access to the immutable [`Record`](https://docs.oracle.com/en/java/javase/14/docs/api/java.base/java/lang/Record.html) class. It is similar in concept to a `data` class in Kotlin.
29,943,146
I am new to python, trying to port a script in 2.x to 3.x i am encountering the error TypeError; Must use key word argument or key function in python 3.x. Below is the piece of code: Please help ``` def resort_working_array( self, chosen_values_arr, num ): for item in self.__working_arr[num]: data_node = self.__pairs.get_node_info( item ) new_combs = [] for i in range(0, self.__n): # numbers of new combinations to be created if this item is appended to array new_combs.append( set([pairs_storage.key(z) for z in xuniqueCombinations( chosen_values_arr+[item], i+1)]) - self.__pairs.get_combs()[i] ) # weighting the node item.weights = [ -len(new_combs[-1]) ] # node that creates most of new pairs is the best item.weights += [ len(data_node.out) ] # less used outbound connections most likely to produce more new pairs while search continues item.weights += [ len(x) for x in reversed(new_combs[:-1])] item.weights += [ -data_node.counter ] # less used node is better item.weights += [ -len(data_node.in_) ] # otherwise we will prefer node with most of free inbound connections; somehow it works out better ;) self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) ) ```
2015/04/29
[ "https://Stackoverflow.com/questions/29943146", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4846265/" ]
Looks like the problem is in this line. ``` self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) ) ``` The `key` callable should take only one argument. Try: ``` self.__working_arr[num].sort(key = lambda a: a.weights) ```
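If you prefer not to use a lambda, `operator.attrgetter` from the standard library does the same thing:

```
from operator import attrgetter

self.__working_arr[num].sort(key=attrgetter("weights"))
```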
The exact same error message appears if you try to pass the *key* parameter as a positional parameter. Wrong: ``` sorted(lst, myKeyFunction) ``` Correct: ``` sorted(lst, key=myKeyFunction) ``` Python 3.6.6
29,943,146
I am new to python, trying to port a script in 2.x to 3.x i am encountering the error TypeError; Must use key word argument or key function in python 3.x. Below is the piece of code: Please help ``` def resort_working_array( self, chosen_values_arr, num ): for item in self.__working_arr[num]: data_node = self.__pairs.get_node_info( item ) new_combs = [] for i in range(0, self.__n): # numbers of new combinations to be created if this item is appended to array new_combs.append( set([pairs_storage.key(z) for z in xuniqueCombinations( chosen_values_arr+[item], i+1)]) - self.__pairs.get_combs()[i] ) # weighting the node item.weights = [ -len(new_combs[-1]) ] # node that creates most of new pairs is the best item.weights += [ len(data_node.out) ] # less used outbound connections most likely to produce more new pairs while search continues item.weights += [ len(x) for x in reversed(new_combs[:-1])] item.weights += [ -data_node.counter ] # less used node is better item.weights += [ -len(data_node.in_) ] # otherwise we will prefer node with most of free inbound connections; somehow it works out better ;) self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) ) ```
2015/04/29
[ "https://Stackoverflow.com/questions/29943146", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4846265/" ]
Looks like the problem is in this line. ``` self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) ) ``` The `key` callable should take only one argument. Try: ``` self.__working_arr[num].sort(key = lambda a: a.weights) ```
Following on from the [answer by @Kevin](https://stackoverflow.com/a/29944332/3363571) - and more specifically the comment/question by @featuresky: Using [functools.cmp\_to\_key](https://docs.python.org/3/library/functools.html#functools.cmp_to_key) and reimplementing cmp (as noted in the [porting guide](https://portingguide.readthedocs.io/en/latest/comparisons.html)) I have a hacky workaround for a scenario where 2 elements can be compared via lambda form. To use the OP as an example; instead of: ``` self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) ) ``` You can use this: ``` from functools import cmp_to_key [...] def cmp(x, y): return (x > y) - (x < y) self.__working_arr[num].sort(key=cmp_to_key(lambda a,b: cmp(a.weights, b.weights))) ``` Admittedly, I'm somewhat new to python myself and don't really have a good handle on python2. I'm sure the code could be rewritten in a much better/cleaner way and I'd certainly love to hear a "proper" way to do this. OTOH in my case this was a handy hack for a old python2 script (updated to python3) that I don't have time/energy to "properly" understand and rewrite right now. Beyond the fact that it works, I would certainly not recommend wide usage of this hack! But I figured that it was worth sharing.
29,943,146
I am new to python, trying to port a script in 2.x to 3.x i am encountering the error TypeError; Must use key word argument or key function in python 3.x. Below is the piece of code: Please help ``` def resort_working_array( self, chosen_values_arr, num ): for item in self.__working_arr[num]: data_node = self.__pairs.get_node_info( item ) new_combs = [] for i in range(0, self.__n): # numbers of new combinations to be created if this item is appended to array new_combs.append( set([pairs_storage.key(z) for z in xuniqueCombinations( chosen_values_arr+[item], i+1)]) - self.__pairs.get_combs()[i] ) # weighting the node item.weights = [ -len(new_combs[-1]) ] # node that creates most of new pairs is the best item.weights += [ len(data_node.out) ] # less used outbound connections most likely to produce more new pairs while search continues item.weights += [ len(x) for x in reversed(new_combs[:-1])] item.weights += [ -data_node.counter ] # less used node is better item.weights += [ -len(data_node.in_) ] # otherwise we will prefer node with most of free inbound connections; somehow it works out better ;) self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) ) ```
2015/04/29
[ "https://Stackoverflow.com/questions/29943146", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4846265/" ]
The exact same error message appears if you try to pass the *key* parameter as a positional parameter. Wrong: ``` sorted(lst, myKeyFunction) ``` Correct: ``` sorted(lst, key=myKeyFunction) ``` Python 3.6.6
Following on from the [answer by @Kevin](https://stackoverflow.com/a/29944332/3363571) - and more specifically the comment/question by @featuresky: Using [functools.cmp\_to\_key](https://docs.python.org/3/library/functools.html#functools.cmp_to_key) and reimplementing cmp (as noted in the [porting guide](https://portingguide.readthedocs.io/en/latest/comparisons.html)) I have a hacky workaround for a scenario where 2 elements can be compared via lambda form. To use the OP as an example; instead of: ``` self.__working_arr[num].sort( key = lambda a,b: cmp(a.weights, b.weights) ) ``` You can use this: ``` from functools import cmp_to_key [...] def cmp(x, y): return (x > y) - (x < y) self.__working_arr[num].sort(key=cmp_to_key(lambda a,b: cmp(a.weights, b.weights))) ``` Admittedly, I'm somewhat new to python myself and don't really have a good handle on python2. I'm sure the code could be rewritten in a much better/cleaner way and I'd certainly love to hear a "proper" way to do this. OTOH in my case this was a handy hack for a old python2 script (updated to python3) that I don't have time/energy to "properly" understand and rewrite right now. Beyond the fact that it works, I would certainly not recommend wide usage of this hack! But I figured that it was worth sharing.
64,620,456
I'm a beginner in python and I want to use comprehension to create a dictionary. Let's say I have the below two list and want to convert them to a dictionary like `{'Key 1':['c','d'], 'Key 2':['a','f'], 'Key 3':['b','e']}`. I can only think of the code below and I don't know how to change the value of the key and the filter using comprehension. How should I change my code? ``` value = ['a','b','c','d','e','f'] key = [2, 3, 1, 1, 3, 2] {"Key 1" : [value for key,value in list(zip(key,value)) if key==1]} ```
2020/10/31
[ "https://Stackoverflow.com/questions/64620456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This should do it: ``` value = ['a','b','c','d','e','f'] key = [2, 3, 1, 1, 3, 2] answer = {} for k, v in zip(key, value): if k in answer: answer[k].append(v) else: answer[k] = [v] print(answer) {2: ['a', 'f'], 3: ['b', 'e'], 1: ['c', 'd']} ``` EDIT: oops, jumped the gun. Apologies. Here's the comprehension version, but it's not very efficient: ``` { k: [v for i, v in enumerate(value) if key[i] == k] for k in set(key) } ``` EDIT 2: Here's one that has better complexity: ``` import pandas as pd series = pd.Series(key) { k: [value[i] for i in indices] for k, indices in series.groupby(series).groups.items() } ```
You could do it with dictionary comprehension *and* list comprehension: ``` {f"Key {k}" : [value for key,value in zip(key,value) if key == k] for k in key} ``` Your lists would yield the following: ``` {'Key 2': ['a', 'f'], 'Key 3': ['b', 'e'], 'Key 1': ['c', 'd']} ``` As requested.
64,620,456
I'm a beginner in python and I want to use comprehension to create a dictionary. Let's say I have the below two list and want to convert them to a dictionary like `{'Key 1':['c','d'], 'Key 2':['a','f'], 'Key 3':['b','e']}`. I can only think of the code below and I don't know how to change the value of the key and the filter using comprehension. How should I change my code? ``` value = ['a','b','c','d','e','f'] key = [2, 3, 1, 1, 3, 2] {"Key 1" : [value for key,value in list(zip(key,value)) if key==1]} ```
2020/10/31
[ "https://Stackoverflow.com/questions/64620456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This should do it: ``` value = ['a','b','c','d','e','f'] key = [2, 3, 1, 1, 3, 2] answer = {} for k, v in zip(key, value): if k in answer: answer[k].append(v) else: answer[k] = [v] print(answer) {2: ['a', 'f'], 3: ['b', 'e'], 1: ['c', 'd']} ``` EDIT: oops, jumped the gun. Apologies. Here's the comprehension version, but it's not very efficient: ``` { k: [v for i, v in enumerate(value) if key[i] == k] for k in set(key) } ``` EDIT 2: Here's one that has better complexity: ``` import pandas as pd series = pd.Series(key) { k: [value[i] for i in indices] for k, indices in series.groupby(series).groups.items() } ```
use dict [setdefault](https://www.w3schools.com/python/ref_dictionary_setdefault.asp) ``` value = ['a', 'b', 'c', 'd', 'e', 'f'] key = [2, 3, 1, 1, 3, 2] d = {} {d.setdefault(f'Key {k}', []).append(v) for k, v in zip(key, value)} print(d) ``` output ``` {'Key 2': ['a', 'f'], 'Key 3': ['b', 'e'], 'Key 1': ['c', 'd']} ```
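One caveat: the comprehension here is used only for its side effects and builds a throwaway set of `None` values. A plain loop expresses the same idea without that (purely a stylistic alternative):

```
d = {}
for k, v in zip(key, value):
    d.setdefault(f'Key {k}', []).append(v)
print(d)
```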
64,620,456
I'm a beginner in python and I want to use comprehension to create a dictionary. Let's say I have the below two list and want to convert them to a dictionary like `{'Key 1':['c','d'], 'Key 2':['a','f'], 'Key 3':['b','e']}`. I can only think of the code below and I don't know how to change the value of the key and the filter using comprehension. How should I change my code? ``` value = ['a','b','c','d','e','f'] key = [2, 3, 1, 1, 3, 2] {"Key 1" : [value for key,value in list(zip(key,value)) if key==1]} ```
2020/10/31
[ "https://Stackoverflow.com/questions/64620456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
This should do it: ``` value = ['a','b','c','d','e','f'] key = [2, 3, 1, 1, 3, 2] answer = {} for k, v in zip(key, value): if k in answer: answer[k].append(v) else: answer[k] = [v] print(answer) {2: ['a', 'f'], 3: ['b', 'e'], 1: ['c', 'd']} ``` EDIT: oops, jumped the gun. Apologies. Here's the comprehension version, but it's not very efficient: ``` { k: [v for i, v in enumerate(value) if key[i] == k] for k in set(key) } ``` EDIT 2: Here's one that has better complexity: ``` import pandas as pd series = pd.Series(key) { k: [value[i] for i in indices] for k, indices in series.groupby(series).groups.items() } ```
Usually, it is written as an explicit loop (O(n) solution): ``` >>> letters = 'abcdef' >>> digits = [2, 3, 1, 1, 3, 2] >>> from collections import defaultdict >>> result = defaultdict(list) # digit -> letters >>> for digit, letter in zip(digits, letters): ... result[digit].append(letter) >>> result defaultdict(<class 'list'>, {2: ['a', 'f'], 3: ['b', 'e'], 1: ['c', 'd']}) ``` Nested comprehensions (O(n²) solution) like in other answers: ``` >>> { ... digit: [letter for d, letter in zip(digits, letters) if digit == d] ... for digit in set(digits) ... } {1: ['c', 'd'], 2: ['a', 'f'], 3: ['b', 'e']} ``` If you need to write it as a single dict comprehension, [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) could be used (O(n log n) solution): ``` >>> from itertools import groupby >>> { ... digit: [letter for _, letter in group] ... for digit, group in groupby( ... sorted(zip(digits, letters), key=lambda x: x[0]), ... key=lambda x: x[0] ... ) ... } ```
64,620,456
I'm a beginner in python and I want to use comprehension to create a dictionary. Let's say I have the below two list and want to convert them to a dictionary like `{'Key 1':['c','d'], 'Key 2':['a','f'], 'Key 3':['b','e']}`. I can only think of the code below and I don't know how to change the value of the key and the filter using comprehension. How should I change my code? ``` value = ['a','b','c','d','e','f'] key = [2, 3, 1, 1, 3, 2] {"Key 1" : [value for key,value in list(zip(key,value)) if key==1]} ```
2020/10/31
[ "https://Stackoverflow.com/questions/64620456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You could do it with dictionary comprehension *and* list comprehension: ``` {f"Key {k}" : [value for key,value in zip(key,value) if key == k] for k in key} ``` Your lists would yield the following: ``` {'Key 2': ['a', 'f'], 'Key 3': ['b', 'e'], 'Key 1': ['c', 'd']} ``` As requested.
Usually, it is written as an explicit loop (O(n) solution): ``` >>> letters = 'abcdef' >>> digits = [2, 3, 1, 1, 3, 2] >>> from collections import defaultdict >>> result = defaultdict(list) # digit -> letters >>> for digit, letter in zip(digits, letters): ... result[digit].append(letter) >>> result defaultdict(<class 'list'>, {2: ['a', 'f'], 3: ['b', 'e'], 1: ['c', 'd']}) ``` Nested comprehensions (O(n²) solution) like in other answers: ``` >>> { ... digit: [letter for d, letter in zip(digits, letters) if digit == d] ... for digit in set(digits) ... } {1: ['c', 'd'], 2: ['a', 'f'], 3: ['b', 'e']} ``` If you need to write it as a single dict comprehension, [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) could be used (O(n log n) solution): ``` >>> from itertools import groupby >>> { ... digit: [letter for _, letter in group] ... for digit, group in groupby( ... sorted(zip(digits, letters), key=lambda x: x[0]), ... key=lambda x: x[0] ... ) ... } ```
64,620,456
I'm a beginner in python and I want to use comprehension to create a dictionary. Let's say I have the below two list and want to convert them to a dictionary like `{'Key 1':['c','d'], 'Key 2':['a','f'], 'Key 3':['b','e']}`. I can only think of the code below and I don't know how to change the value of the key and the filter using comprehension. How should I change my code? ``` value = ['a','b','c','d','e','f'] key = [2, 3, 1, 1, 3, 2] {"Key 1" : [value for key,value in list(zip(key,value)) if key==1]} ```
2020/10/31
[ "https://Stackoverflow.com/questions/64620456", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
use dict [setdefault](https://www.w3schools.com/python/ref_dictionary_setdefault.asp) ``` value = ['a', 'b', 'c', 'd', 'e', 'f'] key = [2, 3, 1, 1, 3, 2] d = {} {d.setdefault(f'Key {k}', []).append(v) for k, v in zip(key, value)} print(d) ``` output ``` {'Key 2': ['a', 'f'], 'Key 3': ['b', 'e'], 'Key 1': ['c', 'd']} ```
Usually, it is written as an explicit loop (O(n) solution): ``` >>> letters = 'abcdef' >>> digits = [2, 3, 1, 1, 3, 2] >>> from collections import defaultdict >>> result = defaultdict(list) # digit -> letters >>> for digit, letter in zip(digits, letters): ... result[digit].append(letter) >>> result defaultdict(<class 'list'>, {2: ['a', 'f'], 3: ['b', 'e'], 1: ['c', 'd']}) ``` Nested comprehensions (O(n²) solution) like in other answers: ``` >>> { ... digit: [letter for d, letter in zip(digits, letters) if digit == d] ... for digit in set(digits) ... } {1: ['c', 'd'], 2: ['a', 'f'], 3: ['b', 'e']} ``` If you need to write it as a single dict comprehension, [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) could be used (O(n log n) solution): ``` >>> from itertools import groupby >>> { ... digit: [letter for _, letter in group] ... for digit, group in groupby( ... sorted(zip(digits, letters), key=lambda x: x[0]), ... key=lambda x: x[0] ... ) ... } ```
35,697,643
I have a `Frame` with two columns of `String`, ``` let first = Series.ofValues(["a";"b";"c"]) let second = Series.ofValues(["d";"e";"f"]) let df = Frame(["first"; "second"], [first; second]) ``` How do I produce a third column as the concatenation of the two columns? In `python` `pandas`, this can be achieved with a simple `+` operator, but `deedle` gives an error if I do that: ``` error FS0043: No overloads match for method 'op_Addition'. ```
2016/02/29
[ "https://Stackoverflow.com/questions/35697643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1569058/" ]
It sounds like what you want is to have something that returns something like: ``` Series.ofValues(["ad"; "be"; "cf"]) ``` Then I think you need to define an addition operator with something like this: ``` let additionOperator = (fun (a:string) (b:string) -> (a + b)) ``` And then you can add them like this: ``` Series.zipInto additionOperator first second ``` I get as the result: ``` val it : Series<int,string> = series [ 0 => ad; 1 => be; 2 => cf] ``` However if you are alright with tuples as your result, you can just use: ``` Series.zip first second ```
I came across this after facing the same issue. The trick is to get the values as a seq and use `Seq.map2` to concatenate the two seqs. My solution is: ``` let first = Series.ofValues(["a";"b";"c"]) let second = Series.ofValues(["d";"e";"f"]) let df = Seq.map2 (fun x y -> x+y) first.Values second.Values |> Series.ofValues |> (fun x -> Frame.addCol "third" x (Frame(["first"; "second"], [first; second]))) ``` Result: ``` df.Print() first second third 0 -> a d ad 1 -> b e be 2 -> c f cf ```
35,697,643
I have a `Frame` with two columns of `String`, ``` let first = Series.ofValues(["a";"b";"c"]) let second = Series.ofValues(["d";"e";"f"]) let df = Frame(["first"; "second"], [first; second]) ``` How do I produce a third column as the concatenation of the two columns? In `python` `pandas`, this can be achieved with a simple `+` operator, but `deedle` gives an error if I do that: ``` error FS0043: No overloads match for method 'op_Addition'. ```
2016/02/29
[ "https://Stackoverflow.com/questions/35697643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1569058/" ]
It sounds like what you want is to have something that returns something like: ``` Series.ofValues(["ad"; "be"; "cf"]) ``` Then I think you need to define an addition operator with something like this: ``` let additionOperator = (fun (a:string) (b:string) -> (a + b)) ``` And then you can add them like this: ``` Series.zipInto additionOperator first second ``` I get as the result: ``` val it : Series<int,string> = series [ 0 => ad; 1 => be; 2 => cf] ``` However if you are alright with tuples as your result, you can just use: ``` Series.zip first second ```
I believe this would work... Clearly not the most beautiful way to write it but... Will try to do some time testing later. ``` let df3c = df |> Frame.mapRows (fun _ b -> b.GetAt(0).ToString() + b.GetAt(1).ToString()) |> (fun a -> Frame.addCol "test" a df) ```
35,697,643
I have a `Frame` with two columns of `String`, ``` let first = Series.ofValues(["a";"b";"c"]) let second = Series.ofValues(["d";"e";"f"]) let df = Frame(["first"; "second"], [first; second]) ``` How do I produce a third column as the concatenation of the two columns? In `python` `pandas`, this can be achieved with a simple `+` operator, but `deedle` gives an error if I do that: ``` error FS0043: No overloads match for method 'op_Addition'. ```
2016/02/29
[ "https://Stackoverflow.com/questions/35697643", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1569058/" ]
I came across this after facing the same issue. The trick is to get the values as a seq and use `Seq.map2` to concatenate the two seqs. My solution is: ``` let first = Series.ofValues(["a";"b";"c"]) let second = Series.ofValues(["d";"e";"f"]) let df = Seq.map2 (fun x y -> x+y) first.Values second.Values |> Series.ofValues |> (fun x -> Frame.addCol "third" x (Frame(["first"; "second"], [first; second]))) ``` Result: ``` df.Print() first second third 0 -> a d ad 1 -> b e be 2 -> c f cf ```
I believe this would work... Clearly not the most beautiful way to write it but... Will try to do some time testing later. ``` let df3c = df |> Frame.mapRows (fun _ b -> b.GetAt(0).ToString() + b.GetAt(1).ToString()) |> (fun a -> Frame.addCol "test" a df) ```
62,030,549
I have a directory filled with '.tbl' files. The file set up is as follows: \STAR\_ID = "HD 74156" \DATA\_CATEGORY = "Planet Radial Velocity Curve" \NUMBER\_OF\_POINTS = "82" \TIME\_REFERENCE\_FRAME = "JD" \MINIMUM\_DATE = "2453342.23249" \DATE\_UNITS = "days" \MAXIMUM\_DATE = "2454231.60002" .... I need to rename every file in the directory using the STAR\_ID, so in this case the files name would be 'HD 74156.tbl.' I have been able to do it for about 20 of the ~600 files. I am not sure why it will not continue through the rest of the files. My current code is: ``` for i in os.listdir(path): with open(i) as f: first_line = f.readline() system = first_line.split('"')[1] new_file = system + ".tbl" os.rename(file, new_file)` ``` and the error message is: ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-37-5883c060a977> in <module> 3 with open(i) as f: 4 first_line = f.readline() ----> 5 system = first_line.split('"')[1] 6 new_file = system + ".tbl" 7 os.rename(file, new_file) IndexError: list index out of range ```
2020/05/26
[ "https://Stackoverflow.com/questions/62030549", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13622725/" ]
This error occurs because `first_line.split('"')` is returning a list with fewer than 2 items. You can try: ``` first_line_ls = first_line.split('"') if len(first_line_ls) > 1: system = first_line_ls[1] else: pass # handle files whose first line has no quoted value ``` This code can help you prevent the error and handle the cases where the `first_line` string contains fewer than 2 `"` characters.
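Another way to guard it, assuming (as in your sample header) that the interesting lines always contain a quoted value, is to check for the quote character before splitting:

```
first_line = f.readline()
if '"' in first_line:
    system = first_line.split('"')[1]
else:
    system = None  # no quoted value on this line; skip or log this file
```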
It looks like these `.tbl` files are not as uniform as you might have hoped. If this line: ``` ----> 5 system = first_line.split('"')[1] ``` fails on some files, it's because their first line is not formatted as you expected, as @Leo Arad noted. You also want to make sure you're *actually* using the `STAR_ID` field. Perhaps these files usually put all the fields in the same order (as an aside, what are these `.tbl` files? What software did they come from? I've never seen it before), but since you've already found other inconsistencies with the format, better to be safe than sorry. I might write a little helper function to parse the fields in this file. It takes a single line and returns a `(key, value)` tuple for the field. If the line does not look like a valid field it returns `(None, None)`: ```py import re # Dissection of this regular expression: # ^\\ : line begins with \ # (?P<key>\w+) : extract the key, which is one or more letters, numbers or underscores # \s*=\s* : an equal sign surrounding by any amount of white space # "(?P<value>[^"]*)" : extract the value, which is between a pair of double-quotes # and contains any characters other than double-quotes # (Note: I don't know if this file format has a mechanism for escaping # double-quotes inside the value; if so that would have to be handled as well) _field_re = re.compile(r'^\\(?P<key>\w+)\s*=\s*"(?P<value>[^"]*)"') def parse_field(line): # match the line against the regular expression match = _field_re.match(line) # if it doesn't match, return (None, None) if match is None: return (None, None) else: # return the key and value pair return match.groups() ``` Now open your file, loop over all the lines, and perform the rename once you find `STAR_ID`. If not, print a warning (this is mostly the same as your code with some slight variations): ``` for filename in os.listdir(path): filename = os.path.join(path, filename) star_id = None # NOTE: Do the rename outside the with statement so that the # file is closed; on Linux it doesn't matter but on Windows # the rename will fail if the file is not closed first with open(filename) as fobj: for line in fobj: key, value = parse_field(line) if key == 'STAR_ID': star_id = value break if star_id is not None: os.rename(filename, os.path.join(path, star_id + '.tbl')) else: print(f'WARNING: STAR_ID key missing from {filename}', file=sys.stderr) ``` If you are not comfortable with regular expressions (and really, who is?) it would be good to learn the basics as it's an extremely useful tool to have in your belt. However, this format is simple enough that you could get away with using simple string parsing methods like you were doing. Though I would still enhance it a bit to make sure you're actually getting the STAR\_ID field. Something like this: ``` def parse_field(line): if '=' not in line: return (None, None) key, value = [part.strip() for part in line.split('=', 1)] if key[0] != '\\': return (None, None) else: key = key[1:] if value[0] != '"' or value[-1] != '"': # still not a valid line assuming quotes are required return (None, None) else: return (key, value.split('"')[1]) ``` This is similar to what you were doing, but a little more robust (and returns the key as well as the value). But you can see this is more involved than the regular expression version. It's actually more-or-less implementing the exact same logic as the regular expression, but more slowly and verbosely.
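For example, calling the regex-based `parse_field` on the first header line quoted in the question gives:

```
>>> parse_field('\\STAR_ID = "HD 74156"')
('STAR_ID', 'HD 74156')
>>> parse_field('not a header line')
(None, None)
```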
48,675,435
In a personal project, I am trying to use Django as my front end and then allow data entered by users in a particular form to be copied to google sheets. Google's own docs recommend using <https://github.com/google/oauth2client> which is now deprecated, and the docs have not been updated. With this, I have started attempting to use [Python Social Auth](https://github.com/python-social-auth/social-core) and [Gspread](https://github.com/burnash/gspread). For Gspread to be able to function correctly, I need to be able to pass it not only an access token but also a refresh token. Python Social Auth however is not persisting the refresh token along with the rest of the "extra data". Looking at the data preserved and the URLs routed to, it seems to me more like somewhere it is routing through Google+. I have the following configurations in my Django settings files: ``` AUTHENTICATION_BACKENDS = ( 'social_core.backends.google.GoogleOAuth2', 'django.contrib.auth.backends.ModelBackend', ) SOCIAL_AUTH_PIPELINE = ( 'social_core.pipeline.social_auth.social_details', 'social_core.pipeline.social_auth.social_uid', 'social_core.pipeline.social_auth.social_user', 'social_core.pipeline.user.get_username', 'social_core.pipeline.user.create_user', 'social_core.pipeline.social_auth.associate_user', 'social_core.pipeline.social_auth.load_extra_data', 'social_core.pipeline.user.user_details', 'social_core.pipeline.social_auth.associate_by_email', ) SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = '...' SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = '...' SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = ['https://www.googleapis.com/auth/spreadsheets'] ``` * Is there a better way to access a google sheet? * Am I correct that PSA or google is redirecting me into a Google+ auth flow instead of the Google Oauth2? * If not, what must change so that Python Social Auth keeps the refresh token?
2018/02/08
[ "https://Stackoverflow.com/questions/48675435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6883167/" ]
It's true that `python-social-auth` will use some bits of the Google+ platform, at least the API to retrieve details about the user to fill in the account. From your settings, I see you have `associate_by_email` at the bottom; at that point it has no effect since the user has already been created. If you really plan to use it, it must come before the `create_user` entry; you can check the [`DEFAULT_PIPELINE`](https://github.com/python-social-auth/social-core/blob/master/social_core/pipeline/__init__.py#L29) as a reference. In order to get a `refresh_token` from Google, you need to tell it that you want one; to do that you need to set the `offline` access type:

```
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {
    'access_type': 'offline'
}
```

With that setting Google will give you a `refresh_token` and it will automatically be stored in `extra_data`.
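Once the refresh token is in `extra_data`, it can be handed to gspread. A minimal sketch, assuming a recent `gspread` that accepts `google-auth` credentials; the helper name `sheets_client_for` is hypothetical, and the key/secret are your `SOCIAL_AUTH_GOOGLE_OAUTH2_KEY`/`SECRET` values:

```python
import gspread
from google.oauth2.credentials import Credentials

def sheets_client_for(user, client_id, client_secret):
    # python-social-auth keeps the Google tokens on the social-auth record
    social = user.social_auth.get(provider='google-oauth2')
    creds = Credentials(
        token=social.extra_data.get('access_token'),
        refresh_token=social.extra_data.get('refresh_token'),
        token_uri='https://oauth2.googleapis.com/token',
        client_id=client_id,
        client_secret=client_secret,
        scopes=['https://www.googleapis.com/auth/spreadsheets'],
    )
    return gspread.authorize(creds)
```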
Just provide this in your `settings.py`: `SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = { 'access_type': 'offline', 'hd': 'xyzabc.com', 'approval_prompt': 'force' }` Remember that `{'approval_prompt': 'force'}` forces Google to show the consent prompt again; this way you will get a refresh token even if the user has already authorized the app.
48,675,435
In a personal project, I am trying to use Django as my front end and then allow data entered by users in a particular form to be copied to google sheets. Google's own docs recommend using <https://github.com/google/oauth2client> which is now deprecated, and the docs have not been updated. With this, I have started attempting to use [Python Social Auth](https://github.com/python-social-auth/social-core) and [Gspread](https://github.com/burnash/gspread). For Gspread to be able to function correctly, I need to be able to pass it not only an access token but also a refresh token. Python Social Auth however is not persisting the refresh token along with the rest of the "extra data". Looking at the data preserved and the URLs routed to, it seems to me more like somewhere it is routing through Google+. I have the following configurations in my Django settings files: ``` AUTHENTICATION_BACKENDS = ( 'social_core.backends.google.GoogleOAuth2', 'django.contrib.auth.backends.ModelBackend', ) SOCIAL_AUTH_PIPELINE = ( 'social_core.pipeline.social_auth.social_details', 'social_core.pipeline.social_auth.social_uid', 'social_core.pipeline.social_auth.social_user', 'social_core.pipeline.user.get_username', 'social_core.pipeline.user.create_user', 'social_core.pipeline.social_auth.associate_user', 'social_core.pipeline.social_auth.load_extra_data', 'social_core.pipeline.user.user_details', 'social_core.pipeline.social_auth.associate_by_email', ) SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = '...' SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = '...' SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = ['https://www.googleapis.com/auth/spreadsheets'] ``` * Is there a better way to access a google sheet? * Am I correct that PSA or google is redirecting me into a Google+ auth flow instead of the Google Oauth2? * If not, what must change so that Python Social Auth keeps the refresh token?
2018/02/08
[ "https://Stackoverflow.com/questions/48675435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6883167/" ]
It's true that `python-social-auth` will use some bits of the Google+ platform, at least the API to retrieve details about the user to fill in the account. From your settings, I see you have `associate_by_email` at the bottom; at that point it has no effect since the user has already been created. If you really plan to use it, it must come before the `create_user` entry; you can check the [`DEFAULT_PIPELINE`](https://github.com/python-social-auth/social-core/blob/master/social_core/pipeline/__init__.py#L29) as a reference. In order to get a `refresh_token` from Google, you need to tell it that you want one; to do that you need to set the `offline` access type:

```
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {
    'access_type': 'offline'
}
```

With that setting Google will give you a `refresh_token` and it will automatically be stored in `extra_data`.
You can send extra parameters to the OAuth2 provider using the variable

```
SOCIAL_AUTH_<PROVIDER>_AUTH_EXTRA_ARGUMENTS
```

For Google, you can see the extra parameters they accept [in their documentation (scroll down to "parameters")](https://developers.google.com/identity/protocols/OAuth2WebServer#creatingclient). The one we are looking for is `access_type`:

> **access\_type**: Indicates whether your application can refresh access tokens when the user is not present at the browser. Valid parameter values are online, which is the default value, and offline.

So we can add the following to `settings.py`, to indicate that we want to receive a refresh token:

```
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {"access_type": "offline"}
```

The tokens returned by Google will be stored in `extra_data`, so the refresh token can be accessed like this:

```py
refresh_token = user.social_auth.get(provider="google-oauth2").extra_data["refresh_token"]
```

---

One possible solution is to store the refresh token alongside the user in a `UserProfile` model, by adding a custom function to the social-auth pipeline:

1. Create the model

```py
# models.py
class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name="profile")
    refresh_token = models.CharField(max_length=255, default="")
```

2. Add a function to store the refresh token

```py
# pipeline.py
from .models import UserProfile

def store_refresh_token(user=None, *args, **kwargs):
    extra_data = user.social_auth.get(provider="google-oauth2").extra_data
    UserProfile.objects.get_or_create(
        user=user,
        defaults={"refresh_token": extra_data["refresh_token"]}
    )
```

3. Add our new function to the social-auth pipeline.

```py
# settings.py
...
SOCIAL_AUTH_PIPELINE = (
    ...
    "myapp.pipeline.store_refresh_token"
)

SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = [
    'https://www.googleapis.com/auth/spreadsheets'
    # any other scopes you need
]
...
```

The token is now stored alongside the user and can be used to initialise the sheets client or whatever else you need.
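For the final step of the question (copying submitted form data into the sheet), appending a row with gspread is a one-liner per submission. A minimal sketch, assuming an already-authorized `gspread` client built from the stored tokens; `SHEET_KEY` and the form field names are placeholders:

```python
# sketch of a form-handler helper; SHEET_KEY and the field names are hypothetical
import gspread

def save_to_sheet(client: gspread.Client, form_data: dict) -> None:
    worksheet = client.open_by_key("SHEET_KEY").sheet1
    # one appended row per form submission
    worksheet.append_row([form_data["name"], form_data["email"], form_data["message"]])
```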
48,675,435
In a personal project, I am trying to use Django as my front end and then allow data entered by users in a particular form to be copied to google sheets. Google's own docs recommend using <https://github.com/google/oauth2client> which is now deprecated, and the docs have not been updated. With this, I have started attempting to use [Python Social Auth](https://github.com/python-social-auth/social-core) and [Gspread](https://github.com/burnash/gspread). For Gspread to be able to function correctly, I need to be able to pass it not only an access token but also a refresh token. Python Social Auth however is not persisting the refresh token along with the rest of the "extra data". Looking at the data preserved and the URLs routed to, it seems to me more like somewhere it is routing through Google+. I have the following configurations in my Django settings files: ``` AUTHENTICATION_BACKENDS = ( 'social_core.backends.google.GoogleOAuth2', 'django.contrib.auth.backends.ModelBackend', ) SOCIAL_AUTH_PIPELINE = ( 'social_core.pipeline.social_auth.social_details', 'social_core.pipeline.social_auth.social_uid', 'social_core.pipeline.social_auth.social_user', 'social_core.pipeline.user.get_username', 'social_core.pipeline.user.create_user', 'social_core.pipeline.social_auth.associate_user', 'social_core.pipeline.social_auth.load_extra_data', 'social_core.pipeline.user.user_details', 'social_core.pipeline.social_auth.associate_by_email', ) SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = '...' SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = '...' SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = ['https://www.googleapis.com/auth/spreadsheets'] ``` * Is there a better way to access a google sheet? * Am I correct that PSA or google is redirecting me into a Google+ auth flow instead of the Google Oauth2? * If not, what must change so that Python Social Auth keeps the refresh token?
2018/02/08
[ "https://Stackoverflow.com/questions/48675435", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6883167/" ]
Just provide this in your `settings.py`: `SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = { 'access_type': 'offline', 'hd': 'xyzabc.com', 'approval_prompt': 'force' }` Remember that `{'approval_prompt': 'force'}` forces Google to show the consent prompt again; this way you will get a refresh token even if the user has already authorized the app.
You can send extra parameters to the OAuth2 provider using the variable

```
SOCIAL_AUTH_<PROVIDER>_AUTH_EXTRA_ARGUMENTS
```

For Google, you can see the extra parameters they accept [in their documentation (scroll down to "parameters")](https://developers.google.com/identity/protocols/OAuth2WebServer#creatingclient). The one we are looking for is `access_type`:

> **access\_type**: Indicates whether your application can refresh access tokens when the user is not present at the browser. Valid parameter values are online, which is the default value, and offline.

So we can add the following to `settings.py`, to indicate that we want to receive a refresh token:

```
SOCIAL_AUTH_GOOGLE_OAUTH2_AUTH_EXTRA_ARGUMENTS = {"access_type": "offline"}
```

The tokens returned by Google will be stored in `extra_data`, so the refresh token can be accessed like this:

```py
refresh_token = user.social_auth.get(provider="google-oauth2").extra_data["refresh_token"]
```

---

One possible solution is to store the refresh token alongside the user in a `UserProfile` model, by adding a custom function to the social-auth pipeline:

1. Create the model

```py
# models.py
class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name="profile")
    refresh_token = models.CharField(max_length=255, default="")
```

2. Add a function to store the refresh token

```py
# pipeline.py
from .models import UserProfile

def store_refresh_token(user=None, *args, **kwargs):
    extra_data = user.social_auth.get(provider="google-oauth2").extra_data
    UserProfile.objects.get_or_create(
        user=user,
        defaults={"refresh_token": extra_data["refresh_token"]}
    )
```

3. Add our new function to the social-auth pipeline.

```py
# settings.py
...
SOCIAL_AUTH_PIPELINE = (
    ...
    "myapp.pipeline.store_refresh_token"
)

SOCIAL_AUTH_GOOGLE_OAUTH2_SCOPE = [
    'https://www.googleapis.com/auth/spreadsheets'
    # any other scopes you need
]
...
```

The token is now stored alongside the user and can be used to initialise the sheets client or whatever else you need.
67,519,212
I have written a simple caesar cipher code to take a string and a positional shift argument i.e cipher to encrypt the string. However, I have realized some of the outputs won't decrypt correctly. For example: `python .\caesar_cipher.py 'fortuna' 6771 --encrypt` outputs `☼↑↔▲↨` `python .\caesar_cipher.py '☼↑↔▲↨' 6771 --decrypt` outputs `\`,/UC` ( \ should be ` forgive my markdown skills) I'm fairly certain there is some issue of encoding but I couldn't pinpoint it. Instead of printing and passing it as a command-line argument between two runs, if I were to just encrypt and decrypt in the same run output seems correct. I'm using windows and I tried to run the above example (and a couple of others) both in cmd and PowerShell to test it. Here is my code: ``` import argparse # 127 number of chars in ascii NO_OF_CHARS = 127 def encrypt(s: str) -> str: return ''.join([chr((ord(c)+cipher) % NO_OF_CHARS) for c in s]) def decrypt(s: str) -> str: return ''.join([chr((ord(c)-cipher) % NO_OF_CHARS) for c in s]) parser = argparse.ArgumentParser() group = parser.add_mutually_exclusive_group(required=True) group.add_argument("--encrypt", help="encrypt the string", action="store_true") group.add_argument("--decrypt", help="decrypt the string", action="store_true") parser.add_argument("string", type=str, help="string to encrypt/decrypt") parser.add_argument("cipher", type=int, help="positional shift amount for caesar cipher") args = parser.parse_args() string = args.string encrypt_arg = args.encrypt decrypt_arg = args.decrypt cipher = args.cipher if encrypt_arg: result = encrypt(string) else: result = decrypt(string) print(result) ```
2021/05/13
[ "https://Stackoverflow.com/questions/67519212", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9169087/" ]
I think the problem comes after the encryption, when copying and pasting the value. When I tested this code I found (and you mentioned this too) that directly passing the encrypted value to the decrypt function by storing it in a variable doesn't cause any problem, but pasting it back in on the command line does. To overcome this, write the encrypted text to a file as bytes and read it back from that file. The file name then has to be passed to the CLI along with the **CIPHER**, and it will give you the correct output. This would work:

```
import argparse

# 127 number of chars in ascii
NO_OF_CHARS = 127

def encrypt(s: str) -> str:
    return ''.join([chr((ord(c)+cipher) % NO_OF_CHARS) for c in s])

def decrypt(s: str) -> str:
    return ''.join([chr((ord(c)-cipher) % NO_OF_CHARS) for c in s])

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument("--encrypt", help="encrypt the string", action="store_true")
group.add_argument("--decrypt", help="decrypt the string", action="store_true")
parser.add_argument("string", type=str, help="string to encrypt/decrypt")
parser.add_argument("cipher", type=int, help="positional shift amount for caesar cipher")
args = parser.parse_args()

string = args.string
encrypt_arg = args.encrypt
decrypt_arg = args.decrypt
cipher = args.cipher

if encrypt_arg:
    result = encrypt(string)
    with open('encrypt', 'wb') as f:
        # store the ciphertext as raw bytes so the terminal never mangles it
        f.write(result.encode(encoding='utf-8'))
    print("Encrypted File created with name encrypt")
else:
    with open(string, 'rb') as r:
        # read the whole file back, not just the first line, in case the
        # ciphertext happens to contain newline bytes
        text = r.read()
    print(decrypt(text.decode('utf-8')))
```

To test:

$ python caesar\_cipher.py 'fortuna' 6771 --encrypt

```
Encrypted File created with name encrypt
```

$ python caesar\_cipher.py 'encrypt' 6771 --decrypt

```
fortuna
```
As @KnowledgeGainer mentioned, there is no problem with your code. The issue arises because you copied the output of your encryption from the terminal, and used that as your input for decryption. The terminal you're using is trying its best to interpret some potential non-printable control characters - `fortuna` has seven characters, but `☼↑↔▲↨` appears to be only five - but these are obviously unicode characters. In a caesar-cipher, your plaintext and encrypted message should be the same length, so it's clear that the terminal is mapping one or more "output bytes" to unicode characters. In what way it's doing this is not immediately obvious to me.
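One way to make the ciphertext survive the terminal round trip is to never print the raw control characters at all: encode the result in something terminal-safe such as base64, and decode it again before decrypting. A minimal sketch reusing the question's `encrypt`/`decrypt` functions and its `string` variable:

```python
import base64

# encrypt: print a terminal-safe representation instead of raw control characters
encoded = base64.b64encode(encrypt(string).encode("latin-1")).decode("ascii")
print(encoded)

# decrypt: undo the base64 layer first, then shift back
raw = base64.b64decode(string).decode("latin-1")
print(decrypt(raw))
```

Since every shifted character stays in the range 0 to 126, the `latin-1` round trip is lossless.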
67,519,212
I have written a simple caesar cipher code to take a string and a positional shift argument i.e cipher to encrypt the string. However, I have realized some of the outputs won't decrypt correctly. For example: `python .\caesar_cipher.py 'fortuna' 6771 --encrypt` outputs `☼↑↔▲↨` `python .\caesar_cipher.py '☼↑↔▲↨' 6771 --decrypt` outputs `\`,/UC` ( \ should be ` forgive my markdown skills) I'm fairly certain there is some issue of encoding but I couldn't pinpoint it. Instead of printing and passing it as a command-line argument between two runs, if I were to just encrypt and decrypt in the same run output seems correct. I'm using windows and I tried to run the above example (and a couple of others) both in cmd and PowerShell to test it. Here is my code: ``` import argparse # 127 number of chars in ascii NO_OF_CHARS = 127 def encrypt(s: str) -> str: return ''.join([chr((ord(c)+cipher) % NO_OF_CHARS) for c in s]) def decrypt(s: str) -> str: return ''.join([chr((ord(c)-cipher) % NO_OF_CHARS) for c in s]) parser = argparse.ArgumentParser() group = parser.add_mutually_exclusive_group(required=True) group.add_argument("--encrypt", help="encrypt the string", action="store_true") group.add_argument("--decrypt", help="decrypt the string", action="store_true") parser.add_argument("string", type=str, help="string to encrypt/decrypt") parser.add_argument("cipher", type=int, help="positional shift amount for caesar cipher") args = parser.parse_args() string = args.string encrypt_arg = args.encrypt decrypt_arg = args.decrypt cipher = args.cipher if encrypt_arg: result = encrypt(string) else: result = decrypt(string) print(result) ```
2021/05/13
[ "https://Stackoverflow.com/questions/67519212", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9169087/" ]
Add `print(' '.join([str(ord(c)) for c in result]))` along with `print(result)`. Then you see that `result` contains unprintable characters under 32 (0x20): `15,24,27,29,30,23,10`. Here's a possible technique for staying in the *printable* range of *ASCII*:

```
NO_OF_NOPRNT = 32                    # number of unprintable chars in ascii
NO_OF_CHARS = 128 - NO_OF_NOPRNT     # number of printable chars in ascii

def encrypt(s: str) -> str:
    return ''.join([chr(((ord(c)-NO_OF_NOPRNT+cipher) % NO_OF_CHARS)+NO_OF_NOPRNT) for c in s])

def decrypt(s: str) -> str:
    return ''.join([chr(((ord(c)-NO_OF_NOPRNT-cipher) % NO_OF_CHARS)+NO_OF_NOPRNT) for c in s])
```

With the above improvement to your script:

```
.\SO\67519212.py "fortunaX" 6771 --encrypt
```

> ```
> 9BEGHA4+
> ```

```
.\SO\67519212.py "9BEGHA4+" 6771 --decrypt
```

> ```
> fortunaX
> ```
As @KnowledgeGainer mentioned, there is no problem with your code. The issue arises because you copied the output of your encryption from the terminal, and used that as your input for decryption. The terminal you're using is trying its best to interpret some potential non-printable control characters - `fortuna` has seven characters, but `☼↑↔▲↨` appears to be only five - but these are obviously unicode characters. In a caesar-cipher, your plaintext and encrypted message should be the same length, so it's clear that the terminal is mapping one or more "output bytes" to unicode characters. In what way it's doing this is not immediately obvious to me.
34,284,737
This is a part of my code for a hangman game. it is used for all four difficulties, but when it is used on my "insane" difficulty (which uses words from a word file) it adds an extra symbol to the end of the word meaning you can't win the game. it does this for every word in the .txt file. This code works when using an array in the python window. ``` def insane(): global score print ("This words may contain an apostrophe. \nStart guessing...") time.sleep(0.5) word = random.choice(words).lower() print (word) guesses = '' fails = 0 while fails >= 0 and fails < 10: #try to fix this failed = 0 for char in word: if char in guesses: print (char,) else: print ("_"), failed += 1 if failed == 0: print ("\nYou won, WELL DONE!") score = score + 1 print ("your score is,", score) difficultyINSANE() guess = input("\nGuess a letter:").lower() guesses += guess if guess not in word: fails += 1 print ("\nWrong") if fails == 1: print ("You have", + fails, "fail....WATCH OUT!" ) elif fails >= 2 and fails < 10: print ("You have", + fails, "fails....WATCH OUT!" ) if fails == 10: print ("You Loose\n") print ("your score is, ", score) print ("the word was,", word) score = 0 difficultyINSANE() ``` **Edit:** this is how i read the words ``` INSANEWORDS = open("create.txt","r+") words = [] for item in INSANEWORDS: words.append(item) ```
2015/12/15
[ "https://Stackoverflow.com/questions/34284737", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5653652/" ]
You have a `\n` at the end of every word. You should strip the word of the `\n` before adding it: ``` INSANEWORDS = open("create.txt", "r+") words = [] for item in INSANEWORDS: words.append(item.strip('\n')) ``` **Before:** [![enter image description here](https://i.stack.imgur.com/d6mOJ.png)](https://i.stack.imgur.com/d6mOJ.png) **After:** [![enter image description here](https://i.stack.imgur.com/WarCk.png)](https://i.stack.imgur.com/WarCk.png)
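Equivalently, the whole file can be read with a context manager so it is closed automatically, stripping each line as it is read (`strip()` with no argument also drops any stray spaces):

```python
with open("create.txt") as insane_words:
    words = [line.strip() for line in insane_words]
```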
If my guess is correct, when you are reading a line from your text file you are also reading the newline character **\n** at the end of the word, which you can remove using:

```
word = word.strip('\n')
```
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
This ``` """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """ ``` is just [multiline string](http://docs.python.org/tutorial/introduction.html#strings). ``` """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 """.split() ``` [produces split](http://docs.python.org/library/stdtypes.html#str.split) with spaces of the abovementioned string: ``` ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00'] ``` And this: ``` [chr(int(a, 16)) for a in ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']] ``` is a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) which goes through the formed list and converts all the values applying `chr(int(a,16))` to each `a`. [`int(a,16)`](http://docs.python.org/library/functions.html#int) converts string containing string representation of hexadecimal into `int`. [`chr`](http://docs.python.org/library/functions.html#chr) converts this integer into char. The result is: ``` >>> [chr(int(a, 16)) for a in ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']] ['\x00', '\x00', '\x00', '\x90', '\xff', 'S', 'M', 'B', 'r', '\x00', '\x00', '\x00', '\x00', '\x18', 'S', '\xc8', '\x00', '\x00', '\x00', '\x00'] ```
The

```
"""
content
"""
```

format is a simple way to define multiline string literals in Python. This is **not** a comment block. The `[chr(int(a, 16)) for a in "00 00 00...".split()]` is a list comprehension. The large string is split into a list on the spaces, and for each item in that list, the hexadecimal string is parsed into an integer (`int(a, 16)` means turn string `a` into an int, reading `a` as base 16) and the ascii char represented by that integer is produced (`chr(...)`). `packet[:]` returns a [shallow copy](http://docs.python.org/tutorial/introduction.html#lists) of the list `packet`. `choice(range(len(packet)))` returns a random index within the length of `packet`. `chr(choice(range(256)))` picks a random number in the range 0 to 255 and interprets it as an ascii char, and then the final statement puts that ascii char at the randomly selected index, replacing what was there.
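To see the same idea in one self-contained snippet, here is the parse-and-mutate step written for modern Python 3, where `bytes.fromhex` replaces the comprehension and a `bytearray` makes the single-byte substitution explicit (the hex dump is shortened here; the real packet is much longer):

```python
import random

hex_dump = """
00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8
"""

# join/split strips the whitespace so fromhex sees only hex digits
packet = bytearray(bytes.fromhex("".join(hex_dump.split())))

where = random.randrange(len(packet))   # pick a random position in the packet
packet[where] = random.randrange(256)   # overwrite it with a random byte value
print(packet)
```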
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
The

```
"""
content
"""
```

format is a simple way to define multiline string literals in Python. This is **not** a comment block. The `[chr(int(a, 16)) for a in "00 00 00...".split()]` is a list comprehension. The large string is split into a list on the spaces, and for each item in that list, the hexadecimal string is parsed into an integer (`int(a, 16)` means turn string `a` into an int, reading `a` as base 16) and the ascii char represented by that integer is produced (`chr(...)`). `packet[:]` returns a [shallow copy](http://docs.python.org/tutorial/introduction.html#lists) of the list `packet`. `choice(range(len(packet)))` returns a random index within the length of `packet`. `chr(choice(range(256)))` picks a random number in the range 0 to 255 and interprets it as an ascii char, and then the final statement puts that ascii char at the randomly selected index, replacing what was there.
You're running into a couple different concepts here. Just slowly work backwards and you'll figure it out. The """00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00""" stuff is just a big string. The .split on it breaks it into an array on the spaces, so at that point you have something like ['00', '00', '00', '90' ....] The rest of that line is a list comprehension -- it's a fancy way of doing this:

```
new_list = []
for a in that_list_we_split_above:
    new_list.append( chr( int(a, 16) ) )
```

The int function is converting each string to an int in base 16 - <http://docs.python.org/library/functions.html#int>

The chr function is then getting the ascii character for that number, so at the end of all that nonsense you have a list 'packet'.

The line defining `where` takes the length of that list, creates a new list with every number from 0 up to (but not including) the length (ie, every possible index of it), and randomly selects one of them.

The line for `which` picks a random int between 0 and 255 and gets the ascii character for it.

The last line replaces the item at the 'where' index of the copied list with the random ascii character stored in `which`.

tl;dr: go find different code to learn on - this is both confusing and uninspired
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
The

```
"""
content
"""
```

format is a simple way to define multiline string literals in Python. This is **not** a comment block. The `[chr(int(a, 16)) for a in "00 00 00...".split()]` is a list comprehension. The large string is split into a list on the spaces, and for each item in that list, the hexadecimal string is parsed into an integer (`int(a, 16)` means turn string `a` into an int, reading `a` as base 16) and the ascii char represented by that integer is produced (`chr(...)`). `packet[:]` returns a [shallow copy](http://docs.python.org/tutorial/introduction.html#lists) of the list `packet`. `choice(range(len(packet)))` returns a random index within the length of `packet`. `chr(choice(range(256)))` picks a random number in the range 0 to 255 and interprets it as an ascii char, and then the final statement puts that ascii char at the randomly selected index, replacing what was there.
The code sample in question seems to substitute a randomly chosen byte in the original packet for another random byte (which, I believe, is one of the ideas behind fuzzing).

```
packet = [chr(int(a, 16)) for a in """
00 00 00 90
....
""".split()]
```

This is "split the string on whitespace, read the substrings as characters decoded from integers in hex (the second argument to `int` is the base)".

```
what = packet[:]
```

Python idiom for "copy the `packet` array into `what`".

```
where = choice(range(len(packet)))
```

Choose a random index in the packet.

```
which = chr(choice(range(256)))
```

Make a random character.

```
what[where] = which
```

Substitute it at the previously chosen index.
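A tiny illustration of why the `[:]` copy matters: mutating the copy leaves the original packet intact, so each fuzzing round can start again from the clean template (the three-element list here is just a stand-in for the real packet):

```python
packet = ['a', 'b', 'c']

alias = packet      # same list object, no copy
copy = packet[:]    # new list holding the same elements

copy[0] = 'X'
alias[1] = 'Y'

print(packet)  # ['a', 'Y', 'c'] -- the change through the alias shows up, the copy's does not
```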
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
Let's break it down, and simplify it for readability: ``` bytes = """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """ packet = [chr(int(a, 16)) for a in bytes.split()] ``` `bytes` is a string, the `"""` is usually used for Python docstrings, but you can use them in code to create very long strings (but they kind of suck because you will end up with extra spaces in your code. `bytes.split()` will split on white space, and return a list of the individual parts of the string that were space-separated. ``` print bytes.split() ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00' ... ] # and more ``` So then this: ``` packet = [chr(int(a, 16)) for a in bytes.split()] ``` This is a list comprehension: * split `bytes` and get that list as above * for each element in the list (`a` here), perform `int(a,16)` on it, which will get its integer value by doing base-16 to decimal conversion (i.e. `FF` would be `255`). * Then do `chr` on that value, which will give you back the ASCII value of that byte. So `packet` will be a list of the bytes in ASCII form. ``` print packet ['\x00', '\x00', '\x00', '\x90', '\xff', 'S', 'M', 'B', 'r', '\x00', '\x00', '\x00', '\x00', '\x18', 'S', '\xc8', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\xff', '\xff', '\xff', '\xfe', '\x00', '\x00', '\x00', '\x00', '\x00', 'm', '\x00', '\x02', 'P', 'C', ' ', 'N', 'E', 'T', 'W', 'O', 'R', 'K', ' ', 'P', 'R', 'O', 'G', 'R', 'A', 'M', ' ', '1', '.', '0', '\x00', '\x02', 'L', 'A', 'N', 'M', 'A', 'N', '1', '.', '0', '\x00', '\x02', 'W', 'i', 'n', 'd', 'o', 'w', 's', ' ', 'f', 'o', 'r', ' ', 'W', 'o', 'r', 'k', 'g', 'r', 'o', ... more ] ```
The

```
"""
content
"""
```

format is a simple way to define multiline string literals in Python. This is **not** a comment block. The `[chr(int(a, 16)) for a in "00 00 00...".split()]` is a list comprehension. The large string is split into a list on the spaces, and for each item in that list, the hexadecimal string is parsed into an integer (`int(a, 16)` means turn string `a` into an int, reading `a` as base 16) and the ascii char represented by that integer is produced (`chr(...)`). `packet[:]` returns a [shallow copy](http://docs.python.org/tutorial/introduction.html#lists) of the list `packet`. `choice(range(len(packet)))` returns a random index within the length of `packet`. `chr(choice(range(256)))` picks a random number in the range 0 to 255 and interprets it as an ascii char, and then the final statement puts that ascii char at the randomly selected index, replacing what was there.
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
This ``` """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """ ``` is just [multiline string](http://docs.python.org/tutorial/introduction.html#strings). ``` """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 """.split() ``` [produces split](http://docs.python.org/library/stdtypes.html#str.split) with spaces of the abovementioned string: ``` ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00'] ``` And this: ``` [chr(int(a, 16)) for a in ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']] ``` is a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) which goes through the formed list and converts all the values applying `chr(int(a,16))` to each `a`. [`int(a,16)`](http://docs.python.org/library/functions.html#int) converts string containing string representation of hexadecimal into `int`. [`chr`](http://docs.python.org/library/functions.html#chr) converts this integer into char. The result is: ``` >>> [chr(int(a, 16)) for a in ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']] ['\x00', '\x00', '\x00', '\x90', '\xff', 'S', 'M', 'B', 'r', '\x00', '\x00', '\x00', '\x00', '\x18', 'S', '\xc8', '\x00', '\x00', '\x00', '\x00'] ```
You're running into a couple different concepts here. Just slowly work backwards and you'll figure it out. The """00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00""" stuff is just a big string. The .split on it breaks it into an array on the spaces, so at that point you have something like ['00', '00', '00', '90' ....] The rest of that line is a list comprehension -- it's a fancy way of doing this:

```
new_list = []
for a in that_list_we_split_above:
    new_list.append( chr( int(a, 16) ) )
```

The int function is converting each string to an int in base 16 - <http://docs.python.org/library/functions.html#int>

The chr function is then getting the ascii character for that number, so at the end of all that nonsense you have a list 'packet'.

The line defining `where` takes the length of that list, creates a new list with every number from 0 up to (but not including) the length (ie, every possible index of it), and randomly selects one of them.

The line for `which` picks a random int between 0 and 255 and gets the ascii character for it.

The last line replaces the item at the 'where' index of the copied list with the random ascii character stored in `which`.

tl;dr: go find different code to learn on - this is both confusing and uninspired
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
This ``` """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """ ``` is just [multiline string](http://docs.python.org/tutorial/introduction.html#strings). ``` """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 """.split() ``` [produces split](http://docs.python.org/library/stdtypes.html#str.split) with spaces of the abovementioned string: ``` ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00'] ``` And this: ``` [chr(int(a, 16)) for a in ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']] ``` is a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) which goes through the formed list and converts all the values applying `chr(int(a,16))` to each `a`. [`int(a,16)`](http://docs.python.org/library/functions.html#int) converts string containing string representation of hexadecimal into `int`. [`chr`](http://docs.python.org/library/functions.html#chr) converts this integer into char. The result is: ``` >>> [chr(int(a, 16)) for a in ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00', '00', '00']] ['\x00', '\x00', '\x00', '\x90', '\xff', 'S', 'M', 'B', 'r', '\x00', '\x00', '\x00', '\x00', '\x18', 'S', '\xc8', '\x00', '\x00', '\x00', '\x00'] ```
The code sample in question seems to substitute a randomly chosen byte in the original packet for another random byte (which, I believe, is one of the ideas behind fuzzing).

```
packet = [chr(int(a, 16)) for a in """
00 00 00 90
....
""".split()]
```

This is "split the string on whitespace, read the substrings as characters decoded from integers in hex (the second argument to `int` is the base)".

```
what = packet[:]
```

Python idiom for "copy the `packet` array into `what`".

```
where = choice(range(len(packet)))
```

Choose a random index in the packet.

```
which = chr(choice(range(256)))
```

Make a random character.

```
what[where] = which
```

Substitute it at the previously chosen index.
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
You're running into a couple different concepts here. Just slowly work backwards and you'll figure it out. The """00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00""" stuff is just a big string. The .split on it breaks it into an array on the spaces, so at that point you have something like ['00', '00', '00', '90' ....] The rest of that line is a list comprehension -- it's a fancy way of doing this:

```
new_list = []
for a in that_list_we_split_above:
    new_list.append( chr( int(a, 16) ) )
```

The int function is converting each string to an int in base 16 - <http://docs.python.org/library/functions.html#int>

The chr function is then getting the ascii character for that number, so at the end of all that nonsense you have a list 'packet'.

The line defining `where` takes the length of that list, creates a new list with every number from 0 up to (but not including) the length (ie, every possible index of it), and randomly selects one of them.

The line for `which` picks a random int between 0 and 255 and gets the ascii character for it.

The last line replaces the item at the 'where' index of the copied list with the random ascii character stored in `which`.

tl;dr: go find different code to learn on - this is both confusing and uninspired
The code sample in question seems to substitute a randomly chosen byte in the original packet for another random byte (which, I believe, is one of the ideas behind fuzzing).

```
packet = [chr(int(a, 16)) for a in """
00 00 00 90
....
""".split()]
```

This is "split the string on whitespace, read the substrings as characters decoded from integers in hex (the second argument to `int` is the base)".

```
what = packet[:]
```

Python idiom for "copy the `packet` array into `what`".

```
where = choice(range(len(packet)))
```

Choose a random index in the packet.

```
which = chr(choice(range(256)))
```

Make a random character.

```
what[where] = which
```

Substitute it at the previously chosen index.
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
Let's break it down, and simplify it for readability: ``` bytes = """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """ packet = [chr(int(a, 16)) for a in bytes.split()] ``` `bytes` is a string, the `"""` is usually used for Python docstrings, but you can use them in code to create very long strings (but they kind of suck because you will end up with extra spaces in your code. `bytes.split()` will split on white space, and return a list of the individual parts of the string that were space-separated. ``` print bytes.split() ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00' ... ] # and more ``` So then this: ``` packet = [chr(int(a, 16)) for a in bytes.split()] ``` This is a list comprehension: * split `bytes` and get that list as above * for each element in the list (`a` here), perform `int(a,16)` on it, which will get its integer value by doing base-16 to decimal conversion (i.e. `FF` would be `255`). * Then do `chr` on that value, which will give you back the ASCII value of that byte. So `packet` will be a list of the bytes in ASCII form. ``` print packet ['\x00', '\x00', '\x00', '\x90', '\xff', 'S', 'M', 'B', 'r', '\x00', '\x00', '\x00', '\x00', '\x18', 'S', '\xc8', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\xff', '\xff', '\xff', '\xfe', '\x00', '\x00', '\x00', '\x00', '\x00', 'm', '\x00', '\x02', 'P', 'C', ' ', 'N', 'E', 'T', 'W', 'O', 'R', 'K', ' ', 'P', 'R', 'O', 'G', 'R', 'A', 'M', ' ', '1', '.', '0', '\x00', '\x02', 'L', 'A', 'N', 'M', 'A', 'N', '1', '.', '0', '\x00', '\x02', 'W', 'i', 'n', 'd', 'o', 'w', 's', ' ', 'f', 'o', 'r', ' ', 'W', 'o', 'r', 'k', 'g', 'r', 'o', ... more ] ```
You're running into a couple different concepts here. Just slowly work backwards and you'll figure it out. The """00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00""" stuff is just a big string. The .split on it breaks it into an array on the spaces, so at that point you have something like ['00', '00', '00', '90' ....] The rest of that line is a list comprehension -- it's a fancy way of doing this:

```
new_list = []
for a in that_list_we_split_above:
    new_list.append( chr( int(a, 16) ) )
```

The int function is converting each string to an int in base 16 - <http://docs.python.org/library/functions.html#int>

The chr function is then getting the ascii character for that number, so at the end of all that nonsense you have a list 'packet'.

The line defining `where` takes the length of that list, creates a new list with every number from 0 up to (but not including) the length (ie, every possible index of it), and randomly selects one of them.

The line for `which` picks a random int between 0 and 255 and gets the ascii character for it.

The last line replaces the item at the 'where' index of the copied list with the random ascii character stored in `which`.

tl;dr: go find different code to learn on - this is both confusing and uninspired
10,589,933
I've been trying to learn python recently, and ran across something that I'm having a hard time understanding exactly how it works. Primarily, it is the design of a list. The list in question is from this security article talking about a simple fuzzing tool: <http://blog.securestate.com/post/2009/10/06/How-a-simple-python-fuzzer-brought-down-SMBv2-in-2-seconds.aspx> The actual list in question is: ``` #Negotiate Protocol Request packet = [chr(int(a, 16)) for a in """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """.split()] ``` He pulls a single byte (I think?) from it using the following lines: ``` what = packet[:] where = choice(range(len(packet))) which = chr(choice(range(256))) what[where] = which ``` I have never seen a list designed this way, and can't seem to follow how it is selecting whatever it does. What is confusing me most is the `packet = [chr(int(a, 16)) for a in """`, where he houses all of that stuff in what appears to be a comment block... then does `.split()`. 0\_o I know this is a vague question, but if anyone could either explain this to me or point me in the direction of some documentation that explains that style of list building I'd be exceptionally happy. This looks like a very efficient way to store/pull out a large number of bytes.
2012/05/14
[ "https://Stackoverflow.com/questions/10589933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1298775/" ]
Let's break it down, and simplify it for readability: ``` bytes = """ 00 00 00 90 ff 53 4d 42 72 00 00 00 00 18 53 c8 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff fe 00 00 00 00 00 6d 00 02 50 43 20 4e 45 54 57 4f 52 4b 20 50 52 4f 47 52 41 4d 20 31 2e 30 00 02 4c 41 4e 4d 41 4e 31 2e 30 00 02 57 69 6e 64 6f 77 73 20 66 6f 72 20 57 6f 72 6b 67 72 6f 75 70 73 20 33 2e 31 61 00 02 4c 4d 31 2e 32 58 30 30 32 00 02 4c 41 4e 4d 41 4e 32 2e 31 00 02 4e 54 20 4c 4d 20 30 2e 31 32 00 02 53 4d 42 20 32 2e 30 30 32 00 """ packet = [chr(int(a, 16)) for a in bytes.split()] ``` `bytes` is a string, the `"""` is usually used for Python docstrings, but you can use them in code to create very long strings (but they kind of suck because you will end up with extra spaces in your code. `bytes.split()` will split on white space, and return a list of the individual parts of the string that were space-separated. ``` print bytes.split() ['00', '00', '00', '90', 'ff', '53', '4d', '42', '72', '00', '00', '00', '00', '18', '53', 'c8', '00', '00' ... ] # and more ``` So then this: ``` packet = [chr(int(a, 16)) for a in bytes.split()] ``` This is a list comprehension: * split `bytes` and get that list as above * for each element in the list (`a` here), perform `int(a,16)` on it, which will get its integer value by doing base-16 to decimal conversion (i.e. `FF` would be `255`). * Then do `chr` on that value, which will give you back the ASCII value of that byte. So `packet` will be a list of the bytes in ASCII form. ``` print packet ['\x00', '\x00', '\x00', '\x90', '\xff', 'S', 'M', 'B', 'r', '\x00', '\x00', '\x00', '\x00', '\x18', 'S', '\xc8', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\x00', '\xff', '\xff', '\xff', '\xfe', '\x00', '\x00', '\x00', '\x00', '\x00', 'm', '\x00', '\x02', 'P', 'C', ' ', 'N', 'E', 'T', 'W', 'O', 'R', 'K', ' ', 'P', 'R', 'O', 'G', 'R', 'A', 'M', ' ', '1', '.', '0', '\x00', '\x02', 'L', 'A', 'N', 'M', 'A', 'N', '1', '.', '0', '\x00', '\x02', 'W', 'i', 'n', 'd', 'o', 'w', 's', ' ', 'f', 'o', 'r', ' ', 'W', 'o', 'r', 'k', 'g', 'r', 'o', ... more ] ```
The code sample in question seems to substitute a randomly chosen byte in the original packet for another random byte (which, I believe, is one of the ideas behind fuzzing).

```
packet = [chr(int(a, 16)) for a in """
00 00 00 90 .... """.split()]
```

This means "split the string on whitespace, then read each substring as a character decoded from an integer in hex" (the second argument to int is the base).

```
what = packet[:]
```

Python idiom for "copy the `packet` array into `what`".

```
where = choice(range(len(packet)))
```

Choose a random index in the packet.

```
which = chr(choice(range(256)))
```

Make a random character.

```
what[where] = which
```

Substitute it at the previously chosen index.
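Putting those pieces together, a minimal self-contained sketch of the byte-substitution idea described above; the packet contents are a placeholder, and the code just prints the mutated copy instead of sending it anywhere:

```
from random import choice

# Placeholder packet: a list of one-character strings, as in the article.
packet = [chr(int(a, 16)) for a in "00 00 00 90 ff 53 4d 42".split()]

what = packet[:]                    # copy the original packet
where = choice(range(len(packet)))  # pick a random position
which = chr(choice(range(256)))     # pick a random replacement byte
what[where] = which                 # substitute it

print(where, repr(which), repr("".join(what)))
```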
36,461,925
I am not even sure how to word my question due to me being quite new to python. The basic concept of what I want to accomplish is to be able to search for something in a 2D array and retrieve the right value as well as the values associated with that value (sorry for my bad explanation) e.g. `array=[[1,a,b],[2,x,d],[3,c,f]]` if the user wants to find `2`, I want the program to retrieve `[2,x,d]` and if possible, put that into a normal (1D) array. Likewise, if the user searches for `3`, the program should retrieve `[3,c,f]`. Thank you in advance (and if possible I want a solution that does not involve numpy)
2016/04/06
[ "https://Stackoverflow.com/questions/36461925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6168984/" ]
I am not sure if I understood the question correctly, but from what I got, I think you can create a model instance with initial placeholder values and let your component initialise with this model. Then, when your data is ready, change the model instance's values, and the change will be reflected in your component. This way your component doesn't need to wait: it just uses placeholder data, which you can of course test for inside the component so you can display your template accordingly, and when the data becomes ready in the parent, updating it will update the child. I hope this helped.
What version of Angular are you with? Not sure if you're copy-pasting the redacted code, but it seems as if you're missing the `implements` keyword there in your Class. `*ngIf` works good in this [plunker](https://plnkr.co/edit/jXsRvHZ33A1KrRxROGAK?p=preview). From what I gather, something like \*ngIf is the proper way to do things in Ng2. Basically, only show the component if the conditions are good. You might be running into a snag because your component gets *instantiated* before you expect it - because you require it in your parent component. That might be because your component itself (or the template) expects some values, but they're not there (so your `constructor` breaks down). According to [Lifecycle Hooks](https://angular.io/docs/ts/latest/guide/lifecycle-hooks.html) page on [angular.io](https://angular.io), that's exactly what OnInit interface is for. --- Here's the code from the plunker directly (yours would be the SubComponent): ``` import {Component, OnInit} from 'angular2/core' @Component({ selector: 'sub-component', template: '<p>Subcomponent is alive!</p>' }) class SubComponent {} @Component({ selector: 'my-app', providers: [], template: ` <div> <h2>Hello {{name}}</h2> <div *ngIf="initialized"> Initialized <sub-component>Sub</sub-component> </div> <div *ngIf="!initialized">Not initialized</div> </div> `, directives: [SubComponent] }) export class App implements OnInit { initialized = false; constructor() { this.name = 'Angular2' } ngOnInit() { setTimeout(() => { this.initialized = true; }, 2000) } } ```
36,461,925
I am not even sure how to word my question due to me being quite new to python. The basic concept of what I want to accomplish is to be able to search for something in a 2D array and retrieve the right value as well as the values associated with that value (sorry for my bad explanation) e.g. `array=[[1,a,b],[2,x,d],[3,c,f]]` if the user wants to find `2`, I want the program to retrieve `[2,x,d]` and if possible, put that into a normal (1D) array. Likewise, if the user searches for `3`, the program should retrieve `[3,c,f]`. Thank you in advance (and if possible I want a solution that does not involve numpy)
2016/04/06
[ "https://Stackoverflow.com/questions/36461925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6168984/" ]
I am not sure if I understood the question correctly, but from what I got, I think you can create a model instance with initial placeholder values and let your component initialise with this model. Then, when your data is ready, change the model instance's values, and the change will be reflected in your component. This way your component doesn't need to wait: it just uses placeholder data, which you can of course test for inside the component so you can display your template accordingly, and when the data becomes ready in the parent, updating it will update the child. I hope this helped.
What I usually do is create an `EventEmitter` in my data service, and then allow each component to listen for the `dataLoaded` event before doing anything. It may not be the most efficient and "textbook" way to go about this problem, but works well. For example, in `app.component.ts` (my most parent component), I load data in the `ngOnInit` hook. First, let's look at our data service: data.service.ts ``` @Injectable() export class DataService { dataLoaded = new EventEmitter<any>(); prop1: string; prop2: string; constructor(private http: HttpClient) {} // Asynchronously returns some initialization data loadAllData () { return this.http.get<string>('/api/some-path'); } } ``` app.component.ts ``` export class AppComponent { constructor (private dataService: DataService) { } ngOnInit() { this.dataService.loadAllData().subscribe((data) => { // Maybe you want to set a few props in the data service this.dataService.prop1 = data.prop1; this.dataService.prop2 = data.prop2; // Now, emit the event that the data has been loaded this.dataService.dataLoaded.emit(); }); } } ``` Now that we have the `DataService` loading and emitting a "loaded" event in the main app component, we can subscribe to this event in child components: ``` @Component({ selector: 'child-component', templateUrl: './child.component.html', styleUrls: ['./child.component.css'] }) export class ChildComponent implements OnInit { constructor(private dataService: DataService) { this.dataService.dataLoaded.subscribe(() => { // Once here, we know data has been loaded and we can do things dependent on that data this.methodThatRequiresData(); }); } methodThatRequiresData () { console.log(this.dataService.prop1); } } ```
36,461,925
I am not even sure how to word my question due to me being quite new to python. The basic concept of what I want to accomplish is to be able to search for something in a 2D array and retrieve the right value as well as the values associated with that value (sorry for my bad explanation) e.g. `array=[[1,a,b],[2,x,d],[3,c,f]]` if the user wants to find `2`, I want the program to retrieve `[2,x,d]` and if possible, put that into a normal (1D) array. Likewise, if the user searches for `3`, the program should retrieve `[3,c,f]`. Thank you in advance (and if possible I want a solution that does not involve numpy)
2016/04/06
[ "https://Stackoverflow.com/questions/36461925", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6168984/" ]
What version of Angular are you with? Not sure if you're copy-pasting the redacted code, but it seems as if you're missing the `implements` keyword there in your Class. `*ngIf` works good in this [plunker](https://plnkr.co/edit/jXsRvHZ33A1KrRxROGAK?p=preview). From what I gather, something like \*ngIf is the proper way to do things in Ng2. Basically, only show the component if the conditions are good. You might be running into a snag because your component gets *instantiated* before you expect it - because you require it in your parent component. That might be because your component itself (or the template) expects some values, but they're not there (so your `constructor` breaks down). According to [Lifecycle Hooks](https://angular.io/docs/ts/latest/guide/lifecycle-hooks.html) page on [angular.io](https://angular.io), that's exactly what OnInit interface is for. --- Here's the code from the plunker directly (yours would be the SubComponent): ``` import {Component, OnInit} from 'angular2/core' @Component({ selector: 'sub-component', template: '<p>Subcomponent is alive!</p>' }) class SubComponent {} @Component({ selector: 'my-app', providers: [], template: ` <div> <h2>Hello {{name}}</h2> <div *ngIf="initialized"> Initialized <sub-component>Sub</sub-component> </div> <div *ngIf="!initialized">Not initialized</div> </div> `, directives: [SubComponent] }) export class App implements OnInit { initialized = false; constructor() { this.name = 'Angular2' } ngOnInit() { setTimeout(() => { this.initialized = true; }, 2000) } } ```
What I usually do is create an `EventEmitter` in my data service, and then allow each component to listen for the `dataLoaded` event before doing anything. It may not be the most efficient and "textbook" way to go about this problem, but works well. For example, in `app.component.ts` (my most parent component), I load data in the `ngOnInit` hook. First, let's look at our data service: data.service.ts ``` @Injectable() export class DataService { dataLoaded = new EventEmitter<any>(); prop1: string; prop2: string; constructor(private http: HttpClient) {} // Asynchronously returns some initialization data loadAllData () { return this.http.get<string>('/api/some-path'); } } ``` app.component.ts ``` export class AppComponent { constructor (private dataService: DataService) { } ngOnInit() { this.dataService.loadAllData().subscribe((data) => { // Maybe you want to set a few props in the data service this.dataService.prop1 = data.prop1; this.dataService.prop2 = data.prop2; // Now, emit the event that the data has been loaded this.dataService.dataLoaded.emit(); }); } } ``` Now that we have the `DataService` loading and emitting a "loaded" event in the main app component, we can subscribe to this event in child components: ``` @Component({ selector: 'child-component', templateUrl: './child.component.html', styleUrls: ['./child.component.css'] }) export class ChildComponent implements OnInit { constructor(private dataService: DataService) { this.dataService.dataLoaded.subscribe(() => { // Once here, we know data has been loaded and we can do things dependent on that data this.methodThatRequiresData(); }); } methodThatRequiresData () { console.log(this.dataService.prop1); } } ```
39,816,500
I've recently began work on a Python program as seen in the fragment below. ``` # General Variables running = False new = True timeStart = 0.0 timeElapsed = 0.0 def endProg(): curses.nocbreak() stdscr.keypad(False) curses.echo() curses.endwin() quit() # Draw def draw(): stdscr.addstr(1, 1, ">", curses.color_pair(6)) stdscr.border() if running: stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.ctime( timeStart - timeElapsed ) ) ) stdscr.redrawwin() stdscr.refresh() # Calculate def calc(): if running: timeElapsed = t.clock() - timeStart stdscr.border() stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) ) # Main Loop while True: # Get Input kInput = stdscr.getch() # Close the program if kInput == ord('q'): endProg() # Stop the current run elif kInput == ord('s'): stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) ) running = False new = True # Start a run elif kInput == ord(' ') and new: running = not running new = not new timeStart = dt.datetime.now() # Toggle the timer elif kInput == ord('p') and not new: timeStart = dt.datetime.now() - timeStart running = not running calc() draw() ``` **My program is a bit between solutions currently**, sorry if something doesn't look right. I'll be more than happy to explain. I've spent the last several hours reading online about the time and datetime modules for python, trying to figure out how I can use them to accomplish my goals, but however I've tried to implement them it's been no use. Essentially, I need my program to measure the elapsed time from when a button is pressed and be able to display it in a hour:minute.second format. The subtraction has made it very difficult, having to implement things such as timedelta. From what I have read online there is no way to do what I'm wanting without the datetime module, but it's given me nothing but problems. Is there an easier solution, does my code have any outstanding errors, and how stupid am I?
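For the specific formatting problem described here — turning an elapsed `datetime.timedelta` into an hours:minutes.seconds string — a minimal sketch, independent of the curses code above, might look like the following; the `time.sleep` call is only there as a stand-in for "the timer has been running for a while":

```
import datetime as dt
import time

start = dt.datetime.now()
time.sleep(1.5)                      # stand-in for elapsed running time

elapsed = dt.datetime.now() - start  # a timedelta
total = int(elapsed.total_seconds())
hours, rem = divmod(total, 3600)
minutes, seconds = divmod(rem, 60)
print("%02d:%02d.%02d" % (hours, minutes, seconds))  # e.g. 00:00.01
```

The point is that a `timedelta` is easiest to format by converting it to total seconds and splitting with `divmod`, rather than feeding subtraction results into `time.strftime`.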
2016/10/02
[ "https://Stackoverflow.com/questions/39816500", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6911375/" ]
Install mocha and its types: ```sh npm install mocha --save-dev npm install @types/mocha --save-dev ``` Then, simply import mocha in your test files: ```js import 'mocha'; describe('my test', () => { it('does something', () => { // your test }); }); ```
Since TypeScript 2.0, you can add `mocha` to the `types` configuration of your `tsconfig.json` and it will always be loaded: ``` { "compilerOptions": { "types": [ "mocha" ] } } ```
39,816,500
I've recently began work on a Python program as seen in the fragment below. ``` # General Variables running = False new = True timeStart = 0.0 timeElapsed = 0.0 def endProg(): curses.nocbreak() stdscr.keypad(False) curses.echo() curses.endwin() quit() # Draw def draw(): stdscr.addstr(1, 1, ">", curses.color_pair(6)) stdscr.border() if running: stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.ctime( timeStart - timeElapsed ) ) ) stdscr.redrawwin() stdscr.refresh() # Calculate def calc(): if running: timeElapsed = t.clock() - timeStart stdscr.border() stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) ) # Main Loop while True: # Get Input kInput = stdscr.getch() # Close the program if kInput == ord('q'): endProg() # Stop the current run elif kInput == ord('s'): stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) ) running = False new = True # Start a run elif kInput == ord(' ') and new: running = not running new = not new timeStart = dt.datetime.now() # Toggle the timer elif kInput == ord('p') and not new: timeStart = dt.datetime.now() - timeStart running = not running calc() draw() ``` **My program is a bit between solutions currently**, sorry if something doesn't look right. I'll be more than happy to explain. I've spent the last several hours reading online about the time and datetime modules for python, trying to figure out how I can use them to accomplish my goals, but however I've tried to implement them it's been no use. Essentially, I need my program to measure the elapsed time from when a button is pressed and be able to display it in a hour:minute.second format. The subtraction has made it very difficult, having to implement things such as timedelta. From what I have read online there is no way to do what I'm wanting without the datetime module, but it's given me nothing but problems. Is there an easier solution, does my code have any outstanding errors, and how stupid am I?
2016/10/02
[ "https://Stackoverflow.com/questions/39816500", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6911375/" ]
Since TypeScript 2.0, you can add `mocha` to the `types` configuration of your `tsconfig.json` and it will always be loaded: ``` { "compilerOptions": { "types": [ "mocha" ] } } ```
I was having issues with errors and warnings, the problem stemmed from me renaming `tsconfig.json` to something else which makes Visual Studio Code enter "File Scope" instead of "Explicit Project". That made it impossible to import `it` without a red squiggly. Now that I've renamed the config back to `tsconfig.json` then `import 'mocha';` works as Eryk mentioned. <https://code.visualstudio.com/Docs/languages/typescript>
39,816,500
I've recently began work on a Python program as seen in the fragment below. ``` # General Variables running = False new = True timeStart = 0.0 timeElapsed = 0.0 def endProg(): curses.nocbreak() stdscr.keypad(False) curses.echo() curses.endwin() quit() # Draw def draw(): stdscr.addstr(1, 1, ">", curses.color_pair(6)) stdscr.border() if running: stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.ctime( timeStart - timeElapsed ) ) ) stdscr.redrawwin() stdscr.refresh() # Calculate def calc(): if running: timeElapsed = t.clock() - timeStart stdscr.border() stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) ) # Main Loop while True: # Get Input kInput = stdscr.getch() # Close the program if kInput == ord('q'): endProg() # Stop the current run elif kInput == ord('s'): stdscr.addstr(1, 3, t.strftime( "%H:%M.%S", t.gmtime( t.clock() - t.clock() ) ) ) running = False new = True # Start a run elif kInput == ord(' ') and new: running = not running new = not new timeStart = dt.datetime.now() # Toggle the timer elif kInput == ord('p') and not new: timeStart = dt.datetime.now() - timeStart running = not running calc() draw() ``` **My program is a bit between solutions currently**, sorry if something doesn't look right. I'll be more than happy to explain. I've spent the last several hours reading online about the time and datetime modules for python, trying to figure out how I can use them to accomplish my goals, but however I've tried to implement them it's been no use. Essentially, I need my program to measure the elapsed time from when a button is pressed and be able to display it in a hour:minute.second format. The subtraction has made it very difficult, having to implement things such as timedelta. From what I have read online there is no way to do what I'm wanting without the datetime module, but it's given me nothing but problems. Is there an easier solution, does my code have any outstanding errors, and how stupid am I?
2016/10/02
[ "https://Stackoverflow.com/questions/39816500", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6911375/" ]
Install mocha and its types: ```sh npm install mocha --save-dev npm install @types/mocha --save-dev ``` Then, simply import mocha in your test files: ```js import 'mocha'; describe('my test', () => { it('does something', () => { // your test }); }); ```
I was having issues with errors and warnings, the problem stemmed from me renaming `tsconfig.json` to something else which makes Visual Studio Code enter "File Scope" instead of "Explicit Project". That made it impossible to import `it` without a red squiggly. Now that I've renamed the config back to `tsconfig.json` then `import 'mocha';` works as Eryk mentioned. <https://code.visualstudio.com/Docs/languages/typescript>
34,756,978
I am trying to download py2exe, but every time I run the setup program it says "no python installation found in registry", even though I have downloaded Python 3.4 and have it working on my computer. Please help. I'm using a 64-bit computer with the 64-bit py2exe, and I downloaded Python from the python.org website. I'm on Windows 8.
2016/01/13
[ "https://Stackoverflow.com/questions/34756978", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5781821/" ]
Try using [cx\_Freeze](https://pypi.python.org/pypi/cx_Freeze) instead of py2exe.
I assume that you have installed everything properly. In your install settings you can choose whether you want to set the **system variable** for Python. As you can see from [section 3.3 of the documentation](https://docs.python.org/3.4/using/windows.html#configuring-python), you should:

> 
> 3.3.1. Excursus: Setting environment variables
> 
> 
> Windows has a built-in dialog for changing environment variables (the following guide applies to the XP classical view): Right-click the icon for your machine (usually located on your Desktop and called "My Computer") and choose Properties there. Then, open the Advanced tab and click the Environment Variables button.
> 
> 
> In short, your path is:
> 
> 
> ``` 
 My Computer / Properties / Advanced / Environment Variables 
 ``` 
> 
> In this dialog, you can add or modify User and System variables. To change System variables, you need non-restricted access to your machine (i.e. Administrator rights). 
> 
> 
> Another way of adding variables to your environment is using the set command: 
> 
> 
> ``` 
 set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib 
 ```
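Note, though, that the py2exe installer looks for Python in the Windows registry rather than in environment variables. A quick way to see whether the expected registry key exists is a sketch like the following; the exact key path is an assumption, since it differs between per-user and all-users installs and between the 32- and 64-bit registry views:

```
import winreg

# Hypothetical key path for an all-users 64-bit Python 3.4 install.
key_path = r"SOFTWARE\Python\PythonCore\3.4\InstallPath"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        install_dir, _ = winreg.QueryValueEx(key, "")
        print("Registry entry found:", install_dir)
except FileNotFoundError:
    print("No entry under HKLM; also check HKEY_CURRENT_USER or re-run the installer")
```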
31,256,397
I have data of the following form: ``` #@ <abc> <http://stackoverflow.com/questions/ask> <question> _:question1 . #@ <def> <The> <second> <http://line> . #@ <ghi> _:question1 <http#responseCode> "200"^^<http://integer> . #@ <klm> <The> <second> <http://line1.xml> . #@ <nop> _:question1 <date> "Mon, 23 Apr 2012 13:49:27 GMT" . #@ <jkn> <G> <http#fifth> "200"^^<http://integer> . #@ <k93> _:question1 <http#responseCode> "200"^^<http://integer> . #@ <k22> <This> <third> <http://line2.xml> . #@ <k73> <http://site1> <hasAddress> <http://addr1> . #@ <i27> <kd8> <fourth> <http://addr2.xml> . ``` Now whenever two lines are equal, like: **`_:question1 <http#responseCode> "200"^^<http://integer> .`**, then I want to delete the equal lines (lines which match with each other character by character are equal lines) along with (i). the subsequent line (which ends with a fullstop) (ii). line previous to the equal line (which begins with #@). ``` #@ <abc> <http://stackoverflow.com/questions/ask> <question> _:question1 . #@ <def> <The> <second> <http://line> . #@ <nop> _:question1 <date> "Mon, 23 Apr 2012 13:49:27 GMT" . #@ <jkn> <G> <http#fifth> "200"^^<http://integer> . #@ <k73> <http://site1> <hasAddress> <http://addr1> . #@ <i27> <kd8> <fourth> <http://addr2.xml> . ``` Now one way to do this is to store all these lines in a set in python and whenever two lines are equal (i.e. they match character by character) the previous and subsequent two lines are deleted. However, the size of my dataset is 100GB (and I have RAM of size 64GB), therefore I can not keep this information in set form in main-memory. Is there some way by which I can delete the duplicate lines along with their previous and subsequent two lines in python with limited main-memory space (RAM size 64 GB)
2015/07/06
[ "https://Stackoverflow.com/questions/31256397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4360034/" ]
Keep a boolean hashtable of hash codes of lines already seen.

For each line:

* if the line hash()es to something you have already seen, you have a potential match: scan the file to check whether it really is a duplicate.
* if the line hash()es to a new hash, just mark that hash as seen.

Dedicate as much memory as you can to this hashtable, and the false positive rate will be low (i.e. fewer times you will have to scan for duplicates and find none).

Example:

```
table_size = 2**16
seen = [False] * table_size

with open('yourfile', 'r') as f:
    for n, line in enumerate(f):
        h = hash(line) % table_size
        if seen[h]:
            # potential duplicate: compare against the *earlier* lines only
            dup = False
            with open('yourfile', 'r') as g:
                for i, line1 in enumerate(g):
                    if i >= n:
                        break
                    if line == line1:
                        dup = True
                        break
            if not dup:
                print(line)
        else:
            seen[h] = True
            print(line)
```

As it has been pointed out, since you cannot store all the lines in memory you don't have many options, but at least this option doesn't require scanning the file for every single line, because most of the entries in the table will be False, i.e. the algorithm is sub-quadratic as long as the table is not full; it will degenerate to O(n²) once the table is full.

You can make a very memory-efficient implementation of the hash table that requires only 1 bit for each hash code (e.g. make it an array of bytes, where each byte can store 8 boolean values).

---

See also [Bloom Filters](https://en.wikipedia.org/wiki/Bloom_filter) for more advanced techniques.
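On that closing point about spending only 1 bit per hash code, a minimal sketch of such a bit array using a `bytearray`; the table size here is only an example (2³⁰ slots cost roughly 128 MB):

```
table_size = 2 ** 30                 # number of 1-bit slots (~128 MB of memory)
bits = bytearray(table_size // 8)

def test_and_set(line):
    """Return True if this hash slot was already marked, and mark it."""
    h = hash(line) % table_size
    byte, bit = divmod(h, 8)
    already = bool(bits[byte] & (1 << bit))
    bits[byte] |= (1 << bit)
    return already

# if test_and_set(line) is True, the line is only a *candidate* duplicate
# (a hash collision is possible) and still needs the real comparison.
```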
One fairly straightforward way - make a version of your data such that each line includes a field with its line number. Use unix 'sort' to sort that new file, excluding the line number field. The sort utility will merge sort the file even if it exceeds the size of available memory. Now you have a new file in which the duplicates are ordered, along with their original line numbers. Extract the line numbers of the duplicates and then use that as input for linearly processing your original data. In more detailed steps. * Make a new version of your file such that each line is prepended by its line number. So, "someline" becomes "1, someline" * sort this file using the unix sort utility - sort -t"," -k2,2 file * Scan the new file for consecutive duplicate entries in the second field * the line numbers (first field) of such entries are the line numbers of duplicate lines in your original file - extract these and use them as input to remove duplicates in the original data. Since you know exactly where they are, you need not read in the entire file or create a giant in-memory structure for duplicates The advantage of this method compared to some of the others suggested - it always works, regardless of the size of the input and the size of your available memory and it does not fail due to hash collisions or other probabilistic artifacts. You are leveraging the merge sort in unix sort where the hard stuff - dealing with larger-than-memory input - has been done for you.
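A minimal Python sketch of steps 1 and 3 of this approach; the external sort on the second field (step 2) runs outside the script, and the file names are placeholders:

```
# Step 1: prepend line numbers so the original position survives the sort.
with open('data.txt') as src, open('numbered.txt', 'w') as dst:
    for n, line in enumerate(src):
        dst.write("%d\t%s" % (n, line))

# (run the external unix sort on field 2 here, producing 'sorted.txt')

# Step 3: scan the sorted file for consecutive equal records and collect
# the original line numbers of the later copies.
dup_line_numbers = set()
prev_text = None
with open('sorted.txt') as f:
    for row in f:
        no, text = row.split('\t', 1)
        if text == prev_text:
            dup_line_numbers.add(int(no))
        prev_text = text
```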
31,256,397
I have data of the following form: ``` #@ <abc> <http://stackoverflow.com/questions/ask> <question> _:question1 . #@ <def> <The> <second> <http://line> . #@ <ghi> _:question1 <http#responseCode> "200"^^<http://integer> . #@ <klm> <The> <second> <http://line1.xml> . #@ <nop> _:question1 <date> "Mon, 23 Apr 2012 13:49:27 GMT" . #@ <jkn> <G> <http#fifth> "200"^^<http://integer> . #@ <k93> _:question1 <http#responseCode> "200"^^<http://integer> . #@ <k22> <This> <third> <http://line2.xml> . #@ <k73> <http://site1> <hasAddress> <http://addr1> . #@ <i27> <kd8> <fourth> <http://addr2.xml> . ``` Now whenever two lines are equal, like: **`_:question1 <http#responseCode> "200"^^<http://integer> .`**, then I want to delete the equal lines (lines which match with each other character by character are equal lines) along with (i). the subsequent line (which ends with a fullstop) (ii). line previous to the equal line (which begins with #@). ``` #@ <abc> <http://stackoverflow.com/questions/ask> <question> _:question1 . #@ <def> <The> <second> <http://line> . #@ <nop> _:question1 <date> "Mon, 23 Apr 2012 13:49:27 GMT" . #@ <jkn> <G> <http#fifth> "200"^^<http://integer> . #@ <k73> <http://site1> <hasAddress> <http://addr1> . #@ <i27> <kd8> <fourth> <http://addr2.xml> . ``` Now one way to do this is to store all these lines in a set in python and whenever two lines are equal (i.e. they match character by character) the previous and subsequent two lines are deleted. However, the size of my dataset is 100GB (and I have RAM of size 64GB), therefore I can not keep this information in set form in main-memory. Is there some way by which I can delete the duplicate lines along with their previous and subsequent two lines in python with limited main-memory space (RAM size 64 GB)
2015/07/06
[ "https://Stackoverflow.com/questions/31256397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4360034/" ]
Keep a boolean hashtable of hash codes of lines already seen.

For each line:

* if the line hash()es to something you have already seen, you have a potential match: scan the file to check whether it really is a duplicate.
* if the line hash()es to a new hash, just mark that hash as seen.

Dedicate as much memory as you can to this hashtable, and the false positive rate will be low (i.e. fewer times you will have to scan for duplicates and find none).

Example:

```
table_size = 2**16
seen = [False] * table_size

with open('yourfile', 'r') as f:
    for n, line in enumerate(f):
        h = hash(line) % table_size
        if seen[h]:
            # potential duplicate: compare against the *earlier* lines only
            dup = False
            with open('yourfile', 'r') as g:
                for i, line1 in enumerate(g):
                    if i >= n:
                        break
                    if line == line1:
                        dup = True
                        break
            if not dup:
                print(line)
        else:
            seen[h] = True
            print(line)
```

As it has been pointed out, since you cannot store all the lines in memory you don't have many options, but at least this option doesn't require scanning the file for every single line, because most of the entries in the table will be False, i.e. the algorithm is sub-quadratic as long as the table is not full; it will degenerate to O(n²) once the table is full.

You can make a very memory-efficient implementation of the hash table that requires only 1 bit for each hash code (e.g. make it an array of bytes, where each byte can store 8 boolean values).

---

See also [Bloom Filters](https://en.wikipedia.org/wiki/Bloom_filter) for more advanced techniques.
Here's an outline of how I'd do it using UNIX sort/uniq: 1. Modify the data format so that each record is a single line. You could do this using the methods [here](https://stackoverflow.com/questions/8987257/concatenating-every-other-line-with-the-next). 2. Sort the data with the [`sort` command](http://unixhelp.ed.ac.uk/CGI/man-cgi?sort). Note the you can specify which fields are important with the `--key` option, you might need to exclude the `#@ <abc>` part by selecting all the other fields as keys (I wasn't entirely sure from your description). 3. Apply the [`uniq` command](http://unixhelp.ed.ac.uk/CGI/man-cgi?uniq) to the sorted output to get only the unique lines. This should all work fine on out-of-core data as far as I know.
48,964,181
I am currently trying to load a pickled file from S3 into AWS lambda and store it to a list (the pickle is a list). Here is my code: ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'rb') as data: old_list = s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) ``` I get the following error even though the file exists: ``` FileNotFoundError: [Errno 2] No such file or directory: 'oldscreenurls.pkl' ``` Any ideas?
2018/02/24
[ "https://Stackoverflow.com/questions/48964181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6327717/" ]
Super simple solution ```py import pickle import boto3 s3 = boto3.resource('s3') my_pickle = pickle.loads(s3.Bucket("bucket_name").Object("key_to_pickle.pickle").get()['Body'].read()) ```
As shown in the documentation for [`download_fileobj`](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.download_fileobj), you need to open the file in binary *write* mode and save to the file first. Once the file is downloaded, you can open it for reading and unpickle. ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'wb') as data: s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) with open('oldscreenurls.pkl', 'rb') as data: old_list = pickle.load(data) ``` `download_fileobj` takes the name of an object in S3 plus a handle to a local file, and saves the contents of that object to the file. There is also a version of this function called [`download_file`](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.download_file) that takes a filename instead of an open file handle and handles opening it for you. In this case it would probably be better to use [S3Client.get\_object](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.get_object) though, to avoid having to write and then immediately read a file. You could also write to an in-memory BytesIO object, which acts like a file but doesn't actually touch a disk. That would look something like this: ``` import pickle import boto3 from io import BytesIO s3 = boto3.resource('s3') with BytesIO() as data: s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) data.seek(0) # move back to the beginning after writing old_list = pickle.load(data) ```
48,964,181
I am currently trying to load a pickled file from S3 into AWS lambda and store it to a list (the pickle is a list). Here is my code: ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'rb') as data: old_list = s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) ``` I get the following error even though the file exists: ``` FileNotFoundError: [Errno 2] No such file or directory: 'oldscreenurls.pkl' ``` Any ideas?
2018/02/24
[ "https://Stackoverflow.com/questions/48964181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6327717/" ]
As shown in the documentation for [`download_fileobj`](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.download_fileobj), you need to open the file in binary *write* mode and save to the file first. Once the file is downloaded, you can open it for reading and unpickle. ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'wb') as data: s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) with open('oldscreenurls.pkl', 'rb') as data: old_list = pickle.load(data) ``` `download_fileobj` takes the name of an object in S3 plus a handle to a local file, and saves the contents of that object to the file. There is also a version of this function called [`download_file`](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.download_file) that takes a filename instead of an open file handle and handles opening it for you. In this case it would probably be better to use [S3Client.get\_object](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.get_object) though, to avoid having to write and then immediately read a file. You could also write to an in-memory BytesIO object, which acts like a file but doesn't actually touch a disk. That would look something like this: ``` import pickle import boto3 from io import BytesIO s3 = boto3.resource('s3') with BytesIO() as data: s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) data.seek(0) # move back to the beginning after writing old_list = pickle.load(data) ```
This is the easiest solution. You can load the data without even downloading the file locally using **S3FileSystem** ``` from s3fs.core import S3FileSystem s3_file = S3FileSystem() data = pickle.load(s3_file.open('{}/{}'.format(bucket_name, file_path))) ```
48,964,181
I am currently trying to load a pickled file from S3 into AWS lambda and store it to a list (the pickle is a list). Here is my code: ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'rb') as data: old_list = s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) ``` I get the following error even though the file exists: ``` FileNotFoundError: [Errno 2] No such file or directory: 'oldscreenurls.pkl' ``` Any ideas?
2018/02/24
[ "https://Stackoverflow.com/questions/48964181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6327717/" ]
As shown in the documentation for [`download_fileobj`](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.download_fileobj), you need to open the file in binary *write* mode and save to the file first. Once the file is downloaded, you can open it for reading and unpickle. ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'wb') as data: s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) with open('oldscreenurls.pkl', 'rb') as data: old_list = pickle.load(data) ``` `download_fileobj` takes the name of an object in S3 plus a handle to a local file, and saves the contents of that object to the file. There is also a version of this function called [`download_file`](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Bucket.download_file) that takes a filename instead of an open file handle and handles opening it for you. In this case it would probably be better to use [S3Client.get\_object](http://boto3.readthedocs.io/en/latest/reference/services/s3.html#S3.Client.get_object) though, to avoid having to write and then immediately read a file. You could also write to an in-memory BytesIO object, which acts like a file but doesn't actually touch a disk. That would look something like this: ``` import pickle import boto3 from io import BytesIO s3 = boto3.resource('s3') with BytesIO() as data: s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) data.seek(0) # move back to the beginning after writing old_list = pickle.load(data) ```
In my implementation, I read a pickle from an S3 file path like this:

```
import pickle
import boto3

name = img_url.split('/')[::-1][0]
folder = 'media'
file_name = f'{folder}/{name}'
bucket_name = bucket_name
s3 = boto3.client('s3', aws_access_key_id=aws_access_key_id,aws_secret_access_key=aws_secret_access_key)

response = s3.get_object(Bucket=bucket_name, Key=file_name)
body = response['Body'].read()
data = pickle.loads(body)
```
48,964,181
I am currently trying to load a pickled file from S3 into AWS lambda and store it to a list (the pickle is a list). Here is my code: ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'rb') as data: old_list = s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) ``` I get the following error even though the file exists: ``` FileNotFoundError: [Errno 2] No such file or directory: 'oldscreenurls.pkl' ``` Any ideas?
2018/02/24
[ "https://Stackoverflow.com/questions/48964181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6327717/" ]
Super simple solution ```py import pickle import boto3 s3 = boto3.resource('s3') my_pickle = pickle.loads(s3.Bucket("bucket_name").Object("key_to_pickle.pickle").get()['Body'].read()) ```
This is the easiest solution. You can load the data without even downloading the file locally using **S3FileSystem** ``` from s3fs.core import S3FileSystem s3_file = S3FileSystem() data = pickle.load(s3_file.open('{}/{}'.format(bucket_name, file_path))) ```
48,964,181
I am currently trying to load a pickled file from S3 into AWS lambda and store it to a list (the pickle is a list). Here is my code: ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'rb') as data: old_list = s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) ``` I get the following error even though the file exists: ``` FileNotFoundError: [Errno 2] No such file or directory: 'oldscreenurls.pkl' ``` Any ideas?
2018/02/24
[ "https://Stackoverflow.com/questions/48964181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6327717/" ]
Super simple solution ```py import pickle import boto3 s3 = boto3.resource('s3') my_pickle = pickle.loads(s3.Bucket("bucket_name").Object("key_to_pickle.pickle").get()['Body'].read()) ```
In my implementation, I read a pickle from an S3 file path like this:

```
import pickle
import boto3

name = img_url.split('/')[::-1][0]
folder = 'media'
file_name = f'{folder}/{name}'
bucket_name = bucket_name
s3 = boto3.client('s3', aws_access_key_id=aws_access_key_id,aws_secret_access_key=aws_secret_access_key)

response = s3.get_object(Bucket=bucket_name, Key=file_name)
body = response['Body'].read()
data = pickle.loads(body)
```
48,964,181
I am currently trying to load a pickled file from S3 into AWS lambda and store it to a list (the pickle is a list). Here is my code: ``` import pickle import boto3 s3 = boto3.resource('s3') with open('oldscreenurls.pkl', 'rb') as data: old_list = s3.Bucket("pythonpickles").download_fileobj("oldscreenurls.pkl", data) ``` I get the following error even though the file exists: ``` FileNotFoundError: [Errno 2] No such file or directory: 'oldscreenurls.pkl' ``` Any ideas?
2018/02/24
[ "https://Stackoverflow.com/questions/48964181", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6327717/" ]
This is the easiest solution. You can load the data without even downloading the file locally using **S3FileSystem** ``` from s3fs.core import S3FileSystem s3_file = S3FileSystem() data = pickle.load(s3_file.open('{}/{}'.format(bucket_name, file_path))) ```
In my implementation, I read a pickle from an S3 file path like this:

```
import pickle
import boto3

name = img_url.split('/')[::-1][0]
folder = 'media'
file_name = f'{folder}/{name}'
bucket_name = bucket_name
s3 = boto3.client('s3', aws_access_key_id=aws_access_key_id,aws_secret_access_key=aws_secret_access_key)

response = s3.get_object(Bucket=bucket_name, Key=file_name)
body = response['Body'].read()
data = pickle.loads(body)
```
66,144,386
I have a large list of numbers (4,718,515 of them) and I need to run a method with each of them. I have a method `Check(number)` that needs to be run with the items in `numbers`. I don't know the most efficient way to go about this in C#.

Here are some examples of how I was able to do it in other languages, if that helps.

In Python: `pool.imap_unordered`

And in Java:

```
forkJoinPool = new ForkJoinPool(processes);
forkJoinPool.submit(() -> words.stream()
        .unordered()
        .parallel()
        .filter(CheckValidity::checkValidity)
```
2021/02/10
[ "https://Stackoverflow.com/questions/66144386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10252192/" ]
There should not be any joining since the information is available in that table. Instead of selecting all the columns you can select only DebtAccountReferences as required. ``` select DebtAccountReferences from DebtPayment_DL where PaymentStartDate > '01/01/2021' and CloseDate is null; ```
Based on the information you have given, you do not need a join statement. All of the information you need is stored in the table 'DebtPayment\_DL' Your code is therefore correct and should generate the correct output.
21,361,281
I'm trying to modify a list in a tuple, the `append` method works, while `+=` operator works yet with an exception raised saying tuple could not be modified. I know a tuple is immutable, but I'm not trying to mutate it. Why this happen? ``` In [36]: t=([1,2],) In [37]: t[0].append(123) In [38]: t Out[38]: ([1, 2, 123],) In [39]: t[0]+=[4,5,] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-b5b3001fbe03> in <module>() ----> 1 t[0]+=[4,5,] TypeError: 'tuple' object does not support item assignment In [40]: t Out[40]: ([1, 2, 123, 4, 5],) ```
2014/01/26
[ "https://Stackoverflow.com/questions/21361281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1150712/" ]
Because `t[0] += [4,5,]` is interpreted as:

```
t[0] = t[0].__iadd__([4,5,])
```

`t[0].__iadd__([4,5])` succeeds, while the assignment `t[0] = ..` fails.

---

`list.__iadd__` extends the list in place and returns the list itself.

```
>>> lst = [0]
>>> lst2 = lst.__iadd__([1])
>>> lst
[0, 1]
>>> lst2
[0, 1]
>>> lst is lst2
True
```
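A small demonstration of the consequence — the exception is raised, yet the list inside the tuple has already been extended by the time the failing assignment happens:

```
t = ([1, 2],)
try:
    t[0] += [4, 5]
except TypeError as e:
    print("raised:", e)   # 'tuple' object does not support item assignment
print(t)                  # ([1, 2, 4, 5],) -- the list changed anyway
```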
In fact you **do** try to change the tuple: the `+=` statement ends with an assignment of the resulting list back into `t[0]`, and it is that item assignment on the tuple that fails. `append` modifies the list inside the tuple in place, therefore it works.
21,361,281
I'm trying to modify a list in a tuple, the `append` method works, while `+=` operator works yet with an exception raised saying tuple could not be modified. I know a tuple is immutable, but I'm not trying to mutate it. Why this happen? ``` In [36]: t=([1,2],) In [37]: t[0].append(123) In [38]: t Out[38]: ([1, 2, 123],) In [39]: t[0]+=[4,5,] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-b5b3001fbe03> in <module>() ----> 1 t[0]+=[4,5,] TypeError: 'tuple' object does not support item assignment In [40]: t Out[40]: ([1, 2, 123, 4, 5],) ```
2014/01/26
[ "https://Stackoverflow.com/questions/21361281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1150712/" ]
Because `t[0] += [4,5,]` is interpreted as:

```
t[0] = t[0].__iadd__([4,5,])
```

`t[0].__iadd__([4,5])` succeeds, while the assignment `t[0] = ..` fails.

---

`list.__iadd__` extends the list in place and returns the list itself.

```
>>> lst = [0]
>>> lst2 = lst.__iadd__([1])
>>> lst
[0, 1]
>>> lst2
[0, 1]
>>> lst is lst2
True
```
When we say a tuple is immutable, it means that the elements of the tuple (which are references to other objects) cannot be changed — that is, they cannot be made to refer to other objects. So, when you say,

```
t[0].append(123)
```

you are not changing the element at index 0 to refer to some other object. Instead, you are making changes to the same object, which is perfectly okay as far as the tuple is concerned. When you say,

```
t[0] += [4,5,]
```

Python internally calls the `__iadd__` (in-place add) method, and the statement can be understood like this

```
t[0] = t[0] + [4,5,]
```

which means that we take the object `t[0]`, add `[4,5,]` to it, and the resulting object is then **assigned** back to `t[0]`. Now we are trying to mutate the tuple (making an element of the tuple refer to some other object). That is why you see

```
TypeError: 'tuple' object does not support item assignment
```

in the latter case.
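A quick way to see that `append` never touches the tuple's reference — the element at index 0 is the same list object before and after:

```
t = ([1, 2],)
inner = t[0]
t[0].append(123)
print(t)              # ([1, 2, 123],)
print(t[0] is inner)  # True -- the tuple still refers to the same list
```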
21,361,281
I'm trying to modify a list in a tuple, the `append` method works, while `+=` operator works yet with an exception raised saying tuple could not be modified. I know a tuple is immutable, but I'm not trying to mutate it. Why this happen? ``` In [36]: t=([1,2],) In [37]: t[0].append(123) In [38]: t Out[38]: ([1, 2, 123],) In [39]: t[0]+=[4,5,] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-b5b3001fbe03> in <module>() ----> 1 t[0]+=[4,5,] TypeError: 'tuple' object does not support item assignment In [40]: t Out[40]: ([1, 2, 123, 4, 5],) ```
2014/01/26
[ "https://Stackoverflow.com/questions/21361281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1150712/" ]
`+=` is the in-place addition operator. It does *two* things:

* it calls `obj.__iadd__(rhs)` to give the object the *opportunity* to mutate the object in-place.
* it rebinds the reference to whatever the `obj.__iadd__(rhs)` call returns.

By using `+=` on a list stored in a tuple, the first step succeeds; the `t[0]` list is altered in-place, but the second step, rebinding `t[0]` to the return value of `t[0].__iadd__`, fails because a tuple is immutable.

The latter step is needed to support the same operator on both mutable and immutable objects:

```
>>> reference = somestr = 'Hello'
>>> somestr += ' world!'
>>> somestr
'Hello world!'
>>> reference
'Hello'
>>> reference is somestr
False
```

Here an immutable string was added to, and `somestr` was rebound to a *new* object, because strings are immutable.

```
>>> reference = somelst = ['foo']
>>> somelst += ['bar']
>>> somelst
['foo', 'bar']
>>> reference
['foo', 'bar']
>>> reference is somelst
True
```

Here the list was altered in-place and `somelst` was rebound to the *same object*, because `list.__iadd__()` can alter the list object in-place.

From the [augmented arithmetic special method hooks documentation](http://docs.python.org/2/reference/datamodel.html#object.__iadd__):

> 
> These methods are called to implement the augmented arithmetic assignments (`+=`, `-=`, `*=`, `/=`, `//=`, `%=`, `**=`, `<<=`, `>>=`, `&=`, `^=`, `|=`). These methods should attempt to do the operation in-place (modifying `self`) and return the result (which could be, but does not have to be, `self`).
> 
> 

The work-around here is to call `t[0].extend()` instead:

```
>>> t = ([1,2],)
>>> t[0].extend([3, 4, 5])
>>> t[0]
[1, 2, 3, 4, 5]
```
Because `t[0] += [4,5,]` is interpreted as:

```
t[0] = t[0].__iadd__([4,5,])
```

`t[0].__iadd__([4,5])` succeeds, while the assignment `t[0] = ..` fails.

---

`list.__iadd__` extends the list in place and returns the list itself.

```
>>> lst = [0]
>>> lst2 = lst.__iadd__([1])
>>> lst
[0, 1]
>>> lst2
[0, 1]
>>> lst is lst2
True
```
21,361,281
I'm trying to modify a list in a tuple, the `append` method works, while `+=` operator works yet with an exception raised saying tuple could not be modified. I know a tuple is immutable, but I'm not trying to mutate it. Why this happen? ``` In [36]: t=([1,2],) In [37]: t[0].append(123) In [38]: t Out[38]: ([1, 2, 123],) In [39]: t[0]+=[4,5,] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-b5b3001fbe03> in <module>() ----> 1 t[0]+=[4,5,] TypeError: 'tuple' object does not support item assignment In [40]: t Out[40]: ([1, 2, 123, 4, 5],) ```
2014/01/26
[ "https://Stackoverflow.com/questions/21361281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1150712/" ]
`+=` is the in-place addition operator. It does *two* things:

* it calls `obj.__iadd__(rhs)` to give the object the *opportunity* to mutate the object in-place.
* it rebinds the reference to whatever the `obj.__iadd__(rhs)` call returns.

By using `+=` on a list stored in a tuple, the first step succeeds; the `t[0]` list is altered in-place, but the second step, rebinding `t[0]` to the return value of `t[0].__iadd__`, fails because a tuple is immutable.

The latter step is needed to support the same operator on both mutable and immutable objects:

```
>>> reference = somestr = 'Hello'
>>> somestr += ' world!'
>>> somestr
'Hello world!'
>>> reference
'Hello'
>>> reference is somestr
False
```

Here an immutable string was added to, and `somestr` was rebound to a *new* object, because strings are immutable.

```
>>> reference = somelst = ['foo']
>>> somelst += ['bar']
>>> somelst
['foo', 'bar']
>>> reference
['foo', 'bar']
>>> reference is somelst
True
```

Here the list was altered in-place and `somelst` was rebound to the *same object*, because `list.__iadd__()` can alter the list object in-place.

From the [augmented arithmetic special method hooks documentation](http://docs.python.org/2/reference/datamodel.html#object.__iadd__):

> 
> These methods are called to implement the augmented arithmetic assignments (`+=`, `-=`, `*=`, `/=`, `//=`, `%=`, `**=`, `<<=`, `>>=`, `&=`, `^=`, `|=`). These methods should attempt to do the operation in-place (modifying `self`) and return the result (which could be, but does not have to be, `self`).
> 
> 

The work-around here is to call `t[0].extend()` instead:

```
>>> t = ([1,2],)
>>> t[0].extend([3, 4, 5])
>>> t[0]
[1, 2, 3, 4, 5]
```
In fact you **do** try to change the tuple: the `+=` statement ends with an assignment of the resulting list back into `t[0]`, and it is that item assignment on the tuple that fails. `append` modifies the list inside the tuple in place, therefore it works.
21,361,281
I'm trying to modify a list in a tuple, the `append` method works, while `+=` operator works yet with an exception raised saying tuple could not be modified. I know a tuple is immutable, but I'm not trying to mutate it. Why this happen? ``` In [36]: t=([1,2],) In [37]: t[0].append(123) In [38]: t Out[38]: ([1, 2, 123],) In [39]: t[0]+=[4,5,] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-b5b3001fbe03> in <module>() ----> 1 t[0]+=[4,5,] TypeError: 'tuple' object does not support item assignment In [40]: t Out[40]: ([1, 2, 123, 4, 5],) ```
2014/01/26
[ "https://Stackoverflow.com/questions/21361281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1150712/" ]
`+=` is the in-place addition operator. It does *two* things:

* it calls `obj.__iadd__(rhs)` to give the object the *opportunity* to mutate the object in-place.
* it rebinds the reference to whatever the `obj.__iadd__(rhs)` call returns.

By using `+=` on a list stored in a tuple, the first step succeeds; the `t[0]` list is altered in-place, but the second step, rebinding `t[0]` to the return value of `t[0].__iadd__`, fails because a tuple is immutable.

The latter step is needed to support the same operator on both mutable and immutable objects:

```
>>> reference = somestr = 'Hello'
>>> somestr += ' world!'
>>> somestr
'Hello world!'
>>> reference
'Hello'
>>> reference is somestr
False
```

Here an immutable string was added to, and `somestr` was rebound to a *new* object, because strings are immutable.

```
>>> reference = somelst = ['foo']
>>> somelst += ['bar']
>>> somelst
['foo', 'bar']
>>> reference
['foo', 'bar']
>>> reference is somelst
True
```

Here the list was altered in-place and `somelst` was rebound to the *same object*, because `list.__iadd__()` can alter the list object in-place.

From the [augmented arithmetic special method hooks documentation](http://docs.python.org/2/reference/datamodel.html#object.__iadd__):

> 
> These methods are called to implement the augmented arithmetic assignments (`+=`, `-=`, `*=`, `/=`, `//=`, `%=`, `**=`, `<<=`, `>>=`, `&=`, `^=`, `|=`). These methods should attempt to do the operation in-place (modifying `self`) and return the result (which could be, but does not have to be, `self`).
> 
> 

The work-around here is to call `t[0].extend()` instead:

```
>>> t = ([1,2],)
>>> t[0].extend([3, 4, 5])
>>> t[0]
[1, 2, 3, 4, 5]
```
When we say a tuple is immutable, it means that the elements of the tuple (which are references to other objects) cannot be changed — that is, they cannot be made to refer to other objects. So, when you say,

```
t[0].append(123)
```

you are not changing the element at index 0 to refer to some other object. Instead, you are making changes to the same object, which is perfectly okay as far as the tuple is concerned. When you say,

```
t[0] += [4,5,]
```

Python internally calls the `__iadd__` (in-place add) method, and the statement can be understood like this

```
t[0] = t[0] + [4,5,]
```

which means that we take the object `t[0]`, add `[4,5,]` to it, and the resulting object is then **assigned** back to `t[0]`. Now we are trying to mutate the tuple (making an element of the tuple refer to some other object). That is why you see

```
TypeError: 'tuple' object does not support item assignment
```

in the latter case.
50,314,242
I want to save floating-point numbers as pixels in an image file. I am currently working in OpenCV-Python, but I have also tried it with Pillow (PIL). Both packages convert `float` pixel data to integers before writing them to the file. I want to save pixel values such as: ```none (245.7865, 123.18788, 98.9866) ``` But when I read back the image file I get: ```none (246, 123, 99) ``` Somehow my floating-point numbers get rounded off and converted to integers. How do I stop PIL or OpenCV from converting them to integers?
2018/05/13
[ "https://Stackoverflow.com/questions/50314242", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5256558/" ]
Most likely you are looking for: ``` lapply(seq_along(x), function(i){ quantile(x[1:i], probs = 0.95) }) ``` For each index `i` in `x`, this subsets `x` from `1` to `i` and returns the `quantile`. The output will be a list; you can convert it to a vector: ``` unlist(lapply(seq_along(x), function(i){ quantile(x[1:i], probs=0.95) })) ``` or better yet (as @Rui Barradas suggested in the comments) use `sapply`: ``` sapply(seq_along(x), function(i){ quantile(x[1:i], probs=0.95) }) ```
Using `rollapply` would be something like the following. ``` library(xts) rollapply(x[, "random"], width = list(seq(-length(x[, "random"]), 0)), FUN = quantile, probs = 0.95, partial = 0) ```
50,314,242
I want to save floating-point numbers as pixels in an image file. I am currently working in OpenCV-Python, but I have also tried it with Pillow (PIL). Both packages convert `float` pixel data to integers before writing them to the file. I want to save pixel values such as: ```none (245.7865, 123.18788, 98.9866) ``` But when I read back the image file I get: ```none (246, 123, 99) ``` Somehow my floating-point numbers get rounded off and converted to integers. How do I stop PIL or OpenCV from converting them to integers?
2018/05/13
[ "https://Stackoverflow.com/questions/50314242", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5256558/" ]
Most likely you are looking for: ``` lapply(seq_along(x), function(i){ quantile(x[1:i], probs = 0.95) }) ``` For each index `i` in `x`, this subsets `x` from `1` to `i` and returns the `quantile`. The output will be a list; you can convert it to a vector: ``` unlist(lapply(seq_along(x), function(i){ quantile(x[1:i], probs=0.95) })) ``` or better yet (as @Rui Barradas suggested in the comments) use `sapply`: ``` sapply(seq_along(x), function(i){ quantile(x[1:i], probs=0.95) }) ```
Convert to zoo in which case `rollapplyr.zoo` can handle vector widths: ``` rollapplyr(as.zoo(x), seq_along(x), quantile, probs = 0.95) ``` Another approach is to use a width of `length(x)` and specify `partial=TRUE` : ``` rollapplyr(as.zoo(x), length(x), quantile, probs = 0.95, partial = TRUE) ```
68,873,535
I have a large MPEG (.ts) binary file, usually a multiple of 188 bytes in size. I use Python 3; when I read 188 bytes at a time and parse them to get the required value, it is really slow. I must traverse each 188-byte packet to get the value of the PID (binary data). * At the same time, when I use any offline professional MPEG analyzer, it gets the list of all PID values and their total counts within about 45 seconds for a 5-minute TS file, whereas my program takes > 10 minutes to do the same. * I don't understand how they can do it so quickly, even though they might be written in C or C++. * I tried Python multiprocessing, but it is not helping much. This means my method of parsing and working with 188 bytes of data is not proper and is causing a huge delay. --- ``` with open(file2,'rb') as f: data=f.read(188) if len(data)==0: break b=BitStream(data) ... #parse b to get the required value ... # and increase count when needed ... cnt=cnt+188 f.seek(cnt) ```
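For context, here is a rough sketch of the kind of approach a fast analyzer might take (my own illustration, not the asker's code; `input.ts` is a placeholder path and packet alignment is assumed): read many packets per call and extract the PID with plain byte arithmetic instead of building a `BitStream` per packet.
```python
from collections import Counter

pid_counts = Counter()
with open("input.ts", "rb") as f:                 # placeholder filename
    while True:
        chunk = f.read(188 * 4096)                # read many packets per call
        if not chunk:
            break
        for i in range(0, len(chunk), 188):
            pkt = chunk[i:i + 188]
            if len(pkt) < 188 or pkt[0] != 0x47:  # 0x47 is the TS sync byte
                continue
            pid = ((pkt[1] & 0x1F) << 8) | pkt[2] # 13-bit PID from header bytes 1-2
            pid_counts[pid] += 1

print(pid_counts.most_common(10))
```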
2021/08/21
[ "https://Stackoverflow.com/questions/68873535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8014376/" ]
It's already been copied. A `c_char_p` return is automatically converted to an immutable Python `bytes` object. If the return type was `POINTER(c_char)` *then* you would have a pointer to the actual memory. Sometimes you need the explicit type if you need to pass that pointer to a function to free the memory later. A quick proof: ```py from ctypes import * dll = CDLL('msvcrt') dll.strcpy.argtypes = c_char_p,c_char_p dll.strcpy.restype = c_char_p # strcpy returns a pointer to the destination buffer 'b' b = create_string_buffer(30) b2 = dll.strcpy(b,b'hello, world!') print(b2) b[0] = b'm' # alter the destination print(b.value) print(b2) # no change to b2 print() dll.strcpy.restype = POINTER(c_char) b3 = dll.strcpy(b,b'hello there!') print(b3) print(b3[:12]) b[0] = b'm' # alter the destination print(b.value) print(b3[:12]) # changed! ``` Output: ```none b'hello, world!' b'mello, world!' b'hello, world!' # no change <ctypes.LP_c_char object at 0x000001B65E9A5840> # the pointer b'hello there!' # copied data from pointer b'mello there!' # changed destination buffer b'mello there!' # copied data from pointer again, changed! ```
`c_char_p` by default returns a bytes object, so it prints with the `b'` prefix. If you need to print it as a string, you can decode it with `.decode('utf-8')`. **Example:** ``` print(b2) # prints b'hello, world!' as bytes print(b2.decode('utf-8')) # prints 'hello, world!' as a string ```
38,736,872
I am trying to understand more about `__iter__` in Python 3. For some reason I understand `__getitem__` better than `__iter__`. I think I somehow don't get the corresponding **next** implementation that goes with `__iter__`. I have the following code: ``` class Item: def __getitem__(self,pos): return range(0,30,10)[pos] item1= Item() print (item1[1]) # 10 for i in item1: print (i) # 0 10 20 ``` I understand the code above, but then again how do I write the equivalent code using `__iter__` and `__next__()`? ``` class Item: def __iter__(self): return self #Lost here def __next__(self,pos): #Lost here ``` I understand that when Python sees a `__getitem__` method, it tries iterating over that object by calling the method with an integer index starting at `0`.
2016/08/03
[ "https://Stackoverflow.com/questions/38736872", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2126725/" ]
In general, a really good approach is to make `__iter__` a generator by `yield`ing values. This might be *less* intuitive but it is straightforward; you just yield back the results you want and `__next__` is then provided automatically for you: ``` class Item: def __iter__(self): for item in range(0, 30, 10): yield item ``` This just uses the power of `yield` to get the desired effect. When Python calls `__iter__` on your object, it expects back an `iterator` (i.e. an object that supports `__next__` calls); a generator does just that, producing each item as defined in your generator function (i.e. `__iter__` in this case) when `__next__` is called: ``` >>> i = iter(Item()) >>> print(i) # generator, supports __next__ <generator object __iter__ at 0x7f6aeaf9e6d0> >>> next(i) 0 >>> next(i) 10 >>> next(i) 20 ``` Now you get the same effect as `__getitem__`. The difference is that no `index` is passed in; you have to manually loop through it in order to yield the result: ``` >>> for i in Item(): ... print(i) 0 10 20 ``` Apart from this, there are two other alternatives for creating an object that supports iteration. **One-time looping: Make item an iterator** Make `Item` an iterator by defining `__next__` and returning `self` from `__iter__`. In this case, since you're not using `yield`, the `__iter__` method returns `self` and `__next__` handles the logic of returning values: ``` class Item: def __init__(self): self.val = 0 def __iter__(self): return self def __next__(self): if self.val > 2: raise StopIteration res = range(0, 30, 10)[self.val] self.val += 1 return res ``` This also uses an auxiliary `val` to get the result from the range and check if we should still be iterating (if not, we raise `StopIteration`): ``` >>> for i in Item(): ... print(i) 0 10 20 ``` The problem with this approach is that it is a one-time ride: after iterating once, `self.val` points to `3` and iteration can't be performed again (using `yield` resolves this issue). (Yes, you could go and set `val` to 0 but that's just being sneaky.) **Many times looping: create custom iterator object.** The second approach is to use a custom iterator object specifically for your `Item` class and return it from `Item.__iter__` instead of `self`: ``` class Item: def __iter__(self): return IterItem() class IterItem: def __init__(self): self.val = 0 def __iter__(self): return self def __next__(self): if self.val > 2: raise StopIteration res = range(0, 30, 10)[self.val] self.val += 1 return res ``` Now every time you iterate, a new custom iterator is supplied and you can support multiple iterations over `Item` objects.
`__iter__` returns an iterator (here a generator, as @machineyearning said in the comments), and with `next` you can iterate over the object. See the example: ``` class Item: def __init__(self): self.elems = range(10) self.current = 0 def __iter__(self): return (x for x in self.elems) def __next__(self): if self.current >= len(self.elems): self.current = 0 raise StopIteration value = self.elems[self.current] self.current += 1 return value >>> i = Item() >>> a = iter(i) >>> for x in a: ... print(x) ... 0 1 2 3 4 5 6 7 8 9 >>> for x in i: ... print(x) ... 0 1 2 3 4 5 6 7 8 9 ```
68,019,978
I am building an AdaBoost model with sklearn. Last year I made the same model with the same data, and I was able to access the feature importances. This year when I build the model with the same data, the feature importance attribute contains NaNs. I have read some other posts where people have had the same problem, but in those cases there were NaNs in their data; mine does not have any. I am at a loss as to what is different, but I have isolated the base_estimator DecisionTree max_depth as the problem. The higher the max_depth, the greater the number of NaNs. However, I have identified that max_depth=10 is best for my work. This is my code. Can anyone point out where I am going wrong, explain what is happening, or suggest another way to get the feature importances? I have recreated the same error with a sklearn dataset below. I have an old version of sklearn with Python 2.7, and with the same data this error doesn't occur. Thank you. The data that I am working with is available here: <https://github.com/scikit-learn/scikit-learn/discussions/20315> ``` import pandas import xarray import numpy as np from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import AdaBoostClassifier train_data=pandas.read_csv('data_train.csv') model_variables=['RH','t2m','tp_r5','swvl1','SM_r20','tp','cvh','vdi','SM_r10','SM_IDW'] X = train_data[model_variables] # Features y = train_data.ignition_no np.count_nonzero(np.isnan(y)) 0 #no missing target variables tree = DecisionTreeClassifier(max_depth=10, random_state=12) ada_model= AdaBoostClassifier(base_estimator = tree, random_state=12) model= ada_model.fit(X,y) model.feature_importances_ /home/mo/morc/.virtualenvs/newroo/lib/python3.6/site-packages/sklearn/tree/_classes.py:605: RuntimeWarning: invalid value encountered in true_divide return self.tree_.compute_feature_importances() array([ nan, nan, nan, nan, nan, nan, nan, 0.02568412, nan, nan]) >>> #Here is the same error recreated with the load_digits dataset from sklearn import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import accuracy_score from sklearn.model_selection import cross_val_score from sklearn.model_selection import cross_val_predict from sklearn.model_selection import train_test_split from sklearn.model_selection import learning_curve from sklearn.datasets import load_digits >>> dataset = load_digits() >>> X = dataset['data'] >>> y = dataset['target'] >>> >>> score = [] >>> for depth in [1,2,10] : ... reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=depth)) ... scores_ada = cross_val_score(reg_ada, X, y, cv=6) ... score.append(scores_ada.mean()) ... 
score >>>[0.2615310293571163, 0.6466908212560386, 0.9621609067261242] #best depth is 10, so making ada_boost classifier with base_estimator of max_depth=10 reg_ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=10)) model=reg_ada.fit(X,y) model.feature_importances_ /home/mo/morc/.virtualenvs/fox/lib/python3.6/site-packages/sklearn/tree/_classes.py:605: RuntimeWarning: invalid value encountered in true_divide return self.tree_.compute_feature_importances() array([0.00000000e+00, 3.97071545e-03, nan, 1.04739889e-02, 1.71911851e-02, 1.13877668e-02, 5.53334918e-03, 3.48635371e-03, 3.81562332e-16, 2.97882448e-04, 5.21107270e-03, 1.90482369e-03, 9.54317398e-03, nan, 4.04579846e-03, 2.85770367e-03, 2.41466161e-03, 2.22172771e-04, nan, nan, 2.64452796e-02, 2.35455672e-02, 5.91982800e-03, 9.63862404e-15, 2.51667106e-05, 8.22347398e-03, 3.53522516e-02, 3.49199633e-02, nan, nan, 7.85924750e-03, 0.00000000e+00, 0.00000000e+00, 2.43861329e-02, nan, 4.52136284e-03, 2.84309340e-02, 8.70846798e-03, nan, 0.00000000e+00, 0.00000000e+00, 8.51258472e-03, nan, 4.08880381e-02, 6.47568594e-03, 1.75046890e-02, 1.37183583e-02, 3.95955193e-32, 0.00000000e+00, 6.36631892e-05, 2.06906508e-02, nan, nan, nan, 9.47079562e-03, 3.71242630e-03, 0.00000000e+00, 7.14153611e-06, nan, 5.14482654e-03, 2.23621689e-02, 1.79753787e-02, 3.05869803e-03, 4.80512718e-03]) ```
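As an aside, a rough workaround sketch (my own suggestion, not from the answers below, and unweighted rather than using the boost weights) would be to average only the boosted trees whose importances came out finite; `model` is assumed to be the fitted AdaBoostClassifier from above:
```python
import numpy as np

# Collect per-tree importances and drop the trees that produced NaNs.
imps = np.array([est.feature_importances_ for est in model.estimators_])
finite_rows = ~np.isnan(imps).any(axis=1)
approx_importances = imps[finite_rows].mean(axis=0)   # simple unweighted average
print(approx_importances)
```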
2021/06/17
[ "https://Stackoverflow.com/questions/68019978", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8684167/" ]
Make an AJAX call to the specific endpoint and update the DOM accordingly.
Laravel is a PHP framework. PHP requests data from the server and returns it to the client, which requires a page refresh. To achieve an interchange of data without a refresh, you have a few options. **Option one:** use jQuery AJAX. It works well with Laravel and Bootstrap. Get started [here](https://jquery.com/) on the official website. **Option two (recommended by me):** use Laravel Livewire. It is simple and easy; it is plain PHP and uses the same Laravel functions. Get started [here](https://laravel-livewire.com/) on the official website. **Option three:** use Vue.js. You can plug Vue.js into your Laravel application, and Vue components can be used inside Laravel Blade templates. This may be hard if you have little experience with JavaScript frameworks. You can read more in the Laravel [docs](https://laravel.com/docs/8.x/mix#vue).
50,735,626
I am trying to make a simple POST API in Flask (Python) but am getting this error: ``` TypeError: list object is not an iterator ``` When I review my code it seems fine, so what could be the problem? My function which specifically has the problem: ``` def post(self,name): #return {'message': name} item = next(filter(lambda x: x['name'] == name, items), None) if item: return {'message':"An item with name '{}' already exists. ".format(name)},400 data = request.get_json() item = {'name': name, 'price':data['price']} items.append(item) return item, 201 ``` When I try to post something from **Postman** I get this **error** log: ``` [2018-06-07 10:41:02,849] ERROR in app: Exception on /item/test [POST] Traceback (most recent call last): File "C:\Python27\lib\site-packages\flask\app.py", line 1612, in full_dispatch_request rv = self.dispatch_request() File "C:\Python27\lib\site-packages\flask\app.py", line 1598, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "C:\Python27\lib\site-packages\flask_restful\__init__.py", line 480, in wrapper resp = resource(*args, **kwargs) File "C:\Python27\lib\site-packages\flask\views.py", line 84, in view return self.dispatch_request(*args, **kwargs) File "C:\Python27\lib\site-packages\flask_restful\__init__.py", line 595, in dispatch_request resp = meth(*args, **kwargs) File "G:\flask_workspace\MealBookingApp\MealBookingApp\MealBookingApp\views.py", line 30, in post item = next(filter(lambda x: x['name'] == name, items), None) TypeError: list object is not an iterator 127.0.0.1 - - [07/Jun/2018 10:41:02] "POST /item/test HTTP/1.1" 500 - ``` **NB:** ***line 30*** is the line below: ``` item = next(filter(lambda x: x['name'] == name, items), None) ```
2018/06/07
[ "https://Stackoverflow.com/questions/50735626", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6687699/" ]
Try using `iter()` **Ex:** ``` item = next(iter(filter(lambda x: x['name'] == name, items)), None) ```
To elaborate on @Rakesh's answer, lists aren't iterators, and the output of `filter()` in Python 2 is a list. To fix this, you can use the `iter()` function to produce an iterator corresponding to the problematic list so that `next()` can be called appropriately. The same code should then solve your problem: ``` item = next(iter(filter(lambda x: x['name'] == name, items)), None) ```
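As a side note, here is a small sketch (the `items` list is made up for illustration) showing that in Python 3 `filter()` already returns a lazy iterator, so `next()` works on it directly without wrapping it in `iter()`:
```python
items = [{'name': 'bread', 'price': 1.5}, {'name': 'milk', 'price': 0.9}]

# Python 3: filter() returns an iterator, so next() accepts it directly.
item = next(filter(lambda x: x['name'] == 'milk', items), None)
print(item)      # {'name': 'milk', 'price': 0.9}

missing = next(filter(lambda x: x['name'] == 'tea', items), None)
print(missing)   # None
```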
576,557
If I learn Python 3.0 and code in it, will my code still be compatible with Python 2.6 (or 2.5 too!)? --- Remarkably similar to: [If I'm Going to Learn Python, Should I Learn 2.x or Just Jump Into 3.0?](https://stackoverflow.com/questions/410609/if-im-going-to-learn-python-should-i-learn-2-x-or-just-jump-into-3-0/410626)
2009/02/23
[ "https://Stackoverflow.com/questions/576557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69746/" ]
No, 3.x is largely incompatible with 2.x (that was actually a major motivation for doing it). In fact, you probably shouldn't be using 3.0 at all-- it's rather unusable at the moment, and is still mostly intended for library developers to port to it so that it can be usable.
NO. Python 3 code is backwards incompatible with 2.6. I recommend beginning with 2.6, because your code will be more **useful**.
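To give one small, concrete illustration of the incompatibility (my own example, not part of the original answer), integer division alone behaves differently between the two lines:
```python
# In Python 2.x, 3 / 2 evaluates to 1 (floor division on ints);
# in Python 3.x, it evaluates to 1.5 (true division).
print(3 / 2)    # 1.5 on Python 3, 1 on Python 2
print(3 // 2)   # 1 on both; // is explicit floor division
```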
576,557
If I learn Python 3.0 and code in it, will my code still be compatible with Python 2.6 (or 2.5 too!)? --- Remarkably similar to: [If I'm Going to Learn Python, Should I Learn 2.x or Just Jump Into 3.0?](https://stackoverflow.com/questions/410609/if-im-going-to-learn-python-should-i-learn-2-x-or-just-jump-into-3-0/410626)
2009/02/23
[ "https://Stackoverflow.com/questions/576557", "https://Stackoverflow.com", "https://Stackoverflow.com/users/69746/" ]
No, 3.x is largely incompatible with 2.x (that was actually a major motivation for doing it). In fact, you probably shouldn't be using 3.0 at all-- it's rather unusable at the moment, and is still mostly intended for library developers to port to it so that it can be usable.
It would be easier to use 2.6 right now because most external libraries are not compatible with 3 yet.