Dataset columns:

| column | type | min | max |
| --- | --- | --- | --- |
| `qid` | int64 | 46k | 74.7M |
| `question` | string (length) | 54 | 37.8k |
| `date` | string (length) | 10 | 10 |
| `metadata` | sequence (length) | 3 | 3 |
| `response_j` | string (length) | 29 | 22k |
| `response_k` | string (length) | 26 | 13.4k |
| `__index_level_0__` | int64 | 0 | 17.8k |
61,889,217
I have this list of dictionaries. ``` [{'value': '299021.000000', 'abbrev': 'AAA'}, {'value': '299021.000000', 'abbrev': 'BBB'}, {'value': '8.597310', 'abbrev': 'CCC'}] ``` I want to transform this list to look like this: ``` [{'AAA': '299021.000000'}, {'BBB': '299021.000000'}, {'CCC': '8.597310'}] ``` Any hints on how to get started? I am using Python 3.7.
2020/05/19
[ "https://Stackoverflow.com/questions/61889217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7518091/" ]
With [list comprehension](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) you can do the following: ``` data = [ {'value': '299021.000000', 'abbrev': 'AAA'}, {'value': '299021.000000', 'abbrev': 'BBB'}, {'value': '8.597310', 'abbrev': 'CCC'} ] data_2 = [{elem["abbrev"]: elem["value"]} for elem in data] print(data_2) # Output: # [{'AAA': '299021.000000'}, {'BBB': '299021.000000'}, {'CCC': '8.597310'}] ```
Using a for loop: ``` original_list = [ {'value': '299021.000000', 'abbrev': 'AAA'}, {'value': '299021.000000', 'abbrev': 'BBB'}, {'value': '8.597310', 'abbrev': 'CCC'} ] transformed_list = [] for i in original_list: key = i['abbrev'] value = i['value'] single_dict = {} single_dict[key] = value transformed_list.append(single_dict) ``` I prefer the answer from Xukrao using a list comprehension. Elegant and easy to read in one line.
16,778
30,070,300
**I want to bind two events to one ListCtrl widget in wxPython,** **such as left click and right click.** The former will refresh the content of somewhere, and the latter will create a PopupMenu, which contains things like rename, settings... How should I do this? I tried `wx.EVT_LIST_ITEM_SELECTED`, `wx.EVT_LIST_COL_CLICK`. It works! **But when I use `wx.EVT_LIST_ITEM_RIGHT_CLICK`, it will also trigger `wx.EVT_LIST_ITEM_SELECTED`.** So, how can I do this without conflict? Thank you! Here is my code: ``` import wx class ListCtrlLeft(wx.ListCtrl): def __init__(self, parent, i): wx.ListCtrl.__init__(self, parent, i, style=wx.LC_REPORT | wx.LC_HRULES | wx.LC_NO_HEADER | wx.LC_SINGLE_SEL) self.parent = parent self.Bind(wx.EVT_SIZE, self.on_size) self.InsertColumn(0, '') self.InsertStringItem(0, 'library-one') self.InsertStringItem(0, 'library-two') self.Bind(wx.EVT_LIST_ITEM_SELECTED, self.on_lib_select) self.Bind(wx.EVT_LIST_ITEM_RIGHT_CLICK, self.on_lib_right_click) def on_size(self, event): size = self.parent.GetSize() self.SetColumnWidth(0, size.x - 5) def on_lib_select(self, evt): print "Item selected" def on_lib_right_click(self, evt): print "Item right-clicked" class Memo(wx.Frame): def __init__(self, parent, i, title, size): wx.Frame.__init__(self, parent, i, title=title, size=size) self._create_splitter_windows() self.Centre() self.Show(True) def _create_splitter_windows(self): horizontal_box = wx.BoxSizer(wx.HORIZONTAL) splitter = wx.SplitterWindow(self, -1, style=wx.SP_LIVE_UPDATE | wx.SP_NOBORDER) splitter.SetMinimumPaneSize(250) vertical_box_left = wx.BoxSizer(wx.VERTICAL) panel_left = wx.Panel(splitter, -1) panel_left_top = wx.Panel(panel_left, -1, size=(-1, 30)) panel_left_top.SetBackgroundColour('#53728c') panel_left_str = wx.StaticText(panel_left_top, -1, 'Libraries', (5, 5)) panel_left_str.SetForegroundColour('white') panel_left_bottom = wx.Panel(panel_left, -1, style=wx.BORDER_NONE) vertical_box_left_bottom = wx.BoxSizer(wx.VERTICAL) # Here!!!! 
list_1 = ListCtrlLeft(panel_left_bottom, -1) # ---------- vertical_box_left_bottom.Add(list_1, 1, wx.EXPAND) panel_left_bottom.SetSizer(vertical_box_left_bottom) vertical_box_left.Add(panel_left_top, 0, wx.EXPAND) vertical_box_left.Add(panel_left_bottom, 1, wx.EXPAND) panel_left.SetSizer(vertical_box_left) # right vertical_box_right = wx.BoxSizer(wx.VERTICAL) panel_right = wx.Panel(splitter, -1) # ...... panel_right.SetSizer(vertical_box_right) horizontal_box.Add(splitter, -1, wx.EXPAND | wx.TOP, 1) self.SetSizer(horizontal_box) splitter.SplitVertically(panel_left, panel_right, 250) def on_quit(self, evt): self.Close() evt.Skip() if __name__ == "__main__": app = wx.App() Memo(None, -1, 'PyMemo', (500, 300)) app.MainLoop() ```
2015/05/06
[ "https://Stackoverflow.com/questions/30070300", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4827183/" ]
I got my answer; it was quite simple. Open a terminal and type this command: ``` keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android ``` Press Enter and you will get the following info, where the SHA1 can be seen. ..... Certificate fingerprints: ``` MD5: 79:F5:59:................FE:09:D1:EC SHA1: 33:57:0A:C9:..................:91:47:14:CD SHA256: 39:AA:23:88:D6:...................33:DF:61:24:CB:17:47:EA:39:94:99 ``` .......
**A very easy and simple way of finding the SHA1 key for a certificate, using only Android Studio.** You can use the steps below: ``` A. Open Android Studio B. Open your project C. Click on Gradle (from the right side panel, you will see the Gradle bar) D. Click on Refresh (click on Refresh in the Gradle bar, and you will see the list of Gradle scripts of your project) E. Click on your project (your project name in the list (root)) F. Click on Tasks G. Click on android H. Double-click on signingReport (you will get SHA1 and MD5 in the Run bar) ``` **OR** ``` 1. Click on your package and choose New -> Google -> Google Maps Activity 2. Android Studio redirects you to google_maps_api.xml ``` [![enter image description here](https://i.stack.imgur.com/Na0EY.png)](https://i.stack.imgur.com/Na0EY.png)
16,779
72,304,877
Super new to Python and having an error on one of my Codecademy projects that I can't seem to understand. Even referencing the walkthrough and altering line 17 multiple times, I can't quite understand why this is returning an error. Could somebody help me understand the error so I can learn from it? This is my code that's returning a TypeError: ``` lovely_loveseat_description = """Lovely Loveseat. Tufted polyester blend on wood. 32 inches high X 40 inches wide X 30 inches deep. Available in Red or White.""" lovely_loveseat_price = 254.00 stylish_setee_description = """Stylish Settee. Faux leather on birch. 29.50 inches high X 54.75 inches wide X 28 inches deep. Available only in Black.""" stylish_settee_price = 150.50 luxurious_lamp_description = "Luxurious Lamp. Glass and iron. 36 inches tall. Available in Brown with a Stylish Beige shade." luxurious_lamp_price = 52.15 #sales tax variable. sales_tax = .088 #base shopping cart total value. customer_one_total = 0 #shopping cart contents variable. customer_one_itemization = "" #shopping area. customer_one_total =+ lovely_loveseat_price customer_one_itemization =+ lovely_loveseat_description customer_one_total =+ luxurious_lamp_price customer_one_itemization =+ luxurious_lamp_description customer_one_tax = customer_one_total * sales_tax customer_one_total =+ customer_one_tax print("Customer One Items:") print(customer_one_itemization) print("Customer One Total Cost:") print(customer_one_total) ``` and this is the error: ``` Traceback (most recent call last): File "script.py", line 17, in <module> customer_one_itemization =+ lovely_loveseat_description TypeError: bad operand type for unary +: 'str' ``` Thanks heaps, guys.
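For context on what the traceback is complaining about: `=+` is not an operator in Python; it parses as a plain assignment of a unary plus. A minimal sketch of the difference (the variable names are illustrative, not taken from the Codecademy project):

```python
# "=+" is parsed as "= (+value)": assignment of the unary plus of the
# right-hand side. Unary plus is defined for numbers but not for strings,
# which is exactly the TypeError the traceback shows for line 17.
total = 10
total =+ 5          # parsed as: total = (+5) -- REPLACES total, no addition
assert total == 5

# The intended operator is "+=", which adds/appends in place.
total = 10
total += 5
assert total == 15

itemization = ""
itemization += "Lovely Loveseat. "   # "+=" works for strings too
assert itemization == "Lovely Loveseat. "

# Unary plus applied to a string reproduces the reported error:
try:
    itemization =+ "Luxurious Lamp."
except TypeError as e:
    assert "unary +" in str(e)
```

So every `=+` in the shopping-area section silently overwrites the running total (for numbers) and raises for the string itemization; swapping each `=+` for `+=` fixes both.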
2022/05/19
[ "https://Stackoverflow.com/questions/72304877", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19153697/" ]
If there is only one non-empty value per group, use: ``` import numpy as np df = df.replace('',np.nan).groupby('ID', as_index=False).first().fillna('') ``` If multiple values are possible and you need the unique values in their original order, use a lambda function: ``` print (df) ID LU MA ME JE VE SA DI 0 201 B C B 1 201 C C C B C f = lambda x: ','.join(dict.fromkeys(x.dropna()).keys()) df = df.replace('',np.nan).groupby('ID', as_index=False).agg(f) print (df) ID LU MA ME JE VE SA DI 0 201 B,C C C B C ```
This could be treated as a pivot. You'd need to melt the df first then pivot: ``` (df.melt(id_vars='ID') .dropna() .pivot(index='ID',columns='variable',values='value') .fillna('') .rename_axis(None, axis=1) .reset_index() ) ```
16,789
2,166,818
How do I check if an object is an instance of a [Named tuple](http://docs.python.org/dev/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields)?
2010/01/30
[ "https://Stackoverflow.com/questions/2166818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/55246/" ]
Calling the *function* `collections.namedtuple` gives you a new type that's a subclass of `tuple` (and no other classes) with a member named `_fields` that's a tuple whose items are all strings. So you could check for each and every one of these things: ``` def isnamedtupleinstance(x): t = type(x) b = t.__bases__ if len(b) != 1 or b[0] != tuple: return False f = getattr(t, '_fields', None) if not isinstance(f, tuple): return False return all(type(n)==str for n in f) ``` it IS possible to get a false positive from this, but only if somebody's going out of their way to make a type that looks a **lot** like a named tuple but isn't one;-).
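A quick sanity check of the heuristic above (the function is reproduced verbatim so the snippet is self-contained):

```python
from collections import namedtuple

def isnamedtupleinstance(x):
    t = type(x)
    b = t.__bases__
    if len(b) != 1 or b[0] != tuple:
        return False
    f = getattr(t, '_fields', None)
    if not isinstance(f, tuple):
        return False
    return all(type(n) == str for n in f)

Point = namedtuple('Point', ['x', 'y'])

assert isnamedtupleinstance(Point(1, 2))       # a real namedtuple instance
assert not isnamedtupleinstance((1, 2))        # a plain tuple: base is object, not tuple
assert not isnamedtupleinstance([1, 2])        # not a tuple at all
assert not isnamedtupleinstance('xy')          # strings fail the base-class check too
```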
IMO this might be the best solution for *Python 3.6* and later. You can set a custom `__module__` when you instantiate your namedtuple, and check for it later ```py from collections import namedtuple # module parameter added in python 3.6 namespace = namedtuple("namespace", "foo bar", module=__name__ + ".namespace") ``` then check for `__module__` `if getattr(x, "__module__", None) == "xxxx.namespace":`
16,790
48,835,934
Installing Google Cloud SDK I get the response below: Note - I checked, and `C:\Users\jonat\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\bundledpython` does indeed lead to a python `2.7` that runs fine. ``` Output folder: C:\Users\jonat\AppData\Local\Google\Cloud SDK Downloading Google Cloud SDK core. Extracting Google Cloud SDK core. Create Google Cloud SDK bat file: C:\Users\jonat\AppData\Local\Google\Cloud SDK\cloud_env.bat Installing components. Welcome to the Google Cloud SDK! ERROR: gcloud failed to load: 'module' object has no attribute 'openssl_md_meth_names' gcloud_main = _import_gcloud_main() import googlecloudsdk.gcloud_main from googlecloudsdk.calliope import base from googlecloudsdk.calliope import arg_parsers from googlecloudsdk.core import log from googlecloudsdk.core import properties from googlecloudsdk.core import config from googlecloudsdk.core.util import files as file_utils import hashlib _hashlib.openssl_md_meth_names) This usually indicates corruption in your gcloud installation or problems with your Python interpreter. Please verify that the following is the path to a working Python 2.7 executable: C:\Users\jonat\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\bundledpython\python.exe If it is not, please set the CLOUDSDK_PYTHON environment variable to point to a working Python 2.7 executable. 
If you are still experiencing problems, please reinstall the Cloud SDK using the instructions here: https://cloud.google.com/sdk/ Traceback (most recent call last): File "C:\Users\jonat\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\bootstrapping\install.py", line 8, in import bootstrapping File "C:\Users\jonat\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\bootstrapping\bootstrapping.py", line 15, in from googlecloudsdk.core import config File "C:\Users\jonat\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\config.py", line 27, in from googlecloudsdk.core.util import files as file_utils File "C:\Users\jonat\AppData\Local\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\core\util\files.py", line 22, in import hashlib File "C:\Users\jonat\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\bundledpython\lib\hashlib.py", line 138, in _hashlib.openssl_md_meth_names) AttributeError: 'module' object has no attribute 'openssl_md_meth_names' Failed to install. ```
2018/02/16
[ "https://Stackoverflow.com/questions/48835934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9371800/" ]
Here is a piece of code that may help: ``` mat = cbind(1:3, 4:6, 7) layout(mat, width = c(1,1,.25)) pal = colorRampPalette(c("white", "black"))(100) # empty plots for (i in 1:6) image(matrix(runif(100), 10), col = pal) # color scale par(las=1, mar = c(4, 1, 4, 5)) image(t(1:100), col = pal, axes = F, ann = F) axis(4) par(las=0) mtext(4, text = "new_ylab", line = 3) ``` You may have to tweak the margins!
Okay, figured it out with the help of [this post](https://stackoverflow.com/questions/9314658/colorbar-from-custom-colorramppalette) that uses a customised function to plot scales. I just had to remove the `dev.new()` call to avoid plotting the colour scale in a new device. The function is flexible but you still need to play around with the `par(mar = c())` parameter to adjust width and height of the vertical bar.
16,800
4,250,939
I started programming in January of this year and have covered a lot of ground. I have learnt JavaScript, Ruby on Rails, HTML, CSS, jQuery, and every now and then I like to try out some Clojure, but I will really get into that in the middle of next year. I really didn't like Rails and prefer using NetBeans with pure JavaScript, HTML and CSS; I just feel like I have more control. I really like JavaScript, but when it comes to server-side programming I have a problem... for my level of experience I just feel like server-side JS will not be a good fit yet, as it is still not as mature/user-friendly as PHP or Ruby on Rails. What server-side language should I invest in? Should I learn PHP? There is so much info and source code available for PHP. I know that there is Node.js and emerging frameworks like Geddy.js, but I need something more user-friendly... or am I just being a wuss? I would really like some help on this. Thanks in advance. PS. Update: Thanks all for the advice, I have settled on Python and the web2py framework. I decided between Django and web2py by doing a couple of simple tutorials and preferred web2py by a huge margin.
2010/11/22
[ "https://Stackoverflow.com/questions/4250939", "https://Stackoverflow.com", "https://Stackoverflow.com/users/390129/" ]
Not sure why you don't like rails, but you might want to try the newly released "Rails for Zombies" tutorials by Envy Labs: <http://railsforzombies.org/>. Or if you like books instead of online stuff, check out [Agile Web Development with Rails](http://pragprog.com/titles/rails4/agile-web-development-with-rails) As ben states, sinatra is nice for smallish stuff. A few things to choose from, broken down by language and order of their complexity/learning curve: **Ruby** * [Sinatra](http://www.sinatrarb.com/) * [Rails](http://rubyonrails.org/) **Python** * [web.py](http://webpy.org/) * [Django](http://www.djangoproject.com/) **PHP** * [Code Igniter](http://codeigniter.com/) * [CakePHP](http://cakedc.com/) * [Symfony](http://www.symfony-project.org/) EDIT: I removed my comment about php and added in some PHP specifics. Personally I started with Symfony but it is rather complex. Code Igniter would be a good starting place if you want to learn a PHP framework.
If you like ruby as programming language, but find rails to be just too much to take in it once, I'd recommend trying [Sinatra](http://www.sinatrarb.com/). It's also a ruby-based web framework, but it's a lot simpler than rails, and offers you a lot more control over how you want to set things up. For smaller projects, it's often a much better fit than Rails.
16,801
22,585,176
This is my first ever post because I can't seem to find a solution to my problem. I have a text file that contains a simple line-by-line list of names, each marked as male or female by an M or F next to it. A simple example of this is: ``` John M John M Jim M Jim M Jim M Jim M Sally F Sally F ``` You'll notice that names repeat, because I want the Python code to count which names occur the most and provide lists of the most common names overall, male names and female names. I am very new to Python and my understanding of many elements is limited at best.
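One stdlib-only way to get started is `collections.Counter`, assuming each line of the file looks exactly like the sample (`Name M` or `Name F`). The inline list here stands in for reading the file:

```python
from collections import Counter

# Sample data standing in for the text file; in practice you would iterate
# over open('names.txt') instead (the file name is hypothetical).
lines = [
    "John M", "John M",
    "Jim M", "Jim M", "Jim M", "Jim M",
    "Sally F", "Sally F",
]

overall = Counter()
by_sex = {"M": Counter(), "F": Counter()}
for line in lines:
    name, sex = line.split()   # relies on the "Name M/F" two-column format
    overall[name] += 1
    by_sex[sex][name] += 1

# most_common(n) returns the n most frequent (name, count) pairs
assert overall.most_common(1) == [("Jim", 4)]
assert by_sex["M"].most_common(1) == [("Jim", 4)]
assert by_sex["F"].most_common(1) == [("Sally", 2)]
```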
2014/03/22
[ "https://Stackoverflow.com/questions/22585176", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3451030/" ]
You can use the parameter `android:showAsAction="always"` for each menu item in menu.xml to show your items in the action bar: ``` <menu xmlns:android="http://schemas.android.com/apk/res/android"> <item android:id="@+id/menu_add_size" android:title="@string/menu_add_item" android:orderInCategory="10" android:showAsAction="always" android:icon="@android:drawable/ic_menu_add" /> </menu> ``` And you have to remember: how the Android action bar menu is displayed depends on the device's hardware control button type. **EDIT** Just read [this](http://developer.android.com/design/patterns/compatibility.html) article to understand the differences, enjoy! P.S. Sorry for my bad English :-(
In your **menu/main.xml:** ``` <menu xmlns:android="http://schemas.android.com/apk/res/android" > <item android:id="@+id/settings" android:title="@string/settings" android:orderInCategory="0" android:showAsAction="always" /> <item android:id="@+id/action_compose" android:title="hello" android:orderInCategory="1" android:showAsAction="always" /> </menu> ``` and in your **MainActivity** class you have to override the *onCreateOptionsMenu* function: ``` @Override public boolean onCreateOptionsMenu(Menu menu) { MenuInflater inflater = getMenuInflater(); inflater.inflate(R.menu.main, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { // action when settings was selected case R.id.settings: //****do something when the settings item is clicked break; // action when action_compose was selected case R.id.action_compose: //****do something when the action_compose item is clicked break; default: break; } return true; } ```
16,803
33,679,414
Suppose I have two types of axis-aligned rectangles: a) Defined by left-top and right-bottom points: (x1, y1), (x2, y2) b) Defined by (x1, y1) and (width, height) The aim is to write Pythonic code that allows for conversion between these types. E.g. if there is a function that performs calculations in only one of the representations, it calls the necessary conversion on the given rectangle object first. I can think of three ways of accomplishing that: 1. Create two classes inheriting from a base class. Create two "abstract" methods in the base for conversion to and back between a) and b) with a set of *isinstance* branches inside. Overload the methods in the inheriting classes, making empty methods for useless T -> T conversions. 2. Create a class with one static method, accepting two arguments, unrelated to a) and b) 3. Same as 2., but get rid of the class, just create a function in the global source file scope Which one do you think is the best way, or maybe there is something better?
2015/11/12
[ "https://Stackoverflow.com/questions/33679414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2864143/" ]
Create one class, and give it two constructors. One the default `__init__` method, the other a classmethod accepting the other form to specify a rectangle: ``` class Rectangle(Shape): def __init__(self, x1, y1, x2, y2): # .... @classmethod def fromSize(cls, x1, y1, width, height): return cls(x1, y1, x1 + width, y1 + height) ``` The `Rectangle.fromSize()` class method converts the parameters to the 4-coordinate form and returns a new instance. You store just the one form to specify a rectangle, the extra classmethod is basically just a convenience method. I picked the 4-coordinate form as the 'canonical' definition here, but if storing the width and height makes more sense for your model, feel free to swap the default and classmethod factories.
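A runnable sketch of the idea above, with the method bodies filled in (the `width`/`height` properties are my own additions for the demo, not part of the original answer):

```python
class Rectangle:
    def __init__(self, x1, y1, x2, y2):
        # the 4-coordinate form is the canonical internal representation
        self.x1, self.y1, self.x2, self.y2 = x1, y1, x2, y2

    @classmethod
    def fromSize(cls, x1, y1, width, height):
        # convert the (origin, size) form to the 4-coordinate form
        return cls(x1, y1, x1 + width, y1 + height)

    @property
    def width(self):
        return self.x2 - self.x1

    @property
    def height(self):
        return self.y2 - self.y1

a = Rectangle(1, 3, 3, 5)
b = Rectangle.fromSize(1, 3, 2, 2)
assert (b.x2, b.y2) == (3, 5)          # same rectangle either way
assert (a.width, a.height) == (2, 2)   # size recovered from coordinates
```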
I recommend creating one class and handling the inputs during `__init__` to determine what is present or not, then filling in the missing parameters from a calculation. Here is a working example for your situation (comparing against `None` so that zero coordinates or sizes are not mistaken for missing values): ``` class RectangleClass: def __init__(self, x1, y1, x2=None, y2=None, width=None, height=None): if x2 is None or y2 is None: self.x2, self.y2 = self.calculate_points(x1, y1, width, height) if width is None or height is None: self.height, self.width = self.calculate_dimensions(x1, y1, x2, y2) def calculate_points(self, x1, y1, width, height): x2 = x1 + width y2 = y1 + height return x2, y2 def calculate_dimensions(self, x1, y1, x2, y2): width = abs(x2 - x1) height = abs(y2 - y1) return height, width rectangle = RectangleClass(0, 0, x2=-1, y2=5) print "Rectangle 1: height: %s, width: %s" % (rectangle.height, rectangle.width) rectangle = RectangleClass(1, 3, height=2, width=2) print "Rectangle 2: x2: %s, y2: %s" % (rectangle.x2, rectangle.y2) ```
16,809
18,269,672
I mean the situation when Lua is run not embedded in another app but as a standalone scripting language. I need something like `PHP_BINARY` or `sys.executable` in Python. Is that possible with Lua?
2013/08/16
[ "https://Stackoverflow.com/questions/18269672", "https://Stackoverflow.com", "https://Stackoverflow.com/users/393087/" ]
Note that the solution given by lhf is not the most general. If the interpreter has been called with additional command line parameters (if this may be your case) you will have to search `arg`. In general the interpreter name is stored at the most negative integer index defined for `arg`. See this test script: ``` local i_min = 0 while arg[ i_min ] do i_min = i_min - 1 end i_min = i_min + 1 -- so that i_min is the lowest int index for which arg is not nil for i = i_min, #arg do print( string.format( "arg[%d] = %s", i, arg[ i ] ) ) end ```
Try `arg[-1]`. But note that `arg` is not defined when Lua is executed interactively.
16,810
29,321,077
I am trying to write a function to mix strings in python but I am getting stuck at the end. So for this example, I have 2 words, mix and pod. I would like to create a function that returns: pox mid My code only returns pox mix Code: ``` def mix_up(a, b): if len(a and b)>1: b=str.replace(b,b[2],a[2:3]) a=str.replace(a,a[2],b[2]) print b,"",a return mix_up('mix','pod') ``` I am seeking to do this for multiple words. So another example: if I used dog,dinner The output should return dig donner Thanks!
2015/03/28
[ "https://Stackoverflow.com/questions/29321077", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3120266/" ]
Little play on [string slicing](https://docs.python.org/2/tutorial/introduction.html#strings) ``` def mix_up(first, second): new_first = second[:2] + first[2:] new_second = first[:2] + second[2:] return " ".join((new_first, new_second)) assert mix_up('mix','pod') == 'pox mid' assert mix_up('dog','dinner') == 'dig donner' ```
If you simply wanted to put the 2nd word before the first word all the time: ``` def mix_up(a,b): return " ".join([b,a]) # Should return pod mix ``` Given that you aimed for `pox mix`, you probably wanted to: 1) Replace the last character of word `b` with x 2) Place b before a. In that case, the function would be: ``` def mix_up(a,b): b = b[:-1] + 'x' # 'x' could be any other character; slicing avoids replace(), which would substitute every occurrence of that letter return " ".join([b,a]) # Should return pox mix ``` You can simply swap b with a in order to change the position of the words. If you didn't want the space in between: ``` return "".join([b,a]) ``` **UPDATE** To "swap" the third letter between b and a, I simply corrected your function like the following: ``` def mix_up(a, b): if len(a) > 2 and len(b) > 2: temp = b[2] # store b's original third letter before it is overwritten b = str.replace(b, b[2], a[2:3]) a = str.replace(a, a[2], temp) # use the saved letter, not the new b print b, "", a return mix_up('mix','pod') ``` Your only problem was that once you had replaced `b`, you were using the **new** `b` to choose the letter to substitute into `a`.
16,813
5,385,238
I've got a timestamp in a log file with the format like: ``` 2010-01-01 18:48:14.631829 ``` I've tried the usual suspects like strptime, and no matter what i do, I'm getting that it doesn't match the format I specify. `("%Y-%m-%d %H:%M:%S" OR "%Y-%m-%d %H:%M:%S.%f")` I've even tried splitting the value by "." so I can just compare vs the value not having the microseconds on it, but it STILL tells me it doesn't match: "%Y-%m-%d %H:%M:%S" Ug, all I need to do is a simple time delta, haha. Why is python's time stuff so scattered? time, datetime, other various imports
2011/03/22
[ "https://Stackoverflow.com/questions/5385238", "https://Stackoverflow.com", "https://Stackoverflow.com/users/669983/" ]
You can use `strptime` like so (Python 2.6+ only): ``` >>> import datetime >>> s = "2010-01-01 18:48:14.631829" >>> datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f") datetime.datetime(2010, 1, 1, 18, 48, 14, 631829) ``` Docs: <http://docs.python.org/library/datetime.html#strftime-and-strptime-behavior> > `%f` Microsecond as a decimal number [0,999999], zero-padded on the left If you're on 2.5 or earlier *and* you don't care about the micros, you can just chop them off: ``` >>> import re >>> datetime.datetime.strptime(re.sub('\..*', '', s), "%Y-%m-%d %H:%M:%S") datetime.datetime(2010, 1, 1, 18, 48, 14) ```
Of course, splitting the string *does* work: ``` >>> print s 2010-01-01 18:48:14.631829 >>> time.strptime(s.split('.')[0], "%Y-%m-%d %H:%M:%S") time.struct_time(tm_year=2010, tm_mon=1, tm_mday=1, tm_hour=18, tm_min=48, tm_sec=14, tm_wday=4, tm_yday=1, tm_isdst=-1) >>> ```
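Tying it back to the original goal ("all I need to do is a simple time delta"): subtracting two parsed datetimes yields a `timedelta` directly, microseconds included. The second timestamp here is made up for the demo:

```python
import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
t1 = datetime.datetime.strptime("2010-01-01 18:48:14.631829", fmt)
t2 = datetime.datetime.strptime("2010-01-01 18:50:00.000000", fmt)

delta = t2 - t1                                 # a datetime.timedelta
assert delta.total_seconds() == 105.368171      # 1 min 45.368171 s
assert str(delta) == "0:01:45.368171"
```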
16,815
27,929,400
I am trying to make a program in python that will accept an argument of text input, then randomly change each letter to be a different color This is what I have: ``` color = ['red' , 'blue', 'green' , 'purple' , 'yellow' , 'pink' , '#f60' , 'black' , 'white']; ``` I want to be able to have a program that can let me type out a paragraph, then it uses the colors in the list and randomly assigns them to each letter. For an output, I would like to to be something like this. ``` [color=random]H[/color][color=random]i[/color] [color=random]t[/color] [color=random]h[/color] [color=random]e[/color][color=random]r[/color][color=random]e[/color] ``` The colors should be random from the list I made. Is this possible to make?
2015/01/13
[ "https://Stackoverflow.com/questions/27929400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4450564/" ]
This works for me (note I renamed the `str` variable to `text` so the built-in `str` is not shadowed, and dropped an unused `l = len(color)` line): ``` from random import choice color = ['red' , 'blue', 'green' , 'purple' , 'yellow' , 'pink' , '#f60' , 'black' , 'white'] text = "Hit Here" html = '' for x in text: html += '[color=' + choice(color) + ']' + x + '[/color]' if len(x.strip()) > 0 else x print(html) ``` Sample output: ``` [color=yellow]H[/color][color=#f60]i[/color][color=yellow]t[/color] [color=black]H[/color][color=black]e[/color][color=white]r[/color][color=purple]e[/color] [color=purple]H[/color][color=white]i[/color][color=purple]t[/color] [color=black]H[/color][color=green]e[/color][color=red]r[/color][color=blue]e[/color] [color=yellow]H[/color][color=green]i[/color][color=#f60]t[/color] [color=blue]H[/color][color=white]e[/color][color=green]r[/color][color=pink]e[/color] ```
Try like this (the colour code must come before the character it colours): ``` import random #console colors W = '\033[0m' # white (normal) R = '\033[31m' # red G = '\033[32m' # green O = '\033[33m' # orange B = '\033[34m' # blue P = '\033[35m' # purple my_color = [W, R, G, O, B, P] a = raw_input("Enter your text to be colourful: ") new_text = "" for x in a: new_text += random.choice(my_color) + x print new_text + W ``` ![enter image description here](https://i.stack.imgur.com/WXHuj.png) your desired output: ``` >>> import random >>> color = ['red' , 'blue', 'green' , 'purple' , 'yellow' , 'pink' , '#f60' , 'black' , 'white'] >>> new_text = "" >>> text = "hello how are you" >>> for x in text: ... new_text += "[color={}]{}[/color]".format(random.choice(color), x) ... >>> new_text '[color=#f60]h[/color][color=white]e[/color][color=blue]l[/color][color=#f60]l[/color][color=black]o[/color][color=white] [/color][color=purple]h[/color][color=white]o[/color][color=green]w[/color][color=pink] [/color][color=white]a[/color][color=white]r[/color][color=yellow]e[/color][color=red] [/color][color=#f60]y[/color][color=green]o[/color][color=red]u[/color]' ```
16,817
12,135,555
I am trying to build my first Django project from scratch and am having difficulty setting the background image. I am new to programming in general so forgive me if this is a stupid question. I have read the documentation [here](https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles) on static file implementations and various stack overflow posts ([here](https://stackoverflow.com/questions/2451352/cant-figure-out-serving-static-images-in-django-dev-environment) [here](https://stackoverflow.com/questions/7057982/django-static-files-while-debug-mode-is-on) and [here](https://stackoverflow.com/questions/2148738/cannot-get-images-to-display-in-simple-django-site)) on setting the background image, but I still can't get it to work. I have: * installed django.contrib.staticfiles as an installed app. * added this to the settings file: `TEMPLATE_CONTEXT_PROCESSORS = ( "django.contrib.auth.context_processors.auth", "django.core.context_processors.debug", "django.core.context_processors.i18n", "django.core.context_processors.media", "django.core.context_processors.static", "django.core.context_processors.tz", "django.contrib.messages.context_processors.messages", )` * set the STATIC_ROOT and STATIC_URL settings to STATIC_ROOT = '/Users/user_name/development/projects/ecollar_site/static/' and '/static/' respectively * put this code at the end of my URLs file: ``` if settings.DEBUG: urlpatterns += patterns('', (r'^/static/(?P<path>.*)$', 'django.views.static.serve', {'/ecollar_site/': settings.STATIC_ROOT, 'show_indexes' : True}), ) ``` * ran `python manage.py collectstatic` * and then put this line in the CSS file: `body{ font:16px/26px Helvetica, Helvetica Neue, Arial; background-image: url("{{ STATIC_URL }}img/IMG_0002.jpg"); }` I know the static files bit is working because the CSS is loaded. And I can change the background color by using background-color: blue. But the static image is simply not being put as the background image. 
Have I made a rookie mistake somewhere?
2012/08/27
[ "https://Stackoverflow.com/questions/12135555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1611337/" ]
Your CSS file is not rendered by the Django template engine, so `{{ STATIC_URL }}` is not being replaced. You'll have to use `/static/img/IMG_0002.jpg` in the CSS file, or move that bit of CSS into your HTML file's style tag.
Try this in settings.py: ``` MEDIA_URL = '/static_media/' ``` urls.py: ``` if settings.DEBUG: urlpatterns += patterns('django.views.static', (r'^static_media/(?P<path>.*)$', 'serve', { 'document_root': '/path/to/static_media', 'show_indexes': True }),) ``` Your CSS and jQuery in the template: ``` <link rel="stylesheet" href="{{ MEDIA_URL }}base_min.css" type="text/css" media="screen"> ``` If you are using the production version, try this: ``` MEDIA_URL = 'http://media.example.org/' Development: /static_media/base_min.css Production: http://media.example.org/base_min.css ``` Hope this will help you. Don't forget to add `'django.contrib.staticfiles',` to your INSTALLED_APPS.
16,820
39,675,898
I am reading [The Hitchhiker’s Guide to Python](http://docs.python-guide.org/en/latest/writing/structure/#mutable-and-immutable-types) and there is a short code snippet ``` foo = 'foo' bar = 'bar' foobar = foo + bar # This is good foo += 'ooo' # This is bad, instead you should do: foo = ''.join([foo, 'ooo']) ``` The author pointed out that `''.join()` is not always faster than `+`, so he is not against using `+` for string concatenation. But why is `foo += 'ooo'` bad practice whereas `foobar=foo+bar` is considered good? * is `foo += bar` good? * is `foo = foo + 'ooo'` good? Before this code snippet, the author wrote: > > One final thing to mention about strings is that using join() is not always best. In the instances where you are creating a new string from a pre-determined number of strings, using the addition operator is actually faster, but in cases like above or in cases where you are adding to an existing string, using join() should be your preferred method. > > >
2016/09/24
[ "https://Stackoverflow.com/questions/39675898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/534298/" ]
Is it bad practice? ------------------- It's reasonable to assume that it isn't bad practice for this example because: * The author doesn't give any reason. Maybe it's just disliked by him/her. * Python documentation doesn't mention it's bad practice (from what I've seen). * `foo += 'ooo'` is just as readable (according to me) and is approximately 100 times faster than `foo = ''.join([foo, 'ooo'])`. When should one be used over the other? --------------------------------------- Concatenation of strings has the disadvantage of needing to create a new string and allocate new memory *for every concatenation*! This is time consuming, but isn't that big of a deal with few and small strings. When you know the number of strings to concatenate and don't need more than maybe 2-4 concatenations, I'd go for it. --- When joining strings Python only has to allocate new memory for the final string, which is much more efficient, but could take longer to compute. Also, because strings are immutable, it's often more practical to use a list of strings to dynamically mutate, and only convert it to a string when needed. It's often convenient to create strings with str.join() since it takes an iterable. For example: ``` letters = ", ".join("abcdefghij") ``` To conclude ----------- In most cases it makes more sense to use `str.join()`, but there are times when concatenation is just as viable. Using any form of string concatenation for huge or many strings would be bad practice, just as using `str.join()` would be bad practice for short and few strings, in my own opinion. I believe that the author was just trying to create a rule of thumb to more easily identify when to use what without going into too much detail or making it complicated.
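The trade-off described above can be checked directly with `timeit`; exact numbers vary by interpreter and input size, so treat this as a sketch rather than a benchmark:

```python
import timeit

# Build a string from many small pieces two ways.
def concat(parts):
    out = ""
    for p in parts:
        out += p  # may reallocate at each step (CPython often optimizes this case)
    return out

def joined(parts):
    return "".join(parts)  # one allocation for the final string

parts = ["x"] * 1000
assert concat(parts) == joined(parts)

t_concat = timeit.timeit(lambda: concat(parts), number=200)
t_join = timeit.timeit(lambda: joined(parts), number=200)
print(f"+= : {t_concat:.4f}s  join: {t_join:.4f}s")
```

Whichever wins on your machine, the gap grows with the number of pieces, which is exactly the author's point about pre-determined versus open-ended numbers of strings.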
If the number of strings is small and the strings are known in advance, I would go with: ``` foo = f"{foo}ooo" ``` using [f-strings](https://docs.python.org/fr/3/tutorial/inputoutput.html#formatted-string-literals). However, this is valid only since Python 3.6.
16,821
73,453,875
A few days ago I uninstalled and then reinstalled Python due to some error related to pip. Since then, whenever I start my PC it shows the Python "Modify Setup" window 2 or 3 times ([you can see the popup here](https://i.stack.imgur.com/nrsLv.png)). Though I can close these windows, whenever I open VS Code it can be upwards of 10 pop-ups. What's bizarre about this is that I already have my VS Code set up properly. How do I prevent these annoying pop-ups?
2022/08/23
[ "https://Stackoverflow.com/questions/73453875", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19825726/" ]
If there's code I'm likely to use repeatedly (like checking whether a radio button is selected or not), I like to put it in a method so it's easily reusable. This method takes the parent DIV of the radio button and counts the number of SVG circles inside. If there's more than one, the radio button is selected. This way you can use this method for not only the One Way radio button but any radio button on the page. ``` public static boolean isSelected(By locator) { return new WebDriverWait(driver, Duration.ofSeconds(10)).until(ExpectedConditions.visibilityOfElementLocated(locator)).findElements(By.cssSelector("circle")).size() > 1; } ``` Simple examples ``` By oneWayLocator = By.cssSelector("div[data-testid='one-way-radio-button']"); By roundTripLocator = By.cssSelector("div[data-testid='round-trip-radio-button']"); boolean oneWaySelected = isSelected(oneWayLocator); boolean roundTripSelected = isSelected(roundTripLocator); ``` It sounds like you want to check to see if the One Way radio button is selected and if so, select Round Trip instead. To do that, ``` By oneWayLocator = By.cssSelector("div[data-testid='one-way-radio-button']"); By roundTripLocator = By.cssSelector("div[data-testid='round-trip-radio-button']"); if (isSelected(oneWayLocator)) { driver.findElement(roundTripLocator).click(); } ``` I declare driver as a property of the page object class so that I don't have to pass it around in all of my methods, e.g. ``` public class HomePage { WebDriver driver; ... public HomePage(WebDriver driver) { this.driver = driver; } ``` and then create an instance of the class while passing in the driver from my main/test method. ``` HomePage homePage = new HomePage(driver); ```
Most importantly, a `circle` tag represents a single circle, not the `svg` tag. An `svg` tag that looks like a selected radio button contains two `circle` tags. You can count the number of `circle` tags in the `svg` and, based on that, consider the svg un/selected. **Code:** ``` List<WebElement> svgTags = driver.findElements(By.tagName("svg")); for (WebElement svgTag: svgTags) { int circlesCount = svgTag.findElements(By.tagName("circle")).size(); if (circlesCount == 1) { System.out.println("looks like unselected radiobutton"); } else if (circlesCount > 1) { System.out.println("looks like selected radiobutton"); } else { System.out.println("no circle in svg"); } } ``` **Output:** ``` looks like selected radiobutton looks like unselected radiobutton ```
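The same circle-counting heuristic can be sanity-checked outside the browser with standard-library XML parsing; the SVG snippets below are made up for illustration:

```python
import xml.etree.ElementTree as ET

def looks_selected(svg_markup: str) -> bool:
    """Treat an SVG with more than one <circle> as a selected radio button."""
    root = ET.fromstring(svg_markup)
    # Count circles regardless of any XML namespace prefix on the tag.
    circles = [el for el in root.iter() if el.tag.split('}')[-1] == 'circle']
    return len(circles) > 1

selected = '<svg><circle r="8"/><circle r="4"/></svg>'
unselected = '<svg><circle r="8"/></svg>'
print(looks_selected(selected))    # True
print(looks_selected(unselected))  # False
```

In a real test you would feed this the `outerHTML` of the svg element, but the decision rule is the same as in the Java answer above.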
16,822
25,504,738
I am not talking about the "Fixture Parametrizing" as defined by pytest, I am talking about real parameters that you pass to a function (the fixture function in this case) to make code more modular. To demonstrate, this is my fixture ``` @yield_fixture def a_fixture(a_dependency): do_setup_work() yield do_teardown_work() a_dependency.teardown() ``` As you see, my fixture depends on `a_dependency`, whose teardown() needs to be called as well. I know in the naive use-case, I could do this: ``` @yield_fixture def a_dependency(): yield teardown() @yield_fixture def a_fixture(a_dependency): do_setup_work() yield do_teardown_work() ``` However, while the `a_fixture` code can be put in a central place and re-used by all tests, the `a_dependency` code is test-specific and each test possibly needs to create a new `a_dependency` object. I want to avoid copy-pasting both fixture and dependency to all my tests. If this were regular Python code, I could just pass the `a_dependency` as a function argument. How can I pass this object to my shared fixture?
2014/08/26
[ "https://Stackoverflow.com/questions/25504738", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2440380/" ]
In your situation you can use `display:table` in container(`#option_one_div`) in your example and `display:table-cell` in children elements(`#ldiv`, `#rdiv`) like this: ``` <div style="padding:25px; width:400px;"> <div style="background-color:#bf5b5b;"> <span>Yes</span> <span>No</span></div> <div id="option_one_div" style="display: table;"> <div id="ldiv" style="background-color:#74d4dd; width:150px;display:table-cell;"> <label for="rbutton_radio_1_0" style="margin-left:30px; margin-right:30px;"> <input for="rbutton_radio_1_0" type="radio" name="radio" value="0"/></label> <label for="rbutton_radio_1_1" style="margin-left:30px; margin-right:30px;"> <input for="rbutton_radio_1_1" type="radio" name="radio" value="1"/></label> </div> <div id="rdiv" style="display:table-cell; background-color:#74d4dd; margin-left:151px; padding-left: 20px; padding-right: 20px"> <span>Label of first group of Radio Buttons radio buttons.</span> </div> </div> </div> ``` [**fiddle**](http://jsfiddle.net/41209azo/6/) As you can see you don't need `floats`.
Use a width together with float on the div ``` <div id="rdiv" style="float:right; background-color: #74d4dd; /* margin-left: 151px; */ padding-left: 20px; width: 210px;padding-right: 20px"> <span>Label of first group of Radio Buttons radio buttons.</span> </div> ``` [Please check](http://jsfiddle.net/akash4pj/41209azo/1/)
16,823
33,896,511
We are having problems running **"npm install"** on our project. A certain file cannot be found: ``` fatal error C1083: Cannot open include file: 'windows.h' ``` It appears to be coming from the **node-gyp** module: > c:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\src\win_delay_load_hook.c(13): fatal error C1083: Cannot open include file: 'windows.h': No such file or directory [D:\ngs-frontend-next\node_modules\browser-sync\node_modules\socket.io\node_modules\engine.io\node_modules\ws\node_modules\bufferutil\build\binding.sln] This node-gyp seems to be a never-ending source of pain for us. At first it complained that it needed Python, so we installed that. Then it complained that it needed VCBuild, which we installed (using the .NET 2.0 SDK); now we have this error. It's almost as if the errors are getting more and more obscure, and it feels like we are going down some wrong path. The strange thing is that other people on our team have zero problems running the npm install. The full error looks like this: > c:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\src\win_delay_load_hook.c(13): fatal error C1083: Cannot open include file: 'windows.h': No such file or directory [D:\ngs-frontend-next\node_modules\browser-sync\node_modules\socket.io\node_modules\engine.io\node_modules\ws\node_modules\bufferutil\build\binding.sln] gyp ERR! build error gyp ERR! stack Error: `C:\Windows\Microsoft.NET\Framework\v4.0.30319\msbuild.exe` failed with exit code: 1 gyp ERR! stack at ChildProcess.onExit (C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\lib\build.js:270:23) gyp ERR! stack at emitTwo (events.js:87:13) gyp ERR! stack at ChildProcess.emit (events.js:172:7) gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12) gyp ERR! System Windows_NT 6.1.7601 gyp ERR!
command "C:\Program Files\nodejs\node.exe" "C:\Program Files\nodejs\node_modules\npm\node_modules\node-gyp\bin\node-gyp.js" "rebuild" gyp ERR! cwd D:\ngs-frontend-next\node_modules\browser-sync\node_modules\socket.io\node_modules\engine.io\node_modules\ws\node_modules\bufferutil gyp ERR! node -v v4.2.2 gyp ERR! node-gyp -v v3.0.3 gyp ERR! not ok npm WARN optional dep failed, continuing utf-8-validate@1.2.1 > gifsicle@3.0.3 postinstall D:\ngs-frontend-next\node_modules\gulp-imagemin\node_modules\imagemin\node_modules\imagemin-gifsicle\node_modules\gifsicle node lib/install.js Our **package.json** looks like this: ``` { "name": "Fast-nunjucks", "version": "0.0.1", "description": "A simple boilerplate using nunjucks as a template engine", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "repository": { "type": "git", "url": "https://github.com/willianjusten/Fast-nunjucks.git" }, "keywords": [ "nunjucks", "node", "gulp", "stylus" ], "author": "Willian Justen de Vasconcellos", "license": "ISC", "bugs": { "url": "https://github.com/willianjusten/Fast-nunjucks/issues" }, "homepage": "https://github.com/willianjusten/Fast-nunjucks", "devDependencies": { "autoprefixer-stylus": "^0.7.1", "browser-sync": "^2.8.2", "gulp": "^3.9.0", "gulp-cache": "^0.3.0", "gulp-concat": "^2.6.0", "gulp-if": "^1.2.5", "gulp-imagemin": "^2.3.0", "gulp-minify-html": "^1.0.4", "gulp-nunjucks-html": "^1.2.2", "gulp-order": "^1.1.1", "gulp-plumber": "^1.0.1", "gulp-stylus": "^2.0.6", "gulp-uglify": "^1.2.0", "gulp-util": "^3.0.6", "jeet": "^6.1.2", "kouto-swiss": "^0.11.13", "minimist": "^1.1.3", "rupture": "^0.6.1" }, "dependencies": { "gulp-install": "^0.6.0" } } ```
2015/11/24
[ "https://Stackoverflow.com/questions/33896511", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1022330/" ]
The last time I saw a similar error it was because I was using the wrong version of `npm` and/or `node` for one of my dependencies. Try upgrading these and try again. Before trying again, remove your `node_modules` directory. You may need to investigate what versions of `npm` and `node` your dependencies need. You could try the latest versions of all your dependencies, node and npm. Check what versions your colleagues are using. What OS are you using? That can have an impact, as the version of CLANG may be different.
Install python2 and try running `npm install` again. This approach worked for me.
16,830
16,007,094
I'm having problems with this method in Python called findall. I'm accessing a web page's HTML and trying to return the name of a product, in this case `'bread'`, and print it out to the console.
2013/04/15
[ "https://Stackoverflow.com/questions/16007094", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2051382/" ]
Don't use regex for HTML parsing. There are a few solutions. I suggest BeautifulSoup (<http://www.crummy.com/software/BeautifulSoup/>) Having said that, however, in this particular case a regex will suffice. Just relax it a notch. There might be more or fewer spaces, or maybe those are tabs. So instead of literal spaces, use the whitespace class **\s**: ``` product = re.findall(r'Item:\s*is\s*in\s*lane\s*12\s*(\w*)', content) print product[0] ``` Since the '*', '+', and '?' qualifiers are all greedy (they match as much text as possible), you don't need to restrict it with **`[^<]*<br>`**
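A runnable version of the relaxed pattern, on sample content made up to resemble the page in the question:

```python
import re

# Hypothetical page fragment with irregular spacing, as described above.
content = "Checkout line<br> Item:  is in  lane 12  bread   <br>more text"

# \s* tolerates any mix of spaces and tabs between the words.
product = re.findall(r'Item:\s*is\s*in\s*lane\s*12\s*(\w*)', content)
print(product[0])  # bread
```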
In case you still want to use regexps, here's a working one for your case: ``` product = re.findall(r'<br>\s*Item:\s+is\s+in\s+lane 12\s+(\w*)[^<]*<br>', content) ``` It takes into account DSM's space flexibility suggestion and non-letters after `(\w*)` that might appear before `<br>`.
16,840
21,790,203
I want to remove double open quotes and double close quotes from the text. By double opening quotes I mean **“**, not **"**. I am trying to do this with Python, but it is unable to read **“**.
2014/02/14
[ "https://Stackoverflow.com/questions/21790203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3288929/" ]
how about... ``` >>> s = "“hello“" >>> s.replace('“','') 'hello' >>> ```
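To drop both the opening **“** and closing **”** curly quotes in one pass, `str.translate` with a deletion table works; a sketch:

```python
# Map each unwanted quote character to None so translate() deletes it.
drop_quotes = str.maketrans('', '', '\u201c\u201d')  # “ and ”

s = '\u201chello\u201d world'
print(s.translate(drop_quotes))  # hello world
```

This avoids chaining multiple `replace()` calls and is easy to extend with other characters (e.g. straight quotes) if needed.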
Well, since you didn't want to say how you initially did it in Excel, here's how you remove those: * Option 1: Use Find/Replace: find `“` and replace with nothing, then find `”` and replace with nothing. * Option 2: In Excel, use `CHAR(147)` to get `“` and `CHAR(148)` to get `”`. Then use `SUBSTITUTE` to remove them: ``` =SUBSTITUTE(SUBSTITUTE(A1, CHAR(147), ""), CHAR(148), "") ```
16,841
60,229,299
If you're a Python coder, you may find yourself looking for a way to comment your code better when it has subcategory code. By subcategory code I mean you may have blocks of code, and then further blocks of code that relate to the previous block. Here is an example (pay attention to the comments): ``` # Drink some water to keep your body hydrated: initialize some parameters # Choose a glass: pick up the glass number one if the glass was dirty: wash the dishes # this would be difficult. # Pour the glass with water: while the glass is not filled up: pour the glass ``` All I'm saying is we are looking for simplicity, and when you are in the middle of code it's better to know exactly where you are. See the comments again, but this time as headers: ``` # h1 initialize some parameters # h2: pick up the glass number one if the glass was dirty: wash the dishes # h2: while the glass is not filled up: pour the glass ``` So when you see the `h2` comments, it's convenient to know that this part of the code is subcode of `h1`. I used to capitalize the first letter of `h1` comments, while the following related code blocks are not capitalized. Let's wipe out the code and just focus on the comments: ``` # Drink some water to keep your body hydrated: ... line of codes ... # choose a glass: ... line of codes ... # pour the glass with water: ... line of codes ... ``` There are some pros and cons to that: *pros:* * Easy to use. *cons:* * It doesn't come to mind right away. * If you have deeper subcategory code this doesn't work. So I thought to myself, "why don't I share this with others to see what they think and what they prefer or use".
2020/02/14
[ "https://Stackoverflow.com/questions/60229299", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7643771/" ]
I think what you will find is that if you run your application, it will log an error like `required a single bean but 2 were found`. What you can do, however, is remove the ambiguity using @Qualifier where you need it injected and naming your bean definitions, i.e. for your example: ``` @Configuration public class Configuration { @Bean(name="restTemplateA") public RestTemplate restTemplateA() { RestTemplate restTemplate = new RestTemplate(); //setting some restTemplate properties return restTemplate; } @Bean(name="restTemplateB") public RestTemplate restTemplateB() { RestTemplate restTemplate = new RestTemplate(); //setting some restTemplate properties return restTemplate; } } ``` Then when you come to injecting and using the templates ``` @Service public class SomeClass { @Autowired @Qualifier("restTemplateA") private RestTemplate restTemplate; } ``` However, you can also mark one of the templates as primary with `@Primary`, and this bean will then be used in each place where you do not qualify your autowired field. ``` @Bean(name="restTemplateA") @Primary public RestTemplate restTemplateA() { RestTemplate restTemplate = new RestTemplate(); //setting some restTemplate properties return restTemplate; } ```
Actually, you cannot have both configuration classes, because you'll get a bean name conflict. To fix this, rename the method name: ``` @Bean @Qualifier("restTemplateB") public RestTemplate restTemplateB() { RestTemplate restTemplate = new RestTemplate(); //setting some restTemplate properties return restTemplate; } ``` This way, two `RestTemplate` will be created with `restTemplate` and `restTemplateB` name respectively, and the first one will be injected in the service class.
16,842
72,587,334
In .NET C# there is a function Task.WhenAll that can take a list of tasks and await them all. What should I use in Python? I am trying to do the same with this: ``` tasks = ... #list of coroutines for task in tasks: await task ```
2022/06/11
[ "https://Stackoverflow.com/questions/72587334", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11660685/" ]
After adding tasks to a list, you should use [`asyncio.gather`](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather), which takes coroutines as arguments and executes them concurrently. Alternatively, you could use [`asyncio.create_task`](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task), which takes a coroutine and schedules it as a concurrent task on the event loop. ```py import asyncio async def coro(i): await asyncio.sleep(i//2) async def main(): tasks = [] for i in range(5): tasks.append(coro(i)) await asyncio.gather(*tasks) if __name__ == "__main__": loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() ```
Use [`asyncio.gather`](https://docs.python.org/3/library/asyncio-task.html#asyncio.gather) if you're on Python 3.7 or above. From the docs: > > Run awaitable objects in the aws sequence concurrently. > If any awaitable in aws is a coroutine, it is automatically scheduled as a Task. > If all awaitables are completed successfully, the result is an aggregate list of returned values. The order of result values corresponds to the order of awaitables in aws. > > >
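On Python 3.7+, the closest one-liner analogue of `Task.WhenAll` is `asyncio.run` around `gather`, which also collects the return values in order; a sketch:

```python
import asyncio

async def work(i):
    await asyncio.sleep(0)  # stand-in for real awaitable work
    return i * 2

async def main():
    # gather schedules all coroutines concurrently and preserves argument order.
    return await asyncio.gather(*(work(i) for i in range(5)))

results = asyncio.run(main())
print(results)  # [0, 2, 4, 6, 8]
```

Note that awaiting the coroutines one by one in a plain `for` loop, as in the question, runs them sequentially rather than concurrently.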
16,844
16,808,349
I've installed the following package, <https://github.com/zacharyvoase/django-postgres>, via pip and virtualenv: ``` pip install git+https://github.com/zacharyvoase/django-postgres.git ``` It was installed successfully. I used it in my model (as described in its documentation) ``` from django.db import models import django_postgres as pg USStates = pg.Enum('states_of_the_usa', ['AL', 'WY']) class Address(pg.CompositeType): line1 = models.CharField(max_length=100) line2 = models.CharField(max_length=100, blank=True) city = models.CharField(max_length=100) zip_code = models.CharField(max_length=10) state = USStates() country = models.CharField(max_length=100) ``` When I try to sync it via the shell, it throws an error: ``` (virtualenv) user$ python manage.py sync_pgviews Unknown command: 'sync_pgviews' Type 'manage.py help' for usage. ``` Have I left something out after installing the app? And is this the correct way to install a new Django app?
2013/05/29
[ "https://Stackoverflow.com/questions/16808349", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1089202/" ]
In order for management commands to work, the app has to be added to `INSTALLED_APPS`. However, a basic problem that you have is that the module doesn't support [`ENUM`](http://www.postgresql.org/docs/9.1/static/datatype-enum.html) yet. It's still a work in progress.
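For reference, registering an app just means adding its import name to `INSTALLED_APPS` in `settings.py`; the entry below assumes the package imports as `django_postgres`, as in the question:

```python
# settings.py (fragment) -- the surrounding entries are illustrative
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    # ...
    'django_postgres',  # assumed entry name; check the package's docs
]
```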
After adding a new app: 1. add the app to INSTALLED_APPS in settings.py 2. run python manage.py syncdb 3. add its urls to urls.py Perhaps you should go through this (again?): <https://docs.djangoproject.com/en/dev/intro/tutorial01/>
16,845
32,533,820
So I'm basically trying to see if two identical items in a Python list are beside each other. For example, checking whether the number 2 is beside another 2 in this list, example_List = [1,2,2,3,4], should return True. So far I have this ``` def checkList(List1): for i in range(len(List1 - 1)): if list1[i] == 2 and list1[i+1] == 2: return True return False ``` I get the error: `unsupported operand type(s) for -: 'list' and 'int'` Thanks!
2015/09/12
[ "https://Stackoverflow.com/questions/32533820", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The issue is with this part: ``` len(List1 - 1) ``` You should change it into ``` len(List1) - 1 ``` And you should use the same case for variable List1. Change ``` if list1[i] == 2 and list1[i+1] == 2: ``` to: ``` if List1[i] == 2 and List1[i+1] == 2: ```
Replace ``` len(List1 - 1) ``` with ``` len(List1) - 1 ```
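An idiomatic alternative that avoids index arithmetic (and the off-by-one bug) entirely is to pair each element with its neighbour via `zip`:

```python
def has_adjacent(lst, value):
    # zip(lst, lst[1:]) yields each (element, next_element) pair.
    return any(a == value == b for a, b in zip(lst, lst[1:]))

example_list = [1, 2, 2, 3, 4]
print(has_adjacent(example_list, 2))  # True
print(has_adjacent([1, 2, 3, 2], 2))  # False
```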
16,846
50,205,683
I'm using Ubuntu 14.04 with Django 2.0.5 with Django Cookiecutter. I am trying to start a Django server on DigitalOcean and trying to bind gunicorn to 0.0.0.0:8000. python manage.py runserver works fine, but the issue is that it says it can't import environ. Any tips are greatly appreciated, thanks. I've run > > pip install -r base.txt > > > pip install -r local.txt > > > pip install -r production.txt > > > and reinstalled django-environ 0.4.4 **This is the error that I receive when I run the following gunicorn command:** > > sudo gunicorn --bind 0.0.0.0:8000 config.wsgi:application > > > ``` (venv) root@django-manaland:/home/django/mana/manaland# sudo gunicorn -b 0.0.0.0:8000 config.wsgi:application [2018-05-07 00:12:32 +0000] [20500] [INFO] Starting gunicorn 19.8.1 [2018-05-07 00:12:32 +0000] [20500] [INFO] Listening at: http://0.0.0.0:8000 (20500) [2018-05-07 00:12:32 +0000] [20500] [INFO] Using worker: sync [2018-05-07 00:12:32 +0000] [20503] [INFO] Booting worker with pid: 20503 [2018-05-07 00:12:32 +0000] [20503] [ERROR] Exception in worker process Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/gunicorn/arbiter.py", line 583, in spawn_worker worker.init_process() File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/base.py", line 129, in init_process self.load_wsgi() File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/base.py", line 138, in load_wsgi self.wsgi = self.app.wsgi() File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/base.py", line 67, in wsgi self.callable = self.load() File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/wsgiapp.py", line 52, in load return self.load_wsgiapp() File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp return util.import_app(self.app_uri) File "/usr/local/lib/python3.5/dist-packages/gunicorn/util.py", line 350, in import_app __import__(module) File "/home/django/mana/manaland/config/wsgi.py", line 38, in <module>
application = get_wsgi_application() File "/usr/local/lib/python3.5/dist-packages/django/core/wsgi.py", line 13, in get_wsgi_application django.setup(set_prefix=False) File "/usr/local/lib/python3.5/dist-packages/django/__init__.py", line 22, in setup configure_logging(settings.LOGGING_CONFIG, settings.LOGGING) File "/usr/local/lib/python3.5/dist-packages/django/conf/__init__.py", line 56, in __getattr__ self._setup(name) File "/usr/local/lib/python3.5/dist-packages/django/conf/__init__.py", line 41, in _setup self._wrapped = Settings(settings_module) File "/usr/local/lib/python3.5/dist-packages/django/conf/__init__.py", line 110, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/home/django/mana/manaland/config/settings/production.py", line 3, in <module> from .base import * # noqa File "/home/django/mana/manaland/config/settings/base.py", line 5, in <module> import environ ImportError: No module named 'environ' [2018-05-07 00:12:32 +0000] [20503] [INFO] Worker exiting (pid: 20503) [2018-05-07 00:12:32 +0000] [20500] [INFO] Shutting down: Master [2018-05-07 00:12:32 +0000] [20500] [INFO] Reason: Worker failed to boot. ``` base.py file ``` """ Base settings to build other settings files upon. """ import environ ROOT_DIR = environ.Path(__file__) - 3 # (manaland/config/settings/base.py - 3 = manaland/) APPS_DIR = ROOT_DIR.path('manaland') env = environ.Env() READ_DOT_ENV_FILE = env.bool('DJANGO_READ_DOT_ENV_FILE', default=False) if READ_DOT_ENV_FILE: # OS environment variables take precedence over variables from .env env.read_env(str(ROOT_DIR.path('.env'))) # GENERAL # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#debug DEBUG = env.bool('DJANGO_DEBUG', False) # Local time zone. 
Choices are # http://en.wikipedia.org/wiki/List_of_tz_zones_by_name # though not all of them may be available with every OS. # In Windows, this must be set to your system time zone. TIME_ZONE = 'UTC' # https://docs.djangoproject.com/en/dev/ref/settings/#language-code LANGUAGE_CODE = 'en-us' # https://docs.djangoproject.com/en/dev/ref/settings/#site-id SITE_ID = 1 # https://docs.djangoproject.com/en/dev/ref/settings/#use-i18n USE_I18N = True # https://docs.djangoproject.com/en/dev/ref/settings/#use-l10n USE_L10N = True # https://docs.djangoproject.com/en/dev/ref/settings/#use-tz USE_TZ = True # DATABASES # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#databases DATABASES = { 'default': env.db('DATABASE_URL', default='postgres:///manaland'), } DATABASES['default']['ATOMIC_REQUESTS'] = True # URLS # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#root-urlconf ROOT_URLCONF = 'config.urls' # https://docs.djangoproject.com/en/dev/ref/settings/#wsgi-application WSGI_APPLICATION = 'config.wsgi.application' # APPS # ------------------------------------------------------------------------------ DJANGO_APPS = [ 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', # 'django.contrib.humanize', # Handy template tags 'django.contrib.admin', ] THIRD_PARTY_APPS = [ 'crispy_forms', 'allauth', 'allauth.account', 'allauth.socialaccount', 'rest_framework', ] LOCAL_APPS = [ 'manaland.users.apps.UsersConfig', # Your stuff: custom apps go here ] # https://docs.djangoproject.com/en/dev/ref/settings/#installed-apps INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS # MIGRATIONS # ------------------------------------------------------------------------------ # 
https://docs.djangoproject.com/en/dev/ref/settings/#migration-modules MIGRATION_MODULES = { 'sites': 'manaland.contrib.sites.migrations' } # AUTHENTICATION # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#authentication-backends AUTHENTICATION_BACKENDS = [ 'django.contrib.auth.backends.ModelBackend', 'allauth.account.auth_backends.AuthenticationBackend', ] # https://docs.djangoproject.com/en/dev/ref/settings/#auth-user-model AUTH_USER_MODEL = 'users.User' # https://docs.djangoproject.com/en/dev/ref/settings/#login-redirect-url LOGIN_REDIRECT_URL = 'users:redirect' # https://docs.djangoproject.com/en/dev/ref/settings/#login-url LOGIN_URL = 'account_login' # PASSWORDS # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#password-hashers PASSWORD_HASHERS = [ # https://docs.djangoproject.com/en/dev/topics/auth/passwords/#using-argon2-with-django 'django.contrib.auth.hashers.Argon2PasswordHasher', 'django.contrib.auth.hashers.PBKDF2PasswordHasher', 'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher', 'django.contrib.auth.hashers.BCryptSHA256PasswordHasher', 'django.contrib.auth.hashers.BCryptPasswordHasher', ] # https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # MIDDLEWARE # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#middleware MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 
'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] # STATIC # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#static-root STATIC_ROOT = str(ROOT_DIR('staticfiles')) # https://docs.djangoproject.com/en/dev/ref/settings/#static-url STATIC_URL = '/static/' # https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#std:setting-STATICFILES_DIRS STATICFILES_DIRS = [ str(APPS_DIR.path('static')), ] # https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/#staticfiles-finders STATICFILES_FINDERS = [ 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ] # MEDIA # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#media-root MEDIA_ROOT = str(APPS_DIR('media')) # https://docs.djangoproject.com/en/dev/ref/settings/#media-url MEDIA_URL = '/media/' # TEMPLATES # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#templates TEMPLATES = [ { # https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-TEMPLATES-BACKEND 'BACKEND': 'django.template.backends.django.DjangoTemplates', # https://docs.djangoproject.com/en/dev/ref/settings/#template-dirs 'DIRS': [ str(APPS_DIR.path('templates')), ], 'OPTIONS': { # https://docs.djangoproject.com/en/dev/ref/settings/#template-debug 'debug': DEBUG, # https://docs.djangoproject.com/en/dev/ref/settings/#template-loaders # https://docs.djangoproject.com/en/dev/ref/templates/api/#loader-types 'loaders': [ 'django.template.loaders.filesystem.Loader', 
'django.template.loaders.app_directories.Loader', ], # https://docs.djangoproject.com/en/dev/ref/settings/#template-context-processors 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.template.context_processors.i18n', 'django.template.context_processors.media', 'django.template.context_processors.static', 'django.template.context_processors.tz', 'django.contrib.messages.context_processors.messages', ], }, }, ] # http://django-crispy-forms.readthedocs.io/en/latest/install.html#template-packs CRISPY_TEMPLATE_PACK = 'bootstrap4' # FIXTURES # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#fixture-dirs FIXTURE_DIRS = ( str(APPS_DIR.path('fixtures')), ) # EMAIL # ------------------------------------------------------------------------------ # https://docs.djangoproject.com/en/dev/ref/settings/#email-backend EMAIL_BACKEND = env('DJANGO_EMAIL_BACKEND', default='django.core.mail.backends.smtp.EmailBackend') # ADMIN # ------------------------------------------------------------------------------ # Django Admin URL regex. 
ADMIN_URL = r'^admin/' # https://docs.djangoproject.com/en/dev/ref/settings/#admins ADMINS = [ ("""dom""", 'hello@manaland.io'), ] # https://docs.djangoproject.com/en/dev/ref/settings/#managers MANAGERS = ADMINS # django-allauth # ------------------------------------------------------------------------------ ACCOUNT_ALLOW_REGISTRATION = env.bool('DJANGO_ACCOUNT_ALLOW_REGISTRATION', True) # https://django-allauth.readthedocs.io/en/latest/configuration.html ACCOUNT_AUTHENTICATION_METHOD = 'username' # https://django-allauth.readthedocs.io/en/latest/configuration.html ACCOUNT_EMAIL_REQUIRED = True # https://django-allauth.readthedocs.io/en/latest/configuration.html ACCOUNT_EMAIL_VERIFICATION = 'mandatory' # https://django-allauth.readthedocs.io/en/latest/configuration.html ACCOUNT_ADAPTER = 'manaland.users.adapters.AccountAdapter' # https://django-allauth.readthedocs.io/en/latest/configuration.html SOCIALACCOUNT_ADAPTER = 'manaland.users.adapters.SocialAccountAdapter' # Your stuff... # ------------------------------------------------------------------------------ ``` production.py imports ``` import logging from .base import * # noqa from .base import env ``` manage.py ``` #!/usr/bin/env python import os import sys if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production") try: from django.core.management import execute_from_command_line except ImportError: # The above import may fail for some other reason. Ensure that the # issue is really that Django is missing to avoid masking other # exceptions on Python 2. try: import django # noqa except ImportError: raise ImportError( "Couldn't import Django. Are you sure it's installed and " "available on your PYTHONPATH environment variable? Did you " "forget to activate a virtual environment?" ) raise # This allows easy placement of apps within the interior # manaland directory. 
current_path = os.path.dirname(os.path.abspath(__file__)) sys.path.append(os.path.join(current_path, "manaland")) execute_from_command_line(sys.argv) ```
2018/05/07
[ "https://Stackoverflow.com/questions/50205683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6907366/" ]
You need to run your manage.py file with the settings module passed in explicitly:

```
python manage.py runserver --settings=config.settings.production
```
you need to set environment variables for your database. If you are on a Linux machine:

```
$ export DATABASE_URL=postgres://postgres:<password>@127.0.0.1:5432/<DB name given to createdb>
```

The cookiecutter-django docs can help you further with all of this: <https://cookiecutter-django.readthedocs.io/en/latest/developing-locally.html>
16,851
22,882,125
I updated my system to Mavericks and I have a python code using pgdb. How can I install pgdb on my new mac? I tried ``` sudo pip install git+git://github.com/cancerhermit/pgdb.py.git ``` And ``` sudo pip install pgdb ``` And ``` brew install pgdb ``` And I have even tried to install it from PyCharm directly (my first try).
2014/04/05
[ "https://Stackoverflow.com/questions/22882125", "https://Stackoverflow.com", "https://Stackoverflow.com/users/762435/" ]
**Assuming that you want to:** * Replace lower-case letters with lower-case letters * Replace upper-case letters with upper-case letters * Leave spaces and any other non-alphabetic characters as is --- ``` void encrypt (std::string &e) { int size = e.size(); for (int i=0; i<size; i++) { char c = e[i]; if (('A' <= c && c <= 'Z'-11) || ('a' <= c && c <= 'z'-11)) e[i] = c+11; else if ('Z'-11 < c && c <= 'Z') e[i] = c+11-'Z'+'A'; else if ('z'-11 < c && c <= 'z') e[i] = c+11-'z'+'a'; } } ```
You could do something like this: ``` char _character='X'; int _value=static_cast<int>(_character); if(_value!=32)//not space { int _newValue=((_value+11)%90); (_newValue<65)?_newValue+=65:_newValue+=0; char _newCharacter=static_cast<char>(_newValue); } ```
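Both answers shift character codes by 11; the same idea reads more compactly in Python, with wrap-around handled per case and everything else (spaces, punctuation) passed through unchanged. This is a sketch of the technique discussed in the answers, not code taken from the question:

```python
def caesar11(text: str, shift: int = 11) -> str:
    """Shift letters by `shift` positions, wrapping around within each case."""
    out = []
    for ch in text:
        if 'a' <= ch <= 'z':
            out.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
        elif 'A' <= ch <= 'Z':
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)  # spaces and punctuation pass through unchanged
    return ''.join(out)

print(caesar11("AZ az"))  # LK lk
```

Using modulo arithmetic avoids the chained range checks in the C++ versions, since the wrap-around falls out of the `% 26`.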
16,852
38,958,697
I am working on a script in Python with BeautifulSoup to find some data in HTML. I got stuck and very confused, my brain stopped working, and I don't have any idea how to scrape the full address from these elements:

```
<li class="spacer">
    <span>Location:</span>
    <br>Some Sample Street<br>
    Abbeville, AL 00000
</li>
```

I have tried something like `location = info.find('li', 'spacer').text` but I still get only the string "Location: ". I have tried many parent-child relations but still can't figure out how to scrape this one. Can anybody help me?
2016/08/15
[ "https://Stackoverflow.com/questions/38958697", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6285753/" ]
You can choose names for the nodes in your model by passing the optional `name="myname"` argument to pretty much any Tensorflow operator that builds a node. Tensorflow will pick names for graph nodes automatically if you don't specify them, but if you want to identify those nodes to a tool like freeze\_graph.py, then it's best to choose the names yourself. Those names are what you pass to output\_node\_names.
You can get all of the node names in your model with something like: ```py node_names = [node.name for node in tf.get_default_graph().as_graph_def().node] ``` Or with restoring the graph: ```py saver = tf.train.import_meta_graph(/path/to/meta/graph) sess = tf.Session() saver.restore(sess, /path/to/checkpoints) graph = sess.graph print([node.name for node in graph.as_graph_def().node]) ``` You may need to filter these to get only your output nodes, or only the nodes that you want, but this can at least help you get the names for a graph that you have already trained and cannot afford to retrain with `name='some_name'` defined for each node. Ideally, you want to define a `name` parameter for each operation or tensor that you are going to want to access later.
16,853
45,489,388
I am working on a Python/Django project, and we need to use two databases. Following the documentation I added a second database like this: ``` DATABASE_URL = os.getenv('DATABASE_URL', 'postgres://*******:********@aws-us-***********:*****/*****') CURRENCIES_URL = os.getenv('CURRENCIES_URL', 'postgres://*******:********@aws-us-***********:*****/*****') DATABASES = { 'default': dj_database_url.parse(DATABASE_URL), 'currencies': dj_database_url.parse(CURRENCIES_URL) } ``` The parse() method returns the data in the format the object is expecting. Then, I have this code: ``` currencies = connection['currencies'].cursor() ``` Basically this allows me to run custom SQL code on the database, by returning its cursor and storing it in `currencies` However when running this code I get this in the console: ``` url(r'^', include('btcmag.urls', namespace="btcmag")), File "/Users/john/.virtualenvs/btcmag/lib/python2.7/site-packages/django/conf/urls/__init__.py", line 50, in include urlconf_module = import_module(urlconf_module) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/Users/john/btcmag/btcmag/urls.py", line 7, in <module> from . import views File "/Users/john/btcmag/btcmag/views.py", line 4, in <module> from .apis import getTrendingArticles, getTickers, getCurrency, getHistory File "/Users/john/btcmag/btcmag/apis.py", line 9, in <module> currencies = connection['currencies'].cursor() TypeError: 'DefaultConnectionProxy' object has no attribute '__getitem__' ``` Some clarification: * When I run just `currencies = connection.cursor()` it uses the `default` database and works just fine, which tells me it's not the connection module * If I switch `currencies` and `default` in the databases settings it works also fine - running `currencies = connection.cursor()` - by using the Currencies DB as default. So the DB is not the issue either. What could be causing the issue here?
2017/08/03
[ "https://Stackoverflow.com/questions/45489388", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7776359/" ]
There is a small but fatal typo in your code. It should be `django.db.connections` instead of your `connection` (you have not specified where that comes from). Quoting from <https://docs.djangoproject.com/en/1.11/topics/db/sql/#executing-custom-sql-directly> > > If you are using more than one database, you can use > django.db.connections to obtain the connection (and cursor) for a > specific database. `django.db.connections` is a dictionary-like object > that allows you to retrieve a specific connection using its alias: > > > > ``` > from django.db import connections > cursor = connections['my_db_alias'].cursor() > # Your code here... > > ``` > >
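For readers who want to see the alias-to-cursor pattern outside of Django, here is a stdlib `sqlite3` sketch of the same idea: a dictionary of named connections, each of which hands out its own cursor. The aliases, table, and values are invented for illustration; `sqlite3` stands in for the configured PostgreSQL databases:

```python
import sqlite3

# A dictionary of named connections, mirroring how django.db.connections
# maps an alias from settings.DATABASES to a live connection.
connections = {
    'default': sqlite3.connect(':memory:'),
    'currencies': sqlite3.connect(':memory:'),
}

# Pick a connection by alias and get a cursor from it, just as
# connections['currencies'].cursor() does in Django.
cur = connections['currencies'].cursor()
cur.execute('CREATE TABLE rates (code TEXT, value REAL)')
cur.execute("INSERT INTO rates VALUES ('BTC', 8.59731)")
cur.execute("SELECT value FROM rates WHERE code = 'BTC'")
row = cur.fetchone()
print(row[0])  # 8.59731
```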
You can't access `connection` using bracket notation. Perhaps this will work: ``` currencies = DATABASES['currencies'].cursor() ```
16,854
52,306,134
I am pretty new to Django. I think I cannot run django application as sudo since all pip related modules are installed for the user and not for the sudo user. So, it's a kind of basic question like how do I run django app that can listen for port 80 as well as port 443. So, far I have tried following option - i.e pre-routing - NAT I run my app using the following command - ``` $python manage.py runserver Performing system checks... System check identified no issues (0 silenced). You have 15 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions. Run 'python manage.py migrate' to apply them. September 13, 2018 - 03:04:41 Django version 2.1.1, using settings 'WebBlogger.settings' Starting development server at http://127.0.0.1:8000/ ``` Next, here is my iptables settings nothing worked for me though sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000 =============================================================================== ``` $sudo iptables -t nat --line-numbers -n -L Chain PREROUTING (policy ACCEPT) num target prot opt source destination 1 DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL 2 REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 redir ports 8000 Chain INPUT (policy ACCEPT) num target prot opt source destination Chain OUTPUT (policy ACCEPT) num target prot opt source destination 1 DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL Chain POSTROUTING (policy ACCEPT) num target prot opt source destination 1 MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0 2 RETURN all -- 192.168.122.0/24 224.0.0.0/24 3 RETURN all -- 192.168.122.0/24 255.255.255.255 4 MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 5 MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 6 MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24 Chain DOCKER (2 references) num target prot opt source destination 1 
RETURN all -- 0.0.0.0/0 0.0.0.0/0 ``` I did http:// and I see connection refused. I have no idea how to debug the NAT stuff whether it is actually hitting NAT or not. How can I debug and also what is the correct solution to it?
2018/09/13
[ "https://Stackoverflow.com/questions/52306134", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10318265/" ]
Ideally the module should go into your site-packages. Do you see it there? If it's there, then check your package path. Your package may be steplib, but have you checked whether you are importing the right package and module there? For example, within your steplib folder you might have an additional package and a module within it. Say the package is X and the module is Y. Then you can import it as below.

```
from X import Y
```

Make sure you have `__init__.py` in your package to qualify it as a package.
Did you install the package in the python directory under Lib?
16,855
23,909,292
I am new to MQTT and still discovering this interesting protocol. I want to create a client in Python that publishes a message every 10 seconds. So far I have succeeded in publishing only one message and keeping the client connected to the broker. How can I make the publishing part a loop? Below is my client:

```
import mosquitto
mqttc=mosquitto.Mosquitto("ioana")
mqttc.connect("127.0.0.1",1884,60,True)
mqttc.publish("test","Hello")
mqttc.subscribe("test/", 2)
while mqttc.loop() == 0:
    pass
```

Thanks.
2014/05/28
[ "https://Stackoverflow.com/questions/23909292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3683176/" ]
You can sleep between calls: ``` import mosquitto import time # import time module mqttc=mosquitto.Mosquitto("ioana") mqttc.connect("127.0.0.1",8000,60,True) mqttc.subscribe("test/", 2) while mqttc.loop() == 0: mqttc.publish("test","Hello") time.sleep(10)# sleep for 10 seconds before next call ```
I would suggest: ``` import paho.mqtt.client as mqtt # mosquitto.py is deprecated import time mqttc = mqtt.Client("ioana") mqttc.connect("127.0.0.1", 1883, 60) #mqttc.subscribe("test/", 2) # <- pointless unless you include a subscribe callback mqttc.loop_start() while True: mqttc.publish("test","Hello") time.sleep(10)# sleep for 10 seconds before next call ```
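The publish-then-sleep pattern in both answers is independent of the MQTT library itself. A minimal sketch with a stub client (the class name and topic are invented) shows the loop shape and can be exercised without a broker:

```python
import time

class StubClient:
    """Stands in for a paho/mosquitto client; it just records publishes."""
    def __init__(self):
        self.published = []

    def publish(self, topic, payload):
        self.published.append((topic, payload))

def publish_every(client, topic, payload, interval, count):
    """Publish `count` times, sleeping `interval` seconds between calls."""
    for _ in range(count):
        client.publish(topic, payload)
        time.sleep(interval)  # 10 seconds in the answers; tiny here

client = StubClient()
publish_every(client, "test", "Hello", interval=0.01, count=3)
print(client.published)
```

With a real client you would swap `StubClient` for the library's client and keep the loop (or the library's own `loop_start()` thread) unchanged.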
16,856
62,287,967
I have just started Python. I am trying to web scrape a website to fetch the price and title from it. I have gone through multiple tutorials and blogs; the most common libraries are Beautiful Soup and `scrapy`. My question is: is there any way to scrape a website without using any library? That is, without using any third-party library like `beautifulsoup` and `scrapy`. It can use built-in libraries. Please suggest a blog, article or tutorial so that I can learn.
2020/06/09
[ "https://Stackoverflow.com/questions/62287967", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Instead of using `scrapy` you can use `urllib`. Instead of `beautifulsoup` you can use `regex`. But `scrapy` and `beautifulsoup` make your life easier. `Scrapy` is not an easy library, so you can use `requests` or `urllib` instead.
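To make the built-in route concrete, here is a sketch using only the standard library's `html.parser`, fed a hard-coded snippet so no network access is needed. The markup, class names, and values are invented for the example:

```python
from html.parser import HTMLParser

# Invented page snippet; on a real site this string would come from
# urllib.request.urlopen(url).read().decode().
html = ('<html><body><h1 class="title">Blue Widget</h1>'
        '<span class="price">$19.99</span></body></html>')

class ProductParser(HTMLParser):
    """Collect the text of elements whose class is 'title' or 'price'."""
    def __init__(self):
        super().__init__()
        self.current = None
        self.data = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get('class') in ('title', 'price'):
            self.current = attrs['class']

    def handle_data(self, data):
        if self.current:
            self.data[self.current] = data.strip()
            self.current = None

parser = ProductParser()
parser.feed(html)
print(parser.data)  # {'title': 'Blue Widget', 'price': '$19.99'}
```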
I think the best, most popular, and easiest to learn and use libraries for web scraping in Python are requests, lxml and BeautifulSoup (whose latest version is bs4). In summary:

'Requests' lets us make HTML requests to the website's server for retrieving the data on its page. Getting the HTML content of a web page is the first and foremost step of web scraping. Let's take a look at the advantages and disadvantages of the Requests Python library.

**Advantages:**

* Simple
* Basic/Digest Authentication
* International Domains and URLs
* Chunked Requests
* HTTP(S) Proxy Support

**Disadvantages:**

* Retrieves only the static content of a page
* Can't be used for parsing HTML
* Can't handle websites made purely with JavaScript

We know the requests library cannot parse the HTML retrieved from a web page. Therefore, we require lxml, a high-performance, blazingly fast, production-quality HTML and XML parsing Python library. Let's take a look at the advantages and disadvantages of the lxml Python library.

**Advantages:**

* Faster than most of the parsers out there
* Light-weight
* Uses element trees
* Pythonic API

**Disadvantages:**

* Does not work well with poorly designed HTML
* The official documentation is not very beginner-friendly

BeautifulSoup is perhaps the most widely used Python library for web scraping. It creates a parse tree for parsing HTML and XML documents. Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. One major advantage of the Beautiful Soup library is that it works very well with poorly designed HTML and has a lot of functions. The combination of Beautiful Soup and Requests is quite common in the industry.
**Advantages:**

* Requires a few lines of code
* Great documentation
* Easy to learn for beginners
* Robust
* Automatic encoding detection

**Disadvantages:**

* Slower than lxml

If you want to learn how to scrape web pages using Beautiful Soup, this tutorial is for you: [tutorial](https://www.analyticsvidhya.com/blog/2015/10/beginner-guide-web-scraping-beautiful-soup-python/?utm_source=blog&utm_medium=5-popular-python-libraries-web-scraping)

By the way, there are many other libraries you can try, like Scrapy, Selenium (for web scraping), regex and urllib.
16,857
73,071,481
**edit** using utf-16 seems to get me closer in the right direction, but I have csv values that include commas such as "one example value is a description, which is long and can include commas, and quotes" So with my current code: ``` filepath="csv_input/frups.csv" rows = [] with open(filepath, encoding='utf-16') as f: for line in f: print('line=',line) formatted_line=line.strip().split(",") print('formatted_line=',formatted_line) rows.append(formatted_line) print('') ``` Lines get formatted incorrectly: ``` line= "FRUPS" "11111112" "Paahou 11111112, 11111112,11111112" "Bar, Achal" "Iagress" "Unassigned" "Normal" "GaWu , Suaair center will not be able to repair 3 couch part 11111112, 11111112,11111112 . Pleasa to repair . formatted_line= ['"FRUPS"\t"11111112"\t"Parts not able to repair in Suzhou 11111112', ' 11111112', '11111112"\t"Baaaaaar', ' Acaaaal"\t"In Progress"\t"Unassigned"\t"Normal"\t"Got coaow Wu ', ' Suar cat 11111112', ' 11111112', '11111112. Pleasa to repair .'] line= 11111112 formatted_line= ['11111112'] ``` So in this example, the `line` is separated by long spaces, but breaking up by commas is not as reliable for reading data line by line correctly --- I am trying to read a csv line by line in python but each solution leads to a different error. 1. Using pandas: ``` filepath="csv_input/frups.csv" data = pd.read_csv(filepath, encoding='utf-16') for thing in data: print(thing) print('') ``` Fails to read\_csv the file with an error `Error tokenizing data. C error: Expected 7 fields in line 16, saw 8` 2. 
Using csv\_reader ``` # open file in read mode with open(filepath, 'r') as read_obj: # pass the file object to reader() to get the reader object csv_reader = reader(read_obj) # Iterate over each row in the csv using reader object for row in csv_reader: # row variable is a list that represents a row in csv print(row) ``` Fails with error at `for row in csv_reader` line with `line contains NUL` I've tried to figure out what these `NUL` characters our but trying to investigate using code leads to different errors: ``` data = open(filepath, 'rb').read() print(data.find('\x00')) error: argument should be integer or bytes-like object, not 'str' ``` 3. another read solution trying to strip certain characters ``` with open(filepath,'rb') as f: contents = f.read() contents = contents.rstrip("\n").decode("utf-16") contents = contents.split("\r\n") ``` error: `TypeError: a bytes-like object is required, not 'str'` It seems like my csv has some weird characters that cause python to error out. I can open and view my csv just fine in excel, how can I read my csv line by line? Such as ``` row[0]=['col1','col2','col3'] row[1]=['val1','val2','val3'] etc... ```
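The quoted commas described in the edit above are exactly what the stdlib `csv` module handles; `line.strip().split(",")` cannot respect quoting, but `csv.reader` can, and it also accepts `delimiter='\t'` for tab-separated exports like the one shown. A small sketch with invented data:

```python
import csv
import io

# A quoted field containing commas, like the long description column above.
raw = '"FRUPS","11111112","a description, which is long and can include commas"\n'

rows = list(csv.reader(io.StringIO(raw)))
print(rows[0])

# For a tab-delimited UTF-16 file the reader would be built the same way:
# with open(filepath, encoding='utf-16', newline='') as f:
#     rows = list(csv.reader(f, delimiter='\t'))
```

Note the `newline=''` in the commented file version: the `csv` docs recommend it so that quoted fields containing newlines are handled correctly.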
2022/07/21
[ "https://Stackoverflow.com/questions/73071481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18283635/" ]
The token expiration date is there for security reasons. If someone steals a token which has no expiration date, that token can be used forever. This can be extremely dangerous, especially if the token is valuable. If a token has expired, the token should be refreshed, and then you can make the request again.
I would say your frontend should manage the token's validity properly. Good auth libraries have configuration where you can define how long before token expiration the token is refreshed. Configured that way, the token will never be expired on the backend side.
16,860
11,203,167
I'm building a site in django that interfaces with a large program written in R, and I would like to have a button on the site that runs the R program. I have that working, using `subprocess.call()`, but, as expected, the server does not continue rendering the view until `subprocess.call()` returns. As this program could take several hours to run, that's not really an option. Is there any way to run the R program and and keep executing the python code? I've searched around, and looked into `subprocess.Popen()`, but I couldn't get that to work. Here's the generic code I'm using in the view: ``` if 'button' in request.POST: subprocess.call('R CMD BATCH /path/to/script.R', shell=True) return HttpResponseRedirect('') ``` Hopefully I've just overlooked something simple. Thank you.
2012/06/26
[ "https://Stackoverflow.com/questions/11203167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1134853/" ]
``` subprocess.Popen(['R', 'CMD', 'BATCH', '/path/to/script.R']) ``` The process will be started asynchronously. Example: ``` $ cat 1.py import time import subprocess print time.time() subprocess.Popen(['sleep', '1000']) print time.time() $ python 1.py 1340698384.08 1340698384.08 ``` You must note that the child process will run even after the main process stops.
You may use a wrapper for `subprocess.call()`; the wrapper would have its own thread, within which it calls the `subprocess.call()` method.
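A sketch of that wrapper idea: start the call on a worker thread so the Django view can return immediately. Here a short Python sleep stands in for the hours-long R job:

```python
import subprocess
import sys
import threading

def call_in_background(cmd):
    """Run subprocess.call(cmd) on a daemon thread and return the thread."""
    t = threading.Thread(target=subprocess.call, args=(cmd,), daemon=True)
    t.start()
    return t

# A short sleep stands in for 'R CMD BATCH /path/to/script.R'.
t = call_in_background([sys.executable, '-c', 'import time; time.sleep(0.2)'])
print(t.is_alive())  # the caller was not blocked while the child runs
t.join()             # a view would skip this and return a response instead
```

In a real view you would drop the `join()` and simply return the `HttpResponseRedirect`, letting the daemon thread babysit the child process.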
16,861
58,455,611
Currently I'm working on a corpus/dataset. It's in XML format, as you can see in the picture below. I'm facing a problem. I want to access all **'ne'** elements one by one, as shown in the picture below. Then I want to access the **text of the 'W' elements** which are inside the 'ne' elements. Then I want to **concatenate** the symbols **'SDi' and 'EDi'** with the text of these 'W' elements. 'i' can take any positive whole number starting from 1. In the case of 'SDi' I need only the **text of the first 'W'** element that is inside the 'ne' element. In the case of 'EDi' I need only the **text of the last 'W' element** that is inside the 'ne' element.

Currently I don't get anything as output after running the code. I think this is because the element 'W' is never accessed. Moreover, I think element 'W' is not accessed because it is a grandchild of element 'ne'; therefore it can't be accessed directly, but it may be possible with the help of its parent node.

Note1: The number and names of sub-elements inside 'ne' elements are not the same.

Note2: Only the things that are needed are explained here. You may find some other details in the code/picture, but ignore them.

I'm using Spyder (Python 3.6). Any help would be appreciated.
A picture from the XML file I'm working on is given below: [![enter image description here](https://i.stack.imgur.com/70UyY.png)](https://i.stack.imgur.com/70UyY.png) Text version of XML file: [Click here](https://drive.google.com/file/d/1PWvqpr758yb87OCtKao77TJWzBirLy1D/view?usp=sharing) Sample/Expected output image (below): [![enter image description here](https://i.stack.imgur.com/IgTLj.png)](https://i.stack.imgur.com/IgTLj.png) Coding I've done so far: ``` for i in range(len(List_of_root_nodes)): true_false = True current = List_of_root_nodes[i] start_ID = current.PDante_ID #print('start:', start_ID) # For Testing end_ID = None number = str(i+1) # This number will serve as i used with SD and ED that is (SDi and EDi) discourse_starting_symbol = "SD" + number discourse_ending_symbol = "ED" + number while true_false: if current.right_child is None: end_ID = current.PDante_ID #print('end:', end_ID) # For Testing true_false = False else: current = current.right_child # Finding 'ne' element with id='start_ID' ne_text = None ne_id = None for ne in myroot.iter('ne'): ne_id = ne.get('id') # If ne_id matches with start_ID means the place where SDi is to be placed is found if ne_id == start_ID: for w in ne.iter('W'): ne_text = str(w.text) boundary_and_text = " " + str(discourse_starting_symbol) + " " + ne_text w.text = boundary_and_text break # If ne_id matches with end_ID means the place where EDi is to be placed is found # Some changes Required here: Here the 'EDi' will need to be placed after the last 'W' element. # So last 'W' element needs to be accessed if ne_id == end_ID: for w in ne.iter('W'): ne_text = str(w.text) boundary_and_text = ne_text + " " + str(discourse_ending_symbol) + " " w.text = boundary_and_text break ```
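Since the real XML is only shown as a picture, here is a self-contained ElementTree sketch with invented `ne`/`W` markup that reproduces the key point of the question: `iter('W')` reaches grandchildren at any depth, and indexing the resulting list gives the first and last `W` for the SDi/EDi markers:

```python
import xml.etree.ElementTree as ET

# Invented XML matching the description: 'W' elements are grandchildren
# of each 'ne' element, nested inside an intermediate wrapper.
doc = """<root>
  <ne id="n1"><wg><W>Alpha</W><W>Beta</W><W>Gamma</W></wg></ne>
  <ne id="n2"><wg><W>Delta</W></wg></ne>
</root>"""

myroot = ET.fromstring(doc)
for i, ne in enumerate(myroot.iter('ne'), start=1):
    words = list(ne.iter('W'))   # iter() finds descendants at any depth
    words[0].text = "SD%d %s" % (i, words[0].text)    # prefix the first W
    words[-1].text = "%s ED%d" % (words[-1].text, i)  # suffix the last W

print([w.text for w in myroot.iter('W')])
# ['SD1 Alpha', 'Beta', 'Gamma ED1', 'SD2 Delta ED2']
```

When an `ne` contains a single `W`, that element receives both markers, as in the second record above.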
2019/10/18
[ "https://Stackoverflow.com/questions/58455611", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5067040/" ]
The expression `let rec c_write = "printf(\" %d \");\n"` is not a function. It is a value of type `string` which is bound to a variable named `c_write`. So you're not using any I/O functions in your code. When entered in the interactive toplevel, this value is printed by the interpreter evaluation loop for user convenience. The same as when a Python interpreter will print for you the value that you've just entered. The representation, chosen by the OCaml toplevel interpreter, in general, has nothing to do with the representation which is used to store a value in a file or to print it. Moreover, in OCaml, there is no canonical representations. If you want to write a function that prints a C printf statement then this is how it will look like in OCaml ``` let print_printf () = print_endline {|printf("%d");|} ``` In the example above, I've used `{||}` to denote a sting literal instead of more common `""`, since in this literal there is no need to escape special characters and they are interpreted literally (i.e., the don't have any special meaning). You can achieve the same result using the regular `""` quotes for denoting it ``` let print_printf () = print_endline "printf(\"%d\");" ``` Here is an example of the toplevel interaction using these definitions: ``` # let print_printf () = print_endline {|printf("%d");|};; val print_printf : unit -> unit = <fun> # print_printf ();; printf("%d"); - : unit = () # let print_printf () = print_endline "printf(\"%d\");";; val print_printf : unit -> unit = <fun> # print_printf ();; printf("%d"); - : unit = () ``` If you will put this code in a file, compile, and execute and redirect into a C file it will be a well-formed C file (modulo the absence of the function body).
Since you are somehow using the toplevel printer for printing, and that you somehow needs a very specific format, you need to install a custom printer. The following would work: ``` # #install_printer Format.pp_print_string;; # " This \" is not escaped " ;; - : string = This " is not escaped ``` However, it seems very likely that this is not really the problem that you are trying to solve.
16,862
1,460,559
I'm using Django's `render_to_response` to write data out to an HTML page, but I'd also like that `render_to_response` call to load a Python dictionary into a JavaScript associative array. What's the best way to do this?
2009/09/22
[ "https://Stackoverflow.com/questions/1460559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/72106/" ]
Convert it to JSON and include, in your template.html, inside a `<script>` tag, something like ``` var my_associative_array = {{ json_data }} ``` after having JSON-encoded your Python dict into a string and put it in the context using key `json_data`.
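A sketch of that encoding step with the stdlib `json` module (Django historically bundled `simplejson`, but the call has the same shape; the dictionary here is invented, and in a view you would put `json_data` into the template context):

```python
import json

prices = {'AAA': 299021.0, 'CCC': 8.59731}

# What the view would add to the context before rendering:
json_data = json.dumps(prices)
print(json_data)

# What the rendered <script> tag would contain:
snippet = 'var my_associative_array = %s;' % json_data
print(snippet)
```

Because JSON object literals are valid JavaScript object literals, the dumped string can be dropped straight into the template as shown.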
What does that mean exactly? If you mean you think data in the template is in JavaScript terms, it isn't: You can use python objects in the template directly. If you mean, how do I embed a JSON literal from a Python dictionary or list into my template: Encode it with simplejson, which is included with Django. But, you often don't want to do this for a couple reasons. If you include this dynamic data in the template, you can't cache it as easily. Shouldn't this be another view that is generating a JS file you're including? Or maybe an AJAX call to grab the data once the page is live? Take the pick for what best fits you situation.
16,863
20,658,451
I've opened Aptana Studio 3 (Ubuntu 10.04) just like I did it hundreds of times before (the last time was yesterday). But this time I see an EMPTY workspace. No projects. No error messages. Nothing. Screen attached. I have not changed anything since the last time I used Aptana Studio (yesterday). I have not switched workspaces or anything like this. I've always had one workspace. I'm also using the pydev extension - all my projects are Python/Django, if that matters. It happened to me AGAIN, but last time it happened on my laptop, where I wanted to format the disk and re-install the system anyway, so I ignored it and simply re-installed everything. Now it has happened on my PC, where I have a lot of important projects. BTW, all my projects were not in the default location (i.e. the workspace folder). I added them from other locations and the workspace folder was actually empty. I don't know if this might have something to do with the issue. Any ideas how to quickly fix this?

![Aptana Studio 3 Destroyed My Workspace](https://i.stack.imgur.com/KSIuM.png)
2013/12/18
[ "https://Stackoverflow.com/questions/20658451", "https://Stackoverflow.com", "https://Stackoverflow.com/users/367181/" ]
I had this problem and after freaking out I looked around and found File -> Switch Workspace and choosing my workspace loaded everything as normal.
I solved the problem by manually adding the projects back to workspace. Wasn't that bad. I still don't know why they disappeared.
16,866
63,442,415
I had written a GUI using PyQt5 and recently I wanted to increase the font size of all my QLabels to a particular size. I could go through the entire code and individually change the QFont, but that is not efficient, so I thought I could just override the class and set all QLabel font sizes to the desired size. However, I need to understand the class as written in Python so I can figure out how to override it. But I did not find any Python documentation that shows what the code looks like for QLabel; there is just documentation for C++. Hence, I wanted to know where I can get the Python code for all of PyQt5, if that exists. If not, how can I change the font size of all QLabels used in my code?
2020/08/16
[ "https://Stackoverflow.com/questions/63442415", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12946401/" ]
Qt Stylesheets -------------- This is probably the easiest way to do in your situation, you are really trying to apply a specific "style" to all your QLabels. You can apply a style to your whole application, or a specific window, and this will affect all children that match the selectors. So in your case, to apply to *all widgets* in your application you can do the following to set the font size of all `QLabel` instances: ``` app = QApplication([]) app.setStyleSheet('.QLabel { font-size: 14pt;}') ``` **Note:** Be sure to set the stylesheet *before* attaching your widgets to its parent, otherwise you would need to manually trigger a style refresh. ### Also... * The `.QLabel` selector will only apply to `QLabel` class instances, and not to classes that inherit `QLabel`. To apply to both QLabel and inherited classes, use `QLabel {...}` instead of `.QLabel {...}` in the stylesheet. Some documentation to help you beyond that: * Qt stylesheet documentation: <https://doc.qt.io/qt-5/stylesheet.html> * Qt stylesheet syntax: <https://doc.qt.io/qt-5/stylesheet-syntax.html> * Qt stylesheet reference: <https://doc.qt.io/qt-5/stylesheet-reference.html> * PyQt documentation: <https://doc.qt.io/qtforpython/api.html>
Completing Adrien's answer, you can use `QFont` class and perform `.setFont()` method for every button. ```py my_font = QFont("Times New Roman", 12) my_button.setFont(my_font) ``` Using this class you can also change some font parameters, see <https://doc.qt.io/qt-5/qfont.html> Yeah, documentation for C++ is okay to read because all methods & classes from C++ are implemented in Python. UPD: `QWidget` class also has `setFont` method so you can set font size on `centralwidget` as well as using stylesheets.
16,867
18,048,232
I got Satchmo to try, but I have a big problem at the first try, and I don't understand what's wrong. When I run `$ python clonesatchmo.py` in a clean Django project, it throws an error:

```
$ python clonesatchmo.py
Creating the Satchmo Application
Customizing the files
Performing initial data synching
Traceback (most recent call last):
  File "manage.py", line 18, in <module>
    from django.core.management import execute_manager
ImportError: cannot import name execute_manager
Traceback (most recent call last):
  File "manage.py", line 18, in <module>
    from django.core.management import execute_manager
ImportError: cannot import name execute_manager
Error: Can not copy the static files.
Error: Can not syncdb.
```

AND it creates a store folder. Trying something like this works!!:

```
$ python manage.py shell
>>> import os, sys
>>> print sys.executable
/some/path/to/python
>>> os.system('bash')
$ /some/path/to/python manage.py validate # this does NOT fail on "from django.core.management import execute_manager"
```

I have Django 1.6 and Satchmo 0.9.3, Python 2.7.5 (I do not use virtualenv).
2013/08/04
[ "https://Stackoverflow.com/questions/18048232", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1768419/" ]
Replace the contents of manage.py with the following (from a new django 1.6 project). ``` #!/usr/bin/env python import os import sys if __name__ == "__main__": os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<app>.settings") from django.core.management import execute_from_command_line execute_from_command_line(sys.argv) ```
`execute_manager` was put on the deprecation path as part of the project layout refactor in Django 1.4 <https://docs.djangoproject.com/en/1.4/releases/1.4/#django-core-management-execute-manager>. Per the deprecation policy that means that the code for `execute_manager` has been completely removed in 1.6. If you are seeing this import error then the version of Satchmo you are using has not been updated to be compatible with Django 1.6.
16,872
58,176,203
In python, I have gotten quite used to container objects having truthy behavior when they are populated, and falsey behavior when they are not: ```py # list a = [] not a True a.append(1) not a False # deque from collections import deque d = deque() not d True d.append(1) not d False # and so on ``` However, [queue.Queue](https://docs.python.org/3/library/queue.html#queue.Queue) does not have this behavior. To me, this seems odd and a contradiction against almost any other container data type that I can think of. Furthermore, the method `empty` on queue seem to go against coding conventions that avoid race conditions on any other object (checking if a file exists, checking if a list is empty, etc). For example, we would generally say the following is bad practice: ```py _queue = [] if not len(_queue): # do something ``` And should be replaced with ```py _queue = [] if not _queue: # do something ``` or to handle an `IndexError`, which we might still argue would be better with the `if not _queue` statement: ```py try: x = _queue.pop() except IndexError as e: logger.exception(e) # do something else ``` Yet, `Queue` requires someone to do one of the following: ```py _queue = queue.Queue() if _queue.empty(): # do something # though this smells like a race condition # or handle an exception try: _queue.get(timeout=5) except Empty as e: # do something else # maybe logger.exception(e) ``` Is there documentation somewhere that might point to *why* this design choice was made? It seems odd, especially when [the source code](https://github.com/python/cpython/blob/master/Lib/queue.py) shows that it was built on top of `collections.deque` (noted that Queue does *not* inherit from `deque`)
2019/09/30
[ "https://Stackoverflow.com/questions/58176203", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7867968/" ]
According to the definition of the [truth value testing](https://docs.python.org/3/library/stdtypes.html#truth-value-testing) procedure, the behavior is expected:

> Any object can be tested for truth value, for use in an if or while
> condition or as operand of the Boolean operations below.
>
> By default, an object is considered true unless its class defines
> either a `__bool__()` method that returns False or a `__len__()` method
> that returns zero, when called with the object.

As `Queue` implements neither `__bool__()` nor `__len__()`, its truth value is `True`.

As to why `Queue` does not implement `__len__()`, a clue can be found in the docstring of the qsize function:

> '''Return the approximate size of the queue (not reliable!).'''

The same can be said of a hypothetical `__bool__()`.
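A quick interactive check makes the consequence of those missing methods concrete: `Queue` stays truthy no matter what it holds, while `deque` (which defines `__len__`) does not:

```python
from queue import Queue
from collections import deque

q = Queue()
print(bool(q))   # True, even though the queue is empty
q.put(1)
print(bool(q))   # still True: truthiness never reflects the contents

d = deque()
print(bool(d))   # False: deque defines __len__, so emptiness is falsey
d.append(1)
print(bool(d))   # True
```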
I'm going to leave the accepted answer as is, but as far as I can tell, the reason is that `if _queue: # do something` *would* be a race condition, since `Queue` is designed to be passed between threads and therefore possesses dubious state as far as tasks go. From the source: ```py class Queue: ~snip~ def qsize(self): '''Return the approximate size of the queue (not reliable!).''' with self.mutex: return self._qsize() def empty(self): '''Return True if the queue is empty, False otherwise (not reliable!). This method is likely to be removed at some point. Use qsize() == 0 as a direct substitute, but be aware that either approach risks a race condition where a queue can grow before the result of empty() or qsize() can be used. To create code that needs to wait for all queued tasks to be completed, the preferred technique is to use the join() method. ''' with self.mutex: return not self._qsize() ~snip ``` Must have missed this helpful docstring when I was originally looking. The `qsize` bool is not tied to the state of the queue once it's evaluated. So the user is doing processing against a queue based on an already out-of-date state. Like checking the existence of a file, it's more pythonic to just handle the exception: ```py try: task = _queue.get(timeout=4) except Empty as e: # do something ``` since the exception/success against `get` *is* the state of the queue. Likewise, we would not do: ```py if os.exists(file): with open(file) as fh: # do processing ``` Instead, we would do: ```py try: with open(file) as fh: # do processing except FileNotFoundError as e: # do something else ``` I suppose the intentional leaving-out of the `__bool__` method by the author is to steer the developer *away* from leaning against such a paradigm, and treating the queue like you would any other object that might be of questionable state.
16,873
39,367,963
I wrote a python script:

```
from string import punctuation
from collections import Counter
import urllib
from stripogram import html2text
myurl = urllib.urlopen("https://www.google.co.in/?gfe_rd=cr&ei=v-PPV5aYHs6L8Qfwwrlg#q=samsung%20j7")
html_string = myurl.read()
text = html2text( html_string )
file = open("/home/nextremer/Final_CF/contentBased/contentCount/hi.txt", "w")
file.write(text)
file.close()
```

Using this script I didn't get the output I wanted, only some HTML code.

- I want to save all the webpage's text content in a text file.
- I tried urllib2 and bs4 but I didn't get results.
- I don't want the output as an HTML structure.
- I want all the text data from the webpage.
2016/09/07
[ "https://Stackoverflow.com/questions/39367963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6708233/" ]
What do you mean by "webpage text"? It seems you don't want the full HTML file. If you just want the text you see in your browser, that is not easily solvable, as parsing an HTML document can be very complex, especially with JavaScript-rich pages. It starts with assessing whether a string between "<" and ">" is a regular tag, and includes analyzing the CSS properties changed by JavaScript behavior. That is why people write very big and complex rendering engines for web browsers.
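For simple, static pages, though, the standard library's `html.parser` can pull out the visible text without a full rendering engine. A minimal sketch (it skips `<script>`/`<style>` contents, but will not execute JavaScript, so dynamic pages stay incomplete):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text nodes, ignoring <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.parts.append(data.strip())

html = ("<html><head><style>p{color:red}</style></head>"
        "<body><p>Hello</p><script>var x=1;</script><p>world</p></body></html>")
parser = TextExtractor()
parser.feed(html)
print(" ".join(parser.parts))  # Hello world
```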
```
import urllib

# Note: this saves the page's raw HTML into test.txt;
# it does not strip the markup down to visible text
urllib.urlretrieve("http://www.example.com/test.html", "test.txt")
```
16,874
9,358,983
Assuming `connectionDetails` is a Python dictionary, what's the best, most elegant, most "pythonic" way of refactoring code like this? ``` if "host" in connectionDetails: host = connectionDetails["host"] else: host = someDefaultValue ```
2012/02/20
[ "https://Stackoverflow.com/questions/9358983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/940208/" ]
While `.get()` is a nice idiom, it's slower than `if/else` (and slower than `try/except` if presence of the key in the dictionary can be expected most of the time): ``` >>> timeit.timeit(setup="d={1:2, 3:4, 5:6, 7:8, 9:0}", ... stmt="try:\n a=d[1]\nexcept KeyError:\n a=10") 0.07691968797894333 >>> timeit.timeit(setup="d={1:2, 3:4, 5:6, 7:8, 9:0}", ... stmt="try:\n a=d[2]\nexcept KeyError:\n a=10") 0.4583777282275605 >>> timeit.timeit(setup="d={1:2, 3:4, 5:6, 7:8, 9:0}", ... stmt="a=d.get(1, 10)") 0.17784020746671558 >>> timeit.timeit(setup="d={1:2, 3:4, 5:6, 7:8, 9:0}", ... stmt="a=d.get(2, 10)") 0.17952161730158878 >>> timeit.timeit(setup="d={1:2, 3:4, 5:6, 7:8, 9:0}", ... stmt="if 1 in d:\n a=d[1]\nelse:\n a=10") 0.10071221458065338 >>> timeit.timeit(setup="d={1:2, 3:4, 5:6, 7:8, 9:0}", ... stmt="if 2 in d:\n a=d[2]\nelse:\n a=10") 0.06966537335119938 ```
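For reference, the `.get()` idiom that the timings above benchmark looks like this:

```python
connection_details = {"port": 80}

# the second argument is returned when the key is absent
host = connection_details.get("host", "localhost")
port = connection_details.get("port", 8080)
print(host, port)  # localhost 80
```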
(this is a late answer) An alternative is to subclass the `dict` class and implement the [`__missing__()`](https://docs.python.org/3/reference/datamodel.html?highlight=__missing__#object.__missing__) method, like this: ``` class ConnectionDetails(dict): def __missing__(self, key): if key == 'host': return "localhost" raise KeyError(key) ``` Examples: ``` >>> connection_details = ConnectionDetails(port=80) >>> connection_details['host'] 'localhost' >>> connection_details['port'] 80 >>> connection_details['password'] Traceback (most recent call last): File "python", line 1, in <module> File "python", line 6, in __missing__ KeyError: 'password' ```
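A related standard-library shortcut is `collections.defaultdict`. Note a behavioral difference from the `__missing__` approach above: `defaultdict` *stores* the default in the dict on first access, and applies the same factory to every missing key:

```python
from collections import defaultdict

connection_details = defaultdict(lambda: "localhost")
connection_details["port"] = 80

print(connection_details["host"])    # localhost
# the lookup above also inserted the default into the dict
print("host" in connection_details)  # True
```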
16,876
15,728,081
I'm doing monitoring work with django. The program needs a forever-loop sitting in the background to process incoming data and update the database. The job could be done by writing a python script and running it in the background with, for example, supervisord. However, I would prefer a solution within django itself; that is, the server process stays in a loop, listening for incoming packets. Any suggestion would be highly valued and appreciated. Many thanks in advance.
2013/03/31
[ "https://Stackoverflow.com/questions/15728081", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1929999/" ]
In case you feel that Celery is a bit oversized for your needs, you could also define a [custom management command](https://docs.djangoproject.com/en/dev/howto/custom-management-commands/) that runs forever and waits for your incoming packets.
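The body of such a command's `handle()` is ordinary Python, so the forever-loop itself needs nothing from Django. A framework-free sketch of the loop (all names here are illustrative; `_max_iterations` exists only so the loop can be exercised in a test):

```python
import queue

def run_forever(incoming, handle, _max_iterations=None):
    """Pull packets off `incoming` and process them until a None sentinel.

    A real daemon would leave _max_iterations as None and block forever
    on get(); the parameter is only an escape hatch for testing.
    """
    processed = 0
    while _max_iterations is None or processed < _max_iterations:
        packet = incoming.get()   # blocks until data arrives
        if packet is None:        # sentinel value: shut down cleanly
            break
        handle(packet)
        processed += 1
    return processed

q = queue.Queue()
results = []
for item in ("ping", "pong", None):
    q.put(item)
print(run_forever(q, results.append))  # 2
print(results)                         # ['ping', 'pong']
```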
I suggest you use Celery, which works with Django and has support for long-running tasks, among other features.

<http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html>

<http://docs.celeryproject.org/en/latest/getting-started/introduction.html>
16,886
61,256,730
I'm writing a python code that converts a binary to decimal.

```
def bin_dec (binary):
    binary_list = list(str(binary))

    for bit in binary_list:
        if int(bit) > 1 or int(bit) < 0:
            print('Invalid Binary')
            print('')
            exit()

    total = 0
    argument = 0
    binary_length = len(str(binary))
    exponent = binary_length - 1

    while exponent >= 0:
        total += (int(binary_list[argument]) * (2**exponent))
        argument += 1
        exponent -= 1

    print(total)
    print('')
```

When I test the code with negative binary numbers, I don't see the output "Invalid Binary". Instead I see an integer error:

```
>>> bin_dec(-10)
Traceback (most recent call last):
 File "<pyshell#9>", line 1, in <module>
 bin_dec(-10)
 File "E:/Bronx Science/Sophomore/Computer Science_Python/Edwin Chen_Lab 8_Diamond and Squares.py", line 27, in bin_dec
 if int(bit) > 1 or int(bit) < 0:
ValueError: invalid literal for int() with base 10: '-'
```
2020/04/16
[ "https://Stackoverflow.com/questions/61256730", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13332906/" ]
``` $agent = $this->request->getUserAgent(); if ($agent->isBrowser()) { if($this->input->cookie('country')) { $countryId = $this->input->cookie('country'); }else{ redirect(base_url()); } } ```
The HTTP 302 response code means that the URL of the requested resource has been changed temporarily, and further changes to the URL might be made in the future. Therefore, this same URI should be used by the client in future requests. [You can check this out to learn more](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/302)

Since you are actually redirecting non-authenticated users (a category bots belong to), I think the error code is correct. What you can do, however, is add the page to the robots.txt file so the page does not get indexed at all.
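For the robots.txt suggestion, the entry might look like this (the `/login` path is hypothetical; substitute the page you redirect from):

```
User-agent: *
Disallow: /login
```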
16,891
25,844,794
While executing below code, I am getting error as mentioned. I downloaded the required package from <http://www.cs.unm.edu/~mccune/prover9/download/> and configure. But still same issue. I am getting this error: ``` >>> import nltk >>> dt = nltk.DiscourseTester(['A student dances', 'Every student is a person']) >>> dt.readings() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.7/dist-packages/nltk/inference/discourse.py", line 351, in readings self._construct_threads() File "/usr/local/lib/python2.7/dist-packages/nltk/inference/discourse.py", line 297, in _construct_threads consistency_checked = self._check_consistency(self._threads) File "/usr/local/lib/python2.7/dist-packages/nltk/inference/discourse.py", line 393, in _check_consistency modelfound = mb.build_model() File "/usr/local/lib/python2.7/dist-packages/nltk/inference/api.py", line 333, in build_model verbose) File "/usr/local/lib/python2.7/dist-packages/nltk/inference/mace.py", line 202, in _build_model verbose=verbose) File "/usr/local/lib/python2.7/dist-packages/nltk/inference/mace.py", line 215, in _call_mace4 self._mace4_bin = self._find_binary('mace4', verbose) File "/usr/local/lib/python2.7/dist-packages/nltk/inference/prover9.py", line 166, in _find_binary verbose=verbose) File "/usr/local/lib/python2.7/dist-packages/nltk/internals.py", line 544, in find_binary binary_names, url, verbose)) File "/usr/local/lib/python2.7/dist-packages/nltk/internals.py", line 538, in find_binary_iter url, verbose): File "/usr/local/lib/python2.7/dist-packages/nltk/internals.py", line 517, in find_file_iter raise LookupError('\n\n%s\n%s\n%s' % (div, msg, div)) LookupError: =========================================================================== NLTK was unable to find the mace4 file! Use software specific configuration paramaters or set the PROVER9HOME environment variable. 
Searched in:
- /usr/local/bin/prover9
- /usr/local/bin/prover9/bin
- /usr/local/bin
- /usr/bin
- /usr/local/prover9
- /usr/local/share/prover9
```

While configuring LADR-2009-11 with `make all`, the process ended with:

```
.o utilities.o provers.o foffer.o ../ladr/libladr.a
search.o: In function `search':
search.c:(.text+0x6e54): undefined reference to `round'
../ladr/libladr.a(avltree.o): In function `avl_item_at_position':
avltree.c:(.text+0x7cb): undefined reference to `ceil'
collect2: error: ld returned 1 exit status
make[1]: *** [prover9] Error 1
make[1]: Leaving directory `/root/Desktop/karim/software/LADR-2009-11A/provers.src'
make: *** [all] Error 2
```
2014/09/15
[ "https://Stackoverflow.com/questions/25844794", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2518644/" ]
Try the following code:

```
.mini1 {
    width: 100%;
    height: 6.7%;
    margin-top: -2%;
    background-image: url('../images/footer1.jpg');
    z-index: 10;
    background-size: 100% 100%;
}
```

Please make sure that the image path is correct. Go to the console to check for any errors.
Well, here is a working JSFiddle: <http://jsfiddle.net/knftvt6v/4/>

I believe it is your file path that's causing the error; can you make a JSFiddle?

```
.mini1 {
    width: 100%;
    height: 2em;
    margin-top: -2%;
    background: url('http://www.serenitybaumer.com/main_images/footer.jpg');
    z-index: 10;
    color:white;
    text-align:center;
}
```

I added this block of code, which worked fine.
16,892
62,221,721
I have created a GUI in Visual Studio 2019.

[![enter image description here](https://i.stack.imgur.com/g21cN.png)](https://i.stack.imgur.com/g21cN.png)

The user will enter a username and password there, which I have to pass to a python script. When the user clicks the login button, the python script should be triggered and its output displayed.

The python code I have tried is:

```
import paramiko
import time

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

try:
    hostname = input("Enter host IP address: ")
    username = input("Enter SSH Username: ")
    password = input("Enter SSH Password: ")
    port = 22
    ssh.connect(hostname, port, username, password, look_for_keys=False)
    print("ssh login successfully")
    #stdin,stdout,stderr = ssh.exec_command('show version')
    #output = stdout.readlines()
    #print(output)
    Device_access = ssh.invoke_shell()
    Device_access.send(b'environment no more \n')
    Device_access.send(b'show version\n')
    time.sleep(2)
    output = Device_access.recv(65000)
    print (output.decode('ascii'))
except:
    print("error in connection due to wrong input entered")
```

But I cannot work out how to link the input entered in the C# GUI with the python script. Please let me know how I can do this. Thanks in advance!
2020/06/05
[ "https://Stackoverflow.com/questions/62221721", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11699581/" ]
You could use arguments to call your Python Script. Change the python script: ``` import paramiko import time import sys # Used to get arguments ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) try: hostname = sys.argv[1] # Skip 0th arg, since it is just the filename username = sys.argv[2] password = sys.argv[3] port = 22 ssh.connect(hostname, port, username, password, look_for_keys=False) print("ssh login successfully") #stdin,stdout,stderr = ssh.exec_command('show version') #output = stdout.readlines() #print(output) Device_access = ssh.invoke_shell() Device_access.send(b'environment no more \n') Device_access.send(b'show version\n') time.sleep(2) output = Device_access.recv(65000) print (output.decode('ascii')) except: print("error in connection due to wrong input entered") ``` And change your C# code which calls the Script to something like this: ``` Process pythonScript = new Process(); pythonScript.StartInfo.FileName = "Your python script"; pythonScript.StartInfo.Arguments = $"{YouHostnameVar} {YouUsernameVar} {YourPasswordVar}"; // Start the script with the credentials as arguments pythonScript.Start(); ```
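The `sys.argv` handoff in the modified script can be sanity-checked in isolation; the small validation helper below is illustrative, not part of the original script:

```python
def parse_credentials(argv):
    # argv[0] is the script path; the real arguments start at index 1
    if len(argv) < 4:
        raise SystemExit("usage: script.py HOST USER PASSWORD")
    return argv[1], argv[2], argv[3]

# simulate what `python script.py 10.0.0.1 admin s3cret` would produce
host, user, password = parse_credentials(
    ["script.py", "10.0.0.1", "admin", "s3cret"]
)
print(host, user)  # 10.0.0.1 admin
```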
There are multiple approaches to incorporating a Python script with .NET C# code
================================================================================

I will try to give a basic overview, along with my suggestion, but ultimately it will be up to you to figure out what works best.

IronPython
----------

IronPython is an actual separate interpreter that translates Python code into the .NET Common Language Runtime (CLR). It works well for simple Python 2 scripts that are not reliant on certain libraries.

Python.NET
----------

Python.NET uses the normal Python interpreter. It simply provides a way to interface between Python scripts and .NET code.

System.Diagnostics (My Suggestion)
----------------------------------

The System.Diagnostics C# tool allows you to run Python scripts as a system process. Note that this only runs the Python script. In order to transfer information between the Python script and the C# code, you will need some kind of shared file. I recommend setting up a folder where you save information used by both the C# and Python programs.

For a simple implementation of System.Diagnostics, along with notes on the particular way it is being called, check out this: <https://www.dotnetlovers.com/article/216/executing-python-script-from-c-sharp>

EDIT Based on Paul Sütterlin's Answer
-------------------------------------

As opposed to using a file to share information, Paul correctly points out that you can pass information as arguments. He also points out the simple process tool in C#, which is easier to set up than System.Diagnostics. I recommend you read the article I linked to see which solution best suits you. System.Diagnostics gives you more options, but they do have to be configured.
16,893
1,104,762
How can I break a long one-liner string in my code and keep the string indented with the rest of the code? [PEP 8](http://www.python.org/dev/peps/pep-0008/ "PEP-8") doesn't have any example for this case.

Correct output, but strangely indented:

```
if True:
    print "long test long test long test long test long \
test long test long test long test long test long test"

>>> long test long test long test long test long test long test long test long test long test long test
```

Bad output, but looks better in code:

```
if True:
    print "long test long test long test long test long \
    test long test long test long test long test long test"

>>> long test long test long test long test long test long test long test long test long test long test
```

---

Wow, lots of fast answers. Thanks!
2009/07/09
[ "https://Stackoverflow.com/questions/1104762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/23002/" ]
Adjacent strings are concatenated at compile time: ``` if True: print ("this is the first line of a very long string" " this is the second line") ``` Output: ``` this is the first line of a very long string this is the second line ```
You can use a trailing backslash to join separate strings like this: ``` if True: print "long test long test long test long test long " \ "test long test long test long test long test long test" ```
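Both answers above build one long logical line. When the goal is instead a multi-line string that stays visually indented with the code, `textwrap.dedent` from the standard library strips the common leading whitespace:

```python
import textwrap

if True:
    text = textwrap.dedent("""\
        long test long test
        test long test""")
    print(text)
```

The backslash after the opening triple quote suppresses the leading blank line, and `dedent` removes the eight-space indent shared by both lines.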
16,894
56,680,581
If there's a function `f(x)`, and x's type may be Int or String: if it's an Int, then f will return `x+1`; if it's a String, then f will reverse x and return it.

This is easy in dynamically typed languages like python and javascript, which can just use something like `isinstance(x, int)`. We can learn the type at runtime and branch with if-else, but in statically typed languages like kotlin, I don't know how to do that, because I don't know how to give x a type that may be Int or String.

```
def f(x):
    if isinstance(x, int):
        return x+1
    if isinstance(x, str):
        return x[::-1]
```

In haskell, we have pattern matching to do that:

```
f :: Either Int String -> Either Int String
f (Left x) = Left (x+1)
f (Right x) = Right (reverse x)
```

And in kotlin?
2019/06/20
[ "https://Stackoverflow.com/questions/56680581", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11674346/" ]
In kotlin you have [Arrow](https://arrow-kt.io/), which provides a lot of functional capabilities to the language. Among them you have [`EitherT`](https://arrow-kt.io/docs/arrow/data/eithert/). It lets you define:

```
fun f(x: Either<Int, String>): Either<Int, String> =
    x.bimap({ it + 1 }, { it.reversed() })
```
You could do something like: ``` fun getValue(id: Int): Any { ... } fun process(value: Int) { ... } fun process(value: String) { ... } val value = getValue(valueId) when (value) { is Int -> process(value) is String -> process(value) else -> ... } ``` This way, you can use method overloading to do the job for you based on params.
16,904
34,894,096
What is the best way to read in a line of numbers from a file when they are presented in a format like this: ``` [1, 2, 3 , -4, 5] [10, 11, -12, 13, 14 ] ``` Annoyingly, as I depicted, sometimes there are extra spaces between the numbers, sometimes not. I've attempted to use `CSV` to work around the commas, but the brackets and the random spaces are proving difficult to remove as well. Ideally I would append each number between the brackets as an `int` to a `list`, but of course the brackets are causing `int()` to fail. I've already looked into similar solutions suggested with [Removing unwanted characters from a string in Python](https://stackoverflow.com/questions/2780904/removing-unwanted-characters-from-a-string-in-python/ "this") and [Python Read File, Look up a String and Remove Characters](https://stackoverflow.com/questions/19201575/python-read-file-look-up-a-string-and-remove-characters), but unfortunately I keep falling short when I try to combine everything.
2016/01/20
[ "https://Stackoverflow.com/questions/34894096", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5814412/" ]
Use a regular expression to strip the unwanted characters from the string, replacing everything that is not part of a number with a space:

```
import re

text = "[1, 2, 3 , -4, 5]"
cleaned = re.sub(r"[^0-9-]+", " ", text)
numbers = [int(s) for s in cleaned.split()]
# numbers == [1, 2, 3, -4, 5]
```

Second method (note that `isdigit()` only keeps non-negative whole numbers):

```
>>> line = "h3110 23 cat 444.4 rabbit 11 2 dog"
>>> [int(s) for s in line.split() if s.isdigit()]
[23, 11, 2]
```
Use the [`json`](https://docs.python.org/3/library/json.html#json.loads) module to parse each line as a [JSON](http://json.org/) array. ``` import json list_of_ints = [] for line in open("/tmp/so.txt").readlines(): a = json.loads(line) list_of_ints.extend(a) print(list_of_ints) ``` This collects all integers from all lines into `list_of_ints`. Output: ``` [1, 2, 3, -4, 5, 10, 11, -12, 13, 14] ```
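`ast.literal_eval` from the standard library works just as well here, and it is equally tolerant of the stray spaces:

```python
import ast

list_of_ints = []
for line in ["[1, 2, 3 , -4, 5]", "[10, 11, -12, 13, 14 ]"]:
    list_of_ints.extend(ast.literal_eval(line))
print(list_of_ints)  # [1, 2, 3, -4, 5, 10, 11, -12, 13, 14]
```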
16,906
30,798,447
I tried the following code, but I ran into problems. I think `.values` is the problem, but how do I encode this as a Theano object? The following is my data source:

```
home_team,away_team,home_score,away_score
Wales,Italy,23,15
France,England,26,24
Ireland,Scotland,28,6
Ireland,Wales,26,3
Scotland,England,0,20
France,Italy,30,10
Wales,France,27,6
Italy,Scotland,20,21
England,Ireland,13,10
Ireland,Italy,46,7
Scotland,France,17,19
England,Wales,29,18
Italy,England,11,52
Wales,Scotland,51,3
France,Ireland,20,22
```

Here is the PyMC2 code, which works:

```
data_file = DATA_DIR + 'results_2014.csv'
df = pd.read_csv(data_file, sep=',')  # Or whatever it takes to get this into a data frame.

teams = df.home_team.unique()
teams = pd.DataFrame(teams, columns=['team'])
teams['i'] = teams.index

df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left')
df = df.rename(columns = {'i': 'i_home'}).drop('team', 1)
df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left')
df = df.rename(columns = {'i': 'i_away'}).drop('team', 1)

observed_home_goals = df.home_score.values
observed_away_goals = df.away_score.values

home_team = df.i_home.values
away_team = df.i_away.values

num_teams = len(df.i_home.drop_duplicates())
num_games = len(home_team)

g = df.groupby('i_away')
att_starting_points = np.log(g.away_score.mean())
g = df.groupby('i_home')
def_starting_points = -np.log(g.away_score.mean())

#hyperpriors
home = pymc.Normal('home', 0, .0001, value=0)
tau_att = pymc.Gamma('tau_att', .1, .1, value=10)
tau_def = pymc.Gamma('tau_def', .1, .1, value=10)
intercept = pymc.Normal('intercept', 0, .0001, value=0)

#team-specific parameters
atts_star = pymc.Normal("atts_star",
                        mu=0,
                        tau=tau_att,
                        size=num_teams,
                        value=att_starting_points.values)
defs_star = pymc.Normal("defs_star",
                        mu=0,
                        tau=tau_def,
                        size=num_teams,
                        value=def_starting_points.values)

# trick to code the sum to zero constraint
@pymc.deterministic
def atts(atts_star=atts_star):
    atts = atts_star.copy()
atts = atts - np.mean(atts_star) return atts @pymc.deterministic def defs(defs_star=defs_star): defs = defs_star.copy() defs = defs - np.mean(defs_star) return defs @pymc.deterministic def home_theta(home_team=home_team, away_team=away_team, home=home, atts=atts, defs=defs, intercept=intercept): return np.exp(intercept + home + atts[home_team] + defs[away_team]) @pymc.deterministic def away_theta(home_team=home_team, away_team=away_team, home=home, atts=atts, defs=defs, intercept=intercept): return np.exp(intercept + atts[away_team] + defs[home_team]) home_points = pymc.Poisson('home_points', mu=home_theta, value=observed_home_goals, observed=True) away_points = pymc.Poisson('away_points', mu=away_theta, value=observed_away_goals, observed=True) mcmc = pymc.MCMC([home, intercept, tau_att, tau_def, home_theta, away_theta, atts_star, defs_star, atts, defs, home_points, away_points]) map_ = pymc.MAP( mcmc ) map_.fit() mcmc.sample(200000, 40000, 20) ``` My attempt at porting to PyMC3 :) And I include the wrangling code. I defined my own data directory etc. ``` data_file = DATA_DIR + 'results_2014.csv' df = pd.read_csv(data_file, sep=',') # Or whatever it takes to get this into a data frame. 
teams = df.home_team.unique() teams = pd.DataFrame(teams, columns=['team']) teams['i'] = teams.index df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_home'}).drop('team', 1) df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left') df = df.rename(columns = {'i': 'i_away'}).drop('team', 1) observed_home_goals = df.home_score.values observed_away_goals = df.away_score.values home_team = df.i_home.values away_team = df.i_away.values num_teams = len(df.i_home.drop_duplicates()) num_games = len(home_team) g = df.groupby('i_away') att_starting_points = np.log(g.away_score.mean()) g = df.groupby('i_home') def_starting_points = -np.log(g.away_score.mean()) import theano.tensor as T import pymc3 as pm3 #hyperpriors x = att_starting_points.values y = def_starting_points.values model = pm.Model() with pm3.Model() as model: home3 = pm3.Normal('home', 0, .0001) tau_att3 = pm3.Gamma('tau_att', .1, .1) tau_def3 = pm3.Gamma('tau_def', .1, .1) intercept3 = pm3.Normal('intercept', 0, .0001) #team-specific parameters atts_star3 = pm3.Normal("atts_star", mu=0, tau=tau_att3, observed=x) defs_star3 = pm3.Normal("defs_star", mu=0, tau=tau_def3, observed=y) #Seems to be the error here. 
atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) # Unknown model parameters home_points3 = pm3.Poisson('home_points', mu=home_theta3, observed=observed_home_goals) away_points3 = pm3.Poisson('away_points', mu=home_theta3, observed=observed_away_goals) start = pm3.find_MAP() step = pm3.NUTS(state=start) trace = pm3.sample(2000, step, start=start, progressbar=True) pm3.traceplot(trace) ``` And I get an error like values isn't a Theano object. I think this is the .values part above. But i'm confused about how to convert this into a Theano tensor. The tensors are confusing me :) And the error for clarity, because I've misunderstood something in PyMC3 syntax. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-71-ce51c1a64412> in <module>() 23 24 #Seems to be the error here. ---> 25 atts = pm3.Deterministic('regression', atts_star3 - np.mean(atts_star3)) 26 home_theta3 = pm3.Deterministic('regression', T.exp(intercept3 + atts[away_team] + defs[home_team])) 27 /Users/peadarcoyle/anaconda/lib/python3.4/site-packages/numpy/core/fromnumeric.py in mean(a, axis, dtype, out, keepdims) 2733 2734 return _methods._mean(a, axis=axis, dtype=dtype, -> 2735 out=out, keepdims=keepdims) 2736 2737 def std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False): /Users/peadarcoyle/anaconda/lib/python3.4/site-packages/numpy/core/_methods.py in _mean(a, axis, dtype, out, keepdims) 71 ret = ret.dtype.type(ret / rcount) 72 else: ---> 73 ret = ret / rcount 74 75 return ret TypeError: unsupported operand type(s) for /: 'ObservedRV' and 'int' ```
2015/06/12
[ "https://Stackoverflow.com/questions/30798447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2610971/" ]
Here is my translation of your PyMC2 model:

```
with pm.Model() as model:
    # global model parameters
    home = pm.Normal('home', 0, .0001)
    tau_att = pm.Gamma('tau_att', .1, .1)
    tau_def = pm.Gamma('tau_def', .1, .1)
    intercept = pm.Normal('intercept', 0, .0001)

    # team-specific model parameters
    atts_star = pm.Normal("atts_star", mu=0, tau=tau_att, shape=num_teams)
    defs_star = pm.Normal("defs_star", mu=0, tau=tau_def, shape=num_teams)

    atts = pm.Deterministic('atts', atts_star - tt.mean(atts_star))
    defs = pm.Deterministic('defs', defs_star - tt.mean(defs_star))
    home_theta = tt.exp(intercept + home + atts[home_team] + defs[away_team])
    away_theta = tt.exp(intercept + atts[away_team] + defs[home_team])

    # likelihood of observed data
    home_points = pm.Poisson('home_points', mu=home_theta, observed=observed_home_goals)
    away_points = pm.Poisson('away_points', mu=away_theta, observed=observed_away_goals)
```

The big difference, as I see it, between PyMC2 and 3 model building is that the whole business of initial values in PyMC2 is not included in model building in PyMC3. It is pushed off into the model fitting portion of the code.

Here is a notebook that puts this model in context with your data and some fitting code: <http://nbviewer.ipython.org/gist/aflaxman/55e23195fe0a0b089103>
Your model is failing because you can't use NumPy functions on theano tensors. Thus

```
np.mean(atts_star3)
```

will give you an error.

You can remove `atts_star3 = pm3.Normal("atts_star",...)` and just use the NumPy array directly: `atts_star3 = x`. I don't think you need to explicitly model `tau_att3`, `tau_def3` or `defs_star` either.

Alternatively, if you want to keep those variables, you can replace `np.mean` with `theano.tensor.mean`, which should work.
16,912
61,195,729
I have been working with binance websocket. Worked well if the start/stop command is in the main programm. Now I wanted to start and stop the socket through a GUI. So I placed the start/stop command in a function each. But it doesn't work. Just no reaction while calling the function. Any idea what's the problem? Here the relevant parts of my code (I am quite new to python, any hints to this code are welcome): ``` def start_websocket(conn_key): bm.start() def stop_websocket(conn_key): bm.close() def process_message(msg): currentValues['text']= msg['p'] # --- main --- PUBLIC = '************************' SECRET = '************************' client = Client(api_key=PUBLIC, api_secret=SECRET) bm = BinanceSocketManager(client) conn_key = bm.start_trade_socket('BNBBTC', process_message) # create main window and set its title root = tk.Tk() root.title('Websocket') # create variable for displayed time and use it with Label label = tk.Label(root) label.grid(column=5, row=0) #root.geometry('500x500') bt_start_socket = tk.Button(root, text="Start Websocket", command=start_websocket(conn_key)) bt_start_socket.grid (column=1, row=1) bt_stop_socket = tk.Button(root, text="Sop Websocket", command=stop_websocket(conn_key)) bt_stop_socket.grid (column=1, row=10) ```
2020/04/13
[ "https://Stackoverflow.com/questions/61195729", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13305068/" ]
Whatever user is executing that code, does not have permission to write to that file path. If you go to C:\Users\chris\Source\Repos\inventory2.0\PIC\_Program\_1.0\Content\images\Components, right click, properties, Security tab, you will see the users that have permissions and what those permissions are. You can add or edit your users permissions there.
I think the problem is your application user don't have permission to access your the folder. If you are testing this in VS IIS express, then you should grant permission for your current user. However, if you are receiving this error message from IIS Server. Then you should grant permission for application pool identity(IIS Apppool\apppoolname). Process monitor can help you fix access denied error all the time. You just need to create a filter for Result ="access is denied". Then it will tell you who and what permission are required. <https://learn.microsoft.com/en-us/sysinternals/downloads/procmon>
16,914
52,436,084
I have a word list and I need to find the count of words that are present in the string. eg: ``` text_string = 'I came, I saw, I conquered!' word_list=['I','saw','Britain'] ``` I require a python script that prints ``` {'i':3,'saw':1,'britain':0} ```
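A minimal sketch of one way to produce that output, assuming case-insensitive matching and simple punctuation stripping:

```python
import string
from collections import Counter

text_string = 'I came, I saw, I conquered!'
word_list = ['I', 'saw', 'Britain']

# Lower-case everything and strip punctuation before counting.
words = [w.strip(string.punctuation).lower() for w in text_string.split()]
counts = Counter(words)

result = {w.lower(): counts[w.lower()] for w in word_list}
print(result)  # {'i': 3, 'saw': 1, 'britain': 0}
```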
2018/09/21
[ "https://Stackoverflow.com/questions/52436084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9635284/" ]
You can use a [property accessor](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Property_Accessors) to reference `mutableValue` from accessing the property `a` like this: ```js let mutableValue = 3 const obj = { get a() { return mutableValue } } console.log(obj.a) mutableValue = 4 console.log(obj.a) ```
Objects are reference values, so try

```
let mutableValue = {aa: 3}
const getText = () => mutableValue
const obj = {a: getText()}
```

run

```
obj.a // {aa: 3}
mutableValue.aa = 4
obj.a // {aa: 4}
```
16,915
58,901,682
First of all I tried the command from their main page that they gave me:

```
pip3 install torch==1.3.1+cpu torchvision==0.4.2+cpu -f https://download.pytorch.org/whl/torch_stable.html
```

Could not find a version that satisfies the requirement torch==1.3.1+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.3.1.+cpu

After this I decided to pick an available one from this list: <https://download.pytorch.org/whl/cpu/stable>

So in the end I tried something like this

```
pip3 install torch-1.1.0-cp37-cp37m-win_amd64.whl -f https://download.pytorch.org/whl/torch_stable.html
```

And now it says that this is not a supported wheel on my platform. Wtf? (I use Windows 7, 64-bit Python, an AMD processor; location of Python: C:\Python38, location of pip: C:\Python38\Scripts)
2019/11/17
[ "https://Stackoverflow.com/questions/58901682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8037832/" ]
There are no wheels for Python 3.8 at <https://download.pytorch.org/whl/torch_stable.html>.

> not supported wheel on my platform

This is because the wheel is for Python 3.7. Advice: downgrade to Python 3.7.
Adding to @phd's answer, you could consider [installing from source](https://github.com/pytorch/pytorch#from-source). Note that I have built PyTorch from the source in the past (and it was a mostly straightforward process) but I have not done this on windows or for Python 3.8.
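To see why the `cp37` wheel is rejected, you can print the interpreter tag your own Python expects; a small sketch using only the standard library:

```python
import sys

# The "cpXY" part of a wheel filename must match the running interpreter,
# e.g. torch-1.1.0-cp37-cp37m-win_amd64.whl only installs on CPython 3.7.
major, minor = sys.version_info[:2]
expected_tag = "cp{}{}".format(major, minor)
print(expected_tag)  # e.g. 'cp38' on Python 3.8
```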
16,917
12,938,786
Im trying to pass a sql ( wich works perfectly if i run it on the client ) inside my python script, but i receive the error "not enough arguments for format string" Following, the code: ``` sql = """ SELECT rr.iserver, foo.*, rr.queue_capacity, rr.queue_refill_level, rr.is_concurrent, rr.max_execution_threads, rr.retrieval_status, rr.processing_status FROM ( SELECT DISTINCT ip.package, it. TRIGGER FROM wip.info_package ip, wip.info_trigger it WHERE ip.service = it.service and ip.iserver = '%(iserver)s' and it.iserver = %(iserver)s' AND package = '%(package)s' UNION SELECT '%(package)s' AS package, TRIGGER FROM info_trigger WHERE TRIGGER LIKE '%(package)s%' ) AS foo, info_trigger rr WHERE rr. TRIGGER = foo. TRIGGER """ % {'iserver' : var_iserver,'package' : var_package} dcon = Database_connection() getResults = dcon.db_call(sql, dbHost, dbName, dbUser, dbPass) # more and more code to work the result.... ``` My main problem on this is how i can pass `'%(iserver)s' , '%(package)s'` correctly. Because usualy, when i select's or insert's on database, i only use two variables , but i dont know how to do it with more than two. Thanks.
2012/10/17
[ "https://Stackoverflow.com/questions/12938786", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1323826/" ]
```
WHERE TRIGGER LIKE '%(package)s%'
```

You have an EXTRA '%'. If you want the actual character '%', you need to escape it with a double '%'. So it should be

```
WHERE TRIGGER LIKE '%(package)s%%'
```

if you want to display a '%', and

```
WHERE TRIGGER LIKE '%(package)s'
```

if you don't.
Don't build SQL like this using `%`: ``` "SELECT %(foo)s FROM bar WHERE %(baz)s" % {"foo": "FOO", "baz": "1=1;-- DROP TABLE bar;"} ``` This opens the door for nasty SQL injection attacks. Use the proper form of your [Python Database API Specification v2.0](http://www.python.org/dev/peps/pep-0249/) adapter. For Psychopg this form is described [here](http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries). ``` cur.execute("SELECT %(foo)s FROM bar WHERE %(baz)s", {"foo": "FOO", "baz": "1=1;-- DROP TABLE bar;"}) ```
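The `%%` escaping from the first answer is plain Python string formatting, so it can be checked without a database at all; a quick sketch (the package name is made up):

```python
# '%%' survives %-formatting as a literal '%', which is what the SQL
# LIKE pattern needs; a single trailing '%' would raise a ValueError instead.
template = "SELECT * FROM info_trigger WHERE TRIGGER LIKE '%(package)s%%'"
rendered = template % {'package': 'my_pkg'}
print(rendered)  # ... LIKE 'my_pkg%'
```

As the second answer stresses, for real queries prefer passing the parameters to `cursor.execute` instead of formatting them into the string yourself.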
16,924
54,311,678
I have a UDP socket application where I am working on the server side. To test the server side I put together a simple python client program that sends the message "hello world how are you". The server, should then receive the message, convert to uppercase and send back to the client. The problem lies here: I can observe while debugging that the server is receiving the message, applies the conversion, sends the response back and eventually waits for another message. However the python client is not receiving the message but wait's endlessly for the response from the server. I found (an option) through the web that in order for the client to receive a response back it needs to bind to the server, which goes against what I have seen in a text book (The Linux Programming Interface). Nevertheless, I tried to bind the client to the server and the python program failed to connect at the binding line (don't know if I did it correctly). Python version is 2.7.5. The client program runs on RedHat and the server runs on a target module with Angstrom (it's cross compiled for a 32 bit processor). Here is the code for the client: ``` import socket import os UDP_IP = "192.168.10.4" UDP_PORT = 50005 #dir_path = os.path.dirname(os.path.realpath(__file__)) MESSAGE = "hello world how are you" print "UDP target IP: ", UDP_IP print "UDP target port: ", UDP_PORT print "message: ", MESSAGE sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) #sock.bind((UDP_IP, UDP_PORT)) print "Sending message..." sock.sendto(MESSAGE, (UDP_IP, UDP_PORT)) print "Message sent!" print "Waiting for response..." 
data = sock.recv(1024) print "Received", repr(data) ``` And here is the code for the server: ``` void server_side(void) { printf("Server start up.\n"); struct sockaddr_in svaddr; struct sockaddr_in claddr; int sfd; int j; ssize_t numBytes; socklen_t len; char buf[BUF_SIZE]; char claddrStr[INET_ADDRSTRLEN]; //int output = open("test_output.txt", O_WRONLY|O_CREAT, 0664); printf("Creating new UDP socket...\n"); sfd = socket(AF_INET, SOCK_DGRAM, 0); /* Create Server Socket*/ if (sfd == -1) { errExit("socket"); } printf("Socket has been created!\n"); memset(&svaddr, 0, sizeof(struct sockaddr_in)); svaddr.sin_family = AF_INET; svaddr.sin_addr.s_addr = htonl(INADDR_ANY); svaddr.sin_port = htons(PORT_NUM); printf("Binding in process...\n"); if (bind(sfd, (struct sockaddr *) &svaddr, sizeof(struct sockaddr_in)) == -1) { errExit("bind"); } printf("Binded!\n"); /* Receive messages, convert to upper case, and return to client.*/ for(;;) { len = sizeof(struct sockaddr_in); numBytes = recvfrom(sfd, buf, BUF_SIZE, 0, (struct sockaddr *) &claddr, &len); if (numBytes == -1) { errExit("recvfrom"); } if (inet_ntop(AF_INET, &claddr.sin_addr, claddrStr, INET_ADDRSTRLEN) == NULL) { printf("Couldn't convert client address to string.\n"); } else { printf("Server received %ld bytes from (%s, %u).\n", (long) numBytes, claddrStr, ntohs(claddr.sin_port)); } claddr.sin_port = htons(PORT_NUM); for (j = 0; j< numBytes; j++) { buf[j] = toupper((unsigned char) buf[j]); } if (sendto(sfd, buf, numBytes, 0, (struct sockaddr *) &claddr, len) != numBytes) { fatal("sendto"); } } } ``` Again the problem is I am not receiving the response and printing the message back on the client terminal. I should receive the same message in all uppercase letters. I feel like I am missing a small detail. Thanks for the help!
2019/01/22
[ "https://Stackoverflow.com/questions/54311678", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5970879/" ]
**Quick and dirty:** Remove this line from your C code: ``` claddr.sin_port = htons(PORT_NUM); ``` **Now why:** When you send a message in your python script, your operating system will fill a [UDP packet](https://en.wikipedia.org/wiki/User_Datagram_Protocol) with the destination IP address and port you specified, the IP address of the machine you are using on the source IP field, and finally, a source port assigned "randomly" by the OS. Do notice, this is not the same port as the destination port, and even if it was the case, how would you know what program will receive the source message?(both would be hearing messages from the same port) Luckily, this is not possible. Now, when your C code receives this packet, it will know who sent the message, and you have access to this information though the sockaddr struct filled by recvfrom. If you want to send some information back, you must send the packet with a destination port(as seen by the server) equal to the source port as seen by the client, which again, is not the same port that you are listening on the server. By doing `claddr.sin_port = htons(PORT_NUM)`, you set overwrite the field that contained the source port of the client with the server port, and when you try to send this packet, 2 things may happen: * If the client ran from the same computer, the destination IP and source IP will be the same, and you've just set the destination port to be the port that the server is listening, so you will have a message loop. * If running on different computers, the packet will be received by the client computer, but there probably won't be any programs waiting for messages on that port, so it is discarded. A half-baked analogy: you receive a letter from a friend, but when writing back to him, you change the number of his house with the number of your house... does not make much sense. Only difference is that this friend of yours moves a lot, and each letter may have a different number, but that is not important. 
In theory, you must bind if you want to receive data back; in this case bind is the equivalent of listening on that port. This answer clarifies why it was not necessary in this case: <https://stackoverflow.com/a/14243544/6253527> If you are on linux, you can see which port your OS assigned for your UDP socket using `sudo ss -antup | grep python`
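The round trip both answers describe (client source port picked by the OS, server replying to the address `recvfrom` reports, no extra bind on the client) can be demonstrated on loopback in one Python script; a minimal sketch, not the asker's exact setup:

```python
import socket

# Server side: bind to loopback; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(('127.0.0.1', 0))
server_addr = server.getsockname()

# Client side: no bind needed; the OS assigns a source port on sendto.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)
client.sendto(b'hello world how are you', server_addr)

# Server replies to exactly the address recvfrom reported -- it must NOT
# overwrite the port with its own, which was the bug in the question.
data, client_addr = server.recvfrom(1024)
server.sendto(data.upper(), client_addr)

reply, _ = client.recvfrom(1024)
print(reply)  # b'HELLO WORLD HOW ARE YOU'

server.close()
client.close()
```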
N. Dijkhoffz, Would love to hear how you fixed it and perhaps post the correct code.
16,925
36,965,951
I'm a beginner in Python. I'm a bit confused about this basic Python program and its output

```
for num in range(2,10):
   for i in range(2,num):
       if (num % i) == 0:
           break
   else:
       print(num)
```

output

```
Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:38:48) [MSC v.1900 32 bit (Intel)] on win32 Type "copyright", "credits" or "license()" for more information. >>> ================= RESTART: C:\Users\ms\Desktop\python\new.py ================= 2 3 5 7 >>>
```

As per the condition

```
if (2 % 2) == 0: break
```

how does 2 get printed to the output display?

Thanks for helping ..
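For what it's worth, the behaviour in question comes from `for ... else`: the `else` branch runs when the loop finishes without hitting `break`, and for `num == 2` the inner loop never runs at all because `range(2, 2)` is empty. A small sketch confirming both points:

```python
# range(2, 2) is empty, so for num == 2 the `break` can never execute
# and the else: branch runs, printing 2.
print(list(range(2, 2)))  # []

primes = []
for num in range(2, 10):
    for i in range(2, num):
        if num % i == 0:
            break
    else:
        primes.append(num)
print(primes)  # [2, 3, 5, 7]
```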
2016/05/01
[ "https://Stackoverflow.com/questions/36965951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5483135/" ]
You can use `ajax`. **timestamp.php** ``` <?php date_default_timezone_set('YOUR TIMEZONE'); echo $timestamp = date('H:i:s'); ``` **jQuery** ``` $(document).ready(function() { setInterval(timestamp, 1000); }); function timestamp() { $.ajax({ url: 'http://localhost/timestamp.php', success: function(data) { $('#timestamp').html(data); }, }); } ``` **HTML** ``` <div id="timestamp"></div> ```
PHP is a server-side programming language; Javascript is a client-side programming language. The PHP code that fills the variables will only run when the webpage is loaded; after that you are left with Javascript code and nothing more. I recommend you look for a basic programming book that covers concepts such as client-side and server-side code, because (not trying to be harsh) you seem to have a big misunderstanding about how those things work.
16,926
58,543,054
I am trying to use pyspark to preprocess data for the prediction model. I get an error when I try spark.createDataFrame out of my preprocessing.Is there a way to check how processedRDD look like before making it to dataframe? ``` import findspark findspark.init('/usr/local/spark') import pyspark from pyspark.sql import SQLContext import os import pandas as pd import geohash2 sc = pyspark.SparkContext('local', 'sentinel') spark = pyspark.SQLContext(sc) sql = SQLContext(sc) working_dir = os.getcwd() df = sql.createDataFrame(data) df = df.select(['starttime', 'latstart','lonstart', 'latfinish', 'lonfinish', 'trip_type']) df.show(10, False) processedRDD = df.rdd processedRDD = processedRDD \ .map(lambda row: (row, g, b, minutes_per_bin)) \ .map(data_cleaner) \ .filter(lambda row: row != None) print(processedRDD) featuredDf = spark.createDataFrame(processedRDD, ['year', 'month', 'day', 'time_cat', 'time_num', 'time_cos', \ 'time_sin', 'day_cat', 'day_num', 'day_cos', 'day_sin', 'weekend', \ 'x_start', 'y_start', 'z_start','location_start', 'location_end', 'trip_type']) ``` I am getting this error: ``` [Stage 1:> (0 + 1) / 1]2019-10-24 15:37:56 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1) raise AppRegistryNotReady("Apps aren't loaded yet.") django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet. 
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452) at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588) at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator$class.foreach(Iterator.scala:891) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48) at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153) at org.apache.spark.api.python.PythonRDD$$anonfun$3.apply(PythonRDD.scala:153) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101) at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:121) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more ``` I do not understand what this have to do with importing an app
2019/10/24
[ "https://Stackoverflow.com/questions/58543054", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9240223/" ]
I don't know what this script has to do with Django exactly, but adding the following lines at the top of the script will probably fix this issue: ``` import os os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings') import django django.setup() ```
Basically, you need to load your settings and populate Django’s application registry before doing anything else. You have all the information required in the Django docs. <https://docs.djangoproject.com/en/2.2/topics/settings/#calling-django-setup-is-required-for-standalone-django-usage>
16,928
149,474
This XML file contained archived news stories for all of last year. I was asked to sort these stories by story categor[y|ies] into new XML files. ``` big_story_export.xml ``` turns into ``` lifestyles.xml food.xml nascar.xml ``` ...and so on. I got the job done using a one-off python script, *however*, **I originally attempted this using XSLT**. This resulted in frustration as my XPATH selections were crapping the bed. Test files were transformed perfectly, but putting the big file up against my style sheet resulted in ...*nothing*. What strategies do you recommend for ensuring that files like this will run through XSLT? *This was handed to me by a vendor, so imagine that I don't have a lot of leverage when it comes to defining the structure of this file.* If you guys want code samples, I'll put some together. If anything, I'd be satisfied with some tips for making XML+XSLT work together smoothly. --- @Sklivvz I was using python's libxml2 & libxslt to process this. I'm looking into xsltproc now. It seems like a good tool for these one-off situations. Thanks! --- @diomidis-spinellis It's well-formed, though (as mentioned) I don't have faculties to discover it's validity. As for writing a Schema, I like the idea. The amount of time I invest in getting this one file validated would be impractical if it were a one-time thing, though I foresee having to handle more files like this from our vendor. Writing a schema (and submitting it to the vendor) would be an excellent long-term strategy for managing XML funk like this. Thanks!
2008/09/29
[ "https://Stackoverflow.com/questions/149474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/22491/" ]
This sounds like a bug in the large XML file or the XSLT processor. There are two things you should check on your file. 1. Is the file well-formed XML? That is, are all tags and attributes properly terminated and matched? An XML processor, like [xmlstarlet](http://xmlstar.sourceforge.net/), can tell you that. 2. Does the file contain valid XML? For this you need a schema and an XML validator ([xmlstarlet](http://xmlstar.sourceforge.net/) can do this trick as well). I suggest you invest some effort in writing the schema definition of your file. It will simplify your debugging a lot, because you can then easily pinpoint the exact source of any problems you may be having. If the file is well-formed and valid, but the XSLT processor still refuses to give you the results you would expect, you can be sure that the problem lies in the processor, and you should try a different one.
What language/parser were you using? For large files I try to use Unix command line tools. They are usually much, much more efficient than other solutions and don't "crap out" on large files. Try using `xsltproc`
16,930
56,921,192
I have created a text file using file operations in python. I want the file to be pushed to my existing GitLab repository. I have tried the below code, where I get the created file in my local folders.

```
file_path = 'E:\My material\output.txt'
k= 'Fail/Pass'
with open (file_path, 'w+') as text:
    text.write('Test case :' +k)
    text.close()
```

What process, steps, or modifications to file\_path are needed to move the created text file to the GitLab repository through python code?
2019/07/07
[ "https://Stackoverflow.com/questions/56921192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7547718/" ]
You can use `.loc` and column names in the following way: ``` import pandas as pd import numpy as np np.random.seed(12) df = pd.DataFrame( { "df0" : np.random.choice(["a", "b"], 100), "df1" : np.random.randint(0, 15, 100), "df2" : np.random.randint(0, 15, 100), "df3" : np.random.randint(0, 15, 100), "df4" : np.random.randint(0, 15, 100), } ) print(df.head()) l = [2, 3, 1, 4] df.loc[:, ["df1", "df2", "df3", "df4"]] *= np.array(l) df.head() ``` Here is the output: ``` df0 df1 df2 df3 df4 0 b 5 10 7 13 1 b 3 2 13 3 2 a 5 0 11 14 3 b 11 1 7 10 4 b 0 4 1 12 df0 df1 df2 df3 df4 0 b 10 30 7 52 1 b 6 6 13 12 2 a 10 0 11 56 3 b 22 3 7 40 4 b 0 12 1 48 ```
I think you were doing it correctly; you just need to select all the columns you want to multiply:

```
df.iloc[:,1:] = df.iloc[:,1:]*l
```
16,938
31,745,613
I have the below mysql table. I need to pull out the first two rows as a dictionary using python. I am using python 2.7. ``` C1 C2 C3 C4 C5 C6 C7 25 33 76 87 56 76 47 67 94 90 56 77 32 84 53 66 24 93 33 88 99 73 34 52 85 67 82 77 ``` I use the following code ``` exp = MySQLdb.connect(host,port,user,passwd,db) exp_cur = van.cursor(MySQLdb.cursors.DictCursor) exp_cur.execute("SELECT * FROM table;") data = exp_cur.fetchone() data_keys = data.keys() #print data_keys ``` The expected output (data\_keys) is ``` ['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7'] ``` But I get ``` ['C1', 'C3', 'C2', 'C5', 'C4', 'C7', 'C6'] ``` What is the mistake in my code?
2015/07/31
[ "https://Stackoverflow.com/questions/31745613", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5070767/" ]
[`dict` keys have no easily predictable order](https://stackoverflow.com/q/4458169/190597). To obtain the database table fields in the order in which they appear in the database, use the cursor's [description attribute](https://www.python.org/dev/peps/pep-0249/#description): ``` fields = [item[0] for item in cursor.description] ``` --- For example, ``` import MySQLdb import MySQLdb.cursors as cursors import config connection = MySQLdb.connect( host=config.HOST, user=config.USER, passwd=config.PASS, db=config.MYDB, cursorclass=cursors.DictCursor) with connection as cursor: cursor.execute('DROP TABLE IF EXISTS test') cursor.execute("""CREATE TABLE test (foo int, bar int, baz int)""") cursor.execute("""INSERT INTO test (foo, bar, baz) VALUES (%s,%s,%s)""", (1,2,3)) cursor.execute('SELECT * FROM test') data = cursor.fetchone() fields = [item[0] for item in cursor.description] ``` `data.keys()` may return the fields in any order: ``` print(data.keys()) # ['baz', 'foo', 'bar'] ``` But `fields` is always `('foo', 'bar', 'baz')`: ``` print(fields) # ('foo', 'bar', 'baz') ```
Instead of ``` data_keys = data.keys() ``` Try: ``` data_keys = exp_cur.column_names ``` Source: [10.5.11 Property MySQLCursor.column\_names](http://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-column-names.html)
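Both approaches can be sanity-checked without a live database: per the DB-API, `cursor.description` is just a sequence of 7-item tuples with the column name first. A sketch with hypothetical description data:

```python
# Hypothetical cursor.description contents; per PEP 249 each entry is a
# 7-item sequence whose first item is the column name, in table order.
description = [
    ('C1', 3, None, None, None, None, 0),
    ('C2', 3, None, None, None, None, 0),
    ('C3', 3, None, None, None, None, 0),
]

fields = [item[0] for item in description]
print(fields)  # ['C1', 'C2', 'C3']
```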
16,941
67,384,831
Using this option in python it is possible to calculate the mean from multiple csv files. If file1.csv through file100.csv are all in the same directory, you can use this Python script:

```
#!/usr/bin/env python3
N = 100

mean_sum = 0
std_sum = 0

for i in range(1, N + 1):
    with open(f"file{i}.csv") as f:
        mean_sum += float(f.readline().split(",")[1])
        std_sum += float(f.readline().split(",")[1])

print(f"Mean of means: {mean_sum / N}")
print(f"Mean of stds: {std_sum / N}")
```

How is it possible to do the same in R?
2021/05/04
[ "https://Stackoverflow.com/questions/67384831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14961922/" ]
Try the following **Solution 1, create a new axios instance in your plugins folder:** ``` export default function ({ $axios }, inject) { // Create a custom axios instance const api = $axios.create({ headers: { // headers you need } }) // Inject to context as $api inject('api', api) } ``` Declare this plugin in nuxt.config.js, then you can send your request : ``` this.$api.$put(...) ``` **Solution 2, declare axios as a plugin in plugins/axios.js and set the hearders according to the request url:** ``` export default function({ $axios, redirect, app }) { const apiS3BaseUrl = // Your s3 base url here $axios.onRequest(config => { if (config.url.includes(apiS3BaseUrl) { setToken(false) // Or delete $axios.defaults.headers.common['Authorization'] } else { // Your current axios config here } }); } ``` Declare this plugin in nuxt.config.js Personally I use the first solution, it doesn't matter if someday the s3 url changes. Here is the [doc](https://axios.nuxtjs.org/extend)
You can pass the below configuration to `nuxt-auth`. Beware, those `plugins` are not related to the root configuration, but related to the `nuxt-auth` package. `nuxt.config.js` ```js auth: { redirect: { login: '/login', home: '/', logout: '/login', callback: false, }, strategies: { ... }, plugins: ['~/plugins/config-file-for-nuxt-auth.js'], }, ``` Then, create a plugin file that will serve as configuration for `@nuxt/auth` (you need to have `@nuxt/axios` installed of course. PS: in this file, `exampleBaseUrlForAxios` is used as an example to set the variable for the axios calls while using `@nuxt/auth`. `config-file-for-nuxt-auth.js` ```js export default ({ $axios, $config: { exampleBaseUrlForAxios } }) => { $axios.defaults.baseURL = exampleBaseUrlForAxios // I guess that any usual axios configuration can be done here } ``` This is the recommended way of doing things as explained in this [article](https://nuxtjs.org/blog/moving-from-nuxtjs-dotenv-to-runtime-config/). Basically, you can pass runtime variables to your project when you're using this. Hence, here we are passing a `EXAMPLE_BASE_URL_FOR_AXIOS` variable (located in `.env`) and renaming it to a name that we wish to use in our project. `nuxt.config.js` ```js export default { publicRuntimeConfig: { exampleBaseUrlForAxios: process.env.EXAMPLE_BASE_URL_FOR_AXIOS, } } ```
16,944
35,387,277
Is there a way in Python with Selenium so that, instead of selecting an option from a drop-down menu by value or name, I can select an option by its position? Like "select option 1", or in another case "select option 2". This is because it's possible that the value or text of a drop-down menu option can change, so to ensure an option is selected I just want to say "select the first option" (regardless of what it is), or in another case "select the fifth option", etc. Below is the code I have using value to select an option, which will be a problem if the value changes in the future:

```
pax_one_bags = Select(driver.find_element_by_id("ctl00_MainContent_passengerList_PassengerGridView_ctl02_baggageOutDropDown"))
pax_one_bags.select_by_value("2")
pax_two_bags = Select(driver.find_element_by_id("ctl00_MainContent_passengerList_PassengerGridView_ctl03_baggageOutDropDown"))
pax_two_bags.select_by_value("5")
```
2016/02/14
[ "https://Stackoverflow.com/questions/35387277", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1096892/" ]
If this represents the way you have been trying to match output, it's your problem: ``` while(reader.readLine() != "\u001B") {} ``` Except in special cases, you have to use the `equals()` method on `String` instances: ``` while (true) { String line = reader.readLine(); if ((line == null) || "\u001B".equals(line)) break; } ``` I'm not sure why you expect `ESC` and a newline when a process exits though.
I believe you need to call the Process.waitFor() method. So you need something like:

```
Process p = build.start();
p.waitFor();
```

If you are trying to simulate a bash shell (allowing input of a command, executing it, and processing the output without terminating), there is an open source project that may be a good reference for how to do this. It is available on Git. Take a look at the [Jediterm](https://github.com/JetBrains/jediterm) Pure Java Emulator. Thinking about simulating a bash, I also found this example of [Piping between processes](https://blog.art-of-coding.eu/piping-between-processes/) to be relevant. It does show how to extract the output of an executing process and pipe that data as the input into another Java Process. Should be helpful.
16,945
60,959,688
I have a python2 script I want to run with the [pwntools python module](https://github.com/Gallopsled/pwntools) and I tried running it using: > > python test.py > > > But then I get: > > File "test.py", line 3, in > from pwn import \* > ImportError: No module named pwn > > > But when I try it with python3, it gets past that error but it runs into other errors because it's a python2 script. Why does pwntools not work when I run it with python2 and can I get my script to run without porting the whole thing to python3?
2020/03/31
[ "https://Stackoverflow.com/questions/60959688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12941653/" ]
**Yes**, It's absolutely possible to include a `JavaScript` object into `makeStyles`. Thanks to the `spread` operator. > > Advice is to spread over the object first, so that you can easily override any styles. > > Therefore it's preferred to do as follows. > > > ```js const useStyles = makeStyles(theme => ({ textField: { ...stylesFromDatabase, // object width: "100%", color: "green", // this would override "red" (easier fine tuning) }, }); ```
For the benefit of future posters, the code in my original post worked perfectly, I just had something overriding it later! (Without the callback function it was undefined) – H Capello
16,946
65,370,140
Thanks for looking into this. I have a Python program that needs `process_tweet` and `build_freqs` for an NLP task. `nltk` is installed already and `utils` **wasn't**, so I installed it via `pip install utils`, but the two modules mentioned above apparently weren't installed. The error I got is the standard one:

```
ImportError: cannot import name 'process_tweet' from 'utils' (C:\Python\lib\site-packages\utils\__init__.py)
```

What have I done wrong, or is there anything missing? Also, I referred to [this Stack Overflow answer](https://stackoverflow.com/questions/37096364/python-importerror-cannot-import-name-utils) but it didn't help.
2020/12/19
[ "https://Stackoverflow.com/questions/65370140", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11779635/" ]
You can easily access any source code with `??` in IPython/Jupyter — for example, in this case: `process_tweet??`. The code below is from the deeplearning.ai NLP course's custom `utils` library (a trailing `return tweets_clean` is added here, since the docstring declares that output but the pasted snippet was cut off before it):

```
def process_tweet(tweet):
    """Process tweet function.
    Input:
        tweet: a string containing a tweet
    Output:
        tweets_clean: a list of words containing the processed tweet
    """
    stemmer = PorterStemmer()
    stopwords_english = stopwords.words('english')
    # remove stock market tickers like $GE
    tweet = re.sub(r'\$\w*', '', tweet)
    # remove old style retweet text "RT"
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    # remove hyperlinks
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
    # remove hashtags
    # only removing the hash # sign from the word
    tweet = re.sub(r'#', '', tweet)
    # tokenize tweets
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
                               reduce_len=True)
    tweet_tokens = tokenizer.tokenize(tweet)

    tweets_clean = []
    for word in tweet_tokens:
        if (word not in stopwords_english and  # remove stopwords
                word not in string.punctuation):  # remove punctuation
            # tweets_clean.append(word)
            stem_word = stemmer.stem(word)  # stemming word
            tweets_clean.append(stem_word)
    return tweets_clean
```
Try this code, it should work:

```
def process_tweet(tweet):
    stemmer = PorterStemmer()
    stopwords_english = stopwords.words('english')
    tweet = re.sub(r'\$\w*', '', tweet)
    tweet = re.sub(r'^RT[\s]+', '', tweet)
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
    tweet = re.sub(r'#', '', tweet)
    tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
                               reduce_len=True)
    tweet_tokens = tokenizer.tokenize(tweet)

    tweets_clean = []
    for word in tweet_tokens:
        if (word not in stopwords_english and
                word not in string.punctuation):
            stem_word = stemmer.stem(word)  # stemming word
            tweets_clean.append(stem_word)
    return tweets_clean
```
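If you only want the regex cleaning steps without installing nltk, a stripped-down, stdlib-only sketch is below. Note the assumptions: it skips the stopword removal and stemming (those need nltk's `stopwords` and `PorterStemmer`), it tokenizes with a plain `split()` rather than `TweetTokenizer`, and the sample tweet is invented for illustration.

```python
import re
import string

def clean_tweet(tweet):
    # same regex steps as the course helper, minus the nltk parts
    tweet = re.sub(r'\$\w*', '', tweet)                 # stock tickers like $GE
    tweet = re.sub(r'^RT[\s]+', '', tweet)              # old-style retweet marker
    tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)  # hyperlinks
    tweet = re.sub(r'#', '', tweet)                     # the hash sign only
    # naive tokenization; drop bare punctuation tokens
    return [w for w in tweet.lower().split()
            if w not in string.punctuation]

tokens = clean_tweet('RT $GE check #nlp https://example.com')
```

With the invented input above, the ticker, retweet marker, link, and hash sign are all stripped, leaving `['check', 'nlp']`.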
16,947
25,916,444
I would like to test, using unittest, a method which reads from a file using a context manager:

```
with open(k_file, 'r') as content_file:
    content = content_file.read()
```

I don't want to have to create a file on my system, so I wanted to mock it, but I'm not succeeding much at the moment. I've found [mock\_open](http://www.voidspace.org.uk/python/mock/helpers.html#mock-open) but I don't really understand how I'm supposed to use it and feed the mock as content\_file in my test case. There is for instance this [post](https://stackoverflow.com/a/19663055/914086) here, but I do not understand how one is supposed to write this in a test case without modifying the original code. Could anyone point me in the right direction?
2014/09/18
[ "https://Stackoverflow.com/questions/25916444", "https://Stackoverflow.com", "https://Stackoverflow.com/users/914086/" ]
`mock_open()` is the way to go; you patch `open` in your code-under-test with the result of a `mock_open()` call:

```
mocked_open = unittest.mock.mock_open(read_data='file contents\nas needed\n')
with unittest.mock.patch('yourmodule.open', mocked_open, create=True):
    # tests calling your code; the open function will use the mocked_open object
```

The [`patch()` context manager](http://www.voidspace.org.uk/python/mock/patch.html#patch) will put an `open()` global into your module (I named it `yourmodule`), bound to the `mocked_open()`-produced object. This object will pretend to produce a file object when called.

The only thing this mock file object *won't* do yet is iteration; you cannot do `for line in content_file` with it, at least not in current versions of the `mock` library. See [Customizing unittest.mock.mock\_open for iteration](https://stackoverflow.com/questions/24779893/customizing-unittest-mock-mock-open-for-iteration) for a work-around.
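As a fuller, self-contained sketch of the same idea (Python 3, standard library only): here the patch target is `builtins.open` rather than a module-level `open`, which is the Python 3 spelling when the code-under-test lives in the same module; the function and file name are made up for illustration.

```python
from unittest import mock

def read_content(path):
    # hypothetical code-under-test: reads a whole file via a context manager
    with open(path, 'r') as content_file:
        content = content_file.read()
    return content

# mock_open builds a mock whose return value behaves like an open file
mocked_open = mock.mock_open(read_data='file contents\nas needed\n')
with mock.patch('builtins.open', mocked_open):
    result = read_content('k_file.txt')

# the mock also records how it was called
mocked_open.assert_called_once_with('k_file.txt', 'r')
```

No file named `k_file.txt` ever needs to exist; `read()` simply returns the `read_data` given to `mock_open()`.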
An alternative is [pyfakefs](http://github.com/jmcgeheeiv/pyfakefs). It allows you to create a fake file system, write and read files, set permissions and more without ever touching your real disk. It also contains a practical example and tutorial showing how to apply pyfakefs to both unittest and doctest.
16,956
26,679,011
I am trying to use mpl\_toolkits.basemap in python, and every time I use a function for plotting like drawcoastlines() or any other, the program automatically shows the plot on the screen. My problem is that I am trying to use those programs later on an external server and it returns 'SystemExit: Unable to access the X Display, is $DISPLAY set properly?'

Is there any way I can avoid the plot being shown when I use a Basemap function on it? I just want to save it to a file so later I can read it externally. My code is:

```
from mpl_toolkits.basemap import Basemap
import numpy as np

m = Basemap(projection='robin', lon_0=0)
m.drawcoastlines()
#m.fillcontinents(color='coral',lake_color='aqua')
# draw parallels and meridians.
m.drawparallels(np.arange(-90., 120., 10.))
m.drawmeridians(np.arange(0., 360., 60.))
```
2014/10/31
[ "https://Stackoverflow.com/questions/26679011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3510686/" ]
Use the `Agg` backend, it doesn't require a graphical environment. Do this at the very beginning of your script:

```
import matplotlib as mpl
mpl.use('Agg')
```

See also the FAQ on [Generate images without having a window appear](http://matplotlib.org/faq/howto_faq.html#generate-images-without-having-a-window-appear).
The easiest way is to turn off the interactive mode of matplotlib:

```
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np

# do NOT show the figure interactively
plt.ioff()

m = Basemap(projection='robin', lon_0=0)
m.drawcoastlines()
#m.fillcontinents(color='coral',lake_color='aqua')
# draw parallels and meridians.
m.drawparallels(np.arange(-90., 120., 10.))
m.drawmeridians(np.arange(0., 360., 60.))
```
16,957
32,567,357
Since today I've been using [remote\_api](https://cloud.google.com/appengine/articles/remote_api) (python) to access the datastore on GAE. I usually do `remote_api_shell.py -s <mydomain>`. Today I tried and it fails, the error is:

> oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE\_APPLICATION\_CREDENTIALS must be defined pointing to a file defining the credentials. See <https://developers.google.com/accounts/docs/application-default-credentials> for more information.

I cannot understand why it asks me that. The whole output is this:

```
stefano@~/gc$ remote_api_shell.py -s ....
Traceback (most recent call last):
  File "/usr/local/bin/remote_api_shell.py", line 133, in <module>
    run_file(__file__, globals())
  File "/usr/local/bin/remote_api_shell.py", line 129, in run_file
    execfile(_PATHS.script_file(script_name), globals_)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 157, in <module>
    main(sys.argv)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 153, in main
    appengine_rpc.HttpRpcServer)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/remote_api_shell.py", line 74, in remote_api_shell
    secure=secure)
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 734, in ConfigureRemoteApiForOAuth
    credentials = client.GoogleCredentials.get_application_default()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1204, in get_application_default
    return GoogleCredentials._get_implicit_credentials()
  File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/oauth2client/oauth2client/client.py", line 1194, in _get_implicit_credentials
    raise ApplicationDefaultCredentialsError(ADC_HELP_MSG)
oauth2client.client.ApplicationDefaultCredentialsError: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
```
2015/09/14
[ "https://Stackoverflow.com/questions/32567357", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1257185/" ]
You could try implementing [SignalR](http://www.asp.net/signalr/overview/deployment/tutorial-signalr-self-host). It is a great library that uses web sockets to push data to clients.

Edit:

SignalR can help you solve your problem by allowing you to set up Hubs on your console app (server) that WPF applications (clients) can connect to. When the clients start up you will register them with a specified Hub. When something changes on the server, you can push from the server Hub to the client. The client will receive the information from the server and allow you to handle it as you see fit.

Rough mockup of some code:

```
namespace Server
{
    public class YourHub : Hub
    {
        public void SomeHubMethod(string userName)
        {
            //clientMethodToCall is a method in the WPF application that
            //will be called. Client needs to be registered to hub first.
            Clients.User(userName).clientMethodToCall("This is a test.");

            //One issue you may face is mapping client connections.
            //There are a couple different ways/methodologies to do this.
            //Just figure what will work best for you.
        }
    }
}

namespace Client
{
    public class HubService
    {
        public IHubProxy CreateHubProxy()
        {
            var hubConnection = new HubConnection("http://serverAddress:serverPort/");
            IHubProxy yourHubProxy = hubConnection.CreateHubProxy("YourHub");
            return yourHubProxy;
        }
    }
}
```

Then in your WPF window:

```
var hubService = new HubService();
var yourHubProxy = hubService.CreateHubProxy();
yourHubProxy.Start().Wait();
yourHubProxy.On("clientMethodToCall", () => DoSometingWithServerData());
```
You need to create some kind of subscription model for the clients to the server to handle a Publish-Subscribe channel (see <http://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html>). The basic architecture is this:

1. Client sends a request to the messaging channel to register itself as a subscriber to a certain kind of message/event/etc.
2. Server sends messages to the channel to be delivered to subscribers to that message.

There are many ways to handle this. You could use some of the Azure services (like Event Hub, or Topic) if you don't want to reinvent the wheel here. You could also have your server application track all of these things (updates to IP addresses, updates to subscription interest, making sure that messages don't get sent more than once; taking care of message durability [making sure messages get delivered even if the client is offline when the message gets created]).
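The two steps above can be sketched in a few lines. This is a purely illustrative, in-process Python sketch (no networking, no durability) just to show the register-then-push shape of a Publish-Subscribe channel; the class and topic names are invented.

```python
class Channel:
    """Minimal in-memory publish-subscribe channel."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # step 1: a client registers interest in a topic
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # step 2: the server pushes a message to every registered subscriber
        for callback in self.subscribers.get(topic, []):
            callback(message)

received = []
channel = Channel()
channel.subscribe('updates', received.append)
channel.publish('updates', 'temperature=21')
```

A real implementation would replace the callback list with client connections (and deal with the IP-address and durability concerns mentioned above), but the control flow is the same.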
16,958
11,866,944
I would like to be able to pickle a function or class from within \_\_main\_\_, with the obvious problem (mentioned in other posts) that the pickled function/class is in the \_\_main\_\_ namespace and unpickling in another script/module will fail.

I have the following solution which works; is there a reason this should not be done? The following is in myscript.py:

```
import myscript
import pickle

if __name__ == "__main__":
    print pickle.dumps(myscript.myclass())
else:
    class myclass:
        pass
```

**edit**: The unpickling would be done in a script/module that *has access to* myscript.py and can do an `import myscript`. The aim is to use a solution like [parallel python](http://www.parallelpython.com/ "parallel python") to call functions remotely, and be able to write a short, *standalone* script that contains the functions/classes that can be accessed remotely.
2012/08/08
[ "https://Stackoverflow.com/questions/11866944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1068490/" ]
You can get a better handle on global objects by importing `__main__`, and using the methods available in that module. This is what [dill](http://pythonhosted.org/dill) does in order to serialize almost anything in python. Basically, when dill serializes an interactively defined function, it uses some name mangling on `__main__` on both the serialization and deserialization side that makes `__main__` a valid module.

```
>>> import dill
>>>
>>> def bar(x):
...   return foo(x) + x
...
>>> def foo(x):
...   return x**2
...
>>> bar(3)
12
>>>
>>> _bar = dill.loads(dill.dumps(bar))
>>> _bar(3)
12
```

Actually, dill registers its types into the `pickle` registry, so if you have some black box code that uses `pickle` and you can't really edit it, then just importing dill can magically make it work without monkeypatching the 3rd party code.

Or, if you want the whole interpreter session sent over as a "python image", dill can do that too.

```
>>> # continuing from above
>>> dill.dump_session('foobar.pkl')
>>>
>>> ^D
dude@sakurai>$ python
Python 2.7.5 (default, Sep 30 2013, 20:15:49)
[GCC 4.2.1 (Apple Inc. build 5566)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import dill
>>> dill.load_session('foobar.pkl')
>>> _bar(3)
12
```

You can easily send the image across ssh to another computer, and start where you left off there, as long as there's version compatibility of pickle and the usual caveats about python changing and things being installed.

I actually use dill to serialize objects and send them across parallel resources with [parallel python](http://www.parallelpython.com/), multiprocessing, and [mpi4py](https://bitbucket.org/mpi4py/mpi4py). I roll these up conveniently into the [pathos](http://pythonhosted.org/pathos) package (and [pyina](http://pythonhosted.org/pyina) for MPI), which provides a uniform `map` interface for different parallel batch processing backends.

```
>>> # continued from above
>>> from pathos.multiprocessing import ProcessingPool as Pool
>>> Pool(4).map(foo, range(10))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>>
>>> from pyina.launchers import MpiPool
>>> MpiPool(4).map(foo, range(10))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

There are also non-blocking and iterative maps as well as non-parallel pipe connections. I also have a pathos module for `pp`, however, it is somewhat unstable for functions defined in `__main__`. I'm working on improving that. If you like, fork [the code on github](https://www.github.com/uqfoundation) and help make the `pp` better for functions defined in `__main__`. The reason `pp` doesn't pickle well is that `pp` does its serialization tricks through using temporary file objects and reading the interpreter session's history... so it doesn't serialize objects in the same way that multiprocessing or mpi4py do. I have a dill module `dill.source` that seamlessly does the same type of pickling that `pp` uses, but it's rather new.
If you are trying to pickle something so that you can use it somewhere else, separate from `test_script`, that's not going to work, because pickle (apparently) just tries to load the function from the module. Here's an example:

test\_script.py

```
def my_awesome_function(x, y, z):
    return x + y + z
```

picklescript.py

```
import pickle
import test_script

with open("awesome.pickle", "wb") as f:
    pickle.dump(test_script.my_awesome_function, f)
```

If you run `python picklescript.py`, then change the filename of `test_script`, when you try to load the function, it will fail. e.g. Running this:

```
import pickle

with open("awesome.pickle", "rb") as f:
    pickle.load(f)
```

Will give you the following traceback:

```
Traceback (most recent call last):
  File "load_pickle.py", line 3, in <module>
    pickle.load(f)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1378, in load
    return Unpickler(file).load()
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 858, in load
    dispatch[key](self)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1090, in load_global
    klass = self.find_class(module, name)
  File "/Library/Frameworks/Python.framework/Versions/7.3/lib/python2.7/pickle.py", line 1124, in find_class
    __import__(module)
ImportError: No module named test_script
```
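To make the reference-based behaviour concrete, here is a minimal standard-library sketch: pickling a module-level function stores only its module and qualified name, not its code, and unpickling looks that name up again — which is exactly why the defining module must stay importable under the same name.

```python
import pickle

def my_awesome_function(x, y, z):
    return x + y + z

# The payload records "which module, which name", not the function body.
blob = pickle.dumps(my_awesome_function)
restored = pickle.loads(blob)

# Unpickling in the same process resolves back to the very same object.
same_object = restored is my_awesome_function
```

Rename or remove the module that defines the function before loading, and the lookup fails with the `ImportError` shown above.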
16,963
50,005,229
So the assignment is: take 2 lists and write a program that returns a list that contains only the elements that are common to the lists without duplicates, and it must work on lists of different sizes. My code is:

```
a = [1, 2, 4]
b = [3, 1, 5, 2]
for j < len(a):
    for i < len(b):
        if a(elem) == b(i):
            print (a(elem))
        i=i+1
    j=j+1
```

An infinite loop is then generated, where it prints 1 and then never exits. Can someone tell me why the infinite loop occurs? I understand this is not the most "python" way of doing things, however my coding background includes a very small, brute force technique of C, and I do not know much Python. If there are simple alternatives to this, please let me know, as well as why it never exits.
2018/04/24
[ "https://Stackoverflow.com/questions/50005229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9691751/" ]
The value is a regular JavaScript expression. This way, if you want to pass a string, say `'test'`, use:

```
v-my-directive="'test'"
```

Demo:

```js
Vue.directive('my-directive', function (el, binding) {
  console.log('directive expression:', binding.value) // => "test"
})

new Vue({
  el: '#app',
  data: {
    message: 'Hello Vue.js!'
  }
})
```

```html
<script src="https://unpkg.com/vue"></script>
<div id="app">
  <p>{{ message }}</p>
  <div v-my-directive="'test'"></div>
</div>
```
You have to quote the string, otherwise it will look for the `test` variable in your component context (its `props` or `data`):

```
v-my-directive="'test'"
```

Inside your custom directive, you can access the passed value in `binding.value`:

```js
Vue.directive('demo', {
  bind: function (el, binding, vnode) {
    var s = JSON.stringify
    el.innerHTML =
      'name: '       + s(binding.name) + '<br>' +
      'value: '      + s(binding.value) + '<br>' +
      'expression: ' + s(binding.expression) + '<br>' +
      'argument: '   + s(binding.arg) + '<br>' +
      'modifiers: '  + s(binding.modifiers) + '<br>' +
      'vnode keys: ' + Object.keys(vnode).join(', ')
  }
})
```

See the [Custom Directives](https://v2.vuejs.org/v2/guide/custom-directive.html) chapter of the guide.
16,966
37,490,609
I'm working on a site for renting rooms. The user picks 2 dates (UserStartDate & UserEndDate). With this python code I get the number of days in his date range:

```
user_date_range = [endUser - timedelta(i) for i in range((endUser - startUser).days+1)]
user_range_num_days = len(user_date_range)
```

And I have a day price for a room: 20$. But due to lack of proficiency in Django, I can't figure out how to calculate the user's price according to his date range, and where it should be done. Hope for your help.
2016/05/27
[ "https://Stackoverflow.com/questions/37490609", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6361942/" ]
It doesn't have anything to do with django but rather python. I assume `user_start_date` and `user_end_date` are both python `datetime.date` or `datetime.datetime` objects, then you could do:

```
num_days = (user_end_date - user_start_date).days
total_price = num_days * 20
```
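As a quick, runnable restatement of the arithmetic above (standard library only; the function name is invented, and the 20$ day price from the question is passed as a default):

```python
from datetime import date

def total_price(user_start_date, user_end_date, day_price=20):
    # subtracting two dates yields a timedelta; .days is the night count
    num_days = (user_end_date - user_start_date).days
    return num_days * day_price
```

For example, a stay from 2016-05-01 to 2016-05-04 spans 3 nights, so it costs 60$.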
<https://docs.python.org/2/library/calendar.html>

A calendar is necessary, as you should be aware that not all months have the same number of days in them. itermonthdates(year, month) returns an iterator for all days in the month. Run through that iterator and increment a count for every date match within the range. Of course, if the end date extends into the next month, keep the same counter.
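The itermonthdates idea can be sketched as follows (standard library only; the helper name is invented). One caveat worth noting: the iterator pads to whole weeks, so days belonging to neighbouring months must be filtered out.

```python
import calendar
from datetime import date

def month_days_in_range(year, month, start, end):
    # count the days of (year, month) that fall inside [start, end]
    cal = calendar.Calendar()
    return sum(1 for d in cal.itermonthdates(year, month)
               if d.month == month and start <= d <= end)

# e.g. a stay from 2016-05-30 to 2016-06-02 touches 2 days of May (30th, 31st)
may_days = month_days_in_range(2016, 5, date(2016, 5, 30), date(2016, 6, 2))
```

Summing this helper over each month the range touches gives the per-month day counts the answer describes.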
16,967
6,831,685
I am learning python via Dive Into Python. I have a few questions and am unable to understand, even through the documentation.

1) ***BaseClass***
2) ***InheritClass***

What exactly happens when we assign an *InheritClass* instance to a variable, when the *InheritClass* doesn't contain an `__init__` method and *BaseClass* does?

* Is the *BaseClass* `__init__` method called automatically?
* Also, tell me other things that happen under the hood.

Actually the fileInfo.py example is giving me a serious headache; I am just unable to understand how the things are working. Following
2011/07/26
[ "https://Stackoverflow.com/questions/6831685", "https://Stackoverflow.com", "https://Stackoverflow.com/users/570928/" ]
Yes, `BaseClass.__init__` will be called automatically. Same goes for any other methods defined in the parent class but not the child class. Observe:

```
>>> class Parent(object):
...     def __init__(self):
...         print 'Parent.__init__'
...     def func(self, x):
...         print x
...
>>> class Child(Parent):
...     pass
...
>>> x = Child()
Parent.__init__
>>> x.func(1)
1
```

The child inherits its parent's methods. It can override them, but it doesn't have to.
@FogleBird has already answered your question, but I wanted to add something and can't comment on his post:

You may also want to look at the [`super` function](http://docs.python.org/library/functions.html#super). It's a way to call a parent's method from inside a child. It's helpful when you want to extend a method, for example:

```
class ParentClass(object):
    def __init__(self, x):
        self.x = x

class ChildClass(ParentClass):
    def __init__(self, x, y):
        self.y = y
        super(ChildClass, self).__init__(x)
```

This can of course encompass methods that are a lot more complicated, *not* the `__init__` method or even a method by the same name!
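The same idea in the Python 3 spelling (the zero-argument `super()` form), with attribute checks added to show that both initialisers ran:

```python
class ParentClass:
    def __init__(self, x):
        self.x = x

class ChildClass(ParentClass):
    def __init__(self, x, y):
        super().__init__(x)  # run ParentClass.__init__ as well
        self.y = y

child = ChildClass(1, 2)
```

After construction, `child` carries both the parent-set `x` and the child-set `y`.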
16,968
23,969,296
I wanted to get the number of indexes in two strings which are not the same.

Things that are fixed: string data will only have 0 or 1 on any index, i.e. the strings are binary representations of a number, and both strings will be of the same length.

For the above problem I wrote the below function in python:

```
def foo(a,b):
    result = 0
    for x,y in zip(a,b):
        if x != y:
            result += 1
    return result
```

But the thing is these strings are huge. Very large. So the above function is taking too much time. Anything I should do to make it super fast?

This is how I did the same in C++. It's quite fast now, but I still can't understand how to do packing in short integers and all that said by @Yves Daoust:

```
size_t diff(long long int n1, long long int n2)
{
    long long int c = n1 ^ n2;
    bitset<sizeof(int) * CHAR_BIT> bits(c);
    string s = bits.to_string();
    return std::count(s.begin(), s.end(), '1');
}
```
2014/05/31
[ "https://Stackoverflow.com/questions/23969296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3694018/" ]
I'll walk through the options here, but basically you are calculating the hamming distance between two numbers. There are dedicated libraries that can make this really, really fast, but let's focus on the pure Python options first.

Your approach, zipping
----------------------

`zip()` produces one big list *first*, then lets you loop. You could use `itertools.izip()` instead, and make it a generator expression:

```
from itertools import izip

def foo(a, b):
    return sum(x != y for x, y in izip(a, b))
```

This produces only one pair at a time, avoiding having to create a large list of tuples first. The Python boolean type is a subclass of `int`, where `True == 1` and `False == 0`, letting you sum them:

```
>>> True + True
2
```

Using integers instead
----------------------

However, you probably want to rethink your input data. It's much more efficient to use integers to represent your binary data; integers can be operated on directly. Doing the conversion inline, then counting the number of 1s on the XOR result is:

```
def foo(a, b):
    return format(int(a, 2) ^ int(b, 2), 'b').count('1')
```

but not having to convert `a` and `b` to integers in the first place would be much more efficient.

Time comparisons:

```
>>> from itertools import izip
>>> import timeit
>>> s1 = "0100010010"
>>> s2 = "0011100010"
>>> def foo_zipped(a, b): return sum(x != y for x, y in izip(a, b))
...
>>> def foo_xor(a, b): return format(int(a, 2) ^ int(b, 2), 'b').count('1')
...
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_zipped as f')
1.7872788906097412
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f')
1.3399651050567627
>>> s1 = s1 * 1000
>>> s2 = s2 * 1000
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_zipped as f', number=1000)
1.0649528503417969
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=1000)
0.0779869556427002
```

The XOR approach is faster by orders of magnitude if the inputs get larger, and this is **with** converting the inputs to `int` first.

Dedicated libraries for bitcounting
-----------------------------------

The bit counting (`format(integer, 'b').count('1')`) is pretty fast, but can be made faster still if you installed the [`gmpy` extension library](https://pypi.python.org/pypi/gmpy) (a Python wrapper around the [GMP library](https://gmplib.org/)) and used the `gmpy.popcount()` function:

```
def foo(a, b):
    return gmpy.popcount(int(a, 2) ^ int(b, 2))
```

`gmpy.popcount()` is about 20 times faster on my machine than the `str.count()` method. Again, not having to convert `a` and `b` to integers to begin with would remove another bottleneck, but even then the per-call performance is almost doubled:

```
>>> import gmpy
>>> def foo_xor_gmpy(a, b): return gmpy.popcount(int(a, 2) ^ int(b, 2))
...
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=10000)
0.7225301265716553
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor_gmpy as f', number=10000)
0.47731995582580566
```

To illustrate the difference when `a` and `b` are integers to begin with:

```
>>> si1, si2 = int(s1, 2), int(s2, 2)
>>> def foo_xor_int(a, b): return format(a ^ b, 'b').count('1')
...
>>> def foo_xor_gmpy_int(a, b): return gmpy.popcount(a ^ b)
...
>>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_xor_int as f', number=100000)
3.0529568195343018
>>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_xor_gmpy_int as f', number=100000)
0.15820622444152832
```

Dedicated libraries for hamming distances
-----------------------------------------

The `gmpy` library actually includes a `gmpy.hamdist()` function, which calculates this exact number (the number of 1 bits in the XOR result of the integers) *directly*:

```
def foo_gmpy_hamdist(a, b):
    return gmpy.hamdist(int(a, 2), int(b, 2))
```

which'll blow your socks off *entirely* if you used integers to begin with:

```
def foo_gmpy_hamdist_int(a, b):
    return gmpy.hamdist(a, b)
```

Comparisons:

```
>>> def foo_gmpy_hamdist(a, b):
...     return gmpy.hamdist(int(a, 2), int(b, 2))
...
>>> def foo_gmpy_hamdist_int(a, b):
...     return gmpy.hamdist(a, b)
...
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=100000)
7.479684114456177
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_gmpy_hamdist as f', number=100000)
4.340585947036743
>>> timeit.timeit('f(si1, si2)', 'from __main__ import si1, si2, foo_gmpy_hamdist_int as f', number=100000)
0.22896099090576172
```

That's 100,000 times the hamming distance between two 3k+ digit numbers.

Another package that can calculate the distance is [`Distance`](https://pypi.python.org/pypi/Distance), which supports calculating the hamming distance between strings directly. Make sure you use the `--with-c` switch to have it compile the C optimisations; when installing with `pip` use `bin/pip install Distance --install-option --with-c` for example.

Benchmarking this against the XOR-with-bitcount approach again:

```
>>> import distance
>>> def foo_distance_hamming(a, b):
...     return distance.hamming(a, b)
...
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_xor as f', number=100000)
7.229060173034668
>>> timeit.timeit('f(s1, s2)', 'from __main__ import s1, s2, foo_distance_hamming as f', number=100000)
0.7701470851898193
```

It uses the naive approach: zip over both input strings and count the number of differences. But since it does this in C it is still plenty faster, about 10 times as fast. The `gmpy.hamdist()` function still beats it when you use integers, however.
Not tested, but how would this perform:

```
sum(x != y for x, y in zip(a, b))
```
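Both approaches from the answers can be cross-checked in a few lines. This is a standard-library sketch (in Python 3, `zip` is already lazy, so no `izip` is needed; `bin(...).count('1')` is used in place of `format(..., 'b').count('1')` — they are equivalent up to the `0b` prefix, which contains no `1`):

```python
def hamming_zip(a, b):
    # count positions where the two equal-length bit strings differ
    return sum(x != y for x, y in zip(a, b))

def hamming_xor(a, b):
    # XOR the integer values; each 1 bit marks a differing position
    return bin(int(a, 2) ^ int(b, 2)).count('1')

s1 = "0100010010"
s2 = "0011100010"
```

The two strings above differ in 5 positions, and both functions agree on inputs of any length.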
16,969
24,502,360
I am a Python newbie and am trying to write a numpy array into a format readable in Matlab: an array [xi, yi, ti], separated by a semi-colon. In python, I am able to currently write it in the following form, which is a numpy array printed on screen/written to file as [[xi yi ti]]. Here is the code:

```
import math
import random
import numpy as np

SPOT = []
f = open('data_dump.txt', 'a')
for i in range(10):
    X = random.randrange(6)
    Y = random.randrange(10)
    T = random.randrange(5)
    SPOT.append([X,Y,T])
SPOT = np.array(SPOT)
f.write(str(SPOT[:]))
f.close()
```

Please suggest how I should proceed to be able to write this data in a Matlab readable format as mentioned above. Thanks in advance!

Sree.
2014/07/01
[ "https://Stackoverflow.com/questions/24502360", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3792245/" ]
It is not really necessary to write your `array` into a special format. Write it into a normal `csv` and use [`dlmread`](http://www.mathworks.com/help/matlab/ref/dlmread.html) to open it in `matlab`. On the `numpy` side, write your `array` using:

```
np.savetxt('some_name.txt', arr, delimiter=' ')
```
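If you would rather avoid numpy entirely, a standard-library-only sketch of the same idea is below: one space-delimited `X Y T` row per line, which MATLAB's `dlmread` parses directly. The helper name, seed, and temp-file location are just for illustration.

```python
import os
import random
import tempfile

def write_spot(path, n=10, seed=42):
    # write one "X Y T" triple per line, space-delimited
    rng = random.Random(seed)
    rows = [(rng.randrange(6), rng.randrange(10), rng.randrange(5))
            for _ in range(n)]
    with open(path, 'w') as f:
        for x, y, t in rows:
            f.write('%d %d %d\n' % (x, y, t))
    return rows

path = os.path.join(tempfile.mkdtemp(), 'data_dump.txt')
rows = write_spot(path)

# sanity check: the file round-trips back to the same triples
with open(path) as f:
    read_back = [tuple(int(v) for v in line.split()) for line in f]
```

In MATLAB, `dlmread('data_dump.txt')` would then yield an n-by-3 matrix of the same values.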
If you have scipy then you can do:

```
import scipy.io
scipy.io.savemat('/tmp/test.mat', dict(SPOT=SPOT))
```

And in matlab:

```
a = load('/tmp/test.mat');
a.SPOT % should have your data
```
16,972
65,753,830
I'm trying to train the Mask R-CNN model from cocoapi (<https://github.com/cocodataset/cocoapi>), and this error keeps coming out:

```
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-8-83356bb9cf95> in <module>
     19 sys.path.append(os.path.join(ROOT_DIR, "samples/coco/"))  # To find local version
     20
---> 21 from pycocotools.coco import coco
     22
     23 get_ipython().run_line_magic('matplotlib', 'inline ')

~/Desktop/coco/PythonAPI/pycocotools/coco.py in <module>
     53 import copy
     54 import itertools
---> 55 from . import mask as maskUtils
     56 import os
     57 from collections import defaultdict

~/Desktop/coco/PythonAPI/pycocotools/mask.py in <module>
      1 __author__ = 'tsungyi'
      2
----> 3 import pycocotools._mask as _mask
      4
      5 # Interface for manipulating masks stored in RLE format.

ModuleNotFoundError: No module named 'pycocotools._mask'
```

I tried all the methods on the github 'issues' tab, but none of them work for me at all. Is there another solution for this? I'm using Python 3.6, Linux.
2021/01/16
[ "https://Stackoverflow.com/questions/65753830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14258016/" ]
This answer is summarised from [these](https://github.com/cocodataset/cocoapi/issues/172) [three](https://github.com/cocodataset/cocoapi/issues/168) [GitHub issues](https://github.com/cocodataset/cocoapi/issues/141#issuecomment-386606299).

1. Check whether you have installed cython in the correct version. Namely, you should install cython for python2/3 if you use python2/3:

```
pip install cython
```

2. Check whether you have downloaded the whole .zip file from this github project. Namely, you should download all the things here even though you only need PythonAPI:

```
git clone https://github.com/cocodataset/cocoapi.git
```

or unzip the [zip file](https://github.com/cocodataset/cocoapi/archive/refs/heads/master.zip).

3. Check whether you open a Terminal and run "make" under the correct folder. The correct folder is the one the "Makefile" is located in:

```
cd path/to/coco/PythonAPI
make
```

In most cases, the question can be solved by now. If not, 4 and 5 may help.

4. Check whether you have already installed gcc in the correct version.

5. Check whether you have already installed python-dev in the correct version. Namely, you should install python3-dev (you may try "sudo apt-get install python3-dev") if you use python3.
Try cloning the official repo and running the commands below ``` python setup.py install make ```
16,977
53,469,976
I am using the osmnx library (python) to extract the road network of a city. I also have a separate data source that corresponds to GPS coordinates being sent by vehicles as they traverse the aforementioned road network. My issue is that I only have the GPS coordinates but I wish to also know which road they correspond to. I.e. I want to input a set of longitude, latitude coordinates and get the corresponding street on which that GPS coordinate lies. I believe the term for this is Map Matching. What is the best way to do this? Preferably the solution would be using osmnx but other solutions would also be appreciated. Note that the GPS coordinates may be noisy.
2018/11/25
[ "https://Stackoverflow.com/questions/53469976", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10702801/" ]
You can do map matching with OSMnx. See the nearest\_nodes and nearest\_edges functions in the OSMnx documentation: <https://osmnx.readthedocs.io/>
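To illustrate the idea those functions implement, here is a minimal, self-contained sketch of snapping a GPS point to the nearest edge by point-to-segment distance. The edge list, street names, and coordinates are made-up illustration only — this is not the OSMnx API, which operates on a real graph and projected coordinates:

```python
def point_segment_dist2(px, py, ax, ay, bx, by):
    # Squared distance from point (px, py) to the segment from (ax, ay) to (bx, by)
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return (px - ax) ** 2 + (py - ay) ** 2
    # Projection parameter onto the infinite line, clamped to stay on the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return (px - cx) ** 2 + (py - cy) ** 2

# Hypothetical road edges: (street name, start point, end point)
edges = [
    ("Main St", (0.0, 0.0), (10.0, 0.0)),
    ("Oak Ave", (0.0, 0.0), (0.0, 10.0)),
]

def nearest_edge(px, py):
    # Pick the edge whose segment is closest to the query point
    return min(edges, key=lambda e: point_segment_dist2(px, py, *e[1], *e[2]))[0]

print(nearest_edge(5.0, 1.0))  # -> Main St
```

For noisy traces, a probabilistic map-matching algorithm that also considers route topology will do better than this independent per-point snapping.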
My suggestion would be to use the leuvenmapmatching package. You will get the details in the documentation of the package itself. <https://github.com/wannesm/LeuvenMapMatching>
16,978
13,793,973
I have a string in python 3 that has several unicode representations in it, for example: ``` t = 'R\\u00f3is\\u00edn' ``` and I want to convert t so that it has the proper representation when I print it, ie: ``` >>> print(t) Róisín ``` However I just get the original string back. I've tried re.sub and some others, but I can't seem to find a way that will change these characters without having to iterate over each one. What would be the easiest way to do so?
2012/12/10
[ "https://Stackoverflow.com/questions/13793973", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1205923/" ]
You want to use the built-in codec `unicode_escape`. If `t` is already a `bytes` (an 8-bit string), it's as simple as this: ``` >>> print(t.decode('unicode_escape')) Róisín ``` If `t` has already been decoded to Unicode, you can encode it back to a `bytes` and then `decode` it this way. If you're sure that all of your Unicode characters have been escaped, it actually doesn't matter what codec you use to do the encode. Otherwise, you could try to get your original byte string back, but it's simpler, and probably safer, to just force any non-encoded characters to get encoded, and then they'll get decoded along with the already-encoded ones: ``` >>> print(t.encode('unicode_escape').decode('unicode_escape')) Róisín ``` In case you want to know how to do this kind of thing with regular expressions in the future, note that [`sub`](http://docs.python.org/3/library/re.html?highlight=unicode_escape#re.sub) lets you pass a function instead of a pattern for the `repl`. And you can convert any hex string into an integer by calling `int(hexstring, 16)`, and any integer into the corresponding Unicode character with `chr` (note that this is the one bit that's different in Python 2—you need `unichr` instead). So: ``` >>> re.sub(r'(\\u[0-9A-Fa-f]+)', lambda matchobj: chr(int(matchobj.group(0)[2:], 16)), t) Róisín ``` Or, making it a bit more clear: ``` >>> def unescapematch(matchobj): ... escapesequence = matchobj.group(0) ... digits = escapesequence[2:] ... ordinal = int(digits, 16) ... char = chr(ordinal) ... return char >>> re.sub(r'(\\u[0-9A-Fa-f]+)', unescapematch, t) Róisín ``` The `unicode_escape` codec actually handles `\U`, `\x`, `\X`, octal (`\066`), and special-character (`\n`) sequences as well as just `\u`, and it implements the proper rules for reading only the appropriate max number of digits (4 for `\u`, 8 for `\U`, etc., so `r'\\u22222'` decodes to `'∢2'` rather than `''`), and probably more things I haven't thought of. But this should give you the idea.
First of all, it is rather unclear what you want to convert to. Just imagine that you may want to convert to 'o' and 'i'. In this case you can just make a map: ``` mp = {u'\u00f3':'o', u'\u00ed':'i'} ``` Then you can apply the replacement like this (note that strings are immutable, so we build a new string rather than assigning to `t[i]`): ``` t = u'R\u00f3is\u00edn' t = u''.join(mp.get(ch, ch) for ch in t) print t ```
16,979
62,502,606
I have the following python code which should be able to read a .csv file with cities and their coordinates. The .csv file is in the form of: ``` name,x,y name,x,y name,x,y ``` However, I am getting the error '**list index out of range**' at line 764: ``` 758 """function to calculate the route for files in data folder with coordinates""" 759 start_time = time.time() 760 f = open(csv_name, "r") 761 f.readline() 762 f.readline() 763 f.readline() 764 lines = int(f.readline().split()[2]) 765 f.readline() 766 f.readline() ``` The file has around 50 rows. What may be causing the problem? Thanks!
2020/06/21
[ "https://Stackoverflow.com/questions/62502606", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13722495/" ]
Change DateCutting to **DateTime** and adjust your criteria: ```vb Dim strCriteria As String strCriteria = "[DateCutting] >= #" & Format(Me.txtfrom, "yyyy\/mm\/dd") & "# And [DateCutting] <= #" & Format(Me.txtto, "yyyy\/mm\/dd") & "#" DoCmd.ApplyFilter strCriteria ``` To find a number: ``` strCriteria = "[Number] = " & Me.txtNumber & "" ``` as text: ``` strCriteria = "[TextNumber] = '" & Me.txtNumber & "'" ```
Try `Dim strCriteria as String` `dim task As String`
16,981
36,706,131
I am having trouble getting one of my functions in python to work. The code for my function is below: ``` def checkBlackjack(value, pot, player, wager): if (value == 21): print("Congratulations!! Blackjack!!") pot -= wager player += wager print ("The pot value is $", pot) print ("Your remaining balance is $",player) return (pot, player) ``` The function call is: ``` potValue, playerBalance = checkBlackjack(playerValue, potValue, playerBalance, wager) ``` And the error I get is: ``` potValue, playerBalance = checkBlackjack(playerValue, potValue, playerBalance, wager) TypeError: 'NoneType' object is not iterable ``` Since the error talks about not being able to iterate, I am not sure how to relate this to using the if condition. Any help will really be appreciated. Thanks!
2016/04/18
[ "https://Stackoverflow.com/questions/36706131", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4832091/" ]
You're only returning something if the condition in your function is met. Otherwise the function returns `None` by default, and the assignment then tries to unpack `None` into two values (your variables).
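For example, here is a sketch of the function with an unconditional return, so the caller can always unpack two values (names follow the question):

```python
def checkBlackjack(value, pot, player, wager):
    if value == 21:
        print("Congratulations!! Blackjack!!")
        pot -= wager
        player += wager
    # Return on every code path, not only when the player hits 21
    return pot, player

potValue, playerBalance = checkBlackjack(21, 100, 50, 10)
print(potValue, playerBalance)  # 90 60

# No blackjack: the balances come back unchanged instead of None
potValue, playerBalance = checkBlackjack(18, 100, 50, 10)
print(potValue, playerBalance)  # 100 50
```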
Here is an [MCVE](https://stackoverflow.com/help/mcve) for this question: ``` >>> a, b = None Traceback (most recent call last): File "<pyshell#2>", line 1, in <module> a, b = None TypeError: 'NoneType' object is not iterable ``` At this point, the problem should be clear. If not, one could look up multiple assignment in the manual.
16,982
59,952,898
I am trying to install `python3-psycopg2` as a part of `postgresql` installation, but I get: ``` The following packages have unmet dependencies: python3-psycopg2 : Depends: python3 (>= 3.7~) but 3.6.7-1~18.04 is to be installed E: Unable to correct problems, you have held broken packages. ``` I installed `python3.8` and configured `python3` link to it: ``` sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1 ``` But I still get the same error. I have an `Ubuntu 18.04` OS.
2020/01/28
[ "https://Stackoverflow.com/questions/59952898", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1626977/" ]
The `Psycopg2` library is built as a wrapper around `libpq` and is mostly written in C. It is distributed as an sdist and is built during installation. For this reason it requires some `PostgreSQL` binaries and headers to be present during installation. Consider running these 2 commands: ``` sudo apt install python3-dev libpq-dev ``` The main goal of the command above is to provide all requirements for building `Psycopg2`. Then: ``` pip3 install psycopg2 ``` You should have `psycopg2` installed and working now.
While Gitau's explanation is very good, you can simply install the psycopg2 binary package instead, as mentioned by Maurice: `python3 -m pip install psycopg2-binary` or just `pip install psycopg2-binary`
16,983
60,140,174
I have a very basic flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt given below, ``` aniso8601==6.0.0 Click==7.0 Flask==1.0.3 Flask-Cors==3.0.7 Flask-RESTful==0.3.7 Flask-SQLAlchemy==2.4.0 itsdangerous==1.1.0 Jinja2==2.10.1 MarkupSafe==1.1.1 # psycopg2-binary==2.8.2 pytz==2019.1 six==1.12.0 # SQLAlchemy==1.3.4 Werkzeug==0.15.4 python-dotenv requests authlib ``` My code in NewTest.py file, ``` from flask import Flask, request, jsonify, abort, url_for app = Flask(__name__) @app.route('/') def index(): return jsonify({ 'success': True, 'index': 'Test Pass' }) if __name__ == '__main__': app.run(debug=True) ``` When I run the app through, ``` export FLASK_APP=NewTest.py export FLASK_ENV=development export FLASK_DEBUG=true flask run or flask run --reload ``` I get the following error, ``` 127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 - Traceback (most recent call last): File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, i n reraise raise value File "/projects/NewTest.py", line 3, in <module> app = Flask(__name__) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in _ _init__ self.add_url_rule( File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wr apper_func return f(self, *args, **kwargs) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule self.url_map.add(rule) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add rule.bind(self) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind self.compile() File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile self._build = self._compile_builder(False).__get__(self, None) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder code = compile(module, "<werkzeug 
routing>", "exec") TypeError: required field "type_ignores" missing from Module ``` Can anyone please point out what am I missing or doing wrong and how can I fix it? Thanks.
2020/02/09
[ "https://Stackoverflow.com/questions/60140174", "https://Stackoverflow.com", "https://Stackoverflow.com/users/727390/" ]
The bug was fixed in [werkzeug 0.15.5](https://werkzeug.palletsprojects.com/en/1.0.x/changes/#version-0-15-5). Upgrade from 0.15.4 to a later version.
I had this error in the Django shell; it seems there is a bug in IPython. Finally, I decided to remove IPython temporarily until the bug is fixed ``` pip uninstall ipython ``` [more info](https://bugs.python.org/issue35894)
16,985
6,377,535
I have trouble setting up funkload to work well with cookies. I turn on `fl-record` and perform a series of requests of which each is sending a cookie. If I use the command without supplying a folder path, the output is stored in TCPWatch-Proxy format and I can see the contents of all the cookies, so I know that they are sent. For example this is the contents of `watch0003.request`: ``` GET http://mydomainnamehere.pl/api/world/me/ HTTP/1.1 Host: mydomainnamehere.pl Proxy-Connection: keep-alive Referer: http://mydomainnamehere.pl/test/engine/ X-Requested-With: XMLHttpRequest User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.57 Safari/534.24 Accept: */* Accept-Encoding: gzip,deflate,sdch Accept-Language: pl,en-US;q=0.8,en;q=0.6,fr-FR;q=0.4,fr;q=0.2 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Cookie: Beacon-ClientID=<<<some-beaconpush-id-here>>>; sessionid=<<<some-session-id>>>; fbs_<<<some-facebook-id>>>="access_token=<<<some-access-token>>>&expires=1308254400&secret=<<<some-secret>>>&session_key=<<<some-session-key>>>&sig=<<<some-signature>>>&uid=<<<some-user-id>>>"; Beacon-Preferred-Client=WebSocket ``` However if I run `fl-record` with a name of the test case and by doing so order funkload to store the output as a python test, all the Cookies are omitted. 
There isn't a single line in the code that would have anything to do with them: ``` import unittest from funkload.FunkLoadTestCase import FunkLoadTestCase from webunit.utility import Upload from funkload.utils import Data #from funkload.utils import xmlrpc_get_credential class Simple(FunkLoadTestCase): def setUp(self): """Setting up test.""" self.logd("setUp") self.server_url = self.conf_get('main', 'url') # XXX here you can setup the credential access like this # credential_host = self.conf_get('credential', 'host') # credential_port = self.conf_getInt('credential', 'port') # self.login, self.password = xmlrpc_get_credential(credential_host, # credential_port, # XXX replace with a valid group # 'members') def test_simple(self): # The description should be set in the configuration file server_url = self.server_url # begin of test --------------------------------------------- ... # /tmp/tmp5Nv5lW_funkload/watch0003.request self.get(server_url + "/api/world/me/", description="Get /api/world/me/") ... # end of test ----------------------------------------------- def tearDown(self): """Setting up test.""" self.logd("tearDown.\n") if __name__ in ('main', '__main__'): unittest.main() ``` There is also a configuration file, but nothing about cookies there either. On the other hand the documentation states that fl has (Cookie support). I've also found some bugfixes in the previous releases concerning Cookie support so I can assume this isn't just an empty statement. I've also found a point in one of the changelogs that states that "deleted cookies" are not included in the output. This got me wondering that maybe the problem is that the cookies as they were recorded are marked for deletion or are recognized as such by fl upon conversion from the TCP-Watch format to an actual testcase. This is just a wild guess however. I'd like to know: * If you ever had successes with support of funkload for cookies. If so, which version were you using. 
* Of your general experiences with funkload and whether or not it is worth using in a more complex setup. **EDIT** Apparently some of the requests that are recorded by `TCPWatch` are totally ignored and not included in the output test case. Anybody has idea why would it do that? Does it have anything to do with redirection? **EDIT(2)** Ok, it does. This one thing actually makes sense. It leaves out the results of redirection as these will be generated by simply following `HTTP 302 Found`. However the question of cookies still remains unexplained.
2011/06/16
[ "https://Stackoverflow.com/questions/6377535", "https://Stackoverflow.com", "https://Stackoverflow.com/users/475763/" ]
I see this old post was not answered, so I thought I could post: In Python: identify the name of the cookie you are sending. Mine is 'csrftoken' in the header, and the same one goes in the POST as 'csrfmiddlewaretoken'. Initially I get the value of the cookie, then pass the same one in the POST for authentication. Example: ``` res = self.get(server_url + '/login/', description='Get url').cookies.itervalues().next() morsel_str = res['/']['csrftoken'] csrftoken = morsel_str.value # Once the cookie is found, include it in params params = [['csrfmiddlewaretoken', csrftoken], ['username', 'username..'], ['password', '********']] self.setHeader('cookie', 'csrftoken={0}'.format(csrftoken)) resp = self.post(server_url + '/login/', params, description="Post /login/") ```
I've found a bug in Funkload. Funkload isn't correctly handling cookies with a leading '.' in the domain. At the moment all those cookies are being silently ignored. Check this branch: <https://github.com/sbook/FunkLoad> I've already sent a pull request: <https://github.com/nuxeo/FunkLoad/pull/32>
16,995
913,396
I'm using swig to wrap a class from a C++ library with python. It works overall, but there is an exception that is thrown from within the library and I can't seem to catch it in the swig interface, so it just crashes the python application! The class PyMonitor.cc describes the swig interface to the desired class, Monitor. Monitor's constructor throws an exception if it fails to connect. I'd like to handle this exception in PyMonitor, e.g.: PyMonitor.cc: ``` #include "Monitor.h" // ... bool PyMonitor::connect() { try { _monitor = new Monitor(_host, _calibration); } catch (...) { printf("oops!\n"); } } // ... ``` However, the connect() method never catches the exception, I just get a "terminate called after throwing ..." error, and the program aborts. I don't know too much about swig, but it seems to me that this is all fine C++ and the exception should propagate to the connect() method before killing the program. Any thoughts?
2009/05/27
[ "https://Stackoverflow.com/questions/913396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/75827/" ]
You have to forward the exceptions to Python if you want to parse them there. See the [SWIG Documentation](http://www.swig.org/Doc1.3/Customization.html#exception). In order to forward exceptions, you only have to add some code in the SWIG interface (.i) file. Basically, this can be anywhere in the .i file. All types of exceptions should be specified here, and SWIG **only** catches the listed exception types (in this case std::runtime\_error, std::invalid\_argument, std::out\_of\_range), all other exceptions are caught as unknown exceptions (and are thus forwarded correctly!). ``` // Handle standard exceptions. // NOTE: needs to be before the %import! %include "exception.i" %exception { try { $action } catch (const std::runtime_error& e) { SWIG_exception(SWIG_RuntimeError, e.what()); } catch (const std::invalid_argument& e) { SWIG_exception(SWIG_ValueError, e.what()); } catch (const std::out_of_range& e) { SWIG_exception(SWIG_IndexError, e.what()); } catch (...) { SWIG_exception(SWIG_RuntimeError, "unknown exception"); } } ```
I'm not familiar with swig, or with using C++ and Python together, but if this is under a recent version of Microsoft Visual C++, then the `Monitor` class is probably throwing a C structured exception, rather than a C++ typed exception. C structured exceptions aren't caught by C++ exception handlers, even the `catch(...)` one. If that's the case, you can use the `__try/__except` keywords (instead of `try/catch`), or use the `_set_se_translator` function to translate the C structured exception into a C++ typed exception. (Older versions of MSVC++ treated C structured exceptions as C++ `int` types, and *are* caught by C++ handlers, if I remember correctly.) If this *isn't* under Microsoft Visual C++, then I'm not sure how this could be happening. EDIT: Since you say that this isn't MSVC, perhaps something else is catching the exception (and terminating the program) before your code gets it, or maybe there's something in your catch block that's throwing another exception? Without more detail to work with, those are the only cases I can think of that would cause those symptoms.
16,996
3,856,314
After using C# for a long time I finally decided to switch to Python. The question I am facing at the moment has to do with auto-completion. I guess I am spoiled by C#, and especially by ReSharper, and I was expecting something similar to exist for Python. My editor of choice is Emacs, and after doing some research I found `autocomplete.pl`, `yasnippet` and rope, although it is not clear to me if and how they can be installed in a Cygwin-based system, which is what I use, since all the related documentation appears to be Linux-specific... The version of Emacs I currently use is 23.2.1, which bundles a python mode that, although useful, is far behind whatever my research has turned up. My question to Python users has to do with how common autocomplete is vs. manual typing (using `M-/` where possible). I am thinking about just memorizing Python built-in functions like len, append, extend etc. and reverting to something close to a pre-autocomplete editing mode. How different is such an approach from what other Pythonistas are doing?
2010/10/04
[ "https://Stackoverflow.com/questions/3856314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/404984/" ]
I found this post > > [My Emacs Python environment](http://www.saltycrane.com/blog/2010/05/my-emacs-python-environment/) > > > to be the most useful and comprehensive list of instructions and references on how to set up a decent Python development environment in Emacs regardless of OS platform. It is still a bit of work to set up, but at least it covers the popular packages and components generally recommended for Python in Emacs that provide auto-completion functionality. I loosely used this post as a guide to do the setup on my Windows machine with Emacs 23.2.1 and Python 2.6.5. Although I also have Cygwin installed, in some cases instead of running the *nix shell commands mentioned in the post I just downloaded the packages via a web browser, unzipped them with 7zip, and copied them to my Emacs plugin directory. Also, to install Pymacs, Rope, and Ropemacs, I used Python's [EasyInstall](http://en.wikipedia.org/wiki/EasyInstall) package manager. To use it, I downloaded and installed [the `setuptools` package using the Windows install version](http://pypi.python.org/pypi/setuptools#windows). Once installed, at the command line, cd to their respective download locations and run the command `easy_install .` instead of the shell commands shown in the post. Generally, I saved any `*.el` files in my `~\.emacs.d\plugins` (e.g. in `%USERPROFILE%\Application Data\.emacs.d\`) and then updated my `.emacs` file to reference them as documented in the post. Despite all this, on occasion, I've used DreamPie since it does have overall better auto-completion out of the box than my Emacs setup.
I find that [PyDev](http://pydev.org/) + Eclipse can meet most of my needs. There is also [PyCharm](http://www.jetbrains.com/pycharm/) from the Intellij team. PyCharm has the added advantage of smooth integration with git.
16,998
55,062,944
I have seen multiple posts on passing a string, but have not been able to find a good solution for reading the string passed to a Python script from a batch file. Here is my problem: I am calling a Python script from a batch file and passing an argument. ``` string_var = "123_Asdf" bat 'testscript.py %string_var%' ``` I have the following in my Python code. ``` import sys passed_var = sys.argv[1] ``` When I run the above code I always see the error below. ``` passed_var = sys.argv[1] IndexError: list index out of range ``` Has anyone seen this issue before? I am only passing a string and expect it to be read as the first argument passed to the script.
2019/03/08
[ "https://Stackoverflow.com/questions/55062944", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7442477/" ]
You can order by a field created with annotate: ``` from django.db.models import IntegerField, Value as V from django.db.models.functions import Cast, StrIndex, Substr last = ( Machine.objects.annotate( part=Cast(Substr("deviceSerialNo", StrIndex("deviceSerialNo", V("-"))), IntegerField()) ) .order_by("part") .first() .deviceSerialNo ) ``` Just like you had, we start by getting the index of the `-` character: ``` StrIndex('deviceSerialNo', V('-')) ``` We then use [`Substr`](https://docs.djangoproject.com/en/dev/ref/models/database-functions/#django.db.models.functions.Substr) to get the second part including the `-` character: ``` Substr("deviceSerialNo", StrIndex("deviceSerialNo", V("-"))) ``` Then we cast it to an IntegerField, sort and get the first object. Note: We can get the first object as the integer cast of `"-12344"` is a negative number.
If the number can contain multiple `-` characters and you want to extract the number from the end, then try the following (the expression also needs `Reverse` from `django.db.models.functions`). AB-12-12344 Output: 12344 ``` qs.annotate( r_part=Reverse('number') ).annotate( part=Cast( Reverse(Substr('r_part', 1, StrIndex('r_part', V('-')) - 1)), IntegerField() ) ) ``` Thanks
17,008
18,401,287
I am trying to build documentation for my Flask project, and I am experiencing issues with the path. My project structure is like: ``` myproject config all.py __init__.py logger.py logger.conf myproject models.py __init__.py en (english language docs folder) conf.py ``` logger.py includes a line ``` with open('logger.conf') as f: CONFIG = ast.literal_eval(f.read()) ``` which reads the configuration from logger.conf. While running "make html" I receive many errors relating to models: ``` /home/username/projects/fb/myproject/en/models/index.rst:7: WARNING: autodoc: failed to import class u'User' from module u'myproject.models'; the following exception was raised: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/sphinx/ext/autodoc.py", line 326, in import_object __import__(self.modname) File "/home/username/projects/fb/myproject/myproject/__init__.py", line 14, in <module> from logger import flask_debug File "/home/username/projects/fb/myproject/logger.py", line 5, in <module> with open('logger.conf') as f: CONFIG = ast.literal_eval(f.read()) IOError: [Errno 2] No such file or directory: 'logger.conf' ``` which is strange because conf.py includes the path: sys.path.insert(0, '/home/username/projects/fb/myproject/') and when I print sys.path it shows that the path is there. When I paste the FULL PATH to the file logger.conf in logger.py, it goes to another line similar to that and throws the same error for a different file. Why does Sphinx not resolve file paths relative to sys.path? It does not work for "./file" or "file". It started working only for "../file" when I changed all the paths, but that "destroyed" Python's operation, as for Python the path is then broken.
2013/08/23
[ "https://Stackoverflow.com/questions/18401287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2080641/" ]
It is the behaviour of `open()` that is the problem. Commands like `open()` and `chdir()` and so on work from the directory you're now in, which is probably the directory where the makefile is. To test it, add a `print(os.listdir('.'))` above your call to `open('logger.conf')`; that'll show you the problem. The solution? Use an absolute path. So, a little bit verbose, something like this: ``` import os this_directory = os.path.dirname(__file__) # __file__ is the absolute path to the current python file. open(os.path.join(this_directory, 'logger.conf')) ``` Bonus points if you've turned it into a Python package (= "it has a setup.py"); in that case you can do: ``` import pkg_resources open(pkg_resources.resource_filename('myproject.config', 'logger.conf')) ```
I had a similar problem when generating sphinx documentation for some python code that was not written to be run in my computer, but in an embedded system instead. In that case, the existing code attempted to open a file that did not exist in my computer, and that made sphinx fail. In this case, I decided to change the code to verify the file existence first, and that allowed sphinx to pass over this logic without a problem. ``` if os.path.isfile(filename): # open file here else: # handle error in a way that doesn't make sphinx crash print "ERROR: No such file: '%s'" % filename ``` For a moment, I tried [mocking open()](https://stackoverflow.com/questions/5237693/mocking-openfile-name-in-unit-tests), but it turns out that sphinx does require open() to do its job.
17,010
54,681,449
I upgraded from pandas 0.20.3 to pandas 0.24.1. While running the command `ts.sort_index(inplace=True)`, I am getting a `FutureWarning` in my test output, which is shown below. Can I change the method call to suppress the following warning? I am happy to keep the old behavior. ``` /lib/python3.6/site-packages/pandas/core/sorting.py:257: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'. To accept the future behavior, pass 'dtype=object'. To keep the old behavior, pass 'dtype="datetime64[ns]"'. items = np.asanyarray(items) ``` My index looks like the following prior to running the sort\_index: ``` ts.index DatetimeIndex(['2017-07-05 07:00:00+00:00', '2017-07-05 07:15:00+00:00', '2017-07-05 07:30:00+00:00', '2017-07-05 07:45:00+00:00', ... '2017-07-05 08:00:00+00:00'], dtype='datetime64[ns, UTC]', name='start', freq=None) ```
2019/02/14
[ "https://Stackoverflow.com/questions/54681449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4808588/" ]
I rewrote your question [here](https://stackoverflow.com/questions/54854900/workaround-for-pandas-futurewarning-when-sorting-a-datetimeindex), to include an MCVE. After it went a while with no responses, I posted an issue against Pandas. Here's my workaround: ``` with warnings.catch_warnings(): # Bug in Pandas emits useless warning when sorting tz-aware index warnings.simplefilter("ignore") ds = df.sort_index() ```
If I were you, I would do a downgrade using pip, pinning the previous version. It's the lazier answer. But if you really want to keep it upgraded, then there is a parameter called deprecated warning inside the pandas data frame. Just adjust it according to what you need. You can check it in the pandas documentation. Have a nice night
17,011
29,585,296
I have the following code (test.cgi): ``` #!/usr/bin/env python # -*- coding: UTF-8 -*- # enable debugging import cgitb cgitb.enable() print "Content-Type: text/plain;charset=utf-8" print print "Hello World!" ``` The file is CHMOD 777 and so is the directory it is in. I am getting the following error log ``` [Sun Apr 12 02:24:46.395628 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: env: python\r: : /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.396715 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] AH01215: No such file or directory: /fs5a/cheerupper/public/scripts/test.cgi [Sun Apr 12 02:24:46.397453 2015] [cgi:error] [pid 3574:tid 34479148032] [client 172.17.240.2:19716] End of script output before headers: test.cgi ``` I am getting a 500 Internal Service Error when I try to run in a browser. I can run when SSHing into the server by the command line. I have tried on Namecheap servers and am now trying on NearlyFreeSpeech.net to the same results.
2015/04/12
[ "https://Stackoverflow.com/questions/29585296", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103669/" ]
Flatten the list by using `itertools.chain`, then find the minimum as you would otherwise: ``` from itertools import chain listA = [[10,20,30],[40,50,60],[70,80,90]] min(chain.from_iterable(listA)) # 10 ```
Set `result` to `float("inf")`. Iterate over every number in every list and call each number `i`. If `i` is less than `result`, `result = i`. Once you're done, `result` will contain the lowest value.
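Spelled out as code, that description looks roughly like this (using the same example list):

```python
listA = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]

result = float("inf")       # larger than any real number in the lists
for sub in listA:
    for i in sub:
        if i < result:      # keep the smallest value seen so far
            result = i

print(result)  # 10
```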
17,012
16,505,259
I am new to Django and Python. I am using Windows 7 and the Eclipse IDE. I have installed Python 2.7, Django and pip. I have created a system variable called PYTHONPATH with the values `C:\Python27;C:\Python27\Scripts`. I am unable to set the path for Django and pip. When I type django-admin.py and pip in PowerShell, it shows a `commandnotfoundexception.` I have attached screenshots of my django files and pip files. ![enter image description here](https://i.stack.imgur.com/XKC5A.jpg) ![enter image description here](https://i.stack.imgur.com/FYsOt.jpg) Please help me
2013/05/12
[ "https://Stackoverflow.com/questions/16505259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1770199/" ]
You need to set the Powershell specific path variable. It does not know to look for an environment variable called `pythonpath`. That is helpful only for python aware applications (such as installers for Python modules). You need to add the Python directories to the `$env:Path` environment variable in Powershell. See [Setting Windows PowerShell path variable](https://stackoverflow.com/questions/714877/setting-windows-powershell-path-variable) > > $env:Path += ";C:\Python27;C:\Python27\Scripts" > > >
Also, if you're just getting started with Django and Eclipse, make sure you configure your PyDev interpreter settings to include the site-packages directory. This will ensure Eclipse can find your Django packages. You can find more details about setting up your PYTHONPATH inside Eclipse here: [PyDev Interpreter Configuration](http://pydev.org/manual_101_interpreter.html)
17,017
2,767,013
I think in the past python scripts would run off CGI, which would create a new thread for each process. I am a newbie so I'm not really sure, what options do we have? Is the web server pipeline that python works under any more/less efficient than say php?
2010/05/04
[ "https://Stackoverflow.com/questions/2767013", "https://Stackoverflow.com", "https://Stackoverflow.com/users/39677/" ]
You can still use CGI if you want, but the normal approach these days is using WSGI on the Python side, e.g. through `mod_wsgi` on Apache or via bridges to `FastCGI` on other web servers. At least with `mod_wsgi`, I know of no inefficiencies with this approach. BTW, your description of CGI ("create a new thread for each process") is inaccurate: what it does is create a new process for each query's service (and that process typically needs to open a database connection, import all needed modules, etc etc, which is what may make it slow even on platforms where forking a process, per se, is pretty fast, such as all Unix variants).
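For reference, the WSGI interface mentioned above is tiny; a minimal application looks like this (a sketch: `mod_wsgi` looks for a module-level callable named `application` by default):

```python
def application(environ, start_response):
    # one function call per request; no per-request process spawn,
    # imports and DB handles can live for the life of the process
    body = b"Hello World!\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

It can be exercised without any web server via the standard library's `wsgiref` machinery, which is also an easy way to unit-test handlers.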
I suggest cherrypy (<http://www.cherrypy.org/>). It is very convenient to use, has everything you need for making web services, but is still quite simple (no mega-framework). The most efficient way to use it is to run it as a self-contained server on localhost, put it behind Apache via a Proxy statement, and make Apache itself serve the static files. This generally has better performance than solutions such as CGI and mod_python, as the Python process running the web service runs separately from the main web server, so it can cache stuff and easily re-use resources (like DB handles). Also, you can then tweak the number of worker threads for Apache and your web application separately, resulting in better scalability.
17,018
21,102,790
I am using RHEL 6.3 and have 2.6.6. I need to use the Python 2.7.6. I compiled python from source, installed pip and virtual env. Now I am trying in different ways: ``` virtualenv-2.7 testvirtualenv virtualenv --python=/usr/local/bin/python2.7 myenv ``` However I am getting AssertionError. Full trace: ``` New python executable in testvirtualenv/bin/python2.7 Also creating executable in testvirtualenv/bin/python Installing setuptools, pip... Complete output from command /tmp/testvirtualenv/bin/python2.7 -c "import sys, pip; pip...ll\"] + sys.argv[1:])" setuptools pip: Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/__init__.py", line 9, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/pip-1.5-py2.py3-none-any.whl/pip/log.py", line 8, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 2696, in <module> File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 429, in __init__ File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 443, in add_entry File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1722, in find_in_zip File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1298, in has_metadata File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1614, in _has File 
"/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/pkg_resources.py", line 1488, in _zipinfo_name AssertionError: /usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/EGG-INFO/PKG-INFO is not a subpath of /usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv_support/setuptools-2.0.2-py2.py3-none-any.whl/ ---------------------------------------- ...Installing setuptools, pip...done. Traceback (most recent call last): File "/usr/local/bin/virtualenv-2.7", line 9, in <module> load_entry_point('virtualenv==1.11', 'console_scripts', 'virtualenv-2.7')() File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 820, in main symlink=options.symlink) File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 988, in create_environment install_wheel(to_install, py_executable, search_dirs) File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 956, in install_wheel 'PIP_NO_INDEX': '1' File "/usr/local/lib/python2.7/site-packages/virtualenv-1.11-py2.7.egg/virtualenv.py", line 898, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /tmp/testvirtualenv/bin/python2.7 -c "import sys, pip; pip...ll\"] + sys.argv[1:])" setuptools pip failed with error code 1 ```
2014/01/13
[ "https://Stackoverflow.com/questions/21102790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2195440/" ]
You need type composition:

```
trait Composition[F[_], G[_]] {
  type T[A] = F[G[A]]
}

class Later extends Do[Composition[Future, Seq]#T] {
  def `do`[A](f: Int => A): Future[Seq[A]] = ???
}
```

Or if you just need it in this one place

```
class Later extends Do[({ type T[A] = Future[Seq[A]] })#T] {
  def `do`[A](f: Int => A): Future[Seq[A]] = ???
}
```

See [scalaz](https://github.com/scalaz/scalaz/blob/2406a0f039e4e515478536fb58974b97c04de3b8/core/src/main/scala/scalaz/Composition.scala) (I could have sworn it included general type composition, but apparently not.)
I believe you want this:

```
import scala.language.higherKinds
import scala.concurrent.Future

object Main {
  type Id[A] = A

  trait Do[F[_]] {
    // Notice the return type now contains `Seq`.
    def `do`[A](f: Int => A): F[Seq[A]]
  }

  class Now extends Do[Id] {
    override def `do`[A](f: Int => A): Seq[A] = ???
  }

  class Later extends Do[Future] {
    override def `do`[A](f: Int => A): Future[Seq[A]] = ???
  }
}
```

But if you want something more general, where the abstract method is fully generic in its return type, then the type composition answer of @AlexeyRomanov is the one you're looking for.
17,021
20,986,255
In our application we allow users to write specific conditions and we allow them express the conditions using such notation: ``` (1 and 2 and 3 or 4) ``` Where each numeric number correspond to one specific rule/condition. Now the problem is, how should I convert it, such that the end result is something like this: ``` { "$or": [ "$and": [1, 2, 3], 4 ] } ``` One more example: ``` (1 or 2 or 3 and 4) ``` To: ``` { "$or": [ 1, 2, "$and": [3, 4] ] } ``` --- I have written 50 over lines of tokenizer that successfully tokenized the statement into tokens and validated using stack/peek algorithm, and the tokens looks like this: ``` ["(", "1", "and", "2", "and", "3", "or", "4", ")"] ``` And now how should I convert this kind of "infix notation" into "prefix notation" with the rule that `and` takes precedence over `or`? Some **pointers or keywords** are greatly appreciated! What I have now doesn't really lead me to what I needed at the moment. Some researches so far: * [Smart design of a math parser?](https://stackoverflow.com/questions/114586/smart-design-of-a-math-parser) * [Add missing left parentheses into equation](https://stackoverflow.com/questions/19062718/add-missing-left-parentheses-into-equation) * [Equation (expression) parser with precedence?](https://stackoverflow.com/questions/28256/equation-expression-parser-with-precedence?rq=1) * [Infix to postfix notation](http://scriptasylum.com/tutorials/infix_postfix/algorithms/infix-postfix/index.htm) * [Dijkstra's Shunting-yard Algorithm](http://en.wikipedia.org/wiki/Shunting-yard_algorithm) * [Infix and postfix algorithm](http://interactivepython.org/runestone/static/pythonds/BasicDS/stacks.html#infix-prefix-and-postfix-expressions) **EDIT** Also, user has the ability to specify any number of parentheses if they insist, such as like: ``` ((1 or 3) and (2 or 4) or 5) ``` So it get translates to: ``` { "$or": [{ $and": [ "$or": [1, 3], "$or": [2, 4] }, 5 ] } ``` --- **EDIT 2** I figured out the algorithm. 
[Posted as an answer below](https://stackoverflow.com/a/21024204/534862). Thanks for helping!
2014/01/08
[ "https://Stackoverflow.com/questions/20986255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/534862/" ]
This is most easily done using a two-step process: 1) convert to a syntax tree; 2) convert the syntax tree to prefix notation. A syntax tree is basically the same as your prefix notation, just built using the data structures of your programming language. The standard method to create a syntax tree is to use a LALR parser generator, which is available for most languages. LALR parsers are fast, powerful, and expressive. A LALR parser generator takes a .y file as input, and outputs a source code file for a parser in the programming language of your choice. So you run the LALR parser generator once to generate your parser. (All programmers should learn to use parser generators :). It is also smart to use a standard tokenizer, though I am guessing you have written your own :).) The following is a .y file to generate a LALR parser for your mini-language. Running this .y file through a LALR parser generator will output the source for a LALR parser, which takes tokens as input and outputs a parse tree (in the variable $root_tree). You need to have defined the parsetree_binaryop data structure manually elsewhere.

```
%left AND.
%left OR.

start ::= expr(e). { $root_tree = e; }

expr(r) ::= expr(e1) AND expr(e2). { r = new parsetree_binaryop(e1, OP_AND, e2); }
expr(r) ::= expr(e1) OR expr(e2). { r = new parsetree_binaryop(e1, OP_OR, e2); }
expr(r) ::= LPAR expr(e) RPAR. { r = e; }
```

The "%left AND" means that AND is left-associative (we could have chosen right too; it doesn't matter for AND and OR). That "%left AND" is mentioned before "%left OR" means that AND binds tighter than OR, and the generated parser will therefore do the right thing. When you have the syntax tree the parser gives you, generating the text representation is easy. Edit: this seems to be a LALR parser generator which outputs a parser in JavaScript: <http://sourceforge.net/projects/jscc/>
First define semantics. In your first example you gave the interpretation `(1 and 2 and 3) or 4`, but it can also be `1 and 2 and (3 or 4)`, so:

```
{
    "$and": [
        {"$or": [3, 4]},
        [1, 2]
    ]
}
```

Let's assume that `and` has higher priority. Then just go through the list and join all terms connected with `and`. Next, join all the rest with `or`.
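The "join the `and` runs first, then join with `or`" idea can be sketched for fully flattened token lists, i.e. without parentheses; nested input still needs something like the shunting-yard approach from the other answer. The function name and token format here are illustrative:

```python
def to_prefix(tokens):
    # split the token stream on "or"; whatever sits between two "or"s
    # is a run of operands joined by the tighter-binding "and"
    or_groups, current = [], []
    for tok in tokens:
        if tok == "or":
            or_groups.append(current)
            current = []
        elif tok != "and":
            current.append(tok)
    or_groups.append(current)

    operands = [g[0] if len(g) == 1 else {"$and": g} for g in or_groups]
    return operands[0] if len(operands) == 1 else {"$or": operands}

print(to_prefix(["1", "and", "2", "and", "3", "or", "4"]))
# -> {'$or': [{'$and': ['1', '2', '3']}, '4']}
```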
17,023
43,005,480
Let's say that I have three lists and want to add all elements that are integers to a list named `int_list`:

```
test1 = [1, 2, 3, "b", 6]
test2 = [1, "foo", "bar", 7]
test3 = ["more stuff", 1, 4, 99]
int_list = []
```

I know that I can do the following code to append all integers to a new list:

```
for elem1, elem2, elem3 in zip(test1, test2, test3):
    if elem1 is int:
        int_list.append(elem1)
    if elem2 is int:
        int_list.append(elem2)
    if elem3 is int:
        int_list.append(elem3)
```

Is there any way that I can merge the if statements into one conditional statement? Or make it less code? Is there a more pythonic way to do this? I tried doing the following code, but it would include elements that were not integers:

```
for elem1, elem2, elem3 in zip(test1, test2, test3):
    if (elem1 is int, elem2 is int, elem3 is int):
        int_list.append(elem1)
        int_list.append(elem2)
        int_list.append(elem3)
```
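A note on the snippets above: `elem1 is int` asks whether the element is the type object `int` itself, which is never true for values like `1`; the check needed is `isinstance(elem1, int)`. Also, `zip` stops at the shortest list, silently dropping trailing elements (here, the `6` in `test1`). Iterating the lists one after another avoids both problems; a sketch:

```python
test1 = [1, 2, 3, "b", 6]
test2 = [1, "foo", "bar", 7]
test3 = ["more stuff", 1, 4, 99]

# keep every int, walking each list in full
int_list = [elem
            for lst in (test1, test2, test3)
            for elem in lst
            if isinstance(elem, int)]

print(int_list)  # -> [1, 2, 3, 6, 1, 7, 1, 4, 99]
```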
2017/03/24
[ "https://Stackoverflow.com/questions/43005480", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6779980/" ]
When you import the data into Excel, tell the **Import Wizard** that the field is *Text*. [![enter image description here](https://i.stack.imgur.com/tqMFm.png)](https://i.stack.imgur.com/tqMFm.png)
My preference is to deal with the inputs, when possible, and in this case if you have control over the python script, it may be preferable to simply modify *that*, so that Excel's default behavior interprets the file in the desired way. Borrowing from [this similar question with a million upvotes](https://stackoverflow.com/questions/165042/stop-excel-from-automatically-converting-certain-text-values-to-dates?rq=1), you can modify your python script to include a non-printing character:

```
output.write('"{0}\t","{1}\t","{2}\t"\n'.format(value1, value2, value3))
```

This way, you can easily double-click to open the file and the contents will be treated as text, rather than interpreted as a numeric/date value. The benefit of this is that other users won't have to remember to use the wizard, and it may be easier to deal with mixed data as well. Example:

```
def writeit():
    csvPath = r'c:\debug\output.csv'
    a = '4-10'
    b = '10-0'
    with open(csvPath, 'w') as f:
        f.write('"{0}\t","{1}\t"'.format(a,b))
```

Produces the following file in text editor:

[![enter image description here](https://i.stack.imgur.com/iW4a5.png)](https://i.stack.imgur.com/iW4a5.png)

And when opened via double-click in Excel:

[![enter image description here](https://i.stack.imgur.com/RVxzo.png)](https://i.stack.imgur.com/RVxzo.png)
17,026
60,286,051
I have a python script and want to call a subprocess from it. The following example works completely fine:

Script1:

```
from subprocess import Popen

p = Popen('python Script2.py', shell=True)
```

Script2:

```
def execute():
    print('works!')

execute()
```

However, as soon as I want to pass a variable to the function, I get the following error:

```
def execute(random_variable: str):
SyntaxError: invalid syntax
```

Script1:

```
from subprocess import Popen

p = Popen('python Script2.py', shell=True)
```

Script2:

```
def execute(random_variable: str):
    print(random_variable)

execute(random_variable='does not work')
```

Does anyone have an idea why that could be the case? Couldn't find anything about it online :(
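A side note on the error above: `def execute(random_variable: str):` uses a Python 3 function annotation, so if the bare `python` command on the PATH resolves to Python 2, the child process raises exactly this SyntaxError even though Script1 itself runs under Python 3. A sketch that avoids both pitfalls: launch the child with the same interpreter via `sys.executable`, and hand the value over as a command-line argument rather than hard-coding it in the child's source (the temporary file below just stands in for `Script2.py`):

```python
import os
import subprocess
import sys
import tempfile

# stand-in for Script2.py: it reads its argument from sys.argv
child_source = "import sys\nprint(sys.argv[1])\n"

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(child_source)
    script_path = f.name

# same interpreter as the parent, argument passed explicitly, no shell
proc = subprocess.run([sys.executable, script_path, "does work"],
                      capture_output=True, text=True)
print(proc.stdout.strip())  # -> does work
os.unlink(script_path)
```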
2020/02/18
[ "https://Stackoverflow.com/questions/60286051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12920312/" ]
There is a file called extensions.json in the bin folder. Your startup calls register in that file. Whichever function app is deployed latest, that function's startup call will replace the earlier function's startup call. So you need to make sure that all of the functions' startup calls get registered in this file. [![enter image description here](https://i.stack.imgur.com/SNEUS.jpg)](https://i.stack.imgur.com/SNEUS.jpg) [![enter image description here](https://i.stack.imgur.com/kecTy.jpg)](https://i.stack.imgur.com/kecTy.jpg)
Seems like you haven't [injected](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection?view=aspnetcore-3.1) `IAzureTableStorageService` properly in your Startup class hence the DI can't find it. Reference the project where `IAzureTableStorageService` is located, add something like this in your Startup class: ``` services.AddTransient<IAzureTableStorageService, AzureTableStorageService>(); ``` where `AzureTableStorageService` is your class that implements `IAzureTableStorageService`.
17,027
1,083,391
Please help me with this: how do I put a double command in the *cmd*, like this in Linux: `apt-get install firefox && cp test.py /home/python/`? How do I do this in Windows? More specifically in Windows CE, but it's the same in Windows and in Windows CE, because the *cmd* is the same. Thanks!
2009/07/05
[ "https://Stackoverflow.com/questions/1083391", "https://Stackoverflow.com", "https://Stackoverflow.com/users/126353/" ]
If CE is the same as XP Pro (and I'm not sure you're right about that), you can use the same method:

```
dir && echo hello
```

Here it is running on my Windows VM (XP SP3):

```
C:\Documents and Settings\Pax>dir && echo hello
 Volume in drive C is Primary
 Volume Serial Number is 04F7-0E7B

 Directory of C:\Documents and Settings\Pax

29/06/2009  05:00 PM    <DIR>          .
29/06/2009  05:00 PM    <DIR>          ..
17/01/2009  12:38 PM    <DIR>          Desktop
 :  :  :
29/06/2009  05:00 PM             4,487 _viminfo
              14 File(s)         51,658 bytes
               9 Dir(s)  13,424,406,528 bytes free
hello

C:\Documents and Settings\Pax>
```

Some of the useful multi-command options are:

```
cmd1 & cmd2   - run cmd1 then run cmd2.
cmd1 && cmd2  - run cmd1 then, if cmd1 was successful, run cmd2.
cmd1 || cmd2  - run cmd1 then, if cmd1 was not successful, run cmd2.
```
This <http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/ntcmds_shelloverview.mspx?mfr=true> might be of some help. cheers
17,028
17,620,875
I'm a Python developer and most frequently I use [buildout](http://www.buildout.org/en/latest/) for managing my projects. In this case I don't ever need to run any command to activate my dependencies environment. However, sometimes I use virtualenv when buildout is too complicated for a particular case. Recently I started playing with Ruby and noticed a very useful feature: the environment changes automatically when I `cd` into the project folder. It is somehow related to `rvm` and the `.rvmrc` file. I'm just wondering if there are ways to hook a script onto different bash commands, so that I can run `workon environment_name` automatically when I `cd` into the project folder. **So the logic is as simple as:** when you `cd` into the project with `folder_name`, the script should run `workon folder_name`
2013/07/12
[ "https://Stackoverflow.com/questions/17620875", "https://Stackoverflow.com", "https://Stackoverflow.com/users/375373/" ]
One feature of Unix shells is that they let you create *shell functions*, which are much like functions in other languages; they are essentially named groups of commands. For example, you can write a function named `mycd` that first runs `cd`, and then runs other commands:

```
function mycd () {
    cd "$@"
    if ... ; then
        workon environment
    fi
}
```

(The `"$@"` expands to the arguments that you passed to `mycd`; so `mycd /path/to/dir` will call `cd /path/to/dir`.) As a special case, a shell function actually supersedes a like-named builtin command; so if you name your function `cd`, it will be run instead of the `cd` builtin whenever you run `cd`. In that case, in order for the function to call the builtin `cd` to perform the actual directory-change (instead of calling itself, causing infinite recursion), it can use Bash's `builtin` builtin to call a specified builtin command. So:

```
function cd () {
    builtin cd "$@"   # perform the actual cd
    if ... ; then
        workon environment
    fi
}
```

(Note: I don't know what your logic is for recognizing a project directory, so I left that as `...` for you to fill in. If you describe your logic in a comment, I'll edit accordingly.)
I think you're looking for one of two things. [`autoenv`](https://github.com/kennethreitz/autoenv) is a relatively simple tool that creates the relevant bash functions for you. It's essentially doing what ruakh suggested, but you can use it without having to know how the shell works. [`virtualenvwrapper`](https://pypi.python.org/pypi/virtualenvwrapper) is full of tools that make it easier to build smarter versions of the bash functions—e.g., switch to the venv even if you `cd` into one of its subdirectories instead of the base, or track venvs stored in `git` or `hg`, or … See the [Tips and Tricks](http://virtualenvwrapper.readthedocs.org/en/latest/tips.html) page. The [Cookbook for `autoenv`](https://github.com/kennethreitz/autoenv/wiki/Cookbook), shows some nifty ways ways to use the two together.
17,029
62,389,496
I wanted to write a Python script that lists all files in the current working directory, if the **length** of a file's name (without the extension) is between 3 and 6 characters. Also, it should only list files with the extension `.py`. I was not able to find any specific function that would return the length of a file's name, only the size of its content. Here is what my code looks like so far:

```
#!/usr/bin/env python3
import os

for file in os.listdir(os.getcwd()):
    if file.endswith(".py"):
        print(file)
```

Can anyone tell me what the solution could look like? Do I use a RegEx in the `os.getcwd(RegEx)` function?

**edit:** I am sorry for posting this trivial question. I found the solution and it looks as follows:

```
#!/usr/bin/env python3
import os

for file in os.listdir(os.getcwd()):
    if file.endswith(".py"):
        if ((len(os.path.splitext(file)[0])) > 2 and (len(os.path.splitext(file)[0])) < 7):
            print(file)
```

This works for my intended purpose. Thanks for the answers; they made me realize that using len(filename) was an option and therefore my question was not very smart.
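The same check reads a little cleaner if `os.path.splitext` is called once and unpacked; a sketch (the function name is mine):

```python
import os

def short_py_files(directory):
    matches = []
    for name in os.listdir(directory):
        stem, ext = os.path.splitext(name)
        # stem is the filename without its extension
        if ext == ".py" and 3 <= len(stem) <= 6:
            matches.append(name)
    return matches
```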
2020/06/15
[ "https://Stackoverflow.com/questions/62389496", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12781947/" ]
Maybe rewording `file` to `filename` might make things clearer...

```
import os

for filename in os.listdir(os.getcwd()):
    if filename.endswith(".py"):
        print(filename, len(filename))
```

Now since you know how `if` statements work, you can probably do something with `len(filename)`? :)
Your variable `file` is a string, so you can use a function that retrieves the length of a string:

```
for file in os.listdir(os.getcwd()):
    if file.endswith(".py"):
        print(file, len(file))
```
17,032
21,957,231
I'm very new to d3 and in order to learn I'm trying to manipulate the [d3.js line example](http://bl.ocks.org/mbostock/3883245), the code is below. I'm trying to modify this to use model data that I already have on hand. This data is passed down as a json object. The problem is that I don't know how to manipulate the data to fit what d3 expects. Most of the d3 examples use key-value arrays. I want to use a key array + a value array. For example my data is structured per the example below:

```
// my data. A name property, with array values and a value property with array values.
// data is the json object returned from the server
var tl = new Object;
tl.date = data[0].fields.date;
tl.close = data[0].fields.close;
console.log(tl);
```

Here is the structure visually (yes, it's in time format for now): ![My Data](https://i.stack.imgur.com/rGMYR.png) Now this is different from the [data.tsv](http://bl.ocks.org/mbostock/3883245#data.tsv) call which results in key-value pairs in the code below. ![key-value data](https://i.stack.imgur.com/rfgtP.png) *The goal is to use my data as is, without having to iterate over my array to preprocess it.* **Questions:** 1) Are there any built-ins in d3 to deal with this situation? For example, if key-values are absolutely necessary, in python we could use the `zip` function to quickly generate a key-value list. 2) Can I use my data as is, or does it *have* to be turned into key-value pairs?
**Below is the line example code.**

```
// javascript/d3 (LINE EXAMPLE)
var margin = {top: 20, right: 20, bottom: 30, left: 50},
    width = 640 - margin.left - margin.right,
    height = 480 - margin.top - margin.bottom;

var parseDate = d3.time.format("%d-%b-%y").parse;

var x = d3.time.scale()
    .range([0, width]);

var y = d3.scale.linear()
    .range([height, 0]);

var xAxis = d3.svg.axis()
    .scale(x)
    .orient("bottom");

var yAxis = d3.svg.axis()
    .scale(y)
    .orient("left");

var line = d3.svg.line()
    .x(function(d) { return x(d.date); })
    .y(function(d) { return y(d.close); });

var svg = d3.select("body").append("svg")
    .attr("width", width + margin.left + margin.right)
    .attr("height", height + margin.top + margin.bottom)
  .append("g")
    .attr("transform", "translate(" + margin.left + "," + margin.top + ")");

d3.tsv("/data.tsv", function(error, data) {
  data.forEach(function(d) {
    d.date = parseDate(d.date);
    d.close = +d.close;
  });

  x.domain(d3.extent(data, function(d) { return d.date; }));
  y.domain(d3.extent(data, function(d) { return d.close; }));

  svg.append("g")
      .attr("class", "x axis")
      .attr("transform", "translate(0," + height + ")")
      .call(xAxis);

  svg.append("g")
      .attr("class", "y axis")
      .call(yAxis)
    .append("text")
      .attr("transform", "rotate(-90)")
      .attr("y", 6)
      .attr("dy", ".71em")
      .style("text-anchor", "end")
      .text("Price ($)");

  svg.append("path")
      .datum(data)
      .attr("class", "line")
      .attr("d", line);
});
```
2014/02/22
[ "https://Stackoverflow.com/questions/21957231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1870013/" ]
```
for (String hashTagged : hashTags) {
    if (tweet.equalsIgnoreCase(hashTagged) != true) {
        hashTags.add(hashTagged);  // <-- modifies the list while iterating it
    }
}
```

The issue is that while iterating the `hashTags` list you can't update it.
You are getting `java.util.ConcurrentModificationException` because you are modifying the `List` `hashTags` while you are iterating over it:

```
for (String hashTagged : hashTags) {
    if (tweet.equalsIgnoreCase(hashTagged) != true) {
        hashTags.add(hashTagged);
    }
}
```

You can create a temporary list of the items to add (and call `addAll` after the loop), or improve your logic.
17,034
65,736,625
In Python, I have a datetime object created with this format:

```
datetime_object = datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S')
```

In other classes, I'm using this object. When I receive this object, I want to extract the time from it and compare it with a time string, like below:

```
if "01:15:13" == time_from_datetime_object
```

How can I do this?
2021/01/15
[ "https://Stackoverflow.com/questions/65736625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14697436/" ]
You need to use the strftime method:

```
from datetime import datetime

date_time_str = '2021-01-15 01:15:13'
datetime_object = datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S')

if "01:15:13" == datetime_object.strftime('%H:%M:%S'):
    print("match")
```
If you want to compare it as string: ``` if "01:15:13" == datetime_object.strftime('%H:%M:%S'): ```
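Alternatively, compare `time` objects instead of strings; building the expected value with `time(...)` (or parsing it once) avoids formatting mismatches such as `1:15:13` vs `01:15:13`. A sketch:

```python
from datetime import datetime, time

datetime_object = datetime.strptime("2021-01-15 01:15:13",
                                    "%Y-%m-%d %H:%M:%S")

# .time() drops the date part, leaving a directly comparable time object
if datetime_object.time() == time(1, 15, 13):
    print("match")  # -> match
```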
17,035
7,172,585
> **Possible Duplicate:**
> [Should Python import statements always be at the top of a module?](https://stackoverflow.com/questions/128478/should-python-import-statements-always-be-at-the-top-of-a-module)

In a very simple one-file python program like

```
# ------------------------
# place 1
# import something

def foo():
    # place 2
    # import something
    return something.foo()

def bar(f):
    ...

def baz():
    f = foo()
    bar(f)

baz()
# ----------------
```

Would you put the "import something" at place 1 or 2?
2011/08/24
[ "https://Stackoverflow.com/questions/7172585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/909210/" ]
[PEP 8](http://www.python.org/dev/peps/pep-0008/) specifies that:

* Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.

Imports should be grouped in the following order:

1. standard library imports
2. related third party imports
3. local application/library specific imports

You should put a blank line between each group of imports. Put any relevant `__all__` specification after the imports.
I'd principally agree with Robert S.'s answer, but sometimes it makes sense to put it into a function. Especially if you want to control the importing mechanism. This is useful if you cannot be sure if you actually have access to a specific module. Consider this example:

```
def foo():
    try:
        import somespecialmodule
        # do something
        # ...
    except ImportError:
        import anothermodule
        # do something else
        # ...
```

This might even be the case for standard library modules (I especially have in mind the `optparse` and `argparse` modules).
17,037
19,328,381
I am confused about classes in python. I don't want anyone to write down raw code but rather suggest methods of doing it. Right now I have the following code...

```
def main():
    lst = []
    filename = 'yob' + input('Enter year: ') + '.txt'
    for line in open(filename):
        line = line.strip()
        lst.append(line.split(','))
```

What this code does is take an input for a file based on a year. The program is placed in a folder with a bunch of text files that have different years in their names. Then, I made a class...

```
class Names():
    __slots__ = ('Name', 'Gender', 'Occurences')
```

This class just defines what objects I should make. The goal of the project is to build objects and create lists based off these objects. My main function returns a list containing several elements that look like the following: `[[jon, M, 190203], ...]` These elements have a name in `lst[0]`, a gender `M` or `F` in `[1]` and an occurrence count in `[2]`. I'm trying to find the top 20 Male and Female candidates and print them out. Goal: there should be a function which creates a name entry, i.e. mkEntry. It should be passed the appropriate information, build a new object, populate the fields, and return it.
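For the stated goal, mkEntry can be a small factory function that fills the slots and returns the object. A sketch (the snake_case name `mk_entry` and the sample sorting are mine; the slot names, including the question's spelling `Occurences`, follow the class above):

```python
class Names:
    __slots__ = ("Name", "Gender", "Occurences")

def mk_entry(name, gender, occurrences):
    # build a new object, populate the fields, and return it
    entry = Names()
    entry.Name = name
    entry.Gender = gender
    entry.Occurences = occurrences
    return entry

entries = [mk_entry("jon", "M", 190203),
           mk_entry("ann", "F", 150000),
           mk_entry("bob", "M", 120000)]

# top entries for one gender, largest occurrence count first
top_males = sorted((e for e in entries if e.Gender == "M"),
                   key=lambda e: e.Occurences, reverse=True)[:20]
print([e.Name for e in top_males])  # -> ['jon', 'bob']
```

A side benefit of `__slots__` is that assigning any attribute outside the declared three raises `AttributeError`, which catches typos early.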
2013/10/11
[ "https://Stackoverflow.com/questions/19328381", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2835743/" ]
You can use reflection ``` item.GetType().GetProperty(field).GetValue(item).ToString(); ``` (or `GetField()` instead of `GetProperty()` if... that's a field)
This is not trivial like it might be, say, in ecmascript. The simplest option is reflection, for example: ``` data = item.GetType().GetProperty(field).GetValue(item).ToString(); ``` however: depending on the API involved, there may be other options available involving indexers, etc. Note that reflection is slower than regular member access - if you are doing this in very high usage, you might need a more optimized implementation. It (reflection) is usually fast enough for light to moderate usage, though.
17,038
53,327,826
Many open-source projects use a "Fork me on Github" banner at the top-right corner of the pages in the documentation. To name just one, let's take the example of Python [requests](http://docs.python-requests.org/en/master/): [![Fork me on Github ribbon](https://i.stack.imgur.com/CHfGm.png)](https://i.stack.imgur.com/CHfGm.png) There is a post on the Github blog about those banners where image code is provided: [GitHub Ribbons](https://blog.github.com/2008-12-19-github-ribbons/) But nothing is explained about **how** to add the link in each of the page generated using Sphinx and then uploaded on ReadTheDocs. Could you please help to generate this automatically? I expected there could be an option in `conf.py` but I found none. My Sphinx configuration is the default one.
2018/11/15
[ "https://Stackoverflow.com/questions/53327826", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2291710/" ]
The easiest way is to use an alternative theme like [`alabaster`](https://pypi.org/project/alabaster/) as it comes with a preconfigured option like `github_banner` (see [Joran's answer](https://stackoverflow.com/a/53328720/2291710)). For other themes like `sphinx-rtd-theme` which do not provide such a setting, the solution is to rely on [Sphinx templating](https://www.sphinx-doc.org/en/master/templating.html). One has to create the file `docs/_templates/layout.html` with the following content:

```
{% extends '!layout.html' %}

{% block document %}
{{ super() }}
<a href="https://github.com/you">
    <img style="position: absolute; top: 0; right: 0; border: 0;"
         src="https://s3.amazonaws.com/github/ribbons/forkme_right_darkblue_121621.png"
         alt="Fork me on GitHub">
</a>
{% endblock %}
```
The great thing about Python (especially Python on GitHub) is that you can simply look at the source. I can go to <https://github.com/requests/requests/blob/master/docs/conf.py> and look at their conf.py, where we can see this entry:

```
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
    'show_powered_by': False,
    'github_user': 'requests',
    'github_repo': 'requests',
    'github_banner': True,
    'show_related': False,
    'note_bg': '#FFF59C'
}
```

We can also notice they are using the theme alabaster. With a quick google we find that alabaster has some docs: <https://github.com/mitya57/alabaster-1>

```
github_banner: true or false (default: false) - whether to apply a 'Fork me
on Github' banner in the top right corner of the page. If true, requires
that you set github_user and github_repo. May also submit a string file path
(as with logo, relative to $PROJECT/_static/) to be used as the banner image
instead of the default.
```

So the answer is to use the alabaster theme and set those options :)
17,039
44,922,108
The objective is to parse the output of an ill-behaving program which concatenates a list of numbers, e.g., 3, 4, 5, into a string "345", without any non-number separating the numbers. I also know that the list is sorted in ascending order. I came up with the following solution which reconstructs the list from a string: ``` a = '3456781015203040' numlist = [] numlist.append(int(a[0])) i = 1 while True: j = 1 while True: if int(a[i:i+j]) <= numlist[-1]: j = j + 1 else: numlist.append(int(a[i:i+j])) i = i + j break if i >= len(a): break ``` This works, but I have a feeling that the solution reflects too much the fact that I have been trained in Pascal, decades ago. Is there a better or more pythonic way to do it? I am aware that the problem is ill-posed, i.e., I could start with '34' as the initial element and get a different solution (or possibly end up with remaining trailing numeral characters which don't form the next element of the list).
2017/07/05
[ "https://Stackoverflow.com/questions/44922108", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2618889/" ]
This finds solutions for all possible initial number lengths:

```
a = '3456781015203040'

def numbers(a,n):
    current_num, i = 0, 0
    while True:
        while i+n <= len(a) and int(a[i:i+n]) <= current_num:
            n += 1
        if i+n <= len(a):
            current_num = int(a[i:i+n])
            yield current_num
            i += n
        else:
            return

for n in range(1,len(a)):
    l = list(numbers(a,n))
    # print only solutions that use up all digits of a
    if ''.join(map(str,l)) == a:
        print(l)
```

Output:

```
[3, 4, 5, 6, 7, 8, 10, 15, 20, 30, 40]
[34, 56, 78, 101, 520, 3040]
[34567, 81015, 203040]
```
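Since several starting widths can yield a valid reconstruction, a small follow-up (a sketch reusing the same greedy generator) keeps only the splits that consume every digit and picks the longest one:

```python
def numbers(a, n):
    # Greedy scan: emit the next strictly increasing number,
    # widening the candidate slice whenever it would not increase.
    current_num, i = 0, 0
    while True:
        while i + n <= len(a) and int(a[i:i + n]) <= current_num:
            n += 1
        if i + n <= len(a):
            current_num = int(a[i:i + n])
            yield current_num
            i += n
        else:
            return

a = '3456781015203040'
# Keep only reconstructions that use up all digits, then take the longest.
valid = [l for l in (list(numbers(a, n)) for n in range(1, len(a)))
         if ''.join(map(str, l)) == a]
best = max(valid, key=len)
print(best)  # [3, 4, 5, 6, 7, 8, 10, 15, 20, 30, 40]
```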
A little modification which allows parsing data like "7000000000001" and gives the best output (max list size):

```
a = '30000001'  # must be a string, not an int, so len() and slicing work

def numbers(a, n):
    current_num, i = 0, 0
    while True:
        while i + n <= len(a) and int(a[i:i + n]) <= current_num:
            n += 1
        if i + 2 * n > len(a):
            # fewer than n digits would remain after the next number,
            # so absorb the whole tail into one final number
            current_num = int(a[i:])
            yield current_num
            return
        elif i + n <= len(a):
            current_num = int(a[i:i + n])
            yield current_num
            i += n
        else:
            return

for n in range(1, len(a)):
    l = list(numbers(a, n))
    if ''.join(map(str, l)) == a:
        print(l)
```
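Neither answer enumerates every possible split. As a complementary sketch (not from either original answer), a short backtracking generator can list all strictly increasing decompositions, which also shows why inputs full of zeros like `'30000001'` admit so few splits:

```python
def decompositions(s, prev=-1):
    """Yield every split of digit string s into strictly increasing ints."""
    if not s:
        yield []
        return
    for width in range(1, len(s) + 1):
        head = int(s[:width])
        if head > prev:
            # Recurse on the remaining digits with a raised lower bound.
            for rest in decompositions(s[width:], head):
                yield [head] + rest

for split in decompositions('30000001'):
    print(split)  # prints [30000001] -- the only valid split
```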
17,040