Can Python or JS hide an embedded video source? I'm building a video website right now and I would like to hide the embedded source of the videos from being seen by beginner programmers. (I know there is no 100% reliable way to hide an embedded video source.) Do any experienced programmers know how Python or JS can help do this, or whether it can't be done?
For JavaScript, hiding code (almost) cannot be done! However, if your code is sensitive in any manner, try using obfuscators so that the code will not be readable to the human eye. Here are a few obfuscation services:

Free Javascript Obfuscator: Javascript Obfuscator
Uglify JS: Uglify JS
JSObfuscate to obfuscate JS/jQuery
JScrambler 3: JScrambler 3

UPDATE: Use this tutorial to get a heads-up: Improved JScrambler 3 Helps JavaScript And HTML5 Developers Obfuscate Their Code
How to concatenate database elements to a string I'm currently trying to take elements from a database and display them in a string by iterating through each row and appending it to an empty string.

    def PrintOverdueBooks():
        printed_message = ""
        for row in db_actions.GetAllOverdue():
            printed_message += row
            printed_message += " \n"
        print(printed_message)

Whenever the function is called I receive an error stating that "row" is a tuple and cannot be concatenated to a string. Using .join also creates an error.
You can try this. Each row is a tuple, so convert its elements to strings before joining them:

    def PrintOverdueBooks():
        printed_message = ''
        for row in db_actions.GetAllOverdue():
            printed_message += ' '.join(str(col) for col in row)
            printed_message += '\n'
        print(printed_message)
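As a quick demonstration of why the str() conversion matters (the sample row below is hypothetical): database cursors return tuples whose fields are often not strings, and str.join fails on non-string items.

    row = (42, "Python Crash Course", "2024-01-31")   # hypothetical overdue row, mixed types
    print(" ".join(str(col) for col in row))          # 42 Python Crash Course 2024-01-31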
'DataFrame' object has no attribute 'value_counts' My dataset is a DataFrame of dimension (840, 84). When I write the code:

    ds[ds.columns[1]].value_counts()

I get a correct output:

    Out[82]:
    0    847
    1      5
    Name: o_East, dtype: int64

But when I write a loop to store values, I get 'DataFrame' object has no attribute 'value_counts'. I can't explain why ...

    wind_vec = []
    wind_vec = [(ds[x].value_counts()) for x in ds.columns]

UPDATE FOR THE CODE

    import pandas as pd
    import numpy as np
    import numpy.ma as ma
    import statsmodels.api as sm
    import matplotlib
    import matplotlib.pyplot as plt
    from sklearn.preprocessing import OneHotEncoder

    dataset = pd.read_csv('data/dataset.csv')
    ds = dataset
    o_wdire = pd.get_dummies(ds['o_wdire'])
    s_wdire = pd.get_dummies(ds['s_wdire'])
    t_wdire = pd.get_dummies(ds['t_wdire'])
    k_wdire = pd.get_dummies(ds['k_wdire'])
    b_wdire = pd.get_dummies(ds['b_wdire'])
    o_wdire.rename(columns={'ENE': 'o_ENE', 'ESE': 'o_ESE', 'East': 'o_East', 'NE': 'o_NE', 'NNE': 'o_NNE', 'NNW': 'o_NNW',
                            'NW': 'o_NW', 'North': 'o_North', 'SE': 'o_SE', 'SSE': 'o_SSE', 'SSW': 'o_SSW', 'SW': 'o_SW',
                            'South': 'o_South', 'Variable': 'o_Variable', 'WSW': 'o_WSW', 'West': 'o_West'}, inplace=True)
    s_wdire.rename(columns={'ENE': 's_ENE', 'ESE': 's_ESE', 'East': 's_East', 'NE': 's_NE', 'NNE': 's_NNE', 'NNW': 's_NNW',
                            'NW': 's_NW', 'North': 's_North', 'SE': 's_SE', 'SSE': 's_SSE', 'SSW': 's_SSW', 'SW': 's_SW',
                            'South': 's_South', 'Variable': 's_Variable', 'West': 's_West', 'WSW': 's_WSW'}, inplace=True)
    k_wdire.rename(columns={'ENE': 'k_ENE', 'ESE': 'k_ESE', 'East': 'k_East', 'NE': 'k_NE', 'NNE': 'k_NNE', 'NNW': 'k_NNW',
                            'NW': 'k_NW', 'North': 'k_North', 'SE': 'k_SE', 'SSE': 'k_SSE', 'SSW': 'k_SSW', 'SW': 'k_SW',
                            'South': 'k_South', 'Variable': 'k_Variable', 'WNW': 'k_WNW', 'West': 'k_West', 'WSW': 'k_WSW'}, inplace=True)
    b_wdire.rename(columns={'ENE': 'b_ENE', 'ESE': 'b_ESE', 'East': 'b_East', 'NE': 'b_NE', 'NNE': 'b_NNE', 'NNW': 'b_NNW',
                            'NW': 'b_NW', 'North': 'b_North', 'SE': 'b_SE', 'SSE': 'b_SSE', 'SSW': 'b_SSW', 'SW': 'b_SW',
                            'South': 'b_South', 'Variable': 'b_Variable', 'WSW': 'b_WSW', 'WNW': 'b_WNW', 'West': 'b_West'}, inplace=True)
    t_wdire.rename(columns={'ENE': 't_ENE', 'ESE': 't_ESE', 'East': 't_East', 'NE': 't_NE', 'NNE': 't_NNE', 'NNW': 't_NNW',
                            'NW': 't_NW', 'North': 't_North', 'SE': 't_SE', 'SSE': 't_SSE', 'SSW': 't_SSW', 'SW': 't_SW',
                            'South': 't_South', 'Variable': 't_Variable', 'WSW': 't_WSW', 'WNW': 't_WNW', 'West': 't_West'}, inplace=True)

    #WIND
    ds_wdire = pd.DataFrame(pd.concat([o_wdire, s_wdire, t_wdire, k_wdire, b_wdire], axis=1))
    ds_wdire = ds_wdire.astype('float64')

    In [93]: ds_wdire.shape
    Out[93]: (852, 84)

    In [101]: ds_wdire[ds_wdire.columns[0]].head()
    Out[101]:
    0    0
    1    0
    2    0
    3    0
    4    0
    Name: o_ENE, dtype: float64

    In [103]: ds_wdire[ds_wdire.columns[0]].value_counts()
    Out[103]:
    0    838
    1     14
    Name: o_ENE, dtype: int64

    In [104]: [ds_wdire[x].value_counts() for x in ds_wdire.columns]
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-104-d9756c468818> in <module>()
          1 #Filtering for the wind direction based on the most frequent ones.
    ----> 2 [ds_wdire[x].value_counts() for x in ds_wdire.columns]

    <ipython-input-104-d9756c468818> in <listcomp>(.0)
          1 #Filtering for the wind direction based on the most frequent ones.
    ----> 2 [ds_wdire[x].value_counts() for x in ds_wdire.columns]

    /home/florian/anaconda3/lib/python3.5/site-packages/pandas/core/generic.py in __getattr__(self, name)
       2358                 return self[name]
       2359             raise AttributeError("'%s' object has no attribute '%s'" %
    -> 2360                                  (type(self).__name__, name))
       2361
       2362     def __setattr__(self, name, value):

    AttributeError: 'DataFrame' object has no attribute 'value_counts'
Thanks to @EdChum's advice, I checked:

    len(ds_wdire.columns), len(ds_wdire.columns.unique())
    Out[100]: (84, 83)

Actually, there was a missing entry in the rename dict: 'WNW' should have been mapped to 'o_WNW'. Because of that, two columns kept the duplicate name 'WNW', so selecting that name returned a DataFrame instead of a Series:

    o_wdire.rename(columns={'ENE': 'o_ENE', 'ESE': 'o_ESE', 'East': 'o_East', 'NE': 'o_NE', 'NNE': 'o_NNE', 'NNW': 'o_NNW',
                            'NW': 'o_NW', 'North': 'o_North', 'SE': 'o_SE', 'SSE': 'o_SSE', 'SSW': 'o_SSW', 'SW': 'o_SW',
                            'South': 'o_South', 'Variable': 'o_Variable', 'WSW': 'o_WSW', 'West': 'o_West',
                            **[MISSING VALUE WNW]**}, inplace=True)

Maybe it would be better to write a loop that inserts a prefix into the wind direction variables; that way, I would avoid that kind of problem.
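Following up on that last thought, here is a minimal sketch of such a loop, assuming ds still holds the raw columns from the question. get_dummies' prefix argument renames every dummy column automatically, so no hand-written rename dict (and no missed 'WNW' entry) is needed:

    import pandas as pd

    wdire_cols = ['o_wdire', 's_wdire', 't_wdire', 'k_wdire', 'b_wdire']
    # prefix='o' produces columns like 'o_ENE', 'o_WNW', ... for every category present
    dummies = [pd.get_dummies(ds[col], prefix=col.split('_')[0]) for col in wdire_cols]
    ds_wdire = pd.concat(dummies, axis=1).astype('float64')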
How to find ellipses in a text string in Python? Fairly new to Python (and Stack Overflow!) here. I have a data set with subject line data (text strings) that I am working on building a bag-of-words model with. I'm creating new variables that flag a 0 or 1 for various possible scenarios, but I'm stuck trying to identify where there is an ellipsis ("...") in the text. Here's where I'm starting from:

    Data_Frame['Elipses'] = Data_Frame.Subject_Line.str.match('(\w+)\.{2,}(.+)')

Inputting ('...') doesn't work for obvious reasons, but the above regex code was suggested and is still not working. Also tried this:

    Data_Frame['Elipses'] = Data_Frame.Subject_Line.str.match('.\.\.\')

No dice.

The above code shell works for other variables I've created, but I'm also having trouble creating a 0-1 output instead of True/False (this would be an 'as.numeric' argument in R). Any help here would also be appreciated. Thanks!
Using search() instead of match() will spot an ellipsis at any point in the text. If you need 0 or 1 to be returned, convert the result to bool and then to int.

    import re

    for test in ["hello..", "again... this", "is......a test", "...def"]:
        print int(bool(re.search(r'(\w+)\.{3,}', test)))

This matches on the middle two tests:

    0
    1
    1
    0

Take a look at search-vs-match for a good explanation in the Python docs.

To display the matching words:

    import re

    for test in ["hello..", "again... this", "is......a test", "...def"]:
        ellipses = re.search(r'(\w+)\.{3,}', test)
        if ellipses:
            print ellipses.group(1)

Giving you:

    again
    is
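For the pandas side of the question, a small sketch (not part of the original answer, sample data is made up): str.contains returns a boolean Series, and astype(int) converts True/False to the 1/0 output the asker wanted, the 'as.numeric' equivalent.

    import pandas as pd

    Data_Frame = pd.DataFrame({'Subject_Line': ['hello..', 'again... this', 'is......a test']})
    Data_Frame['Elipses'] = Data_Frame.Subject_Line.str.contains(r'\.{3,}', regex=True).astype(int)
    print(Data_Frame)
    #      Subject_Line  Elipses
    # 0         hello..        0
    # 1   again... this        1
    # 2  is......a test        1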
google.cloud namespace import error in __init__.py I have read through at least a dozen different Stack Overflow questions that all present the same basic problem and have the same basic answer: either the module isn't installed correctly or the OP is doing the import wrong.

In this case, I am trying to do from google.cloud import secretmanager_v1beta1. It works in my airflow container when I run airflow dags or if I run pytest tests/dags/test_my_dag.py. However, if I run cd dags; python -m my_dag or cd dags; python my_dag.py I get this error:

    from google.cloud import secretmanager as secretmanager
    ImportError: cannot import name 'secretmanager' from 'google.cloud' (unknown location)

I can add from google.cloud import bigquery in the line right above this line and that works OK. It appears to literally just be a problem with this particular package.

Why does it matter if pytest and airflow commands succeed? Because I have another environment where I am trying to run dataflow jobs from the command line and I get this same error. And unfortunately I don't think I can bypass this error in that environment for several reasons.

UPDATE 6

I have narrowed down the error to an issue with the google.cloud namespace and the secretmanager package within that namespace in the __init__.py file.

If I add from google.cloud import secretmanager to airflow/dags/__init__.py and then try to run python -m dags.my_dag.py, I receive this error but with a slightly different stacktrace:

    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/runpy.py", line 183, in _run_module_as_main
        mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
      File "/usr/local/lib/python3.7/runpy.py", line 109, in _get_module_details
        __import__(pkg_name)
      File "/workspace/airflow/dags/__init__.py", line 3, in <module>
        from google.cloud import secretmanager
    ImportError: cannot import name 'secretmanager' from 'google.cloud' (unknown location)

OLD INFORMATION

I am 95% sure that it's still a path problem and that pytest and airflow are fixing something I'm not aware of that isn't handled when I try to manually run the python script.

Things I have tried:

    cd /airflow; python setup.py develop --user
    cd /airflow; pip install -e . --user
    cd /airflow/dags; pip install -r requirements.txt --user

UPDATE

As per requests in the comments, here are the contents of requirements.txt:

    boto3>=1.7.84
    google-auth==1.11.2
    google-cloud-bigtable==1.2.1
    google-cloud-bigquery==1.24.0
    google-cloud-spanner==1.14.0
    google-cloud-storage==1.26.0
    google-cloud-logging==1.14.0
    google-cloud-secret-manager>=0.2.0
    pycloudsqlproxy>=0.0.15
    pyconfighelper>=0.0.7
    pymysql==0.9.3
    setuptools==45.2.0
    six==1.14.0

And I accidentally omitted the --user flags from the pip and python installation command examples above. In my container environment everything is installed into the user's home directory using --user and NOT in the global site-packages directory.

UPDATE 2

I've added the following code to the file that is generating the error:

    print('***********************************************************************************')
    import sys
    print(sys.path)
    from google.cloud import secretmanager_v1beta1 as secretmanager
    print('secretmanager.__file__: {}'.format(secretmanager.__file__))

From airflow list_dags:

    ['/home/app/.local/bin', '/usr/local/lib/python37.zip', '/usr/local/lib/python3.7',
     '/usr/local/lib/python3.7/lib-dynload', '/home/app/.local/lib/python3.7/site-packages',
     '/home/app/.local/lib/python3.7/site-packages/Jeeves-0.0.1-py3.7.egg',
     '/home/app/.local/lib/python3.7/site-packages/google_cloud_secret_manager-0.2.0-py3.7.egg',
     '/home/app/.local/lib/python3.7/site-packages/pyconfighelper-0.0.7-py3.7.egg',
     '/home/app/.local/lib/python3.7/site-packages/click-7.1.1-py3.7.egg',
     '/workspace/airflow', '/usr/local/lib/python3.7/site-packages', '/workspace/airflow/dags',
     '/workspace/airflow/config', '/workspace/airflow/plugins']
    secretmanager.__file__: /home/app/.local/lib/python3.7/site-packages/google_cloud_secret_manager-0.2.0-py3.7.egg/google/cloud/secretmanager_v1beta1/__init__.py

From python my_dag.py:

    ['/workspace/airflow/dags', '/usr/local/lib/python37.zip', '/usr/local/lib/python3.7',
     '/usr/local/lib/python3.7/lib-dynload', '/home/app/.local/lib/python3.7/site-packages',
     '/home/app/.local/lib/python3.7/site-packages/Jeeves-0.0.1-py3.7.egg',
     '/home/app/.local/lib/python3.7/site-packages/google_cloud_secret_manager-0.2.0-py3.7.egg',
     '/home/app/.local/lib/python3.7/site-packages/pyconfighelper-0.0.7-py3.7.egg',
     '/home/app/.local/lib/python3.7/site-packages/click-7.1.1-py3.7.egg',
     '/home/app/.local/lib/python3.7/site-packages/icentris_ml_airflow-0.0.0-py3.7.egg',
     '/usr/local/lib/python3.7/site-packages']

UPDATE 3

    tree airflow/dags

    airflow/dags
    ├── __init__.py
    ├── __pycache__
    │   ├── __init__.cpython-37.pyc
    │   ├── bq_to_cs.cpython-37.pyc
    │   ├── bq_to_wrench.cpython-37.pyc
    │   ├── fetch_cloudsql_tables-bluesun.cpython-37.pyc
    │   ├── fetch_cloudsql_tables.cpython-37.pyc
    │   ├── fetch_app_tables-bluesun.cpython-37.pyc
    │   ├── fetch_app_tables.cpython-37.pyc
    │   ├── gcs_to_cloudsql.cpython-37.pyc
    │   ├── gcs_to_s3.cpython-37.pyc
    │   ├── lake_to_staging.cpython-37.pyc
    │   ├── schedule_dfs_sql_to_bq-bluesun.cpython-37.pyc
    │   ├── schedule_dfs_sql_to_bq.cpython-37.pyc
    │   ├── app_to_bq_initial_load-bluesun.cpython-37.pyc
    │   ├── app_to_lake-bluesun.cpython-37.pyc
    │   └── app_to_lake.cpython-37.pyc
    ├── bq_to_wrench.py
    ├── composer_variables.json
    ├── my_ml_airflow.egg-info
    │   ├── PKG-INFO
    │   ├── SOURCES.txt
    │   ├── dependency_links.txt
    │   └── top_level.txt
    ├── lake_to_staging.py
    ├── libs
    │   ├── __init__.py
    │   ├── __pycache__
    │   │   ├── __init__.cpython-37.pyc
    │   │   ├── checkpoint.cpython-37.pyc
    │   │   └── utils.cpython-37.pyc
    │   ├── checkpoint.py
    │   ├── io
    │   │   ├── __init__.py
    │   │   ├── __pycache__
    │   │   │   └── __init__.cpython-37.pyc
    │   │   └── gcp
    │   │       ├── __init__.py
    │   │       ├── __pycache__
    │   │       │   ├── __init__.cpython-37.pyc
    │   │       │   └── storage.cpython-37.pyc
    │   │       └── storage.py
    │   ├── shared -> /workspace/shared/
    │   └── utils.py
    ├── requirements.txt
    ├── table_lists
    │   └── table-list.json
    └── templates
        └── sql
            ├── lake_to_staging.contacts.sql
            ├── lake_to_staging.orders.sql
            └── lake_to_staging.users.sql

    11 directories, 41 files

UPDATE 4

I tried fixing it so that sys.path looked the same when running python dags/my_dag.py as it does when running airflow list_dags or pytest test_my_dag.py. Still get the same error.

Looking at a more recent version of the documentation, I noticed that you should be able to just do from google.cloud import secretmanager. That gave me the same result (works with airflow and pytest, not when trying to run directly).

At this point, my best guess is that it has something to do with namespace magic, but I'm not sure.
It has to be installed via the terminal:

    pip install google-cloud-secret-manager

because the package name is not secretmanager but google-cloud-secret-manager.
Python - Multiple 'split' Error I posted here 4-5 days ago about a problem sorting some numbers from a file. Now it's the same as that other problem, but I want to sort numbers from one file (x) into another file (y). For example: in x I have (5,6,3,11,7), and I want to sort these numbers into y as (3,5,6,7,11). But I have some errors and can't resolve them on my own; I do not understand them. Can you help me?

    from sys import argv

    try:
        with open(argv[1], "r") as desti:
            cad = desti.readlines()
            k = list(cad)
            for n in range(len(cad)):
                k = n.split(',')
                k = (int, cad)
                k = sorted(cad)
        with open("nums_ordenats.txt", "w") as prl:
            prl.write(k)
    except Exception as err:
        print(err, "Error")

Actually, the error message is "'int' object has no attribute 'split' Error". I think the code is correct. The program also reports other errors, but as I keep changing the code, they change too. Thanks a lot!
There are too many problems with your code for me to address. Try this:

    from sys import argv

    with open(argv[1], "r") as infile:
        with open("nums_ordenats.txt", "w") as outfile:
            for line in infile:
                nums = [int(n) for n in line.split(',')]
                nums.sort()
                outfile.write(','.join([str(n) for n in nums]))
                outfile.write('\n')
PyQt - QListWidget with infinite scroll I have a QListWidget and I need to implement infinite scroll on it, something like this HTML example: https://scrollmagic.io/examples/advanced/infinite_scrolling.html

Basically, when the user scrolls to the last item of the list, I need to load more items and dynamically append them to the QListWidget. Is it possible? I haven't found any example yet.
There are likely many ways to achieve this task, but the easiest I found is to watch for changes in the scroll bar, and detect if we're at the bottom before adding more items to the list widget.

    import sys, random
    from PyQt5.QtWidgets import QApplication, QListWidget

    class infinite_scroll_area(QListWidget):  # https://doc.qt.io/qt-5/qlistwidget.html
        def __init__(self):
            super().__init__()  # call the parent constructor if you're overriding it.
            # connect our own function to the valueChanged event
            self.verticalScrollBar().valueChanged.connect(self.valueChanged)
            self.add_lines(15)
            self.show()

        def valueChanged(self, value):  # https://doc.qt.io/qt-5/qabstractslider.html#valueChanged
            if value == self.verticalScrollBar().maximum():  # if we're at the end
                self.add_lines(5)

        def add_lines(self, n):
            for _ in range(n):  # add random lines
                line_text = str(random.randint(0, 100)) + ' some data'
                self.addItem(line_text)

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        widget = infinite_scroll_area()
        sys.exit(app.exec_())

You can directly grab scroll wheel events by overriding the wheelEvent method of QListWidget, then do the logic there. That solves the potential problem of not starting out with enough list items for the scrollbar to appear: if it's not there, it can't change value, and the event can't fire. It introduces a new problem, however, as scrolling with the mouse wheel is not the only way to scroll the view (arrow keys, page up/down keys, etc.).

With the number of classes and subclasses in any GUI library, it becomes imperative to get really familiar with the documentation. It's a little inconvenient that it isn't as comprehensive for Python specifically, but I think the C++ docs are second to none as far as GUI library documentation goes.
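As a rough sketch of that wheelEvent alternative (an assumption on my part, not tested code from the answer): this method would be added to the infinite_scroll_area class above, letting the widget scroll normally and then topping the list up when the bar sits at its maximum.

    def wheelEvent(self, event):
        super().wheelEvent(event)  # keep the normal scrolling behaviour
        bar = self.verticalScrollBar()
        if bar.value() >= bar.maximum():  # at (or past) the bottom, so fetch more
            self.add_lines(5)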
How to correctly import custom widgets in kivy I have a widget (W2) made of other widgets (W1). Each has a corresponding .kv file, as below. Running main.py, I expect to see a black background with two labels, vertically stacked. Instead, something goes wrong. Without importing w1 I get:

    kivy.factory.FactoryException: Unknown class <W1>

So I thought, "Maybe I should import w1.py in w2.py even though it's not explicitly used in the py file?" That ... sort of worked. I get both labels on top of each other, as in the following image.

What am I doing wrong? What is the correct way to write a widget that is expected to be imported/included by another widget? And the correct way to import it? I tried using Builder.load_file() in the .py file and just importing the .py file, but that had similar results.

w1.py:

    import kivy
    from kivy.properties import StringProperty
    from kivy.uix.widget import Widget

    kivy.require('1.10.0')

    class W1(Widget):
        text = StringProperty('default')

        def __init__(self, **kwargs):
            super(W1, self).__init__(**kwargs)

w1.kv:

    #:kivy 1.10.0
    <W1>:
        text:
        Label:
            text: root.text

w2.py:

    import kivy
    from kivy.uix.boxlayout import BoxLayout
    # from w1 import W1  # added this to get it working, but with the incorrect layout

    kivy.require('1.10.0')

    class W2(BoxLayout):
        def __init__(self, **kwargs):
            super(W2, self).__init__(**kwargs)

w2.kv:

    #:kivy 1.10.0
    #:include w1.kv

    <W2>:
        orientation: 'vertical'
        W1:
            text: 'w1.1'
        W1:
            text: 'w1.2'

main.py:

    import kivy
    from w2 import W2
    from kivy.app import App

    kivy.require('1.10.0')

    class mainApp(App):
        def build(self):
            pass

    if __name__ == '__main__':
        mainApp().run()

main.kv:

    #:kivy 1.10.0
    #:include w2.kv

    W2:

EDIT: The overlapping has been resolved, though maybe not correctly. I had W1 inherit from BoxLayout rather than Widget, with the thought that maybe there was a minimum height/width property missing in the base Widget class. I'm still not certain what the "correct" way to handle importing a widget with a paired .kv file is, or exactly why I'm getting overlapping widgets when I inherit from Widget; only speculation.
Why are you using two different kv files for this? I would say the proper way would be similar to what I have with my kv file, because you are splitting up things that can be done on a single page; if you need different pages, use the ScreenManager.

main.py:

    import kivy
    from kivy.app import App
    from kivy.uix.widget import Widget
    from kivy.uix.label import Label
    from kivy.uix.gridlayout import GridLayout

    class MyGrid(Widget):
        pass

    class MyApp(App):
        def build(self):
            # this returns what we want to show from the kv file
            return MyGrid()

    if __name__ == "__main__":
        MyApp().run()

The file is written like this because the "App" suffix falls off, and in order to link the two files the kv file must have the same name.

my.kv:

    # "<>" basically links MyGrid from the .py file and then displays the
    # grid layout and such
    GridLayout:
        rows: 2
        Label:
            text: "whatever"
        Label:
            text: "whatever 2"
ImportError: cannot import name 'convert_kernel' When I try to use TensorFlow to train a model, I get this error message:

    File "/Users/ABC/anaconda3/lib/python3.6/site-packages/keras/utils/layer_utils.py", line 7, in <module>
        from .conv_utils import convert_kernel
    ImportError: cannot import name 'convert_kernel'

I have already installed Keras.
I got the same issue. The filename of my Python code was "tensorflow.py". After I changed the name to "test.py", the issue was resolved.

I guess there is already a "tensorflow.py" in the tensorflow package, so if anyone uses the same name, it may lead to a conflict. If your Python file is also called "tensorflow.py", try another name and see if it helps.
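A quick way to check for this kind of shadowing (a diagnostic sketch, not from the original answer): print which file Python actually imported.

    import tensorflow
    print(tensorflow.__file__)  # a path ending in your own tensorflow.py means the real package is shadowed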
Python function that identifies if the numbers in a list or array are closer to 0 or 1 I have a numpy array of numbers. Below is an example:

    [[-2.10044520e-04  1.72314372e-04  1.77235336e-04 -1.06613465e-04
       6.76617611e-07  2.71623057e-03 -3.32789944e-05  1.44899758e-05
       5.79249863e-05  4.06502549e-04 -1.35823707e-05 -4.13955189e-04
       5.29862793e-05 -1.98286005e-04 -2.22829175e-04 -8.88758230e-04
       5.62228710e-05  1.36249752e-05 -2.00474996e-05 -2.10090068e-05
       1.00007518e+00  1.00007569e+00 -4.44597417e-05 -2.93724453e-04
       1.00007513e+00  1.00007496e+00  1.00007532e+00 -1.22357142e-03
       3.27903892e-06  1.00007592e+00  1.00007468e+00  1.00007558e+00
       2.09869172e-05 -1.97610235e-05  1.00007529e+00  1.00007530e+00
       1.00007503e+00 -2.68725642e-05 -3.00372853e-03  1.00007386e+00
       1.00007443e+00  1.00007388e+00  5.86993822e-05 -8.69989983e-06
       1.00007590e+00  1.00007488e+00  1.00007515e+00  8.81850779e-04
       2.03875532e-05  1.00007480e+00  1.00007425e+00  1.00007517e+00
      -2.44678912e-05 -4.36556267e-08  1.00007436e+00  1.00007558e+00
       1.00007571e+00 -5.42990711e-04  1.45517859e-04  1.00007522e+00
       1.00007469e+00  1.00007575e+00 -2.52271817e-05 -7.46339417e-05
       1.00007427e+00]]

I want to know if each of the numbers is closer to 0 or to 1. Is there a function in Python that could do it, or do I have to do it manually?
A straightforward way:

    lst = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    closerTo1 = [x >= 0.5 for x in lst]

Or you can use numpy:

    import numpy as np

    lst = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    arr = np.array(lst)
    closerTo1 = arr >= 0.5

Note that >= 0.5 can be changed to > 0.5, however you choose to treat it.
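If integer 0/1 labels are wanted rather than booleans, the comparison can be cast directly; a small sketch with made-up sample values:

    import numpy as np

    arr = np.array([-2.1e-04, 1.00007518, 0.4, 0.51])  # sample values like the question's
    labels = (arr >= 0.5).astype(int)                  # 1 means closer to 1, 0 means closer to 0
    print(labels)                                      # [0 1 0 1]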
Python pandas: map and return NaN I have two data frames. The first one is:

    id  code
    1   2
    2   3
    3   3
    4   1

and the second one is:

    id  code  name
    1   1     Mary
    2   2     Ben
    3   3     John

I would like to map data frame 1 so that it looks like:

    id  code  name
    1   2     Ben
    2   3     John
    3   3     John
    4   1     Mary

I tried to use this code:

    mapping = dict(df2[['code', 'name']].values)
    df1['name'] = df1['code'].map(mapping)

My mapping is correct, but the mapped values are all NaN:

    mapping = {1: "Mary", 2: "Ben", 3: "John"}

    id  code  name
    1   2     NaN
    2   3     NaN
    3   3     NaN
    4   1     NaN

Does anyone know why, and how to solve this?
The problem is a different type of values in the column code, so converting to integers or strings is necessary to get the same types in both:

    print(df1['code'].dtype)
    object

    print(df2['code'].dtype)
    int64

    print(type(df1.loc[0, 'code']))
    <class 'str'>

    print(type(df2.loc[0, 'code']))
    <class 'numpy.int64'>

    mapping = dict(df2[['code', 'name']].values)

    # same dtypes - integers
    df1['name'] = df1['code'].astype(int).map(mapping)

    # same dtypes - object (obviously strings)
    df2['code'] = df2['code'].astype(str)
    mapping = dict(df2[['code', 'name']].values)
    df1['name'] = df1['code'].map(mapping)

    print(df1)
       id code  name
    0   1    2   Ben
    1   2    3  John
    2   3    3  John
    3   4    1  Mary
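An equivalent sketch that skips building the dict by hand (the sample frames mirror the question's data): set_index() turns df2 into a code-to-name lookup Series, which map() accepts directly.

    import pandas as pd

    df1 = pd.DataFrame({'id': [1, 2, 3, 4], 'code': ['2', '3', '3', '1']})  # code stored as str
    df2 = pd.DataFrame({'id': [1, 2, 3], 'code': [1, 2, 3], 'name': ['Mary', 'Ben', 'John']})

    # astype(int) aligns the dtypes, then the Series acts as the mapping
    df1['name'] = df1['code'].astype(int).map(df2.set_index('code')['name'])
    print(df1)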
How to select a specific range of cells in an Excel worksheet with Python library tools I would like to select a specific range of cells in a workbook worksheet. I am currently able to load a worksheet with the lines below:

    import pandas as pd
    sheet1 = pd.read_excel('workbookname1.xlsx', sheet_name=['sheet1'])

I would like to go one step further and select a range of cells in the worksheet so that I can practice dataframe functionality on the defined range. Some of my ranges have row counts greater than 1000. How can I select a variable-length number of rows for the desired Excel range?
You can utilize the OpenPyXL module:

    from openpyxl import Workbook, load_workbook

    wb = load_workbook("workbookname1.xlsx")
    ws = wb.active
    cell_range = ws['A1':'C2']

You can also use the iter_rows() or iter_cols() methods. For additional information, you can refer to the OpenPyXL documentation.
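For the variable-length ranges the question mentions, ws.max_row gives the last used row, so the range can be built dynamically; a sketch assuming the data lives in columns A through C:

    from openpyxl import load_workbook

    wb = load_workbook("workbookname1.xlsx")  # filename from the question
    ws = wb.active

    last_row = ws.max_row                         # last used row, so the length can vary
    cell_range = ws['A1':'C{}'.format(last_row)]  # tuple of row tuples of Cell objects

    # or iterate without building a range string at all
    for row in ws.iter_rows(min_row=1, max_col=3, max_row=last_row):
        print([cell.value for cell in row])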
Listing all class members with Python `inspect` module What is the "optimal" way to list all class methods of a given class using inspect? It works if I use inspect.isfunction as the predicate in getmembers, like so:

    class MyClass(object):
        def __init(self, a=1):
            pass
        def somemethod(self, b=1):
            pass

    inspect.getmembers(MyClass, predicate=inspect.isfunction)

returns

    [('_MyClass__init', <function __main__.MyClass.__init>),
     ('somemethod', <function __main__.MyClass.somemethod>)]

But isn't it supposed to work via ismethod?

    inspect.getmembers(MyClass, predicate=inspect.ismethod)

which returns an empty list in this case. It would be nice if someone could clarify what's going on. I was running this in Python 3.5.
As described in the documentation, inspect.ismethod will show bound methods. This means you have to create an instance of the class if you want to inspect its methods. Since you are trying to inspect methods on the un-instantiated class, you are getting an empty list.

If you do:

    x = MyClass()
    inspect.getmembers(x, predicate=inspect.ismethod)

you will get the methods.
Groupby in pandas multiplication I have a data frame called bf (the commas are mine; it was imported from a csv file):

    val,ben
    a,123
    b,234
    c,123

I have another, larger data frame df:

    bla,val,blablab,blabla
    1,a,123,333
    2,b,333,222
    3,c,12,33
    1,a,123,333
    .....

I would like to create a new data frame which multiplies all rows of df by the specific value of ben corresponding to that row's val, taken from bf. For example, the first row of this new data frame would be:

    1,a,123*123,333*123

How do we do that using pandas and groupby?

EDIT: Note that bf and df have different lengths.
Probably you want to use a merge to bring the column ben into your dataframe:

    df_merged = pd.merge(df, bf, on='val')

Then you can calculate your product however you like, for example:

    df_prod = df_merged * df_merged.ben
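A hedged sketch of the "however you like" part, with sample frames mirroring the question's data: mul(..., axis=0) multiplies each row by that row's ben value, whereas the plain * above aligns the Series index with the column names, which is usually not what is wanted here.

    import pandas as pd

    bf = pd.DataFrame({'val': ['a', 'b', 'c'], 'ben': [123, 234, 123]})
    df = pd.DataFrame({'bla': [1, 2, 3, 1],
                       'val': ['a', 'b', 'c', 'a'],
                       'blablab': [123, 333, 12, 123],
                       'blabla': [333, 222, 33, 333]})

    df_merged = pd.merge(df, bf, on='val')
    # row-wise multiplication of the numeric columns by that row's ben
    df_prod = df_merged[['blablab', 'blabla']].mul(df_merged['ben'], axis=0)
    print(df_prod)  # first row: 123*123, 333*123, as in the question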
Why doesn't this Python function with a dictionary named parameter and a default value apply the default value each time it's called? Possible Duplicate: "Least Astonishment" in Python: The Mutable Default Argument

The following code illustrates the issue:

    def fn(param, named_param={}, another_named_param=1):
        named_param[param] = str(another_named_param)
        another_named_param += param
        return named_param

    for i in range(0, 2):
        result = {}
        result = fn(i)
        print result

    print

    for i in range(0, 2):
        result = fn(i, named_param={})
        print result

    print

    result = fn(0)
    print result
    result = fn(1)
    print result

Output:

    {0: '1'}
    {0: '1', 1: '1'}

    {0: '1'}
    {1: '1'}

    {0: '1', 1: '1'}
    {0: '1', 1: '1'}

I expected the output of the 1st loop, the 2nd loop, and the two subsequent single calls (with param matching the values of the for loop) to be the same, but fn holds onto the value of named_param if it is not explicitly defaulted to an empty dictionary. Is this functionality defined in the documentation?
The default value of named_param is evaluated once, when the function definition is executed. It is the same dictionary each time, and its value is retained between calls to the function.

Do not use mutable objects as default values in functions unless you do not mutate them. Instead, use None or another sentinel value, check for that value, and replace it with a fresh object (e.g. an empty dictionary). This way you get a fresh one each time your function is called.
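A minimal sketch of that sentinel pattern applied to the question's function:

    def fn(param, named_param=None, another_named_param=1):
        if named_param is None:
            named_param = {}  # a fresh dict on every call
        named_param[param] = str(another_named_param)
        return named_param

    print(fn(0))  # {0: '1'}
    print(fn(1))  # {1: '1'}  and no longer accumulates across calls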
Importing a class to another class in Python I am trying to learn Python. I tried to import a class into another class, but it is not working.

Application.py:

    class Application:
        def example(self):
            return "i am from Application class"

Main.py:

    class Main:
        def main():
            application = Application()
            application.example()

    if __name__ == "__main__":
        Main.main()

This gives me:

    File "Main.py", line 11, in <module>
        Main.main()
    TypeError: unbound method main() must be called with Main instance as first argument (got nothing instead)
You should instantiate your Main class first:

    if __name__ == '__main__':
        myMain = Main()
        myMain.main()

But this will give you another error: TypeError: main() takes no arguments (1 given). There are two ways to fix this. Either make Main.main take one argument:

    class Main:
        def main(self):
            application = Application()
            application.example()

or make Main.main a static method, in which case you don't have to instantiate your Main class:

    class Main:
        @staticmethod
        def main():
            application = Application()
            application.example()

    if __name__ == "__main__":
        Main.main()

Note that Main.py also needs from Application import Application at the top so that the Application class is visible there at all.
Jinja2 extensions - get the value of a variable passed to an extension I have a Jinja2 extension. It basically follows the parser logic, except that I need to get a value from the parsed args being passed in. For instance, if I have an extension called loadfile and pass it a string literal:

    {% loadfile "file.txt" %}

then when I grab the argument through parser.parse_expression() I get a nodes.Const node that has a .value attribute, and I can get the name file.txt no problem.

However...

    {% set filename = "file.txt" %}
    {% loadfile filename %}

causes me issues. The parser gives me a nodes.Name expression node, which responds to neither .value nor the as_const(...) call that all other nodes respond to. I can't figure out how to evaluate the value of the nodes.Name node I'm getting from parsing the arguments, and thus cannot get the name file.txt.

Is there a good way to parse argument variables/values in an extension so that I can use them to execute the extension? Thanks!
This works for me:

    def parse(self, parser):
        lineno = parser.stream.next().lineno
        # args will contain the filename
        args = [parser.parse_expression()]
        return nodes.Output([
            nodes.MarkSafeIfAutoescape(self.call_method('handle', args))
        ]).set_lineno(lineno)

    def handle(self, filename):
        pass  # bla-bla-bla

Because the argument is kept as an expression node and evaluated when the generated call runs at render time, handle() receives the resolved value of filename, whether it was a literal or a template variable.
Get relative links from html page I want to extract only relative URLs from an HTML page. Somebody has suggested this:

    find_re = re.compile(r'\bhref\s*=\s*("[^"]*"|\'[^\']*\'|[^"\'<>=\s]+)', re.IGNORECASE)

but it has two problems:

1. It returns all URLs from the page, both absolute and relative.
2. The URL may be quoted by "" or '' at random.
Use the tool for the job: an HTML parser, like BeautifulSoup.

You can pass a function as an attribute value to find_all() and check whether href starts with http:

    from bs4 import BeautifulSoup

    data = """
    <div>
    <a href="http://google.com">test1</a>
    <a href="test2">test2</a>
    <a href="http://amazon.com">test3</a>
    <a href="here/we/go">test4</a>
    </div>"""

    soup = BeautifulSoup(data)
    print soup.find_all('a', href=lambda x: not x.startswith('http'))

Or, using urlparse and checking for a network location part:

    def is_relative(url):
        return not bool(urlparse.urlparse(url).netloc)

    print soup.find_all('a', href=is_relative)

Both solutions print:

    [<a href="test2">test2</a>, <a href="here/we/go">test4</a>]
replace information in Json string based on a condition I have a very large JSON file with several nested keys. From what I've read so far, if you do:

    x = json.loads(data)

Python will interpret it as a dictionary (correct me if I'm wrong). The fourth level of nesting in the JSON file contains several elements named by an ID number, and all of them contain an element called children, something like this:

    {"level1":
        {"level2":
            {"level3":
                {"ID1": {"children": [1,2,3,4,5]} }
                {"ID2": {"children": []} }
                {"ID3": {"children": [6,7,8,9,10]} }
            }
    }}

What I need to do is replace all items in all the "children" elements with nothing, meaning "children": [], if the ID number is in a list called new_ids, and then convert it back to JSON. I've been reading on the subject for a few hours now but I haven't found anything similar to this to help myself with. I'm running Python 3.3.3. Any ideas are greatly appreciated!! Thanks!!

EDIT

List:

    new_ids = ["ID1", "ID3"]

Expected result:

    {"level1":
        {"level2":
            {"level3":
                {"ID1": {"children": []} }
                {"ID2": {"children": []} }
                {"ID3": {"children": []} }
            }
    }}
First of all, your JSON is invalid. I assume you want this:

    {"level1": {
        "level2": {
            "level3": {
                "ID1": {"children": [1,2,3,4,5]},
                "ID2": {"children": []},
                "ID3": {"children": [6,7,8,9,10]}
            }
        }
    }}

Now, load your data as a dictionary:

    >>> with open('file', 'r') as f:
    ...     x = json.load(f)
    ...
    >>> x
    {u'level1': {u'level2': {u'level3': {u'ID2': {u'children': []}, u'ID3': {u'children': [6, 7, 8, 9, 10]}, u'ID1': {u'children': [1, 2, 3, 4, 5]}}}}}

Now you can loop over the keys in x['level1']['level2']['level3'] and check whether they are in your new_ids:

    >>> new_ids = ["ID1", "ID3"]
    >>> for key in x['level1']['level2']['level3']:
    ...     if key in new_ids:
    ...         x['level1']['level2']['level3'][key]['children'] = []
    ...
    >>> x
    {u'level1': {u'level2': {u'level3': {u'ID2': {u'children': []}, u'ID3': {u'children': []}, u'ID1': {u'children': []}}}}}

You can now write x back to a file like this:

    with open('myfile', 'w') as f:
        f.write(json.dumps(x))

If your new_ids list is large, consider making it a set.
How to get percentiles on groupby column in python? I have a dataframe as below:

    df = pd.DataFrame({'state': ['CA', 'WA', 'CO', 'AZ'] * 3,
                       'office_id': list(range(1, 7)) * 2,
                       'sales': [np.random.randint(100000, 999999) for _ in range(12)]})

To get state-wise percentiles of sales, I have written the code below:

    pct_list1 = []
    pct_list2 = []
    for i in df['state'].unique().tolist():
        pct_list1.append(i)
        for j in range(0, 101, 10):
            pct_list1.append(np.percentile(df[df['state'] == i]['sales'], j))
        pct_list2.append(pct_list1)
        pct_list1 = []

    colnm_list1 = []
    for k in range(0, 101, 10):
        colnm_list1.append('perct_' + str(k))

    colnm_list2 = ['state'] + colnm_list1

    df1 = pd.DataFrame(pct_list2)
    df1.columns = colnm_list2
    df1

Can we optimize this code? I feel that we could also use:

    df1 = df[['state', 'sales']].groupby('state').quantile(0.1).reset_index(level=0)
    df1.columns = ['state', 'perct_0']
    for i in range(10, 101, 10):
        df1.loc[:, ('perct_' + str(i))] = df[['state', 'sales']].groupby('state').quantile(float(i / 100.0)).reset_index(level=0)['sales']

If there are any other alternatives, please help. Thanks.
How about this?

    quants = np.arange(.1, 1, .1)
    pd.concat([df.groupby('state')['sales'].quantile(x) for x in quants],
              axis=1,
              keys=[str(x) for x in quants])
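An equivalent one-call sketch (assuming the df from the question): quantile() accepts the whole array of quantiles at once, and unstack() pivots the resulting (state, quantile) MultiIndex into one column per quantile.

    import numpy as np

    quants = np.arange(0.1, 1, 0.1)
    result = df.groupby('state')['sales'].quantile(quants).unstack()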
Isolating subquery from its parent I have a column_property on my model that is a count of the relationships on a secondary model:

    membership_total = column_property(
        select([func.count(MembershipModel.id)]).where(
            MembershipModel.account_id == id).correlate_except(None))

This works fine until I try to join the membership model:

    query = AccountModel.query.join(MembershipModel)
    # ProgrammingError: subquery uses ungrouped column "membership.account_id" from outer query

I can fix this issue by appending:

    query = query.group_by(MembershipModel.account_id, AccountModel.id)
    # resolves the issue

But I don't really want to do that. I want it to be its own island that ignores whatever the query is doing and just focuses on returning a count of memberships for that particular row's account ID. What can I do to the column_property to make it more robust and less reliant on what the parent query is doing?
Pass MembershipModel to correlate_except() instead of None, as described here in the documentation. Your current method allows omitting everything from the subquery's FROM clause if it can be correlated to the enclosing query, and when you join MembershipModel it becomes available in the enclosing query.

Here's a simplified example. Given 2 models A and B:

    In [2]: class A(Base):
       ...:     __tablename__ = 'a'
       ...:     id = Column(Integer, primary_key=True, autoincrement=True)
       ...:

    In [3]: class B(Base):
       ...:     __tablename__ = 'b'
       ...:     id = Column(Integer, primary_key=True, autoincrement=True)
       ...:     a_id = Column(Integer, ForeignKey('a.id'))
       ...:     a = relationship('A', backref='bs')

and 2 column_property definitions on A:

    In [10]: A.b_count = column_property(
        select([func.count(B.id)]).where(B.a_id == A.id).correlate_except(B))

    In [11]: A.b_count_wrong = column_property(
        select([func.count(B.id)]).where(B.a_id == A.id).correlate_except(None))

If we query just A, everything's fine:

    In [12]: print(session.query(A))
    SELECT a.id AS a_id,
           (SELECT count(b.id) AS count_1 FROM b WHERE b.a_id = a.id) AS anon_1,
           (SELECT count(b.id) AS count_2 FROM b WHERE b.a_id = a.id) AS anon_2
    FROM a

But if we join B, the second property incorrectly correlates B from the enclosing query and completely omits the FROM clause:

    In [13]: print(session.query(A).join(B))
    SELECT a.id AS a_id,
           (SELECT count(b.id) AS count_1 FROM b WHERE b.a_id = a.id) AS anon_1,
           (SELECT count(b.id) AS count_2 WHERE b.a_id = a.id) AS anon_2
    FROM a JOIN b ON a.id = b.a_id
Clustering latitude longitude points in Python with fixed number of clusters k-means does not work properly for geospatial coordinates, even when changing the distance function to haversine, as stated here. I had a look at DBSCAN, which doesn't let me set a fixed number of clusters.

Is there any algorithm (in Python, if possible) that has the same input values as k-means? Or can I easily convert latitude, longitude to euclidean coordinates (x, y, z), as done here, and do the calculation on my data?

It does not have to be perfectly accurate, but it would be nice if it were.
Using just latitude and longitude leads to problems when your geo data spans a large area, especially since the distance between longitudes shrinks toward the poles. To account for this, it is good practice to first convert lon and lat to cartesian coordinates.

If your geo data spans the United States, for example, you could define an origin from which to calculate distances: the center of the contiguous United States, which I believe is located at latitude 39 degrees 50 minutes and longitude 98 degrees 35 minutes.

Then calculate the distance, using haversine, from every location in your dataset to the defined origin. You can use the haversine package in Python to calculate these distances:

    from haversine import haversine

    origin = (39.50, 98.35)
    paris = (48.8567, 2.3508)
    haversine(origin, paris, miles=True)

Now you can use k-means on this data to cluster, assuming the haversine model of the earth is adequate for your needs. If you are doing data analysis and not planning on launching a satellite, I think this should be okay.
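A sketch of the direct lat/lon to (x, y, z) conversion the question links to, using a spherical-Earth approximation (the radius value in miles is an assumption on my part); k-means can then use plain euclidean distance on the 3D points:

    import numpy as np

    def latlon_to_xyz(lat, lon, r=3958.8):
        # spherical approximation; r is the Earth radius in miles
        lat, lon = np.radians(lat), np.radians(lon)
        return (r * np.cos(lat) * np.cos(lon),
                r * np.cos(lat) * np.sin(lon),
                r * np.sin(lat))

    lats = np.array([48.8567, 39.50])
    lons = np.array([2.3508, -98.58])
    points = np.column_stack(latlon_to_xyz(lats, lons))  # one (x, y, z) row per location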
How to make exceptions during the iteration of a for loop in python Sorry in advance for the certainly simple answer to this, but I can't seem to figure out how to nest an `if ______ in ____:` block into an existing for block. For example, how would I change this block to iterate through each instance of i, omitting odd numbers?

    odds = '1 3 5 7 9'.split()
    for i in range(x):
        if i in odds:
            continue
        print(i)

This code works for `if i == y`, but I cannot get it to work with a specific set of "y"s.
This has nothing to do with nesting. You are comparing apples to pears, or in this case, trying to find an int in a list of str objects.

So the if test never matches, because there is no 1 in the list ['1', '3', '5', '7', '9']; there is no 3 or 5 or 7 or 9 either, because an integer is a different type of object from a string, even if that string contains digits that look, to you as a human, like digits.

Either convert your int to a string first, or convert your strings to integers:

    if str(i) in odds:

or

    odds = [int(i) for i in '1 3 5 7 9'.split()]

If you want to test for odd numbers, there is a much better test; check if the remainder of division by 2 is 1:

    if i % 2 == 1:
        # i is an odd number
Python 2.6.2: writing lines to file hard wrap at 192 characters I implemented a function as a wrapper for writing to files. This is the code:

    def writeStringToFile(thestring, thefile, mode='w'):
        """Write a string to filename `thefile' in the directory specified by `dir_out'."""
        with open(os.path.join(dir_out, thefile), mode) as fh:
            fh.write("{0}\n".format(thestring))

I found out that when I write any string over 192 characters, a newline is inserted at character 192, resulting in a hard wrap in my output file, which I don't want. I looked at the docs for the open function and write method, and I don't see anything that would specify a hard wrap at any line length. Any insight into fixing this is appreciated.
My own stupidity: I was writing strings that had the character sequence \n in them, and Python was rightly interpreting them as newlines. I needed to escape them in my string. I'd take this post down if it hadn't already been responded to.
list of indices in 3D array I have an array that looks like this:

    [[[-1.,  1., -1.,  1., -1.,  1.,  1.,  1.,  1., -1.,  1.,  1.,  1.,  1.]],
     [[ 1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.]],
     [[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]]]

I have a list of 3 indices:

    [2, 3, 4]

I would like to get only the "rows" where those indices are zero. So the mask would look like this:

    [False, True, True]

And the result I am looking for would be just the two "rows" which satisfy the condition:

    [[[ 1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.,  1.,  0.]],
     [[ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.]]]

I put "rows" in quotes because I understand that there is an extra dimension in there; it needs to stay.

====================== EXTENDED EXAMPLE ==================

    a = [[[0,1,0]],
         [[0,0,0]],
         [[1,1,1]],
         [[1,0,1]],
         [[0,0,1]],
         [[1,0,0]]]

    b = [0, 1, 2, 2, 1, 0]

    c = f(a, b)

For f(a, b):

the first element, [[0,1,0]], is skipped because it has a 0 in the 0th position;
the second element, [[0,0,0]], is likewise omitted because there is a 0 in the 1st position;
the third element, [[1,1,1]], is included because it does not have a 0 in the 2nd index position;
... and so on ...

until the final result is:

    c = [[[1, 1, 1]],
         [[1, 0, 1]],
         [[1, 0, 0]]]

So I am looking for f().
Solution 1: the most pythonic way (my way to go).

    c = [a[i] for i, j in enumerate(b) if a[i][0][j] == 1]
    print(c)
    [[[1, 1, 1]], [[1, 0, 1]], [[1, 0, 0]]]

Solution 2:

    a = [[[0,1,0]], [[0,0,0]], [[1,1,1]], [[1,0,1]], [[0,0,1]], [[1,0,0]]]
    b = [0, 1, 2, 2, 1, 0]

    c = []
    for i, j in enumerate(b):
        if a[i][0][j] == 1:
            c.append(a[i])

    print(c)
    [[[1, 1, 1]], [[1, 0, 1]], [[1, 0, 0]]]
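If a and b are numpy arrays, the same filter can be vectorized with fancy indexing; a sketch:

    import numpy as np

    a = np.array([[[0,1,0]], [[0,0,0]], [[1,1,1]], [[1,0,1]], [[0,0,1]], [[1,0,0]]])
    b = np.array([0, 1, 2, 2, 1, 0])

    mask = a[np.arange(len(b)), 0, b] == 1  # picks a[i, 0, b[i]] for every i
    c = a[mask]                             # keeps the "rows" where that position is 1
    print(c)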
Close cmd while Tk object remains I made a simple calculator in Python 3.7 and wrote a batch file to run it from cmd. The thing is, after I run the batch file, I get a cmd window and then the Tk window, but the cmd window remains there and shuts my program down if I close it. Is there a way to hide the cmd window, or to prevent it from appearing at all?

The batch file reads:

    Start "" "C:\Users\Username\AppData\Local\Programs\Python\Python37\draft.py"
You can use the pythonw executable, or rename your script to something.pyw. The .pyw extension is a special extension for Python files on Windows; it is associated with pythonw, the Python interpreter that does not pop up the console window at all.
PyQt app won't run properly unless I open the whole folder I'm trying to share my PyQt project. When I download the zip file, extract it, and run app.py from cmd, the app runs but without the icon file which is inside that folder. In the code I point to that file, so I'm not sure why it doesn't find it automatically; without it the app doesn't work properly. I was wondering if there's a workaround for this issue.

When I "open folder" in my IDE and run the app from there, the icons work; when I simply open the .py file from that same folder, anything related to the icons (basically all notifications) stops working. I'm not sure why it behaves like this, but I'd like to be able to share the code for anyone to use without them opening the whole folder.
Eventually, I ended up changing how I'm using the paths. I added this:

    dirname = os.path.dirname(__file__)
    iconFile = os.path.join(dirname, 'icon/icon.png')

So now I'm using iconFile as my path, which seems to fix the issue.
Cannot connect VPS Server to MS SQL Server I'm trying to connect to an MS SQL database using my VPS server IP and login info, but I keep getting a login failed error:

    pyodbc.InterfaceError: ('28000', "[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'root'. (18456) (SQLDriverConnect); [28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'root'. (18456)")

Products: Vultr VPS Server, version Ubuntu 18.04. I already installed SQL Server 2017. In my Python program, I have this:

    server = '66.42.92.32'
    username = 'root'
    password = 'abc'

    conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                          + f'Server={server};'
                          + 'Database=KyInventory;'
                          + 'UID=root;'
                          + 'PWD=abc;'
                          + 'Trusted_Connection=no;')
    cursor = conn.cursor()

Please help me!
When you use an IP address to connect to your server, you have to specify the SQL Server port, even if it is the default, like this:

    server = '66.42.92.32,1433'

For more information, look at this Microsoft link: https://docs.microsoft.com/en-us/sql/connect/python/pyodbc/step-3-proof-of-concept-connecting-to-sql-using-pyodbc?view=sql-server-ver15
Spark SQL: TypeError("StructType can not accept object in type %s" % type(obj)) I am currently pulling data from SQL Server using PyODBC and trying to insert it into a table in Hive in a near-real-time (NRT) manner. I fetched a single row from the source, converted it into List[Strings], and created the schema programmatically, but while creating a DataFrame, Spark throws a StructType error:

    >>> cnxn = pyodbc.connect(con_string)
    >>> aj = cnxn.cursor()
    >>> aj.execute("select * from tjob")
    <pyodbc.Cursor object at 0x257b2d0>

    >>> row = aj.fetchone()
    >>> row
    (1127, u'', u'8196660', u'', u'', 0, u'', u'', None, 35, None, 0, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, u'', 0, None, None)

    >>> rowstr = map(str, row)
    >>> rowstr
    ['1127', '', '8196660', '', '', '0', '', '', 'None', '35', 'None', '0', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', '', '0', 'None', 'None']

    >>> schemaString = " ".join([row.column_name for row in aj.columns(table='tjob')])
    >>> schemaString
    u'ID ExternalID Name Description Notes Type Lot SubLot ParentJobID ProductID PlannedStartDateTime PlannedDurationSeconds Capture01 Capture02 Capture03 Capture04 Capture05 Capture06 Capture07 Capture08 Capture09 Capture10 Capture11 Capture12 Capture13 Capture14 Capture15 Capture16 Capture17 Capture18 Capture19 Capture20 User UserState ModifiedDateTime UploadedDateTime'

    >>> fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
    >>> schema = StructType(fields)
    >>> [f.dataType for f in schema.fields]
    [StringType, StringType, StringType, StringType, StringType, StringType, StringType, StringType,
     StringType, StringType, StringType, StringType, StringType, StringType, StringType, StringType,
     StringType, StringType, StringType, StringType, StringType, StringType, StringType, StringType,
     StringType, StringType, StringType, StringType, StringType, StringType, StringType, StringType,
     StringType, StringType, StringType, StringType]

    >>> myrdd = sc.parallelize(rowstr)
    >>> myrdd.collect()
    ['1127', '', '8196660', '', '', '0', '', '', 'None', '35', 'None', '0', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', 'None', '', '0', 'None', 'None']

    >>> schemaPeople = sqlContext.createDataFrame(myrdd, schema)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/apps/opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/lib/spark/python/pyspark/sql/context.py", line 404, in createDataFrame
        rdd, schema = self._createFromRDD(data, schema, samplingRatio)
      File "/apps/opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/lib/spark/python/pyspark/sql/context.py", line 298, in _createFromRDD
        _verify_type(row, schema)
      File "/apps/opt/cloudera/parcels/CDH-5.5.2-1.cdh5.5.2.p0.4/lib/spark/python/pyspark/sql/types.py", line 1132, in _verify_type
        raise TypeError("StructType can not accept object in type %s" % type(obj))
    TypeError: StructType can not accept object in type <type 'str'>
Here is the reason for the error message:

    >>> rowstr
    ['1127', '', '8196660', '', '', '0', '', '', 'None' ... ]
    # rowstr is a list of str

    >>> myrdd = sc.parallelize(rowstr)
    # myrdd is an RDD of str

    >>> schema = StructType(fields)
    # schema is StructType([StringType, StringType, ....])

    >>> schemaPeople = sqlContext.createDataFrame(myrdd, schema)
    # myrdd should have been an RDD of rows matching the schema, but it is an RDD of single strings

To fix that, make the RDD the proper type by wrapping the row in a list:

    >>> myrdd = sc.parallelize([rowstr])
Encoder for a string - Python I've been playing around with encoding random sets of strings using a dictionary. I've gotten my code to replace the letters I want, but in some cases it will replace a character more than once, when I truly only want each letter in the string replaced once. This is what I have:

    def encode(msg, code):
        for i in msg:
            for i in code:
                msg = msg.replace(i, code[i])
        return msg

For testing purposes I used these function calls. Initial:

    encode("blagh", {"a":"e","h":"r"})

and a more complex string:

    encode("once upon a time", {'a':'ae','e':'ei','i':'io','o':'ou','u':'ua'})

For the second one, I'm looking for the output:

    'ouncei uapoun ae tiomei'

but instead am finding myself with:

    "ounceio uapoun aeio tiomeio"

How can I limit my loop to replacing each character only once?
Python 3's str.translate function does what you want. Note that the translation dictionary must use Unicode ordinals for keys, so the function uses a dictionary comprehension to convert yours to the right format:

    def encode(msg, code):
        code = {ord(k): v for k, v in code.items()}
        return msg.translate(code)

    print(encode("blagh", {"a":"e","h":"r"}))
    print(encode("once upon a time", {'a':'ae','e':'ei','i':'io','o':'ou','u':'ua'}))

Output:

    blegr
    ouncei uapoun ae tiomei

It works in Python 2 as well if you use Unicode strings or add the following to the top of the file to make strings Unicode by default:

    from __future__ import unicode_literals
assigning values in a numpy array I have a numpy array of zeros. For concreteness, suppose it's 2x3x4:

    x = np.zeros((2, 3, 4))

and suppose I have a 2x3 array of random integers from 0 to 3 (the index of the 3rd dimension of x):

    >>> y = sp.stats.distributions.randint.rvs(0, 4, size=(2, 3))
    >>> y
    [[2 1 0]
     [3 2 0]]

How do I do the following assignments efficiently (edit: something that doesn't use for loops and works for x with any number of dimensions and any number of elements in each dimension)?

    >>> x[0, 0, y[0, 0]] = 1
    >>> x[0, 1, y[0, 1]] = 1
    >>> x[0, 2, y[0, 2]] = 1
    >>> x[1, 0, y[1, 0]] = 1
    >>> x[1, 1, y[1, 1]] = 1
    >>> x[1, 2, y[1, 2]] = 1
    >>> x
    array([[[ 0.,  0.,  1.,  0.],
            [ 0.,  1.,  0.,  0.],
            [ 1.,  0.,  0.,  0.]],

           [[ 0.,  0.,  0.,  1.],
            [ 0.,  0.,  1.,  0.],
            [ 1.,  0.,  0.,  0.]]])

Thanks,
James
Use numpy.meshgrid() to make arrays of indexes that you can use to index into both your original array and the array of values for the third dimension:

    import numpy as np
    import scipy as sp
    import scipy.stats.distributions

    a = np.zeros((2, 3, 4))
    z = sp.stats.distributions.randint.rvs(0, 4, size=(2, 3))
    xx, yy = np.meshgrid(np.arange(2), np.arange(3))
    a[xx, yy, z[xx, yy]] = 1
    print a

I've renamed your array from x to a, and the array of indexes from y to z, for clarity.

EDIT: 4D example:

    a = np.zeros((2, 3, 4, 5))
    z = sp.stats.distributions.randint.rvs(0, 4, size=(2, 3))
    w = sp.stats.distributions.randint.rvs(0, 5, size=(2, 3))
    xx, yy = np.meshgrid(np.arange(2), np.arange(3))
    a[xx, yy, z[xx, yy], w[xx, yy]] = 1
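A small follow-up sketch, assuming a and z as defined above: np.indices builds the same index grids in one call and already matches z's shape, which sidesteps meshgrid's transposed output.

    xx, yy = np.indices(z.shape)  # index grids with the same shape as z
    a[xx, yy, z] = 1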
How can I create a new folder with Google Drive API in Python? From this example, can I use MediaFileUpload when creating a folder? And where do I get the parent_id from? From https://developers.google.com/drive/folder I know that I should use mime = "application/vnd.google-apps.folder", but how do I implement that tutorial in Python? Thank you for your suggestions.
To create a folder on Drive, try:

    def createRemoteFolder(self, folderName, parentID=None):
        # Create a folder on Drive, returns the newly created folder's ID
        body = {
            'title': folderName,
            'mimeType': "application/vnd.google-apps.folder"
        }
        if parentID:
            body['parents'] = [{'id': parentID}]
        root_folder = drive_service.files().insert(body=body).execute()
        return root_folder['id']

You only need a parent ID here if you want to create the folder within another folder; otherwise just don't pass any value for it. If you want the parent ID, you'll need to write a method to search Drive for folders with that parent name in that location (do a list() call) and then get the ID of that folder.

Edit: Note that v3 of the API uses a list for the 'parents' field instead of a dictionary. Also, the 'title' field changed to 'name', and the insert() method changed to create(). The code from above would change to the following for v3:

    def createRemoteFolder(self, folderName, parentID=None):
        # Create a folder on Drive, returns the newly created folder's ID
        body = {
            'name': folderName,
            'mimeType': "application/vnd.google-apps.folder"
        }
        if parentID:
            body['parents'] = [parentID]
        root_folder = drive_service.files().create(body=body).execute()
        return root_folder['id']
Deadlock when creating index I try to create an index with a Cypher query using py2neo 1.6.2 and neo4j 2.0.1:

    graph_db = neo4j.GraphDatabaseService()
    query = "CREATE INDEX ON :Label(prop)"
    neo4j.CypherQuery(graph_db, query).run()

The query works fine in the neo4j web interface but throws a deadlock error in py2neo:

    py2neo.neo4j.DeadlockDetectedException: Don't panic.

    A deadlock scenario has been detected and avoided. This means that two or more transactions, which were holding locks, were wanting to await locks held by one another, which would have resulted in a deadlock between these transactions. This exception was thrown instead of ending up in that deadlock.

    See the deadlock section in the Neo4j manual for how to avoid this: http://docs.neo4j.org/chunked/stable/transactions-deadlocks.html

    Details: 'Transaction(15438, owner:"qtp1927594840-9525")[STATUS_ACTIVE,Resources=1] can't wait on resource RWLock[SchemaLock] since => Transaction(15438, owner:"qtp1927594840-9525")[STATUS_ACTIVE,Resources=1] <-[:HELD_BY]- RWLock[SchemaLock] <-[:WAITING_FOR]- Transaction(15233, owner:"qtp1927594840-9503")[STATUS_ACTIVE,Resources=1] <-[:HELD_BY]- RWLock[SchemaLock]'.

It doesn't make a difference whether the label exists or not; the request usually fails.
Judging from the deadlock graph in the details section, this looks like a bug in 2.0.1. Are you doing anything else to the database other than running this specific query, or is this just starting up a fresh database and running the code you provided?

In any case, since it works in the Neo4j Browser, I'd suggest swapping to the transactional APIs, as that is what the browser uses. Py2neo supports this through its "Cypher Transactions" feature, documented here: http://book.py2neo.org/en/latest/cypher/#id2
Data comes out misaligned when printing list I have a program that reads an inventory txt file and is supposed to display a menu for the user when it is run. However, when it runs, the quantity and price columns are misaligned:

    Select an item ID to purchase or return:

    ID      Item             Quantity        Price
    244     Large Cake Pan     7.00  19.99
    576     Assorted Sprinkles     3.00  12.89
    212     Deluxe Icing Set     6.00  37.97
    827     Yellow Cake Mix     3.00   1.99
    194     Cupcake Display Board     2.00  27.99
    285     Bakery Boxes     7.00   8.59
    736     Mixer     5.00 136.94

    Enter another item ID or 0 to stop

Here is my code:

    import InventoryFile

    def readFile():
        # open the file and read the lines
        inventoryFile = open('Inventory.txt', 'r')
        raw_data = inventoryFile.readlines()

        # remove the new line characters
        clean_data = []
        for item in raw_data:
            clean_item = item.rstrip('\n')
            clean_data.append(clean_item)

        # read lists into objects
        all_objects = []
        for i in range(0, len(clean_data), 4):
            ID = clean_data[i]
            item = clean_data[i+1]
            qty = float(clean_data[i+2])
            price = float(clean_data[i+3])
            inventory_object = InventoryFile.Inventory(ID, item, qty, price)
            all_objects.append(inventory_object)

        return all_objects

    def printMenu(all_data):
        print()
        print('Select an item ID to purchase or return: ')
        print()
        print('ID\tItem\t\t Quantity\t Price')
        for item in all_data:
            print(item)
        print()
        print('Enter another item ID or 0 to stop')

    def main():
        all_items = readFile()
        printMenu(all_items)

    main()

How can I format the output so that the quantity and price columns are correctly aligned? Here is the Inventory class:

    class Inventory:
        def __init__(self, new_id, new_name, new_stock, new_price):
            self.__id = new_id
            self.__name = new_name
            self.__stock = new_stock
            self.__price = new_price

        def get_id(self):
            return self.__id

        def get_name(self):
            return self.__name

        def get_stock(self):
            return self.__stock

        def get_price(self):
            return self.__price

        def restock(self, new_stock):
            if new_stock < 0:
                print('ERROR')
                return False
            else:
                self.__stock = self.__stock + new_stock
                return True

        def purchase(self, purch_qty):
            if (new_stock - purch_qty < 0):
                print('ERROR')
                return False
            else:
                self.__stock = self.__stock + purch_qty
                return True

        def __str__(self):
            return self.__id + '\t' + self.__name + '\t' + \
                   format(self.__stock, '7.2f') + format(self.__price, '7.2f')
Using your class Inventory's getters you can build a list and format each field with a fixed width, which keeps the columns aligned regardless of the item name length:
def printMenu (all_data):
    print ()
    print ('Select an item ID to purchase or return: ')
    print ()
    print ('{:<5}{:<25}{:>10}{:>10}'.format('ID', 'Item', 'Quantity', 'Price'))
    for item in all_data:
        product_id = item.get_id()
        product_name = item.get_name()
        product_stock = item.get_stock()
        product_price = item.get_price()
        output = [product_id, product_name, product_stock, product_price]
        output = [str(field) for field in output]
        print('{:<5}{:<25}{:>10}{:>10}'.format(*output))
    print ()
    print ('Enter another item ID or 0 to stop')
Note the * in format(*output): it unpacks the list into four separate arguments, one per placeholder. Passing the list itself would only fill the first placeholder and raise an IndexError for the rest.
How to save a dictionary having multiple lists of values for each key in a csv file I have a dictionary in the format:
cu = {'m':[['a1','a2'],['a3','a4'],['a5','a6']], 'n':[['b1','b2'], ['b3','b4']]}
The code I used to save the dictionary in a csv file was:
# using numpy to make the csv file
import numpy as np
# using savetxt
np.savetxt("cu_ck.csv", cu, delimiter=",", fmt='%s')
and it raised an error stating that:
ValueError: Expected 1D or 2D array, got 0D array instead
Please help me write code which can be used to save a dictionary of this type. Note that this dictionary is only an example: the original dictionary has more than 12 keys, where the length of values for each key may vary, but all are in the same format as in the cu dictionary. The csv file should at least look like this:
m a1 a2
m a3 a4
m a5 a6
n b1 b2
n b3 b4
The error is caused by cu being a dictionary type, which is not an array type.However, simply converting to an array isn't going to work either, since what you want is fairly complicated. One way to perform this data transformation is to append the key to each subarray:([['m', a1, a2], ['m', a3, a4], ['m', a5, a6]], [['n', b1, b2], ['n', b3, b4]])and then concatenate the outer lists:[['m', a1, a2], ['m', a3, a4], ['m', a5, a6], ['n', b1, b2], ['n', b3, b4]]Admittedly, I don't know whether this is very Pythonic, but it does the trick:cu_arr2d = sum(([[key, *row] for row in cu[key]] for key in cu), [])Here the([[key, *row] for row in cu[key]] for key in cu)is iterating over all the keys and then iterating over all the rows of that key, and appending the key to the row. Its output is that tuple of 2d lists from the top of this post. Then the sum is concatenating everything.
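Putting it together with the savetxt call from the question, something like this should produce the expected layout (using a space delimiter to match the sample output):
import numpy as np

cu = {'m': [['a1','a2'], ['a3','a4'], ['a5','a6']], 'n': [['b1','b2'], ['b3','b4']]}
# prepend each key to its rows, then concatenate the per-key lists
cu_arr2d = sum(([[key, *row] for row in cu[key]] for key in cu), [])
# now the data is a plain 2D list, which savetxt accepts
np.savetxt("cu_ck.csv", cu_arr2d, delimiter=" ", fmt="%s")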
Binding command line arguments to object method calls in Python I am working on a command line utility with a few possible arguments. The argument parsing is done with the argparse module. In the end, with some additional customization, I get a dictionary with one and only one element:
{'add_account': ['example.com', 'example']}
Where the key is an option that should translate to a method call and the value is the arguments list. I have all the planned object methods implemented. I wonder what would be the best, most pythonic way to create method calls based on the received dictionary. I could obviously go through a predefined mapping like:
if option == 'add_account':
    object.add_account(
        dictionary['add_account'][0],
        dictionary['add_account'][1]
    )
I feel that there's a much better way to do it, though.
You can use getattr to fetch a method object (argparse.py uses this approach several times).
You didn't give us a concrete example, but I'm guessing you have a class like this:
In [387]: class MyClass(object):
     ...:     def add_account(self, *args):
     ...:         print(args)
     ...:

In [388]: obj = MyClass()

In [389]: obj.add_account(*['one','two'])
('one', 'two')
To do the same thing, starting with a string, I can use getattr to fetch the method object:
In [390]: getattr(obj, 'add_account')
Out[390]: <bound method MyClass.add_account of <__main__.MyClass object at 0x98ddaf2c>>

In [391]: getattr(obj, 'add_account')('one')
('one',)
Now with your dictionary:
In [392]: dd = {'add_account': ['example.com', 'example']}

In [393]: key = 'add_account'

In [394]: getattr(obj, key)(*dd[key])
('example.com', 'example')
The most efficient way to iterate over a list of elements. Python 2.7 I am trying to iterate over a list of elements, however the list can be massive and takes too long to execute. I am using the newspaper API. The for loop I constructed is:
for article in list_articles:
Each article in list_articles is an object of the form:
<newspaper.article.Article object at 0x1103e1250>
I saw that some recommended using xrange or range, however that did not work in my case, giving a type error:
TypeError: 'int' object is not iterable
It would be awesome if anyone could point me in the right direction or give me some idea that can efficiently speed up iterating over this list.
The best way is to use built-in functions when possible, such as functions to split strings, join strings, group things, etc...
Then there is the list comprehension or map, when possible. If you need to construct one list from another by manipulating each element, then this is it.
The third best way is the for item in items loop.
ADDED
One of the things that makes you a Python programmer, a better programmer, and takes you to the next level of programming is the second thing I mentioned - list comprehension and map. Many times you iterate a list only to construct something that could be easily done with a list comprehension. For example:
new_items = []
for item in items:
    if item > 3:
        print(item * 10)
        new_items.append(item * 10)
You could do this much better (shorter, faster and more robust) like this:
new_items = [item * 10 for item in items if item > 3]
print(new_items)
Now, the printing is a bit different from the first example, but more often than not it doesn't matter, and even better, it can be transformed with one line of code into what you need.
SyntaxError: invalid syntax in URLpattern Hi, I am getting a syntax error on this url:
url(r'^reset-password/$', PasswordResetView.as_view(template_name='accounts/reset_password.html', 'post_reset_redirect': 'accounts:password_reset_done'), name='reset_password'),
What is the problem? Thanks
The problem is that you mix dictionary syntax with parameter syntax:
url(
    r'^reset-password/$',
    PasswordResetView.as_view(
        template_name='accounts/reset_password.html',
        'post_reset_redirect': 'accounts:password_reset_done'
    ),
    name='reset_password'
)
This syntax with a colon is used for dictionaries. For parameters, it is identifier=expression, so:
from django.urls import reverse_lazy

url(
    r'^reset-password/$',
    PasswordResetView.as_view(
        template_name='accounts/reset_password.html',
        success_url=reverse_lazy('accounts:password_reset_done')
    ),
    name='reset_password'
)
The post_reset_redirect has been removed as a parameter, but success_url performs the same functionality: it is the URL to which a redirect is done after the POST request has been handled successfully.
The wrong syntax probably originates from the fact that when you used a function-based view, you passed parameters through the kwargs parameter, which accepted a dictionary.
The class-based view, however, obtains these parameters through the .as_view(..) call. Furthermore, class-based views typically aim to generalize the process, and there the success_url is used for FormViews.
Converting hex to binary in array I'm working on a Python project in Visual Studio. The user input looks like 010203 and I use this code for separating the input:
dynamic_array = [ ]
hexdec = input("Enter the hex number to binary ");
strArray = [hexdec[idx:idx+2] for idx in range(len(hexdec)) if idx%2 == 0]
dynamic_array = strArray
print(dynamic_array[0] + " IAM" )
print(dynamic_array[1] + " NOA" )
print(dynamic_array[2] + " FCI" )
So, the output is:
01 IAM
02 NOA
03 FCI
However, my expected output, converting these hex numbers to binary numbers, looks like this:
00000001 IAM
00000010 NOA
00000011 FCI
Is there any way to do this?
It's a lot easier if you think of hex as an integer (number).
There's a lot of tips on how to convert integers to different outcomes, but one useful string representation tool is .format(), which can format an integer (and others) to various outputs.
This is a combination of:
Convert hex string to int in Python
Python int to binary?
The solution would be:
binary = '{:08b}'.format(int(hex_val, 16))
And the end result code would look something like this:
def toBin(hex_val):
    # '{:08b}' already zero-pads to 8 digits, so no extra zfill is needed
    return '{:08b}'.format(int(hex_val, 16))

hexdec = input("Enter the hex number to binary ")
dynamic_array = [toBin(hexdec[idx:idx+2]) for idx in range(len(hexdec)) if idx%2 == 0]

print(dynamic_array[0] + " IAM" )
print(dynamic_array[1] + " NOA" )
print(dynamic_array[2] + " FCI" )
Rohit also proposed a pretty good solution, but I'd suggest you swap the contents of toBin() with bin(int()) rather than doing it per print statement.
I also restructured the code a bit, because I saw no point in initializing dynamic_array with an empty list. Python doesn't need you to set up variables before assigning values to them. There was also no point in creating strArray just to replace the empty dynamic_array with it, so I concatenated three lines into one.
machnic also points out a good programming tip, the use of format() on your entire string. Makes for very readable code :) Hope this helps and that my tips make sense.
Why is my program shifting s and c by the wrong amount if I enter 5 into my program but not with any other letters or number? This program is meant to ask you for a sentence and a number, then it shifts the letters down the alphabet by the inputted number, and then lets you undo it by shifting by minus what you enter. For some reason, when you enter 5 as your shift, the letter s shifts to different random letters and does not give you the correct word when you try to shift back, and I have no idea why.
import sys
import time

letters = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"]
(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z) = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26)

def program():
    def encryption():
        def encryption1():
            global message
            global shift
            message = list ((input ("Please enter the sentence you would like to be %s\n>" % (EnDe1))).lower())
            print ("To %s your message please %s your private key number (from 1 - 10)" % (EnDe2, EnDe3))
            shift = int (input (">"))
            if EnDe == "b":
                shift = - (shift)
            if shift < 11 or shift > 0:
                for x in range(len(message)):
                    if message[x] != " ":
                        if eval(message[x]) > 26 - shift:
                            message[x] = letters[eval(message[x]) + shift - 27]
                        else:
                            message[x] = letters[eval(message[x]) + shift - 1]
            else:
                shift = int (input ("only numbers from 1 to 10 are accepted, try again\n>"))
                encryption1()

        def choice():
            global EnDe
            global EnDe1
            global EnDe2
            global EnDe3
            EnDe = (input ("would you like to A)encrypt or B)decrypt\n>")).lower()
            if EnDe == "a":
                EnDe1 = "encrypted"
                EnDe2 = "encrypt"
                EnDe3 = "pick"
                encryption1()
            elif EnDe == "b":
                EnDe1 = "decrypted"
                EnDe2 = "decrypt"
                EnDe3 = "enter"
                encryption1()
            else:
                print ("please pick either 'A' or 'B' , ONLY!")
                time.sleep(2)
                choice()

        choice()
        output = ''.join(message)
        print (output)
        retry = input ("would you like to Decrypt/Encrypt another message? (Y/N)\n>")
        retry = retry.lower()
        while retry != ("y" or "n"):
            retry = input ("please select either y or n\n>")
            retry = retry.lower()
        while retry == "y":
            program()
        else:
            sys.exit()

    encryption()
The problem is that you define a global x variable, and also a local one. The local one shadows the global one, and so the result of eval("x") is not anymore what you expected to have. Solution: use a different variable for the for loop.
There is much that can be improved in your code. You can take advantage of the modulo operator and the ord function, avoiding the need for all those 26 letter names.
Here is how that for loop could look without all that:
if 0 < shift < 11:
    for i, ch in enumerate(message):
        if ch != " ":
            message[i] = chr((ord(ch) - ord('a') + shift) % 26 + ord('a'))
Unrelated: note that retry != ("y" or "n") does not work like that. You should do retry not in "yn".
Receiving service messages in a group chat using Telegram Bot I am trying to create a bot in my group to help me track the group users who have invited other users into the group.
I have disabled privacy mode so the bot can receive all messages in a group chat. However, it seems that update.message only gets messages supplied by other users, but not service messages like "Alice has added Bob into the group".
Is there any way that I can get these service messages as well? Thanks for helping!
I suppose you are using the python-telegram-bot library.
You can add a handler with a specific filter to listen to service messages:
from telegram.ext import MessageHandler, Filters

def callback_func(bot, update):
    # here you receive a list of new members (User objects) in a single service message
    new_members = update.message.new_chat_members
    # do your stuff here:
    for member in new_members:
        print(member.username)

def main():
    ...
    dispatcher.add_handler(MessageHandler(Filters.status_update.new_chat_members, callback_func))
There are several more service message types your bot may receive using the Filters module, check them out here.
Pandas division with 2 dfs I want to divide 2 dfs by matching their names. For example,
df1 = pd.DataFrame({'Name':['xy-yz','xa-ab','yz-ijk','zb-ijk'],1:[1,2,3,4],2:[1,2,1,2],3:[2,2,2,2]} )
df2 = pd.DataFrame({'Name2':['x','y','z','a'],1:[0,1,2,3],2:[1,2,3,4],3:[5,5,5,6]})
df1:
Name1   1  2  3
xy-yz   1  1  2
xa-ab   2  2  2
yz-ijk  3  1  2
zb-ijk  4  2  2
df2:
Name2  1  2  3
x      0  1  5
y      1  2  5
z      2  3  5
a      3  4  6
The result would be df3:
Name1   1  2    3
xy-yz   1  1    2
x       0  1    5
xy-yz      1    .4   <---(xy-yz)/x
xa-ab   2  2    2
x       0  1    5
xa-ab      2    .4   <---(xa-ab)/x
yz-ijk  3  1    2
y       1  2    5
yz-ijk  3  .5   .4   <---(yz-ijk)/y
zb-ijk  4  2    2
z       2  3    5
zb-ijk  2  .67  .4   <---(zb-ijk)/z
I would use concat, but I'm not sure how to do the division by mapping the Name2 to the first letter in Name1 here. Thank you!
I do not know why you need it, but this gives back what you need:
df2 = df2.set_index('Name2')
dfNew = df2.reindex(df1.Name1.str.split('-', expand=True)[0])
df1 = df1.set_index('Name1')
pd.concat([df1.reset_index(), dfNew.reset_index().rename(columns={0:'Name1'}), pd.DataFrame(df1.values/dfNew.values, columns=df1.columns).assign(Name1=df1.index)]).sort_index()
Out[897]:
          1         2    3  Name1
0  1.000000  1.000000  2.0   x-yz
0  0.000000  1.000000  5.0      x
0       inf  1.000000  0.4   x-yz
1  2.000000  2.000000  2.0   x-ab
1  0.000000  1.000000  5.0      x
1       inf  2.000000  0.4   x-ab
2  3.000000  1.000000  2.0  y-ijk
2  1.000000  2.000000  5.0      y
2  3.000000  0.500000  0.4  y-ijk
3  4.000000  2.000000  2.0  z-ijk
3  2.000000  3.000000  5.0      z
3  2.000000  0.666667  0.4  z-ijk
Django logout() returns none Morning everyone. I'm using django logout() to end my sessions just like the django docs say:
views.py
class Logout(View):
    def logout_view(request):
        logout(request)
        return HttpResponseRedirect(reverse('cost_control_app:login'))
and I'm calling it from this url:
urls.py
url(r'^logout/$', views.Logout.as_view(), name = "logout"),
But it's not working. When I do a trace, I find that the function def logout_view(request) is returning "none" and it's not entering to execute the code inside... Please help me!
I'm curious, why do you have the method named logout_view()? By default, nothing is going to call that method. You need to change the name to match the HTTP verb which will be used to call the page. For instance, if it's going to be a GET request, you would change it to:def get(self, request):If you want it to be a POST request, you would change it to:def post(self, request):This is the standard way that class-based views work in Django. Also, you may want to look at the documentation for class-based views, as this may give you a better idea of their workings and what they can provide to you. (Hint: There is a built-in RedirectView)
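As a minimal sketch of the fix applied to the view in the question (keeping the same cost_control_app:login URL name; the reverse import path depends on your Django version):
from django.contrib.auth import logout
from django.core.urlresolvers import reverse  # in newer Django: from django.urls import reverse
from django.http import HttpResponseRedirect
from django.views.generic import View

class Logout(View):
    def get(self, request):
        # dispatched on GET /logout/ because the method is named get
        logout(request)
        return HttpResponseRedirect(reverse('cost_control_app:login'))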
Def and Return Function in python I'm having some problems with def and return in Python. At the top of my program I've defined:
from subprogram import subprogram
Then I've defined the function in which I've included the values I wanted to be returned:
def subprogram(ssh, x_off, y_off, data_array, i, j):
    if j==1:
        print('shutdonw X')
        # Run command.
        ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(x_off)
        var_colonna_1 = data_array[i][j]
        return var_colonna_1
        print(var_colonna_1)
    if j==2:
        print('shutdown Y')
        # Run command.
        ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(y_off)
        var_colonna_2 = data_array[i][j]
        return var_colonna_2
This is then called in the main program as:
for j in range(5, lunghezza_colonna):
    if data_array[i][j] == 'V':
        subprogram(ssh, x_off, y_off, data_array, i, j)
print(var_colonna_1)
I was expecting that every time the subprogram is called, it returns var_colonna_1 or var_colonna_2, but the value I see when I print var_colonna_1 is always 0, even if internally the other commands are executed (so X and Y are set to shut down). Can you help me? I don't see my coding mistake.
The function doesn't return the variable, only the value in it.
If you want to get the returned value in var_colonna_1, you should assign it. As Sayse said, you should do:
var_colonna_1 = subprogram(ssh, x_off, y_off, data_array, i, j)
Faster alternatives to Popen for CAN bus access? I'm currently using Popen to send instructions to a utility (canutils... the cansend function in particular) via the command line.
The entire function looks like this.
def _CANSend(self, register, value, readWrite = 'write'):
    """send a CAN frame"""
    queue = self.CANbus.queue
    cobID = hex(0x600 + self.nodeID)    #assign nodeID
    indexByteLow, indexByteHigh, indexByteHigher, indexByteHighest = _bytes(register['index'], register['objectDataType'])
    subIndex = hex(register['subindex'])
    valueByteLow, valueByteHigh, valueByteHigher, valueByteHighest = _bytes(value, register['objectDataType'])
    io = hex(COMMAND_SPECIFIER[readWrite])
    frame = ["cansend", self.formattedCANBus, "-i", cobID, io, indexByteLow, indexByteHigh, subIndex, valueByteLow, valueByteHigh, valueByteHigher, valueByteHighest, "0x00"]
    Popen(frame, stdout=PIPE)
    a = queue.get()
    queue.task_done()
    return a
I was running into some issues as I was trying to send frames (the Popen frame actually executes the command that sends the frame) in rapid succession, but found that the Popen line was taking somewhere on the order of 35 ms to execute... every other line was less than 2 us.
So... what might be a better way to invoke the cansend function (which, again, is part of the canutils utility... _CANSend is the python function above that calls it) more rapidly?
I suspect that most of that time is due to the overhead of forking every time you run cansend. To get rid of it, you'll want an approach that doesn't have to create a new process for each send.According to this blog post, SocketCAN is supported by python 3.3. It should let your program create and use CAN sockets directly. That's probably the direction you'll want to go.
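A rough sketch of what that looks like with the socket module's native CAN support (Python 3.3+ on Linux; "can0", the CAN ID and the payload below are placeholders, not the asker's actual values):
import socket
import struct

# Raw CAN socket, bound to the interface by name
s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
s.bind(("can0",))

# struct layout of a classic CAN frame: 32-bit ID, 8-bit DLC, 3 pad bytes, 8 data bytes
can_frame_fmt = "=IB3x8s"

def send_frame(can_id, data):
    # no fork/exec per frame, unlike shelling out to cansend
    frame = struct.pack(can_frame_fmt, can_id, len(data), data.ljust(8, b'\x00'))
    s.send(frame)

send_frame(0x601, b'\x40\x00\x10\x00')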
how to install wordcloud package in python?
pip install wordcloud
  File "<ipython-input-130-12ee30540bab>", line 1
    pip install wordcloud
        ^
SyntaxError: invalid syntax
This is the problem I am facing while using pip install wordcloud.
pip is a tool used for installing python packages. You should not use this command inside the python interactive shell. Instead, exit out of it and write pip install wordcloud on the main shell.
Python -- sizing frames with weights In the minimum example code below, you can change the last range(), currently at 3, to 6 and notice that the frames with buttons all get smaller than if you run it with 3. I have configured 6 columns of "lower_frame" to all be weight 1. The expected result is that there are 6 empty columns of the same width no matter how many buttons I put in there. If I put 3 buttons in there, as the example below has by default, the buttons are quite large and leave only room for about 1 more button. If I put 6 buttons in, they fill the space and each gets smaller.
How do I achieve the expected result of having equal width columns no matter how many widgets I actually put in the cells? The goal here is a standard size to these buttons that is based on proportion of the screen, not a pixel size, and have it always be the same no matter the number of buttons. I realize I could do a bunch of math with bounding boxes and programmatically set the sizes at runtime, but that seems like overkill and lacking elegance.
Minimum example:
import Tkinter as tk
import ttk

mods = {}
modBtns = {}
root = tk.Tk()
upper_frame = ttk.Frame(master=root)
lower_frame = ttk.Frame(master=root)
right_frame = ttk.Frame(master=root)
root.columnconfigure(0, weight=3)
root.columnconfigure(1, weight=1)
root.rowconfigure(0, weight=1)
root.rowconfigure(1, weight=5)
for i in range(6):
    lower_frame.columnconfigure(i, weight=1)
for i in range(5):
    lower_frame.rowconfigure(i, weight=1)
upper_frame.grid(column=0, row=0, sticky=tk.N + tk.S + tk.E + tk.W)
lower_frame.grid(column=0, row=1, sticky=tk.N + tk.S + tk.E + tk.W)
right_frame.grid(column=1, row=0, sticky=tk.N + tk.S + tk.E + tk.W)
for i in range(3):
    mods[i] = ttk.Frame(master=lower_frame)
    mods[i].columnconfigure(0, weight=1)
    mods[i].rowconfigure(0, weight=1)
    modBtns[i] = ttk.Button(master=mods[i], text="mod{0}".format(i))
    modBtns[i].grid(column=0, row=0, sticky=tk.N + tk.S + tk.E + tk.W)
    mods[i].grid(column=i, row=0, sticky=tk.N + tk.S + tk.E + tk.W)
root.geometry("700x700+0+0")
root.mainloop()
If you want all of the rows and all of the columns to have the same width/height, you can set the uniform attribute of each row and column. All columns with the same uniform value will be the same width, and all rows with the same uniform value will be the same height.
Note: the actual value of the uniform attribute is irrelevant, as long as it is consistent.
for i in range(6):
    lower_frame.columnconfigure(i, weight=1, uniform='whatever')
for i in range(5):
    lower_frame.rowconfigure(i, weight=1, uniform='whatever')
Can't get stored python integer back in java in hbase google cloud I'm using hbase over google cloud bigtable to store my bigdata. I have 2 programs: the first stores data into hbase using python, and the second reads that info back from java by connecting to the same endpoint.
From the python interactive shell I can read byte arrays back into an integer (command 15):
In [13]: row.cells['stat']['viewability'][0].value
Out[13]: '\x00\x00\x00\x00\x00\x00\x00A'
In [14]: len(row.cells['stat']['viewability'][0].value)
Out[14]: 8
In [15]: struct.unpack('>Q', row.cells['stat']['viewability'][0].value)
Out[15]: (65,)
but I can't read the same byte array back into a java Integer data type. I'm using the following in java:
byte[] columnFamilyBytes = Bytes.toBytes("stat");
byte[] viewabilityColumnBytes = Bytes.toBytes("viewability");
Integer viewability = Bytes.toInt(c1.getValue(columnFamilyBytes, viewabilityColumnBytes));
and I'm getting NULL in response.
I found the problem: the column is stored as a long value, so I had to first read it as a long in Java and then convert it to an int.
Using IronPython to learn the .NET framework, is this bad? Because I'm a Python fan, I'd like to learn the .NET framework using IronPython. Would I be missing out on something? Is this in some way not recommended?
EDIT: I'm pretty knowledgeable of Java (so learning/using a new language is not a problem for me). If needed, will I be able to use everything I learned in IronPython (excluding language features) to write C# code?
No, sounds like a good way to learn to me. You get to stick with a language and syntax that you are familiar with, and learn about the huge range of classes available in the framework, and how the CLR supports your code.
Once you've got to grips with some of the framework and the CLR services you could always pick up C# in the future. By that point it will just be a minor syntax change from what you already know.
Bear in mind that if you are thinking with respect to a career, you won't find many IronPython jobs, but like I say, this could be a good way to learn about the framework first, then build on that with C# in a month or two's time.
Django: handle migrations for an imported database? I'm working in Django 1.8 and trying to set up an existing project. I've inherited a database dump, plus a codebase. I've imported the database dump successfully. The problem is that if I try to run migrate against the imported database, I then get errors about columns already existing, because the database is already at the end state of all the migrations:
django.db.utils.ProgrammingError: column "managing_group_id" of relation "frontend_pct" already exists
How can I resolve this? I would like to be able to add new migrations from this point, and I would also prefer not to delete all the existing migrations. Basically I need a way to say "skip straight to migration 36, and continue from there".
I think your migrations problem is solved by the previous answer, so I'm adding a link below. If you have just started with Django 1.7 and above, I'd like to add a link on how Django migrations work, which I think will be useful.
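For the "skip straight to migration 36" part specifically, Django's built-in --fake flag does exactly that: it records migrations as applied without running their SQL, which is the usual move for a database imported at its end state:
# mark all existing migrations as applied without touching the schema
python manage.py migrate --fake

# from here on, new migrations run normally
python manage.py makemigrations
python manage.py migrate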
convert python integer to its signed binary representation Given a positive integer such as 171 and a "register" size, e.g. 8.
I want the integer which is represented by the binary representation of 171, i.e. '0b10101011', interpreted as twos complement. In the present case, the 171 should become -85. It is negative because, given the "register" size 8, the MSB is 1. I hope I managed to explain my problem. How can I do this conversion?
What I tried:
size = 8
value = 171
b = bin(value)
if b[len(b)-size] == '1':
    print "signed"
    # What to do next?
You don't need binary conversion to achieve that:
>>> size = 8
>>> value = 171
>>> unsigned = value % 2**size
>>> signed = unsigned - 2**size if unsigned >= 2**(size-1) else unsigned
>>> signed
-85
Unable to uninstall anaconda from Ubuntu 16.04 I am trying to uninstall Anaconda from my Ubuntu 16.04 LTS machine. I ran the following commands:
conda install anaconda-clean
anaconda-clean
rm -rf ~/anaconda
Everything is getting executed without any error/warning. In fact, when I run anaconda-clean it is saying such-and-such packages have been uninstalled. However, I can still open up anaconda navigator and everything seems to be working just fine. What am I missing?
conda install anaconda-clean
anaconda-clean --yes
rm -rf ~/anaconda3
Replace anaconda3 with your version of anaconda. This will uninstall anaconda.
if loop repeating first if statement I'm trying to create a continuous question loop to process all my calculations for my nmea sentences in my project. For some reason only the first if statement is executed. What am I doing wrong? I'm still fairly new to python.
if command_type == "$GPGGA" or "GPGGA" or "GGA":
    #define the classes
    gps = GPS()
    createworkbook = CreateWorkbook()
    convertfile = ConvertFile()
    print_gps = PrintGPS()
    #do the deeds
    createworkbook.openworkbook(data)
    print_gps.process_gpgga_data(data)
    createworkbook.closeworkbook_gpgga(data)
    convertfile.convert2csv(data)
    convertfile.convert2kml(data)
if command_type == "$GPRMC" or "GPRMC" or "RMC":
    #define the classes
    gps = GPS()
    createworkbook = CreateWorkbook()
    convertfile = ConvertFile()
    print_gps = PrintGPS()
    #do the deeds
    createworkbook.openworkbook(data)
    print_gps.process_gprmc_data(data)
    createworkbook.closeworkbook_gprmc(data)
    convertfile.convert2csv(data)
    convertfile.convert2kml(data)
if command_type == "$GPGLL" or "GPGLL" or "GLL":
    #define the classes
    gps = GPS()
    createworkbook = CreateWorkbook()
    convertfile = ConvertFile()
    print_gps = PrintGPS()
    #do the deeds
    createworkbook.openworkbook(data)
    print_gps.process_gpgll_data(data)
    createworkbook.closeworkbook_gpgll(data)
    convertfile.convert2csv(data)
    convertfile.convert2kml(data)
if command_type == "$GPGSA" or "GPGSA" or "GSA":
    #define the classes
    gps = GPS()
    createworkbook = CreateWorkbook()
    convertfile = ConvertFile()
    print_gps = PrintGPS()
    #do the deeds
    createworkbook.openworkbook(data)
    print_gps.process_gpgsa_data(data)
    createworkbook.closeworkbook_gpgsa(data)
    convertfile.convert2csv(data)
if command_type == "$GPVTG" or "GPVTG" or "VTG":
    print('Better check $GPRMC')
else:
    print("Invalid type:", command_type)
    list_gps_commands(data)
wannalook = input('Want to look at another message or no?')
if not wannalook.startswith('y'):
    keep_asking = False

print('********************')
print('**mischief managed**')
print('********************')
if command_type == "$GPGGA" or "GPGGA" or "GGA":As you can see, here you are not trying to check if command_type is valued "$GPGGA" or "GPGGA" or "GGA". But if command_type == "$GPGGA" is true or "GPGGA" is true or "GGA" is true.And a non-empty string in python is always true : your first condition will be evaluated true.So you have to do : if command_type == "$GPGGA" or command_type == "GPGGA" or command_type == "GGA"
What is the easiest way to build Python26.zip for embedded distribution? I am using Python as a plug-in scripting language for an existing C++ application. I am able to embed the python interpreter as stated in the Python documentation. Everything works successfully with the initialization and de-initialization of the interpreter. I am, however, having trouble loading modules because I have not been able to zip up the standard library in to a zip file (normally PythonXX.zip, corresponding to the version number of the python dll). What is the simplest way to zip up all of the standard library after optimized bytecode compiling? I'm looking for a simple script or command to do so for me, as I really don't want to do this by hand. Any ideas? Thanks!
I would probably use setuptools to create an egg (basically a java jar for python). The setup.py would probably look something like this:
from setuptools import setup, find_packages

setup(
    name='python26_stdlib',
    package_dir = {'' : '/path/to/python/lib/directory'},
    packages = find_packages(),
    #any other metadata
)
You could run this using python setup.py bdist_egg. Once you have the egg, you can either add it to the python path or you can install it using setuptools. I believe this should also handle the generation of pycs for you as well.
NOTE: I wouldn't use this on my system python directory. You might want to set up a virtualenv for this.
Modeling a complex relationship in Django I'm working on a Web service in Django, and I need to model a very specific, complex relationship which I just haven't been able to solve.
Imagine three general models; let's call them Site, Category and Item. Each Site contains one or several Categories, but it can relate to them in one of two possible ways: one are "common" categories, which are in a many-to-many relationship: they are predefined, and each Site can relate to zero or more of the Categories, and vice versa. The other type of categories are individually defined for each site, and one such category "belongs" only to that site and none other; i.e. they are in a many-to-one relationship, as each Site may have a number of those Categories.
Internally, those two types of Categories are completely identical; they only differ in the way they are related to the Sites. I could, however, separate them into two different models (with a common parent model probably), but that solves only half of my problem: the Item model is in a many-to-one relationship with the Categories, i.e. each Item belongs to only one Category, and ideally it shouldn't care how it is related to a Site.
Another solution would be to allow the two separate types of Site-Category relations to coexist (i.e. to have both a ForeignKey and a ManyToMany field on the same Category model), but this solution feels like opening a whole other can of worms.
Does anyone have an idea if there is a third, better solution to this dead end?
Why not just have both types of category in one model, so you just have 3 models?
class Site(models.Model):
    ...

class Category(models.Model):
    Sites = models.ManyToManyField(Site)
    IsCommon = models.BooleanField()

class Item(models.Model):
    Category = models.ForeignKey(Category)
You say "Internally, those two type of Categories are completely identical". So it sounds like this is possible. Note it is perfectly valid for a ManyToManyField to have only one value, so you don't need "ForeignKey and a ManyToMany field on the same Category model", which just sounds like a hassle. Just put only one value in the ManyToMany field.
How to use subversion Ctypes Python Bindings? Subversion 1.6 introduced something called the 'Ctypes Python Binding', but it is not documented. Is there any information available on what these bindings are and how to use them? For example, I have a fresh Windows XP machine and want to control an SVN repository using Subversion 1.6 and these mysterious python bindings. What exactly do I need to download/install/compile in order to do something like
import svn from almighty_ctype_subversion_bindings
svn.get( "\\rep\\project" )
And how is this connected to the pysvn project? Are they the same technology, or different technologies?
You need the Subversion source distribution, Python (>= 2.5), and ctypesgen.
Instructions for building the ctypes bindings are here.
You will end up with a package called csvn; examples of its use are here.
Installing Python's Cryptography on Windows I've created a script on windows to connect to Remote SSH server. I have successfully installed cryptography, pynacl and finally paramiko(Took me an entire day to figure out how to successfully install them on windows).Now that I run the script, it pops an error saying that the DLL loading has failed. The error seems to be related to libsodium but I cannot figure out exactly which DLL is to trying to load and from where. Just to be on the safer side I also installed pysodium.Here's the script: automate.pyimport SSHconnection = ssh("10.10.65.100", "gerrit2", "gerrit@123")print("Calling OpenShell")connection.openShell()print("Calling sendShell")connection.sendShell("ls -l")print("Calling process")connection.process()print("Calling closeConnection")connection.closeConnection() SSH.pyimport threading, paramikoclass ssh: shell = None client = None transport = None def __init__(self, address, username, password): print("Connecting to server on ip", str(address) + ".") self.client = paramiko.client.SSHClient() self.client.set_missing_host_key_policy(paramiko.client.AutoAddPolicy()) self.client.connect(address, username=username, password=password, look_for_keys=False) self.transport = paramiko.Transport((address, 22)) self.transport.connect(username=username, password=password) thread = threading.Thread(target=self.process) thread.daemon = True thread.start() def closeConnection(self): if(self.client != None): self.client.close() self.transport.close() def openShell(self): self.shell = self.client.invoke_shell() def sendShell(self, command): if(self.shell): self.shell.send(command + "\n") else: print("Shell not opened.") def process(self): global connection while True: # Print data when available if self.shell != None and self.shell.recv_ready(): alldata = self.shell.recv(1024) while self.shell.recv_ready(): alldata += self.shell.recv(1024) strdata = str(alldata, "utf8") strdata.replace('\r', '') print(strdata, end = "") if(strdata.endswith("$ ")): print("\n$ ", end = "")And here's the error:> python automate.pyTraceback (most recent call last): File "automate.py", line 1, in <module> import SSH File "D:\Automate\SSH_Paramiko\SSH.py", line 1, in <module> import threading, paramiko File "D:\Users\prashant-gu\AppData\Local\Programs\Python\Python37\lib\site-packages\paramiko-2.4.0-py3.7.egg\paramiko\__init__.py", line 22, in <module> File "D:\Users\prashant-gu\AppData\Local\Programs\Python\Python37\lib\site-packages\paramiko-2.4.0-py3.7.egg\paramiko\transport.py", line 57, in <module> File "D:\Users\prashant-gu\AppData\Local\Programs\Python\Python37\lib\site-packages\paramiko-2.4.0-py3.7.egg\paramiko\ed25519key.py", line 22, in <module> File "D:\Users\prashant-gu\AppData\Local\Programs\Python\Python37\lib\site-packages\nacl\signing.py", line 19, in <module> import nacl.bindings File "D:\Users\prashant-gu\AppData\Local\Programs\Python\Python37\lib\site-packages\nacl\bindings\__init__.py", line 17, in <module> from nacl.bindings.crypto_box import ( File "D:\Users\prashant-gu\AppData\Local\Programs\Python\Python37\lib\site-packages\nacl\bindings\crypto_box.py", line 18, in <module> from nacl._sodium import ffi, libImportError: DLL load failed: The specified module could not be found.
After a lot of googling, I finally stumbled upon this. As mentioned in the conversation, I uninstalled my previous pynacl installation, downloaded the zipped source from https://github.com/lmctv/pynacl/archive/v1.2.a0.reorder.zip, downloaded libsodium from https://github.com/jedisct1/libsodium/releases/download/1.0.15/libsodium-1.0.15.tar.gz, set the LIB environment variable to D:\Users\prashant-gu\Downloads\libsodium-1.0.15\bin\x64\Release\v140\dynamic, and finally installed pynacl from this downloaded source using pip install .
Now it works fine.
During the installation of paramiko, I also happened to download OpenSSL from https://ci.cryptography.io/job/cryptography-support-jobs/job/openssl-release-1.1/, and set the INCLUDE environment variable to D:\Users\prashant-gu\Downloads\openssl-1.1.0g-2015-x86_64\openssl-win64-2015\include in order to successfully install the cryptography package, which happens to be a dependency for paramiko.
OpenCV Python Assertion Failed I am trying to run opencv-python==3.3.0.10 on macOS 10.12.6 to read from a file and show the video in a window. I have exactly copied the code from here http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html, section 'Playing video from file'.
The code runs correctly and shows the video, however after termination of the video it breaks the program, giving the following error:
Assertion failed: (ec == 0), function unlock, file /BuildRoot/Library/Caches/com.apple.xbs/Sources/libcxx/libcxx-307.5/src/mutex.cpp, line 48.
Does anyone have any idea of what might cause this?
Code snippet for your convenience (edited to include some suggestions in the comments):
cap = cv2.VideoCapture('vtest.avi')
while(True):
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
It's not clear from your question, but it looks like you're specifically running into a situation where the video completes playing without being interrupted. I think the issue is that the VideoCapture object is already closed by the time you get to cap.release(). I'd recommend putting the call to release inside of the if statement by the break.
I've not had time to experiment, but I normally follow this pattern:
reader = cv2.VideoCapture(<stuff>)
while True:
    success, frame = reader.read()
    if not success:
        break
I've not had to call release explicitly in those contexts.
reference to invalid character number: (Python ElementTree parse) I have an xml file which has the following content:
<word>vegetation</word>
<word>cover</word>
<word>(31%</word>
<word>split_identifier ;</word>
<word>Still</word>
<word>and</word>
When I read the file using ElementTree parse, it gives me this error:
xml.etree.ElementTree.ParseError: reference to invalid character number
It's because of (&#x2 which is "~"). How can I take care of such issues? I am not sure how many other symbols I would get in future.
If you want to get rid of those special characters, you can do so by scrubbing the input XML as a string:
respXML = response.content.decode("utf-16")
scrubbedXML = re.sub('&.+[0-9]+;', '', respXML)
respRoot = ET.fromstring(scrubbedXML)
If you prefer to keep the special characters, you may parse them beforehand. In your case it looks like html, therefore you may use the python html module:
import html
respRoot = ET.fromstring(html.unescape(response.content.decode("utf-16")))
Can I save results anyway even when Keyboardinterrupt? I have a very long code which is taking forever to run. I was wondering if there is a way to save the results even if I use the keyboard to interrupt the code from running? All the examples I found were using except with KeyboardInterrupt, so I don't know if this is the right code to use.
More concretely: I have code which ends with saving results in a list and returning the list. In this case, is there a way to return the list despite KeyboardInterrupt? Can I use an if keyboardinterrupt statement?
My code:
# removed is a very long list
for a, b in itertools.combinations(removed, 2):
    temp = [a,b]
    Token_Set_Ratio = fuzz.token_set_ratio(temp[0],temp[1])
    if Token_Set_Ratio > k:
        c = random.choice(temp)
        if c in removed:
            removed.remove(c)
        else:
            pass
    else:
        pass
return removed
Where can I add the part for python to retain removed even if a keyboard interrupt occurs?
You could use a try-except with KeyboardInterrupt:
def your_function():
    removed = [...]
    try:
        # Code that takes a long time
        for a, b in itertools.combinations(removed, 2):
            ...
        return removed
    except KeyboardInterrupt:
        return removed
A small example:
import time

def foo():
    result = []
    try:
        # Long running code
        for i in range(10000):
            result.append(i)
            time.sleep(0.1)
        return result
    except KeyboardInterrupt:
        # Code to "save"
        return result

print(foo())
When you Ctrl-C before the end of execution, a partial list is printed.
How can i replace a value in a specific row and column in a csv file I need to replace a value in the row matching a given ID in a .csv file:
ChatId,Color
805525230,black
So if the ID in input is equal to the one in the file, my program will replace the Color "black" with the new one. I tried this:
for idx, row in enumerate(df.ChatId):
    if str(row) == str(CHAT_ID):
        df.loc[1,idx] = BGc
        df.to_csv("path")
Assuming you loaded the csv into a dataframe, you could use some of its functions.
For example: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.replace.html
Alternatively, you could use the loc function (note that the boolean mask goes inside loc):
df.loc[df['ChatId'] == 'SomeId', 'Color'] = 'ValueToReplace'
If you need to apply it to a different column, you can replace 'Color' with the appropriate name. I think you could even include a list of column names, though I haven't tested it.
What is the simplest language which supports a template&context mechanism? I need to find the easiest way to automatically build ebooks from downloaded articles.
I want to automatically generate a TOC, which will be based on an HTML template.
I know that python django has a template & context mechanism; however, django is a little too complicated for the people for whom I am preparing this whole mechanism. I don't need all the web-related features.
IMHO, if you are familiar with Django:if you want to build a command line application or a abstract library, look at Jinja2 template engine.if you are looking for a web framework simpler than Django, look at Flask (Flask uses Jinja2 as the default template engine).
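A minimal sketch of Jinja2's template & context mechanism for the TOC use case (the template string and the articles list below are made up for illustration):
from jinja2 import Template

# a tiny HTML template; {{ ... }} pulls values from the context
toc_template = Template("""
<ul>
{% for article in articles %}
  <li><a href="#{{ article.anchor }}">{{ article.title }}</a></li>
{% endfor %}
</ul>
""")

# the "context" is just the keyword arguments passed to render()
articles = [
    {"anchor": "ch1", "title": "First article"},
    {"anchor": "ch2", "title": "Second article"},
]
print(toc_template.render(articles=articles))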
python managing tasks in threads when using priority queue I'm trying to write a program which starts new tasks in new threads.
Data is passed from task threads to a single worker/processing thread via a priority queue (so more important jobs are processed first). The worker/processing thread gets higher priority data from the queue and limits calls to a REST API. How can I pass the data back to its originating task thread, while tracking that all of that particular task thread's data has been processed?
Thanks
In your request queue entry, include a response queue. When finished, place a response on the response queue.
The requesting thread waits on the response queue.
A callback method could alternately be used.
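A minimal sketch of the response-queue idea (the priorities, the payload and the uppercase "processing" step are placeholders for the real rate-limited REST calls):
import itertools
import queue
import threading

request_q = queue.PriorityQueue()
counter = itertools.count()  # tie-breaker so equal-priority entries never compare the queues

def worker():
    while True:
        _priority, _count, data, response_q = request_q.get()
        result = data.upper()      # stand-in for the rate-limited REST call
        response_q.put(result)     # hand the result back to the requesting thread
        request_q.task_done()

threading.Thread(target=worker, daemon=True).start()

def make_request(priority, data):
    response_q = queue.Queue()     # per-request response queue
    request_q.put((priority, next(counter), data, response_q))
    return response_q.get()        # block until this request has been processed

print(make_request(1, "hello"))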
Wrong datetimes picked up by pandas Data ScrapedSo I've scraped data from a website with a timestamp of when it was scraped. As you can see I have no date between 2017-09-14 13:56:28 and 2017-09-16 14:43:05, however when I scrape it using the following code:path ='law_scraped'files = glob.glob(path + "/*.csv")frame = pd.DataFrame()for f in files: df = pd.read_csv(f) df['dtScraped'] = df['dtScraped'].str.replace("|", " ") try: df['dtScraped'] = pd.to_datetime(df['dtScraped'], format = "%Y/%m/%d %H:%M:%S") except Exception as e: df['dtScraped'] = pd.to_datetime(df['dtScraped']) frame = pd.concat([frame, df], ignore_index=True)I get datetimes that don't match the data as you can see below:+-----------+---------------------+-------+-------------------+| | dtScraped | odds | team |+-----------+---------------------+-------+-------------------+| 15117 | 2017-09-14 14:00:00 | 7.75 | Feyenoord || 15118 | 2017-09-14 14:00:00 | 1.446 | Manchester City || 15119 | 2017-09-14 14:00:00 | 5.01 | Draw || 15120 | 2017-09-14 14:00:00 | 4.73 | NK Maribor || 15121 | 2017-09-14 14:00:00 | 1.869 | Spartak Moscow || 15122 | 2017-09-14 14:00:00 | 3.65 | Draw || 15123 | 2017-09-14 14:00:00 | 1.694 | Liverpool || 15124 | 2017-09-14 14:00:00 | 5.16 | Sevilla || 15125 | 2017-09-14 14:00:00 | 4.25 | Draw || 15126 | 2017-09-14 14:00:00 | 3.53 | Shakhtar Donetsk || 15127 | 2017-09-14 14:00:00 | 2.19 | Napoli || 15128 | 2017-09-14 14:00:00 | 3.58 | Draw || 15129 | 2017-09-14 14:00:00 | 2.15 | RB Leipzig || 15130 | 2017-09-14 14:00:00 | 3.5 | AS Monaco || 15131 | 2017-09-14 14:00:00 | 3.73 | Draw || 15132 | 2017-09-14 14:00:00 | 1.044 | Real Madrid || 15133 | 2017-09-14 14:00:00 | 34.68 | APOEL Nicosia || 15134 | 2017-09-14 14:00:00 | 23.04 | Draw || 15135 | 2017-09-14 14:00:00 | 2.33 | Tottenham Hotspur || 15136 | 2017-09-14 14:00:00 | 3.12 | Borussia Dortmund || 15137 | 2017-09-14 14:00:00 | 3.69 | Draw || 15138 | 2017-09-14 14:00:00 | 1.52 | FC Porto || 15139 | 2017-09-14 14:00:00 | 7.63 | Besiktas JK || 15140 | 2017-09-14 14:00:00 | 4.32 | Draw || 144009 | 2017-09-14 14:00:00 | 7.75 | Feyenoord || 144010 | 2017-09-14 14:00:00 | 1.446 | Manchester City || 144011 | 2017-09-14 14:00:00 | 5.01 | Draw || 144012 | 2017-09-14 14:00:00 | 4.609 | NK Maribor || 144013 | 2017-09-14 14:00:00 | 1.892 | Spartak Moscow || 144014 | 2017-09-14 14:00:00 | 3.64 | Draw || 144015 | 2017-09-14 14:00:00 | 1.694 | Liverpool || 144016 | 2017-09-14 14:00:00 | 5.16 | Sevilla || 144017 | 2017-09-14 14:00:00 | 4.25 | Draw || 144018 | 2017-09-14 14:00:00 | 3.53 | Shakhtar Donetsk || 144019 | 2017-09-14 14:00:00 | 2.19 | Napoli || 144020 | 2017-09-14 14:00:00 | 3.58 | Draw || 144021 | 2017-09-14 14:00:00 | 2.15 | RB Leipzig || 144022 | 2017-09-14 14:00:00 | 3.5 | AS Monaco || 144023 | 2017-09-14 14:00:00 | 3.73 | Draw || 144024 | 2017-09-14 14:00:00 | 1.044 | Real Madrid || 144025 | 2017-09-14 14:00:00 | 34.68 | APOEL Nicosia || 144026 | 2017-09-14 14:00:00 | 23.04 | Draw || 144027 | 2017-09-14 14:00:00 | 2.33 | Tottenham Hotspur || 144028 | 2017-09-14 14:00:00 | 3.12 | Borussia Dortmund || 144029 | 2017-09-14 14:00:00 | 3.69 | Draw || 144030 | 2017-09-14 14:00:00 | 1.52 | FC Porto || 144031 | 2017-09-14 14:00:00 | 7.63 | Besiktas JK || 144032 | 2017-09-14 14:00:00 | 4.32 | Draw |+-----------+---------------------+-------+-------------------+
Assuming your timestamps have the same format as the filenames in your screenshot, this should work (after the replacement of "|" by " "):
df['dtScraped'] = pd.to_datetime(df['dtScraped'], format="%Y-%m-%d %H-%M-%S")
Plot wind speed and direction from u, v components I'm trying to plot the wind speed and direction, but there is an error code that keeps telling me that "sequence too large; cannot be greater than 32." Here is the code that I am using:
N = 500
ws = np.array(u)
wd = np.array(v)
df = pd.DataFrame({'direction': [ws], 'speed': [wd]})
df
  direction                                              speed
0 [[-7.87291, -8.19969, -8.41213, -8.42775, -8.4...      [[-3.68055, -4.07912, -4.07992, -3.55594, -3.2...
from windrose import plot_windrose
N = 500
ws = np.random.random(u) * 6
wd = np.random.random(v) * 360
df = pd.DataFrame({'speed': ws, 'direction': wd})
plot_windrose(df, kind='contour', bins=np.arange(0.01,8,1), cmap=cm.hot, lw=3)
ValueError Traceback (most recent call last)
<ipython-input-78-dfb188ec377a> in <module>()
1 from windrose import plot_windrose
2 N = 500
3 ws = np.random.random(u) * 6
4 wd = np.random.random(v) * 360
5 df = pd.DataFrame({'speed': ws, 'direction': wd})
mtrand.pyx in mtrand.RandomState.random_sample (numpy\random\mtrand\mtrand.c:10396)()
mtrand.pyx in mtrand.cont0_array (numpy\random\mtrand\mtrand.c:1865)()
ValueError: sequence too large; cannot be greater than 32
How do I fix this and plot the U and V? Thank you.
To plot wind U, V use barbs and quiver. Look at the code below:import matplotlib.pylab as pltimport numpy as npx = np.linspace(-5, 5, 5)X, Y = np.meshgrid(x, x)d = np.arctan(Y ** 2. - .25 * Y - X)U, V = 5 * np.cos(d), np.sin(d)# barbs plotax1 = plt.subplot(1, 2, 1)ax1.barbs(X, Y, U, V)#quiver plotax2 = plt.subplot(1, 2, 2)qui = ax2.quiver(X, Y, U, V)plt.quiverkey(qui, 0.9, 1.05, 1, '1 m/s',labelpos='E',fontproperties={'weight': 'bold'})plt.show()
Python 2.7.3 urllib2 Error When I try to run my python script this error happens. How can I solve this problem?
['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'urllib2']
Traceback (most recent call last):
  File "m.py", line 3, in <module>
    import requests
  File "/usr/local/lib/python2.7/dist-packages/requests-2.9.1-py2.7.egg/requests/__init__.py", line 58, in <module>
    from . import utils
  File "/usr/local/lib/python2.7/dist-packages/requests-2.9.1-py2.7.egg/requests/utils.py", line 26, in <module>
    from .compat import parse_http_list as _parse_list_header
  File "/usr/local/lib/python2.7/dist-packages/requests-2.9.1-py2.7.egg/requests/compat.py", line 38, in <module>
    from urllib2 import parse_http_list
ImportError: cannot import name parse_http_list
You need to upgrade requests:
pip install --upgrade requests
"The request's session was deleted before the request completed. The user may have logged out in a concurrent request" "The request's session was deleted before the request completed. The user may have logged out in a concurrent request" I am facing this error when trying to use 2 request.session().In my code my using two request.session() to store variables.After one request successfully completed,its going to another request and throwing this error.request.session['dataset1'] = dataset1.to_json()request.session['total_cols'] = total_cols // getting error herePlease help to resolve the same.
Since my dataset has 8000 rows, it was not a good idea to store it in session variables. I have written some REST calls instead, and that solved my problem.
Shelves and multiple items I've been trying to make a quote input that puts multiple quotes inside a file (coupled with the author's name). I have tried with pickle, but I could not get more than 2 pickled items inside a file, and finally I decided to use shelve.
However, I am having some trouble with shelves as well. I don't really know how to put multiple items inside a file, even if I can shelve one.
import pickle
import shelve

quote = []
author = []

def givequote():
    f = shelve.open('quotation')    ## open the shelve so that i can write stuff in it
    f["quote"] = raw_input("What quote has its place in the quote book? \n to quit press Q\n\n")
    ## ask for input so that i can put stuff into quote,
    ## quote is a random value so its a problem, i might have to make a key/value first.
    if quote != "Q":
        f['author'] = raw_input("what author said that? \n to quit press Q \n\n")
        if author == "Q":
            print "goodbye"
    elif quote == "Q":
        print "goodbye"
    f.close()

def readquote():
    f = shelve.open('quotation')
    print "%3s\n - %s" % (f["quote"], f['author'])
Thank you. After finding out how it works I plan to try to make the same program using classes (I was thinking of nested ones) and methods, just to practice figuring out my inner programmer.
You can do this with pickle. As this answer describes, you can append a pickled object to a binary file by opening it in append-binary mode. To read the multiple pickled objects out of the file, just call pickle.load on the file handle until you get an EOFError. So your unpickle code might look like:
import pickle

objs = []
with open('quotation', 'rb') as f:
    while 1:
        try:
            objs.append(pickle.load(f))
        except EOFError:
            break
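For completeness, the writing side of this approach is just repeated dumps in append mode (the file name and the sample quotes are placeholders matching the read example above):
import pickle

def save_quote(quote, author):
    # 'ab' appends a new pickled record after any existing ones
    with open('quotation', 'ab') as f:
        pickle.dump((quote, author), f)

save_quote("Don't panic.", "Douglas Adams")
save_quote("Simple is better than complex.", "Tim Peters")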
"chalice deploy" call ends up with "Unknown parameter in input: "Layers"" I create the most basic chalice app from chalice import Chaliceapp = Chalice(app_name='testApp')@app.route('/')def index(): return {'hello': 'world'}with empty requirements.txt and config that looks like this:{ "version": "2.0", "app_name": "testApp", "stages": { "dev": { "api_gateway_stage": "api" } }}Error fires right after the first deployThis is the error i receive:Creating deployment package.Updating policy for IAM role: testApp-devUpdating lambda function: testApp-devTraceback (most recent call last): File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\cli__init__.py", line 466, in main return cli(obj={}) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\click\core.py", line 722, in call return self.main(*args, **kwargs) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\click\core.py", line 697, in main rv = self.invoke(ctx) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\click\core.py", line 1066, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\click\core.py", line 895, in invoke return ctx.invoke(self.callback, **ctx.params) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\click\core.py", line 535, in invoke return callback(*args, **kwargs) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\click\decorators.py", line 17, in new_func return f(get_current_context(), *args, **kwargs) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\cli__init__.py", line 202, in deploy deployed_values = d.deploy(config, chalice_stage_name=stage) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\deploy\deployer.py", line 342, in deploy return self._deploy(config, chalice_stage_name) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\deploy\deployer.py", line 355, in _deploy self._executor.execute(plan) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\deploy\executor.py", line 31, in execute self._default_handler)(instruction) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\deploy\executor.py", line 43, in _do_apicall result = method(**final_kwargs) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\awsclient.py", line 283, in update_function layers=layers File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\awsclient.py", line 352, in _update_function_config max_attempts=self.LAMBDA_CREATE_ATTEMPTS File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\chalice\awsclient.py", line 1009, in _call_client_method_with_retries response = method(**kwargs) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\botocore\client.py", line 314, in _api_call return self._make_api_call(operation_name, kwargs) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\botocore\client.py", line 586, in _make_api_call api_params, operation_model, context=request_context) File "c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\botocore\client.py", line 621, in _convert_to_request_dict api_params, operation_model) File 
"c:\users\vic\appdata\local\programs\python\python37-32\lib\site-packages\botocore\validate.py", line 291, in serialize_to_request raise ParamValidationError(report=report.generate_report())botocore.exceptions.ParamValidationError: Parameter validation failed:Unknown parameter in input: "Layers", must be one of: FunctionName, Role, Handler, Description, Timeout, MemorySize, VpcConfig, Environment, Runtime, DeadLetterConfig, KMSKeyArn, TracingConfig, RevisionId
After troubleshooting, I found some issues with my local configuration. What helped was running chalice inside a virtualenv (https://virtualenv.pypa.io/en/latest/).
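For reference, a typical way to set that up looks like this (standard virtualenv commands; adapt the environment name to taste):
pip install virtualenv
virtualenv venv
source venv/bin/activate    # on Windows: venv\Scripts\activate
pip install chalice
chalice deploy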
Import and insert word in sequence in Python I want to insert words in sequence, NOT RANDOMLY. Each registration attempt uses a single username and stops until the registration is completed. Then it should log out and begin a new registration with the next username in the list if the REGISTRATION FAILED, and skip it if the REGISTRATION SUCCEEDED.
I'm really confused because I have no clue. I've tried this code but it chooses randomly, and I have no idea how to use the "for loop":
import random

Copy = driver.find_element_by_xpath('XPATH')
Copy.click()
names = [ "Noah" ,"Liam" ,"William" ,"Anthony" ]
idx = random.randint(0, len(names) - 1)
print(f"Picked name: {names[idx]}")
Copy.send_keys(names[idx])
How can I make it choose the next word in sequence and NOT RANDOMLY? Any help please.
I am going to assume that you are happy with what the code does, with the exception that the names it picks are random. This narrows everything down to one line, namely the one that picks names randomly:
idx = random.randint(0, len(names) - 1)
Simple enough, you want "the next word in sequence and NOT RANDOMLY":
https://docs.python.org/3/tutorial/datastructures.html#more-on-lists
If you take a look at the link I've provided, you can see that lists have a pop() method, returning and removing some element from the list. We want the first one, so we will provide 0 as the argument for the pop method.
We modify the line to look something like this:
name = names.pop(0)
Now you still want to have the for-loop that will loop over all of the actions including name picking, so you encapsulate all of the code in a for-loop:
names = [ "Noah" ,"Liam" ,"William" ,"Anthony" ]

for i in range(len(names)):
    # ...
    Copy = driver.find_element_by_xpath('XPATH')
    Copy.click()
    name = names.pop(0)
    print(f"Picked name: {name}")
    Copy.send_keys(name)
    # ...
You might notice that the names list is not inside the for-loop. That is because we don't want to reassign the list every time we try to use a new name.
If you're completely unsure how for-loops work or how to implement one yourself, you should probably start by reading about how they work:
https://docs.python.org/3/tutorial/controlflow.html?highlight=loop#for-statements
Last but not least, you can see some # ... comments in my example indicating where the logic will probably go for the other part of your question: "Then logout and begin a new registration with the next username in the list if the REGISTRATION is FAILED, and skip if the REGISTRATION is SUCCEDED." I don't think we can help you with that, since there is simply not enough context or examples in your question.
Refer to this guide explaining how to ask a well formulated question so we can help you more next time.
Load a text file paragraph into a string without libraries Sorry if this question may look a bit dumb to some of you, but I'm totally a beginner at programming in Python, so I still have a lot to learn. Basically, I have this long text file separated into paragraphs. Sometimes the newline can be double or triple, to make the task harder for us, so I added a little check that looks like it's working fine: I have a variable called "paragraph" that tells me which paragraph I am currently in. Now I need to scan this text file and search for some sequences of words in it, but the newline character is the worst enemy here. For example, if I have the string "dummy text" and I'm looking into this:

"random questions about files with a dummy
text and strings hey look a new paragraph here"

As you can see, there is a newline between "dummy" and "text", so reading the file line by line doesn't work. So I was wondering about loading the entire paragraph directly into a string; that way I can remove punctuation and such more easily and check directly whether those sequences of words are contained in it. All this must be done without libraries. My paragraph counter works while the file is being read, so if loading a whole paragraph into a string is possible, should I basically use something like "".join until the paragraph counter increases by 1, because we're on the next paragraph? Any idea?
This should do the trick. It is very short and elegant. Note that the newline is replaced with a space, not an empty string, so that words split across lines don't get glued together:

with open('dummy text.txt') as file:
    data = file.read().replace('\n', ' ')

print(data)  # prints out the file

The output is:

"random questions about files with a dummy text and strings hey look a new paragraph here"
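To then search for a phrase that may have been split across lines, a small sketch; the ' '.join(...split()) variant is an assumption on my part to also collapse the double or triple newlines between paragraphs into single spaces:

with open('dummy text.txt') as file:
    data = ' '.join(file.read().split())  # collapses newlines and repeated spaces

if "dummy text" in data:
    print("phrase found")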
Python/Kivy Assertion Error I obtained an AssertionError while attempting to learn BoxLayout in kivy. I cannot figure out what has gone wrong.

from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.boxlayout import BoxLayout

class BoxLayoutApp(App):
    def build(self):
        return BoxLayout()

if __name__=="__main__":
    BoxLayoutApp().run()

And for the kv code:

<BoxLayout>:
    BoxLayout:
        Button:
            text: "test"
        Button:
            text: "test"
        Button:
            text: "test"
    BoxLayout:
        Button:
            text: "test"
        Button:
            text: "test"
        Button:
            text: "test"

Edit: I tried to subclass BoxLayout as suggested; however, I still face an AssertionError. The full (original) error message I reproduce here:

Traceback (most recent call last):
  File "boxlayout.py", line 12, in <module>
    BoxLayoutApp().run()
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\site-packages\kivy\app.py", line 802, in run
    root = self.build()
  File "boxlayout.py", line 8, in build
    return BoxLayout()
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\site-packages\kivy\uix\boxlayout.py", line 131, in __init__
    super(BoxLayout, self).__init__(**kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\site-packages\kivy\uix\layout.py", line 76, in __init__
    super(Layout, self).__init__(**kwargs)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\site-packages\kivy\uix\widget.py", line 345, in __init__
    Builder.apply(self, ignored_consts=self._kwargs_applied_init)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\site-packages\kivy\lang\builder.py", line 451, in apply
    self._apply_rule(widget, rule, rule, ignored_consts=ignored_consts)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\site-packages\kivy\lang\builder.py", line 566, in _apply_rule
    self.apply(child)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\site-packages\kivy\lang\builder.py", line 451, in apply
    self._apply_rule(widget, rule, rule, ignored_consts=ignored_consts)
  File "C:\Users\user\AppData\Local\Programs\Python\Python36-32\lib\site-packages\kivy\lang\builder.py", line 464, in _apply_rule
    assert(rule not in self.rulectx)
AssertionError
Try subclassing BoxLayout instead:

from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.boxlayout import BoxLayout
from kivy.lang import Builder

class MyBoxLayout(BoxLayout):
    pass

Builder.load_string('''
<MyBoxLayout>:
    BoxLayout:
        Button:
            text: "test"
        Button:
            text: "test"
        Button:
            text: "test"
    BoxLayout:
        Button:
            text: "test"
        Button:
            text: "test"
        Button:
            text: "test"
''')

class BoxLayoutApp(App):
    def build(self):
        return MyBoxLayout()

if __name__=="__main__":
    BoxLayoutApp().run()

The AssertionError is being thrown because you try to apply rules to the same class you are nesting. In other words, you apply a rule to a class saying that it shall contain itself, and that causes issues. The following will throw the same error:

<MyBoxLayout>:
    MyBoxLayout:
How to get the dimensions of a tensor (in TensorFlow) at graph construction time? I am trying an Op that is not behaving as expected.graph = tf.Graph()with graph.as_default(): train_dataset = tf.placeholder(tf.int32, shape=[128, 2]) embeddings = tf.Variable( tf.random_uniform([50000, 64], -1.0, 1.0)) embed = tf.nn.embedding_lookup(embeddings, train_dataset) embed = tf.reduce_sum(embed, reduction_indices=0)So I need to know the dimensions of the Tensor embed. I know that it can be done at the run time but it's too much work for such a simple operation. What's the easier way to do it?
I see most people confused about tf.shape(tensor) and tensor.get_shape(). Let's make it clear:

tf.shape

tf.shape is used for dynamic shape. If your tensor's shape is changeable, use it. An example: the input is an image with changeable width and height, and we want to resize it to half its size; then we can write something like:

new_height = tf.shape(image)[0] / 2

tensor.get_shape

tensor.get_shape is used for fixed shapes, which means the tensor's shape can be deduced in the graph.

Conclusion: tf.shape can be used almost anywhere, but tensor.get_shape only works for shapes that can be deduced from the graph.
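For instance, a small sketch with the embed tensor from the question, placed inside the same with graph.as_default() block; the shapes shown assume the [128, 2] lookup into 64-dimensional embeddings:

embed = tf.nn.embedding_lookup(embeddings, train_dataset)
print(embed.get_shape())            # (128, 2, 64), known at graph construction time
embed = tf.reduce_sum(embed, reduction_indices=0)
print(embed.get_shape().as_list())  # [2, 64], as a plain Python list of ints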
/usr/bin/python: No module named pip I am having a bit of trouble getting everything to work on my Mac running El Capitan. I am running 3.5.1. I am under the impression that Pip is included with an install of the above, however when I try to use it to install sympy using the syntax in terminal: python -m pip install SomePackage, I get the error mentioned in the title. I tried running import pip in IDLE, and got no error, so I am quite confused. If I type pip into IDLE, I get:<module 'pip' from '/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/pip/__init__.py'>Does anybody know what the problem is? Do I need to navigate to a certain directory in Terminal when I run the command?
I believe that you can run it by calling pip directly in the terminal, if you already have it installed:

pip install sympy
Get params validation on viewsets.ModelViewSet I am new to django and building a REST API using django-rest-framework.I have written some code to check whether the user has supplied some parameters or not.But that is very ugly with lot of if conditions, so i want to refactor it.Below is the code that i have written please suggest how to refactor it.I am looking for some django based validations.class AssetsViewSet(viewsets.ModelViewSet): queryset = Assets.objects.using("gpr").all() def create(self, request): assets = [] farming_details = {} bluenumberid = request.data.get('bluenumberid', None) if not bluenumberid: return Response({'error': 'BlueNumber is required.'}) actorid = request.data.get('actorid', None) if not actorid: return Response({'error': 'Actorid is required.'}) asset_details = request.data.get('asset_details', None) if not asset_details: return Response({'error': 'AssetDetails is required.'}) for asset_detail in asset_details: location = asset_detail.get('location', None) if not location: return Response({'error': 'location details is required.'}) assettype = asset_detail.get('type', None) if not assettype: return Response({'error': 'assettype is required.'}) asset_relationship = asset_detail.get('asset_relationship', None) if not asset_relationship: return Response({'error': 'asset_relationship is required.'}) subdivision_code = location.get('subdivision_code', None) if not subdivision_code: return Response({'error': 'subdivision_code is required.'}) country_code = location.get('country_code', None) if not country_code: return Response({'error': 'country_code is required.'}) locationtype = location.get('locationtype', None) if not locationtype: return Response({'error': 'locationtype is required.'}) latitude = location.get('latitude', None) if not latitude: return Response({'error': 'latitude is required.'}) longitude = location.get('longitude', None) if not longitude: return Response({'error': 'longitude is required.'}) try: country_instance = Countries.objects.using('gpr').get(countrycode=country_code) except: return Response({'error': 'Unable to find country with countrycode ' + str(country_code)}) try: subdivision_instance = NationalSubdivisions.objects.using('gpr').get(subdivisioncode=subdivision_code, countrycode=country_code) except: return Response({'error': 'Unable to find subdivision with countrycode ' + str(country_code) + ' and' + ' subdivisioncode ' + str(subdivision_code)}) kwargs = {} kwargs['pobox'] = location.get('pobox', '') kwargs['sublocation'] = location.get('sublocation', '') kwargs['streetaddressone'] = location.get('streetaddressone', '') kwargs['streetaddresstwo'] = location.get('streetaddresstwo', '') kwargs['streetaddressthree'] = location.get('streetaddressthree', '') kwargs['city'] = location.get('city', '') kwargs['postalcode'] = location.get('postalcode', '') cursor = connections['gpr'].cursor() cursor.execute("Select uuid() as uuid") u = cursor.fetchall() uuid = u[0][0].replace("-", "") kwargs['locationid'] = uuid # l.refresh_from_db() try: Locations.objects.using('gpr').create_location(locationtype=locationtype, latitude=latitude, longitude=longitude, countrycode=country_instance, subdivisioncode = subdivision_instance, **kwargs) except (TypeError, ValueError): return Response({'error': 'Error while saving location'}) try: location_entry = Locations.objects.using('gpr').get(locationid=uuid) except: return Response({'error': 'Unable to find location with locationid ' + str(uuid)}) asset_entry = 
Assets.objects.using('gpr').create(locationid=location_entry, assettype=assettype) asset_entry = Assets.objects.using('gpr').filter(locationid=location_entry, assettype=assettype).latest('assetinserted') farming_details[asset_entry.assetid] = [] try: actor = Actors.objects.using('gpr').get(actorid = actorid) except: return Response({'error': 'Unable to find actor with actorid ' + str(actorid)}) assetrelationship = AssetRelationships.objects.using('gpr').create(assetid= asset_entry, actorid= actor,assetrelationship=asset_relationship) assets.append(asset_entry) if assettype=="Farm or pasture land": hectares = asset_detail.get('hectares', None) if hectares is None: return Response({'error': 'hectares must be a decimal number'}) try: farmingasset = FarmingAssets.objects.using('gpr').create(assetid=asset_entry, hectares=hectares) except ValidationError: return Response({'error': 'hectares must be decimal value.'}) farmingasset = FarmingAssets.objects.using('gpr').filter(assetid=asset_entry, hectares=hectares).last() for type_detail in asset_detail.get('type_details', []): crop = type_detail.get('crop', '') hectare = type_detail.get('hectare', '') if crop != '' and hectare != '': try: h3code = ProductCodes.objects.using('gpr').get(h3code=crop) except: return Response({'error': 'Unable to find ProductCode with h3code' + str(crop)}) try: farming = Farming.objects.using('gpr').create(assetid=farmingasset, h3code=h3code, annualyield=hectare) farming_details[asset_entry.assetid].append(farming.farmingid) except Exception as e: return Response({'error': e}) else: return Response({'error': 'crop with hectare is required.'}) i = 0 data = {} for asset in assets: if farming_details[asset.assetid]: data[i] = {"assetid": asset.assetid, "assetbluenumber": asset.assetuniversalid, "farming_ids": farming_details[asset.assetid]} else: data[i] = {"assetid": asset.assetid, "assetbluenumber": asset.assetuniversalid} i+=1 return Response(data)Asset Modelclass Assets(models.Model): assetid = models.CharField(db_column='AssetID', primary_key=True, max_length=255) # Field name made lowercase. assetname = models.CharField(db_column='AssetName', max_length=255, blank=True, null=True) # Field name made lowercase. locationid = models.ForeignKey('Locations', models.DO_NOTHING, db_column='LocationID') # Field name made lowercase. assetuniversalid = models.CharField(db_column='AssetBluenumber', unique=True, blank=True, null=True, max_length=255) # Field name made lowercase. assettype = models.CharField(db_column='AssetType', max_length=45, blank=True, null=True) # Field name made lowercase. assetinserted = models.DateTimeField(db_column='AssetInserted', blank=True, null=True, auto_now_add=True) # Field name made lowercase. assetupdated = models.DateTimeField(db_column='AssetUpdated', blank=True, null=True, auto_now=True) # Field name made lowercase.
You can make serializers; they have a very easy way to validate your data, and since in your case all the fields seem to be required, it becomes even easier. Create a file in your api app like this:

serializers.py

# Import the serializers lib
from rest_framework import serializers

# Import your models here (you can put more than one serializer in one file)
from assets.models import Assets

# Now make your serializer class
class AssetsSerializer(serializers.ModelSerializer):
    class Meta:
        model = Assets
        fields = '__all__'
        # This last line will put all the fields on your serializer,
        # but you can also specify only some fields, like:
        # fields = ('assetid', 'assetname')

In your view you can use your serializer class(es) to validate your data.

views.py

# Serializers
from assets.serializers import AssetsSerializer

# Libraries you can use
from django.http import Http404
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework import status

class AssetsViewSet(viewsets.ModelViewSet):
    queryset = Assets.objects.using("gpr").all()

    def create(self, request):
        assets = []
        farming_details = {}
        # Set your serializer
        serializer = AssetsSerializer(data=request.data)
        if serializer.is_valid():
            # MAGIC HAPPENS HERE
            # ... Here you do the routine you run when the data is valid.
            # You can use the serializer as an object of your Assets model.
            # Save it
            serializer.save()
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

I took this all from the documentation. You can learn a lot by doing the tutorial from the official site. I hope it helps.
Python - Error in two-layer neural network I am trying to implement a 2-layer neural network from scratch. But there is something wrong. After some iterations, my loss becomes nan.'''We are implementing a two layer neural network.'''import numpy as npx,y = np.random.rand(64,1000),np.random.randn(64,10)w1,w2 = np.random.rand(1000,100),np.random.rand(100,10)learning_rate = 1e-4x -= np.mean(x,axis=0) #Normalizing the Training Data Setfor t in range(2000): h = np.maximum(0,x.dot(w1)) # Applying Relu Non linearity ypred = h.dot(w2) #Output of Hidden layer loss = np.square(ypred - y).sum() print('Step',t,'\tLoss:- ',loss) #Gradient Descent grad_ypred = 2.0 * (ypred - y) gradw2 = (h.transpose()).dot(grad_ypred) grad_h = grad_ypred.dot(w2.transpose()) gradw1 = (x.transpose()).dot(grad_h*h*(1-h)) w1 -= learning_rate*gradw1 w2 -= learning_rate*gradw2I also implemented Linear Regression using a Softmax classifier and multi-class SVM loss. Same problem happens. Please tell me how to resolve this.Output:D:\Study Material\Python 3 Tutorial\PythonScripts\Machine Learning>python TwoLayerNeuralNet.pyStep 0 Loss:- 19436393.79233052Step 1 Loss:- 236820315509427.38Step 2 Loss:- 1.3887002186558748e+47Step 3 Loss:- 1.868219503527502e+189Step 4 Loss:- infTwoLayerNeuralNet.py:23: RuntimeWarning: invalid value encountered in multiply gradw1 = (x.transpose()).dot(grad_h*h*(1-h))TwoLayerNeuralNet.py:12: RuntimeWarning: invalid value encountered in maximum h = np.maximum(0,x.dot(w1)) # Applying Relu Non linearityStep 5 Loss:- nanStep 6 Loss:- nanStep 7 Loss:- nanStep 8 Loss:- nanStep 9 Loss:- nanStep 10 Loss:- nanStep 11 Loss:- nanStep 12 Loss:- nanStep 13 Loss:- nanStep 14 Loss:- nanStep 15 Loss:- nanStep 16 Loss:- nanStep 17 Loss:- nanStep 18 Loss:- nanStep 19 Loss:- nanStep 20 Loss:- nan
It's because your loss is becoming too high. Try this:

loss = np.square(ypred - y).mean()

If it still doesn't work, try reducing the learning rate to something like 1e-8. Then observe whether the loss is going up or down: if the loss is decreasing, that's good; if the loss is increasing, that's a bad sign. You might also want to consider using a better dataset and double-check the weight updates.
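A minimal sketch of the loop with both suggestions applied. Two details here are my own additions, not part of the answer above: the gradient is divided by the element count so it matches the mean() loss, and the backward pass uses the ReLU mask (h > 0) instead of the original h*(1-h), which is the sigmoid derivative:

learning_rate = 1e-8  # much smaller step, per the suggestion above

for t in range(2000):
    h = np.maximum(0, x.dot(w1))                 # ReLU forward pass
    ypred = h.dot(w2)
    loss = np.square(ypred - y).mean()           # mean() keeps the loss on a bounded scale
    print('Step', t, '\tLoss:- ', loss)

    grad_ypred = 2.0 * (ypred - y) / y.size      # gradient of the mean, not the sum
    gradw2 = h.T.dot(grad_ypred)
    grad_h = grad_ypred.dot(w2.T)
    gradw1 = x.T.dot(grad_h * (h > 0))           # ReLU backward pass: 0/1 mask
    w1 -= learning_rate * gradw1
    w2 -= learning_rate * gradw2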
How to implement n times nested loops in python? I want to do nested loops n times, where n is a variable that can be provided by a function or input method. In order to do this, I have to write lots of if..elif blocks depending on the size of n. Does anybody have a good strategy to handle this task? The code (for a combination-of-4-letters problem) I used is as follows:

def charCombination(n):
    patList = []
    s = 'ATCG'
    if n == 1:
        for i in s:
            patList.append(i)
    elif n == 2:
        for i in s:
            for j in s:
                patList.append(i+j)
    elif n == 3:
        for i in s:
            for j in s:
                for k in s:
                    patList.append(i+j+k)
    ...
    return patList
You can use itertools.product with repeat parameterimport itertoolsdef charCombination(n): return ["".join(item) for item in itertools.product("ATCG", repeat=n)]print charCombination(1)print charCombination(2)print charCombination(3)Output['A', 'T', 'C', 'G']['AA', 'AT', 'AC', 'AG', 'TA', 'TT', 'TC', 'TG', 'CA', 'CT', 'CC', 'CG', 'GA', 'GT', 'GC', 'GG']['AAA', 'AAT', 'AAC', 'AAG', 'ATA', 'ATT', 'ATC', 'ATG', 'ACA', 'ACT', 'ACC', 'ACG', 'AGA', 'AGT', 'AGC', 'AGG', 'TAA', 'TAT', 'TAC', 'TAG', 'TTA', 'TTT', 'TTC', 'TTG', 'TCA', 'TCT', 'TCC', 'TCG', 'TGA', 'TGT', 'TGC', 'TGG', 'CAA', 'CAT', 'CAC', 'CAG', 'CTA', 'CTT', 'CTC', 'CTG', 'CCA', 'CCT', 'CCC', 'CCG', 'CGA', 'CGT', 'CGC', 'CGG', 'GAA', 'GAT', 'GAC', 'GAG', 'GTA', 'GTT', 'GTC', 'GTG', 'GCA', 'GCT', 'GCC', 'GCG', 'GGA', 'GGT', 'GGC', 'GGG']
Timer for variable time delay I would like a timer (Using Python 3.8 currently) that first checks the system time or Naval Observatory clock, so it can be started at any time and synch with the 00 seconds.I'm only interested in the number of seconds on the system clock or Naval Observatory. At the top of every minute, i.e when the seconds = 00 I need to write data to a DataFrame or database, then sleep again for another 60 seconds.I first checked the system time, determined how long it is from the 00 seconds, and placed the first delay for that amount. After that it should delay or sleep for 60 seconds, then run again. Data is constantly changing but at this point I only need to write the data every 60 seconds, would like to also have it have the capability of using other time frames like 5 minutes, 15 minutes etc, but once the first is done the other time frames will be easy.Here is my lame attempt, it runs a few iterations then quits, and I'm sure it's not very efficientdef time_delay(): sec = int(time.strftime('%S')) if sec != 0: wait_time = 60 - sec time.sleep(wait_time) sec = int(time.strftime('%S')) wait_time = 60 - sec elif time.sleep(60): time_delay()
This will call a function when the seconds are 0 (it checks the clock once per second, so it fires once during the 00 second of each minute):

import time

def time_loop(job):
    while True:
        if int(time.time()) % 60 == 0:
            job()
        time.sleep(1)
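A hypothetical usage sketch; write_row here stands in for your DataFrame/database write, and the modulus can be swapped for the other intervals you mention:

def write_row():
    print("writing row at", time.strftime('%H:%M:%S'))  # your DataFrame/database write goes here

time_loop(write_row)  # fires once each time the seconds hit 00
# for a 5-minute interval, test int(time.time()) % 300 == 0 instead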
Finding nearest timeindex for many categories I am trying to obtain the data points nearest to the query timestamp for multiple independent categories like this (example in more detail in the gist):dt = pd.to_datetime(dt)df_output = list()for category in df.category.unique(): df_temp = df[df.category == category] i = df_temp.index.get_loc(dt, method='nearest') latest = df_temp.iloc[i] df_output.append(latest)pd.DataFrame(df_output)The issue with this approach is that it is very slow (and obviously feels very blunt). Profiling suggests the bottleneck is iloc, which seems odd.What is a faster/more correct way to go about it? Is there a way to obtain the result for all of the categories at once? (I'm thinking of some groupby magic) Is pandas capable of doing it or should I switch to some other timeseries storage method?
Pandas was made for time-series data, so this is its bread and butter. Try this for performance:

dt = '2017-12-23 01:49:13'
df["timedelta"] = abs(df.index - pd.Timestamp(dt))
df.loc[df.groupby(by="category")["timedelta"].idxmin()].drop("timedelta", axis=1)

This creates a new column called timedelta, named after the pandas.Timedelta class, then uses groupby to combine all the categories, finds the smallest timedelta in each, and passes their indices into .loc. Lastly, the helper column is dropped.
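A self-contained sketch on made-up data, since the original gist isn't shown here (the timestamps and categories are assumptions):

import pandas as pd

idx = pd.to_datetime(['2017-12-23 01:00', '2017-12-23 02:00', '2017-12-23 01:30'])
df = pd.DataFrame({'category': ['a', 'a', 'b'], 'value': [1, 2, 3]}, index=idx)

dt = '2017-12-23 01:49:13'
df['timedelta'] = abs(df.index - pd.Timestamp(dt))
df.loc[df.groupby(by='category')['timedelta'].idxmin()].drop('timedelta', axis=1)
# returns the 02:00 row for 'a' (10m 47s away) and the only 'b' row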
Cassandra Pagination CPU Utilization Issue I have developed a Python script for pulling the data, but it uses only a single CPU core, and when I run top, Cassandra is using more than 200% CPU, going idle in between when GC kicks in. I am unable to understand how I can convert the code to utilize multiple cores and parallel processing.

class PagedResultHandler(object):

    def __init__(self, future):
        self.error = None
        self.finished_event = multiprocessing.Event()
        self.future = future
        self.future.add_callbacks(
            callback=self.handle_page,
            errback=self.handle_error)
        self.rows = []

    def handle_page(self, rows):
        self.rows += rows
        if self.future.has_more_pages:
            self.future.start_fetching_next_page()
        else:
            self.finished_event.set()

    def handle_error(self, exc):
        self.error = exc
        self.finished_event.set()

start_time = time.time()
cluster = Cluster(contact_points=['127.0.0.1'], protocol_version=4)
session = cluster.connect('unit_test')
query = "select * from " + table_name + " where runseq=0"
print("--Fired Query--->> ", query)
future = session.execute_async(query)
handler = PagedResultHandler(future)
handler.finished_event.wait()
data = pd.DataFrame(handler.rows)
print("--- %s seconds ---" % (time.time() - start_time))
if handler.error:
    raise handler.error
cluster.shutdown()

Each table I pull contains more than 3 million rows, and there are a lot of performance issues. Can anyone help me make use of multiple CPU cores and improve performance?
You won't get blazing performance out of the Python driver, but you can look at cqlsh's copy functions (https://github.com/apache/cassandra/blob/trunk/pylib/cqlshlib/copyutil.py#L229) if you really want to see a fast implementation that can use multiple cores. On the C* side, make sure you have enough nodes with adequate hardware (SSDs, multiple cores, >16 GB of RAM). If you are using sub-8 GB heaps or similar, don't expect much out of it. Cassandra/the JVM (with default settings) is designed to fully utilize the server as much as it can, not to share resources, so expect high CPU.
finding just full words in a python string Basically it all comes down to finding just the fullword, not matching also a substring thereof.I have phrases like: texto = "hello today is the first day of working week" and what I wanted to do is to split that phrase into words to see if any matched fullwords that I have obtained from a sql query, like this:sql = "select keyword from keywords" try: cursor.execute(sql) # Fetch all the rows in a list of lists. results = cursor.fetchall() for result in results: keywords.append(result) so there I have a tuple of keywords.So, yes, of course, you would split the phrase like this:for word in texto.split(): if word in keywords.__str__(): print ("keyword %s detected in texto" % (word))but while that does indeed find me words, it also "finds" me things that I would have not wanted or expected (a substring of a word):I know that in PHP you would do something like this:if (preg_match("/\b$search\b/", texto)): {print "word found"}and I ve read quite a few discussions on this at SO. Some people say that you just do split, (but that is what I have done), others say use this:in isn't how it's done.>>> re.search(r'\babc\b', 'abc123')>>> re.search(r'\babc\b', 'abc 123')<_sre.SRE_Match object at 0x1146780>is this latest example the way to do it? according to the shell interpreter it would match the second row.
I don't see why split() should not work. The issue is the .__str__() (which I don't see any need for). It creates one single string in which the keywords are searched, and then it will find substrings as well. The following is working for me:

texto = "hello today is the first day of working week"
keywords = ["is", "day", "week", "work", "sun"]

for keyword in keywords:
    print("keyword", keyword, end=" ")
    if keyword in texto.split():
        print("found.")
    else:
        print("not found")

work and sun should not match: work is a substring in the text, sun is not in the text. The output is:

keyword is found.
keyword day found.
keyword week found.
keyword work not found
keyword sun not found
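If you do want the regex route from the PHP snippet in your question, a hedged equivalent with \b word boundaries (re.escape guards against keywords that contain regex metacharacters):

import re

for keyword in keywords:
    if re.search(r'\b' + re.escape(keyword) + r'\b', texto):
        print("keyword", keyword, "found.")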
How to solve a Dataset problem in python? I have a dataset with different programming languages in a column, and I want to get the 10 most used programming languages in my dataset, using Python. Dataset: https://drive.google.com/file/d/1nJLDFSdIbkNxcqY7NBtJZfcgLW1wpsUZ/view?usp=sharing
On StackOverflow, you should not link to outside sources, but include relevant data in your question. You should also pare down the data - if it is long, make it as short as possible and still illustrate your question.Finally, on StackOverflow, we don't ask bare questions like "how to do x". You must first make an effort yourself to solve your problem, and then, if you don't understand why your code does not work, then you post the smallest possible example, and we will tell you where the bug is. You must show the effort first and then we help you fix it. The purpose of it is so that you learn to code yourself, and not just copy a ready solution.Since you did not show any effort, I will not give you a complete solution, but I will give you some start to point you in the right direction. You should first analyze my code so that you understand how it works and then you can finish it.When you run this code, it will print for you the relevant words. You should probably further process them, to remove irrelevant characters that people put in, and also to use proper case of letters, and then you can count the occurrences of words.#!/usr/bin/python3LANGUAGE_COLUMN_INDEX = 8with open('Salary.csv') as fp: #skip over first line fp.readline() for line in fp: for word in line.split(',')[LANGUAGE_COLUMN_INDEX].split('/'): print(word.strip())
Python/Django - Having trouble giving an object a foreignkey that was just created I am expanding on the basic Django poll site tutorial, and I have made a view that allows users to add their own polls. Adding a poll works; adding choices does not. Apparently this is because the poll does not "exist" yet, and the p.id cannot be used. However, the p.id works when redirecting the browser at the bottom. Any ideas?

def save(request):
    p = Poll(question=request.POST['question'], pub_date=timezone.now())
    p.save()
    c1 = Choice(poll=p.id, choice_text=request.POST['c1'], votes=0)
    c2 = Choice(poll=p.id, choice_text=request.POST['c2'], votes=0)
    c3 = Choice(poll=p.id, choice_text=request.POST['c3'], votes=0)
    c4 = Choice(poll=p.id, choice_text=request.POST['c4'], votes=0)
    c1.save()
    c2.save()
    c3.save()
    c4.save()
    return HttpResponseRedirect(reverse('detail', args=(p.id,)))
Nevermind, I figured it out. The choice doesn't need an id; rather, it needs the object. Fixed by changing:

c1 = Choice(poll=p.id, choice_text=request.POST['c1'], votes=0)

to

c1 = Choice(poll=p, choice_text=request.POST['c1'], votes=0)
No such file or directory: ...build/sip/setup.py I'm trying to install PyQt on Ubuntu. The list of obstacles I'm dealing with is far too long to include here. The obstacle I'm currently trying to get past is this:(myvenv)% cd ~/.virtualenvs/myvenv/build/pyqt(myvenv)% python ./configure.pyTraceback (most recent call last): File "./configure.py", line 32, in <module> import sipconfigOK, so let's install sipconfig...(myvenv)% pip install SIPDownloading/unpacking SIP Downloading sip-4.14.8-snapshot-02bdf6cc32c1.zip (848Kb): 848Kb downloaded Running setup.py egg_info for package SIP Traceback (most recent call last): File "<string>", line 14, in <module> IOError: [Errno 2] No such file or directory: '/home/yt/.virtualenvs/myvenv/build/SIP/setup.py' Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 14, in <module>IOError: [Errno 2] No such file or directory: '/home/yt/.virtualenvs/myvenv/build/SIP/setup.py'----------------------------------------Command python setup.py egg_info failed with error code 1 in /home/yt/.virtualenvs/myvenv/build/SIPStoring complete log in /home/yt/.pip/pip.logThe only recipe I've found so far installing SIP is thispython configure.pymakesudo make installBut, on Ubuntu I try to do every installation through apt-get, so I'm reluctant to follow the recipe above.How else can I install SIP?
Already answered here. Start over from the beginning and make sure that all dependencies are satisfied.
parallel installation of Python 2.7 and 3.3 via Homebrew - pip3 fails I would like to make the jump and get acquainted with Python 3. I followed the instructions found here, with the installation working flawlessly. I'm also able to use the provided virtualenv to create environments for Python 2 and Python 3 (I followed the instructions here.). Unfortunately, pip3 fails when no virtualenv is activated, and I need it to install global modules for Python 3. This is the error message:

± |master ✓| → pip3
Traceback (most recent call last):
  File "/usr/local/bin/pip3", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/local/lib/python2.7/site-packages/distribute-0.6.45-py2.7.egg/pkg_resources.py", line 51
    def _bypass_ensure_directory(name, mode=0777):
                                              ^
SyntaxError: invalid token

It looks like pip3 is trying to access Python 2's distribute. Is there any workaround for this?
I was having the same problem as you were and I had export PYTHONPATH="/usr/local/lib/python2.7/site-packages:$PYTHONPATH"in my ~/.bash_profile. Removing that line solved the problem for me. If you have that or something like it in your ~/.bashrc or ~/.bash_profile, try removing it.
graph theory - connect point in 3D space with other three nearest points (distance based) I want to connect nodes (atoms) with the three nearest nodes (atoms). I am doing:

ad1 = np.zeros((31,31))
for i in range(31):
    dist_i = dist_mat[i]
    cut_off = sorted(dist_i)[3]
    ad1[i, np.where((dist_i<=cut_off) & (dist_i!=0.0))] = 1

np.sum(ad1, axis=1)  # [3,3,3,3........3]
np.sum(ad1, axis=0)
# array([3., 3., 2., 2., 2., 4., 2., 5., 2., 3., 2., 3., 3., 2., 3., 6., 3., 5., 3., 4., 2., 3., 2., 4., 3., 3., 2., 4., 3., 2., 3.])

I want np.sum(ad1, axis=0) to be all 3. That would mean all nodes (atoms) are connected to exactly 3 nearest nodes. As we can see, node/atom 5 is connected to 4 other nodes/atoms, which is wrong; I want it to be connected to exactly its 3 nearest nodes. How do I do it? Below is the distance matrix of the 31 atoms (31 x 31), one row per atom:

0.0000,6.8223,7.5588,4.8966,7.2452,2.7778,3.7082,2.7345,7.1540,6.8273,3.6995,7.4136,4.6132,5.8456,2.8037,5.4881,8.1769,2.7361,8.3034,4.9450,4.8225,4.6152,4.8243,9.4876,7.2391,2.9941,7.4180,5.8523,7.6310,5.5996,8.1761
6.8223,0.0000,3.0097,2.8567,2.6647,5.0092,5.8451,6.8037,6.7031,4.8983,7.5806,5.2873,5.5000,7.0038,4.9530,3.9763,7.0263,4.8941,4.7416,6.8450,2.8166,2.9221,5.5502,7.0328,5.5148,7.6318,2.7456,5.1150,2.9654,4.2863,5.3168
7.5588,3.0097,0.0000,2.8679,2.9242,5.1443,6.8372,6.5621,5.5169,3.0135,6.8412,4.6886,4.2507,5.4276,5.8673,4.8804,4.7149,6.5707,2.5568,6.3820,5.0625,4.2533,5.0600,5.2125,2.9262,7.8126,4.6864,5.4224,2.6001,3.2330,4.7116
4.8966,2.8567,2.8679,0.0000,3.8064,3.0221,5.1335,4.1131,5.8284,2.8610,5.1354,3.8488,2.8086,4.9618,3.0028,2.7539,5.7401,4.1214,4.5830,5.4268,2.9346,2.8122,2.9319,6.8084,3.8015,5.8992,3.8516,4.9611,2.9212,3.0441,5.7392
7.2452,2.6647,2.9242,3.8064,0.0000,4.7907,5.1676,7.2899,4.5138,5.5206,7.0249,6.9978,5.3888,5.7435,6.2424,6.0645,5.2615,5.6079,2.8907,5.3589,4.7384,2.8152,6.6866,4.6140,4.7660,6.9472,5.3921,3.2734,4.7050,2.9157,2.6684
2.7778,5.0092,5.1443,3.0221,4.7907,0.0000,2.8352,3.0025,4.6425,5.0210,2.8371,6.4170,2.6958,3.6516,3.1115,4.9489,5.5823,3.0042,5.5496,3.0193,4.1730,2.6916,4.1796,6.7613,4.7913,2.8772,6.4156,3.6500,5.9378,2.8228,5.5754
3.7082,5.8451,6.8372,5.1335,5.1676,2.8352,0.0000,5.4535,5.0179,7.5889,4.7500,8.8240,5.4549,5.4488,4.9281,6.8616,7.0789,2.8247,6.7504,3.1734,4.9441,2.9387,6.8368,7.3498,7.0263,2.7571,7.6165,2.7391,7.8184,4.2915,5.4352
2.7345,6.8037,6.5621,4.1131,7.2899,3.0025,5.4535,0.0000,6.9646,4.8923,2.8238,5.6867,2.8002,4.7436,2.7895,4.7319,7.0025,4.5800,7.4954,5.2983,5.3961,5.3071,2.7803,8.9227,5.5914,4.2758,7.1749,6.6267,6.5470,5.1104,8.2905
7.1540,6.7031,5.5169,5.8284,4.5138,4.6425,5.0179,6.9646,0.0000,6.7047,5.0056,9.1250,4.9769,2.9634,7.5819,8.5546,2.8061,6.9800,3.6751,2.5834,7.6541,4.9881,7.6494,2.7948,4.5088,5.2110,9.1264,2.9761,7.8062,2.7941,2.8038
6.8273,4.8983,3.0135,2.8610,5.5206,5.0210,7.5889,4.8923,6.7047,0.0000,5.8617,2.7465,2.9301,5.1272,4.9558,3.9808,5.3162,6.8165,4.7404,6.8576,5.5561,5.5098,2.8152,7.0330,2.6644,7.6461,5.2912,7.0096,2.9674,4.2945,7.0258
3.6995,7.5806,6.8412,5.1354,7.0249,2.8371,4.7500,2.8238,5.0056,5.8617,0.0000,7.6266,2.9474,2.7359,4.9270,6.8677,5.4363,5.4517,6.7478,3.1721,6.8329,5.4529,4.9555,7.3438,5.1708,2.7597,8.8287,5.4452,7.8249,4.2909,7.0646
7.4136,5.2873,4.6886,3.8488,6.9978,6.4170,8.8240,5.6867,9.1250,2.7465,7.6266,0.0000,4.9529,7.5866,4.8602,2.7541,8.0299,7.1729,7.0207,8.9107,5.2910,6.5588,2.9146,9.5249,5.3922,9.1160,4.1719,8.7725,2.7497,6.4529,9.0793
4.6132,5.5000,4.2507,2.8086,5.3888,2.6958,5.4549,2.8002,4.9769,2.9301,2.9474,4.9529,0.0000,2.8097,3.9433,4.7505,4.4056,5.3122,4.8719,4.3293,5.3691,4.4387,2.8442,6.4077,2.8070,4.8696,6.5606,5.3482,5.0545,2.9281,6.2031
5.8456,7.0038,5.4276,4.9618,5.7435,3.6516,5.4488,4.7436,2.9634,5.1272,2.7359,7.5866,2.8097,0.0000,6.2398,7.3805,2.7234,6.6316,4.5653,2.8038,7.3250,5.3513,5.6427,4.8167,3.2793,4.4545,8.7753,4.6687,7.1729,2.8747,5.2395
2.8037,4.9530,5.8673,3.0028,6.2424,3.1115,4.9281,2.7895,7.5819,4.9558,4.9270,4.8602,3.9433,6.2398,0.0000,2.6964,7.9654,2.7924,7.3935,6.1211,2.8429,3.9452,2.8424,9.2720,6.2355,5.1988,4.8668,6.2430,5.2004,5.1737,7.9659
5.4881,3.9763,4.8804,2.7539,6.0645,4.9489,6.8616,4.7319,8.5546,3.9808,6.8677,2.7541,4.7505,7.3805,2.6964,0.0000,8.3401,4.7306,7.1148,7.8221,2.7718,4.7476,2.7753,9.5030,6.0617,7.5711,2.7577,7.3777,3.1397,5.7882,8.3392
8.1769,7.0263,4.7149,5.7401,5.2615,5.5823,7.0789,7.0025,2.8061,5.3162,5.4363,8.0299,4.4056,2.7234,7.9654,8.3401,0.0000,8.3066,2.9172,4.5762,8.3187,6.2166,6.9974,2.6988,2.6667,6.8504,9.0816,5.2518,7.0366,3.3010,4.3079
2.7361,4.8941,6.5707,4.1214,5.6079,3.0042,2.8247,4.5800,6.9800,6.8165,5.4517,7.1729,5.3122,6.6316,2.7924,4.7306,8.3066,0.0000,7.5094,5.3015,2.7788,2.8080,5.4012,8.9373,7.2956,4.2691,5.6901,4.7565,6.5515,5.1204,7.0169
8.3034,4.7416,2.5568,4.5830,2.8907,5.5496,6.7504,7.4954,3.6751,4.7404,6.7478,7.0207,4.8719,4.5653,7.3935,7.1148,2.9172,7.5094,0.0000,5.4477,6.7736,4.8812,6.7675,2.6661,2.8871,7.5857,7.0211,4.5660,5.1569,2.7695,2.9165
4.9450,6.8450,6.3820,5.4268,5.3589,3.0193,3.1734,5.2983,2.5834,6.8576,3.1721,8.9107,4.3293,2.8038,6.1211,7.8221,4.5762,5.3015,5.4477,0.0000,6.7806,4.3263,6.7865,5.3395,5.3642,2.6393,8.9072,2.7997,8.0722,3.1518,4.5623
4.8225,2.8166,5.0625,2.9346,4.7384,4.1730,4.9441,5.3961,7.6541,5.5561,6.8329,5.2910,5.3691,7.3250,2.8429,2.7718,8.3187,2.7788,6.7736,6.7806,0.0000,2.8411,4.6768,8.8466,6.6849,6.5259,2.9190,5.6409,4.2802,5.1556,7.0036
4.6152,2.9221,4.2533,2.8122,2.8152,2.6916,2.9387,5.3071,4.9881,5.5098,5.4529,6.5588,4.4387,5.3513,3.9452,4.7476,6.2166,2.8080,4.8812,4.3263,2.8411,0.0000,5.3713,6.4170,5.3921,4.8619,4.9490,2.8105,5.0539,2.9321,4.4129
4.8243,5.5502,5.0600,2.9319,6.6866,4.1796,6.8368,2.7803,7.6494,2.8152,4.9555,2.9146,2.8442,5.6427,2.8424,2.7753,6.9974,5.4012,6.7675,6.7865,4.6768,5.3713,0.0000,8.8413,4.7294,6.5361,5.2956,7.3262,4.2788,5.1556,8.3129
9.4876,7.0328,5.2125,6.8084,4.6140,6.7613,7.3498,8.9227,2.7948,7.0330,7.3438,9.5249,6.4077,4.8167,9.2720,9.5030,2.6988,8.9373,2.6661,5.3395,8.8466,6.4170,8.8413,0.0000,4.6127,7.9200,9.5247,4.8201,7.8081,4.1075,2.6956
7.2391,5.5148,2.9262,3.8015,4.7660,4.7913,7.0263,5.5914,4.5088,2.6644,5.1708,5.3922,2.8070,3.2793,6.2355,6.0617,2.6667,7.2956,2.8871,5.3642,6.6849,5.3921,4.7294,4.6127,0.0000,6.9513,6.9963,5.7431,4.7049,2.9171,5.2546
2.9941,7.6318,7.8126,5.8992,6.9472,2.8772,2.7571,4.2758,5.2110,7.6461,2.7597,9.1160,4.8696,4.4545,5.1988,7.5711,6.8504,4.2691,7.5857,2.6393,6.5259,4.8619,6.5361,7.9200,6.9513,0.0000,9.1125,4.4513,8.8145,4.8939,6.8395
7.4180,2.7456,4.6864,3.8516,5.3921,6.4156,7.6165,7.1749,9.1264,5.2912,8.8287,4.1719,6.5606,8.7753,4.8668,2.7577,9.0816,5.6901,7.0211,8.9072,2.9190,4.9490,5.2956,9.5247,6.9963,9.1125,0.0000,7.5797,2.7482,6.4514,8.0302
5.8523,5.1150,5.4224,4.9611,3.2734,3.6500,2.7391,6.6267,2.9761,7.0096,5.4452,8.7725,5.3482,4.6687,6.2430,7.3777,5.2518,4.7565,4.5660,2.7997,5.6409,2.8105,7.3262,4.8201,5.7431,4.4513,7.5797,0.0000,7.1676,2.8724,2.7190
7.6310,2.9654,2.6001,2.9212,4.7050,5.9378,7.8184,6.5470,7.8062,2.9674,7.8249,2.7497,5.0545,7.1729,5.2004,3.1397,7.0366,6.5515,5.1569,8.0722,4.2802,5.0539,4.2788,7.8081,4.7049,8.8145,2.7482,7.1676,0.0000,5.1559,7.0354
5.5996,4.2863,3.2330,3.0441,2.9157,2.8228,4.2915,5.1104,2.7941,4.2945,4.2909,6.4529,2.9281,2.8747,5.1737,5.7882,3.3010,5.1204,2.7695,3.1518,5.1556,2.9321,5.1556,4.1075,2.9171,4.8939,6.4514,2.8724,5.1559,0.0000,3.2927
8.1761,5.3168,4.7116,5.7392,2.6684,5.5754,5.4352,8.2905,2.8038,7.0258,7.0646,9.0793,6.2031,5.2395,7.9659,8.3392,4.3079,7.0169,2.9165,4.5623,7.0036,4.4129,8.3129,2.6956,5.2546,6.8395,8.0302,2.7190,7.0354,3.2927,0.0000

Edit 1

Thank you, @yatu and @mathfux, but the KDTree does not produce what I want. Following up on @mathfux's answer: if 3's nearest list is 0,1,2 and 0's nearest list is 1,4,2, then the algorithm should give more importance to distances and break the connection between 3 and 0; 3 should find other points nearest to it instead of joining with 0. If 3 cannot find other points nearest to it, then 0 has to exclude either 1 or 2 based on the distance and include 3, because 3 could not find 3 nearest points other than 0,1,2. Also, one can see 0 is connected to 25, but 25 is not connected to 0, which is wrong: if 0 is connected to 25, then 25 should also be connected to 0.

[ 0, 17,  7, 25]
[ 5, 21, 12,  6]
[ 6, 25, 27, 17]
[10, 25, 13,  7]
[12,  5,  3, 24]
[21,  5,  3,  4]
[25,  6, 10, 19]
[29,  4, 12, 24]
A KDTree would be more appropriate for what you're trying to do. Once you've built the tree, you can search for the 3 nearest points to all points in the tree:

from sklearn.neighbors import KDTree

tree = KDTree(X, leaf_size=2)
dist, ind = tree.query(X, k=3)

To check the result, we can verify that the three smallest values in the first row match the three smallest distances returned by the query when indexing on the first row ind[0]:

np.sort(X[0])[:3]
# array([0.    , 2.7345, 2.7361])

X[0, ind[0]]
# array([0.    , 2.7361, 2.7345])
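If the end goal is the adjacency matrix ad1 from the question, a minimal sketch of building it from the query result; the k=4 here is my adjustment, since the nearest hit for each point is the point itself at distance 0:

import numpy as np

dist, ind = tree.query(X, k=4)   # k=4: the closest match for each point is the point itself
ad1 = np.zeros((len(X), len(X)))
for i, neighbors in enumerate(ind):
    ad1[i, neighbors[1:]] = 1    # keep the 3 true neighbors, drop self

This guarantees np.sum(ad1, axis=1) is all 3, although, as the question's edit notes, it does not by itself make the relation symmetric.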
How to change the field of username to user_name in to django custom user model? I have created a custom user model; now I want to use user_name as the username field instead of username, as shown in the code snippet below.

class CustomUser(AbstractBaseUser):
    username_validator = UnicodeUsernameValidator()

    user_name = models.CharField(
        _('username'),
        max_length=100,
        unique=True,
        help_text=_('Required. 100 characters or fewer. Letters, digits and @/./+/-/_ only.'),
        validators=[username_validator],
        error_messages={
            'unique': _("A user with that username already exists."),
        },
    )

    USERNAME_FIELD = 'user_name'

I'm unable to do that; I'm getting the error below:

SystemCheckError: System check identified some issues:

ERRORS:
<class 'accounts.admin.CustomUserAdmin'>: (admin.E033) The value of 'ordering[0]' refers to 'username', which is not an attribute of 'accounts.CustomUser'.
<class 'accounts.admin.CustomUserAdmin'>: (admin.E108) The value of 'list_display[0]' refers to 'username', which is not a callable, an attribute of 'CustomUserAdmin', or an attribute or method on 'accounts.CustomUser'

The reason for this is that all the project's database tables follow this naming convention. It would be even better if I could define the column name in the database just as we do for table names in the Meta class, like below, where I name my CustomUser model's table "user" in the db:

class Meta:
    db_table = "user"

Is there any way to name a table column like this?

class Meta:
    db_table_user_name = "username"

If that were possible, we wouldn't need to change username to user_name in the model at all; we could directly declare that the username field maps to a user_name column in the database, if and only if that is possible with Django models.
In admin.py, where you are registering your User model, you registered it with a ModelAdmin, and in that ModelAdmin you have named the field incorrectly. Change it to user_name there too.
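As a minimal sketch of that change, assuming your admin subclasses Django's UserAdmin (the columns in list_display are placeholders; depending on your class you may need to update fieldsets and search_fields as well):

from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from .models import CustomUser

class CustomUserAdmin(UserAdmin):
    ordering = ('user_name',)                  # was ('username',), which no longer exists
    list_display = ('user_name', 'is_staff')   # placeholder columns; use your own

admin.site.register(CustomUser, CustomUserAdmin)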
Video Intelligence API - Label Segment time I am following this LABEL DETECTION TUTORIAL. The code below does the following (after getting the response back): Our response will contain result within an AnnotateVideoResponse, which consists of a list of annotationResults, one for each video sent in the request. Because we sent only one video in the request, we take the first segmentLabelAnnotations of the results. We then loop through all the labels in segmentLabelAnnotations. For the purpose of this tutorial, we only display video-level annotations. To identify video-level annotations, we pull segment_label_annotations data from the results. Each segment label annotation includes a description (segment_label.description), a list of entity categories (category_entity.description) and where they occur in segments by start and end time offsets from the beginning of the video.

segment_labels = result.annotation_results[0].segment_label_annotations
for i, segment_label in enumerate(segment_labels):
    print('Video label description: {}'.format(
        segment_label.entity.description))
    for category_entity in segment_label.category_entities:
        print('\tLabel category description: {}'.format(
            category_entity.description))

    for i, segment in enumerate(segment_label.segments):
        start_time = (segment.segment.start_time_offset.seconds +
                      segment.segment.start_time_offset.nanos / 1e9)
        end_time = (segment.segment.end_time_offset.seconds +
                    segment.segment.end_time_offset.nanos / 1e9)
        positions = '{}s to {}s'.format(start_time, end_time)
        confidence = segment.confidence
        print('\tSegment {}: {}'.format(i, positions))
        print('\tConfidence: {}'.format(confidence))
    print('\n')

So, it says "Each segment label annotation includes a description (segment_label.description), a list of entity categories (category_entity.description) and where they occur in segments by start and end time offsets from the beginning of the video." But in the output, all the labels (urban area, traffic, vehicle, ...) have the same start and end time offsets, which are basically the start and the end of the video.

$ python label_det.py gs://cloud-ml-sandbox/video/chicago.mp4
Operation us-west1.4757250774497581229 started: 2017-01-30T01:46:30.158989Z
Operation processing ...
The video has been successfully processed.

Video label description: urban area
        Label category description: city
        Segment 0: 0.0s to 38.752016s
        Confidence: 0.946980476379

Video label description: traffic
        Segment 0: 0.0s to 38.752016s
        Confidence: 0.94105899334

Video label description: vehicle
        Segment 0: 0.0s to 38.752016s
        Confidence: 0.919958174229
...

Why is this happening? Why is the API returning these offsets for all the labels and not the start and end time offsets of the segment where that particular label (entity) appears? (I feel like it has something to do with the video-level annotation, but I am not sure.) How can I get the start and end time offsets of the segment where they actually appear?
I see that the part of the tutorial that you are following uses the simplest example available, while the list of samples provides a more complete example where more features of the Video Intelligence API are used. In order to achieve the objective you want (more detailed information about the time instants when each annotation is identified), there are two possibilities that you can explore:

Option 1

The key point here is the fact that the video-level annotations only work over segments. As explained in the documentation page I linked, if segments in a video are not specified, the API will treat the video as a single segment. Therefore, if you want the API to return more "specific" results about when each annotation is identified, you should split the video into segments yourself (segments can overlap and need not cover the complete video), passing those arguments as part of the videoContext field in the annotate request. If you do this through the API request, you may make a request such as the following one, defining as many segments as you want by specifying the start and end TimeOffsets:

{
 "inputUri": "gs://cloud-ml-sandbox/video/chicago.mp4",
 "features": [
  "LABEL_DETECTION"
 ],
 "videoContext": {
  "segments": [
   {
    "startTimeOffset": "TO_DO",
    "endTimeOffset": "TO_DO"
   },
   {
    "startTimeOffset": "TO_DO",
    "endTimeOffset": "TO_DO"
   }
  ]
 }
}

If, instead, you are willing to use the Python Client Library, you can use the video_context parameter as in the code below:

video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.LABEL_DETECTION]
mode = videointelligence.enums.LabelDetectionMode.SHOT_AND_FRAME_MODE
config = videointelligence.types.LabelDetectionConfig(label_detection_mode=mode)
context = videointelligence.types.VideoContext(label_detection_config=config)

operation = video_client.annotate_video("gs://cloud-ml-sandbox/video/chicago.mp4", features=features, video_context=context)

Option 2

The second option that I propose for your use case is using a different Label Detection Mode. The list of available Label Detection Modes is available in this documentation link. By default, SHOT_MODE is used, and it will only provide video-level and shot-level annotations, which require that you work with segments as explained in Option 1. If, instead, you use FRAME_MODE, frame-level annotations will be processed. This is a costly option, as it analyzes all the frames in the video and annotates each of them, but it may be suitable depending on your specific use case. This mode (well, actually the SHOT_AND_FRAME_MODE one, which is a combination of the two previous ones) is used in the more complete example that I mentioned at the beginning of my answer. The analyze_labels() function in that code provides a really complete example of how to perform video/shot/frame-level annotations, and specifically for frame-level annotation there is an explanation of how to obtain information about the frames where the annotations happen. Note that this option is really costly, as I explained earlier; for example, I have run it for the "chicago.mp4" video provided as a sample in the tutorial, and it took around 30 minutes to complete.
However, the level of detail achieved is really high (again, each frame is analyzed, and then annotations are grouped by element), and this is the kind of response that you can expect to obtain:"frameLabelAnnotations": [ { "entity": { "entityId": "/m/088l6h", "description": "family car", "languageCode": "en-US" }, "categoryEntities": [ { "entityId": "/m/0k4j", "description": "car", "languageCode": "en-US" } ], "frames": [ { "timeOffset": "0.570808s", "confidence": 0.76606256 }, { "timeOffset": "1.381775s", "confidence": 0.74966145 }, { "timeOffset": "2.468091s", "confidence": 0.85502887 }, { "timeOffset": "3.426006s", "confidence": 0.78749716 }, ] },TL;DR:The results returned by the type of call you are making following the simple example in the tutorial is expected. If there is no specific configuration, a video will be considered as a single segment, reason why the response you are getting identifies annotations in the whole video.If you want to get more details about when are the elements identified, you will need to follow one of the two following approaches: (1) define segments in your video (which requires that you manually specify the segments in which you want to split the video), or (2) use FRAME_MODE (which is way more costly and precise).
Is there a way to extend a PyTables EArray in the second dimension? I have a 2D array that can grow to larger sizes than I'm able to fit on memory, so I'm trying to store it in a h5 file using Pytables. The number of rows is known beforehand but the length of each row is not known and is variable between rows. After some research, I thought something along these lines would work, where I can set the extendable dimension as the second dimension.filename = os.path.join(tempfile.mkdtemp(), 'example.h5')h5_file = open_file(filename, mode="w", title="Example Extendable Array")h5_group = h5_file.create_group("/", "example_on_dim_2")e_array = h5_file.create_earray(h5_group, "example", Int32Atom(shape=()), (100, 0)) # Assume num of rows is 100# Add some item to index 2print(e_array[2]) # should print an empty arraye_array[2] = np.append(e_array[2], 5) # add the value 5 to row 2print(e_array[2]) # should print [5], currently printing empty arrayI'm not sure if it's possible to add elements in this way (I might have misunderstood the way earrays work), but any help would be greatly appreciated!
Here is an example showing how to create a VLArray (Variable Length). It is similar to the EArray example above, and follows the example from the PyTables docs (link in comment above). However, although a VLArray supports variable-length rows, it does not have a mechanism to add items to an existing row (AFAIK).

import tables as tb
import numpy as np

filename = 'example_vlarray.h5'
with tb.open_file(filename, mode="w", title="Example Variable Length Array") as h5_file:
    h5_group = h5_file.create_group("/", "vl_example")
    vlarray = h5_file.create_vlarray(h5_group, "example", tb.IntAtom(),
                                     "ragged array of ints")

    # Append some (variable length) rows:
    vlarray.append(np.array([0]))
    vlarray.append(np.array([1, 2]))
    vlarray.append([3, 4, 5])
    vlarray.append([6, 7, 8, 9])

    # Now, read it through an iterator:
    print('-->', vlarray.title)
    for x in vlarray:
        print('%s[%d]--> %s' % (vlarray.name, vlarray.nrow, x))
Legend with vertical line in matplotlib I need to show a vertical line in a matplotlib legend for a specific reason. I am trying to make matplotlib understand that I want a vertical line with the lines.Line2D(x,y) but this is clearly not working.import matplotlib.pyplot as pltfrom matplotlib import linesfig, ax = plt.subplots()ax.plot([0,0],[0,3])lgd = []lgd.append(lines.Line2D([0,0],[0,1], color = 'blue', label = 'Vertical line'))plt.legend(handles = lgd)I need the line to appear vertical, not the legend. Can anyone help?
You can use the vertical line marker when making your line2D object. A list of valid markers can be found here.import matplotlib.pyplot as pltfrom matplotlib import linesfig, ax = plt.subplots()ax.plot([0,0],[0,3])vertical_line = lines.Line2D([], [], color='#1f77b4', marker='|', linestyle='None', markersize=10, markeredgewidth=1.5, label='Vertical line')plt.legend(handles = [vertical_line])plt.show()
count the occurrences of alphabet of string in python Given a string containing both upper and lower case letters, we need to count the number of occurrences of each letter (case insensitive) and display them. Below is my program, but it does not lead to the desired output.

The output should be:

2A 3B 2C 1G

My output is:

A 2
B 3
A 2
B 3
C 2
B 3
G 1
C 2

String="ABaBCbGc"
String1=String.upper()
for i in String1:
    print(i,String1.count(i))
Use Counter:

from collections import Counter

String = "ABaBCbGc"
counts = Counter(String.lower())
print(counts)

Output

Counter({'b': 3, 'c': 2, 'a': 2, 'g': 1})

If you prefer upper case, just change str.lower to str.upper. Or use a dictionary to keep track of the counts:

string = "ABaBCbGc"
counts = {}
for c in string.upper():
    counts[c] = counts.get(c, 0) + 1
print(counts)

Output

{'C': 2, 'B': 3, 'A': 2, 'G': 1}
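To print the counts exactly in the requested "2A 3B 2C 1G" form, one way (this relies on Counter preserving first-seen order, which holds on Python 3.7+):

print(' '.join('{}{}'.format(n, letter.upper()) for letter, n in counts.items()))
# 2A 3B 2C 1G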
How do I insert something into multiple strings? I have 12 different strings that I want in a tuple format because I will use the strings later in a graph.How do I add the same string into the array of strings?I have these months:January, February etc. and I want to insert into each string "January LSDS", "February LSDS", etc.I tried this but I get an error:insert = 'LSDS'month_names = ('January {}', 'February {}','March {}','April {}', 'May {}', 'June {}', 'July {}', 'August {}', 'September {}', 'October {}', 'November {}', 'December {}').format(insert)print(month_names)---------------------------------------------------------------------------AttributeError Traceback (most recent call last)<ipython-input-167-c79c038f3ebc> in <module> 3 insert = 'LSDS' 4 ----> 5 month_names = ('January {}', 'February {}','March {}','April {}', 'May {}', 'June {}', 'July {}', 'August {}', 'September {}', 'October {}', 'November {}', 'December {}').format(insert) 6 7 print(month_names)AttributeError: 'tuple' object has no attribute 'format'
A tuple has no .format method, which is exactly what the AttributeError is telling you; format each string in the tuple individually instead, for example with a list comprehension:

inserted = [month.format(insert) for month in month_names]
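A short demonstration with a truncated tuple (shortened here just for the example):

insert = 'LSDS'
month_names = ('January {}', 'February {}', 'March {}')
inserted = [month.format(insert) for month in month_names]
print(inserted)  # ['January LSDS', 'February LSDS', 'March LSDS']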
Pandas: Reshaping Long Data to Wide with duplicated columns I need to pivot long pandas dataframe to wide. The issue is that for some id there are multiple values for the same parameter. Some parameters present only in a few ids.df = pd.DataFrame({'indx':[11,11,11,11,12,12,12,13,13,13,13],'param':['a','b','b','c','a','b','d','a','b','c','c'],'value':[100,54,65,65,789,24,98,24,27,75,35]})indx param value11 a 10011 b 5411 b 6511 c 6512 a 78912 b 2412 d 9813 a 2413 b 2713 c 7513 c 35Want to receive something like this:indx a b c d11 100 `54,65` 65 None12 789 None 98 2413 24 27 `75,35` Noneorindx a b b1 c c1 d11 100 54 65 65 None None12 789 None None 98 None 2413 24 27 None 75 35 NoneSo, obviously direct df.pivot() not a solution.Thanks for any help.
Option 1:df.astype(str).groupby(['indx', 'param'])['value'].agg(','.join).unstack()Output:param a b c dindx 11 100 54,65 65 NaN12 789 24 NaN 9813 24 27 75,35 NaNOption 2df_out = df.set_index(['indx', 'param', df.groupby(['indx','param']).cumcount()])['value'].unstack([1,2])df_out.columns = [f'{i}_{j}' if j != 0 else f'{i}' for i, j in df_out.columns]df_out.reset_index()Output: indx a b b_1 c d c_10 11 100.0 54.0 65.0 65.0 NaN NaN1 12 789.0 24.0 NaN NaN 98.0 NaN2 13 24.0 27.0 NaN 75.0 NaN 35.0
Using Pygame and Pymunk Circle will not spawn in space So, I'm trying to make a function create_particle and then have a function draw that particle with draw_circle. However, whenever I open the window, I get my grey window but no particle is shown. I'm extremely new to both pygame and pymunk, so any help is appreciated.

import sys, pygame, random, pymunk

BG = (94, 93, 93)
S_width = 800
S_height = 800

pygame.init()
Window = pygame.display.set_mode((S_width,S_height))
clock = pygame.time.Clock()
pygame.display.set_caption("H20 Particle simulation")
Window.fill(BG)

space = pymunk.Space()
space.gravity = (0,100)

def create_particle(space):
    body = pymunk.Body(1, 100, body_type = pymunk.Body.DYNAMIC)
    body.position = (400, 400)
    shape = pymunk.Circle(body,80)
    space.add(body, shape)
    return shape

def draw_circle(circle):
    for circle in circles:
        pos_x = int(circle.body.position.x)
        pos_y = int(circle.body.position.y)
        pygame.draw.circle(screen,(0,0,0),circle.body.position20)

circles = []
circles.append(create_particle(space))

while True:
    Window.fill((217,217,217))
    clock.tick(120)
    pygame.display.update()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
A few changes are needed:draw_circle() does not require a parameterWhen you draw the circle, you need to specify the coordinates and radiusIn the main loop, call draw_circle() and space.step(0.02)Here is the updated code:def draw_circle(): for circle in circles: pos_x = int(circle.body.position.x) pos_y = int(circle.body.position.y) pygame.draw.circle(Window,(0,200,0), (pos_x, pos_y), 20)circles = []circles.append(create_particle(space))while True: Window.fill((217,217,217)) draw_circle() space.step(0.02) clock.tick(120) for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() pygame.display.update()
Add a new column to a dataframe based on an existing column value using pandas I am working with a dataframe created by importing a .csv file I created. I want to (1) create a new column in the dataframe and (2) use values from an existing column to assign a value to the new column. This is an example of what I'm working with: date id height genderdd/mm/yyyy 1A 6 Mdd/mm/yyyy 2A 4 Fdd/mm/yyyy 1B 1 Mdd/mm/yyyy 2B 7 FSo I want to make a new column "side" and make that side have the value "A" or "B" based on the existing "id" column value: date id height gender sidedd/mm/yyyy 1A 6 M Add/mm/yyyy 2A 4 F Add/mm/yyyy 1B 1 M Bdd/mm/yyyy 2B 7 F BI have gotten to a point where I have been able to make the new column and assign a new value but when I attempt to use the .groupby method on the "side" column it doesn't work as expected. df = pd.read_csv("clean.csv")df = df.drop(["Unnamed: 0"], axis=1)df["side"] = ""df.columns = ["date", "id", "height", "gender", "side"]for i, row in df.iterrows(): if "A" in row["id"]: df.at[i, row["side"]] = "A" else: df.at[i, row["side"]] = "B"df["side"]calling df["side"] results in blank output, but calling df by itself produces this:So there is a value in the dataframe, but using the .groupby method treats the values in the side column as not existing. This is a real headscratcher. I'm new to Python and would appreciate if someone could explain to me what I'm doing wrong.
Just use str[]. I could not see the image. If your id has more than 2 chars, you need this to get the last char:

df['side'] = df.id.str[-1]

Out[582]:
         date  id  height gender side
0  dd/mm/yyyy  1A       6      M    A
1  dd/mm/yyyy  2A       4      F    A
2  dd/mm/yyyy  1B       1      M    B
3  dd/mm/yyyy  2B       7      F    B
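Since the question mentions that a later groupby on the new column misbehaved, here is a quick hypothetical sanity check on the result; the mean of height is my own example aggregation, not from the question:

# Hypothetical follow-up check on the new column.
print(df.groupby('side')['height'].mean())
# side
# A    5.0
# B    4.0
# Name: height, dtype: float64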
How can I load my homemade 32bit cpp DLL into my canopy 3.5 32bit python? I am trying to load my DLL (32bit) file containing C++ functions into python. It works on python 3.7 (32bit) but it gives an error when using canopy 3.5 (32bit). The code I use to load my DLL:

import os
import ctypes

os.chdir(r"G:\DLLdirectory")
mydll = ctypes.cdll.LoadLibrary('MyDLL.dll')

If I run it on python 3.7 it works; if I run it with canopy 3.5 I get:

Traceback (most recent call last):
  File "DIR/PythonFile.py", line 26, in <module>
    mydll = ctypes.cdll.LoadLibrary('MyDLL.dll')
  File "DIR\Canopy32\edm\envs\User\lib\ctypes\__init__.py", line 425, in LoadLibrary
    return self._dlltype(name)
  File "DIR\Canopy32\edm\envs\User\lib\ctypes\__init__.py", line 347, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found

If you change os.chdir() to sys.path.append(), in Canopy the module is still not found, and in python 3.7 I get this error:

Traceback (most recent call last):
  File "DIR/PythonFile.py", line 26, in <module>
    mydll = ctypes.cdll.LoadLibrary('MyDLL.dll')
  File "DIR\Python\Python37-32\lib\ctypes\__init__.py", line 434, in LoadLibrary
    return self._dlltype(name)
  File "DIR\Python37-32\lib\ctypes\__init__.py", line 356, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 is not a valid Win32 application
It turned out that the DLL depended on another DLL. Under python 3.7 that second DLL was found automatically, but in Canopy it needed to be loaded separately.
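A minimal sketch of that workaround, assuming the dependency sits in the same directory; the name Dependency.dll is a placeholder, not the real file:

import os
import ctypes

os.chdir(r"G:\DLLdirectory")
# Load the dependency first (placeholder name) so that Windows can
# resolve the main DLL's imports against the already-loaded module.
dependency = ctypes.cdll.LoadLibrary('Dependency.dll')
mydll = ctypes.cdll.LoadLibrary('MyDLL.dll')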
Django - pass a list of results when querying using filter In my FollowingPageView function, I'm trying to filter posts based on the logged-in user's list of users he/she is following.

You'll see the Profile model has a "following" field that captures the names of users the Profile owner is following. What I'm trying to do in my view is capture these names of the users in "following", then pass them to Post.objects.filter(created_by=user_list), but I will only get the last user in that list in this case. How can I iterate over the "user_list" Queryset and pass that to Post.objects.filter in order to return the posts from each user in that list? In this case, I should have two users in the Queryset [<User: winter>, <User: daisy>].

models.py

class Profile(models.Model):
    user = models.OneToOneField(User, null=True, on_delete=models.CASCADE)
    bio = models.TextField(null=True, blank=True)
    website = models.CharField(max_length=225, null=True, blank=True)
    follower = models.ManyToManyField(User, blank=True, related_name="followed_user")    # user following this profile
    following = models.ManyToManyField(User, blank=True, related_name="following_user")  # profile user that follows this profile

    def __str__(self):
        return f"{self.user}'s' profile id is {self.id}"

    def following_users(self):
        for username in self.following:
            return username

    def get_absolute_url(self):
        return reverse("network:profile-detail", args=[str(self.id)])


class Post(models.Model):
    created_by = models.ForeignKey(User, on_delete=models.CASCADE)
    subject = models.CharField(max_length=50)
    body = models.TextField(max_length=1000)
    timestamp = models.DateTimeField(auto_now_add=True)
    likes = models.ManyToManyField(User, blank=True, related_name="posts")

    def __str__(self):
        return f"{self.created_by} posted {self.body}"

views.py

# Following Users
def FollowingPageView(request, pk):
    profile = get_object_or_404(Profile, id=pk)
    user_list = []
    for user in profile.following.all():
        user_list.append(user)
    posts = Post.objects.filter(created_by=user_list[0])
    print(user_list)
    paginator = Paginator(posts, 10)
    page_number = request.GET.get("page")
    page_obj = paginator.get_page(page_number)
    try:
        if request.method == "GET":
            return render(request, "network/follow-posts.html", {
                "profile": profile,
                "page_obj": page_obj
            })
    except ValueError:
        return render(request, "network:index.html", {"error": ValueError})
One approach is to use an __in query. Here, because you're not using user_list for anything else, you'll probably get the best results from using an inner query:

posts = Post.objects.filter(created_by__in=profile.following.all())

But note the performance advice in the linked docs - test it on your actual setup and see. Possibly a distinct() call is required; I can't remember exactly what triggers the possibility of duplicate records with many-to-many fields.

There are other ways to express it using field references, something like:

posts = Post.objects.filter(created_by__profile__followed_user=profile.user).distinct()

Backing databases tend to do that with a join rather than a subquery, so it can have different performance characteristics.
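Folding the first option back into the view from the question, a minimal sketch could look like this; it reuses the question's imports and template names, and the ordering by timestamp is my own addition rather than part of the answer:

def FollowingPageView(request, pk):
    profile = get_object_or_404(Profile, id=pk)
    # One query for posts by every followed user, newest first.
    posts = Post.objects.filter(
        created_by__in=profile.following.all()
    ).order_by("-timestamp")
    paginator = Paginator(posts, 10)
    page_obj = paginator.get_page(request.GET.get("page"))
    return render(request, "network/follow-posts.html", {
        "profile": profile,
        "page_obj": page_obj,
    })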