Dataset columns: qid (int64, 469 to 74.7M), question (string, 36 to 37.8k chars), date (string, 10 chars), metadata (sequence), response_j (string, 5 to 31.5k chars), response_k (string, 10 to 31.6k chars).
39,303,710
I am new to Python and machine learning and I am trying to work out how to fix this issue with datetime. next\_unix is 13148730, because that is how many seconds are in five months, which is the time between my dates. I have searched and I can't seem to find anything that works. ``` last_date = df.iloc[1,0] last_unix = pd.to_datetime('2015-01-31 00:00:00') +pd.Timedelta(13148730) five_months = 13148730 next_unix = last_unix + five_months for i in forecast_set: next_date = Timestamp('2015-06-30 00:00:00') next_unix += 13148730 df.loc[next_date] = [np.nan for _ in range(len(df.columns)-1)]+[i] ``` Error: ``` Traceback (most recent call last): File "<ipython-input-23-18adaa6b781f>", line 1, in <module> runfile('C:/Users/HP/Documents/machine learning.py', wdir='C:/Users/HP/Documents') File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile execfile(filename, namespace) File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/HP/Documents/machine learning.py", line 74, in <module> next_unix = last_unix + five_months File "pandas\tslib.pyx", line 1025, in pandas.tslib._Timestamp.__add__ (pandas\tslib.c:20118) ValueError: Cannot add integral value to Timestamp without offset. ```
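For reference, a minimal sketch of the usual fix in modern pandas: wrap the raw seconds in `pd.Timedelta` so each addition is Timestamp-plus-duration rather than Timestamp-plus-int (the forecast values and column names below are placeholders, not taken from the question):

```python
import numpy as np
import pandas as pd

five_months = pd.Timedelta(seconds=13148730)   # the five-month step as a duration
next_date = pd.to_datetime('2015-01-31 00:00:00') + five_months

forecast_set = [1.0, 2.0, 3.0]                 # placeholder forecasts
df = pd.DataFrame(columns=['open', 'close', 'forecast'])
for i in forecast_set:
    df.loc[next_date] = [np.nan] * (len(df.columns) - 1) + [i]
    next_date += five_months                   # Timestamp + Timedelta is well-defined
print(df)
```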
2016/09/03
[ "https://Stackoverflow.com/questions/39303710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2770803/" ]
If my understanding is correct, then you can get the desired result with the following: ``` SELECT i.*, CASE WHEN prop1.PROPERTY_ID = 1 THEN prop1.VALUE ELSE '' END AS PROPERTY_ONE, CASE WHEN prop1.PROPERTY_ID = 2 THEN prop1.VALUE ELSE '' END AS PROPERTY_TWO FROM ITEM i LEFT JOIN ITEM_PROPERTY prop1 on i.ITEM_ID = prop1.ITEM_D AND prop1.PROPERTY_ID IN (1, 2) ```
``` Select i.*, prop1.VALUE as PROPERTY_ONE, prop2.VALUE as PROPERTY_TWO From ITEM i Left Join ITEM_PROPERTY prop1 on i.ITEM_ID = prop1.ITEM_D and prop1.PROPERTY_ID = 1 Left Join ITEM_PROPERTY prop2 on i.ITEM_ID = prop2.ITEM_D and prop2.PROPERTY_ID = 2 ```
39,303,710
I am new to Python and machine learning and I am trying to work out how to fix this issue with datetime. next\_unix is 13148730, because that is how many seconds are in five months, which is the time between my dates. I have searched and I can't seem to find anything that works. ``` last_date = df.iloc[1,0] last_unix = pd.to_datetime('2015-01-31 00:00:00') +pd.Timedelta(13148730) five_months = 13148730 next_unix = last_unix + five_months for i in forecast_set: next_date = Timestamp('2015-06-30 00:00:00') next_unix += 13148730 df.loc[next_date] = [np.nan for _ in range(len(df.columns)-1)]+[i] ``` Error: ``` Traceback (most recent call last): File "<ipython-input-23-18adaa6b781f>", line 1, in <module> runfile('C:/Users/HP/Documents/machine learning.py', wdir='C:/Users/HP/Documents') File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 714, in runfile execfile(filename, namespace) File "C:\Users\HP\Anaconda3\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 89, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/HP/Documents/machine learning.py", line 74, in <module> next_unix = last_unix + five_months File "pandas\tslib.pyx", line 1025, in pandas.tslib._Timestamp.__add__ (pandas\tslib.c:20118) ValueError: Cannot add integral value to Timestamp without offset. ```
2016/09/03
[ "https://Stackoverflow.com/questions/39303710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2770803/" ]
Old style: ``` Select i.*, max(decode(prop.PROPERTY_ID,1,prop.VALUE,NULL)) as PROPERTY_ONE, max(decode(prop.PROPERTY_ID,2,prop.VALUE,NULL)) as PROPERTY_TWO From ITEM i Left Join ITEM_PROPERTY prop on i.ITEM_ID = prop.ITEM_D and prop.PROPERTY_ID in(1,2) group by there_will_have_to_list_all_the_fields_from_ITEM ``` Or (a "light" version with a shorter GROUP BY list, though there may be a problem with optimization): ``` Select i.*,prop.PROPERTY_ONE,prop.PROPERTY_TWO From ITEM i Left Join ( select ITEM_ID, max(decode(PROPERTY_ID,1,VALUE,NULL)) as PROPERTY_ONE, max(decode(PROPERTY_ID,2,VALUE,NULL)) as PROPERTY_TWO from ITEM_PROPERTY where PROPERTY_ID in(1,2) group by ITEM_ID ) prop on i.ITEM_ID = prop.ITEM_D ``` New style (Oracle 11g+): ``` select * from ( Select i.*, prop.PROPERTY_ID, prop.VALUE From ITEM i Left Join ITEM_PROPERTY prop on i.ITEM_ID = prop.ITEM_D and prop.PROPERTY_ID in(1,2) ) pivot( max(VALUE) for PROPERTY_ID in(1 as "PROPERTY_ONE",2 as "PROPERTY_TWO") ) ```
``` Select i.*, prop1.VALUE as PROPERTY_ONE, prop2.VALUE as PROPERTY_TWO From ITEM i Left Join ITEM_PROPERTY prop1 on i.ITEM_ID = prop1.ITEM_D and prop1.PROPERTY_ID = 1 Left Join ITEM_PROPERTY prop2 on i.ITEM_ID = prop2.ITEM_D and prop2.PROPERTY_ID = 2 ```
50,693,966
I have a directory containing many images (\*.jpg). Each image has a name. In the same directory I have a file containing Python code (below). ``` import numpy as np import pandas as pd import glob fd = open('melanoma.csv', 'a') for img in glob.glob('*.jpg'): dataFrame = pd.read_csv('allcsv.csv') name = dataFrame['name'] for i in name: #print(i) if(i+'.jpg' == img): print(i) ``` In the same directory I have another file (allcsv.csv) containing a large amount of CSV data for all the images in the directory, and for many other images as well. The above code compares the names of the images with the name column in the allcsv.csv file and prints the names. I need to modify this code to write the entire data row of each matched image into a file named 'melanoma.csv'. eg: **allcsv.csv** ``` name,age,sex ISIC_001,85,female ISIC_002,40,female ISIC_003,30,male ISIC_004,70,female ``` *if the folder has the images only for ISIC\_002 and ISIC\_003* **melanoma.csv** ``` name,age,sex ISIC_002,40,female ISIC_003,30,male ```
2018/06/05
[ "https://Stackoverflow.com/questions/50693966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6612871/" ]
First, your code reads the .csv file once for every image. Second, you have a nested `for`-loop. Neither is ideal. I recommend the following approach: **Step 1 - Create list of image file names** ``` import glob image_names = [f.replace('.jpg', '') for f in glob.glob("*.jpg")] ``` **Step 2 - Create dataframe with patient names** ``` import pandas as pd df_patients = pd.read_csv('allcsv.csv') ``` **Step 3 - Filter sick patients and dump to csv** ``` df_sick = df_patients[df_patients['name'].isin(image_names)] df_sick.to_csv('melanoma.csv', index = False) ``` **Step 4 - Print names of sick patients** ``` for _, row in df_sick.iterrows(): print(row['name'], 'has cancer') ```
This is just a solution for storing the matched values to a new file melanoma.csv. Your code can be further improved and optimized. ``` import pandas as pd import glob # read the csv only once instead of once per image dataFrame = pd.read_csv('allcsv.csv') # collect one dict per matched image rows = [] for img in glob.glob('*.jpg'): for _, r in dataFrame.iterrows(): if r['name'] + '.jpg' == img: # record all the required values every time a match is found rows.append({'name': r['name'], 'age': r['age'], 'sex': r['sex']}) # convert the list of dicts to a dataframe df = pd.DataFrame(rows, columns=['name', 'age', 'sex']) # save the dataframe to csv df.to_csv('--file path--/melanoma.csv', index=False) ```
39,771,366
I am a beginner in Python. However, I have some problems when I try to use the readline() method. ``` f=raw_input("filename> ") a=open(f) print a.read() print a.readline() print a.readline() print a.readline() ``` and my txt file is ``` aaaaaaaaa bbbbbbbbb ccccccccc ``` However, when I tried to run it on a Mac terminal, I got this: ``` aaaaaaaaa bbbbbbbbb ccccccccc ``` It seems that readline() is not working at all. But when I disable print a.read(), the readline() gets back to work. This confuses me a lot. Is there any solution where I can use read() and readline() at the same time?
2016/09/29
[ "https://Stackoverflow.com/questions/39771366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6899656/" ]
When you open a file you get a pointer to some position in the file (by default: the beginning). Now whenever you run `.read()` or `.readline()` this pointer moves: 1. `.read()` reads until the end of the file and moves the pointer to the end (thus further calls to any reading method give nothing) 2. `.readline()` reads until a newline is seen and sets the pointer just after it 3. `.read(X)` reads X bytes and sets the pointer at `CURRENT_LOCATION + X` (or the end) If you wish, you can manually move that pointer by issuing an `a.seek(X)` call, where `X` is a position in the file (seen as an array of bytes). For example, this should give you the desired output: ``` print a.read() a.seek(0) print a.readline() print a.readline() print a.readline() ```
You need to understand the concept of file pointers. When you read the file, it is fully consumed, and the pointer is at the end of the file. > > It seems that the readline() is not working at all. > > > It is working as expected. There are no lines left to read. > > when I disable print a.read(), the readline() gets back to work. > > > Because the pointer is then still at the beginning of the file, and the lines can be read. > > Is there any solution that I can use read() and readline() at the same time? > > > Sure. Flip the ordering: read a few lines first, then the remainder of the file; or seek the file pointer back to a position that you would like. Also, don't forget to close the file when you are finished reading it.
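A minimal sketch of the reordering suggested above, in Python 3 syntax (the filename is a placeholder):

```python
with open('myfile.txt') as a:
    print(a.readline(), end='')  # first line
    print(a.readline(), end='')  # second line
    print(a.read(), end='')      # the remainder of the file
    a.seek(0)                    # or rewind to read everything again
    print(a.read(), end='')      # the whole file
# the with-block also closes the file for you, as recommended above
```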
39,771,366
I am a beginner in Python. However, I have some problems when I try to use the readline() method. ``` f=raw_input("filename> ") a=open(f) print a.read() print a.readline() print a.readline() print a.readline() ``` and my txt file is ``` aaaaaaaaa bbbbbbbbb ccccccccc ``` However, when I tried to run it on a Mac terminal, I got this: ``` aaaaaaaaa bbbbbbbbb ccccccccc ``` It seems that readline() is not working at all. But when I disable print a.read(), the readline() gets back to work. This confuses me a lot. Is there any solution where I can use read() and readline() at the same time?
2016/09/29
[ "https://Stackoverflow.com/questions/39771366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6899656/" ]
When you open a file you get a pointer to some position in the file (by default: the beginning). Now whenever you run `.read()` or `.readline()` this pointer moves: 1. `.read()` reads until the end of the file and moves the pointer to the end (thus further calls to any reading method give nothing) 2. `.readline()` reads until a newline is seen and sets the pointer just after it 3. `.read(X)` reads X bytes and sets the pointer at `CURRENT_LOCATION + X` (or the end) If you wish, you can manually move that pointer by issuing an `a.seek(X)` call, where `X` is a position in the file (seen as an array of bytes). For example, this should give you the desired output: ``` print a.read() a.seek(0) print a.readline() print a.readline() print a.readline() ```
The file object `a` remembers its position in the file. * `a.read()` reads from the current position to the end of the file (moving the position to the end of the file) * `a.readline()` reads from the current position to the end of the line (moving the position to the next line) * `a.seek(n)` moves to position n in the file (without returning anything) * `a.tell()` returns the current position in the file. So try putting the calls to readline first. You'll notice that now the read call won't return the whole file, just the remaining lines (maybe none), depending on how many times you called readline. And play around with seek and tell to confirm what's going on. Details [here](https://docs.python.org/2/library/stdtypes.html#bltin-file-objects).
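A tiny experiment along those lines, in Python 3 syntax with a placeholder filename:

```python
with open('myfile.txt') as a:
    print(a.tell())               # 0: the pointer starts at the beginning
    first = a.readline()
    print(a.tell())               # now just past the first line
    rest = a.read()               # everything after the first line
    a.seek(0)                     # rewind
    print(a.readline() == first)  # True: we are back at the start
```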
39,771,366
I am a beginner in Python. However, I have some problems when I try to use the readline() method. ``` f=raw_input("filename> ") a=open(f) print a.read() print a.readline() print a.readline() print a.readline() ``` and my txt file is ``` aaaaaaaaa bbbbbbbbb ccccccccc ``` However, when I tried to run it on a Mac terminal, I got this: ``` aaaaaaaaa bbbbbbbbb ccccccccc ``` It seems that readline() is not working at all. But when I disable print a.read(), the readline() gets back to work. This confuses me a lot. Is there any solution where I can use read() and readline() at the same time?
2016/09/29
[ "https://Stackoverflow.com/questions/39771366", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6899656/" ]
You need to understand the concept of file pointers. When you read the file, it is fully consumed, and the pointer is at the end of the file. > > It seems that the readline() is not working at all. > > > It is working as expected. There are no lines left to read. > > when I disable print a.read(), the readline() gets back to work. > > > Because the pointer is then still at the beginning of the file, and the lines can be read. > > Is there any solution that I can use read() and readline() at the same time? > > > Sure. Flip the ordering: read a few lines first, then the remainder of the file; or seek the file pointer back to a position that you would like. Also, don't forget to close the file when you are finished reading it.
The file object `a` remembers its position in the file. * `a.read()` reads from the current position to the end of the file (moving the position to the end of the file) * `a.readline()` reads from the current position to the end of the line (moving the position to the next line) * `a.seek(n)` moves to position n in the file (without returning anything) * `a.tell()` returns the current position in the file. So try putting the calls to readline first. You'll notice that now the read call won't return the whole file, just the remaining lines (maybe none), depending on how many times you called readline. And play around with seek and tell to confirm what's going on. Details [here](https://docs.python.org/2/library/stdtypes.html#bltin-file-objects).
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
You can save your XLS file to a [StringIO](http://docs.python.org/library/stringio.html) object, which is file-like. You can return the StringIO object's `getvalue()` in the response. Be sure to add headers to mark it as a downloadable spreadsheet.
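A minimal sketch of the approach this answer describes, using era-appropriate xlwt and Django APIs (the workbook contents and filenames are illustrative; on Python 3 you would use `io.BytesIO` and `content_type=` instead):

```python
import StringIO  # Python 2; this answer predates Python 3
import xlwt
from django.http import HttpResponse

def excel_view(request):
    wb = xlwt.Workbook()
    ws = wb.add_sheet('Data')
    ws.write(0, 0, 'hello')      # row, column, value
    buf = StringIO.StringIO()
    wb.save(buf)                 # xlwt saves to any file-like object
    response = HttpResponse(buf.getvalue(),
                            mimetype='application/vnd.ms-excel')
    response['Content-Disposition'] = 'attachment; filename=report.xls'
    return response
```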
If your data result doesn't need formulas or exact presentation styles, you can always use CSV. Any spreadsheet program can read it directly. I've even seen some webapps that generate CSV but name it .xls just to be sure that Excel opens it.
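And a minimal sketch of that CSV alternative, writing straight into the response with the standard library (the header and rows are illustrative):

```python
import csv
from django.http import HttpResponse

def csv_view(request):
    response = HttpResponse(mimetype='text/csv')  # content_type= in modern Django
    response['Content-Disposition'] = 'attachment; filename=report.csv'
    writer = csv.writer(response)                 # HttpResponse is file-like
    writer.writerow(['name', 'price'])
    writer.writerow(['Widget', '9.99'])
    return response
```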
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
You can save your XLS file to a [StringIO](http://docs.python.org/library/stringio.html) object, which is file-like. You can return the StringIO object's `getvalue()` in the response. Be sure to add headers to mark it as a downloadable spreadsheet.
Use <https://bitbucket.org/kmike/django-excel-response>
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
Neat package! I didn't know about this. According to the doc, the `save(filename_or_stream)` method takes either a filename to save to, or a file-like stream to write to. And a Django response object happens to be a file-like stream! So just do `xls.save(response)`. Look at the Django docs about [generating PDFs](http://docs.djangoproject.com/en/dev/howto/outputting-pdf/#complex-pdfs) with ReportLab to see a similar situation. **edit:** (adapted from ShawnMilo's comment): ``` def xls_to_response(xls, fname): response = HttpResponse(mimetype="application/ms-excel") response['Content-Disposition'] = 'attachment; filename=%s' % fname xls.save(response) return response ``` then, from your view function, just create the `xls` object and finish with ``` return xls_to_response(xls,'foo.xls') ```
Use <https://bitbucket.org/kmike/django-excel-response>
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
Neat package! I didn't know about this. According to the doc, the `save(filename_or_stream)` method takes either a filename to save to, or a file-like stream to write to. And a Django response object happens to be a file-like stream! So just do `xls.save(response)`. Look at the Django docs about [generating PDFs](http://docs.djangoproject.com/en/dev/howto/outputting-pdf/#complex-pdfs) with ReportLab to see a similar situation. **edit:** (adapted from ShawnMilo's comment): ``` def xls_to_response(xls, fname): response = HttpResponse(mimetype="application/ms-excel") response['Content-Disposition'] = 'attachment; filename=%s' % fname xls.save(response) return response ``` then, from your view function, just create the `xls` object and finish with ``` return xls_to_response(xls,'foo.xls') ```
If your data result doesn't need formulas or exact presentation styles, you can always use CSV. Any spreadsheet program can read it directly. I've even seen some webapps that generate CSV but name it .xls just to be sure that Excel opens it.
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
\*\*\*UPDATE: django-excel-templates is no longer being maintained; instead try Marmir <http://brianray.github.com/mm/> Still in development as I type this, but the Django Excel Templates project <http://code.google.com/p/django-excel-templates/> aims to do what you're asking. Specifically, look at the tests. Here is a simple case: ``` # from django_excel_templates import * from django_excel_templates.color_converter import * from models import * from django.http import HttpResponse def xls_simple(request): ## Simple ## testobj = Book.objects.all() formatter = ExcelFormatter() simpleStyle = ExcelStyle(vert=2,wrap=1) formatter.addBodyStyle(simpleStyle) formatter.setWidth('name,category,publish_date,bought_on',3000) formatter.setWidth('price',600) formatter.setWidth('ebook',1200) formatter.setWidth('about',20000) simple_report = ExcelReport() simple_report.addSheet("TestSimple") filter = ExcelFilter(order='name,category,publish_date,about,bought_on,price,ebook') simple_report.addQuerySet(testobj,REPORT_HORZ,formatter, filter) response = HttpResponse(simple_report.writeReport(),mimetype='application/ms-excel') response['Content-Disposition'] = 'attachment; filename=simple_test.xls' return response ```
You can save your XLS file to a [StringIO](http://docs.python.org/library/stringio.html) object, which is file-like. You can return the StringIO object's `getvalue()` in the response. Be sure to add headers to mark it as a downloadable spreadsheet.
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
Neat package! I didn't know about this. According to the doc, the `save(filename_or_stream)` method takes either a filename to save to, or a file-like stream to write to. And a Django response object happens to be a file-like stream! So just do `xls.save(response)`. Look at the Django docs about [generating PDFs](http://docs.djangoproject.com/en/dev/howto/outputting-pdf/#complex-pdfs) with ReportLab to see a similar situation. **edit:** (adapted from ShawnMilo's comment): ``` def xls_to_response(xls, fname): response = HttpResponse(mimetype="application/ms-excel") response['Content-Disposition'] = 'attachment; filename=%s' % fname xls.save(response) return response ``` then, from your view function, just create the `xls` object and finish with ``` return xls_to_response(xls,'foo.xls') ```
\*\*\*UPDATE: django-excel-templates is no longer being maintained; instead try Marmir <http://brianray.github.com/mm/> Still in development as I type this, but the Django Excel Templates project <http://code.google.com/p/django-excel-templates/> aims to do what you're asking. Specifically, look at the tests. Here is a simple case: ``` # from django_excel_templates import * from django_excel_templates.color_converter import * from models import * from django.http import HttpResponse def xls_simple(request): ## Simple ## testobj = Book.objects.all() formatter = ExcelFormatter() simpleStyle = ExcelStyle(vert=2,wrap=1) formatter.addBodyStyle(simpleStyle) formatter.setWidth('name,category,publish_date,bought_on',3000) formatter.setWidth('price',600) formatter.setWidth('ebook',1200) formatter.setWidth('about',20000) simple_report = ExcelReport() simple_report.addSheet("TestSimple") filter = ExcelFilter(order='name,category,publish_date,about,bought_on,price,ebook') simple_report.addQuerySet(testobj,REPORT_HORZ,formatter, filter) response = HttpResponse(simple_report.writeReport(),mimetype='application/ms-excel') response['Content-Disposition'] = 'attachment; filename=simple_test.xls' return response ```
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
\*\*\*UPDATE: django-excel-templates is no longer being maintained; instead try Marmir <http://brianray.github.com/mm/> Still in development as I type this, but the Django Excel Templates project <http://code.google.com/p/django-excel-templates/> aims to do what you're asking. Specifically, look at the tests. Here is a simple case: ``` # from django_excel_templates import * from django_excel_templates.color_converter import * from models import * from django.http import HttpResponse def xls_simple(request): ## Simple ## testobj = Book.objects.all() formatter = ExcelFormatter() simpleStyle = ExcelStyle(vert=2,wrap=1) formatter.addBodyStyle(simpleStyle) formatter.setWidth('name,category,publish_date,bought_on',3000) formatter.setWidth('price',600) formatter.setWidth('ebook',1200) formatter.setWidth('about',20000) simple_report = ExcelReport() simple_report.addSheet("TestSimple") filter = ExcelFilter(order='name,category,publish_date,about,bought_on,price,ebook') simple_report.addQuerySet(testobj,REPORT_HORZ,formatter, filter) response = HttpResponse(simple_report.writeReport(),mimetype='application/ms-excel') response['Content-Disposition'] = 'attachment; filename=simple_test.xls' return response ```
You might want to check [huDjango](https://cybernetics.hudora.biz/projects/wiki/huDjango), which comes with a function called `serializers.queryset_to_xls()` to convert a queryset into a downloadable Excel sheet.
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
\*\*\*UPDATE: django-excel-templates is no longer being maintained; instead try Marmir <http://brianray.github.com/mm/> Still in development as I type this, but the Django Excel Templates project <http://code.google.com/p/django-excel-templates/> aims to do what you're asking. Specifically, look at the tests. Here is a simple case: ``` # from django_excel_templates import * from django_excel_templates.color_converter import * from models import * from django.http import HttpResponse def xls_simple(request): ## Simple ## testobj = Book.objects.all() formatter = ExcelFormatter() simpleStyle = ExcelStyle(vert=2,wrap=1) formatter.addBodyStyle(simpleStyle) formatter.setWidth('name,category,publish_date,bought_on',3000) formatter.setWidth('price',600) formatter.setWidth('ebook',1200) formatter.setWidth('about',20000) simple_report = ExcelReport() simple_report.addSheet("TestSimple") filter = ExcelFilter(order='name,category,publish_date,about,bought_on,price,ebook') simple_report.addQuerySet(testobj,REPORT_HORZ,formatter, filter) response = HttpResponse(simple_report.writeReport(),mimetype='application/ms-excel') response['Content-Disposition'] = 'attachment; filename=simple_test.xls' return response ```
Use <https://bitbucket.org/kmike/django-excel-response>
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
\*\*\*UPDATE: django-excel-templates is no longer being maintained; instead try Marmir <http://brianray.github.com/mm/> Still in development as I type this, but the Django Excel Templates project <http://code.google.com/p/django-excel-templates/> aims to do what you're asking. Specifically, look at the tests. Here is a simple case: ``` # from django_excel_templates import * from django_excel_templates.color_converter import * from models import * from django.http import HttpResponse def xls_simple(request): ## Simple ## testobj = Book.objects.all() formatter = ExcelFormatter() simpleStyle = ExcelStyle(vert=2,wrap=1) formatter.addBodyStyle(simpleStyle) formatter.setWidth('name,category,publish_date,bought_on',3000) formatter.setWidth('price',600) formatter.setWidth('ebook',1200) formatter.setWidth('about',20000) simple_report = ExcelReport() simple_report.addSheet("TestSimple") filter = ExcelFilter(order='name,category,publish_date,about,bought_on,price,ebook') simple_report.addQuerySet(testobj,REPORT_HORZ,formatter, filter) response = HttpResponse(simple_report.writeReport(),mimetype='application/ms-excel') response['Content-Disposition'] = 'attachment; filename=simple_test.xls' return response ```
If your data result doesn't need formulas or exact presentation styles, you can always use CSV. Any spreadsheet program can read it directly. I've even seen some webapps that generate CSV but name it .xls just to be sure that Excel opens it.
883,313
On a Django site, I want to generate an Excel file based on some data in the database. I'm thinking of using [xlwt](http://pypi.python.org/pypi/xlwt), but it only has a method to save the data to a file. How can I get the file into the HttpResponse object? Or do you know of a better library? I've also found this [snippet](http://www.djangosnippets.org/snippets/1151/) but it doesn't do what I need. All I want is a way to get the stream from the xlwt object to the response object (without writing to a temporary file).
2009/05/19
[ "https://Stackoverflow.com/questions/883313", "https://Stackoverflow.com", "https://Stackoverflow.com/users/92763/" ]
Use <https://bitbucket.org/kmike/django-excel-response>
If your data result doesn't need formulas or exact presentation styles, you can always use CSV. Any spreadsheet program can read it directly. I've even seen some webapps that generate CSV but name it .xls just to be sure that Excel opens it.
14,484,386
I'm interrogating a nested dictionary using the dict.get('keyword') method. Currently my syntax is... ``` M = cursor_object_results_of_db_query for m in M: X = m.get("gparents").get("parent").get("child") for x in X: y = x.get("key") ``` However, sometimes one of the "parent" or "child" tags doesn't exist, and my script fails. I know using `get()` I can include a default in the case the key doesn't exist of the form... ``` get("parent", '') or get("parent", 'orphan') ``` But if I include any `Null`, `''`, or empty I can think of, the chained `.get("child")` fails when called on `''.get("child")` since `""` has no method `.get()`. The way I'm solving this now is by using a bunch of sequential `try-except` around each `.get("")` call, but that seems foolish and unpythonic---is there a way to default return `"skip"` or `"pass"` or something that would still support chaining and fail intelligently, rather than deep-dive into keys that don't exist? Ideally, I'd like this to be a list comprehension of the form: ``` [m.get("gparents").get("parent").get("child") for m in M] ``` but this is currently impossible when an absent parent causes the `.get("child")` call to terminate my program.
2013/01/23
[ "https://Stackoverflow.com/questions/14484386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1052117/" ]
Since these are all Python `dict`s and you are calling the `dict.get()` method on them, you can use an empty `dict` to chain: ``` [m.get("gparents", {}).get("parent", {}).get("child") for m in M] ``` By leaving off the default for the last `.get()` you fall back to `None`. Now, if any of the intermediary keys is not found, the rest of the chain will use empty dictionaries to look things up, terminating in `.get('child')` returning `None`.
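A quick, self-contained check of the fail-soft behaviour described above (the records are made up for illustration):

```python
M = [
    {"gparents": {"parent": {"child": "ok"}}},  # full chain present
    {"gparents": {}},                           # "parent" missing
    {},                                         # "gparents" missing
]
print([m.get("gparents", {}).get("parent", {}).get("child") for m in M])
# -> ['ok', None, None]
```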
Another approach is to recognize that if the key isn't found, `dict.get` returns `None`. However, `None` doesn't have an attribute `.get`, so it will throw an `AttributeError`: ``` for m in M: try: X = m.get("gparents").get("parent").get("child") except AttributeError: continue for x in X: y = x.get("key") #do something with `y` probably??? ``` Just like Martijn's answer, this doesn't guarantee that `X` is iterable (non-`None`). Although, you could fix that by making the last `get` in the chain default to returning an empty list: ``` try: X = m.get("gparents").get("parent").get("child",[]) except AttributeError: continue ``` --- Finally, I think that probably the best solution to this problem is to use `reduce`: ``` try: X = reduce(dict.__getitem__,["gparents","parent","child"],m) except (KeyError,TypeError): pass else: for x in X: #do something with x ``` The advantage here is that you know if any of the `get`s failed based on the type of exception that was raised. It's possible that a `get` returns the wrong type, then you get a `TypeError`. If the dictionary doesn't have the key however, it raises a `KeyError`. You can handle those separately or together. Whatever works best for your use case.
14,484,386
I'm interrogating a nested dictionary using the dict.get('keyword') method. Currently my syntax is... ``` M = cursor_object_results_of_db_query for m in M: X = m.get("gparents").get("parent").get("child") for x in X: y = x.get("key") ``` However, sometimes one of the "parent" or "child" tags doesn't exist, and my script fails. I know using `get()` I can include a default in the case the key doesn't exist of the form... ``` get("parent", '') or get("parent", 'orphan') ``` But if I include any `Null`, `''`, or empty I can think of, the chained `.get("child")` fails when called on `''.get("child")` since `""` has no method `.get()`. The way I'm solving this now is by using a bunch of sequential `try-except` around each `.get("")` call, but that seems foolish and unpythonic---is there a way to default return `"skip"` or `"pass"` or something that would still support chaining and fail intelligently, rather than deep-dive into keys that don't exist? Ideally, I'd like this to be a list comprehension of the form: ``` [m.get("gparents").get("parent").get("child") for m in M] ``` but this is currently impossible when an absent parent causes the `.get("child")` call to terminate my program.
2013/01/23
[ "https://Stackoverflow.com/questions/14484386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1052117/" ]
Since these are all Python `dict`s and you are calling the `dict.get()` method on them, you can use an empty `dict` to chain: ``` [m.get("gparents", {}).get("parent", {}).get("child") for m in M] ``` By leaving off the default for the last `.get()` you fall back to `None`. Now, if any of the intermediary keys is not found, the rest of the chain will use empty dictionaries to look things up, terminating in `.get('child')` returning `None`.
How about using a small helper function? ``` def getn(d, path): for p in path: if p not in d: return None d = d[p] return d ``` and then ``` [getn(m, ["gparents", "parent", "child"]) for m in M] ```
14,484,386
I'm interrogating a nested dictionary using the dict.get('keyword') method. Currently my syntax is... ``` M = cursor_object_results_of_db_query for m in M: X = m.get("gparents").get("parent").get("child") for x in X: y = x.get("key") ``` However, sometimes one of the "parent" or "child" tags doesn't exist, and my script fails. I know using `get()` I can include a default in the case the key doesn't exist of the form... ``` get("parent", '') or get("parent", 'orphan') ``` But if I include any `Null`, `''`, or empty I can think of, the chained `.get("child")` fails when called on `''.get("child")` since `""` has no method `.get()`. The way I'm solving this now is by using a bunch of sequential `try-except` around each `.get("")` call, but that seems foolish and unpythonic---is there a way to default return `"skip"` or `"pass"` or something that would still support chaining and fail intelligently, rather than deep-dive into keys that don't exist? Ideally, I'd like this to be a list comprehension of the form: ``` [m.get("gparents").get("parent").get("child") for m in M] ``` but this is currently impossible when an absent parent causes the `.get("child")` call to terminate my program.
2013/01/23
[ "https://Stackoverflow.com/questions/14484386", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1052117/" ]
Since these are all Python `dict`s and you are calling the `dict.get()` method on them, you can use an empty `dict` to chain: ``` [m.get("gparents", {}).get("parent", {}).get("child") for m in M] ``` By leaving off the default for the last `.get()` you fall back to `None`. Now, if any of the intermediary keys is not found, the rest of the chain will use empty dictionaries to look things up, terminating in `.get('child')` returning `None`.
I realise I'm a bit late to the party, but here's the solution I came up with when faced with a similar problem: ``` def get_nested(dict_, *keys, default=None): if not isinstance(dict_, dict): return default elem = dict_.get(keys[0], default) if len(keys) == 1: return elem return get_nested(elem, *keys[1:], default=default) ``` For example: ``` In [29]: a = {'b': {'c': 1}} In [30]: get_nested(a, 'b', 'c') Out[30]: 1 In [31]: get_nested(a, 'b', 'd') is None Out[31]: True ```
20,375,954
I have a large collection of images which I'm trying to sort according to quality by crowd-sourcing. Images can be assigned 1, 2, 3, 4, or 5 stars according to how much the user likes them. A 5-star image would be very visually appealing, a 1-star image might be blurry and out of focus. At first I created a page showing an image with the option to rate it directly by choosing 1-5 stars. But it was too time-consuming to do this. I'd like to try to create an interface where the user is presented with 2 images side by side and asked to click the image s/he likes more. Using this comparison data of one image compared to another, is there then some way to convert it over to a score of 1-5? What kind of algorithm would allow me to globally rank images by comparing them only to each other, and how could I implement it in Python?
2013/12/04
[ "https://Stackoverflow.com/questions/20375954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/216605/" ]
Sounds like you need a ranking algorithm similar to what is used in sport to rank players. Think of each comparison of two images as a match; the image the user selects as the better one is the winner of the match. After some time, many players have played many matches, sometimes against the same person. They win some, they lose some, eh? How do you rank which is the best overall? You can look at the [Elo Rating System](http://en.wikipedia.org/wiki/Elo_rating_system), which is used in chess to rank chess players. The algorithm is specified there, so it should be a matter of implementing it in your language of choice.
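For illustration, a minimal Elo-style update in Python (the K-factor of 32 and the 400-point scale are conventional choices, not requirements; mapping the resulting ratings onto 1-5 stars, e.g. by percentile, is left to the caller):

```python
def elo_update(winner, loser, k=32.0):
    """Return updated (winner, loser) ratings after one comparison."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    winner += k * (1.0 - expected_win)   # winner gains the "unexpected" part
    loser -= k * (1.0 - expected_win)    # loser loses the same amount
    return winner, loser

# Example: every image starts at 1500; update after each user click.
a, b = 1500.0, 1500.0
a, b = elo_update(a, b)   # the user preferred image a
print(a, b)               # -> 1516.0 1484.0
```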
Let each image start with a ranking of 3 (the mean of 1 … 5), then for each comparison (that wasn't a tie) lower the rank of the loser image and increase the rank of the winner image. I propose to simply *count* the +1s and the -1s, so that you have a number of wins and a number of losses for each image. Then the value 1 … 5 could be calculated as: ``` import math def rank(wins, losses): return 3 + 4 * math.atan(wins - losses) / math.pi ``` This will rank images higher and higher with each win, but it will lead to the silly situation that (+1010 / -1000) will be ranked the same as (+10 / -0), which is not desirable. One can remedy this flaw by normalising the difference by the total number of comparisons: ``` def rank(wins, losses): return (3 + 4 * math.atan((wins - losses) / (wins + losses) * 10) / math.pi if wins + losses > 0 else 3) ``` Both curves will never *quite* reach 1 or 5, but they will come ever closer if an image always wins or always loses.
20,375,954
I have a large collection of images which I'm trying to sort according to quality by crowd-sourcing. Images can be assigned 1, 2, 3, 4, or 5 stars according to how much the user likes them. A 5-star image would be very visually appealing, a 1-star image might be blurry and out of focus. At first I created a page showing an image with the option to rate it directly by choosing 1-5 stars. But it was too time-consuming to do this. I'd like to try to create an interface where the user is presented with 2 images side by side and asked to click the image s/he likes more. Using this comparison data of one image compared to another, is there then some way to convert it over to a score of 1-5? What kind of algorithm would allow me to globally rank images by comparing them only to each other, and how could I implement it in Python?
2013/12/04
[ "https://Stackoverflow.com/questions/20375954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/216605/" ]
Sounds like you need a ranking algorithm similar to what is used in sport to rank players. Think of each comparison of two images as a match; the image the user selects as the better one is the winner of the match. After some time, many players have played many matches, sometimes against the same person. They win some, they lose some, eh? How do you rank which is the best overall? You can look at the [Elo Rating System](http://en.wikipedia.org/wiki/Elo_rating_system), which is used in chess to rank chess players. The algorithm is specified there, so it should be a matter of implementing it in your language of choice.
If you don't want to deal with a complex statistical model like the Elo rating system suggested by @VincentRamdhanie (which will yield optimal results), you can always model this as a simple [optimization problem](http://en.wikipedia.org/wiki/Optimization_%28mathematics%29). You have datapoints of the type `a>b`. If you assign values to a and b, `a>b` is simply a condition that evaluates to true or false. Then, one possible "solution" to this problem is to maximize the number of conditions (datapoints) that evaluate to true. Since you already know your metric, all that is left is to choose a search algorithm. [Hill climbing](http://en.wikipedia.org/wiki/Hill_climbing) is a very simple one. It works like this: * assign to each rating a random value - this is your current solution * change the value of one of the ratings * if by performing the change, our metric (number of valid conditions) is improved, the change is incorporated in our current solution * repeat until you're happy (typically, until you can't improve the solution for some time) It's important to mention that this method will yield a meaningful order between the images, but the ratings themselves will not have any meaning. There's no difference between a rating of 3 or 4 if there are no images with ratings between 3 or 4. After the search algorithm runs, you may want to simply take the order information, and distribute the images evenly in the rating space (if you have 3 images, they'll have final ratings of 1, 3 and 5, for example). Here's some code to illustrate this: ``` import random N_ELEMS= 5 #number of images N_RATINGS= 5 #number of (faked) user-given ratings MIN_SCORE, MAX_SCORE= 0.0, 5.0 N_ITERATIONS= 1000 #for search stopping condition def random_score(): #generate random score between MIN_SCORE and MAX_SCORE return MIN_SCORE+random.random()*(MAX_SCORE-MIN_SCORE) elements=range(N_ELEMS) #this would be strings or objects in the real world ratings= [] #tuples of (elem_a, elem_b), representing rating a<b for i in range(N_RATINGS): #generate fake ratings while True: elem_a, elem_b=(random.choice(elements),random.choice(elements)) if elem_a!=elem_b: break ratings.append((elem_a, elem_b)) scores= [random_score() for i in range(N_ELEMS)] #assign random scores def evaluate_condition( rating ): #is a user-provided rating true, given the current scores return scores[rating[0]]<scores[rating[1]] def metric(): #number of true conditions return sum( map(evaluate_condition, ratings)) no_improvement_iterations=0 #number of successive iterations where there has been no improvement current_score= metric() while no_improvement_iterations<N_ITERATIONS: change_element= random.randint(0,N_ELEMS-1) new_value= random_score() old_value= scores[change_element] scores[change_element]= new_value new_score= metric() if new_score<=current_score: scores[change_element]= old_value no_improvement_iterations+=1 else: no_improvement_iterations=0 current_score= new_score def distribute_scores(scores): '''distribute scores evenly in the interval (MIN_SCORE, MAX_SCORE)''' sorted_scores= sorted(scores) order= [sorted_scores.index(x) for x in scores] #inefficient but easy to understand step= (MAX_SCORE-MIN_SCORE)/(len(order)) return [x*step for x in order] print "ratings:", ", ".join(["{0}<{1}".format(a,b) for a,b in ratings]) print "scores:", scores print "distributed scores:", distribute_scores(scores) ``` And output: ``` ratings: 1<2, 3<4, 0<3, 4<3, 4<0 scores: [2.3647080073611955, 0.7188260611863462, 4.295792794993049, 4.286501742802684, 0.3471914376983337] distributed scores: [2.0, 1.0, 4.0, 3.0, 0.0] ``` This is not strictly hill climbing, since we generate new\_value randomly. Hill climbing would choose new\_value to maximize the score, but we can't calculate that directly, so we use [random optimization](http://en.wikipedia.org/wiki/Random_optimization). Also, obviously, this is not the best way to solve the search problem - a genetic algorithm that operated on the order would probably be faster, for example. I aimed for simplicity, not efficiency.
20,375,954
I have a large collection of images which I'm trying to sort according to quality by crowd-sourcing. Images can be assigned 1, 2, 3, 4, or 5 stars according to how much the user likes them. A 5-star image would be very visually appealing, a 1-star image might be blurry and out of focus. At first I created a page showing an image with the option to rate it directly by choosing 1-5 stars. But it was too time-consuming to do this. I'd like to try to create an interface where the user is presented with 2 images side by side and asked to click the image s/he likes more. Using this comparison data of one image compared to another, is there then some way to convert it over to a score of 1-5? What kind of algorithm would allow me to globally rank images by comparing them only to each other, and how could I implement it in Python?
2013/12/04
[ "https://Stackoverflow.com/questions/20375954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/216605/" ]
Let each image start with a ranking of 3 (the mean of 1 … 5), then for each comparison (that wasn't a tie) lower the rank of the loser image and increase the rank of the winner image. I propose to simply *count* the +1s and the -1s, so that you have a number of wins and a number of losses for each image. Then the value 1 … 5 could be calculated as: ``` import math def rank(wins, losses): return 3 + 4 * math.atan(wins - losses) / math.pi ``` This will rank images higher and higher with each win, but it will lead to the silly situation that (+1010 / -1000) will be ranked the same as (+10 / -0), which is not desirable. One can remedy this flaw by normalising the difference by the total number of comparisons: ``` def rank(wins, losses): return (3 + 4 * math.atan((wins - losses) / (wins + losses) * 10) / math.pi if wins + losses > 0 else 3) ``` Both curves will never *quite* reach 1 or 5, but they will come ever closer if an image always wins or always loses.
If you don't want to deal with a complex statistical model like the Elo rating system suggested by @VincentRamdhanie (which will yield optimal results), you can always model this as a simple [optimization problem](http://en.wikipedia.org/wiki/Optimization_%28mathematics%29). You have datapoints of the type `a>b`. If you assign values to a and b, `a>b` is simply a condition that evaluates to true or false. Then, one possible "solution" to this problem is to maximize the number of conditions (datapoints) that evaluate to true. Since you already know your metric, all that is left is to choose a search algorithm. [Hill climbing](http://en.wikipedia.org/wiki/Hill_climbing) is a very simple one. It works like this: * assign to each rating a random value - this is your current solution * change the value of one of the ratings * if by performing the change, our metric (number of valid conditions) is improved, the change is incorporated in our current solution * repeat until you're happy (typically, until you can't improve the solution for some time) It's important to mention that this method will yield a meaningful order between the images, but the ratings themselves will not have any meaning. There's no difference between a rating of 3 or 4 if there are no images with ratings between 3 or 4. After the search algorithm runs, you may want to simply take the order information, and distribute the images evenly in the rating space (if you have 3 images, they'll have final ratings of 1, 3 and 5, for example). Here's some code to illustrate this: ``` import random N_ELEMS= 5 #number of images N_RATINGS= 5 #number of (faked) user-given ratings MIN_SCORE, MAX_SCORE= 0.0, 5.0 N_ITERATIONS= 1000 #for search stopping condition def random_score(): #generate random score between MIN_SCORE and MAX_SCORE return MIN_SCORE+random.random()*(MAX_SCORE-MIN_SCORE) elements=range(N_ELEMS) #this would be strings or objects in the real world ratings= [] #tuples of (elem_a, elem_b), representing rating a<b for i in range(N_RATINGS): #generate fake ratings while True: elem_a, elem_b=(random.choice(elements),random.choice(elements)) if elem_a!=elem_b: break ratings.append((elem_a, elem_b)) scores= [random_score() for i in range(N_ELEMS)] #assign random scores def evaluate_condition( rating ): #is a user-provided rating true, given the current scores return scores[rating[0]]<scores[rating[1]] def metric(): #number of true conditions return sum( map(evaluate_condition, ratings)) no_improvement_iterations=0 #number of successive iterations where there has been no improvement current_score= metric() while no_improvement_iterations<N_ITERATIONS: change_element= random.randint(0,N_ELEMS-1) new_value= random_score() old_value= scores[change_element] scores[change_element]= new_value new_score= metric() if new_score<=current_score: scores[change_element]= old_value no_improvement_iterations+=1 else: no_improvement_iterations=0 current_score= new_score def distribute_scores(scores): '''distribute scores evenly in the interval (MIN_SCORE, MAX_SCORE)''' sorted_scores= sorted(scores) order= [sorted_scores.index(x) for x in scores] #inefficient but easy to understand step= (MAX_SCORE-MIN_SCORE)/(len(order)) return [x*step for x in order] print "ratings:", ", ".join(["{0}<{1}".format(a,b) for a,b in ratings]) print "scores:", scores print "distributed scores:", distribute_scores(scores) ``` And output: ``` ratings: 1<2, 3<4, 0<3, 4<3, 4<0 scores: [2.3647080073611955, 0.7188260611863462, 4.295792794993049, 4.286501742802684, 0.3471914376983337] distributed scores: [2.0, 1.0, 4.0, 3.0, 0.0] ``` This is not strictly hill climbing, since we generate new\_value randomly. Hill climbing would choose new\_value to maximize the score, but we can't calculate that directly, so we use [random optimization](http://en.wikipedia.org/wiki/Random_optimization). Also, obviously, this is not the best way to solve the search problem - a genetic algorithm that operated on the order would probably be faster, for example. I aimed for simplicity, not efficiency.
51,865,923
I have been trying out DroneKit Python and have been working with some of the examples provided. Having got to a point of some knowledge of working with DroneKit, I have started writing some Python code to perform a single mission. My only problem is that the start location for my missions is always defaulting to `Lat = -35.3632605, Lon = 149.1652287` - even though I have set the home location to the following: ``` start_location = LocationGlobal(51.945102, -2.074558, 10) vehicle.home_location = start_location ``` Is there something else in the API I need to do in order to set the start location of the drone in the simulation environment?
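Note: with the default SITL simulator, the vehicle's start point is fixed when the simulator launches, so writing `vehicle.home_location` afterwards does not move it. A hedged sketch of one common approach, assuming the `dronekit-sitl` Python package (the firmware version and yaw value are assumptions; the coordinates come from the question):

```python
from dronekit_sitl import SITL

sitl = SITL()
sitl.download('copter', '3.3', verbose=True)  # fetch a simulator binary
# --home takes lat,lon,alt,heading and fixes the simulation start point
sitl.launch(['--home=51.945102,-2.074558,0,180'], await_ready=True)
print('SITL listening on', sitl.connection_string())
```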
2018/08/15
[ "https://Stackoverflow.com/questions/51865923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10231182/" ]
If you really don't want to wrap, you can use `@media` queries to change the `flex-direction` of your quizlist class to `column`. ```css input[type="radio"] { display: none; } input[type="radio"]:checked+.quizlabel { border: 2px solid #0052e7; transition: .1s; background-color: #0052e7; box-shadow: 2px 2px 3px #c8c8c8; color: #fff; } input[type="radio"]:hover+.quizlabel { border: 2px solid #0052e7; } .quizlabel { padding: 5px; margin: 5px; border: 2px solid #484848; color: #000; font-family: sans-serif; font-size: 14px; } .quizform>div { padding: 10px; margin-top: 10px; } .quizlabel:first-of-type { margin-left: 0; } .quizform { padding: 10px; font-family: sans-serif; } .quizform p { margin: 2px; font-weight: Bold; } .quizrow:nth-of-type(odd) { background-color: #e2e3e5; } .quizlist { display: flex; justify-content: flex-start; flex-direction: row; flex-wrap: nowrap; margin: 0; padding: 0; } @media (max-width: 500px){ .quizlist { display: flex; flex-direction: column; flex-wrap: nowrap; text-align: center; } } #result_div { font-family: sans-serif; color: #000; border: 3px solid #000; padding: 10px; } #result_div p { color: #000; } .quiz-submit { font-family: sans-serif; color: #fff; background-color: #000; padding: 10px; cursor: pointer; } .quiz-submit:hover { background-color: #0052e7; } ``` ```html <form name="quizform" class="quizform"> <div class="quizrow"> <p>Q1</p> <div class="quizlist"> <input type="radio" name="q1" value="1" id="q1-1"><label for="q1-1" class="quizlabel"><span>Strongly Disagree</span></label> <input type="radio" name="q1" value="2" id="q1-2"><label for="q1-2" class="quizlabel"><span>Disagree</span></label> <input type="radio" name="q1" value="3" id="q1-3"><label for="q1-3" class="quizlabel">Neutral</label> <input type="radio" name="q1" value="4" id="q1-4"><label for="q1-4" class="quizlabel">Agree</label> <input type="radio" name="q1" value="5" id="q1-5"><label for="q1-5" class="quizlabel">Strongly Agree</label> </div> </div> <div class="quizrow"> <p>Q2</p> <div class="quizlist"> <input type="radio" name="q2" value="1" id="q2-1"><label for="q2-1" class="quizlabel"><span>Strongly Disagree</span></label> <input type="radio" name="q2" value="2" id="q2-2"><label for="q2-2" class="quizlabel"><span>Disagree</span></label> <input type="radio" name="q2" value="3" id="q2-3"><label for="q2-3" class="quizlabel">Neutral</label> <input type="radio" name="q2" value="4" id="q2-4"><label for="q2-4" class="quizlabel">Agree</label> <input type="radio" name="q2" value="5" id="q2-5"><label for="q2-5" class="quizlabel">Strongly Agree</label> </div> </div> <p></p> <button type="submit" class="quiz-submit">Submit</button> <div>&nbsp;</div> <div>&nbsp;</div> <div id="result_div" style="display:none;"> <p id="result_text"></p> </div> </form> ```
I think this is what you're aiming for? The boxes weren't getting smaller because of the text inside them, so you needed to add `flex-wrap: wrap;` to the `.quizlist` so that the labels wrap onto the next row. You also needed to add `flex` and `flex-grow` to specify the widths you want them to flex to. If you don't want them to grow to match the screen width, then remove the `flex-grow`. ```css input[type="radio"] { display: none; } input[type="radio"]:checked+.quizlabel { border: 2px solid #0052e7; transition: .1s; background-color: #0052e7; box-shadow: 2px 2px 3px #c8c8c8; color: #fff; } input[type="radio"]:hover+.quizlabel { border: 2px solid #0052e7; } .quizlabel { padding: 5px; margin: 5px; border: 2px solid #484848; color: #000; font-family: sans-serif; font-size: 14px; flex: 0 0 5%; flex-grow: 1; } .quizform>div { padding: 10px; margin-top: 10px; } .quizlabel:first-of-type { margin-left: 0; } .quizform { padding: 10px; font-family: sans-serif; } .quizform p { margin: 2px; font-weight: Bold; } .quizrow:nth-of-type(odd) { background-color: #e2e3e5; } .quizlist { display: flex; justify-content: flex-start; flex-direction: row; flex-wrap: wrap; margin: 0; padding: 0; } #result_div { font-family: sans-serif; color: #000; border: 3px solid #000; padding: 10px; } #result_div p { color: #000; } .quiz-submit { font-family: sans-serif; color: #fff; background-color: #000; padding: 10px; cursor: pointer; } .quiz-submit:hover { background-color: #0052e7; } ``` ```html <form name="quizform" class="quizform"> <div class="quizrow"> <p>Q1</p> <div class="quizlist"> <input type="radio" name="q1" value="1" id="q1-1"><label for="q1-1" class="quizlabel"><span>Strongly Disagree</span></label> <input type="radio" name="q1" value="2" id="q1-2"><label for="q1-2" class="quizlabel"><span>Disagree</span></label> <input type="radio" name="q1" value="3" id="q1-3"><label for="q1-3" class="quizlabel">Neutral</label> <input type="radio" name="q1" value="4" id="q1-4"><label for="q1-4" class="quizlabel">Agree</label> <input type="radio" name="q1" value="5" id="q1-5"><label for="q1-5" class="quizlabel">Strongly Agree</label> </div> </div> <div class="quizrow"> <p>Q2</p> <div class="quizlist"> <input type="radio" name="q2" value="1" id="q2-1"><label for="q2-1" class="quizlabel"><span>Strongly Disagree</span></label> <input type="radio" name="q2" value="2" id="q2-2"><label for="q2-2" class="quizlabel"><span>Disagree</span></label> <input type="radio" name="q2" value="3" id="q2-3"><label for="q2-3" class="quizlabel">Neutral</label> <input type="radio" name="q2" value="4" id="q2-4"><label for="q2-4" class="quizlabel">Agree</label> <input type="radio" name="q2" value="5" id="q2-5"><label for="q2-5" class="quizlabel">Strongly Agree</label> </div> </div> <p></p> <button type="submit" class="quiz-submit">Submit</button> </form> ```
71,949,010
After I installed the Google Cloud SDK on my computer, I opened the terminal and typed "gcloud --version", but it says "python was not found". Note: I unchecked the box saying "Install python bundle" when I installed the Google Cloud SDK because I already have Python 3.10.2 installed. So, how do I fix this? Thanks in advance.
2022/04/21
[ "https://Stackoverflow.com/questions/71949010", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17138122/" ]
As mentioned in the [document](https://cloud.google.com/sdk/docs/install-sdk#windows):

> Cloud SDK requires Python; supported versions are Python 3 (preferred,
> 3.5 to 3.8) and Python 2 (2.7.9 or later). By default, the Windows version of Cloud SDK comes bundled with Python 3 and Python 2. To use
> Cloud SDK, your operating system must be able to run a supported
> version of Python.

As suggested by @John Hanley, the CLI cannot find the Python that is already installed. Try reinstalling the CLI and selecting **Install Python bundle**.

If you are still facing the issue, another workaround is to fall back to a Python 2.x version. You can follow the steps below:

1. Uninstall all Python versions 3 and above.
2. Install a Python 2.x version (I installed 2.7.17).
3. Create the environment variable `CLOUDSDK_PYTHON` and set its value to `C:\Python27\python.exe` (for example, from cmd: `setx CLOUDSDK_PYTHON "C:\Python27\python.exe"`).
4. Run GoogleCloudSDKInstaller.exe again.
On Ubuntu Linux, you can define this variable in the `.bashrc` file:

```bash
export CLOUDSDK_PYTHON=/usr/bin/python3
```
15,866,765
What is the recommended library for web client programming that involves HTTP requests? I know there is a package called [HTTP](https://github.com/haskell/HTTP), but it doesn't seem to support HTTPS. Is there any better library for it? I expect a library with functionality something like [this](http://docs.python-requests.org/en/latest/) for Haskell.
2013/04/07
[ "https://Stackoverflow.com/questions/15866765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1651941/" ]
[`Network.HTTP.Conduit`](http://hackage.haskell.org/package/http-conduit) has a clean API (it uses [`Network.HTTP.Types`](http://hackage.haskell.org/package/http-types)) and is quite simple to use if you know a bit about conduits. Example: ```hs {-# LANGUAGE OverloadedStrings #-} module Main where import Data.Conduit import Network.HTTP.Conduit import qualified Data.Aeson as J main = do manager <- newManager def initReq <- parseUrl "https://api.github.com/user" let req = applyBasicAuth "niklasb" "password" initReq resp <- runResourceT $ httpLbs req manager print (responseStatus resp) print (lookup "content-type" (responseHeaders resp)) -- you will probably want a proper FromJSON instance here, -- rather than decoding to Data.Aeson.Object print (J.decode (responseBody resp) :: Maybe J.Object) ``` Also make sure to [consult the tutorial](https://haskell-lang.org/library/http-client).
In addition to `Network.HTTP.Conduit`, there is [`Network.Http.Client`](http://hackage.haskell.org/package/http-streams), which exposes an [`io-streams`](http://hackage.haskell.org/package/io-streams-1.0.1.0) interface.
15,866,765
What is the recommended library for web client programming that involves HTTP requests? I know there is a package called [HTTP](https://github.com/haskell/HTTP), but it doesn't seem to support HTTPS. Is there any better library for it? I expect a library with functionality something like [this](http://docs.python-requests.org/en/latest/) for Haskell.
2013/04/07
[ "https://Stackoverflow.com/questions/15866765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1651941/" ]
Bryan O'Sullivan has released a library named [wreq](https://hackage.haskell.org/package/wreq), which is great and easy to use for HTTP communication. A related tutorial by the same author is [here.](http://www.serpentine.com/wreq/tutorial.html) There is also another library named [req](https://github.com/mrkkrp/req) which provides a nice API.
[`Network.HTTP.Conduit`](http://hackage.haskell.org/package/http-conduit) has a clean API (it uses [`Network.HTTP.Types`](http://hackage.haskell.org/package/http-types)) and is quite simple to use if you know a bit about conduits. Example: ```hs {-# LANGUAGE OverloadedStrings #-} module Main where import Data.Conduit import Network.HTTP.Conduit import qualified Data.Aeson as J main = do manager <- newManager def initReq <- parseUrl "https://api.github.com/user" let req = applyBasicAuth "niklasb" "password" initReq resp <- runResourceT $ httpLbs req manager print (responseStatus resp) print (lookup "content-type" (responseHeaders resp)) -- you will probably want a proper FromJSON instance here, -- rather than decoding to Data.Aeson.Object print (J.decode (responseBody resp) :: Maybe J.Object) ``` Also make sure to [consult the tutorial](https://haskell-lang.org/library/http-client).
15,866,765
What is the recommended library for web client programming that involves HTTP requests? I know there is a package called [HTTP](https://github.com/haskell/HTTP), but it doesn't seem to support HTTPS. Is there any better library for it? I expect a library with functionality something like [this](http://docs.python-requests.org/en/latest/) for Haskell.
2013/04/07
[ "https://Stackoverflow.com/questions/15866765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1651941/" ]
[`Network.HTTP.Conduit`](http://hackage.haskell.org/package/http-conduit) has a clean API (it uses [`Network.HTTP.Types`](http://hackage.haskell.org/package/http-types)) and is quite simple to use if you know a bit about conduits. Example: ```hs {-# LANGUAGE OverloadedStrings #-} module Main where import Data.Conduit import Network.HTTP.Conduit import qualified Data.Aeson as J main = do manager <- newManager def initReq <- parseUrl "https://api.github.com/user" let req = applyBasicAuth "niklasb" "password" initReq resp <- runResourceT $ httpLbs req manager print (responseStatus resp) print (lookup "content-type" (responseHeaders resp)) -- you will probably want a proper FromJSON instance here, -- rather than decoding to Data.Aeson.Object print (J.decode (responseBody resp) :: Maybe J.Object) ``` Also make sure to [consult the tutorial](https://haskell-lang.org/library/http-client).
[Servant](https://hackage.haskell.org/package/servant) is easy to use (albeit hard to understand) and magical. It lets you specify the API as an uninhabited type, and generates request and response behaviors based on it. You'll never have to worry about serialization or deserialization, or even JSON -- it converts JSON to and from native Haskell objects automatically, based on the API. It's got an excellent [tutorial](http://haskell-servant.readthedocs.io/en/stable/tutorial/), too.
15,866,765
What is the recommended library for web client programming that involves HTTP requests? I know there is a package called [HTTP](https://github.com/haskell/HTTP), but it doesn't seem to support HTTPS. Is there any better library for it? I expect a library with functionality something like [this](http://docs.python-requests.org/en/latest/) for Haskell.
2013/04/07
[ "https://Stackoverflow.com/questions/15866765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1651941/" ]
Bryan O'Sullivan has released a library named [wreq](https://hackage.haskell.org/package/wreq), which is great and easy to use for HTTP communication. A related tutorial by the same author is [here.](http://www.serpentine.com/wreq/tutorial.html) There is also another library named [req](https://github.com/mrkkrp/req) which provides a nice API.
In addition to `Network.HTTP.Conduit`, there is [`Network.Http.Client`](http://hackage.haskell.org/package/http-streams), which exposes an [`io-streams`](http://hackage.haskell.org/package/io-streams-1.0.1.0) interface.
15,866,765
What is the recommended library for web client programming that involves HTTP requests? I know there is a package called [HTTP](https://github.com/haskell/HTTP), but it doesn't seem to support HTTPS. Is there any better library for it? I expect a library with functionality something like [this](http://docs.python-requests.org/en/latest/) for Haskell.
2013/04/07
[ "https://Stackoverflow.com/questions/15866765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1651941/" ]
In addition to `Network.HTTP.Conduit`, there is [`Network.Http.Client`](http://hackage.haskell.org/package/http-streams), which exposes an [`io-streams`](http://hackage.haskell.org/package/io-streams-1.0.1.0) interface.
[Servant](https://hackage.haskell.org/package/servant) is easy to use (albeit hard to understand) and magical. It lets you specify the API as an uninhabited type, and generates request and response behaviors based on it. You'll never have to worry about serialization or deserialization, or even JSON -- it converts JSON to and from native Haskell objects automatically, based on the API. It's got an excellent [tutorial](http://haskell-servant.readthedocs.io/en/stable/tutorial/), too.
15,866,765
What is the recommended library for web client programming that involves HTTP requests? I know there is a package called [HTTP](https://github.com/haskell/HTTP), but it doesn't seem to support HTTPS. Is there any better library for it? I expect a library with functionality something like [this](http://docs.python-requests.org/en/latest/) for Haskell.
2013/04/07
[ "https://Stackoverflow.com/questions/15866765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1651941/" ]
Bryan O'Sullivan has released a library named [wreq](https://hackage.haskell.org/package/wreq), which is great and easy to use for HTTP communication. A related tutorial by the same author is [here.](http://www.serpentine.com/wreq/tutorial.html) There is also another library named [req](https://github.com/mrkkrp/req) which provides a nice API.
[Servant](https://hackage.haskell.org/package/servant) is easy to use (albeit hard to understand) and magical. It lets you specify the API as an uninhabited type, and generates request and response behaviors based on it. You'll never have to worry about serialization or deserialization, or even JSON -- it converts JSON to and from native Haskell objects automatically, based on the API. It's got an excellent [tutorial](http://haskell-servant.readthedocs.io/en/stable/tutorial/), too.
11,923,645
Let's say I have this code:

```
def dosomething(thing1, thing2='hello', thing3='world'):
    print thing1
    print thing2
    print thing3
```

I would like to be able to specify what thing3 is, but without having to say what thing2 is. (The code below is how I thought it might work...)

```
dosomething("This says 'hello fail!'", , 'fail!')
```

and it would say

```
This says 'hello fail!'
hello
fail!
```

So is there a way to do it like that, or would I have to specify `thing2` every time I wanted to say what `thing3` was? I am using Python 2, if that matters.
2012/08/12
[ "https://Stackoverflow.com/questions/11923645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1542540/" ]
Use keyword arguments ``` dosomething("This says 'hello fail!'", thing3='fail!') ```
Yes, you can: ``` dosomething("This says 'hello fail!'", thing3 = 'fail!') ```
32,550,447
I have a set of integers which denotes the values that a list element can take, and a Python list of a given length. I want to fill the list with all possible combinations.

**example**

> list `length=3` and the `my_set ={1,-1}`

**Possible combinations**

```
[1,1,1],[1,1,-1],[1,-1,1],[1,-1,-1],
[-1,1,1],[-1,1,-1],[-1,-1,1],[-1,-1,-1]
```

I tried approaching this with the `random.sample` method from the random module, but it doesn't help. I did:

```
my_set=[1,-1]
from random import sample as sm
print sm(my_set,1) #Outputs: -1,-1,1,1 and so on..(random)
print sm(my_set,length_I_require) #Outputs: Error
```
2015/09/13
[ "https://Stackoverflow.com/questions/32550447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4355529/" ]
That's what `itertools.product` is for:

```
>>> from itertools import product
>>> list(product({1,-1},repeat=3))
[(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1)]
>>>
```

And if you want the result as a list of lists, you can use `map` to convert the tuples to lists (in Python 3, `map` returns an iterator, so a list comprehension is the more direct way there):

```
>>> map(list,product({1,-1},repeat=3))
[[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]]
```

In Python 3:

```
>>> [list(pro) for pro in product({1,-1},repeat=3)]
[[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]]
>>>
```
Use the [`itertools.product()` function](https://docs.python.org/3/library/itertools.html#itertools.product):

```
from itertools import product

result = [list(combo) for combo in product(my_set, repeat=length)]
```

The `list()` call is optional; if tuples instead of lists are fine too, then `result = list(product(my_set, repeat=length))` suffices.

Demo:

```
>>> from itertools import product
>>> length = 3
>>> my_set = {1, -1}
>>> list(product(my_set, repeat=length))
[(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1)]
>>> [list(combo) for combo in product(my_set, repeat=length)]
[[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]]
```

`random.sample()` gives you a random subset of the given input sequence; it doesn't produce all possible combinations of values.
32,550,447
I have a set of integers which denotes the values that a list element can take, and a Python list of a given length. I want to fill the list with all possible combinations.

**example**

> list `length=3` and the `my_set ={1,-1}`

**Possible combinations**

```
[1,1,1],[1,1,-1],[1,-1,1],[1,-1,-1],
[-1,1,1],[-1,1,-1],[-1,-1,1],[-1,-1,-1]
```

I tried approaching this with the `random.sample` method from the random module, but it doesn't help. I did:

```
my_set=[1,-1]
from random import sample as sm
print sm(my_set,1) #Outputs: -1,-1,1,1 and so on..(random)
print sm(my_set,length_I_require) #Outputs: Error
```
2015/09/13
[ "https://Stackoverflow.com/questions/32550447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4355529/" ]
That's what `itertools.product` is for:

```
>>> from itertools import product
>>> list(product({1,-1},repeat=3))
[(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1)]
>>>
```

And if you want the result as a list of lists, you can use `map` to convert the tuples to lists (in Python 3, `map` returns an iterator, so a list comprehension is the more direct way there):

```
>>> map(list,product({1,-1},repeat=3))
[[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]]
```

In Python 3:

```
>>> [list(pro) for pro in product({1,-1},repeat=3)]
[[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]]
>>>
```
```
lst_length = 3
my_set = {1,-1}
result = [[x] for x in my_set]
for i in range(1,lst_length):
    temp = []
    for candidate in my_set:
        for item in result:
            new_item = [candidate]
            new_item += item
            temp.append(new_item)
    result = temp
print result
```

If the list length is 1, the result is a list whose elements are the elements of the set. Each time the list length increases by one, the result is obtained by prepending each element of the set to every list built so far.
32,550,447
I have a set of integers which denotes the values that a list element can take, and a Python list of a given length. I want to fill the list with all possible combinations.

**example**

> list `length=3` and the `my_set ={1,-1}`

**Possible combinations**

```
[1,1,1],[1,1,-1],[1,-1,1],[1,-1,-1],
[-1,1,1],[-1,1,-1],[-1,-1,1],[-1,-1,-1]
```

I tried approaching this with the `random.sample` method from the random module, but it doesn't help. I did:

```
my_set=[1,-1]
from random import sample as sm
print sm(my_set,1) #Outputs: -1,-1,1,1 and so on..(random)
print sm(my_set,length_I_require) #Outputs: Error
```
2015/09/13
[ "https://Stackoverflow.com/questions/32550447", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4355529/" ]
Use the [`itertools.product()` function](https://docs.python.org/3/library/itertools.html#itertools.product):

```
from itertools import product

result = [list(combo) for combo in product(my_set, repeat=length)]
```

The `list()` call is optional; if tuples instead of lists are fine too, then `result = list(product(my_set, repeat=length))` suffices.

Demo:

```
>>> from itertools import product
>>> length = 3
>>> my_set = {1, -1}
>>> list(product(my_set, repeat=length))
[(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1)]
>>> [list(combo) for combo in product(my_set, repeat=length)]
[[1, 1, 1], [1, 1, -1], [1, -1, 1], [1, -1, -1], [-1, 1, 1], [-1, 1, -1], [-1, -1, 1], [-1, -1, -1]]
```

`random.sample()` gives you a random subset of the given input sequence; it doesn't produce all possible combinations of values.
```
lst_length = 3
my_set = {1,-1}
result = [[x] for x in my_set]
for i in range(1,lst_length):
    temp = []
    for candidate in my_set:
        for item in result:
            new_item = [candidate]
            new_item += item
            temp.append(new_item)
    result = temp
print result
```

If the list length is 1, the result is a list whose elements are the elements of the set. Each time the list length increases by one, the result is obtained by prepending each element of the set to every list built so far.
64,087,848
I'm trying to check how many times a value repeats in a row, but I ran into a problem where my code leaves the last number unchecked.

```
Ai = input()
arr = [int(x) for x in Ai.split()]

c = 0
frozen_num = arr[0]
for i in range(0,len(arr)):
    print(arr)
    if frozen_num == arr[0]:
        arr.remove(arr[0])
        c+=1
    else:
        frozen_num = arr[0]

print(c)
```

So let's say I enter: 1 1 1 1 5 5. My code will give an output of 5 and not 6. I hope you understand what I'm saying. I'm pretty new to Python, and this code is not finished; later the counts will be appended so I get the output [4, 2], because "1" repeats 4 times and "5" 2 times. Edited - I accidentally wrote 6 and 7 and not 5 and 6.
2020/09/27
[ "https://Stackoverflow.com/questions/64087848", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12733326/" ]
You could use the `Counter` class from the `collections` module to count the occurrences of each distinct number.

```
from collections import Counter

arr = list(Counter(input().split()).values())
print(arr)
```

Output with an input of `1 1 1 1 5 5`:

```
1 1 1 1 5 5
[4, 2]
```
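Note that `Counter` tallies total occurrences anywhere in the input, not consecutive runs; for `1 1 1 1 5 5` the two coincide, but for input like `1 1 5 1` they differ. If consecutive runs are what's wanted, `itertools.groupby`, which groups adjacent equal elements, is the standard-library fit — a minimal sketch:

```python
from itertools import groupby

arr = [int(x) for x in input().split()]
# groupby yields one (value, run) pair per run of adjacent equal elements
runs = [len(list(run)) for _, run in groupby(arr)]
print(runs)  # "1 1 1 1 5 5" -> [4, 2]; "1 1 5 1" -> [2, 1, 1]
```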
If you want to stick with your method and avoid imports altogether, you can add an if statement that detects when you reach the last element of your array and processes it differently from the others:

```
Ai=input()
arr = [int(x) for x in Ai.split()]
L=[]
c = 0
frozen_num = arr[0]
for i in range(0, len(arr)+1):
    print(arr)
    if len(arr)==1: #If we reached the end of the array
        if frozen_num == arr[0]: #if the last element of arr is the same as the previous one
            c+=1
            L.append(c)
        else: #if the last element is different, append the running count and then a 1
            L.append(c)
            L.append(1)
    elif frozen_num == arr[0]:
        arr.remove(arr[0])
        c += 1
    else:
        L.append(c)
        c=0
        frozen_num = arr[0]
print(L)
```

input

```
5 5 5 6 6 1
```

output

```
[3,2,1]
```
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is:

```
def fitfun(x, a):
    return np.exp(a*(x - b))
```

What I want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is:

```
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
```

But what I would like would be something like this:

```
b=10
coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata)
b=20
coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata)
```

So that I get the coefficient a for both cases (b=10 and b=20). I am new to Python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
I don't know if this is the "proper" way of doing things, but I usually wrap my function in a class, so that I can access parameters from `self`. Your example would then look like: ``` class fitClass: def __init__(self): pass def fitfun(self, x, a): return np.exp(a*(x - self.b)) inst = fitClass() inst.b = 10 coeffs, coeffs_cov = curve_fit(inst.fitfun, xdata, ydata) inst.b = 20 coeffs, coeffs_cov = curve_fit(inst.fitfun, xdata, ydata) ``` This approach avoids using global parameters, which are [generally considered evil](http://www.learncpp.com/cpp-tutorial/4-2a-why-global-variables-are-evil/).
You can define `b` as a global variable inside the fit function.

```
import numpy as np
from scipy.optimize import curve_fit

def fitfun(x, a):
    global b
    return np.exp(a*(x - b))

xdata = np.arange(10)

#first sample data set
ydata = np.exp(2 * (xdata - 10))
b = 10
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
print(coeffs)

#second sample data set
ydata = np.exp(5 * (xdata - 20))
b = 20
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
print(coeffs)
```

Output:

```
[2.]
[5.]
```
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is:

```
def fitfun(x, a):
    return np.exp(a*(x - b))
```

What I want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is:

```
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
```

But what I would like would be something like this:

```
b=10
coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata)
b=20
coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata)
```

So that I get the coefficient a for both cases (b=10 and b=20). I am new to Python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
I don't know if this is the "proper" way of doing things, but I usually wrap my function in a class, so that I can access parameters from `self`. Your example would then look like: ``` class fitClass: def __init__(self): pass def fitfun(self, x, a): return np.exp(a*(x - self.b)) inst = fitClass() inst.b = 10 coeffs, coeffs_cov = curve_fit(inst.fitfun, xdata, ydata) inst.b = 20 coeffs, coeffs_cov = curve_fit(inst.fitfun, xdata, ydata) ``` This approach avoids using global parameters, which are [generally considered evil](http://www.learncpp.com/cpp-tutorial/4-2a-why-global-variables-are-evil/).
UPDATE: Apologies for posting the untested code. As pointed out by @mr-t, the code indeed throws an error. It seems the kwargs argument of the curve\_fit function sets the keyword arguments of the `leastsq` and `least_squares` functions, not the keyword arguments of the fit function itself.

In this case, in addition to the answers proposed by others, another possible solution is to redefine the fit function to return the error and directly call the `leastsq` function, which allows the extra arguments to be passed:

```
from scipy.optimize import leastsq

def fitfun(a,x,y,b):
    return np.exp(a*(x - b)) - y

b=10
leastsq(fitfun,x0=1,args=(xdata,ydata,b))
```
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is:

```
def fitfun(x, a):
    return np.exp(a*(x - b))
```

What I want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is:

```
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
```

But what I would like would be something like this:

```
b=10
coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata)
b=20
coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata)
```

So that I get the coefficient a for both cases (b=10 and b=20). I am new to Python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
I don't know if this is the "proper" way of doing things, but I usually wrap my function in a class, so that I can access parameters from `self`. Your example would then look like: ``` class fitClass: def __init__(self): pass def fitfun(self, x, a): return np.exp(a*(x - self.b)) inst = fitClass() inst.b = 10 coeffs, coeffs_cov = curve_fit(inst.fitfun, xdata, ydata) inst.b = 20 coeffs, coeffs_cov = curve_fit(inst.fitfun, xdata, ydata) ``` This approach avoids using global parameters, which are [generally considered evil](http://www.learncpp.com/cpp-tutorial/4-2a-why-global-variables-are-evil/).
One really easy way to do this would be to use the `partial` function from functools. All you would have to do is the following; note that `b` has to be bound to a value here, otherwise I believe `scipy.optimize.curve_fit` would try to optimize `b` in addition to `a`.

```
from functools import partial

def fitfun(x, a, b):
    return np.exp(a*(x - b))

fitfun10 = partial(fitfun, b=10)

coeffs, coeffs_cov = curve_fit(fitfun10, xdata, ydata)

fitfun20 = partial(fitfun, b=20)

coeffs2, coeffs_cov2 = curve_fit(fitfun20, xdata, ydata)
```
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is:

```
def fitfun(x, a):
    return np.exp(a*(x - b))
```

What I want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is:

```
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
```

But what I would like would be something like this:

```
b=10
coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata)
b=20
coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata)
```

So that I get the coefficient a for both cases (b=10 and b=20). I am new to Python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
Let me also recommend lmfit (<http://lmfit.github.io/lmfit-py/>) and its Model class for this type of problem.

Lmfit provides a higher-level abstraction for curve fitting and optimization problems. With lmfit, each parameter in the model becomes an object that can be fixed, varied freely, or given upper and lower bounds without changing the fitting function. In addition, you can define multiple "independent variables" for any model.

That gives you two possible approaches. First, define parameters and fix `b`:

```
from lmfit import Model

def fitfun(x, a, b):
    return np.exp(a*(x - b))

# turn this model function into a Model:
mymodel = Model(fitfun)

# create parameters with initial values. Note that parameters are
# **named** according to the arguments of your model function:
params = mymodel.make_params(a=1, b=10)

# tell the 'b' parameter to not vary during the fit
params['b'].vary = False

# do fit
result = mymodel.fit(ydata, params, x=xdata)
print(result.fit_report())
```

The `params` object is not changed by the fit (updated values are in `result.params`), so to fit another set of data, you could just do:

```
params['b'].value = 20  # Note that vary is still False
result2 = mymodel.fit(ydata2, params, x=xdata2)
```

An alternative approach would be to define `b` as an independent variable:

```
mymodel = Model(fitfun, independent_vars=['x', 'b'])
params = mymodel.make_params(a=1)

result = mymodel.fit(ydata, params, x=xdata, b=10)
```

Lmfit has many other nice features for curve-fitting including composing complex models and evaluation of confidence intervals.
You can define `b` as a global variable inside the fit function.

```
import numpy as np
from scipy.optimize import curve_fit

def fitfun(x, a):
    global b
    return np.exp(a*(x - b))

xdata = np.arange(10)

#first sample data set
ydata = np.exp(2 * (xdata - 10))
b = 10
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
print(coeffs)

#second sample data set
ydata = np.exp(5 * (xdata - 20))
b = 20
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
print(coeffs)
```

Output:

```
[2.]
[5.]
```
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is:

```
def fitfun(x, a):
    return np.exp(a*(x - b))
```

What I want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is:

```
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
```

But what I would like would be something like this:

```
b=10
coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata)
b=20
coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata)
```

So that I get the coefficient a for both cases (b=10 and b=20). I am new to Python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
One really easy way to do this would be to use the `partial` function from functools. All you would have to do is the following; note that `b` has to be bound to a value here, otherwise I believe `scipy.optimize.curve_fit` would try to optimize `b` in addition to `a`.

```
from functools import partial

def fitfun(x, a, b):
    return np.exp(a*(x - b))

fitfun10 = partial(fitfun, b=10)

coeffs, coeffs_cov = curve_fit(fitfun10, xdata, ydata)

fitfun20 = partial(fitfun, b=20)

coeffs2, coeffs_cov2 = curve_fit(fitfun20, xdata, ydata)
```
You can define `b` as a global variable inside the fit function.

```
import numpy as np
from scipy.optimize import curve_fit

def fitfun(x, a):
    global b
    return np.exp(a*(x - b))

xdata = np.arange(10)

#first sample data set
ydata = np.exp(2 * (xdata - 10))
b = 10
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
print(coeffs)

#second sample data set
ydata = np.exp(5 * (xdata - 20))
b = 20
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
print(coeffs)
```

Output:

```
[2.]
[5.]
```
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is:

```
def fitfun(x, a):
    return np.exp(a*(x - b))
```

What I want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is:

```
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
```

But what I would like would be something like this:

```
b=10
coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata)
b=20
coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata)
```

So that I get the coefficient a for both cases (b=10 and b=20). I am new to Python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
Let me also recommend lmfit (<http://lmfit.github.io/lmfit-py/>) and its Model class for this type of problem.

Lmfit provides a higher-level abstraction for curve fitting and optimization problems. With lmfit, each parameter in the model becomes an object that can be fixed, varied freely, or given upper and lower bounds without changing the fitting function. In addition, you can define multiple "independent variables" for any model.

That gives you two possible approaches. First, define parameters and fix `b`:

```
from lmfit import Model

def fitfun(x, a, b):
    return np.exp(a*(x - b))

# turn this model function into a Model:
mymodel = Model(fitfun)

# create parameters with initial values. Note that parameters are
# **named** according to the arguments of your model function:
params = mymodel.make_params(a=1, b=10)

# tell the 'b' parameter to not vary during the fit
params['b'].vary = False

# do fit
result = mymodel.fit(ydata, params, x=xdata)
print(result.fit_report())
```

The `params` object is not changed by the fit (updated values are in `result.params`), so to fit another set of data, you could just do:

```
params['b'].value = 20  # Note that vary is still False
result2 = mymodel.fit(ydata2, params, x=xdata2)
```

An alternative approach would be to define `b` as an independent variable:

```
mymodel = Model(fitfun, independent_vars=['x', 'b'])
params = mymodel.make_params(a=1)

result = mymodel.fit(ydata, params, x=xdata, b=10)
```

Lmfit has many other nice features for curve-fitting including composing complex models and evaluation of confidence intervals.
UPDATE: Apologies for posting the untested code. As pointed out by @mr-t, the code indeed throws an error. It seems the kwargs argument of the curve\_fit function sets the keyword arguments of the `leastsq` and `least_squares` functions, not the keyword arguments of the fit function itself.

In this case, in addition to the answers proposed by others, another possible solution is to redefine the fit function to return the error and directly call the `leastsq` function, which allows the extra arguments to be passed:

```
from scipy.optimize import leastsq

def fitfun(a,x,y,b):
    return np.exp(a*(x - b)) - y

b=10
leastsq(fitfun,x0=1,args=(xdata,ydata,b))
```
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is:

```
def fitfun(x, a):
    return np.exp(a*(x - b))
```

What I want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is:

```
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
```

But what I would like would be something like this:

```
b=10
coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata)
b=20
coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata)
```

So that I get the coefficient a for both cases (b=10 and b=20). I am new to Python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
One really easy way to do this would be to use the `partial` function from functools. All you would have to do is the following; note that `b` has to be bound to a value here, otherwise I believe `scipy.optimize.curve_fit` would try to optimize `b` in addition to `a`.

```
from functools import partial

def fitfun(x, a, b):
    return np.exp(a*(x - b))

fitfun10 = partial(fitfun, b=10)

coeffs, coeffs_cov = curve_fit(fitfun10, xdata, ydata)

fitfun20 = partial(fitfun, b=20)

coeffs2, coeffs_cov2 = curve_fit(fitfun20, xdata, ydata)
```
UPDATE: Apologies for posting the untested code. As pointed out by @mr-t, the code indeed throws an error. It seems the kwargs argument of the curve\_fit function sets the keyword arguments of the `leastsq` and `least_squares` functions, not the keyword arguments of the fit function itself.

In this case, in addition to the answers proposed by others, another possible solution is to redefine the fit function to return the error and directly call the `leastsq` function, which allows the extra arguments to be passed:

```
from scipy.optimize import leastsq

def fitfun(a,x,y,b):
    return np.exp(a*(x - b)) - y

b=10
leastsq(fitfun,x0=1,args=(xdata,ydata,b))
```
49,813,481
I am trying to fit some data that I have using scipy.optimize.curve\_fit. My fit function is:

```
def fitfun(x, a):
    return np.exp(a*(x - b))
```

What I want is to define `a` as the fitting parameter, and `b` as a parameter that changes depending on the data I want to fit. This means that for one set of data I would want to fit the function: `np.exp(a*(x - 10))` while for another set I would like to fit the function `np.exp(a*(x - 20))`. In principle, I would like the parameter b to be passed in as any value. The way I am currently calling curve\_fit is:

```
coeffs, coeffs_cov = curve_fit(fitfun, xdata, ydata)
```

But what I would like would be something like this:

```
b=10
coeffs, coeffs_cov = curve_fit(fitfun(b), xdata, ydata)
b=20
coeffs2, coeffs_cov2 = curve_fit(fitfun(b), xdata, ydata)
```

So that I get the coefficient a for both cases (b=10 and b=20). I am new to Python so I cannot make it work, even though I have tried to read the documentation. Any help would be greatly appreciated.
2018/04/13
[ "https://Stackoverflow.com/questions/49813481", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7654219/" ]
Let me also recommend lmfit (<http://lmfit.github.io/lmfit-py/>) and its Model class for this type of problem.

Lmfit provides a higher-level abstraction for curve fitting and optimization problems. With lmfit, each parameter in the model becomes an object that can be fixed, varied freely, or given upper and lower bounds without changing the fitting function. In addition, you can define multiple "independent variables" for any model.

That gives you two possible approaches. First, define parameters and fix `b`:

```
from lmfit import Model

def fitfun(x, a, b):
    return np.exp(a*(x - b))

# turn this model function into a Model:
mymodel = Model(fitfun)

# create parameters with initial values. Note that parameters are
# **named** according to the arguments of your model function:
params = mymodel.make_params(a=1, b=10)

# tell the 'b' parameter to not vary during the fit
params['b'].vary = False

# do fit
result = mymodel.fit(ydata, params, x=xdata)
print(result.fit_report())
```

The `params` object is not changed by the fit (updated values are in `result.params`), so to fit another set of data, you could just do:

```
params['b'].value = 20  # Note that vary is still False
result2 = mymodel.fit(ydata2, params, x=xdata2)
```

An alternative approach would be to define `b` as an independent variable:

```
mymodel = Model(fitfun, independent_vars=['x', 'b'])
params = mymodel.make_params(a=1)

result = mymodel.fit(ydata, params, x=xdata, b=10)
```

Lmfit has many other nice features for curve-fitting including composing complex models and evaluation of confidence intervals.
One really easy way to do this would be to use the `partial` function from functools. All you would have to do is the following; note that `b` has to be bound to a value here, otherwise I believe `scipy.optimize.curve_fit` would try to optimize `b` in addition to `a`.

```
from functools import partial

def fitfun(x, a, b):
    return np.exp(a*(x - b))

fitfun10 = partial(fitfun, b=10)

coeffs, coeffs_cov = curve_fit(fitfun10, xdata, ydata)

fitfun20 = partial(fitfun, b=20)

coeffs2, coeffs_cov2 = curve_fit(fitfun20, xdata, ydata)
```
63,153,688
I edited this post so that I could give more info about the goal I am trying to achieve.

Basically, I want to be able to open VSCode in a directory that I can input inside a Python file I am running through a shell command I created. So what I need is for the Python file to ask me for the name of the folder I want to open and pass that information to the terminal, so that it can then cd into that folder and open VSCode automatically.

I tried with os.system(), which is, as I read, one of the ways I can achieve that goal. The problem is that if I use standard commands like os.system('date') or os.system('code') it works without any problem. If I try to use os.system(cd /directory/) nothing happens. As suggested, I also tried `subprocess.call(["cd", "/home/simon/Desktop"])`, but the terminal gives me the error: `FileNotFoundError: [Errno 2] No such file or directory: 'cd'`

I am going to include both the Python file:

```
import os, subprocess

PATH = "/home/simon/Linux_Storage/Projects"

def main():
    print("\n")
    print("********************")
    for folder in os.listdir(PATH):
        print(folder)
    print("********************")
    project = input("Choose project: ")
    print("\n")

    folders = os.listdir(PATH)
    while project:
        if project in folders:
            break
        else:
            print("Project doesn't exist.")
            project = input("Choose project: ")
    os.system(f"cd /home/simon/Linux_Storage/Projects/{project}")

if __name__ == "__main__":
    main()
```

and the shell script (maybe I should change something here):

```
function open() {
    python3 .open.py
    code .
}
```
2020/07/29
[ "https://Stackoverflow.com/questions/63153688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12288571/" ]
Store dataValue in some variable and use an expectation to wait for your closure to execute, then test.

Note: This example was written in Swift 4.

```
let yourExpectationName = expectation(description: "xyz")
var dataToAssert = [String]() //replace with your data type

sut.apiSuccessClouser = { dataValue in
    dataToAssert = dataValue
    yourExpectationName.fulfill()
}

waitForExpectations(timeout: 3) { (error) in //specify wait time in seconds
    XCTAssert(dataToAssert)
}
```
apiSuccessClouser in MockApiService is a property of closure type `(()->Void?)?`. In the line `sut.apiSuccessClouser = { ... }` you assign the property apiSuccessClouser a closure, but you never call this closure, so the `print("apiSuccessClouser")` never gets executed.

To execute the print("apiSuccessClouser") you need to call the closure:

```
sut.apiSuccessClouser?()
```

So refactor the test like:

```
func test_fetch_photo() {
    sut.apiSuccessClouser = { dataValue in
        print("apiSuccessClouser") // This now executes
        XCTAssert(dataValue)
    }
    sut.apiSuccessClouser?()
}
```

For more info: <https://docs.swift.org/swift-book/LanguageGuide/Closures.html>
63,153,688
I edited this post so that I could give more info about the goal I am trying to achieve.

Basically, I want to be able to open VSCode in a directory that I can input inside a Python file I am running through a shell command I created. So what I need is for the Python file to ask me for the name of the folder I want to open and pass that information to the terminal, so that it can then cd into that folder and open VSCode automatically.

I tried with os.system(), which is, as I read, one of the ways I can achieve that goal. The problem is that if I use standard commands like os.system('date') or os.system('code') it works without any problem. If I try to use os.system(cd /directory/) nothing happens. As suggested, I also tried `subprocess.call(["cd", "/home/simon/Desktop"])`, but the terminal gives me the error: `FileNotFoundError: [Errno 2] No such file or directory: 'cd'`

I am going to include both the Python file:

```
import os, subprocess

PATH = "/home/simon/Linux_Storage/Projects"

def main():
    print("\n")
    print("********************")
    for folder in os.listdir(PATH):
        print(folder)
    print("********************")
    project = input("Choose project: ")
    print("\n")

    folders = os.listdir(PATH)
    while project:
        if project in folders:
            break
        else:
            print("Project doesn't exist.")
            project = input("Choose project: ")
    os.system(f"cd /home/simon/Linux_Storage/Projects/{project}")

if __name__ == "__main__":
    main()
```

and the shell script (maybe I should change something here):

```
function open() {
    python3 .open.py
    code .
}
```
2020/07/29
[ "https://Stackoverflow.com/questions/63153688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12288571/" ]
Store dataValue in some variable and use an expectation to wait for your closure to execute, then test.

Note: This example was written in Swift 4.

```
let yourExpectationName = expectation(description: "xyz")
var dataToAssert = [String]() //replace with your data type

sut.apiSuccessClouser = { dataValue in
    dataToAssert = dataValue
    yourExpectationName.fulfill()
}

waitForExpectations(timeout: 3) { (error) in //specify wait time in seconds
    XCTAssert(dataToAssert)
}
```
To test that kind of asynchronous code with vanilla XCTest, you'll need to use an [`XCTestExpectation`](https://developer.apple.com/documentation/xctest/xctestexpectation).

```
func test_fetch_photo() {
    let expectation = XCTestExpectation(description: "photo is fetched")

    sut.apiSuccessClouser = { dataValue in
        // I like to run the assertions directly inside of the async closure, that way
        // you don't need to have vars around to store the value in order to check it
        // afterwards
        XCTAssert(dataValue)
        expectation.fulfill()
    }

    sut.apiSuccessClouser?()

    // The timeout duration depends on what kind of async work the closure is doing,
    // here, because it's called directly, you probably don't need long.
    wait(for: [expectation], timeout: 0.1)
}
```

The assertions library [Nimble](https://github.com/Quick/Nimble) also offers a neat way to test asynchronous code.

```
func test_fetch_photo() {
    waitUntil { done in
        self.sut.apiSuccessClouser = { dataValue in
            assert(dataValue).toNot(beNil())
            done()
        }

        self.sut.apiSuccessClouser?()
    }
}
```

Alongside `waitUntil`, Nimble also has a [`.toEventually` assertion](https://github.com/Quick/Nimble/tree/2c6978e887d7caa8acc2cab7b38e89d4d63eafb6#asynchronous-expectations).

You asked about using [Quick](https://github.com/Quick/Quick), too. Quick is a test *harness* library; it allows you to write tests in a different way, but it doesn't make a difference in how you write the actual expectation.

```
describe("Name of your system under test") {
    it("does something asynchronously") {
        let sut = ...

        waitUntil { done in
            self.sut.apiSuccessClouser = { dataValue in
                assert(dataValue).toNot(beNil())
                done()
            }

            self.sut.apiSuccessClouser?()
        }
    }
}
```
54,060,243
Hi, ultimately I'm trying to install Django on my computer, but I'm unable to do this, as when I run pip in the command line I get the following error message:

`'pip' is not recognized as an internal or external command, operable program or batch file.`

I've added the following locations to my path environment: `C:\Python37-32;C:\Python37-32\Lib\site-packages;C:\Python37-32\Scripts`

I've also tried to reinstall pip using 'py -3.7 -m ensurepip -U --default-pip', but then I get the following error message:

`'Requirement already up-to-date: setuptools in c:\users\tom_p\anaconda3\lib\site-packages (40.6.3) Requirement already up-to-date: pip in c:\users\tom_p\anaconda3\lib\site-packages (18.1) spyder 3.3.2 requires pyqt5<5.10; python_version >= "3", which is not installed. xlwings 0.15.1 has requirement pywin32>=224, but you'll have pywin32 223 which is incompatible.'`

I'm new to this so I'm struggling with the install, and I'm confused by the fact that pip is in C:\Python37-32\Scripts while the above error seems to be looking in the Anaconda folder. The only reason I installed Anaconda was to use the Spyder IDE. I've installed Python 3.7 32-bit on Windows 10; any help would be much appreciated. Thanks
2019/01/06
[ "https://Stackoverflow.com/questions/54060243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9815902/" ]
You can put a conditional expression on a single-item update to make the update fail if the condition is not met. However, it will not fail an entire batch, just the single update. The batch update response will contain information on which updates succeeded and which failed.
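For illustration, a minimal boto3 sketch of such a conditional single-item update (the table, key, and attribute names here are hypothetical); a failed condition raises `ConditionalCheckFailedException` for that one item only:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("ProductCatalog")  # hypothetical table

try:
    table.update_item(
        Key={"Id": 456},
        UpdateExpression="SET Price = Price - :discount",
        ConditionExpression="Price > :limit",
        ExpressionAttributeValues={":discount": 75, ":limit": 500},
    )
except ClientError as err:
    # Only this one update is rejected; other writes are unaffected.
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        print("condition not met, update skipped")
    else:
        raise
```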
It's possible to do this by using a conditional expression as a filter expression. But please don't.

DynamoDB is a key-value NoSQL store, which means you are meant to get at the right data by keys only. If you filter instead, it will loop through a lot of records and slow down your app. You can check this article: [5 things that you should know about DynamoDB](https://problemlover.com/5-things-that-you-should-know-before-using-dynamodb-for-your-project/)

So when you CRUD the data, the recommended way to interact with it is by key. I can translate it to pseudo code like this:

```
GET:
SELECT * FROM THE TABLE WHERE Id ='SampleId'
UPDATE:
UPDATE THE ITEM WHERE id = 'SampleId'
DELETE:
DELETE THE ITEM WHERE id = 'SampleId'
```

To satisfy your needs, you need to use Elasticsearch to find the right items; after that you can update the data by key.
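A rough boto3 translation of that pseudo code, touching items strictly by key (table, key, and attribute names are hypothetical):

```python
import boto3

table = boto3.resource("dynamodb").Table("SampleTable")  # hypothetical table

# GET: fetch the item by its key
item = table.get_item(Key={"id": "SampleId"}).get("Item")

# UPDATE: modify the item addressed by its key
table.update_item(
    Key={"id": "SampleId"},
    UpdateExpression="SET #s = :v",
    ExpressionAttributeNames={"#s": "status"},  # hypothetical attribute
    ExpressionAttributeValues={":v": "done"},
)

# DELETE: remove the item by its key
table.delete_item(Key={"id": "SampleId"})
```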
54,060,243
Hi, ultimately I'm trying to install Django on my computer, but I'm unable to do this, as when I run pip in the command line I get the following error message:

`'pip' is not recognized as an internal or external command, operable program or batch file.`

I've added the following locations to my path environment: `C:\Python37-32;C:\Python37-32\Lib\site-packages;C:\Python37-32\Scripts`

I've also tried to reinstall pip using 'py -3.7 -m ensurepip -U --default-pip', but then I get the following error message:

`'Requirement already up-to-date: setuptools in c:\users\tom_p\anaconda3\lib\site-packages (40.6.3) Requirement already up-to-date: pip in c:\users\tom_p\anaconda3\lib\site-packages (18.1) spyder 3.3.2 requires pyqt5<5.10; python_version >= "3", which is not installed. xlwings 0.15.1 has requirement pywin32>=224, but you'll have pywin32 223 which is incompatible.'`

I'm new to this so I'm struggling with the install, and I'm confused by the fact that pip is in C:\Python37-32\Scripts while the above error seems to be looking in the Anaconda folder. The only reason I installed Anaconda was to use the Spyder IDE. I've installed Python 3.7 32-bit on Windows 10; any help would be much appreciated. Thanks
2019/01/06
[ "https://Stackoverflow.com/questions/54060243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9815902/" ]
You should look at [DynamoDB transactions](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/transactions.html). They give you the conditional expressions you are looking for plus all-or-nothing batch updates, as sketched below.
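A hedged boto3 sketch of what that looks like (low-level client, so attribute values carry type tags; the table and attribute names are hypothetical) — if any condition expression fails, the whole transaction is cancelled:

```python
import boto3

client = boto3.client("dynamodb")

client.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "ProductCatalog",  # hypothetical table
                "Key": {"Id": {"N": "456"}},
                "UpdateExpression": "SET Price = Price - :d",
                "ConditionExpression": "Price > :limit",
                "ExpressionAttributeValues": {
                    ":d": {"N": "75"},
                    ":limit": {"N": "500"},
                },
            }
        },
        # more Update/Put/Delete/ConditionCheck entries can go here;
        # they all succeed together or none are applied
    ]
)
```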
It's possible to do this by using a conditional expression as a filter expression. But please don't.

DynamoDB is a key-value NoSQL store, which means you are meant to get at the right data by keys only. If you filter instead, it will loop through a lot of records and slow down your app. You can check this article: [5 things that you should know about DynamoDB](https://problemlover.com/5-things-that-you-should-know-before-using-dynamodb-for-your-project/)

So when you CRUD the data, the recommended way to interact with it is by key. I can translate it to pseudo code like this:

```
GET:
SELECT * FROM THE TABLE WHERE Id ='SampleId'
UPDATE:
UPDATE THE ITEM WHERE id = 'SampleId'
DELETE:
DELETE THE ITEM WHERE id = 'SampleId'
```

To satisfy your needs, you need to use Elasticsearch to find the right items; after that you can update the data by key.
54,060,243
Hi, ultimately I'm trying to install Django on my computer, but I'm unable to do this, as when I run pip in the command line I get the following error message:

`'pip' is not recognized as an internal or external command, operable program or batch file.`

I've added the following locations to my path environment: `C:\Python37-32;C:\Python37-32\Lib\site-packages;C:\Python37-32\Scripts`

I've also tried to reinstall pip using 'py -3.7 -m ensurepip -U --default-pip', but then I get the following error message:

`'Requirement already up-to-date: setuptools in c:\users\tom_p\anaconda3\lib\site-packages (40.6.3) Requirement already up-to-date: pip in c:\users\tom_p\anaconda3\lib\site-packages (18.1) spyder 3.3.2 requires pyqt5<5.10; python_version >= "3", which is not installed. xlwings 0.15.1 has requirement pywin32>=224, but you'll have pywin32 223 which is incompatible.'`

I'm new to this so I'm struggling with the install, and I'm confused by the fact that pip is in C:\Python37-32\Scripts while the above error seems to be looking in the Anaconda folder. The only reason I installed Anaconda was to use the Spyder IDE. I've installed Python 3.7 32-bit on Windows 10; any help would be much appreciated. Thanks
2019/01/06
[ "https://Stackoverflow.com/questions/54060243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9815902/" ]
I guess you're looking for conditional expressions; check this [link](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html).

You should use UpdateItem, which edits an existing item's attributes, or adds a new item to the table if it does not already exist.

e.g. copied from the AWS docs,

The following example performs an UpdateItem operation. It attempts to reduce the Price of a product by 75—but the condition expression prevents the update if the current Price is below 500:

```
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "456"}}' \
    --update-expression "SET Price = Price - :discount" \
    --condition-expression "Price > :limit" \
    --expression-attribute-values file://values.json
```

Table ProductCatalog is like this,

```
{
    "Id": { "N": "456"},
    "Price": {"N": "650"},
    "ProductCategory": {"S": "Sporting Goods"}
}
```

and values.json like this,

```
{
    ":discount": { "N": "75"},
    ":limit": {"N": "500"}
}
```

Here the Price is initially 650, and the conditional expression lets the update reduce it by 75 only while Price is greater than 500. So the first two runs of update-item will work; on the third run, since Price has been reduced to 500 (which is not greater than 500), the update will fail.
It's possible to do this by using a conditional expression as a filter expression. But please don't.

DynamoDB is a key-value NoSQL store, which means you are meant to get at the right data by keys only. If you filter instead, it will loop through a lot of records and slow down your app. You can check this article: [5 things that you should know about DynamoDB](https://problemlover.com/5-things-that-you-should-know-before-using-dynamodb-for-your-project/)

So when you CRUD the data, the recommended way to interact with it is by key. I can translate it to pseudo code like this:

```
GET:
SELECT * FROM THE TABLE WHERE Id ='SampleId'
UPDATE:
UPDATE THE ITEM WHERE id = 'SampleId'
DELETE:
DELETE THE ITEM WHERE id = 'SampleId'
```

To satisfy your needs, you need to use Elasticsearch to find the right items; after that you can update the data by key.
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project, which is a server-side process that does not run inside any kind of framework.

The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect.

What is the best "drop in" solution to switch this over to using connection pooling in Python? I am imagining something like the commons DBCP solution for Java.

The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results.

Edited to add: After some more searching I found [antipool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to Python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes.

It appears to me that ORMs are more popular than "raw" SQL connections. Why? Because Python *is* OO, and the mapping from a SQL row to an object *is* absolutely essential. There aren't many use cases where you deal with SQL rows that don't map to Python objects.

I think that [SQLAlchemy](http://www.sqlalchemy.org/) or [SQLObject](http://www.sqlobject.org/) (and the associated connection pooling) are the more idiomatic Pythonic solutions.

Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL *is* used, but it's always used in simpler or more controlled applications where pooling isn't helpful.

I think you might have two alternatives:

1. Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first (all that work wasted), you should be able to leverage all the design and thought. It's merely an exercise in adopting a widely-used ORM and pooling solution.
2. Roll out your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through.
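For reference, SQLAlchemy's pooling comes essentially for free once you create an engine — a minimal sketch with a hypothetical connection URL (the pool parameters shown are real `create_engine` options):

```python
from sqlalchemy import create_engine, text

# The engine keeps a QueuePool of connections internally; each connect()
# checks one out, and closing it returns it to the pool.
engine = create_engine(
    "mysql://user:password@localhost/mydb",  # hypothetical credentials
    pool_size=10,       # connections kept open in the pool
    max_overflow=5,     # extra connections allowed during bursts
    pool_recycle=3600,  # recycle connections older than an hour
)

conn = engine.connect()
rows = conn.execute(text("SELECT 1")).fetchall()
conn.close()  # returns the connection to the pool, not to the server
```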
I did it for OpenSearch, so you can refer to it.

```
from opensearchpy import OpenSearch

# `settings` is assumed to be your project's configuration module
def get_connection():
    connection = None
    try:
        connection = OpenSearch(
            hosts=[{'host': settings.OPEN_SEARCH_HOST, 'port': settings.OPEN_SEARCH_PORT}],
            http_compress=True,
            http_auth=(settings.OPEN_SEARCH_USER, settings.OPEN_SEARCH_PASSWORD),
            use_ssl=True,
            verify_certs=True,
            ssl_assert_hostname=False,
            ssl_show_warn=False,
        )
    except Exception as error:
        print("Error: Connection not established {}".format(error))
    else:
        print("Connection established")
    return connection


class OpenSearchClient(object):
    connection_pool = []
    connection_in_use = []

    def __init__(self):
        if not OpenSearchClient.connection_pool:
            OpenSearchClient.connection_pool = [get_connection() for i in range(0, settings.CONNECTION_POOL_SIZE)]

    def search_data(self, query="", index_name=settings.OPEN_SEARCH_INDEX):
        # borrow a connection from the pool for the duration of the search
        available_cursor = OpenSearchClient.connection_pool.pop(0)
        OpenSearchClient.connection_in_use.append(available_cursor)
        response = available_cursor.search(body=query, index=index_name)
        # return the connection to the pool instead of closing it;
        # closing it here would make the pooled client unusable for reuse
        OpenSearchClient.connection_pool.append(available_cursor)
        OpenSearchClient.connection_in_use.pop(-1)
        return response
```
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes. It appears to me that ORM's are more popular than "raw" SQL connections. Why? Because Python *is* OO, and the mapping from a SQL row to an object *is* absolutely essential. There aren't many use cases where you deal with SQL rows that don't map to Python objects. I think that [SQLAlchemy](http://www.sqlalchemy.org/) or [SQLObject](http://www.sqlobject.org/) (and the associated connection pooling) are the more idiomatic Pythonic solutions. Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL *is* used, but it's always used in simpler or more controlled applications where pooling isn't helpful. I think you might have two alternatives: 1. Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first (all that work wasted), you should be able to leverage all the design and thought. It's merely an exercise in adopting a widely-used ORM and pooling solution. 2. Roll out your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through.
Wrap your connection class. Set a limit on how many connections you make. Return an unused connection. Intercept close to free the connection. Update: I put something like this in dbpool.py: ``` import sqlalchemy.pool as pool import MySQLdb as mysql mysql = pool.manage(mysql) ```
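For reference, the proxy returned by `pool.manage` wraps the whole DBAPI module, so calling code can keep using `connect` as before. A minimal sketch of how it would be used (note that `sqlalchemy.pool.manage` only exists in older SQLAlchemy releases, and the connection parameters below are placeholders):

```
import dbpool

# parameters are placeholders; connections are checked out of the pool
conn = dbpool.mysql.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute("SELECT 1")
conn.close()  # returns the connection to the pool instead of discarding it
```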
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes. It appears to me that ORM's are more popular than "raw" SQL connections. Why? Because Python *is* OO, and the mapping from a SQL row to an object *is* absolutely essential. There aren't many use cases where you deal with SQL rows that don't map to Python objects. I think that [SQLAlchemy](http://www.sqlalchemy.org/) or [SQLObject](http://www.sqlobject.org/) (and the associated connection pooling) are the more idiomatic Pythonic solutions. Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL *is* used, but it's always used in simpler or more controlled applications where pooling isn't helpful. I think you might have two alternatives: 1. Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first (all that work wasted), you should be able to leverage all the design and thought. It's merely an exercise in adopting a widely-used ORM and pooling solution. 2. Roll out your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through.
Old thread, but for general-purpose pooling (connections or any expensive object), I use something like:

```
import contextlib
import multiprocessing

def pool(ctor, limit=None):
    local_pool = multiprocessing.Queue()
    n = multiprocessing.Value('i', 0)

    @contextlib.contextmanager
    def pooled(ctor=ctor, lpool=local_pool, n=n):
        # block iff at limit
        try:
            i = lpool.get(limit and n.value >= limit)
        except multiprocessing.queues.Empty:
            n.value += 1
            i = ctor()
        yield i
        lpool.put(i)

    return pooled
```

Which constructs lazily, has an optional limit, and should generalize to any use case I can think of. Of course, this assumes that you really need the pooling of whatever resource, which you may not for many modern SQL-likes. Usage:

```
# in main:
my_pool = pool(lambda: do_something())

# in thread:
with my_pool() as my_obj:
    my_obj.do_something()
```

This does assume that whatever object ctor creates has an appropriate destructor if needed (some servers don't kill connection objects unless they are closed explicitly).
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
Old thread, but for general-purpose pooling (connections or any expensive object), I use something like:

```
import contextlib
import multiprocessing

def pool(ctor, limit=None):
    local_pool = multiprocessing.Queue()
    n = multiprocessing.Value('i', 0)

    @contextlib.contextmanager
    def pooled(ctor=ctor, lpool=local_pool, n=n):
        # block iff at limit
        try:
            i = lpool.get(limit and n.value >= limit)
        except multiprocessing.queues.Empty:
            n.value += 1
            i = ctor()
        yield i
        lpool.put(i)

    return pooled
```

Which constructs lazily, has an optional limit, and should generalize to any use case I can think of. Of course, this assumes that you really need the pooling of whatever resource, which you may not for many modern SQL-likes. Usage:

```
# in main:
my_pool = pool(lambda: do_something())

# in thread:
with my_pool() as my_obj:
    my_obj.do_something()
```

This does assume that whatever object ctor creates has an appropriate destructor if needed (some servers don't kill connection objects unless they are closed explicitly).
Making your own connection pool is a BAD idea if your app ever decides to start using multi-threading. Making a connection pool for a multi-threaded application is much more complicated than one for a single-threaded application. You can use something like PySQLPool in that case. It's also a BAD idea to use an ORM if you're looking for performance. If you'll be dealing with huge/heavy databases that have to handle lots of selects, inserts, updates and deletes at the same time, then you're going to need performance, which means you'll need custom SQL written to optimize lookups and lock times. With an ORM you don't usually have that flexibility. So basically, yeah, you can make your own connection pool and use ORMs, but only if you're sure you won't need any of what I just described.
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
I've just been looking for the same sort of thing. I've found [pysqlpool](https://pythonhosted.org/PySQLPool/tutorial.html) and the [sqlalchemy pool module](https://docs.sqlalchemy.org/en/14/)
Making your own connection pool is a BAD idea if your app ever decides to start using multi-threading. Making a connection pool for a multi-threaded application is much more complicated than one for a single-threaded application. You can use something like PySQLPool in that case. It's also a BAD idea to use an ORM if you're looking for performance. If you'll be dealing with huge/heavy databases that have to handle lots of selects, inserts, updates and deletes at the same time, then you're going to need performance, which means you'll need custom SQL written to optimize lookups and lock times. With an ORM you don't usually have that flexibility. So basically, yeah, you can make your own connection pool and use ORMs, but only if you're sure you won't need any of what I just described.
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes. It appears to me that ORM's are more popular than "raw" SQL connections. Why? Because Python *is* OO, and the mapping from a SQL row to an object *is* absolutely essential. There aren't many use cases where you deal with SQL rows that don't map to Python objects. I think that [SQLAlchemy](http://www.sqlalchemy.org/) or [SQLObject](http://www.sqlobject.org/) (and the associated connection pooling) are the more idiomatic Pythonic solutions. Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL *is* used, but it's always used in simpler or more controlled applications where pooling isn't helpful. I think you might have two alternatives: 1. Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first (all that work wasted), you should be able to leverage all the design and thought. It's merely an exercise in adopting a widely-used ORM and pooling solution. 2. Roll out your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through.
Use `DBUtils`, simple and reliable. ``` pip install DBUtils ```
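A minimal sketch of how DBUtils is typically used with a MySQL driver; the import path differs between DBUtils 1.x (`from DBUtils.PooledDB import PooledDB`) and 2.x, and the connection parameters are placeholders:

```
import pymysql
from dbutils.pooled_db import PooledDB  # DBUtils 2.x import path

# connection parameters are placeholders
pool = PooledDB(creator=pymysql, maxconnections=5,
                host="localhost", user="user",
                password="secret", database="mydb")

conn = pool.connection()  # borrow a connection from the pool
cur = conn.cursor()
cur.execute("SELECT 1")
cur.close()
conn.close()              # returns the connection to the pool
```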
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
IMO, the "more obvious/more idiomatic/better solution" is to use an existing ORM rather than invent DAO-like classes. It appears to me that ORM's are more popular than "raw" SQL connections. Why? Because Python *is* OO, and the mapping from a SQL row to an object *is* absolutely essential. There aren't many use cases where you deal with SQL rows that don't map to Python objects. I think that [SQLAlchemy](http://www.sqlalchemy.org/) or [SQLObject](http://www.sqlobject.org/) (and the associated connection pooling) are the more idiomatic Pythonic solutions. Pooling as a separate feature isn't very common because pure SQL (without object mapping) isn't very popular for the kind of complex, long-running processes that benefit from connection pooling. Yes, pure SQL *is* used, but it's always used in simpler or more controlled applications where pooling isn't helpful. I think you might have two alternatives: 1. Revise your classes to use SQLAlchemy or SQLObject. While this appears painful at first (all that work wasted), you should be able to leverage all the design and thought. It's merely an exercise in adopting a widely-used ORM and pooling solution. 2. Roll out your own simple connection pool using the algorithm you outlined -- a simple Set or List of connections that you cycle through.
Making your own connection pool is a BAD idea if your app ever decides to start using multi-threading. Making a connection pool for a multi-threaded application is much more complicated than one for a single-threaded application. You can use something like PySQLPool in that case. It's also a BAD idea to use an ORM if you're looking for performance. If you'll be dealing with huge/heavy databases that have to handle lots of selects, inserts, updates and deletes at the same time, then you're going to need performance, which means you'll need custom SQL written to optimize lookups and lock times. With an ORM you don't usually have that flexibility. So basically, yeah, you can make your own connection pool and use ORMs, but only if you're sure you won't need any of what I just described.
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
Wrap your connection class. Set a limit on how many connections you make. Return an unused connection. Intercept close to free the connection. Update: I put something like this in dbpool.py: ``` import sqlalchemy.pool as pool import MySQLdb as mysql mysql = pool.manage(mysql) ```
Old thread, but for general-purpose pooling (connections or any expensive object), I use something like:

```
import contextlib
import multiprocessing

def pool(ctor, limit=None):
    local_pool = multiprocessing.Queue()
    n = multiprocessing.Value('i', 0)

    @contextlib.contextmanager
    def pooled(ctor=ctor, lpool=local_pool, n=n):
        # block iff at limit
        try:
            i = lpool.get(limit and n.value >= limit)
        except multiprocessing.queues.Empty:
            n.value += 1
            i = ctor()
        yield i
        lpool.put(i)

    return pooled
```

Which constructs lazily, has an optional limit, and should generalize to any use case I can think of. Of course, this assumes that you really need the pooling of whatever resource, which you may not for many modern SQL-likes. Usage:

```
# in main:
my_pool = pool(lambda: do_something())

# in thread:
with my_pool() as my_obj:
    my_obj.do_something()
```

This does assume that whatever object ctor creates has an appropriate destructor if needed (some servers don't kill connection objects unless they are closed explicitly).
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
Wrap your connection class. Set a limit on how many connections you make. Return an unused connection. Intercept close to free the connection. Update: I put something like this in dbpool.py: ``` import sqlalchemy.pool as pool import MySQLdb as mysql mysql = pool.manage(mysql) ```
Replying to an old thread but the last time I checked, MySQL offers connection pooling as part of its drivers. You can check them out at : > > <https://dev.mysql.com/doc/connector-python/en/connector-python-connection-pooling.html> > > > From TFA, Assuming you want to open a connection pool explicitly (as OP had stated): ``` dbconfig = { "database": "test", "user":"joe" } cnxpool = mysql.connector.pooling.MySQLConnectionPool(pool_name = "mypool",pool_size = 3, **dbconfig) ``` This pool is then accessed by requesting from the pool through the get\_connection() function. ``` cnx1 = cnxpool.get_connection() cnx2 = cnxpool.get_connection() ```
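When you are done with a connection obtained from the pool, calling `close()` on it returns it to the pool for reuse rather than actually disconnecting:

```
cnx1.close()  # hands the connection back to the pool instead of dropping it
```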
98,687
I have developed some custom DAO-like classes to meet some very specialized requirements for my project that is a server-side process that does not run inside any kind of framework. The solution works great except that every time a new request is made, I open a new connection via MySQLdb.connect. What is the best "drop in" solution to switch this over to using connection pooling in python? I am imagining something like the commons DBCP solution for Java. The process is long running and has many threads that need to make requests, but not all at the same time... specifically they do quite a lot of work before brief bursts of writing out a chunk of their results. Edited to add: After some more searching I found [anitpool.py](http://furius.ca/antiorm/) which looks decent, but as I'm relatively new to python I guess I just want to make sure I'm not missing a more obvious/more idiomatic/better solution.
2008/09/19
[ "https://Stackoverflow.com/questions/98687", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2168/" ]
In MySQL? I'd say don't bother with the connection pooling. They're often a source of trouble and with MySQL they're not going to bring you the performance advantage you're hoping for. This road may be a lot of effort to follow--politically--because there's so much best practices hand waving and textbook verbiage in this space about the advantages of connection pooling. Connection pools are simply a bridge between the post-web era of stateless applications (e.g. HTTP protocol) and the pre-web era of stateful long-lived batch processing applications. Since connections were very expensive in pre-web databases (since no one used to care too much about how long a connection took to establish), post-web applications devised this connection pool scheme so that every hit didn't incur this huge processing overhead on the RDBMS. Since MySQL is more of a web-era RDBMS, connections are extremely lightweight and fast. I have written many high volume web applications that don't use a connection pool at all for MySQL. This is a complication you may benefit from doing without, so long as there isn't a political obstacle to overcome.
I did it for OpenSearch, so you can refer to it.

```
from opensearchpy import OpenSearch

# `settings` is assumed to be your project's configuration module
def get_connection():
    connection = None
    try:
        connection = OpenSearch(
            hosts=[{'host': settings.OPEN_SEARCH_HOST, 'port': settings.OPEN_SEARCH_PORT}],
            http_compress=True,
            http_auth=(settings.OPEN_SEARCH_USER, settings.OPEN_SEARCH_PASSWORD),
            use_ssl=True,
            verify_certs=True,
            ssl_assert_hostname=False,
            ssl_show_warn=False,
        )
    except Exception as error:
        print("Error: Connection not established {}".format(error))
    else:
        print("Connection established")
    return connection


class OpenSearchClient(object):
    connection_pool = []
    connection_in_use = []

    def __init__(self):
        if not OpenSearchClient.connection_pool:
            OpenSearchClient.connection_pool = [get_connection() for i in range(0, settings.CONNECTION_POOL_SIZE)]

    def search_data(self, query="", index_name=settings.OPEN_SEARCH_INDEX):
        # borrow a connection from the pool for the duration of the search
        available_cursor = OpenSearchClient.connection_pool.pop(0)
        OpenSearchClient.connection_in_use.append(available_cursor)
        response = available_cursor.search(body=query, index=index_name)
        # return the connection to the pool instead of closing it;
        # closing it here would make the pooled client unusable for reuse
        OpenSearchClient.connection_pool.append(available_cursor)
        OpenSearchClient.connection_in_use.pop(-1)
        return response
```
18,808,150
I have two accounts on my system, an admin account and a user account. I use the admin account to install macport and have set the default python using ``` sudo port select --set python python27 ``` On the user account I can run all the python I need using ``` /opt/local/bin/python ``` but how do I select that to be default? ``` port select --list python ``` reports ``` python27 (active) ``` but which python returns ``` /usr/bin/python ```
2013/09/15
[ "https://Stackoverflow.com/questions/18808150", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1816807/" ]
This is really a shell question. `which python` returns the first python on your PATH environment variable. The PATH variable is a list of paths that the shell searches for executables. This is usually set in .profile, .bash\_profile or .bashrc. If you reorder your paths so that `/opt/local/bin` comes before `/usr/bin` (for example, by putting `export PATH=/opt/local/bin:$PATH` in your `~/.bash_profile`), then `/opt/local/bin/python` will be your default. This is also what `#!/usr/bin/env python` will resolve to, which is the normal shebang put at the top of python scripts.
You can use `alias python=/opt/local/bin/python` in your .bashrc, or the equivalent rc file for your shell.
57,903,358
I am attempting to build an image for the jetson-nano using yocto poky-warrior and meta-tegra warrior-l4t-r32.2 layer. I've been following [this thread](https://stackoverflow.com/questions/56481980/yocto-for-nvidia-jetson-fails-because-of-gcc-7-cannot-compute-suffix-of-object/56528785#56528785) because he had the same problem as me, and the answer on that thread fixed it, but then a new problem occoured.Building with ``` bitbake core-image-minimal ``` Stops with an error stating ``` ERROR: Task (…/jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_configure) failed with exit code '1' ``` I've been told that applying the following patch would fix this problem: ``` diff --git a/meta/recipes-core/busybox/busybox.inc b/meta/recipes- core/busybox/busybox.inc index 174ce5a8c0..e8d651a010 100644 --- a/meta/recipes-core/busybox/busybox.inc +++ b/meta/recipes-core/busybox/busybox.inc @@ -128,7 +128,7 @@ do_prepare_config () { ${S}/.config.oe-tmp > ${S}/.config fi sed -i 's/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -n"/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -b"/' ${S}/.config - sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config + #sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config } # returns all the elements from the src uri that are .cfg files diff --git a/meta/recipes-core/libxcrypt/libxcrypt.bb b/meta/recipes-core/libxcrypt/libxcrypt.bb index 3b9af6d739..350f7807a7 100644 --- a/meta/recipes-core/libxcrypt/libxcrypt.bb +++ b/meta/recipes-core/libxcrypt/libxcrypt.bb @@ -24,7 +24,7 @@ FILES_${PN} = "${libdir}/libcrypt*.so.* ${libdir}/libcrypt-*.so ${libdir}/libowc S = "${WORKDIR}/git" BUILD_CPPFLAGS = "-I${STAGING_INCDIR_NATIVE} -std=gnu99" -TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir} -Wno-error=missing-attributes" -CPPFLAGS_append_class-nativesdk = " -Wno-error=missing-attributes" +TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir} " +CPPFLAGS_append_class-nativesdk = " " BBCLASSEXTEND = "nativesdk" ``` So I've made a libxcrypt.patch file and copy pasted the patch content and put the file in my poky meta layer. But how do I apply the patch? I can't figure out what to do from here, do I need to make an bbappend file or add to one?- if so which one? or do I need to edit a .bb file?- maybe libxcrypt.bb? And do I need to add these lines: ``` FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:" SRC_URI += "file://path/to/patch/file" ``` I've been trying to look at similar stackoverflow posts about this but they don't seem to be precise enough for me to work it out as I am completely new to yocto and the likes. So far I've tried to add the lines ``` FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:" SRC_URI += "file://path/to/patch/file" ``` to the libxcrypt.bb file but it says it cannot find the file to patch. Then I found out this could potentially be solved with adding ;striplevel=0 to the SRC\_URI line, so I did this: ``` SRC_URI += "file://path/to/patch/file;striplevel=0" ``` Which did nothing. Then I tried to put ``` --- a/meta/recipes-core/busybox/busybox.inc +++ b/meta/recipes-core/busybox/busybox.inc ``` In the top of the patch file, but this also did nothing. 
This is the full error message without attempting to apply the patch: ``` ERROR: libxcrypt-4.4.2-r0 do_configure: configure failed ERROR: libxcrypt-4.4.2-r0 do_configure: Function failed: do_configure (log file is located at /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560) ERROR: Logfile of failure stored in: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560 Log data follows: | DEBUG: SITE files ['endian-little', 'bit-64', 'arm-common', 'arm-64', 'common-linux', 'common-glibc', 'aarch64-linux', 'common'] | DEBUG: Executing shell function autotools_preconfigure | DEBUG: Shell function autotools_preconfigure finished | DEBUG: Executing python function autotools_aclocals | DEBUG: SITE files ['endian-little', 'bit-64', 'arm-common', 'arm-64', 'common-linux', 'common-glibc', 'aarch64-linux', 'common'] | DEBUG: Python function autotools_aclocals finished | DEBUG: Executing shell function do_configure | automake (GNU automake) 1.16.1 | Copyright (C) 2018 Free Software Foundation, Inc. | License GPLv2+: GNU GPL version 2 or later <https://gnu.org/licenses/gpl-2.0.html> | This is free software: you are free to change and redistribute it. | There is NO WARRANTY, to the extent permitted by law. | | Written by Tom Tromey <tromey@redhat.com> | and Alexandre Duret-Lutz <adl@gnu.org>. | AUTOV is 1.16 | NOTE: Executing ACLOCAL="aclocal --system-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot/usr/share/aclocal/ --automake-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal-1.16" autoreconf -Wcross --verbose --install --force --exclude=autopoint -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ | autoreconf: Entering directory `.' | autoreconf: configure.ac: not using Gettext | autoreconf: running: aclocal --system-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot/usr/share/aclocal/ --automake-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal-1.16 -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force -I m4 | autoreconf: configure.ac: tracing | autoreconf: running: libtoolize --copy --force | libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'm4'. | libtoolize: copying file 'm4/ltmain.sh' | libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'. 
| libtoolize: copying file 'm4/libtool.m4' | libtoolize: copying file 'm4/ltoptions.m4' | libtoolize: copying file 'm4/ltsugar.m4' | libtoolize: copying file 'm4/ltversion.m4' | libtoolize: copying file 'm4/lt~obsolete.m4' | autoreconf: running: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/bin/autoconf --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force | autoreconf: running: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/bin/autoheader --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force | autoreconf: running: automake --add-missing --copy --force-missing | configure.ac:31: installing 'm4/compile' | configure.ac:30: installing 'm4/config.guess' | configure.ac:30: installing 'm4/config.sub' | configure.ac:17: installing 'm4/install-sh' | configure.ac:17: installing 'm4/missing' | Makefile.am: installing './INSTALL' | Makefile.am: installing 'm4/depcomp' | parallel-tests: installing 'm4/test-driver' | autoreconf: running: gnu-configize | autoreconf: Leaving directory `.' | NOTE: Running ../git/configure --build=x86_64-linux --host=aarch64-poky-linux --target=aarch64-poky-linux --prefix=/usr --exec_prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --libexecdir=/usr/libexec --datadir=/usr/share --sysconfdir=/etc --sharedstatedir=/com --localstatedir=/var --libdir=/usr/lib --includedir=/usr/include --oldincludedir=/usr/include --infodir=/usr/share/info --mandir=/usr/share/man --disable-silent-rules --disable-dependency-tracking --with-libtool-sysroot=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot --disable-static | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/endian-little | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/arm-common | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/arm-64 | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common-linux | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common-glibc | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/meta-openembedded/meta-networking/site/endian-little | checking for a BSD-compatible install... /home/mci/yocto/dev-jetson-nano/build/tmp/hosttools/install -c | checking whether build environment is sane... yes | checking for aarch64-poky-linux-strip... aarch64-poky-linux-strip | checking for a thread-safe mkdir -p... /home/mci/yocto/dev-jetson-nano/build/tmp/hosttools/mkdir -p | checking for gawk... gawk | checking whether make sets $(MAKE)... yes | checking whether make supports nested variables... yes | checking build system type... x86_64-pc-linux-gnu | checking host system type... aarch64-poky-linux-gnu | checking for aarch64-poky-linux-gcc... 
aarch64-poky-linux-gcc -march=armv8-a+crc -fstack-protector-strong -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot | checking whether the C compiler works... no | configure: error: in `/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/build': | configure: error: C compiler cannot create executables | See `config.log' for more details | NOTE: The following config.log files may provide further information. | NOTE: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/build/config.log | ERROR: configure failed | WARNING: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/run.do_configure.42560:1 exit 1 from 'exit 1' | ERROR: Function failed: do_configure (log file is located at /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560) ERROR: Task (/home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_configure) failed with exit code '1' NOTE: Tasks Summary: Attempted 883 tasks of which 848 didn't need to be rerun and 1 failed. ``` This is the full error log when I try to add the lines to the libxcrypt.bb file to apply the patch: ``` ERROR: libxcrypt-4.4.2-r0 do_patch: Command Error: 'quilt --quiltrc /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0 Output: Applying patch libxcrypt.patch can't find file to patch at input line 7 Perhaps you used the wrong -p or --strip option? The text leading up to this was: -------------------------- |--- a/meta/recipes-core/busybox/busybox.inc |+++ b/meta/recipes-core/busybox/busybox.inc |diff --git a/meta/recipes-core/busybox/busybox.inc b/meta/recipes-core/busybox/busybox.inc |index 174ce5a8c0..e8d651a010 100644 |--- a/meta/recipes-core/busybox/busybox.inc |+++ b/meta/recipes-core/busybox/busybox.inc -------------------------- No file to patch. Skipping patch. 1 out of 1 hunk ignored can't find file to patch at input line 20 Perhaps you used the wrong -p or --strip option? The text leading up to this was: -------------------------- |diff --git a/meta/recipes-core/libxcrypt/libxcrypt.bb b/meta/recipes-core/libxcrypt/libxcrypt.bb |index 3b9af6d739..350f7807a7 100644 |--- a/meta/recipes-core/libxcrypt/libxcrypt.bb |+++ b/meta/recipes-core/libxcrypt/libxcrypt.bb -------------------------- No file to patch. Skipping patch. 1 out of 1 hunk ignored Patch libxcrypt.patch does not apply (enforce with -f) ERROR: libxcrypt-4.4.2-r0 do_patch: ERROR: libxcrypt-4.4.2-r0 do_patch: Function failed: patch_do_patch ERROR: Logfile of failure stored in: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_patch.34179 ERROR: Task (/home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_patch) failed with exit code '1' NOTE: Tasks Summary: Attempted 811 tasks of which 793 didn't need to be rerun and 1 failed. ``` I know this might be a trivial question for a lot, but as a new developer this is very hard to figure out on my own.
2019/09/12
[ "https://Stackoverflow.com/questions/57903358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5999131/" ]
When using a Callable in dictConfig, the Callable you put into the value of dictConfig has to be a Callable which returns a Callable, as discussed in the Python Bug Tracker:

* <https://bugs.python.org/issue41906>

E.g.

```py
def my_filter_wrapper():
    # the returned Callable has to accept a single argument (the LogRecord
    # instance passed to it) and return 1 (log) or 0 (drop)
    return lambda record: 0 if <your_condition_here> else 1

logging_dict = {
    ...
    'filters': {
        'ignore_progress': {
            '()': my_filter_wrapper,
        }
    },
    ...
```

Or even simpler, if your custom filtering logic is a one-liner and independent of the log record instance:

```py
logging_dict = {
    ...
    'filters': {
        'ignore_progress': {
            '()': lambda: lambda _: 0 if <your_condition> else 1
        }
    },
    ...
```

It took me a long while to figure this out. Hope it helps anyone who has the same question. And the Python implementation could definitely be made more elegant here.
I suggest using [loguru](https://github.com/Delgan/loguru) as a logging package. You can easily add a handler for your logger.
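For example, a minimal sketch with loguru might look like this; the filter condition is a placeholder assumption:

```
import sys
from loguru import logger

logger.remove()  # drop the default handler
# add a handler with a custom filter; the condition is a placeholder
logger.add(sys.stderr, level="INFO",
           filter=lambda record: "progress" not in record["message"])

logger.info("this is logged")
logger.info("progress update")  # filtered out by the handler above
```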
57,903,358
I am attempting to build an image for the jetson-nano using yocto poky-warrior and meta-tegra warrior-l4t-r32.2 layer. I've been following [this thread](https://stackoverflow.com/questions/56481980/yocto-for-nvidia-jetson-fails-because-of-gcc-7-cannot-compute-suffix-of-object/56528785#56528785) because he had the same problem as me, and the answer on that thread fixed it, but then a new problem occoured.Building with ``` bitbake core-image-minimal ``` Stops with an error stating ``` ERROR: Task (…/jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_configure) failed with exit code '1' ``` I've been told that applying the following patch would fix this problem: ``` diff --git a/meta/recipes-core/busybox/busybox.inc b/meta/recipes- core/busybox/busybox.inc index 174ce5a8c0..e8d651a010 100644 --- a/meta/recipes-core/busybox/busybox.inc +++ b/meta/recipes-core/busybox/busybox.inc @@ -128,7 +128,7 @@ do_prepare_config () { ${S}/.config.oe-tmp > ${S}/.config fi sed -i 's/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -n"/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -b"/' ${S}/.config - sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config + #sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config } # returns all the elements from the src uri that are .cfg files diff --git a/meta/recipes-core/libxcrypt/libxcrypt.bb b/meta/recipes-core/libxcrypt/libxcrypt.bb index 3b9af6d739..350f7807a7 100644 --- a/meta/recipes-core/libxcrypt/libxcrypt.bb +++ b/meta/recipes-core/libxcrypt/libxcrypt.bb @@ -24,7 +24,7 @@ FILES_${PN} = "${libdir}/libcrypt*.so.* ${libdir}/libcrypt-*.so ${libdir}/libowc S = "${WORKDIR}/git" BUILD_CPPFLAGS = "-I${STAGING_INCDIR_NATIVE} -std=gnu99" -TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir} -Wno-error=missing-attributes" -CPPFLAGS_append_class-nativesdk = " -Wno-error=missing-attributes" +TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir} " +CPPFLAGS_append_class-nativesdk = " " BBCLASSEXTEND = "nativesdk" ``` So I've made a libxcrypt.patch file and copy pasted the patch content and put the file in my poky meta layer. But how do I apply the patch? I can't figure out what to do from here, do I need to make an bbappend file or add to one?- if so which one? or do I need to edit a .bb file?- maybe libxcrypt.bb? And do I need to add these lines: ``` FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:" SRC_URI += "file://path/to/patch/file" ``` I've been trying to look at similar stackoverflow posts about this but they don't seem to be precise enough for me to work it out as I am completely new to yocto and the likes. So far I've tried to add the lines ``` FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:" SRC_URI += "file://path/to/patch/file" ``` to the libxcrypt.bb file but it says it cannot find the file to patch. Then I found out this could potentially be solved with adding ;striplevel=0 to the SRC\_URI line, so I did this: ``` SRC_URI += "file://path/to/patch/file;striplevel=0" ``` Which did nothing. Then I tried to put ``` --- a/meta/recipes-core/busybox/busybox.inc +++ b/meta/recipes-core/busybox/busybox.inc ``` In the top of the patch file, but this also did nothing. 
This is the full error message without attempting to apply the patch: ``` ERROR: libxcrypt-4.4.2-r0 do_configure: configure failed ERROR: libxcrypt-4.4.2-r0 do_configure: Function failed: do_configure (log file is located at /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560) ERROR: Logfile of failure stored in: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560 Log data follows: | DEBUG: SITE files ['endian-little', 'bit-64', 'arm-common', 'arm-64', 'common-linux', 'common-glibc', 'aarch64-linux', 'common'] | DEBUG: Executing shell function autotools_preconfigure | DEBUG: Shell function autotools_preconfigure finished | DEBUG: Executing python function autotools_aclocals | DEBUG: SITE files ['endian-little', 'bit-64', 'arm-common', 'arm-64', 'common-linux', 'common-glibc', 'aarch64-linux', 'common'] | DEBUG: Python function autotools_aclocals finished | DEBUG: Executing shell function do_configure | automake (GNU automake) 1.16.1 | Copyright (C) 2018 Free Software Foundation, Inc. | License GPLv2+: GNU GPL version 2 or later <https://gnu.org/licenses/gpl-2.0.html> | This is free software: you are free to change and redistribute it. | There is NO WARRANTY, to the extent permitted by law. | | Written by Tom Tromey <tromey@redhat.com> | and Alexandre Duret-Lutz <adl@gnu.org>. | AUTOV is 1.16 | NOTE: Executing ACLOCAL="aclocal --system-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot/usr/share/aclocal/ --automake-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal-1.16" autoreconf -Wcross --verbose --install --force --exclude=autopoint -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ | autoreconf: Entering directory `.' | autoreconf: configure.ac: not using Gettext | autoreconf: running: aclocal --system-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot/usr/share/aclocal/ --automake-acdir=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal-1.16 -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ -I /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force -I m4 | autoreconf: configure.ac: tracing | autoreconf: running: libtoolize --copy --force | libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'm4'. | libtoolize: copying file 'm4/ltmain.sh' | libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'. 
| libtoolize: copying file 'm4/libtool.m4' | libtoolize: copying file 'm4/ltoptions.m4' | libtoolize: copying file 'm4/ltsugar.m4' | libtoolize: copying file 'm4/ltversion.m4' | libtoolize: copying file 'm4/lt~obsolete.m4' | autoreconf: running: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/bin/autoconf --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force | autoreconf: running: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/bin/autoheader --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/git/m4/ --include=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/usr/share/aclocal/ --force | autoreconf: running: automake --add-missing --copy --force-missing | configure.ac:31: installing 'm4/compile' | configure.ac:30: installing 'm4/config.guess' | configure.ac:30: installing 'm4/config.sub' | configure.ac:17: installing 'm4/install-sh' | configure.ac:17: installing 'm4/missing' | Makefile.am: installing './INSTALL' | Makefile.am: installing 'm4/depcomp' | parallel-tests: installing 'm4/test-driver' | autoreconf: running: gnu-configize | autoreconf: Leaving directory `.' | NOTE: Running ../git/configure --build=x86_64-linux --host=aarch64-poky-linux --target=aarch64-poky-linux --prefix=/usr --exec_prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --libexecdir=/usr/libexec --datadir=/usr/share --sysconfdir=/etc --sharedstatedir=/com --localstatedir=/var --libdir=/usr/lib --includedir=/usr/include --oldincludedir=/usr/include --infodir=/usr/share/info --mandir=/usr/share/man --disable-silent-rules --disable-dependency-tracking --with-libtool-sysroot=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot --disable-static | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/endian-little | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/arm-common | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/arm-64 | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common-linux | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common-glibc | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/site/common | configure: loading site script /home/mci/yocto/dev-jetson-nano/layers/meta-openembedded/meta-networking/site/endian-little | checking for a BSD-compatible install... /home/mci/yocto/dev-jetson-nano/build/tmp/hosttools/install -c | checking whether build environment is sane... yes | checking for aarch64-poky-linux-strip... aarch64-poky-linux-strip | checking for a thread-safe mkdir -p... /home/mci/yocto/dev-jetson-nano/build/tmp/hosttools/mkdir -p | checking for gawk... gawk | checking whether make sets $(MAKE)... yes | checking whether make supports nested variables... yes | checking build system type... x86_64-pc-linux-gnu | checking host system type... aarch64-poky-linux-gnu | checking for aarch64-poky-linux-gcc... 
aarch64-poky-linux-gcc -march=armv8-a+crc -fstack-protector-strong -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot | checking whether the C compiler works... no | configure: error: in `/home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/build': | configure: error: C compiler cannot create executables | See `config.log' for more details | NOTE: The following config.log files may provide further information. | NOTE: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/build/config.log | ERROR: configure failed | WARNING: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/run.do_configure.42560:1 exit 1 from 'exit 1' | ERROR: Function failed: do_configure (log file is located at /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_configure.42560) ERROR: Task (/home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_configure) failed with exit code '1' NOTE: Tasks Summary: Attempted 883 tasks of which 848 didn't need to be rerun and 1 failed. ``` This is the full error log when I try to add the lines to the libxcrypt.bb file to apply the patch: ``` ERROR: libxcrypt-4.4.2-r0 do_patch: Command Error: 'quilt --quiltrc /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/recipe-sysroot-native/etc/quiltrc push' exited with 0 Output: Applying patch libxcrypt.patch can't find file to patch at input line 7 Perhaps you used the wrong -p or --strip option? The text leading up to this was: -------------------------- |--- a/meta/recipes-core/busybox/busybox.inc |+++ b/meta/recipes-core/busybox/busybox.inc |diff --git a/meta/recipes-core/busybox/busybox.inc b/meta/recipes-core/busybox/busybox.inc |index 174ce5a8c0..e8d651a010 100644 |--- a/meta/recipes-core/busybox/busybox.inc |+++ b/meta/recipes-core/busybox/busybox.inc -------------------------- No file to patch. Skipping patch. 1 out of 1 hunk ignored can't find file to patch at input line 20 Perhaps you used the wrong -p or --strip option? The text leading up to this was: -------------------------- |diff --git a/meta/recipes-core/libxcrypt/libxcrypt.bb b/meta/recipes-core/libxcrypt/libxcrypt.bb |index 3b9af6d739..350f7807a7 100644 |--- a/meta/recipes-core/libxcrypt/libxcrypt.bb |+++ b/meta/recipes-core/libxcrypt/libxcrypt.bb -------------------------- No file to patch. Skipping patch. 1 out of 1 hunk ignored Patch libxcrypt.patch does not apply (enforce with -f) ERROR: libxcrypt-4.4.2-r0 do_patch: ERROR: libxcrypt-4.4.2-r0 do_patch: Function failed: patch_do_patch ERROR: Logfile of failure stored in: /home/mci/yocto/dev-jetson-nano/build/tmp/work/aarch64-poky-linux/libxcrypt/4.4.2-r0/temp/log.do_patch.34179 ERROR: Task (/home/mci/yocto/dev-jetson-nano/layers/poky-warrior/meta/recipes-core/libxcrypt/libxcrypt.bb:do_patch) failed with exit code '1' NOTE: Tasks Summary: Attempted 811 tasks of which 793 didn't need to be rerun and 1 failed. ``` I know this might be a trivial question for a lot, but as a new developer this is very hard to figure out on my own.
2019/09/12
[ "https://Stackoverflow.com/questions/57903358", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5999131/" ]
When using a Callable in dictConfig, the Callable you put into the value of dictConfig has to be a Callable which returns a Callable, as discussed in the Python Bug Tracker:

* <https://bugs.python.org/issue41906>

E.g.

```py
def my_filter_wrapper():
    # the returned Callable has to accept a single argument (the LogRecord
    # instance passed to it) and return 1 (log) or 0 (drop)
    return lambda record: 0 if <your_condition_here> else 1

logging_dict = {
    ...
    'filters': {
        'ignore_progress': {
            '()': my_filter_wrapper,
        }
    },
    ...
```

Or even simpler, if your custom filtering logic is a one-liner and independent of the log record instance:

```py
logging_dict = {
    ...
    'filters': {
        'ignore_progress': {
            '()': lambda: lambda _: 0 if <your_condition> else 1
        }
    },
    ...
```

It took me a long while to figure this out. Hope it helps anyone who has the same question. And the Python implementation could definitely be made more elegant here.
This is not working because it is a bug or the docs are not correct. In either case, I opened a ticket with the python folks here: <https://bugs.python.org/issue41906> Workaround ---------- If you return a function that returns a function things will work fine. For example: ``` def no_error_logs(): """ :return: function that returns 0 if log should show, 1 if not """ return lambda param: 1 if param.levelno < logging.ERROR else 0 ``` Then ``` "filters": { "myfilter": { "()": no_error_logs, } }, ``` Note: Your filter function returns true/false. According to [the docs](https://docs.python.org/3/library/logging.html#logging.Filter): Filter functions are answering the question: > > Is the specified record to be logged? Returns zero for no, nonzero for yes. > > > So you would need to adjust your function accordingly.
31,444,776
I want to create a bunch of simple geometric shapes (colored rectangles, triangles, squares ...) using pygame and then later analyze their relations and features. I first tried [turtle](https://docs.python.org/2/library/turtle.html) but apparently that is only a graphing library and cannot keep track of the shapes it creates and I wonder if the same holds true for Pygame. To illustrate the point, say I have this script: ``` # Import a library of functions called 'pygame' import pygame from math import pi # Initialize the game engine pygame.init() # Define the colors we will use in RGB format BLACK = ( 0, 0, 0) WHITE = (255, 255, 255) BLUE = ( 0, 0, 255) GREEN = ( 0, 255, 0) RED = (255, 0, 0) # Set the height and width of the screen size = [800, 600] screen = pygame.display.set_mode(size) pygame.display.set_caption("Example code for the draw module") #Loop until the user clicks the close button. done = False clock = pygame.time.Clock() while not done: # This limits the while loop to a max of 10 times per second. # Leave this out and we will use all CPU we can. clock.tick(10) for event in pygame.event.get(): # User did something if event.type == pygame.QUIT: # If user clicked close done=True # Flag that we are done so we exit this loop screen.fill(WHITE) # Draw a rectangle outline pygame.draw.rect(screen, BLACK, [75, 10, 50, 20], 2) # Draw a solid rectangle pygame.draw.rect(screen, BLACK, [150, 10, 50, 20]) # Draw an ellipse outline, using a rectangle as the outside boundaries pygame.draw.ellipse(screen, RED, [225, 10, 50, 20], 2) # Draw a circle pygame.draw.circle(screen, BLUE, [60, 250], 40) # Go ahead and update the screen with what we've drawn. # This MUST happen after all the other drawing commands. pygame.display.flip() # Be IDLE friendly pygame.quit() ``` It creates this image: ![enter image description here](https://i.stack.imgur.com/8oisH.jpg) Now, suppose I save the image created by Pygame. Is there a way Pygame would be able to detect the shapes, colors and coordinates from the image?
2015/07/16
[ "https://Stackoverflow.com/questions/31444776", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4321788/" ]
PyGame is a gaming library - it helps with making graphics, audio and controllers for games. It doesn't have support for detecting objects in a preexisting image. What you want is OpenCV (it has Python bindings) - it is made to "understand" things about an image. One popular mathematical algorithm used to detect shapes (or edges) of any sort is the Hough transform. You can read more about it here - <http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html> OpenCV has Hough transform functions built in, which are very useful.

---

You could attempt to write your own Hough transform code and use it ... but libraries make it easier.
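As a rough illustration, here is a minimal sketch of detecting the circle in your saved image with OpenCV's Hough transform (assuming a reasonably recent OpenCV; the filename `shapes.png` and the parameter values are placeholders you would tune for your own image):

```py
import cv2
import numpy as np

# Load the saved pygame screenshot and convert it to grayscale,
# since HoughCircles expects a single-channel image.
img = cv2.imread('shapes.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# param2 is the accumulator threshold: lower values find more
# (possibly spurious) circles.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=50, param2=30, minRadius=10, maxRadius=100)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print('circle at (%d, %d) with radius %d' % (x, y, r))
```

Rectangles and triangles would be found analogously with edge detection (`cv2.Canny`) plus contour analysis (`cv2.findContours`).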
Yes, it can, and pygame is also good for making games, but unfortunately you can't convert them to iOS or Android. In the past there was a program called PGS4A which allowed you to convert pygame projects to Android, but sadly the program has been discontinued and now there is no way. In this case, my suggestion would be that if you ever want to do this, download Android Studio from this link "<http://developer.android.com/sdk/index.html#top>" and google how to use libgdx with Android Studio; there is an extremely helpful multi-part tutorial on it. But if your goal is to make commercial applications, I would highly recommend you check this tutorial "<https://www.youtube.com/watch?v=_pwJv1QRSPM>" - extremely helpful. Good luck with your goals; I hope this helped you make your decision. Python is a good programming language and will give you a basic idea of what programming is like.
51,772,333
I am new to Python and would love to know this. Suppose I want to scrape stock price data from a website to Excel. The data keeps refreshing every second; how do I refresh the data in my Excel sheet automatically using Python? I have read about win32 but couldn't understand its use much. Any help would be dearly appreciated.
2018/08/09
[ "https://Stackoverflow.com/questions/51772333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10041192/" ]
As stated in the documentation:

> 
> Help on built-in function readlines:
> 
> readlines(hint=-1, /) method of \_io.TextIOWrapper instance
> Return a list of lines from the stream.
> 
> hint can be specified to control the number of lines read: no more
> lines will be read if the total size (in bytes/characters) of all
> lines so far exceeds hint.
> 
> 

Once you have consumed all lines, the next call to `readlines` will return an empty list. Change your function to store the result in a temporary variable:

```
with open(os.path.join(root, file)) as fRead:
    lines = fRead.readlines()
    line_3 = lines[3]
    line_4 = lines[4]
    print line_3
    print line_4
```
The method `readlines()` reads all lines in a file until it hits EOF (end of file). The "cursor" is then at the end of the file, and a subsequent call to `readlines()` will not yield anything, because the position is already at EOF. Hence, after `line_3 = fRead.readlines()[3]` you have consumed the whole file but stored only its fourth line (counting from 1, since indexing starts at 0). If you do

```
all_lines = fRead.readlines()
line_3 = all_lines[3]
line_4 = all_lines[4]
```

you have read the file only once and saved every piece of information you needed.
34,124,259
I'm new here and fairly new to Python, and I have a question. I had a similar question during my midterm a while back, and it has bugged me that I cannot seem to figure it out. The overall idea was that I had to find the longest string in a nested list. So I came up with my own example to try and figure it out, but for some reason I just can't. I was hoping someone could tell me what I did wrong and how I can go about the problem without using the function max, but instead with a for loop. This is my own example with my code:

```
typing_test = ['The', ['quick', 'brown'], ['fox', ['jumped'], 'over'], 'the', 'lazy', 'dog']

def longest_string (nested_list: 'nested list of strings') -> int:
    '''return the longest string within the nested list'''
    maximum_length = 0
    for word in nested_list:
        try:
            if type(word) == str:
                maximum_length >= len(word)
                maximum_length = len(word)
            else:
                (longest_string((word)))
        except:
            print('Error')
    return maximum_length
```

My code returns 3, but the highest should be 6 because of the length of 'jumped'. I'm not sure if it's going through each list and checking each string's length. In short, I don't think it is replacing/updating the longest length. So if someone can tell me what I'm doing wrong or how to fix my example, I would greatly appreciate it. Thank you very much in advance.
2015/12/06
[ "https://Stackoverflow.com/questions/34124259", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5647743/" ]
As Simon mentioned, you should be using `FindAllString` to find all matches. Also, you need to remove the ^ from the beginning of the RE (^ anchors the pattern to the beginning of the string). You should also move the regexp.Compile outside the loop for efficiency.
<https://play.golang.org/p/Q_yfub0k80> As mentioned here, `FindAllString` returns a slice of all successive matches of the regular expression. But, `FindString` returns the leftmost match.
49,147,937
I am trying to get specific coordinates in an image. I have marked a red dot in the image at several locations to specify the coordinates I want to get. In GIMP I used the purest red I could find (HTML notation **ff0000**). The idea was that I would iterate through the image until I found a pure shade of red and then print out the coordinates. I am using Python and OpenCV to do so, but I can't find any good tutorials (the best I could find is [this](http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_core/py_basic_ops/py_basic_ops.html), but it's not very clear... at least for me). Here is an example of the image I am dealing with.[![enter image description here](https://i.stack.imgur.com/OXmDn.png)](https://i.stack.imgur.com/OXmDn.png) I just want to know how to find the coordinates of the pixels with the red dots.

EDIT (added code):

```
import cv2
import numpy as np

img = cv2.imread('image.jpg')

width, height = img.shape[:2]

for i in range(0,width):
    for j in range(0,height):
        px = img[i,j]
```

I don't know what to do from here. I have tried code such as `if px == [x,y,z]` looking for color detection, but that doesn't work.
2018/03/07
[ "https://Stackoverflow.com/questions/49147937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4902160/" ]
You can do it with cv2 this way:

```
import cv2
import numpy as np

image = cv2.imread('image.jpg')

lower_red = np.array([0, 0, 220])    # BGR code of your lowest red
upper_red = np.array([10, 10, 255])  # BGR code of your highest red
mask = cv2.inRange(image, lower_red, upper_red)

# get the positions of all non-zero (matching) pixels
coord = cv2.findNonZero(mask)
```
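For what it's worth, `findNonZero` returns an N x 1 x 2 array of (x, y) pairs (or `None` when nothing matches), so a short sketch of printing the coordinates could look like this:

```py
if coord is not None:
    for pt in coord:
        x, y = pt[0]   # each entry has the shape [[x, y]]
        print(x, y)
```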
You can do this with PIL and numpy. I'm sure there is a similar implementation with cv2.

```
from PIL import Image
import numpy as np

img = Image.open('image.png')
width, height = img.size[:2]
px = np.array(img)

for i in range(height):
    for j in range(width):
        # use `and`, not `&`: bitwise & binds tighter than ==, which breaks the test
        if px[i, j, 0] == 255 and px[i, j, 1] == 0 and px[i, j, 2] == 0:
            print(i, j, px[i, j])
```

This doesn't work with the image you provided, since there aren't any pixels that are exactly (255,0,0). Something may have changed when it got compressed to a .jpg, or you didn't make them as red as you thought you did. Perhaps you could try turning off anti-aliasing in GIMP.
57,462,530
I need to have a python GUI communicating with an mbed (LPC1768) board. I am able to send a string from the mbed board to python's IDLE but when I try to send a value back to the mbed board, it does not work as expected. I have written a very basic program where I read a string from the mbed board and print it on Python's IDLE. The program should then ask for the user's to type a value which should be sent to the mbed board. This value should set the time between LED's flashing. The python code ``` import serial ser = serial.Serial('COM8', 9600) try: ser.open() except: print("Port already open") out= ser.readline() #while(1): print(out) time=input("Enter a time: " ) print (time) ser.write(time.encode()) ser.close() ``` and the mbed c++ code ``` #include "mbed.h" //DigitalOut myled(LED1); DigitalOut one(LED1); DigitalOut two(LED2); DigitalOut three(LED3); DigitalOut four(LED4); Serial pc(USBTX, USBRX); float c = 0.2; int main() { while(1) { pc.printf("Hello World!\n"); one = 1; wait(c); two=1; one = 0; wait(c); two=0; c = float(pc.getc()); three=1; wait(c); three=0; four=1; wait(c); four=0; } } ``` The program waits for the value to be entered in IDLE and sent to the mbed board and begins to use the value sent to it but suddenly stops working and I cannot figure out why.
2019/08/12
[ "https://Stackoverflow.com/questions/57462530", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11671221/" ]
If using index labels between 2 and 4, use `loc`:

```
df.loc[2:4, 'number'].max()
```

Output:

```
10
```

If using integer positions 2 through 4, use `iloc` (the end point of the slice is exclusive, hence `2:5`):

```
df.iloc[2:5, df.columns.get_loc('number')].max()
```

*Note: you must use `get_loc` to get the integer position of the column 'number'*

Output:

```
10
```
Even this can be used:

```
>>> df.iloc[2:4,:].loc[:,'number'].max()
10
```
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
First install python 3.6.5, then run ``` pip install mysqlclient==1.3.12 ```
For me, it was a mixture of an outdated setuptools and missing packages:

```
pip install --upgrade setuptools
apt install gcc libssl-dev
```
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
You may need to install the Python 3 and MySQL development headers and libraries like so: **For UBUNTU or Debian** ``` sudo apt-get install python3-dev default-libmysqlclient-dev build-essential ``` **Red Hat / CentOS** ``` sudo yum install python3-devel mysql-devel ``` Then try ``` pip install mysqlclient ```
[You can set ssl library path explicitly.](https://github.com/PyMySQL/mysqlclient-python/issues/131#issuecomment-338635251) ```py LDFLAGS=-L/usr/local/opt/openssl/lib pip install mysqlclient ```
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
First install python 3.6.5, then run ``` pip install mysqlclient==1.3.12 ```
It's better to install 64-bit Python; then `pip install mysqlclient` should work. Otherwise, you can follow these [steps to install using Python extension packages](https://stackoverflow.com/a/58931018/12181656).
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
Try downloading and installing from a wheel instead. Take note of your Python version and download the correct one.

[https://www.lfd.uci.edu/~gohlke/pythonlibs/#mysqlclient](https://www.lfd.uci.edu/%7Egohlke/pythonlibs/#mysqlclient)
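For example (treating the exact filename as a placeholder, since it depends on your Python version and architecture), the install from the downloaded wheel would look something like:

```
pip install mysqlclient-1.4.5-cp38-cp38-win_amd64.whl
```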
Try `pip install --only-binary :all: mysqlclient`

It worked for me.
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
First install python 3.6.5, then run ``` pip install mysqlclient==1.3.12 ```
First try this command (keep the spaces exactly as shown):

`pip install --only-binary :all: mysqlclient`

If it still errors, then try this: go to the website [Python Extension Packages](https://www.lfd.uci.edu/~gohlke/pythonlibs/), press Ctrl+F and search for mysqlclient. You will find a file name like this: mysqlclient-1.4.5-cp38-cp38-win_amd64.whl. Choose it carefully, according to your Python version. There are two things to check:

1. cp38 means Python 3.8 and cp37 means Python 3.7, so first check whether your Python version is 3.8, 3.7, 3.6, 3.5 or 3.4, then download accordingly.
2. While checking the Python version, also check whether your Python is 64-bit or 32-bit, and select accordingly: win_amd64 for 64-bit, win32 for 32-bit. Otherwise you will face problems.

Then download the file and install it manually using pip. After the download, open a command prompt, go to the directory where the downloaded file is (or, easier, move the file to your desktop) and type:

```
# For Python 3.8, 64-bit
pip install mysqlclient-1.4.5-cp38-cp38-win_amd64.whl

# For Python 3.7, 32-bit
pip install mysqlclient-1.4.5-cp37-cp37m-win32.whl
```
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
[You can set ssl library path explicitly.](https://github.com/PyMySQL/mysqlclient-python/issues/131#issuecomment-338635251) ```py LDFLAGS=-L/usr/local/opt/openssl/lib pip install mysqlclient ```
I had the same problem and I fixed it in a really blunt way: I just uninstalled Python and reinstalled it through the Microsoft Store.
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
1. Install `build-essential` `sudo apt-get install build-essential` 2. Install `mysqlclient` `pip install mysqlclient`
This happened to me when I installed python3.8 from the deadsnakes/ppa repository and created a virtualenv with it. The solutions above didn't work for me; after installing `python3.8-dev`, it installed successfully.

`sudo apt install python3.8-dev`

After that:

`python3.8 -m pip install mysqlclient==1.3.12`
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
I had a similar problem on macOS Catalina and solved it with this:

```
ARCHFLAGS="-arch x86_64" pip3 install mysqlclient
```
1. Install `build-essential` `sudo apt-get install build-essential` 2. Install `mysqlclient` `pip install mysqlclient`
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
You may need to install the Python 3 and MySQL development headers and libraries like so: **For UBUNTU or Debian** ``` sudo apt-get install python3-dev default-libmysqlclient-dev build-essential ``` **Red Hat / CentOS** ``` sudo yum install python3-devel mysql-devel ``` Then try ``` pip install mysqlclient ```
1. Install `build-essential` `sudo apt-get install build-essential` 2. Install `mysqlclient` `pip install mysqlclient`
51,062,920
i'm tryng to import **mysqlclient** library for python with **pip**, when i use the command `pip install mysqlclient` it return an error: ``` Collecting mysqlclient Using cached https://files.pythonhosted.org/packages/ec/fd/83329b9d3e14f7344d1cb31f128e6dbba70c5975c9e57896815dbb1988ad/mysqlclient-1.3.13.tar.gz Installing collected packages: mysqlclient Running setup.py install for mysqlclient ... error Complete output from command c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install-40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip-record-va173t5v\install-record.txt --single-version-externally-managed --compile: c:\users\astrina\appdata\local\programs\python\python36\lib\distutils\dist.py:261: UserWarning: Unknown distribution option: 'long_description_content_type' warnings.warn(msg) running install running build running build_py creating build creating build\lib.win-amd64-3.6 copying _mysql_exceptions.py -> build\lib.win-amd64-3.6 creating build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\__init__.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\compat.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\connections.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\converters.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\cursors.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\release.py -> build\lib.win-amd64-3.6\MySQLdb copying MySQLdb\times.py -> build\lib.win-amd64-3.6\MySQLdb creating build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\__init__.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CLIENT.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\CR.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\ER.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FIELD_TYPE.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\FLAG.py -> build\lib.win-amd64-3.6\MySQLdb\constants copying MySQLdb\constants\REFRESH.py -> build\lib.win-amd64-3.6\MySQLdb\constants running build_ext building '_mysql' extension error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools ---------------------------------------- Command "c:\users\astrina\appdata\local\programs\python\python36\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\astrina\\AppData\\Local\\Temp\\pip-install- 40l_x_f4\\mysqlclient\\setup.py';f=getattr(tokenize, 'open', open) (__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\astrina\AppData\Local\Temp\pip- record-va173t5v\install-record.txt --single-version-externally-managed -- compile" failed with error code 1 in C:\Users\astrina\AppData\Local\Temp\pip- install-40l_x_f4\mysqlclient\ ``` I've already installed **Microsoft Build Tools 2015** but the problem persist
2018/06/27
[ "https://Stackoverflow.com/questions/51062920", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9595624/" ]
[You can set ssl library path explicitly.](https://github.com/PyMySQL/mysqlclient-python/issues/131#issuecomment-338635251) ```py LDFLAGS=-L/usr/local/opt/openssl/lib pip install mysqlclient ```
This happened to me when I installed python3.8 from the deadsnakes/ppa repository and created a virtualenv with it. The solutions above didn't work for me; after installing `python3.8-dev`, it installed successfully.

`sudo apt install python3.8-dev`

After that:

`python3.8 -m pip install mysqlclient==1.3.12`
60,520,272
I'm new to Python, and I've looked up a little bit of info but can't find the problem with my code. Please help.

Code:

```
array = []

print ('Enter values in array: ')

for i in range(0,5):
    n = input("value: ")
    array.append(n)

a = input("Enter search term: ")

for i in range(len(array)):
    found = False
    while found == False :
        if a == array(i):
            found = True
            position = i
        else :
            found = False

print("Your search term is in position " + position)
```

Error: at the `if a == array(i)` line it says

> 
> list object is not callable
> 
> 
2020/03/04
[ "https://Stackoverflow.com/questions/60520272", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13003923/" ]
You don't? > > Should prettier not be installed locally with your project's dependencies or globally on the machine, the version of prettier that is bundled with the extension will be used. > > > <https://github.com/prettier/prettier-vscode#prettier-resolution>
It seems like you want Prettier to be your code formatter in VS Code for all of your projects. In VS Code, navigate to:

> 
> File > Preferences > Settings
> 
> 

Search for "Default formatter" and then select "esbenp.prettier-vscode".
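Equivalently (this is just the JSON form of the same setting, assuming the Prettier extension is installed), you can add it to your `settings.json`:

```json
{
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}
```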
6,361,775
I know there are quite a few solutions for this problem, but mine is peculiar in the sense that I might get truncated UTF-16 data and yet have to make a best effort at handling conversions where decode and encode will fail with UnicodeDecodeError. So I came up with the following code in Python. Please let me know your comments on how I can improve it for faster processing.

```
try:
    # conversion to ascii if utf16 data is formatted correctly
    input = open(filename).read().decode('UTF16')
    asciiStr = input.encode('ASCII', 'ignore')
    open(filename).close()
    return asciiStr
except:
    # if fail with UnicodeDecodeError, then use brute force
    # to decode truncated data
    try:
        unicode = open(filename).read()
        if (ord(unicode[0]) == 255 and ord(unicode[1]) == 254):
            print("Little-Endian format, UTF-16")
            leAscii = "".join([(unicode[i]) for i in range(2, len(unicode), 2) if 0 < ord(unicode[i]) < 127])
            open(filename).close()
            return leAscii
        elif (ord(unicode[0]) == 254 and ord(unicode[1]) == 255):
            print("Big-Endian format, UTF-16")
            beAscii = "".join([(unicode[i]) for i in range(3, len(unicode), 2) if 0 < ord(unicode[i]) < 127])
            open(filename).close()
            return beAscii
        else:
            open(filename).close()
            return None
    except:
        open(filename).close()
        print("Error in converting to ASCII")
        return None
```
2011/06/15
[ "https://Stackoverflow.com/questions/6361775", "https://Stackoverflow.com", "https://Stackoverflow.com/users/684799/" ]
What about: ``` data = open(filename).read() try: data = data.decode("utf-16") except UnicodeDecodeError: data = data[:-1].decode("utf-16") ``` I.e. if it's truncated mid-way through a code unit, snip the last byte off, and do it again. That should get you back to a valid UTF-16 string, without having to try to implement a decoder yourself.
This just jumped out at me as a "best practice" improvement. File accesses should really be wrapped in `with` blocks. This will handle opening and cleaning up for you.
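A minimal sketch of that idea applied to the first read in the question (the decoding logic is unchanged; only the file handling differs):

```py
with open(filename) as f:   # the file is closed automatically,
    data = f.read()         # even if an exception is raised
asciiStr = data.decode('UTF16').encode('ASCII', 'ignore')
```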
6,361,775
I know there are quite a few solutions for this problem but mine was peculiar in the sense that, I might get truncated utf16 data and yet have to make the best effort of dealing with conversions where decode and encode will fail with UnicodeDecodeError. So came up with the following code in python. Please let me know your comments on how I can improve them for faster processing. ``` try: # conversion to ascii if utf16 data is formatted correctly input = open(filename).read().decode('UTF16') asciiStr = input.encode('ASCII', 'ignore') open(filename).close() return asciiStr except: # if fail with UnicodeDecodeError, then use brute force # to decode truncated data try: unicode = open(filename).read() if (ord(unicode[0]) == 255 and ord(unicode[1]) == 254): print("Little-Endian format, UTF-16") leAscii = "".join([(unicode[i]) for i in range(2, len(unicode), 2) if 0 < ord(unicode[i]) < 127]) open(filename).close() return leAscii elif (ord(unicode[0]) == 254 and ord(unicode[1]) == 255): print("Big-Endian format, UTF-16") beAscii = "".join([(unicode[i]) for i in range(3, len(unicode), 2) if 0 < ord(unicode[i]) < 127]) open(filename).close() return beAscii else: open(filename).close() return None except: open(filename).close() print("Error in converting to ASCII") return None ```
2011/06/15
[ "https://Stackoverflow.com/questions/6361775", "https://Stackoverflow.com", "https://Stackoverflow.com/users/684799/" ]
To tolerate errors you could use the optional second argument to the byte-string's decode method. In this example the dangling third byte ('c') is replaced with the "replacement character" U+FFFD: ``` >>> 'abc'.decode('UTF-16', 'replace') u'\u6261\ufffd' ``` There is also an 'ignore' option which will simply drop bytes that can't be decoded: ``` >>> 'abc'.decode('UTF-16', 'ignore') u'\u6261' ``` While it is common to desire a system that is "tolerant" of incorrectly encoded text, it is often quite difficult to define precisely what the expected behavior is in these situations. You may find that the one who provided the requirement to "deal with" incorrectly encoded text does not fully grasp the concept of character encoding.
What about: ``` data = open(filename).read() try: data = data.decode("utf-16") except UnicodeDecodeError: data = data[:-1].decode("utf-16") ``` I.e. if it's truncated mid-way through a code unit, snip the last byte off, and do it again. That should get you back to a valid UTF-16 string, without having to try to implement a decoder yourself.
6,361,775
I know there are quite a few solutions for this problem but mine was peculiar in the sense that, I might get truncated utf16 data and yet have to make the best effort of dealing with conversions where decode and encode will fail with UnicodeDecodeError. So came up with the following code in python. Please let me know your comments on how I can improve them for faster processing. ``` try: # conversion to ascii if utf16 data is formatted correctly input = open(filename).read().decode('UTF16') asciiStr = input.encode('ASCII', 'ignore') open(filename).close() return asciiStr except: # if fail with UnicodeDecodeError, then use brute force # to decode truncated data try: unicode = open(filename).read() if (ord(unicode[0]) == 255 and ord(unicode[1]) == 254): print("Little-Endian format, UTF-16") leAscii = "".join([(unicode[i]) for i in range(2, len(unicode), 2) if 0 < ord(unicode[i]) < 127]) open(filename).close() return leAscii elif (ord(unicode[0]) == 254 and ord(unicode[1]) == 255): print("Big-Endian format, UTF-16") beAscii = "".join([(unicode[i]) for i in range(3, len(unicode), 2) if 0 < ord(unicode[i]) < 127]) open(filename).close() return beAscii else: open(filename).close() return None except: open(filename).close() print("Error in converting to ASCII") return None ```
2011/06/15
[ "https://Stackoverflow.com/questions/6361775", "https://Stackoverflow.com", "https://Stackoverflow.com/users/684799/" ]
To tolerate errors you could use the optional second argument to the byte-string's decode method. In this example the dangling third byte ('c') is replaced with the "replacement character" U+FFFD: ``` >>> 'abc'.decode('UTF-16', 'replace') u'\u6261\ufffd' ``` There is also an 'ignore' option which will simply drop bytes that can't be decoded: ``` >>> 'abc'.decode('UTF-16', 'ignore') u'\u6261' ``` While it is common to desire a system that is "tolerant" of incorrectly encoded text, it is often quite difficult to define precisely what the expected behavior is in these situations. You may find that the one who provided the requirement to "deal with" incorrectly encoded text does not fully grasp the concept of character encoding.
This just jumped out at me as a "best practice" improvement. File accesses should really be wrapped in `with` blocks. This will handle opening and cleaning up for you.
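For instance, a rough sketch of the same read wrapped in a `with` block, keeping the decoding exactly as in the question:

```py
with open(filename) as f:   # closed automatically on leaving the block
    data = f.read()
return data.decode('UTF16').encode('ASCII', 'ignore')
```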
52,372,489
I want to get the average brightness of an image file in Python. Having read a previous question [Problem getting terminal output from ImageMagick's compare.exe (either by pipe or Python)](https://stackoverflow.com/questions/5145508/problem-getting-terminal-output-from-imagemagicks-compare-exe-either-by-pipe), I have come up with:

```
cmd='/usr/bin/convert {} -format "%[fx:100*image.mean]\n" info: > bright.txt'.format(full)
subprocess.call(cmd,shell=True)
with open('bright.txt', 'r') as myfile:
    x=myfile.read().replace('\n', '')
    return x
```

The previous question recommended the use of 'pythonmagick', which I can find, but with no current documentation and very little recent activity; I could not work out the syntax to use it. I know that my code is unsatisfactory, but it does work. Is there a better way which does not need `shell=True` or additional file processing?
2018/09/17
[ "https://Stackoverflow.com/questions/52372489", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7869335/" ]
This seems to work for me to return the mean as a variable that can be printed. **(This is a bit erroneous. See the correction near the bottom.)**

```
#!/opt/local/bin/python3.6
import subprocess
cmd = '/usr/local/bin/convert lena.jpg -format "%[fx:100*mean]" info:'
mean=subprocess.call(cmd, shell=True)
print (mean)
```

The result is 70.67860, which is returned to the terminal. This also works with shell=False, if you parse each part of the command.

```
#!/opt/local/bin/python3.6
import subprocess
cmd = ['/usr/local/bin/convert','lena.jpg','-format','%[fx:100*mean]','info:']
mean=subprocess.call(cmd, shell=False)
print (mean)
```

The result is 70.67860, which is returned to the terminal. **The comment from `tripleee` below indicates that my process above is not correct, in that the mean is shown at the terminal but not actually put into the variable.** He suggested using `subprocess.check_output()`. The following is his solution. (Thank you, tripleee.)

```
#!/opt/local/bin/python3.6
import subprocess
filename = 'lena.jpg'
mean=subprocess.check_output(
     ['/usr/local/bin/convert', filename, '-format', 'mean=%[fx:100*mean]', 'info:'],
     universal_newlines=True)
print (mean)
```

Prints: `mean=70.6786`
You can probably improve the subprocess, and eliminate the temporary text file, with `Popen` + `PIPE`.

```py
cmd=['/usr/bin/convert', full, '-format', '%[fx:100*image.mean]', 'info:']
pid = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = pid.communicate()
return float(out)
```

ImageMagick also ships with the `identify` utility. The same result could be achieved with...

```py
cmd=['/usr/bin/identify', '-format', '%[fx:100*image.mean]', full]
```

It might also be worth exploring working directly with ImageMagick's shared libraries, usually connected through the C API (pythonmagick, wand, etc.). For what you're doing, though, this would only increase code complexity and module dependencies, and in no way improve performance or accuracy.
41,861,138
I am trying to loop through subreddits, but want to ignore the sticky posts at the top. I am able to print the first 5 posts, unfortunately including the stickies. Various pythonic methods of trying to skip these have failed. Two different examples of my code below. ``` subreddit = reddit.subreddit(sub) for submission in subreddit.hot(limit=5): # If we haven't replied to this post before if submission.id not in posts_replied_to: ##FOOD if subreddit == 'food': if 'pLEASE SEE' in submission.title: pass if "please vote" in submission.title: pass else: print(submission.title) if re.search("please vote", submission.title, re.IGNORECASE): pass else: print(submission.title) ``` I noticed a sticky tag in the documents but not sure exactly how to use it. Any help is appreciated.
2017/01/25
[ "https://Stackoverflow.com/questions/41861138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4750577/" ]
[It looks like you can get the id of a stickied post based on docs](http://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html?highlight=sticky). So perhaps you could get the id(s) of the stickied post(s) (note that with the 'number' parameter of the sticky method you can say give me the first, or second, or third, stickied post; use this to your advantage to get *all* of the stickied posts) and for each submission that you are going to pull, first check its id against the stickied ids. Example: ``` # assuming there are no more than three stickies... stickies = [reddit.subreddit("chicago").sticky(i).id for i in range(1,4)] ``` and then when you want to make sure a given post isn't stickied, use: ``` if post.id not in stickies: do something ``` It looks like, were there fewer than three, this would give you a list with duplicate ids, which won't be a problem.
As an addendum to @Al Avery's answer, you can do a complete search for the IDs of all stickies on a given subreddit by doing something like

```
import itertools

import prawcore


def get_all_stickies(sub):
    stickies = set()
    for i in itertools.count(1):
        try:
            sid = sub.sticky(i).id
        except prawcore.NotFound:
            break
        if sid in stickies:
            break
        stickies.add(sid)
    return stickies
```

This function takes into account that the documentation leads one to expect an error if an invalid index is supplied to `sticky`, while the actual behavior seems to be that a duplicate ID is returned. Using a `set` instead of a list makes lookup faster if you have a large number of stickies.

You would use the function as

```
subreddit = reddit.subreddit(sub)
stickies = get_all_stickies(subreddit)
for submission in subreddit.hot(limit=5):
    if submission.id not in posts_replied_to and submission.id not in stickies:
        print(submission.title)
```
41,861,138
I am trying to loop through subreddits, but want to ignore the sticky posts at the top. I am able to print the first 5 posts, unfortunately including the stickies. Various pythonic methods of trying to skip these have failed. Two different examples of my code below. ``` subreddit = reddit.subreddit(sub) for submission in subreddit.hot(limit=5): # If we haven't replied to this post before if submission.id not in posts_replied_to: ##FOOD if subreddit == 'food': if 'pLEASE SEE' in submission.title: pass if "please vote" in submission.title: pass else: print(submission.title) if re.search("please vote", submission.title, re.IGNORECASE): pass else: print(submission.title) ``` I noticed a sticky tag in the documents but not sure exactly how to use it. Any help is appreciated.
2017/01/25
[ "https://Stackoverflow.com/questions/41861138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4750577/" ]
[It looks like you can get the id of a stickied post based on docs](http://praw.readthedocs.io/en/latest/code_overview/models/subreddit.html?highlight=sticky). So perhaps you could get the id(s) of the stickied post(s) (note that with the 'number' parameter of the sticky method you can say give me the first, or second, or third, stickied post; use this to your advantage to get *all* of the stickied posts) and for each submission that you are going to pull, first check its id against the stickied ids. Example: ``` # assuming there are no more than three stickies... stickies = [reddit.subreddit("chicago").sticky(i).id for i in range(1,4)] ``` and then when you want to make sure a given post isn't stickied, use: ``` if post.id not in stickies: do something ``` It looks like, were there fewer than three, this would give you a list with duplicate ids, which won't be a problem.
Submissions which are stickied have a `stickied` attribute that evaluates to `True`. Add the following to your loop, and you should be good to go.

```
if submission.stickied:
    continue
```

In general, I recommend checking the available attributes on the objects you are working with to see if there is something usable. See: [Determine Available Attributes of an Object](https://praw.readthedocs.io/en/latest/getting_started/quick_start.html#determine-available-attributes-of-an-object)
41,861,138
I am trying to loop through subreddits, but want to ignore the sticky posts at the top. I am able to print the first 5 posts, unfortunately including the stickies. Various pythonic methods of trying to skip these have failed. Two different examples of my code below. ``` subreddit = reddit.subreddit(sub) for submission in subreddit.hot(limit=5): # If we haven't replied to this post before if submission.id not in posts_replied_to: ##FOOD if subreddit == 'food': if 'pLEASE SEE' in submission.title: pass if "please vote" in submission.title: pass else: print(submission.title) if re.search("please vote", submission.title, re.IGNORECASE): pass else: print(submission.title) ``` I noticed a sticky tag in the documents but not sure exactly how to use it. Any help is appreciated.
2017/01/25
[ "https://Stackoverflow.com/questions/41861138", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4750577/" ]
Submissions which are stickied have a `stickied` attribute that evaluates to `True`. Add the following to your loop, and you should be good to go.

```
if submission.stickied:
    continue
```

In general, I recommend checking the available attributes on the objects you are working with to see if there is something usable. See: [Determine Available Attributes of an Object](https://praw.readthedocs.io/en/latest/getting_started/quick_start.html#determine-available-attributes-of-an-object)
As an addendum to @Al Avery's answer, you can do a complete search for the IDs of all stickies on a given subreddit by doing something like

```
import itertools

import prawcore


def get_all_stickies(sub):
    stickies = set()
    for i in itertools.count(1):
        try:
            sid = sub.sticky(i).id
        except prawcore.NotFound:
            break
        if sid in stickies:
            break
        stickies.add(sid)
    return stickies
```

This function takes into account that the documentation leads one to expect an error if an invalid index is supplied to `sticky`, while the actual behavior seems to be that a duplicate ID is returned. Using a `set` instead of a list makes lookup faster if you have a large number of stickies.

You would use the function as

```
subreddit = reddit.subreddit(sub)
stickies = get_all_stickies(subreddit)
for submission in subreddit.hot(limit=5):
    if submission.id not in posts_replied_to and submission.id not in stickies:
        print(submission.title)
```
32,221,890
I want a user to input a list with an object on every new line. The user will copy and paste a whole list into the program rather than enter a new object every time. For example, here is the user's input:

> january
> february
> march
> april
> may
> june

and he gets a list just like this:

```
('january','february','march','april','may','june')
```

Does someone have an idea for Python code that can help me?
2015/08/26
[ "https://Stackoverflow.com/questions/32221890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4982967/" ]
You should use the <http://eonasdan.github.io/bootstrap-datetimepicker/> datetimepicker, setting the format of the `dateTimePicker` to `'hh:mm:ss'`. You also have to use `moment.js`. For more formats, you should check: <http://momentjs.com/docs/#/displaying/format/>

I have created a JSFiddle: <http://jsfiddle.net/jagtx65n/>

HTML:

```
<div class="col-sm-6">
    <div class="form-group">
        <div class="input-group date" id="datetimepicker1">
            <input type="text" class="form-control">
            <span class="input-group-addon">
                <span class="glyphicon glyphicon-calendar"></span>
            </span>
        </div>
    </div>
</div>
```

JS:

```
$(function () {
    $('#datetimepicker1').datetimepicker({
        format: 'hh:mm:ss'
    });
});
```

**EDIT** open when clicking the input field:

```
$(function(){
    $('#datetimepicker1').datetimepicker({
        format: 'hh:mm:ss',
        allowInputToggle: true
    });
});
```
[DEMO](http://jsfiddle.net/SantoshPandu/B4BzK/466/)

HTML

```
<div class="container">
    <div class="row">
        <div class="col-sm-6 form-group">
            <label for="dd" class="sr-only">Time Pick</label>
            <input type="text" id="dd" name="dd" data-format="MM/DD/YYYY" placeholder="date" class="form-control" />
        </div>
    </div>
</div>
<input type='button' id='clear' value='Clear Date'>
```

JS

```
var picker = $('#dd').datetimepicker({
    format: 'DD-MM-YYYY hh:mm:ss',
})

$('#clear').click(function () {
    $('#dd').data("DateTimePicker").clear()
})
```
32,221,890
I want a user to input a list with an object on every new line. The user will copy and paste a whole list into the program rather than enter a new object every time. For example, here is the user's input:

> january
> february
> march
> april
> may
> june

and he gets a list just like this:

```
('january','february','march','april','may','june')
```

Does someone have an idea for Python code that can help me?
2015/08/26
[ "https://Stackoverflow.com/questions/32221890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4982967/" ]
You should use the <http://eonasdan.github.io/bootstrap-datetimepicker/> datetimepicker, setting the format of the `dateTimePicker` to `'hh:mm:ss'`. You also have to use `moment.js`. For more formats, you should check: <http://momentjs.com/docs/#/displaying/format/>

I have created a JSFiddle: <http://jsfiddle.net/jagtx65n/>

HTML:

```
<div class="col-sm-6">
    <div class="form-group">
        <div class="input-group date" id="datetimepicker1">
            <input type="text" class="form-control">
            <span class="input-group-addon">
                <span class="glyphicon glyphicon-calendar"></span>
            </span>
        </div>
    </div>
</div>
```

JS:

```
$(function () {
    $('#datetimepicker1').datetimepicker({
        format: 'hh:mm:ss'
    });
});
```

**EDIT** open when clicking the input field:

```
$(function(){
    $('#datetimepicker1').datetimepicker({
        format: 'hh:mm:ss',
        allowInputToggle: true
    });
});
```
Hi: Here's the part of the code that does the format parsing. As you can see `HH` and `H` are for 12 hour format and for seconds just use `ss` as in your example, but for **minutes** you **have to** use `i` or `ii` ``` setters_order = ['hh', 'h', 'ii', 'i', 'ss', 's', 'yyyy', 'yy', 'M', 'MM', 'm', 'mm', 'D', 'DD', 'd', 'dd', 'H', 'HH', 'p', 'P'], setters_map = { hh: function (d, v) { return d.setUTCHours(v); }, h: function (d, v) { return d.setUTCHours(v); }, HH: function (d, v) { return d.setUTCHours(v == 12 ? 0 : v); }, H: function (d, v) { return d.setUTCHours(v == 12 ? 0 : v); }, ii: function (d, v) { return d.setUTCMinutes(v); }, i: function (d, v) { return d.setUTCMinutes(v); }, ss: function (d, v) { return d.setUTCSeconds(v); }, s: function (d, v) { return d.setUTCSeconds(v); }, yyyy: function (d, v) { return d.setUTCFullYear(v); }, yy: function (d, v) { return d.setUTCFullYear(2000 + v); }, m: function (d, v) { v -= 1; while (v < 0) v += 12; v %= 12; d.setUTCMonth(v); while (d.getUTCMonth() != v) if (isNaN(d.getUTCMonth())) return d; else d.setUTCDate(d.getUTCDate() - 1); return d; }, d: function (d, v) { return d.setUTCDate(v); }, p: function (d, v) { return d.setUTCHours(v == 1 ? d.getUTCHours() + 12 : d.getUTCHours()); } }, ```
32,221,890
I want a user to input a list with an object on every new line. The user will copy and paste a whole list into the program rather than enter a new object every time. For example, here is the user's input:

> january
> february
> march
> april
> may
> june

and he gets a list just like this:

```
('january','february','march','april','may','june')
```

Does someone have an idea for Python code that can help me?
2015/08/26
[ "https://Stackoverflow.com/questions/32221890", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4982967/" ]
[DEMO](http://jsfiddle.net/SantoshPandu/B4BzK/466/)

HTML

```
<div class="container">
    <div class="row">
        <div class="col-sm-6 form-group">
            <label for="dd" class="sr-only">Time Pick</label>
            <input type="text" id="dd" name="dd" data-format="MM/DD/YYYY" placeholder="date" class="form-control" />
        </div>
    </div>
</div>
<input type='button' id='clear' value='Clear Date'>
```

JS

```
var picker = $('#dd').datetimepicker({
    format: 'DD-MM-YYYY hh:mm:ss',
})

$('#clear').click(function () {
    $('#dd').data("DateTimePicker").clear()
})
```
Hi: Here's the part of the code that does the format parsing. As you can see `HH` and `H` are for 12 hour format and for seconds just use `ss` as in your example, but for **minutes** you **have to** use `i` or `ii` ``` setters_order = ['hh', 'h', 'ii', 'i', 'ss', 's', 'yyyy', 'yy', 'M', 'MM', 'm', 'mm', 'D', 'DD', 'd', 'dd', 'H', 'HH', 'p', 'P'], setters_map = { hh: function (d, v) { return d.setUTCHours(v); }, h: function (d, v) { return d.setUTCHours(v); }, HH: function (d, v) { return d.setUTCHours(v == 12 ? 0 : v); }, H: function (d, v) { return d.setUTCHours(v == 12 ? 0 : v); }, ii: function (d, v) { return d.setUTCMinutes(v); }, i: function (d, v) { return d.setUTCMinutes(v); }, ss: function (d, v) { return d.setUTCSeconds(v); }, s: function (d, v) { return d.setUTCSeconds(v); }, yyyy: function (d, v) { return d.setUTCFullYear(v); }, yy: function (d, v) { return d.setUTCFullYear(2000 + v); }, m: function (d, v) { v -= 1; while (v < 0) v += 12; v %= 12; d.setUTCMonth(v); while (d.getUTCMonth() != v) if (isNaN(d.getUTCMonth())) return d; else d.setUTCDate(d.getUTCDate() - 1); return d; }, d: function (d, v) { return d.setUTCDate(v); }, p: function (d, v) { return d.setUTCHours(v == 1 ? d.getUTCHours() + 12 : d.getUTCHours()); } }, ```
53,451,057
I would like to display the following ``` $ env/bin/python >>>import requests >>> requests.get('http://dabapps.com') <Response [200]> ``` as a code sample within a bullet paragraph for Github styled markdown. How do I do it?
2018/11/23
[ "https://Stackoverflow.com/questions/53451057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5722359/" ]
> h:25:59: friend declaration declares a non template function.

You are missing the declaration of the function as a template that takes `Pairwise<K, V>`. Note the fresh parameter names `K2`/`V2` below: reusing `K` and `V` inside the class would shadow the class template's parameters, which compilers reject.

header.h:

```
#ifndef HEADER_H_INCLUDED /* or pragma once */
#define HEADER_H_INCLUDED /* if you like it */

#include <iostream> // or <ostream>

template<typename K, typename V>
class Pairwise {  // made it a class so that the
    K first;      // friend actually makes sense.
    V second;

public:
    Pairwise() = default;
    Pairwise(K first, V second)
    : first{ first }, second{ second }
    {}

    // fresh template parameters so the class's K and V are not shadowed
    template<typename K2, typename V2>
    friend std::ostream& operator<<(std::ostream &out, Pairwise<K2, V2> const &p)
    {
        return out << p.first << ": " << p.second;
    }
};

#endif /* HEADER_H_INCLUDED */
```

source file:

```
#include <iostream>  // the user can't know a random header includes it
#include <string>

#include "header.h"

int main()
{
    Pairwise<std::string, std::string> p{ "foo", "bar" };
    std::cout << p << '\n';
}
```

Sidenote: You could also use

```
{
    using Stringpair = Pairwise<std::string, std::string>;
    // ...
    Stringpair sp{ "foo", "bar" };
}
```

if you need that more often.

The other errors you got result from confusing `std::ostringstream` with `std::ostream` in `operator<<()`.
As you write it, you define the operator as a member function, which is very likely not intended. Divide it like this, declaring the operator template up front so that the friend declaration (note the `<>`) refers to that template rather than to a non-template function that is never defined:

```
template<typename K, typename V> struct Pairwise;

template<typename K, typename V>
ostream& operator<<(ostream &out, const Pairwise<K, V> &n);

template<typename K, typename V>
struct Pairwise{
    K first;
    V second;

    Pairwise() = default;
    Pairwise(K, V);

    //print out as a string in main
    friend ostream& operator<< <>(ostream &out, const Pairwise &n);
};

template<typename K, typename V>
ostream& operator<<(ostream &out, const Pairwise<K,V> &n)
{
   ...
   return out;
}
```

And it should work. BTW: Note that in a `struct` all members are public by default; so you would be able to access them even in the absence of the `friend`-declaration.
53,451,057
I would like to display the following ``` $ env/bin/python >>>import requests >>> requests.get('http://dabapps.com') <Response [200]> ``` as a code sample within a bullet paragraph for Github styled markdown. How do I do it?
2018/11/23
[ "https://Stackoverflow.com/questions/53451057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5722359/" ]
As you write it, you define the operator as a member function, which is very likely not intended. Divide it like this, declaring the operator template up front so that the friend declaration (note the `<>`) refers to that template rather than to a non-template function that is never defined:

```
template<typename K, typename V> struct Pairwise;

template<typename K, typename V>
ostream& operator<<(ostream &out, const Pairwise<K, V> &n);

template<typename K, typename V>
struct Pairwise{
    K first;
    V second;

    Pairwise() = default;
    Pairwise(K, V);

    //print out as a string in main
    friend ostream& operator<< <>(ostream &out, const Pairwise &n);
};

template<typename K, typename V>
ostream& operator<<(ostream &out, const Pairwise<K,V> &n)
{
   ...
   return out;
}
```

And it should work. BTW: Note that in a `struct` all members are public by default; so you would be able to access them even in the absence of the `friend`-declaration.
In c++, less is often more... ``` #pragma once #include<iostream> #include<string> // never do this in a header file: // using std::ostream; template<typename K, typename V> struct Pairwise{ K first; V second; Pairwise() = default; Pairwise(K, V); //print out as a string in main friend std::ostream& operator<<(std::ostream &out, const Pairwise &n) { return out << n.first << ':' << n.second; } }; int main (){ using std::cout; using std::endl; using std::string; Pairwise<string, string> example = {"key", "value"}; cout << example << endl; } ``` <https://godbolt.org/z/ZUlLTu>
53,451,057
I would like to display the following ``` $ env/bin/python >>>import requests >>> requests.get('http://dabapps.com') <Response [200]> ``` as a code sample within a bullet paragraph for Github styled markdown. How do I do it?
2018/11/23
[ "https://Stackoverflow.com/questions/53451057", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5722359/" ]
> h:25:59: friend declaration declares a non template function.

You are missing the declaration of the function as a template that takes `Pairwise<K, V>`. Note the fresh parameter names `K2`/`V2` below: reusing `K` and `V` inside the class would shadow the class template's parameters, which compilers reject.

header.h:

```
#ifndef HEADER_H_INCLUDED /* or pragma once */
#define HEADER_H_INCLUDED /* if you like it */

#include <iostream> // or <ostream>

template<typename K, typename V>
class Pairwise {  // made it a class so that the
    K first;      // friend actually makes sense.
    V second;

public:
    Pairwise() = default;
    Pairwise(K first, V second)
    : first{ first }, second{ second }
    {}

    // fresh template parameters so the class's K and V are not shadowed
    template<typename K2, typename V2>
    friend std::ostream& operator<<(std::ostream &out, Pairwise<K2, V2> const &p)
    {
        return out << p.first << ": " << p.second;
    }
};

#endif /* HEADER_H_INCLUDED */
```

source file:

```
#include <iostream>  // the user can't know a random header includes it
#include <string>

#include "header.h"

int main()
{
    Pairwise<std::string, std::string> p{ "foo", "bar" };
    std::cout << p << '\n';
}
```

Sidenote: You could also use

```
{
    using Stringpair = Pairwise<std::string, std::string>;
    // ...
    Stringpair sp{ "foo", "bar" };
}
```

if you need that more often.

The other errors you got result from confusing `std::ostringstream` with `std::ostream` in `operator<<()`.
In c++, less is often more... ``` #pragma once #include<iostream> #include<string> // never do this in a header file: // using std::ostream; template<typename K, typename V> struct Pairwise{ K first; V second; Pairwise() = default; Pairwise(K, V); //print out as a string in main friend std::ostream& operator<<(std::ostream &out, const Pairwise &n) { return out << n.first << ':' << n.second; } }; int main (){ using std::cout; using std::endl; using std::string; Pairwise<string, string> example = {"key", "value"}; cout << example << endl; } ``` <https://godbolt.org/z/ZUlLTu>
43,513,121
As per my application requirement, I need to get the server IP and the server name from the python program. But my application resides inside a specific docker container on top of Ubuntu. I have tried like the below

```
import os
os.system("hostname") # to get the hostname
os.system("hostname -i") # to get the host ip
```

Output: `2496c9ab2f4a172.*.*.*`

But it is giving the hostname as its residing docker container id and the host\_ip as its private ip address, as above. I need the hostname to be the server name. But when I type these commands in the terminal I am able to get the result I want.
2017/04/20
[ "https://Stackoverflow.com/questions/43513121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3666266/" ]
You won't be able to get the host system's name this way. To get it, you can define an environment variable, either in your Dockerfile or when running your container (-e option). Alternatively, you can mount your host `/etc/hostname` file into the container, or copy it...

This is an example run command I use to set the environment variable HOSTNAME to the host's hostname within the container:

```
docker run -it -e "HOSTNAME=$(cat /etc/hostname)" <image> <cmd>
```

In python you can then run `os.environ["HOSTNAME"]` to get the hostname.

As far as the IP address goes, I use this command to retrieve it from a running container:

```
route -n | awk '/UG[ \t]/{print $2}'
```

You will have to install route to be able to use this command. It is included in the package net-tools. `apt-get install net-tools`
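Putting both pieces together inside the container might look like this; a sketch, assuming `HOSTNAME` was passed in with `-e` as shown above and that net-tools is installed:

```
import os
import subprocess

host_name = os.environ.get("HOSTNAME")  # set via `docker run -e ...`

# the container's default gateway, i.e. the docker host's bridge address
gateway = subprocess.check_output(
    "route -n | awk '/UG[ \t]/{print $2}'", shell=True).decode().strip()

print(host_name, gateway)
```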
An alternative might be the following: ENV: ``` NODENAME: '{{.Node.Hostname}}' ``` This will get you the Hostname of the Node, where the container is running as an environment variable (tested on Docker-Swarm / CoreOs Stable).
43,513,121
As per my application requirement, I need to get the server IP and the server name from the python program. But my application resides inside a specific docker container on top of Ubuntu. I have tried like the below

```
import os
os.system("hostname") # to get the hostname
os.system("hostname -i") # to get the host ip
```

Output: `2496c9ab2f4a172.*.*.*`

But it is giving the hostname as its residing docker container id and the host\_ip as its private ip address, as above. I need the hostname to be the server name. But when I type these commands in the terminal I am able to get the result I want.
2017/04/20
[ "https://Stackoverflow.com/questions/43513121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3666266/" ]
You won't be able to get the host system's name this way. To get it, you can define an environment variable, either in your Dockerfile or when running your container (-e option). Alternatively, you can mount your host `/etc/hostname` file into the container, or copy it...

This is an example run command I use to set the environment variable HOSTNAME to the host's hostname within the container:

```
docker run -it -e "HOSTNAME=$(cat /etc/hostname)" <image> <cmd>
```

In python you can then run `os.environ["HOSTNAME"]` to get the hostname.

As far as the IP address goes, I use this command to retrieve it from a running container:

```
route -n | awk '/UG[ \t]/{print $2}'
```

You will have to install route to be able to use this command. It is included in the package net-tools. `apt-get install net-tools`
you can go for something like this:

```
import subprocess


def determine_docker_host_ip_address():
    # the first line of `ip route show` looks like
    # "default via 172.17.0.1 dev eth0"; the third field is the
    # default gateway, which from inside a container is the docker host
    cmd = "ip route show"
    process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
    output, error = process.communicate()
    return output.decode().split(' ')[2]
```
43,513,121
As per my application requirement, I need to get the server IP and the server name from the python program. But my application resides inside a specific docker container on top of Ubuntu. I have tried like the below

```
import os
os.system("hostname") # to get the hostname
os.system("hostname -i") # to get the host ip
```

Output: `2496c9ab2f4a172.*.*.*`

But it is giving the hostname as its residing docker container id and the host\_ip as its private ip address, as above. I need the hostname to be the server name. But when I type these commands in the terminal I am able to get the result I want.
2017/04/20
[ "https://Stackoverflow.com/questions/43513121", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3666266/" ]
You won't be able to get the host system's name this way. To get it, you can define an environment variable, either in your Dockerfile or when running your container (-e option). Alternatively, you can mount your host `/etc/hostname` file into the container, or copy it...

This is an example run command I use to set the environment variable HOSTNAME to the host's hostname within the container:

```
docker run -it -e "HOSTNAME=$(cat /etc/hostname)" <image> <cmd>
```

In python you can then run `os.environ["HOSTNAME"]` to get the hostname.

As far as the IP address goes, I use this command to retrieve it from a running container:

```
route -n | awk '/UG[ \t]/{print $2}'
```

You will have to install route to be able to use this command. It is included in the package net-tools. `apt-get install net-tools`
``` import os os.uname().nodename ```
7,052,874
I had a custom script programmed and it is using the author's own module that is hosted on Google Code in a Mercurial repo. I understand how to clone the repo, but this will just stick the source into a folder on my computer. Is there a proper way to add the module into my python install to make it available for my projects? (e.g. with modules hosted on pypi you can use virtualenv and pip to install). Thanks Dave O
2011/08/13
[ "https://Stackoverflow.com/questions/7052874", "https://Stackoverflow.com", "https://Stackoverflow.com/users/893341/" ]
In exactly the same way. Just pass the address of the repo to `pip install`, using the `-e` parameter; note that pip wants an `#egg=` fragment naming the package on editable VCS installs:

```
pip install -e hg+http://code.google.com/path/to/repo#egg=packagename
```
If the module isn't on pypi, clone the repository with Hg and see if there's a setup.py file. If there is, open a command prompt, cd to that directory, and run: ``` python setup.py install ```
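If you would rather keep the install manageable by pip (so it can be uninstalled later), pointing pip at the cloned directory should also work, assuming a reasonably recent pip:

```
pip install /path/to/cloned/repo
```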
48,601,123
Here I have a mistake for which I can't find the solution. Please excuse me for the quality of the code, I didn't start classes until 6 months ago. I've tried to detach the category objects with expunge, but once the object is added it doesn't work. I was thinking that detaching the object with expunge would make it work, but I can't find a solution :(. I pasted as much code as I could so you could see:

```
Traceback (most recent call last):
  File "/home/scwall/PycharmProjects/purebeurre/recovery.py", line 171, in <module>
    connection.connect.add(article)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1776, in add
    self._save_or_update_state(state)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1796, in _save_or_update_state
    self._save_or_update_impl(st_)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2101, in _save_or_update_impl
    self._update_impl(state)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2090, in _update_impl
    self.identity_map.add(state)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/identity.py", line 149, in add
    orm_util.state_str(state), state.key))
sqlalchemy.exc.InvalidRequestError: Can't attach instance <Categories at 0x7fe8d8000e48>; another instance with key (<class 'packages.databases.models.Categories'>, (26,), None) is already present in this session.

Process finished with exit code 1
```

```
class CategoriesQuery(ConnectionQuery):

    @classmethod
    def get_categories_by_tags(cls, tags_list):
        return cls.connection.connect.query(Categories).filter(Categories.id_category.in_(tags_list)).all()
```

---

other file:

```
def function_recovery_and_push(link_page):
    count_and_end_page_return_all = {}
    count_f = 0
    total_count_f = 0
    list_article = []
    try:
        products_dic = requests.get(link_page).json()
        if products_dic['count']:
            count_f = products_dic['page_size']
        if products_dic['count']:
            total_count_f = products_dic['count']
        if not products_dic['products']:
            count_and_end_page_return_all['count'] = False
            count_and_end_page_return_all['total_count'] = False
            count_and_end_page_return_all['final_page'] = True
        else:
            count_and_end_page_return_all['final_page'] = False
        for product in products_dic["products"]:
            if 'nutrition_grades' in product.keys() \
                    and 'product_name_fr' in product.keys() \
                    and 'categories_tags' in product.keys() \
                    and 1 <= len(product['product_name_fr']) <= 100:
                try:
                    list_article.append(
                        Products(name=product['product_name_fr'],
                                 description=product['ingredients_text_fr'],
                                 nutrition_grade=product['nutrition_grades'],
                                 shop=product['stores'],
                                 link_http=product['url'],
                                 categories=CategoriesQuery.get_categories_by_tags(product['categories_tags'])))
                except KeyError:
                    continue
        count_and_end_page_return_all['count'] = count_f
        count_and_end_page_return_all['total_count'] = total_count_f
        list_article.append(count_and_end_page_return_all)
        return list_article
    except:
        count_and_end_page_return_all['count'] = False
        count_and_end_page_return_all['total_count'] = False
        count_and_end_page_return_all['final_page'] = True
        list_article.append(count_and_end_page_return_all)
        return list_article

p = Pool()
articles_list_all_pool = p.map(function_recovery_and_push, list_page_for_pool)
p.close()

for articles_list_pool in articles_list_all_pool:
    for article in articles_list_pool:
        if type(article) is dict:
            if article['count'] != False \
                    and article['total_count'] != False:
                count += article['count']
                total_count = article['total_count']
            if article['final_page'] is True:
                final_page = article['final_page']
        else:
            connection.connect.add(article)
```

I receive this as an error message, thank you in advance for your answers
2018/02/03
[ "https://Stackoverflow.com/questions/48601123", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8551016/" ]
This error happens when you try to add an object to a session but it is already loaded. The only line where I see you use the `.add` function is at the end, where you run:

`connection.connect.add(article)`

So my guess is that this model is already loaded in the session and you don't need to add it again. You can add a try/except and roll back the operation if it throws an exception, as sketched below.
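For illustration, a minimal sketch of that guard, reusing the `connection.connect` session from the question (untested):

```
from sqlalchemy.exc import InvalidRequestError

try:
    connection.connect.add(article)
except InvalidRequestError:
    # the instance is already tracked by this session: undo the
    # failed operation and carry on without adding it again
    connection.connect.rollback()
```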
Unloading all objects from the session and then adding them again might help.

```py
db.session.expunge_all()
db.session.add(obj)  # re-add the instance(s) you still need
```
48,601,123
Here I have a mistake for which I can't find the solution. Please excuse me for the quality of the code, I didn't start classes until 6 months ago. I've tried to detach the category objects with expunge, but once the object is added it doesn't work. I was thinking that detaching the object with expunge would make it work, but I can't find a solution :(. I pasted as much code as I could so you could see:

```
Traceback (most recent call last):
  File "/home/scwall/PycharmProjects/purebeurre/recovery.py", line 171, in <module>
    connection.connect.add(article)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1776, in add
    self._save_or_update_state(state)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1796, in _save_or_update_state
    self._save_or_update_impl(st_)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2101, in _save_or_update_impl
    self._update_impl(state)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 2090, in _update_impl
    self.identity_map.add(state)
  File "/home/scwall/PycharmProjects/purebeurre/venv/lib/python3.6/site-packages/sqlalchemy/orm/identity.py", line 149, in add
    orm_util.state_str(state), state.key))
sqlalchemy.exc.InvalidRequestError: Can't attach instance <Categories at 0x7fe8d8000e48>; another instance with key (<class 'packages.databases.models.Categories'>, (26,), None) is already present in this session.

Process finished with exit code 1
```

```
class CategoriesQuery(ConnectionQuery):

    @classmethod
    def get_categories_by_tags(cls, tags_list):
        return cls.connection.connect.query(Categories).filter(Categories.id_category.in_(tags_list)).all()
```

---

other file:

```
def function_recovery_and_push(link_page):
    count_and_end_page_return_all = {}
    count_f = 0
    total_count_f = 0
    list_article = []
    try:
        products_dic = requests.get(link_page).json()
        if products_dic['count']:
            count_f = products_dic['page_size']
        if products_dic['count']:
            total_count_f = products_dic['count']
        if not products_dic['products']:
            count_and_end_page_return_all['count'] = False
            count_and_end_page_return_all['total_count'] = False
            count_and_end_page_return_all['final_page'] = True
        else:
            count_and_end_page_return_all['final_page'] = False
        for product in products_dic["products"]:
            if 'nutrition_grades' in product.keys() \
                    and 'product_name_fr' in product.keys() \
                    and 'categories_tags' in product.keys() \
                    and 1 <= len(product['product_name_fr']) <= 100:
                try:
                    list_article.append(
                        Products(name=product['product_name_fr'],
                                 description=product['ingredients_text_fr'],
                                 nutrition_grade=product['nutrition_grades'],
                                 shop=product['stores'],
                                 link_http=product['url'],
                                 categories=CategoriesQuery.get_categories_by_tags(product['categories_tags'])))
                except KeyError:
                    continue
        count_and_end_page_return_all['count'] = count_f
        count_and_end_page_return_all['total_count'] = total_count_f
        list_article.append(count_and_end_page_return_all)
        return list_article
    except:
        count_and_end_page_return_all['count'] = False
        count_and_end_page_return_all['total_count'] = False
        count_and_end_page_return_all['final_page'] = True
        list_article.append(count_and_end_page_return_all)
        return list_article

p = Pool()
articles_list_all_pool = p.map(function_recovery_and_push, list_page_for_pool)
p.close()

for articles_list_pool in articles_list_all_pool:
    for article in articles_list_pool:
        if type(article) is dict:
            if article['count'] != False \
                    and article['total_count'] != False:
                count += article['count']
                total_count = article['total_count']
            if article['final_page'] is True:
                final_page = article['final_page']
        else:
            connection.connect.add(article)
```

I receive this as an error message, thank you in advance for your answers
2018/02/03
[ "https://Stackoverflow.com/questions/48601123", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8551016/" ]
Had the same issue; I'm not sure you implemented the models the same way I did, but in my case at least, I had in the table's model - i.e:

```
product_items = relationship(...)
```

So later when I tried to do

```
products = session.query(Products).all()
one_of_the_products = products[0]

new_product = ProductItem(product_id=one_of_the_products.id, name='foo', category='bla')
session.add(new_product)
```

It raises the same exception as you:

```
sqlalchemy.exc.InvalidRequestError: Can't attach instance <ProductItem at 0x7fe8d8000e48>; another instance with key (<class 'packages.databases.models.ProductItem'>, (26,), None) is already present in this session.
```

The reason for the exception is that when I queried for `products`, the `relationship` created its own sub-query and attached the `product_item` objects, placing them in the variable name I defined in the `relationship()` -> `product_items`.

So instead of doing:

```
session.add(new_product)
```

I just had to use the relationship:

```
one_of_the_products.product_items.append(new_product)
session.commit()
```

hope it helps others.
Unloading all objects from the session and then adding them again might help.

```py
db.session.expunge_all()
db.session.add(obj)  # re-add the instance(s) you still need
```
10,732,812
I'm trying to read some numbers from a text file and convert them to a list of floats, but nothing I try seems to work right. Here's my code right now: ``` python_data = open('C:\Documents and Settings\redacted\Desktop\python_lengths.txt','r') python_lengths = [] for line in python_data: python_lengths.append(line.split()) python_lengths.sort() print python_lengths ``` It returns: ``` [['12.2'], ['26'], ['34.2'], ['5.0'], ['62'], ['62'], ['62.6']] ``` (all brackets included) But I can't convert it to a list of floats with any regular commands like: ``` python_lengths = float(python_lengths) ``` or: ``` float_lengths = [map(float, x) for x in python_lengths] ``` because it seems to be nested or something?
2012/05/24
[ "https://Stackoverflow.com/questions/10732812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1367212/" ]
That is happening because `.split()` always returns a list of items, even if there was just one element present. If you change your `python_lengths.append(line.split())` to `python_lengths.extend(line.split())` you will get the flat list you expected, as the short demo below shows.
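A quick illustration of the difference, using a made-up value, and showing that the float conversion from the question then works on the flat list:

```
line = "12.2"

nested = []
nested.append(line.split())  # [['12.2']] -- the returned list is nested

flat = []
flat.extend(line.split())    # ['12.2']   -- the items are added one by one

lengths = [float(x) for x in flat]
lengths.sort()
print lengths                # [12.2]
```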
@eumiro's answer is correct, but here is something else that can help (note the raw string for the Windows path, and the conversion to floats so the sort is numeric):

```
numbers = []
with open(r'C:\Documents and Settings\redacted\Desktop\python_lengths.txt', 'r') as f:
    for line in f.readlines():
        numbers.extend(float(word) for word in line.split())
numbers.sort()
print numbers
```
10,732,812
I'm trying to read some numbers from a text file and convert them to a list of floats, but nothing I try seems to work right. Here's my code right now: ``` python_data = open('C:\Documents and Settings\redacted\Desktop\python_lengths.txt','r') python_lengths = [] for line in python_data: python_lengths.append(line.split()) python_lengths.sort() print python_lengths ``` It returns: ``` [['12.2'], ['26'], ['34.2'], ['5.0'], ['62'], ['62'], ['62.6']] ``` (all brackets included) But I can't convert it to a list of floats with any regular commands like: ``` python_lengths = float(python_lengths) ``` or: ``` float_lengths = [map(float, x) for x in python_lengths] ``` because it seems to be nested or something?
2012/05/24
[ "https://Stackoverflow.com/questions/10732812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1367212/" ]
That is happening because `.split()` always returns a list of items, even if there was just one element present. If you change your `python_lengths.append(line.split())` to `python_lengths.extend(line.split())` you will get the flat list you expected.
``` def floats_from_file(f): for line in f: for word in line.split(): yield float(word) with open('C:/Documents and Settings/redacted/Desktop/python_lengths.txt') as f: python_lengths = list(floats_from_file(f)) python_lengths.sort() print python_lengths ``` Note that you can use forward slashes, even on Windows. If you want to use backslashes you should use a "raw" string, to avoid problems. What sort of problems? Well, some characters are special with backslash; for example, `\n` represents a newline. If you just put a path in plain quotes, and one of the directory names starts with `n`, you will get a newline there. Solutions are to double the backslashes, use raw strings, or just use forward slashes.
41,528,941
I'm new to python and html. I am trying to retrieve the number of comments from a page using requests and BeautifulSoup. In this example I am trying to get the number 226. Here is the code as I can see it when I inspect the page in Chrome:

```
<a title="Go to the comments page" class="article__comments-counts" href="http://www.theglobeandmail.com/opinion/will-kevin-oleary-be-stopped/article33519766/comments/">
  <span class="civil-comment-count" data-site-id="globeandmail" data-id="33519766" data-language="en">
    226
  </span>
  Comments
</a>
```

When I request the text from the URL, I can find the code but there is no content between the span tags, no 226. Here is my code:

```
import requests, bs4
url = 'http://www.theglobeandmail.com/opinion/will-kevin-oleary-be-stopped/article33519766/'
r = requests.get(url)
soup = bs4.BeautifulSoup(r.text, 'html.parser')
span = soup.find('span', class_='civil-comment-count')
```

It returns this, same as the above but no 226.

```
<span class="civil-comment-count" data-id="33519766" data-language="en" data-site-id="globeandmail">
</span>
```

I'm at a loss as to why the value isn't appearing. Thank you in advance for any assistance.
2017/01/08
[ "https://Stackoverflow.com/questions/41528941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7389440/" ]
The page, and specifically the number of comments, does involve JavaScript to be loaded and shown. But, *you don't have to use Selenium*, make a request to the API behind it: ``` import requests with requests.Session() as session: session.headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36"} # visit main page base_url = 'http://www.theglobeandmail.com/opinion/will-kevin-oleary-be-stopped/article33519766/' session.get(base_url) # get the comments count url = "https://api-civilcomments.global.ssl.fastly.net/api/v1/topics/multiple_comments_count.json" params = {"publication_slug": "globeandmail", "reference_language": "en", "reference_ids": "33519766"} r = session.get(url, params=params) print(r.json()) ``` Prints: ``` {'comment_counts': {'33519766': 226}} ```
This page uses JavaScript to get the comment number; this is what the page looks like when JavaScript is disabled:

[![enter image description here](https://i.stack.imgur.com/V8mcE.png)](https://i.stack.imgur.com/V8mcE.png)

You can find the real url which contains the number in Chrome's Developer tools:

[![enter image description here](https://i.stack.imgur.com/FqwR5.png)](https://i.stack.imgur.com/FqwR5.png)

Then you can mimic the requests using @alecxe's code.
26,575,303
Hello people, I hope you can help me out with this problem: I am currently implementing an interpreter for a scripting language. The language needs a native call interface to C functions, like Java has JNI. My problem is that I want to call the original C functions without writing a wrapper function which converts the call stack of my scripting language into the C call stack. This means that I need a way to generate argument lists of C functions at runtime.

Example:

```
void a(int a, int b) {
    printf("function a called %d", a + b);
}

void b(double a, int b, double c) {
    printf("function b called %f", a * b + c);
}

interpreter.registerNativeFunction("a", a);
interpreter.registerNativeFunction("b", b);
```

The interpreter should be able to call the functions, with only knowing the function prototypes of my scripting language: `native void a(int a, int b);` and `native void b(double a, int b, double c);`

Is there any way to generate a C function call stack in C++, or do I have to use assembler for this task? Assembler is a problem, because the interpreter should run on almost any platform.

Edit: The solution is to use libffi, a library which handles the call stack creation for many different platforms and operating systems. libffi is also used by some prominent language implementations like CPython and OpenJDK.

Edit: @MatsPetersson Somewhere in my code I have a method like:

```
void CInterpreter::CallNativeFunction(string name, vector<IValue> arguments, IReturnReference ret)
{
    // Call here correct native C function.
    // this.nativeFunctions is a map which contains the function pointers.
}
```

**Edit: Thanks for all your help! I will stay with libffi, and test it on all required platforms.**
2014/10/26
[ "https://Stackoverflow.com/questions/26575303", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4180673/" ]
Yes we can. No FFI library needed, no restriction to C calls, only pure C++11. ``` #include <iostream> #include <list> #include <iostream> #include <boost/any.hpp> template <typename T> auto fetch_back(T& t) -> typename std::remove_reference<decltype(t.back())>::type { typename std::remove_reference<decltype(t.back())>::type ret = t.back(); t.pop_back(); return ret; } template <typename X> struct any_ref_cast { X do_cast(boost::any y) { return boost::any_cast<X>(y); } }; template <typename X> struct any_ref_cast<X&> { X& do_cast(boost::any y) { std::reference_wrapper<X> ref = boost::any_cast<std::reference_wrapper<X>>(y); return ref.get(); } }; template <typename X> struct any_ref_cast<const X&> { const X& do_cast(boost::any y) { std::reference_wrapper<const X> ref = boost::any_cast<std::reference_wrapper<const X>>(y); return ref.get(); } }; template <typename Ret, typename...Arg> Ret call (Ret (*func)(Arg...), std::list<boost::any> args) { if (sizeof...(Arg) != args.size()) throw "Argument number mismatch!"; return func(any_ref_cast<Arg>().do_cast(fetch_back(args))...); } int foo(int x, double y, const std::string& z, std::string& w) { std::cout << "foo called : " << x << " " << y << " " << z << " " << w << std::endl; return 42; } ``` Test drive: ``` int main () { std::list<boost::any> args; args.push_back(1); args.push_back(4.56); const std::string yyy("abc"); std::string zzz("123"); args.push_back(std::cref(yyy)); args.push_back(std::ref(zzz)); call(foo, args); } ``` Exercise for the reader: implement `registerNativeFunction` in three easy steps. 1. Create an abstract base class with a pure `call` method that accepts a list of `boost::any`, call it `AbstractFunction` 2. Create a variadic class template that inherits `AbstractFunction` and adds a pointer to a concrete-type function (or `std::function`). Implement `call` in terms of that function. 3. Create an `map<string, AbstractFunction*>` (use smart pointers actually). Drawback: totally cannot call variadic C-style functions (e.g. printf and friends) with this method. There is also no support for implicit argument conversions. If you pass an `int` to a function that requires a `double`, it will throw an exception (which is slightly better than a core dump you can get with a dynamic solution). It is possible to partially solve this for a finite fixed set of conversions by specializing `any_ref_cast`.
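For completeness, one possible sketch of those three steps (untested; the names `AbstractFunction`, `ConcreteFunction` and `natives` are mine, and it reuses `call` and `boost::any` from above):

```
#include <map>
#include <memory>
#include <string>

struct AbstractFunction {
    virtual ~AbstractFunction() {}
    // step 1: one uniform entry point taking the interpreter's arguments
    virtual void invoke(std::list<boost::any> args) = 0;
};

template <typename Ret, typename... Arg>
struct ConcreteFunction : AbstractFunction {
    Ret (*func)(Arg...);
    explicit ConcreteFunction(Ret (*f)(Arg...)) : func(f) {}
    // step 2: forward to the typed pointer through `call`
    void invoke(std::list<boost::any> args) override {
        call(func, std::move(args));  // return value dropped for brevity
    }
};

// step 3: the interpreter's lookup table
std::map<std::string, std::shared_ptr<AbstractFunction>> natives;

template <typename Ret, typename... Arg>
void registerNativeFunction(const std::string& name, Ret (*f)(Arg...)) {
    natives[name] = std::make_shared<ConcreteFunction<Ret, Arg...>>(f);
}
```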
The way to do this is to use pointers to functions:

```
void (*native)(int a, int b) ;
```

The problem you will face is that finding the address of the function to store in the pointer is system dependent. On Windoze, you will probably be loading a DLL, finding the address of the function by name within the DLL, then storing that pointer in `native` to call the function.
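On a POSIX system, for example, the load-and-lookup step might look like this; a sketch, with `./libplugin.so` as a placeholder for a library exporting the `a` function from the question:

```
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* load the shared library and look the function up by name */
    void *handle = dlopen("./libplugin.so", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    void (*native)(int, int);
    /* POSIX-sanctioned cast from object pointer to function pointer */
    *(void **)(&native) = dlsym(handle, "a");
    if (native) native(1, 2);

    dlclose(handle);
    return 0;
}
```

On Windows the equivalent calls are `LoadLibrary` and `GetProcAddress`.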