Dataset columns: qid (int64, values 469 to 74.7M), question (string, 36 to 37.8k chars), date (string, 10 chars), metadata (sequence), response_j (string, 5 to 31.5k chars), response_k (string, 10 to 31.6k chars).
57,476,304
I am getting the below exception while trying to use multiprocessing with Flask-SQLAlchemy. ``` sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically. [12/Aug/2019 18:09:52] "GET /api/resources HTTP/1.1" 500 - Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1244, in _execute_context cursor, statement, parameters, context File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/default.py", line 552, in do_execute cursor.execute(statement, parameters) psycopg2.DatabaseError: error with status PGRES_TUPLES_OK and no message from the libpq ``` Without multiprocessing the code works perfectly, but when I add multiprocessing as below, I run into this issue. ``` worker = multiprocessing.Process(target=<target_method_which_has_business_logic_with_DB>, args=(data,), name='PROCESS_ID', daemon=False) worker.start() return Response("Request Accepted", status=202) ``` I see an answer to a similar question on SO (<https://stackoverflow.com/a/33331954/8085047>), which suggests using engine.dispose(), but in my case I am using db.session directly, not creating the engine and scope manually. Please help me resolve the issue. Thanks!
2019/08/13
[ "https://Stackoverflow.com/questions/57476304", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8085047/" ]
I had the same issue. Following Sam's link helped me solve it. Before I had (not working): ``` from multiprocessing import Pool with Pool() as pool: pool.map(f, [arg1, arg2, ...]) ``` This works for me: ``` from multiprocessing import get_context with get_context("spawn").Pool() as pool: pool.map(f, [arg1, arg2, ...]) ```
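The reason the spawn context helps: fork copies the parent's already-open psycopg2 connection into the child, so both processes end up talking over the same socket, while spawn starts a fresh interpreter that opens its own connections. If you have to keep a fork-based `multiprocessing.Process` as in the question, a common alternative is to discard the inherited pool at the start of the child. This is only a sketch: it assumes the Flask-SQLAlchemy `db` object and `data` payload from the question, and depending on your Flask-SQLAlchemy version you may also need an application context (`with app.app_context():`) inside the worker:

```python
import multiprocessing

def worker_entry(data):
    # Drop connections inherited from the parent so this process opens fresh ones.
    db.engine.dispose()
    # ... business logic that uses db.session goes here ...

worker = multiprocessing.Process(target=worker_entry, args=(data,), daemon=False)
worker.start()
```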
The answer from dibrovsd@github was really useful for me. If you are using a preforking server like uwsgi or gunicorn, this should also help you. I am posting his comment here for your reference. > > Found. This happens when uwsgi (or gunicorn) starts and multiple workers are forked from the first process. > > If there is a request in the first process when it starts, it opens a database connection and that connection is forked into the next process. But on the database side, of course, no new connection is opened, so you end up with a broken connection. > > You have to specify lazy: true, lazy-apps: true (uwsgi) or preload\_app = False (gunicorn) > > In that case, the additional workers do not fork; they start on their own and open their own connections > > > Refer to this link: <https://github.com/psycopg/psycopg2/issues/281#issuecomment-985387977>
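For concreteness, here is a sketch of what those settings look like; the values are examples, not taken from the thread:

```python
# gunicorn.conf.py (sketch): with preload_app = False every worker imports the
# application itself and opens its own database connections instead of
# inheriting one forked from the master process.
preload_app = False
workers = 4  # example value

# uwsgi equivalent (uwsgi.ini), shown here only as a comment:
#   [uwsgi]
#   lazy-apps = true
```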
59,010,815
This is my code: I have used the find element by id RESULT\_RadioButton-7\_0, but I am getting the following error: ``` from selenium import webdriver from selenium.webdriver.common.by import By driver = webdriver.Chrome(executable_path="/home/real/Desktop/Selenium_with_python/SeleniumProjects/chromedriver_linux64/chromedriver") driver.get("https://fs2.formsite.com/meherpavan/form2/index.html?153770259640") radiostatus = driver.find_element(By.ID, "RESULT_RadioButton-7_0").click() ``` My error is this: > > elementClickInterceptedException: element click intercepted: Element is not clickable at point (40, 567). Other element would receive the click: <label for="RESULT\_RadioButton-7\_0">...</label> (Session info: chrome=78.0.3904.70) > > >
2019/11/23
[ "https://Stackoverflow.com/questions/59010815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11132456/" ]
Based on the page link you provided, it looks like your locator strategy is correct here. If you are getting an error (most likely `NoSuchElementException`), I am assuming it might have something to do with waiting for the page to load before attempting to find the element. Let's use the `expected_conditions` module to wait for the element to exist before locating it: ``` from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC # Add the above references to your .py file # Wait on the element to exist, and store its reference in radiostatus radiostatus = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "RESULT_RadioButton-7_0"))) # Click the element #radiostatus.click() # Click intercepted workaround: JavaScript click driver.execute_script("arguments[0].click();", radiostatus) ``` This will tick the radio button next to "Male" on the form.
Unless you need to wait on the element (which doesn't seem necessary), you should be able to do the following: ``` element_to_click_or_whatever = driver.find_element_by_id('RESULT_RadioButton-7_0') ``` If you look at the source for [`find_element_by_id`](https://github.com/SeleniumHQ/selenium/blob/master/py/selenium/webdriver/remote/webelement.py#L162), it calls `find_element` with `By.ID` as an argument: ``` def find_element_by_id(self, id_): return self.find_element(by=By.ID, value=id_) ``` IMO: `find_element_by_id` reads better, and it's one less package to import. I don't think your issue is finding the element; there's an `ElementClickInterceptedException` when trying to click on the element. For example, the radio button is located, but (strangely) Selenium doesn't think it's displayed. ``` from selenium import webdriver driver = webdriver.Chrome() driver.maximize_window() driver.get("https://fs2.formsite.com/meherpavan/form2/index.html?153770259640") radiostatus = driver.find_element_by_id('RESULT_RadioButton-7_0') if radiostatus: print('found') # Found print(radiostatus.is_displayed()) # False ```
59,010,815
This is my code: I have used the find element by id RESULT\_RadioButton-7\_0, but I am getting the following error: ``` from selenium import webdriver from selenium.webdriver.common.by import By driver = webdriver.Chrome(executable_path="/home/real/Desktop/Selenium_with_python/SeleniumProjects/chromedriver_linux64/chromedriver") driver.get("https://fs2.formsite.com/meherpavan/form2/index.html?153770259640") radiostatus = driver.find_element(By.ID, "RESULT_RadioButton-7_0").click() ``` My error is this: > > elementClickInterceptedException: element click intercepted: Element is not clickable at point (40, 567). Other element would receive the click: <label for="RESULT\_RadioButton-7\_0">...</label> (Session info: chrome=78.0.3904.70) > > >
2019/11/23
[ "https://Stackoverflow.com/questions/59010815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11132456/" ]
The answer below should help you click the *"Male"* radio button on your linked form. ``` from selenium.webdriver.common.by import By from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.action_chains import ActionChains driver = webdriver.Chrome(executable_path=r"C:\New folder\chromedriver.exe") driver.maximize_window() driver.get('https://fs2.formsite.com/meherpavan/form2/index.html?153770259640') # Clicking on the "Male" radio button maleRadioButton = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "RESULT_RadioButton-7_0"))) ActionChains(driver).move_to_element(maleRadioButton).click().perform() ```
Unless you need to wait on the element (which doesn't seem necessary), you should be able to do the following: ``` element_to_click_or_whatever = driver.find_element_by_id('RESULT_RadioButton-7_0') ``` If you look at the source for [`find_element_by_id`](https://github.com/SeleniumHQ/selenium/blob/master/py/selenium/webdriver/remote/webelement.py#L162), it calls `find_element` with `By.ID` as an argument: ``` def find_element_by_id(self, id_): return self.find_element(by=By.ID, value=id_) ``` IMO: `find_element_by_id` reads better, and it's one less package to import. I don't think your issue is finding the element; there's an `ElementClickInterceptedException` when trying to click on the element. For example, the radio button is located, but (strangely) Selenium doesn't think it's displayed. ``` from selenium import webdriver driver = webdriver.Chrome() driver.maximize_window() driver.get("https://fs2.formsite.com/meherpavan/form2/index.html?153770259640") radiostatus = driver.find_element_by_id('RESULT_RadioButton-7_0') if radiostatus: print('found') # Found print(radiostatus.is_displayed()) # False ```
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
It could be that the server uses a different working directory than the `manage.py` command. Since you provide a relative path to the sqlite database, it is created in the working directory. Try it with an absolute path, e.g.: ``` 'NAME': '/tmp/mysite.sqlite3', ``` Remember that you have to either run `./manage.py syncdb` again or copy your current database with the existing tables to `/tmp`. If it resolves the error message, you can look for a better place than `/tmp` :-)
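In current Django project layouts the same idea is usually expressed by building the path from the settings module's location rather than hard-coding `/tmp`; this is a sketch assuming the conventional `BASE_DIR` variable from a generated `settings.py`:

```python
import os

# settings.py sits at <project>/<project>/settings.py in the default layout
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        # Absolute path, so it no longer depends on the server's working directory
        'NAME': os.path.join(BASE_DIR, 'mysite.sqlite3'),
    }
}
```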
This is the warning Django prints: "You have unapplied migrations; your app may not work properly until they are applied. Run 'python manage.py migrate' to apply them." Running `python manage.py migrate` is what worked for me.
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
I had made some changes to a model that were not migrated to the database properly. Using the command ``` manage.py makemigrations ``` fixed my problem. I hope this helps someone.
Add the `'django.contrib.sessions',` line to INSTALLED\_APPS, then run the commands below from a terminal: ``` python manage.py makemigrations #check for changes python manage.py migrate #apply changes in the SQLite db python manage.py syncdb #sync with database ``` The django\_session table will then appear in the database with the columns `(session_key, session_data, expire_date)`
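For reference, the relevant part of `settings.py` would look something like this (the other apps listed are just the defaults of a new project, shown for context):

```python
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',   # needed for the django_session table
    'django.contrib.messages',
    'django.contrib.staticfiles',
]
```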
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
Run this in a command shell: ``` python manage.py migrate ``` This fixed it for me.
It's simple: just run the following commands ``` python ./manage.py migrate python ./manage.py makemigrations AppName ```
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
Create a schema and add its name under NAME in DATABASES, then run `manage.py syncdb`.
I found that it's all about migrating. ``` python manage.py makemigrations APPNAME ``` The accepted answer breaks when you switch to a different virtual host, such as moving from Windows to Linux or vice versa.
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
Run this in a command shell: ``` python manage.py migrate ``` This fixed it for me.
I had the same issue; my resolution was to simply add 'django.contrib.comments' to INSTALLED\_APPS and run `./manage.py syncdb` again.
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
I found that it's all about migrating. ``` python manage.py makemigrations APPNAME ``` The accepted answer breaks when you switch to a different virtual host, such as moving from Windows to Linux or vice versa.
The Django documentation says: "Once you have configured your installation, run manage.py migrate to install the single database table that stores session data." One possibility I have come across is that migrations were run for an app for the first time before the new project's own migrations, so first run the migrations for the project ``` python manage.py makemigrations python manage.py migrate ``` Later you can run these if needed ``` python manage.py makemigrations APPNAME python manage.py migrate APPNAME ```
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
In case it helps anyone else: the problem for me was that I didn't have the `django.contrib.sessions` app uncommented in my `INSTALLED_APPS`. Uncommenting it, and rerunning a `syncdb` did the trick.
When I run "manage.py runserver". If I run when I my current path is not in project dir.(such as python /somefolder/somefolder2/currentprj/manage.py runserver) I'll got the problem like you. solve by cd to project directory before run command.
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
After making any changes to the code, run the following commands ``` manage.py makemigrations manage.py migrate ``` It worked for me.
When I run "manage.py runserver". If I run when I my current path is not in project dir.(such as python /somefolder/somefolder2/currentprj/manage.py runserver) I'll got the problem like you. solve by cd to project directory before run command.
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
I had made some changes to a model that were not migrated to the database properly. Using the command ``` manage.py makemigrations ``` fixed my problem. I hope this helps someone.
It may also be the case that you are getting this error because you forgot to run `python manage.py migrate` before creating a superuser.
3,631,556
I have found several topics with this title, but none of their solutions worked for me. I have two Django sites running on my server, both through Apache using different virtualhosts on two ports fed by my Nginx frontend (using for static files). One site uses MySql and runs just fine. The other uses Sqlite3 and gets the error in the title. I downloaded a copy of sqlite.exe and looked at the mysite.sqlite3 (SQLite database in this directory) file and there is indeed a django\_session table with valid data in it. I have the sqlite.exe in my system32 as well as the site-packages folder in my Python path. Here is a section of my settings.py file: ``` MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'mysite.sqlite3', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } ``` I did use the python manage.py syncdb with no errors and just a "No Fixtures" comment. Does anyone have any ideas what else might be going on here? I'm considering just transferring everything over to my old pal MySql and just ignoring Sqlite, as really it's always given me some kind of trouble. I was only using it for the benefit of knowing it anyway. I have no overwhelming reason why I should use it. But again, just for my edification does anyone know what this problem is? I don't like to give up.
2010/09/02
[ "https://Stackoverflow.com/questions/3631556", "https://Stackoverflow.com", "https://Stackoverflow.com/users/438289/" ]
In case it helps anyone else: the problem for me was that I didn't have the `django.contrib.sessions` app uncommented in my `INSTALLED_APPS`. Uncommenting it, and rerunning a `syncdb` did the trick.
Run this command in cmd: ``` python ./manage.py migrate --all ``` The table should then show up in your **db**.
47,249,474
I'm working on a python GUI application, using tkinter, which displays text in Hebrew. On Windows (10, python 3.6, tkinter 8.6) Hebrew strings are displayed fine. On Linux (Ubuntu 14, both python 3.4 and 3.6, tkinter 8.6) Hebrew strings are displayed incorrectly - with no BiDi awareness - **am I missing something?** I installed pybidi, and via `bidi.algorithm.get_display(hebrew_string)` - the strings are displayed correctly. But then, on Windows, `get_display(hebrew_string)` is displayed incorrectly. Is BiDi not supported on python-tkinter-Linux? Must I wrap each string with `get_display(string)`? Must I wrap `get_display(string)` with a `only_on_linux(...)` function?
2017/11/12
[ "https://Stackoverflow.com/questions/47249474", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1499700/" ]
I searched a bit, and it is a known issue: tk/tcl has used Windows bidi support since about 2011, but there is apparently nothing equivalent on Linux. Example: <https://wiki.tcl.tk/3158>. One answer to [Python/Tkinter: Using Tkinter for RTL (right-to-left) languages like Arabic/Hebrew?](https://stackoverflow.com/questions/4150053/python-tkinter-using-tkinter-for-rtl-right-to-left-languages-like-arabic-hebr/7864523#7864523) has some workarounds for \*nix. I am not sure about Mac support with the latest tcl/tk. For cross-platform work you will need a function that echoes the string unchanged on Windows and reorders it on your Ubuntu.
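A minimal sketch of such a wrapper, assuming the python-bidi package from the question is installed (the platform check is a deliberate simplification):

```python
import sys
from bidi.algorithm import get_display

def display_text(s):
    # Windows tk/tcl reorders RTL text itself; elsewhere reorder it manually.
    if sys.platform.startswith("win"):
        return s
    return get_display(s)
```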
As one of the main authors of FriBidi and a contributor to the bidi text support in GTK, I strongly suggest that you don't use Tkinter for Hebrew or for any text other than Latin, Greek, or Cyrillic scripts. In theory you can rearrange the text ordering with the standalone fribidi executable on Linux, or use the fribidi binding, but bidi and complex-language support go well beyond that. You might need to support text insertion, cut and paste, and shaping, just to mention a few of the pitfalls. You are much better off using the excellent GTK or Qt bindings for Python.
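As a point of comparison, a Qt-based label renders Hebrew with bidi handling out of the box; this is only a sketch and assumes PyQt5 is installed:

```python
import sys
from PyQt5.QtWidgets import QApplication, QLabel

app = QApplication(sys.argv)
label = QLabel("שלום עולם")  # sample Hebrew text, laid out right-to-left by Qt
label.show()
sys.exit(app.exec_())
```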
24,872,243
I created an ImageField model for my blog app in my "test" Django project on my local server using SQLite. I have in my settings.py `MEDIA_ROOT = '/Users/me/Sites/python/djangotut/media/' MEDIA_ROOT_URL = 'http://127.0.0.1:8000/media/images/photos/'` and in my blog/models.py ``` photo = models.ImageField(upload_to='images/photos/') ``` But the problem is my blog/urls.py: I don't know how to add the URL so it works with my patterns, as described in the documentation <https://docs.djangoproject.com/en/1.6/howto/static-files/#serving-files-uploaded-by-a-user-during-development> ``` from django.conf.urls import url from django.conf.urls.static import static from .views import index, post urlpatterns = [ url( regex=r'^$', view=index, name='blog-index' ), url( regex=r'^(?P<slug>[\w\-]+)/$', view=post, name='blog-detail' ), ] ``` Also, I have read something about URLs being set up for a "production environment" for when you distribute apps. What would my URLs need to look like in that case?
2014/07/21
[ "https://Stackoverflow.com/questions/24872243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3142105/" ]
Use the weightSum technique for layouts, so that the controls in each of your lines consume the assigned percentage of the space (there won't be any need to put them in a Grid or other UI controls).
Use a nested ViewGroup: ``` <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:paddingBottom="@dimen/activity_vertical_margin" android:paddingLeft="@dimen/activity_horizontal_margin" android:paddingRight="@dimen/activity_horizontal_margin" android:paddingTop="@dimen/activity_vertical_margin" tools:context="com.example.helloworld.MainActivity" > <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" android:text="@string/fractions" android:textSize="30sp" /> <RelativeLayout android:layout_width="fill_parent" android:layout_height="wrap_content"> <RadioGroup android:alignParentRight = "true" android:id="@+id/fractions" android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal" > <RadioButton android:id="@+id/fraction_true" android:layout_width="wrap_content" android:layout_height="wrap_content" android:checked="true" android:paddingRight="15dip" android:text="@string/fraction_true" android:textSize="30sp" android:gravity="right" android:textStyle="bold" /> <RadioButton android:id="@+id/fraction_false" android:layout_width="wrap_content" android:layout_height="wrap_content" android:paddingRight="15dip" android:text="@string/fraction_false" android:textSize="30sp" /> </RadioGroup> </RelativeLayout> </LinearLayout> ```
47,074,966
I am trying to create a simple test-scorer that grades your test and gives you a response - but a simple if/else function isn't running - Python - ``` testScore = input("Please enter your test score") if testScore <= 50: print "You didn't pass... sorry!" elif testScore >=60 and <=71: print "You passed, but you can do better!" ``` The Error is - ``` Traceback (most recent call last): File "python", line 6 elif testScore >= 60 and <= 71: ^ SyntaxError: invalid syntax ```
2017/11/02
[ "https://Stackoverflow.com/questions/47074966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You missed testScore in elif statement ``` testScore = input("Please enter your test score") if testScore <= 50: print "You didn't pass... sorry!" elif testScore >=60 and testScore<=71: print "You passed, but you can do better!" ```
The way shown below would be a better way of solving it: you always need to convert the value to an integer when you are comparing it with numbers. > > input() in Python 3 returns a string > > > ``` testScore = input("Please enter your test score") if int(testScore) <= 50: print("You didn't pass... sorry!" ) elif int(testScore) >=60 and int(testScore)<=71: print("You passed, but you can do better!") ```
47,074,966
I am trying to create a simple test-scorer that grades your test and gives you a response - but a simple if/else function isn't running - Python - ``` testScore = input("Please enter your test score") if testScore <= 50: print "You didn't pass... sorry!" elif testScore >=60 and <=71: print "You passed, but you can do better!" ``` The Error is - ``` Traceback (most recent call last): File "python", line 6 elif testScore >= 60 and <= 71: ^ SyntaxError: invalid syntax ```
2017/11/02
[ "https://Stackoverflow.com/questions/47074966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You missed testScore in elif statement ``` testScore = input("Please enter your test score") if testScore <= 50: print "You didn't pass... sorry!" elif testScore >=60 and testScore<=71: print "You passed, but you can do better!" ```
You made some mistakes here: * You are comparing a string with an integer: `if testScore <= 50:` * You have missed the variable here --> `elif testScore >=60 and <=71:` I think those should be like this ---> * `if int(testScore) <= 50:` * `elif int(testScore) >= 60 and int(testScore) <= 71:` And try this, it works ---> ``` testScore = input("Please enter your test score") if int(testScore) <= 50: print ("You didn't pass... sorry!") elif int(testScore) >= 60 and int(testScore) <= 71: print ("You passed, but you can do better!") ```
47,074,966
I am trying to create a simple test-scorer that grades your test and gives you a response - but a simple if/else function isn't running - Python - ``` testScore = input("Please enter your test score") if testScore <= 50: print "You didn't pass... sorry!" elif testScore >=60 and <=71: print "You passed, but you can do better!" ``` The Error is - ``` Traceback (most recent call last): File "python", line 6 elif testScore >= 60 and <= 71: ^ SyntaxError: invalid syntax ```
2017/11/02
[ "https://Stackoverflow.com/questions/47074966", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
The way shown below would be a better way of solving it: you always need to convert the value to an integer when you are comparing it with numbers. > > input() in Python 3 returns a string > > > ``` testScore = input("Please enter your test score") if int(testScore) <= 50: print("You didn't pass... sorry!" ) elif int(testScore) >=60 and int(testScore)<=71: print("You passed, but you can do better!") ```
You made some mistakes here: * You are comparing a string with an integer: `if testScore <= 50:` * You have missed the variable here --> `elif testScore >=60 and <=71:` I think those should be like this ---> * `if int(testScore) <= 50:` * `elif int(testScore) >= 60 and int(testScore) <= 71:` And try this, it works ---> ``` testScore = input("Please enter your test score") if int(testScore) <= 50: print ("You didn't pass... sorry!") elif int(testScore) >= 60 and int(testScore) <= 71: print ("You passed, but you can do better!") ```
66,697,840
I guess once upon a time, I was able to find this information by Googling but not this time. I believe each script file (e.g. my.py, run.sh, etc) could have the path to an executable that is supposed to parse & run the script file. For example, a bash script file `run.sh` could start with: ``` #!/bin/bash ``` Then, my user will run it like: ``` $ ./run.sh ``` What if some users may not have `bash` there but has one under `/usr/sbin/`? Actually, my issue is Python3. Some users may have `python3` not as `/usr/bin/python3`. Some distros seem to install it as `/usr/bin/python37` while some other `/usr/bin/python`. Yet again, some do `$HOME/bin/virtualenv/python3`. At least, what could I do to tell any (future) user's shell that my script should be run by `which python`. Or, even better if I could tell "Try `which python3`, and if not available, try `which python`."
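For reference, the usual portable form of that first line resolves the interpreter through the user's PATH instead of hard-coding a location; a small sketch (note that env by itself does not fall back from python3 to python):

```python
#!/usr/bin/env python3
# env looks up "python3" on the invoking user's PATH, so the script runs
# whether the interpreter lives in /usr/bin, /usr/local/bin, or a virtualenv.
print("running under whichever python3 is first on PATH")
```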
2021/03/18
[ "https://Stackoverflow.com/questions/66697840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7254686/" ]
If you want to pass JSON data with axios, you need to set the `Content-Type` header; here is a demo. axios (I use 1 in place of `${rockId}` for testing): ``` var payload = "this is a test"; const request = axios.put(`/api/rocks/1/rockText`, JSON.stringify(payload), { headers: { 'Content-Type': 'application/json' } }); ``` Controller: ``` [HttpPut("{id}/rockText")] public IActionResult PutRockText(int id,[FromBody]string rock) { return Ok(); } ``` Result: [![enter image description here](https://i.stack.imgur.com/Oyyjj.gif)](https://i.stack.imgur.com/Oyyjj.gif)
The issue is that the model binder cannot resolve the payload. The reason is that it's expecting a string, but you're actually passing a json object with a property `rockText`. I would create a class to represent the json you're sending: ``` public class Rock { public string RockText { get; set; } } [HttpPut("{id}/rockText")] public IActionResult PutRockText(Int32 id, [FromBody] Rock rock) { ... } ``` --- Alternatively, you could try passing the string from axios: ``` var payload = "this is a test"; const request = axios.put(`/api/rocks/${rockId}/rockText`, payload); ```
29,956,883
I am fairly new to Python. I want to create a program that can generate random numbers and write them to a file, but I am curious as to whether it is possible to write the output to a `.txt` file in individual lists (*every time the program executes the script, it creates a new list*). Here is my code so far: ``` def main(): import random data = open("Random.txt", "w" ) for i in range(int(input('How many random numbers?: '))): line = str(random.randint(1, 1000)) data.write(line + '\n') print(line) data.close() print('data has been written') main() ```
2015/04/30
[ "https://Stackoverflow.com/questions/29956883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4848614/" ]
About append mode, or `a`: > > Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing. > > > ``` def main(): import random data = open("Random.txt", "a" ) #open file in append mode data.write('New run\n') #separator in file for i in range(int(input('How many random numbers?: '))): line = str(random.randint(1, 1000)) data.write(line + '\n') print(line) data.close() print('data has been written') main() ```
If you read through the documentation for [open()](https://docs.python.org/2/library/functions.html#open) you'll note: > > Modes 'r+', 'w+' and 'a+' open the file for updating (reading and writing); note that 'w+' truncates the file. Append 'b' to the mode to open the file in binary mode, on systems that differentiate between binary and text files; on systems that don’t have this distinction, adding the 'b' has no effect. > > > So use mode `a` if you want to append to the open file. **Example:** ``` f = open("random.txt", "a") f.write(...) ``` **Update:** If you want to separate entries from subsequent program runs, you'll have to append a line to the file that your program *understands*, e.g.: `f.write("!!!MARKER!!!\n")`
29,956,883
I am fairly new to Python. I want to create a program that can generate random numbers and write them to a file, but I am curious as to whether it is possible to write the output to a `.txt` file in individual lists (*every time the program executes the script, it creates a new list*). Here is my code so far: ``` def main(): import random data = open("Random.txt", "w" ) for i in range(int(input('How many random numbers?: '))): line = str(random.randint(1, 1000)) data.write(line + '\n') print(line) data.close() print('data has been written') main() ```
2015/04/30
[ "https://Stackoverflow.com/questions/29956883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4848614/" ]
About append mode, or `a`: > > Opens a file for appending. The file pointer is at the end of the file if the file exists. That is, the file is in the append mode. If the file does not exist, it creates a new file for writing. > > > ``` def main(): import random data = open("Random.txt", "a" ) #open file in append mode data.write('New run\n') #separator in file for i in range(int(input('How many random numbers?: '))): line = str(random.randint(1, 1000)) data.write(line + '\n') print(line) data.close() print('data has been written') main() ```
Exactly the same as `letsc`'s answer, but formatted to be more "pythonic". A Python 3 example, since print() was used in the OP's syntax. ```py import random def main(): with open("Random.txt", "a") as data: print('New run', file=data) numbers_count = int(input('How many random numbers?: ')) for i in range(numbers_count): line = str(random.randint(1, 1000)) print(line, file=data) print(line) print('data has been written') if __name__ == "__main__": main() ```
70,141,901
I have the get\_Time function working fine, but I would like to take the result it produces and store it in the "t" variable inside the simple\_Interest function. Here is the code I have now. ``` y = input("Enter value for year: ") m = input("Enter value for month: ") p = input("Enter value for principle: ") r = input("Enter value for rate (in %): ") def get_Time(y, m, d): total_time = y + m / 12 + d / 365 return total_time print ("The total time in years is: " , get_Time(int(y), int(m), int(d))) def simple_Interest(t, p, r): simplint = p *(r / 100) * t return simplint ``` Sorry if I sound like a dummy.. I'm still very newbish to Python and programming in general, but I'm learning. Thanks in advance for your help.
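A minimal sketch of the wiring the question describes: capture the return value of the time function and pass it in as the interest function's t argument (the names and example values here are simplified and not taken from any answer below):

```python
def get_time(y, m, d):
    return y + m / 12 + d / 365

def simple_interest(t, p, r):
    return p * (r / 100) * t

t = get_time(2, 6, 0)               # example values: 2 years, 6 months, 0 days
print(simple_interest(t, 1000, 5))  # 125.0; or inline: simple_interest(get_time(2, 6, 0), 1000, 5)
```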
2021/11/28
[ "https://Stackoverflow.com/questions/70141901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17529617/" ]
Try this. ``` static int indexOfLastNumber(String s) { int removedLength = s.replaceFirst("\\d+\\D*$", "").length(); return s.length() == removedLength ? 0 : removedLength; } static void test(String s) { System.out.println(s + " : " + indexOfLastNumber(s)); } public static void main(String[] args) { test("987abc<*(123"); test("987abc<*(123)"); test("123"); test("foo"); test(""); } ``` output: ``` 987abc<*(123 : 9 987abc<*(123) : 9 123 : 0 foo : 0 : 0 ``` or ``` static final Pattern LAST_NUMBER = Pattern.compile("\\d+\\D*$"); static int indexOfLastNumber(String s) { Matcher m = LAST_NUMBER.matcher(s); return m.find() ? m.start() : 0; } ```
Note: the '1' is at index 9 in your String. If you don't want, it not necessary to use RegEx for this. A method like this should do the job: ```java public static int findLastNumbersIndex(String s) { boolean numberFound = false; boolean charBeforeNumberFound = false; //start at the end of the String int index = s.length() - 1; //loop from the back to the front while there are more chars //and no nonDigit is found before a digit while (index >= 0 && !charBeforeNumberFound) { //when the first number was found, set the boolean flag if (!numberFound && Character.isDigit(s.charAt(index))) { numberFound = true; } //when already a number was found and there is any nonDigit stop the execution if (numberFound && !Character.isDigit(s.charAt(index))) { charBeforeNumberFound = true; break; } index--; } return index + 1; } ``` The execution for different Strings: ```java public static void main(String[] args) { System.out.println("\"987abc<*(123\"" + " index of lastNumberSet: " + findLastNumbersIndex("987abc<*(123")); System.out.println("\"987abc<*(123abc\"" + " index of lastNumberSet: " + findLastNumbersIndex("987abc<*(123abc")); System.out.println("\"987abc\"" + " index of lastNumberSet: " + findLastNumbersIndex("987abc")); System.out.println("\"abc987\"" + " index of lastNumberSet: " + findLastNumbersIndex("abc987")); System.out.println("\"987\"" + " index of lastNumberSet: " + findLastNumbersIndex("987")); System.out.println("(Empty String)" + " index of lastNumberSet: " + findLastNumbersIndex("")); System.out.println("\"abc\"" + " index of lastNumberSet: " + findLastNumbersIndex("abc")); } ``` returns this output: ``` "987abc<*(123" index of lastNumberSet: 9 "987abc<*(123abc" index of lastNumberSet: 9 "987abc" index of lastNumberSet: 0 "abc987" index of lastNumberSet: 3 "987" index of lastNumberSet: 0 (Empty String) index of lastNumberSet: 0 "abc" index of lastNumberSet: 0 ```
70,141,901
I have the get\_Time function working fine, but I would like to take the result it produces and store it in the "t" variable inside the simple\_Interest function. Here is the code I have now. ``` y = input("Enter value for year: ") m = input("Enter value for month: ") p = input("Enter value for principle: ") r = input("Enter value for rate (in %): ") def get_Time(y, m, d): total_time = y + m / 12 + d / 365 return total_time print ("The total time in years is: " , get_Time(int(y), int(m), int(d))) def simple_Interest(t, p, r): simplint = p *(r / 100) * t return simplint ``` Sorry if I sound like a dummy.. I'm still very newbish to Python and programming in general, but I'm learning. Thanks in advance for your help.
2021/11/28
[ "https://Stackoverflow.com/questions/70141901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17529617/" ]
Try this. ``` static int indexOfLastNumber(String s) { int removedLength = s.replaceFirst("\\d+\\D*$", "").length(); return s.length() == removedLength ? 0 : removedLength; } static void test(String s) { System.out.println(s + " : " + indexOfLastNumber(s)); } public static void main(String[] args) { test("987abc<*(123"); test("987abc<*(123)"); test("123"); test("foo"); test(""); } ``` output: ``` 987abc<*(123 : 9 987abc<*(123) : 9 123 : 0 foo : 0 : 0 ``` or ``` static final Pattern LAST_NUMBER = Pattern.compile("\\d+\\D*$"); static int indexOfLastNumber(String s) { Matcher m = LAST_NUMBER.matcher(s); return m.find() ? m.start() : 0; } ```
You can get it using Regex named group ```java public static int indexOfLastNumber(String text) { Pattern pattern = Pattern.compile("(\\d+)(?!.*\\d)"); Matcher matcher = pattern.matcher(text); return matcher.find() ? matcher.start() : -1; } ``` and I used test cases from @csalmhof answer, thanks to him ```java public static void main(String[] args) { System.out.println("\"987abc<*123\"" + " index of lastNumberSet: " + indexOfLastNumber("987abc<*123")); System.out.println("\"987abc<*123abc\"" + " index of lastNumberSet: " + indexOfLastNumber("987abc<*123abc")); System.out.println("\"987abc\"" + " index of lastNumberSet: " + indexOfLastNumber("987abc")); System.out.println("\"abc987\"" + " index of lastNumberSet: " + indexOfLastNumber("abc987")); System.out.println("\"987\"" + " index of lastNumberSet: " + indexOfLastNumber("987")); System.out.println("(Empty String)" + " index of lastNumberSet: " + indexOfLastNumber("")); System.out.println("\"abc\"" + " index of lastNumberSet: " + indexOfLastNumber("abc")); } ``` Output, -1 for a text that has no numbers ```java "987abc<*123" index of lastNumberSet: 8 "987abc<*123abc" index of lastNumberSet: 8 "987abc" index of lastNumberSet: 0 "abc987" index of lastNumberSet: 3 "987" index of lastNumberSet: 0 (Empty String) index of lastNumberSet: -1 "abc" index of lastNumberSet: -1 ```
70,141,901
I have the get\_Time function working fine, but I would like to take the result it produces and store it in the "t" variable inside the simple\_Interest function. Here is the code I have now. ``` y = input("Enter value for year: ") m = input("Enter value for month: ") p = input("Enter value for principle: ") r = input("Enter value for rate (in %): ") def get_Time(y, m, d): total_time = y + m / 12 + d / 365 return total_time print ("The total time in years is: " , get_Time(int(y), int(m), int(d))) def simple_Interest(t, p, r): simplint = p *(r / 100) * t return simplint ``` Sorry if I sound like a dummy.. I'm still very newbish to Python and programming in general, but I'm learning. Thanks in advance for your help.
2021/11/28
[ "https://Stackoverflow.com/questions/70141901", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17529617/" ]
Try this. ``` static int indexOfLastNumber(String s) { int removedLength = s.replaceFirst("\\d+\\D*$", "").length(); return s.length() == removedLength ? 0 : removedLength; } static void test(String s) { System.out.println(s + " : " + indexOfLastNumber(s)); } public static void main(String[] args) { test("987abc<*(123"); test("987abc<*(123)"); test("123"); test("foo"); test(""); } ``` output: ``` 987abc<*(123 : 9 987abc<*(123) : 9 123 : 0 foo : 0 : 0 ``` or ``` static final Pattern LAST_NUMBER = Pattern.compile("\\d+\\D*$"); static int indexOfLastNumber(String s) { Matcher m = LAST_NUMBER.matcher(s); return m.find() ? m.start() : 0; } ```
You can use a pattern with a capture group, and if there is a match you can use [public int start(int group)](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Matcher.html#start-int-) to get the start index of the capture group of the Matcher. ``` (\d)\d*\D*$ ``` * `(\d)` Capture a single digit in **group 1** * `\d*` Match optional digits * `\D*` Match optional non digits * `$` End of string See a [regex demo](https://regex101.com/r/n6426S/1) and a [Java demo](https://ideone.com/TcevP3) Example: ``` String[] strings = { "987abc<*(123", "", "123", "test", "abc123" }; Pattern pattern = Pattern.compile("(\\d)\\d*\\D*$"); for (String s : strings) { Matcher matcher = pattern.matcher(s); if (matcher.find()) { System.out.printf("'%s' --> %d\n", s, matcher.start(1)); continue; } System.out.printf("'%s' --> %d\n", s, 0); } ``` Output ``` '987abc<*(123' --> 9 '' --> 0 '123' --> 0 'test' --> 0 'abc123' --> 3 ```
38,593,309
How do I get logging from a custom authorizer Lambda function in API Gateway? I do not want to enable logging for the API; I need logging from the authorizer Lambda function. I use a Python Lambda function and have prints in the code. I want to view the prints in **CloudWatch** logs, but no logs show up in CloudWatch. I do not get errors either. What am I missing? The Lambda has the execution role **role/service-role/MyLambdaRole**. This role has the policy to write to CloudWatch. ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "logs:CreateLogGroup", "Resource": "arn:aws:logs:us-east-1:123456:*" }, { "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:us-east-1:123456:log-group:MyCustomAuthorizer:*" ] } ] } ``` I also tested by adding the CloudWatchLogsFullAccess policy to the **role/service-role/MyLambdaRole** role. ``` { "Version": "2012-10-17", "Statement": [ { "Action": [ "logs:*" ], "Effect": "Allow", "Resource": "*" } ] } ```
2016/07/26
[ "https://Stackoverflow.com/questions/38593309", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2184930/" ]
I deleted the Lambda function, the IAM role, and the custom authorizer from API Gateway, recreated all of the above with the same settings, and published the API. It started working and logging as expected. I do not know what was preventing it from logging to CloudWatch earlier. Weird!!
When I set up my authorizer, I set a Lambda Event payload for a custom header, and I had neglected to set that header in my browser session. According to the documentation at *<https://docs.aws.amazon.com/apigateway/latest/developerguide/configure-api-gateway-lambda-authorization-with-console.html>*, section 9b, the API Gateway will throw a 401 Unauthorized error without even executing the Lambda function. So that was the source of the problem.
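To check whether you are hitting that case, call the API with the configured identity-source header and look at the status code; this is only a sketch, and both the URL and the header name ("authorizationToken") are placeholders for whatever your authorizer is actually configured with:

```python
import requests

resp = requests.get(
    "https://example.execute-api.us-east-1.amazonaws.com/prod/api/resources",
    headers={"authorizationToken": "allow"},  # placeholder identity-source header
)
# A 401 here means API Gateway rejected the request before ever invoking the
# authorizer Lambda, so no CloudWatch log stream is created for it.
print(resp.status_code)
```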
838,991
I'm using pycurl to upload a file via put and python cgi script to receive the file on the server side. Essentially, the code on the server side is: ``` while True: next = sys.stdin.read(4096) if not next: break #.... write the buffer ``` This seems to work with text, but not binary files (I'm on windows). With binary files, the loop doing stdin.read breaks after receiving anything around 10kb to 100kb. Any ideas?
2009/05/08
[ "https://Stackoverflow.com/questions/838991", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You need to run Python in binary mode. Change your CGI script from: ``` #!C:/Python25/python.exe ``` or whatever it says to: ``` #!C:/Python25/python.exe -u ``` Or you can do it programmatically like this (Windows only): ``` import msvcrt, os, sys msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY) ``` before starting to read from `stdin`.
Use [mod\_wsgi](http://code.google.com/p/modwsgi/) instead of CGI. It will provide you with an input stream for the upload that is already opened correctly.
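For illustration, a minimal WSGI handler that reads an uploaded body might look like this; the output path is just an example and error handling is omitted:

```python
def application(environ, start_response):
    # mod_wsgi supplies wsgi.input already opened in binary mode
    length = int(environ.get('CONTENT_LENGTH') or 0)
    body = environ['wsgi.input'].read(length)
    with open('/tmp/upload.bin', 'wb') as f:
        f.write(body)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('received %d bytes' % len(body)).encode('ascii')]
```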
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Global variables are generally a **bad idea**. Don't use them unless you really have to. The proper way to implement such counter is to use a class. ``` class MyCounter(object): def __init__(self): self.a_points = 0 self.b_points = 0 def test_val(self, val1, val2): if val1 > val2: self.a_points += 1 elif val2 > val1: self.b_points += 1 else: pass counter = MyCounter() counter.test_val(1, 2) counter.test_val(1, 3) counter.test_val(5, 3) print(counter.a_points, counter.b_points) ``` Output: ``` (1, 2) ``` Note that returning a value from `test_val` doesn't make sense, because caller has no way to know if she gets `a_points` or `b_points`, so she can't use return value in any meaningful way.
```
a_points = 0
b_points = 0

def test_val(val1, val2):
    # a_points and b_points cannot also be parameters here, or the
    # global statements below would raise a SyntaxError.
    global a_points
    global b_points
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        # If you pass, it won't return a_points nor b_points
        return a_points  # or b_points
```
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
```
a_points = 0
b_points = 0

def test_val(val1, val2):
    # a_points and b_points cannot also be parameters here, or the
    # global statements below would raise a SyntaxError.
    global a_points
    global b_points
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        # If you pass, it won't return a_points nor b_points
        return a_points  # or b_points
```
Note that `a_points` and `b_points` shadow your global variables, since they are also passed as parameters. In any case, you are not returning a value in the equality case; instead of `pass`, return a value:

```
def test_val(a_points, b_points, val1, val2):
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        return a_points
```
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Consider this: ``` a0=5 a1=6 a2=7 b0=3 b1=6 b2=10 a_points=0 b_points=0 def test_val(a_points, b_points, val1, val2): if val1 > val2: a_points += 1 return (a_points, b_points) elif val2 > val1: b_points += 1 return (a_points, b_points) elif val1==val2: return (a_points, b_points) a_points, b_points = test_val(a_points,b_points, a0, b0) a_points, b_points = test_val(a_points,b_points, a1, b1) a_points, b_points = test_val(a_points,b_points, a2, b2) print(a_points, b_points) ``` Good luck!
Note that `a_points` and `b_points` shadow your global variables, since they are also passed as parameters. In any case, you are not returning a value in the equality case; instead of `pass`, return a value:

```
def test_val(a_points, b_points, val1, val2):
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        return a_points
```
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Global variables are generally a **bad idea**. Don't use them unless you really have to. The proper way to implement such counter is to use a class. ``` class MyCounter(object): def __init__(self): self.a_points = 0 self.b_points = 0 def test_val(self, val1, val2): if val1 > val2: self.a_points += 1 elif val2 > val1: self.b_points += 1 else: pass counter = MyCounter() counter.test_val(1, 2) counter.test_val(1, 3) counter.test_val(5, 3) print(counter.a_points, counter.b_points) ``` Output: ``` (1, 2) ``` Note that returning a value from `test_val` doesn't make sense, because caller has no way to know if she gets `a_points` or `b_points`, so she can't use return value in any meaningful way.
This will simplify your code and logic. And make it work ;-) ``` a0=5 a1=6 a2=7 b0=3 b1=6 b2=10 a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 elif val2 > val1: b_points+=1 return a_points, b_points a_points, b_points = test_val(a_points,b_points,a0,b0) a_points, b_points = test_val(a_points,b_points,a1,b1) a_points, b_points = test_val(a_points,b_points,a2,b2) print(a_points,b_points) ```
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Your problem is the that Python integers are immutable which in general is good to read about. A few more details can be found [here](https://stackoverflow.com/a/15148557/3727050). Now, regarding solutions: 1. As suggested, you can use `global` variables. Keep in mind this is usually considered bad practice cause it leads to messy code... but `global`s have their place in programming. 2. Also suggested, you can always return both `a_points` and `b_points` 3. Use `list`s to keep score: * The `test_val` will return either 0, 1 or 2 where 0 means equal, 1 means the first argument is larger and 2 means the second argument is larger. * Your main script will have a list with the above indexes where it will "keep score" The code: ``` a0=5 a1=6 a2=7 b0=3 b1=6 b2=10 points=[0, 0, 0] def test_val(val1,val2): if val1 > val2: return 1 elif val2 > val1: return 2 elif val1==val2: return 0 points[test_val(a0,b0)] += 1 points[test_val(a1,b1)] += 1 points[test_val(a2,b2)] += 1 print("eq=%d, A=%d, B=%d" % (points[0], points[1], points[2])) ``` Output ([visualize](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Apoints%3D%5B0,%200,%200%5D%0A%0A%0Adef%20test_val(val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20return%201%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20return%202%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20return%200%0A%0Apoints%5Btest_val(a0,b0%29%5D%20%2B%3D%201%0Apoints%5Btest_val(a1,b1%29%5D%20%2B%3D%201%0Apoints%5Btest_val(a2,b2%29%5D%20%2B%3D%201%0A%0Aprint(%22eq%3D%25d,%20A%3D%25d,%20B%3D%25d%22%20%25%20(points%5B0%5D,%20points%5B1%5D,%20points%5B2%5D%29%29&cumulative=false&curInstr=27&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)) ``` eq=1, A=1, B=1 ``` Hope it helps
Note that `a_points` and `b_points` shadow your global variables, since they are also passed as parameters. In any case, you are not returning a value in the equality case; instead of `pass`, return a value:

```
def test_val(a_points, b_points, val1, val2):
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        return a_points
```
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
``` print (test_val(a_points,b_points,1,2)) print (test_val(a_points,b_points,2,1)) print (test_val(a_points,b_points,2,2)) ``` This will give you a result: ``` 1 1 None ``` Hence you should not look at the function to return values, rather it updates the values of variables a\_points and b\_points. That is why in the link that you shared the code includes a `print(a_points,b_points)` statement at the end
Note that `a_points` and `b_points` shadow your global variables, since they are also passed as parameters. In any case, you are not returning a value in the equality case; instead of `pass`, return a value:

```
def test_val(a_points, b_points, val1, val2):
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        return a_points
```
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Consider this: ``` a0=5 a1=6 a2=7 b0=3 b1=6 b2=10 a_points=0 b_points=0 def test_val(a_points, b_points, val1, val2): if val1 > val2: a_points += 1 return (a_points, b_points) elif val2 > val1: b_points += 1 return (a_points, b_points) elif val1==val2: return (a_points, b_points) a_points, b_points = test_val(a_points,b_points, a0, b0) a_points, b_points = test_val(a_points,b_points, a1, b1) a_points, b_points = test_val(a_points,b_points, a2, b2) print(a_points, b_points) ``` Good luck!
Your problem is the that Python integers are immutable which in general is good to read about. A few more details can be found [here](https://stackoverflow.com/a/15148557/3727050). Now, regarding solutions: 1. As suggested, you can use `global` variables. Keep in mind this is usually considered bad practice cause it leads to messy code... but `global`s have their place in programming. 2. Also suggested, you can always return both `a_points` and `b_points` 3. Use `list`s to keep score: * The `test_val` will return either 0, 1 or 2 where 0 means equal, 1 means the first argument is larger and 2 means the second argument is larger. * Your main script will have a list with the above indexes where it will "keep score" The code: ``` a0=5 a1=6 a2=7 b0=3 b1=6 b2=10 points=[0, 0, 0] def test_val(val1,val2): if val1 > val2: return 1 elif val2 > val1: return 2 elif val1==val2: return 0 points[test_val(a0,b0)] += 1 points[test_val(a1,b1)] += 1 points[test_val(a2,b2)] += 1 print("eq=%d, A=%d, B=%d" % (points[0], points[1], points[2])) ``` Output ([visualize](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Apoints%3D%5B0,%200,%200%5D%0A%0A%0Adef%20test_val(val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20return%201%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20return%202%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20return%200%0A%0Apoints%5Btest_val(a0,b0%29%5D%20%2B%3D%201%0Apoints%5Btest_val(a1,b1%29%5D%20%2B%3D%201%0Apoints%5Btest_val(a2,b2%29%5D%20%2B%3D%201%0A%0Aprint(%22eq%3D%25d,%20A%3D%25d,%20B%3D%25d%22%20%25%20(points%5B0%5D,%20points%5B1%5D,%20points%5B2%5D%29%29&cumulative=false&curInstr=27&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)) ``` eq=1, A=1, B=1 ``` Hope it helps
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Consider this: ``` a0=5 a1=6 a2=7 b0=3 b1=6 b2=10 a_points=0 b_points=0 def test_val(a_points, b_points, val1, val2): if val1 > val2: a_points += 1 return (a_points, b_points) elif val2 > val1: b_points += 1 return (a_points, b_points) elif val1==val2: return (a_points, b_points) a_points, b_points = test_val(a_points,b_points, a0, b0) a_points, b_points = test_val(a_points,b_points, a1, b1) a_points, b_points = test_val(a_points,b_points, a2, b2) print(a_points, b_points) ``` Good luck!
``` print (test_val(a_points,b_points,1,2)) print (test_val(a_points,b_points,2,1)) print (test_val(a_points,b_points,2,2)) ``` This will give you a result: ``` 1 1 None ``` Hence you should not look at the function to return values, rather it updates the values of variables a\_points and b\_points. That is why in the link that you shared the code includes a `print(a_points,b_points)` statement at the end
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Global variables are generally a **bad idea**. Don't use them unless you really have to. The proper way to implement such counter is to use a class. ``` class MyCounter(object): def __init__(self): self.a_points = 0 self.b_points = 0 def test_val(self, val1, val2): if val1 > val2: self.a_points += 1 elif val2 > val1: self.b_points += 1 else: pass counter = MyCounter() counter.test_val(1, 2) counter.test_val(1, 3) counter.test_val(5, 3) print(counter.a_points, counter.b_points) ``` Output: ``` (1, 2) ``` Note that returning a value from `test_val` doesn't make sense, because caller has no way to know if she gets `a_points` or `b_points`, so she can't use return value in any meaningful way.
Note that `a_points` and `b_points` shadow your global variables, since they are also passed as parameters. In any case, you are not returning a value in the equality case; instead of `pass`, return a value:

```
def test_val(a_points, b_points, val1, val2):
    if val1 > val2:
        a_points += 1
        return a_points
    elif val2 > val1:
        b_points += 1
        return b_points
    elif val1 == val2:
        return a_points
```
40,762,324
I want to write a function to compare two values, val1 and val2, and if val1 is larger than val2, add 1 point to a\_points (Think of it like Team A) and vice versa (add one point to b\_points if val2 is larger.) If the two values are even I won't add any points to a\_points or b\_points. My problem is **test\_val will not return the values of a\_points or b\_points.** ``` a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 return a_points elif val2 > val1: b_points+=1 return b_points elif val1==val2: pass ``` [Here's a link to a visualization showing the problem.](http://pythontutor.com/visualize.html#code=a0%3D5%0Aa1%3D6%0Aa2%3D7%0Ab0%3D3%0Ab1%3D6%0Ab2%3D10%0Aa_points%3D0%0Ab_points%3D0%0A%0Adef%20test_val(a_points,b_points,val1,val2%29%3A%0A%20%20%20%20if%20val1%20%3E%20val2%3A%0A%20%20%20%20%20%20%20%20a_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20a_points%0A%20%20%20%20elif%20val2%20%3E%20val1%3A%0A%20%20%20%20%20%20%20%20b_points%2B%3D1%0A%20%20%20%20%20%20%20%20return%20b_points%0A%20%20%20%20elif%20val1%3D%3Dval2%3A%0A%20%20%20%20%20%20%20%20pass%0A%0Atest_val(a_points,b_points,a0,b0%29%0Atest_val(a_points,b_points,a1,b1%29%0Atest_val(a_points,b_points,a2,b2%29%0A%0Aprint(a_points,b_points%29&cumulative=false&curInstr=13&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false)
2016/11/23
[ "https://Stackoverflow.com/questions/40762324", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7017454/" ]
Consider this: ``` a0=5 a1=6 a2=7 b0=3 b1=6 b2=10 a_points=0 b_points=0 def test_val(a_points, b_points, val1, val2): if val1 > val2: a_points += 1 return (a_points, b_points) elif val2 > val1: b_points += 1 return (a_points, b_points) elif val1==val2: return (a_points, b_points) a_points, b_points = test_val(a_points,b_points, a0, b0) a_points, b_points = test_val(a_points,b_points, a1, b1) a_points, b_points = test_val(a_points,b_points, a2, b2) print(a_points, b_points) ``` Good luck!
This will simplify your code and logic. And make it work ;-) ``` a0=5 a1=6 a2=7 b0=3 b1=6 b2=10 a_points=0 b_points=0 def test_val(a_points,b_points,val1,val2): if val1 > val2: a_points+=1 elif val2 > val1: b_points+=1 return a_points, b_points a_points, b_points = test_val(a_points,b_points,a0,b0) a_points, b_points = test_val(a_points,b_points,a1,b1) a_points, b_points = test_val(a_points,b_points,a2,b2) print(a_points,b_points) ```
38,044,264
```
import pandas as pd
import numpy as np
from datetime import datetime, time

# history file and batch size for processing.
historyFilePath = 'EURUSD.SAMPLE.csv'
batch_size = 5000

# function for date parsing
dateparse = lambda x: pd.datetime.strptime(x, '%Y-%m-%d %H:%M:%S.%f')

# load data into a pandas iterator with all the chunks
ratesFromCSVChunks = pd.read_csv(historyFilePath, index_col=0, engine='python', parse_dates=True,
                                 date_parser=dateparse, header=None,
                                 names=["datetime", "1_Current", "2_BidPx", "3_BidSz", "4_AskPx", "5_AskSz"],
                                 iterator=True, chunksize=batch_size)

# concatenate chunks to get the final array
ratesFromCSV = pd.concat([chunk for chunk in ratesFromCSVChunks])

# save final csv file
df.to_csv('EURUSD_processed.csv', date_format='%Y-%m-%d %H:%M:%S.%f',
          columns=['1_Current', '2_BidPx', '3_BidSz', '4_AskPx', '5_AskSz'],
          header=False, float_format='%.5f')
```

I am reading a CSV file containing forex data in this format:

```
2014-08-17 17:00:01.000000,1.33910,1.33910,1.00000,1.33930,1.00000
2014-08-17 17:00:01.000000,1.33910,1.33910,1.00000,1.33950,1.00000
2014-08-17 17:00:02.000000,1.33910,1.33910,1.00000,1.33930,1.00000
2014-08-17 17:00:02.000000,1.33900,1.33900,1.00000,1.33940,1.00000
2014-08-17 17:00:04.000000,1.33910,1.33910,1.00000,1.33950,1.00000
2014-08-17 17:00:05.000000,1.33930,1.33930,1.00000,1.33950,1.00000
2014-08-17 17:00:06.000000,1.33920,1.33920,1.00000,1.33960,1.00000
2014-08-17 17:00:06.000000,1.33910,1.33910,1.00000,1.33950,1.00000
2014-08-17 17:00:08.000000,1.33900,1.33900,1.00000,1.33942,1.00000
2014-08-17 17:00:16.000000,1.33900,1.33900,1.00000,1.33940,1.00000
```

How do you convert the datetime in the CSV file (or in the pandas dataframe being read) to epoch time in milliseconds from midnight (UTC or localized) by the time it is saved? Each file starts at midnight every day; the only thing that changes is the datetime format, which should become milliseconds from midnight each day (UTC or localized). The format I am looking for is:

```
43264234, 1.33910,1.33910,1.00000,1.33930,1.00000
43264739, 1.33910,1.33910,1.00000,1.33950,1.00000
43265282, 1.33910,1.33910,1.00000,1.33930,1.00000
43265789, 1.33900,1.33900,1.00000,1.33940,1.00000
43266318, 1.33910,1.33910,1.00000,1.33950,1.00000
43266846, 1.33930,1.33930,1.00000,1.33950,1.00000
43267353, 1.33920,1.33920,1.00000,1.33960,1.00000
43267872, 1.33910,1.33910,1.00000,1.33950,1.00000
43268387, 1.33900,1.33900,1.00000,1.33942,1.00000
```

Any help is much appreciated (short & precise, in Python 3.5 or Python 3.4 and above, with pandas 0.18.1 and numpy 1.11).
2016/06/26
[ "https://Stackoverflow.com/questions/38044264", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5310427/" ]
This snippet of code should be what you want ``` # Create some fake data, similar to yours import pandas as pd s = pd.Series(pd.date_range('2014-08-17 17:00:01.1230000', periods=4)) print(s) print(type(s[0])) # Create a new series using just the date portion of the original data. # This effectively truncates the time portion. # Can't use d = s.dt.date or you'll get date objects back, not datetime64. d = pd.to_datetime(s.dt.date) print(d) print(type(d[0])) # Calculate the time delta between the original datetime and # just the date portion. This is the elapsed time since your epoch. delta_t = s-d print(delta_t) # Display the elapsed time as seconds. print(delta_t.dt.total_seconds()) ``` This results in the following output ``` 0 2014-08-17 17:00:01.123 1 2014-08-18 17:00:01.123 2 2014-08-19 17:00:01.123 3 2014-08-20 17:00:01.123 dtype: datetime64[ns] <class 'pandas.tslib.Timestamp'> 0 2014-08-17 1 2014-08-18 2 2014-08-19 3 2014-08-20 dtype: datetime64[ns] <class 'pandas.tslib.Timestamp'> 0 17:00:01.123000 1 17:00:01.123000 2 17:00:01.123000 3 17:00:01.123000 dtype: timedelta64[ns] 0 61201.123 1 61201.123 2 61201.123 3 61201.123 dtype: float64 ```
Here's how I did it with my data: ``` import pandas as pd import numpy as np rng = pd.date_range('1/1/2011', periods=72, freq='H') df = pd.DataFrame({"Data": np.random.randn(len(rng))}, index=rng) df["Time_Since_Midnight"] = (df.index - pd.to_datetime(df.index.date)) / np.timedelta64(1, 'ms') ``` By converting the `DateTimeIndex` into a `date` object, we drop off the hours and seconds. Then by taking the difference of the two, you get a `timedelta64` object, which you can then format into milliseconds. Here's the output I get (the last column is the time since midnight): ``` 2011-01-01 00:00:00 2.383501 0.0 2011-01-01 01:00:00 0.725419 3600000.0 2011-01-01 02:00:00 -0.361533 7200000.0 2011-01-01 03:00:00 2.311185 10800000.0 2011-01-01 04:00:00 1.596148 14400000.0 ```
32,778,316
I am a vim user and edited a large Python file with vim; everything was OK and it ran properly. Now I want to build a huge project and edit this Python file in IntelliJ, but the indentation in IntelliJ is completely wrong, and it's hard for me to fix it line by line. Do you know what happened? (If I edit some lines in IntelliJ to remove the indentation errors, the indentation is wrong again when I display them in vim.)
2015/09/25
[ "https://Stackoverflow.com/questions/32778316", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3390810/" ]
Yes, use [perfect forwarding](https://stackoverflow.com/questions/3582001/advantages-of-using-forward): ``` template <typename P> bool VectorList::put (P &&p) { //can't forward p here as it could move p and we need it later if (not_good_for_insert(p)) return false; // ... Node node = create_node(); node.pair = std::forward<P>(p); // ... return true; } ``` Another possibility is to just pass by value like in [Maxim's answer](https://stackoverflow.com/a/32778410/496161). The advantage of the perfect-forwarding version is that it requires no intermediate conversions if you pass in compatible arguments and performs better if moves are expensive. The disadvantage is that forwarding reference functions are very greedy, so other overloads might not act how you want. Note that `Pair &&p` is not a universal reference, it's just an rvalue reference. Universal (or forwarding) references require an rvalue in a deduced context, like template arguments.
The ideal solution is to accept a universal reference, as [TartanLlama](https://stackoverflow.com/a/32778379/412080) advises. The ideal solution works if you can afford having the function definition in the header file. If your function definition cannot be exposed in the header (e.g. you employ Pimpl idiom or interface-based design, or the function resides in a shared library), the second best option is to accept by value. This way the caller can choose how to construct the argument (copy, move, uniform initialization). The callee will have to pay the price of one move though. E.g. `bool VectorList::put(Pair p);`: ``` VectorList v; Pair p { "key", "value" }; v.put(p); v.put(std::move(p)); v.put(Pair{ "anotherkey", "anothervalue" }); v.put({ "anotherkey", "anothervalue" }); ``` And in the implementation you move from the argument: ``` bool VectorList::put(Pair p) { container_.push_back(std::move(p)); } ``` --- Another comment is that you may like to stick with standard C++ names for container operations, like `push_back/push_front`, so that it is clear what it does. `put` is obscure and requires readers of your code to look into the source code or documentation to understand what is going on.
32,778,316
I am a vim user and edited a large Python file with vim; everything was OK and it ran properly. Now I want to build a huge project and edit this Python file in IntelliJ, but the indentation in IntelliJ is completely wrong, and it's hard for me to fix it line by line. Do you know what happened? (If I edit some lines in IntelliJ to remove the indentation errors, the indentation is wrong again when I display them in vim.)
2015/09/25
[ "https://Stackoverflow.com/questions/32778316", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3390810/" ]
Yes, use [perfect forwarding](https://stackoverflow.com/questions/3582001/advantages-of-using-forward): ``` template <typename P> bool VectorList::put (P &&p) { //can't forward p here as it could move p and we need it later if (not_good_for_insert(p)) return false; // ... Node node = create_node(); node.pair = std::forward<P>(p); // ... return true; } ``` Another possibility is to just pass by value like in [Maxim's answer](https://stackoverflow.com/a/32778410/496161). The advantage of the perfect-forwarding version is that it requires no intermediate conversions if you pass in compatible arguments and performs better if moves are expensive. The disadvantage is that forwarding reference functions are very greedy, so other overloads might not act how you want. Note that `Pair &&p` is not a universal reference, it's just an rvalue reference. Universal (or forwarding) references require an rvalue in a deduced context, like template arguments.
With help of TartanLlama, I made following test code: ``` #include <utility> #include <iostream> #include <string> class MyClass{ public: MyClass(int s2) : s(s2){ std::cout << "c-tor " << s << std::endl; } MyClass(MyClass &&other) : s(other.s){ other.s = -1; std::cout << "move c-tor " << s << std::endl; } MyClass(const MyClass &other) : s(other.s){ std::cout << "copy c-tor " << s << std::endl; } ~MyClass(){ std::cout << "d-tor " << s << std::endl; } public: int s; }; // ============================== template <typename T> MyClass process(T &&p){ MyClass out = std::forward<T>(p); return out; } // ============================== void t1(){ MyClass out = process( 100 ); } void t2(){ MyClass out = process( MyClass(100) ); } void t3(){ MyClass in = 100; MyClass out = process(std::move(in)); std::cout << in.s << std::endl; std::cout << out.s << std::endl; } void t4(){ MyClass in = 100; MyClass out = process(in); std::cout << in.s << std::endl; std::cout << out.s << std::endl; } int main(int argc, char** argv){ std::cout << "testing fast c-tor" << std::endl; t1(); std::cout << std::endl; std::cout << "testing c-tor" << std::endl; t2(); std::cout << std::endl; std::cout << "testing move object" << std::endl; t3(); std::cout << std::endl; std::cout << "testing normal object" << std::endl; t4(); std::cout << std::endl; } ``` Output on gcc is following: ``` testing fast c-tor c-tor 100 d-tor 100 testing c-tor c-tor 100 move c-tor 100 d-tor -1 d-tor 100 testing move object c-tor 100 move c-tor 100 -1 100 d-tor 100 d-tor -1 testing normal object c-tor 100 copy c-tor 100 100 100 d-tor 100 d-tor 100 ```
32,778,316
I am a vim user and edited a large Python file with vim; everything was OK and it ran properly. Now I want to build a huge project and edit this Python file in IntelliJ, but the indentation in IntelliJ is completely wrong, and it's hard for me to fix it line by line. Do you know what happened? (If I edit some lines in IntelliJ to remove the indentation errors, the indentation is wrong again when I display them in vim.)
2015/09/25
[ "https://Stackoverflow.com/questions/32778316", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3390810/" ]
The ideal solution is to accept a universal reference, as [TartanLlama](https://stackoverflow.com/a/32778379/412080) advises. The ideal solution works if you can afford having the function definition in the header file. If your function definition cannot be exposed in the header (e.g. you employ Pimpl idiom or interface-based design, or the function resides in a shared library), the second best option is to accept by value. This way the caller can choose how to construct the argument (copy, move, uniform initialization). The callee will have to pay the price of one move though. E.g. `bool VectorList::put(Pair p);`: ``` VectorList v; Pair p { "key", "value" }; v.put(p); v.put(std::move(p)); v.put(Pair{ "anotherkey", "anothervalue" }); v.put({ "anotherkey", "anothervalue" }); ``` And in the implementation you move from the argument: ``` bool VectorList::put(Pair p) { container_.push_back(std::move(p)); } ``` --- Another comment is that you may like to stick with standard C++ names for container operations, like `push_back/push_front`, so that it is clear what it does. `put` is obscure and requires readers of your code to look into the source code or documentation to understand what is going on.
With help of TartanLlama, I made following test code: ``` #include <utility> #include <iostream> #include <string> class MyClass{ public: MyClass(int s2) : s(s2){ std::cout << "c-tor " << s << std::endl; } MyClass(MyClass &&other) : s(other.s){ other.s = -1; std::cout << "move c-tor " << s << std::endl; } MyClass(const MyClass &other) : s(other.s){ std::cout << "copy c-tor " << s << std::endl; } ~MyClass(){ std::cout << "d-tor " << s << std::endl; } public: int s; }; // ============================== template <typename T> MyClass process(T &&p){ MyClass out = std::forward<T>(p); return out; } // ============================== void t1(){ MyClass out = process( 100 ); } void t2(){ MyClass out = process( MyClass(100) ); } void t3(){ MyClass in = 100; MyClass out = process(std::move(in)); std::cout << in.s << std::endl; std::cout << out.s << std::endl; } void t4(){ MyClass in = 100; MyClass out = process(in); std::cout << in.s << std::endl; std::cout << out.s << std::endl; } int main(int argc, char** argv){ std::cout << "testing fast c-tor" << std::endl; t1(); std::cout << std::endl; std::cout << "testing c-tor" << std::endl; t2(); std::cout << std::endl; std::cout << "testing move object" << std::endl; t3(); std::cout << std::endl; std::cout << "testing normal object" << std::endl; t4(); std::cout << std::endl; } ``` Output on gcc is following: ``` testing fast c-tor c-tor 100 d-tor 100 testing c-tor c-tor 100 move c-tor 100 d-tor -1 d-tor 100 testing move object c-tor 100 move c-tor 100 -1 100 d-tor 100 d-tor -1 testing normal object c-tor 100 copy c-tor 100 100 100 d-tor 100 d-tor 100 ```
13,096,339
> > **Possible Duplicate:** > > [Python Question: Year and Day of Year to date?](https://stackoverflow.com/questions/2427555/python-question-year-and-day-of-year-to-date) > > > Is there a method in Python to figure out which month a certain day of the year is in, e.g. today is day 299 (October 26th). I would like to figure out, that day 299 is in month 10 (to compile the string to set the Linux system time). How can I do this?
2012/10/27
[ "https://Stackoverflow.com/questions/13096339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1198201/" ]
``` print (datetime.datetime(2012,1,1) + datetime.timedelta(days=299)).month ``` Here's a little more usable version that returns both the month and day: ``` def get_month_day(year, day, one_based=False): if one_based: # if Jan 1st is 1 instead of 0 day -= 1 dt = datetime.datetime(year, 1, 1) + datetime.timedelta(days=day) return dt.month, dt.day >>> get_month_day(2012, 299) (10, 26) ```
I know of no such method, but you can do it like this: ``` print datetime.datetime.strptime('2012 299', '%Y %j').month ``` The above prints `10`
18,897,631
I'm a newbie to socket programming. The following program is a client that requests a file from the server, but I'm getting the error shown below. My input is `GET index.html`. Can anyone solve this error?

```
#!/usr/bin/env python
import httplib
import sys

http_server = sys.argv[0]
conn = httplib.HTTPConnection(http_server)

while 1:
    cmd = raw_input('input command (ex. GET index.html): ')
    cmd = cmd.split()
    if cmd[0] == 'exit':
        break

    conn.request(cmd[0], cmd[1])
    rsp = conn.getresponse()
    print(rsp.status, rsp.reason)
    data_received = rsp.read()
    print(data_received)

conn.close()


input command (ex. GET index.html): GET index.html
Traceback (most recent call last):
  File "./client1.py", line 19, in <module>
    conn.request(cmd[0],cmd[1])
  File "/usr/lib/python2.6/httplib.py", line 910, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.6/httplib.py", line 947, in _send_request
    self.endheaders()
  File "/usr/lib/python2.6/httplib.py", line 904, in endheaders
    self._send_output()
  File "/usr/lib/python2.6/httplib.py", line 776, in _send_output
    self.send(msg)
  File "/usr/lib/python2.6/httplib.py", line 735, in send
    self.connect()
  File "/usr/lib/python2.6/httplib.py", line 716, in connect
    self.timeout)
  File "/usr/lib/python2.6/socket.py", line 500, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
socket.gaierror: [Errno -2] Name or service not known
```
2013/09/19
[ "https://Stackoverflow.com/questions/18897631", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2795866/" ]
sys.argv[0] is not what you think it is. sys.argv[0] is the name of the program or script. The script's first argument is sys.argv[1].
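A quick illustration of the difference (the invocation shown in the comment is hypothetical, not the poster's actual command line):

```python
# Invoked as: python client1.py www.example.com
import sys

print(sys.argv[0])  # "client1.py" -- the script name itself
print(sys.argv[1])  # "www.example.com" -- the first real argument
```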
The problem is that the first item in `sys.argv` is the script name. So your script is actually using your filename as the hostname. Change the 5th line to: ``` http_server = sys.argv[1] ``` [More info here.](http://docs.python.org/2/library/sys.html#sys.argv)
35,869,561
For a task I am to use ConditionalProbDist using LidstoneProbDist as the estimator, adding +0.01 to the sample count for each bin. I thought the following line of code would achieve this, but it produces a value error ``` fd = nltk.ConditionalProbDist(fd,nltk.probability.LidstoneProbDist,0.01) ``` I'm not sure how to format the arguments within ConditionalProbDist and haven't had much luck in finding out how to do so via python's help feature or google, so if anyone could set me right, it would be much appreciated!
2016/03/08
[ "https://Stackoverflow.com/questions/35869561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3255571/" ]
I found [the probability tutorial](http://www.nltk.org/howto/probability.html) on the NLTK website quite helpful as a reference. As mentioned in the answer above, using a lambda expression is a good idea, since the `ConditionalProbDist` will generate a frequency distribution (`nltk.FreqDist`) on the fly that's passed through to the estimator. A more subtle point is that passing through the bins parameter can't be done if you don't know how many bins you originally have in your input sample! However, a `FreqDist` has the number of bins available as `FreqDist.B()` ([docs](http://www.nltk.org/api/nltk.html#nltk.probability.FreqDist.B)). Instead use `FreqDist` as the only parameter to your lambda: ``` from nltk.probability import * # ... # Using the given parameters of one extra bin and a gamma of 0.01 lidstone_estimator = lambda fd: LidstoneProbDist(fd, 0.01, fd.B() + 1) conditional_pd = ConditionalProbDist(conditional_fd, lidstone_estimator) ``` I know this question is very old now, but I too struggled to find documentation, so I'm documenting it here in case someone else down the line runs into a similar struggle. Good luck (with fnlp)!
You probably don't need this anymore as the question is very old, but still, you can pass LidstoneProbDist arguments to ConditionalProbDist with the help of lambda: ``` estimator = lambda fdist, bins: nltk.LidstoneProbDist(fdist, 0.01, bins) cpd = nltk.ConditionalProbDist(fd, estimator, bins) ```
68,293,321
In Python/Pandas, I want to create a column in my dataframe that shows the average number of days between customer visits at a venue. That is, for each customer, what are the average number of days between that customer's visits? Data looks like [Image of My Data](https://i.stack.imgur.com/NPFMU.png) Sorry I'm really inexperienced and don't know how to type the data up other than this. I am following the solution in [this StackOverflow answer](https://stackoverflow.com/questions/45241221/python-pandas-calculate-average-days-between-dates), except that that person wanted the average number of days between visits in general, and I want days between visits for each customer. Thank you.
2021/07/07
[ "https://Stackoverflow.com/questions/68293321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14814034/" ]
On windows linking DLLs goes through a trampoline library (.lib file) which generates the right bindings. The convention for these is to prefix the function names with `__imp__` ([there is a related C++ answer](https://stackoverflow.com/a/5159395/1818675)). There is an [open issue](https://github.com/rust-lang/reference/issues/638) that explains some of the difficulties creating and linking rust dlls under windows. Here are the relevant bits: > > If you start developing on Windows, Rust will produce a mylib.dll and mylib.dll.lib. To use this lib again from Rust you will have to specify #[link(name = "mylib.dll")], thus giving the impression that the full file name has to be specified. On Mac, however, #[link(name = "libmylib.dylib"] will fail (likewise Linux). > > > > > If you start developing on Mac and Linux, #[link(name = "mylib")] just works, giving you the impression Rust handles the name resolution (fully) automatically like other platforms that just require the base name. > > > > > In fact, the correct way to cross platform link against a dylib produced by Rust seems to be: > > > ```rust #[cfg_attr(all(target_os = "windows", target_env = "msvc"), link(name = "dylib.dll"))] #[cfg_attr(not(all(target_os = "windows", target_env = "msvc")), link(name = "dylib"))] extern "C" {} ```
This is not my ideal answer, but it is how I solved the problem. What I'm still looking for is a way to get the Microsoft linker (I believe) to output full verbosity in the Rust build, as it can when doing C++ builds. There are build options that might trigger this, but I haven't found them yet. That, plus this name-mangling workaround explained in maybe 80% less text than I write here, would be an ideal answer, I think.

The users.rust-lang.org user chrefr helped by asking some clarifying questions which jogged my brain. He mentioned that the "name mangling schema is unspecified in C++", which was my aha moment. I was trying to force Rust to make the Rust linker look for my external output() API function, expecting it to look for the mangled name, since the native API call I am accessing was not declared with "cdecl" to prevent name mangling. I simply forced Rust to use the mangled name I found with dumpbin.exe (code below).

What I was hoping for as an answer was a way to get linker.exe to output all the symbols it is looking for, which would have been "output", the name the compiler error was stating. I was thinking it was looking for a mangled name and wanted to compare the two mangled names by getting the Microsoft linker to output what it was attempting to match.

So my solution was to use the dumpbin-mangled name in my #[link] directive:

```
//#[link(name="myNativeLib")]
//#[link(name="myNativeLib", kind="dylib")] // prepends _imp to symbol below
#[link(name="myNativeLib", kind="static")]  // I'm linking with a DLL
extern {
    //#[link_name = "output"]
    #[link_name = "?output@@YAXPEBDZZ"]  // Name found via DUMPBIN.exe /Exports
    fn output( format:LPCTSTR, ...);
}
```

Although I have access to the sources of myNativeLib, these are not distributed and are not going to change. The *.lib and *.exp are only available internally, so long term I will need a solution for binding to these modules that relies only on the *.dll being present. That suggests I might need to load the DLL dynamically instead of doing what I consider "implicit" linking of the DLL, since I suspect Rust looks only at the *.lib module to resolve the symbols. I need a kind="dylibOnly" for Windows DLLs that are distributed without *.lib and *.exp modules.

But for the moment I was able to get all my link issues resolved. I can now call my Rust DLL from a VS2019 Platform Toolset V142 "main", and the Rust DLL can call the 'C' DLL function "output", with the data going to the proprietary stream that the native "output" function was designed to send data to. There were several hoops involved, but generally cargo/rustc/cbindgen worked well for this newbie. Now I'm trying to find a compute-intensive task where multithreading is being avoided in 'C' that could be safely implemented in Rust and benchmarked, to illustrate that all this pain is worthwhile.
68,293,321
In Python/Pandas, I want to create a column in my dataframe that shows the average number of days between customer visits at a venue. That is, for each customer, what are the average number of days between that customer's visits? Data looks like [Image of My Data](https://i.stack.imgur.com/NPFMU.png) Sorry I'm really inexperienced and don't know how to type the data up other than this. I am following the solution in [this StackOverflow answer](https://stackoverflow.com/questions/45241221/python-pandas-calculate-average-days-between-dates), except that that person wanted the average number of days between visits in general, and I want days between visits for each customer. Thank you.
2021/07/07
[ "https://Stackoverflow.com/questions/68293321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14814034/" ]
Like your [previous question](https://stackoverflow.com/q/68289334/1889329) you continue to ignore how compilers and linkers work. The two concepts you need to wrap your head around are these: * `LPCTSTR` is not a type. It is a preprocessor macro that expands to `char const*`, `wchar_t const*`, or `__wchar_t const*` if you are [particularly unlucky](https://learn.microsoft.com/en-us/cpp/build/reference/zc-wchar-t-wchar-t-is-native-type). Either way, once the compiler is done, `LPCTSTR` is gone. Forever. It will not ever show up as a type even when using C++ name decoration. **It is not a type, don't use it in places where only types are allowed.** * Compilers support different types of [language linkage](https://en.cppreference.com/w/cpp/language/language_linkage) for external symbols. While you insist to have a C DLL, you are in fact using C++ linkage. This is evidenced by the symbol assigned to the exported function. While C++ linkage is great in that it allows type information to be encoded in the [decorated names](https://learn.microsoft.com/en-us/cpp/build/reference/decorated-names), the name decoration scheme isn't standardized in any way, and varies widely across compilers and platforms. As such, it is useless when the goal is cross language interoperability (or any interoperability). --- As explained in [my previous answer](https://stackoverflow.com/a/68298524/1889329), you will need to get rid of the `LPCTSTR` in your C (or C++) interface. That's non-negotiable. It **must** go, and unwittingly you have done that already. Since DUMPBIN understands MSVC's C++ name decoration scheme, it was able to turn this symbol ```none ?output@@YAXPEBDZZ ``` into this code ```cpp void __cdecl output(char const *,...) ``` All type information is encoded in the decorated name, including the calling convention used. Take special note that the first formal parameter is of type `char const *`. That's fixed, set in stone, compiled into the DLL. There is no going back and changing your mind, so make sure your clients can't either. You **MUST** change the signature of your C or C++ function. Pick either `char const*` or `wchar_t const*`. When it comes to strings in Rust on Windows there is no good option. Picking either one is the best you have. --- The other issue you are going up against is insisting on having Rust come to terms with C++' language linkage. That isn't going to be an option until Standard C++ has formally standardized C++ language linkage. In statistics, this is called the *"Impossible Event"*, so don't sink any more time into something that's not going to get you anywhere. Instead, instruct your C or C++ library to export symbols using C language linkage by prepending an `extern "C"` specifier. While not formally specified either, most tools agree on a sufficiently large set of rules to be usable. Whether you like it or not, `extern "C"` is the only option we have when making compiled C or C++ code available to other languages (or C and C++, for that matter). If for whatever reason you cannot use C language linkage (and frankly, since you are compiling C code I don't see a valid reason for that being the case) you *could* [export from a DLL using a DEF file](https://learn.microsoft.com/en-us/cpp/build/exporting-from-a-dll-using-def-files), giving you control over the names of the exported symbols. I don't see much benefit in using C++ language linkage, then throwing out all the benefits and pretend to the linker that this were C language linkage. 
I mean, why not just have the compiler do all that work instead? Of course, if you are this desperately trying to avoid the solution, you could also follow the approach from your [proposed answer](https://stackoverflow.com/a/68309674/1889329), so long as you understand, why it works, when it stops working, and which new error mode you've introduced. * It works, in part by tricking the compiler, and in part by coincidence. The `link_name = "?output@@YAXPEBDZZ"` attribute tells the compiler to stop massaging the import symbol and instead use the provided name when requesting the linker to resolve symbols. This works by coincidence because Rust defaults to `__cdecl` which happens to be the calling convention for all variadic functions in C. Most functions in the Windows API use `__stdcall`, though. Now ironically, had you used C linkage instead, you would have lost all type information, but retained the calling convention in the name decoration. A mismatch between calling conventions would have thus been caught during linking. Another opportunity missed, oh well. * It stops working when you recompile your C DLL and define `UNICODE` or `_UNICODE`, because now the symbol has a different name, due to different types. It will also stop working when Microsoft ever decide to change their (undocumented) name decoration scheme. And it will certainly stop working when using a different C++ compiler. * The Rust implementation introduced a new error mode. Presumably, `LPCTSTR` is a type alias, gated by some sort of configuration. This allows clients to select, whether they want an `output` that accepts a `*const u8` or `*const u16`. The library, though, is compiled to accept `char const*` only. Another mismatch opportunity introduced needlessly. There is no place for [generic-text mappings](https://learn.microsoft.com/en-us/cpp/c-runtime-library/generic-text-mappings) in Windows code, and hasn't been for decades. --- As always, a few words of caution: Trying to introduce Rust into a business that's squarely footed on C and C++ requires careful consideration. Someone doing that will need to be intimately familiar with C++ compilers, linkers, and Rust. I feel that you are struggling with all three of those, and fear that you are ultimately going to provide a disservice. Consider whether you should be bringing someone in that is sufficiently experienced. You can either thank me later for the advice, or pay me to fill in that role.
This is not my ideal answer, but it is how I solved the problem. What I'm still looking for is a way to get the Microsoft linker (I believe) to output full verbosity in the Rust build, as it can do when doing C++ builds. There are options to the build that might trigger this, but I haven't found them yet. That, plus this name munging explained in maybe 80% less text than I write here, would be an ideal answer I think.

The users.rust-lang.org user chrefr helped by asking some clarifying questions which jogged my brain. He mentioned that "*name mangling schema is unspecified in C++*", which was my aha moment. I was trying to force Rust to make the Rust linker look for my external output() API function, expecting it to look for the mangled name, as the native API call I am accessing was not declared with "cdecl" to prevent name mangling. I simply forced Rust to use the mangled name I found with dumpbin.exe (code below).

What I was hoping for as an answer was a way to get linker.exe to output all the symbols it is looking for. That would have been "output", which is what the compiler error was stating. I was thinking it was looking for a mangled name and wanted to compare the two mangled names by getting the Microsoft linker to output what it was attempting to match.

So my solution was to use the dumpbin munged name in my #[link] directive:

```
//#[link(name="myNativeLib")]
//#[link(name="myNativeLib", kind="dylib")]    // prepends _imp to symbol below
#[link(name="myNativeLib", kind="static")]   // I'm linking with a DLL
extern {
    //#[link_name = "output"]
    #[link_name = "?output@@YAXPEBDZZ"]   // Name found via DUMPBIN.exe /Exports
    fn output( format:LPCTSTR, ...); 
}
```

Although I have access to the sources of myNativeLib, these are not distributed and are not going to change. The \*.lib and \*.exp are only available internally, so long term I will need a solution to bind to these modules that only relies on the \*.dll being present. That suggests I might need to dynamically load the DLL instead of doing what I consider "implicit" linking of the DLL, as I suspect Rust is looking just at the \*.lib module to resolve the symbols. I need a kind="dylibOnly" for Windows DLLs that are distributed without \*.lib and \*.exp modules.

But for the moment I was able to get all my link issues resolved. I can now call my Rust DLL from a VS2019 Platform Toolset V142 "main", and the Rust DLL can call a 'C' DLL function "output", and the data goes to the proprietary stream that the native "output" function was designed to send data to. There were several hoops involved, but generally cargo/rustc/cbindgen worked well for this newbie.

Now I'm trying to consider any compute-intensive task where multithreading is being avoided in 'C' that could be safely implemented in Rust, and which could be benchmarked to illustrate that all this pain is worthwhile.
13,217,434
I'm planning to insert data to bellow CF that has compound keys. ``` CREATE TABLE event_attend ( event_id int, event_type varchar, event_user_id int, PRIMARY KEY (event_id, event_type) #compound keys... ); ``` But I can't insert data to this CF from python using cql. (http://code.google.com/a/apache-extras.org/p/cassandra-dbapi2/) ``` import cql connection = cql.connect(host, port, keyspace) cursor = connection.cursor() cursor.execute("INSERT INTO event_attend (event_id, event_type, event_user_id) VALUES (1, 'test', 2)", dict({}) ) ``` I get the following traceback: ``` Traceback (most recent call last): File "./v2_initial.py", line 153, in <module> db2cass.execute() File "./v2_initial.py", line 134, in execute cscursor.execute("insert into event_attend (event_id, event_type, event_user_id ) values (1, 'test', 2)", dict({})) File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/cursor.py", line 80, in execute response = self.get_response(prepared_q, cl) File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/thrifteries.py", line 80, in get_response return self.handle_cql_execution_errors(doquery, compressed_q, compress) File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/thrifteries.py", line 98, in handle_cql_execution_errors raise cql.ProgrammingError("Bad Request: %s" % ire.why) cql.apivalues.ProgrammingError: Bad Request: unable to make int from 'event_user_id' ``` What am I doing wrong?
2012/11/04
[ "https://Stackoverflow.com/questions/13217434", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1797779/" ]
It looks like you are trying to follow the example in: <http://pypi.python.org/pypi/cql/1.4.0> ``` import cql con = cql.connect(host, port, keyspace) cursor = con.cursor() cursor.execute("CQL QUERY", dict(kw='Foo', kw2='Bar', kwn='etc...')) ``` However, if you only need to insert one row (like in your question), just drop the empty dict() parameter. Also, since you are using composite keys, make sure you use CQL3 <http://www.datastax.com/dev/blog/whats-new-in-cql-3-0> ``` connection = cql.connect('localhost:9160', cql_version='3.0.0') ``` The following code should work (just adapt it to localhost if needed): ``` import cql con = cql.connect('172.24.24.24', 9160, keyspace, cql_version='3.0.0') print ("Connected!") cursor = con.cursor() CQLString = "INSERT INTO event_attend (event_id, event_type, event_user_id) VALUES (131, 'Party', 3156);" cursor.execute(CQLString) ```
For Python 2.7, 3.3, 3.4, 3.5, and 3.6, you can install the driver with

```
$ pip install cassandra-driver
```

And in Python:

```
import cassandra
```

Documentation can be found under <https://datastax.github.io/python-driver/getting_started.html#passing-parameters-to-cql-queries>
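To tie this back to the question's table, here is a minimal sketch of a parameterized insert with this driver. The contact point and keyspace name are assumptions; adjust them to your cluster:

```
from cassandra.cluster import Cluster

# Assumed contact point and keyspace - replace with your own values.
cluster = Cluster(['127.0.0.1'], port=9042)
session = cluster.connect('my_keyspace')

# %s placeholders are bound by the driver, so the values are escaped for you.
session.execute(
    "INSERT INTO event_attend (event_id, event_type, event_user_id) VALUES (%s, %s, %s)",
    (1, 'test', 2),
)

cluster.shutdown()
```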
41,351,431
Suppose I have the following numpy structured array: ``` In [250]: x Out[250]: array([(22, 2, -1000000000, 2000), (22, 2, 400, 2000), (22, 2, 804846, 2000), (44, 2, 800, 4000), (55, 5, 900, 5000), (55, 5, 1000, 5000), (55, 5, 8900, 5000), (55, 5, 11400, 5000), (33, 3, 14500, 3000), (33, 3, 40550, 3000), (33, 3, 40990, 3000), (33, 3, 44400, 3000)], dtype=[('f1', '<i4'), ('f2', '<f4'), ('f3', '<f4'), ('f4', '<i4')]) ``` I am trying to modify a subset of the above array to a regular numpy array. It is essential for my application that no copies are created (only views). Fields are retrieved from the above structured array by using the following function: ``` def fields_view(array, fields): return array.getfield(numpy.dtype( {name: array.dtype.fields[name] for name in fields} )) ``` If I am interested in fields 'f2' and 'f3', I would do the following: ``` In [251]: y=fields_view(x,['f2','f3']) In [252]: y Out [252]: array([(2.0, -1000000000.0), (2.0, 400.0), (2.0, 804846.0), (2.0, 800.0), (5.0, 900.0), (5.0, 1000.0), (5.0, 8900.0), (5.0, 11400.0), (3.0, 14500.0), (3.0, 40550.0), (3.0, 40990.0), (3.0, 44400.0)], dtype={'names':['f2','f3'], 'formats':['<f4','<f4'], 'offsets':[4,8], 'itemsize':12}) ``` There is a way to directly get an ndarray from the 'f2' and 'f3' fields of the original structured array. However, for my application, it is necessary to build this intermediary structured array as this data subset is an attribute of a class. I can't convert the intermediary structured array to a regular numpy array without doing a copy. ``` In [253]: y.view(('<f4', len(y.dtype.names))) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-54-f8fc3a40fd1b> in <module>() ----> 1 y.view(('<f4', len(y.dtype.names))) ValueError: new type not compatible with array. 
``` This function can also be used to convert a record array to an ndarray: ``` def recarr_to_ndarr(x,typ): fields = x.dtype.names shape = x.shape + (len(fields),) offsets = [x.dtype.fields[name][1] for name in fields] assert not any(np.diff(offsets, n=2)) strides = x.strides + (offsets[1] - offsets[0],) y = np.ndarray(shape=shape, dtype=typ, buffer=x, offset=offsets[0], strides=strides) return y ``` However, I get the following error: ``` In [254]: recarr_to_ndarr(y,'<f4') --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-65-2ebda2a39e9f> in <module>() ----> 1 recarr_to_ndarr(y,'<f4') <ipython-input-62-8a9eea8e7512> in recarr_to_ndarr(x, typ) 8 strides = x.strides + (offsets[1] - offsets[0],) 9 y = np.ndarray(shape=shape, dtype=typ, buffer=x, ---> 10 offset=offsets[0], strides=strides) 11 return y 12 TypeError: expected a single-segment buffer object ``` The function works fine if I create a copy: ``` In [255]: recarr_to_ndarr(np.array(y),'<f4') Out[255]: array([[ 2.00000000e+00, -1.00000000e+09], [ 2.00000000e+00, 4.00000000e+02], [ 2.00000000e+00, 8.04846000e+05], [ 2.00000000e+00, 8.00000000e+02], [ 5.00000000e+00, 9.00000000e+02], [ 5.00000000e+00, 1.00000000e+03], [ 5.00000000e+00, 8.90000000e+03], [ 5.00000000e+00, 1.14000000e+04], [ 3.00000000e+00, 1.45000000e+04], [ 3.00000000e+00, 4.05500000e+04], [ 3.00000000e+00, 4.09900000e+04], [ 3.00000000e+00, 4.44000000e+04]], dtype=float32) ``` There seems to be no difference between the two arrays: ``` In [66]: y Out[66]: array([(2.0, -1000000000.0), (2.0, 400.0), (2.0, 804846.0), (2.0, 800.0), (5.0, 900.0), (5.0, 1000.0), (5.0, 8900.0), (5.0, 11400.0), (3.0, 14500.0), (3.0, 40550.0), (3.0, 40990.0), (3.0, 44400.0)], dtype={'names':['f2','f3'], 'formats':['<f4','<f4'], 'offsets':[4,8], 'itemsize':12}) In [67]: np.array(y) Out[67]: array([(2.0, -1000000000.0), (2.0, 400.0), (2.0, 804846.0), (2.0, 800.0), (5.0, 900.0), (5.0, 1000.0), (5.0, 8900.0), (5.0, 11400.0), (3.0, 14500.0), (3.0, 40550.0), (3.0, 40990.0), (3.0, 44400.0)], dtype={'names':['f2','f3'], 'formats':['<f4','<f4'], 'offsets':[4,8], 'itemsize':12}) ```
2016/12/27
[ "https://Stackoverflow.com/questions/41351431", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2407231/" ]
This answer is a bit long and rambling. I started with what I knew from previous work on taking array views, and then tried to relate that to your functions. ================ In your case, all fields are 4 bytes long, both floats and ints. I can then view it as all ints or all floats: ``` In [1431]: x Out[1431]: array([(22, 2.0, -1000000000.0, 2000), (22, 2.0, 400.0, 2000), (22, 2.0, 804846.0, 2000), (44, 2.0, 800.0, 4000), (55, 5.0, 900.0, 5000), (55, 5.0, 1000.0, 5000), (55, 5.0, 8900.0, 5000), (55, 5.0, 11400.0, 5000), (33, 3.0, 14500.0, 3000), (33, 3.0, 40550.0, 3000), (33, 3.0, 40990.0, 3000), (33, 3.0, 44400.0, 3000)], dtype=[('f1', '<i4'), ('f2', '<f4'), ('f3', '<f4'), ('f4', '<i4')]) In [1432]: x.view('i4') Out[1432]: array([ 22, 1073741824, -831624408, 2000, 22, 1073741824, 1137180672, 2000, 22, 1073741824, 1229225696, 2000, 44, 1073741824, 1145569280, .... 3000]) In [1433]: x.view('f4') Out[1433]: array([ 3.08285662e-44, 2.00000000e+00, -1.00000000e+09, 2.80259693e-42, 3.08285662e-44, 2.00000000e+00, .... 4.20389539e-42], dtype=float32) ``` This view is 1d. I can reshape and slice the 2 float columns ``` In [1434]: x.shape Out[1434]: (12,) In [1435]: x.view('f4').reshape(12,-1) Out[1435]: array([[ 3.08285662e-44, 2.00000000e+00, -1.00000000e+09, 2.80259693e-42], [ 3.08285662e-44, 2.00000000e+00, 4.00000000e+02, 2.80259693e-42], ... [ 4.62428493e-44, 3.00000000e+00, 4.44000000e+04, 4.20389539e-42]], dtype=float32) In [1437]: x.view('f4').reshape(12,-1)[:,1:3] Out[1437]: array([[ 2.00000000e+00, -1.00000000e+09], [ 2.00000000e+00, 4.00000000e+02], [ 2.00000000e+00, 8.04846000e+05], [ 2.00000000e+00, 8.00000000e+02], ... [ 3.00000000e+00, 4.44000000e+04]], dtype=float32) ``` That this is a view can be verified by doing a bit of inplace math, and seeing the results in `x`: ``` In [1439]: y=x.view('f4').reshape(12,-1)[:,1:3] In [1440]: y[:,0] += .5 In [1441]: y Out[1441]: array([[ 2.50000000e+00, -1.00000000e+09], [ 2.50000000e+00, 4.00000000e+02], ... [ 3.50000000e+00, 4.44000000e+04]], dtype=float32) In [1442]: x Out[1442]: array([(22, 2.5, -1000000000.0, 2000), (22, 2.5, 400.0, 2000), (22, 2.5, 804846.0, 2000), (44, 2.5, 800.0, 4000), (55, 5.5, 900.0, 5000), (55, 5.5, 1000.0, 5000), (55, 5.5, 8900.0, 5000), (55, 5.5, 11400.0, 5000), (33, 3.5, 14500.0, 3000), (33, 3.5, 40550.0, 3000), (33, 3.5, 40990.0, 3000), (33, 3.5, 44400.0, 3000)], dtype=[('f1', '<i4'), ('f2', '<f4'), ('f3', '<f4'), ('f4', '<i4')]) ``` If the field sizes differed this might be impossible. For example if the floats were 8 bytes. The key is picturing how the structured data is stored, and imagining whether that can be viewed as a simple dtype of multiple columns. And field choice has to be equivalent to a basic slice. Working with ['f1','f4'] would be equivalent to advanced indexing with [:,[0,3], which has to be a copy. ========== The 'direct' field indexing is: ``` z = x[['f2','f3']].view('f4').reshape(12,-1) z -= .5 ``` modifies `z` but with a `futurewarning`. Also it does not modify `x`; `z` has become a copy. I can also see this by looking at `z.__array_interface__['data']`, the data buffer location (and comparing with that of `x` and `y`). ================= Your `fields_view` does create a structured view: ``` In [1480]: w=fields_view(x,['f2','f3']) In [1481]: w.__array_interface__['data'] Out[1481]: (151950184, False) In [1482]: x.__array_interface__['data'] Out[1482]: (151950184, False) ``` which can be used to modify `x`, `w['f2'] -= .5`. So it is more versatile than the 'direct' `x[['f2','f3']]`. 
The `w` dtype is ``` dtype({'names':['f2','f3'], 'formats':['<f4','<f4'], 'offsets':[4,8], 'itemsize':12}) ``` Adding `print(shape, typ, offsets, strides)` to your `recarr_to_ndarr`, I get (py3) ``` In [1499]: recarr_to_ndarr(w,'<f4') (12, 2) <f4 [4, 8] (16, 4) .... ValueError: ndarray is not contiguous In [1500]: np.ndarray(shape=(12,2), dtype='<f4', buffer=w.data, offset=4, strides=(16,4)) ... BufferError: memoryview: underlying buffer is not contiguous ``` That `contiguous` problem must be refering to the values shown in `w.flags`: ``` In [1502]: w.flags Out[1502]: C_CONTIGUOUS : False F_CONTIGUOUS : False .... ``` It's interesting that `w.dtype.descr` converts the 'offsets' into a unnamed field: ``` In [1506]: w.__array_interface__ Out[1506]: {'data': (151950184, False), 'descr': [('', '|V4'), ('f2', '<f4'), ('f3', '<f4')], 'shape': (12,), 'strides': (16,), 'typestr': '|V12', 'version': 3} ``` One way or other, `w` has a non-contiguous data buffer, which can't be used to create a new array. Flattened, the data buffer looks something like ``` xoox|xoox|xoox|... # x 4 bytes we want to skip # o 4 bytes we want to use # | invisible bdry between records in x ``` The `y` I constructed above has: ``` In [1511]: y.__array_interface__ Out[1511]: {'data': (151950188, False), 'descr': [('', '<f4')], 'shape': (12, 2), 'strides': (16, 4), 'typestr': '<f4', 'version': 3} ``` So it accesses the `o` bytes with a 4 byte offset, and then (16,4) strides, and (12,2) shape. If I modify your `ndarray` call to use the original `x.data`, it works: ``` In [1514]: xx=np.ndarray(shape=(12,2), dtype='<f4', buffer=x.data, offset=4, strides=(16,4)) In [1515]: xx Out[1515]: array([[ 2.00000000e+00, -1.00000000e+09], [ 2.00000000e+00, 4.00000000e+02], .... [ 3.00000000e+00, 4.44000000e+04]], dtype=float32) ``` with the same array\_interface as my `y`: ``` In [1516]: xx.__array_interface__ Out[1516]: {'data': (151950188, False), 'descr': [('', '<f4')], 'shape': (12, 2), 'strides': (16, 4), 'typestr': '<f4', 'version': 3} ```
hpaulj was right in saying that the problem is that the subset of the structured array is not contiguous. Interestingly, I figured out a way to make the array subset contiguous with the following function: ``` def view_fields(a, fields): """ `a` must be a numpy structured array. `names` is the collection of field names to keep. Returns a view of the array `a` (not a copy). """ dt = a.dtype formats = [dt.fields[name][0] for name in fields] offsets = [dt.fields[name][1] for name in fields] itemsize = a.dtype.itemsize newdt = np.dtype(dict(names=fields, formats=formats, offsets=offsets, itemsize=itemsize)) b = a.view(newdt) return b In [5]: view_fields(x,['f2','f3']).flags Out[5]: C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False ``` The old function: ``` In [10]: fields_view(x,['f2','f3']).flags Out[10]: C_CONTIGUOUS : False F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True UPDATEIFCOPY : False ```
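As a quick sanity check that `view_fields` really returns a view and not a copy, you can write through it and watch the original array change. A small sketch, reusing the `x` array from the question:

```
import numpy as np

y = view_fields(x, ['f2', 'f3'])   # shares the data buffer of x
y['f2'] += 0.5                     # in-place update through the view

print(np.shares_memory(x, y))      # True, so no copy was made
print(x['f2'][:3])                 # the original array reflects the change
```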
62,980,784
I'm importing skimage in a Python script:

```
from skimage.feature import greycomatrix, greycoprops
```

and I get this error

> 
> ***No module named 'skimage'***
> 
> 

although I've already installed scikit-image. Can anyone help?

This is the output of pip freeze:

[![enter image description here](https://i.stack.imgur.com/cC9k8.png)](https://i.stack.imgur.com/cC9k8.png)
[![enter image description here](https://i.stack.imgur.com/rnE9b.png)](https://i.stack.imgur.com/rnE9b.png)
[![enter image description here](https://i.stack.imgur.com/jXl7N.png)](https://i.stack.imgur.com/jXl7N.png)
2020/07/19
[ "https://Stackoverflow.com/questions/62980784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8151481/" ]
You can use `pip install scikit-image`. Also, see the [recommended procedure](http://scikit-image.org/docs/dev/install.html).
If you are using python3 you should install the package using `python3 -m pip install package_name` or `pip3 install package_name` Using the `pip` binary will install the package for `python2` on some systems.
62,980,784
I'm importing skimage in a Python script:

```
from skimage.feature import greycomatrix, greycoprops
```

and I get this error

> 
> ***No module named 'skimage'***
> 
> 

although I've already installed scikit-image. Can anyone help?

This is the output of pip freeze:

[![enter image description here](https://i.stack.imgur.com/cC9k8.png)](https://i.stack.imgur.com/cC9k8.png)
[![enter image description here](https://i.stack.imgur.com/rnE9b.png)](https://i.stack.imgur.com/rnE9b.png)
[![enter image description here](https://i.stack.imgur.com/jXl7N.png)](https://i.stack.imgur.com/jXl7N.png)
2020/07/19
[ "https://Stackoverflow.com/questions/62980784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8151481/" ]
Since pip freeze indeed shows scikit-image as installed, I presume that you are launching your script/session using a different *environment* from the one listed by pip. You should make sure that you are in the same environment. Try `python -m pip freeze` and `python my_script.py` from the same terminal to make sure that you are comparing the same environment. RealPython has a decent guide on Python environments [here](https://realpython.com/python-virtual-environments-a-primer/).
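A quick way to confirm which interpreter (and therefore which site-packages directory) your session is actually using is a small diagnostic snippet; run it from the same terminal where the import fails:

```
import sys

print(sys.executable)   # the interpreter running this script/session
print(sys.prefix)       # the environment it belongs to

try:
    import skimage
    print(skimage.__version__, skimage.__file__)
except ImportError:
    print("scikit-image is not installed in this environment")
```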
You can use `pip install scikit-image`. Also, see the [recommended procedure](http://scikit-image.org/docs/dev/install.html).
62,980,784
I'm importing skimage in a Python script:

```
from skimage.feature import greycomatrix, greycoprops
```

and I get this error

> 
> ***No module named 'skimage'***
> 
> 

although I've already installed scikit-image. Can anyone help?

This is the output of pip freeze:

[![enter image description here](https://i.stack.imgur.com/cC9k8.png)](https://i.stack.imgur.com/cC9k8.png)
[![enter image description here](https://i.stack.imgur.com/rnE9b.png)](https://i.stack.imgur.com/rnE9b.png)
[![enter image description here](https://i.stack.imgur.com/jXl7N.png)](https://i.stack.imgur.com/jXl7N.png)
2020/07/19
[ "https://Stackoverflow.com/questions/62980784", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8151481/" ]
Since pip freeze indeed shows scikit-image as installed, I presume that you are launching your script/session using a different *environment* from the one listed by pip. You should make sure that you are in the same environment. Try `python -m pip freeze` and `python my_script.py` from the same terminal to make sure that you are comparing the same environment. RealPython has a decent guide on Python environments [here](https://realpython.com/python-virtual-environments-a-primer/).
If you are using python3 you should install the package using `python3 -m pip install package_name` or `pip3 install package_name` Using the `pip` binary will install the package for `python2` on some systems.
69,465,428
I have a dictionary that looks like this: d = {key1 : {(key2,key3) : value}, ...} so it is a dictionary of dictionaries and in the inside dict the keys are tuples. I would like to get a triple nested dict: {key1 : {key2 : {key3 : value}, ...} I know how to do it with 2 loops and a condition: ``` new_d = {} for key1, inside_dict in d.items(): new_d[key1] = {} for (key2,key3), value in inside_dict.items(): if key2 in new_d[key1].keys(): new_d[key1][key2][key3] = value else: new_d[key1][key2] = {key3 : value} ``` Edit: key2 values are not guaranteed to be unique. This is why I added the condition It feels very unpythonic to me. Is there a faster and/or shorter way to do this?
2021/10/06
[ "https://Stackoverflow.com/questions/69465428", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11930768/" ]
You could use the common trick for nesting dicts arbitrarily, using `collections.defaultdict`: ``` from collections import defaultdict tree = lambda: defaultdict(tree) new_d = tree() for k1, dct in d.items(): for (k2, k3), val in dct.items(): new_d[k1][k2][k3] = val ```
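One caveat: the result is a tree of `defaultdict`s, which silently grows new empty branches on accidental lookups. If a plain `dict` is needed afterwards (for printing, JSON serialization, etc.), a small recursive conversion helps; a sketch:

```
def to_plain_dict(d):
    # Recursively copy nested (default)dicts into ordinary dicts.
    if isinstance(d, dict):
        return {k: to_plain_dict(v) for k, v in d.items()}
    return d

new_d = to_plain_dict(new_d)
```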
If I understand the problem correctly, for this case you can wrap all the looping up in a dict comprehension. This assumes that your data is unique: ```py data = {"key1": {("key2", "key3"): "val"}} {k: {keys[0]: {keys[1]: val}} for k,v in data.items() for keys, val in v.items()} ```
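Since the question notes that `key2` values are not guaranteed to be unique, a plain comprehension would silently overwrite earlier entries. A `setdefault`-based loop keeps the original semantics without the explicit membership check; a sketch using the names from the question:

```
new_d = {}
for key1, inner in d.items():
    for (key2, key3), value in inner.items():
        new_d.setdefault(key1, {}).setdefault(key2, {})[key3] = value
```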
52,029,026
I am developing a Python script for my Telegram bot right now. The problem is: how do I know when my bot is added to a group? Is there an event or something else for that?

I want the bot to send a message to the group it is being added to, saying hi and listing the functions it offers. I don't know if any kind of handler is able to deal with this.
2018/08/26
[ "https://Stackoverflow.com/questions/52029026", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4847304/" ]
Very roughly, you would need to do something like this: register a handler that filters only service messages about new chat members. Then check if the bot is one of the new chat members.

```
from telegram.ext import Updater, MessageHandler, Filters

def new_member(bot, update):
    for member in update.message.new_chat_members:
        if member.username == 'YourBot':
            update.message.reply_text('Welcome')

updater = Updater('TOKEN')
updater.dispatcher.add_handler(MessageHandler(Filters.status_update.new_chat_members, new_member))
updater.start_polling()
updater.idle()
```
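A small variation avoids hard-coding the bot's name: python-telegram-bot exposes the bot's own @username as `bot.username` (resolved via `getMe`). This is only a sketch on top of the handler above, with an assumed welcome text:

```
def new_member(bot, update):
    # bot.username is this bot's own @username, so the check works for any bot token
    if any(member.username == bot.username for member in update.message.new_chat_members):
        update.message.reply_text('Hi! I was just added. Type /help to see what I can do.')
```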
With callbacks (preferred) ========================== As of version 12, the preferred way to handle updates is via callbacks. To use them prior to version 13 state `use_context=True` in your `Updater`. Version 13 will have this as default. ``` from telegram.ext import Updater, MessageHandler, Filters def new_member(update, context): for member in update.message.new_chat_members: if member.username == 'YourBot': update.message.reply_text('Welcome') updater = Updater('TOKEN', use_context=True) # use_context will be True by default in version 13+ updater.dispatcher.add_handler(MessageHandler(Filters.status_update.new_chat_members, new_member)) updater.start_polling() updater.idle() ``` Please note that the order changed here. Instead of having the update as second, it is now the first argument. Executing the code below will result in an Exception like this: ``` AttributeError: 'CallbackContext' object has no attribute 'message' ``` Without callbacks (deprecated in version 12) ============================================ Blatantly copying from [mcont's answer](https://stackoverflow.com/a/52093608/11739543): ``` from telegram.ext import Updater, MessageHandler, Filters def new_member(bot, update): for member in update.message.new_chat_members: if member.username == 'YourBot': update.message.reply_text('Welcome') updater = Updater('TOKEN') updater.dispatcher.add_handler(MessageHandler(Filters.status_update.new_chat_members, new_member)) updater.start_polling() updater.idle() ```
58,491,838
I was setting up to use Numba along with my AMD GPU. I started out with the most basic example available on their website, to calculate the value of Pi using the Monte-Carlo simulation. I made some changes to the code so that it can run on the GPU first and then on the CPU. By doing this, I just wanted to compare the time taken to execute the code and verify the results. Below is the code:

```
from numba import jit
import random
from timeit import default_timer as timer

@jit(nopython=True)
def monte_carlo_pi(nsamples):
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples

def monte_carlo_pi_cpu(nsamples):
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples

num = int(input())
start = timer()
random.seed(0)
print(monte_carlo_pi(num))
print("with gpu", timer()-start)

start = timer()
random.seed(0)
print(monte_carlo_pi_cpu(num))
print("without gpu", timer()-start)
```

I was expecting the GPU to perform better, and it did. However, some results from the GPU and the CPU did not match.

```
1000000 # input parameter
3.140836 # gpu_out
with gpu 0.2317520289998356
3.14244 # cpu_out
without gpu 0.39849199899981613
```

I am aware that Python does not handle long floating-point operations that well, but these are only 6 decimal places, and I was not expecting such a large discrepancy. Can anyone explain why this difference comes up?
2019/10/21
[ "https://Stackoverflow.com/questions/58491838", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8726146/" ]
I've reorganized your code a bit: ``` import numpy from numba import jit import random from timeit import default_timer as timer @jit(nopython=True) def monte_carlo_pi(nsamples): random.seed(0) acc = 0 for i in range(nsamples): x = random.random() y = random.random() if (x ** 2 + y ** 2) < 1.0: acc += 1 return 4.0 * acc / nsamples num = 1000000 # run the jitted code once to remove compile time from timing monte_carlo_pi(10) start = timer() print(monte_carlo_pi(num)) print("jitted code", timer()-start) start = timer() print(monte_carlo_pi.py_func(num)) print("non-jitted", timer()-start) ``` results in: ``` 3.140936 jitted code 0.01403845699996964 3.14244 non-jitted 0.39901430800000526 ``` Note, you are **not** running the jitted code on your GPU. The code is compiled, but for your CPU. The reason for the difference in the computed value of Pi is likely due to differing implementations of the underlying random number generator. Numba isn't actually using Python's `random` module, but has its own implementation that is meant to mimic it. In fact, if you look at the source code, it appears as if the numba implementation is primarily designed based on numpy's random module, and then just aliases the `random` module from that, so if you swap out `random.random` for `np.random.random`, with the same seed, you get the same results: ``` @jit(nopython=True) def monte_carlo_pi2(nsamples): np.random.seed(0) acc = 0 for i in range(nsamples): x = np.random.random() y = np.random.random() if (x ** 2 + y ** 2) < 1.0: acc += 1 return 4.0 * acc / nsamples ``` Results in: ``` 3.140936 jitted code 0.013946142999998301 3.140936 non-jitted 0.9277294739999888 ``` And just a few other notes: * When timing numba jitted functions, always run the function once to compile it before doing benchmarking so you don't include the one-time compile time cost in the timing * You can access the pure python version of a numba jitted function using `.py_func`, so you don't have to duplicate the code twice.
> > **Q** : *Can anyone explain as to **why** this difference comes up?* > > > The availability and almost pedantic care of systematic use of re-setting the same state via the PRNG-of-choice **`.seed( aRepeatableExperimentSeedNUMBER )`**-method is the root-cause of all these surprises. Proper seeding works **if and only if** the same PRNG-algorithm is used - being principally different in **`random`**-module's `.random()`-method than the one in **`numpy.random`**-module's `.random()`. Another sort of observed artifact ( different values of the dart-throwing **`pi`**-guesstimates ) is related to a rather tiny scale ( yes, `1E6`-points is a tiny amount, compared to the initial axiom of the art of statistics - which is "using **infinitely and only infinitely** sized populations" ), where ***different* order** of using thenumbers that have been ( thanks to a pedantic and systematic re-`seed(0)`-ing the PRNG-FSA ) reproducibly generated into the always the same sequence of values, produces different results ( see difference of values in yesterday's experiments ). These artifacts, however, play less and less important role as the size grows ( as was shown at the very bottom, reproducible experiment ): ``` # 1E+6: 3.138196 # block-wise generation in np.where().sum() # 3.140936 # pair-wise generation in monte_carlo_pi2() # 1E+7: 3.142726 # block-wise generation in np.where().sum() # 3.142358 # pair-wise generation in monte_carlo_pi2() # 3E+7: 3.1421996 # block-wise generation in np.where().sum() # 3.1416629333333335 # pair-wise generation in monte_carlo_pi2() # 1E+8: 3.14178916 # block-wise generation in np.where().sum() # 3.14167324 # pair-wise generation in monte_carlo_pi2() # 1E+9: -. # block-wise generation in np.where().sum() -x-RAM-SWAP- # 3.141618484 # pair-wise generation in monte_carlo_pi2() # 1E10 -. # block-wise generation in np.where().sum() -x-RAM-SWAP- # 3.1415940572 # pair-wise generation in monte_carlo_pi2() # 1E11 -. # block-wise generation in np.where().sum() -x-RAM-SWAP- # 3.14159550084 # pair-wise generation in monte_carlo_pi2() ``` --- Next, let me show another aspect: What are the actual costs of doing so and where do they come from ?!? --------------------------------------------------------------------- A plain pure-**`numpy`** code was to compute this in on *`localhost`* in about **`108 [ms]`** ``` >>> from zmq import Stopwatch; clk = Stopwatch() # [us]-clock resolution >>> np.random.seed(0); clk.start();x = np.random.random( 1000000 ); y = np.random.random( 1000000 ); _ = ( np.where( x**2 + y**2 < 1.0, 1, 0 ).sum() * 4.0 / 1000000 );clk.stop() 108444 >>> _ 3.138196 ``` Here the most of the "costs" are related to the memory-I/O traffic ( for storing twice the 1E6-elements and making them squared ) "halved" problem has been "twice" as fast **`~ 52.7 [ms]`** ``` >>> np.random.seed(0); clk.start(); _ = ( np.where( np.random.random( 1000000 )**2 ... + np.random.random()**2 < 1.0, ... 1, ... 0 ... ).sum() * 4.0 / 1000000 ); clk.stop() 52696 ``` An interim-storage-less **`numpy`**-code was slower a bit on *`localhost`* in about **`~115 [ms]`** ``` >>> np.random.seed(0); clk.start(); _ = ( np.where( np.random.random( 1000000 )**2 ... + np.random.random( 1000000 )**2 < 1.0, ... 1, ... 0 ... 
).sum() * 4.0 / 1000000 ); clk.stop(); print _ 114501 3.138196 ``` An ordinary python code with `numpy.random` PRNG-generator was able to compute the same but in more than **`3,937.9+ [ms]`** ( here you see the python's **`for`**-iterators' looping pains - **4 seconds** compared to **`~ 50 [ms]`** ) plus you can detect a different order of how PRNG-numbers sequence were generated and pair-wise consumed (seen in the result difference) : ``` >>> def monte_carlo_pi2(nsamples): ... np.random.seed(0) ... acc = 0 ... for i in range(nsamples): ... if ( np.random.random()**2 ... + np.random.random()**2 ) < 1.0: ... acc += 1 ... return 4.0 * acc / nsamples >>> np.random.seed( 0 ); clk.start(); _ = monte_carlo_pi2( 1000000 ); clk.stop(); print _ 3937892 3.140936 ``` A **`numba.jit()`**-compiled code was to compute the same in about **`692 [ms]`** as it has to bear and bears also the ***cost-of*-`jit`-*compilation*** ( only the next call will harvest the fruits of this one-stop-cost, executing in about **`~ 50 [ms]`** ): ``` >>> @jit(nopython=True) # COPY/PASTE ... def monte_carlo_pi2(nsamples): ... np.random.seed(0) ... acc = 0 ... for i in range(nsamples): ... x = np.random.random() ... y = np.random.random() ... if (x ** 2 + y ** 2) < 1.0: ... acc += 1 ... return 4.0 * acc / nsamples ... >>> np.random.seed( 0 ); clk.start(); _ = monte_carlo_pi2( 1000000 ); clk.stop(); print _ 692811 3.140936 >>> np.random.seed( 0 ); clk.start(); _ = monte_carlo_pi2( 1000000 ); clk.stop(); print _ 50193 3.140936 ``` --- EPILOGUE : ---------- Costs matter. Always. A `jit`-compiled code can help **if and only if** the LLVM-compiled code is re-used so often, that it can adjust the costs of the initial compilation. > > ( In case arcane gurus present a fair objection: a trick with a pre-compiled code is still paying that cost, isn't it? ) > > > And the values ? ---------------- Using just as few as **`1E6`** samples is not very convincing, neither for the pi-dart-throwing experiment, nor for the performance benchmarking (as the indeed tiny small-scale of the data samples permits in-cache introduced timing artefacts, that do not scale or fail to generalise ). 
The larger the scale, the closer the **`pi`**-guesstimate gets and the better will perform data-efficient computing ( stream / pair-wise will get better than block-wise ( due to data-instantiation costs and later the memory swapping-related suffocation ) **as shown in the** [**online** reproducible-experimentation **sandbox IDE**](https://tio.run/##xVbbctpADH33V2gmD7FJstmLvWtD88gPNNNnxiGmuMGXep1Jk5@nMtjG8W5ayLSDYNhBK0tH0pGgfK3XRS622wtYVUUGAG/ZT0izsqhquK@L8iWul@sZLDdPcHdQuJ7TPpA/Zw9x98CPtHbKWOsZtNLq0ah8hVhDXjrOxcmCsWyi07cEylrDtNM8bIrl081LqpNPRclLUsX5Y5ERnSSPLlDwZvsod5DmtQtirhrVAr@j7cs6qRIXwx6e2x9uB86bTDhcWe4Pt618AUboNX5cAyXgEf2cuR5MwCdwu7OeQVntMCzAc8ZATaRsHppIXQsSPjlg8UiV6HVcoim/bjHukED8K9XoCsOciPTfdbuRXMdZuUl02/UyTqtdt@EzYS4vL@ExWUFW5HWyWMbVpliUKXcHYbyp04c2udHfxctlU5z@@6qoIMU@AD7xPXE7dwNvu9lY2RrSNMIZpX1lN@uaMR3bN3iu7oD1@iqpn6vcxR5R7FRzfTtI0nHahhmVOJC@eTUVw@awuZy2BwjCRMgiOQj@NYk3UKdZglNJCZccNMA3nVRDZdgo7181aZWUMEZRd/MHORC5I50pwwXwIYtGCAWzIBTKghANnYvjHQc2xzIwHVNxdOrQj2hXhO78TOq@NXXfRMj9YerYdZ9G4qOuMxJINXaMSmWmTiP1t9THpPTQfPzuF8GRqSOYQFoQBpauU96mzuZq2h67CnDFP@Z9pCylVSwy/AuuzsN7ZZtMuaPie4Rcncb7kCubY5NVQgTn4b2SFoRBRM3Uw3DMey6C0O6Y@UQZrEKltHT9iJH/D7xHMKGKTITKp5Ztp9rUxZ73ouc9i4YL//1cRcw350oo07/0g3PwnhEVWSZfhCbCgMkTeM9xkgPTsWTm0lPRWXjPcdtGlqXHzMmUSo15z6TkkWglGDoOkCzheJf4EZGWmjJ6jn3vK/ypo2OEEvcxs09mu@/DaXvsK6DCiEnrXHHc92P/EBC@p9rAPwJR4hy8DzGwQU9JaDj@J8YJU9jM7fY3) ``` # 1E6: # 1E6: 3.138196 Real time: 0.262 s User time: 0.268 s Sys. time: 0.110 s ---------------------------- np.where().sum() block-wise # Real time: 0.231 s User time: 0.237 s Sys. time: 0.111 s # # Real time: 0.251 s User time: 0.265 s Sys. time: 0.103 s ---------------------------- np.where( .reshape().sum() ).sum() block-wise # Real time: 0.241 s User time: 0.234 s Sys. time: 0.124 s # # 3.140936 Real time: 1.567 s User time: 1.575 s Sys. time: 0.097 s ---------------------------- monte_carlo_pi2() -- -- -- -- -- -- pair-wise # Real time: 1.556 s User time: 1.557 s Sys. time: 0.102 s # # 1E7: # 1E7: 3.142726 Real time: 0.971 s User time: 0.719 s Sys. time: 0.327 s ---------------------------- np.where().sum() block-wise # Real time: 0.762 s User time: 0.603 s Sys. time: 0.271 s # # Real time: 0.827 s User time: 0.604 s Sys. time: 0.335 s ---------------------------- np.where( .reshape().sum() ).sum() block-wise # Real time: 0.767 s User time: 0.590 s Sys. time: 0.288 s # # 3.142358 Real time: 14.756 s User time: 14.619 s Sys. time: 0.103 s ---------------------------- monte_carlo_pi2() -- -- -- -- -- -- pair-wise # Real time: 14.879 s User time: 14.740 s Sys. time: 0.117 s # # 3E7: # 3E7: 3.1421996 Real time: 1.914 s User time: 1.370 s Sys. time: 0.645 s ---------------------------- np.where().sum() block-wise # Real time: 1.796 s User time: 1.380 s Sys. time: 0.516 s # # Real time: 2.325 s User time: 1.615 s Sys. time: 0.795 s ---------------------------- np.where( .reshape().sum() ).sum() block-wise # Real time: 2.099 s User time: 1.514 s Sys. time: 0.677 s # # 3.1416629333333335 Real time: 50.182 s User time: 49.680 s Sys. time: 0.107 s ---------------------------- monte_carlo_pi2() -- -- -- -- -- -- pair-wise # Real time: 47.240 s User time: 46.711 s Sys. time: 0.103 s # # 1E8: # 1E8: 3.14178916 Real time: 12.970 s User time: 5.296 s Sys. time: 7.273 s ---------------------------- np.where().sum() block-wise # Real time: 8.275 s User time: 6.088 s Sys. 
time: 2.172 s ``` And we did not speak about the ultimate performance edge - check a read about the [**cython**](http://docs.cython.org/en/latest/src/userguide/numpy_tutorial.html) with an option to harness an OpenMP-code as a next dose of performance-boosting steroids for python
43,810,256
In DOS or a batch file on Windows we can access multiple consecutive files (fieldgen1.txt, fieldgen2.txt, etc.) as follows:

```
for /L %%i in (1,1,250) do (
copy fieldgen%%i.txt hk.ref
Process the file and go to next file.
```

I have 250 files named like fieldgen1.ref, fieldgen2.ref, etc. Now I want to access one file, process that file, and move on to the next file when processing is done.

As far as I know, Python does it like this:

```
with open('fieldgen1.txt', 'r') as inpfile, with open('fieldgen2.txt', 'r') as inpfile:
```

but I can access only two files this way. Is there any short way to access multiple consecutive files in Python?
2017/05/05
[ "https://Stackoverflow.com/questions/43810256", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6210264/" ]
Yes, you can access and process consecutive files in python ``` for i in range(1, 251): with open('fieldgen%s.txt' % i, 'r') as fp: lines = fp.readlines() # Do all your processing here ``` The code will loop and read each file. You can then do your processing once you have read all the lines. You didn't mention if you needed to alter the file as part of your processing so I am just including the reading part. If you do need to write back to the file make sure you do that after all the processing is done.
You could do something like ``` import os files = os.listdir(".") for f in files: print (str(f)) ``` This will print all files and directories in the current run directory. Once you have the file name you can use that to process the content.
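One thing to watch out for: `os.listdir` returns names in arbitrary order, and a plain lexicographic sort would put `fieldgen10.txt` before `fieldgen2.txt`. A sketch that selects only the matching files and sorts them by the number embedded in the name (assuming the `fieldgenN.txt` naming from the question):

```
import glob
import re

files = glob.glob("fieldgen*.txt")
# sort on the integer embedded in the file name, not on the raw string
files.sort(key=lambda name: int(re.search(r"\d+", name).group()))

for path in files:
    with open(path) as fp:
        data = fp.read()
        # process `data` here, then move on to the next file
```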
43,810,256
In DOS or a batch file on Windows we can access multiple consecutive files (fieldgen1.txt, fieldgen2.txt, etc.) as follows:

```
for /L %%i in (1,1,250) do (
copy fieldgen%%i.txt hk.ref
Process the file and go to next file.
```

I have 250 files named like fieldgen1.ref, fieldgen2.ref, etc. Now I want to access one file, process that file, and move on to the next file when processing is done.

As far as I know, Python does it like this:

```
with open('fieldgen1.txt', 'r') as inpfile, with open('fieldgen2.txt', 'r') as inpfile:
```

but I can access only two files this way. Is there any short way to access multiple consecutive files in Python?
2017/05/05
[ "https://Stackoverflow.com/questions/43810256", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6210264/" ]
Yes, you can access and process consecutive files in python ``` for i in range(1, 251): with open('fieldgen%s.txt' % i, 'r') as fp: lines = fp.readlines() # Do all your processing here ``` The code will loop and read each file. You can then do your processing once you have read all the lines. You didn't mention if you needed to alter the file as part of your processing so I am just including the reading part. If you do need to write back to the file make sure you do that after all the processing is done.
I would consider building the file name with simple string concatenation:

```
for i in range(1, 251):
    with open('fieldgen' + str(i) + '.txt', 'r') as fp:
        pass  # parse your file here
```

or you can use a list comprehension:

```
files = [open('fieldgen' + str(i) + '.txt', 'r') for i in range(1, 251)]
for file in files:
    pass  # parse your file; note that all 250 handles stay open until you close them yourself
```
52,621,859
I am a new Python learner and I want to write a program which reads a text file, saves the value from the line that contains "width", and prints it. The file looks like:

```
width: 10128
nlines: 7101
```

I am trying something like:

```
filename = "text.txtr"
# open the file for reading
filehandle = open(filename, 'r')
while True:
    # read a single line
    line = filehandle.readline()
    if " width " in line:
        num = str(num)  # type:
        print (num)
# close the pointer to that file
filehandle.close()
```
2018/10/03
[ "https://Stackoverflow.com/questions/52621859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9084038/" ]
Your approach to opening the file is not ideal; try using a `with` statement whenever you open a file. Afterwards you can iterate over each line of the file and check whether it contains "width"; if it does, you need to extract the number, which can be done using a regex. See the code below.

```
import re
filename = "text.txtr"
with open(filename, 'r') as filehandle:
    # read the file line by line
    for line in filehandle:
        if "width" in line:
            num = re.search(r'(\d+)\D+$', line).group(1)
            num = str(num)  # type:
            print (num)
```

Please see Matt's comment below for another solution to get the number.
It's not returning results because of the line `if " width " in line:`. As you can see from your file, there is no line with `" width "` (with spaces) in there; maybe you want:

```
if "width:" in line:
    #Do things
```

Also note there are a few issues with the code: for example, your program will never finish because of your `while True:` loop, so you'll never actually reach the line `filehandle.close()`, and the manner in which you open the file (using `with` is preferred). Also, you are assigning `num = str(num)` but `num` isn't defined beforehand, so you will run into issues there too.
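Putting those fixes together, a corrected version of the snippet from the question could look like this (it still prints the whole matching line rather than just the number):

```
filename = "text.txtr"

with open(filename, 'r') as filehandle:   # the file is closed automatically
    for line in filehandle:               # no while True needed
        if "width:" in line:
            print(line.strip())
            break                         # stop once the line has been found
```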
52,621,859
I am a new Python learner and I want to write a program which reads a text file, saves the value from the line that contains "width", and prints it. The file looks like:

```
width: 10128
nlines: 7101
```

I am trying something like:

```
filename = "text.txtr"
# open the file for reading
filehandle = open(filename, 'r')
while True:
    # read a single line
    line = filehandle.readline()
    if " width " in line:
        num = str(num)  # type:
        print (num)
# close the pointer to that file
filehandle.close()
```
2018/10/03
[ "https://Stackoverflow.com/questions/52621859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9084038/" ]
This is a simplified way based on Muhamad's answer. What you need to do is:

* open the file
* read lines until you find "width" in one
* extract the number that follows the colon
* close the file
* print the number

In Python this can look like:

```
num = None  # "sentinel" value

with open(filename) as fd:  # with will ensure the file is closed at the end of the block
    for line in fd:  # a Python open file is an iterator on lines
        if "width" in line:  # identify the line of interest
            num = int(line.split(':')[1])  # get the number in the second part of
                                           # the line when it is split on colons
            break  # ok, line has been found: stop looping

if num is not None:  # ok, we have found the line
    print(num)
```
It's not returning results because of the line `if " width " in line:`. As you can see from your file, there is no line with `" width "` (with spaces) in there; maybe you want:

```
if "width:" in line:
    #Do things
```

Also note there are a few issues with the code: for example, your program will never finish because of your `while True:` loop, so you'll never actually reach the line `filehandle.close()`, and the manner in which you open the file (using `with` is preferred). Also, you are assigning `num = str(num)` but `num` isn't defined beforehand, so you will run into issues there too.
52,621,859
I am a new Python learner and I want to write a program which reads a text file, saves the value from the line that contains "width", and prints it. The file looks like:

```
width: 10128
nlines: 7101
```

I am trying something like:

```
filename = "text.txtr"
# open the file for reading
filehandle = open(filename, 'r')
while True:
    # read a single line
    line = filehandle.readline()
    if " width " in line:
        num = str(num)  # type:
        print (num)
# close the pointer to that file
filehandle.close()
```
2018/10/03
[ "https://Stackoverflow.com/questions/52621859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9084038/" ]
Your approach to opening the file is not ideal; try using a `with` statement whenever you open a file. Afterwards you can iterate over each line of the file and check whether it contains "width"; if it does, you need to extract the number, which can be done using a regex. See the code below.

```
import re
filename = "text.txtr"
with open(filename, 'r') as filehandle:
    # read the file line by line
    for line in filehandle:
        if "width" in line:
            num = re.search(r'(\d+)\D+$', line).group(1)
            num = str(num)  # type:
            print (num)
```

Please see Matt's comment below for another solution to get the number.
First of all, there is no need to hold the file object in a separate variable; it is better to open the file directly with the `with open(...)` statement, which takes care of closing the file once the read/write operations are done (via the context manager's `__exit__()` method). So you can clean up your code like below:

```
with open("text.txtr", "r") as fh:
    lines = fh.readlines()
    for line in lines:
        if 'width' in line:
            print(line.strip())
```
52,621,859
I am a new Python learner and I want to write a program which reads a text file, saves the value from the line that contains "width", and prints it. The file looks like:

```
width: 10128
nlines: 7101
```

I am trying something like:

```
filename = "text.txtr"
# open the file for reading
filehandle = open(filename, 'r')
while True:
    # read a single line
    line = filehandle.readline()
    if " width " in line:
        num = str(num)  # type:
        print (num)
# close the pointer to that file
filehandle.close()
```
2018/10/03
[ "https://Stackoverflow.com/questions/52621859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9084038/" ]
This is a simplified way based on Muhamad's answer. What you need to do is:

* open the file
* read lines until you find "width" in one
* extract the number that follows the colon
* close the file
* print the number

In Python this can look like:

```
num = None  # "sentinel" value

with open(filename) as fd:  # with will ensure the file is closed at the end of the block
    for line in fd:  # a Python open file is an iterator on lines
        if "width" in line:  # identify the line of interest
            num = int(line.split(':')[1])  # get the number in the second part of
                                           # the line when it is split on colons
            break  # ok, line has been found: stop looping

if num is not None:  # ok, we have found the line
    print(num)
```
First of all, there is no need to hold the file object in a separate variable; it is better to open the file directly with the `with open(...)` statement, which takes care of closing the file once the read/write operations are done (via the context manager's `__exit__()` method). So you can clean up your code like below:

```
with open("text.txtr", "r") as fh:
    lines = fh.readlines()
    for line in lines:
        if 'width' in line:
            print(line.strip())
```
56,561,072
I'm trying to upgrade pip, and also install pywinusb, but I'm getting the error: "**UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128)**". Pip upgrade: ``` PS C:\Python27> pip --version pip 18.1 from c:\python27\lib\site-packages\pip (python 2.7) PS C:\Python27> python -m pip install --upgrade pip Collecting pip Exception: Traceback (most recent call last): File "C:\Python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "C:\Python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "C:\Python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "C:\Python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. ``` And in "pywinusb" install: ``` PS C:\Python27> pip install pywinusb Collecting pywinusb Exception: Traceback (most recent call last): File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "c:\python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "c:\python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "c:\python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "c:\python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. 
```

Before this I had installed the package "pyusb" without any problem and without getting any error. I've searched on Google for this error, but haven't found a very good explanation. How can I solve this error?
2019/06/12
[ "https://Stackoverflow.com/questions/56561072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6078511/" ]
Seems to be a specific issue concerning `Button` when contained in a `List` row. **Workaround**: ```swift List { HStack { Text("One").onTapGesture { print("One") } Text("Two").onTapGesture { print("Two") } } } ``` This yields the desired output. You can also use a `Group` instead of `Text` to have a sophisticated design for the "buttons".
One of the differences with SwiftUI is that you are not creating specific instances of, for example UIButton, because you might be in a Mac app. With SwiftUI, you are requesting a button type thing. In this case since you are in a list row, the system gives you a full size, tap anywhere to trigger the action, button. And since you've added two of them, both are triggered when you tap anywhere. You can add two separate Views and give them a `.onTapGesture` to have them act essentially as buttons, but you would lose the tap flash of the cell row and any other automatic button like features SwiftUI would give. ```swift List { HStack { Text("One").onTapGesture { print("Button 1 tapped") } Spacer() Text("Two").onTapGesture { print("Button 2 tapped") } } } ```
56,561,072
I'm trying to upgrade pip, and also install pywinusb, but I'm getting the error: "**UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128)**". Pip upgrade: ``` PS C:\Python27> pip --version pip 18.1 from c:\python27\lib\site-packages\pip (python 2.7) PS C:\Python27> python -m pip install --upgrade pip Collecting pip Exception: Traceback (most recent call last): File "C:\Python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "C:\Python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "C:\Python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "C:\Python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. ``` And in "pywinusb" install: ``` PS C:\Python27> pip install pywinusb Collecting pywinusb Exception: Traceback (most recent call last): File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "c:\python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "c:\python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "c:\python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "c:\python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. 
```

Before this I had installed the package "pyusb" without any problem and without getting any error. I've searched on Google for this error, but haven't found a very good explanation. How can I solve this error?
2019/06/12
[ "https://Stackoverflow.com/questions/56561072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6078511/" ]
Seems to be a specific issue concerning `Button` when contained in a `List` row. **Workaround**: ```swift List { HStack { Text("One").onTapGesture { print("One") } Text("Two").onTapGesture { print("Two") } } } ``` This yields the desired output. You can also use a `Group` instead of `Text` to have a sophisticated design for the "buttons".
You need to create your own ButtonStyle: ``` struct MyButtonStyle: ButtonStyle { func makeBody(configuration: Configuration) -> some View { configuration.label .foregroundColor(.accentColor) .opacity(configuration.isPressed ? 0.5 : 1.0) } } struct IdentifiableString: Identifiable { let text: String var id: String { text } } struct Test: View { var body: some View { List([ IdentifiableString(text: "Line 1"), IdentifiableString(text: "Line 2"), ]) { item in HStack { Text("\(item.text)") Spacer() Button(action: { print("\(item.text) 1")}) { Text("Button 1") } Button(action: { print("\(item.text) 2")}) { Text("Button 2") } } }.buttonStyle(MyButtonStyle()) } } ```
56,561,072
I'm trying to upgrade pip, and also install pywinusb, but I'm getting the error: "**UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128)**". Pip upgrade: ``` PS C:\Python27> pip --version pip 18.1 from c:\python27\lib\site-packages\pip (python 2.7) PS C:\Python27> python -m pip install --upgrade pip Collecting pip Exception: Traceback (most recent call last): File "C:\Python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "C:\Python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "C:\Python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "C:\Python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. ``` And in "pywinusb" install: ``` PS C:\Python27> pip install pywinusb Collecting pywinusb Exception: Traceback (most recent call last): File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "c:\python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "c:\python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "c:\python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "c:\python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. 
```
Before this I have installed the package "pyusb" without any problem and without getting any error. I've searched Google for this error, but have not found a very good explanation. How can I solve this error?
2019/06/12
[ "https://Stackoverflow.com/questions/56561072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6078511/" ]
You need to use **[BorderlessButtonStyle()](https://developer.apple.com/documentation/swiftui/borderlessbuttonstyle)** or **PlainButtonStyle()**. ```swift List([1, 2, 3], id: \.self) { row in HStack { Button(action: { print("Button at \(row)") }) { Text("Row: \(row) Name: A") } .buttonStyle(BorderlessButtonStyle()) Button(action: { print("Button at \(row)") }) { Text("Row: \(row) Name: B") } .buttonStyle(PlainButtonStyle()) } } ```
Seems to be a specific issue concerning `Button` when contained in a `List` row. **Workaround**: ```swift List { HStack { Text("One").onTapGesture { print("One") } Text("Two").onTapGesture { print("Two") } } } ``` This yields the desired output. You can also use a `Group` instead of `Text` to have a sophisticated design for the "buttons".
56,561,072
I'm trying to upgrade pip, and also install pywinusb, but I'm getting the error: "**UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128)**". Pip upgrade: ``` PS C:\Python27> pip --version pip 18.1 from c:\python27\lib\site-packages\pip (python 2.7) PS C:\Python27> python -m pip install --upgrade pip Collecting pip Exception: Traceback (most recent call last): File "C:\Python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "C:\Python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "C:\Python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "C:\Python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. ``` And in "pywinusb" install: ``` PS C:\Python27> pip install pywinusb Collecting pywinusb Exception: Traceback (most recent call last): File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "c:\python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "c:\python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "c:\python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "c:\python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. 
```
Before this I have installed the package "pyusb" without any problem and without getting any error. I've searched Google for this error, but have not found a very good explanation. How can I solve this error?
2019/06/12
[ "https://Stackoverflow.com/questions/56561072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6078511/" ]
One of the differences with SwiftUI is that you are not creating specific instances of, for example, UIButton, because you might be in a Mac app. With SwiftUI, you are requesting a button-like thing. In this case, since you are in a list row, the system gives you a full-size, tap-anywhere-to-trigger-the-action button. And since you've added two of them, both are triggered when you tap anywhere. You can add two separate Views and give them an `.onTapGesture` to have them act essentially as buttons, but you would lose the tap flash of the cell row and any other automatic button-like features SwiftUI would give.

```swift
List {
    HStack {
        Text("One").onTapGesture {
            print("Button 1 tapped")
        }
        Spacer()
        Text("Two").onTapGesture {
            print("Button 2 tapped")
        }
    }
}
```
You need to create your own ButtonStyle: ``` struct MyButtonStyle: ButtonStyle { func makeBody(configuration: Configuration) -> some View { configuration.label .foregroundColor(.accentColor) .opacity(configuration.isPressed ? 0.5 : 1.0) } } struct IdentifiableString: Identifiable { let text: String var id: String { text } } struct Test: View { var body: some View { List([ IdentifiableString(text: "Line 1"), IdentifiableString(text: "Line 2"), ]) { item in HStack { Text("\(item.text)") Spacer() Button(action: { print("\(item.text) 1")}) { Text("Button 1") } Button(action: { print("\(item.text) 2")}) { Text("Button 2") } } }.buttonStyle(MyButtonStyle()) } } ```
56,561,072
I'm trying to upgrade pip, and also install pywinusb, but I'm getting the error: "**UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128)**". Pip upgrade: ``` PS C:\Python27> pip --version pip 18.1 from c:\python27\lib\site-packages\pip (python 2.7) PS C:\Python27> python -m pip install --upgrade pip Collecting pip Exception: Traceback (most recent call last): File "C:\Python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "C:\Python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "C:\Python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "C:\Python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. ``` And in "pywinusb" install: ``` PS C:\Python27> pip install pywinusb Collecting pywinusb Exception: Traceback (most recent call last): File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "c:\python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "c:\python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "c:\python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "c:\python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. 
```
Before this I have installed the package "pyusb" without any problem and without getting any error. I've searched Google for this error, but have not found a very good explanation. How can I solve this error?
2019/06/12
[ "https://Stackoverflow.com/questions/56561072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6078511/" ]
You need to use **[BorderlessButtonStyle()](https://developer.apple.com/documentation/swiftui/borderlessbuttonstyle)** or **PlainButtonStyle()**. ```swift List([1, 2, 3], id: \.self) { row in HStack { Button(action: { print("Button at \(row)") }) { Text("Row: \(row) Name: A") } .buttonStyle(BorderlessButtonStyle()) Button(action: { print("Button at \(row)") }) { Text("Row: \(row) Name: B") } .buttonStyle(PlainButtonStyle()) } } ```
One of the differences with SwiftUI is that you are not creating specific instances of, for example, UIButton, because you might be in a Mac app. With SwiftUI, you are requesting a button-like thing. In this case, since you are in a list row, the system gives you a full-size, tap-anywhere-to-trigger-the-action button. And since you've added two of them, both are triggered when you tap anywhere. You can add two separate Views and give them an `.onTapGesture` to have them act essentially as buttons, but you would lose the tap flash of the cell row and any other automatic button-like features SwiftUI would give.

```swift
List {
    HStack {
        Text("One").onTapGesture {
            print("Button 1 tapped")
        }
        Spacer()
        Text("Two").onTapGesture {
            print("Button 2 tapped")
        }
    }
}
```
56,561,072
I'm trying to upgrade pip, and also install pywinusb, but I'm getting the error: "**UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128)**". Pip upgrade: ``` PS C:\Python27> pip --version pip 18.1 from c:\python27\lib\site-packages\pip (python 2.7) PS C:\Python27> python -m pip install --upgrade pip Collecting pip Exception: Traceback (most recent call last): File "C:\Python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "C:\Python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "C:\Python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "C:\Python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "C:\Python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "C:\Python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. ``` And in "pywinusb" install: ``` PS C:\Python27> pip install pywinusb Collecting pywinusb Exception: Traceback (most recent call last): File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 143, in main status = self.run(options, args) File "c:\python27\lib\site-packages\pip\_internal\commands\install.py", line 318, in run resolver.resolve(requirement_set) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 102, in resolve self._resolve_one(requirement_set, req) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 256, in _resolve_one abstract_dist = self._get_abstract_dist_for(req_to_install) File "c:\python27\lib\site-packages\pip\_internal\resolve.py", line 209, in _get_abstract_dist_for self.require_hashes File "c:\python27\lib\site-packages\pip\_internal\operations\prepare.py", line 283, in prepare_linked_requirement progress_bar=self.progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 836, in unpack_url progress_bar=progress_bar File "c:\python27\lib\site-packages\pip\_internal\download.py", line 673, in unpack_http_url progress_bar) File "c:\python27\lib\site-packages\pip\_internal\download.py", line 895, in _download_http_url file_path = os.path.join(temp_dir, filename) File "c:\python27\lib\ntpath.py", line 85, in join result_path = result_path + p_path UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128) You are using pip version 18.1, however version 19.1.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. 
```
Before this I have installed the package "pyusb" without any problem and without getting any error. I've searched Google for this error, but have not found a very good explanation. How can I solve this error?
2019/06/12
[ "https://Stackoverflow.com/questions/56561072", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6078511/" ]
You need to use **[BorderlessButtonStyle()](https://developer.apple.com/documentation/swiftui/borderlessbuttonstyle)** or **PlainButtonStyle()**. ```swift List([1, 2, 3], id: \.self) { row in HStack { Button(action: { print("Button at \(row)") }) { Text("Row: \(row) Name: A") } .buttonStyle(BorderlessButtonStyle()) Button(action: { print("Button at \(row)") }) { Text("Row: \(row) Name: B") } .buttonStyle(PlainButtonStyle()) } } ```
You need to create your own ButtonStyle: ``` struct MyButtonStyle: ButtonStyle { func makeBody(configuration: Configuration) -> some View { configuration.label .foregroundColor(.accentColor) .opacity(configuration.isPressed ? 0.5 : 1.0) } } struct IdentifiableString: Identifiable { let text: String var id: String { text } } struct Test: View { var body: some View { List([ IdentifiableString(text: "Line 1"), IdentifiableString(text: "Line 2"), ]) { item in HStack { Text("\(item.text)") Spacer() Button(action: { print("\(item.text) 1")}) { Text("Button 1") } Button(action: { print("\(item.text) 2")}) { Text("Button 2") } } }.buttonStyle(MyButtonStyle()) } } ```
45,906,144
I was trying to open Stack Overflow, search for a query, and then click the search button. Almost everything went fine, except that I was not able to click the submit button; I encountered this error:

> WebDriverException: unknown error: Element ... is not clickable at point (608, 31). Other element would receive the click: (Session info: chrome=60.0.3112.101) (Driver info: chromedriver=2.29.461591 (62ebf098771772160f391d75e589dc567915b233), platform=Windows NT 6.1.7601 SP1 x86)

```
browser=webdriver.Chrome()
browser.get("https://stackoverflow.com/questions/19035186/how-to-select-element-with-selenium-python-xpath")
z=browser.find_element_by_css_selector(".f-input.js-search-field")  # use . for class and replace space with .
z.send_keys("geckodriver not working")
submi=browser.find_element_by_css_selector(".svg-icon.iconSearch")
submi.click()
```
2017/08/27
[ "https://Stackoverflow.com/questions/45906144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7698247/" ]
```
<button type="submit" class="btn js-search-submit">
  <svg role="icon" class="svg-icon iconSearch" width="18" height="18" viewBox="0 0 18 18">
    <path d="..."></path>
  </svg>
</button>
```

You are trying to click on the `svg`. That icon is not clickable, but the button is. So changing the button selector to `.btn.js-search-submit` will work.
Click the element with the right locator; your button locator is wrong. The other code looks good. Try this:

```
browser=webdriver.Chrome()
browser.get("https://stackoverflow.com/questions/19035186/how-to-select-element-with-selenium-python-xpath")
z=browser.find_element_by_css_selector(".f-input.js-search-field")  # use . for class and replace space with .
z.send_keys("geckodriver not working")
submi=browser.find_element_by_css_selector(".btn.js-search-submit")
submi.click()
```
45,906,144
I was trying to open Stack Overflow, search for a query, and then click the search button. Almost everything went fine, except that I was not able to click the submit button; I encountered this error:

> WebDriverException: unknown error: Element ... is not clickable at point (608, 31). Other element would receive the click: (Session info: chrome=60.0.3112.101) (Driver info: chromedriver=2.29.461591 (62ebf098771772160f391d75e589dc567915b233), platform=Windows NT 6.1.7601 SP1 x86)

```
browser=webdriver.Chrome()
browser.get("https://stackoverflow.com/questions/19035186/how-to-select-element-with-selenium-python-xpath")
z=browser.find_element_by_css_selector(".f-input.js-search-field")  # use . for class and replace space with .
z.send_keys("geckodriver not working")
submi=browser.find_element_by_css_selector(".svg-icon.iconSearch")
submi.click()
```
2017/08/27
[ "https://Stackoverflow.com/questions/45906144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7698247/" ]
```
<button type="submit" class="btn js-search-submit">
  <svg role="icon" class="svg-icon iconSearch" width="18" height="18" viewBox="0 0 18 18">
    <path d="..."></path>
  </svg>
</button>
```

You are trying to click on the `svg`. That icon is not clickable, but the button is. So changing the button selector to `.btn.js-search-submit` will work.
Use the code below to click on the submit button:

```
browser.find_element_by_css_selector(".btn.js-search-submit").click()
```
45,906,144
I was trying to open Stack Overflow, search for a query, and then click the search button. Almost everything went fine, except that I was not able to click the submit button; I encountered this error:

> WebDriverException: unknown error: Element ... is not clickable at point (608, 31). Other element would receive the click: (Session info: chrome=60.0.3112.101) (Driver info: chromedriver=2.29.461591 (62ebf098771772160f391d75e589dc567915b233), platform=Windows NT 6.1.7601 SP1 x86)

```
browser=webdriver.Chrome()
browser.get("https://stackoverflow.com/questions/19035186/how-to-select-element-with-selenium-python-xpath")
z=browser.find_element_by_css_selector(".f-input.js-search-field")  # use . for class and replace space with .
z.send_keys("geckodriver not working")
submi=browser.find_element_by_css_selector(".svg-icon.iconSearch")
submi.click()
```
2017/08/27
[ "https://Stackoverflow.com/questions/45906144", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7698247/" ]
Use the code below to click on the submit button:

```
browser.find_element_by_css_selector(".btn.js-search-submit").click()
```
Click the element with the right locator; your button locator is wrong. The other code looks good. Try this:

```
browser=webdriver.Chrome()
browser.get("https://stackoverflow.com/questions/19035186/how-to-select-element-with-selenium-python-xpath")
z=browser.find_element_by_css_selector(".f-input.js-search-field")  # use . for class and replace space with .
z.send_keys("geckodriver not working")
submi=browser.find_element_by_css_selector(".btn.js-search-submit")
submi.click()
```
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
You should call C from Python by writing a **ctypes** wrapper. Cython is for making Python-like code run faster; ctypes is for making C functions callable from Python. What you need to do is the following:

1. Write the C functions you want to use. (You probably did this already)
2. Create a shared object (.so, for Linux, OS X, etc.) or dynamically loaded library (.dll, for Windows) for those functions. (Maybe you already did this, too)
3. Write the ctypes wrapper (it's easier than it sounds, [I wrote a how-to for that](https://pgi-jcns.fz-juelich.de/portal/pages/using-c-from-python.html "Using C from Python: How to create a ctypes wrapper"))
4. Call a function from that wrapper in Python. (This is just as simple as calling any other Python function)
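As a concrete illustration of steps 3 and 4, here is a minimal sketch of such a wrapper. The library name `libfft_utils.so` and the C function `void scale_array(double *data, int n, double factor)` are made up for this example; substitute your own compiled functions and signatures.

```python
import ctypes

# Load the shared library built in step 2 (the path and name are assumptions for this sketch).
lib = ctypes.CDLL("./libfft_utils.so")

# Declare the argument and return types of the assumed C function:
#   void scale_array(double *data, int n, double factor);
lib.scale_array.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int, ctypes.c_double]
lib.scale_array.restype = None

def scale_array(values, factor):
    """Copy a Python list into a C double array, call the C function, return a list."""
    n = len(values)
    arr = (ctypes.c_double * n)(*values)   # contiguous C array of doubles
    lib.scale_array(arr, n, factor)        # the C code modifies the array in place
    return list(arr)

print(scale_array([1.0, 2.0, 3.0], 2.0))   # -> [2.0, 4.0, 6.0]
```

The wrapper function is then called like any other Python function, which is the whole point of step 4.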
It'll be easier to call C from Python. Your scenario sounds weird - normally people write most of the code in Python except for the processor-intensive portion, which is written in C. Is the two-dimensional FFT the computationally intensive part of your code?
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
If I understand correctly, you have no preference for the direction of the dialog: C => Python or Python => C. In that case I would recommend `Cython`. It is quite open to many kinds of manipulation, especially, in your case, calling a function that has been written in Python from C.

Here is how it works ([`public api`](http://docs.cython.org/src/userguide/external_C_code.html#c-api-declarations)):

The following example assumes that you have a Python class (`self` is an instance of it), and that this class has a method (named `method`) you want to call on this class and deal with the result (here, a `double`) from C. This function, written in a `Cython extension`, would help you to make this call.

```
cdef public api double cy_call_func_double(object self, char* method, bint *error):
    if (hasattr(self, method)):
        error[0] = 0
        return getattr(self, method)();
    else:
        error[0] = 1
```

On the C side, you'll then be able to perform the call like so:

```
PyObject *py_obj = ....
...
if (py_obj) {
    int error;
    double result;
    result = cy_call_func_double(py_obj, (char*)"initSimulation", &error);
    cout << "Do something with the result : " << result << endl;
}
```

Where [`PyObject`](http://docs.python.org/2/c-api/structures.html) is a `struct` provided by the Python/C API.

After having caught the `py_obj` (by casting a regular python `object` in your cython extension like this: `<PyObject *>my_python_object`), you would finally be able to call the `initSimulation` method on it and do something with the result. (Here a `double`, but Cython can deal easily with [`vectors`, `sets`, ...](http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library))

Well, I am aware that what I just wrote can be confusing if you have never written anything using `Cython`, but it aims to be a short demonstration of the numerous things it can do for you in terms of **merging**.

On the other hand, this approach can take more time than recoding your Python code into C, depending on the complexity of your algorithms. In my opinion, investing time into learning Cython is pertinent only if you plan to have this kind of need quite often...

Hope this was at least informative...
It'll be easier to call C from Python. Your scenario sounds weird - normally people write most of the code in Python except for the processor-intensive portion, which is written in C. Is the two-dimensional FFT the computationally intensive part of your code?
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
Well, here you are referring to the two things below:

1. How to call a C function from Python (extending Python)
2. How to call a Python function/script from a C program (embedding Python)

**For #2, that is *'Embedding Python'***

You may use the below code segment:

```
#include <Python.h>

int main(int argc, char *argv[])
{
  Py_SetProgramName(argv[0]);  /* optional but recommended */
  Py_Initialize();
  PyRun_SimpleString("from time import time,ctime\n"
                     "print 'Today is',ctime(time())\n");
  /* Or, if you want to run a python file from within the C code */
  //PyRun_SimpleFile("Filename");
  Py_Finalize();
  return 0;
}
```

**For #1, that is *'Extending Python'***

Then the best bet would be to use ctypes (btw, portable across all variants of Python).

```
>>> from ctypes import *
>>> libc = cdll.msvcrt
>>> print libc.time(None)
1438069008
>>> printf = libc.printf
>>> printf("Hello, %s\n", "World!")
Hello, World!
14
>>> printf("%d bottles of beer\n", 42)
42 bottles of beer
19
```

For a detailed guide you may want to refer to [my blog article](http://myunixworld.blogspot.in/2015/07/how-python-can-talk-to-other-language.html).
It'll be easier to call C from Python. Your scenario sounds weird - normally people write most of the code in Python except for the processor-intensive portion, which is written in C. Is the two-dimensional FFT the computationally intensive part of your code?
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
It'll be easier to call C from Python. Your scenario sounds weird - normally people write most of the code in Python except for the processor-intensive portion, which is written in C. Is the two-dimensional FFT the computationally intensive part of your code?
There's a nice and brief tutorial on this from [Digital Ocean here](https://www.digitalocean.com/community/tutorials/calling-c-functions-from-python). Short version: **1. Write C Code** You've already done this, so super short example: ``` #include <stdio.h> int addFive(int i) { return i + 5; } ``` **2. Create Shared Library File** *Assuming the above C file is saved as `c_functions.c`,* then to generate the `.so` file to call from python type in your terminal: `cc -fPIC -shared -o c_functions.so c_functions.c` **3. Use Your C Code in Python!** Within your python module: ``` # Access your C code from ctypes import * so_file = "./c_functions.so" c_functions = CDLL(so_file) # Use your C code c_functions.addFive(10) ``` That last line will output 15. You're done!
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
You should call C from Python by writing a **ctypes** wrapper. Cython is for making Python-like code run faster; ctypes is for making C functions callable from Python. What you need to do is the following:

1. Write the C functions you want to use. (You probably did this already)
2. Create a shared object (.so, for Linux, OS X, etc.) or dynamically loaded library (.dll, for Windows) for those functions. (Maybe you already did this, too)
3. Write the ctypes wrapper (it's easier than it sounds, [I wrote a how-to for that](https://pgi-jcns.fz-juelich.de/portal/pages/using-c-from-python.html "Using C from Python: How to create a ctypes wrapper"))
4. Call a function from that wrapper in Python. (This is just as simple as calling any other Python function)
If I understand correctly, you have no preference for the direction of the dialog: C => Python or Python => C. In that case I would recommend `Cython`. It is quite open to many kinds of manipulation, especially, in your case, calling a function that has been written in Python from C.

Here is how it works ([`public api`](http://docs.cython.org/src/userguide/external_C_code.html#c-api-declarations)):

The following example assumes that you have a Python class (`self` is an instance of it), and that this class has a method (named `method`) you want to call on this class and deal with the result (here, a `double`) from C. This function, written in a `Cython extension`, would help you to make this call.

```
cdef public api double cy_call_func_double(object self, char* method, bint *error):
    if (hasattr(self, method)):
        error[0] = 0
        return getattr(self, method)();
    else:
        error[0] = 1
```

On the C side, you'll then be able to perform the call like so:

```
PyObject *py_obj = ....
...
if (py_obj) {
    int error;
    double result;
    result = cy_call_func_double(py_obj, (char*)"initSimulation", &error);
    cout << "Do something with the result : " << result << endl;
}
```

Where [`PyObject`](http://docs.python.org/2/c-api/structures.html) is a `struct` provided by the Python/C API.

After having caught the `py_obj` (by casting a regular python `object` in your cython extension like this: `<PyObject *>my_python_object`), you would finally be able to call the `initSimulation` method on it and do something with the result. (Here a `double`, but Cython can deal easily with [`vectors`, `sets`, ...](http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library))

Well, I am aware that what I just wrote can be confusing if you have never written anything using `Cython`, but it aims to be a short demonstration of the numerous things it can do for you in terms of **merging**.

On the other hand, this approach can take more time than recoding your Python code into C, depending on the complexity of your algorithms. In my opinion, investing time into learning Cython is pertinent only if you plan to have this kind of need quite often...

Hope this was at least informative...
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
You should call C from Python by writing a **ctypes** wrapper. Cython is for making Python-like code run faster; ctypes is for making C functions callable from Python. What you need to do is the following:

1. Write the C functions you want to use. (You probably did this already)
2. Create a shared object (.so, for Linux, OS X, etc.) or dynamically loaded library (.dll, for Windows) for those functions. (Maybe you already did this, too)
3. Write the ctypes wrapper (it's easier than it sounds, [I wrote a how-to for that](https://pgi-jcns.fz-juelich.de/portal/pages/using-c-from-python.html "Using C from Python: How to create a ctypes wrapper"))
4. Call a function from that wrapper in Python. (This is just as simple as calling any other Python function)
Well, here you are referring to the two things below:

1. How to call a C function from Python (extending Python)
2. How to call a Python function/script from a C program (embedding Python)

**For #2, that is *'Embedding Python'***

You may use the below code segment:

```
#include <Python.h>

int main(int argc, char *argv[])
{
  Py_SetProgramName(argv[0]);  /* optional but recommended */
  Py_Initialize();
  PyRun_SimpleString("from time import time,ctime\n"
                     "print 'Today is',ctime(time())\n");
  /* Or, if you want to run a python file from within the C code */
  //PyRun_SimpleFile("Filename");
  Py_Finalize();
  return 0;
}
```

**For #1, that is *'Extending Python'***

Then the best bet would be to use ctypes (btw, portable across all variants of Python).

```
>>> from ctypes import *
>>> libc = cdll.msvcrt
>>> print libc.time(None)
1438069008
>>> printf = libc.printf
>>> printf("Hello, %s\n", "World!")
Hello, World!
14
>>> printf("%d bottles of beer\n", 42)
42 bottles of beer
19
```

For a detailed guide you may want to refer to [my blog article](http://myunixworld.blogspot.in/2015/07/how-python-can-talk-to-other-language.html).
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
You should call C from Python by writing a **ctypes** wrapper. Cython is for making Python-like code run faster; ctypes is for making C functions callable from Python. What you need to do is the following:

1. Write the C functions you want to use. (You probably did this already)
2. Create a shared object (.so, for Linux, OS X, etc.) or dynamically loaded library (.dll, for Windows) for those functions. (Maybe you already did this, too)
3. Write the ctypes wrapper (it's easier than it sounds, [I wrote a how-to for that](https://pgi-jcns.fz-juelich.de/portal/pages/using-c-from-python.html "Using C from Python: How to create a ctypes wrapper"))
4. Call a function from that wrapper in Python. (This is just as simple as calling any other Python function)
There's a nice and brief tutorial on this from [Digital Ocean here](https://www.digitalocean.com/community/tutorials/calling-c-functions-from-python). Short version: **1. Write C Code** You've already done this, so super short example: ``` #include <stdio.h> int addFive(int i) { return i + 5; } ``` **2. Create Shared Library File** *Assuming the above C file is saved as `c_functions.c`,* then to generate the `.so` file to call from python type in your terminal: `cc -fPIC -shared -o c_functions.so c_functions.c` **3. Use Your C Code in Python!** Within your python module: ``` # Access your C code from ctypes import * so_file = "./c_functions.so" c_functions = CDLL(so_file) # Use your C code c_functions.addFive(10) ``` That last line will output 15. You're done!
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
If I understand correctly, you have no preference for the direction of the dialog: C => Python or Python => C. In that case I would recommend `Cython`. It is quite open to many kinds of manipulation, especially, in your case, calling a function that has been written in Python from C.

Here is how it works ([`public api`](http://docs.cython.org/src/userguide/external_C_code.html#c-api-declarations)):

The following example assumes that you have a Python class (`self` is an instance of it), and that this class has a method (named `method`) you want to call on this class and deal with the result (here, a `double`) from C. This function, written in a `Cython extension`, would help you to make this call.

```
cdef public api double cy_call_func_double(object self, char* method, bint *error):
    if (hasattr(self, method)):
        error[0] = 0
        return getattr(self, method)();
    else:
        error[0] = 1
```

On the C side, you'll then be able to perform the call like so:

```
PyObject *py_obj = ....
...
if (py_obj) {
    int error;
    double result;
    result = cy_call_func_double(py_obj, (char*)"initSimulation", &error);
    cout << "Do something with the result : " << result << endl;
}
```

Where [`PyObject`](http://docs.python.org/2/c-api/structures.html) is a `struct` provided by the Python/C API.

After having caught the `py_obj` (by casting a regular python `object` in your cython extension like this: `<PyObject *>my_python_object`), you would finally be able to call the `initSimulation` method on it and do something with the result. (Here a `double`, but Cython can deal easily with [`vectors`, `sets`, ...](http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library))

Well, I am aware that what I just wrote can be confusing if you have never written anything using `Cython`, but it aims to be a short demonstration of the numerous things it can do for you in terms of **merging**.

On the other hand, this approach can take more time than recoding your Python code into C, depending on the complexity of your algorithms. In my opinion, investing time into learning Cython is pertinent only if you plan to have this kind of need quite often...

Hope this was at least informative...
Well, here you are referring to the two things below:

1. How to call a C function from Python (extending Python)
2. How to call a Python function/script from a C program (embedding Python)

**For #2, that is *'Embedding Python'***

You may use the below code segment:

```
#include <Python.h>

int main(int argc, char *argv[])
{
  Py_SetProgramName(argv[0]);  /* optional but recommended */
  Py_Initialize();
  PyRun_SimpleString("from time import time,ctime\n"
                     "print 'Today is',ctime(time())\n");
  /* Or, if you want to run a python file from within the C code */
  //PyRun_SimpleFile("Filename");
  Py_Finalize();
  return 0;
}
```

**For #1, that is *'Extending Python'***

Then the best bet would be to use ctypes (btw, portable across all variants of Python).

```
>>> from ctypes import *
>>> libc = cdll.msvcrt
>>> print libc.time(None)
1438069008
>>> printf = libc.printf
>>> printf("Hello, %s\n", "World!")
Hello, World!
14
>>> printf("%d bottles of beer\n", 42)
42 bottles of beer
19
```

For a detailed guide you may want to refer to [my blog article](http://myunixworld.blogspot.in/2015/07/how-python-can-talk-to-other-language.html).
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
If I understand correctly, you have no preference for the direction of the dialog: C => Python or Python => C. In that case I would recommend `Cython`. It is quite open to many kinds of manipulation, especially, in your case, calling a function that has been written in Python from C.

Here is how it works ([`public api`](http://docs.cython.org/src/userguide/external_C_code.html#c-api-declarations)):

The following example assumes that you have a Python class (`self` is an instance of it), and that this class has a method (named `method`) you want to call on this class and deal with the result (here, a `double`) from C. This function, written in a `Cython extension`, would help you to make this call.

```
cdef public api double cy_call_func_double(object self, char* method, bint *error):
    if (hasattr(self, method)):
        error[0] = 0
        return getattr(self, method)();
    else:
        error[0] = 1
```

On the C side, you'll then be able to perform the call like so:

```
PyObject *py_obj = ....
...
if (py_obj) {
    int error;
    double result;
    result = cy_call_func_double(py_obj, (char*)"initSimulation", &error);
    cout << "Do something with the result : " << result << endl;
}
```

Where [`PyObject`](http://docs.python.org/2/c-api/structures.html) is a `struct` provided by the Python/C API.

After having caught the `py_obj` (by casting a regular python `object` in your cython extension like this: `<PyObject *>my_python_object`), you would finally be able to call the `initSimulation` method on it and do something with the result. (Here a `double`, but Cython can deal easily with [`vectors`, `sets`, ...](http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html#standard-library))

Well, I am aware that what I just wrote can be confusing if you have never written anything using `Cython`, but it aims to be a short demonstration of the numerous things it can do for you in terms of **merging**.

On the other hand, this approach can take more time than recoding your Python code into C, depending on the complexity of your algorithms. In my opinion, investing time into learning Cython is pertinent only if you plan to have this kind of need quite often...

Hope this was at least informative...
There's a nice and brief tutorial on this from [Digital Ocean here](https://www.digitalocean.com/community/tutorials/calling-c-functions-from-python). Short version: **1. Write C Code** You've already done this, so super short example: ``` #include <stdio.h> int addFive(int i) { return i + 5; } ``` **2. Create Shared Library File** *Assuming the above C file is saved as `c_functions.c`,* then to generate the `.so` file to call from python type in your terminal: `cc -fPIC -shared -o c_functions.so c_functions.c` **3. Use Your C Code in Python!** Within your python module: ``` # Access your C code from ctypes import * so_file = "./c_functions.so" c_functions = CDLL(so_file) # Use your C code c_functions.addFive(10) ``` That last line will output 15. You're done!
16,647,186
I have a bunch of functions that I've written in C and I'd like some code I've written in Python to be able to access those functions. I've read several questions on here that deal with a similar problem ([here](https://stackoverflow.com/questions/145270/calling-c-c-from-python) and [here](https://stackoverflow.com/questions/5090585/segfault-when-trying-to-call-a-python-function-from-c) for example) but I'm confused about which approach I need to take. One question recommends ctypes and another recommends cython. I've read a bit of the documentation for both, and I'm completely unclear about which one will work better for me. Basically I've written some python code to do some two dimensional FFTs and I'd like the C code to be able to see that result and then process it through the various C functions I've written. I don't know if it will be easier for me to call the Python from C or vice versa.
2013/05/20
[ "https://Stackoverflow.com/questions/16647186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1765768/" ]
Well, here you are referring to the two things below:

1. How to call a C function from Python (extending Python)
2. How to call a Python function/script from a C program (embedding Python)

**For #2, that is *'Embedding Python'***

You may use the below code segment:

```
#include <Python.h>

int main(int argc, char *argv[])
{
  Py_SetProgramName(argv[0]);  /* optional but recommended */
  Py_Initialize();
  PyRun_SimpleString("from time import time,ctime\n"
                     "print 'Today is',ctime(time())\n");
  /* Or, if you want to run a python file from within the C code */
  //PyRun_SimpleFile("Filename");
  Py_Finalize();
  return 0;
}
```

**For #1, that is *'Extending Python'***

Then the best bet would be to use ctypes (btw, portable across all variants of Python).

```
>>> from ctypes import *
>>> libc = cdll.msvcrt
>>> print libc.time(None)
1438069008
>>> printf = libc.printf
>>> printf("Hello, %s\n", "World!")
Hello, World!
14
>>> printf("%d bottles of beer\n", 42)
42 bottles of beer
19
```

For a detailed guide you may want to refer to [my blog article](http://myunixworld.blogspot.in/2015/07/how-python-can-talk-to-other-language.html).
There's a nice and brief tutorial on this from [Digital Ocean here](https://www.digitalocean.com/community/tutorials/calling-c-functions-from-python). Short version: **1. Write C Code** You've already done this, so super short example: ``` #include <stdio.h> int addFive(int i) { return i + 5; } ``` **2. Create Shared Library File** *Assuming the above C file is saved as `c_functions.c`,* then to generate the `.so` file to call from python type in your terminal: `cc -fPIC -shared -o c_functions.so c_functions.c` **3. Use Your C Code in Python!** Within your python module: ``` # Access your C code from ctypes import * so_file = "./c_functions.so" c_functions = CDLL(so_file) # Use your C code c_functions.addFive(10) ``` That last line will output 15. You're done!
50,874,453
Hi, I am new to both Python and q/KDB. I am using qpython to get results from a kdb database by doing the following:

```
q = qconnection.QConnection(host=self.host, port=self.port, username=self.username, password=self.password)
results = q.sync(query)
```

The result is a qtable. I need to convert the qtable into a string, which is straightforward. I just need to do this:

```
resultString = str(results)
```

However, the string is somewhat convoluted. Not to mention that the table contains dates, and they come in a numeric format. resultString looks like this:

```
[(6606, b'XX', b'5Y', 26.67, 0.023, 4.833, -22.88, 0.4, b'sx, 570869003211035000)
 (6607, b'XX', b'5Y', 28.40, 0.025, 4.824, -22.75, 0.4, b'sx, 571128191858653000)]
```

I would like to know if there is a straightforward conversion of the qtable to turn the string into something like this:

```
2018-02-01,XX,5Y,26.67,0.023,4.83,-22.88,0.4,sx,2018-02-02D06:43:23\n
2018-02-02,XX,5Y,28.40,0.025,4.82,-22.75,0.4,sx,2018-02-05D06:43:11\n
```
2018/06/15
[ "https://Stackoverflow.com/questions/50874453", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9946190/" ]
You might just want to string the table on the way out from kdb rather than in Python. It'll get you what you want, but the data won't be easy or efficient to deal with on the Python side.

```
q)csv 0: select from t
"col1,col2"
"a,1"
"b,2"
"c,3"
```

Try issuing `q.sync("csv 0: select from t")`
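If you go this route, the rows of the stringed table usually arrive in Python as byte strings, though the exact container type depends on the qPython version; treat the following as a rough sketch, assuming `q` is the connection from the question:

```python
# Fetch the table already rendered to CSV rows inside kdb.
rows = q.sync('csv 0: select from t')

# Each row is typically a byte string (or a char vector); join them into one CSV blob.
csv_text = "\n".join(bytes(row).decode() for row in rows)
print(csv_text)
```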
Converting the numerical columns to `string` can achieve the results you are after. ``` results = q.sync('t:([] 2?.z.d;2?.z.t;2?`3;p:2?100.);update string d, string t, string p from t') for item in results: t = () for x in item: t = t + (x.decode(),) print(t) ('2017.05.31', '16:46:10.161', 'jgj', '43.9081') ('2006.09.28', '19:44:11.560', 'cfl', '57.59051') ```
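If you would rather keep the conversion on the Python side, kdb dates are stored as days since 2000.01.01 and kdb timestamps as nanoseconds since 2000.01.01, so the raw numbers in the question can be decoded with the standard library alone. The column positions below are assumptions based on the sample row in the question:

```python
from datetime import datetime, timedelta

KDB_EPOCH = datetime(2000, 1, 1)

def kdb_date(days):
    """kdb date: days since 2000.01.01."""
    return (KDB_EPOCH + timedelta(days=int(days))).strftime("%Y-%m-%d")

def kdb_timestamp(nanos):
    """kdb timestamp: nanoseconds since 2000.01.01."""
    return (KDB_EPOCH + timedelta(microseconds=int(nanos) // 1000)).strftime("%Y-%m-%dD%H:%M:%S")

def row_to_csv(row):
    # Column positions are assumptions based on the sample output in the question.
    cells = []
    for i, value in enumerate(row):
        if i == 0:                      # first column looks like a kdb date
            cells.append(kdb_date(value))
        elif i == len(row) - 1:         # last column looks like a kdb timestamp
            cells.append(kdb_timestamp(value))
        elif isinstance(value, bytes):  # symbols/strings arrive as bytes
            cells.append(value.decode())
        else:
            cells.append(str(value))
    return ",".join(cells)

# resultString = "\n".join(row_to_csv(row) for row in results)
```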
51,434,538
I am looking for a way to understand [ioloop in tornado](http://www.tornadoweb.org/en/stable/ioloop.html#tornado.ioloop.IOLoop): I have read the official doc several times, but I can't understand it. Specifically, why it exists.

```
from tornado.concurrent import Future
from tornado.httpclient import AsyncHTTPClient
from tornado.ioloop import IOLoop

def async_fetch_future():
    http_client = AsyncHTTPClient()
    future = Future()
    fetch_future = http_client.fetch(
        "http://mock.kite.com/text")
    fetch_future.add_done_callback(
        lambda f: future.set_result(f.result()))
    return future

response = IOLoop.current().run_sync(async_fetch_future)
# why get current IO of this thread? display IO, hard drive IO, or network IO?
print response.body
```

I know what IO is: input and output, e.g. reading a hard drive, displaying a graph on the screen, getting keyboard input.

By definition, `IOLoop.current()` returns the current IO loop of this thread. There are many IO devices on my laptop running this Python code. Which IO does this `IOLoop.current()` return?

I have never heard of an IO loop in JavaScript/nodejs. Furthermore, why do I care about this low-level thing if I just want to do a database query or read a file?
2018/07/20
[ "https://Stackoverflow.com/questions/51434538", "https://Stackoverflow.com", "https://Stackoverflow.com/users/887103/" ]
Rather than calling it an `IOLoop`, it may be clearer to think of it as an `EventLoop`. `IOLoop.current()` doesn't really return an IO device, just a pure Python event loop, which is basically the same as `asyncio.get_event_loop()` or the underlying event loop in `nodejs`.

The reason why you need an event loop just to do a database query is that you are using an event-driven structure to do the database query (in your example, you are doing an HTTP request). Most of the time you do not need to care about this low-level structure. Instead you just need to use the `async`/`await` keywords.

Let's say there is a lib which supports asynchronous database access:

```
async def get_user(user_id):
    user = await async_cursor.execute("select * from user where user_id = %s" % user_id)
    return user
```

Then you just need to use this function in your handler:

```
class YourHandler(tornado.web.RequestHandler):

    async def get(self):
        user = await get_user(self.get_cookie("user_id"))
        if user is None:
            return self.finish("No such user")
        return self.finish("You are %s" % user.user_name)
```
> > I never heard of IO loop in javascript nodejs. > > > In node.js, the equivalent concept is the [event loop](https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/). The node event loop is mostly invisible because all programs use it - it's what's running in between your callbacks. In Python, most programs don't use an event loop, so when you want one, you have to run it yourself. This can be a Tornado IOLoop, a Twisted Reactor, or an asyncio event loop (all of these are specific types of event loops). Tornado's IOLoop is perhaps confusingly named - it doesn't do any IO directly. Instead, it coordinates all the different IO (mainly network IO) that may be happening in the program. It may help you to think of it as an "event loop" or "callback runner".
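To make the "run it yourself" point concrete, here is a minimal sketch that queues a plain callback on Tornado's IOLoop and then starts the loop explicitly via `run_sync`; nothing queued runs until the loop is actually running, which is the step node.js hides from you:

```python
from tornado import gen
from tornado.ioloop import IOLoop

def plain_callback():
    # An ordinary function: it only runs once the loop is running.
    print("callback ran")

async def main():
    print("before yield")
    await gen.sleep(0.1)   # awaiting hands control back to the event loop
    print("after yield")

loop = IOLoop.current()
loop.add_callback(plain_callback)  # queued now, executed later by the loop
loop.run_sync(main)                # starts the loop, runs main, then stops it
```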
68,472,830
Today I have tried to send an email with Python:

```
import smtplib

EMAIL_HOST = 'smtp.google.com'
EMAIL_PORT = 587
EMAIL_FROM_LOGIN = 'sender@gmail.com'
EMAIL_FROM_PASSWORD = 'password'
MESSAGE = 'Hi!'
EMAIL_TO_LOGIN = 'recipient@gmail.com'

print('starting...')
server = smtplib.SMTP(EMAIL_HOST, EMAIL_PORT)
server.starttls()
print('logging...')
server.login(EMAIL_FROM_LOGIN, EMAIL_FROM_PASSWORD)
print('sending message...')
server.sendmail(EMAIL_FROM_LOGIN, EMAIL_TO_LOGIN, MESSAGE)
```

This script doesn't go further than the `starting...` print. I searched about this issue the whole day, but found only something like:

> "check that port isn't blocked..."

At least I got info about blocked/disabled/etc. ports; what I don't have is the specifics of the problem.

---

Additional info: following some of the advice I found earlier, I checked the output of `telnet smtp.google.com 587`. The output is a static `Connecting to smtp.google.com...`. It remains like that for about 2 mins, and then prints:

> Could not open a connection to this host, on port 587: Connection failed

---

**UPD 1**

I tried to manually open the ports on which the Python script has been run, but nothing changed...

---

So, my question is: what should I do? Where can I find those blocked ports, and how do I unblock/enable them?
2021/07/21
[ "https://Stackoverflow.com/questions/68472830", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10872199/" ]
Enable lower security in your gmail account and fix your smtp address: '**smtp.gmail.com**': My sample: ``` import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText mail_content = 'Sample text' sender_address = 'xxx@xxx' sender_pass = 'xxxx' receiver_address = 'xxx@xxx' message = MIMEMultipart() message['From'] = sender_address message['To'] = receiver_address message['Subject'] = 'Test mail' message.attach(MIMEText(mail_content, 'plain')) session = smtplib.SMTP('smtp.gmail.com', 587) session.starttls() session.login(sender_address, sender_pass) session.sendmail(sender_address, receiver_address, message.as_string()) session.quit() print('Mail Sent') ```
--- Have you checked your code? there is **smtp.google.com** instead of **smtp.gmail.com**. Before executing the script --- 1. First of all, ensure that you logged in by that mail you are going to use to send mail in your script. 2. The second thing and important you must have on your [Less Security App](https://myaccount.google.com/lesssecureapps?pli=1&rapt=AEjHL4NW9JsHpBngFJepAMWVt38ISamxkCE1oZCeN2JLrrJhjrv23mFLGCXpwzF9ZZEqzjykTOjTvr286mEHEyd65j4OHLMpYg) or [2-Step Verification](https://myaccount.google.com/signinoptions/two-step-verification/enroll-welcome). --- Now you can use the following **Python 3 Code** also --- ``` import smtplib as s obj = s.SMTP("smtp.gmail.com", 587) #smtp server host and port no. 587 obj.starttls() #tls is a way of encryption obj.login("sender@gmail.com","password")# login credential email and password by which you want to send mail. subject = "Write your subject" #Subject of mail message_body = " Hello Dear...\n Write your message here... " msg = "Subject:{}\n\n{}".format(subject,message_body) # complete mail subject + message body list_of_address = ["mail1@gmail.com", "mail2@gmail.com", "mail3@yahoo.com", "mail4@outlook.in"]# list of email address obj.sendmail("sender@gmail.com", list_of_address, msg) print("Send Successfully...") obj.quit() ``` --- Now coming to your second question **what should I do? Where I can find those blocked ports, how to unblock/enable them?** **Unblock Ports** --- **Warning!** Opening or unblocking ports are very dangerous it's like breaking the firewall. It's up to you to be careful. --- You can try unblocking the following ports **80, 433, 443, 3478, 3479, 5060, 5062, 5222, 6250, and 12000-65000.** --- 1. Go to **Control Panel**->Click **System and Security**->Click **Windows Firewall**->Select **Advanced settings**, and then select **Inbound Rules** in the left pane->Right-click **Inbound Rules** and then select **New Rule**->Select **Port**->click **Next**. 2. Select **TCP** as the protocol to apply the rule. 3. Select **Specific Local Ports**, add all the above ports, and then click **Next**. 4. Select **Allow the connection**. 5. Leave **Domain**, **Private**, and **Public** checked to apply the rule to all types of networks. 6. Name the rule something click **Finish**. --- I hope your problem will be solved.
46,143,079
I have written code for linear search in Python. The code is working fine for single digit numbers but it's not working for double digit numbers or for numbers larger than that. Here is my code.

```
def linear_search(x,sort_lst):
    i = 0
    c= 0
    for i in range(len(sort_lst)):
        if sort_lst[i] == x :
            c= c+1
    if (c > 0):
        print ("item found")
    else :
        print ("not found")

sort_lst= input("enter an array of numbers:")
item= input("enter the number to searched :")
linear_search(item,sort_lst)
```

Any suggestions?
2017/09/10
[ "https://Stackoverflow.com/questions/46143079", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8279672/" ]
`redux-promise` will handle only a promise but ``` { pass : Promise, fail : Promise, exempt : Promise, } ``` is not a promise. You have to convert it to single promise so that `redux-promise` can handle it. I think you need `Promise.all` for this task. Try something like: ``` const payload = Promise.all([ pass, fail, exempt ]) .then( ([ pass, fail, exempt ]) => { return { pass, fail, exempt } }); // now payload will be a single promise and you can pass it on normally. return { type: FETCH_RATINGS, payload: payload }; ``` `Promise.all` will convert your multiple promises into single promise and will resolve only if all the promise will resolve, else it will get rejected. **Reference:** Read more about [Promise.all()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all)
**Edit**: the answer of [Raghavgarg](https://stackoverflow.com/users/3439731/raghavgarg) is probably better if you already have logic that depends on your final payload (the one in the reducer) having the same structure as before. The middle-ware you use for promises probably expects the payload to be a promise, not an object that happens to contain promises. To solve this you could wrap them all in [`Promise.all`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) ```js return { type: FETCH_RATINGS, payload: Promise.all([pass, fail, exempt]) }; ``` Then in your reducer the payload would be an array where the responses will be ordered in the same way as you put them (above).
73,513,397
I am having issues with email addresses; with a small correction, they can be converted to valid email addresses. For example:

```
%20adi@gmail.com, --- Not valid
'sam@tell.net, --- Not valid
(hi@telligen.com), --- Not valid
(gii@weerte.com), --- Not valid
:qwert34@embright.com, --- Not valid
//24adifrmaes@microsot.com --- Not valid
tellei@apple.com --- valid
...
```

I could write "if else", but if a new email address comes with new issues, I need to write another "if else" and update it every time.

What is the best way to clean all these small issues, some Python packages or regex? Please suggest.
2022/08/27
[ "https://Stackoverflow.com/questions/73513397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17867413/" ]
You can do this (I basically check if the elements in the email are alpha characters or a point, and remove them if not so): ``` emails = [ 'sam@tell.net', '(hi@telligen.com)', '(gii@weerte.com)', ':qwert34@embright.com', '//24adifrmaes@microsot.com', 'tellei@apple.com' ] def correct_email_format(email): return ''.join(e for e in email if (e.isalnum() or e in ['.', '@'])) for email in emails: corrected_email = correct_email_format(email) print(corrected_email) ``` output: ``` sam@tell.net hi@telligen.com gii@weerte.com qwert34@embright.com 24adifrmaes@microsot.com tellei@apple.com ```
Data clean-up is messy but I found the approach of defining a set of rules to be an easy way to manage this (order of the rules matters): ``` rules = [ lambda s: s.replace('%20', ' '), lambda s: s.strip(" ,'"), ] addresses = [ '%20adi@gmail.com,', 'sam@tell.net,' ] for a in addresses: for r in rules: a = r(a) print(a) ``` and here is the resulting output: ``` adi@gmail.com sam@tell.net ``` Make sure you write a test suite that covers both invalid and valid data. It's easy break, and you may be tweaking the rules often. While I used lambda for the rules above, it can be an arbitrary complex function that accepts and return a string.
73,513,397
I am having issues with email addresses; with a small correction, they can be converted to valid email addresses. For example:

```
%20adi@gmail.com, --- Not valid
'sam@tell.net, --- Not valid
(hi@telligen.com), --- Not valid
(gii@weerte.com), --- Not valid
:qwert34@embright.com, --- Not valid
//24adifrmaes@microsot.com --- Not valid
tellei@apple.com --- valid
...
```

I could write "if else", but if a new email address comes with new issues, I need to write another "if else" and update it every time.

What is the best way to clean all these small issues, some Python packages or regex? Please suggest.
2022/08/27
[ "https://Stackoverflow.com/questions/73513397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17867413/" ]
Data clean-up is messy but I found the approach of defining a set of rules to be an easy way to manage this (order of the rules matters): ``` rules = [ lambda s: s.replace('%20', ' '), lambda s: s.strip(" ,'"), ] addresses = [ '%20adi@gmail.com,', 'sam@tell.net,' ] for a in addresses: for r in rules: a = r(a) print(a) ``` and here is the resulting output: ``` adi@gmail.com sam@tell.net ``` Make sure you write a test suite that covers both invalid and valid data. It's easy break, and you may be tweaking the rules often. While I used lambda for the rules above, it can be an arbitrary complex function that accepts and return a string.
This is actually a complicated one if you have more different test cases, ie., `email@email.com.`, `,,email@email.com.@_)` or `..email@email.com@@@`. Still, you can strip them but it cannot be limited to what is to be stripped at the end and beginning. **Note:** Email addresses with numbers and \_ are valid too. ``` emails = [ 'sam@tell.net', '(hi@telligen.com)', '(gii@weerte.com)', ':qwert34@embright.com', '//24adifrmaes@microsot.com', 'tellei@apple.com' ] def clean(email): return ''.join(filter(lambda x: ord(x) >= 65 or x in ['.', '@', '_'], email)) for email in emails: print(clean(email)) ``` **Output:** ``` sam@tell.net hi@telligen.com gii@weerte.com qwert@embright.com adifrmaes@microsot.com tellei@apple.com ```
73,513,397
I am having issues with email addresses; with a small correction, they can be converted to valid email addresses. For example:

```
%20adi@gmail.com, --- Not valid
'sam@tell.net, --- Not valid
(hi@telligen.com), --- Not valid
(gii@weerte.com), --- Not valid
:qwert34@embright.com, --- Not valid
//24adifrmaes@microsot.com --- Not valid
tellei@apple.com --- valid
...
```

I could write "if else", but if a new email address comes with new issues, I need to write another "if else" and update it every time.

What is the best way to clean all these small issues, some Python packages or regex? Please suggest.
2022/08/27
[ "https://Stackoverflow.com/questions/73513397", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17867413/" ]
You can do this (I basically check if the elements in the email are alpha characters or a point, and remove them if not so): ``` emails = [ 'sam@tell.net', '(hi@telligen.com)', '(gii@weerte.com)', ':qwert34@embright.com', '//24adifrmaes@microsot.com', 'tellei@apple.com' ] def correct_email_format(email): return ''.join(e for e in email if (e.isalnum() or e in ['.', '@'])) for email in emails: corrected_email = correct_email_format(email) print(corrected_email) ``` output: ``` sam@tell.net hi@telligen.com gii@weerte.com qwert34@embright.com 24adifrmaes@microsot.com tellei@apple.com ```
This is actually a complicated one if you have more different test cases, ie., `email@email.com.`, `,,email@email.com.@_)` or `..email@email.com@@@`. Still, you can strip them but it cannot be limited to what is to be stripped at the end and beginning. **Note:** Email addresses with numbers and \_ are valid too. ``` emails = [ 'sam@tell.net', '(hi@telligen.com)', '(gii@weerte.com)', ':qwert34@embright.com', '//24adifrmaes@microsot.com', 'tellei@apple.com' ] def clean(email): return ''.join(filter(lambda x: ord(x) >= 65 or x in ['.', '@', '_'], email)) for email in emails: print(clean(email)) ``` **Output:** ``` sam@tell.net hi@telligen.com gii@weerte.com qwert@embright.com adifrmaes@microsot.com tellei@apple.com ```
20,262,552
I have an embedded system using a Python interface. Currently the system is using a (system-local) XML file to persist data in case the system gets turned off, but normally the system is running the entire time. When the system starts, the XML file is read in and the information is stored in Python objects. The information is then used for processing. My aim is to edit this information remotely (over TCP/IP), even while the process is running. I would like to use Java to get this done, and I have been thinking about some way to share the objects. The problem is that I'm missing some keywords to find the right technologies to get this done. What I found is SOAP, but I think it is not the right thing for this case; is that true? I'm grateful for any tips.
2013/11/28
[ "https://Stackoverflow.com/questions/20262552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2127432/" ]
You can use [FileVersionInfo](http://msdn.microsoft.com/en-us/library/system.diagnostics.fileversioninfo%28v=vs.110%29.aspx) class to get the version of another program. ``` FileVersionInfo myFileVersionInfo = FileVersionInfo.GetVersionInfo(Environment.SystemDirectory + "\\Notepad.exe"); Console.WriteLine("File: " + myFileVersionInfo.FileDescription + '\n' + "Version number: " + myFileVersionInfo.FileVersion); ```
If I'm not wrong, you are trying to fetch the version number of a file using c#. You can try the below example: ``` using System; using System.IO; using System.Diagnostics; class Class1 { public static void Main(string[] args) { // Get the file version for the notepad. // Use either of the two following commands. FileVersionInfo.GetVersionInfo(Path.Combine(Environment.SystemDirectory, "Notepad.exe")); FileVersionInfo myFileVersionInfo = FileVersionInfo.GetVersionInfo(Environment.SystemDirectory + "\\Notepad.exe"); // Print the file name and version number. Console.WriteLine("File: " + myFileVersionInfo.FileDescription + '\n' + "Version number: " + myFileVersionInfo.FileVersion); } } ``` Source: <http://msdn.microsoft.com/en-us/library/system.diagnostics.fileversioninfo(v=vs.110).aspx>
20,262,552
I have an embedded system using a Python interface. Currently the system is using a (system-local) XML file to persist data in case the system gets turned off, but normally the system is running the entire time. When the system starts, the XML file is read in and the information is stored in Python objects. The information is then used for processing. My aim is to edit this information remotely (over TCP/IP), even while the process is running. I would like to use Java to get this done, and I have been thinking about some way to share the objects. The problem is that I'm missing some keywords to find the right technologies to get this done. What I found is SOAP, but I think it is not the right thing for this case; is that true? I'm grateful for any tips.
2013/11/28
[ "https://Stackoverflow.com/questions/20262552", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2127432/" ]
You can use [FileVersionInfo](http://msdn.microsoft.com/en-us/library/system.diagnostics.fileversioninfo%28v=vs.110%29.aspx) class to get the version of another program. ``` FileVersionInfo myFileVersionInfo = FileVersionInfo.GetVersionInfo(Environment.SystemDirectory + "\\Notepad.exe"); Console.WriteLine("File: " + myFileVersionInfo.FileDescription + '\n' + "Version number: " + myFileVersionInfo.FileVersion); ```
You can try exploring File, FileInfo and FileVersionInfo classes to get the required details.
9,598,739
I have two versions of python installed on Win7. (Python 2.5 and Python 2.7). These are located in 'C:/Python25' and 'C:/Python27' respectively. I am trying to run a file using Python 2.5 but by default Cygwin picks up 2.7. How do I change which version Cygwin uses?
2012/03/07
[ "https://Stackoverflow.com/questions/9598739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145456/" ]
The fast way is to reorder your $PATH so that 2.5 is picked up first. The correct way is to use virtualenv to create a jail environment that's specific to a python version.
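Whichever route you take, a quick sanity check from inside the interpreter tells you which Python actually got picked up (the path in the comment is only an illustrative Cygwin-style example):

```python
# Run this with the plain `python` command after reordering $PATH
# (or after activating the virtualenv) to confirm which interpreter won.
import sys

print(sys.executable)        # e.g. /cygdrive/c/Python25/python.exe
print(sys.version_info[:2])  # expect (2, 5) if Python 2.5 is first on $PATH
```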
As an addition to Bon's post, if you're not sand-boxing you're not doing it right. Why would you want to put your global install of Python at risk of anything? With Virtualenv you can select which Python interpreter is used for that particular sand-box. Virtualenv and Virtualenvwrapper (or a custom solution) are two of the most essential tools a Python developer can have. You can view your virtualenvs, create, delete, and activate them all with ease. You can get both pieces of software from pip. If you're not using those, I assume you're not using requirements files either? $ pip freeze > requirements.txt will generate a requirements.txt with all the exact versions and dependencies of your project. That way you can do fast deployment. If your current project requires 10 dependencies from pip and you deploy a lot, then requirements files will help you tremendously. You can have a good beginner's look at virtualenv and pip [here](http://www.saltycrane.com/blog/2009/05/notes-using-pip-and-virtualenv-django/ "here")
9,598,739
I have two versions of python installed on Win7. (Python 2.5 and Python 2.7). These are located in 'C:/Python25' and 'C:/Python27' respectively. I am trying to run a file using Python 2.5 but by default Cygwin picks up 2.7. How do I change which version Cygwin uses?
2012/03/07
[ "https://Stackoverflow.com/questions/9598739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145456/" ]
The fast way is to reorder your $PATH so that 2.5 is picked up first. The correct way is to use virtualenv to create a jail environment that's specific to a python version.
Open the Cygwin terminal

```
$cd /usr/bin
$ls -l | grep "python ->"
lrwxrwxrwx 1 XXXX Domain Users XXXXX python -> etc/alternatives/python
$ python
Python 3.8.10 (default, May 20 2021, 11:41:59) [GCC 10.2.0] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

Update this link to the required Python version.

I have installed Python 2.7 and Python 3.8 in my Cygwin, and I wanted to make Python 2.7 the default. This means that when I execute “python” in my terminal it should load Python version 2.7:

```
$cd /usr/bin
$rm -r python
$ln -s /usr/bin/python2.7 python
$ ls -l | grep "python ->"
lrwxrwxrwx 1 XXXX Domain Users XXXXX python -> /usr/bin/python2.7
$ python
Python 2.7.18 (default, Jan 2 2021, 09:22:32) [GCC 10.2.0] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
9,598,739
I have two versions of python installed on Win7. (Python 2.5 and Python 2.7). These are located in 'C:/Python25' and 'C:/Python27' respectively. I am trying to run a file using Python 2.5 but by default Cygwin picks up 2.7. How do I change which version Cygwin uses?
2012/03/07
[ "https://Stackoverflow.com/questions/9598739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1145456/" ]
Open the Cygwin terminal

```
$cd /usr/bin
$ls -l | grep "python ->"
lrwxrwxrwx 1 XXXX Domain Users XXXXX python -> etc/alternatives/python
$ python
Python 3.8.10 (default, May 20 2021, 11:41:59) [GCC 10.2.0] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

Update this link to the required Python version.

I have installed Python 2.7 and Python 3.8 in my Cygwin, and I wanted to make Python 2.7 the default. This means that when I execute “python” in my terminal it should load Python version 2.7:

```
$cd /usr/bin
$rm -r python
$ln -s /usr/bin/python2.7 python
$ ls -l | grep "python ->"
lrwxrwxrwx 1 XXXX Domain Users XXXXX python -> /usr/bin/python2.7
$ python
Python 2.7.18 (default, Jan 2 2021, 09:22:32) [GCC 10.2.0] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
As an addition to Bon's post, if you're not sand-boxing you're not doing it right. Why would you want to put your global install of Python at risk of anything? With Virtualenv you can select which Python interpreter is used for that particular sand-box. Virtualenv and Virtualenvwrapper (or a custom solution) are two of the most essential tools a Python developer can have. You can view your virtualenvs, create, delete, and activate them all with ease. You can get both pieces of software from pip. If you're not using those, I assume you're not using requirements files either? $ pip freeze > requirements.txt will generate a requirements.txt with all the exact versions and dependencies of your project. That way you can do fast deployment. If your current project requires 10 dependencies from pip and you deploy a lot, then requirements files will help you tremendously. You can have a good beginner's look at virtualenv and pip [here](http://www.saltycrane.com/blog/2009/05/notes-using-pip-and-virtualenv-django/ "here")
38,967,402
I'm trying to multiply two pandas dataframes with each other. Specifically, I want to multiply every column with every column of the other df. The dataframes are one-hot encoded, so they look like this: ``` col_1, col_2, col_3, ... 0 1 0 1 0 0 0 0 1 ... ``` I could just iterate through each of the columns using a for loop, but in python that is computationally expensive, and I'm hoping there's an easier way. One of the dataframes has 500 columns, the other has 100 columns. This is the fastest version that I've been able to write so far: ``` interact_pd = pd.DataFrame(index=df_1.index) df1_columns = [column for column in df_1] for column in df_2: col_pd = df_1[df1_columns].multiply(df_2[column], axis="index") interact_pd = interact_pd.join(col_pd, lsuffix='_' + column) ``` I iterate over each column in df\_2 and multiply all of df\_1 by that column, then I append the result to interact\_pd. I would rather not do it using a for loop however, as this is very computationally costly. Is there a faster way of doing it? EDIT: example df\_1: ``` 1col_1, 1col_2, 1col_3 0 1 0 1 0 0 0 0 1 ``` df\_2: ``` 2col_1, 2col_2 0 1 1 0 0 0 ``` interact\_pd: ``` 1col_1_2col_1, 1col_2_2col_1,1col_3_2col_1, 1col_1_2col_2, 1col_2_2col_2,1col_3_2col_2 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 ```
2016/08/16
[ "https://Stackoverflow.com/questions/38967402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3950550/" ]
``` # use numpy to get a pair of indices that map out every # combination of columns from df_1 and columns of df_2 pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1) # use pandas MultiIndex to create a nice MultiIndex for # the final output lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns], names=[df_1.columns.name, df_2.columns.name]) # df_1.values[:, pidx[0]] slices df_1 values for every combination # like wise with df_2.values[:, pidx[1]] # finally, I marry up the product of arrays with the MultiIndex pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]], columns=lcol) ``` [![enter image description here](https://i.stack.imgur.com/YaMNM.png)](https://i.stack.imgur.com/YaMNM.png) --- ### Timing **code** ``` from string import ascii_letters df_1 = pd.DataFrame(np.random.randint(0, 2, (1000, 26)), columns=list(ascii_letters[:26])) df_2 = pd.DataFrame(np.random.randint(0, 2, (1000, 52)), columns=list(ascii_letters)) def pir1(df_1, df_2): pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1) lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns], names=[df_1.columns.name, df_2.columns.name]) return pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]], columns=lcol) def Test2(DA,DB): MA = DA.as_matrix() MB = DB.as_matrix() MM = np.zeros((len(MA),len(MA[0])*len(MB[0]))) Col = [] for i in range(len(MB[0])): for j in range(len(MA[0])): MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i] Col.append('1col_'+str(i+1)+'_2col_'+str(j+1)) return pd.DataFrame(MM,dtype=int,columns=Col) ``` **results** [![enter image description here](https://i.stack.imgur.com/WJ7KH.png)](https://i.stack.imgur.com/WJ7KH.png)
You can use numpy. Consider this example code, I did modify the variable names, but `Test1()` is essentially your code. I didn't bother create the correct column names in that function though: ``` import pandas as pd import numpy as np A = [[1,0,1,1],[0,1,1,0],[0,1,0,1]] B = [[0,0,1,0],[1,0,1,0],[1,1,0,0],[1,0,0,1],[1,0,0,0]] DA = pd.DataFrame(A).T DB = pd.DataFrame(B).T def Test1(DA,DB): E = pd.DataFrame(index=DA.index) DAC = [column for column in DA] for column in DB: C = DA[DAC].multiply(DB[column], axis="index") E = E.join(C, lsuffix='_' + str(column)) return E def Test2(DA,DB): MA = DA.as_matrix() MB = DB.as_matrix() MM = np.zeros((len(MA),len(MA[0])*len(MB[0]))) Col = [] for i in range(len(MB[0])): for j in range(len(MA[0])): MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i] Col.append('1col_'+str(i+1)+'_2col_'+str(j+1)) return pd.DataFrame(MM,dtype=int,columns=Col) print Test1(DA,DB) print Test2(DA,DB) ``` Output: ``` 0_1 1_1 2_1 0 1 2 0_3 1_3 2_3 0 1 2 0 1 2 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 2 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1col_1_2col_1 1col_1_2col_2 1col_1_2col_3 1col_2_2col_1 1col_2_2col_2 \ 0 0 0 0 1 0 1 0 0 0 0 0 2 1 1 0 1 1 3 0 0 0 0 0 1col_2_2col_3 1col_3_2col_1 1col_3_2col_2 1col_3_2col_3 1col_4_2col_1 \ 0 0 1 0 0 1 1 0 0 1 1 0 2 0 0 0 0 0 3 0 0 0 0 1 1col_4_2col_2 1col_4_2col_3 1col_5_2col_1 1col_5_2col_2 1col_5_2col_3 0 0 0 1 0 0 1 0 0 0 0 0 2 0 0 0 0 0 3 0 1 0 0 0 ``` Performance of your function: ``` %timeit(Test1(DA,DB)) 100 loops, best of 3: 11.1 ms per loop ``` Performance of my function: ``` %timeit(Test2(DA,DB)) 1000 loops, best of 3: 464 µs per loop ``` It's not beautiful, but it's efficient.
38,967,402
I'm trying to multiply two pandas dataframes with each other. Specifically, I want to multiply every column with every column of the other df. The dataframes are one-hot encoded, so they look like this: ``` col_1, col_2, col_3, ... 0 1 0 1 0 0 0 0 1 ... ``` I could just iterate through each of the columns using a for loop, but in python that is computationally expensive, and I'm hoping there's an easier way. One of the dataframes has 500 columns, the other has 100 columns. This is the fastest version that I've been able to write so far: ``` interact_pd = pd.DataFrame(index=df_1.index) df1_columns = [column for column in df_1] for column in df_2: col_pd = df_1[df1_columns].multiply(df_2[column], axis="index") interact_pd = interact_pd.join(col_pd, lsuffix='_' + column) ``` I iterate over each column in df\_2 and multiply all of df\_1 by that column, then I append the result to interact\_pd. I would rather not do it using a for loop however, as this is very computationally costly. Is there a faster way of doing it? EDIT: example df\_1: ``` 1col_1, 1col_2, 1col_3 0 1 0 1 0 0 0 0 1 ``` df\_2: ``` 2col_1, 2col_2 0 1 1 0 0 0 ``` interact\_pd: ``` 1col_1_2col_1, 1col_2_2col_1,1col_3_2col_1, 1col_1_2col_2, 1col_2_2col_2,1col_3_2col_2 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 ```
2016/08/16
[ "https://Stackoverflow.com/questions/38967402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3950550/" ]
You can multiply along the `index` axis your first `df` with each column of the second `df`, this is the ***fastest method*** for big datasets (see below): ``` df = pd.concat([df_1.mul(col[1], axis="index") for col in df_2.iteritems()], axis=1) # Change the name of the columns df.columns = ["_".join([i, j]) for j in df_2.columns for i in df_1.columns] df 1col_1_2col_1 1col_2_2col_1 1col_3_2col_1 1col_1_2col_2 \ 0 0 0 0 0 1 1 0 0 0 2 0 0 0 0 1col_2_2col_2 1col_3_2col_2 0 1 0 1 0 0 2 0 0 ``` ### --> See benchmark for comparisons with other answers to choose the best option for your dataset. --- Benchmark ========= Functions: ---------- ``` def Test2(DA,DB): MA = DA.as_matrix() MB = DB.as_matrix() MM = np.zeros((len(MA),len(MA[0])*len(MB[0]))) Col = [] for i in range(len(MB[0])): for j in range(len(MA[0])): MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i] Col.append('1col_'+str(i+1)+'_2col_'+str(j+1)) return pd.DataFrame(MM,dtype=int,columns=Col) def Test3(df_1, df_2): df = pd.concat([df_1.mul(i[1], axis="index") for i in df_2.iteritems()], axis=1) df.columns = ["_".join([i,j]) for j in df_2.columns for i in df_1.columns] return df def Test4(df_1,df_2): pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1) lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns], names=[df_1.columns.name, df_2.columns.name]) return pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]], columns=lcol) def jeanrjc_imp(df_1, df_2): df = pd.concat([df_1.mul(‌​i[1], axis="index") for i in df_2.iteritems()], axis=1, keys=df_2.columns) return df ``` Code: ----- Sorry, ugly code, the plot at the end matters : ``` import matplotlib.pyplot as plt import pandas as pd import numpy as np df_1 = pd.DataFrame(np.random.randint(0, 2, (1000, 600))) df_2 = pd.DataFrame(np.random.randint(0, 2, (1000, 600))) df_1.columns = ["1col_"+str(i) for i in range(len(df_1.columns))] df_2.columns = ["2col_"+str(i) for i in range(len(df_2.columns))] resa = {} resb = {} resc = {} for f, r in zip([Test2, Test3, Test4, jeanrjc_imp], ["T2", "T3", "T4", "T3bis"]): resa[r] = [] resb[r] = [] resc[r] = [] for i in [5, 10, 30, 50, 150, 200]: a = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :10]) b = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :50]) c = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :200]) resa[r].append(a.best) resb[r].append(b.best) resc[r].append(c.best) X = [5, 10, 30, 50, 150, 200] fig, ax = plt.subplots(1, 3, figsize=[16,5]) for j, (a, r) in enumerate(zip(ax, [resa, resb, resc])): for i in r: a.plot(X, r[i], label=i) a.set_xlabel("df_1 columns #") a.set_title("df_2 columns # = {}".format(["10", "50", "200"][j])) ax[0].set_ylabel("time(s)") plt.legend(loc=0) plt.tight_layout() ``` [![Pandas column multiplication](https://i.stack.imgur.com/P9wls.png)](https://i.stack.imgur.com/P9wls.png) With `T3b <=> jeanrjc_imp`. Which is a bit faster that Test3. Conclusion: ----------- Depending on your dataset size, pick the right function, between Test4 and Test3(b). Given the OP's dataset, `Test3` or `jeanrjc_imp` should be the fastest, and also the shortest to write! HTH
You can use numpy. Consider this example code, I did modify the variable names, but `Test1()` is essentially your code. I didn't bother create the correct column names in that function though: ``` import pandas as pd import numpy as np A = [[1,0,1,1],[0,1,1,0],[0,1,0,1]] B = [[0,0,1,0],[1,0,1,0],[1,1,0,0],[1,0,0,1],[1,0,0,0]] DA = pd.DataFrame(A).T DB = pd.DataFrame(B).T def Test1(DA,DB): E = pd.DataFrame(index=DA.index) DAC = [column for column in DA] for column in DB: C = DA[DAC].multiply(DB[column], axis="index") E = E.join(C, lsuffix='_' + str(column)) return E def Test2(DA,DB): MA = DA.as_matrix() MB = DB.as_matrix() MM = np.zeros((len(MA),len(MA[0])*len(MB[0]))) Col = [] for i in range(len(MB[0])): for j in range(len(MA[0])): MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i] Col.append('1col_'+str(i+1)+'_2col_'+str(j+1)) return pd.DataFrame(MM,dtype=int,columns=Col) print Test1(DA,DB) print Test2(DA,DB) ``` Output: ``` 0_1 1_1 2_1 0 1 2 0_3 1_3 2_3 0 1 2 0 1 2 0 0 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 2 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1col_1_2col_1 1col_1_2col_2 1col_1_2col_3 1col_2_2col_1 1col_2_2col_2 \ 0 0 0 0 1 0 1 0 0 0 0 0 2 1 1 0 1 1 3 0 0 0 0 0 1col_2_2col_3 1col_3_2col_1 1col_3_2col_2 1col_3_2col_3 1col_4_2col_1 \ 0 0 1 0 0 1 1 0 0 1 1 0 2 0 0 0 0 0 3 0 0 0 0 1 1col_4_2col_2 1col_4_2col_3 1col_5_2col_1 1col_5_2col_2 1col_5_2col_3 0 0 0 1 0 0 1 0 0 0 0 0 2 0 0 0 0 0 3 0 1 0 0 0 ``` Performance of your function: ``` %timeit(Test1(DA,DB)) 100 loops, best of 3: 11.1 ms per loop ``` Performance of my function: ``` %timeit(Test2(DA,DB)) 1000 loops, best of 3: 464 µs per loop ``` It's not beautiful, but it's efficient.
38,967,402
I'm trying to multiply two pandas dataframes with each other. Specifically, I want to multiply every column with every column of the other df. The dataframes are one-hot encoded, so they look like this: ``` col_1, col_2, col_3, ... 0 1 0 1 0 0 0 0 1 ... ``` I could just iterate through each of the columns using a for loop, but in python that is computationally expensive, and I'm hoping there's an easier way. One of the dataframes has 500 columns, the other has 100 columns. This is the fastest version that I've been able to write so far: ``` interact_pd = pd.DataFrame(index=df_1.index) df1_columns = [column for column in df_1] for column in df_2: col_pd = df_1[df1_columns].multiply(df_2[column], axis="index") interact_pd = interact_pd.join(col_pd, lsuffix='_' + column) ``` I iterate over each column in df\_2 and multiply all of df\_1 by that column, then I append the result to interact\_pd. I would rather not do it using a for loop however, as this is very computationally costly. Is there a faster way of doing it? EDIT: example df\_1: ``` 1col_1, 1col_2, 1col_3 0 1 0 1 0 0 0 0 1 ``` df\_2: ``` 2col_1, 2col_2 0 1 1 0 0 0 ``` interact\_pd: ``` 1col_1_2col_1, 1col_2_2col_1,1col_3_2col_1, 1col_1_2col_2, 1col_2_2col_2,1col_3_2col_2 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 ```
2016/08/16
[ "https://Stackoverflow.com/questions/38967402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3950550/" ]
``` # use numpy to get a pair of indices that map out every # combination of columns from df_1 and columns of df_2 pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1) # use pandas MultiIndex to create a nice MultiIndex for # the final output lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns], names=[df_1.columns.name, df_2.columns.name]) # df_1.values[:, pidx[0]] slices df_1 values for every combination # like wise with df_2.values[:, pidx[1]] # finally, I marry up the product of arrays with the MultiIndex pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]], columns=lcol) ``` [![enter image description here](https://i.stack.imgur.com/YaMNM.png)](https://i.stack.imgur.com/YaMNM.png) --- ### Timing **code** ``` from string import ascii_letters df_1 = pd.DataFrame(np.random.randint(0, 2, (1000, 26)), columns=list(ascii_letters[:26])) df_2 = pd.DataFrame(np.random.randint(0, 2, (1000, 52)), columns=list(ascii_letters)) def pir1(df_1, df_2): pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1) lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns], names=[df_1.columns.name, df_2.columns.name]) return pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]], columns=lcol) def Test2(DA,DB): MA = DA.as_matrix() MB = DB.as_matrix() MM = np.zeros((len(MA),len(MA[0])*len(MB[0]))) Col = [] for i in range(len(MB[0])): for j in range(len(MA[0])): MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i] Col.append('1col_'+str(i+1)+'_2col_'+str(j+1)) return pd.DataFrame(MM,dtype=int,columns=Col) ``` **results** [![enter image description here](https://i.stack.imgur.com/WJ7KH.png)](https://i.stack.imgur.com/WJ7KH.png)
You can multiply along the `index` axis your first `df` with each column of the second `df`, this is the ***fastest method*** for big datasets (see below): ``` df = pd.concat([df_1.mul(col[1], axis="index") for col in df_2.iteritems()], axis=1) # Change the name of the columns df.columns = ["_".join([i, j]) for j in df_2.columns for i in df_1.columns] df 1col_1_2col_1 1col_2_2col_1 1col_3_2col_1 1col_1_2col_2 \ 0 0 0 0 0 1 1 0 0 0 2 0 0 0 0 1col_2_2col_2 1col_3_2col_2 0 1 0 1 0 0 2 0 0 ``` ### --> See benchmark for comparisons with other answers to choose the best option for your dataset. --- Benchmark ========= Functions: ---------- ``` def Test2(DA,DB): MA = DA.as_matrix() MB = DB.as_matrix() MM = np.zeros((len(MA),len(MA[0])*len(MB[0]))) Col = [] for i in range(len(MB[0])): for j in range(len(MA[0])): MM[:,i*len(MA[0])+j] = MA[:,j]*MB[:,i] Col.append('1col_'+str(i+1)+'_2col_'+str(j+1)) return pd.DataFrame(MM,dtype=int,columns=Col) def Test3(df_1, df_2): df = pd.concat([df_1.mul(i[1], axis="index") for i in df_2.iteritems()], axis=1) df.columns = ["_".join([i,j]) for j in df_2.columns for i in df_1.columns] return df def Test4(df_1,df_2): pidx = np.indices((df_1.shape[1], df_2.shape[1])).reshape(2, -1) lcol = pd.MultiIndex.from_product([df_1.columns, df_2.columns], names=[df_1.columns.name, df_2.columns.name]) return pd.DataFrame(df_1.values[:, pidx[0]] * df_2.values[:, pidx[1]], columns=lcol) def jeanrjc_imp(df_1, df_2): df = pd.concat([df_1.mul(‌​i[1], axis="index") for i in df_2.iteritems()], axis=1, keys=df_2.columns) return df ``` Code: ----- Sorry, ugly code, the plot at the end matters : ``` import matplotlib.pyplot as plt import pandas as pd import numpy as np df_1 = pd.DataFrame(np.random.randint(0, 2, (1000, 600))) df_2 = pd.DataFrame(np.random.randint(0, 2, (1000, 600))) df_1.columns = ["1col_"+str(i) for i in range(len(df_1.columns))] df_2.columns = ["2col_"+str(i) for i in range(len(df_2.columns))] resa = {} resb = {} resc = {} for f, r in zip([Test2, Test3, Test4, jeanrjc_imp], ["T2", "T3", "T4", "T3bis"]): resa[r] = [] resb[r] = [] resc[r] = [] for i in [5, 10, 30, 50, 150, 200]: a = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :10]) b = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :50]) c = %timeit -o f(df_1.iloc[:,:i], df_2.iloc[:, :200]) resa[r].append(a.best) resb[r].append(b.best) resc[r].append(c.best) X = [5, 10, 30, 50, 150, 200] fig, ax = plt.subplots(1, 3, figsize=[16,5]) for j, (a, r) in enumerate(zip(ax, [resa, resb, resc])): for i in r: a.plot(X, r[i], label=i) a.set_xlabel("df_1 columns #") a.set_title("df_2 columns # = {}".format(["10", "50", "200"][j])) ax[0].set_ylabel("time(s)") plt.legend(loc=0) plt.tight_layout() ``` [![Pandas column multiplication](https://i.stack.imgur.com/P9wls.png)](https://i.stack.imgur.com/P9wls.png) With `T3b <=> jeanrjc_imp`. Which is a bit faster that Test3. Conclusion: ----------- Depending on your dataset size, pick the right function, between Test4 and Test3(b). Given the OP's dataset, `Test3` or `jeanrjc_imp` should be the fastest, and also the shortest to write! HTH
23,237,692
I use PyDev in Eclipse and have a custom source path for my Python project: *src/main/python*/. The path is added to the PythonPath.

Now, I want to use the library pyMIR: <https://github.com/jsawruk/pymir>, which doesn't have any install script. So I downloaded it and included it directly into my project as a PyDev package; the complete path to pyMIR is: *src/main/python/music/pymir*.

In the music package (*src/main/python/music*), I now want to use the library and import it via: `from pymir import AudioFile`. No error appears, so the class AudioFile is found. Afterwards, I want to read an audio file via: `AudioFile.open(path)` and there I get the error "Undefined variable from import: open". But when I run the script, it works, no error occurs.

Furthermore, when I look in the pyMIR package, there are also unresolved import errors. For example: `from pymir import Frame` in the class AudioFile produces the error "Unresolved import: Frame"; when I change it to `from music.pymir import Frame`, the error disappears, but then I get an error when it runs: "type object 'Frame' has no attribute 'Frame'".

1. What do I have to change: another import, or how should I include a PyDev package?
2. When I make a new project with a standard path "src", then no "unresolved import" errors appear. What is the difference from *src/main/python*? Because I changed the path of the source folders to *src/main/python*.

Thanks in advance.
2014/04/23
[ "https://Stackoverflow.com/questions/23237692", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I tried to download and install the pymir package. There is one project structure that works for me: ``` project/music/ project/music/pymir/ project/music/pymir/AudioFile project/music/pymir/... project/music/audio_files/01.wav project/music/test.py ``` The test.py: ``` import numpy from pymir import AudioFile filename = "audio_files/01.wav" print "Opening File: " + filename audiofile = AudioFile.open(filename) frames = audiofile.frames(2048, numpy.hamming) print len(frames) ``` If I moved 'test.py' out from 'music' package, I haven't found a way to make it work. The reason why the project structure is sensitive and tricky is, in my opinion, the pymir package is not well structured. E.g., the author set module name as "Frame.py" and inside the module a class is named "Frame". Then in "\_\_init\_\_.py", codes are like "from Frame import Frame". And in "AudioFile.py", codes are "from pymir import Frame". I really think the naming and structure of the current pymir is messy. Suggest you use this package carefully
add **"\_\_init\_\_.py"** empty file in base folder location and it works
23,237,692
I use PyDev in Eclipse and have a custom source path for my Python project: *src/main/python*/. The path is added to the PythonPath.

Now, I want to use the library pyMIR: <https://github.com/jsawruk/pymir>, which doesn't have any install script. So I downloaded it and included it directly into my project as a PyDev package; the complete path to pyMIR is: *src/main/python/music/pymir*.

In the music package (*src/main/python/music*), I now want to use the library and import it via: `from pymir import AudioFile`. No error appears, so the class AudioFile is found. Afterwards, I want to read an audio file via: `AudioFile.open(path)` and there I get the error "Undefined variable from import: open". But when I run the script, it works, no error occurs.

Furthermore, when I look in the pyMIR package, there are also unresolved import errors. For example: `from pymir import Frame` in the class AudioFile produces the error "Unresolved import: Frame"; when I change it to `from music.pymir import Frame`, the error disappears, but then I get an error when it runs: "type object 'Frame' has no attribute 'Frame'".

1. What do I have to change: another import, or how should I include a PyDev package?
2. When I make a new project with a standard path "src", then no "unresolved import" errors appear. What is the difference from *src/main/python*? Because I changed the path of the source folders to *src/main/python*.

Thanks in advance.
2014/04/23
[ "https://Stackoverflow.com/questions/23237692", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I tried to download and install the pymir package. There is one project structure that works for me: ``` project/music/ project/music/pymir/ project/music/pymir/AudioFile project/music/pymir/... project/music/audio_files/01.wav project/music/test.py ``` The test.py: ``` import numpy from pymir import AudioFile filename = "audio_files/01.wav" print "Opening File: " + filename audiofile = AudioFile.open(filename) frames = audiofile.frames(2048, numpy.hamming) print len(frames) ``` If I moved 'test.py' out from 'music' package, I haven't found a way to make it work. The reason why the project structure is sensitive and tricky is, in my opinion, the pymir package is not well structured. E.g., the author set module name as "Frame.py" and inside the module a class is named "Frame". Then in "\_\_init\_\_.py", codes are like "from Frame import Frame". And in "AudioFile.py", codes are "from pymir import Frame". I really think the naming and structure of the current pymir is messy. Suggest you use this package carefully
1. unzip the folder `pymir` to `site-packages`, make sure the path like ``` site-packages\pymir site-packages\pymir\AudioFile.py site-packages\pymir\Frame.py site-packages\pymir\... ``` 2. comment the content of the file `__init__.py` ``` #from AudioFile import AudioFile #from Frame import Frame #from Spectrum import Spectrum ``` 3. test it ``` import numpy as np import matplotlib.pyplot as plt from pymir.AudioFile import AudioFile filename = '../wavs/cxy_6s_mono_16KHz.wav' audiofile = AudioFile.open(filename) plt.plot(audiofile) plt.show() frames = audiofile.frames(2048, np.hamming) print(len(frames)) ```
20,154,490
I am trying to use `RotatingHandler` for our logging purpose in Python. I have kept backup files as 500 which means it will create maximum of 500 files I guess and the size that I have set is 2000 Bytes (not sure what is the recommended size limit is). If I run my below code, it doesn't log everything into a file. I want to log everything into a file - ``` #!/usr/bin/python import logging import logging.handlers LOG_FILENAME = 'testing.log' # Set up a specific logger with our desired output level my_logger = logging.getLogger('agentlogger') # Add the log message handler to the logger handler = logging.handlers.RotatingFileHandler(LOG_FILENAME, maxBytes=2000, backupCount=100) # create a logging format formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') handler.setFormatter(formatter) my_logger.addHandler(handler) my_logger.debug('debug message') my_logger.info('info message') my_logger.warn('warn message') my_logger.error('error message') my_logger.critical('critical message') # Log some messages for i in range(10): my_logger.error('i = %d' % i) ``` This is what gets printed out in my `testing.log` file - ``` 2013-11-22 12:59:34,782 - agentlogger - WARNING - warn message 2013-11-22 12:59:34,782 - agentlogger - ERROR - error message 2013-11-22 12:59:34,782 - agentlogger - CRITICAL - critical message 2013-11-22 12:59:34,782 - agentlogger - ERROR - i = 0 2013-11-22 12:59:34,782 - agentlogger - ERROR - i = 1 2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 2 2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 3 2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 4 2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 5 2013-11-22 12:59:34,783 - agentlogger - ERROR - i = 6 2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 7 2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 8 2013-11-22 12:59:34,784 - agentlogger - ERROR - i = 9 ``` It doesn't print out `INFO`, `DEBUG` message into the file somehow.. Any thoughts why it is not working out? And also, right now, I have defined everything in this python file for logging purpose. I want to define above things in the `logging conf` file and read it using the `fileConfig()` function. I am not sure how to use the `RotatingFileHandler` example in the `logging.conf` file? **UPDATE:-** Below is my updated Python code that I have modified to use with `log.conf` file - ``` #!/usr/bin/python import logging import logging.handlers my_logger = logging.getLogger(' ') my_logger.config.fileConfig('log.conf') my_logger.debug('debug message') my_logger.info('info message') my_logger.warn('warn message') my_logger.error('error message') my_logger.critical('critical message') # Log some messages for i in range(10): my_logger.error('i = %d' % i) ``` And below is my `log.conf file` - ``` [loggers] keys=root [handlers] keys=logfile [formatters] keys=logfileformatter [logger_root] level=DEBUG handlers=logfile [logger_zkagentlogger] level=DEBUG handlers=logfile qualname=zkagentlogger propagate=0 [formatter_logfileformatter] format=%(asctime)s %(name)-12s: %(levelname)s %(message)s [handler_logfile] class=handlers.RotatingFileHandler level=NOTSET args=('testing.log',2000,100) formatter=logfileformatter ``` But whenever I compile it, this is the error I got on my console - ``` $ python logtest3.py Traceback (most recent call last): File "logtest3.py", line 6, in <module> my_logger.config.fileConfig('log.conf') AttributeError: 'Logger' object has no attribute 'config' ``` Any idea what wrong I am doing here?
2013/11/22
[ "https://Stackoverflow.com/questions/20154490", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
> > It doesn't print out INFO, DEBUG message into the file somehow.. Any > thoughts why it is not working out? > > > you don't seem to set a loglevel, so the default (warning) is used from <http://docs.python.org/2/library/logging.html> : > > Note that the root logger is created with level WARNING. > > > as for your second question, something like this should do the trick (I haven't tested it, just adapted from my config which is using the TimedRotatingFileHandler): ``` [loggers] keys=root [handlers] keys=logfile [formatters] keys=logfileformatter [logger_root] level=DEBUG handlers=logfile [formatter_logfileformatter] format=%(asctime)s %(name)-12s: %(levelname)s %(message)s [handler_logfile] class=handlers.RotatingFileHandler level=NOTSET args=('testing.log','a',2000,100) formatter=logfileformatter ```
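For the first point (the missing loglevel), the fix in the original script is a one-line addition, sketched here against the logger name the question already uses:

```python
import logging

# Without an explicit level, the logger falls back to the root default
# (WARNING), so debug() and info() calls are silently dropped.
my_logger = logging.getLogger('agentlogger')
my_logger.setLevel(logging.DEBUG)
```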
I know it is very late, but I just got the same error, and while searching around I came across your problem. I was able to resolve my problem, and I thought it might be helpful for other users as well:

You have created a logger object and are trying to access **my\_logger.config.fileConfig('log.conf')**, which is wrong.

You should use **logging.config.fileConfig('log.conf')** as I mention below, and you need to import **logging.config** as well:

```
#!/usr/bin/python
import logging
import logging.handlers
import logging.config

logging.config.fileConfig('log.config',disable_existing_loggers=0)
my_logger = logging.getLogger('your logger name as you mention in your conf file')

my_logger.debug('debug message')
my_logger.info('info message')
my_logger.warn('warn message')
my_logger.error('error message')
my_logger.critical('critical message')
```

After these changes, the **AttributeError: 'Logger' object has no attribute 'config'** error should be gone.
20,420,937
I have a Python script that sets the IPv4 address for my wireless and wired interfaces. So far, I use a `subprocess` command like:

```
subprocess.call(["ip addr add local 192.168.1.2/24 broadcast 192.168.1.255 dev wlan0"])
```

How can I set the IPv4 address of an interface using Python libraries? And is there any way to get the already existing IP configuration using Python libraries?
2013/12/06
[ "https://Stackoverflow.com/questions/20420937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2468276/" ]
Set an address via the older `ioctl` interface: ``` import socket, struct, fcntl SIOCSIFADDR = 0x8916 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) def setIpAddr(iface, ip): bin_ip = socket.inet_aton(ip) ifreq = struct.pack('16sH2s4s8s', iface, socket.AF_INET, '\x00' * 2, bin_ip, '\x00' * 8) fcntl.ioctl(sock, SIOCSIFADDR, ifreq) setIpAddr('em1', '192.168.0.1') ``` (setting the netmask is done with `SIOCSIFNETMASK = 0x891C`) Ip addresses can be retrieved in the same way: [Finding local IP addresses using Python's stdlib](https://stackoverflow.com/questions/166506/finding-local-ip-addresses-using-pythons-stdlib/9267833#9267833) I believe there is a python implementation of Netlink should you want to use that over `ioctl`
You have multiple options to do it from your Python program.

One could use the `ip` tool like you showed. While this is not the best option at all, it usually does the job, while being a little bit slow and awkward to program.

Another way would be to do the things `ip` does on your own by using the kernel netlink interface directly. I know that [libnl](http://www.carisma.slowglass.com/~tgr/libnl/) has some experimental (?) Python bindings. This may work, but you will have to deal with a lot of low-level stuff. I wouldn't recommend this way for simple "set and get" of IPs, but it's the most "correct" and reliable way to do so.

The best way in my opinion (if you only want to set and get IPs) would be to use NetworkManager's dbus interface. While this is very limited and may have its own problems (it might not behave the way you would like it to), this is the most straightforward way if the NetworkManager is running anyway.

So, choose the `libnl` approach if you want to get your hands dirty; it's clearly superior but also way more work. You may also get away with the NetworkManager dbus interface, depending on your needs and general system setup. Otherwise you can just leave it that way.
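If you do stay with the `ip` tool for now, here is a minimal sketch of that first option, using the same address as in your question and passing the command as a proper argument list (it needs the same privileges as running `ip` by hand, typically root):

```python
import subprocess

# Equivalent to: ip addr add local 192.168.1.2/24 broadcast 192.168.1.255 dev wlan0
# Passing the command as a list of arguments avoids needing shell=True.
subprocess.check_call([
    "ip", "addr", "add", "local", "192.168.1.2/24",
    "broadcast", "192.168.1.255", "dev", "wlan0",
])
```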
20,420,937
I have a Python script that sets the IPv4 address for my wireless and wired interfaces. So far, I use a `subprocess` command like:

```
subprocess.call(["ip addr add local 192.168.1.2/24 broadcast 192.168.1.255 dev wlan0"])
```

How can I set the IPv4 address of an interface using Python libraries? And is there any way to get the already existing IP configuration using Python libraries?
2013/12/06
[ "https://Stackoverflow.com/questions/20420937", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2468276/" ]
With [pyroute2](https://github.com/svinota/pyroute2).IPRoute: ``` from pyroute2 import IPRoute ip = IPRoute() index = ip.link_lookup(ifname='em1')[0] ip.addr('add', index, address='192.168.0.1', mask=24) ip.close() ``` With [pyroute2](https://github.com/svinota/pyroute2).IPDB: ``` from pyroute2 import IPDB ip = IPDB() with ip.interfaces.em1 as em1: em1.add_ip('192.168.0.1/24') ip.release() ```
You have multiple options to do it from your Python program.

One could use the `ip` tool like you showed. While this is not the best option at all, it usually does the job, while being a little bit slow and awkward to program.

Another way would be to do the things `ip` does on your own by using the kernel netlink interface directly. I know that [libnl](http://www.carisma.slowglass.com/~tgr/libnl/) has some experimental (?) Python bindings. This may work, but you will have to deal with a lot of low-level stuff. I wouldn't recommend this way for simple "set and get" of IPs, but it's the most "correct" and reliable way to do so.

The best way in my opinion (if you only want to set and get IPs) would be to use NetworkManager's dbus interface. While this is very limited and may have its own problems (it might not behave the way you would like it to), this is the most straightforward way if the NetworkManager is running anyway.

So, choose the `libnl` approach if you want to get your hands dirty; it's clearly superior but also way more work. You may also get away with the NetworkManager dbus interface, depending on your needs and general system setup. Otherwise you can just leave it that way.
48,756,249
I have 2 models `Task` and `TaskImage` which is a collection of images belonging to `Task` object. What I want is to be able to add multiple images to my `Task` object, but I can only do it using 2 models. Currently, when I add images, it doesn't let me upload them and save new objects. **settings.py** ``` MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_URL = '/media/' ``` **serializers.py** ``` class TaskImageSerializer(serializers.ModelSerializer): class Meta: model = TaskImage fields = ('image',) class TaskSerializer(serializers.HyperlinkedModelSerializer): user = serializers.ReadOnlyField(source='user.username') images = TaskImageSerializer(source='image_set', many=True, read_only=True) class Meta: model = Task fields = '__all__' def create(self, validated_data): images_data = validated_data.pop('images') task = Task.objects.create(**validated_data) for image_data in images_data: TaskImage.objects.create(task=task, **image_data) return task ``` **models.py** ``` class Task(models.Model): title = models.CharField(max_length=100, blank=False) user = models.ForeignKey(User) def save(self, *args, **kwargs): super(Task, self).save(*args, **kwargs) class TaskImage(models.Model): task = models.ForeignKey(Task, on_delete=models.CASCADE) image = models.FileField(blank=True) ``` However, when I do a post request: [![enter image description here](https://i.stack.imgur.com/NndxK.png)](https://i.stack.imgur.com/NndxK.png) I get the following traceback: > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/django/core/handlers/exception.py" > in inner > 41. response = get\_response(request) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/django/core/handlers/base.py" > in \_get\_response > 187. response = self.process\_exception\_by\_middleware(e, request) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/django/core/handlers/base.py" > in \_get\_response > 185. response = wrapped\_callback(request, \*callback\_args, \*\*callback\_kwargs) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/django/views/decorators/csrf.py" > in wrapped\_view > 58. return view\_func(\*args, \*\*kwargs) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/viewsets.py" > in view > 95. return self.dispatch(request, \*args, \*\*kwargs) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/views.py" > in dispatch > 494. response = self.handle\_exception(exc) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/views.py" > in handle\_exception > 454. self.raise\_uncaught\_exception(exc) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/views.py" > in dispatch > 491. response = handler(request, \*args, \*\*kwargs) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/mixins.py" > in create > 21. self.perform\_create(serializer) > > > File "/Users/gr/Desktop/PycharmProjects/godo/api/views.py" in > perform\_create > 152. serializer.save(user=self.request.user) > > > File > "/Applications/Anaconda/anaconda/envs/godo/lib/python3.6/site-packages/rest\_framework/serializers.py" > in save > 214. self.instance = self.create(validated\_data) > > > File "/Users/gr/Desktop/PycharmProjects/godo/api/serializers.py" in > create > 67. 
images\_data = validated\_data.pop('images') > > > Exception Type: KeyError at /api/tasks/ Exception Value: 'images' > > >
2018/02/12
[ "https://Stackoverflow.com/questions/48756249", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4729764/" ]
**Description for the issue**

The origin of the exception is a `KeyError` raised by this statement:

```
images_data = validated_data.pop('images')
```

It fails because the validated data has no `images` key, which means the serializer never validated the image inputs sent from Postman. Django stores the uploaded files in `request.FILES` (as in-memory upload objects), so we fetch the files from there. Also, since you want to upload multiple images at once, you have to use a different field name for each image in Postman.

Change your `serializer` to something like this:

```
class TaskSerializer(serializers.HyperlinkedModelSerializer):
    user = serializers.ReadOnlyField(source='user.username')
    images = TaskImageSerializer(source='taskimage_set', many=True, read_only=True)

    class Meta:
        model = Task
        fields = ('id', 'title', 'user', 'images')

    def create(self, validated_data):
        images_data = self.context.get('view').request.FILES
        task = Task.objects.create(title=validated_data.get('title', 'no-title'), user_id=1)
        for image_data in images_data.values():
            TaskImage.objects.create(task=task, image=image_data)
        return task
```

I don't know what your view looks like, but I would prefer a `ModelViewSet`-based view class:

```
class Upload(ModelViewSet):
    serializer_class = TaskSerializer
    queryset = Task.objects.all()
```

Postman console:

[![enter image description here](https://i.stack.imgur.com/QeTGn.png)](https://i.stack.imgur.com/QeTGn.png)

DRF result:

```
{
    "id": 12,
    "title": "This Is Task Title",
    "user": "admin",
    "images": [
        {
            "image": "http://127.0.0.1:8000/media/Screenshot_from_2017-12-20_07-18-43_tNIbUXV.png"
        },
        {
            "image": "http://127.0.0.1:8000/media/game-of-thrones-season-valar-morghulis-wallpaper-1366x768_3bkMk78.jpg"
        },
        {
            "image": "http://127.0.0.1:8000/media/IMG_212433_lZ2Mijj.jpg"
        }
    ]
}
```

**UPDATE**

This is the answer to your comment. In Django, a reverse ForeignKey is accessed through the `_set` attribute; see the [official doc](https://docs.djangoproject.com/en/dev/topics/db/queries/#following-relationships-backward). Here, `Task` and `TaskImage` are in a one-to-many relationship, so if you have a `Task` instance you can get all related `TaskImage` instances through this reverse look-up. Here is an example:

```
task_instance = Task.objects.get(id=1)
task_img_set_all = task_instance.taskimage_set.all()
```

Here `task_img_set_all` will be equal to `TaskImage.objects.filter(task_id=1)`.
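If you want to exercise such an endpoint from Python instead of Postman, a request along these lines should work. The URL, credentials, field names and file paths are all placeholders; the important part is that each file gets its own form field so it becomes a separate entry in `request.FILES`:

```python
import requests

# Placeholder endpoint, credentials and file paths: adjust to your project.
url = "http://127.0.0.1:8000/api/tasks/"

with open("first.png", "rb") as img1, open("second.jpg", "rb") as img2:
    response = requests.post(
        url,
        data={"title": "This Is Task Title"},
        # Distinct field names, mirroring the Postman screenshot above.
        files=[("image_1", img1), ("image_2", img2)],
        auth=("admin", "password"),
    )

print(response.status_code, response.json())
```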
You have `read_only=True` set on the nested `TaskImageSerializer` field, so the uploaded images are never validated and there will be no `images` key in `validated_data`.
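If you would rather keep the upload handling inside the serializer, one possible sketch (not a drop-in fix, and assuming the `Task`/`TaskImage` models and imports from the question plus a reasonably recent DRF) is to accept the files through a write-only `ListField`, so they survive validation and show up in `validated_data`. With a multipart request you would then send all files under the same `images` key; if you still want the nested image URLs on read, keep the read-only nested serializer under a different field name.

```python
class TaskSerializer(serializers.HyperlinkedModelSerializer):
    user = serializers.ReadOnlyField(source='user.username')
    # Write-only upload field: unlike the read_only nested serializer,
    # its contents end up in validated_data['images'].
    # FileField matches the model's FileField; use ImageField if Pillow is installed.
    images = serializers.ListField(
        child=serializers.FileField(), write_only=True
    )

    class Meta:
        model = Task
        fields = ('id', 'title', 'user', 'images')

    def create(self, validated_data):
        images_data = validated_data.pop('images')
        task = Task.objects.create(**validated_data)
        for image_data in images_data:
            TaskImage.objects.create(task=task, image=image_data)
        return task
```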
7,461,570
I'm trying to build the most recent version of OpenCV on a fairly minimal VPS but am running into trouble with CMake. I'm not familiar with CMake, so I'm finding it difficult to interpret the log output and work out how to debug the problem. From the command line (X11 isn't installed) and within devel/OpenCV/-2.3.1/release I issue the following ``` sudo cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local .. ``` and the result of this is the following: ``` -- Extracting svn version, please wait... -- SVNVERSION: exported -- Detected version of GNU GCC: 44 (404) -- Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR) -- Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR) -- Could NOT find PNG (missing: PNG_LIBRARY PNG_PNG_INCLUDE_DIR) -- Could NOT find TIFF (missing: TIFF_LIBRARY TIFF_INCLUDE_DIR) -- Could NOT find JPEG (missing: JPEG_LIBRARY JPEG_INCLUDE_DIR) -- Use NumPy headers from: /usr/lib/python2.6/site-packages/numpy-1.6.1-py2.6-linux-i686.egg/numpy/core/include -- Found Sphinx 0.6.6: /usr/bin/sphinx-build -- Parsing 'cvconfig.h.cmake' -- -- General configuration for opencv 2.3.1 ===================================== -- -- Built as dynamic libs?: YES -- Compiler: /usr/bin/c++ -- C++ flags (Release): -Wall -pthread -march=i686 -ffunction-sections -O3 -DNDEBUG -fomit-frame-pointer -msse -msse2 -mfpmath=387 -DNDEBUG\ -- C++ flags (Debug): -Wall -pthread -march=i686 -ffunction-sections -g -O0 -DDEBUG -D_DEBUG -ggdb3 -- Linker flags (Release): -- Linker flags (Debug): -- -- GUI: -- GTK+ 2.x: NO -- GThread: NO -- -- Media I/O: -- ZLib: build -- JPEG: build -- PNG: build -- TIFF: build -- JPEG 2000: FALSE -- OpenEXR: NO -- OpenNI: NO -- OpenNI PrimeSensor Modules: NO -- XIMEA: NO -- -- Video I/O: -- DC1394 1.x: NO -- DC1394 2.x: NO -- FFMPEG: NO -- codec: NO -- format: NO -- util: NO -- swscale: NO -- gentoo-style: NO -- GStreamer: NO -- UniCap: NO -- PvAPI: NO -- V4L/V4L2: FALSE/FALSE -- Xine: NO -- -- Other third-party libraries: -- Use IPP: NO -- Use TBB: NO -- Use ThreadingFramework: NO -- Use Cuda: NO -- Use Eigen: NO -- -- Interfaces: -- Python: NO -- Python interpreter: /usr/bin/python2.6 -B (ver 2.6) -- Python numpy: YES -- Java: NO -- -- Documentation: -- Sphinx: /usr/bin/sphinx-build (ver 0.6.6) -- PdfLaTeX compiler: NO -- Build Documentation: NO -- -- Tests and samples: -- Tests: YES -- Examples: NO -- -- Install path: /usr/local -- -- cvconfig.h is in: /home/ec2-user/OpenCV-2.3.1/release -- ----------------------------------------------------------------- -- -- Configuring incomplete, errors occurred! ``` When I run the command I also get the following error message: ``` CMake Error at CMakeLists.txt:44 (set_property): set_property given invalid scope CACHE. Valid scopes are GLOBAL, DIRECTORY, TARGET, SOURCE, TEST. ``` Lines 42-45 are the following: ``` set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "Configs" FORCE) if(DEFINED CMAKE_BUILD_TYPE) set_property( CACHE CMAKE_BUILD_TYPE PROPERTY STRINGS ${CMAKE_CONFIGURATION_TYPES} ) endif() ``` However, I'm not sure what this means. Does anyone have any pointers? Many thanks
2011/09/18
[ "https://Stackoverflow.com/questions/7461570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/413797/" ]
Check your CMake version. Support for `set_property(CACHE ...)` was implemented in CMake 2.8.0. If upgrading CMake is not an option for you, I guess it's safe to comment out line 44; it seems to be used only to populate the drop-down list of build types in the CMake GUI. <http://www.kitware.com/blog/home/post/82> <http://blog.bethcodes.com/cmake-tips-tricks-drop-down-list>
I've run into lots of errors building OpenCV that were caused by a CMake version that didn't match the OpenCV version. I successfully built OpenCV 3.0 using CMake 3.0 (though CMake 2.6 did not work for me). Then, when I found I had to downgrade to OpenCV 2.4.9, I had to go back to my system's default CMake 2.6, as CMake 3.0 did not work. So the first thing to check when you get errors running CMake on OpenCV is whether your CMake and OpenCV versions are compatible.