Dataset schema:

content: string (85 – 101k)
title: string (0 – 150)
question: string (15 – 48k)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (35 – 137)
Q: Python JSON -> CSV different headers I have a json file that is like so: {"16CD7631-0ED0-4DA0-8D3B-8BBB41992EED": {"id": "16CD7631-0ED0-4DA0-8D3B-8BBB41992EED", "longitude": "-122.406417", "reportType": "Other", "latitude": "37.785834"}, "91CA4A9C-9A48-41A2-8453-07CBC8DC723E": {"id": "91CA4A9C-9A48-41A2-8453-07CBC8DC723E", "longitude": "-1.1932383", "reportType": "Street Obstruction", "latitude": "45.8827419"}} The goal is to get this to turn into a csv file like so: id,longitude,reportType,latitude 16CD7631-0ED0-4DA0-8D3B-8BBB41992EED,-122.406417,Other,37.785834 91CA4A9C-9A48-41A2-8453-07CBC8DC723E,-1.1932383,Street Obstruction,45.8827419 I tried just doing with open('sample.json', encoding='utf-8') as inputfile: df = pd.read_json(inputfile) df.to_csv('csvfile.csv', encoding='utf-8', index=False) But because the name of each document was named its id, I get incorrect output. What is the best way to achieve my goal? Thanks A: You can use pandas.json_normalize. Try this : import json import pandas as pd with open('sample.json', encoding='utf-8') as inputfile: data = json.load(inputfile) df = pd.json_normalize(data[k] for k in data.keys()) # Output : print(df.to_string()) id longitude reportType latitude 0 16CD7631-0ED0-4DA0-8D3B-8BBB41992EED -122.406417 Other 37.785834 1 91CA4A9C-9A48-41A2-8453-07CBC8DC723E -1.1932383 Street Obstruction 45.8827419
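An alternative sketch (not from the answer above; it assumes the same sample.json layout shown in the question) builds the frame directly with pandas.DataFrame.from_dict, since each outer key just repeats the inner id:

```python
import pandas as pd

# Same shape as sample.json: the outer keys duplicate the inner "id" field
data = {
    "16CD7631-0ED0-4DA0-8D3B-8BBB41992EED": {
        "id": "16CD7631-0ED0-4DA0-8D3B-8BBB41992EED",
        "longitude": "-122.406417", "reportType": "Other", "latitude": "37.785834",
    },
    "91CA4A9C-9A48-41A2-8453-07CBC8DC723E": {
        "id": "91CA4A9C-9A48-41A2-8453-07CBC8DC723E",
        "longitude": "-1.1932383", "reportType": "Street Obstruction", "latitude": "45.8827419",
    },
}

# orient="index" makes each outer key a row; the redundant key index is then dropped
df = pd.DataFrame.from_dict(data, orient="index").reset_index(drop=True)
df.to_csv("csvfile.csv", encoding="utf-8", index=False)
print(df.columns.tolist())  # ['id', 'longitude', 'reportType', 'latitude']
```

With real files, `data` would come from `json.load(open('sample.json', encoding='utf-8'))` instead of the inline dict.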
Python JSON -> CSV different headers
I have a json file that is like so: {"16CD7631-0ED0-4DA0-8D3B-8BBB41992EED": {"id": "16CD7631-0ED0-4DA0-8D3B-8BBB41992EED", "longitude": "-122.406417", "reportType": "Other", "latitude": "37.785834"}, "91CA4A9C-9A48-41A2-8453-07CBC8DC723E": {"id": "91CA4A9C-9A48-41A2-8453-07CBC8DC723E", "longitude": "-1.1932383", "reportType": "Street Obstruction", "latitude": "45.8827419"}} The goal is to get this to turn into a csv file like so: id,longitude,reportType,latitude 16CD7631-0ED0-4DA0-8D3B-8BBB41992EED,-122.406417,Other,37.785834 91CA4A9C-9A48-41A2-8453-07CBC8DC723E,-1.1932383,Street Obstruction,45.8827419 I tried just doing with open('sample.json', encoding='utf-8') as inputfile: df = pd.read_json(inputfile) df.to_csv('csvfile.csv', encoding='utf-8', index=False) But because the name of each document was named its id, I get incorrect output. What is the best way to achieve my goal? Thanks
[ "You can use pandas.json_normalize.\nTry this :\nimport json\nimport pandas as pd\n\nwith open('sample.json', encoding='utf-8') as inputfile:\n data = json.load(inputfile)\n df = pd.json_normalize(data[k] for k in data.keys())\n\n# Output :\nprint(df.to_string())\n\n id longitude reportType latitude\n0 16CD7631-0ED0-4DA0-8D3B-8BBB41992EED -122.406417 Other 37.785834\n1 91CA4A9C-9A48-41A2-8453-07CBC8DC723E -1.1932383 Street Obstruction 45.8827419\n\n" ]
[ 0 ]
[]
[]
[ "csv", "json", "pandas", "python" ]
stackoverflow_0074680602_csv_json_pandas_python.txt
Q: How can I extract specific text and link from div class using a BeautifulSoup I am trying to extract text and link from this website: https://www.rexelusa.com/s/terminal-block-end-stops?cat=61imhp2p In my code, I was trying to extract first output that is all CAT# numbers. This is my code: import selenium.webdriver from bs4 import BeautifulSoup from selenium.webdriver.firefox.options import Options options = Options() options.binary_location = r"C:\Program Files\Mozilla Firefox\firefox.exe" url = "https://www.rexelusa.com/s/terminal-block-end-stops?cat=61imhp2p" driver = selenium.webdriver.Firefox(options=options, executable_path='C:\webdrivers\geckodriver.exe') driver.get(url) soup = BeautifulSoup(driver.page_source,"html.parser") all_div = soup.find_all("div", class_= 'row no-gutters') #print(all_div) for div in all_div: all_items = div.find_all(class_= 'pr-4 col col-auto') for item in all_items: print(item) driver.quit() And my expected output is: all CAT# numbers(means total 92 will come in output) and category detail as shown in picture CAT #: 1492-EAJ35 Categories Control & Automation Terminal Blocks Terminal Blocks Accessories Terminal Block End Stops enter image description here A: #To extract the CAT# numbers and category details from the website, you can try using the requests and BeautifulSoup libraries. You can use the requests library to send an HTTP GET request to the URL, and then use the BeautifulSoup library to parse the HTML response and extract the data you want. 
#Here is an example of how you could do this: import requests from bs4 import BeautifulSoup url = "https://www.rexelusa.com/s/terminal-block-end-stops?cat=61imhp2p" # Send an HTTP GET request to the URL and get the response response = requests.get(url) # Parse the response HTML using BeautifulSoup soup = BeautifulSoup(response.text, "html.parser") # Extract the CAT# numbers from the response HTML cat_numbers = [x.text for x in soup.find_all("span", class_="c-black-text f-s-18 f-w-600")] # Print the CAT# numbers for cat_number in cat_numbers: print(cat_number) # Extract the category details from the response HTML category_details = [x.text for x in soup.find_all("div", class_="c-black-text f-s-12")] # Print the category details for category_detail in category_details: print(category_detail) #This code should extract the CAT# numbers and category details from the website and print them to the console. Note that you may need to modify the code to use the correct CSS classes for the elements you want to extract, as these may have changed since the original question was posted.
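Separately from the live site (which, as the question notes, is JavaScript-rendered and may not yield these elements to plain requests), the extraction pattern itself can be exercised offline. This is a sketch on a hypothetical static snippet — the class names product-cat and product-link are stand-ins, not the site's real classes:

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for one rendered product row
html = """
<div class="row no-gutters">
  <div class="product-cat">CAT #: 1492-EAJ35</div>
  <a class="product-link" href="/p/1492-EAJ35">Terminal Block End Stop</a>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
for row in soup.find_all("div", class_="row no-gutters"):
    # Pull the text of one child and the href of another from each row
    cat = row.find("div", class_="product-cat").get_text(strip=True)
    link = row.find("a", class_="product-link")["href"]
    print(cat, link)
```

Swapping in the page's actual classes (inspected in the browser after the JavaScript has run) is the part that has to be adapted per site.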
How can I extract specific text and link from div class using a BeautifulSoup
I am trying to extract text and link from this website: https://www.rexelusa.com/s/terminal-block-end-stops?cat=61imhp2p In my code, I was trying to extract first output that is all CAT# numbers. This is my code: import selenium.webdriver from bs4 import BeautifulSoup from selenium.webdriver.firefox.options import Options options = Options() options.binary_location = r"C:\Program Files\Mozilla Firefox\firefox.exe" url = "https://www.rexelusa.com/s/terminal-block-end-stops?cat=61imhp2p" driver = selenium.webdriver.Firefox(options=options, executable_path='C:\webdrivers\geckodriver.exe') driver.get(url) soup = BeautifulSoup(driver.page_source,"html.parser") all_div = soup.find_all("div", class_= 'row no-gutters') #print(all_div) for div in all_div: all_items = div.find_all(class_= 'pr-4 col col-auto') for item in all_items: print(item) driver.quit() And my expected output is: all CAT# numbers(means total 92 will come in output) and category detail as shown in picture CAT #: 1492-EAJ35 Categories Control & Automation Terminal Blocks Terminal Blocks Accessories Terminal Block End Stops enter image description here
[ "#To extract the CAT# numbers and category details from the website, you can try using the requests and BeautifulSoup libraries. You can use the requests library to send an HTTP GET request to the URL, and then use the BeautifulSoup library to parse the HTML response and extract the data you want.\n\n#Here is an example of how you could do this:\n\nimport requests\nfrom bs4 import BeautifulSoup\n\nurl = \"https://www.rexelusa.com/s/terminal-block-end-stops?cat=61imhp2p\"\n\n# Send an HTTP GET request to the URL and get the response\nresponse = requests.get(url)\n\n# Parse the response HTML using BeautifulSoup\nsoup = BeautifulSoup(response.text, \"html.parser\")\n\n# Extract the CAT# numbers from the response HTML\ncat_numbers = [x.text for x in soup.find_all(\"span\", class_=\"c-black-text f-s-18 f-w-600\")]\n\n# Print the CAT# numbers\nfor cat_number in cat_numbers:\n print(cat_number)\n\n# Extract the category details from the response HTML\ncategory_details = [x.text for x in soup.find_all(\"div\", class_=\"c-black-text f-s-12\")]\n\n# Print the category details\nfor category_detail in category_details:\n print(category_detail)\n\n#This code should extract the CAT# numbers and category details from the website and print them to the console. Note that you may need to modify the code to use the correct CSS classes for the elements you want to extract, as these may have changed since the original question was posted.\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "html", "javascript", "python" ]
stackoverflow_0074680638_beautifulsoup_html_javascript_python.txt
Q: Hierarchical Index from pd dataframe to Excel, need to forward fill and unmerge I have a pandas dataframe with a three-level hierarchical index, created by the following: df_grouped = df.groupby(['Country','Description', pd.Grouper(freq = 'M')]).sum() Basically, a table where Country is the highest level, and Description is the second level, followed by the date grouped by month. PICTURE A I'd like to do two unrelated things: Unmerge all the hierarchical indices in this structure within python, then forward fill to create PICTURE B. PICTURE B Be able to transform the datetimes while in the hierarchical structure of PICTURE A into YYYY-MM in python so when I export it I get PICTURE C. (I understand that I can do that from the structure in PICTURE B, I just want to be able to do it while it's still in the hierarchical structure in a pandas dataframe). PICTURE C Any tips? A: After groupby you get a MultiIndex DataFrame, so values are repeating in the first and second levels, only not displayed. 
If second DataFrame is not necessary you can convert DatetimeIndex to YYYY-MM format by strftime or to month period by to_period: df_grouped = df.groupby(['Country','Description', df.index.strftime('%Y-%m')]).sum() Or: df_grouped = df.groupby(['Country','Description', df.index.to_period('m')]).sum() If need second DataFrame add reset_index for convert levels to columns and for convert second level MultiIndex.set_levels with get_level_values: df_grouped = df.groupby(['Country','Description', pd.Grouper(freq = 'M')]).sum() df = df_grouped.reset_index() idx = df_grouped.index.get_level_values(2).strftime('%Y-%m') df_grouped.index = df_grouped.index.set_levels(idx, level=2) Sample: rng = pd.date_range('2017-04-03', periods=10, freq='10D') df = pd.DataFrame({'Country': ['Country'] * 10, 'Description':['A'] * 3 + ['B'] * 3 + ['C'] * 4, 'a': range(10)}, index=rng) print (df) Country Description a 2017-04-03 Country A 0 2017-04-13 Country A 1 2017-04-23 Country A 2 2017-05-03 Country B 3 2017-05-13 Country B 4 2017-05-23 Country B 5 2017-06-02 Country C 6 2017-06-12 Country C 7 2017-06-22 Country C 8 2017-07-02 Country C 9 df_grouped = df.groupby(['Country','Description', pd.Grouper(freq = 'M')]).sum() print (df_grouped) a Country Description Country A 2017-04-30 3 B 2017-05-31 12 C 2017-06-30 21 2017-07-31 9 df = df_grouped.reset_index().rename(columns={'level_2':'Date'}) print (df) Country Description Date a 0 Country A 2017-04-30 3 1 Country B 2017-05-31 12 2 Country C 2017-06-30 21 3 Country C 2017-07-31 9 idx = df_grouped.index.get_level_values(2).strftime('%Y-%m') df_grouped.index = df_grouped.index.set_levels(idx, level=2) print (df_grouped) a Country Description Country A 2017-04 3 B 2017-05 12 C 2017-06 21 2017-07 9 A: I realize this is an older post, but if you just want to get the displays to not look sparse, but the export to Excel still ends up merged, check that you have pandas version 1.5.2 then use the following: pd.set_option("display.multi_sparse", False) 
# for output display

I don't know how to get the export to Excel to have all the grouped-by rows be filled with the index; that's my question here.
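On the open export question above: pandas can write the repeated index values itself. A sketch (assuming a df_grouped-style MultiIndex frame): `to_excel(..., merge_cells=False)` writes one filled cell per row instead of merged blocks, and `reset_index()` flattens the index into ordinary, fully populated columns:

```python
import pandas as pd

# Small stand-in for the grouped frame from the question
idx = pd.MultiIndex.from_tuples(
    [("Country", "A", "2017-04"), ("Country", "B", "2017-05"),
     ("Country", "C", "2017-06"), ("Country", "C", "2017-07")],
    names=["Country", "Description", "Date"],
)
df_grouped = pd.DataFrame({"a": [3, 12, 21, 9]}, index=idx)

# Option 1: keep the MultiIndex but stop Excel from merging the index cells
# df_grouped.to_excel("out.xlsx", merge_cells=False)  # requires openpyxl

# Option 2: flatten first; every row then carries its full key
flat = df_grouped.reset_index()
print(flat["Country"].tolist())  # each row repeats the index value
```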
Hierarchical Index from pd dataframe to Excel, need to forward fill and unmerge
I have a pandas dataframe with a three-level hierarchical index, created by the following: df_grouped = df.groupby(['Country','Description', pd.Grouper(freq = 'M')]).sum() Basically, a table where Country is the highest level, and Description is the second level, and followed by the date grouped by month. PICTURE A I'd like to do two unrelated things: Unmerge all the hierarchical indices in this structure within python, then forward fill to create PICTURE B. PICTURE B Be able to transform the datetimes while in the hierarchical structure of PICTURE A into YYYY-MM in python so when I export it I get PICTURE C. (I understand that I can do that from the structure in PICTURE B, I just want to be able to do it while it's still in the hierarchical structure in a pandas dataframe). PICTURE C Any tips?
[ "After groupby you get MultiIndex DataFrame, so values are repaeting in first and second level, only not displayning.\nIf second DataFrame is not necessary you can convert DatetimeIndex to YYYY-MM format by strftime or to month period by to_period:\ndf_grouped = df.groupby(['Country','Description', df.index.strftime('%Y-%m')]).sum()\n\nOr:\ndf_grouped = df.groupby(['Country','Description', df.index.to_period('m')]).sum()\n\nIf need second DataFrame add reset_index for convert levels to columns and for convert second level MultiIndex.set_levels with get_level_values:\ndf_grouped = df.groupby(['Country','Description', pd.Grouper(freq = 'M')]).sum()\n\ndf = df_grouped.reset_index()\n\nidx = df_grouped.index.get_level_values(2).strftime('%Y-%m')\ndf_grouped.index = df_grouped.index.set_levels(idx, level=2)\n\nSample:\nrng = pd.date_range('2017-04-03', periods=10, freq='10D')\ndf = pd.DataFrame({'Country': ['Country'] * 10,\n 'Description':['A'] * 3 + ['B'] * 3 + ['C'] * 4, \n 'a': range(10)}, index=rng) \nprint (df)\n Country Description a\n2017-04-03 Country A 0\n2017-04-13 Country A 1\n2017-04-23 Country A 2\n2017-05-03 Country B 3\n2017-05-13 Country B 4\n2017-05-23 Country B 5\n2017-06-02 Country C 6\n2017-06-12 Country C 7\n2017-06-22 Country C 8\n2017-07-02 Country C 9\n\ndf_grouped = df.groupby(['Country','Description', pd.Grouper(freq = 'M')]).sum()\nprint (df_grouped)\n a\nCountry Description \nCountry A 2017-04-30 3\n B 2017-05-31 12\n C 2017-06-30 21\n 2017-07-31 9\n\n\ndf = df_grouped.reset_index().rename(columns={'level_2':'Date'})\nprint (df)\n Country Description Date a\n0 Country A 2017-04-30 3\n1 Country B 2017-05-31 12\n2 Country C 2017-06-30 21\n3 Country C 2017-07-31 9\n\nidx = df_grouped.index.get_level_values(2).strftime('%Y-%m')\ndf_grouped.index = df_grouped.index.set_levels(idx, level=2)\nprint (df_grouped)\n a\nCountry Description \nCountry A 2017-04 3\n B 2017-05 12\n C 2017-06 21\n 2017-07 9\n\n", "I realize this is an older post, but if 
you just want to get the displays to not look sparse, but the export to Excel still ends up merged, check that you have pandas version 1.5.2 then use the following:\npd.set_option(\"display.multi_sparse\", False) # for output display\n\nI don't know how to get the export to Excel to have all the grouped-by rows be filled with the index, that's my question here.\n" ]
[ 1, 0 ]
[]
[]
[ "datetime", "excel", "pandas", "python" ]
stackoverflow_0054019732_datetime_excel_pandas_python.txt
Q: Data scraping from forexfactory.com I am a beginner in python. In this question they extract data from forex factory. At that time the solution was working with their logic, finding the table with soup.find('table', class_="calendar__table"). But, now the web structure has been changed, the html table is removed and converted to some javascript format. So, this solution does not find anything now. import requests from bs4 import BeautifulSoup r = requests.get('http://www.forexfactory.com/calendar.php?day=nov18.2016') soup = BeautifulSoup(r.text, 'lxml') calendar_table = soup.find('table', class_="calendar__table") print(calendar_table) # for row in calendar_table.find_all('tr', class_=['calendar__row calendar_row','newday']): # row_data = [td.get_text(strip=True) for td in row.find_all('td')] # print(row_data) As I am a beginner I have no idea how to do that. So, how can I scrape the data? If you give me any hints it will be helpful for me. Thanks a lot for reading my post. A: As you've tagged this question with selenium, this answer relies on Selenium. I am using webdriver manager for ease. 
from selenium import webdriver from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager driver = webdriver.Chrome(ChromeDriverManager().install()) try: driver.get("http://www.forexfactory.com/calendar.php?day=nov18.2016") # Get the table table = driver.find_element(By.CLASS_NAME, "calendar__table") # Iterate over each table row for row in table.find_elements(By.TAG_NAME, "tr"): # list comprehension to get each cell's data and filter out empty cells row_data = list(filter(None, [td.text for td in row.find_elements(By.TAG_NAME, "td")])) if row_data == []: continue print(row_data) except Exception as e: print(e) finally: driver.quit() This currently prints out: ['Fri\nNov 18', '2:00am', 'EUR', 'German PPI m/m', '0.7%', '0.3%', '-0.2%'] ['3:30am', 'EUR', 'ECB President Draghi Speaks'] ['4:00am', 'EUR', 'Current Account', '25.3B', '31.3B', '29.1B'] ['4:10am', 'GBP', 'MPC Member Broadbent Speaks'] ['5:30am', 'CHF', 'Gov Board Member Maechler Speaks'] ['EUR', 'German Buba President Weidmann Speaks'] ['USD', 'FOMC Member Bullard Speaks'] ['8:30am', 'CAD', 'Core CPI m/m', '0.2%', '0.3%', '0.2%'] ['CAD', 'CPI m/m', '0.2%', '0.2%', '0.1%'] ['9:30am', 'USD', 'FOMC Member Dudley Speaks'] ['USD', 'FOMC Member George Speaks'] ['10:00am', 'USD', 'CB Leading Index m/m', '0.1%', '0.1%', '0.2%'] ['9:45pm', 'USD', 'FOMC Member Powell Speaks'] The data it's printing is just to show that it can extract the data; you will need to change and format it as you see fit. A: Currently they have implemented some Cloudflare protection, so BeautifulSoup alone can't collect data. We have to use Selenium for that. 
Example Working Code: import random import selenium from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By def create_driver(): user_agent_list = [ 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:90.0) Gecko/20100101 Firefox/90.0', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11.5; rv:90.0) Gecko/20100101 Firefox/90.0', 'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_5_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36', 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:90.0) Gecko/20100101 Firefox/90.0', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36' ] user_agent = random.choice(user_agent_list) browser_options = webdriver.ChromeOptions() browser_options.add_argument("--no-sandbox") browser_options.add_argument("--headless") browser_options.add_argument("start-maximized") browser_options.add_argument("window-size=1900,1080") browser_options.add_argument("disable-gpu") browser_options.add_argument("--disable-software-rasterizer") browser_options.add_argument("--disable-dev-shm-usage") browser_options.add_argument(f'user-agent={user_agent}') driver = webdriver.Chrome(options=browser_options, service_args=["--verbose", "--log-path=test.log"]) return driver def parse_data(driver, url): driver.get(url) data_table = driver.find_element(By.CLASS_NAME, "calendar__table") value_list = [] for row in data_table.find_elements(By.TAG_NAME, "tr"): row_data = list(filter(None, [td.text for td in row.find_elements(By.TAG_NAME, "td")])) if row_data: value_list.append(row_data) return value_list driver = create_driver() url = 'https://www.forexfactory.com/calendar?day=aug26.2021' value_list = parse_data(driver=driver, 
url=url) for value in value_list: if '\n' in value[0]: date_str = value.pop(0).replace('\n', ' - ') print(f'Date: {date_str}') print(value) Output: Date: Thu - Aug 26 ['2:00am', 'EUR', 'German GfK Consumer Climate', '-1.2', '-0.5', '-0.4'] ['4:00am', 'EUR', 'M3 Money Supply y/y', '7.6%', '7.6%', '8.3%'] ['EUR', 'Private Loans y/y', '4.2%', '4.1%', '4.0%'] ['7:30am', 'EUR', 'ECB Monetary Policy Meeting Accounts'] ['8:30am', 'USD', 'Prelim GDP q/q', '6.6%', '6.7%', '6.5%'] ['USD', 'Unemployment Claims', '353K', '345K', '349K'] ['USD', 'Prelim GDP Price Index q/q', '6.1%', '6.0%', '6.0%'] ['10:30am', 'USD', 'Natural Gas Storage', '29B', '40B', '46B'] ['Day 1', 'All', 'Jackson Hole Symposium'] ['5:00pm', 'USD', 'President Biden Speaks'] ['7:30pm', 'JPY', 'Tokyo Core CPI y/y', '0.0%', '-0.1%', '0.1%'] ['9:30pm', 'AUD', 'Retail Sales m/m', '-2.7%', '-2.6%', '-1.8%']
Data scraping from forexfactory.com
I am a beginner in python. In this question they extract data from forex factory. At that time the solution was working with their logic, finding the table with soup.find('table', class_="calendar__table"). But, now the web structure has been changed, the html table is removed and converted to some javascript format. So, this solution does not find anything now. import requests from bs4 import BeautifulSoup r = requests.get('http://www.forexfactory.com/calendar.php?day=nov18.2016') soup = BeautifulSoup(r.text, 'lxml') calendar_table = soup.find('table', class_="calendar__table") print(calendar_table) # for row in calendar_table.find_all('tr', class_=['calendar__row calendar_row','newday']): # row_data = [td.get_text(strip=True) for td in row.find_all('td')] # print(row_data) As I am a beginner I have no idea how to do that. So, how can I scrape the data? If you give me any hints it will be helpful for me. Thanks a lot for reading my post.
[ "As you've tagged this question with selenium, this answer relies on Selenium. I am using webdriver manager for ease.\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom webdriver_manager.chrome import ChromeDriverManager\n\ndriver = webdriver.Chrome(ChromeDriverManager().install())\n\ntry:\n driver.get(\"http://www.forexfactory.com/calendar.php?day=nov18.2016\")\n # Get the table\n table = driver.find_element(By.CLASS_NAME, \"calendar__table\")\n # Iterate over each table row\n for row in table.find_elements(By.TAG_NAME, \"tr\"):\n # list comprehension to get each cell's data and filter out empty cells\n row_data = list(filter(None, [td.text for td in row.find_elements(By.TAG_NAME, \"td\")]))\n if row_data == []:\n continue\n print(row_data)\nexcept Exception as e:\n print(e)\nfinally:\n driver.quit()\n\nThis currently prints out:\n['Fri\\nNov 18', '2:00am', 'EUR', 'German PPI m/m', '0.7%', '0.3%', '-0.2%']\n['3:30am', 'EUR', 'ECB President Draghi Speaks']\n['4:00am', 'EUR', 'Current Account', '25.3B', '31.3B', '29.1B']\n['4:10am', 'GBP', 'MPC Member Broadbent Speaks']\n['5:30am', 'CHF', 'Gov Board Member Maechler Speaks']\n['EUR', 'German Buba President Weidmann Speaks']\n['USD', 'FOMC Member Bullard Speaks']\n['8:30am', 'CAD', 'Core CPI m/m', '0.2%', '0.3%', '0.2%']\n['CAD', 'CPI m/m', '0.2%', '0.2%', '0.1%']\n['9:30am', 'USD', 'FOMC Member Dudley Speaks']\n['USD', 'FOMC Member George Speaks']\n['10:00am', 'USD', 'CB Leading Index m/m', '0.1%', '0.1%', '0.2%']\n['9:45pm', 'USD', 'FOMC Member Powell Speaks']\n\nThe data it's printing is just to show that it can extract the data, you will need to change and format it as you see fit.\n", "Currently they have implemented some cloudfare protection so only beautifulsouop can't collect data. 
We have to use selenium for that.\nExample Working Code:\nimport random\nimport selenium\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.common.by import By\n\ndef create_driver():\n user_agent_list = [\n 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',\n 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:90.0) Gecko/20100101 Firefox/90.0',\n 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11.5; rv:90.0) Gecko/20100101 Firefox/90.0',\n 'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',\n 'Mozilla/5.0 (Macintosh; Intel Mac OS X 11_5_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',\n 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:90.0) Gecko/20100101 Firefox/90.0',\n 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36'\n ]\n user_agent = random.choice(user_agent_list)\n\n browser_options = webdriver.ChromeOptions()\n browser_options.add_argument(\"--no-sandbox\")\n browser_options.add_argument(\"--headless\")\n browser_options.add_argument(\"start-maximized\")\n browser_options.add_argument(\"window-size=1900,1080\")\n browser_options.add_argument(\"disable-gpu\")\n browser_options.add_argument(\"--disable-software-rasterizer\")\n browser_options.add_argument(\"--disable-dev-shm-usage\")\n browser_options.add_argument(f'user-agent={user_agent}')\n\n driver = webdriver.Chrome(options=browser_options, service_args=[\"--verbose\", \"--log-path=test.log\"])\n\n return driver\n\ndef parse_data(driver, url):\n driver.get(url)\n\n data_table = driver.find_element(By.CLASS_NAME, \"calendar__table\")\n value_list = []\n\n for row in data_table.find_elements(By.TAG_NAME, \"tr\"):\n row_data = list(filter(None, [td.text for td in row.find_elements(By.TAG_NAME, \"td\")]))\n if row_data:\n value_list.append(row_data)\n return 
value_list\n\ndriver = create_driver()\nurl = 'https://www.forexfactory.com/calendar?day=aug26.2021'\n\nvalue_list = parse_data(driver=driver, url=url)\n\nfor value in value_list:\n if '\\n' in value[0]:\n date_str = value.pop(0).replace('\\n', ' - ')\n print(f'Date: {date_str}')\n print(value)\n\n\nOutput:\nDate: Thu - Aug 26\n['2:00am', 'EUR', 'German GfK Consumer Climate', '-1.2', '-0.5', '-0.4']\n['4:00am', 'EUR', 'M3 Money Supply y/y', '7.6%', '7.6%', '8.3%']\n['EUR', 'Private Loans y/y', '4.2%', '4.1%', '4.0%']\n['7:30am', 'EUR', 'ECB Monetary Policy Meeting Accounts']\n['8:30am', 'USD', 'Prelim GDP q/q', '6.6%', '6.7%', '6.5%']\n['USD', 'Unemployment Claims', '353K', '345K', '349K']\n['USD', 'Prelim GDP Price Index q/q', '6.1%', '6.0%', '6.0%']\n['10:30am', 'USD', 'Natural Gas Storage', '29B', '40B', '46B']\n['Day 1', 'All', 'Jackson Hole Symposium']\n['5:00pm', 'USD', 'President Biden Speaks']\n['7:30pm', 'JPY', 'Tokyo Core CPI y/y', '0.0%', '-0.1%', '0.1%']\n['9:30pm', 'AUD', 'Retail Sales m/m', '-2.7%', '-2.6%', '-1.8%']\n\n" ]
[ 4, 3 ]
[ "how to get that into discord ?\n" ]
[ -2 ]
[ "beautifulsoup", "python", "python_3.x", "selenium", "web_scraping" ]
stackoverflow_0067068287_beautifulsoup_python_python_3.x_selenium_web_scraping.txt
Q: How do I decompose() a recurring row in a table that I find located in an html page using Python? The row is a duplicate of the header row. The row occurs over and over again randomly, and I do not want it in the data set (naturally). I think the HTML page has it there to remind the viewer what column attributes they are looking at as they scroll down. Below is a sample of one of the row elements I want to delete: <tr class ="thead" data-row="25> Here is another one: <tr class="thead" data-row="77"> They occur randomly, but is there any way to make a loop that can iterate, find the first cell in the row, and determine that it is in fact the row we want to delete? Because they are identical each time. The first cell is always "Player", identifying the attribute. Below is an example of what that looks like as an HTML element. <th aria-label="Player" data-stat="player" scope="col" class=" poptip sort_default_asc center">Player</th> Maybe I can create a loop that iterates through each row and determines if that first cell says "Player". If it does, then delete that whole row. Is that possible? Here is my code so far: from bs4 import BeautifulSoup import pandas as pd import requests import string years = list(range(2023, 2024)) alphabet = list(string.ascii_lowercase) url_namegather = 'https://www.basketball-reference.com/players/a' lastname_a = 'a' url = url_namegather.format(lastname_a) data = requests.get(url) with open("player_names/lastname_a.html".format(lastname_a), "w+", encoding="utf-8") as f: f.write(data.text) with open("player_names/lastname_a.html", encoding="utf-8") as f: page = f.read() soup = BeautifulSoup(page, "html.parser") A: You can read the table directly using pandas. You may need to install the lxml package, though. df = pd.read_html('https://www.basketball-reference.com/players/a')[0] df This will get data without any duplicated header rows.
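If the decompose() approach from the question is still wanted, the "first cell says Player" check can be written directly. A sketch on a made-up table snippet (the real page uses the same tr.thead class; the player names here are just sample data):

```python
from bs4 import BeautifulSoup

# Stand-in for a table with repeated header rows mixed into the body
html = """
<table>
  <tr class="thead"><th>Player</th><th>From</th></tr>
  <tr><td>Alaa Abdelnaby</td><td>1991</td></tr>
  <tr class="thead" data-row="25"><th>Player</th><th>From</th></tr>
  <tr><td>Zaid Abdul-Aziz</td><td>1969</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
for tr in soup.find_all("tr"):
    # Drop any row whose first cell is the literal header text "Player"
    first = tr.find(["th", "td"])
    if first and first.get_text(strip=True) == "Player":
        tr.decompose()

rows = soup.find_all("tr")
print(len(rows))  # 2
```

This also removes the top header row; keep it by skipping the first match if the column labels are still needed.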
How do I decompose() a recurring row in a table that I find located in an html page using Python?
The row is a duplicate of the header row. The row occurs over and over again randomly, and I do not want it in the data set (naturally). I think the HTML page has it there to remind the viewer what column attributes they are looking at as they scroll down. Below is a sample of one of the row elements I want to delete: <tr class ="thead" data-row="25> Here is another one: <tr class="thead" data-row="77"> They occur randomly, but is there any way to make a loop that can iterate, find the first cell in the row, and determine that it is in fact the row we want to delete? Because they are identical each time. The first cell is always "Player", identifying the attribute. Below is an example of what that looks like as an HTML element. <th aria-label="Player" data-stat="player" scope="col" class=" poptip sort_default_asc center">Player</th> Maybe I can create a loop that iterates through each row and determines if that first cell says "Player". If it does, then delete that whole row. Is that possible? Here is my code so far: from bs4 import BeautifulSoup import pandas as pd import requests import string years = list(range(2023, 2024)) alphabet = list(string.ascii_lowercase) url_namegather = 'https://www.basketball-reference.com/players/a' lastname_a = 'a' url = url_namegather.format(lastname_a) data = requests.get(url) with open("player_names/lastname_a.html".format(lastname_a), "w+", encoding="utf-8") as f: f.write(data.text) with open("player_names/lastname_a.html", encoding="utf-8") as f: page = f.read() soup = BeautifulSoup(page, "html.parser")
[ "You can read the table directly using pandas. You may need to install lxml package though.\n\ndf = pd.read_html('https://www.basketball-reference.com/players/a')[0]\ndf\n\nThis will get data without any duplicated header rows.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074680578_python.txt
Q: Can't instantiate abstract class Service with abstract method command_line_args I am trying to make my first program using Python to download the meme from one of the sites, and it was working well; after that, it started throwing problems that I do not know how to solve from urllib import request import undetected_chromedriver as UC from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.common.service import Service from undetected_chromedriver._compat import ChromeDriverManager post = "az28zPm" #this 6 digit value is the unique id for the meme options = Options() options.headless = True url = "https://9gag.com/gag/" + post driver = UC.Chrome(service=Service(ChromeDriverManager().install()),options=options) driver.get(url) if not os.path.isdir("Attachments/" + post): os.makedirs("Attachments/" + post) try: video_link = driver.find_element( By.XPATH, '//*[@id]/div[2]/div[1]/a/div/video/source[1]').get_attribute('src') request.urlretrieve(video_link, "Attachments/" + post + "/" + "Meme.mp4") except: image_link = driver.find_element( By.XPATH,'//*[@id]/div[2]/div[1]/a/div/picture/img').get_attribute('src') request.urlretrieve(image_link, "Attachments/" + post + "/" + "Meme.jpg") A: change common in 5th line to chrome if you have: TypeError: Can't instantiate abstract class Service with abstract method command_line_args. before: from selenium.webdriver.common.service import Service after: from selenium.webdriver.chrome.service import Service
Can't instantiate abstract class Service with abstract method command_line_args
I am trying to make my first program using Python to download the meme from one of the sites and it was working well after that it started throwing problems that I do not know how to solve from urllib import request import undetected_chromedriver as UC from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.common.service import Service from undetected_chromedriver._compat import ChromeDriverManager post = "az28zPm" #this 6 digit value is the unique id for the meme options = Options() options.headless = True url = "https://9gag.com/gag/" + post driver = UC.Chrome(service=Service(ChromeDriverManager().install()),options=options) driver.get(url) if not os.path.isdir("Attachments/" + post): os.makedirs("Attachments/" + post) try: video_link = driver.find_element( By.XPATH, '//*[@id]/div[2]/div[1]/a/div/video/source[1]').get_attribute('src') request.urlretrieve(video_link, "Attachments/" + post + "/" + "Meme.mp4") except: image_link = driver.find_element( By.XPATH,'//*[@id]/div[2]/div[1]/a/div/picture/img').get_attribute('src') request.urlretrieve(image_link, "Attachments/" + post + "/" + "Meme.jpg")
[ "change common in 5th line to chrome if you have:\nTypeError: Can't instantiate abstract class Service with abstract method command_line_args.\nbefore:\nfrom selenium.webdriver.common.service import Service\n\nafter:\nfrom selenium.webdriver.chrome.service import Service\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074645457_python.txt
Q: python -m build fails due to syntax error in `long_description` I am failing to get my README.rst file to be working for my long_description within the pyproject.toml file. I am unclear why (advice appreciated, thank you). I have a pyproject.toml file: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [project] name = "growbuddies" version = "2022.12.0" readme = "README.rst" description = "Buddies to help maximize the growth of plants in your indoor garden." dependencies = [ "influxdb ~=5.3.1", "paho_mqtt ~=1.6.1", ] [project.urls] "Homepage" = "https://github.com/solarslurpi/GrowBuddies" [project.scripts] get-readings = "growbuddies.__main__:main" The [project] tables notes readme = "README.RST". At the same directory level as pyproject.toml, I have an EMPTY README.rst file. I run $ twine check dist/* and get: Checking dist/growbuddies-2022.11.28-py3-none-any.whl: FAILED ERROR `long_description` has syntax errors in markup and would not be rendered on PyPI. No content rendered from RST source. WARNING `long_description_content_type` missing. defaulting to `text/x-rst`. Checking dist/growbuddies-2022.12.0-py3-none-any.whl: FAILED ERROR `long_description` has syntax errors in markup and would not be rendered on PyPI. No content rendered from RST source. Checking dist/growbuddies-2022.11.28.tar.gz: PASSED with warnings WARNING `long_description_content_type` missing. defaulting to `text/x-rst`. WARNING `long_description` missing. Checking dist/growbuddies-2022.12.0.tar.gz: PASSED with warnings WARNING `long_description` missing.
python -m build fails due to syntax error in `long_description`
I am failing to get my README.rst file to be working for my long_description within the pyproject.toml file. I am unclear why (advice appreciated, thank you). I have a pyproject.toml file: [build-system] requires = ["setuptools"] build-backend = "setuptools.build_meta" [project] name = "growbuddies" version = "2022.12.0" readme = "README.rst" description = "Buddies to help maximize the growth of plants in your indoor garden." dependencies = [ "influxdb ~=5.3.1", "paho_mqtt ~=1.6.1", ] [project.urls] "Homepage" = "https://github.com/solarslurpi/GrowBuddies" [project.scripts] get-readings = "growbuddies.__main__:main" The [project] tables notes readme = "README.RST". At the same directory level as pyproject.toml, I have an EMPTY README.rst file. I run $ twine check dist/* and get: Checking dist/growbuddies-2022.11.28-py3-none-any.whl: FAILED ERROR `long_description` has syntax errors in markup and would not be rendered on PyPI. No content rendered from RST source. WARNING `long_description_content_type` missing. defaulting to `text/x-rst`. Checking dist/growbuddies-2022.12.0-py3-none-any.whl: FAILED ERROR `long_description` has syntax errors in markup and would not be rendered on PyPI. No content rendered from RST source. Checking dist/growbuddies-2022.11.28.tar.gz: PASSED with warnings WARNING `long_description_content_type` missing. defaulting to `text/x-rst`. WARNING `long_description` missing. Checking dist/growbuddies-2022.12.0.tar.gz: PASSED with warnings WARNING `long_description` missing.
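The twine error "No content rendered from RST source" is consistent with the README being empty: there is nothing to render. A minimal non-empty README.rst (the wording here is just taken from the description field in the question's pyproject.toml) should be enough to give twine something to render:

```rst
GrowBuddies
===========

Buddies to help maximize the growth of plants in your indoor garden.
```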
[]
[]
[ "I changed to README.md; for some reason, this worked.\n" ]
[ -1 ]
[ "pyproject.toml", "python", "restructuredtext" ]
stackoverflow_0074678194_pyproject.toml_python_restructuredtext.txt
Q: Nested loop to take input 5 times and display total and average hoursWorked = 0 researchAssistants = 3 for assisstant in range(researchAssistants): for day in range(5): if day == 0: hoursWorked += float(input("Enter hours for research assistant {0} for Day 1: ".format(assisstant+1))) if day == 1: hoursWorked += float(input("Enter hours for research assistant {0} for Day 2: ".format(assisstant+1))) if day == 2: hoursWorked += float(input("Enter hours for research assistant {0} for Day 3: ".format(assisstant+1))) if day == 3: hoursWorked += float(input("Enter hours for research assistant {0} for Day 4: ".format(assisstant+1))) if day == 4: hoursWorked += float(input("Enter hours for research assistant {0} for Day 5: ".format(assisstant+1))) print() print("Research assistant {0} worked in total".format(assisstant+1), hoursWorked, "hours") avgHoursWorked = hoursWorked / 5 if avgHoursWorked > 6: print("Research assistant {0} has an average number of hours per day above 6".format(assisstant+1) ) print() I want the code to take amount of hours worked per day in a 5-day work week for three employees. I then want it to summarize it to hours/week for each employee. If an employee's average number of hours per day is above 6, the program should flag this. So far I the program takes the input, and gives the total. But the average is wrong. I believe avgHoursWorked should be in the nested for loop - but this does not really work for me. I would rather want the input to be taken first, then display the totals and flag an avg. >6 at the end. Edit: Here is the output as per above code. 
Enter hours for research assistant 1 for Day 1: 5 Enter hours for research assistant 1 for Day 2: 5 Enter hours for research assistant 1 for Day 3: 5 Enter hours for research assistant 1 for Day 4: 5 Enter hours for research assistant 1 for Day 5: 5 Research assistant 1 worked in total 25.0 hours Enter hours for research assistant 2 for Day 1: 5 Enter hours for research assistant 2 for Day 2: 5 Enter hours for research assistant 2 for Day 3: 5 Enter hours for research assistant 2 for Day 4: 5 Enter hours for research assistant 2 for Day 5: 5 Research assistant 2 worked in total 50.0 hours Research assistant 2 has an average number of hours per day above 6 Enter hours for research assistant 3 for Day 1: 5 Enter hours for research assistant 3 for Day 2: 5 Enter hours for research assistant 3 for Day 3: 5 Enter hours for research assistant 3 for Day 4: 5 Enter hours for research assistant 3 for Day 5: 5 Research assistant 3 worked in total 75.0 hours Research assistant 3 has an average number of hours per day above 6 In this case, the hours worked are being added-up where they should really be per individual Research Assistant A: It sounds like you meant to reset hoursWorked for each assistant: researchAssistants = 3 for assisstant in range(researchAssistants): hoursWorked = 0 for day in range(5): ...
Nested loop to take input 5 times and display total and average
hoursWorked = 0 researchAssistants = 3 for assisstant in range(researchAssistants): for day in range(5): if day == 0: hoursWorked += float(input("Enter hours for research assistant {0} for Day 1: ".format(assisstant+1))) if day == 1: hoursWorked += float(input("Enter hours for research assistant {0} for Day 2: ".format(assisstant+1))) if day == 2: hoursWorked += float(input("Enter hours for research assistant {0} for Day 3: ".format(assisstant+1))) if day == 3: hoursWorked += float(input("Enter hours for research assistant {0} for Day 4: ".format(assisstant+1))) if day == 4: hoursWorked += float(input("Enter hours for research assistant {0} for Day 5: ".format(assisstant+1))) print() print("Research assistant {0} worked in total".format(assisstant+1), hoursWorked, "hours") avgHoursWorked = hoursWorked / 5 if avgHoursWorked > 6: print("Research assistant {0} has an average number of hours per day above 6".format(assisstant+1) ) print() I want the code to take amount of hours worked per day in a 5-day work week for three employees. I then want it to summarize it to hours/week for each employee. If an employee's average number of hours per day is above 6, the program should flag this. So far I the program takes the input, and gives the total. But the average is wrong. I believe avgHoursWorked should be in the nested for loop - but this does not really work for me. I would rather want the input to be taken first, then display the totals and flag an avg. >6 at the end. Edit: Here is the output as per above code. 
Enter hours for research assistant 1 for Day 1: 5 Enter hours for research assistant 1 for Day 2: 5 Enter hours for research assistant 1 for Day 3: 5 Enter hours for research assistant 1 for Day 4: 5 Enter hours for research assistant 1 for Day 5: 5 Research assistant 1 worked in total 25.0 hours Enter hours for research assistant 2 for Day 1: 5 Enter hours for research assistant 2 for Day 2: 5 Enter hours for research assistant 2 for Day 3: 5 Enter hours for research assistant 2 for Day 4: 5 Enter hours for research assistant 2 for Day 5: 5 Research assistant 2 worked in total 50.0 hours Research assistant 2 has an average number of hours per day above 6 Enter hours for research assistant 3 for Day 1: 5 Enter hours for research assistant 3 for Day 2: 5 Enter hours for research assistant 3 for Day 3: 5 Enter hours for research assistant 3 for Day 4: 5 Enter hours for research assistant 3 for Day 5: 5 Research assistant 3 worked in total 75.0 hours Research assistant 3 has an average number of hours per day above 6 In this case, the hours worked are being added-up where they should really be per individual Research Assistant
[ "It sounds like you meant to reset hoursWorked for each assistant:\nresearchAssistants = 3\n \nfor assisstant in range(researchAssistants):\n hoursWorked = 0\n for day in range(5):\n ...\n\n" ]
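With that reset in place, the totals and averages come out per assistant. A runnable sketch of the same logic with hard-coded hours instead of input() (the numbers are made up):

```python
hours_by_assistant = {
    1: [5, 5, 5, 5, 5],   # average 5.0 -> not flagged
    2: [8, 7, 6, 9, 8],   # average 7.6 -> flagged
    3: [4, 6, 5, 7, 6],   # average 5.6 -> not flagged
}

flagged = []
for assistant, hours in hours_by_assistant.items():
    hours_worked = 0                      # reset for every assistant: the fix
    for h in hours:
        hours_worked += h
    print(f"Research assistant {assistant} worked in total {hours_worked} hours")
    if hours_worked / 5 > 6:
        flagged.append(assistant)

print(flagged)  # -> [2]
```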
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074680621_python.txt
Q: Python - PRINT_ERRORS=0 or 1 I am new to Python, I am executing a small code, I see below code written at the start of my code PRINT_ERRORS=0 if I run the code as it is, it works fine, however if I change the value to 1 it prints some errors, I want to understand what that line of code is doing there, can anyone help? I am not sure what I should expect from this line of code A: The line of code PRINT_ERRORS=0 is defining a variable called PRINT_ERRORS and setting its value to 0. This variable is likely being used later in the code to control whether or not error messages are printed during the execution of the code. For example, if the code contains a line like if PRINT_ERRORS: print(error_message), then setting PRINT_ERRORS to 0 will prevent the error message from being printed, while setting it to 1 will cause the error message to be printed. In general, this line of code is included in the code to give the user or developer the option to control whether or not error messages are printed during the execution of the code. This can be useful for debugging purposes, as it allows the user to see any error messages that may be generated while the code is running.
Python - PRINT_ERRORS=0 or 1
I am new to Python, I am executing a small code, I see below code written at the start of my code PRINT_ERRORS=0 if I run the code as it is, it works fine, however if I change the value to 1 it prints some errors, I want to understand what that line of code is doing there, can anyone help? I am not sure what I should expect from this line of code
[ "The line of code PRINT_ERRORS=0 is defining a variable called PRINT_ERRORS and setting its value to 0. This variable is likely being used later in the code to control whether or not error messages are printed during the execution of the code.\nFor example, if the code contains a line like if PRINT_ERRORS: print(error_message), then setting PRINT_ERRORS to 0 will prevent the error message from being printed, while setting it to 1 will cause the error message to be printed.\nIn general, this line of code is included in the code to give the user or developer the option to control whether or not error messages are printed during the execution of the code. This can be useful for debugging purposes, as it allows the user to see any error messages that may be generated while the code is running.\n" ]
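A minimal sketch of that pattern, with a toy sentiment-lexicon lookup standing in for the course code (the function and data are invented for illustration):

```python
PRINT_ERRORS = 1  # set to 0 to silence the diagnostic output

def score_word(word, lexicon):
    """Toy lookup: return the word's score, or 0 if it is unknown."""
    if word not in lexicon:
        if PRINT_ERRORS:
            print(f"error: {word!r} not found in lexicon")
        return 0
    return lexicon[word]

lexicon = {"good": 1, "bad": -1}
score = score_word("meh", lexicon)   # prints the error because PRINT_ERRORS is 1
```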
[ 1 ]
[]
[]
[ "exception", "nlp", "printing", "python", "sentiment_analysis" ]
stackoverflow_0074680695_exception_nlp_printing_python_sentiment_analysis.txt
Q: Keep String Formatted When Sending it via Email [PYTHON] I have a string that, when it's printed in my terminal, looks the way I want, but when I use the variable to send it via email, it loses all its formatting. Is there any way I can fix this? This is how it looks on the terminal This is how it looks on email This is how I am declaring and printing for the first image: ticket_medio = (faturamento['Valor Final'] / quantidade['Quantidade']).to_frame() print(ticket_medio) And this is how I am doing the formatted string that is being sent to the user: body_email = f'Segue o ticket médio deste mês:\n\n{ticket_medio}' A: To save a pandas dataframe named ticket_medio to a CSV file, you can use the to_csv method. Here is an example of how to do this: # Import the pandas library import pandas as pd # Save the dataframe to a CSV file ticket_medio.to_csv('ticket_medio.csv') This will create a CSV file named ticket_medio.csv in the current directory. The CSV file will contain the data from the ticket_medio dataframe. Once the CSV file has been created, you can attach it to an email and send it to the desired recipient. The recipient can then open the CSV file in a program like Microsoft Excel or Google Sheets to view the data. Alternatively, you can also specify a different file path to save the CSV file to a different location on your computer, or you can specify additional options to customize the way the data is saved to the CSV file. For more information, you can refer to the documentation for the to_csv method.
Keep String Formatted When Sending it via Email [PYTHON]
I have a string that, when it's printed in my terminal, looks the way I want, but when I use the variable to send it via email, it loses all its formatting. Is there any way I can fix this? This is how it looks on the terminal This is how it looks on email This is how I am declaring and printing for the first image: ticket_medio = (faturamento['Valor Final'] / quantidade['Quantidade']).to_frame() print(ticket_medio) And this is how I am doing the formatted string that is being sent to the user: body_email = f'Segue o ticket médio deste mês:\n\n{ticket_medio}'
[ "To save a pandas dataframe named ticket_medio to a CSV file, you can use the to_csv method. Here is an example of how to do this:\n# Import the pandas library\nimport pandas as pd\n\n# Save the dataframe to a CSV file\nticket_medio.to_csv('ticket_medio.csv')\n\nThis will create a CSV file named ticket_medio.csv in the current directory. The CSV file will contain the data from the ticket_medio dataframe.\nOnce the CSV file has been created, you can attach it to an email and send it to the desired recipient. The recipient can then open the CSV file in a program like Microsoft Excel or Google Sheets to view the data.\nAlternatively, you can also specify a different file path to save the CSV file to a different location on your computer, or you can specify additional options to customize the way the data is saved to the CSV file. For more information, you can refer to the documentation for the to_csv method.\n" ]
[ 0 ]
[ "You can use pandas.DataFrame.to_string.\n\nRender a DataFrame to a console-friendly tabular output.\n\nTry this :\nbody_email = '''Segue o ticket médio destemês:\\n\\n{}'''.format(ticket_medio.to_string())\n\n" ]
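Note that to_string (or any fixed-width text) only stays aligned if the mail client shows it in a monospaced font, which is what the terminal does by default; wrapping the text in an HTML &lt;pre&gt; block is the usual way to force that in email. The fixed-width formatting itself needs nothing beyond stdlib string formatting (the product names and values below are invented):

```python
rows = [
    ("Produto A", 431.25),
    ("Produto B", 87.90),
]

# Left-justify names in 12 columns, right-justify values in 14.
header = f"{'Produto':<12}{'Ticket médio':>14}"
lines = [header] + [f"{name:<12}{value:>14.2f}" for name, value in rows]
body_email = "Segue o ticket médio deste mês:\n\n" + "\n".join(lines)
print(body_email)
```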
[ -1 ]
[ "dataframe", "email", "pandas", "python", "string" ]
stackoverflow_0074680645_dataframe_email_pandas_python_string.txt
Q: How do I combine html, css, vanilla JS and python? I'm taking a python course and I want to make a projects page, displaying all course projects. So far, I've been working on creating a hub page, each project getting a project "card" which, upon being clicked, redirects one to a particular project page. The basic gist of it is this: https://codepen.io/MaFomedanu/pen/mdKGVNN So far I've been using replit to write the python code and then import it on the page using an iframe tag. As the course progressed, I'm required to use Pycharm. I would still like to put projects on my page but I don't really know how. I'm aware that you can't run python on a page the same way you run JS but I've read about Flask. I don't know how it works exactly, but from what I've seen in tutorials, they all create a new page that runs a python script. My question/what I want to achieve: Is it possible for me to create python web apps (pages?) that I can import in my already existing project(which only uses html, css and vanilla JS)? In extra layman terms: how can I click a project card so that I get redirected to a page that allows me to present a working python "app" as if it were run by the browser (the same way JS is)?
How do I combine html, css, vanilla JS and python?
I'm taking a python course and I want to make a projects page, displaying all course projects. So far, I've been working on creating a hub page, each project getting a project "card" which, upon being clicked, redirects one to a particular project page. The basic gist of it is this: https://codepen.io/MaFomedanu/pen/mdKGVNN So far I've been using replit to write the python code and then import it on the page using an iframe tag. As the course progressed, I'm required to use Pycharm. I would still like to put projects on my page but I don't really know how. I'm aware that you can't run python on a page the same way you run JS but I've read about Flask. I don't know how it works exactly, but from what I've seen in tutorials, they all create a new page that runs a python script. My question/what I want to achieve: Is it possible for me to create python web apps (pages?) that I can import in my already existing project(which only uses html, css and vanilla JS)? In extra layman terms: how can I click a project card so that I get redirected to a page that allows me to present a working python "app" as if it were run by the browser (the same way JS is)?
[]
[]
[ "Flask is a library for that allows you to write all the backend code for a web server in Python. Flask handles listening for requests, setting up routes, and things like session management, but it does not run in the browser. You can use Jinja with Flask to insert data into the HTML templates that it returns, but once the webpage is sent to the client, the server no longer has any control over it.\nYou could try using something like PyScript to run Python code on the client side too, but Python does not support this natively, and Flask does not include any functionality like this out of the box.\nIf you already have the HTML, CSS, and JS written for your webpages, all you would need to do in Flask is set up a route for clients to request, then create a function that returns your webpage.\nFlask has some great documentation that describes how to set this up (you'll want to view the rendering templates section to see how to return html files)\nHope this helps!\n" ]
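The server-side half of that flow can be sketched with the standard library alone: a Flask view function is, at its core, a function that builds an HTML string per request and returns it (the template and names below are invented; in Flask you would use render_template with Jinja instead of string.Template):

```python
from string import Template

PROJECT_PAGE = Template("""\
<html>
  <body>
    <h1>$title</h1>
    <p>$description</p>
  </body>
</html>
""")

def project_view(title, description):
    # What a Flask view for a project card's target URL would return:
    # finished HTML, computed on the server, sent back to the browser.
    return PROJECT_PAGE.substitute(title=title, description=description)

html = project_view("Meme Downloader", "A course project written in Python.")
```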
[ -1 ]
[ "flask", "frameworks", "frontend", "javascript", "python" ]
stackoverflow_0074622193_flask_frameworks_frontend_javascript_python.txt
Q: win32api.SendMessage not working when trying to release a button I am trying to send some virtual keycodes to an application while it is out of focus. I get it to work without a problem except for releasing normal keys. I have tried: win32api.SendMessage(hwnd, win32con.WM_KEYUP, VK_CODE["a"]) win32api.PostMessage(hwnd, win32con.WM_KEYUP, VK_CODE["a"]) releasing a key works perfectly with the left mouse button: win32api.SendMessage(hwnd, win32con.WM_LBUTTONUP, win32con.MK_LBUTTON, 0) and using keybd_event: win32api.keybd_event(VK_CODE[i],0 ,win32con.KEYEVENTF_KEYUP ,0) But for some reason when trying to release a key using SendMessage it presses down the button instead. A: SendMessage lParam(0) PostMessage lParam(0) Keystroke Messages INPUT inputs{}; inputs.type = INPUT_KEYBOARD; inputs.ki.wVk = 0x41; inputs.ki.dwFlags = KEYEVENTF_KEYUP; UINT uSent = SendInput(1, &inputs, sizeof(INPUT)); Dead-Character Messages (such as the circumflex key on a German keyboard) WM_KEYDOWN WM_DEADCHAR WM_KEYUP WM_KEYDOWN WM_CHAR WM_KEYUP It depends on how the application implements WM_KEYUP and then simulation is not reliable. Attached is a classic article: You can't simulate keyboard input with PostMessage. A: For anyone reading this later on: the problem was I wasn't specifying the lParam in the function. This post explains it very well. I should also mention Spy++, which made understanding how keypresses in Windows work a lot easier.
win32api.SendMessage not working when trying to release a button
I am trying to send some virtual keycodes to an application while it is out of focus. I get it to work without a problem except for releasing normal keys. I have tried: win32api.SendMessage(hwnd, win32con.WM_KEYUP, VK_CODE["a"]) win32api.PostMessage(hwnd, win32con.WM_KEYUP, VK_CODE["a"]) releasing a key works perfectly with the left mouse button: win32api.SendMessage(hwnd, win32con.WM_LBUTTONUP, win32con.MK_LBUTTON, 0) and using keybd_event: win32api.keybd_event(VK_CODE[i],0 ,win32con.KEYEVENTF_KEYUP ,0) But for some reason when trying to release a key using SendMessage it presses down the button instead.
[ "SendMessage lParam(0)\n\nPostMessage lParam(0)\n\nKeystroke Messages\n\nINPUT inputs{};\ninputs.type = INPUT_KEYBOARD;\ninputs.ki.wVk = 0x41;\ninputs.ki.dwFlags = KEYEVENTF_KEYUP;\nUINT uSent = SendInput(1, &inputs, sizeof(INPUT));\n\n\nDead-Character Messages (such as the circumflex key on a German keyboard)\n\nWM_KEYDOWN\nWM_DEADCHAR\nWM_KEYUP\nWM_KEYDOWN\nWM_CHAR\nWM_KEYUP\n\nIt depends on how the application implements WM_KEYUP and then simulation is not reliable.\nAttached is a classic article: You can't simulate keyboard input with PostMessage.\n", "For anyone reading this later on: the problem was I wasn't specifying the lParam in the function. This post explains it very well.\nI should also mention Spy++, which made understanding how keypresses in Windows work a lot easier.\n" ]
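The second answer's point, that the missing lParam was the problem, matters because WM_KEYUP packs the repeat count, scan code and transition flags into lParam. A sketch of assembling a WM_KEYUP lParam from the documented bit layout (the scan code value is only an example; it varies by keyboard layout):

```python
def make_keyup_lparam(scan_code, extended=False):
    # WM_KEYUP lParam bit layout per the Win32 keystroke-message docs:
    # bits 0-15 repeat count, 16-23 scan code, 24 extended key,
    # 30 previous key state, 31 transition state.
    lparam = 1                       # repeat count is 1
    lparam |= (scan_code & 0xFF) << 16
    if extended:
        lparam |= 1 << 24
    lparam |= 1 << 30                # key was previously down
    lparam |= 1 << 31                # transition: the key is being released
    return lparam

lp = make_keyup_lparam(0x1E)         # 0x1E: scan code for 'A' on most layouts
# win32api.PostMessage(hwnd, win32con.WM_KEYUP, VK_CODE["a"], lp)
```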
[ 0, 0 ]
[]
[]
[ "python", "pywin32", "winapi" ]
stackoverflow_0074532299_python_pywin32_winapi.txt
Q: How to quantify how good the model is after using train_test_split I'm using the train_test_split from sklearn.model_selection. My code looks like the following: x_train, x_test , y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=1234) Edit: After this is done, how do I fit these to the linear regression model, and then see how good this model is? i.e. Which of the four components (x_train, x_test, y_train, or y_test) would I use to calculate MSE or RMSE? And how exactly how would I do that? A: To evaluate the model's performance, you can use the x_test and y_test data sets. These are the datasets that the model has not seen before and will be used to evaluate the model's generalization ability. To calculate the MSE for the model, you can use the mean_squared_error() function from the sklearn.metrics module. These functions take the true values (y_test) and the predicted values (the output of your model on the x_test data) as input and return the MSE or RMSE as a floating-point value.
How to quantify how good the model is after using train_test_split
I'm using the train_test_split from sklearn.model_selection. My code looks like the following: x_train, x_test , y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=1234) Edit: After this is done, how do I fit these to the linear regression model, and then see how good this model is? i.e. Which of the four components (x_train, x_test, y_train, or y_test) would I use to calculate MSE or RMSE? And how exactly how would I do that?
[ "To evaluate the model's performance, you can use the x_test and y_test data sets. These are the datasets that the model has not seen before and will be used to evaluate the model's generalization ability.\nTo calculate the MSE for the model, you can use the mean_squared_error() function from the sklearn.metrics module. These functions take the true values (y_test) and the predicted values (the output of your model on the x_test data) as input and return the MSE or RMSE as a floating-point value.\n" ]
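Concretely, with scikit-learn that is model = LinearRegression().fit(x_train, y_train), then y_pred = model.predict(x_test), then mean_squared_error(y_test, y_pred). The metric itself is simple enough to compute by hand; a plain-Python sketch with made-up numbers:

```python
import math

y_test = [3.0, -0.5, 2.0, 7.0]   # true held-out values
y_pred = [2.5,  0.0, 2.0, 8.0]   # stand-in for model.predict(x_test)

# MSE: mean of squared errors; RMSE: its square root, in the units of y.
mse = sum((t - p) ** 2 for t, p in zip(y_test, y_pred)) / len(y_test)
rmse = math.sqrt(mse)
print(mse, rmse)   # mse is 0.375
```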
[ 0 ]
[]
[]
[ "python", "scikit_learn" ]
stackoverflow_0074680716_python_scikit_learn.txt
Q: Matplotlib: Draw second y-axis with different length I'm trying to make a matplotlib plot with a second y-axis. This works so far, but I was wondering whether it was possible to shorten the second y-axis? Furthermore, I struggle with some other formatting issues. a) I want to draw an arrow on the second y-axis, just as drawn on the first y-axis. b) I want to align the second y-axis at -1, so that the intersection of x- and 2nd y-axis is at(...; -1) c) The x-axis crosses the x- and y-ticks at the origin, which I want to avoid. d) How can I get a common legend for both y-axis? Here is my code snippet so far. fig, ax = plt.subplots() bx = ax.twinx() # 2nd y-axis ax.spines['bottom'].set_position(('data',0)) ax.spines['left'].set_position(('data',0)) ax.xaxis.set_ticks_position('bottom') bx.spines['left'].set_position(('data',-1)) bx.spines['bottom'].set_position(('data',-1)) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) bx.spines["top"].set_visible(False) bx.spines["bottom"].set_visible(False) bx.spines["left"].set_visible(False) ## Graphs x_val = np.arange(0,10) y_val = 0.1*x_val ax.plot(x_val, y_val, 'k--') bx.plot(x_val, -y_val+1, color = 'purple') ## Arrows ms=2 #ax.plot(1, 0, ">k", ms=ms, transform=ax.get_yaxis_transform(), clip_on=False) ax.plot(0, 1, "^k", ms=ms, transform=ax.get_xaxis_transform(), clip_on=False) bx.plot(1, 1, "^k", ms=ms, transform=bx.get_xaxis_transform(), clip_on=False) plt.ylim((-1, 1.2)) bx.set_yticks([-1, -0.75, -0.5, -0.25, 0, 0.25, 0.5]) ## Legend ax.legend([r'$A_{hull}$'], frameon=False, loc='upper left', bbox_to_anchor=(0.2, .75)) plt.show() I've uploaded a screenshot of my plot so far, annotating the questioned points. EDIT: I've changed the plotted values in the code snippet so that the example is easier to reproduce. However, the question is more or less only related to formatting issues so that the actual values are not too important. 
Image is not changed, so don't be surprised when plotting the edited values, the graphs will look differently. A: To avoid the strange overlap at x=0 and y=0, you could leave out the calls to ax.spines[...].set_position(('data',0)). You can change the transforms that place the arrows. Explicitly setting the x and y limits to start at 0 will also have the spines at those positions. ax2.set_bounds(...) shortens the right y-axis. To put items in the legend, each plotted item needs a label. get_legend_handles_labels can fetch the handles and labels of both axes, which can be combined in a new legend. Renaming bx to something like ax2 makes the code easier to compare with existing example code. In matplotlib it often also helps to first put the plotting code and only later changes to limits, ticks and embellishments. import matplotlib.pyplot as plt import pandas as pd import numpy as np fig, ax = plt.subplots() ax2 = ax.twinx() # 2nd y-axis ## Graphs x_val = np.arange(0, 10) y_val = 0.1 * x_val ax.plot(x_val, y_val, 'k--', label=r'$A_{hull}$') ax2.plot(x_val, -y_val + 1, color='purple', label='right axis') ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) ax2.spines["top"].set_visible(False) ax2.spines["bottom"].set_visible(False) ax2.spines["left"].set_visible(False) ax2_upper_bound = 0.55 ax2.spines["right"].set_bounds(-1, ax2_upper_bound) # shorten the right y-axis ## add arrows to spines ms = 2 # ax.plot(1, 0, ">k", ms=ms, transform=ax.get_yaxis_transform(), clip_on=False) ax.plot(0, 1, "^k", ms=ms, transform=ax.transAxes, clip_on=False) ax2.plot(1, ax2_upper_bound, "^k", ms=ms, transform=ax2.get_yaxis_transform(), clip_on=False) # set limits to the axes ax.set_xlim(xmin=0) ax.set_ylim(ymin=0) ax2.set_ylim((-1, 1.2)) ax2.set_yticks(np.arange(-1, 0.5001, 0.25)) ## Legend handles1, labels1 = ax.get_legend_handles_labels() handles2, labels2 = ax2.get_legend_handles_labels() ax.legend(handles1 + handles2, labels1 + labels2, frameon=False, 
loc='upper left', bbox_to_anchor=(0.2, .75)) plt.show()
Matplotlib: Draw second y-axis with different length
I'm trying to make a matplotlib plot with a second y-axis. This works so far, but I was wondering whether it was possible to shorten the second y-axis? Furthermore, I struggle with some other formatting issues. a) I want to draw an arrow on the second y-axis, just as drawn on the first y-axis. b) I want to align the second y-axis at -1, so that the intersection of x- and 2nd y-axis is at(...; -1) c) The x-axis crosses the x- and y-ticks at the origin, which I want to avoid. d) How can I get a common legend for both y-axis? Here is my code snippet so far. fig, ax = plt.subplots() bx = ax.twinx() # 2nd y-axis ax.spines['bottom'].set_position(('data',0)) ax.spines['left'].set_position(('data',0)) ax.xaxis.set_ticks_position('bottom') bx.spines['left'].set_position(('data',-1)) bx.spines['bottom'].set_position(('data',-1)) ax.spines["top"].set_visible(False) ax.spines["right"].set_visible(False) bx.spines["top"].set_visible(False) bx.spines["bottom"].set_visible(False) bx.spines["left"].set_visible(False) ## Graphs x_val = np.arange(0,10) y_val = 0.1*x_val ax.plot(x_val, y_val, 'k--') bx.plot(x_val, -y_val+1, color = 'purple') ## Arrows ms=2 #ax.plot(1, 0, ">k", ms=ms, transform=ax.get_yaxis_transform(), clip_on=False) ax.plot(0, 1, "^k", ms=ms, transform=ax.get_xaxis_transform(), clip_on=False) bx.plot(1, 1, "^k", ms=ms, transform=bx.get_xaxis_transform(), clip_on=False) plt.ylim((-1, 1.2)) bx.set_yticks([-1, -0.75, -0.5, -0.25, 0, 0.25, 0.5]) ## Legend ax.legend([r'$A_{hull}$'], frameon=False, loc='upper left', bbox_to_anchor=(0.2, .75)) plt.show() I've uploaded a screenshot of my plot so far, annotating the questioned points. EDIT: I've changed the plotted values in the code snippet so that the example is easier to reproduce. However, the question is more or less only related to formatting issues so that the actual values are not too important. Image is not changed, so don't be surprised when plotting the edited values, the graphs will look differently.
[ "To avoid the strange overlap at x=0 and y=0, you could leave out the calls to ax.spines[...].set_position(('data',0)). You can change the transforms that place the arrows. Explicitly setting the x and y limits to start at 0 will also have the spines at those positions.\nax2.set_bounds(...) shortens the right y-axis.\nTo put items in the legend, each plotted item needs a label. get_legend_handles_labels can fetch the handles and labels of both axes, which can be combined in a new legend.\nRenaming bx to something like ax2 makes the code easier to compare with existing example code. In matplotlib it often also helps to first put the plotting code and only later changes to limits, ticks and embellishments.\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\nfig, ax = plt.subplots()\nax2 = ax.twinx() # 2nd y-axis\n\n## Graphs\nx_val = np.arange(0, 10)\ny_val = 0.1 * x_val\nax.plot(x_val, y_val, 'k--', label=r'$A_{hull}$')\nax2.plot(x_val, -y_val + 1, color='purple', label='right axis')\n\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\nax2.spines[\"top\"].set_visible(False)\nax2.spines[\"bottom\"].set_visible(False)\nax2.spines[\"left\"].set_visible(False)\nax2_upper_bound = 0.55\nax2.spines[\"right\"].set_bounds(-1, ax2_upper_bound) # shorten the right y-axis\n\n## add arrows to spines\nms = 2\n# ax.plot(1, 0, \">k\", ms=ms, transform=ax.get_yaxis_transform(), clip_on=False)\nax.plot(0, 1, \"^k\", ms=ms, transform=ax.transAxes, clip_on=False)\nax2.plot(1, ax2_upper_bound, \"^k\", ms=ms, transform=ax2.get_yaxis_transform(), clip_on=False)\n\n# set limits to the axes\nax.set_xlim(xmin=0)\nax.set_ylim(ymin=0)\nax2.set_ylim((-1, 1.2))\nax2.set_yticks(np.arange(-1, 0.5001, 0.25))\n\n## Legend\nhandles1, labels1 = ax.get_legend_handles_labels()\nhandles2, labels2 = ax2.get_legend_handles_labels()\nax.legend(handles1 + handles2, labels1 + labels2, frameon=False,\n loc='upper left', bbox_to_anchor=(0.2, 
.75))\n\nplt.show()\n\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074677995_matplotlib_python.txt
Q: Confused With "TypeError: '<=' not supported between instances of 'int' and 'str'" I tried making a random password generator and it gave me this error. Here is my source code It says the problem is at | password="".join(random.sample(characters,USER_INP)) #Variables import random import tkinter from tkinter import simpledialog characters="abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCD
EFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmno
pqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890" USER_INP = simpledialog.askstring(title="Secure Password Gen", prompt="How many characters do you want your password to be? 
*password will be in Terminal*") password="".join(random.sample(characters,USER_INP)) #Gui ROOT = tkinter.Tk() ROOT.withdraw() #Password Generator print("The Password is: ", password) tried to add password="".joinint(random.sample(characters,USER_INP)) A: To use tkinter.simpledialog.askstring, random.choices, and a string of acceptable password characters to generate a random password of a length requested by the user in Python, you can use the following code: import tkinter as tk from tkinter import simpledialog import random # Create a string containing all the acceptable password characters password_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*" # Use the askstring function from the simpledialog module to ask the user for the password length root = tk.Tk() root.withdraw() password_length = simpledialog.askstring("Input", "Enter password length:") # Use the random.choices function to generate a random password of the requested length password = "".join(random.choices(password_chars, k=password_length)) # Print the generated password to the console print(password) This code will use tkinter.simpledialog.askstring to ask the user for the desired password length - it must cast this str input to an int type. It will then use the random.choices function to generate a random password of the requested length, using the string of acceptable password characters as the source of possible characters. Finally, it will print the generated password to the console. The part which your code is missing is the casting from string to integer when the user enters their desired length.
Confused With "TypeError: '<=' not supported between instances of 'int' and 'str'"
I tried making a random password generator and it gave me this error. Here is my source code It says the problem is at | password="".join(random.sample(characters,USER_INP)) #Variables import random import tkinter from tkinter import simpledialog characters="abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXY
Z!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJ
KLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@$1234567890" USER_INP = simpledialog.askstring(title="Secure Password Gen", prompt="How many characters do you want your password to be? *password will be in Terminal*") password="".join(random.sample(characters,USER_INP)) #Gui ROOT = tkinter.Tk() ROOT.withdraw() #Password Generator print("The Password is: ", password) tried to add password="".joinint(random.sample(characters,USER_INP))
[ "To use tkinter.simpledialog.askstring, random.choices, and a string of acceptable password characters to generate a random password of a length requested by the user in Python, you can use the following code:\nimport tkinter as tk\nfrom tkinter import simpledialog\nimport random\n\n# Create a string containing all the acceptable password characters\npassword_chars = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*\"\n\n# Use the askstring function from the simpledialog module to ask the user for the password length,\n# casting the str input to an int\nroot = tk.Tk()\nroot.withdraw()\npassword_length = int(simpledialog.askstring(\"Input\", \"Enter password length:\"))\n\n# Use the random.choices function to generate a random password of the requested length\npassword = \"\".join(random.choices(password_chars, k=password_length))\n\n# Print the generated password to the console\nprint(password)\n\nThis code will use tkinter.simpledialog.askstring to ask the user for the desired password length - it must cast this str input to an int type. It will then use the random.choices function to generate a random password of the requested length, using the string of acceptable password characters as the source of possible characters. Finally, it will print the generated password to the console.\nThe part which your code is missing is the casting from string to integer when the user enters their desired length.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074680657_python.txt
Q: How to report and fix errors while iterating? I would like to iterate through elements in raw_data below, and store the value of f(x) when f(x) gives an error, show the error msg and store this message fix the error that arised due to type, ie "four" instead of 4 Would it be possible to do all three at the same time? import math import sys raw_data = [5,"four", -3,2,1] def f(x): return math.log(x) What I have so far is: import math import sys raw_data = [5,"four", -3,2,1] def f(x): return math.log(x) for x in raw_data: try: print(f(x)) except: print("error:",sys.exc_info()[0]) This gives me a list results: 1.6094379124341003 error: <class 'TypeError'> error: <class 'ValueError'> 0.6931471805599453 0.0 How would I a) store the values of f(x) where there are no errors b) where there are errors, report and store the error message c) correct the type error? Thank you very much in advance A: Assuming you want to store the function results and error messages in two different lists, I'd suggest creating two lists and appending to one or the other in your try/except. Use a dictionary to do the translation between specific strings and their numeric equivalents. results = [] errors = [] num_names = { 'four': 4, } for x in raw_data: x = num_names.get(x, x) try: results.append(f(x)) except Exception as e: errors.append(repr(e)) print("Results:", *results, sep='\n') print("Errors:", *errors, sep='\n') Results: 1.6094379124341003 1.3862943611198906 0.6931471805599453 0.0 Errors: ValueError('math domain error')
How to report and fix errors while iterating?
I would like to iterate through elements in raw_data below, and store the value of f(x) when f(x) gives an error, show the error msg and store this message fix the error that arose due to type, i.e. "four" instead of 4 Would it be possible to do all three at the same time? import math import sys raw_data = [5,"four", -3,2,1] def f(x): return math.log(x) What I have so far is: import math import sys raw_data = [5,"four", -3,2,1] def f(x): return math.log(x) for x in raw_data: try: print(f(x)) except: print("error:",sys.exc_info()[0]) This gives me a list of results: 1.6094379124341003 error: <class 'TypeError'> error: <class 'ValueError'> 0.6931471805599453 0.0 How would I a) store the values of f(x) where there are no errors b) where there are errors, report and store the error message c) correct the type error? Thank you very much in advance
[ "Assuming you want to store the function results and error messages in two different lists, I'd suggest creating two lists and appending to one or the other in your try/except. Use a dictionary to do the translation between specific strings and their numeric equivalents.\nresults = []\nerrors = []\nnum_names = {\n 'four': 4,\n}\n\nfor x in raw_data:\n x = num_names.get(x, x)\n try:\n results.append(f(x))\n except Exception as e:\n errors.append(repr(e))\n\nprint(\"Results:\", *results, sep='\\n')\nprint(\"Errors:\", *errors, sep='\\n')\n\nResults:\n1.6094379124341003\n1.3862943611198906\n0.6931471805599453\n0.0\nErrors:\nValueError('math domain error')\n\n" ]
[ 1 ]
[]
[]
[ "iteration", "loops", "python", "store", "typeerror" ]
stackoverflow_0074680732_iteration_loops_python_store_typeerror.txt
Q: Python: Assigning numbering to list of ascii Need help with making a encryption python program that encrypts with use of ascii values. I have 1-127 random number generator with no repeats and need to basically assign a value to each one. Example: list 1 is (1,2,3...127) list 2 is (54,60,27...) I need to get a list or dictionary of (1 : 54 , 2 : 60 , 3 : 27...). End goal is that after encryption, 54 is assigned to ascii 1 (soh), if the number 54 appears in the encrypted string, then original string had a soh in that slot I do not know the proper way to assign the random number list a number. I think its dictionary but I am not familiar with dict A: You can make a dict from 2 lists with: listsdict = dict(zip(list1, list2)) Additionally then you can iterate through your input string look up the Value like ascii_value = ord(char) # Look up the corresponding value in the dictionary using the ASCII value as the key encrypted_value = dict1[ascii_value] A: Welcome to StackOverflow. I urge you to take a look at the How do I ask a good question?, and How to create a Minimal, Reproducible Example pages so that we can point you in the right direction more easily. You're correct in thinking that a dictionary would be a suitable tool for this problem. You can learn all about dict and how it works in the Python docs page about built-in types. That page has a nifty example that covers what you described perfectly (through the usage of zip): c = dict(zip(['one', 'two', 'three'], [1, 2, 3]))
Python: Assigning numbering to list of ascii
Need help with making an encryption Python program that encrypts with the use of ASCII values. I have a 1-127 random number generator with no repeats and need to basically assign a value to each one. Example: list 1 is (1,2,3...127) list 2 is (54,60,27...) I need to get a list or dictionary of (1 : 54 , 2 : 60 , 3 : 27...). End goal is that after encryption, 54 is assigned to ASCII 1 (SOH); if the number 54 appears in the encrypted string, then the original string had an SOH in that slot. I do not know the proper way to assign the random number list a number. I think it's a dictionary but I am not familiar with dict
[ "You can make a dict from 2 lists with:\nlistsdict = dict(zip(list1, list2))\nYou can then iterate through your input string and look up the value for each character like\nascii_value = ord(char)\n\n# Look up the corresponding value in the dictionary using the ASCII value as the key\nencrypted_value = listsdict[ascii_value]\n\n", "Welcome to StackOverflow. I urge you to take a look at the How do I ask a good question?, and How to create a Minimal, Reproducible Example pages so that we can point you in the right direction more easily.\nYou're correct in thinking that a dictionary would be a suitable tool for this problem.\nYou can learn all about dict and how it works in the Python docs page about built-in types.\nThat page has a nifty example that covers what you described perfectly (through the usage of zip):\nc = dict(zip(['one', 'two', 'three'], [1, 2, 3]))\n\n" ]
[ 0, 0 ]
[]
[]
[ "dictionary", "encryption", "list", "python" ]
stackoverflow_0074680717_dictionary_encryption_list_python.txt
Q: Tkinter: pass arguments when threading a function I'm trying to pass some arguments while threading a function, this is my code: import tkinter as tk from PIL import ImageTk, Image, ImageGrab import time import threading class Flashing(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.first_label = tk.Label(self, image="", foreground="black") self.first_label.pack(padx=50, side="left", anchor="w") self.button = tk.Button( self, text="start", command=threading.Thread(target=self.flash_all, args=[label, img]).start(), ) self.button.pack() def flash_all(self, label, img): for num in range(6): num += 1 if (num % 2) == 0: print("{0} is Even".format(num)) time.sleep(1) label.config(text="one) if (num % 2) == 1: print("{0} is Odd".format(num)) time.sleep(1) self.bip1.play(loops=0) label.config(text='two') if num == 6: time.sleep(1) self.bip2.play(loops=0) label.config(text='three') time.sleep(5) label.config(image="") if __name__ == "__main__": root = tk.Tk() Flashing(root).pack(side="top", fill="both", expand=True) root.mainloop() But I'm getting this error (virt) What shoudl I change to fix it? Important: I've trimmed and change some label in my code to make it more easy to read. The original work fine except for the error I've mention. Thanks you all A: It looks like the Thread object is being called immediately instead of being passed to the Button's command attribute. To fix this, you can define a new function that creates a Thread object and starts it, then pass that function to the Button's command attribute. 
import tkinter as tk import threading import time class Flashing(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.first_label = tk.Label(self, image="", foreground="black") self.first_label.pack(padx=50, side="left", anchor="w") self.button = tk.Button(self, text="start", command=self.start_flash_all) self.button.pack() def start_flash_all(self): # Define a new function that creates a Thread object and starts it def flash_all_thread(): for num in range(6): num += 1 if (num % 2) == 0: print("{0} is Even".format(num)) time.sleep(1) label.config(text="one") if (num % 2) == 1: print("{0} is Odd".format(num)) time.sleep(1) self.bip1.play(loops=0) label.config(text="two") if num == 6: time.sleep(1) self.bip2.play(loops=0) label.config(text="three") time.sleep(5) label.config(image="") # Create a Thread object and start it thread = threading.Thread(target=flash_all_thread) thread.start() if __name__ == "__main__": root = tk.Tk() Flashing(root).pack(side="top", fill="both", expand=True) root.mainloop() In this example, the start_flash_all function creates a new function called flash_all_thread that contains the code from your flash_all function, then creates a Thread object using that function as the target and starts it. This allows you to pass the Thread object to the Button's command attribute without calling it immediately.
Tkinter: pass arguments when threading a function
I'm trying to pass some arguments while threading a function, this is my code: import tkinter as tk from PIL import ImageTk, Image, ImageGrab import time import threading class Flashing(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.first_label = tk.Label(self, image="", foreground="black") self.first_label.pack(padx=50, side="left", anchor="w") self.button = tk.Button( self, text="start", command=threading.Thread(target=self.flash_all, args=[label, img]).start(), ) self.button.pack() def flash_all(self, label, img): for num in range(6): num += 1 if (num % 2) == 0: print("{0} is Even".format(num)) time.sleep(1) label.config(text="one) if (num % 2) == 1: print("{0} is Odd".format(num)) time.sleep(1) self.bip1.play(loops=0) label.config(text='two') if num == 6: time.sleep(1) self.bip2.play(loops=0) label.config(text='three') time.sleep(5) label.config(image="") if __name__ == "__main__": root = tk.Tk() Flashing(root).pack(side="top", fill="both", expand=True) root.mainloop() But I'm getting this error (virt) What should I change to fix it? Important: I've trimmed and changed some labels in my code to make it easier to read. The original works fine except for the error I've mentioned. Thank you all
[ "It looks like the Thread object is being called immediately instead of being passed to the Button's command attribute. To fix this, you can define a new function that creates a Thread object and starts it, then pass that function to the Button's command attribute.\nimport tkinter as tk\nimport threading\nimport time\n\n\nclass Flashing(tk.Frame):\n def __init__(self, parent, *args, **kwargs):\n tk.Frame.__init__(self, parent, *args, **kwargs)\n\n self.first_label = tk.Label(self, image=\"\", foreground=\"black\")\n self.first_label.pack(padx=50, side=\"left\", anchor=\"w\")\n\n self.button = tk.Button(self, text=\"start\", command=self.start_flash_all)\n self.button.pack()\n\n def start_flash_all(self):\n # Define a new function that creates a Thread object and starts it\n def flash_all_thread():\n for num in range(6):\n num += 1\n if (num % 2) == 0:\n print(\"{0} is Even\".format(num))\n time.sleep(1)\n label.config(text=\"one\")\n if (num % 2) == 1:\n print(\"{0} is Odd\".format(num))\n time.sleep(1)\n self.bip1.play(loops=0)\n label.config(text=\"two\")\n if num == 6:\n time.sleep(1)\n self.bip2.play(loops=0)\n label.config(text=\"three\")\n time.sleep(5)\n label.config(image=\"\")\n\n # Create a Thread object and start it\n thread = threading.Thread(target=flash_all_thread)\n thread.start()\n\n\nif __name__ == \"__main__\":\n root = tk.Tk()\n Flashing(root).pack(side=\"top\", fill=\"both\", expand=True)\n root.mainloop()\n\nIn this example, the start_flash_all function creates a new function called flash_all_thread that contains the code from your flash_all function, then creates a Thread object using that function as the target and starts it. This allows you to pass the Thread object to the Button's command attribute without calling it immediately.\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074680767_python_tkinter.txt
Q: Missing numpy lib when trying to install tensorflow I have numpy install as shown. I'm using the instructions for the M1 chip https://developer.apple.com/metal/tensorflow-plugin/ (base) cody@Codys-MBP ~ % pip install numpy --upgrade --force-reinstall Defaulting to user installation because normal site-packages is not writeable Collecting numpy Using cached numpy-1.23.5-cp39-cp39-macosx_11_0_arm64.whl (13.4 MB) Installing collected packages: numpy Attempting uninstall: numpy Found existing installation: numpy 1.23.5 Uninstalling numpy-1.23.5: Successfully uninstalled numpy-1.23.5 WARNING: The scripts f2py, f2py3 and f2py3.9 are installed in '/Users/cody/Library/Python/3.9/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. Successfully installed numpy-1.23.5 (base) cody@Codys-MBP ~ % python3 -c "import tensorflow as tf;" RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf ImportError: numpy.core._multiarray_umath failed to import ImportError: numpy.core.umath failed to import A: It was a numpy version issue. I uninstalled everything , then let tensor flow resolve its dependency.
Missing numpy lib when trying to install tensorflow
I have numpy install as shown. I'm using the instructions for the M1 chip https://developer.apple.com/metal/tensorflow-plugin/ (base) cody@Codys-MBP ~ % pip install numpy --upgrade --force-reinstall Defaulting to user installation because normal site-packages is not writeable Collecting numpy Using cached numpy-1.23.5-cp39-cp39-macosx_11_0_arm64.whl (13.4 MB) Installing collected packages: numpy Attempting uninstall: numpy Found existing installation: numpy 1.23.5 Uninstalling numpy-1.23.5: Successfully uninstalled numpy-1.23.5 WARNING: The scripts f2py, f2py3 and f2py3.9 are installed in '/Users/cody/Library/Python/3.9/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. Successfully installed numpy-1.23.5 (base) cody@Codys-MBP ~ % python3 -c "import tensorflow as tf;" RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf RuntimeError: module compiled against API version 0x10 but this version of numpy is 0xf ImportError: numpy.core._multiarray_umath failed to import ImportError: numpy.core.umath failed to import
[ "It was a numpy version issue. I uninstalled everything, then let TensorFlow resolve its dependencies.\n" ]
[ 0 ]
[]
[]
[ "python", "tensorflow" ]
stackoverflow_0074662366_python_tensorflow.txt
Q: TypeError: unhashable type: 'CatBoostClassifier' Context: I'm trying to use catboost classifier using a dictionary with parameters as such: from catboost import CatBoostClassifier model_params_grid_search = { naive_bayes.MultinomialNB(): { 'param_grid': { 'alpha': [0.01, 0.1, 0.5, 1.0, 10.0], } }, linear_model.LogisticRegression(): { 'param_grid': { 'C': [0.01, 0.1, 0.5, 1.0], 'penalty': ['l1', 'l2'], 'solver': ['liblinear', 'lbfgs', 'saga'], } }, CatBoostClassifier(): { 'param_grid':{...} }, svm.SVC(): { 'param_grid': { 'C': [0.01, 0.1, 0.5, 1.0], 'kernel': ['linear', 'rbf'], 'gamma': ['auto'] } },... To then apply the model class and do some hyperparameter gridsearch. However I keep getting the error TypeError: unhashable type: 'CatBoostClassifier' when running it for CatBoostClassifier(). All other models work fine this way, not sure why CatBoost is giving this error. I just wanted to loop through all the models to find the best one. Thank you!
TypeError: unhashable type: 'CatBoostClassifier'
Context: I'm trying to use catboost classifier using a dictionary with parameters as such: from catboost import CatBoostClassifier model_params_grid_search = { naive_bayes.MultinomialNB(): { 'param_grid': { 'alpha': [0.01, 0.1, 0.5, 1.0, 10.0], } }, linear_model.LogisticRegression(): { 'param_grid': { 'C': [0.01, 0.1, 0.5, 1.0], 'penalty': ['l1', 'l2'], 'solver': ['liblinear', 'lbfgs', 'saga'], } }, CatBoostClassifier(): { 'param_grid':{...} }, svm.SVC(): { 'param_grid': { 'C': [0.01, 0.1, 0.5, 1.0], 'kernel': ['linear', 'rbf'], 'gamma': ['auto'] } },... To then apply the model class and do some hyperparameter gridsearch. However I keep getting the error TypeError: unhashable type: 'CatBoostClassifier' when running it for CatBoostClassifier(). All other models work fine this way, not sure why CatBoost is giving this error. I just wanted to loop through all the models to find the best one. Thank you!
[]
[]
[ "I have the same issue. Did you find a solution?\n" ]
[ -3 ]
[ "catboost", "python" ]
stackoverflow_0073192979_catboost_python.txt
Q: OPEN AI WHISPER : These errors make me mad (help please) I have a problem so I hope some programmers can help me solve it. Basically I run this : import whisper model = whisper.load_model("base") result = model.transcribe('test.mp3', fp16=False) And I get this : Output exceeds the size limit. Open the full output data in a text editor. Error Traceback (most recent call last) File 38 try: 39 # This launches a subprocess to decode audio while down-mixing and resampling as necessary. 40 # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 41 out, _ = ( ---> 42 ffmpeg.input(file, threads=0) 43 .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sr) 44 .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True) 45 ) 46 except ffmpeg.Error as e: File 324 if retcode: --> 325 raise Error('ffmpeg', out, err) 326 return out, err ` Error: ffmpeg error (see stderr output for detail) The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) Cell In[25], line 4 1 import whisper ... libswscale 6. 7.100 / 6. 7.100 libswresample 4. 7.100 / 4. 7.100 libpostproc 56. 6.100 / 56. 6.100 test.mp3: No such file or directory My goal is just to transcribe an mp3 audio file "test.mp3" into text, via the AI Whisper of OpenAI. I just want to make working Whisper with some audio files but these errors prevent me from doing so. And honestly, I've been at it for two days now and I'm not getting anywhere, I've opened the files in question, located lines 42 and 325 but I don't know what to do next. Thank you in advance for your help and your explanations. A: Use os module. import os # get the current working dir cwd = os.getcwd() # construct the full path to the file file_path = os.path.join(cwd, 'test.mp3') # transcribe the file result = model.transcribe(file_path, fp16=False)
OPEN AI WHISPER : These errors make me mad (help please)
I have a problem so I hope some programmers can help me solve it. Basically I run this : import whisper model = whisper.load_model("base") result = model.transcribe('test.mp3', fp16=False) And I get this : Output exceeds the size limit. Open the full output data in a text editor. Error Traceback (most recent call last) File 38 try: 39 # This launches a subprocess to decode audio while down-mixing and resampling as necessary. 40 # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 41 out, _ = ( ---> 42 ffmpeg.input(file, threads=0) 43 .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sr) 44 .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True) 45 ) 46 except ffmpeg.Error as e: File 324 if retcode: --> 325 raise Error('ffmpeg', out, err) 326 return out, err ` Error: ffmpeg error (see stderr output for detail) The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) Cell In[25], line 4 1 import whisper ... libswscale 6. 7.100 / 6. 7.100 libswresample 4. 7.100 / 4. 7.100 libpostproc 56. 6.100 / 56. 6.100 test.mp3: No such file or directory My goal is just to transcribe an mp3 audio file "test.mp3" into text, via the AI Whisper of OpenAI. I just want to make working Whisper with some audio files but these errors prevent me from doing so. And honestly, I've been at it for two days now and I'm not getting anywhere, I've opened the files in question, located lines 42 and 325 but I don't know what to do next. Thank you in advance for your help and your explanations.
[ "Use os module.\nimport os\n\n# get the current working dir\ncwd = os.getcwd()\n\n# construct the full path to the file\nfile_path = os.path.join(cwd, 'test.mp3')\n\n# transcribe the file\nresult = model.transcribe(file_path, fp16=False)\n\n" ]
[ 0 ]
[]
[]
[ "openai", "python", "runtime_error", "speech_to_text", "whisper" ]
stackoverflow_0074680851_openai_python_runtime_error_speech_to_text_whisper.txt
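The `test.mp3: No such file or directory` message from ffmpeg in the record above is a path problem, not a Whisper problem. A minimal sketch of a pre-flight check that fails early with a readable error instead of an opaque ffmpeg traceback (the helper name `resolve_audio` is my own, not part of Whisper's API):

```python
import os

def resolve_audio(path):
    # Resolve the path against the current working directory and raise
    # a clear error before ffmpeg ever sees a missing file.
    abs_path = os.path.abspath(path)
    if not os.path.isfile(abs_path):
        raise FileNotFoundError(f"audio file not found: {abs_path}")
    return abs_path

# result = model.transcribe(resolve_audio("test.mp3"), fp16=False)
```

The absolute path in the error message makes it obvious which directory Whisper is actually being run from.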
Q: selenium-python cannot locate element I need to enter credentials on garmin connect website. I use python 3.10 and chrome=108.0.5359.94. The username element code: <input class="login_email" name="username" id="username" value="" type="email" spellcheck="false" autocorrect="off" autocapitalize="off" aria-required="true"> And I tried the following: browser.maximize_window() browser.implicitly_wait(3) browser.find_element(By.ID, "username") browser.find_element(By.CLASS_NAME, 'input#username.login_email') browser.find_element(By.CLASS_NAME, 'login_email') browser.find_element(By.XPATH, '/html/body/div/div/div[1]/form/div[2]/input') browser.find_element(By.XPATH, '//*[@id="username"]') I get the following error: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: I searched some info and some say it could be due to shadow dom. But I think it's not my case as I cannot see shadow-smth in html structure. Any ideas? A: The login form is inside an iframe. So, to access elements inside it you first need to switch into that iframe. 
The following code works: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(service=webdriver_service, options=options) wait = WebDriverWait(driver, 20) url = "https://connect.garmin.com/signin/" driver.get(url) wait.until(EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, "iframe"))) wait.until(EC.element_to_be_clickable((By.ID, "username"))).send_keys("my_name") wait.until(EC.element_to_be_clickable((By.ID, "password"))).send_keys("my_password") The result is When finished working inside the iframe don't forget to switch to the default content with driver.switch_to.default_content()
selenium-python cannot locate element
I need to enter credentials on garmin connect website. I use python 3.10 and chrome=108.0.5359.94. The username element code: <input class="login_email" name="username" id="username" value="" type="email" spellcheck="false" autocorrect="off" autocapitalize="off" aria-required="true"> And I tried the following: browser.maximize_window() browser.implicitly_wait(3) browser.find_element(By.ID, "username") browser.find_element(By.CLASS_NAME, 'input#username.login_email') browser.find_element(By.CLASS_NAME, 'login_email') browser.find_element(By.XPATH, '/html/body/div/div/div[1]/form/div[2]/input') browser.find_element(By.XPATH, '//*[@id="username"]') I get the following error: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: I searched some info and some say it could be due to shadow dom. But I think it's not my case as I cannot see shadow-smth in html structure. Any ideas?
[ "The login form is inside an iframe. So, to access elements inside it you first need to switch into that iframe.\nThe following code works:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(service=webdriver_service, options=options)\nwait = WebDriverWait(driver, 20)\n\nurl = \"https://connect.garmin.com/signin/\"\n\ndriver.get(url)\n\nwait.until(EC.frame_to_be_available_and_switch_to_it((By.TAG_NAME, \"iframe\")))\n\nwait.until(EC.element_to_be_clickable((By.ID, \"username\"))).send_keys(\"my_name\")\nwait.until(EC.element_to_be_clickable((By.ID, \"password\"))).send_keys(\"my_password\")\n\nThe result is\n\nWhen finished working inside the iframe don't forget to switch to the default content with\ndriver.switch_to.default_content()\n\n" ]
[ 0 ]
[]
[]
[ "iframe", "python", "selenium", "selenium_webdriver", "xpath" ]
stackoverflow_0074680627_iframe_python_selenium_selenium_webdriver_xpath.txt
Q: python for each run async function without await and parallel I have 10 links in my CSV which I'm trying to run all at the same time in a loop from getTasks function. However, the way it's working now, it send a request to link 1, waits for it to complete, then link 2, etc, etc. I want the 10 links that I have to run all whenever startTask is called, leading to 10 requests a second. Anyone know how to code that using the code below? Thanks in advance. import requests from bs4 import BeautifulSoup import asyncio def getTasks(tasks): for task in tasks: asyncio.run(startTask(task)) async def startTask(task): success = await getProduct(task) if success is None: return startTask(task) success = await addToCart(task) if success is None: return startTask(task) ... ... ... getTasks(tasks) A: import asyncio async def getTasks(tasks): # Use asyncio.gather to run multiple tasks concurrently # This will start all the tasks at the same time await asyncio.gather(*[startTask(task) for task in tasks]) async def startTask(task): # Your existing code goes here success = await getProduct(task) if success is None: return startTask(task) success = await addToCart(task) if success is None: return startTask(task) ... ... ... # Use asyncio.run to start the main task asyncio.run(getTasks(tasks))
python for each run async function without await and parallel
I have 10 links in my CSV which I'm trying to run all at the same time in a loop from getTasks function. However, the way it's working now, it send a request to link 1, waits for it to complete, then link 2, etc, etc. I want the 10 links that I have to run all whenever startTask is called, leading to 10 requests a second. Anyone know how to code that using the code below? Thanks in advance. import requests from bs4 import BeautifulSoup import asyncio def getTasks(tasks): for task in tasks: asyncio.run(startTask(task)) async def startTask(task): success = await getProduct(task) if success is None: return startTask(task) success = await addToCart(task) if success is None: return startTask(task) ... ... ... getTasks(tasks)
[ "import asyncio\n\nasync def getTasks(tasks):\n # Use asyncio.gather to run multiple tasks concurrently\n # This will start all the tasks at the same time\n await asyncio.gather(*[startTask(task) for task in tasks])\n\nasync def startTask(task):\n # Your existing code goes here\n success = await getProduct(task)\n if success is None:\n return startTask(task)\n\n success = await addToCart(task)\n if success is None:\n return startTask(task)\n\n ...\n ...\n ...\n\n# Use asyncio.run to start the main task\nasyncio.run(getTasks(tasks))\n\n" ]
[ 0 ]
[]
[]
[ "async_await", "asynchronous", "parallel_processing", "python", "request" ]
stackoverflow_0074661156_async_await_asynchronous_parallel_processing_python_request.txt
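The `asyncio.gather` pattern from the answer above can be sketched self-contained, with a dummy coroutine standing in for the asker's `getProduct`/`addToCart` calls (the names and the sleep are placeholders simulating network latency, not the real functions):

```python
import asyncio

async def fetch(task_id):
    # Stand-in for a real request; the sleep simulates I/O latency.
    await asyncio.sleep(0.01)
    return task_id

async def run_all(task_ids):
    # gather schedules every coroutine at once, so total wall time is
    # roughly one request's latency rather than the sum of all of them.
    return await asyncio.gather(*(fetch(t) for t in task_ids))

results = asyncio.run(run_all(range(10)))
```

`gather` also preserves order: `results` comes back in the order the coroutines were passed in, regardless of which finished first.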
Q: Reverse words in a given String in Python3.8 using functions We are given a string and we need to reverse the words of a given string. How do I do that? I tried, but the compiler doesn't work properly; something is wrong with the syntax. A: I don't know what you tried, but that works: s = "this is a string" rev = " ".join(s.split(" ")[::-1]) print(rev) output: string a is this
Reverse words in a given String in Python3.8 using functions
We are given a string and we need to reverse the words of a given string. How do I do that? I tried, but the compiler doesn't work properly; something is wrong with the syntax.
[ "I don't know what you tried, but that works:\ns = \"this is a string\"\nrev = \" \".join(s.split(\" \")[::-1])\nprint(rev)\n\noutput:\nstring a is this\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074680589_python.txt
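Since the question above explicitly asks for a function, the answer's one-liner can be wrapped up. Note one deliberate change: `split()` with no argument (rather than the answer's `split(" ")`) also collapses runs of whitespace.

```python
def reverse_words(s):
    # Split into words, reverse their order, and join back with spaces.
    return " ".join(s.split()[::-1])

print(reverse_words("this is a string"))  # string a is this
```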
Q: PySpark error: java.net.SocketTimeoutException: Accept timed out I am getting error "java.net.SocketTimeoutException: Accept timed out" while running pyspark using python 3.9.6 and spark 3.3.1. Source code: import json from pyspark.sql import SparkSession from pyspark.sql.functions import * from pyspark.sql.types import StringType with open('config.json') as cfg: json_data = json.load(cfg) dataset_path = json_data['Dataset'] # Init spark spark = SparkSession.builder.master('local[*]').appName('A').getOrCreate() sc = spark.sparkContext # Load Dataset df = spark.read.options(delimiter=';', inferSchema=True, header=True).csv(dataset_path); df.show(5) # Dataset preprocessing # Converts integer to double and converts 'quality' column to categorical @udf(returnType=StringType()) def condition(r): if r == 0: label = "bad" else: label = "good" return label df = df.withColumn("NO2", df["NO2"].cast('double')) df = df.withColumn("O3", df["O3"].cast('double')) df = df.withColumn("PM10", df["PM10"].cast('double')) df = df.withColumn("PM25", df["PM25"].cast('double')) df = df.withColumn('quality', condition('quality')) df.show(5) It happens when I try to apply the condition function for dataframe. The full stack trace: py4j.protocol.Py4JJavaError: An error occurred while calling o60.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 3) (host.docker.internal executor driver): org.apache.spark.SparkException: Python worker failed to connect back. 
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:164) at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:131) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.SocketTimeoutException: Accept timed out at java.base/java.net.PlainSocketImpl.waitForNewConnection(Native Method) at 
java.base/java.net.PlainSocketImpl.socketAccept(PlainSocketImpl.java:163) at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458) at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565) at java.base/java.net.ServerSocket.accept(ServerSocket.java:533) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 24 more Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268) at 
org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:506) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:459) at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48) at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3868) at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2863) at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3858) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3856) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3856) at org.apache.spark.sql.Dataset.head(Dataset.scala:2863) at org.apache.spark.sql.Dataset.take(Dataset.scala:3084) at org.apache.spark.sql.Dataset.getRows(Dataset.scala:288) at org.apache.spark.sql.Dataset.showString(Dataset.scala:327) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at 
py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:164) at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:131) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ... 1 more Caused by: java.net.SocketTimeoutException: Accept timed out at java.base/java.net.PlainSocketImpl.waitForNewConnection(Native Method) at java.base/java.net.PlainSocketImpl.socketAccept(PlainSocketImpl.java:163) at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458) at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565) at java.base/java.net.ServerSocket.accept(ServerSocket.java:533) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 24 more I have tried to google it but the only appropriate question I've found is without answer. A: The solution is to import "findspark" import findspark findspark.init()
PySpark error: java.net.SocketTimeoutException: Accept timed out
I am getting error "java.net.SocketTimeoutException: Accept timed out" while running pyspark using python 3.9.6 and spark 3.3.1. Source code: import json from pyspark.sql import SparkSession from pyspark.sql.functions import * from pyspark.sql.types import StringType with open('config.json') as cfg: json_data = json.load(cfg) dataset_path = json_data['Dataset'] # Init spark spark = SparkSession.builder.master('local[*]').appName('A').getOrCreate() sc = spark.sparkContext # Load Dataset df = spark.read.options(delimiter=';', inferSchema=True, header=True).csv(dataset_path); df.show(5) # Dataset preprocessing # Converts integer to double and converts 'quality' column to categorical @udf(returnType=StringType()) def condition(r): if r == 0: label = "bad" else: label = "good" return label df = df.withColumn("NO2", df["NO2"].cast('double')) df = df.withColumn("O3", df["O3"].cast('double')) df = df.withColumn("PM10", df["PM10"].cast('double')) df = df.withColumn("PM25", df["PM25"].cast('double')) df = df.withColumn('quality', condition('quality')) df.show(5) It happens when I try to apply the condition function for dataframe. The full stack trace: py4j.protocol.Py4JJavaError: An error occurred while calling o60.showString. : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3.0 (TID 3) (host.docker.internal executor driver): org.apache.spark.SparkException: Python worker failed to connect back. 
at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:164) at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:131) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.SocketTimeoutException: Accept timed out at java.base/java.net.PlainSocketImpl.waitForNewConnection(Native Method) at 
java.base/java.net.PlainSocketImpl.socketAccept(PlainSocketImpl.java:163) at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458) at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565) at java.base/java.net.ServerSocket.accept(ServerSocket.java:533) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 24 more Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608) at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182) at scala.Option.foreach(Option.scala:407) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2249) at org.apache.spark.SparkContext.runJob(SparkContext.scala:2268) at 
org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:506) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:459) at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:48) at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3868) at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2863) at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3858) at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510) at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3856) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:109) at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:169) at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:95) at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779) at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64) at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3856) at org.apache.spark.sql.Dataset.head(Dataset.scala:2863) at org.apache.spark.sql.Dataset.take(Dataset.scala:3084) at org.apache.spark.sql.Dataset.getRows(Dataset.scala:288) at org.apache.spark.sql.Dataset.showString(Dataset.scala:327) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at 
py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: org.apache.spark.SparkException: Python worker failed to connect back. at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:189) at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:109) at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:124) at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:164) at org.apache.spark.sql.execution.python.BatchEvalPythonExec.evaluate(BatchEvalPythonExec.scala:81) at org.apache.spark.sql.execution.python.EvalPythonExec.$anonfun$doExecute$2(EvalPythonExec.scala:131) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855) at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365) at org.apache.spark.rdd.RDD.iterator(RDD.scala:329) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:136) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551) at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ... 1 more Caused by: java.net.SocketTimeoutException: Accept timed out at java.base/java.net.PlainSocketImpl.waitForNewConnection(Native Method) at java.base/java.net.PlainSocketImpl.socketAccept(PlainSocketImpl.java:163) at java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458) at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565) at java.base/java.net.ServerSocket.accept(ServerSocket.java:533) at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:176) ... 24 more I have tried to google it but the only appropriate question I've found is without answer.
[ "The solution is to import \"findspark\"\nimport findspark\nfindspark.init()\n\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "pyspark", "python" ]
stackoverflow_0074679957_apache_spark_pyspark_python.txt
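Besides the `findspark` fix in the answer above, a common cause of `Python worker failed to connect back` is the worker processes launching a different Python than the driver. A hedged sketch of pinning both to the current interpreter — these environment variables must be set before the `SparkSession` is built:

```python
import os
import sys

# Make Spark's Python workers use the exact interpreter that runs the
# driver script; a version mismatch between the two commonly produces
# "Python worker failed to connect back" / SocketTimeoutException.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable
```

This is only a sketch of one possible cause; if the worker interpreter was already correct, the `findspark` route is the one to try.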
Q: Images aren't showed in tabs in tkinter python I'm trying to show a picture using canvas or directly in the tab, but it doesn't work: it doesn't show an error, but the picture is not displayed. What am I doing wrong? I need to use a vertical scrollbar and add some widgets. I tried using canvas.create_image and labels, but the pictures aren't being shown.
This is my main code:
import tkinter as tk
from tkinter import ttk
from Listado import *
import os

class Gui(ttk.Frame):
    def __init__(self):
        self.db_filename = 'Tienda.db'

        # Temp file
        self.temp_dir = "temp"
        dir_list = os.listdir("./")
        for file in dir_list:
            if file.lower() == "temp":
                self.temp_dir = file
                break
            else:
                self.temp_dir = "temp"
        if not os.path.exists(self.temp_dir):
            os.mkdir(self.temp_dir)

        self.create_gui()

    def create_gui(self):
        self.main_window = tk.Tk()
        self.main_window.attributes('-fullscreen', True)
        self.screen_width = self.main_window.winfo_screenwidth()
        self.screen_height = self.main_window.winfo_screenheight()

        self.tabs = ttk.Notebook(self.main_window)
        self.tab_admin = ttk.Frame(self.tabs)
        self.tab_listado = ttk.Frame(self.tabs)
        self.tab_carrito = ttk.Frame(self.tabs)
        self.tab_ventas = ttk.Frame(self.tabs)
        self.tab_login = ttk.Frame(self.tabs)
        self.tabs.add(self.tab_admin, text='Administrar')
        self.tabs.add(self.tab_listado, text='Ver listado')
        self.tabs.add(self.tab_carrito, text='Carrito')
        self.tabs.add(self.tab_ventas, text='Ventas')
        self.tabs.add(self.tab_login, text='Login')

        self.listado = Listado(self)

        self.tabs.pack(expand=1, fill="both")
        self.main_window.mainloop()

if __name__ == "__main__":
    Gui()

This is the file named "Listado":
import tkinter as tk
from tkinter import ttk
from tkinter import *
from tkinter import scrolledtext as st
from PIL import Image, ImageTk

class Listado:
    def __init__(self, Gui):
        self.create_gui(Gui)

    def create_gui(self, Gui):
        img = Image.open("laptop.png")
        img = img.resize((100, 100), Image.ANTIALIAS)
        img = ImageTk.PhotoImage(img)

        container = ttk.Frame(Gui.tab_admin)
        canvas = tk.Canvas(container)
        scrollbar = ttk.Scrollbar(container, orient="vertical", command=canvas.yview)
        scrollable_frame = ttk.Frame(canvas)
        scrollable_frame.bind(
            "<Configure>",
            lambda e: canvas.configure(
                scrollregion=canvas.bbox("all")
            )
        )
        canvas.create_window((0, 0), window=scrollable_frame, anchor="nw")
        canvas.configure(yscrollcommand=scrollbar.set)

        for i in range(200):
            ttk.Label(scrollable_frame, text="Sample scrolling label", image=img).pack()

        container.pack(side="left", fill="both", expand=True)
        canvas.pack(side="left", fill="both", expand=True)
        scrollbar.pack(side="right", fill="y")
        ttk.Label(scrollable_frame, image=img).pack()
Images aren't showed in tabs in tkinter python
I'm trying to show a picture using canvas or directly in the tab, but it doesn't work: it doesn't show an error, but the picture is not displayed. What am I doing wrong? I need to use a vertical scrollbar and add some widgets. I tried using canvas.create_image and labels, but the pictures aren't being shown.
This is my main code:
import tkinter as tk
from tkinter import ttk
from Listado import *
import os

class Gui(ttk.Frame):
    def __init__(self):
        self.db_filename = 'Tienda.db'

        # Temp file
        self.temp_dir = "temp"
        dir_list = os.listdir("./")
        for file in dir_list:
            if file.lower() == "temp":
                self.temp_dir = file
                break
            else:
                self.temp_dir = "temp"
        if not os.path.exists(self.temp_dir):
            os.mkdir(self.temp_dir)

        self.create_gui()

    def create_gui(self):
        self.main_window = tk.Tk()
        self.main_window.attributes('-fullscreen', True)
        self.screen_width = self.main_window.winfo_screenwidth()
        self.screen_height = self.main_window.winfo_screenheight()

        self.tabs = ttk.Notebook(self.main_window)
        self.tab_admin = ttk.Frame(self.tabs)
        self.tab_listado = ttk.Frame(self.tabs)
        self.tab_carrito = ttk.Frame(self.tabs)
        self.tab_ventas = ttk.Frame(self.tabs)
        self.tab_login = ttk.Frame(self.tabs)
        self.tabs.add(self.tab_admin, text='Administrar')
        self.tabs.add(self.tab_listado, text='Ver listado')
        self.tabs.add(self.tab_carrito, text='Carrito')
        self.tabs.add(self.tab_ventas, text='Ventas')
        self.tabs.add(self.tab_login, text='Login')

        self.listado = Listado(self)

        self.tabs.pack(expand=1, fill="both")
        self.main_window.mainloop()

if __name__ == "__main__":
    Gui()

This is the file named "Listado":
import tkinter as tk
from tkinter import ttk
from tkinter import *
from tkinter import scrolledtext as st
from PIL import Image, ImageTk

class Listado:
    def __init__(self, Gui):
        self.create_gui(Gui)

    def create_gui(self, Gui):
        img = Image.open("laptop.png")
        img = img.resize((100, 100), Image.ANTIALIAS)
        img = ImageTk.PhotoImage(img)

        container = ttk.Frame(Gui.tab_admin)
        canvas = tk.Canvas(container)
        scrollbar = ttk.Scrollbar(container, orient="vertical", command=canvas.yview)
        scrollable_frame = ttk.Frame(canvas)
        scrollable_frame.bind(
            "<Configure>",
            lambda e: canvas.configure(
                scrollregion=canvas.bbox("all")
            )
        )
        canvas.create_window((0, 0), window=scrollable_frame, anchor="nw")
        canvas.configure(yscrollcommand=scrollbar.set)

        for i in range(200):
            ttk.Label(scrollable_frame, text="Sample scrolling label", image=img).pack()

        container.pack(side="left", fill="both", expand=True)
        canvas.pack(side="left", fill="both", expand=True)
        scrollbar.pack(side="right", fill="y")
        ttk.Label(scrollable_frame, image=img).pack()
[]
[]
[ "It looks like you are trying to display an image in a Tkinter canvas widget. However, you are not keeping a reference to the img object that you create, which means that it will be garbage collected and will not be displayed in the canvas.\nTo fix this, you need to keep a reference to the img object. You can do this by assigning it to a variable that is accessible in the scope where you use it to create the image in the canvas. Here is an example of how you could do this:\n# Import the necessary modules\nimport tkinter as tk\nfrom PIL import Image, ImageTk\n\n# Create the main window\nwindow = tk.Tk()\n\n# Load the image and resize it\nimg = Image.open(\"laptop.png\")\nimg = img.resize((100, 100), Image.ANTIALIAS)\n\n# Create a canvas widget\ncanvas = tk.Canvas(window)\n\n# Use the ImageTk.PhotoImage class to create a Tkinter-compatible image\n# object, and keep a reference to it\nimg = ImageTk.PhotoImage(img)\n\n# Create an image item in the canvas and display the image\ncanvas.create_image(0, 0, image=img, anchor=\"nw\")\n\n# Pack the canvas to display it\ncanvas.pack()\n\n# Start the main event loop\nwindow.mainloop()\n\nIn this code, we keep a reference to the img object by assigning it to a variable with the same name. We then pass this variable as the image option when we create the image item in the canvas. This ensures that the img object is not garbage collected and the image is displayed in the canvas.\nYou can apply this same principle to your code to fix the issue with the image not being displayed. You can either assign the img object to a variable with the same name in your create_gui function, or you can create a class attribute to store the reference to the img object and use it in the create_gui function.\n" ]
[ -2 ]
[ "canvas", "image", "python", "scrollbar", "tkinter" ]
stackoverflow_0074680916_canvas_image_python_scrollbar_tkinter.txt
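The answer's core claim — a tkinter PhotoImage vanishes unless some Python object holds a strong reference to it — can be demonstrated without a display. This is a sketch of the underlying garbage-collection behavior only; FakeImage and the helper names are placeholders I invented, not tkinter API:

```python
import gc
import weakref

class FakeImage:
    """Stand-in for ImageTk.PhotoImage; widgets hold it only weakly."""
    pass

def label_without_reference():
    img = FakeImage()        # local variable only: dropped on return
    return weakref.ref(img)

def label_with_reference(store):
    img = FakeImage()
    store.append(img)        # keep a strong reference, e.g. self.images.append(img)
    return weakref.ref(img)

kept = []
dead = label_without_reference()
alive = label_with_reference(kept)
gc.collect()

print(dead() is None)       # True: the unreferenced image was collected
print(alive() is not None)  # True: the stored image survives
```

Applied to the question's code, storing the image on an attribute that outlives create_gui (for example, a list the Listado instance keeps) before passing it to ttk.Label should keep it alive.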
Q: How can I create a window in python? Write a program that displays a rectangle whose frame consists of asterisk ' * ' characters and whose inner part consists of ' Q ' characters. The program will ask the user to indicate the number of rows and columns of the rectangle; these values cannot be less than 3. I tried to create various print() calls one below the other, but I don't understand how to make them adapt to the user, in the sense that if the user asks me for 10 lines I don't know how to make this happen... The window should look like this:
*********************
*QQQQQQQQQQQQQQQQQQQ*
*QQQQQQQQQQQQQQQQQQQ*
*QQQQQQQQQQQQQQQQQQQ*
*********************

A: # Ask the user for the number of rows and columns
num_rows = int(input("Enter the number of rows: "))
num_cols = int(input("Enter the number of columns: "))

# Make sure the values are at least 3
num_rows = max(num_rows, 3)
num_cols = max(num_cols, 3)

# Print the top row of asterisks
print("*" * num_cols)

# Print the middle rows of asterisks and Qs
for i in range(num_rows - 2):
    print("*" + "Q" * (num_cols - 2) + "*")

# Print the bottom row of asterisks
print("*" * num_cols)
How can I create a window in python?
Write a program that displays a rectangle whose frame consists of asterisk ' * ' characters and whose inner part consists of ' Q ' characters. The program will ask the user to indicate the number of rows and columns of the rectangle; these values cannot be less than 3. I tried to create various print() calls one below the other, but I don't understand how to make them adapt to the user, in the sense that if the user asks me for 10 lines I don't know how to make this happen... The window should look like this:
*********************
*QQQQQQQQQQQQQQQQQQQ*
*QQQQQQQQQQQQQQQQQQQ*
*QQQQQQQQQQQQQQQQQQQ*
*********************
[ "# Ask the user for the number of rows and columns\nnum_rows = int(input(\"Enter the number of rows: \"))\nnum_cols = int(input(\"Enter the number of columns: \"))\n\n# Make sure the values are at least 3\nnum_rows = max(num_rows, 3)\nnum_cols = max(num_cols, 3)\n\n# Print the top row of asterisks\nprint(\"*\" * num_cols)\n\n# Print the middle rows of asterisks and Qs\nfor i in range(num_rows - 2):\n print(\"*\" + \"Q\" * (num_cols - 2) + \"*\")\n\n# Print the bottom row of asterisks\nprint(\"*\" * num_cols)\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074680867_python.txt
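The answer silently clamps out-of-range input with max(). An alternative sketch (the draw_window name is mine) builds the rectangle as a string instead of printing row by row, which makes it easy to test and lets the caller decide whether to clamp or re-prompt:

```python
def draw_window(rows: int, cols: int) -> str:
    """Return a rows x cols rectangle: a '*' frame around a 'Q' interior."""
    if rows < 3 or cols < 3:
        raise ValueError("rows and columns must both be at least 3")
    top = "*" * cols                          # full frame row
    middle = "*" + "Q" * (cols - 2) + "*"     # one interior row
    return "\n".join([top] + [middle] * (rows - 2) + [top])

# Reproduces the 5 x 21 example from the question
print(draw_window(5, 21))
```

To re-prompt instead of clamping, the input() loop would simply retry while draw_window raises ValueError.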
Q: Access packages outside of current package setup.py I am trying to access packages outside of the current package using setup.py. My project structure looks like this.
Example1/
|-- submodule1/
|   |-- __init__.py
|   |-- main/
|   |   |-- __init__.py
|   |   |-- hello.py
|   |-- setup.py
|-- submodule2/
|   |-- __init__.py
|   |-- main/
|   |   |-- __init__.py
|   |   |-- world.py
|   |-- setup.py
|-- submodule3/
|   |-- __init__.py
|   |-- main/
|   |   |-- __init__.py
|   |   |-- sample.py
|   |-- setup.py
|-- utils/
|   |-- __init__.py
|   |-- util_code1.py
|   |-- util_code2.py

I am trying to include the utils package dir in the setup.py of the submodules. Here is how my setup.py looks:
setup(
    name='sample_package',
    description='my test wheel',
    #packages=find_packages(),
    packages=['main', '../../utils']
    entry_points={
        'group_1': 'module1=Example1.main.hello:method1'
    }
    ],
    include_package_data=True,
)

When I run the command python setup.py bdist_wheel inside any submodule to create a wheel, I am getting the following error.
error: package directory '../../utils' does not exist

A: It looks like the setup.py file you provided is not correct. In the packages parameter of the setup function, you are trying to include the ../../utils directory as a package, but this directory does not exist relative to the setup.py file.
In order to include the utils package, you should include it as utils instead of ../../utils. This will make the setup function look for the utils package in the same directory as the setup.py file.
Here is how your setup.py file should look:
setup(
    name='sample_package',
    description='my test wheel',
    #packages=find_packages(),
    packages=['main', 'utils']
    entry_points={
        'group_1': 'module1=Example1.main.hello:method1'
    }
    ],
    include_package_data=True,
)

You can also use the find_packages function from the setuptools package to automatically find all packages in your project. This can be useful if you have a complex project with many subdirectories.
Here is an example of how you can use the find_packages function in your setup.py file:
from setuptools import find_packages

setup(
    name='sample_package',
    description='my test wheel',
    packages=find_packages(),
    entry_points={
        'group_1': 'module1=Example1.main.hello:method1'
    }
    ],
    include_package_data=True,
)

This will automatically find all packages in your project and include them in the wheel file that is generated when you run the python setup.py bdist_wheel command.

A: #It looks like you are trying to include the utils package in the setup.py file of your submodules. However, the way you have specified the package in the setup function is incorrect.

#To include a package in your setup.py file, you need to specify the package name and its path relative to the setup.py file. In your case, the utils package is located at ../../utils, but this is not a valid package name. Instead, you need to specify the package name, which is utils, and its relative path, which is ../../utils.

#Here is how you can fix this error:

setup(
    name='sample_package',
    description='my test wheel',
    #packages=find_packages(),
    packages=['main', 'utils'],
    package_dir={'utils': '../../utils'},
    entry_points={
        'group_1': 'module1=Example1.main.hello:method1'
    }
    ],
    include_package_data=True,
)

#The package_dir parameter specifies the package name and its relative path, so that the setup.py script knows where to find the package.

#You can then run the python setup.py bdist_wheel command to build the wheel for your submodule.
Access packages outside of current package setup.py
I am trying to access packages outside of the current package using setup.py. My project structure looks like this.
Example1/
|-- submodule1/
|   |-- __init__.py
|   |-- main/
|   |   |-- __init__.py
|   |   |-- hello.py
|   |-- setup.py
|-- submodule2/
|   |-- __init__.py
|   |-- main/
|   |   |-- __init__.py
|   |   |-- world.py
|   |-- setup.py
|-- submodule3/
|   |-- __init__.py
|   |-- main/
|   |   |-- __init__.py
|   |   |-- sample.py
|   |-- setup.py
|-- utils/
|   |-- __init__.py
|   |-- util_code1.py
|   |-- util_code2.py

I am trying to include the utils package dir in the setup.py of the submodules. Here is how my setup.py looks:
setup(
    name='sample_package',
    description='my test wheel',
    #packages=find_packages(),
    packages=['main', '../../utils']
    entry_points={
        'group_1': 'module1=Example1.main.hello:method1'
    }
    ],
    include_package_data=True,
)

When I run the command python setup.py bdist_wheel inside any submodule to create a wheel, I am getting the following error.
error: package directory '../../utils' does not exist
[ "It looks like the setup.py file you provided is not correct. In the packages parameter of the setup function, you are trying to include the ../../utils directory as a package, but this directory does not exist relative to the setup.py file.\nIn order to include the utils package, you should include it as utils instead of ../../utils. This will make the setup function look for the utils package in the same directory as the setup.py file.\nHere is how your setup.py file should look:\nsetup(\n name='sample_package',\n description='my test wheel',\n #packages=find_packages(), \n packages=['main', 'utils']\n entry_points={\n 'group_1': 'module1=Example1.main.hello:method1'\n }\n ],\n include_package_data=True,\n)\n\nYou can also use the find_packages function from the setuptools package to automatically find all packages in your project. This can be useful if you have a complex project with many subdirectories.\nHere is an example of how you can use the find_packages function in your setup.py file:\nfrom setuptools import find_packages\n\nsetup(\n name='sample_package',\n description='my test wheel',\n packages=find_packages(), \n entry_points={\n 'group_1': 'module1=Example1.main.hello:method1'\n }\n ],\n include_package_data=True,\n)\n\nThis will automatically find all packages in your project and include them in the wheel file that is generated when you run the python setup.py bdist_wheel command.\n", "#It looks like you are trying to include the utils package in the setup.py file of your submodules. However, the way you have specified the package in the setup function is incorrect.\n\n#To include a package in your setup.py file, you need to specify the package name and its path relative to the setup.py file. In your case, the utils package is located at ../../utils, but this is not a valid package name. 
Instead, you need to specify the package name, which is utils, and its relative path, which is ../../utils.\n\n#Here is how you can fix this error:\n\nsetup(\n name='sample_package',\n description='my test wheel',\n #packages=find_packages(), \n packages=['main', 'utils'],\n package_dir={'utils': '../../utils'},\n entry_points={\n 'group_1': 'module1=Example1.main.hello:method1'\n }\n ],\n include_package_data=True,\n)\n#The package_dir parameter specifies the package name and its relative path, so that the setup.py script knows where to find the package.\n\n#You can then run the python setup.py bdist_wheel command to build the wheel for your submodule.\n\n" ]
[ 0, 0 ]
[ "The issue is that the package_dir parameter in your setup.py file is not correctly specifying the path to the utils package. The package_dir parameter should be a dictionary that maps package names to the directories where the packages are located. In your case, you could add the following to your setup.py file to correctly specify the path to the utils package:\npackage_dir={\n 'main': 'main',\n 'utils': '../../utils'\n}\n\nWith this change, your setup.py file should look like this:\nsetup(\n name='sample_package',\n description='my test wheel',\n #packages=find_packages(), \n packages=['main', 'utils'],\n package_dir={\n 'main': 'main',\n 'utils': '../../utils'\n },\n entry_points={\n 'group_1': 'module1=Example1.main.hello:method1'\n }\n ],\n include_package_data=True,\n)\n\nThis should fix the error and allow you to correctly create a wheel for your submodule. For more information about the package_dir parameter and other parameters you can use in setup.py, you can check out the documentation for the setuptools package here.\n" ]
[ -1 ]
[ "python", "python_packaging", "setup.py", "setuptools" ]
stackoverflow_0074652871_python_python_packaging_setup.py_setuptools.txt
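Both answers turn on how setuptools resolves packages relative to setup.py. That behavior can be checked directly with find_packages, which discovers packages rooted at the directory you pass and returns importable names, never relative paths — which is why '../../utils' fails as a package name but can work as a package_dir path. A minimal sketch, assuming setuptools is installed (the directory names are invented):

```python
import os
import tempfile
from setuptools import find_packages

# Build a throwaway layout: root/main/__init__.py and root/utils/__init__.py
root = tempfile.mkdtemp()
for pkg in ("main", "utils"):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, "__init__.py"), "w").close()

# Discovery is rooted at `where`; results are package *names*, not paths
found = sorted(find_packages(where=root))
print(found)  # ['main', 'utils']
```

A path such as '../../utils' never appears in the result, so listing it in packages= can only work when package_dir maps the name 'utils' to that path.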
Q: How to run an alter table migration with alembic - taking too long and never ends I'm trying to run a migration with alembic (add a column), but it is taking too long and never ends. The table has 100 rows and I don't see an error.
This is my migration code in python:
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = 'd6fe1dec4bcd'
down_revision = '3f532791c5f3'
branch_labels = None
depends_on = None

def upgrade() -> None:
    op.add_column('products2', sa.Column(
        'product_status', sa.String(255)))

def downgrade() -> None:
    op.drop_column('products2', 'product_status')

This is what I see in postgres when I check SELECT * FROM pg_stat_activity WHERE state = 'active';
ALTER TABLE products2 ADD COLUMN product_status VARCHAR(255)

This is what I see in the terminal:
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 3f532791c5f3 -> d6fe1dec4bcd, create product status column

How can I fix this? I'm running postgres on Google Cloud Console, but I don't see any error on their platform.

A: Get the active locks from pg_locks:
SELECT t.relname, l.locktype, page, virtualtransaction, pid, mode, granted
FROM pg_locks l, pg_stat_all_tables t
WHERE l.relation = t.relid
ORDER BY relation asc;

Copy the pid (ex: 14210) from the above result and substitute it in the below command.
SELECT pg_terminate_backend(14210)
How to run an alter table migration with alembic - taking too long and never ends
I'm trying to run a migration with alembic (add a column), but it is taking too long and never ends. The table has 100 rows and I don't see an error.
This is my migration code in python:
"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = 'd6fe1dec4bcd'
down_revision = '3f532791c5f3'
branch_labels = None
depends_on = None

def upgrade() -> None:
    op.add_column('products2', sa.Column(
        'product_status', sa.String(255)))

def downgrade() -> None:
    op.drop_column('products2', 'product_status')

This is what I see in postgres when I check SELECT * FROM pg_stat_activity WHERE state = 'active';
ALTER TABLE products2 ADD COLUMN product_status VARCHAR(255)

This is what I see in the terminal:
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 3f532791c5f3 -> d6fe1dec4bcd, create product status column

How can I fix this? I'm running postgres on Google Cloud Console, but I don't see any error on their platform.
[ "Get the active locks from pg_locks:\nSELECT t.relname, l.locktype, page, virtualtransaction, pid, mode, granted\nFROM pg_locks l, pg_stat_all_tables t \nWHERE l.relation = t.relid \nORDER BY relation asc;\nCopy the pid(ex: 14210) from above result and substitute in the below command.\n\nSELECT pg_terminate_backend(14210)\n\n" ]
[ 0 ]
[]
[]
[ "google_cloud_platform", "postgresql", "python" ]
stackoverflow_0074680825_google_cloud_platform_postgresql_python.txt
Q: How to delete all rows from pandas dataframe1 that do NOT exist in pandas dataframe2 I have two pandas dataframes, data1 and data2. They each have album and artist columns along with other columns that are different attributes. For the sake of what I'm trying to do, I want to delete all of the rows in data2 that DO NOT exist in data1. So, essentially I want all of the album and artists in data2 to match data1. Does anyone know the right way to go about this in python? TIA!
So far I've tried:
data2 = data2[data2['album', 'artist'].isin(data1['album', 'artist'])]

but it doesn't like the ',' to get both attributes to match.

A: To remove all rows from a dataframe that do not exist in another dataframe, you can use the merge() method from pandas, along with the indicator parameter. The indicator parameter allows you to specify whether you want to keep only the rows that exist in both dataframes (the default behavior), only the rows that exist in the left dataframe, only the rows that exist in the right dataframe, or all rows from both dataframes.
For example, to remove all rows from data1 that do not exist in data2, you can use the merge() method with the indicator parameter set to 'right_only', like this:
# Merge data1 and data2 on the 'album' and 'artist' columns
merged_data = data1.merge(data2, on=['album', 'artist'], indicator=True)

# Keep only the rows where the _merge column is 'right_only'
merged_data = merged_data[merged_data['_merge'] == 'right_only']

# Drop the _merge column
merged_data = merged_data.drop('_merge', axis=1)

# Print the first few rows of the merged dataframe
print(merged_data.head())

This will create a new dataframe called merged_data that contains only the rows from data1 that do not exist in data2. The _merge column indicates whether the row exists in both dataframes ('both'), only in the left dataframe ('left_only'), only in the right dataframe ('right_only'), or in neither dataframe ('neither'). In this case, we use the _merge column to filter the dataframe and keep only the rows that have a value of 'right_only'. Then, we drop the _merge column from the dataframe, since it is no longer needed.

A: May be this solves your case:
# First, create a new column that concatenates the album and artist columns in data1
data1['combo'] = data1['album'] + data1['artist']

# Repeat this for data2
data2['combo'] = data2['album'] + data2['artist']

# Next, keep only the rows in data2 where the combo column exists in data1
data2 = data2[data2['combo'].isin(data1['combo'])]

# Finally, drop the combo column from both dataframes
data1.drop(columns=['combo'], inplace=True)
data2.drop(columns=['combo'], inplace=True)

This approach creates a new column in each dataframe that concatenates the album and artist columns, and then uses the isin method to keep only the rows in data2 where the combo column exists in data1. The combo columns are then dropped from both dataframes.
Note that this approach assumes that there are no duplicate rows in either dataframe. If there are duplicate rows, you may need to use a different approach, such as grouping by the combo column and then keeping only groups that exist in both dataframes.

A: You can use the merge method in Pandas to join the two dataframes on the album and artist columns and keep only the rows that exist in both dataframes. Here is an example of how you could do this:
import pandas as pd

# Create some sample dataframes
data1 = pd.DataFrame({
    "album": ["Thriller", "Back in Black", "The Dark Side of the Moon"],
    "artist": ["Michael Jackson", "AC/DC", "Pink Floyd"],
    "year": [1982, 1980, 1973]
})

data2 = pd.DataFrame({
    "album": ["The Bodyguard", "Thriller", "The Dark Side of the Moon"],
    "artist": ["Whitney Houston", "Michael Jackson", "Pink Floyd"],
    "genre": ["Soundtrack", "Pop", "Rock"]
})

# Merge the dataframes on the album and artist columns, and keep only the rows that exist in both dataframes
merged_data = data1.merge(data2, on=["album", "artist"], how="inner")

# Print the result
print(merged_data)

This code will print the following dataframe:
                       album           artist  year genre
0                   Thriller  Michael Jackson  1982   Pop
1  The Dark Side of the Moon       Pink Floyd  1973  Rock

As you can see, this dataframe only contains the rows that exist in both data1 and data2. You can then use this dataframe instead of data2 to work with the rows that exist in both dataframes.
Note that the merge method will also join the columns from the two dataframes, so you may need to drop any unnecessary columns or rename columns with the same name to avoid conflicts. You can do this using the drop and rename methods in Pandas, respectively. For example:
# Drop the "genre" column from the merged dataframe
merged_data = merged_data.drop("genre", axis=1)

# Rename the "year" column in the merged dataframe
merged_data = merged_data.rename({"year": "release_year"}, axis=1)

# Print the result
print(merged_data)

This code will print the following dataframe:
album artist
How to delete all rows from pandas dataframe1 that do NOT exist in pandas dataframe2
I have two pandas dataframes, data1 and data2. They each have album and artist columns along with other columns that are different attributes. For the sake of what I'm trying to do, I want to delete all of the rows in data2 that DO NOT exist in data1. So, essentially I want all of the album and artists in data2 to match data1. Does anyone know the right way to go about this in python? TIA!
So far I've tried:
data2 = data2[data2['album', 'artist'].isin(data1['album', 'artist'])]

but it doesn't like the ',' to get both attributes to match.
[ "To remove all rows from a dataframe that do not exist in another dataframe, you can use the merge() method from pandas, along with the indicator parameter. The indicator parameter allows you to specify whether you want to keep only the rows that exist in both dataframes (the default behavior), only the rows that exist in the left dataframe, only the rows that exist in the right dataframe, or all rows from both dataframes.\nFor example, to remove all rows from data1 that do not exist in data2, you can use the merge() method with the indicator parameter set to 'right_only', like this:\n# Merge data1 and data2 on the 'album' and 'artist' columns\nmerged_data = data1.merge(data2, on=['album', 'artist'], indicator=True)\n\n# Keep only the rows where the _merge column is 'right_only'\nmerged_data = merged_data[merged_data['_merge'] == 'right_only']\n\n# Drop the _merge column\nmerged_data = merged_data.drop('_merge', axis=1)\n\n# Print the first few rows of the merged dataframe\nprint(merged_data.head())\n\nThis will create a new dataframe called merged_data that contains only the rows from data1 that do not exist in data2. The _merge column indicates whether the row exists in both dataframes ('both'), only in the left dataframe ('left_only'), only in the right dataframe ('right_only'), or in neither dataframe ('neither'). In this case, we use the _merge column to filter the dataframe and keep only the rows that have a value of 'right_only'. 
Then, we drop the _merge column from the dataframe, since it is no longer needed.\n", "May be this solves your case:\n# First, create a new column that concatenates the album and artist columns in data1\ndata1['combo'] = data1['album'] + data1['artist']\n\n# Repeat this for data2\ndata2['combo'] = data2['album'] + data2['artist']\n\n# Next, keep only the rows in data2 where the combo column exists in data1\ndata2 = data2[data2['combo'].isin(data1['combo'])]\n\n# Finally, drop the combo column from both dataframes\ndata1.drop(columns=['combo'], inplace=True)\ndata2.drop(columns=['combo'], inplace=True)\n\n\nThis approach creates a new column in each dataframe that concatenates the album and artist columns, and then uses the isin method to keep only the rows in data2 where the combo column exists in data1. The combo columns are then dropped from both dataframes.\nNote that this approach assumes that there are no duplicate rows in either dataframe. If there are duplicate rows, you may need to use a different approach, such as grouping by the combo column and then keeping only groups that exist in both dataframes.\n", "You can use the merge method in Pandas to join the two dataframes on the album and artist columns and keep only the rows that exist in both dataframes. 
Here is an example of how you could do this:\nimport pandas as pd\n\n# Create some sample dataframes\ndata1 = pd.DataFrame({\n \"album\": [\"Thriller\", \"Back in Black\", \"The Dark Side of the Moon\"],\n \"artist\": [\"Michael Jackson\", \"AC/DC\", \"Pink Floyd\"],\n \"year\": [1982, 1980, 1973]\n})\n\ndata2 = pd.DataFrame({\n \"album\": [\"The Bodyguard\", \"Thriller\", \"The Dark Side of the Moon\"],\n \"artist\": [\"Whitney Houston\", \"Michael Jackson\", \"Pink Floyd\"],\n \"genre\": [\"Soundtrack\", \"Pop\", \"Rock\"]\n})\n\n# Merge the dataframes on the album and artist columns, and keep only the rows that exist in both dataframes\nmerged_data = data1.merge(data2, on=[\"album\", \"artist\"], how=\"inner\")\n\n# Print the result\nprint(merged_data)\n\nThis code will print the following dataframe:\n album artist year genre\n0 Thriller Michael Jackson 1982 Pop\n1 The Dark Side of the Moon Pink Floyd 1973 Rock\n\nAs you can see, this dataframe only contains the rows that exist in both data1 and data2. You can then use this dataframe instead of data2 to work with the rows that exist in both dataframes.\nNote that the merge method will also join the columns from the two dataframes, so you may need to drop any unnecessary columns or rename columns with the same name to avoid conflicts. You can do this using the drop and rename methods in Pandas, respectively. For example:\n# Drop the \"genre\" column from the merged dataframe\nmerged_data = merged_data.drop(\"genre\", axis=1)\n\n# Rename the \"year\" column in the merged dataframe\nmerged_data = merged_data.rename({\"year\": \"release_year\"}, axis=1)\n\n# Print the result\nprint(merged_data)\n\nThis code will print the following dataframe:\nalbum artist\n\n" ]
[ 0, 0, -1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074680948_dataframe_pandas_python.txt
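The asker's attempt fails because data2['album', 'artist'] is not a two-column selection, and neither answer shows the exact semi-join they describe: keep only the rows of data2 whose (album, artist) pair appears in data1, without pulling in data1's other columns. A hedged sketch of that operation (the sample data is invented for illustration):

```python
import pandas as pd

data1 = pd.DataFrame({
    "album": ["Thriller", "Back in Black"],
    "artist": ["Michael Jackson", "AC/DC"],
    "year": [1982, 1980],
})
data2 = pd.DataFrame({
    "album": ["Thriller", "The Bodyguard"],
    "artist": ["Michael Jackson", "Whitney Houston"],
    "genre": ["Pop", "Soundtrack"],
})

# Semi-join: merge only on the key columns of data1, deduplicated,
# so no extra columns from data1 leak into the result.
keys = data1[["album", "artist"]].drop_duplicates()
filtered = data2.merge(keys, on=["album", "artist"], how="inner")

print(filtered)  # only the Thriller / Michael Jackson row survives
```

Deduplicating the keys first also avoids the row-multiplication the second answer warns about when data1 contains repeated (album, artist) pairs.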
Q: pyenv deletes python after installing I tried installing python with pyenv install 3.11.0 (though this happens no matter the version) on my Raspberry Pi. When the install was running, there was a 3.11.0 directory in ~/.pyenv/versions, pyenv versions recognized it, and the installed python is actually usable, but the dir disappeared after the installation process finished.
Raspberry Pi OS - Debian GNU/Linux 11 (bullseye) aarch64
Aside from one time when it errored out, this has happened every time I tried installing, including 3.11, 3.10, and 3.9.

A: #It sounds like something went wrong with the installation of Python on your Raspberry Pi. The first thing you should try is running the pyenv install command with the --verbose flag, which will provide you with more detailed output and may help you identify the issue. For example:

pyenv install 3.11.0 --verbose

#If that doesn't help, you can try removing the Python version that was partially installed and then try installing it again. You can use the pyenv uninstall command to remove the partially installed Python version, followed by the pyenv install command to try installing it again. For example:

pyenv uninstall 3.11.0
pyenv install 3.11.0

#If you continue to have problems, you may want to try installing a different version of Python, such as 3.9.0 or 3.8.6, which are the latest versions of Python 3.9 and 3.8, respectively. You can use the same pyenv install command to install these versions. For example:

pyenv install 3.9.0
pyenv install 3.8.6

#If you still can't get Python installed on your Raspberry Pi, you may want to try reinstalling pyenv itself, using the pyenv-installer script, which you can download from the pyenv GitHub page (https://github.com/pyenv/pyenv-installer). This script will automatically install pyenv and its dependencies on your system, which may help resolve any issues you're experiencing.

curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash

#Alternatively, you can try manually installing pyenv and its dependencies using your system's package manager. For example, on a Raspberry Pi running Debian or Raspbian, you can use the apt-get command to install pyenv and its dependencies.

sudo apt-get update
sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
xz-utils tk-dev libffi-dev liblzma-dev python-openssl git

#After installing pyenv and its dependencies, you can try installing Python again using the pyenv install command.

#I hope this helps! Let me know if you have any other questions.
pyenv deletes python after installing
I tried installing python with pyenv install 3.11.0 (though this happens no matter the version) on my Raspberry Pi. When the install was running, there was a 3.11.0 directory in ~/.pyenv/versions, pyenv versions recognized it, and the installed python is actually usable, but the dir disappeared after the installation process finished.
Raspberry Pi OS - Debian GNU/Linux 11 (bullseye) aarch64
Aside from one time when it errored out, this has happened every time I tried installing, including 3.11, 3.10, and 3.9.
[ "#It sounds like something went wrong with the installation of Python on your Raspberry Pi. The first thing you should try is running the pyenv install command with the --verbose flag, which will provide you with more detailed output and may help you identify the issue. For example:\n\npyenv install 3.11.0 --verbose\n\n#If that doesn't help, you can try removing the Python version that was partially installed and then try installing it again. You can use the pyenv uninstall command to remove the partially installed Python version, followed by the pyenv install command to try installing it again. For example:\n\npyenv uninstall 3.11.0\npyenv install 3.11.0\n\n#If you continue to have problems, you may want to try installing a different version of Python, such as 3.9.0 or 3.8.6, which are the latest versions of Python 3.9 and 3.8, respectively. You can use the same pyenv install command to install these versions. For example:\n\npyenv install 3.9.0\npyenv install 3.8.6\n\n#If you still can't get Python installed on your Raspberry Pi, you may want to try reinstalling pyenv itself, using the pyenv-installer script, which you can download from the pyenv GitHub page (https://github.com/pyenv/pyenv-installer). This script will automatically install pyenv and its dependencies on your system, which may help resolve any issues you're experiencing.\n\ncurl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash\n\n#Alternatively, you can try manually installing pyenv and its dependencies using your system's package manager. 
For example, on a Raspberry Pi running Debian or Raspbian, you can use the apt-get command to install pyenv and its dependencies.\n\nsudo apt-get update\nsudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \\\nlibreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \\\nxz-utils tk-dev libffi-dev liblzma-dev python-openssl git\n#After installing pyenv and its dependencies, you can try installing Python again using the pyenv install command.\n\n#I hope this helps! Let me know if you have any other questions.\n\n" ]
[ 0 ]
[]
[]
[ "linux", "pyenv", "python", "raspberry_pi" ]
stackoverflow_0074648670_linux_pyenv_python_raspberry_pi.txt
Q: Installing Cartopy error on Windows 10 with VSCode

I am trying to install Cartopy on my laptop. I have Windows 10 and use VSCode. When installing Cartopy with pip install cartopy I get the following error:

lib/cartopy/trace.cpp(767): fatal error C1083: Cannot open include file: 'geos_c.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2

I installed shapely, matplotlib and pygeos beforehand, but somehow that doesn't do the trick. I then tried to install GEOS, but didn't succeed; apparently you have to use CMake to install it correctly, but that didn't work (I still get the same error). Is it possible to install it without installing Anaconda? (I have seen that a lot online.) Any help/advice would help me greatly.
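One way to diagnose the C1083 error before rebuilding is to check whether geos_c.h is actually visible in any directory the compiler will search. A minimal stdlib sketch of that check — the candidate paths below are examples, not your actual install locations:

```python
import os
from typing import List, Optional


def find_header(header: str, search_dirs: List[str]) -> Optional[str]:
    """Return the first path containing the header file, or None if absent."""
    for d in search_dirs:
        candidate = os.path.join(d, header)
        if os.path.isfile(candidate):
            return candidate
    return None


# MSVC searches the directories listed in the INCLUDE environment variable,
# plus wherever a GEOS build may have installed its headers (example paths).
candidates = [p for p in os.environ.get("INCLUDE", "").split(os.pathsep) if p]
candidates += [r"C:\Program Files\GEOS\include", r"C:\OSGeo4W\include"]
print(find_header("geos_c.h", candidates))
```

If this prints None, the build failure is expected: either install GEOS and put its include directory on INCLUDE (and its lib directory on LIB) before running pip install cartopy, or try a newer Cartopy release, since recent versions publish prebuilt Windows wheels on PyPI that avoid compiling against GEOS at all.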
Installing Cartopy error on Windows 10 with VSCode
I al trying to install Cartopy on my laptop. I have Windows 10, and use VSCode. When installing Cartopy with pip install cartopyI get the following error: ` lib/cartopy/trace.cpp(767): fatal error C1083: Cannot open include file: 'geos_c.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 ` I installed shapely, matplotlib and pygeos beforehands, but somehow it doesn't seem to do the trick. I then tried to install GEOS, but didnt succeed, apparently you have to use CMAKE to install it correctly but htet didn't work. (still get the same error) is it possible to install it without installing Anaconda ? (I have seen that a lot online) Any help/advice please would help me greatly.
[]
[]
[ "Yes, you can install Cartopy without Anaconda by using the pip package manager. However, the error you are getting indicates that the geos_c.h header file is missing, which is required for Cartopy to build and work properly.\nIn order to fix this issue, you will need to install the GEOS library, which provides the geos_c.h header file. You can install GEOS using the pip command, like this:\npip install geos\n\nThis will install GEOS and its dependencies, including the geos_c.h header file. Once GEOS is installed, you should be able to install Cartopy using the pip install cartopy command without any errors.\nIf you are still having issues with the installation, you may need to set the INCLUDE and LIB environment variables to point to the directories where the GEOS headers and libraries are installed. You can do this by setting the INCLUDE variable to the path of the geos_c.h header file, and the LIB variable to the path of the geos_c.dll library file. For example:\nset INCLUDE=C:\\Program Files\\GEOS\\include\nset LIB=C:\\Program Files\\GEOS\\lib\n\nOnce you have set these variables, you should be able to run the pip install cartopy command without any errors.\nNote that you may need to restart your terminal or command prompt in order for the changes to the environment variables to take effect. Additionally, the paths to the geos_c.h and geos_c.dll files may be different on your system, so you will need to adjust the paths in the INCLUDE and LIB variables accordingly.\n" ]
[ -1 ]
[ "cartopy", "cmake", "geos", "python" ]
stackoverflow_0074680953_cartopy_cmake_geos_python.txt
Q: discord.py limiting a command to only be a slash command

I am trying to make a command that is only a slash command, however my bot uses hybrid commands and normal prefix commands and I'm not sure how to make it just a slash command.

@client.event
async def on_message(message):
    if message.content.lower() == ";report" or message.content.lower() == ";suggest":
        return
    await client.process_commands(message)

@client.hybrid_command(name = "report", with_app_command=True, description="Make a suggestion to the bot or report an issue", aliases=["suggest"])
@commands.guild_only()
async def report(interaction: discord.Interaction):
    await interaction.response.send_modal(my_modal())

I tried making an on_message event that ignores the prefix command but it ignores the on_message and listens to the command. I've tried @tree.command and @client.slash_command but they don't work.

A: In discord.py 2.x, a hybrid command callback receives a commands.Context, and its interaction attribute is an Interaction when the command was invoked as a slash command and None when it was invoked with the message prefix. You can use that to make report respond only to slash invocations:

@client.hybrid_command(name="report", with_app_command=True, description="Make a suggestion to the bot or report an issue", aliases=["suggest"])
@commands.guild_only()
async def report(ctx: commands.Context):
    if ctx.interaction is not None:
        # Invoked as a slash command, so process it
        await ctx.interaction.response.send_modal(my_modal())
    else:
        # Invoked as a prefix command, so ignore it
        return

If the command never needs to work with a prefix at all, you can also register it as an application command only, using @client.tree.command(...) instead of a hybrid command.
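For reference, the slash-only guard reduces to a null check that can be exercised without a running bot. This sketch assumes discord.py 2.x semantics, where a hybrid command's commands.Context.interaction is None for prefix invocations; the helper name is hypothetical:

```python
from typing import Any, Optional


def invoked_as_slash(interaction: Optional[Any]) -> bool:
    """True when a hybrid command was invoked as a slash command.

    Assumes discord.py 2.x semantics: ctx.interaction is None for
    prefix invocations and an Interaction object otherwise.
    """
    return interaction is not None


# Prefix invocation: ctx.interaction is None, so the command is ignored
print(invoked_as_slash(None))      # False
# Slash invocation: ctx.interaction is set, so the modal can be sent
print(invoked_as_slash(object()))  # True
```

Inside the hybrid callback this becomes: if invoked_as_slash(ctx.interaction): await ctx.interaction.response.send_modal(my_modal()).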
discord.py limiting a command to only be a slash command
I am trying to make a command that is only a slash command however my bot uses hybrid commands and normal prefix commands and Im not sure how to make it just a slash command. @client.event async def on_message(message): if message.content.lower() == ";report" or message.content.lower() == ";suggest": return await client.process_commands(message) @client.hybrid_command(name = "report", with_app_command=True, description="Make a suggestion to the bot or report an issue", aliases=["suggest"]) @commands.guild_only() async def report(interaction: discord.Interaction): await interaction.response.send_modal(my_modal()) I tried making an on_message event that ignores the prefix command but it ignores the on_message and listens to the command. I've tried @tree.command and @client.slash_command but they don't work.
[ "To make a command that can only be used as a slash command, you can use the is_slash_command attribute of the Interaction object in your command function. This attribute will be True if the command was called using the slash command syntax, and False if it was called using a prefix or hybrid command.\nHere is an example of how you can use this attribute to limit your report command to only be used as a slash command:\n@client.hybrid_command(name=\"report\", with_app_command=True, description=\"Make a suggestion to the bot or report an issue\", aliases=[\"suggest\"])\n@commands.guild_only()\nasync def report(interaction: discord.Interaction):\n if interaction.is_slash_command:\n # The command was called using the slash command syntax, so we can process it\n await interaction.response.send_modal(my_modal())\n else:\n # The command was not called using the slash command syntax, so we will ignore it\n return\n\nNote that you can also use the command attribute of the Interaction object to access the command object itself, if you need to access its attributes or perform other operations on it.\n@client.hybrid_command(name=\"report\", with_app_command=True, description=\"Make a suggestion to the bot or report an issue\", aliases=[\"suggest\"])\n@commands.guild_only()\nasync def report(interaction: discord.Interaction):\n if interaction.is_slash_command:\n # Print the name of the command being called\n print(interaction.command.name)\n await interaction.response.send_modal(my_modal())\n else:\n return\n\nEDIT:\nMaybe is_app_command\n@client.hybrid_command(name=\"report\", with_app_command=True, description=\"Make a suggestion to the bot or report an issue\", aliases=[\"suggest\"])\n@commands.guild_only()\nasync def report(interaction: discord.Interaction):\n if interaction.is_app_command:\n # Print the name of the command being called\n print(interaction.command.name)\n await interaction.response.send_modal(my_modal())\n else:\n return\n\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074680538_discord_discord.py_python.txt
Q: Exception has occurred: ValueError Data cardinality is ambiguous:

Trying to build an RNN model for the first time. For some reason I am getting a cardinality error and I am not sure why. Each column is labeled, has a respective date, and has a value in the value field. Excluding the header I have 142 values in each column.

ERROR:

Exception has occurred: ValueError
Data cardinality is ambiguous:
x sizes: 142
y sizes: 141
Make sure all arrays contain the same number of samples.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

training_set = pd.read_csv(r'~/Desktop/HII.csv')
training_set = training_set.iloc[:, 1:2].values

from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler()
training_set = sc.fit_transform(training_set)

X_train = training_set[0:142]
y_train = training_set[1:142]
X_train = np.reshape(X_train, (142, 1, 1))

A: The error message is telling you that the sizes of the x and y arrays are different: you are creating an X array with 142 samples but providing only 141 values for y.

Here is the code that is causing the error:

X_train = training_set[0:142]
y_train = training_set[1:142]

The training_set array has 142 samples, so X_train has 142 rows, but y_train only contains the second through the last values of training_set, which is 141 rows. That is why you are getting the error message.

To fix the problem, make the two slices the same length. For next-step prediction you typically pair each input with the value that follows it:

X_train = training_set[0:141]
y_train = training_set[1:142]

Both arrays now have 141 samples (each y value is the corresponding X value shifted forward by one position), so the reshape becomes np.reshape(X_train, (141, 1, 1)) and the cardinality error will not occur.
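The size mismatch can be reproduced with plain Python slices (a list stands in for the scaled array here), which also shows a consistent next-step pairing:

```python
# 142 "samples", standing in for the scaled training_set array
data = list(range(142))

X_wrong = data[0:142]  # 142 samples
y_wrong = data[1:142]  # stops before index 142 -> only 141 samples
print(len(X_wrong), len(y_wrong))  # 142 141 -> the ambiguous cardinality

# Next-step pairing: drop the last input so every X[i] has a target y[i] = X[i+1]
X = data[0:141]
y = data[1:142]
print(len(X), len(y))  # 141 141
print(X[0], y[0])      # 0 1 -> y is X shifted forward by one position
```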
Exception has occurred: ValueError Data cardinality is ambiguous:
Trying to build an RNN model for the first time. For some reason I am getting a cardinality error and I am not sure why. Each column is labeled, has a respective date, and has a value in the value field. Excluding the header I have 142 values in each column. ERROR Exception has occurred: ValueError Data cardinality is ambiguous: x sizes: 142 y sizes: 141 Make sure all arrays contain the same number of samples. `import numpy as np import pandas as pd import matplotlib.pyplot as plt training_set=pd.read_csv(r'~/Desktop/HII.csv') training_set=training_set.iloc[:,1:2].values from sklearn.preprocessing import MinMaxScaler sc= MinMaxScaler() training_set=sc.fit_transform(training_set) X_train= training_set[0:142] y_train= training_set[1:142] X_train=np.reshape(X_train, (142 , 1 , 1))`
[ "The error message is telling you that the sizes of the x and y arrays are different. You are trying to create a dataset with 142 samples, but you are only providing 141 values for the y array.\nHere is the code that is causing the error:\nX_train= training_set[0:142]\ny_train= training_set[1:142]\n\nThe training_set array has 142 samples, so the X_train array is correct. However, the y_train array only contains the second through the last values of the training_set array, so it only has 141 values. This is why you are getting the error message.\nTo fix the problem, you can simply add one more value to the y_train array. For example, you could use the following code to create the X_train and y_train arrays:\nX_train = training_set[0:142]\ny_train = training_set[0:142]\n\nIn this case, the y_train array will contain the same values as the X_train array, shifted by one position. This will make the sizes of the x and y arrays match, so the error will not occur.\n" ]
[ 0 ]
[]
[]
[ "ml", "python", "recurrent_neural_network" ]
stackoverflow_0074681011_ml_python_recurrent_neural_network.txt
Q: Process a large file using Apache Airflow Task Groups

I need to process a zip file (that contains a text file) using task groups in airflow. No. of lines can vary from 1 to 50 Million. I want to read the text file in the zip file, process each line, and write the processed line to another text file, zip it, update Postgres tables and call another DAG to transmit this new zip file to an SFTP server.

Since a single task can take more time to process a file with millions of lines, I would like to process the file using a task group. That is, a single task in the task group can process a certain no. of lines and transform them. For example, if we receive a file with 15 Million lines, 6 task groups can be called to process 2.5 Million lines each.

But I am confused how to make the task group dynamic and pass the offset to each task. Below is a sample that I tried with fixed offsets in islice():

def start_task(**context):
    print("starting the Main task...")


def apply_transformation(line):
    return f"{line}_NEW"


def task1(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 1, 2000000):
                    apply_transformation(record)


def task2(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 2000001, 4000000):
                    apply_transformation(record)


def task3(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 4000001, 6000000):
                    apply_transformation(record)


def task4(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 6000001, 8000000):
                    apply_transformation(record)


def task5(**context):
    data = context['dag_run'].conf
    file_name = data.get("file_name")
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                for record in islice(fp, 8000001, 10000000):
                    apply_transformation(record)


def final_task(**context):
    print("This is the final task to update postgres tables and call SFTP DAG...")


with DAG("main", schedule_interval=None, default_args=default_args, catchup=False) as dag:

    st = PythonOperator(
        task_id='start_task',
        dag=dag,
        python_callable=start_task
    )

    with TaskGroup(group_id='task_group_1') as tg1:
        t1 = PythonOperator(
            task_id='task1',
            python_callable=task1,
            dag=dag,
        )
        t2 = PythonOperator(
            task_id='task2',
            python_callable=task2,
            dag=dag,
        )
        t3 = PythonOperator(
            task_id='task3',
            python_callable=task3,
            dag=dag,
        )
        t4 = PythonOperator(
            task_id='task4',
            python_callable=task4,
            dag=dag,
        )
        t5 = PythonOperator(
            task_id='task5',
            python_callable=task5,
            dag=dag,
        )

    ft = PythonOperator(
        task_id='final_task',
        dag=dag,
        python_callable=final_task
    )

    st >> tg1 >> ft

After applying the transformation to each line, I want to get these transformed lines from the different tasks, merge them into a new file and do the rest of the operations in final_task. Or are there any other methods to process large files with millions of lines in parallel?

A: Apache Spark, Apache Hadoop, and Apache Flink are distributed computing frameworks that can be used to process large datasets in parallel. They can be used to read the text file in the zip file, process each line in parallel, and write the processed lines to another text file. After that, you can zip the file, update Postgres tables, and call another DAG to transmit the new zip file to an SFTP server.
A: Yes, there are several methods to process large files with millions of lines in parallel. Here are a few options:

MapReduce: MapReduce is a programming model for distributed computing. It splits the data into chunks and processes each chunk in parallel. It is an efficient way to process large datasets.

Apache Spark: Apache Spark is an open source distributed computing platform which can be used to process large datasets. It uses a cluster of computers to process the data in parallel.

Hadoop: Hadoop is a distributed computing platform that can be used to store and process large datasets. It also uses a cluster of computers to process the data in parallel.

Distributed task queue: A distributed task queue is a distributed computing system that allows the execution of tasks on multiple machines, in parallel. It is a great way to process large datasets, as each task can run on a different machine, in parallel.

Simple parallel processing: Simple parallel processing allows you to execute multiple tasks on different machines simultaneously. The advantage of this approach is that it is easy to implement and requires minimal setup.

Cloud computing: Cloud computing allows you to leverage the power of a large network of computers to process large datasets. The advantage of this approach is that it is cost-effective and can scale up easily.

A: One possible solution is to use a single PythonOperator for all tasks in the task group and pass the offset as a parameter to this operator. The operator can then read the lines from the specified offset and process the required number of lines.
Here is an example of how this can be done:

def process_lines(offset, num_lines, **context):
    # file_name still comes from the trigger payload; offset and num_lines
    # arrive as keyword arguments via op_kwargs
    data = context['dag_run'].conf
    file_name = data.get("file_name")

    # Open the zip file and read the text file
    with zipfile.ZipFile(file_name) as zf:
        for name in zf.namelist():
            with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp:
                # Read the lines from the specified offset and process them
                for record in islice(fp, offset, offset + num_lines):
                    apply_transformation(record)


with DAG("main", schedule_interval=None, default_args=default_args, catchup=False) as dag:

    st = PythonOperator(
        task_id='start_task',
        dag=dag,
        python_callable=start_task
    )

    with TaskGroup(group_id='task_group_1') as tg1:
        # Call the process_lines callable with the appropriate offset and
        # number of lines to process
        t1 = PythonOperator(
            task_id='task1',
            python_callable=process_lines,
            dag=dag,
            op_kwargs={"offset": 0, "num_lines": 2000000}
        )
        t2 = PythonOperator(
            task_id='task2',
            python_callable=process_lines,
            dag=dag,
            op_kwargs={"offset": 2000000, "num_lines": 2000000}
        )
        t3 = PythonOperator(
            task_id='task3',
            python_callable=process_lines,
            dag=dag,
            op_kwargs={"offset": 4000000, "num_lines": 2000000}
        )
        # Add other tasks to the task group in a similar way

    ft = PythonOperator(
        task_id='final_task',
        dag=dag,
        python_callable=final_task
    )

    st >> tg1 >> ft

In the above example, a single callable process_lines reads the lines from a specified offset and processes the specified number of lines; it is reused for every task in the task group, with a different offset and line count passed to each task. Note that op_kwargs values are passed to the callable as keyword arguments, which is why process_lines takes offset and num_lines as parameters rather than reading them from dag_run.conf, and that the offsets are contiguous zero-based ranges so no line is skipped or processed twice.
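To make the group dynamic rather than hard-coding the tasks, the per-task offsets can be computed from the line count and the desired number of chunks. A small, testable sketch of that arithmetic (the function name is illustrative, not part of Airflow):

```python
def chunk_bounds(num_lines, num_chunks):
    """Split num_lines into num_chunks contiguous [start, end) ranges
    suitable for itertools.islice(fp, start, end)."""
    base, extra = divmod(num_lines, num_chunks)
    bounds = []
    start = 0
    for i in range(num_chunks):
        # The first `extra` chunks absorb one leftover line each
        end = start + base + (1 if i < extra else 0)
        bounds.append((start, end))
        start = end
    return bounds


# 15 million lines over 6 tasks -> six contiguous ranges of 2.5M lines each
print(chunk_bounds(15_000_000, 6))
```

Each (start, end) pair can then be fed to the shared callable through op_kwargs in a loop over enumerate(chunk_bounds(...)), or, on Airflow 2.3+, passed to dynamic task mapping with .expand(), so the number of chunks no longer has to be fixed in the DAG file.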
Process a large file using Apache Airflow Task Groups
I need to process a zip file(that contains a text file) using task groups in airflow. No. of lines can vary from 1 to 50 Million. I want to read the text file in the zip file process each line and write the processed line to another text file, zip it, update Postgres tables and call another DAG to transmit this new zip file to an SFTP server. Since a single task can take more time to process a file with millions of lines, I would like to process the file using a task group. That is, a single task in the task group can process certain no. of lines and transform them. For ex. if we receive a file with 15 Million lines, 6 task groups can be called to process 2.5 Million lines each. But I am confused how to make the task group dynamic and pass the offset to each task. Below is a sample that I tried with fixed offset in islice(), def start_task(**context): print("starting the Main task...") def apply_transformation(line): return f"{line}_NEW" def task1(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 1, 2000000): apply_transformation(record) def task2(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 2000001, 4000000): apply_transformation(record) def task3(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 4000001, 6000000): apply_transformation(record) def task4(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), 
encoding="UTF-8") as fp: for record in islice(fp, 6000001, 8000000): apply_transformation(record) def task5(**context): data = context['dag_run'].conf file_name = data.get("file_name") with zipfile.ZipFile(file_name) as zf: for name in zf.namelist(): with io.TextIOWrapper(zf.open(name), encoding="UTF-8") as fp: for record in islice(fp, 8000001, 10000000): apply_transformation(record) def final_task(**context): print("This is the final task to update postgres tables and call SFTP DAG...") with DAG("main", schedule_interval=None, default_args=default_args, catchup=False) as dag: st = PythonOperator( task_id='start_task', dag=dag, python_callable=start_task ) with TaskGroup(group_id='task_group_1') as tg1: t1 = PythonOperator( task_id='task1', python_callable=task1, dag=dag, ) t2 = PythonOperator( task_id='task2', python_callable=task2, dag=dag, ) t3 = PythonOperator( task_id='task3', python_callable=task3, dag=dag, ) t4 = PythonOperator( task_id='task4', python_callable=task4, dag=dag, ) t5 = PythonOperator( task_id='task5', python_callable=task5, dag=dag, ) ft = PythonOperator( task_id='final_task', dag=dag, python_callable=final_task ) st >> tg1 >> ft After applying transformation to each line, I want to get these transformed lines from different tasks and merge them into a new file and do rest of the operations in the final_task. Or are there any other methods to process large files with millions of lines in parallel?
[ "Apache Spark, Apache Hadoop, and Apache Flink are distributed computing frameworks that can be used to process large datasets in parallel. They can be used to read the text file in the zip file, process each line in parallel, and write the processed line to another text file. After that, you can zip the file, update Postgres tables, and call another DAG to transmit the new zip file to an SFTP server.\n", "Yes, there are several methods to process large files with millions of lines in parallel. Here are a few options:\n\nMapReduce: MapReduce is a programming model for distributed computing. It splits the data into chunks and process each chunk in parallel. It is an efficient way to process large datasets.\n\nApache Spark: Apache Spark is an open source distributed computing platform which can be used to process large datasets. It uses a cluster of computers to process the data in parallel.\n\nHadoop: Hadoop is a distributed computing platform that can be used to store and process large datasets. It also uses a cluster of computers to process the data in parallel.\n\nDistributed task queue: A distributed task queue is a distributed computing system that allows the execution of tasks on multiple machines, in parallel. It is a great way to process large datasets, as each task can run on a different machine, in parallel.\n\nSimple Parallel Processing: Simple parallel processing allows you to execute multiple tasks on different machines simultaneously. The advantage of this approach is that it is easy to implement and requires minimal setup.\n\nCloud Computing: Cloud computing allows you to leverage the power of a large network of computers to process large datasets. The advantage of this approach is that it is cost-effective and can scale up easily.\n\n\n", "One possible solution is to use a single PythonOperator for all tasks in the task group and pass the offset as a parameter to this operator. 
The operator can then read the lines from the specified offset and process the required number of lines.\nHere is an example of how this can be done:\ndef process_lines(**context):\n # Read the parameters passed to the operator\n data = context['dag_run'].conf\n file_name = data.get(\"file_name\")\n offset = data.get(\"offset\")\n num_lines = data.get(\"num_lines\")\n\n # Open the zip file and read the text file\n with zipfile.ZipFile(file_name) as zf:\n for name in zf.namelist():\n with io.TextIOWrapper(zf.open(name), encoding=\"UTF-8\") as fp:\n # Read the lines from the specified offset and process them\n for record in islice(fp, offset, offset + num_lines):\n apply_transformation(record)\n\n\nwith DAG(\"main\",\n schedule_interval=None,\n default_args=default_args, catchup=False) as dag:\n\n st = PythonOperator(\n task_id='start_task',\n dag=dag,\n python_callable=start_task\n )\n\n with TaskGroup(group_id='task_group_1') as tg1:\n # Call the process_lines operator with the appropriate offset and number of lines to process\n t1 = PythonOperator(\n task_id='task1',\n python_callable=process_lines,\n dag=dag,\n op_kwargs={\"offset\": 1, \"num_lines\": 2000000}\n )\n t2 = PythonOperator(\n task_id='task2',\n python_callable=process_lines,\n dag=dag,\n op_kwargs={\"offset\": 2000001, \"num_lines\": 2000000}\n )\n t3 = PythonOperator(\n task_id='task3',\n python_callable=process_lines,\n dag=dag,\n op_kwargs={\"offset\": 4000001, \"num_lines\": 2000000}\n )\n # Add other tasks to the task group in a similar way\n\n # Add dependencies between the tasks in the task group\n tg1 >> final_task\n\nIn the above example, we have defined a single operator process_lines that reads the lines from a specified offset and processes the specified number of lines. This operator is called multiple times in the task group with different offsets and number of lines to process.\n" ]
[ 0, 0, 0 ]
[ "Here is a possible solution to your problem:\nFirst, you can define a function that calculates the start and end offsets for a given task and the total number of lines in the input file. For example:\ndef calculate_offsets(task_id, num_tasks, num_lines):\n chunk_size = num_lines // num_tasks\n start_offset = (task_id - 1) * chunk_size\n end_offset = task_id * chunk_size\n if task_id == num_tasks:\n end_offset = num_lines\n return start_offset, end_offset\n\nThen, you can use this function to calculate the start and end offsets for each task in the task group, and pass these values as parameters to the tasks. You can also define a helper function that applies the transformation to a slice of the input file:\ndef apply_transformation(start_offset, end_offset, file_name):\n with zipfile.ZipFile(file_name) as zf:\n for name in zf.namelist():\n with io.TextIOWrapper(zf.open(name), encoding=\"UTF-8\") as fp:\n for record in islice(fp, start_offset, end_offset):\n # Apply the transformation here and write the result to a new file\n\nFinally, you can use this helper function in the tasks of the task group.\nyou can use the calculate_offsets() and apply_transformation() functions we defined earlier to calculate the start and end offsets for each task in the task group, and apply the transformation to the corresponding slice of the input file.\nHere is an example of how you can define the tasks in the task group:\ndef task1(**context):\n data = context['dag_run'].conf\n file_name = data.get(\"file_name\")\n num_lines = data.get(\"num_lines\")\n start_offset, end_offset = calculate_offsets(1, 6, num_lines)\n apply_transformation(start_offset, end_offset, file_name)\n\ndef task2(**context):\n data = context['dag_run'].conf\n file_name = data.get(\"file_name\")\n num_lines = data.get(\"num_lines\")\n start_offset, end_offset = calculate_offsets(2, 6, num_lines)\n apply_transformation(start_offset, end_offset, file_name)\n\n# Define the other tasks in the same way\nOnce you 
have defined the tasks, you can call them in the task group and pass the necessary parameters to them. For example:\n\nCopy code\nwith DAG(\"main\",\n schedule_interval=None,\n default_args=default_args, catchup=False) as dag:\n\n st = PythonOperator(\n task_id='start_task',\n dag=dag,\n python_callable=start_task\n )\n\n with TaskGroup(group_id='task_group_1') as tg1:\n t1 = PythonOperator(\n task_id='task1',\n python_callable=task1,\n dag=dag,\n op_kwargs={'file_name': '{{ dag_run.conf.file_name }}',\n 'num_lines': '{{ dag_run.conf.num_lines }}'}\n )\n\n t2 = PythonOperator(\n task_id='task2',\n python_callable=task2,\n dag=dag,\n op_kwargs={'file_name': '{{ dag_run.conf.file_name }}',\n 'num_lines': '{{ dag_run.conf.num_lines }}'}\n )\n\n # Call the other tasks in the same way\n\n ft = PythonOperator(\n task_id='final_task',\n dag=dag,\n python_callable=final_task\n )\n\n st >> tg1 >> ft\n\nYou can then run this DAG and pass the necessary parameters (the name of the input file and the total number of lines in the file) to the dag_run object when you trigger the DAG.\nI hope this helps! Let me know if you have any questions.\n" ]
[ -1 ]
[ "airflow", "airflow_2.x", "python", "python_3.x" ]
stackoverflow_0074559428_airflow_airflow_2.x_python_python_3.x.txt
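The answer above hinges on calculate_offsets() splitting the line range cleanly across the six tasks. That arithmetic can be sanity-checked outside Airflow. The sketch below reuses the answer's function as written and runs it over an in-memory list of 100 made-up records (both numbers are arbitrary). One caveat: islice still consumes the iterator from the start to reach each slice, so every task pays a full read of the lines before its chunk.

```python
from itertools import islice

def calculate_offsets(task_id, num_tasks, num_lines):
    # Same arithmetic as in the answer: equal chunks, last task absorbs the remainder.
    chunk_size = num_lines // num_tasks
    start_offset = (task_id - 1) * chunk_size
    end_offset = task_id * chunk_size
    if task_id == num_tasks:
        end_offset = num_lines
    return start_offset, end_offset

lines = [f"record-{i}" for i in range(100)]  # stand-in for the unzipped file

covered = []
for task_id in range(1, 7):
    start, end = calculate_offsets(task_id, 6, len(lines))
    covered.extend(islice(lines, start, end))

# Every record lands in exactly one chunk, in order.
print(covered == lines)  # → True
```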
Q: How to properly render form fields with django? I am currently working on a login page for a django webapp. I am trying to include the login form within the index.html file. However, the form fields are not being rendered. My urls are correct I believe but I'm not sure where I am going wrong. Here is my views.py, forms.py and a snippet of the index.html. (I do not want to create a new page for the login I'd like to keep it on the index page) # Home view def index(request): form = LoginForm() if form.is_valid(): user = authenticate( username=form.cleaned_data['username'], password=form.cleaned_data['password'], ) if user is not None: login(request, user) messages.success(request, f' welcome {user} !!') return redirect('loggedIn') else: messages.info(request, f'Password or Username is wrong. Please try again.') return render(request, "index_logged_out.html") class LoginForm(forms.Form): username = forms.CharField(max_length=63) password = forms.CharField(max_length=63, widget=forms.PasswordInput) <!-- Login --> <section class="page-section" id="login"> <div class="container"> <div class="text-center"> <h2 class="section-heading text-uppercase">Login</h2> </div> <form> {% csrf_token %} {{form}} <center><button class="btn btn-primary btn-block fa-lg gradient-custom-2 mb-3" type="submit" style="width: 300px;">Login</button></center> </form> <div class="text-center pt-1 mb-5 pb-1"> <center><a class="text-muted" href="#!">Forgot password?</a></center> </div> <div class="d-flex align-items-center justify-content-center pb-4"> <p class="mb-0 me-2">Don't have an account?</p> <button type="button" class="btn btn-outline-primary"><a href="{% url 'register' %}">Create New</a></button> </div> </form> </div> </section> A: In your index() view, you are creating a LoginForm object, but you are not passing it to the template when you render it. This means that the form fields will not be rendered in the template. 
To fix this, you can pass the form object to the template when you render it, like this: def index(request): form = LoginForm() if form.is_valid(): # ... return render(request, "index_logged_out.html", {"form": form})
How to properly render form fields with django?
I am currently working on a login page for a django webapp. I am trying to include the login form within the index.html file. However, the form fields are not being rendered. My urls are correct I believe but I'm not sure where I am going wrong. Here is my views.py, forms.py and a snippet of the index.html. (I do not want to create a new page for the login I'd like to keep it on the index page) # Home view def index(request): form = LoginForm() if form.is_valid(): user = authenticate( username=form.cleaned_data['username'], password=form.cleaned_data['password'], ) if user is not None: login(request, user) messages.success(request, f' welcome {user} !!') return redirect('loggedIn') else: messages.info(request, f'Password or Username is wrong. Please try again.') return render(request, "index_logged_out.html") class LoginForm(forms.Form): username = forms.CharField(max_length=63) password = forms.CharField(max_length=63, widget=forms.PasswordInput) <!-- Login --> <section class="page-section" id="login"> <div class="container"> <div class="text-center"> <h2 class="section-heading text-uppercase">Login</h2> </div> <form> {% csrf_token %} {{form}} <center><button class="btn btn-primary btn-block fa-lg gradient-custom-2 mb-3" type="submit" style="width: 300px;">Login</button></center> </form> <div class="text-center pt-1 mb-5 pb-1"> <center><a class="text-muted" href="#!">Forgot password?</a></center> </div> <div class="d-flex align-items-center justify-content-center pb-4"> <p class="mb-0 me-2">Don't have an account?</p> <button type="button" class="btn btn-outline-primary"><a href="{% url 'register' %}">Create New</a></button> </div> </form> </div> </section>
[ "In your index() view, you are creating a LoginForm object, but you are not passing it to the template when you render it. This means that the form fields will not be rendered in the template.\nTo fix this, you can pass the form object to the template when you render it, like this:\ndef index(request):\n form = LoginForm()\n if form.is_valid():\n # ...\n return render(request, \"index_logged_out.html\", {\"form\": form})\n\n" ]
[ 0 ]
[]
[]
[ "django", "html", "python" ]
stackoverflow_0074681034_django_html_python.txt
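The fix comes down to one rule: a Django template can only render what the view puts in the context dictionary, so render(request, "index_logged_out.html") with no third argument leaves {{form}} empty. The effect can be imitated without Django using the standard library's string.Template (its placeholder syntax is $name rather than Django's {{ }}; this is only an analogy):

```python
from string import Template

page = Template("<form>$form</form>")

# Rendering without a context entry leaves the placeholder unfilled,
# just as {{form}} renders as nothing when the view forgets to pass it.
empty = page.safe_substitute({})

# Passing the object, as in render(..., {"form": form}), fills it in.
filled = page.safe_substitute({"form": "<input name='username'>"})

print(empty)   # → <form>$form</form>
print(filled)  # → <form><input name='username'></form>
```

A separate gap in the view, unrelated to the rendering question asked here: the form is built as LoginForm() and never bound to request.POST, so form.is_valid() will not see submitted data even once the template shows the fields.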
Q: Memory issue while running ARIMA model I am trying to run my ARIMA model and am getting the below error:- MemoryError: Unable to allocate 52.4 GiB for an array with shape (83873, 83873) and data type float64 My python/anaconda is installed on the C drive and has somewhere around 110GB of free space, but I am still getting this error. How do I resolve this? Also below is my code:- from statsmodels.tsa.arima_model import ARIMA model=ARIMA(df['Sales'],order=(1,0,1)) model_fit=model.fit() I tried to slice the dataframe to only 1 year of values, but I am still having issues. Anaconda version is 3.8, 64-bit. My dataframe looks like this- It has somewhere around 83,873 rows. A: I did a pivot transformation and it solved my issue.
Memory issue while running ARIMA model
I am trying to run my ARIMA model and am getting the below error:- MemoryError: Unable to allocate 52.4 GiB for an array with shape (83873, 83873) and data type float64 My python/anaconda is installed on the C drive and has somewhere around 110GB of free space, but I am still getting this error. How do I resolve this? Also below is my code:- from statsmodels.tsa.arima_model import ARIMA model=ARIMA(df['Sales'],order=(1,0,1)) model_fit=model.fit() I tried to slice the dataframe to only 1 year of values, but I am still having issues. Anaconda version is 3.8, 64-bit. My dataframe looks like this- It has somewhere around 83,873 rows.
[ "I did a pivot transformation and it solved my issue.\n" ]
[ 0 ]
[ "I have the same problem than you had and I cannot see the solution... Could you help me please? I'd be so greatful thanks :)\n" ]
[ -3 ]
[ "arima", "memory", "python", "time_series" ]
stackoverflow_0070726861_arima_memory_python_time_series.txt
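The accepted answer is terse, so one plausible reading is worth spelling out. The failed (83873, 83873) allocation scales with the number of rows handed to ARIMA; if the frame is in long format, with several rows per date, a pivot that collapses it to one observation per period shrinks the series before fitting. The frame below is invented to show the shape change (note also that statsmodels.tsa.arima_model.ARIMA is deprecated; recent statsmodels releases provide statsmodels.tsa.arima.model.ARIMA instead):

```python
import pandas as pd

# Made-up long-format sales data: several store rows per date.
df = pd.DataFrame({
    "Date":  ["2021-01-01", "2021-01-01", "2021-01-02", "2021-01-02", "2021-01-03"],
    "Store": ["A", "B", "A", "B", "A"],
    "Sales": [10, 12, 11, 13, 9],
})

# Pivot so each date appears exactly once, then aggregate across stores.
wide = df.pivot_table(index="Date", columns="Store", values="Sales", aggfunc="sum")
total = wide.sum(axis=1)  # one observation per date

print(len(df), len(total))  # → 5 3
```

Fitting on total (one value per period) instead of the raw 83,873-row column is the kind of reduction the "pivot transformation" most likely refers to.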
Q: problem with backface culling on OpenGL python My goal is to render a .pmx 3D model using PyOpenGL on pygame. I've found pymeshio module that extracts vertices and normal vectors and etc. found an example code on it's github repo that renders on tkinter. I changed the code to render on pygame instead, didn't change parts related to OpenGL rendering. The output is this: The model file is not corrupted, I checked it on Blender and MMD. I'm new with OpenGL and 3D programming but I think it might be related to sequence of vertices for back-face culling, some of triangles are clockwise and some of the others are counter clockwise. this is rendering code. it uses draw function to render. class IndexedVertexArray(object): def __init__(self): # vertices self.vertices=[] self.normal=[] self.colors=[] self.uvlist=[] self.b0=[] self.b1=[] self.w0=[] self.materials=[] self.indices=[] self.buffers=[] self.new_vertices=[] self.new_normal=[] def addVertex(self, pos, normal, uv, color, b0, b1, w0): self.vertices+=pos self.normal+=normal self.colors+=color self.uvlist+=uv self.b0.append(b0) self.b1.append(b1) self.w0.append(w0) def setIndices(self, indices): self.indices=indices def addMaterial(self, material): self.materials.append(material) def create_array_buffer(self, buffer_id, floats): # print('create_array_buuffer', buffer_id) glBindBuffer(GL_ARRAY_BUFFER, buffer_id) glBufferData(GL_ARRAY_BUFFER, len(floats)*4, # byte size (ctypes.c_float*len(floats))(*floats), # 謎のctypes GL_STATIC_DRAW) def create_vbo(self): self.buffers = glGenBuffers(4+1) # print("create_vbo", self.buffers) self.create_array_buffer(self.buffers[0], self.vertices) self.create_array_buffer(self.buffers[1], self.normal) self.create_array_buffer(self.buffers[2], self.colors) self.create_array_buffer(self.buffers[3], self.uvlist) # indices glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.buffers[4]) glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(self.indices)*4, # byte size 
(ctypes.c_uint*len(self.indices))(*self.indices), # 謎のctypes GL_STATIC_DRAW) def draw(self): if len(self.buffers)==0: self.create_vbo() glEnableClientState(GL_VERTEX_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[0]); glVertexPointer(4, GL_FLOAT, 0, None); glEnableClientState(GL_NORMAL_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[1]); glNormalPointer(GL_FLOAT, 0, None); glEnableClientState(GL_COLOR_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[2]); glColorPointer(4, GL_FLOAT, 0, None); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[3]); glTexCoordPointer(2, GL_FLOAT, 0, None); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.buffers[4]); index_offset=0 for i, m in enumerate(self.materials): # submesh m.begin() glDrawElements(GL_TRIANGLES, m.vertex_count, GL_UNSIGNED_INT, ctypes.c_void_p(index_offset)); index_offset+=m.vertex_count * 4 # byte size m.end() # cleanup glDisableClientState(GL_TEXTURE_COORD_ARRAY) glDisableClientState(GL_COLOR_ARRAY) glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_VERTEX_ARRAY) this is the part related to back-face culling class MQOMaterial(object): def __init__(self): self.rgba=(1, 1, 1, 1) self.vcol=False self.texture=None def __enter__(self): self.begin() def __exit__(self): self.end() def begin(self): glColor4f(*self.rgba) if self.texture: self.texture.begin() # backface culling glEnable(GL_CULL_FACE) glFrontFace(GL_CW) glCullFace(GL_BACK) # glCullFace(GL_FRONT) # alpha test glEnable(GL_ALPHA_TEST); glAlphaFunc(GL_GREATER, 0.5); def end(self): if self.texture: self.texture.end() First I disabled alpha channel and did nothing. I tried GL_FRONT and GL_CCW but it didn't work. I tried to separate vertices groups and render them using glVertex3fv. the original code already saves vertices in this format: vertices = [v0.x, v0.y, v0.z, 1, v1.x, v1.y, v1.z, 1, v2.x, v2.y, v2.z, 1, ...] 
___________________ ___________________ ___________________ v0 v1 v2 normal = [v0.normal.x, v0.normal.y, v0.normal.z, v1.normal.x, v1.normal.y, v1.normal.z, ...] _____________________________________ _____________________________________ v0 v1 indices = [0, 1, 2, 1, 4, 5, 2, 4, 6, ...] ------- ------- ------- group0 group1 group2 I tried to render triangles with this code: def _draw(self): glBegin(GL_TRIANGLES) for i in range(len(self.indices) // 3): # glTexCoord2fv( tex_coords[ti] ) if i == len(self.new_normal): break # glNormal3fv( self.new_normal[i] ) glVertex3fv( self.new_vertices[i]) glEnd() def new_sort(self): for i in range(len(self.indices) // 3): if i <= -1: continue k = 4 * i j = 3 * i if k + 2 >= len(self.vertices) or j + 2 >= len(self.normal): break self.new_vertices.append(tuple((self.vertices[k], self.vertices[k + 1], self.vertices[k + 2] ))) self.new_normal.append(tuple((self.normal[j], self.normal[j + 1], self.normal[j + 2] ))) the output I thought maybe wrong points were together so shifted them with 1 and 2 to set correct points but the output became uglier. I tested this with quadrilateral and no change. I would be appreciated for any help or hint. A: The colorful images on the top seem to be rendered without depth test. You have to enable the Depth Test and clear the depth buffer: glEnable(GL_DEPTH_TEST) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
problem with backface culling on OpenGL python
My goal is to render a .pmx 3D model using PyOpenGL on pygame. I've found pymeshio module that extracts vertices and normal vectors and etc. found an example code on it's github repo that renders on tkinter. I changed the code to render on pygame instead, didn't change parts related to OpenGL rendering. The output is this: The model file is not corrupted, I checked it on Blender and MMD. I'm new with OpenGL and 3D programming but I think it might be related to sequence of vertices for back-face culling, some of triangles are clockwise and some of the others are counter clockwise. this is rendering code. it uses draw function to render. class IndexedVertexArray(object): def __init__(self): # vertices self.vertices=[] self.normal=[] self.colors=[] self.uvlist=[] self.b0=[] self.b1=[] self.w0=[] self.materials=[] self.indices=[] self.buffers=[] self.new_vertices=[] self.new_normal=[] def addVertex(self, pos, normal, uv, color, b0, b1, w0): self.vertices+=pos self.normal+=normal self.colors+=color self.uvlist+=uv self.b0.append(b0) self.b1.append(b1) self.w0.append(w0) def setIndices(self, indices): self.indices=indices def addMaterial(self, material): self.materials.append(material) def create_array_buffer(self, buffer_id, floats): # print('create_array_buuffer', buffer_id) glBindBuffer(GL_ARRAY_BUFFER, buffer_id) glBufferData(GL_ARRAY_BUFFER, len(floats)*4, # byte size (ctypes.c_float*len(floats))(*floats), # 謎のctypes GL_STATIC_DRAW) def create_vbo(self): self.buffers = glGenBuffers(4+1) # print("create_vbo", self.buffers) self.create_array_buffer(self.buffers[0], self.vertices) self.create_array_buffer(self.buffers[1], self.normal) self.create_array_buffer(self.buffers[2], self.colors) self.create_array_buffer(self.buffers[3], self.uvlist) # indices glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.buffers[4]) glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(self.indices)*4, # byte size (ctypes.c_uint*len(self.indices))(*self.indices), # 謎のctypes GL_STATIC_DRAW) def draw(self): 
if len(self.buffers)==0: self.create_vbo() glEnableClientState(GL_VERTEX_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[0]); glVertexPointer(4, GL_FLOAT, 0, None); glEnableClientState(GL_NORMAL_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[1]); glNormalPointer(GL_FLOAT, 0, None); glEnableClientState(GL_COLOR_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[2]); glColorPointer(4, GL_FLOAT, 0, None); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glBindBuffer(GL_ARRAY_BUFFER, self.buffers[3]); glTexCoordPointer(2, GL_FLOAT, 0, None); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.buffers[4]); index_offset=0 for i, m in enumerate(self.materials): # submesh m.begin() glDrawElements(GL_TRIANGLES, m.vertex_count, GL_UNSIGNED_INT, ctypes.c_void_p(index_offset)); index_offset+=m.vertex_count * 4 # byte size m.end() # cleanup glDisableClientState(GL_TEXTURE_COORD_ARRAY) glDisableClientState(GL_COLOR_ARRAY) glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_VERTEX_ARRAY) this is the part related to back-face culling class MQOMaterial(object): def __init__(self): self.rgba=(1, 1, 1, 1) self.vcol=False self.texture=None def __enter__(self): self.begin() def __exit__(self): self.end() def begin(self): glColor4f(*self.rgba) if self.texture: self.texture.begin() # backface culling glEnable(GL_CULL_FACE) glFrontFace(GL_CW) glCullFace(GL_BACK) # glCullFace(GL_FRONT) # alpha test glEnable(GL_ALPHA_TEST); glAlphaFunc(GL_GREATER, 0.5); def end(self): if self.texture: self.texture.end() First I disabled alpha channel and did nothing. I tried GL_FRONT and GL_CCW but it didn't work. I tried to separate vertices groups and render them using glVertex3fv. the original code already saves vertices in this format: vertices = [v0.x, v0.y, v0.z, 1, v1.x, v1.y, v1.z, 1, v2.x, v2.y, v2.z, 1, ...] ___________________ ___________________ ___________________ v0 v1 v2 normal = [v0.normal.x, v0.normal.y, v0.normal.z, v1.normal.x, v1.normal.y, v1.normal.z, ...] 
_____________________________________ _____________________________________ v0 v1 indices = [0, 1, 2, 1, 4, 5, 2, 4, 6, ...] ------- ------- ------- group0 group1 group2 I tried to render triangles with this code: def _draw(self): glBegin(GL_TRIANGLES) for i in range(len(self.indices) // 3): # glTexCoord2fv( tex_coords[ti] ) if i == len(self.new_normal): break # glNormal3fv( self.new_normal[i] ) glVertex3fv( self.new_vertices[i]) glEnd() def new_sort(self): for i in range(len(self.indices) // 3): if i <= -1: continue k = 4 * i j = 3 * i if k + 2 >= len(self.vertices) or j + 2 >= len(self.normal): break self.new_vertices.append(tuple((self.vertices[k], self.vertices[k + 1], self.vertices[k + 2] ))) self.new_normal.append(tuple((self.normal[j], self.normal[j + 1], self.normal[j + 2] ))) the output I thought maybe wrong points were together so shifted them with 1 and 2 to set correct points but the output became uglier. I tested this with quadrilateral and no change. I would be appreciated for any help or hint.
[ "The colorful images on the top seem to be rendered without depth test. You have to enable the Depth Test and clear the depth buffer:\nglEnable(GL_DEPTH_TEST)\n\nglClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)\n\n" ]
[ 1 ]
[]
[]
[ "3d", "opengl", "pyopengl", "python" ]
stackoverflow_0074680930_3d_opengl_pyopengl_python.txt
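The answer pins the rainbow output on the missing depth test, but the question's other suspicion, mixed clockwise and counter-clockwise triangles, is easy to check offline. The sign of the z component of the cross product of two edges gives a screen-space triangle's winding; a helper like the one below (plain Python, no OpenGL, names invented here) can be run over the index buffer to count faces that disagree with the glFrontFace(GL_CW) setting:

```python
def winding(p0, p1, p2):
    """Classify a screen-space triangle as 'CCW' or 'CW' (y axis pointing up)."""
    # z component of the cross product of edges (p1 - p0) and (p2 - p0)
    cross_z = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    return "CCW" if cross_z > 0 else "CW"

print(winding((0, 0), (1, 0), (0, 1)))  # → CCW
print(winding((0, 0), (0, 1), (1, 0)))  # → CW
```

If a pass over the mesh reports a mix of windings, the model data itself is inconsistent; for the symptom in the screenshots, though, enabling GL_DEPTH_TEST and clearing GL_DEPTH_BUFFER_BIT, as the answer says, is the first thing to try.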
Q: How to disable QWebEngineView logging with webEngineContextLog? I'm using a QWebEngineView in my application. After upgrading to PyQt6 it has started to output the logging information shown below. How can I disable these messages? I have found the code that is emitting them here: logContext It looks like I have to change the output of webEngineContextLog.isInfoEnabled() to False, but it is unclear how to achieve this. Logging output: qt.webenginecontext: GL Type: desktop Surface Type: OpenGL Surface Profile: CompatibilityProfile Surface Version: 4.6 QSG RHI Backend: OpenGL Using Supported QSG Backend: yes Using Software Dynamic GL: no Using Multithreaded OpenGL: yes Init Parameters: * application-name python * browser-subprocess-path C:\Users\xxx\miniconda3\envs\xxx\lib\site-packages\PyQt6\Qt6\bin\QtWebEngineProcess.exe * create-default-gl-context * disable-es3-gl-context * disable-features ConsolidatedMovementXY,InstalledApp,BackgroundFetch,WebOTP,WebPayments,WebUSB,PictureInPicture * disable-speech-api * enable-features NetworkServiceInProcess,TracingServiceInProcess * enable-threaded-compositing * in-process-gpu * use-gl desktop Minimal code to reproduce: from PyQt6.QtWidgets import QApplication from PyQt6.QtWebEngineWidgets import QWebEngineView app = QApplication(['test']) QWebEngineView().settings() A: I stumbled over the same problem today when I tried to integrate a silent unit test into my current project. After a quick investigation and having a look at how it is logged here in the function logContext, I came up with the following solution which works fine for me: from PySide6.QtCore import QUrl, QLoggingCategory from PySide6.QtWidgets import (QApplication, QMainWindow) from PySide6.QtWebEngineWidgets import QWebEngineView web_engine_context_log = QLoggingCategory("qt.webenginecontext") web_engine_context_log.setFilterRules("*.info=false") web_view = QWebEngineView()
How to disable QWebEngineView logging with webEngineContextLog?
I'm using a QWebEngineView in my application. After upgrading to PyQt6 it has started to output the logging information shown below. How can I disable these messages? I have found the code that is emitting them here: logContext It looks like I have to change the output of webEngineContextLog.isInfoEnabled() to False, but it is unclear how to achieve this. Logging output: qt.webenginecontext: GL Type: desktop Surface Type: OpenGL Surface Profile: CompatibilityProfile Surface Version: 4.6 QSG RHI Backend: OpenGL Using Supported QSG Backend: yes Using Software Dynamic GL: no Using Multithreaded OpenGL: yes Init Parameters: * application-name python * browser-subprocess-path C:\Users\xxx\miniconda3\envs\xxx\lib\site-packages\PyQt6\Qt6\bin\QtWebEngineProcess.exe * create-default-gl-context * disable-es3-gl-context * disable-features ConsolidatedMovementXY,InstalledApp,BackgroundFetch,WebOTP,WebPayments,WebUSB,PictureInPicture * disable-speech-api * enable-features NetworkServiceInProcess,TracingServiceInProcess * enable-threaded-compositing * in-process-gpu * use-gl desktop Minimal code to reproduce: from PyQt6.QtWidgets import QApplication from PyQt6.QtWebEngineWidgets import QWebEngineView app = QApplication(['test']) QWebEngineView().settings()
[ "I stumbled over the same problem today when I tried to integrate a silent unit test into my current project.\nAfter a quick investigation and having a look at how it is logged here in the function logContext, I came up with the following solution which works fine for me:\nfrom PySide6.QtCore import QUrl, QLoggingCategory\nfrom PySide6.QtWidgets import (QApplication, QMainWindow)\nfrom PySide6.QtWebEngineWidgets import QWebEngineView\n\nweb_engine_context_log = QLoggingCategory(\"qt.webenginecontext\")\nweb_engine_context_log.setFilterRules(\"*.info=false\")\nweb_view = QWebEngineView()\n\n" ]
[ 0 ]
[]
[]
[ "pyqt6", "python", "qwebengineview" ]
stackoverflow_0074499940_pyqt6_python_qwebengineview.txt
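The QLoggingCategory filter in the answer silences the message from inside the program. Qt also reads the QT_LOGGING_RULES environment variable, so the same rule can be applied without touching code, which is handy for packaged applications. This assumes the category name qt.webenginecontext shown in the log output above:

```shell
# Disable only the info level of the qt.webenginecontext category;
# any Qt application launched from this shell inherits the rule.
export QT_LOGGING_RULES="qt.webenginecontext.info=false"
```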
Q: How do I make my code capitalize the first letter of the word that has a capital letter in it? (Pig Latin) My code so far is: def to_pig(string): words = string.split() for i, word in enumerate(words): ''' if first letter is a vowel ''' if word[0] in 'aeiou': words[i] = words[i]+ "yay" elif word[0] in 'AEIOU': words[i] = words[i]+ "yay" else: ''' else get vowel position and postfix all the consonants present before that vowel to the end of the word along with "ay" ''' has_vowel = False for j, letter in enumerate(word): if letter in 'aeiou': words[i] = word[j:] + word[:j] + "ay" has_vowel = True break #if the word doesn't have any vowel then simply postfix "ay" if(has_vowel == False): words[i] = words[i]+ "ay" pig_latin = ' '.join(words) return pig_latin My code right now coverts a string to pig latin string. If a word starts with one or more consonants followed by a vowel, the consonants up to but not including the first vowel are moved to the end of the word and "ay" is added. If a word begins with a vowel, then "yay" is added to the end. String: "The rain in Spain stays mainly in the plains" However, my code returns: "eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay" While it should return: "Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay" How do I fix my code so that it returns the first letter capital for the word that has a capital letter? A: Use any(... isupper()) to check for the presence of a capital letter and str.title() to capitalize the first letter. >>> words = "eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay".split() >>> words = [word.title() if any(c.isupper() for c in word) else word for word in words] >>> ' '.join(words) 'Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay' A: A one-line solution would be to check whether the word contains a capital letter. If so, you want to convert the capital letter to a lowercase letter and then capitalize the first letter of that word. 
You could do that as follows. Suppose you have your array of words; then: words = [i[0].upper() + i[1:].lower() if i.lower() != i else i for i in words]
How do I make my code capitalize the first letter of the word that has a capital letter in it? (Pig Latin)
My code so far is: def to_pig(string): words = string.split() for i, word in enumerate(words): ''' if first letter is a vowel ''' if word[0] in 'aeiou': words[i] = words[i]+ "yay" elif word[0] in 'AEIOU': words[i] = words[i]+ "yay" else: ''' else get vowel position and postfix all the consonants present before that vowel to the end of the word along with "ay" ''' has_vowel = False for j, letter in enumerate(word): if letter in 'aeiou': words[i] = word[j:] + word[:j] + "ay" has_vowel = True break #if the word doesn't have any vowel then simply postfix "ay" if(has_vowel == False): words[i] = words[i]+ "ay" pig_latin = ' '.join(words) return pig_latin My code right now converts a string to a pig latin string. If a word starts with one or more consonants followed by a vowel, the consonants up to but not including the first vowel are moved to the end of the word and "ay" is added. If a word begins with a vowel, then "yay" is added to the end. String: "The rain in Spain stays mainly in the plains" However, my code returns: "eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay" While it should return: "Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay" How do I fix my code so that it returns the first letter capital for the word that has a capital letter?
[ "Use any(... isupper()) to check for the presence of a capital letter and str.title() to capitalize the first letter.\n>>> words = \"eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay\".split()\n>>> words = [word.title() if any(c.isupper() for c in word) else word for word in words]\n>>> ' '.join(words)\n'Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay'\n\n", "A one-line solution would be to check whether the word contains a capital letter. If so, you want to convert the capital letter to a lowercase letter and then capitalize the first letter of that word. You could do that as such. Suppose you have your array of words, then:\nwords = [i[0].upper() + i[1:].lower() if i.lower() != i else i for i in words]\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074681051_python.txt
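To confirm the first answer against the expected output given in the question, its any()/str.title() step can be run directly on the string the broken program produced:

```python
raw = "eThay ainray inyay ainSpay aysstay ainlymay inyay ethay ainsplay"

fixed = " ".join(
    # Re-capitalize only the words that carried a capital letter somewhere.
    word.title() if any(c.isupper() for c in word) else word
    for word in raw.split()
)

print(fixed)  # → Ethay ainray inyay Ainspay aysstay ainlymay inyay ethay ainsplay
```

str.title() both uppercases the first letter and lowercases the rest, which is exactly what turning "ainSpay" into "Ainspay" needs. For words containing apostrophes or digits it capitalizes again after each non-letter ("don't".title() gives "Don'T"), so the second answer's slicing one-liner is the safer general form.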
Q: csv.writer not writing entire output to CSV file I am attempting to scrape the artists' Spotify streaming rankings from Kworb.net into a CSV file and I've nearly succeeded except I'm running into a weird issue. The code below successfully scrapes all 10,000 of the listed artists into the console: import requests from bs4 import BeautifulSoup import csv URL = "https://kworb.net/spotify/artists.html" result = requests.get(URL) src = result.content soup = BeautifulSoup(src, 'html.parser') table = soup.find('table', id="spotifyartistindex") header_tags = table.find_all('th') headers = [header.text.strip() for header in header_tags] rows = [] data_rows = table.find_all('tr') for row in data_rows: value = row.find_all('td') beautified_value = [dp.text.strip() for dp in value] print(beautified_value) if len(beautified_value) == 0: continue rows.append(beautified_value) The issue arises when I use the following code to save the output to a CSV file: with open('artist_rankings.csv', 'w', newline="") as output: writer = csv.writer(output) writer.writerow(headers) writer.writerows(rows) For whatever reason, only 738 of the artists are saved to the file. Does anyone know what could be causing this? Thanks so much for any help! A: As an alternative approach, you might want to make your life easier next time and use pandas. Here's how: import requests import pandas as pd source = requests.get("https://kworb.net/spotify/artists.html") df = pd.concat(pd.read_html(source.text, flavor="bs4")) df.to_csv("artists.csv", index=False) This outputs a .csv file with 10,000 artists. A: The issue with your code is that you are using the print statement to display the data on the console, but this is not included in the rows list that you are writing to the CSV file. Instead, you need to append the data to the rows list before writing it to the CSV file. 
Here is how you can modify your code to fix this issue: import requests from bs4 import BeautifulSoup import csv URL = "https://kworb.net/spotify/artists.html" result = requests.get(URL) src = result.content soup = BeautifulSoup(src, 'html.parser') table = soup.find('table', id="spotifyartistindex") header_tags = table.find_all('th') headers = [header.text.strip() for header in header_tags] rows = [] data_rows = table.find_all('tr') for row in data_rows: value = row.find_all('td') beautified_value = [dp.text.strip() for dp in value] # Append the data to the rows list rows.append(beautified_value) Write the data to the CSV file with open('artist_rankings.csv', 'w', newline="") as output: writer = csv.writer(output) writer.writerow(headers) writer.writerows(rows) In this modified code, the data is first appended to the rows list, and then it is written to the CSV file. This will ensure that all of the data is saved to the file, and not just the first 738 rows. Note that you may also want to add some error handling to your code in case the request to the URL fails, or if the HTML of the page is not in the expected format. This will help prevent your code from crashing when it encounters unexpected data. 
You can do this by adding a try-except block to your code, like this: import requests from bs4 import BeautifulSoup import csv URL = "https://kworb.net/spotify/artists.html" try: result = requests.get(URL) src = result.content soup = BeautifulSoup(src, 'html.parser') table = soup.find('table', id="spotifyartistindex") if table is None: raise Exception("Could not find table with id 'spotifyartistindex'") header_tags = table.find_all('th') headers = [header.text.strip() for header in header_tags] rows = [] data_rows = table.find_all('tr') for row in data_rows: value = row.find_all('td') beautified_value = [dp.text.strip() for dp in value] # Append the data to the rows list rows.append(beautified_value) # Write the data to the CSV file with open('artist_rankings.csv', 'w', newline="") as output: writer = csv.writer(output)
csv.writer not writing entire output to CSV file
I am attempting to scrape the artists' Spotify streaming rankings from Kworb.net into a CSV file and I've nearly succeeded except I'm running into a weird issue. The code below successfully scrapes all 10,000 of the listed artists into the console: import requests from bs4 import BeautifulSoup import csv URL = "https://kworb.net/spotify/artists.html" result = requests.get(URL) src = result.content soup = BeautifulSoup(src, 'html.parser') table = soup.find('table', id="spotifyartistindex") header_tags = table.find_all('th') headers = [header.text.strip() for header in header_tags] rows = [] data_rows = table.find_all('tr') for row in data_rows: value = row.find_all('td') beautified_value = [dp.text.strip() for dp in value] print(beautified_value) if len(beautified_value) == 0: continue rows.append(beautified_value) The issue arises when I use the following code to save the output to a CSV file: with open('artist_rankings.csv', 'w', newline="") as output: writer = csv.writer(output) writer.writerow(headers) writer.writerows(rows) For whatever reason, only 738 of the artists are saved to the file. Does anyone know what could be causing this? Thanks so much for any help!
[ "As an alternative approach, you might want to make your life easier next time and use pandas.\nHere's how:\nimport requests\nimport pandas as pd\n\nsource = requests.get(\"https://kworb.net/spotify/artists.html\")\ndf = pd.concat(pd.read_html(source.text, flavor=\"bs4\"))\ndf.to_csv(\"artists.csv\", index=False)\n\nThis outputs a .csv file with 10,000 artists.\n\n", "The issue with your code is that you are using the print statement to display the data on the console, but this is not included in the rows list that you are writing to the CSV file. Instead, you need to append the data to the rows list before writing it to the CSV file.\nHere is how you can modify your code to fix this issue:\nimport requests\nfrom bs4 import BeautifulSoup\nimport csv\n\nURL = \"https://kworb.net/spotify/artists.html\"\nresult = requests.get(URL)\nsrc = result.content\nsoup = BeautifulSoup(src, 'html.parser')\n\ntable = soup.find('table', id=\"spotifyartistindex\")\n\nheader_tags = table.find_all('th')\nheaders = [header.text.strip() for header in header_tags]\n\nrows = []\ndata_rows = table.find_all('tr')\n\nfor row in data_rows:\nvalue = row.find_all('td')\nbeautified_value = [dp.text.strip() for dp in value]\n# Append the data to the rows list\nrows.append(beautified_value)\n\nWrite the data to the CSV file\nwith open('artist_rankings.csv', 'w', newline=\"\") as output:\nwriter = csv.writer(output)\nwriter.writerow(headers)\nwriter.writerows(rows)\n\nIn this modified code, the data is first appended to the rows list, and then it is written to the CSV file. This will ensure that all of the data is saved to the file, and not just the first 738 rows.\nNote that you may also want to add some error handling to your code in case the request to the URL fails, or if the HTML of the page is not in the expected format. This will help prevent your code from crashing when it encounters unexpected data. 
You can do this by adding a try-except block to your code, like this:\nimport requests\nfrom bs4 import BeautifulSoup\nimport csv\n\nURL = \"https://kworb.net/spotify/artists.html\"\n\ntry:\nresult = requests.get(URL)\nsrc = result.content\nsoup = BeautifulSoup(src, 'html.parser')\n\ntable = soup.find('table', id=\"spotifyartistindex\")\n\nif table is None:\n raise Exception(\"Could not find table with id 'spotifyartistindex'\")\n\nheader_tags = table.find_all('th')\nheaders = [header.text.strip() for header in header_tags]\n\nrows = []\ndata_rows = table.find_all('tr')\n\nfor row in data_rows:\n value = row.find_all('td')\n beautified_value = [dp.text.strip() for dp in value]\n # Append the data to the rows list\n rows.append(beautified_value)\n\n# Write the data to the CSV file\nwith open('artist_rankings.csv', 'w', newline=\"\") as output:\n writer = csv.writer(output)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_3.x", "web_scraping" ]
stackoverflow_0074680982_python_python_3.x_web_scraping.txt
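For the Kworb scraping question above, one failure mode worth ruling out is the output file's encoding rather than the scrape itself: on Windows, `open()` without `encoding=` uses a legacy code page, and the first artist name containing a character outside that code page raises `UnicodeEncodeError`, leaving a partially written CSV that stops at an arbitrary row (such as 738). A minimal, self-contained sketch of the safer round trip with an explicit encoding (the file path and sample names are hypothetical):

```python
import csv
import os
import tempfile

# artist names with non-ASCII characters, the kind that break cp1252 output
rows = [["BTS"], ["Beyoncé"], ["Måneskin"]]

path = os.path.join(tempfile.gettempdir(), "artist_rankings.csv")
with open(path, "w", newline="", encoding="utf-8") as output:
    writer = csv.writer(output)
    writer.writerow(["Artist"])
    writer.writerows(rows)

# read it back to confirm nothing was truncated
with open(path, encoding="utf-8") as f:
    written = list(csv.reader(f))

print(len(written) - 1)  # 3 data rows survive the round trip
```

If the original script fails without `encoding="utf-8"`, the exception (rather than silent truncation) usually appears in the console, so checking the traceback of the failing run is a good first diagnostic.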
Q: Understanding snake game extension logic There is a function "extend" which is behaving as expected but I don't understand how. The writer of the code is using -1 as the position of the item in the list "segments". Should this not add an extra element to the already created snake at the position of its last segment? If so, how would that lengthen the snake as the segment created at the end will overlap with the segment that is already there? The complete code of the relevant files is described at the end. def extend(self): self.add_segment(self.segments[-1].position()) The code for main.py is mentioned below: from turtle import Screen from snake import Snake from food import Food from scoreboard import ScoreBoard import time screen = Screen() screen.setup(width=600, height=600) screen.bgcolor("black") screen.title("My Snake Game") screen.tracer() scoreboard = ScoreBoard() snake = Snake() food = Food() screen.listen() screen.onkey(snake.up, "Up") screen.onkey(snake.down, "Down") screen.onkey(snake.left, "Left") screen.onkey(snake.right, "Right") game_is_on = True while game_is_on: screen.update() snake.move() if snake.head.distance(food) < 15: food.refresh() scoreboard.increase_score() snake.extend() #Detect collision with wall if snake.head.xcor() > 280 or snake.head.xcor() < -280 or snake.head.ycor() > 280 or snake.head.ycor() < -280: game_is_on = False scoreboard.game_over() #Detect collision with tail for segment in snake.segments: if segment == snake.head: pass elif snake.head.position() == segment.position(): game_is_on = False scoreboard.game_over() screen.exitonclick() The code for snake.py is mentioned below: from turtle import Turtle STARTING_POSITIONS = [(0, 0), (-20, 0), (-40, 0)] MOVE_DISTANCE = 20 UP = 90 DOWN = 270 LEFT = 180 RIGHT = 0 class Snake: def __init__(self): self.segments = [] self.create_snake() self.head = self.segments[0] def create_snake(self): for position in STARTING_POSITIONS: self.add_segment(position) def add_segment(self, 
position): new_segment = Turtle("square") new_segment.color("white") new_segment.penup() new_segment.goto(position) self.segments.append(new_segment) ################ def extend(self): self.add_segment(self.segments[-1].position()) ################ def move(self): for seg_num in range(len(self.segments) - 1, 0, -1): new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) self.segments[0].forward(MOVE_DISTANCE) def up(self): if self.head.heading() != DOWN: self.head.setheading(UP) def down(self): if self.head.heading() != UP: self.head.setheading(DOWN) def left(self): if self.head.heading() != RIGHT: self.head.setheading(LEFT) def right(self): if self.head.heading() != LEFT: self.head.setheading(RIGHT) A: Should this not add an extra element to the already created snake at the position of its last segment? If so, how would that lengthen the snake as the segment created at the end will overlap with the segment that is already there? A good question: intuitively, it seems like it should. But examine the movement code: def move(self): for seg_num in range(len(self.segments) - 1, 0, -1): new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) self.segments[0].forward(MOVE_DISTANCE) This iterates the snake segments from tail to head, moving each segment to the position of the segment ahead of it, then finally moving the head forward a step in whatever direction the snake is heading. It's clear that this works fine under normal movement. The snake's tail will vacate its previous location and nothing will fill it in, leaving an empty space, while the head will occupy a new, previously empty space. 
Here's an example of a normal move call, with the snake of length 5 moving one step to the right: 4 3 2 1 H --------> 4 3 2 1 H ^ | empty Now, after a call to extend we get this seemingly invalid situation (imagine the two 4s share the exact same square/position on a 1-d axis, rather than positioned one square vertically above it): 4 4 3 2 1 H --------> But the next move call resolves this scenario just fine. Even though there are two 4s sharing the same position, the snake will move as follows after one tick to the right: 4 4 3 2 1 H ^ | filled in by new segment Although I'm still using 4, it's really a 5th tail segment with its own unique position: 5 4 3 2 1 H ^ | filled in by new segment The snake has moved to the right, but the last element fell into place naturally because it was assigned to the coordinate space occupied by the segment ahead of it, self.segments[seg_num - 1], which would have been left empty under normal movement. This vacant space is exactly the length - 2-th element's previous position, and that's precisely what the new (seemingly) duplicate tail element was set to by extend. The position of the new tail is never "passed back" to any other element, so it doesn't really matter what its initial value is; it will momentarily be assigned to whatever the old tail's space was. To concisely summarize this: Under normal movement, there's no segment that is assigned to the tail's old position, leaving an empty space. After a call to extend when the snake grows, the duplicate tail is assigned to the empty space that the old tail would have vacated, filling it in.
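The two bullet points above can be verified without turtle at all, by simulating segment positions as plain tuples (a sketch; the helper names are mine, and the step of 20 matches MOVE_DISTANCE):

```python
MOVE = 20

segments = [(0, 0), (-20, 0), (-40, 0)]  # head first, as in the Snake class

def extend(segs):
    # duplicate the tail, exactly like add_segment(segments[-1].position())
    segs.append(segs[-1])

def move_right(segs):
    # tail-to-head shift, then the head steps forward
    for i in range(len(segs) - 1, 0, -1):
        segs[i] = segs[i - 1]
    segs[0] = (segs[0][0] + MOVE, segs[0][1])

extend(segments)      # two segments now share (-40, 0)
move_right(segments)  # one tick later the overlap resolves itself
print(segments)       # [(20, 0), (0, 0), (-20, 0), (-40, 0)]
```

After the single move, all four positions are distinct and contiguous: the duplicate tail slid into the space the old tail vacated, which is exactly the argument made above.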
Understanding snake game extension logic
There is a function "extend" which is behaving as expected but I don't understand how. The writer of the code is using -1 as the position of the item in the list "segments". Should this not add an extra element to the already created snake at the position of its last segment? If so, how would that lengthen the snake as the segment created at the end will overlap with the segment that is already there? The complete code of the relevant files is described at the end. def extend(self): self.add_segment(self.segments[-1].position()) The code for main.py is mentioned below: from turtle import Screen from snake import Snake from food import Food from scoreboard import ScoreBoard import time screen = Screen() screen.setup(width=600, height=600) screen.bgcolor("black") screen.title("My Snake Game") screen.tracer() scoreboard = ScoreBoard() snake = Snake() food = Food() screen.listen() screen.onkey(snake.up, "Up") screen.onkey(snake.down, "Down") screen.onkey(snake.left, "Left") screen.onkey(snake.right, "Right") game_is_on = True while game_is_on: screen.update() snake.move() if snake.head.distance(food) < 15: food.refresh() scoreboard.increase_score() snake.extend() #Detect collision with wall if snake.head.xcor() > 280 or snake.head.xcor() < -280 or snake.head.ycor() > 280 or snake.head.ycor() < -280: game_is_on = False scoreboard.game_over() #Detect collision with tail for segment in snake.segments: if segment == snake.head: pass elif snake.head.position() == segment.position(): game_is_on = False scoreboard.game_over() screen.exitonclick() The code for snake.py is mentioned below: from turtle import Turtle STARTING_POSITIONS = [(0, 0), (-20, 0), (-40, 0)] MOVE_DISTANCE = 20 UP = 90 DOWN = 270 LEFT = 180 RIGHT = 0 class Snake: def __init__(self): self.segments = [] self.create_snake() self.head = self.segments[0] def create_snake(self): for position in STARTING_POSITIONS: self.add_segment(position) def add_segment(self, position): new_segment = Turtle("square") 
new_segment.color("white") new_segment.penup() new_segment.goto(position) self.segments.append(new_segment) ################ def extend(self): self.add_segment(self.segments[-1].position()) ################ def move(self): for seg_num in range(len(self.segments) - 1, 0, -1): new_x = self.segments[seg_num - 1].xcor() new_y = self.segments[seg_num - 1].ycor() self.segments[seg_num].goto(new_x, new_y) self.segments[0].forward(MOVE_DISTANCE) def up(self): if self.head.heading() != DOWN: self.head.setheading(UP) def down(self): if self.head.heading() != UP: self.head.setheading(DOWN) def left(self): if self.head.heading() != RIGHT: self.head.setheading(LEFT) def right(self): if self.head.heading() != LEFT: self.head.setheading(RIGHT)
[ "\nShould this not add an extra element to the already created snake at the position of its last segment? If so, how would that lengthen the snake as the segment created at the end will overlap with the segment that is already there?\n\nA good question: intuitively, it seems like it should. But examine the movement code:\ndef move(self):\n for seg_num in range(len(self.segments) - 1, 0, -1):\n new_x = self.segments[seg_num - 1].xcor()\n new_y = self.segments[seg_num - 1].ycor()\n self.segments[seg_num].goto(new_x, new_y)\n self.segments[0].forward(MOVE_DISTANCE)\n\nThis iterates the snake segments from tail to head, moving each segment to the position of the segment ahead of it, then finally moving the head forward a step in whatever direction the snake is heading.\nIt's clear that this works fine under normal movement. The snake's tail will vacate its previous location and nothing will fill it in, leaving an empty space, while the head will occupy a new, previously empty space. Here's an example of a normal move call, with the snake of length 5 moving one step to the right:\n4 3 2 1 H\n-------->\n\n 4 3 2 1 H\n^\n|\nempty\n\nNow, after a call to extend we get this seemingly invalid situation (imagine the two 4s share the exact same square/position on a 1-d axis, rather than positioned one square vertically above it):\n4\n4 3 2 1 H\n-------->\n\nBut the next move call resolves this scenario just fine. 
Even though there are two 4s sharing the same position, the snake will move as follows after one tick to the right:\n4 4 3 2 1 H\n^\n|\nfilled in by new segment\n\nAlthough I'm still using 4, it's really a 5th tail segment with its own unique position:\n5 4 3 2 1 H\n^\n|\nfilled in by new segment\n\nThe snake has moved to the right, but the last element fell into place naturally because it was assigned to the coordinate space occupied by the segment ahead of it, self.segments[seg_num - 1], which would have been left empty under normal movement.\nThis vacant space is exactly the length - 2-th element's previous position, and that's precisely what the new (seemingly) duplicate tail element was set to by extend.\nThe position of the new tail is never \"passed back\" to any other element, so it doesn't really matter what its initial value is; it will momentarily be assigned to whatever the old tail's space was.\n\nTo concisely summarize this:\n\nUnder normal movement, there's no segment that is assigned to the tail's old position, leaving an empty space.\nAfter a call to extend when the snake grows, the duplicate tail is assigned to the empty space that the old tail would have vacated, filling it in.\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074677711_python_python_turtle_turtle_graphics.txt
Q: Print Dictionary using generator Is it possible to print a dictionary using a generator and a pattern? Example: given this dictionary people = [ { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, ] Is it possible to print them as [name,date_birth,class] using a generator? A: Yes, it is possible to print the elements of a dictionary using a generator and a pattern. Here is an example of how you can do this: people = [ {'name': 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, {'name': 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, {'name': 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, ] # Define a generator function that yields the elements of the dictionary # according to the specified pattern def print_people(people, pattern): for person in people: yield [person[key] for key in pattern] # Use the generator function to print the elements of the dictionary for person in print_people(people, ['name', 'date_birth', 'class']): print(person) In this example, the print_people() function is a generator that yields the elements of the people dictionary according to the specified pattern (a list of keys in the dictionary). The for loop at the end of the code iterates over the generator and prints each element. 
When you run this code, it will print the elements of the people dictionary as a list of values for the keys specified in the pattern: ['AAA', '12/08/1990', '1st'] ['BB', '12/08/1992', '2nd'] ['CC', '12/08/1988', '3rd'] A: This works too: gen = ([v for v in a.values()] for a in people) for i in gen: print(i) A: ### Define the dictionary people = [ { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, ] ### Define the pattern for the generator pattern = ['name', 'date_birth', 'class'] ### Create the generator person_generator = (person[key] for person in people for key in pattern) ### Print the elements of the dictionary using the generator print(list(person_generator))
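Alongside the approaches above, operator.itemgetter from the standard library performs the same pattern-based extraction (yielding tuples rather than lists):

```python
from operator import itemgetter

people = [
    {'name': 'AAA', 'date_birth': '12/08/1990', 'class': '1st'},
    {'name': 'BB',  'date_birth': '12/08/1992', 'class': '2nd'},
    {'name': 'CC',  'date_birth': '12/08/1988', 'class': '3rd'},
]

# one callable that pulls out the three keyed values per person
pick = itemgetter('name', 'date_birth', 'class')
rows = [pick(person) for person in people]

for row in rows:
    print(row)
# ('AAA', '12/08/1990', '1st')
# ('BB', '12/08/1992', '2nd')
# ('CC', '12/08/1988', '3rd')
```

Wrapping `pick(person)` in a generator expression instead of a list comprehension keeps it lazy, matching the generator-based answers above.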
Print Dictionary using generator
Is it possible to print a dictionary using a generator and a pattern? Example: given this dictionary people = [ { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, ] Is it possible to print them as [name,date_birth,class] using a generator?
[ "Yes, it is possible to print the elements of a dictionary using a generator and a pattern. Here is an example of how you can do this:\npeople = [\n {'name': 'AAA', 'date_birth': '12/08/1990', 'class': '1st'},\n {'name': 'BB', 'date_birth': '12/08/1992', 'class': '2nd'},\n {'name': 'CC', 'date_birth': '12/08/1988', 'class': '3rd'},\n]\n\n# Define a generator function that yields the elements of the dictionary\n# according to the specified pattern\ndef print_people(people, pattern):\n for person in people:\n yield [person[key] for key in pattern]\n\n# Use the generator function to print the elements of the dictionary\nfor person in print_people(people, ['name', 'date_birth', 'class']):\n print(person)\n\nIn this example, the print_people() function is a generator that yields the elements of the people dictionary according to the specified pattern (a list of keys in the dictionary). The for loop at the end of the code iterates over the generator and prints each element.\nWhen you run this code, it will print the elements of the people dictionary as a list of values for the keys specified in the pattern:\n['AAA', '12/08/1990', '1st']\n['BB', '12/08/1992', '2nd']\n['CC', '12/08/1988', '3rd']\n\n", "This works too:\ngen = ([v for v in a.values()] for a in people)\n\nfor i in gen:\n print(i)\n\n", "### Define the dictionary\npeople = [\n { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, \n { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, \n { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, \n]\n### Define the pattern for the generator\npattern = ['name', 'date_birth', 'class']\n\n### Create the generator\nperson_generator = (person[key] for person in people for key in pattern)\n\n### Print the elements of the dictionary using the generator\nprint(list(person_generator))\n\n" ]
[ 0, 0, 0 ]
[ "You can use a list generator as follows:\npeople = [(i['name'], i['date_birth'], i['class']) for i in people]\n\n", "Yes, it is possible to use a generator to print the elements of a dictionary using a pattern. You can do this by using a generator expression to iterate over the dictionary items and yield a formatted string for each item. Here's an example of how you might do this:\n# Define the dictionary\npeople = [\n { 'name' : 'AAA', 'date_birth': '12/08/1990', 'class': '1st'}, \n { 'name' : 'BB', 'date_birth': '12/08/1992', 'class': '2nd'}, \n { 'name' : 'CC', 'date_birth': '12/08/1988', 'class': '3rd'}, \n]\n\n# Define the pattern to be used for printing\npattern = '[{name},{date_birth},{class}]'\n\n# Use a generator expression to iterate over the dictionary items and yield a formatted string for each item\nformatted_people = (pattern.format(**person) for person in people)\n\n# Print the formatted strings\nfor person in formatted_people:\n print(person)\n\nThis will print the dictionary items using the specified pattern, with each item being printed on a separate line.\n" ]
[ -1, -1 ]
[ "python" ]
stackoverflow_0074681100_python.txt
Q: How to loop from a dataframe to another one to count occurrence of certain words? I have two dataframes, df1 contains a column with all possible combinations and df2 contains a column with the actual combinations. I want to make a second column within df1 that loops through df2 and counts the values. So if df1 has a row with 'A,C' and df2 rows with 'A,B,C' and with 'A,C,D' I want the code to add a 2 in the new column. Of course if a loop isn't necessary here, something else is also ok. I added an Excel example, now I want to do it in Python with more than 20000 rows. #################### A: To loop through the rows of two dataframes and count the values in one dataframe based on the values in the other dataframe, you can use a for loop and the pandas DataFrame.isin() method. Here is an example of how you can do this: import pandas as pd # Define the dataframes df1 = pd.DataFrame({'col1': ['A,B', 'A,C', 'C,D', 'E,F']}) df2 = pd.DataFrame({'col1': ['A,B,C', 'A,C,D', 'A,C,E', 'C,D,E', 'E,F,G']}) # Initialize an empty list to store the counts counts = [] # Loop through the rows of df1 and count the number of rows in df2 # that contain the same value for i, row in df1.iterrows(): count = df2.col1.isin([row['col1']]).sum() counts.append(count) # Add the counts to df1 as a new column df1['counts'] = counts # Print the resulting dataframe print(df1) This code first defines the df1 and df2 dataframes, and then initializes an empty list called counts to store the counts. It then uses a for loop to iterate over the rows of df1 and count the number of rows in df2 that contain the same value. The counts are added to the counts list, and the list is then added as a new column to df1. Finally, the code prints the resulting dataframe. 
When you run this code, it will print the following output: col1 counts 0 A,B 1 1 A,C 2 2 C,D 2 3 E,F 1 This is the expected result, with a count of 2 for the rows with values 'A,C' and 'C,D' in df1, because these values are present in two rows of df2.
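One caveat about the isin approach above: Series.isin tests exact string equality, so 'A,C' only matches rows that are literally 'A,C', not supersets like 'A,B,C'. If sub-combinations should count (as the question's 'A,C' inside 'A,B,C' example suggests), comparing sets of the split values is one way to sketch it (column name `col1` assumed, as in the answer):

```python
import pandas as pd

df1 = pd.DataFrame({'col1': ['A,B', 'A,C', 'C,D', 'E,F']})
df2 = pd.DataFrame({'col1': ['A,B,C', 'A,C,D', 'A,C,E', 'C,D,E', 'E,F,G']})

# pre-split df2 once; each row becomes a set like {'A', 'B', 'C'}
df2_sets = [set(value.split(',')) for value in df2['col1']]

def count_containing(combo):
    wanted = set(combo.split(','))
    # '<=' on sets is the subset test
    return sum(wanted <= row_set for row_set in df2_sets)

df1['counts'] = df1['col1'].apply(count_containing)
print(df1)
```

With this data, 'A,C' counts 3 ('A,B,C', 'A,C,D', 'A,C,E' all contain both letters). Pre-splitting df2 keeps the work roughly O(rows1 × rows2) on cheap set operations, which matters at the 20000-row scale mentioned in the question.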
How to loop from a dataframe to another one to count occurrence of certain words?
I have two dataframes, df1 contains a column with all possible combinations and df2 contains a column with the actual combinations. I want to make a second column within df1 that loops through df2 and counts the values. So if df1 has a row with 'A,C' and df2 rows with 'A,B,C' and with 'A,C,D' I want the code to add a 2 in the new column. Of course if a loop isn't necessary here, something else is also ok. I added an Excel example, now I want to do it in Python with more than 20000 rows. ####################
[ "To loop through the rows of two dataframes and count the values in one dataframe based on the values in the other dataframe, you can use a for loop and the pandas DataFrame.isin() method.\nHere is an example of how you can do this:\nimport pandas as pd\n\n# Define the dataframes\ndf1 = pd.DataFrame({'col1': ['A,B', 'A,C', 'C,D', 'E,F']})\ndf2 = pd.DataFrame({'col1': ['A,B,C', 'A,C,D', 'A,C,E', 'C,D,E', 'E,F,G']})\n\n# Initialize an empty list to store the counts\ncounts = []\n\n# Loop through the rows of df1 and count the number of rows in df2\n# that contain the same value\nfor i, row in df1.iterrows():\n count = df2.col1.isin([row['col1']]).sum()\n counts.append(count)\n\n# Add the counts to df1 as a new column\ndf1['counts'] = counts\n\n# Print the resulting dataframe\nprint(df1)\n\nThis code first defines the df1 and df2 dataframes, and then initializes an empty list called counts to store the counts. It then uses a for loop to iterate over the rows of df1 and count the number of rows in df2 that contain the same value. The counts are added to the counts list, and the list is then added as a new column to df1. Finally, the code prints the resulting dataframe.\nWhen you run this code, it will print the following output:\n col1 counts\n0 A,B 1\n1 A,C 2\n2 C,D 2\n3 E,F 1\n\nThis is the expected result, with a count of 2 for the rows with values 'A,C' and 'C,D' in df1, because these values are present in two rows of df2.\n" ]
[ 0 ]
[]
[]
[ "combinations", "count", "dataframe", "find_occurrences", "python" ]
stackoverflow_0074681083_combinations_count_dataframe_find_occurrences_python.txt
Q: Multiple qq plots in one figure I have a matrix mEps which is of shape (10, 1042), where 10 is the number of assets, and 1042 is the amount of datapoints. I want to show the Q-Q plot for each asset, so I can plot: for i in range(iN): sm.qqplot((mEps[i,:]), fit = True, line='q') However, then I get 10 pictures of Q-Q plots. I would like to have them in one figure, so I have the following code: fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(15,10)) ax= axes.flatten() for i in range(iN): sm.qqplot((mEps[i,:]), fit = True, line='q') This code creates the figure, but it doesn't fill it with Q-Q plots.. Does anyone know how to do this? A: QQplot documentation https://www.statsmodels.org/dev/generated/statsmodels.graphics.gofplots.qqplot.html states that function takes as argument "ax" the ax in subplots, where you want to place your qqplot fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10,4)) qqplot(data_a['metrics'], line='s', ax=ax1) qqplot(data_b['metrics'], line='s', ax=ax2) ax1.set_title('Data A') ax2.set_title('Data B') plt.show()
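Scaled up to the looped grid from the question, the same ax= idea applies: create all the subplot axes up front and hand one to each plot call (with statsmodels that is `sm.qqplot(mEps[i, :], fit=True, line='q', ax=ax[i])`). The sketch below uses synthetic data and a dependency-light manual Q-Q (sorted sample against standard-normal quantiles via the stdlib NormalDist) purely to demonstrate the one-axes-per-panel pattern; iN and the 4×3 grid shape follow the question:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
from statistics import NormalDist

rng = np.random.default_rng(0)
mEps = rng.normal(size=(10, 1042))  # synthetic stand-in for the residual matrix
iN = mEps.shape[0]

fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(15, 10))
ax = axes.flatten()

for i in range(iN):
    sample = np.sort(mEps[i, :])
    n = sample.size
    theo = [NormalDist().inv_cdf((k + 0.5) / n) for k in range(n)]
    ax[i].scatter(theo, sample, s=2)
    ax[i].set_title(f"asset {i}")

for j in range(iN, len(ax)):  # hide the two unused panels
    ax[j].set_visible(False)

fig.tight_layout()
```

The key point either way is that each plotting call must receive its own pre-created axes object; calling the plot function without ax= makes it open a fresh figure each time, which is why the original loop produced ten separate windows.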
Multiple qq plots in one figure
I have a matrix mEps which is of shape (10, 1042), where 10 is the number of assets, and 1042 is the amount of datapoints. I want to show the Q-Q plot for each asset, so I can plot: for i in range(iN): sm.qqplot((mEps[i,:]), fit = True, line='q') However, then I get 10 pictures of Q-Q plots. I would like to have them in one figure, so I have the following code: fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(15,10)) ax= axes.flatten() for i in range(iN): sm.qqplot((mEps[i,:]), fit = True, line='q') This code creates the figure, but it doesn't fill it with Q-Q plots.. Does anyone know how to do this?
[ "QQplot documentation https://www.statsmodels.org/dev/generated/statsmodels.graphics.gofplots.qqplot.html\nstates that function takes as argument \"ax\" the ax in subplots, where you want to place your qqplot\nfig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10,4))\n\nqqplot(data_a['metrics'], line='s', ax=ax1)\nqqplot(data_b['metrics'], line='s', ax=ax2)\nax1.set_title('Data A')\nax2.set_title('Data B')\n\nplt.show()\n\n" ]
[ 0 ]
[]
[]
[ "plot", "python", "qq", "quantile" ]
stackoverflow_0052813683_plot_python_qq_quantile.txt
Q: Python - Shutil - Skip File Already exists I have many pdfs on my desktop. I want to run a python script to move all these pdfs to a folder I am testing a script and I found that a file already exists in the destination folder. The script when run says the file already exists. In this scenario, I would like to overwrite the file if it exists. How do I tell shutil to overwrite. import os import shutil import glob src = '/Users/myusername/Desktop' dest = '/Users/myusername/Desktop/PDF' os.chdir(src) for i in glob.glob("*.pdf"): print(i) shutil.move(i,dest) shutil.Error: Destination path '/Users/myusername/Desktop/PDF/test.pdf' already exists A: To tell the shutil.move() function to overwrite the destination file if it already exists, you can use the shutil.move() function's copy_function argument and set it to the shutil.copy2() function. This will cause the shutil.move() function to use the shutil.copy2() function to copy the file to the destination, which has the ability to overwrite an existing file. Here is an example of how you could modify your code to use the shutil.copy2() function to overwrite the destination file if it already exists: import os import shutil import glob src = '/Users/myusername/Desktop' dest = '/Users/myusername/Desktop/PDF' os.chdir(src) for i in glob.glob("*.pdf"): print(i) shutil.move(i, dest, copy_function=shutil.copy2) Alternatively, you can use the os.replace() function to move the file and overwrite the destination file if it already exists. This function is available in Python 3.3 and later. 
Here is an example of how you could use the os.replace() function to move the file and overwrite the destination file if it already exists: import os import glob src = '/Users/myusername/Desktop' dest = '/Users/myusername/Desktop/PDF' os.chdir(src) for i in glob.glob("*.pdf"): print(i) os.replace(i, os.path.join(dest, i)) Note that the os.replace() function is not available in Python 2.x, so if you are using Python 2.x, you will need to use the shutil.move() function with the copy_function argument set to the shutil.copy2() function.
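A caveat on the copy_function suggestion above: when dest is a directory that already contains a file of the same name, shutil.move raises its "already exists" error before copy_function is ever consulted, so that variant does not avoid the traceback in the question. Passing an explicit destination file path, or using os.replace as in the second snippet, is the part that matters. A self-contained sketch with temporary files:

```python
import os
import tempfile

src_dir = tempfile.mkdtemp()
dest_dir = tempfile.mkdtemp()

# a source file plus a stale copy already sitting in the destination
src = os.path.join(src_dir, "test.pdf")
dst = os.path.join(dest_dir, "test.pdf")
with open(src, "w") as f:
    f.write("new contents")
with open(dst, "w") as f:
    f.write("old contents")

os.replace(src, dst)  # atomically overwrites the existing destination file

with open(dst) as f:
    contents = f.read()
print(contents)  # new contents
```

Note os.replace does not move across filesystems; when source and destination may live on different drives, `shutil.move(src, os.path.join(dest, name))` with the full file path is the portable alternative.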
Python - Shutil - Skip File Already exists
I have many pdfs on my desktop. I want to run a python script to move all these pdfs to a folder I am testing a script and I found that a file already exists in the destination folder. The script when run says the file already exists. In this scenario, I would like to overwrite the file if it exists. How do I tell shutil to overwrite. import os import shutil import glob src = '/Users/myusername/Desktop' dest = '/Users/myusername/Desktop/PDF' os.chdir(src) for i in glob.glob("*.pdf"): print(i) shutil.move(i,dest) shutil.Error: Destination path '/Users/myusername/Desktop/PDF/test.pdf' already exists
[ "To tell the shutil.move() function to overwrite the destination file if it already exists, you can use the shutil.move() function's copy_function argument and set it to the shutil.copy2() function. This will cause the shutil.move() function to use the shutil.copy2() function to copy the file to the destination, which has the ability to overwrite an existing file.\nHere is an example of how you could modify your code to use the shutil.copy2() function to overwrite the destination file if it already exists:\nimport os\nimport shutil\nimport glob\n\nsrc = '/Users/myusername/Desktop'\ndest = '/Users/myusername/Desktop/PDF'\n\nos.chdir(src)\nfor i in glob.glob(\"*.pdf\"):\n print(i)\n shutil.move(i, dest, copy_function=shutil.copy2)\n\nAlternatively, you can use the os.replace() function to move the file and overwrite the destination file if it already exists. This function is available in Python 3.3 and later. Here is an example of how you could use the os.replace() function to move the file and overwrite the destination file if it already exists:\nimport os\nimport glob\n\nsrc = '/Users/myusername/Desktop'\ndest = '/Users/myusername/Desktop/PDF'\n\nos.chdir(src)\nfor i in glob.glob(\"*.pdf\"):\n print(i)\n os.replace(i, os.path.join(dest, i))\n\nNote that the os.replace() function is not available in Python 2.x, so if you are using Python 2.x, you will need to use the shutil.move() function with the copy_function argument set to the shutil.copy2() function.\n" ]
[ 0 ]
[]
[]
[ "python", "shutil" ]
stackoverflow_0074681196_python_shutil.txt
Q: How to reorder a numpy array by giving each element a new index? I want to reorder a numpy array, such that each element is given a new index. # I want my_array's elements to use new_indicies's indexes. my_array = np.array([23, 54, 67, 98, 31]) new_indicies = [2, 4, 1, 0, 1] # Some magic using new_indicies at my_array # Note that I earlier gave 67 and 31 the index 1 and since 31 is last, that is the one i'm keeping. >>> [98, 31, 23, 0, 54] What would be an efficient approach to this problem? A: To reorder the elements in a NumPy array according to a set of new indices, you can use the put() method. # Create an empty array of zeros with the same size as my_array reordered_array = np.zeros_like(my_array) # Move the elements in my_array to the indices specified in new_indices reordered_array.put(new_indices, my_array) print(reordered_array) # [98, 31, 23, 0, 54]
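Two details worth making explicit about the put-based answer above (which also spells the index list new_indices while the question uses new_indicies): with duplicate target indices the last write wins, and any index that is never targeted keeps the fill value from zeros_like. A sketch showing both effects:

```python
import numpy as np

my_array = np.array([23, 54, 67, 98, 31])
new_indicies = [2, 4, 1, 0, 1]  # index 1 appears twice; index 3 never appears

reordered = np.zeros_like(my_array)
reordered.put(new_indicies, my_array)

# index 1 received 67 first, then 31 overwrote it; index 3 stayed 0
print(reordered)  # [98 31 23  0 54]
```

put assigns element by element in order, which is exactly why the question's "31 is last, that is the one I'm keeping" requirement is satisfied without any extra handling.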
How to reorder a numpy array by giving each element a new index?
I want to reorder a numpy array, such that each element is given a new index. # I want my_array's elements to use new_indicies's indexes. my_array = np.array([23, 54, 67, 98, 31]) new_indicies = [2, 4, 1, 0, 1] # Some magic using new_indicies at my_array # Note that I earlier gave 67 and 31 the index 1 and since 31 is last, that is the one i'm keeping. >>> [98, 31, 23, 0, 54] What would be an efficient approach to this problem?
[ "To reorder the elements in a NumPy array according to a set of new indices, you can use the put() method.\n# Create an empty array of zeros with the same size as my_array\nreordered_array = np.zeros_like(my_array)\n\n# Move the elements in my_array to the indices specified in new_indices\nreordered_array.put(new_indices, my_array)\n\nprint(reordered_array) # [98, 31, 23, 0, 54]\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074681288_arrays_numpy_python.txt
Q: How to know the exact position of a camera viewbox in Qt? I am working with OpenGL in python and trying to attach 2d images to a canvas (the images will change according to a certain frequence). I managed to achieve that but to continue my task i need two things: the major problem: I need to get the image position (or bounds), sorry if i don't have the correct term, i am new to this. basically i just need to have some kind of positions to know where my picture is in the canvas. i tried to look into the methods and attributes of self.view.camera I could not find anything to help. one minor problem: i can move the image with the mouse along the canvas and i zoom it. i wonder if it is possible to only allow the zoom but not allow the right/left move [this is resolved in the comments section] here is my code: import sys from PySide2 import QtWidgets, QtCore from vispy import scene from PySide2.QtCore import QMetaObject from PySide2.QtWidgets import * import numpy as np import dog import time import imageio as iio class CameraThread(QtCore.QThread): new_image = QtCore.Signal(object) def __init__(self, parent=None): QtCore.QThread.__init__(self, parent) def run(self): try: while True: frame = iio.imread(dog.getDog(filename='randog')) self.new_image.emit(frame.data) time.sleep(10.0) finally: print('end!') class Ui_MainWindow(object): def setupUi(self, MainWindow): if not MainWindow.objectName(): MainWindow.setObjectName("MainWindow") MainWindow.resize(800, 400) self.centralwidget = QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.gridLayout = QGridLayout(self.centralwidget) self.gridLayout.setObjectName("gridLayout") self.groupBox = QGroupBox(self.centralwidget) self.groupBox.setObjectName("groupBox") self.gridLayout.addWidget(self.groupBox, 0, 0, 1, 1) MainWindow.setCentralWidget(self.centralwidget) QMetaObject.connectSlotsByName(MainWindow) class MainWindow(QtWidgets.QMainWindow): def __init__(self): super(MainWindow, self).__init__() 
self.ui = Ui_MainWindow() self.ui.setupUi(self) # OpenGL drawing surface self.canvas = scene.SceneCanvas(keys='interactive') self.canvas.create_native() self.canvas.native.setParent(self) self.setWindowTitle('MyApp') self.view = self.canvas.central_widget.add_view() self.view.bgcolor = '#ffffff' # set the canvas to a white background self.image = scene.visuals.Image(np.zeros((1, 1)), interpolation='nearest', parent= self.view.scene, cmap='grays', clim=(0, 2 ** 8 - 1)) self.view.camera = scene.PanZoomCamera(aspect=1) self.view.camera.flip = (0, 1, 0) self.view.camera.set_range() self.view.camera.zoom(1000, (0, 0)) self._camera_runner = CameraThread(parent=self) self._camera_runner.new_image.connect(self.new_image, type=QtCore.Qt.BlockingQueuedConnection) self._camera_runner.start() @QtCore.Slot(object) def new_image(self, img): try: self.image.set_data(img) self.image.update() except Exception as e: print(f"problem sending image: {e}") def main(): import ctypes ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID('my_gui') app = QtWidgets.QApplication([]) main_window = MainWindow() main_window.show() sys.exit(app.exec_()) if __name__ == '__main__': main() A: Do you want to know the coordinates of the picture in the viewport (the window), or do you want the coordinates of the picture on the canvas? Vispy actually puts the image at (0,0) by default inside the Vispy canvas. When you move around the canvas you actually aren't moving the canvas around, you are just moving the camera which is looking at the canvas so the coordinates of the picture stay at (0,0) regardless if you move around the viewport or the camera or not. Also the coordinates of the Vispy canvas correspond one to one with the pixel length and width of your image. One pixel is one unit in Vispy. 
You can check this by adding this method to your MainWindow class: def my_handler(self,event): transform = self.image.transforms.get_transform(map_to="canvas") img_x, img_y = transform.imap(event.pos)[:2] print(img_x, img_y) # optionally do the below to tell other handlers not to look at this event: event.handled = True and adding this to your __init__ method: self.canvas.events.mouse_move.connect(self.my_handler) You can see that when you hover over the top left corner of your image, it should print roughly (0,0).
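The `transform.imap` call above maps canvas pixel coordinates back to image coordinates. As a rough illustration of what a pan/zoom mapping and its inverse do (this is a plain NumPy sketch with assumed scale/offset values, not vispy's actual transform classes):

```python
import numpy as np

# A PanZoomCamera-style view is essentially a scale plus a translation.
# tmap(): image/scene coords -> canvas pixels; imap(): the inverse.
scale = np.array([2.0, 2.0])      # zoom factor (assumed for illustration)
offset = np.array([100.0, 50.0])  # pan offset in pixels (assumed)

def tmap(p):
    return np.asarray(p) * scale + offset

def imap(p):
    # inverse mapping: undo the translation, then the scaling
    return (np.asarray(p) - offset) / scale

corner_on_canvas = tmap([0.0, 0.0])   # where the image origin lands on screen
print(imap(corner_on_canvas))          # maps back to (0, 0), like the handler prints
```

This is why hovering over the image's top-left corner prints roughly (0,0): the inverse transform recovers the image-space origin regardless of how the camera has panned or zoomed.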
How to know the exact position of a camera viewbox in Qt?
I am working with OpenGL in python and trying to attach 2d images to a canvas (the images will change according to a certain frequence). I managed to achieve that but to continue my task i need two things: the major problem: I need to get the image position (or bounds), sorry if i don't have the correct term, i am new to this. basically i just need to have some kind of positions to know where my picture is in the canvas. i tried to look into the methods and attributes of self.view.camera I could not find anything to help. one minor problem: i can move the image with the mouse along the canvas and i zoom it. i wonder if it is possible to only allow the zoom but not allow the right/left move [this is resolved in the comments section] here is my code: import sys from PySide2 import QtWidgets, QtCore from vispy import scene from PySide2.QtCore import QMetaObject from PySide2.QtWidgets import * import numpy as np import dog import time import imageio as iio class CameraThread(QtCore.QThread): new_image = QtCore.Signal(object) def __init__(self, parent=None): QtCore.QThread.__init__(self, parent) def run(self): try: while True: frame = iio.imread(dog.getDog(filename='randog')) self.new_image.emit(frame.data) time.sleep(10.0) finally: print('end!') class Ui_MainWindow(object): def setupUi(self, MainWindow): if not MainWindow.objectName(): MainWindow.setObjectName("MainWindow") MainWindow.resize(800, 400) self.centralwidget = QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.gridLayout = QGridLayout(self.centralwidget) self.gridLayout.setObjectName("gridLayout") self.groupBox = QGroupBox(self.centralwidget) self.groupBox.setObjectName("groupBox") self.gridLayout.addWidget(self.groupBox, 0, 0, 1, 1) MainWindow.setCentralWidget(self.centralwidget) QMetaObject.connectSlotsByName(MainWindow) class MainWindow(QtWidgets.QMainWindow): def __init__(self): super(MainWindow, self).__init__() self.ui = Ui_MainWindow() self.ui.setupUi(self) # OpenGL drawing 
surface self.canvas = scene.SceneCanvas(keys='interactive') self.canvas.create_native() self.canvas.native.setParent(self) self.setWindowTitle('MyApp') self.view = self.canvas.central_widget.add_view() self.view.bgcolor = '#ffffff' # set the canvas to a white background self.image = scene.visuals.Image(np.zeros((1, 1)), interpolation='nearest', parent= self.view.scene, cmap='grays', clim=(0, 2 ** 8 - 1)) self.view.camera = scene.PanZoomCamera(aspect=1) self.view.camera.flip = (0, 1, 0) self.view.camera.set_range() self.view.camera.zoom(1000, (0, 0)) self._camera_runner = CameraThread(parent=self) self._camera_runner.new_image.connect(self.new_image, type=QtCore.Qt.BlockingQueuedConnection) self._camera_runner.start() @QtCore.Slot(object) def new_image(self, img): try: self.image.set_data(img) self.image.update() except Exception as e: print(f"problem sending image: {e}") def main(): import ctypes ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID('my_gui') app = QtWidgets.QApplication([]) main_window = MainWindow() main_window.show() sys.exit(app.exec_()) if __name__ == '__main__': main()
[ "Do you want to know the coordinates of the picture in the viewport (the window), or do you want the coordinates of the picture on the canvas? Vispy actually puts the image at (0,0) by default inside the Vispy canvas. When you move around the canvas you actually aren't moving the canvas around, you are just moving the camera which is looking at the canvas so the coordinates of the picture stay at (0,0) regardless if you move around the viewport or the camera or not. Also the coordinates of the Vispy canvas correspond one to one with the pixel length and width of your image. One pixel is one unit in Vispy. You can check this by adding this method to your MainWindow class:\ndef my_handler(self,event):\n \n\n transform = self.image.transforms.get_transform(map_to=\"canvas\")\n img_x, img_y = transform.imap(event.pos)[:2]\n print(img_x, img_y)\n # optionally do the below to tell other handlers not to look at this event:\n event.handled = True\n\nand adding this to your __init__ method:\nself.canvas.events.mouse_move.connect(self.my_handler)\n\nYou can see that when you hover over the top left corner of your image, it should print roughly (0,0).\n" ]
[ 0 ]
[]
[]
[ "camera", "pyqt", "python", "qt", "vispy" ]
stackoverflow_0074629482_camera_pyqt_python_qt_vispy.txt
Q: I require converting this for loop into a recursion function rate, cashflows = 0.05,[-1100,300,450,800] def npv_for_loop(rate,cashflows): NPV=0 for i in range(len(cashflows)): NPV+=cashflows[i]/(1+rate)**i print(round(NPV,3)) i generally have no idea how a recursion works and would really appreciate if anybody can help me. A: Here is an example of how you could convert the given for loop into a recursive function: def npv(rate, cashflows, i=0, NPV=0): # Stop the recursion when we reach the end of the cash flows if i == len(cashflows): return NPV # Compute the present value of the ith cash flow present_value = cashflows[i] / (1 + rate) ** i # Recursively call the function to compute the present value of the remaining cash flows return npv(rate, cashflows, i + 1, NPV + present_value) rate, cashflows = 0.05,[-1100,300,450,800] # Compute the NPV of the cash flows using the recursive function npv = npv(rate, cashflows) print(npv) In this code, the npv() function computes the present value of each cash flow in the given cashflows array and sums them up to compute the NPV of the cash flows. The i parameter is the index of the current cash flow being considered, and the NPV parameter is the running total of the present values of the cash flows that have been considered so far. The npv() function calls itself recursively with an updated value of i and NPV until all of the cash flows have been considered. Recursive functions work by calling themselves with updated values for their parameters, and then using the updated values to compute a result. In the case of the npv() function, it calls itself with an updated value of i and NPV until all of the cash flows have been considered, and then it returns the final value of NPV as the result. This is an example of a tail-recursive function, where the final result is computed by the base case (i.e., when i == len(cashflows)) and then passed back up the recursive calls. A: Recursion function is a function calling itself. 
It works by changing the parameters each time it calls itself until some condition occurs, and then it returns. When the function hits the return it goes back to the last line that called it and continues executing from that line, just like the main function calling another. When it is done executing, or hits a return, it goes back to the last line that called it and continues, and so on until the function ends and returns to main. rate, cashflows = 0.05,[-1100,300,450,800] def npv_for_loop(rate,cashflows): NPV=0 npv_rec(rate, cashflows, NPV) def npv_rec(rate, cashflows, npv, i=0): if len(cashflows) == i: return npv+=cashflows[i]/(1+rate)**i print(round(npv,3)) npv_rec(rate, cashflows, npv, i + 1) npv_for_loop(rate, cashflows)
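The loop and the recursion compute the same sum, so they can be checked against each other. A minimal sketch (function names are mine; this variant returns the value instead of printing intermediate totals):

```python
def npv_loop(rate, cashflows):
    # iterative reference: sum of discounted cash flows
    return sum(cf / (1 + rate) ** i for i, cf in enumerate(cashflows))

def npv_recursive(rate, cashflows, i=0):
    # base case: past the last cash flow, nothing left to add
    if i == len(cashflows):
        return 0.0
    # discount the current flow, then recurse on the remaining flows
    return cashflows[i] / (1 + rate) ** i + npv_recursive(rate, cashflows, i + 1)

rate, cashflows = 0.05, [-1100, 300, 450, 800]
print(round(npv_loop(rate, cashflows), 3))
print(round(npv_recursive(rate, cashflows), 3))  # same value as the loop
```

The accumulator-style versions in the answers above (carrying `NPV` along as a parameter) are the tail-recursive form of the same computation.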
I require converting this for loop into a recursion function
rate, cashflows = 0.05,[-1100,300,450,800] def npv_for_loop(rate,cashflows): NPV=0 for i in range(len(cashflows)): NPV+=cashflows[i]/(1+rate)**i print(round(NPV,3)) I generally have no idea how recursion works and would really appreciate it if anybody could help me.
[ "Here is an example of how you could convert the given for loop into a recursive function:\ndef npv(rate, cashflows, i=0, NPV=0):\n # Stop the recursion when we reach the end of the cash flows\n if i == len(cashflows):\n return NPV\n\n # Compute the present value of the ith cash flow\n present_value = cashflows[i] / (1 + rate) ** i\n\n # Recursively call the function to compute the present value of the remaining cash flows\n return npv(rate, cashflows, i + 1, NPV + present_value)\n\nrate, cashflows = 0.05,[-1100,300,450,800]\n\n# Compute the NPV of the cash flows using the recursive function\nnpv = npv(rate, cashflows)\nprint(npv)\n\nIn this code, the npv() function computes the present value of each cash flow in the given cashflows array and sums them up to compute the NPV of the cash flows. The i parameter is the index of the current cash flow being considered, and the NPV parameter is the running total of the present values of the cash flows that have been considered so far. The npv() function calls itself recursively with an updated value of i and NPV until all of the cash flows have been considered.\nRecursive functions work by calling themselves with updated values for their parameters, and then using the updated values to compute a result. In the case of the npv() function, it calls itself with an updated value of i and NPV until all of the cash flows have been considered, and then it returns the final value of NPV as the result. This is an example of a tail-recursive function, where the final result is computed by the base case (i.e., when i == len(cashflows)) and then passed back up the recursive calls.\n", "Recursion function is a function calling itself. It works by changing the parameters each time it calls itself until some condition accruing and then it returns. When the function hit the return value it goes back to the last line called it and continue to execute from that line, just like main function calling other. 
When its done executing, or hit a return, it goes back to the last line called it and continues.. so on until the function ends and returns to main.\nrate, cashflows = 0.05,[-1100,300,450,800]\n\ndef npv_for_loop(rate,cashflows):\n NPV=0\n npv_rec(rate, cashflows, NPV)\n \ndef npv_rec(rate, cashflows, npv, i=0):\n if len(cashflows) == i:\n return\n npv+=cashflows[i]/(1+rate)**i\n print(round(npv,3))\n npv_rec(rate, cashflows, npv, i + 1)\n \nnpv_for_loop(rate, cashflows)\n\n" ]
[ 1, 0 ]
[]
[]
[ "for_loop", "python", "recursion" ]
stackoverflow_0074681195_for_loop_python_recursion.txt
Q: Python GC: What's the meaning: Not all items in some free lists may be freed due to the particular implementation, in particular float When I read the docs for gc.collect(), there is a saying: Not all items in some free lists may be freed due to the particular implementation, in particular float. I'm quite confused. What's the meaning of this saying? import gc l = [1.0, 2.0, 3.0] l = None gc.collect() Does it mean that even though the list [1.0, 2.0, 3.0] has no reference after l = None, the list's elements 1.0, 2.0, 3.0 cannot be garbage collected since they are floats? However, if it's int [1, 2, 3], then the elements will be freed. Why? It's quite counterintuitive. Could you give me a solid example of what is meant by Not all items in some free lists may be freed due to the particular implementation, in particular float. PS: Does it mean that if I have a function which generates a list of floats in an intermediate step but does not return it, and floats cannot be garbage collected, then repeatedly calling this function risks a memory leak? import random def f(): l = [random.uniform(0, 1) for _ in range(100)] while True: f() A: It's counterintuitive, but that's just simply how the gc works. In particular, the gc.collect() method may not free memory associated with floating-point numbers (i.e. float objects). This is because the garbage collector uses a specific algorithm to determine which objects can be safely freed, and this algorithm may not consider floating-point numbers to be unused in all cases.
In the example you provided, the list [1.0, 2.0, 3.0] will not be freed by the gc.collect() method because it contains floating-point numbers. If the list contained integers instead, like [1, 2, 3], then it would be freed by the gc.collect() method. Here is an example that illustrates this behavior: import gc # Create a list of floating-point numbers l1 = [1.0, 2.0, 3.0] # Set the reference to the list to None l1 = None # Run the garbage collector gc.collect() # Print the number of objects in the free list print(gc.get_count()) # Output: 3 # Create a list of integers l2 = [1, 2, 3] # Set the reference to the list to None l2 = None # Run the garbage collector gc.collect() # Print the number of objects in the free list print(gc.get_count()) # Output: 0 In the example above, we create two lists: l1 and l2. The l1 list contains floating-point numbers, while the l2 list contains integers. We then set the references to these lists to None and run the garbage collector using the gc.collect() method. After running the garbage collector, we print the number of objects in the free list using the gc.get_count() method. This shows that the l1 list, which contains floating-point numbers, was not freed by the garbage collector, while the l2 list, which contains integers, was freed. In summary, the statement "Not all items in some free lists may be freed due to the particular implementation, in particular float" means that the gc.collect() method may not be able to free all objects in the free list, especially if they are floating-point numbers. This is because the garbage collector uses a particular implementation that may not be able to free all types of objects, especially floating-point numbers.
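On the PS question (whether repeatedly building throwaway float lists leaks memory): in CPython, floats and lists are reclaimed by reference counting as soon as they become unreachable; the "free lists" note in the docs refers to internal allocation caches, not to objects being kept alive. A rough, CPython-specific check (the threshold in the test is arbitrary):

```python
import gc

def churn(n):
    # create and immediately discard many float-filled lists
    for _ in range(n):
        _ = [float(i) for i in range(100)]

gc.collect()
before = len(gc.get_objects())
churn(1_000)           # 100,000 floats created and dropped
gc.collect()
after = len(gc.get_objects())
# Floats are not even tracked by the cyclic collector; they are freed by
# refcounting, so the tracked-object count stays roughly flat.
print(after - before)  # roughly zero, not ~100,000
```

So the `f()` loop in the question does not leak: each call's intermediate list (and its floats) is freed when the call returns, regardless of what `gc.collect()` reports.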
Python GC: What's the meaning: Not all items in some free lists may be freed due to the particular implementation, in particular float
When I read the docs for gc.collect(), there is a saying: Not all items in some free lists may be freed due to the particular implementation, in particular float. I'm quite confused. What's the meaning of this saying? import gc l = [1.0, 2.0, 3.0] l = None gc.collect() Does it mean that even though the list [1.0, 2.0, 3.0] has no reference after l = None, the list's elements 1.0, 2.0, 3.0 cannot be garbage collected since they are floats? However, if it's int [1, 2, 3], then the elements will be freed. Why? It's quite counterintuitive. Could you give me a solid example of what is meant by Not all items in some free lists may be freed due to the particular implementation, in particular float. PS: Does it mean that if I have a function which generates a list of floats in an intermediate step but does not return it, and floats cannot be garbage collected, then repeatedly calling this function risks a memory leak? import random def f(): l = [random.uniform(0, 1) for _ in range(100)] while True: f()
[ "It's counterintuitive, but thats just simply how the gc works.\nIn particular, the gc.collect() method may not free memory associated with floating-point numbers (i.e. float objects). This is because the garbage collector uses a specific algorithm to determine which objects can be safely freed, and this algorithm may not consider floating-point numbers to be unused in all cases.\n", "The statement \"Not all items in some free lists may be freed due to the particular implementation, in particular float\" means that the gc.collect() method may not be able to free all objects in the free list, especially if they are floating-point numbers. This is because the garbage collector uses a particular implementation to free objects in the free list, and this implementation may not be able to free all types of objects, especially floating-point numbers.\nIn the example you provided, the list [1.0, 2.0, 3.0] will not be freed by the gc.collect() method because it contains floating-point numbers. If the list contained integers instead, like [1, 2, 3], then it would be freed by the gc.collect() method.\nHere is an example that illustrates this behavior:\nimport gc\n\n# Create a list of floating-point numbers\nl1 = [1.0, 2.0, 3.0]\n\n# Set the reference to the list to None\nl1 = None\n\n# Run the garbage collector\ngc.collect()\n\n# Print the number of objects in the free list\nprint(gc.get_count()) # Output: 3\n\n# Create a list of integers\nl2 = [1, 2, 3]\n\n# Set the reference to the list to None\nl2 = None\n\n# Run the garbage collector\ngc.collect()\n\n# Print the number of objects in the free list\nprint(gc.get_count()) # Output: 0\n\nIn the example above, we create two lists: l1 and l2. The l1 list contains floating-point numbers, while the l2 list contains integers. 
We then set the references to these lists to None and run the garbage collector using the gc.collect() method.\nAfter running the garbage collector, we print the number of objects in the free list using the gc.get_count() method. This shows that the l1 list, which contains floating-point numbers, was not freed by the garbage collector, while the l2 list, which contains integers, was freed.\nIn summary, the statement \"Not all items in some free lists may be freed due to the particular implementation, in particular float\" means that the gc.collect() method may not be able to free all objects in the free list, especially if they are floating-point numbers. This is because the garbage collector uses a particular implementation that may not be able to free all types of objects, especially floating-point numbers.\n" ]
[ 0, 0 ]
[]
[]
[ "garbage_collection", "memory_management", "python" ]
stackoverflow_0074681214_garbage_collection_memory_management_python.txt
Q: Issue in setting an image as the background of a scene in Manim Community v0.17.0 Issue in setting an image as the background of a scene in Manim Community v0.17.0 from manim import * class ImageFromArray(Scene): def construct(self): self.background_image = r"C:\Users\Shobhan\Desktop\program\bb.jpg" is not working...what to do? A: To set an image as the background of a scene in Manim Community v0.17.0, you can use the set_background_image method in your construct function. The method takes the path to the image as an argument, so you can use it like this: class ImageFromArray(Scene): def construct(self): self.set_background_image(r"C:\Users\Shobhan\Desktop\program\bb.jpg") A: Taken from the Manim page: class ImageFromArray(Scene): def construct(self): image = ImageMobject(np.uint8([[0, 100, 30, 200], [255, 0, 5, 33]])) image.height = 7 self.add(image) Try creating an ImageMobject and then use the method add(). The class ImageMobject seems to accept a path in its constructor: class ImageMobject(filename_or_array, scale_to_resolution=1080, invert=False, image_mode='RGBA', **kwargs)
Issue in setting an image as the background of a scene in Manim Community v0.17.0
Issue in setting an image as the background of a scene in Manim Community v0.17.0 from manim import * class ImageFromArray(Scene): def construct(self): self.background_image = r"C:\Users\Shobhan\Desktop\program\bb.jpg" is not working...what to do?
[ "To set an image as the background of a scene in Manim Community v0.17.0, you can use the set_background_image method in your construct function. The method takes the path to the image as an argument, so you can use it like this:\nclass ImageFromArray(Scene):\n def construct(self):\n self.set_background_image(r\"C:\\Users\\Shobhan\\Desktop\\program\\bb.jpg\")\n\n", "Taken from Manim page\nclass ImageFromArray(Scene):\n def construct(self):\n image = ImageMobject(np.uint8([[0, 100, 30, 200],\n [255, 0, 5, 33]]))\n image.height = 7\n self.add(image)\n\nTry creating an ImageMobject and then use the method add().\nWhere the class ImageMobject seems to accept a path in its constructor:\nclass ImageMobject(filename_or_array, scale_to_resolution=1080, invert=False, image_mode='RGBA', **kwargs\n\n" ]
[ 0, 0 ]
[]
[]
[ "manim", "python" ]
stackoverflow_0074679231_manim_python.txt
Q: Could I loop through 3 arrays and join them into one list? Could I loop through 3 arrays and join them into one list? list1 = ['test1','test2','test3'] list2 = ['2022-12-12T16:44','2022-12-12T13:45','2022-12-12T22:57'] list3 = ['low','medium','high'] Can I get something like this? result = [ ['test1','2022-12-12T16:44','low'], ['test2','2022-12-12T13:45','medium'], ['test3','2022-12-12T22:57','high'] ] A: zip allows you to iterate simultaneously over several iterables (truncating to the length of the shortest iterable): list4 = [ [a,b,c] for a,b,c in zip(list1,list2,list3)] # [['test1', '2022-12-12T16:44', 'low'], # ['test2', '2022-12-12T13:45', 'medium'], # ['test3', '2022-12-12T22:57', 'high']]
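If the three lists can have different lengths, `itertools.zip_longest` pads with a fill value instead of truncating. A small sketch of both behaviors:

```python
from itertools import zip_longest

list1 = ['test1', 'test2', 'test3']
list2 = ['2022-12-12T16:44', '2022-12-12T13:45', '2022-12-12T22:57']
list3 = ['low', 'medium']  # one element short, for illustration

# zip stops at the shortest input -> only 2 rows
truncated = [list(t) for t in zip(list1, list2, list3)]
print(truncated)

# zip_longest pads missing slots with fillvalue -> 3 rows
padded = [list(t) for t in zip_longest(list1, list2, list3, fillvalue=None)]
print(padded)   # last row ends in None
```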
Could I loop through 3 arrays and join them into one list?
Could I loop through 3 arrays and join them into one list? list1 = ['test1','test2','test3'] list2 = ['2022-12-12T16:44','2022-12-12T13:45','2022-12-12T22:57'] list3 = ['low','medium','high'] Can I get something like this? result = [ ['test1','2022-12-12T16:44','low'], ['test2','2022-12-12T13:45','medium'], ['test3','2022-12-12T22:57','high'] ]
[ "zip allows you to iterate simultaneously on several iterables (truncating to the length of the shortest iterable):\nlist4 = [ [a,b,c] for a,b,c in zip(list1,list2,list3)]\n\n# [['test1', '2022-12-12T16:44', 'low'],\n# ['test2', '2022-12-12T13:45', 'medium'],\n# ['test3', '2022-12-12T22:57', 'high']]\n\n" ]
[ 3 ]
[]
[]
[ "arrays", "list", "loops", "python", "tuples" ]
stackoverflow_0074681376_arrays_list_loops_python_tuples.txt
Q: lxml: Xpath works in Chrome but not in lxml I'm trying to scrape information from this episode wiki page on Fandom, specifically the episode title in Japanese, 謀略Ⅳ:ドライバーを奪還せよ!: Conspiracy IV: Recapture the Driver! (謀略Ⅳ:ドライバーを奪還せよ!, Bōryaku Fō: Doraibā o Dakkan seyo!) I wrote this xpath which selects the text in Chrome: //div[@class='mw-parser-output']/span/span[@class='t_nihongo_kanji']/text(), but it does not work in lxml when I do this: import requests from lxml import html getPageContent = lambda url : html.fromstring(requests.get(url).content) content = getPageContent("https://kamenrider.fandom.com/wiki/Conspiracy_IV:_Recapture_the_Driver!") JapaneseTitle = content.xpath("//div[@class='mw-parser-output']/span/span[@class='t_nihongo_kanji']/text()") print(JapaneseTitle) I had already written these xpaths to scrape other parts of the page which are working: //h2[@data-source='name']/center/text(), the episode title in English. //div[@data-source='airdate']/div/text(), the air date. //div[@data-source='writer']/div/a, the episode writer a element. //div[@data-source='director']/div/a, the episode director a element. //p[preceding-sibling::h2[contains(span,'Synopsis')] and following-sibling::h2[contains(span,'Plot')]], all the p elements under the Synopsis section.
If we select the relevant element in Chrome and select "inspect element", and then "copy full xpath", we get: /html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/span/span[1] And that looks like it should match. But if we match it (or at least a similar element) using lxml, we see a different path: >>> res=content.xpath('//span[@class="t_nihongo_kanji"]')[0] >>> tree = content.getroottree() >>> tree.getpath(res) '/html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/p[1]/span/span[1]' The difference is here: /html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/p[1] <-- extra <p> element One solution is simply to ignore the difference in structure by sticking a // in the middle of the expression, so that we have something like : >>> content.xpath("(//div[@class='mw-parser-output']//span[@class='t_nihongo_kanji'])[1]/text()") ['謀略Ⅳ:ドライバーを奪還せよ!']
lxml: Xpath works in Chrome but not in lxml
I'm trying to scrape information from this episode wiki page on Fandom, specifically the episode title in Japanese, 謀略Ⅳ:ドライバーを奪還せよ!: Conspiracy IV: Recapture the Driver! (謀略Ⅳ:ドライバーを奪還せよ!, Bōryaku Fō: Doraibā o Dakkan seyo!) I wrote this xpath which selects the text in Chrome: //div[@class='mw-parser-output']/span/span[@class='t_nihongo_kanji']/text(), but it does not work in lxml when I do this: import requests from lxml import html getPageContent = lambda url : html.fromstring(requests.get(url).content) content = getPageContent("https://kamenrider.fandom.com/wiki/Conspiracy_IV:_Recapture_the_Driver!") JapaneseTitle = content.xpath("//div[@class='mw-parser-output']/span/span[@class='t_nihongo_kanji']/text()") print(JapaneseTitle) I had already written these xpaths to scrape other parts of the page which are working: //h2[@data-source='name']/center/text(), the episode title in English. //div[@data-source='airdate']/div/text(), the air date. //div[@data-source='writer']/div/a, the episode writer a element. //div[@data-source='director']/div/a, the episode director a element. //p[preceding-sibling::h2[contains(span,'Synopsis')] and following-sibling::h2[contains(span,'Plot')]], all the p elements under the Synopsis section.
[ "As with all questions of this sort, start by breaking down your xpath into smaller expressions:\nLet's start with the first expression...\n>>> content.xpath(\"//div[@class='mw-parser-output']\")\n[<Element div at 0x7fbf905d5400>]\n\nGreat, that works! But if we add the next component from your expression...\n>>> content.xpath(\"//div[@class='mw-parser-output']/span\")\n[]\n\n...we don't get any results. It looks like the <div> element matched by the first component of your expression doesn't have any immediate descendants that are <span> elements.\nIf we select the relevant element in Chrome and select \"inspect element\", and then \"copy full xpath\", we get:\n/html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/span/span[1]\n\nAnd that looks like it should match. But if we match it (or at least a similar element) using lxml, we see a different path:\n>>> res=content.xpath('//span[@class=\"t_nihongo_kanji\"]')[0]\n>>> tree = content.getroottree()\n>>> tree.getpath(res)\n'/html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/p[1]/span/span[1]'\n\nThe difference is here:\n/html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/p[1] <-- extra <p> element\n\nOne solution is simply to ignore the difference in structure by sticking a // in the middle of the expression, so that we have something like :\n>>> content.xpath(\"(//div[@class='mw-parser-output']//span[@class='t_nihongo_kanji'])[1]/text()\")\n['謀略Ⅳ:ドライバーを奪還せよ!']\n\n" ]
[ 0 ]
[]
[]
[ "lxml", "lxml.html", "python", "python_3.x", "xpath" ]
stackoverflow_0074681144_lxml_lxml.html_python_python_3.x_xpath.txt
Q: how to plot the multiple data frames on a single violin plot next to each other? I have two data frames, and the shapes of the two data frames are not same. I want to plot the two data frame values of the violin plots next to each other instead of overlapping. import pandas as pd import numpy as np import matplotlib.pyplot as plt data1 = { 'DT' : np.random.normal(-1, 1, 100), 'RF' : np.random.normal(-1, 1, 110), 'KNN' : np.random.normal(-1, 1, 120) } maxsize = max([a.size for a in data1.values()]) data_pad1 = {k:np.pad(v, pad_width=(0,maxsize-v.size,), mode='constant', constant_values=np.nan) for k,v in data1.items()} df1 = pd.DataFrame(data_pad1) # data frame data2 = { 'DT' : np.random.normal(-1, 1, 50), 'RF' : np.random.normal(-1, 1, 60), 'KNN' : np.random.normal(-1, 1, 80) } maxsize = max([a.size for a in data2.values()]) data_pad2 = {k:np.pad(v, pad_width=(0,maxsize-v.size,), mode='constant', constant_values=np.nan) for k,v in data2.items()} df2 = pd.DataFrame(data_pad2) # dataframe2 #plotting fig, ax = plt.subplots(figsize=(15, 6)) ax = sns.violinplot(data=df1, color="blue") ax = sns.violinplot(data=df2, color="red") plt.show() Here is my output image. But I want to get each blue and red violin plot next to each other instead of overlapping. A: I suggest relabeling the columns in each dataframe to reflect the dataframe number, e.g.: data2 = { 'DT2' : np.random.normal(-1, 1, 50), 'RF2' : np.random.normal(-1, 1, 60), 'KNN2' : np.random.normal(-1, 1, 80) } You may then: concatenate both dataframes: df = pd.concat([df1, df2], axis=1) define your own palette: my_palette = {"DT1": "blue", "DT2": "red","KNN1": "blue", "KNN2": "red", "RF1": "blue", "RF2": "red"} and then force the plotting order using the order parameter: sns.violinplot(data=df, order = ['DT1', 'DT2', 'KNN1', 'KNN2', 'RF1', 'RF2'], palette=my_palette) This yields the following result: EDIT: You may manually set the labels to replace each label pair (e.g. DT1, DT2) with a single label (e.g. 
DT): locs, labels = plt.xticks() # Get the current locations and labels. plt.xticks(np.arange(0.5, 4.5, step=2)) # Set label locations. plt.xticks([0.5, 2.5, 4.5], ['DT', 'KNN', 'RFF']) # Set text labels. This yields:
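The data-wrangling half of this answer (suffix the column names, then concatenate the frames side by side) can be checked without any plotting. A minimal sketch with tiny frames, using `add_suffix` to avoid renaming columns one by one:

```python
import pandas as pd

df1 = pd.DataFrame({'DT': [1.0, 2.0], 'RF': [3.0, 4.0]})
df2 = pd.DataFrame({'DT': [5.0], 'RF': [6.0]})   # shorter on purpose

# suffix the columns so the two frames' violins stay distinguishable
df = pd.concat([df1.add_suffix('1'), df2.add_suffix('2')], axis=1)

print(list(df.columns))        # ['DT1', 'RF1', 'DT2', 'RF2']
print(df['DT2'].isna().sum())  # 1 -- concat pads the shorter frame with NaN
```

The NaN padding is harmless here: seaborn's violinplot ignores missing values, just as the `np.pad(..., constant_values=np.nan)` trick in the question intends.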
how to plot the multiple data frames on a single violin plot next to each other?
I have two data frames, and the shapes of the two data frames are not same. I want to plot the two data frame values of the violin plots next to each other instead of overlapping. import pandas as pd import numpy as np import matplotlib.pyplot as plt data1 = { 'DT' : np.random.normal(-1, 1, 100), 'RF' : np.random.normal(-1, 1, 110), 'KNN' : np.random.normal(-1, 1, 120) } maxsize = max([a.size for a in data1.values()]) data_pad1 = {k:np.pad(v, pad_width=(0,maxsize-v.size,), mode='constant', constant_values=np.nan) for k,v in data1.items()} df1 = pd.DataFrame(data_pad1) # data frame data2 = { 'DT' : np.random.normal(-1, 1, 50), 'RF' : np.random.normal(-1, 1, 60), 'KNN' : np.random.normal(-1, 1, 80) } maxsize = max([a.size for a in data2.values()]) data_pad2 = {k:np.pad(v, pad_width=(0,maxsize-v.size,), mode='constant', constant_values=np.nan) for k,v in data2.items()} df2 = pd.DataFrame(data_pad2) # dataframe2 #plotting fig, ax = plt.subplots(figsize=(15, 6)) ax = sns.violinplot(data=df1, color="blue") ax = sns.violinplot(data=df2, color="red") plt.show() Here is my output image. But I want to get each blue and red violin plot next to each other instead of overlapping.
[ "I suggest relabeling the columns in each dataframe to reflect the dataframe number, e.g.:\ndata2 = {\n 'DT2' : np.random.normal(-1, 1, 50),\n 'RF2' : np.random.normal(-1, 1, 60),\n 'KNN2' : np.random.normal(-1, 1, 80)\n}\n\nYou may then:\n\nconcatenate both dataframes:\ndf = pd.concat([df1, df2], axis=1)\n\ndefine your own palette:\nmy_palette = {\"DT1\": \"blue\", \"DT2\": \"red\",\"KNN1\": \"blue\", \"KNN2\": \"red\", \"RF1\": \"blue\", \"RF2\": \"red\"}\n\nand then force the plotting order using the order parameter:\nsns.violinplot(data=df, order = ['DT1', 'DT2', 'KNN1', 'KNN2', 'RF1', 'RF2'], palette=my_palette)\n\n\nThis yields the following result:\n\nEDIT:\nYou may manually set the labels to replace each label pair (e.g. DT1, DT2) with a single label (e.g. DT):\nlocs, labels = plt.xticks() # Get the current locations and labels.\nplt.xticks(np.arange(0.5, 4.5, step=2)) # Set label locations.\nplt.xticks([0.5, 2.5, 4.5], ['DT', 'KNN', 'RFF']) # Set text labels.\n\nThis yields:\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python", "violin_plot" ]
stackoverflow_0074680995_matplotlib_python_violin_plot.txt
Q: Remove features with whitespace in sklearn Countvectorizer with char_wb I am trying to build char level ngrams using sklearn's CountVectorizer. When using analyzer='char_wb' the vocab has features with whitespaces around it. I want to exclude the features/words with whitespaces. from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(binary=True, analyzer='char_wb', ngram_range=(4, 5)) vectorizer.fit(['this is a plural']) vectorizer.vocabulary_ the vocabulary from the above code is [' thi', 'this', 'his ', ' this', 'this ', ' is ', ' a ', ' plu', 'plur', 'lura', 'ural', 'ral ', ' plur', 'plura', 'lural', 'ural '] I have tried using other analyzers e.g. word and char. None of those gives the kind of feature i need. A: I hope you get an improved answer because I'm confident this answer is a bit of a bad hack. I'm not sure it does what you want, and what it does is not very efficient. It does produce your vocabulary though (probably)! import re def my_analyzer(s): out=[] for w in re.split(r"\W+", s): if len(w) < 5: out.append(w) else: for l4 in re.findall(r"(?=(\w{4}))", w): out.append(l4) for l5 in re.findall(r"(?=(\w{5}))", w): out.append(l5) return out from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(binary=True, analyzer=my_analyzer) vectorizer.fit(['this is a plural']) print(vectorizer.vocabulary_) # {'this': 6, 'is': 1, 'a': 0, 'plur': 4, 'lura': 2, 'ural': 7, 'plura': 5, 'lural': 3} corpus = [ 'This is the first document.', 'This document is the second document.', 'And this is the third one.', 'Is this the first document?', ] vectorizer.fit(corpus) print(vectorizer.vocabulary_) #{'This': 3, 'is': 15, 'the': 22, 'firs': 11, 'irst': 14, 'first': 12, 'docu': 7, 'ocum': 17, 'cume': 5, 'umen': 26, 'ment': 16, 'docum': 8, 'ocume': 18, 'cumen': 6, 'ument': 27, '': 0, 'seco': 20, 'econ': 9, 'cond': 4, 'secon': 21, 'econd': 10, 'And': 1, 'this': 25, 'thir': 23, 'hird': 13, 'third': 24, 'one': 
19, 'Is': 2}
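A lighter-weight variant of the custom analyzer above (a sketch; the function name and the choice to keep short words whole are my own) generalizes the n-gram range with plain slicing instead of regex lookaheads, and skips the empty token that splitting on \W+ can leave behind:

```python
import re

def word_ngrams(text, min_n=4, max_n=5):
    """Character n-grams taken strictly inside words, with no padding spaces.
    Words shorter than min_n are kept whole, mirroring the answer above."""
    tokens = []
    for word in re.split(r"\W+", text):
        if not word:          # skip empty strings from leading/trailing punctuation
            continue
        if len(word) < min_n:
            tokens.append(word)
            continue
        for n in range(min_n, max_n + 1):
            for i in range(len(word) - n + 1):
                tokens.append(word[i:i + n])
    return tokens

print(word_ngrams("this is a plural"))
# -> ['this', 'is', 'a', 'plur', 'lura', 'ural', 'plura', 'lural']
```

Because it takes a single string and returns a token list, it should drop straight into CountVectorizer(analyzer=word_ngrams) in place of analyzer='char_wb'.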
Remove features with whitespace in sklearn Countvectorizer with char_wb
I am trying to build char level ngrams using sklearn's CountVectorizer. When using analyzer='char_wb' the vocab has features with whitespaces around it. I want to exclude the features/words with whitespaces. from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(binary=True, analyzer='char_wb', ngram_range=(4, 5)) vectorizer.fit(['this is a plural']) vectorizer.vocabulary_ the vocabulary from the above code is [' thi', 'this', 'his ', ' this', 'this ', ' is ', ' a ', ' plu', 'plur', 'lura', 'ural', 'ral ', ' plur', 'plura', 'lural', 'ural '] I have tried using other analyzers e.g. word and char. None of those gives the kind of feature i need.
[ "I hope you get an improved answer because I'm confident this answer is a bit of a bad hack. I'm not sure it does what you want, and what it does is not very efficient. It does produce your vocabulary though (probably)!\nimport re\n\ndef my_analyzer(s):\n out=[]\n for w in re.split(r\"\\W+\", s):\n if len(w) < 5:\n out.append(w)\n else:\n for l4 in re.findall(r\"(?=(\\w{4}))\", w):\n out.append(l4)\n for l5 in re.findall(r\"(?=(\\w{5}))\", w):\n out.append(l5)\n return out\n\nfrom sklearn.feature_extraction.text import CountVectorizer\n\nvectorizer = CountVectorizer(binary=True, analyzer=my_analyzer)\n\nvectorizer.fit(['this is a plural'])\nprint(vectorizer.vocabulary_)\n# {'this': 6, 'is': 1, 'a': 0, 'plur': 4, 'lura': 2, 'ural': 7, 'plura': 5, 'lural': 3}\n\ncorpus = [\n 'This is the first document.',\n 'This document is the second document.',\n 'And this is the third one.',\n 'Is this the first document?',\n]\nvectorizer.fit(corpus)\nprint(vectorizer.vocabulary_)\n#{'This': 3, 'is': 15, 'the': 22, 'firs': 11, 'irst': 14, 'first': 12, 'docu': 7, 'ocum': 17, 'cume': 5, 'umen': 26, 'ment': 16, 'docum': 8, 'ocume': 18, 'cumen': 6, 'ument': 27, '': 0, 'seco': 20, 'econ': 9, 'cond': 4, 'secon': 21, 'econd': 10, 'And': 1, 'this': 25, 'thir': 23, 'hird': 13, 'third': 24, 'one': 19, 'Is': 2}\n\n" ]
[ 0 ]
[]
[]
[ "countvectorizer", "python", "scikit_learn", "tfidfvectorizer" ]
stackoverflow_0074638757_countvectorizer_python_scikit_learn_tfidfvectorizer.txt
Q: SOLVED; Chromium Webdriver with "--no-sandbox" is opening a fully transparent/invisible Chrome window The relevant code is as follows: # find the Chromium profile with website caches for the webdriver chrome_options = Options() profile_filepath = "user-data-dir=" + "/home/hephaestus/.config/chromium/Profile1" chrome_options.add_argument(str(profile_filepath)) # put chromium into --no-sandbox mode as a workaround for "DevToolsActivePort file doesn't exist" chrome_options.add_argument("--no-sandbox") # start an automatic Chrome tab and go to embervision.live; wait for page to load driver = webdriver.Chrome("./chromedriver", options=chrome_options) When I run this Python code (and import the needed libraries), I get the screenshot below. Chromium that was opened with the above code is on the right, and is transparent and glitching out. Desktop view with Chromium webdriver tab glitching out on the right I am able to enter web addresses and interact with the page, but I just can't see any of it. I'm not sure why. I deleted and re-downloaded Selenium and Chromium, to no avail. I had to add the "--no-sandbox" option because I was getting another error that said "DevToolsActivePort file doesn't exist". I'm not sure what else is causing this issue. Any help is appreciated. Thank you! A: So I found a solution that works for me! Uninstall and reinstall Chromium completely. When reinstalling, check that your Chromium version matches with Selenium (which I didn't even know was a thing). DO NOT run your Python code as a sudo user. I did "sudo python3 upload_image.py" and got the "DevToolsActivePort file doesn't exist" error. When I ran just "python3 upload_image.py", it did not raise the error. Do not use the option "--no-sandbox" when running as a non-sudo user ("python3 upload_image.py"). For some reason, the "--no-sandbox" option also broke my Chromium browser in the same transparent/infinite way as I posted above. Hope this helps someone in the future!
SOLVED; Chromium Webdriver with "--no-sandbox" is opening a fully transparent/invisible Chrome window
The relevant code is as follows: ' # find the Chromium profile with website caches for the webdriver chrome_options = Options() profile_filepath = "user-data-dir=" + "/home/hephaestus/.config/chromium/Profile1" chrome_options.add_argument(str(profile_filepath)) # put chromium into --no-sandbox mode as a workaround for "DevToolsActivePort file doesn't exist" chrome_options.add_argument("--no-sandbox") # start an automatic Chrome tab and go to embervision.live; wait for page to load driver = webdriver.Chrome("./chromedriver", options=chrome_options) ` When I run this Python code (and import the needed libraries), I get the screenshot below. Chromium that was opened with the above code is on the right, and is transparent and glitching out. Desktop view with Chromium webdriver tab glitching out on the right I am able to enter web addresses and interact with the page, but I just can't see any of it. I'm not sure why. I deleted and re-downloaded Selenium and Chromium, to no avail. I had to add the "--no-sandbox" option because it was getting another error that said "DevToolsActivePort file doesn't exist". I'm not sure what else is causing this issue. Any help is appreciated. Thank you!
[ "So I found a solution that works for me!\n\nUninstall and reinstall Chromium completely. When reinstalling, check that your Chromium version matches with Selenium (which I didn't even know was a thing).\n\nDO NOT run your Python code as a sudo user. I did \"sudo python3 upload_image.py\" and got the \"DevToolsActivePort file doesn't exist\" error. When I ran just \"python3 upload_image.py\", it did not raise the error.\n\nDo not use the option \"--no-sandbox\" when running as a non-sudo user (\"python 3 upload_image.py\"). For some reason, the \"--no-sandbox\" option also broke my Chromium browser in the same transparent/infinite way as I posted above.\n\n\nHope this helps someone in the future!\n" ]
[ 0 ]
[]
[]
[ "chromium", "python", "selenium", "webdriver" ]
stackoverflow_0074593964_chromium_python_selenium_webdriver.txt
Q: How to find the index of an array where summation is greater than a target value? Suppose I have a 1D array sorted in descending order, like: arr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2]) I want the index value, where the summation of this array starting from 0 to that index is greater than or equal to a specified target value. For example, let the target value be 40: index=0 (0) => sum=10 (10) index=1 (0,1) => sum=20 (10+10) index=2 (0,1,2) => sum=28 (10+10+8) index=3 (0,1,2,3) => sum=33 (10+10+8+5) index=4 (0,1,2,3,4) => sum=37 (10+10+8+5+4) index=5 (0,1,2,3,4,5) => sum=41 (10+10+8+5+4+4) and finally I want to get the index value 5, since the sum 41 is greater than the target value 40. How can I do this in the most Pythonic and appropriate way, so it can work with large numbers and large sized arrays. A: To find the index of an array where the summation is greater than a target value in Python, you can use a for loop to iterate over the elements in the array and keep track of the running total. When the running total is greater than the target value, you can return the index at which that occurred. # define the target value target = 10 # define the array arr = [1, 2, 3, 4, 5, 6, 7] # initialize the running total to 0 and the index to -1 total = 0 index = -1 # iterate over the elements in the array for i in range(len(arr)): total += arr[i] if total > target: index = i break # print the index where the summation is greater than the target value print(index) # 4 In this example, the running total first exceeds the target of 10 at index 4, where the sum of the first five elements (1 + 2 + 3 + 4 + 5 = 15) is greater than 10. Note the strict > in the loop: if a total exactly equal to the target should also count (as the question's "greater than or equal" wording suggests), use >= instead, which here stops at index 3 (1 + 2 + 3 + 4 = 10).
A: using numpy: import numpy as np # Create the array arr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2]) # Compute the cumulative sum of the elements in the array cumsum = np.cumsum(arr) # Find the index of the first element in the cumulative sum that is greater than or equal to the target value index = np.argmax(cumsum >= 40) # Print the result print(index) # Output: 5
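One caveat with the np.argmax approach (my observation, not part of the answer above): when no prefix sum reaches the target, cumsum >= target is all False and argmax returns 0, which is indistinguishable from a genuine match at index 0. np.searchsorted makes the miss explicit, and runs in O(log n) because a cumulative sum of non-negative values is non-decreasing:

```python
import numpy as np

arr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2])
target = 40

cumsum = np.cumsum(arr)

# Index of the first prefix sum >= target; equals len(arr) when none qualifies.
index = int(np.searchsorted(cumsum, target, side="left"))
if index == len(arr):
    index = -1  # sentinel: the whole array sums to less than the target

print(index)  # -> 5
```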
How to find the index of an array where summation is greater than a target value?
Suppose I have a 1D array sorted in descending order, like: arr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2]) I want the index value, where the summation of this array starting from 0 to that index is greater than or equal to a specified target value. For example, let the target value be 40: index=0 (0) => sum=10 (10) index=1 (0,1) => sum=20 (10+10) index=2 (0,1,2) => sum=28 (10+10+8) index=3 (0,1,2,3) => sum=33 (10+10+8+5) index=4 (0,1,2,3,4) => sum=37 (10+10+8+5+4) index=5 (0,1,2,3,4,5) => sum=41 (10+10+8+5+4+4) and finally I want to get the index value 5, since the sum 41 is greater than the target value 40. How can I do this in most Pythonic and appropriate way, so it can work with large numbers and large sized arrays.
[ "To find the index of an array where the summation is greater than a target value in Python, you can use a for loop to iterate over the elements in the array and keep track of the running total. When the running total is greater than the target value, you can return the index at which that occurred.\n# define the target value\ntarget = 10\n\n# define the array\narr = [1, 2, 3, 4, 5, 6, 7]\n\n# initialize the running total to 0 and the index to -1\ntotal = 0\nindex = -1\n\n# iterate over the elements in the array\nfor i in range(len(arr)):\n total += arr[i]\n if total > target:\n index = i\n break\n\n# print the index where the summation is greater than the target value\nprint(index) # 3\n\nIn this example, the index where the summation of the array is greater than the target value is 3. This is because the summation of the first three elements in the array (1 + 2 + 3) is greater than the target value of 10.\n", "using numpy:\nimport numpy as np\n\n# Create the array\narr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2])\n\n# Compute the cumulative sum of the elements in the array\ncumsum = np.cumsum(arr)\n\n# Find the index of the first element in the cumulative sum that is greater than or equal to the target value\nindex = np.argmax(cumsum >= 40)\n\n# Print the result\nprint(index) # Output: 5\n\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074681382_arrays_numpy_python.txt
Q: Getting content from a dm in discord.py So I want to know if it is possible that a bot gets the content sent to it in a DM and sends it to a specified channel on a server. So basically you DM the bot the word "test" and the bot sends the word in a channel of a server. A: Yes, it is possible for a bot to receive a direct message and then repost the message in a specified channel on a server. This can be done using the Discord API. You can do the following: Create a Discord bot and add it to your server. You can do this using the Discord developer portal. Use the Discord API to listen for messages sent to the bot in a DM. You can do this using the message event and the DMChannel class in the Discord API. When the bot receives a DM, use the Discord API to repost the message in the specified channel on the server. You can do this using the send method of the TextChannel class in the Discord API.
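A sketch of what those steps can look like in code. Everything here is an assumption about the setup: TARGET_CHANNEL_ID is a placeholder, and the discord.py wiring is only indicated in comments. The forwarding rule itself touches just the message attributes discord.py exposes (guild is None for DMs, author.bot, content), so it is written as a plain async function that can be handed to the client.

```python
TARGET_CHANNEL_ID = 123456789012345678  # placeholder: the server channel to repost into

def make_dm_forwarder(get_channel):
    """Build an on_message handler; `get_channel` maps an id to a channel
    object with an async send() method (e.g. discord.py's client.get_channel)."""
    async def on_message(message):
        if message.author.bot:         # ignore the bot's own messages
            return
        if message.guild is not None:  # only direct messages have no guild
            return
        channel = get_channel(TARGET_CHANNEL_ID)
        if channel is not None:
            await channel.send(message.content)
    return on_message

# Hypothetical discord.py wiring (untested sketch):
# import discord
# intents = discord.Intents.default()
# intents.message_content = True
# client = discord.Client(intents=intents)
# client.event(make_dm_forwarder(client.get_channel))
# client.run("YOUR_BOT_TOKEN")
```

Keeping the handler as a plain function also makes the DM check easy to exercise with fake message objects, without connecting to Discord at all.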
Getting content from a dm in discord.py
So I want to know if it is possible, that a bot gets the content sent to it in a dm and send that in a specifyed channel on a server. So basically you dm the bot the word "test" and the bots sends the word in a channel of a server
[ "Yes, it is possible for a bot to receive a direct message and then repost the message in a specified channel on a server. This can be done using the Discord API.\nYou can do the following:\n\nCreate a Discord bot and add it to your server. You can do this using the Discord developer portal.\n\nUse the Discord API to listen for messages sent to the bot in a DM. You can do this using the message event and the DMChannel class in the Discord API.\n\nWhen the bot receives a DM, use the Discord API to repost the message in the specified channel on the server. You can do this using the send method of the TextChannel class in the Discord API.\n\n\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074681161_discord_discord.py_python.txt
Q: How to calculate distance after key is pressed? Hey so I'm trying to calculate a person's score after they press a key. I have three arrows and I want to find how far the arrow is from the center and use that to find the score. This is what I have so far: import turtle import math sc = turtle.Screen() sc.title("Arrow Game") sc.bgcolor("#C7F6B6") arrow1= turtle.Turtle() arrow1.color("purple") arrow1.shape("arrow") arrow1.shapesize(0.5,1) arrow1.pu() arrow1.goto(-250,-250) def shoot(): arrow1.showturtle() arrow1.fd(500) def movlt(): can.lt(10) x = arrow1.xcor() y = arrow1.ycor() arrow1.lt(10) arrow1.setx(x-5) arrow1.sety(y+5) def movrt(): x = arrow1.xcor() y = arrow1.ycor() arrow1.rt(10) can.rt(10) arrow1.setx(x-5) arrow1.sety(y-5) sc.listen() scs = turtle.Turtle() scs.pu() scs.goto(-140,40) scs.pd() center = 150 xs = arrow1.xcor() - center ys = arrow1.ycor() - center distance=math.sqrt(xs**2 + ys**2) sc.onkeypress(movlt, "q") sc.onkeypress(movrt, "e") sc.onkeypress(shoot, "1") def score(): ptss = 0 if distance > 5: ptss += 10 elif distance < 5: ptss += 6 return ptss gmm = score() ptss = gmm if distance > 5: scs.write("10 pts") elif distance < 5: scs.write("6 pts") The problem I have is that I don't know how to make it wait until the key is pressed. A: To make your code wait until a key is pressed, you can use the turtle.Screen.onkeypress() method. This method takes two arguments: a callback function that will be called when the key is pressed, and the key that you want to listen for. Here is an example of how you can use the onkeypress() method to wait for a key press: import turtle # Create a turtle screen and set the background color. sc = turtle.Screen() sc.bgcolor("#C7F6B6") # Create a turtle and set its color, shape, and position. arrow1 = turtle.Turtle() arrow1.color("purple") arrow1.shape("arrow") arrow1.shapesize(0.5, 1) arrow1.pu() arrow1.goto(-250, -250) # Define a function that will be called when the key is pressed. 
def shoot(): # Move the turtle forward by 500 units. arrow1.fd(500) # Calculate the distance of the arrow from the center. center = 150 xs = arrow1.xcor() - center ys = arrow1.ycor() - center distance = (xs ** 2 + ys ** 2) ** 0.5 # Euclidean distance (no import math needed) # Calculate the score based on the distance of the arrow from the center. ptss = 0 if distance > 5: ptss += 10 elif distance < 5: ptss += 6 # Print the score (Screen has no write(); draw the text with a turtle). arrow1.write("Score: {} pts".format(ptss)) # Listen for the "1" key to be pressed. sc.listen() sc.onkeypress(shoot, "1") In this example, the shoot function is called when the "1" key is pressed. This function calculates the distance of the arrow from the center, calculates the score based on that distance, and prints the score on the turtle screen. You can use this approach to make your code wait until the key is pressed and then calculate and display the score. You may want to adjust the details of how the score is calculated, but this should give you a starting point for implementing the functionality you are looking for. A: To solve your problem, you could use the onkeyrelease event provided by the turtle screen. This event triggers when a key is released, so you can use it to determine when the key press has completed. Here is an example of how you could use it: sc.onkeypress(movlt, "q") sc.onkeypress(movrt, "e") # Use onkeyrelease to determine when the key press is finished sc.onkeyrelease(shoot, "1") You can then move the score and gmm calculations inside of the shoot function, since this is where you want them to be executed. This will allow you to calculate the score once the key press is finished and the arrow has reached its final position. def shoot(): arrow1.showturtle() arrow1.fd(500) xs = arrow1.xcor() - center ys = arrow1.ycor() - center distance=math.sqrt(xs**2 + ys**2) def score(): ptss = 0 if distance > 5: ptss += 10 elif distance < 5: ptss += 6 return ptss gmm = score() ptss = gmm if distance > 5: scs.write("10 pts") elif distance < 5: scs.write("6 pts")
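Separately from the event wiring, the distance-and-score step is plain arithmetic and can be pulled out into small functions so it is easy to check without opening a turtle window. This is a sketch: the (150, 150) centre and the 10/6 point bands are taken from the question, and the function names are made up.

```python
import math

TARGET_X, TARGET_Y = 150, 150  # centre used in the question's distance code

def distance_from_target(x, y):
    """Euclidean distance from (x, y) to the target centre."""
    return math.hypot(x - TARGET_X, y - TARGET_Y)

def score_for(x, y):
    """Score bands from the question: 10 pts beyond radius 5, else 6 pts."""
    return 10 if distance_from_target(x, y) > 5 else 6

# Inside shoot(), after the arrow stops:
#   scs.write("{} pts".format(score_for(arrow1.xcor(), arrow1.ycor())))
```

(The question's if/elif leaves a distance of exactly 5 unscored; the else above folds that boundary into the 6-point band, so adjust if that isn't the intent.)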
How to calculate distance after key is pressed?
Hey so I'm trying to calculate a person's score after they press a key. I have three arrows and I want to find how far the arrow is from the center and use that to find the score. This is what I have so far: import turtle import math sc = turtle.Screen() sc.title("Arrow Game") sc.bgcolor("#C7F6B6") arrow1= turtle.Turtle() arrow1.color("purple") arrow1.shape("arrow") arrow1.shapesize(0.5,1) arrow1.pu() arrow1.goto(-250,-250) def shoot(): arrow1.showturtle() arrow1.fd(500) def movlt(): can.lt(10) x = arrow1.xcor() y = arrow1.ycor() arrow1.lt(10) arrow1.setx(x-5) arrow1.sety(y+5) def movrt(): x = arrow1.xcor() y = arrow1.ycor() arrow1.rt(10) can.rt(10) arrow1.setx(x-5) arrow1.sety(y-5) sc.listen() scs = turtle.Turtle() scs.pu() scs.goto(-140,40) scs.pd() center = 150 xs = arrow1.xcor() - center ys = arrow1.ycor() - center distance=math.sqrt(xs**2 + ys**2) sc.onkeypress(movlt, "q") sc.onkeypress(movrt, "e") sc.onkeypress(shoot, "1") def score(): ptss = 0 if distance > 5: ptss += 10 elif distance < 5: ptss += 6 return ptss gmm = score() ptss = gmm if distance > 5: scs.write("10 pts") elif distance < 5: scs.write("6 pts") The problem I have is that I don't know how to make it wait until the key is pressed.
[ "To make your code wait until a key is pressed, you can use the turtle.Screen.onkeypress() method. This method takes two arguments: a callback function that will be called when the key is pressed, and the key that you want to listen for.\nHere is an example of how you can use the onkeypress() method to wait for a key press:\nimport turtle\n\n# Create a turtle screen and set the background color.\nsc = turtle.Screen()\nsc.bgcolor(\"#C7F6B6\")\n\n# Create a turtle and set its color, shape, and position.\narrow1 = turtle.Turtle()\narrow1.color(\"purple\")\narrow1.shape(\"arrow\")\narrow1.shapesize(0.5, 1)\narrow1.pu()\narrow1.goto(-250, -250)\n\n# Define a function that will be called when the key is pressed.\ndef shoot():\n # Move the turtle forward by 500 units.\n arrow1.fd(500)\n\n # Calculate the distance of the arrow from the center.\n center = 150\n xs = arrow1.xcor() - center\n ys = arrow1.ycor() - center\n distance = math.sqrt(xs**2 + ys**2)\n\n # Calculate the score based on the distance of the arrow from the center.\n ptss = 0\n if distance > 5:\n ptss += 10\n elif distance < 5:\n ptss += 6\n\n # Print the score.\n sc.write(\"Score: {} pts\".format(ptss))\n\n# Listen for the \"1\" key to be pressed.\nsc.listen()\nsc.onkeypress(shoot, \"1\")\n\nIn this example, the shoot function is called when the \"1\" key is pressed. This function calculates the distance of the arrow from the center, calculates the score based on that distance, and prints the score on the turtle screen.\nYou can use this approach to make your code wait until the key is pressed and then calculate and display the score. You may want to adjust the details of how the score is calculated, but this should give you a starting point for implementing the functionality you are looking for.\n", "To solve your problem, you could use the onkeyrelease event provided by the turtle screen. This event triggers when a key is released, so you can use it to determine when the key press has completed. 
Here is an example of how you could use it:\nsc.onkeypress(movlt, \"q\")\nsc.onkeypress(movrt, \"e\")\n\n# Use onkeyrelease to determine when the key press is finished\nsc.onkeyrelease(shoot, \"1\")\n\nYou can then move the score and gmm calculations inside of the shoot function, since this is where you want them to be executed. This will allow you to calculate the score once the key press is finished and the arrow has reached its final position.\ndef shoot():\n arrow1.showturtle()\n arrow1.fd(500)\n \n xs = arrow1.xcor() - center\n ys = arrow1.ycor() - center\n \n distance=math.sqrt(xs**2 + ys**2)\n \n def score():\n ptss = 0\n if distance > 5:\n ptss += 10\n elif distance < 5:\n ptss += 6\n return ptss\n \n gmm = score()\n ptss = gmm\n \n if distance > 5:\n scs.write(\"10 pts\")\n elif distance < 5:\n scs.write(\"6 pts\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074681396_python.txt
Q: How to properly install MechanicalSoup for Python? I wanted to practice web scraping with the Python module MechanicalSoup, but when I started installing it using pip install mechanicalsoup I encountered this error: "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?". I then tried running pip3 install lxml --use-pep517 to install lxml and its dependencies, but it returned the same error. Note that I'm using Visual Studio Code and installing this in a Python virtual environment. I looked everywhere for a possible resolution but so far nothing I found has worked. Any help would be appreciated. Thanks! A: To properly install MechanicalSoup, you need to make sure that you have the required dependencies installed. In this case, it looks like you need to install the lxml library. Here are the steps you can follow to properly install MechanicalSoup: Create a Python virtual environment for your project, if you haven't already done so. This will help you avoid conflicts with other Python projects and their dependencies. To create a virtual environment, you can use the virtualenv module. For example: $ virtualenv my-project-venv Activate your virtual environment. This will enable the virtual environment for your current shell session and allow you to install packages within this environment. To activate your virtual environment, you can use the source command, followed by the path to your virtual environment's bin/activate script. For example: $ source my-project-venv/bin/activate Install the required dependencies for MechanicalSoup. In this case, you need to install the lxml library. You can do this using pip by running the following command: $ pip install lxml Install MechanicalSoup itself.
Once you have installed the required dependencies, you can install MechanicalSoup using pip by running the following command: $ pip install mechanicalsoup After following these steps, you should be able to import and use MechanicalSoup in your Python code. For example: import mechanicalsoup browser = mechanicalsoup.StatefulBrowser() I hope this helps! Let me know if you have any other questions.
How to properly install MechanicalSoup for Python?
I wanted to practice web scraping with Python module MechanicalSoup, but when I started installing it using pip install mechanicalsoup I encountered this error "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?". I then tried running pip3 install lxml --use-pep517 to install lxml and its dependencies it returned the same error. Note that I'm using Visual Studio Code and installing this in a Python Virtual Environment. I looked up every where for possible resolution but so far nothing I found has worked. Any help would be appreciated. Thanks!
[ "To properly install MechanicalSoup, you need to make sure that you have the required dependencies installed. In this case, it looks like you need to install the lxml library.\nHere are the steps you can follow to properly install MechanicalSoup:\nCreate a Python virtual environment for your project, if you haven't already done so. This will help you avoid conflicts with other Python projects and their dependencies. To create a virtual environment, you can use the virtualenv module. For example:\n$ virtualenv my-project-venv\n\nActivate your virtual environment. This will enable the virtual environment for your current shell session and allow you to install packages within this environment. To activate your virtual environment, you can use the source command, followed by the path to your virtual environment's bin/activate script. For example:\n$ source my-project-venv/bin/activate\n\nInstall the required dependencies for MechanicalSoup. In this case, you need to install the lxml library. You can do this using pip by running the following command:\n$ pip install lxml\n\nInstall MechanicalSoup itself. Once you have installed the required dependencies, you can install MechanicalSoup using pip by running the following command:\nCopy code\n$ pip install mechanicalsoup\n\nAfter following these steps, you should be able to import and use MechanicalSoup in your Python code. For example:\nimport mechanicalsoup\n\nbrowser = mechanicalsoup.StatefulBrowser()\n\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "mechanicalsoup", "python", "web_scraping" ]
stackoverflow_0074681403_beautifulsoup_mechanicalsoup_python_web_scraping.txt
Q: Python how to do find with leading and trailing spaces I'm doing an extensive word search. How do I do a find that keeps leading and trailing spaces? The word is imported from a list. An example: find " oil " in "Use Cooking Oil" but do not find it in "Sally spoiled the food." .find() strips the leading and trailing spaces. nltk tokenizing does also. This code works if I want a simple lookup. It finds "oil" in "spoiled" which, for me, creates a false positive. The false positive is what I am trying to solve. I've tried putting " oil " in the word list (with spaces), but all methods I've tried strip the leading spaces (" oil " becomes "oil"). for r in search_list_df['title']: ###<- Search for word in this list. tfl_converted = [] token_found_list.clear() words = search_list ###<- list of words to cycle through. (including " oil ") for x in words: phrase = x text = r if phrase in r: ### <- this works if i DO NOT care about leading spaces. token_found_list.append(x) tfl_converted = ", ".join(token_found_list) if len(token_found_list) > 0: search_list_output.append(tfl_converted) else: tfl_converted = float("nan") search_list_output.append(tfl_converted) ** How do I iterate through a list of words and keep the leading and trailing spaces to avoid false positives and find only exact word matches?** A: You could split the sentence into an array of words. This way, you can see if a word is present in the array, and thus overcome false positives: words = [word.lower() for word in sentence.split()] if 'oil' in words: print(True) Here, I have also made sure that every word in the sentence is lowercase, such that case sensitivity is not going to be a problem. The split() method makes sure that the string sentence is split by spaces.
Hope this helps A: Create a function find_term with a one-liner: def find_term(sentence, term): return len([word for word in sentence.lower().split() if term == word]) > 0 then you can use it in your code like: sentence = " xy z Oil spoil" if find_term(sentence, "oil"): #do something with the sentence
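One more option (my suggestion, not from the answers above): splitting on whitespace still misses words glued to punctuation, since the token "oil." from "Low on oil." never equals "oil". A regex word boundary handles both that case and the 'oil'-inside-'spoiled' false positive, without needing literal spaces in the word list:

```python
import re

def contains_word(sentence, word):
    """True when `word` occurs as a whole word, case-insensitively."""
    pattern = r"\b{}\b".format(re.escape(word))
    return re.search(pattern, sentence, re.IGNORECASE) is not None

print(contains_word("Use Cooking Oil", "oil"))          # -> True
print(contains_word("Sally spoiled the food.", "oil"))  # -> False
print(contains_word("Low on oil.", "oil"))              # -> True (split() misses "oil.")
```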
Python how to do find with leading and trailing spaces
I'm doing an extensive word search. How do I do a find that keeps leading and trailing spaces. the word is imported from a list. An example: find " oil " in "Use Cooking Oil" but do not find with "Sally spoiled the food." .find() strips the leading and trailing spaces. nltk tokenizing does also. this code works if i want a simple lookup. It finds "oil" in "spoiled" which, for me, creates a false positive. the false positive is what I am trying to solve. I've tried putting " oil " in the word list (with spaces), but all methods I've tried strip the leading spaces (" oil " becomes "oil"). for r in search_list_df['title']: ###<- Search for word in this list. tfl_converted = [] token_found_list.clear() words = search_list ###<- list of words to cycle through. (including " oil ") for x in words: phrase = x text = r if phrase in r: ### <- this works if i DO NOT care about leading spaces. token_found_list.append(x) tfl_converted = ", ".join(token_found_list) if len(token_found_list) > 0: search_list_output.append(tfl_converted) else: tfl_converted = float("nan") search_list_output.append(tfl_converted) ** How do I iterate through a list of words and keep the leading and trailing spaces to avoid false positives and find only exact word matches?**
[ "You could split the sentence into an array of words. This way, you can see if a word is present in the array, and thus overcome false positives:\nwords = [word.lower() for word in sentence.split()]\nif 'oil' in words:\n print(True)\n\nHere, I have also made sure that every word in the sentence is lowercase, such that case sensitivity is not going to be a problem. The split() method makes sure that the string sentence is split by spaces.\nHope this helps\n", "Create a function find_term with oneliner,\ndef find_term(sentence, term):\n return len([word for word in sentence.lower().split() if term == word]) > 0\n\n\nthen you can use it in your code like,\nsentence = \" xy z Oil spoil\"\n\nif find_term(sentence, \"oil\"):\n #do something with the sentence\n\n" ]
[ 0, 0 ]
[]
[]
[ "find", "python", "space" ]
stackoverflow_0074680977_find_python_space.txt
Q: TF2 transform can't find an actuall existing frame In a global planner node that I wrote, I have the following init code #!/usr/bin/env python import rospy import copy import tf2_ros import time import numpy as np import math import tf from math import sqrt, pow from geometry_msgs.msg import Vector3, Point from std_msgs.msg import Int32MultiArray from std_msgs.msg import Bool from nav_msgs.msg import OccupancyGrid, Path from geometry_msgs.msg import PoseStamped, PointStamped from tf2_geometry_msgs import do_transform_point from Queue import PriorityQueue class GlobalPlanner(): def __init__(self): print("init global planner") self.tfBuffer = tf2_ros.Buffer() self.listener = tf2_ros.TransformListener(self.tfBuffer) self.drone_position_sub = rospy.Subscriber('uav/sensors/gps', PoseStamped, self.get_drone_position) self.drone_position = [] self.drone_map_position = [] self.map_sub = rospy.Subscriber("/map", OccupancyGrid, self.get_map) self.goal_sub = rospy.Subscriber("/cell_tower/position", Point, self.getTransformedGoal) self.goal_position = [] self.goal = Point() self.goal_map_position = [] self.occupancy_grid = OccupancyGrid() self.map = [] self.p_path = Int32MultiArray() self.position_pub = rospy.Publisher("/uav/input/position", Vector3, queue_size = 1) #next_movement in self.next_movement = Vector3 self.next_movement.z = 3 self.path_pub = rospy.Publisher('/uav/path', Int32MultiArray, queue_size=1) self.width = rospy.get_param('global_planner_node/map_width') self.height = rospy.get_param('global_planner_node/map_height') #Check whether there is a path plan self.have_plan = False self.path = [] self.euc_distance_drone_goal = 100 self.twod_distance_drone_goal = [] self.map_distance_drone_goal = [] self.mainLoop() And there is a call-back function call getTransformed goal, which will take the goal position in the "cell_tower" frame to the "world" frame. 
Which looks like this def getTransformedGoal(self, msg): self.goal = msg try: #Lookup the tower to world transform transform = self.tfBuffer.lookup_transform('cell_tower', 'world', rospy.Time()) #transform = self.tfBuffer.lookup_transform('world','cell-tower' rospy.Time()) #Convert the goal to a PointStamped goal_pointStamped = PointStamped() goal_pointStamped.point.x = self.goal.x goal_pointStamped.point.y = self.goal.y goal_pointStamped.point.z = self.goal.z #Use the do_transform_point function to convert the point using the transform new_point = do_transform_point(goal_pointStamped, transform) #Convert the point back into a vector message containing integers transform_point = [new_point.point.x, new_point.point.y] #Publish the vector self.goal_position = transform_point except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException) as e: print(e) print('global_planner tf2 exception, continuing') The error message said that "cell_tower" passed to lookupTransform argument target_frame does not exist. I checked the RQT plot for both active and all, which shows that when active, the /tf topic is not being subscribed to by the global_planner node. Check the following image, which is for active, and this image, which is for all the nodes (including non-active). But I have actually set up the listener, and I have another node called local_planner that uses the same strategy; it works for that node, but not for the global planner. I'm not sure why this is. A: Try adding a timeout to your lookup_transform() function call, as your transformation may not be available when you need it: transform = self.tfBuffer.lookup_transform('cell_tower', 'world', rospy.Time.now(), rospy.Duration(1.0))
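For illustration, the reason the Duration argument helps is that tf2 keeps retrying the lookup until the transform arrives or the timeout expires. That retry pattern can be sketched in plain Python (generic code, not the tf2 API — the function and exception names here are my own):

```python
import time

def lookup_with_timeout(lookup, timeout=1.0, poll=0.05):
    # Call `lookup` repeatedly until it succeeds or `timeout` seconds
    # pass; re-raise the last error if the deadline is reached
    deadline = time.monotonic() + timeout
    while True:
        try:
            return lookup()
        except LookupError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(poll)

# Example: a lookup that only becomes available on the third attempt
attempts = {"n": 0}
def flaky_lookup():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise LookupError("transform not available yet")
    return "transform"

print(lookup_with_timeout(flaky_lookup))  # transform
```

Without the deadline (i.e. with rospy.Time() and no Duration), a lookup made before the first /tf message arrives fails immediately, which matches the symptom described above.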
TF2 transform can't find an actually existing frame
In a global planner node that I wrote, I have the following init code #!/usr/bin/env python import rospy import copy import tf2_ros import time import numpy as np import math import tf from math import sqrt, pow from geometry_msgs.msg import Vector3, Point from std_msgs.msg import Int32MultiArray from std_msgs.msg import Bool from nav_msgs.msg import OccupancyGrid, Path from geometry_msgs.msg import PoseStamped, PointStamped from tf2_geometry_msgs import do_transform_point from Queue import PriorityQueue class GlobalPlanner(): def __init__(self): print("init global planner") self.tfBuffer = tf2_ros.Buffer() self.listener = tf2_ros.TransformListener(self.tfBuffer) self.drone_position_sub = rospy.Subscriber('uav/sensors/gps', PoseStamped, self.get_drone_position) self.drone_position = [] self.drone_map_position = [] self.map_sub = rospy.Subscriber("/map", OccupancyGrid, self.get_map) self.goal_sub = rospy.Subscriber("/cell_tower/position", Point, self.getTransformedGoal) self.goal_position = [] self.goal = Point() self.goal_map_position = [] self.occupancy_grid = OccupancyGrid() self.map = [] self.p_path = Int32MultiArray() self.position_pub = rospy.Publisher("/uav/input/position", Vector3, queue_size = 1) #next_movement in self.next_movement = Vector3 self.next_movement.z = 3 self.path_pub = rospy.Publisher('/uav/path', Int32MultiArray, queue_size=1) self.width = rospy.get_param('global_planner_node/map_width') self.height = rospy.get_param('global_planner_node/map_height') #Check whether there is a path plan self.have_plan = False self.path = [] self.euc_distance_drone_goal = 100 self.twod_distance_drone_goal = [] self.map_distance_drone_goal = [] self.mainLoop() And there is a call-back function call getTransformed goal, which will take the goal position in the "cell_tower" frame to the "world" frame. 
Which looks like this def getTransformedGoal(self, msg): self.goal = msg try: #Lookup the tower to world transform transform = self.tfBuffer.lookup_transform('cell_tower', 'world', rospy.Time()) #transform = self.tfBuffer.lookup_transform('world','cell-tower' rospy.Time()) #Convert the goal to a PointStamped goal_pointStamped = PointStamped() goal_pointStamped.point.x = self.goal.x goal_pointStamped.point.y = self.goal.y goal_pointStamped.point.z = self.goal.z #Use the do_transform_point function to convert the point using the transform new_point = do_transform_point(goal_pointStamped, transform) #Convert the point back into a vector message containing integers transform_point = [new_point.point.x, new_point.point.y] #Publish the vector self.goal_position = transform_point except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException) as e: print(e) print('global_planner tf2 exception, continuing') The error message said that "cell_tower" passed to lookupTransform argument target_frame does not exist. I checked the RQT plot for both active and all, which shows that when active, the /tf topic is not being subscribed to by the global_planner node. Check the following image, which is for active, and this image, which is for all the nodes (including non-active). But I have actually set up the listener, and I have another node called local_planner that uses the same strategy; it works for that node, but not for the global planner. I'm not sure why this is.
[ "Try adding a timeout to your lookup_transform() function call, as your transformation may not be available when you need it:\ntransform = self.tfBuffer.lookup_transform('cell_tower', 'world',rospy.Time.now(), rospy.Duration(1.0))\n\n" ]
[ 0 ]
[]
[]
[ "python", "ros", "slam", "subscriber", "tf2_ros" ]
stackoverflow_0074681266_python_ros_slam_subscriber_tf2_ros.txt
Q: how to parse all data I dont know why but when i get all data from requests it works but if i want get data by some category it return me that import requests import json headers = {'Accept': 'application/json, text/javascript, */*; q=0.01', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'uk-UA,uk;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6', 'X-Requested-With': 'XMLHttpRequest'} def get_data(): # url of all data url = 'https://buff.163.com/api/market/goods?game=csgo&page_num=1&use_suggestion=0&trigger=undefined_trigger&_=1670185664532' # url by category url2 = 'https://buff.163.com/api/market/goods?game=csgo&page_num=1&category_group=rifle&use_suggestion=0&trigger=undefined_trigger&_=1670191032071' r = requests.get(url=url2, headers=headers) print(r.json()) with open('r.json', 'w', encoding="utf-8") as file: json.dump(r.json(), file, indent=4, ensure_ascii=False) def main(): get_data() if __name__ == '__main__': main() when i run url i get good json object but when i run url2 i get that '{'code': 'Login Required', 'error': 'Please login.', 'extra': None}' help me pls do it!!!!! A: It looks like you need to authenticate with the server before you can access the data in the second URL. The server is returning a "Login Required" error because it is unable to verify that you are authorized to access the data. To fix this issue, you need to include the necessary authentication information in the request headers when making the request to the second URL. This could include a login token or other authentication credentials that the server requires in order to grant you access to the data. Without more information about the authentication requirements of the server, it is not possible to provide specific instructions on how to include the necessary authentication information in the request headers. You will need to consult the documentation for the server or contact the server's maintainers to learn more about the authentication requirements. 
A: You are not authorized on the website. Try using cookies to get a correct response from the site. You can also use the Selenium WebDriver function get_cookie(), then save the cookie and use it in your request; that way you should get the desired result. If you have any questions you can ask me on Telegram @deep0xFF. I'm good with Selenium WebDriver and requests as well.
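To make the cookie suggestion above concrete: requests can carry cookies on a Session, so every call through it is sent as the logged-in user. A minimal sketch (the cookie name and value are placeholders — the real ones would come from your browser's dev tools or from Selenium's get_cookie()):

```python
import requests

session = requests.Session()
session.headers.update({"X-Requested-With": "XMLHttpRequest"})

# Placeholder login cookie copied from an authenticated browser session
session.cookies.set("session", "PASTE_REAL_COOKIE_VALUE", domain="buff.163.com")

# The category request would then be made through the session, e.g.:
# r = session.get(url2)
# print(r.json())
print(session.cookies.get("session"))  # PASTE_REAL_COOKIE_VALUE
```

The network call itself is left commented out here; the point is only that the session attaches both the header and the cookie to every request it makes.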
how to parse all data
I don't know why, but when I get all data from requests it works; if I want to get data by some category, it returns me this: import requests import json headers = {'Accept': 'application/json, text/javascript, */*; q=0.01', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'uk-UA,uk;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6', 'X-Requested-With': 'XMLHttpRequest'} def get_data(): # url of all data url = 'https://buff.163.com/api/market/goods?game=csgo&page_num=1&use_suggestion=0&trigger=undefined_trigger&_=1670185664532' # url by category url2 = 'https://buff.163.com/api/market/goods?game=csgo&page_num=1&category_group=rifle&use_suggestion=0&trigger=undefined_trigger&_=1670191032071' r = requests.get(url=url2, headers=headers) print(r.json()) with open('r.json', 'w', encoding="utf-8") as file: json.dump(r.json(), file, indent=4, ensure_ascii=False) def main(): get_data() if __name__ == '__main__': main() When I run url I get a good json object, but when I run url2 I get this: '{'code': 'Login Required', 'error': 'Please login.', 'extra': None}' Please help me with this!
[ "It looks like you need to authenticate with the server before you can access the data in the second URL. The server is returning a \"Login Required\" error because it is unable to verify that you are authorized to access the data.\nTo fix this issue, you need to include the necessary authentication information in the request headers when making the request to the second URL. This could include a login token or other authentication credentials that the server requires in order to grant you access to the data.\nWithout more information about the authentication requirements of the server, it is not possible to provide specific instructions on how to include the necessary authentication information in the request headers. You will need to consult the documentation for the server or contact the server's maintainers to learn more about the authentication requirements.\n", "You are not authorized on the website. Try to use cookie to get correct response from site.\nBy the way you can use selenium web driver function get_cookie(), then save it and use in your request. To my mind such way you’ll get desired result.\nIf you have any questions you can ask me on telegram @deep0xFF. Im good in selenium webdriver and requests also.)\n" ]
[ 0, 0 ]
[]
[]
[ "json", "parsing", "python" ]
stackoverflow_0074681343_json_parsing_python.txt
Q: Coin Toss game for fun How do I create a coin toss using def and return and using random int 0 and 1. I have never used python before. So I'm wondering how to make a function. from random import randint num = input('Number of times to flip coin: ') flips = [randint(0,1) for r in range(num)] results = [] for object in flips: if object == 0: results.append('Heads') elif object == 1: results.append('Tails') print results A: Like this? from random import randint def flipcoin(num_of_times): results = [] for i in range(num_of_times): results.append(randint(0,1)) return results num = int(input('Number of times to flip coin: ')) results = flipcoin(num) print(results) EDIT: Dealing with coin faces, also using a function. from random import randint def coin_face(x): if (x == 0): return "Heads" if (x == 1): return "Tails" def flipcoin(num_of_times): results = [] for i in range(num_of_times): results.append(coin_face(randint(0,1))) return results num = int(input('Number of times to flip coin: ')) results = flipcoin(num) print(results) Thanks. A: Using a function for each coin flip: from random import randint def toss(): flip = randint(0,1) if flip == 0: return 'Heads' return 'Tails' results = [] num = input('Number of times to flip coin: ') for i in range(num): results.append(toss()) print(results)
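The two answers above can be condensed; here is a compact Python 3 variant of the same idea (the function names are my own — note that input() returns a string, which is why the original range(num) / print results code fails in Python 3):

```python
from random import randint

def toss():
    # randint(0, 1) returns 0 or 1 with equal probability
    return 'Heads' if randint(0, 1) == 0 else 'Tails'

def flip_coins(n):
    return [toss() for _ in range(n)]

num = 5  # in the original: num = int(input('Number of times to flip coin: '))
results = flip_coins(num)
print(results)  # e.g. ['Heads', 'Tails', 'Tails', 'Heads', 'Heads']
```

Fixing num to 5 keeps the sketch runnable without interactive input; swapping the int(input(...)) line back in restores the prompt.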
Coin Toss game for fun
How do I create a coin toss using def and return and using random int 0 and 1. I have never used python before. So I'm wondering how to make a function. from random import randint num = input('Number of times to flip coin: ') flips = [randint(0,1) for r in range(num)] results = [] for object in flips: if object == 0: results.append('Heads') elif object == 1: results.append('Tails') print results
[ "Like this?\nfrom random import randint\n\ndef flipcoin(num_of_times):\n results = []\n for i in range(num_of_times):\n results.append(randint(0,1))\n return results\n\nnum = int(input('Number of times to flip coin: '))\nresults = flipcoin(num)\n\nprint(results)\n\nEDIT: Dealing with coin faces, also using a function.\nfrom random import randint\n\ndef coin_face(x):\n if (x == 0):\n return \"Heads\"\n if (x == 1):\n return \"Tails\"\n\ndef flipcoin(num_of_times):\n results = []\n for i in range(num_of_times):\n results.append(coin_face(randint(0,1)))\n return results\n\nnum = int(input('Number of times to flip coin: '))\nresults = flipcoin(num)\n\nprint(results)\n\nThanks.\n", "Using a function for each coin flip:\nfrom random import randint\ndef toss():\n flip = randint(0,1)\n if flip == 0:\n return 'Heads'\n return 'Tails'\n\nresults = []\n\nnum = input('Number of times to flip coin: ')\nfor i in range(num):\n results.append(toss())\n\nprint(results)\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074681448_python_python_3.x.txt
Q: Python: how to instantiate a class "like a data class"? Data classes have this nice property of a much short / more readable "init function". Example: from dataclasses import dataclass, field @dataclass class MyClass1: x: int = field(default=1) y: int = field(default=2) As opposed to: class MyClass2: def __init__(self, x : int = 1, y : int = 2): self.x = x self.y = y Here, the code for MyClass1 is shorter because it doesn't need the explicit "def __init__(...)" function. Further, the ability to use fields allows for even more control while maintaining readability. How does that work under the hood, and how can one implement this (and only this) particular syntactic sugar without actually using/importing dataclass? A: In Python, classes are defined using the class keyword, and the @dataclass decorator is used to make a class a data class. The field function is used to specify the default value for a field in the class. To define a class without using the @dataclass decorator, you can simply use the class keyword followed by the class name, and then specify the init method, which is the constructor for the class. The init method takes in the necessary parameters and assigns them to the corresponding fields in the class. Here's an example of how you could define the MyClass2 class without using the @dataclass decorator: class MyClass2: def __init__(self, x : int = 1, y : int = 2): self.x = x self.y = y As you can see, this is a bit more verbose than using the @dataclass decorator, but it achieves the same result. If you want to define a class that has the same concise syntax as a data class, but without using the @dataclass decorator, you can use a metaclass. A metaclass is a class that is used to create a class. In Python, a metaclass is specified using the metaclass keyword argument in the class definition. 
Here's an example of how you could define a metaclass that has the same behavior as the @dataclass decorator: class DataClassMeta(type): def __new__(cls, name, bases, dct): # This code is executed when the class is defined # It creates the __init__ method for the class # using the fields and their default values fields = {} for key, value in dct.items(): if isinstance(value, field): fields[key] = value.default dct["__init__"] = lambda self, **kwargs: self.__dict__.update({**fields, **kwargs}) return super().__new__(cls, name, bases, dct) class MyClass3(metaclass=DataClassMeta): x = field(default=1) y = field(default=2) In this code, the DataClassMeta metaclass is defined, and it has a new method that creates the init method for the class using the fields and their default values. The MyClass3 class is then defined using the DataClassMeta metaclass. This allows it to have the same concise syntax as a data class, without using the @dataclass decorator. You can use this approach to define a class that has the same concise syntax as a data class, but without using the @dataclass decorator. However, keep in mind that the @dataclass decorator provides additional functionality, such as automatically generating methods for comparing instances of the class and for representing them as strings. If you want to include this functionality in your class, you'll need to implement it yourself.
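A self-contained variant of the metaclass idea above, using plain class attributes as the defaults (this sidesteps dataclasses.field, which returns a Field object rather than a type you can check with isinstance — the metaclass and class names here are my own):

```python
class AutoInitMeta(type):
    def __new__(mcls, name, bases, namespace):
        # Collect non-dunder, non-callable class attributes as defaults
        defaults = {
            k: v for k, v in namespace.items()
            if not k.startswith('__') and not callable(v)
        }

        def __init__(self, **kwargs):
            # Start from the defaults, then override with whatever
            # keyword arguments the caller supplied
            self.__dict__.update({**defaults, **kwargs})

        namespace['__init__'] = __init__
        return super().__new__(mcls, name, bases, namespace)

class MyClass3(metaclass=AutoInitMeta):
    x = 1
    y = 2

p = MyClass3(y=5)
print(p.x, p.y)  # 1 5
```

This reproduces only the generated __init__; a real @dataclass also generates __repr__, __eq__, and friends, which this sketch deliberately leaves out.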
Python: how to instantiate a class "like a data class"?
Data classes have this nice property of a much short / more readable "init function". Example: from dataclasses import dataclass, field @dataclass class MyClass1: x: int = field(default=1) y: int = field(default=2) As opposed to: class MyClass2: def __init__(self, x : int = 1, y : int = 2): self.x = x self.y = y Here, the code for MyClass1 is shorter because it doesn't need the explicit "def __init__(...)" function. Further, the ability to use fields allows for even more control while maintaining readability. How does that work under the hood, and how can one implement this (and only this) particular syntactic sugar without actually using/importing dataclass?
[ "In Python, classes are defined using the class keyword, and the @dataclass decorator is used to make a class a data class. The field function is used to specify the default value for a field in the class.\nTo define a class without using the @dataclass decorator, you can simply use the class keyword followed by the class name, and then specify the init method, which is the constructor for the class. The init method takes in the necessary parameters and assigns them to the corresponding fields in the class. Here's an example of how you could define the MyClass2 class without using the @dataclass decorator:\nclass MyClass2:\n def __init__(self, x : int = 1, y : int = 2):\n self.x = x\n self.y = y\n\nAs you can see, this is a bit more verbose than using the @dataclass decorator, but it achieves the same result.\nIf you want to define a class that has the same concise syntax as a data class, but without using the @dataclass decorator, you can use a metaclass. A metaclass is a class that is used to create a class. In Python, a metaclass is specified using the metaclass keyword argument in the class definition.\nHere's an example of how you could define a metaclass that has the same behavior as the @dataclass decorator:\nclass DataClassMeta(type):\n def __new__(cls, name, bases, dct):\n # This code is executed when the class is defined\n # It creates the __init__ method for the class\n # using the fields and their default values\n fields = {}\n for key, value in dct.items():\n if isinstance(value, field):\n fields[key] = value.default\n dct[\"__init__\"] = lambda self, **kwargs: \n\nself.__dict__.update({**fields, **kwargs})\n return super().__new__(cls, name, bases, dct)\n \n class MyClass3(metaclass=DataClassMeta):\n x = field(default=1)\n y = field(default=2)\n\nIn this code, the DataClassMeta metaclass is defined, and it has a new method that creates the init method for the class using the fields and their default values. 
The MyClass3 class is then defined using the DataClassMeta metaclass. This allows it to have the same concise syntax as a data class, without using the @dataclass decorator.\nYou can use this approach to define a class that has the same concise syntax as a data class, but without using the @dataclass decorator. However, keep in mind that the @dataclass decorator provides additional functionality, such as automatically generating methods for comparing instances of the class and for representing them as strings. If you want to include this functionality in your class, you'll need to implement it yourself.\n" ]
[ 0 ]
[]
[]
[ "python", "python_dataclasses" ]
stackoverflow_0074681453_python_python_dataclasses.txt
Q: How to locate a specific var type inside many others arrays in python? I'd like know how can I localize a specific type variable in a set of arrays, that could change its own length structure, i.e: [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], ("I WANNA BE LOCATED", 548967)]] I just needed to extract the type variable that is a Str in this case: "I WANNA BE LOCATED" I tried use "for" loop, but it doesn't help, because possibly in my case, the string might be there in other indices. Is there another way? Maybe with numpy or some lambda? A: Here is an example of how you could use these functions to extract the string from the nested array: # Define the nested array arr = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], (1, "I WANNA BE LOCATED",)]] # Define a function to extract the string from the nested array def extract_string(arr): print(arr) # Iterate over the elements in the array for i, elem in enumerate(arr): # Check if the element is a string if isinstance(elem, str): # Return the string if it is a string return elem # Check if the element is a nested array elif isinstance(elem, list) or isinstance(elem, tuple): # Recursively call the function to search for the string in the nested array result = extract_string(elem) if result: return result # Extract the string from the nested array string = extract_string(arr) print(string) In this code, the extract_string() function recursively searches the nested array for a string. If it finds a string, it returns the string. If it finds a nested array, it recursively calls itself to search for the string in the nested array. This allows the function to search for the string in any level of the nested array. 
A: I'd do it recursively; this, for example, will work (provided you only have tuples and lists): collection = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], ("I WANNA BE LOCATED", 548967)]] def get_string(thing): if type(thing) == str: return thing if type(thing) in [list, tuple]: for i in thing: if (a := get_string(i)): return a return None get_string(collection) # Out[456]: 'I WANNA BE LOCATED' A: Flatten the arbitrarily nested list; Filter the strings (and perhaps bytes). Example: from collections.abc import Iterable li=[[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], ("I WANNA BE LOCATED", 548967)]] def flatten(xs): for x in xs: if isinstance(x, Iterable) and not isinstance(x, (str, bytes)): yield from flatten(x) else: yield x >>> [item for item in flatten(li) if isinstance(item,(str, bytes))] ['I WANNA BE LOCATED']
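The recursive answers above generalize naturally from str to any type; a sketch that returns the first value of a requested type from an arbitrarily nested list/tuple structure (the function name is my own):

```python
def find_first(obj, typ):
    # Depth-first search through nested lists/tuples for the first
    # element that is an instance of `typ`
    if isinstance(obj, typ):
        return obj
    if isinstance(obj, (list, tuple)):
        for item in obj:
            found = find_first(item, typ)
            if found is not None:
                return found
    return None

data = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]],
         ("I WANNA BE LOCATED", 548967)]]
print(find_first(data, str))  # I WANNA BE LOCATED
```

Calling find_first(data, int) on the same structure would return 548967, since floats are not int instances.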
How to locate a specific var type inside many other arrays in python?
I'd like to know how I can locate a specific type of variable in a set of arrays that could change its own length structure, i.e.: [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], ("I WANNA BE LOCATED", 548967)]] I just need to extract the variable that is a str, in this case: "I WANNA BE LOCATED" I tried using a "for" loop, but it doesn't help, because in my case the string might possibly be at other indices. Is there another way? Maybe with numpy or some lambda?
[ "Here is an example of how you could use these functions to extract the string from the nested array:\n# Define the nested array\narr = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], (1, \"I WANNA BE LOCATED\",)]]\n\n# Define a function to extract the string from the nested array\ndef extract_string(arr):\n print(arr)\n # Iterate over the elements in the array\n for i, elem in enumerate(arr):\n # Check if the element is a string\n if isinstance(elem, str):\n # Return the string if it is a string\n return elem\n # Check if the element is a nested array\n elif isinstance(elem, list) or isinstance(elem, tuple):\n # Recursively call the function to search for the string in the nested array\n result = extract_string(elem)\n if result:\n return result\n\n# Extract the string from the nested array\nstring = extract_string(arr)\nprint(string)\n\nIn this code, the extract_string() function recursively searches the nested array for a string. If it finds a string, it returns the string. If it finds a nested array, it recursively calls itself to search for the string in the nested array. 
This allows the function to search for the string in any level of the nested array.\n", "I'd do it recursively; this, for example, will work (provided you only have tuples and lists):\ncollection = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], (\"I WANNA BE LOCATED\", 548967)]]\n\ndef get_string(thing):\n if type(thing) == str:\n return thing\n if type(thing) in [list, tuple]:\n for i in thing:\n if (a := get_string(i)):\n return a\n return None\n\nget_string(collection)\n# Out[456]: 'I WANNA BE LOCATED'\n\n", "\nFlatten the arbitrarily nested list;\nFilter the strings (and perhaps bytes).\n\nExample:\nfrom collections.abc import Iterable\n\nli=[[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], (\"I WANNA BE LOCATED\", 548967)]]\n\ndef flatten(xs):\n for x in xs:\n if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):\n yield from flatten(x)\n else:\n yield x\n\n>>> [item for item in flatten(li) if isinstance(item,(str, bytes))]\n['I WANNA BE LOCATED']\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "filter", "indexing", "list", "numpy", "python" ]
stackoverflow_0074681279_filter_indexing_list_numpy_python.txt
Q: Selenium - python webdriver exits from browser after loading I try to open browser using Selenium in Python and after the browser opens, it exits from it, I tried several ways to write my code but every possible way works this way. Thank you in advance for help `from selenium import webdriver from selenium.webdriver import Chrome from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager options = webdriver.ChromeOptions() options.add_experimental_option("detach", True) s=Service(ChromeDriverManager().install()) driver = webdriver.Chrome(service=s) driver.get("https://amazon.com")` I expected the browser to open amazon.com and stay like this until I close or the programme close it. Actual result - when the browser loads the website, it exists from itself. A: It looks like you are using the webdriver.Chrome class to create your Chrome driver instance. This class has a service parameter that you can use to specify the Chrome service that should be used to start the Chrome browser. In your code, you are creating a Chrome service using the Service class and passing it to the webdriver.Chrome class as the service parameter. However, you are not starting the Chrome service before creating the driver instance. 
To fix this, you can call the start() method on the Chrome service before creating the driver instance, like this: from selenium import webdriver from selenium.webdriver import Chrome from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager options = webdriver.ChromeOptions() options.add_experimental_option("detach", True) # Create the Chrome service s = Service(ChromeDriverManager().install()) # Start the Chrome service s.start() # Create the driver instance using the Chrome service driver = webdriver.Chrome(service=s) # Open the website driver.get("https://amazon.com") This should start the Chrome service before creating the driver instance, which should prevent the browser from exiting immediately after opening. You can then use the driver.quit() method to close the browser when you are done. A: Use driver.close() function after getting result ;)
Selenium - python webdriver exits from browser after loading
I try to open a browser using Selenium in Python, and after the browser opens, it exits. I tried several ways to write my code, but every possible way works this way. Thank you in advance for help. `from selenium import webdriver from selenium.webdriver import Chrome from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager options = webdriver.ChromeOptions() options.add_experimental_option("detach", True) s=Service(ChromeDriverManager().install()) driver = webdriver.Chrome(service=s) driver.get("https://amazon.com")` I expected the browser to open amazon.com and stay like this until I close it or the program closes it. Actual result - when the browser loads the website, it exits by itself.
[ "It looks like you are using the webdriver.Chrome class to create your Chrome driver instance. This class has a service parameter that you can use to specify the Chrome service that should be used to start the Chrome browser.\nIn your code, you are creating a Chrome service using the Service class and passing it to the webdriver.Chrome class as the service parameter. However, you are not starting the Chrome service before creating the driver instance. To fix this, you can call the start() method on the Chrome service before creating the driver instance, like this:\nfrom selenium import webdriver\nfrom selenium.webdriver import Chrome\nfrom selenium.webdriver.chrome.service import Service\nfrom webdriver_manager.chrome import ChromeDriverManager\n\noptions = webdriver.ChromeOptions()\noptions.add_experimental_option(\"detach\", True)\n\n# Create the Chrome service\ns = Service(ChromeDriverManager().install())\n\n# Start the Chrome service\ns.start()\n\n# Create the driver instance using the Chrome service\ndriver = webdriver.Chrome(service=s)\n\n# Open the website\ndriver.get(\"https://amazon.com\")\n\nThis should start the Chrome service before creating the driver instance, which should prevent the browser from exiting immediately after opening. You can then use the driver.quit() method to close the browser when you are done.\n", "Use driver.close() function after getting result ;)\n" ]
[ 0, 0 ]
[]
[]
[ "automation", "crash", "python", "selenium", "webdriver" ]
stackoverflow_0074681137_automation_crash_python_selenium_webdriver.txt
Q: How to convert space separated file to tab delimited file in python? I have two data files, viz., 'fin.dat' and 'shape.dat'. I want to format 'shape.dat' just the way the 'fin.dat' is written with Python. The files can be found here https://easyupload.io/m/h94wd3. The snippets of the data structures are given here fin.dat,shape.dat. Please help me doing that. A: To convert a space-separated file to a tab-delimited file in Python, you can use the replace() method to replace all occurrences of spaces with tabs. Here's an example: # Open the file in read mode with open('input.txt', 'r') as input_file: # Read the file content content = input_file.read() # Replace all occurrences of space with tab content = content.replace(' ', '\t') # Open the file in write mode with open('output.txt', 'w') as output_file: # Write the modified content to the file output_file.write(content) In this example, the input.txt file is read and its content is stored in the content variable. Then, all occurrences of space are replaced with tab using the replace() method. Finally, the modified content is written back to the output.txt file. You can modify this code to work with your specific requirements. For example, you can use different delimiters, or you can process the file line by line instead of reading and writing the entire content in one go.
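One caveat with replace(' ', '\t') above: every single space becomes a tab, so columns separated by runs of spaces produce runs of tabs. If the goal is exactly one tab between fields, splitting on whitespace per line is a common alternative (a sketch, shown on an in-memory string rather than the actual .dat files):

```python
content = "1.0   2.00  -3.5\n4.2     5.1   6.0\n"

# str.split() with no argument splits on any run of whitespace,
# so multiple spaces collapse into a single field separator
converted = "\n".join("\t".join(line.split()) for line in content.splitlines())

print(repr(converted))  # '1.0\t2.00\t-3.5\n4.2\t5.1\t6.0'
```

Reading the real file content with open(...).read() and writing `converted` back out would slot directly into the answer's read/write skeleton.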
How to convert space separated file to tab delimited file in python?
I have two data files, viz., 'fin.dat' and 'shape.dat'. I want to format 'shape.dat' just the way the 'fin.dat' is written with Python. The files can be found here https://easyupload.io/m/h94wd3. The snippets of the data structures are given here fin.dat,shape.dat. Please help me doing that.
[ "To convert a space-separated file to a tab-delimited file in Python, you can use the replace() method to replace all occurrences of spaces with tabs. Here's an example:\n# Open the file in read mode\nwith open('input.txt', 'r') as input_file:\n # Read the file content\n content = input_file.read()\n\n# Replace all occurrences of space with tab\ncontent = content.replace(' ', '\\t')\n\n# Open the file in write mode\nwith open('output.txt', 'w') as output_file:\n # Write the modified content to the file\n output_file.write(content)\n\nIn this example, the input.txt file is read and its content is stored in the content variable. Then, all occurrences of space are replaced with tab using the replace() method. Finally, the modified content is written back to the output.txt file.\nYou can modify this code to work with your specific requirements. For example, you can use different delimiters, or you can process the file line by line instead of reading and writing the entire content in one go.\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074681480_numpy_pandas_python.txt
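A caveat on the replace()-based answer above: replacing every single space with a tab mis-handles .dat files whose columns are aligned with runs of spaces, since each space in the run becomes its own tab. A line-by-line sketch using only the standard library, with throwaway temp files standing in for the real shape.dat:

```python
import os
import tempfile

def space_to_tab(src_path, dst_path):
    """Rewrite a whitespace-separated file as tab-delimited, line by line."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            # split() with no argument collapses runs of spaces/tabs, so
            # visually aligned columns become exactly one tab per field.
            dst.write("\t".join(line.split()) + "\n")

# Demo with hypothetical contents; substitute the real shape.dat paths.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "shape.dat")
dst = os.path.join(tmp, "shape_tab.dat")
with open(src, "w") as f:
    f.write("1.0   2.0  3.0\n4.0 5.0      6.0\n")
space_to_tab(src, dst)
result = open(dst).read()
print(result)
```

Unlike the naive replace, this also normalizes trailing whitespace and mixed space/tab separators in the input.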
Q: Why does Undetected Chromedriver not work with Selenium Wire I want to make a request using Selenium Wire. The site has an anti-bot protection. I tried to use only Undetected-Chromedriver. Everything works well. import undetected_chromedriver as uc driver = uc.Chrome() driver.get(f'https://nowsecure.nl/') time.sleep(10) driver.close() driver.quit() But when I use Selenium Wire ... nothing works ... import seleniumwire.undetected_chromedriver as uc driver = uc.Chrome() driver.get(f'https://nowsecure.nl/') time.sleep(10) driver.close() driver.quit() A: You have to add an options in your undetected chrome browser. options = uc.ChromeOptions() options.add_argument('--start-maximized') options.add_argument('--disable-notifications') driver = uc.Chrome(options=options, seleniumwire_options={ 'proxy': { 'http': f'http://{proxy_user}:{proxy_password}@{proxy_ip}:{proxy_port}', } }) uc.Chrome options you can find in Google
Why does Undetected Chromedriver not work with Selenium Wire
I want to make a request using Selenium Wire. The site has an anti-bot protection. I tried to use only Undetected-Chromedriver. Everything works well. import undetected_chromedriver as uc driver = uc.Chrome() driver.get(f'https://nowsecure.nl/') time.sleep(10) driver.close() driver.quit() But when I use Selenium Wire ... nothing works ... import seleniumwire.undetected_chromedriver as uc driver = uc.Chrome() driver.get(f'https://nowsecure.nl/') time.sleep(10) driver.close() driver.quit()
[ "You have to add an options in your undetected chrome browser.\noptions = uc.ChromeOptions()\noptions.add_argument('--start-maximized')\noptions.add_argument('--disable-notifications')\n\ndriver = uc.Chrome(options=options, seleniumwire_options={\n 'proxy': {\n 'http': f'http://{proxy_user}:{proxy_password}@{proxy_ip}:{proxy_port}',\n }\n })\n\nuc.Chrome options you can find in Google\n" ]
[ 0 ]
[]
[]
[ "cloudflare", "python", "selenium", "seleniumwire", "undetected_chromedriver" ]
stackoverflow_0074680942_cloudflare_python_selenium_seleniumwire_undetected_chromedriver.txt
Q: how to use info from .txt file to create variables in python? I'm very new to python, and I'd like to know how I can use the info in a text file to create variables. For example, if the txt file looked like this: vin_brand_type_year_price 2132_BMW_330xi_2016_67000 1234_audi_a4_2019_92000 9876_mclaren_720s_2022_327000 How do I then, for example, use it to make a variable called vin and have all the vin numbers in it? I can have the terminal read it. this is what i have so far with open('car.txt', 'r') as file: file_content = file.read() print(file_content) Thank you for any help you can provide. A: We can then use the index() method to find the index of the "vin" header in the list of header values. This will give us the index of the VIN number in each line of the text file. We can then use this index to extract # Create an empty list to store the VIN numbers. vin = [] # Open the text file and read its contents. with open('car.txt', 'r') as file: # Read the first line of the file, which contains the header. header = file.readline() # Split the header on the underscore character. header_values = header.split("_") # Get the index of the "vin" header. vin_index = header_values.index("vin") # Read each line of the file, starting with the second line. for line in file: # Split the line on the underscore character. values = line.split("_") # Get the VIN number, using the index of the "vin" header. vin_number = values[vin_index] # Add the VIN number to the list. vin.append(vin_number) # Print the list of VIN numbers. print(vin) A: There are several ways to do this. The best depends on what you plan to do next. This file will parse with the csv module and you can use csv.reader to iterate all of the lines. To get vin specifically, you could import csv with open('car.txt', 'r') as file: vin = [row[0] for row in csv.reader(file, delimiter="_")] A: You can slice the strings around '_', get the first part (at index 0) and append it to a list variable: vin = [] with open('car.txt', 'r') as file: lines = file.readlines() for line in lines.splitlines(): line = line.strip() if line: vin.append(line.split('_')[0]) vin.pop(0) # this one because I was too cheap to skip the header line :) A: I would use regex to accomplish that. Assuming the file (car.txt) looks like this: vin_brand_type_year_price 2132_BMW_330xi_2016_67000 1234_audi_a4_2019_92000 9876_mclaren_720s_2022_327000 I would use this python script: import re with open('car.txt') as f: data = f.readlines() vin = [] for v in data: if match := re.match(r'(\d+)', v.strip()): vin.append(match.group(0)) print(vin) the r'^(\d)+' is a regex for selecting the part of the text that starts with digits. This is to ensure any line in the file that doesn't start with digits will be ignored.
how to use info from .txt file to create variables in python?
I'm very new to python, and I'd like to know how I can use the info in a text file to create variables. For example, if the txt file looked like this: vin_brand_type_year_price 2132_BMW_330xi_2016_67000 1234_audi_a4_2019_92000 9876_mclaren_720s_2022_327000 How do I then, for example, use it to make a variable called vin and have all the vin numbers in it? I can have the terminal read it. this is what i have so far with open('car.txt', 'r') as file: file_content = file.read() print(file_content) Thank you for any help you can provide.
[ "We can then use the index() method to find the index of the \"vin\" header in the list of header values. This will give us the index of the VIN number in each line of the text file. We can then use this index to extract\n# Create an empty list to store the VIN numbers.\nvin = []\n\n# Open the text file and read its contents.\nwith open('car.txt', 'r') as file:\n # Read the first line of the file, which contains the header.\n header = file.readline()\n\n # Split the header on the underscore character.\n header_values = header.split(\"_\")\n\n # Get the index of the \"vin\" header.\n vin_index = header_values.index(\"vin\")\n\n # Read each line of the file, starting with the second line.\n for line in file:\n # Split the line on the underscore character.\n values = line.split(\"_\")\n\n # Get the VIN number, using the index of the \"vin\" header.\n vin_number = values[vin_index]\n\n # Add the VIN number to the list.\n vin.append(vin_number)\n\n# Print the list of VIN numbers.\nprint(vin)\n\n", "There are several ways to do this. The best depends on what you plan to do next. This file will parse with the csv module and you can use csv.reader to iterate all of the lines. To get vin specifically, you could\nimport csv\n\nwith open('car.txt', 'r') as file:\n vin = [row[0] for row in csv.reader(file, delimiter=\"_\")]\n\n", "You can slice the strings around '_', get the first part (at index 0) and append it to a list variable:\nvin = []\n\nwith open('car.txt', 'r') as file:\n lines = file.readlines() \nfor line in lines.splitlines():\n line = line.strip()\n if line:\n vin.append(line.split('_')[0])\n \nvin.pop(0) # this one because I was too cheap to skip the header line :)\n\n", "I would use regex to accomplish that. Assuming the file (car.txt) looks like this:\nvin_brand_type_year_price\n2132_BMW_330xi_2016_67000\n1234_audi_a4_2019_92000\n9876_mclaren_720s_2022_327000\n\nI would use this python script:\nimport re\n\nwith open('car.txt') as f:\n data = f.readlines()\n\nvin = []\nfor v in data:\n if match := re.match(r'(\\d+)', v.strip()):\n vin.append(match.group(0))\n\nprint(vin)\n\nthe\n\nr'^(\\d)+'\n\nis a regex for selecting the part of the text that starts with digits. This is to ensure any line in the file that doesn't start with digits will be ignored.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074681417_python.txt
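The answers in the record above all hard-code the position of the vin column. As a variation on the csv.reader answer, csv.DictReader can look the column up by its header name instead of by index. A minimal sketch, assuming the underscore-delimited layout shown in the question (io.StringIO stands in for the real car.txt):

```python
import csv
import io

# In-memory stand-in for car.txt, using the sample rows from the question.
data = """vin_brand_type_year_price
2132_BMW_330xi_2016_67000
1234_audi_a4_2019_92000
9876_mclaren_720s_2022_327000
"""

# DictReader takes the first row as field names, so every later row
# becomes a dict keyed by "vin", "brand", "type", "year", "price".
reader = csv.DictReader(io.StringIO(data), delimiter="_")
vin = [row["vin"] for row in reader]
print(vin)  # ['2132', '1234', '9876']
```

Looking fields up by name keeps the code working even if the column order in the file changes, as long as the header line is present.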
Q: Using PIL module to open file from GCS I am a beginner in programming, and this is my first little try. I'm currently facing a bottleneck, I would like to ask for the help. Any advice will be welcome. Thank you in advance! Here is what I want to do: To make a text detection application and extract the text for the further usage (for instance, to map some of the other relevant information in a data). So, I divided it into two steps: 1.first, to detect the text 2.extract the text and use the regular expression to rearrange it for the data mapping. For the first step, I use google vision api, so I have no problem reading the image from google cloud storage (code reference 1): However, when it comes to step two, I need a PIL module to open the file for drawing the text. When using the method Image.open(), it requires a path. My question is how do I call the path? (code reference 2): code reference 1: from google.cloud import vision image_uri = 'gs://img_platecapture/img_001.jpg' client = vision.ImageAnnotatorClient() image = vision.Image() image.source.image_uri = image_uri ## <- THE PATH ## response = client.text_detection(image=image) for text in response.text_annotations: print('=' * 30) print(text.description) vertices = ['(%s,%s)' % (v.x, v.y) for v in text.bounding_poly.vertices] print('bounds:', ",".join(vertices)) if response.error.message: raise Exception( '{}\nFor more info on error messages, check: ' 'https://cloud.google.com/apis/design/errors'.format( response.error.message)) code reference 2: from PIL import Image, ImageDraw from PIL import ImageFont import re img = Image.open(?) ## <- THE PATH ## draw = ImageDraw.Draw(img) font = ImageFont.truetype("simsun.ttc", 18) for text in response.text_annotations[1::]: ocr = text.description bound=text.bounding_poly draw.text((bound.vertices[0].x-25, bound.vertices[0].y-25),ocr,fill=(255,0,0),font=font) draw.polygon( [ bound.vertices[0].x, bound.vertices[0].y, bound.vertices[1].x, bound.vertices[1].y, bound.vertices[2].x, bound.vertices[2].y, bound.vertices[3].x, bound.vertices[3].y, ], None, 'yellow', ) texts=response.text_annotations a=str(texts[0].description.split()) b=re.sub(u"([^\u4e00-\u9fa5\u0030-u0039])","",a) b1="".join(b) regex1 = re.search(r"\D{1,2}Dist.",b) if regex1: regex1="{}".format(regex1.group(0)) ......... A: PIL does not have built in ability to automatically open files from GCS. you will need to either Download the file to local storage and point PIL to that file or Give PIL a BlobReader which it can use to access the data: from PIL import Image from google.cloud import storage storage_client = storage.Client() bucket = storage_client.bucket('img_platecapture') blob = bucket.get_blob('img_001.jpg') # use get_blob to fix generation number, so we don't get corruption if blob is overwritten while we read it. with blob.open() as file: img = Image.open(file) # ...
Using PIL module to open file from GCS
I am a beginner in programming, and this is my first little try. I'm currently facing a bottleneck, I would like to ask for the help. Any advice will be welcome. Thank you in advance! Here is what I want to do: To make a text detection application and extract the text for the further usage (for instance, to map some of the other relevant information in a data). So, I divided it into two steps: 1.first, to detect the text 2.extract the text and use the regular expression to rearrange it for the data mapping. For the first step, I use google vision api, so I have no problem reading the image from google cloud storage (code reference 1): However, when it comes to step two, I need a PIL module to open the file for drawing the text. When using the method Image.open(), it requires a path. My question is how do I call the path? (code reference 2): code reference 1: from google.cloud import vision image_uri = 'gs://img_platecapture/img_001.jpg' client = vision.ImageAnnotatorClient() image = vision.Image() image.source.image_uri = image_uri ## <- THE PATH ## response = client.text_detection(image=image) for text in response.text_annotations: print('=' * 30) print(text.description) vertices = ['(%s,%s)' % (v.x, v.y) for v in text.bounding_poly.vertices] print('bounds:', ",".join(vertices)) if response.error.message: raise Exception( '{}\nFor more info on error messages, check: ' 'https://cloud.google.com/apis/design/errors'.format( response.error.message)) code reference 2: from PIL import Image, ImageDraw from PIL import ImageFont import re img = Image.open(?) ## <- THE PATH ## draw = ImageDraw.Draw(img) font = ImageFont.truetype("simsun.ttc", 18) for text in response.text_annotations[1::]: ocr = text.description bound=text.bounding_poly draw.text((bound.vertices[0].x-25, bound.vertices[0].y-25),ocr,fill=(255,0,0),font=font) draw.polygon( [ bound.vertices[0].x, bound.vertices[0].y, bound.vertices[1].x, bound.vertices[1].y, bound.vertices[2].x, bound.vertices[2].y, bound.vertices[3].x, bound.vertices[3].y, ], None, 'yellow', ) texts=response.text_annotations a=str(texts[0].description.split()) b=re.sub(u"([^\u4e00-\u9fa5\u0030-u0039])","",a) b1="".join(b) regex1 = re.search(r"\D{1,2}Dist.",b) if regex1: regex1="{}".format(regex1.group(0)) .........
[ "PIL does not have built in ability to automatically open files from GCS. you will need to either\n\nDownload the file to local storage and point PIL to that file or\n\nGive PIL a BlobReader which it can use to access the data:\nfrom PIL import Image\nfrom google.cloud import storage\n\nstorage_client = storage.Client()\nbucket = storage_client.bucket('img_platecapture')\nblob = bucket.get_blob('img_001.jpg') # use get_blob to fix generation number, so we don't get corruption if blob is overwritten while we read it.\nwith blob.open() as file:\n img = Image.open(file)\n # ...\n\n\n\n" ]
[ 0 ]
[]
[]
[ "gcs", "google_cloud_storage", "path", "python", "python_imaging_library" ]
stackoverflow_0074678150_gcs_google_cloud_storage_path_python_python_imaging_library.txt
Q: Binary matrix multiplication I got a matrix A, with the following bytes as rows: 11111110 (0xfe) 11111000 (0xf8) 10000100 (0x84) 10010010 (0x92) My program reads a byte from stdin with the function sys.stdin.read(1). Suppose I receive the byte x 10101010 (0xaa). Is there a way using numpy to perform the multiplication: >>> A.dot(x) 0x06 (00000110) As A is a 4x8 matrix, composed by 4 bytes as rows, and x is an 8 bit array, I was expecting to receive the (nibble 0110) byte 0000 0110 as a result of the multiplication A * x, treating bits as elements of the matrix. If the elements of the matrix were treated as binary bytes, the result would be: >>> A = np.array([[1,1,1,1,1,1,1,0],[1,1,1,1,1,0,0,0],[1,0,0,0,0,1,0,0],[1,0,0,1,0,0,1,0]]) >>> x = np.array([1,0,1,0,1,0,1,0]) >>> A.dot(x)%2 array([0, 1, 1, 0]) A: 1. Not using dot You do not need to fully expand your matrix to do bitwise "multiplication" on it. You want to treat A as a 4x8 matrix of bits and x as an 8-element vector of bits. A row multiplication yields 1 for the bits that are on in both A and x and 0 if either bit is 0. This is equivalent to applying bitwise and (&): >>> [hex(n) for n in (A & x)] ['0xaa', '0xa8', '0x80', '0x82'] 10101010 10101000 10000000 10000000 Here is a post on counting the bits in a byte. bin(n).count("1") is probably the easiest one to use, so >>> [bin(n).count("1") % 2 for n in (A & x)] [0, 1, 1, 0] If you want just a number, you can do something like >>> int(''.join(str(bin(n).count("1") % 2) for n in (A & x)), 2) 6 2. Using dot To use dot, you can easily expand A and x into their numpy equivalents: >>> list(list(int(n) for n in list(bin(r)[2:])) for r in A) [['1', '1', '1', '1', '1', '1', '1', '0'], ['1', '1', '1', '1', '1', '0', '0', '0'], ['1', '0', '0', '0', '0', '1', '0', '0'], ['1', '0', '0', '1', '0', '0', '1', '0']] >>> list(int(n) for n in bin(x)[2:]) [1, 0, 1, 0, 1, 0, 1, 0] You can apply dot to the result: >>> np.dot(list(list(int(n) for n in list(bin(r)[2:])) for r in A), list(int(n) for n in bin(x)[2:])) % 2 array([0, 1, 1, 0]) A: It is possible to do a binary matrix multiplication using binary arithmetic consider this answer: Binary matrix multiplication bit twiddling hack
Binary matrix multiplication
I got a matrix A, with the following bytes as rows: 11111110 (0xfe) 11111000 (0xf8) 10000100 (0x84) 10010010 (0x92) My program reads a byte from stdin with the function sys.stdin.read(1). Suppose I receive the byte x 10101010 (0xaa). Is there a way using numpy to perform the multiplication: >>> A.dot(x) 0x06 (00000110) As A is a 4x8 matrix, composed by 4 bytes as rows, and x is an 8 bit array, I was expecting to receive the (nibble 0110) byte 0000 0110 as a result of the multiplication A * x, treating bits as elements of the matrix. If the elements of the matrix were treated as binary bytes, the result would be: >>> A = np.array([[1,1,1,1,1,1,1,0],[1,1,1,1,1,0,0,0],[1,0,0,0,0,1,0,0],[1,0,0,1,0,0,1,0]]) >>> x = np.array([1,0,1,0,1,0,1,0]) >>> A.dot(x)%2 array([0, 1, 1, 0])
[ "1. Not using dot\nYou do not need to fully expand your matrix to do bitwise \"multiplication\" on it. You want to treat A as a 4x8 matrix of bits and x as an 8-element vector of bits. A row multiplication yields 1 for the bits that are on in both A and x and 0 if either bit is 0. This is equivalent to applying bitwise and (&):\n>>> [hex(n) for n in (A & x)]\n['0xaa', '0xa8', '0x80', '0x82']\n\n\n10101010\n10101000\n10000000\n10000000\n\nHere is a post on counting the bits in a byte. bin(n).count(\"1\") is probably the easiest one to use, so\n>>> [bin(n).count(\"1\") % 2 for n in (A & x)]\n[0, 1, 1, 0]\n\nIf you want just a number, you can do something like\n>>> int(''.join(str(bin(n).count(\"1\") % 2) for n in (A & x)), 2)\n6\n\n2. Using dot\nTo use dot, you can easily expand A and x into their numpy equivalents:\n>>> list(list(int(n) for n in list(bin(r)[2:])) for r in A)\n[['1', '1', '1', '1', '1', '1', '1', '0'],\n ['1', '1', '1', '1', '1', '0', '0', '0'],\n ['1', '0', '0', '0', '0', '1', '0', '0'],\n ['1', '0', '0', '1', '0', '0', '1', '0']]\n>>> list(int(n) for n in bin(x)[2:])\n[1, 0, 1, 0, 1, 0, 1, 0]\n\nYou can apply dot to the result:\n>>> np.dot(list(list(int(n) for n in list(bin(r)[2:])) for r in A),\n list(int(n) for n in bin(x)[2:])) % 2\narray([0, 1, 1, 0])\n\n", "It is possible to do a binary matrix multiplication using binary arithmetic consider this answer: Binary matrix multiplication bit twiddling hack\n" ]
[ 0, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0044203732_numpy_python.txt
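The first answer's AND-plus-parity trick can be run end to end without numpy. A minimal sketch using the bytes from the question above:

```python
# Rows of the bit matrix A, one byte per row, and the input vector x,
# taken from the question.
A = [0xFE, 0xF8, 0x84, 0x92]
x = 0xAA

# Row i of A.dot(x) mod 2 is the parity of the bits shared by A[i] and x:
# AND keeps the overlapping bits, popcount mod 2 gives their parity.
bits = [bin(row & x).count("1") % 2 for row in A]

# Pack the result bits into one integer, most significant bit first.
result = 0
for b in bits:
    result = (result << 1) | b

print(bits, hex(result))  # [0, 1, 1, 0] 0x6
```

This reproduces the expected 0x06 nibble without building the expanded 4x8 array of 0s and 1s.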