Column           Type           Range / classes
CreationDate     stringlengths  19 to 19
Users Score      int64          -3 to 17
Tags             stringlengths  6 to 76
AnswerCount      int64          1 to 12
A_Id             int64          75.3M to 76.6M
Title            stringlengths  16 to 149
Q_Id             int64          75.3M to 76.2M
is_accepted      bool           2 classes
ViewCount        int64          13 to 82.6k
Question         stringlengths  114 to 20.6k
Score            float64        -0.38 to 1.2
Q_Score          int64          0 to 46
Available Count  int64          1 to 5
Answer           stringlengths  30 to 9.2k
2023-02-20 18:39:29
0
python,selenium-webdriver,google-chrome-extension,selenium-chromedriver
1
75,544,557
Running Selenium Chrome with extension - Chrome not reachable
75,513,028
false
124
I'm developing an app with selenium on an ubuntu EC2 instance. Therefore, there are no displays. To start Selenium I use xvbf. This is what I used to install xvbf and selenium: sudo apt-get -y update sudo apt-get install -y unzip xvfb libxi6 libgconf-2-4 default-jdk xdg-utils sudo snap install chromium sudo wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip sudo unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/ pip install selenium Now, I want to open Selenium within python. If I run the following code, I can get source code from google webpage: from pyvirtualdisplay import Display from selenium import webdriver import time display = Display(visible=0, size=(800, 600)) display.start() print("display started") # now Chrome will run in a virtual display. chromeOptions = webdriver.ChromeOptions() #chromeOptions.add_experimental_option('prefs',{"extensions.ui.developer_mode": True,}) # Trial for dev #chromeOptions.add_argument('--no-startup-window') # This blocks running selenium #chromeOptions.add_argument("--force-dev-mode-highlighting") # Trial for dev #chromeOptions.add_argument("--system-developer-mode") # Trial for dev chromeOptions.add_argument('--start-maximized') chromeOptions.add_argument("--remote-debugging-port=9222") # If I don't put a port I get an error about some port #chromeOptions.add_extension('my.crx') # This blocks selenium browser = webdriver.Chrome(chrome_options=chromeOptions) print("Selenium loaded") browser.get('http://www.google.com') print("Page loaded") time.sleep(3) print(browser.page_source) browser.stop_client() browser.quit() display.stop() However, as soon as I uncomment the line for the extension, I get an error: Chrome not reachable. I downloaded the extension from a github project where it stays that you must enable dev tools. Therefore I also tried adding the lines commented with "Trial for dev". These lines do not block Selenium initialization (e.g., if I uncomment them and comment the line for extension Selenium works), but neither I see that adding them have any influence on the extension working. I get the same error. What should I do? NOTE: I tested in a windows PC with a device, and without using pyvirtualdisplay the extension works and I can get google source code.
0
1
1
The code was working under Ubuntu, showing the browser window too. In the end I removed all the webdriver configuration and used Firefox instead of Chrome, since the extension was available for Firefox too. You can init Firefox with a virtual display with no problem, and after you init Firefox you can apply the extension. That worked for me!
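A minimal sketch of that Firefox route, assuming a Firefox build of the extension (the extension.xpi path is hypothetical) and that geckodriver is installed; install_addon is Selenium's Firefox-specific way of loading an extension after start-up:

from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(800, 600))
display.start()
browser = webdriver.Firefox()
# Firefox can load an add-on after the browser has started
browser.install_addon("/path/to/extension.xpi", temporary=True)
browser.get("http://www.google.com")
print(browser.page_source)
browser.quit()
display.stop()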
2023-02-20 20:40:07
2
python,regex,python-re
2
75,514,043
Expression that captures all characters up to a group of characters
75,513,933
false
85
I have several alerts coming from a DC server, which have the following pattern: alert - name risk score - severity - total The examples of these alerts would be: A member was added to a security-enabled local group 47 medium 2 A member was added to a security-enabled universal group 47 medium 1 A security-enabled global group was changed 73 high 2 A security-enabled local group was changed 73 high 2 A user account was locked out 47 medium 31 An attempt was made to reset an accounts password 73 high 14 Member added to security-enabled global group 73 high 2 PowerShell Keylogging Script 73 high 23 PowerShell Suspicious Script with Audio Capture Capabilities 47 medium 23 More Than 3 Failed Login Attempts Within 1 Hour 47 medium 6 Over 100 Connection from 10 Diff. IPs 47 medium 234 Over 100 Connections Attempted 73 high 123 Failed Logins Not Followed by Success Within 2 Hours 21 low 8 I've been using the following pattern to capture only the name of the alerts: ^(\D*) Essentially, this filters out all of the digits, but now have I received a few alerts I hadn't accounted for. These alerts contain digits in them. For example: More Than 3 Failed Login Attempts Within 1 Hour 47 medium 6 Over 100 Connection from 10 Diff. IPs 47 medium 234 Over 100 Connections Attempted 73 high 123 Failed Logins Not Followed by Success Within 2 Hours 21 low 8 So I need to be able to capture the complete name, otherwise, I'm ending up with: More than Over Over Failed Logins Not Followed by Success Within Despite my efforts, I have not been able to capture the desire pattern. This would be the desired output: A member was added to a security-enabled local group A member was added to a security-enabled universal group A security-enabled global group was changed A security-enabled local group was changed A user account was locked out An attempt was made to reset an accounts password PowerShell Keylogging Script PowerShell Suspicious Script with Audio Capture Capabilities More Than 3 Failed Login Attempts Within 1 Hour Over 100 Connection from 10 Diff. IPs Over 100 Connections Attempted Failed Logins Not Followed by Success Within 2 Hours Thanks for taking the time to help!
0.197375
1
1
The following regex should do the trick: .*\b(?= \d* .* \d*$) The (?=...) syntax is called a lookahead, and it allows us to specify the text that must follow the specified regex. Here, we're essentially looking for anything followed by the pattern: space, number, space, anything, space, number, end of line.
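A small sketch of applying that pattern in Python, assuming one alert per line as in the question; with re.match, group(0) is exactly the prefix that sits in front of the lookahead:

import re

pattern = re.compile(r".*\b(?= \d* .* \d*$)")
alerts = [
    "A member was added to a security-enabled local group 47 medium 2",
    "Over 100 Connection from 10 Diff. IPs 47 medium 234",
]
for line in alerts:
    m = pattern.match(line)
    if m:
        print(m.group(0))  # the alert name without risk score, severity and total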
2023-02-20 21:32:45
0
python,mysql,sql
2
75,515,270
How to insert multiple values into MySQL database using Python Script
75,514,342
false
63
I have the following sample values; lst = [{'title': 'Guld för Odermatt i schweizisk dubbel', 'summary': '', 'link': '``https://www.dn.se/sport/guld-for-odermatt-i-schweizisk-dubbel/``', 'topic': ['empty', 'empty', 'empty', 'empty', 'empty', 'empty', 'SamhalleKonflikter', 'empty', 'empty', 'empty']} , {'title': 'Bengt Hall blir ny tillförordnad vd på Malmö Opera', 'summary': '', 'link': '``https://www.dn.se/kultur/bengt-hall-blir-ny-tillforordnad-vd-pa-malmo-opera/``', 'topic': ['empty', 'empty', 'empty', 'empty', 'empty', 'empty', 'SamhalleKonflikter', 'empty', 'empty', 'empty']} , {'title': 'Fyra gripna för grova narkotikabrott', 'summary': '', 'link': '``https://www.dn.se/sverige/fyra-gripna-for-grova-narkotikabrott/``', 'topic': ['empty', 'empty', 'empty', 'empty', 'empty', 'empty', 'SamhalleKonflikter', 'empty', 'empty', 'empty']}] and I tired using the following script to insert them into my database; # Connect to MySQL server `cnxn = mysql.connector.connect(` `host="localhost",` `user="root",` `password="password",` `database="NewsExtractDb"` `)` # Create a cursor object cursor = cnxn.cursor() sql = "INSERT INTO database (title, summary, link, topic) VALUES (%s, %s, %s, %s)" params = [(item['title'], item['summary'], item['link'], ', '.join(item['topic'])) for item in lst] cursor.executemany(sql, params)cnxn.commit() But I am keep getting this error; File "C:\Python311\Lib\site-packages\mysql\connector\connection_cext.py", line 616, in cmd_query raise get_mysql_exception( mysql.connector.errors.ProgrammingError: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'database (title, summary, link, topic) VALUES ('Guld för Odermatt i schweizisk ' at line 1 I have tired re-write the code with a for loop statement instead of 'executemany'; sql = "INSERT INTO database (title, summary, link, topic) VALUES (%s, %s, %s, %s)" for item in lst:values = (item['title'], item['summary'], item['link'], ', '.join(item['topic']))cursor.execute(sql, values) But I still end up getting the same identical error which I cannot fix. Any ideas?
0
1
1
After "INSERT INTO" statement there should be valid table name. But in your query table name is "database" which is invalid identifier for table name. To insert data into a table, firstly that table must be created in the NewsExtractDb database. You can try this: use NewsExtractDb; create table newstable ( title nvarchar(255), summary nvarchar(255), link nvarchar(255), topic nvarchar(500) ); and change sql query as below: sql = "INSERT INTO newstable (title, summary, link, topic) VALUES (%s, %s, %s, %s)" After this your code shoud run without errors.
2023-02-21 03:38:47
1
python,tensorflow,keras,deep-learning,neural-network
1
75,516,124
Difference between sklearn.neural_network and simple Deep Learning by Keras with Sequential and Dense Nodes?
75,516,086
false
48
Given , sklearn.neural_network and simple Deep Learning by Keras with Sequential and Dese Nodes, are the mathematically same just two API's with computation optimization? Yes Keras has a Tensor Support and could also liverage GPU and Complex models like CNN and RNN are permissible. However, are they mathematically same and we will yield same results given same hyper parameter , random state, input data etc ? Else apart from computational efficiency what maker Keras a better choice ?
0.197375
1
1
I don't think they will give you exactly the same results, as the internal implementations of the same operations differ even between PyTorch and TensorFlow. What makes Keras a better option is the ecosystem. You have data loaders that can load complex data in batches for you in the desired format, TensorBoard where you can watch the model training, and preprocessing functions, especially for data augmentation (TF/Keras now even has data augmentation layers; in PyTorch, Torchvision provides this in Transforms). Then you have the flexibility: you can define what types of layers you want and in what order, what the initializer of each layer should be, whether you want batch norm or a dropout layer between layers, what the activation of each hidden layer should be (relu in one layer and tanh in another), how your forward pass should work, and so on. Then you have the callbacks to customize the training experience.
2023-02-21 10:46:15
1
python,pandas,dataframe,optimization,dask
2
75,528,276
Loading .txt files fast into a pandas df
75,519,382
false
55
This is the code i am using at the moment it is working fine and does exactly what i want. df_list = [] for file_name in reversed (os.listdir(path)): df_small = pd.read_csv(os.path.join(path, file_name), delimiter='\t', decimal='.', skiprows=6) df_small = df_small.dropna(subset=[df_small.columns[6]]) df_list.append(df_small) df= pd.concat(df_list, ignore_index=True) print(df) I am looking for ways to make it faster. At the moment i am loading around 5000 files in the df, the resulting df has around 140 000 rows for this process i need around 20 seconds. (The files all of have the same layout and around the same size ca.7 kb) So are there any ways to make it even faster? Would it make sense to switch to something like dask to read the data even faster or is that unnessary
0.099668
1
1
You are iterating through the files; if the read or load is what's taking up the majority of the time, this iterative approach is your bottleneck. You can use a dask bag to distribute the file paths and load them across however many cores you have available.
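A rough sketch of that idea with the question's per-file logic; dask.bag ships with the dask package, and path is the same directory variable used in the question:

import os
import pandas as pd
import dask.bag as db

def load_one(file_name):
    small = pd.read_csv(os.path.join(path, file_name), delimiter='\t', decimal='.', skiprows=6)
    return small.dropna(subset=[small.columns[6]])

files = list(reversed(os.listdir(path)))
# the bag spreads the file list over the available local workers
dfs = db.from_sequence(files).map(load_one).compute()
df = pd.concat(dfs, ignore_index=True)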
2023-02-21 11:12:48
0
python,dataframe,multithreading,fastapi,python-polars
2
75,940,032
Polars with FastAPI and docker
75,519,686
false
449
I have been exploring Polars for my web application. Its been impressive so far, until I hit this issue that has stalled my use of this awesome library. Usecase: I read a parquet file into Polars dataframe, use this pl dataframe to serve results for a get request on FastAPI. @fastApi.get("/polars-test") async def polars_test(): polars_df = pl.read_parquet(f"/data/all_area_keys.parquet") df = polars_df.limit(3) return df.to_dicts() polars= 0.16.2 pyarrow=9.0.0 fastapi=0.92.0 BaseDockerImage = tiangolo/uvicorn-gunicorn-fastapi:python3.11 When I package it up into docker image and run the FastAPI app on gunicorn, this get path does not respond. Using the /docs, hitting this end point will just wait for several minutes and the worker terminates, without any errors logged I am starting to think Polars multithread is not playing well with FastAPI'S concurrency. But I an unable to find related documents to get an understanding. Please help, would absolutely hate to abandon Polars. Troubleshooting done so far: The get request works perfectly when I test it locally. log on to the running docker container and run the above pl commands - it works Just tried to print the schema of the dataframe - it works. So the dataframe is created and metadata available. I get this issue only when I run filter or Any transform on the polars dataframe Created a lazy frame and tried to collect, but no luck Remove async from the method, no luck Changed python version from 3.8 to 3.11, no luck Spcifying the platform to linus/amd64 while running the docker, no luck
0
1
1
Having recently switched from pandas to polars for a Dash plotly app (flask also), I can definitely confirm the two don't play well together. The speed increase on the DF operations themselves is impressive (all cores engaged, I'm seeing 10-12x increases in some cases), but once I started replacing pandas code with polars in a computations.py that has a few functions returning DFs, basic operations like reading a file or joining DFs became extremely hard to debug or even execute. I've spent hours today trying to figure out why a bunch of files weren't being read. I eventually set the parallel argument to 'none' in .read_parquet and that particular block now works. It now stops at a .join. I have dozens of operations that I managed to parallelize with polars, but it seems a nightmare to add this to a web app. Doing a test run with the production-recommended settings for Dash (gunicorn, redis for cache and celery, with different settings for workers and such), I got a lot of "leaked semaphore objects to clean up" errors, and a perfectly working app now has a lot of choke points. Not sure where to go from here; I'll probably build a FastAPI endpoint that returns JSONs and read those responses into pandas in Dash for plotting. Anybody else trying to integrate polars into Dash?
2023-02-21 16:45:13
-2
python-3.x,class
2
75,523,538
How to get an updated version of the function in class when changing input?
75,523,461
false
31
I have this fitness function in this class, I have changed the attributes of an instance of this class, however when calling the function I want the returned value to be updated with the modified input, how can I achieve this? (I wanted to get the output of 12) class ready: def __init__(self,x): self.x=x self.fitness=fit(self.x) def fit(z): return z p=ready(10) p.x=12 print(p.fitness)
-0.197375
1
1
You already updated your x value, but you don't print it; you are printing p.fitness, which was computed from the old x in __init__. When you print p.x you will see you changed the value of x. Alternatively, update the value of p.fitness as well.
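A minimal sketch of that last suggestion, with the names from the question: recompute the fitness after changing x.

p = ready(10)
p.x = 12
p.fitness = fit(p.x)  # recompute so fitness follows the new x
print(p.fitness)      # 12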
2023-02-21 18:00:26
3
python,object-detection,image-augmentation
2
75,682,652
Getting AttributeError: 'FigureCanvasAgg' object has no attribute 'set_window_title' using IMGAUG python package
75,524,228
false
1,888
I have found this simple code below on some internet page, but I'm getting an error trying to execute it on my laptop. It should display an image with some bounding boxes on it. The error occurs not in my code, but in ingaug.py, so inside of the package file. Is it some bug in the imgaug package? I'm using MacOS, installed imgaug with condo from condo-forge channel. Python code: import imageio import imgaug as ia from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage %matplotlib inline ia.seed(1) image = imageio.imread("https://upload.wikimedia.org/wikipedia/commons/8/8e/Yellow-headed_caracara_%28Milvago_chimachima%29_on_capybara_%28Hydrochoeris_hydrochaeris%29.JPG") image = ia.imresize_single_image(image, (298, 447)) bbs = BoundingBoxesOnImage([ BoundingBox(x1=0.2*447, x2=0.85*447, y1=0.3*298, y2=0.95*298), BoundingBox(x1=0.4*447, x2=0.65*447, y1=0.1*298, y2=0.4*298) ], shape=image.shape) ia.imshow(bbs.draw_on_image(image, size=2)) Error massage: AttributeError Traceback (most recent call last) Cell In[1], line 15 8 image = ia.imresize_single_image(image, (298, 447)) 10 bbs = BoundingBoxesOnImage([ 11 BoundingBox(x1=0.2*447, x2=0.85*447, y1=0.3*298, y2=0.95*298), 12 BoundingBox(x1=0.4*447, x2=0.65*447, y1=0.1*298, y2=0.4*298) 13 ], shape=image.shape) ---> 15 ia.imshow(bbs.draw_on_image(image, size=2)) File ~/miniconda3/envs/imgaug/lib/python3.11/site-packages/imgaug/imgaug.py:2120, in imshow(image, backend) 2117 w = max(w, 6) 2119 fig, ax = plt.subplots(figsize=(w, h), dpi=dpi) -> 2120 fig.canvas.set_window_title("imgaug.imshow(%s)" % (image.shape,)) 2121 # cmap=gray is automatically only activate for grayscale images 2122 ax.imshow(image, cmap="gray") AttributeError: 'FigureCanvasAgg' object has no attribute 'set_window_title'
0.291313
1
2
Open "imgaug.py" file, change line 2120 to: fig.canvas.manager.set_window_title("imgaug.imshow(%s)" % (image.shape,))
2023-02-21 18:00:26
1
python,object-detection,image-augmentation
2
76,241,774
Getting AttributeError: 'FigureCanvasAgg' object has no attribute 'set_window_title' using IMGAUG python package
75,524,228
false
1,888
I have found this simple code below on some internet page, but I'm getting an error trying to execute it on my laptop. It should display an image with some bounding boxes on it. The error occurs not in my code, but in ingaug.py, so inside of the package file. Is it some bug in the imgaug package? I'm using MacOS, installed imgaug with condo from condo-forge channel. Python code: import imageio import imgaug as ia from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage %matplotlib inline ia.seed(1) image = imageio.imread("https://upload.wikimedia.org/wikipedia/commons/8/8e/Yellow-headed_caracara_%28Milvago_chimachima%29_on_capybara_%28Hydrochoeris_hydrochaeris%29.JPG") image = ia.imresize_single_image(image, (298, 447)) bbs = BoundingBoxesOnImage([ BoundingBox(x1=0.2*447, x2=0.85*447, y1=0.3*298, y2=0.95*298), BoundingBox(x1=0.4*447, x2=0.65*447, y1=0.1*298, y2=0.4*298) ], shape=image.shape) ia.imshow(bbs.draw_on_image(image, size=2)) Error massage: AttributeError Traceback (most recent call last) Cell In[1], line 15 8 image = ia.imresize_single_image(image, (298, 447)) 10 bbs = BoundingBoxesOnImage([ 11 BoundingBox(x1=0.2*447, x2=0.85*447, y1=0.3*298, y2=0.95*298), 12 BoundingBox(x1=0.4*447, x2=0.65*447, y1=0.1*298, y2=0.4*298) 13 ], shape=image.shape) ---> 15 ia.imshow(bbs.draw_on_image(image, size=2)) File ~/miniconda3/envs/imgaug/lib/python3.11/site-packages/imgaug/imgaug.py:2120, in imshow(image, backend) 2117 w = max(w, 6) 2119 fig, ax = plt.subplots(figsize=(w, h), dpi=dpi) -> 2120 fig.canvas.set_window_title("imgaug.imshow(%s)" % (image.shape,)) 2121 # cmap=gray is automatically only activate for grayscale images 2122 ax.imshow(image, cmap="gray") AttributeError: 'FigureCanvasAgg' object has no attribute 'set_window_title'
0.099668
1
2
Changing matplotlib version to 3.5 works.
2023-02-21 19:26:01
5
python,jupyter,virtualenv,pyenv
2
75,868,154
msno.matrix() shows an error when I use any venv using pyenv
75,525,029
false
1,702
I tried many times installing several virtual environments using pyenv, but the system shows a error in missingno library. This is : msno.matrix(df) `ValueError Traceback (most recent call last) Cell In[17], line 1 ----> 1 msno.matrix(df) File c:\Users\sarud\.pyenv\venvs\ETLs\lib\site-packages\missingno\missingno.py:72, in matrix(df, filter, n, p, sort, figsize, width_ratios, color, fontsize, labels, sparkline, inline, freq, ax) 70 # Remove extraneous default visual elements. 71 ax0.set_aspect('auto') ---> 72 ax0.grid(b=False) 73 ax0.xaxis.tick_top() 74 ax0.xaxis.set_ticks_position('none') File c:\Users\sarud\.pyenv\venvs\ETLs\lib\site-packages\matplotlib\axes\_base.py:3196, in _AxesBase.grid(self, visible, which, axis, **kwargs) 3194 _api.check_in_list(['x', 'y', 'both'], axis=axis) 3195 if axis in ['x', 'both']: -> 3196 self.xaxis.grid(visible, which=which, **kwargs) 3197 if axis in ['y', 'both']: 3198 self.yaxis.grid(visible, which=which, **kwargs) File c:\Users\sarud\.pyenv\venvs\ETLs\lib\site-packages\matplotlib\axis.py:1655, in Axis.grid(self, visible, which, **kwargs) 1652 if which in ['major', 'both']: 1653 gridkw['gridOn'] = (not self._major_tick_kw['gridOn'] 1654 if visible is None else visible) -> 1655 self.set_tick_params(which='major', **gridkw) 1656 self.stale = True ... 1073 % (key, allowed_keys)) 1074 kwtrans.update(kw_) 1075 return kwtrans ValueError: keyword grid_b is not recognized; valid keywords are ['size', 'width', 'color', 'tickdir', 'pad', 'labelsize', 'labelcolor', 'zorder', 'gridOn', 'tick1On', 'tick2On', 'label1On', 'label2On', 'length', 'direction', 'left', 'bottom', 'right', 'top', 'labelleft', 'labelbottom', 'labelright', 'labeltop', 'labelrotation', 'grid_agg_filter', 'grid_alpha', 'grid_animated', 'grid_antialiased', 'grid_clip_box', 'grid_clip_on', 'grid_clip_path', 'grid_color', 'grid_dash_capstyle', 'grid_dash_joinstyle', 'grid_dashes', 'grid_data', 'grid_drawstyle', 'grid_figure', 'grid_fillstyle', 'grid_gapcolor', 'grid_gid', 'grid_in_layout', 'grid_label', 'grid_linestyle', 'grid_linewidth', 'grid_marker', 'grid_markeredgecolor', 'grid_markeredgewidth', 'grid_markerfacecolor', 'grid_markerfacecoloralt', 'grid_markersize', 'grid_markevery', 'grid_mouseover', 'grid_path_effects', 'grid_picker', 'grid_pickradius', 'grid_rasterized', 'grid_sketch_params', 'grid_snap', 'grid_solid_capstyle', 'grid_solid_joinstyle', 'grid_transform', 'grid_url', 'grid_visible', 'grid_xdata', 'grid_ydata', 'grid_zorder', 'grid_aa', 'grid_c', 'grid_ds', 'grid_ls', 'grid_lw', 'grid_mec', 'grid_mew', 'grid_mfc', 'grid_mfcalt', 'grid_ms']` I don't know the error, but I installed a similar virtualenv using conda doesn't show that error. I installed different python versions using pyenv (3.11.2, 3.7.6, 3.9.13, 3.9.5), but in each one shows the same error when I install missingno. I show the image on error, I use VS code as IDE. At beginning, I thought it was the VS code version, but installing libraries using conda the error doesn't appeear.
0.462117
3
1
I'm having the same issue. I agree with Ziyuan; it seems a recent update of matplotlib changed the argument b to visible. The latest version of missingno (0.5.2) on pip has updated the argument name passed to matplotlib, but anaconda only provides version 0.4.2, hence the issue. You can still plot the graph even with this error, but the column labels disappear. Solution: go into missingno.py, search for grid(b=False) and update it to grid(visible=False); there should be 3 occurrences. Once you have done that, return to your own code, re-import the missingno package and it should work.
2023-02-21 23:28:19
0
python,macos,cpu,yolo
1
75,748,615
Unable to Use MPS to Run Pre-trained Object Detection Model on GPU
75,526,898
false
222
I am using MAC and trying to run Ultralytics YOLOv8 pre-trained model to detect objects in my project. However, despite trying to use MPS, I am still seeing the CPU being used in torch even after running the Python code. Specifically, the output I see is: "Ultralytics YOLOv8.0.43 🚀 Python-3.9.16 torch-1.13.1 CPU". I wanted to know if has support for MPS in YOLOv8, and how can use it?
0
1
1
Try adding "--device mps" as a parameter when running the command line
2023-02-22 00:31:00
1
python,tkinter,tkinter-entry
2
75,527,224
Configure Entry Text color for only 1 character in Tkinter
75,527,209
true
140
I am making a project in which I created a tkinter window, and in it there is an Entry widget. I need to change the color of the following Entry widget. The following is what I did. self.input_entry.config(fg="red") input_entry is the Entry widget. This changes the color of the entire string. Now, I want to only change the color of a specific character in the string. How do I accomplish this? I tried getting the specific index of the input entry, but obviously, it didn't work. Do I need to get the text with self.input_entry.get() and then modify it? I do not know, can someone inform me of a possible method of doing this?
1.2
1
1
You cannot change the color of individual characters in an Entry widget. You can use a one-line Text widget instead, which allows you to apply tags to a range of characters, and then apply various attributes (such as color) to those tags.
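A minimal sketch of that approach, independent of the asker's class: a one-line Text widget with a tag that colors only the character at index 2.

import tkinter as tk

root = tk.Tk()
entry_like = tk.Text(root, height=1, width=30)
entry_like.pack()
entry_like.insert("1.0", "hello")
entry_like.tag_configure("red", foreground="red")
entry_like.tag_add("red", "1.2", "1.3")  # line 1, characters 2 to 3
root.mainloop()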
2023-02-22 06:59:42
0
python,mysql,sql,sqlalchemy
1
75,529,285
pd.read_sql and pd.read_sql_query hang upon execution, how to troubleshoot this?
75,529,204
false
253
I am currently trying to write a pandas dataframe to a table in MySQL database on my laptop, so this is local. The dataframe is filled with some 2M rows from a read_sql query with engine connection parameter stream_results=True, see my code below. Problem is that when I call pd.to_sql() and try to write that chunk of a dataframe to the local database table, the execution just hangs. Why does it take so long? How would I go about troubleshooting this or speeding it up? I have no idea what is going on or how to troubleshoot this? Any suggestions? Here is my code: ` try: # Connect to local database database_uri = 'mysql+pymysql://root:1234@localhost:3306' localEngine = sqlalchemy.create_engine(database_uri) with localEngine.connect().execution_options( stream_results=True) as conn_local: result = conn_local.execute(text("USE ConsumerExpenditures10;")) result = conn_local.execute(text(""" CREATE TABLE IF NOT EXISTS LOADING_TABLE ( EXPENDITURE_ID varchar(11) PRIMARY KEY NOT NULL, HOUSEHOLD_ID VARCHAR(10) NOT NULL, YEAR YEAR NOT NULL, MONTH INT(11) NOT NULL, PRODUCT_CODE VARCHAR(155) NOT NULL, COST DOUBLE NOT NULL, GIFT INT NOT NULL, IS_TRAINING INT(255) NOT NULL, MARITAL VARCHAR(25), SEX VARCHAR(25), AGE INT, WORK_STATUS VARCHAR(25), INCOME_RANK double, INCOME_RANK_1 double, INCOME_RANK_2 double, INCOME_RANK_3 double, INCOME_RANK_4 double, INCOME_RANK_5 double, INCOME_RANK_MEAN double, FEDERAL_FUNDS_TARGET_RATE double, FEDERAL_FUNDS_UPPER_TARGET double, FEDERAL_FUNDS_LOWER_TARGET double, EFFECTIVE_FEDERAL_FUNDS_RATE double, REAL_GDP double, UNEMPLOYMENT_RATE double, INFLATION_RATE double, CPI double) """)) # TRANSFORMATION #9 - Create comprehensive table of GDP, CPI and Consumer Expenditures Data # with each row being one Consumer Expenditure Purchase query = text("""select e.expenditure_id, e.household_id, e.year, e.month, e.product_code, e.cost, e.gift, e.is_training, hm.marital, hm.sex, hm.age, hm.work_status, h.income_rank, h.income_rank_1, h.income_rank_2, h.income_rank_3, h.income_rank_4, h.income_rank_5, h.income_rank_mean, g.FEDERAL_FUNDS_TARGET_RATE, g.FEDERAL_FUNDS_UPPER_TARGET, g.FEDERAL_FUNDS_LOWER_TARGET, g.EFFECTIVE_FEDERAL_FUNDS_RATE, g.REAL_GDP, g.UNEMPLOYMENT_RATE, g.INFLATION_RATE, c.CPI from expenditures e inner join household_members hm on hm.household_id = e.HOUSEHOLD_ID inner join households h on h.household_id = hm.HOUSEHOLD_ID inner join gdp g on g.gdp_year = e.`YEAR` inner join cpi c on c.CPI_YEAR = g.gdp_year""") # https://pythonspeed.com/articles/pandas-sql-chunking/ # https://stackoverflow.com/questions/69711599/pandas-read-sql-from-ms-sql-gets-stuck-for-queries-with-275-chars-in-linux # Takes too long to execute this query: df_final_table = pd.read_sql(query, conn_local) # So we have to do it in chunks to load into a pandas dataframe and then write that to the loading_table for chunk_dataframe in pd.read_sql_query(query, conn_local, chunksize=10): print( f"Got dataframe w/{len(chunk_dataframe)} rows" ) # write this dataframe chunk into the LOADING_TABLE result = chunk_dataframe.to_sql(name='LOADING_TABLE', con=conn_local, if_exists='append', index=False)>>>>>> execution hangs right here in PyCharm debugger! conn_local.commit()` Do you have any suggestions on what I can do trouble shoot this?
0
1
1
Are these queries running as part of a larger execution process? Maybe you are hitting a deadlock somewhere. Also, if an error happened earlier, Python may have exited before closing its connections fully, and now your connections to the DB are used up. If all of this fails, try running these queries in MySQL manually and compare the results.
2023-02-22 09:08:26
5
python,pandas,dataframe,python-polars
2
75,534,816
Polars vs. Pandas: size and speed difference
75,530,375
true
1,068
I have a parquet file (~1.5 GB) which I want to process with polars. The resulting dataframe has 250k rows and 10 columns. One column has large chunks of texts in it. I have just started using polars, because I heard many good things about it. One of which is that it is significantly faster than pandas. Here is my issue / question: The preprocessing of the dataframe is rather slow, so I started comparing to pandas. Am I doing something wrong or is polars for this particular use case just slower? If so: is there a way to speed this up? Here is my code in polars import polars as pl df = (pl.scan_parquet("folder/myfile.parquet") .filter((pl.col("type")=="Urteil") | (pl.col("type")=="Beschluss")) .collect() ) df.head() The entire code takes roughly 1 minute whereas just the filtering part takes around 13 seconds. My code in pandas: import pandas as pd df = (pd.read_parquet("folder/myfile.parquet") .query("type == 'Urteil' | type == 'Beschluss'") ) df.head() The entire code also takes roughly 1 minute whereas just the querying part takes <1 second. The dataframe has the following types for the 10 columns: i64 str struct[7] str (for all remaining) As mentioned: a column "content" stores large texts (1 to 20 pages of text) which I need to preprocess and the store differently I guess. EDIT: removed the size part of the original post as the comparison was not like for like and does not appear to be related to my question.
1.2
3
1
As mentioned: a column "content" stores large texts (1 to 20 pages of text) which I need to preprocess and the store differently I guess. This is where polars must do much more work than pandas. Polars uses arrow memory format for string data. When you filter your DataFrame all the columns are recreated for where the mask evaluates to true. That means that all the text bytes in the string columns need to be moved around. Whereas for pandas they can just move the pointers to the python objects around, e.g. a few bytes. This only hurts if you have really large values as strings. E.g. when you are storing whole webpages for instance. You can speed this up by converting to categoricals.
2023-02-22 11:29:29
0
python,pandas,calculation
3
75,532,380
Python script to calculate two rows together from the same column, based on a match between the same rows in two different columns
75,532,115
false
61
I want to create a Python script to calculate a new column, based on subtracting two values from same column in two different rows. The two rows used for the calculation should be defined by being a match in values of two other columns. So, to specify and give an example: Id Tag Amount 1 2 3.75 2 xxx 15 3 4 4 4 xxx 14 5 6 5 6 xxx 15.5 The above table is an example of what I have right now. The below table is including the column that I would like to create. For me, it does not matter if 'NaN or 0' is in the specified row or the row afterwards: Id Tag Amount NewColumn 1 2 3.75 NaN or 0 or simply the value from Amount 2 xxx 15 11.25 3 4 4 NaN or 0 or simply the value from Amount 4 xxx 14 10 5 6 5 NaN or 0 or simply the value from Amount 6 xxx 15.5 10.5 So here, the value of NewColumn in the second row is equal to 11.25, because the following conditions are met: The value of the column 'Id' is equal to the value in the column 'Tag'. Therefore, the NewColumn should take the value of the column 'Amount' in row the bigger number and subtract it by the value in the row with the smaller number. This means that the calculation is 15-3.75 = 11.25. To give some context, the value in 'Amount' in row 2 is with VAT included. The value in the row before of the same column is the VAT by itself. The Id is the Transaction ID, and the Tag column are used to link together the VAT transaction the correct corresponding full transaction. I have tried to use ChatGPT to solve this issue, but can not seem to fully solve it. Here is what I have so far: import pandas as pd # Load the dataset into a pandas dataframe df = pd.read_csv('path/to/dataset.csv') # Define the name of the column to fetch data from other_column_name = 'other_column_name' # Iterate over each row in the dataframe for index, row in df.iterrows(): # Fetch data from another row and column based on an exact match search_value = row['column_name'] matching_row = df.loc[df['column_name'] == search_value] if len(matching_row) == 1: other_column_data = matching_row[other_column_name].values[0] else: other_column_data = None # Use the fetched data to calculate a new column if other_column_data is not None: new_column_data = row['existing_column'] + other_column_data else: new_column_data = None # Add the new column to the dataframe if new_column_data is not None: df.at[index, 'new_column'] = new_column_data # Save the updated dataset to a new CSV file df.to_csv('path/to/new_dataset.csv', index=False) Which simply outputs a combination of the values in Tag and Id.
0
1
1
Since I am unable to edit my question, I would like to contribute this update to make my second table readable:
Id  Tag  Amount  NewColumn
1   2    3.75    NaN or 0 or simply the value from Amount
2   xxx  15      11.25
3   4    4       NaN or 0 or simply the value from Amount
4   xxx  14      10
5   6    5       NaN or 0 or simply the value from Amount
6   xxx  15.5    10.5
I should also add that I cannot simply apply a single VAT percentage rate to these transactions, as the transactions differ in their VAT. Also, a "perfect" relationship where the corresponding rows come right after each other is not to be expected.
2023-02-22 11:40:12
0
python,pandas,dataframe,operation
2
75,532,275
How to multiply two columns of a Dataframe?
75,532,218
false
93
Good afternoon, I am trying to multiply two columns of a dataframe (C). And add the results to a new column. I have tried different methods but none work. The most common error I encounter is: TypeError: can't multiply sequence by non-int of type 'float' The columns that a i want to multiply are: H04_PEDRO_MARIN and SS(mg/l). And also i want to create a new column with the results. C: H04_PEDRO_MARIN SS(mg/l) multiplication Fecha 26/07/11 14:00 0.000000 80.4 0.000000 26/07/11 15:00 0.000000 76.1 0.000000 26/07/11 16:00 0.000000 0 0.000000 26/07/11 17:00 0.000000 0 0.000000 26/07/11 18:00 0.000000 0 0.000000 ... ... ... ... 12/04/12 10:00 9430.166667 61.18 9430.166667 12/04/12 11:00 9430.166667 60.05 9430.166667 12/04/12 12:00 9430.166667 59.43 9430.166667 12/04/12 14:00 9430.166667 56.98 9430.166667 [11568 rows x 3 columns] I have tried: C['multiplicaction'] = C['H04_PEDRO_MARIN'][1])*(C['SS(mg/l)']) And cols = ['H04_PEDRO_MARIN','SS(mg/l)'] C['multiplication'] = C[cols].prod(axis=1) And don´t work Even i have tried to separate both columns in different dataframes and multiply and don't work again. Thanks for any solution.
0
1
1
Check your column dtypes with C.info(). To solve the "TypeError: can't multiply sequence by non-int of type 'float'" error, convert the string column into floating-point numbers before multiplying it with a float column. If you convert the string "3" to a float before multiplying it with the floating-point number 3.3, there will be no error.
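A sketch of that conversion with the question's column names; pd.to_numeric parses the strings and turns anything unparsable into NaN:

import pandas as pd

C['SS(mg/l)'] = pd.to_numeric(C['SS(mg/l)'], errors='coerce')
C['multiplication'] = C['H04_PEDRO_MARIN'] * C['SS(mg/l)']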
2023-02-22 15:16:30
1
javascript,python,jupyter-notebook,underscore.js,jupyter-widget
1
76,117,161
Jupyter Notebook - underscore.js doesn't seem to be accessible anymore
75,534,737
true
37
I'm using a custom widget in Jupyter. After upgrading to a new machine it stopped functioning. Checking the javascript console in the browser window running the notebook I see the error ReferenceError: _ is not defined. Indeed, running the following in a Jupyter cell: %%js alert(_) I get the same error. Doing the exact same command on my other machine functions correctly (it shows the definition of _ as in underscore.js). The html source of the Jupyter Notebook still shows underscore.js as being specified in the require.config. Note that simple included widgets still function as expected (so it is not an issue with initializing the widget system). I haven't found anything in the changelogs of ipywidgets or jupyter regarding changes to the use of underscore.js. I know that the widget api has changed recently in ipywidgets8.0, which is why I am still using version 7.7.3. Does anyone know if this is an expected change of behaviour in how widgets work? Any other ideas of why underscore does not seem to be initialized properly?
1.2
1
1
Underscore.js is available in the 6.4.x series, e.g. pip install notebook==6.4.12. It broke in your case likely because js assets were delegated to NbClassic starting with 6.5, which I'm guessing doesn't load underscore.js (or doesn't make it available to the notebook frontend). See github.com/jupyter/notebook/pull/6474.
2023-02-22 18:25:26
0
python,parsing,selenium-webdriver,web-scraping,beautifulsoup
1
75,579,069
BeautifulSoup doesn't pick up the new HTML page in a loop
75,536,764
true
38
I ran into a problem that when parsing, the soup checks the same page every time. I use it in conjunction with selenium. Selenium opens a new link without problems, but the soup only checks the very first one. The saddest thing is that I used similar constructs in other code with another site and it works as it should. from bs4 import BeautifulSoup from selenium import webdriver keys_list = [] def start_browser(link): profile = 'C:\\Users\\Crazy_MoT\\AppData\\Local\\Google\\Chrome\\User Data\\Default' options = webdriver.ChromeOptions() try: options.add_argument(f"user-data-dir={profile}") browser = webdriver.Chrome(options=options) except: print("Connect to profile... Error\n Opening new profile") browser = webdriver.Chrome() browser.quit() #browser.get(link) browser.get(link) html = browser.page_source soup = BeautifulSoup(html, 'html.parser') author = soup.find("a", attrs={'data-qa': 'FileViewAuthorBox'}, href=True) print(author["href"]) keywords = soup.find_all("span", class_="_oX66p") for keys in keywords: keys_list.append(keys.text) print(keys_list) def start(links): for link in links: start_browser(link) links = ["https://ru.depositphotos.com/26182475/stock-photo-happy-birthday.html", "https://ru.depositphotos.com/39273619/stock-photo-label-with-happy-birthday.html"] start(links) I want to collect information from different pages of the site. I get information only from the very first page and then it repeats
1.2
1
1
The code works as it should. When checking on other machines, the error did not appear.
2023-02-22 22:36:48
0
python,c,cython
1
75,567,544
Cython: Aliasing function argument name
75,538,840
true
50
I have a C library that I am writing a Python extension for using Cython that includes this function (declared in the library's header file): #include <stdio.h> #include "zlib.h" int deflate_index_build(FILE *in, off_t span, struct deflate_index **built); I am attempting to create a Cython extension for this function using: from posix.types cimport off_t from libc.stdio cimport FILE cdef extern from "header.h": int deflate_index_build(FILE *in, off_t span, deflate_index **built) However, the use of in as the name for the first argument of the function causes a syntax error on compilation because in is a Python keyword. I don't want to change the name of this argument because it would have a large impact on the C library. Is there a way to alias the argument name in Cython to avoid this error?
1.2
1
1
From @user2357112 in the comment above: C argument names aren't part of the function signature. I don't think you actually need to put the same argument names in your Cython code as you declared in your .h file. (You don't even need to match argument names between your .h file and the actual function definition - in fact, you don't even need argument names in your .h file at all.)
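So a declaration along these lines should compile; in_file is only a local alias in the Cython declaration and does not have to match the name in the C header (a sketch reusing the question's imports, with the struct left opaque):

from posix.types cimport off_t
from libc.stdio cimport FILE

cdef extern from "header.h":
    struct deflate_index:
        pass  # opaque: members are not needed on the Cython side
    int deflate_index_build(FILE *in_file, off_t span, deflate_index **built)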
2023-02-23 03:08:30
0
python,sockets,google-colaboratory,tor,stem
2
75,550,065
Python : Stem TOR Controller: SocketError: Socket error: 0x01: General SOCKS server failure
75,540,217
false
158
Hello im having hard time to use the tor stem module, it causes error on the with Controller.from_port(port=9050) as controller:I tried to check if my i am running on port 9050 using netstats, the service on the tor is already enabled tcp 0 0 127.0.0.1:9050 0.0.0.0:* LISTEN Here's my setup import requests import socks import socket from stem import Signal from stem.control import Controller socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9050) socket.socket = socks.socksocket session = requests.session() def renew_tor_ip(): with Controller.from_port(port=9050) as controller: controller.authenticate(password='malakititeko') controller.signal(Signal.NEWNYM) renew_tor_ip() The traceback error: SOCKS5Error Traceback (most recent call last) /usr/local/lib/python3.8/dist-packages/socks.py in connect(self, dest_pair, catch_errors) 808 negotiate = self._proxy_negotiators[proxy_type] --> 809 negotiate(self, dest_addr, dest_port) 810 except socket.error as error: 10 frames SOCKS5Error: 0x01: General SOCKS server failure During handling of the above exception, another exception occurred: GeneralProxyError Traceback (most recent call last) GeneralProxyError: Socket error: 0x01: General SOCKS server failure During handling of the above exception, another exception occurred: SocketError Traceback (most recent call last) /usr/local/lib/python3.8/dist-packages/stem/socket.py in _make_socket(self) 536 return control_socket 537 except socket.error as exc: --> 538 raise stem.SocketError(exc) 539 540 SocketError: Socket error: 0x01: General SOCKS server failure
0
1
1
You shouldn't try to connect to the Tor controller through the Tor proxy. You should connect to it directly.
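A rough sketch of that separation; this assumes Tor's ControlPort is enabled (stem talks to the control port, typically 9051, not the 9050 SOCKS port) and that requests has SOCKS support installed (requests[socks]), so the proxy stays scoped to the requests session instead of patching socket globally:

import requests
from stem import Signal
from stem.control import Controller

session = requests.session()
session.proxies = {
    'http': 'socks5h://127.0.0.1:9050',
    'https': 'socks5h://127.0.0.1:9050',
}

def renew_tor_ip():
    # a plain, non-proxied socket straight to the controller
    with Controller.from_port(port=9051) as controller:
        controller.authenticate(password='malakititeko')
        controller.signal(Signal.NEWNYM)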
2023-02-23 08:44:12
1
python,airflow,jinja2
1
75,546,267
Airflow: how to use dag_run argument value as trigger_rule in PythonOperator
75,542,398
true
53
I am trying to pass the trigger_rule of a PythonOperator via the config when triggering my DAG. For example: {"trigger_rule": "all_done"}. In other words, I want to be able to at runtime choose which trigger_rule to use by using the config when triggering the DAG. However, it seems that the trigger_rule field in the PythonOperator is not a templated field. I.e. I can not implement it as follows: PythonOperator( task_id='test', python_callable=my_func, trigger_rule="{{ dag_run.conf.get('trigger_rule', 'all_success') }}" ) Any clues on how to tackle this problem?
1.2
1
1
This is not possible as trigger_rule is a parameter needed by the Scheduler to determine if a task should run; meaning it needs to be present prior to task runtime. Template fields are evaluated/rendered during task execution. In essence, if a parameter is needed for the Scheduler it cannot be templated.
2023-02-23 09:08:01
0
python,visual-studio-code,vscode-extensions,prettier
1
76,475,787
prettier throws error `Failed to resolve a parser`
75,542,637
false
1,314
Prettier throws error "failed to resolve a parser". Prettier is selected in Workspace, User and Python > Workspace, so I'm out of ideas why the error is thrown... ["INFO" - 08:57:18] File Info: { "ignored": false, "inferredParser": null } ["WARN" - 08:57:18] Parser not inferred, trying VS Code language. ["ERROR" - 08:57:18] Failed to resolve a parser, skipping file. If you registered a custom file extension, be sure to configure the parser.
0
3
1
Did you by chance try to invoke the "Format Document (Forced)" command (prettier.forceFormatDocument) to trigger the formatting? (I would have asked you this with a comment first if only I had the required reputation to do so.) I have seen the exact same error when I tried to invoke the command on a .cs file today. Anyways, here's my... attempted supportive answer Since this command came from the "Prettier - Code formatter" extension (esbenp.prettier-vscode), it will only use the extension itself for formatting. Invoking it on any unsupported code only produces the log output you posted. (python and csharp are two unsupported ones) It would seem that you already have some other extension for formatting python. If that extension is working, the regular command "Format Document" (alt-shift-F / cmd-shift-F) should be enough. But even if it doesn't work, it won't produce the same log output you posted. Chances are, our respective formatter extensions actually worked, but it did not do the specific thing(s) we had in mind. For my .cs files, I'm still looking at why my long IF statements are not broken down into multiple lines or why it does not format my indentation on the 2nd and 3rd lines of my multi-line IF statements. "Isn't this what every prettier should do?" I probably got this idea from my other projects (i.e. Typescript) which have every formatting needs fulfilled. next step You should probably ignore the log from prettier and focus on the log from autopep8 or the lacking of such.
2023-02-23 11:33:52
-3
python,python-idle
2
75,544,650
How to prevent IDLE from running a Python code prematurely
75,544,264
false
84
I am quite new in using IDLE, and yes this is a rookie question, but please bear with me. I have this long, complex python code (I will embed below), that I am copying line by line to IDLE. The problem is that IDLE program runs some parts of the code before I am done typing the whole code. This happens when I skip two lines at a certain section of the code. When I copy the same code as is and input it in one of the online Python interpreters, it runs just fine and the output is complete, unlike with IDLE, where it is in bits or incomplete. How do I stop IDLE from running the code early? The code runs at line 25 (counter += 1) after skipping two lines (so that I get back to the initial/default indentation starting with ">>>"). Here is the code: decimal_parts = [] for num in decimal_numbers: decimal_part = str(num).split('.')[1] if '.' in str(num) else '00' if len(decimal_part) == 1: decimal_part += '0' decimal_parts.append(decimal_part) lst = list(map(int, ",".join(decimal_parts).split(','))) start_index = 0 counter = 1 grouped_lists = [] for i, num in enumerate(lst): if num == 0: start_index = i print(f"Position: {i + 1}") sub_list = lst[start_index:] + lst[:start_index] four_index = sub_list.index(0) last_digit_list = [num % 10 for num in sub_list[four_index:]] print(f"List {counter}: {last_digit_list}") grouped_lists.append(last_digit_list) counter += 1 matches = {} for i, sub_list in enumerate(grouped_lists): for j, num in enumerate(sub_list): if num not in matches: matches[num] = [i] else: matches[num].append(i) for num, match_indices in matches.items(): if len(match_indices) > 1: print(f"Matches found for number {num} in lists: {match_indices}")``` Irrespective of the code running prematurely, I continued inputting the rest of the code, then after the output was generated, I would continue typing the other sections, however, I don't want bits of outputs from bits of code. I want to be able to type the whole code, and get the whole output, all in once. Anyways, after "counter +=" line, there is another section of the code I still need to type on a new line starting with ">>>", but I never get to this line as the code runs.
-0.291313
1
2
What version of IDLE are you using? You might need to update it to the latest version. Also, be wary of indentation and spacing; you can adjust indentation in IDLE's general settings and switch off automatic indentation... hope this helps.
2023-02-23 11:33:52
1
python,python-idle
2
75,544,379
How to prevent IDLE from running a Python code prematurely
75,544,264
false
84
I am quite new in using IDLE, and yes this is a rookie question, but please bear with me. I have this long, complex python code (I will embed below), that I am copying line by line to IDLE. The problem is that IDLE program runs some parts of the code before I am done typing the whole code. This happens when I skip two lines at a certain section of the code. When I copy the same code as is and input it in one of the online Python interpreters, it runs just fine and the output is complete, unlike with IDLE, where it is in bits or incomplete. How do I stop IDLE from running the code early? The code runs at line 25 (counter += 1) after skipping two lines (so that I get back to the initial/default indentation starting with ">>>"). Here is the code: decimal_parts = [] for num in decimal_numbers: decimal_part = str(num).split('.')[1] if '.' in str(num) else '00' if len(decimal_part) == 1: decimal_part += '0' decimal_parts.append(decimal_part) lst = list(map(int, ",".join(decimal_parts).split(','))) start_index = 0 counter = 1 grouped_lists = [] for i, num in enumerate(lst): if num == 0: start_index = i print(f"Position: {i + 1}") sub_list = lst[start_index:] + lst[:start_index] four_index = sub_list.index(0) last_digit_list = [num % 10 for num in sub_list[four_index:]] print(f"List {counter}: {last_digit_list}") grouped_lists.append(last_digit_list) counter += 1 matches = {} for i, sub_list in enumerate(grouped_lists): for j, num in enumerate(sub_list): if num not in matches: matches[num] = [i] else: matches[num].append(i) for num, match_indices in matches.items(): if len(match_indices) > 1: print(f"Matches found for number {num} in lists: {match_indices}")``` Irrespective of the code running prematurely, I continued inputting the rest of the code, then after the output was generated, I would continue typing the other sections, however, I don't want bits of outputs from bits of code. I want to be able to type the whole code, and get the whole output, all in once. Anyways, after "counter +=" line, there is another section of the code I still need to type on a new line starting with ">>>", but I never get to this line as the code runs.
0.099668
1
2
The IDLE shell is just an interpreter; you can send multi-line blocks of code at once by using Shift + Enter when starting a new line, but it is recommended to run a Python file instead. To do this, make a new file by pressing Control + N or going to File -> New file. Now copy all the text into that new file and run it by pressing F5 or by going to Run -> Run module. It may ask you to save your file first; if it does, press Ok and select a location.
2023-02-23 18:11:07
0
javascript,html,python-3.x,pandas
1
75,550,659
Accessing HTML table row element from pandas dataframe
75,548,738
true
64
I have the following code to generate html from pandas dataframe. I'm using JS to access each table row but getting an error. File "<fstring>", line 2 var elem = array[i].cells[1].innerHTML; ^ SyntaxError: invalid syntax def generate_html_main_page(dataframe: pd.DataFrame): # get the table HTML from the dataframe table_html = dataframe.to_html(table_id="table") # construct the complete HTML with jQuery Data tables html = f""" <html> <header> <link href="https://cdn.datatables.net/1.11.5/css/jquery.dataTables.min.css" rel="stylesheet"> </header> <body> {table_html} <script src="https://code.jquery.com/jquery-3.6.0.slim.min.js" integrity="sha256-u7e5khyithlIdTpu22PHhENmPcRdFiHRjhAuHcs05RI=" crossorigin="anonymous"></script> <script type="text/javascript" src="https://cdn.datatables.net/1.11.5/js/jquery.dataTables.min.js"></script> <script type="text/javascript"> var array = document.getElementById("table").rows for (let i = 0; i < array.length; ++i) { var elem = array[i].cells[1].innerHTML; document.getElementById("table").rows[i].cells[1].innerHTML = "<a href='#test'>" + elem +"</a>" document.write(document.getElementById("table").rows[i].cells[1].innerHTML) } </script> </body> </html> """ # return the html return html Any pointers are greatly appreciated. Thanks!
1.2
1
1
You're using an f-string to generate HTML, but the HTML also has {/} characters in it. So Python is trying to execute the body of your JavaScript for loop as Python code. You can escape them as {{/}} when you want them to just be literal braces. Or use plain string concatenation (html = '''...''' + table_html + '''...''').
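A tiny illustration of the escaping rule: doubled braces survive as literal braces, single braces interpolate.

table_id = "table"
html = f"""
<script>
for (let i = 0; i < 3; ++i) {{
    console.log("{table_id}", i);
}}
</script>
"""
print(html)  # the doubled braces come out as single literal braces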
2023-02-23 19:34:18
3
python,numpy,jupyter-notebook,jupyter,jupyter-lab
1
75,549,591
Jupyter Notebooks incorrectly calculating numpy conjugation to a power
75,549,520
true
41
I happened upon some error in my Jupyter Notebook that I can't explain. I have the following code. import numpy as np c = 100 c_conj = np.conjugate(c) print(c == c_conj) print(c**5 == c_conj**5) Resulting in the output True False I get the same result for JupyterLite (the online Jupyter Notebook software). Alternatively, if I run the same code on any other platform (e.g. Google Collab), I get the output True True Is this user error? Is there a way to explain this?
1.2
1
1
numpy.conjugate(100) returns a numpy.int_ instance, not a plain Python int. numpy.int_ corresponds to C long. On the platforms where the comparison evaluated to False, C long is 32 bits, and the c_conj**5 computation overflowed.
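A small demonstration of the width issue, forcing the 32-bit type explicitly so it reproduces on any platform (a sketch; NumPy's fixed-width integers wrap on overflow instead of growing like Python ints):

import numpy as np

print(type(np.conjugate(100)))  # a NumPy integer scalar, not a Python int
print(100 ** 5)                 # 10000000000 -- Python ints have arbitrary precision
print(np.int32(100) ** 5)       # exceeds the int32 range, so the result is wrapped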
2023-02-23 22:04:34
0
python,tensorflow,keras,deep-learning,jupyter-notebook
1
75,697,493
Wrong with Test and valid generators
75,550,794
true
44
I've been working in my model, I came to generators part. when I use to read test.csv file like this: test_df = pd.read_csv("Test.csv") Its going well! and when I came to generator part: def get_test_generator(test_df, train_df, image_dir, x_col, y_cols, sample_size=100, batch_size=8, seed=1, target_w = 320, target_h = 320): """ Return generator for test set using normalization statistics from training set. Args: test_df (dataframe): dataframe specifying test data. image_dir (str): directory where image files are held. x_col (str): name of column in df that holds filenames. y_cols (list): list of strings that hold y labels for images. sample_size (int): size of sample to use for normalization statistics. batch_size (int): images per batch to be fed into model during training. seed (int): random seed. target_w (int): final width of input images. target_h (int): final height of input images. Returns: test_generator (DataFrameIterator): iterators over test set """ print("getting train generators...") # get generator to sample dataset raw_train_generator = ImageDataGenerator().flow_from_dataframe( dataframe=train_df, directory=IMAGE_DIR, x_col="Image", y_col=labels, class_mode="raw", batch_size=sample_size, shuffle=True, target_size=(target_w, target_h)) # get data sample batch = raw_train_generator.next() data_sample = batch[0] # use sample to fit mean and std for test set generator image_generator = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization= True) # fit generator to sample from training data image_generator.fit(data_sample) # get test generator test_generator = image_generator.flow_from_dataframe( dataframe=test_df, directory=image_dir, x_col=x_col, y_col=y_cols, class_mode="raw", batch_size=batch_size, shuffle=False, seed=seed, target_size=(target_w,target_h)) return test_generator when I run this cell in Jupyter : IMAGE_DIR = '/Users/awabe/Desktop/Project/PapilaDB/FundusImages test' test_generator= get_test_generator(test_df, train_df, IMAGE_DIR, "Image", labels) to read the images it give me the error: getting train generators... Found 0 validated image filenames. Found 488 validated image filenames. /opt/anaconda3/envs/tensorflow/lib/python3.10/site-packages/keras/preprocessing/image.py:1139: UserWarning: Found 488 invalid image filename(s) in x_col="Image". These filename(s) will be ignored. warnings.warn( /opt/anaconda3/envs/tensorflow/lib/python3.10/site-packages/numpy/core/fromnumeric.py:3432: RuntimeWarning: Mean of empty slice. return _methods._mean(a, axis=axis, dtype=dtype, /opt/anaconda3/envs/tensorflow/lib/python3.10/site-packages/numpy/core/_methods.py:182: RuntimeWarning: invalid value encountered in divide ret = um.true_divide( /opt/anaconda3/envs/tensorflow/lib/python3.10/site-packages/numpy/core/_methods.py:265: RuntimeWarning: Degrees of freedom <= 0 for slice ret = _var(a, axis=axis, dtype=dtype, out=out, ddof=ddof, /opt/anaconda3/envs/tensorflow/lib/python3.10/site-packages/numpy/core/_methods.py:223: RuntimeWarning: invalid value encountered in divide arrmean = um.true_divide(arrmean, div, out=arrmean, casting='unsafe', /opt/anaconda3/envs/tensorflow/lib/python3.10/site-packages/numpy/core/_methods.py:254: RuntimeWarning: invalid value encountered in divide ret = um.true_divide( (the 488 images in the second line belongs to the train generator which works fine) where is the wrong here?
1.2
1
1
The image names in the Image column must include the file extension. For example, if the file in the image directory is image1.jpg, then the name in the CSV should be image1.jpg; if you write it as image1 you will get an error back. So simple 🤦. Thanks to TFer2 above.
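A small, hedged fix along those lines, assuming the CSV stores bare names and the files on disk are JPEGs (adjust the extension to whatever your directory actually contains):

```python
import pandas as pd

test_df = pd.read_csv("Test.csv")

# Append the extension only where it is missing, so flow_from_dataframe can validate the files
test_df["Image"] = test_df["Image"].astype(str).apply(
    lambda name: name if name.lower().endswith(".jpg") else f"{name}.jpg"
)
```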
2023-02-24 01:30:05
0
python,user-interface,tkinter
1
75,570,271
tkinter scrollable listbox refresh
75,551,873
false
85
I have the folling tkinter UI that I am building that on load does not immediately load the listbox data, and I'm not sure why. Instead, on load I get the scrollbar and an empty listbox (the button shows up fine too). As soon as I interact with the window at all, the contents of the listbox show up: from tkinter import * gui = Tk() gui.eval('tk::PlaceWindow . center') # gui.geometry("500x200") top_frame = Frame(gui) top_frame.pack(side=TOP) bot_frame = Frame(gui) bot_frame.pack(side=BOTTOM) scrollbar = Scrollbar(top_frame) scrollbar.pack(side=LEFT, fill=Y) lb = Listbox(top_frame) lb.pack() def onselect(evt): w = evt.widget index = int(w.curselection()[0]) value = w.get(index) print(index, value) lb.bind('<<ListboxSelect>>', onselect) lb.insert(0, *range(100)) scrollbar.config(command=lb.yview) lb.config(yscrollcommand=scrollbar.set) quit_button = Button(bot_frame, text="Quit", command=gui.destroy) quit_button.pack() mainloop() It seems like there is some ordering in which the pack calls need to occur that I can't seem to get right. How can I get the items to show up on window load while keeping the scrollbar on the left? EDIT 1: system info: platform.platform(): macOS-12.6.2-x86_64-i386-64bit platform.python_version(): 3.10.6 tk.TkVersion: 8.6
0
1
1
Sometimes, especially at application start-up, there is a flood of things Tkinter does in the background, and pending calls sit in the waiting queue until the mainloop gets to them. I've read that many Tcl'ers wait for the application window to be mapped and then force a redraw, which is a widespread technique to make sure pending work gets processed. widget.update_idletasks() should do that for you.
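A self-contained sketch of where such a call could go, reduced from the question's code:

```python
from tkinter import Tk, Listbox, mainloop

gui = Tk()
lb = Listbox(gui)
lb.pack()
lb.insert(0, *range(100))

# Flush pending geometry and drawing work so the listbox contents appear immediately
gui.update_idletasks()

mainloop()
```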
2023-02-24 02:41:33
1
python,machine-learning,scikit-learn,pipeline,data-preprocessing
2
75,584,135
What is the correct order in data preprocessing stage for Machine Learning?
75,552,168
false
304
I am trying to create some sort of step-by-step guide/cheat sheet for myself on how to correctly go over the data preprocessing stage for Machine Learning. Let's imagine we have a binary Classification problem. Would the below strategy work or do I have to change/modify the order of some of the steps and maybe something should be added or removed? 1. LOAD DATA import pandas as pd df = pd.read_csv("data.csv") 2. SPLIT DATA - I understand, that to prevent "data leakage", we MUST split data into training (work with it) and testing (pretend it does not exist) sets. from sklearn.model_selection import train_test_split # stratify = 'target' if proportion disbalance in data, so training and testing sets will have the same proportion after splitting. train_df, test_df = train_test_split(df, test_size = 0.33, random_state = 42, stratify = 'target') 3. EDA ON TRAINING DATA - Is it correct to look at the training set only or should we do EDA before splitting? If we assume the Test set doesn't exist, then we should not care what is there, right? train_df.info() train_df.describe() # + Plots etc. 4. OUTLIERS ON TRAINING DATA - If we have to scale the data, the Mean (Average) is very sensitive to outliers, therefore we have to take care of them in the beginning. Also, if we decide to fill Null numerical features with mean, outliers may be a problem in this case. import matplotlib.pyplot as plt import seaborn as sns # Check distributions sns.diplot(train_df) sns.boxplot(train_df) train_df.corr() # Correlation between all features and label train_df.corr()["target"].sort_values() sns.scatterplot(x = "Column X", y = 'target', data = train_df) train_df.describe() # above 75% + 1.5 * (75% - 25%) and below 25% - 1.5 * (75% - 25%) 5. MISSING VALUES ON TRAINING DATA - We can't have Null values. We either remove or fill in them. This step should be taken care of in the beginning. train_df.info() train_df.isnull().sum() # or train_df.isna().sum() # Show the rows with Null values train_df[train_df["Column"].isnull()] 6. FEATURE ENGINEERING ON TRAINING DATA - Is this step should be taken care of in the beginning as well? I think so because we can create the feature that might need to be scaled. # If some columns (not target) correlated with each other, we should delete one of them, or make some sort of blending. train_df.corr() train_df = train_df.drop("1 of Correlated X Column", axis = 1) # For normally distributed data, the skewness should be about 0. A skewness value > 0 means there is more weight in the left tail of the distribution # We should try to have normal distribution in the columns train_df["Not Skewed Column"] = np.log(train_df["Skewed Column"] + 1) train_df["Not Skewed Column"].hist(figsize = (20,5)) plt.show() 7. CATEGORICAL DATA - We can't have objects in the data frame. from sklearn.preprocessing import OneHotEncoder # Just an example # Create X and y variables X_train = train_df.drop('target', axis = 1) y_train = np.where(train['target'] == 'yes', 1, 0) # Create the one hot encoder onehot = OneHotEncoder(handle_unknown = 'ignore') # Apply one hot encoding to categorical columns encoded_columns = onehot.fit_transform(X_train.select_dtypes(include = 'object')).toarray() X_train = X_train.select_dtypes(exclude = 'object') X_train[onehot.get_feature_names_out()] = encoded_columns 8. IMBALANCED DATA - Good to have the same or similar number of observations in the target column. 
from imblearn.over_sampling import SMOTE # Just an example # Create the SMOTE class sm = SMOTE(random_state = 42) # Resample to balance the dataset X_train, y_train = sm.fit_resample(X_train, y_train) 9. SCALE DATA - Should we scale the target column in the Regression task? # Brings mean close to 0 and std to 1. Formula = (x - mean) / std from sklearn.preprocessing import StandardScaler # Just an example scaler = StandardScaler() scaled_X_train = scaler.fit_transform(X_train) # X_test we don't fit, only transform! 10. PRINCIPAL COMPONENT ANALYSIS (PCA) - REDUCING DIMENSIONALITY - Should data be scaled before applying PCA? # Example: PCA = 50 (n_components). Let's say Input is 100 X features, after applying PCA, Output will be 50 X features. # Why don't use PCA all the time? We lose the ability to explain what each value is because they are now in combination with a whole bunch of features. # Will not be able to look at feature importance, trees, etc. We use it when we need to. # If we are able to train the model with all features, then great. if can't, we can apply PCA, but be ready to lose the ability to explain what is driving the machine learning model. from sklearn.decomposition import PCA # Just an example pca = PCA(n_components = 50) # Just an Example scaled_X_train = pca.fit_transform(scaled_X_train) # X_test we don't fit, only transform! 11. MODEL, FIT, EVALUATE, PREDICT from sklearn.linear_model import RidgeClassifier # Just an Example from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score, confusion_matrix model = RidgeClassifier() model.fit(scaled_X_train, y_train) # HERE we should create and / or execute transformation function that will take test_df as input and will return scaled_X_test and y_test y_pred = model.predict(scaled_X_test) # Evaluate model - Calculate Classification metrics accuracy = accuracy_score(y_test, y_pred) precision = precision_score(y_test, y_pred) recall = recall_score(y_test, y_pred) f1 = f1_score(y_test, y_pred) print(f"RidgeClassifier model scores Accuracy: {accuracy}, Precision: {precision}, Recall: {recall}, F1-Score: {f1}") confusion_matrix(y_test, y_pred, labels = [1,0]) 12. SAVE MODEL import joblib # Just an example # Save Model joblib.dump(model, 'best_model.joblib')
0.099668
1
1
I would suggest the following steps: 1) EDA (learn about the data); 2) find correlations; 3) remove unnecessary features; 4) preprocess the data (e.g. outlier removal, encoding); 5) split features and target variables (X and y); 6) train/test split; 7) perform scaling (scaling before the train/test split will lead to data leakage); 8) choose the algorithm depending on the use case (tree-based models are not affected by outliers or by features on different scales, so you can skip those steps when selecting such models); 9) depending on the use case, select the metrics to judge your model's performance (confusion matrix, F1 score, precision, recall, RMSE, MSE).
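A hedged scikit-learn sketch of that ordering, fitting the scaler on the training split only; the file name, column names, and model are placeholders:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

df = pd.read_csv("data.csv")                      # placeholder file
X = df.drop("target", axis=1)
y = df["target"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y
)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)    # fit on the training split only
X_test_scaled = scaler.transform(X_test)          # transform the test split, never fit on it

model = LogisticRegression().fit(X_train_scaled, y_train)
print(f1_score(y_test, model.predict(X_test_scaled)))
```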
2023-02-24 03:22:02
2
python,python-3.x
2
75,679,235
LangChain - cannot import langchain.agents.load_tools
75,552,338
false
8,813
I am trying to use LangChain Agents and am unable to import load_tools. Version: langchain==0.0.27 I tried these: from langchain.agents import initialize_agent from langchain.llms import OpenAI from langchain.agents import load_tools shows output --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-36-8eb0012265d0> in <module> 1 from langchain.agents import initialize_agent 2 from langchain.llms import OpenAI ----> 3 from langchain.agents import load_tools ImportError: cannot import name 'load_tools' from 'langchain.agents' (C:\ProgramData\Anaconda3\lib\site-packages\langchain\agents\__init__.py)
0.197375
1
1
I had the same problem using Python 3.7.9; I installed Python 3.10.10 instead and it worked.
2023-02-24 11:35:40
-1
python,sorting,filenames
2
75,556,290
Python - clever way to define filenames including string and number suffix, in order to sort them properly later
75,556,094
false
48
I am looking for a clever way to sort my files in Python. I am generating many JSON files in a folder which includes a string and index. Currently I can list them like: [A_0.json, A_1.json, A_2.json, A_3.json ... A_500.json] Another folder contains: [B_0.json, B_1.json, B_2.json, B_3.json ... B_300.json] In the next step, for each folder, I will run a script to merge all files into one. So, I would like to keep this naming convention (string + index). The suffix numbers come from index from Dataframe. But, I am struggling to merge all JSON files into one with the right sequence of the index. I first sorted files in a folder: ['A_0.json', 'A_1.json', 'A_10.json', 'A_100.json', 'A_101.json', 'A_2.json', 'A_3.json'] What I would like to see is: ['A_0.json', 'A_1.json', 'A_2.json', 'A_3.json', 'A_10.json', 'A_100.json', 'A_101.json'...] So, the merged file can contain the content of JSON files in the right order. Note: the original JSON files should be preserved. Sorry that this question may mean two questions/steps. Suggestion to solve this problem is appreciated. If your suggestion requires the slight change of the naming convention, that is not ideal, but I would definitely consider. Many thanks!
-0.099668
1
1
Please try: l = ['A_0.json', 'A_1.json', 'A_10.json', 'A_100.json', 'A_101.json', 'A_2.json', 'A_99.json']; l.sort(key=len); print(l)
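An alternative sketch that sorts on the numeric suffix directly, in case names of equal length also need ordering; it assumes the string_number.json pattern from the question:

```python
import re

files = ['A_0.json', 'A_1.json', 'A_10.json', 'A_100.json', 'A_101.json', 'A_2.json', 'A_99.json']

def numeric_suffix(name: str) -> int:
    # Pull the integer between the last underscore and ".json"
    match = re.search(r'_(\d+)\.json$', name)
    return int(match.group(1)) if match else -1

files.sort(key=numeric_suffix)
print(files)  # ['A_0.json', 'A_1.json', 'A_2.json', 'A_10.json', 'A_99.json', 'A_100.json', 'A_101.json']
```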
2023-02-24 13:04:24
1
python,jupyter,snakemake
1
75,557,048
ModuleNotFoundError when running Jupyter notebook with conda env in Snakemake
75,556,909
false
246
I recently tried to use the Scanpy python package in a jupyter notebook that I made in Snakemake. Scanpy is installed in a conda environment that I explicited in a .yaml in Snakemake. When running the job: snakemake --cores 1 results/output.h5ad --use-conda the conda environment is succesfully loaded, but Snakemake does not find the module and gives this error: Building DAG of jobs... Using shell: /bin/bash Provided cores: 1 (use --cores to define parallelism) Rules claiming more threads will be scaled down. Job stats: job count min threads max threads ------- ------- ------------- ------------- load_h5 1 1 1 total 1 1 1 Select jobs to execute... [Fri Feb 24 13:35:29 2023] rule load_h5: input: input/filtered_feature_bc_matrix.h5 output: results/output.h5ad jobid: 0 reason: Missing output files: results/output.h5ad resources: tmpdir=/var/folders/w7/zvnr_nqd4f3_2kdw0259s26r0000gq/T Activating conda environment: .snakemake/conda/8d45bb2abfce310beb1752237b93c097_ Traceback (most recent call last): File "/Users/usr/miniconda3/bin/jupyter-nbconvert", line 10, in <module> sys.exit(main()) File "/Users/usr/miniconda3/lib/python3.9/site-packages/jupyter_core/application.py", line 277, in launch_instance return super().launch_instance(argv=argv, **kwargs) File "/Users/usr/miniconda3/lib/python3.9/site-packages/traitlets/config/application.py", line 1041, in launch_instance app.start() File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbconvert/nbconvertapp.py", line 418, in start self.convert_notebooks() File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbconvert/nbconvertapp.py", line 592, in convert_notebooks self.convert_single_notebook(notebook_filename) File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbconvert/nbconvertapp.py", line 555, in convert_single_notebook output, resources = self.export_single_notebook( File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbconvert/nbconvertapp.py", line 483, in export_single_notebook output, resources = self.exporter.from_filename( File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbconvert/exporters/exporter.py", line 198, in from_filename return self.from_file(f, resources=resources, **kw) File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbconvert/exporters/exporter.py", line 217, in from_file return self.from_notebook_node( File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbconvert/exporters/notebook.py", line 36, in from_notebook_node nb_copy, resources = super().from_notebook_node(nb, resources, **kw) File "/Users/bduc1/miniconda3/lib/python3.9/site-packages/nbconvert/exporters/exporter.py", line 153, in from_notebook_node nb_copy, resources = self._preprocess(nb_copy, resources) File "/Users/bduc1/miniconda3/lib/python3.9/site-packages/nbconvert/exporters/exporter.py", line 349, in _preprocess nbc, resc = preprocessor(nbc, resc) File "/Users/bduc1/miniconda3/lib/python3.9/site-packages/nbconvert/preprocessors/base.py", line 48, in __call__ return self.preprocess(nb, resources) File "/Users/bduc1/miniconda3/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py", line 100, in preprocess self.preprocess_cell(cell, resources, index) File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py", line 121, in preprocess_cell cell = self.execute_cell(cell, index, store_history=True) File "/Users/usr/miniconda3/lib/python3.9/site-packages/jupyter_core/utils/__init__.py", line 168, in wrapped return loop.run_until_complete(inner) File 
"/Users/usr/miniconda3/lib/python3.9/site-packages/nest_asyncio.py", line 90, in run_until_complete return f.result() File "/Users/usr/miniconda3/lib/python3.9/asyncio/futures.py", line 201, in result raise self._exception File "/Users/usr/miniconda3/lib/python3.9/asyncio/tasks.py", line 256, in __step result = coro.send(None) File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbclient/client.py", line 1021, in async_execute_cell await self._check_raise_for_error(cell, cell_index, exec_reply) File "/Users/usr/miniconda3/lib/python3.9/site-packages/nbclient/client.py", line 915, in _check_raise_for_error raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content) nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell: ------------------ # start coding here import scanpy as sc ------------------ --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) /var/folders/w7/zvnr_nqd4f3_2kdw0259s26r0000gq/T/ipykernel_34677/1324428204.py in <cell line: 2>() 1 # start coding here ----> 2 import scanpy as sc ModuleNotFoundError: No module named 'scanpy' ModuleNotFoundError: No module named 'scanpy' [Fri Feb 24 13:35:35 2023] Error in rule load_h5: jobid: 0 input: input/filtered_feature_bc_matrix.h5 output: results/output.h5ad conda-env: /Users/usr/projects/wspace/project/.snakemake/conda/8d45bb2abfce310beb1752237b93c097_ Shutting down, this might take some time. Exiting because a job execution failed. Look above for error message To be sure that the environment created by Snakemake works correctly I activated the Snakemake-generated environment: conda activate .snakemake/conda/8d45bb2abfce310beb1752237b93c097_ this works, so I check which Python is used: which python /Users/usr/projects/wspace/project/.snakemake/conda/8d45bb2abfce310beb1752237b93c097_/bin/python this looks good, so I try to use Scanpy: $ python3 Python 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:27:35) [Clang 14.0.6 ] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import scanpy as sc >>> test = sc.read_10x_h5("/Users/usr/projects/wspace/project/input/filtered_feature_bc_matrix.h5") /Users/usr/projects/wspace/project/.snakemake/conda/8d45bb2abfce310beb1752237b93c097_/lib/python3.10/site-packages/anndata/_core/anndata.py:1830: UserWarning: Variable names are not unique. To make them unique, call `.var_names_make_unique`. So Scanpy works in this environment! But somehow Snakemake fails to find the Scanpy module! I'm using Snakemake 7.22.0, on macOS Monterey 12.5.1. Any help would be greatly appreciated! 
Snakefile rule is: rule load_h5: output: "results/output.h5ad" input: "input/filtered_feature_bc_matrix.h5" conda: "envs/scanpy_min.yaml" notebook: "notebooks/01_load_scanpy.py.ipynb" scanpy_min.yaml is: name: scanpy_min channels: - conda-forge - bioconda - defaults dependencies: - anndata=0.8.0=pyhd8ed1ab_1 - appdirs=1.4.4=pyh9f0ad1d_0 - appnope=0.1.3=pyhd8ed1ab_0 - arpack=3.7.0=hefb7bc6_2 - asttokens=2.2.1=pyhd8ed1ab_0 - backcall=0.2.0=pyh9f0ad1d_0 - backports=1.0=pyhd8ed1ab_3 - backports.functools_lru_cache=1.6.4=pyhd8ed1ab_0 - blosc=1.21.2=hebb52c4_0 - brotli=1.0.9=hb7f2c08_8 - brotli-bin=1.0.9=hb7f2c08_8 - brotlipy=0.7.0=py310h90acd4f_1005 - bzip2=1.0.8=h0d85af4_4 - c-ares=1.18.1=h0d85af4_0 - ca-certificates=2022.12.7=h033912b_0 - cached-property=1.5.2=hd8ed1ab_1 - cached_property=1.5.2=pyha770c72_1 - certifi=2022.12.7=pyhd8ed1ab_0 - cffi=1.15.1=py310ha78151a_3 - charset-normalizer=2.1.1=pyhd8ed1ab_0 - colorama=0.4.6=pyhd8ed1ab_0 - comm=0.1.2=pyhd8ed1ab_0 - contourpy=1.0.7=py310ha23aa8a_0 - cryptography=39.0.0=py310hdd0c95c_0 - cycler=0.11.0=pyhd8ed1ab_0 - debugpy=1.6.6=py310h7a76584_0 - decorator=5.1.1=pyhd8ed1ab_0 - entrypoints=0.4=pyhd8ed1ab_0 - et_xmlfile=1.1.0=pyhd8ed1ab_0 - executing=1.2.0=pyhd8ed1ab_0 - fonttools=4.38.0=py310h90acd4f_1 - freetype=2.12.1=h3f81eb7_1 - glpk=5.0=h3cb5acd_0 - gmp=6.2.1=h2e338ed_0 - h5py=3.8.0=nompi_py310h5555e59_100 - hdf5=1.12.2=nompi_h48135f9_101 - icu=70.1=h96cf925_0 - idna=3.4=pyhd8ed1ab_0 - igraph=0.10.3=h020c493_0 - importlib-metadata=6.0.0=pyha770c72_0 - importlib_metadata=6.0.0=hd8ed1ab_0 - ipykernel=6.20.2=pyh736e0ef_0 - ipython=8.8.0=pyhd1c38e8_0 - jedi=0.18.2=pyhd8ed1ab_0 - joblib=1.2.0=pyhd8ed1ab_0 - jpeg=9e=hac89ed1_2 - jupyter_client=7.4.9=pyhd8ed1ab_0 - jupyter_core=5.1.5=py310h2ec42d9_0 - kiwisolver=1.4.4=py310ha23aa8a_1 - krb5=1.20.1=h049b76e_0 - lcms2=2.14=h29502cd_1 - leidenalg=0.9.1=py310h7a76584_0 - lerc=4.0.0=hb486fe8_0 - libaec=1.0.6=hf0c8a7f_1 - libblas=3.9.0=16_osx64_openblas - libbrotlicommon=1.0.9=hb7f2c08_8 - libbrotlidec=1.0.9=hb7f2c08_8 - libbrotlienc=1.0.9=hb7f2c08_8 - libcblas=3.9.0=16_osx64_openblas - libcurl=7.87.0=h6df9250_0 - libcxx=14.0.6=hccf4f1f_0 - libdeflate=1.17=hac1461d_0 - libedit=3.1.20191231=h0678c8f_2 - libev=4.33=haf1e3a3_1 - libffi=3.4.2=h0d85af4_5 - libgfortran=5.0.0=11_3_0_h97931a8_27 - libgfortran5=11.3.0=h082f757_27 - libiconv=1.17=hac89ed1_0 - libjpeg-turbo=2.1.4=hb7f2c08_0 - liblapack=3.9.0=16_osx64_openblas - libllvm11=11.1.0=h8fb7429_5 - libnghttp2=1.51.0=he2ab024_0 - libopenblas=0.3.21=openmp_h429af6e_3 - libpng=1.6.39=ha978bb4_0 - libsodium=1.0.18=hbcb3906_1 - libsqlite=3.40.0=ha978bb4_0 - libssh2=1.10.0=h47af595_3 - libtiff=4.5.0=hee9004a_2 - libwebp-base=1.2.4=h775f41a_0 - libxcb=1.13=h0d85af4_1004 - libxml2=2.10.3=hb9e07b5_0 - libzlib=1.2.13=hfd90126_4 - llvm-openmp=15.0.7=h61d9ccf_0 - llvmlite=0.39.1=py310h2bfb868_1 - lz4-c=1.9.4=hf0c8a7f_0 - matplotlib-base=3.6.3=py310he725631_0 - matplotlib-inline=0.1.6=pyhd8ed1ab_0 - metis=5.1.0=h2e338ed_1006 - mpfr=4.1.0=h0f52abe_1 - munkres=1.1.4=pyh9f0ad1d_0 - natsort=8.2.0=pyhd8ed1ab_0 - ncurses=6.3=h96cf925_1 - nest-asyncio=1.5.6=pyhd8ed1ab_0 - networkx=3.0=pyhd8ed1ab_0 - numba=0.56.4=py310h62db5c2_0 - numexpr=2.8.3=py310hecf8f37_1 - numpy=1.23.5=py310h1b7c290_0 - openjpeg=2.5.0=h13ac156_2 - openpyxl=3.1.0=py310h90acd4f_0 - openssl=3.0.8=hfd90126_0 - packaging=23.0=pyhd8ed1ab_0 - pandas=1.5.3=py310hecf8f37_0 - parso=0.8.3=pyhd8ed1ab_0 - patsy=0.5.3=pyhd8ed1ab_0 - pexpect=4.8.0=pyh1a96a4e_2 - pickleshare=0.7.5=py_1003 - 
pillow=9.4.0=py310hab5364c_0 - pip=22.3.1=pyhd8ed1ab_0 - platformdirs=2.6.2=pyhd8ed1ab_0 - pooch=1.6.0=pyhd8ed1ab_0 - prompt-toolkit=3.0.36=pyha770c72_0 - psutil=5.9.4=py310h90acd4f_0 - pthread-stubs=0.4=hc929b4f_1001 - ptyprocess=0.7.0=pyhd3deb0d_0 - pure_eval=0.2.2=pyhd8ed1ab_0 - pycparser=2.21=pyhd8ed1ab_0 - pygments=2.14.0=pyhd8ed1ab_0 - pynndescent=0.5.8=pyh1a96a4e_0 - pyopenssl=23.0.0=pyhd8ed1ab_0 - pyparsing=3.0.9=pyhd8ed1ab_0 - pysocks=1.7.1=pyha2e5f31_6 - pytables=3.7.0=py310h90ba602_3 - python=3.10.8=he7542f4_0_cpython - python-dateutil=2.8.2=pyhd8ed1ab_0 - python-igraph=0.10.3=py310hedfac68_0 - python_abi=3.10=3_cp310 - pytz=2022.7.1=pyhd8ed1ab_0 - pyzmq=25.0.0=py310hf615a82_0 - readline=8.1.2=h3899abd_0 - requests=2.28.2=pyhd8ed1ab_0 - scanpy=1.9.1=pyhd8ed1ab_0 - scikit-learn=1.2.1=py310hcebe997_0 - scipy=1.10.0=py310h240c617_0 - seaborn=0.12.2=hd8ed1ab_0 - seaborn-base=0.12.2=pyhd8ed1ab_0 - session-info=1.0.0=pyhd8ed1ab_0 - setuptools=66.1.1=pyhd8ed1ab_0 - six=1.16.0=pyh6c4a22f_0 - snappy=1.1.9=h225ccf5_2 - stack_data=0.6.2=pyhd8ed1ab_0 - statsmodels=0.13.5=py310h936d966_2 - stdlib-list=0.8.0=pyhd8ed1ab_0 - suitesparse=5.10.1=h7aff33d_1 - tbb=2021.7.0=hb8565cd_1 - texttable=1.6.7=pyhd8ed1ab_0 - threadpoolctl=3.1.0=pyh8a188c0_0 - tk=8.6.12=h5dbffcc_0 - tornado=6.2=py310h90acd4f_1 - tqdm=4.64.1=pyhd8ed1ab_0 - traitlets=5.8.1=pyhd8ed1ab_0 - typing-extensions=4.4.0=hd8ed1ab_0 - typing_extensions=4.4.0=pyha770c72_0 - tzdata=2022g=h191b570_0 - umap-learn=0.5.3=py310h2ec42d9_0 - unicodedata2=15.0.0=py310h90acd4f_0 - urllib3=1.26.14=pyhd8ed1ab_0 - wcwidth=0.2.6=pyhd8ed1ab_0 - wheel=0.38.4=pyhd8ed1ab_0 - xorg-libxau=1.0.9=h35c211d_0 - xorg-libxdmcp=1.1.3=h35c211d_0 - xz=5.2.6=h775f41a_0 - zeromq=4.3.4=he49afe7_1 - zipp=3.11.0=pyhd8ed1ab_0 - zstd=1.5.2=hbc0c0cd_6 prefix: /Users/usr/miniconda3/envs/scanpy_min
0.197375
1
1
If anyone runs into the same issue, I found the answer. It is not sufficient to have jupyter_core and jupyter_client in the environment of interest. I installed jupyter lab using mamba and it solved everything!
2023-02-24 16:47:34
1
python,sqlite,rust,python-polars
2
76,377,130
How do I write polars dataframe to external database?
75,559,239
false
939
I have big polars dataframe that I want to write into external database (sqlite for example) How can I do it? In pandas, you have to_sql() function, but I couldn't find any equivalent in polars
0.099668
2
1
You can use the DataFrame.write_database method.
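A hedged sketch of writing to SQLite with that method; parameter names (connection vs. connection_uri, if_exists vs. if_table_exists) differ between Polars releases and a SQL engine such as SQLAlchemy must be installed, so check the signature for your version — the table and database names below are placeholders:

```python
import polars as pl

df = pl.DataFrame({"id": [1, 2, 3], "value": [10.0, 20.0, 30.0]})

# Arguments passed positionally to sidestep keyword-name differences between versions
df.write_database("my_table", "sqlite:///my_database.db")
```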
2023-02-24 17:27:19
0
python,numpy,scipy
2
75,559,691
What is the fastest way of determining rank of a new element in the existing numpy array without computing ranks for the whole array?
75,559,625
false
87
Given an array of numbers A and some decimal number X, I would like to know the rank of X in A, and the straightforward way of doing this is to append a new number into the initial array, run rankdata on it and pick the last element, like this: import numpy as np from scipy.stats import rankdata A = np.array([33.25, 40.16, 18.22, 96.34, 71.15, 48.12, 52.41, 83.11, 12.22]) X = 54.17 B = np.append(A, X) ranks = rankdata(-B) # reverse an array so that the largest value will have rank 1 rank = int(ranks[-1]) Even though it produces the correct result, in order to run it often and on large arrays, it would be useful to obtain it without sorting the whole array. With that in mind I wonder whether there is a numpy or scipy idiom of doing it faster.
0
2
1
Since you would like to determine the rank of new items "often and on large arrays", you'll need a sorted list. Hence, you might as well keep the items in a sorted data structure. Some of the data structures with the best complexity for this (O(n log(n))) would be B-trees and heaps.
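For a standard-library sketch along those lines, a list kept sorted with bisect answers each rank query in O(log n); the numbers are taken from the question, with rank 1 for the largest value:

```python
import bisect

A = [33.25, 40.16, 18.22, 96.34, 71.15, 48.12, 52.41, 83.11, 12.22]
sorted_A = sorted(A)                 # build once: O(n log n)

def rank_of(x, sorted_values):
    # Rank 1 for the largest value, counting x itself
    pos = bisect.bisect_left(sorted_values, x)
    return len(sorted_values) - pos + 1

X = 54.17
print(rank_of(X, sorted_A))          # 4: only 71.15, 83.11 and 96.34 are larger

bisect.insort(sorted_A, X)           # keep X around for future queries; the list stays sorted
```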
2023-02-24 21:20:30
1
python-3.x,algorithm
2
75,561,604
Why is one of my almost identical algorithms works ~10 times faster then other?
75,561,483
false
53
I am new in programming, so i don't understand some more deeper things. I have a task, to create amount of passwords, with a given numb and longennes of passwords. My first code: from random import choices, choice def generate_password(m): ch = choices('23456789qwertyupasdfghjkzxcvbnmiLQWERTYUPASDFGHJKZXCVBNM', k=m - 3) ch.append(choice('LQWERTYUPASDFGHJKZXCVBNM')) ch.append(choice('qwertyupasdfghjkzxcvbnmi')) ch.append(choice('23456789')) return ''.join(ch) def main(n, m): p = set() t = 0 while t < n: s = generate_password(m) if s not in p: t += 1 p.add(s) return p was working, but too long, so then i tried other ways, randomly improving parts of code. So, why is this code so fast? Is it easier to shuffle() several times, then just use choice() ? from random import choices, shuffle, choice def generate_password(m): global p ch = choices('LQWERTYUPASDFGHJKZXCVBNM', k=m - 3) ch.append(choice('LQWERTYUPASDFGHJKZXCVBNM')) ch.append(choice('qwertyupasdfghjkzxcvbnmi')) ch.append(choice('23456789')) while True: tt = ''.join(ch) if tt in p: shuffle(ch) continue return tt p = set() def main(n, m): global p t = 0 while t < n: p.add(generate_password(m)) t += 1 return p from time import time from random import choices, shuffle, choice def generate_password(m): global p ch = choices('LQWERTYUPASDFGHJKZXCVBNM', k=m - 3) ch.append(choice('LQWERTYUPASDFGHJKZXCVBNM')) ch.append(choice('qwertyupasdfghjkzxcvbnmi')) ch.append(choice('23456789')) while True: tt = ''.join(ch) if tt in p: shuffle(ch) continue return tt p = set() def main(n, m): global p t = 0 while t < n: p.add(generate_password(m)) t += 1 return p t1 = time() print(*main(4609, 3)) t2 = time() print(t2 - t1) output: 0.05657219886779785 from time import time from random import choices, choice def generate_password(m): ch = choices('23456789qwertyupasdfghjkzxcvbnmiLQWERTYUPASDFGHJKZXCVBNM', k=m - 3) ch.append(choice('LQWERTYUPASDFGHJKZXCVBNM')) ch.append(choice('qwertyupasdfghjkzxcvbnmi')) ch.append(choice('23456789')) return ''.join(ch) def main(n, m): p = set() t = 0 while t < n: s = generate_password(m) if s not in p: t += 1 p.add(s) return p t1 = time() print(*main(4609, 3)) t2 = time() print(t2 - t1) I didn't wait
0.099668
1
1
In the first case, you generate a random password with m-2 uppercase letters, one lowercase letter and a digit between 2 and 9, after which it gets added to the set if it is not a duplicate. When you start accumulating passwords, the chances of a duplicate increase, so multiple passwords have to be generated before you get a unique one, which is the slowdown you mention. Now compare that to the second case, where if the password is a duplicate, you shuffle the characters until it is no longer a duplicate. Now the order of m-2 capitals, one lowercase and a digit no longer applies, so many more permutations of a password are possible, greatly decreasing the chance of a clash and likely always finding a unique password. So the short answer is: the latter code snippet generates many more unique passwords than the former, decreasing the chance of a duplicate and greatly increasing the execution speed.
2023-02-25 00:02:27
0
python,pandas
5
75,562,443
Delimiter for Splitting each character of a string?
75,562,434
false
134
I am currently trying to split two-character strings into two separate columns for each character in a pandas data.frame, but I've been struggling to find a way to perform the operation on the column without having to iterate through each row. My starting data.frame looks something like this: Initial 0 PT 1 XT 2 ZT And I'm hoping to split the 'Initial' column into two separate columns containing each character like this: S1 S2 0 P T 1 X T 2 Z T I've used the split() function, and I've tried to find a proper delimiter to supply it which would split every character, but I'm at a loss so far. Is there a good way to do this without needing to iterate over each row?
0
2
1
Not sure about pandas, but list(text) will create a list of all the characters in a string. Then you could try to create a DataFrame over a list of lists.
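A hedged sketch of that idea applied to the example frame from the question:

```python
import pandas as pd

df = pd.DataFrame({"Initial": ["PT", "XT", "ZT"]})

# list("PT") -> ['P', 'T']; build a list of lists and turn it into two new columns
chars = [list(s) for s in df["Initial"]]
df[["S1", "S2"]] = pd.DataFrame(chars, index=df.index)

print(df[["S1", "S2"]])
```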
2023-02-25 11:20:21
0
python
2
75,565,148
does python allows elif statement without else statement?
75,565,030
false
75
While teaching python to a friend i tried this statement : val = "hi" if (val=="hello") or ("w" in val): print("hello") elif(val=="hi"): print("hi") And to my great surprise it worked. I always tought in Python you couldn't do an elif without else. Has it been always like that or the syntax has changed since a particular version?
0
1
1
This has worked in all versions; an else clause has never been required. Without one, you simply don't handle the case where neither the if nor the elif condition is satisfied.
2023-02-25 11:49:03
0
python,class,variables,android-recyclerview,kivy
1
75,579,129
How to retrieve variable from Spinner to use it in RecycleView?
75,565,193
false
54
I'm a kivy beginner and i cant retrieve this damn spinner variable. I'm a bit confused with the different ways python and kivy handle variables. Here is my (shorten) code: from kivy.app import App from kivy.properties import ObjectProperty, StringProperty, ListProperty from kivy.uix.boxlayout import BoxLayout from kivy.lang import Builder from kivy.uix.recycleview import RecycleView from kivy.uix.recycleview.views import RecycleDataViewBehavior from kivy.uix.widget import Widget from kivy.uix.image import Image from kivy.core.window import Window from kivy.lang import Builder Builder.load(""" #<MyViewClass>: # orientation: 'vertical' # label_vc: 'type' <MyRecycleView>: id: rv viewclass: 'Label' RecycleBoxLayout: default_size: None, dp(56) default_size_hint: 1, None size_hint_y: None height: self.minimum_height orientation: 'vertical' <MyLayout>: label_current: "type" BoxLayout: orientation: 'vertical' size: root.width, root.height Button: text: 'Ajouter/Supprimer/Modifier' size_hint: 1, .1 on_release: root.ajoutSuppMod() Spinner: size_hint: 1, .1 id: spinner_id text: "Type" values: ["ACCOMPAGNEMENT", "AROMATIQUE", "DESSERT", "PLAT", "VIANDE", "POISSON", "SAUCE", "SOUPE"] on_text: label_current = root.on_spinner_select(self.text) on_text: root.spinner_clicked(spinner_id.text) on_text: root.changeImage() Label: id: click_label text: "CONGELO Liste" font_size: 32 size_hint: 1, .2 MyImage: id: img source: root.cheminImage MyRecycleView: """ def freezer(type_aliment): dict_freez = {"accompagnement": [1, 2], "aromatique": [3, 4], "dessert": [1, 2], "soupe": [5, 2]], "plat": [1, 2], "poisson": [1, 2], "sauce": [1, 2], "viande": [1, 2], "divers": [1, 2]} return dict_freez[type_aliment] class MyImage(Image): pass class MyLayout(Widget): cheminImage = StringProperty('assets/32/type.png') value = StringProperty('viande') def on_spinner_select(self, text): return text def changeImage(self): img = 'assets/32/' + self.ids.click_label.text + ".png" self.cheminImage = img class MyViewClass(RecycleDataViewBehavior, BoxLayout): text = StringProperty("") index = None def refresh_view_attrs(self, rv, index, data): self.index = index return super(MyViewClass, self).refresh_view_attrs(rv, index, data) class MyRecycleView(RecycleView): rvtest = StringProperty('type') def __init__(self, **kwargs): super(MyRecycleView, self).__init__(**kwargs) self.data = [{'text': str(x)} for x in freezer(MY_DAMNED_VARIABLE!!)] class SpinApp(App): def build(self): Window.clearcolor = (0, 0, 0, 1) return MyLayout() if __name__ == '__main__': SpinApp().run() Thanks in advance ! The recyclerview displays the correct things (1 then 2) if I put directly a string (like "poisson") : self.data = [{'text': str(x)} for x in freezer("poisson")] But it displays noting whatever the variable I tried to insert to represent my spinner text.
0
1
1
Finally, I found the solution thanks to a better understanding of variable references. A YouTube video explains this very clearly: Data handling, widget referencing.
2023-02-25 11:55:03
6
python,python-3.x,networkx,attributeerror,pyvis
4
75,572,043
in pyvis I always get this error: "AttributeError: 'NoneType' object has no attribute 'render'"
75,565,224
false
4,707
I want to do a network visualisation using pyvis in the latest version and the python version 3.9.6: from pyvis.network import Network g = Network() g.add_node(0) g.add_node(1) g.add_edge(0, 1) g.show('test.html') every time I execute g.show() i get this error: Traceback (most recent call last): File "/Users/tom/Library/Mobile Documents/com~apple~CloudDocs/Projekte/Coding_/f1 standings/test2.py", line 3, in <module> g.show('nx.html') File "/Users/tom/Library/Python/3.9/lib/python/site-packages/pyvis/network.py", line 546, in show self.write_html(name, open_browser=False,notebook=True) File "/Users/tom/Library/Python/3.9/lib/python/site-packages/pyvis/network.py", line 515, in write_html self.html = self.generate_html(notebook=notebook) File "/Users/tom/Library/Python/3.9/lib/python/site-packages/pyvis/network.py", line 479, in generate_html self.html = template.render(height=height, AttributeError: 'NoneType' object has no attribute 'render' I tried updating pyvis, I changed all sorts of details in my code and I imported all of pyvis.network without any results.
1
5
2
You have probably installed version 0.3.2. I had the same issue today; downgrading to 0.3.1 fixed it for me.
2023-02-25 11:55:03
0
python,python-3.x,networkx,attributeerror,pyvis
4
75,565,349
in pyvis I always get this error: "AttributeError: 'NoneType' object has no attribute 'render'"
75,565,224
false
4,707
I want to do a network visualisation using pyvis in the latest version and the python version 3.9.6: from pyvis.network import Network g = Network() g.add_node(0) g.add_node(1) g.add_edge(0, 1) g.show('test.html') every time I execute g.show() i get this error: Traceback (most recent call last): File "/Users/tom/Library/Mobile Documents/com~apple~CloudDocs/Projekte/Coding_/f1 standings/test2.py", line 3, in <module> g.show('nx.html') File "/Users/tom/Library/Python/3.9/lib/python/site-packages/pyvis/network.py", line 546, in show self.write_html(name, open_browser=False,notebook=True) File "/Users/tom/Library/Python/3.9/lib/python/site-packages/pyvis/network.py", line 515, in write_html self.html = self.generate_html(notebook=notebook) File "/Users/tom/Library/Python/3.9/lib/python/site-packages/pyvis/network.py", line 479, in generate_html self.html = template.render(height=height, AttributeError: 'NoneType' object has no attribute 'render' I tried updating pyvis, I changed all sorts of details in my code and I imported all of pyvis.network without any results.
0
5
2
By default, self.template is None. You have to set this value using Network().set_template.
2023-02-25 16:27:01
1
python,python-asyncio,python-unittest
1
76,249,506
How to resolve error of Event loop is closed python unittest?
75,566,800
true
82
While implement Python unittest by subclassing IsolatedAsyncioTestCase, it only runs first test case successfully. For any subsequent test case it throws error that event loop is closed. This happens in both Windows and Mac. Could you please suggest how to make sure that event loop is running during the execution of test within each of the subsclasses of the IsolatedAsyncioTestCase that I have implemented.
1.2
1
1
I had the same problem when trying to run integration tests. The first test passed, but the second one got an "Event loop is closed" error. I'm using MongoDB with the async driver. The reason for this error was the way the database connection was opened. IsolatedAsyncioTestCase creates a new event loop at the start and closes it at the end of execution. So the driver connection was attached to the event loop of the first TestCase, and when the second TestCase started, it threw an error because the event loop of the first TestCase was already closed and no new connection had been created in the new event loop. The solution is to create a new database connection in each IsolatedAsyncioTestCase.
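A hedged sketch of that fix using Motor as the async MongoDB driver; the driver choice and connection string are assumptions — the point is opening the client in asyncSetUp so it binds to each test case's own event loop:

```python
import unittest
from motor.motor_asyncio import AsyncIOMotorClient  # assumed async MongoDB driver


class UserRepositoryTest(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        # A fresh client per test case, bound to this test case's event loop
        self.client = AsyncIOMotorClient("mongodb://localhost:27017")
        self.db = self.client["test_db"]

    async def asyncTearDown(self):
        self.client.close()

    async def test_insert_and_find(self):
        await self.db.users.insert_one({"name": "alice"})
        doc = await self.db.users.find_one({"name": "alice"})
        self.assertIsNotNone(doc)
```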
2023-02-26 03:17:32
2
python,function,variables,pylance
1
75,569,822
Variable is not accessed in Pylance
75,569,807
true
865
For some reason, I am receiving an error on VSCode that says the specific variable is not accessed Pylance and each variable says the same thing, Am I missing something simple? full code below: score = 0 def gen1_questions(): q1 = input("Question 1. What is the rarest M&M color? ") q2 = input( "Question 2. In a website browser address bar, what does “www” stand for? ") q3 = input( "Question 3. Which country consumes the most chocolate per capita? ") q4 = input("Question 4. Who was the very first American Idol winner? ") q5 = input( "Question 5. What is the tiny piece at the end of a shoelace called? ") q6 = input("Question 6. How many weeks are in a year? ") q7 = input("Question 7. Which animal can be seen on the Porsche logo? ") q8 = input("Question 8. Muhammad Ali was well-known in which sport? ") q9 = input("Question 9. What is the lowest army rank of a US soldier? ") q10 = input( "Question 10. What is often seen as the smallest unit of memory greater than a byte? ") def gen2_questions(): q1 = input( "Question 1. Which is one of two U.S. states does not observe Daylight Saving Time? ") q2 = input( "Question 2. Michael Jordan won how many NBA titles with the Chicago Bulls? ") q3 = input("Question 3. What color eyes do most humans have? ") q4 = input("Question 4. What is the hardest rock on earth? ") q5 = input("Question 5. What is the solar systems hottest planet? ") q6 = input("Question 6. What is the fastest-flying bird in the world? ") q7 = input( "Question 7. Who was the first woman to have four country albums reach No. 1 on the Billboard 200? ") q8 = input( "Question 8. What is illegal for a single lady to do in Florida solely on Sundays? ") q9 = input("Question 9. Which is the Worlds Largest Ocean? ") q10 = input( "Question 10. What type of exercise is best for getting the blood flowing? ")
1.2
1
1
If that's the only code you have then, yes, you set (for example) q1 to a value but never use it after that. Even if there is code outside those functions that uses a variable named q1, the q1 assigned inside the function still isn't being used. That's because assigning to a variable (like q1 = 42) within a function, without explicitly marking it global, creates a new local variable inside the function rather than rebinding one that already exists in a containing namespace.
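One small sketch of making the answers actually used: collect them and return them, so Pylance sees a real use and a caller can score them; the answer-key values here are placeholders:

```python
def gen1_questions():
    answers = {}
    answers["q1"] = input("Question 1. What is the rarest M&M color? ")
    answers["q2"] = input("Question 2. In a website browser address bar, what does 'www' stand for? ")
    # ... remaining questions ...
    return answers


def score_quiz(answers, answer_key):
    # answer_key is a placeholder dict, e.g. {"q1": "brown", "q2": "world wide web"}
    return sum(1 for q, a in answers.items()
               if a.strip().lower() == answer_key.get(q, "").lower())
```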
2023-02-26 12:50:11
1
python,numpy
1
75,572,114
Functions in Python's math module and equivalents in NumPy library: What are the fundamental differences?
75,572,088
false
51
What are the fundamental differences between the functions in Python's math module and their equivalents in the NumPy library?
0.197375
1
1
numpy works with vectors (or scalars, or matrices, or arbitrary n-dimensional arrays), math works with scalars only.
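A quick illustration of that difference:

```python
import math
import numpy as np

print(math.sqrt(4.0))              # 2.0 -- operates on a single Python number
print(np.sqrt([1.0, 4.0, 9.0]))    # [1. 2. 3.] -- applied element-wise to a whole array

try:
    math.sqrt([1.0, 4.0, 9.0])     # math refuses non-scalar input
except TypeError as exc:
    print(exc)
```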
2023-02-27 02:52:59
0
python,google-colaboratory
5
75,680,260
ImportError: cannot import name 'dtreeviz' from 'dtreeviz.trees' (/usr/local/lib/python3.8/dist-packages/dtreeviz/trees.py)
75,576,403
false
1,746
When I try to run this vizulization on google colab I am getting this error, ImportError: cannot import name 'dtreeviz' from 'dtreeviz.trees' (/usr/local/lib/python3.8/dist-packages/dtreeviz/trees.py) from sklearn.datasets import load_wine from sklearn.ensemble import RandomForestClassifier from dtreeviz.trees import dtreeviz rf = RandomForestClassifier(n_estimators=100, max_depth=3, max_features='auto', min_samples_leaf=4, bootstrap=True, n_jobs=-1, random_state=0) rf.fit(X, y) viz = dtreeviz(rf.estimators_[99], X, y, target_name="SizeClass", feature_names=X_train.columns, class_names=list(y_train.feature_names), title="100th decision tree") viz.save("decision_tree.svg") from google.colab import files files.download("decision_treef.svg") I tried pip installing but it says that the requirments are already met
0
1
2
Try installing an old version: pip install dtreeviz==1.4.0
2023-02-27 02:52:59
0
python,google-colaboratory
5
75,965,425
ImportError: cannot import name 'dtreeviz' from 'dtreeviz.trees' (/usr/local/lib/python3.8/dist-packages/dtreeviz/trees.py)
75,576,403
false
1,746
When I try to run this vizulization on google colab I am getting this error, ImportError: cannot import name 'dtreeviz' from 'dtreeviz.trees' (/usr/local/lib/python3.8/dist-packages/dtreeviz/trees.py) from sklearn.datasets import load_wine from sklearn.ensemble import RandomForestClassifier from dtreeviz.trees import dtreeviz rf = RandomForestClassifier(n_estimators=100, max_depth=3, max_features='auto', min_samples_leaf=4, bootstrap=True, n_jobs=-1, random_state=0) rf.fit(X, y) viz = dtreeviz(rf.estimators_[99], X, y, target_name="SizeClass", feature_names=X_train.columns, class_names=list(y_train.feature_names), title="100th decision tree") viz.save("decision_tree.svg") from google.colab import files files.download("decision_treef.svg") I tried pip installing but it says that the requirments are already met
0
1
2
I had the same problem. After trying both solutions, I can confirm that installing the older version dtreeviz==1.4.0 fixed the problem.
2023-02-27 04:36:26
3
python-poetry
1
75,578,333
What do I do when I change poetry pyproject.toml?
75,576,816
false
879
I have a pyproject.toml and I already did poetry init (obviously) and poetry install. If I change the toml file by hand, what exactly do I have to do? On the one hand I think I have to synchronize the poetry.lock file but do I erase it and do install again? I have conflicting ideas on how to proceed after an edit of the toml file. Also, is the procedure the same if I do a poetry add instead of editing the toml manually?
0.53705
2
1
Whenever you change Poetry-related stuff in your pyproject.toml, run poetry lock --no-update afterwards to sync the poetry.lock file with those changes. The --no-update flag tries to preserve existing versions of dependencies. Once the lock file is updated, run poetry install to sync your venv with the locked dependencies. Wherever possible you should prefer using Poetry's CLI instead of manually editing the pyproject.toml. Poetry will take care of the steps described above for you, so if you run poetry add <somedep>, Poetry will add the entry to your pyproject.toml, update the poetry.lock, and install the necessary dependencies.
2023-02-27 07:31:51
0
python,tensorflow,keras,deep-learning,conv-neural-network
1
75,578,267
Placement of Flatten layer in deep learning model
75,577,762
false
50
I make a deep learning model for classification. The model consist of 4 Conv2d layer, 1 pooling layer, 2 dense layer and 1 flatten layer. When i do this arrangement of layers: Conv2D, Conv2D, Conv2D, Conv2D, pooling, dense, flatten, dense then my results are good. But when i follow this arrangement: Conv2D, Conv2D, Conv2D, Conv2D, pooling, flatten, dense, dense then the classification results are not good. My question is putting flatten layer between two dense layer is correct or not? Can I follow the pattern of layer by which i am getting good classification results?
0
1
1
Typically, it is not recommended to sandwich a Flatten layer between Dense layers; as suggested by Corralien, it doesn't provide any value. Your other architecture, Conv2D, Conv2D, Conv2D, Conv2D, pooling, flatten, dense, dense, is the more standard one. If your model is providing you with good results you might want to keep it, but technically you do not need the Flatten layer between the two Dense layers, so you can consider using Conv2D, Conv2D, Conv2D, Conv2D, pooling, dense, dense. A better alternative would be to experiment with your architecture, for example adding another pooling layer between the four Conv2D layers, like Conv2D, Conv2D, Pooling, Conv2D, Conv2D, Pooling, flatten, dense, dense, and then adjusting your hyperparameters.
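A hedged Keras sketch of the conv-pool-flatten-dense ordering discussed above; layer sizes, input shape, and the 10-class output are placeholders:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),                      # flatten once, right before the dense head
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()
```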
2023-02-27 08:28:49
1
python,keras,lstm
3
75,578,977
What is the input dimension for a LSTM in Keras?
75,578,232
false
80
I'm trying to use deeplearning with LSTM in keras . I use a number of signal as input (nb_sig) that may vary during the training with a fixed number of samples (nb_sample) I would like to make parameter identification, so my output layer is the size of my parameter number (nb_param) so I created my training set of size (nb_sig x nb_sample) and the label (nb_param x nb_sample) my issue is I cannot find the correct dimension for the deep learning model. I tried this : import numpy as np from keras.models import Sequential from keras.layers import Dense, LSTM nb_sample = 500 nb_sig = 100 # number that may change during the training nb_param = 10 train = np.random.rand(nb_sig,nb_sample) label = np.random.rand(nb_sig,nb_param) print(train.shape,label.shape) DLmodel = Sequential() DLmodel.add(LSTM(units=nb_sample, return_sequences=True, input_shape =(None,nb_sample), activation='tanh')) DLmodel.add(Dense(nb_param, activation="linear", kernel_initializer="uniform")) DLmodel.compile(loss='mean_squared_error', optimizer='RMSprop', metrics=['accuracy', 'mse'], run_eagerly=True) print(DLmodel.summary()) DLmodel.fit(train, label, epochs=10, batch_size=nb_sig) but I get this error message: Traceback (most recent call last): File "C:\Users\maxime\Desktop\SESAME\PycharmProjects\LargeScale_2022_09_07\di3.py", line 22, in <module> DLmodel.fit(train, label, epochs=10, batch_size=nb_sig) File "C:\Python310\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None File "C:\Python310\lib\site-packages\keras\engine\input_spec.py", line 232, in assert_input_compatibility raise ValueError( ValueError: Exception encountered when calling layer "sequential" " f"(type Sequential). Input 0 of layer "lstm" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (100, 500) Call arguments received by layer "sequential" " f"(type Sequential): • inputs=tf.Tensor(shape=(100, 500), dtype=float32) • training=True • mask=None I don't understand what I'm suppose to put as input_shape for the LSTM layer and as the number of signals I use during the training will changed, this is not so clear to me.
0.066568
1
1
The input to the LSTM should be 3D, with the first dimension being the sample size (in your case 500). Assuming the input has shape (500, x, y), input_shape should be (x, y).
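A hedged sketch of reshaping the question's 2D array into the 3D form Keras expects, treating each signal as nb_sample timesteps of one feature — one of several reasonable layouts, so pick the one matching what your data means:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

nb_sig, nb_sample, nb_param = 100, 500, 10
train = np.random.rand(nb_sig, nb_sample)
label = np.random.rand(nb_sig, nb_param)

# (batch, timesteps, features): each of the 100 signals becomes 500 timesteps of 1 feature
train_3d = train.reshape(nb_sig, nb_sample, 1)

model = Sequential([
    LSTM(64, input_shape=(nb_sample, 1)),   # no return_sequences, so the Dense head sees 2D
    Dense(nb_param, activation="linear"),
])
model.compile(loss="mean_squared_error", optimizer="RMSprop")
model.fit(train_3d, label, epochs=2, batch_size=16)
```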
2023-02-27 12:56:30
2
python,deployment,prefect
1
75,583,162
Run flow on prefect cloud without running local agent locally?
75,580,780
true
378
I'm trying to deploy my flow but I'don't know what I should do to completely deploy it (serverless). I'm using the free tier of Prefect Cloud and I have create a storage and process block. The step I have done : Build deployment $ prefect deployment build -n reporting_ff_dev-deployment flow.py:my_flow Apply configuration $ prefect deployment apply <file.yaml> Create block from prefect.filesystems import LocalFileSystem from prefect.infrastructure import Process #STORAGE my_storage_block = LocalFileSystem( basepath='~/ff_dev' ) my_storage_block.save( name='ff-dev-storage-block', overwrite=True) #INFRA my_process_infra = Process( working_dir='~/_ff_dev_work', ) my_process_infra.save( name='ff-dev-process-infra', overwrite=True) deploy block $ prefect deployment build -n <name> -sb <storage_name> -ib <infra_name> <entry_point.yml> -a I know that prefect cloud is a control system rather than a storage medium but as I understand, a store block -> store the code and process code -> run the code. What is the next step to run the flow without local agent ?
1.2
1
1
Where are you looking for the code to be executed from? A deployment just describes how and where. With a deployment registered, you can execute the following to spawn a flow run: prefect deployment run /my_flow
2023-02-27 13:07:53
2
python,opencv,conda,undefined-symbol,libffi
1
75,950,428
Open CV ImportError: /lib/x86_64-linux-gnu/libwayland-client.so.0: undefined symbol: ffi_type_uint32, version LIBFFI_BASE_7.0
75,580,886
false
828
I have installed OpenCV and when trying to import cv2 in python, I get the following error. The import was working fine until I installed/un-installed and re-installed tensor flow. OpenCV has been installed in a conda environment using cmake. Any idea how to fix this? Python 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import cv2 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/deleeps/anaconda3/envs/zscore/lib/python3.10/site-packages/cv2/__init__.py", line 102, in <module> bootstrap() File "/home/deleeps/anaconda3/envs/zscore/lib/python3.10/site-packages/cv2/__init__.py", line 90, in bootstrap import cv2 ImportError: /lib/x86_64-linux-gnu/libwayland-client.so.0: undefined symbol: ffi_type_uint32, version LIBFFI_BASE_7.0 >>> $ ldconfig -p | grep libwayland-client libwayland-client.so.0 (libc6,x86-64) => /lib/x86_64-linux-gnu/libwayland-client.so.0 libwayland-client.so (libc6,x86-64) => /lib/x86_64-linux-gnu/libwayland-client.so libwayland-client++.so.0 (libc6,x86-64) => /lib/x86_64-linux-gnu/libwayland-client++.so.0
0.379949
2
1
I got around a similar issue by doing: export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libffi.so.7
2023-02-27 15:09:26
-1
python,datetime,floating-point,timestamp
3
75,598,779
Why does timestamp() show an extra microsecond compared with subtracting 1970-01-01?
75,582,190
false
92
The following differ by 1 microsecond : In [37]: datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc) - datetime(1970,1,1, tzinfo=dt.timezone.utc) Out[37]: datetime.timedelta(days=198841, seconds=6784, microseconds=986754) In [38]: datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc).timestamp() Out[38]: 17179869184.986755 The number of microseconds in 986754 in the first case, and 986755 in the second. Is this just Python floating point arithmetic error, or is there something else I'm missing?
-0.066568
1
1
It's an error of 1.37109375 µs due to conversion to a 64-bit float: print("%100.100f" % 17179869184.986754) shows that the nearest representable double is 17179869184.9867553710937500... (i.e. 17179869184.98675537109375), which Python displays at default precision as 17179869184.986755.
2023-02-27 17:51:28
0
python,api,google-analytics,google-analytics-api,google-analytics-4
1
75,584,194
TotalPurchasers metric in Google Analytics 4 giving wrong results
75,583,921
false
150
We have custom dimension define in Google Analytics Data API v1Beta for extracting data from Google Analytics GA4 account. I am trying to fetch purchaseRevenue and totalPurchasers metric with respect to date, sessionSource, Sessionmedium, campaignName, pagePath and eventName using python. I want to know what is the Purchasers for different eventName in different campaignName. I am getting correct purchaseRevenue but more Purchasers when i try to validate totalPurchasers data in ga4. Here is the code import pandas as pd import numpy as np from google.analytics.data_v1beta import BetaAnalyticsDataClient from google.analytics.data_v1beta.types import DateRange from google.analytics.data_v1beta.types import Dimension from google.analytics.data_v1beta.types import Metric from google.analytics.data_v1beta.types import RunReportRequest client = BetaAnalyticsDataClient() ## Format Report - run_report method def format_report(request): response = client.run_report(request) # Row index row_index_names = [header.name for header in response.dimension_headers] row_header = [] for i in range(len(row_index_names)): row_header.append([row.dimension_values[i].value for row in response.rows]) row_index_named = pd.MultiIndex.from_arrays(np.array(row_header), names = np.array(row_index_names)) # Row flat data metric_names = [header.name for header in response.metric_headers] data_values = [] for i in range(len(metric_names)): data_values.append([row.metric_values[i].value for row in response.rows]) output = pd.DataFrame(data = np.transpose(np.array(data_values, dtype = 'f')), index = row_index_named, columns = metric_names) return output request = RunReportRequest( property='properties/'+property_id, dimensions=[ Dimension(name="date"), Dimension(name="sessionSource"), Dimension(name="medium"), Dimension(name="campaignName"), Dimension(name="pagePath"), Dimension(name="eventName"), ], metrics=[ Metric(name="purchaseRevenue"), Metric(name="totalPurchasers") ], date_ranges=[DateRange(start_date="2023-02-01", end_date="2023-02-07")], ) Here is the data it is showing me through api df.totalPurchasers.sum() ``` 213.0 ```python df.purchaseRevenue.sum() ``` 13710.0 but in ga4 it is showing 191 purchasers but revenue is correct. [GA4](https://i.stack.imgur.com/SMBFY.png)
0
1
1
You are using the right metrics. Try increasing the row limit; by default it is 10K. I think after increasing the limit you will get the complete data, and the values will then match.
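A hedged sketch of that change: RunReportRequest in the GA4 Data API accepts a limit field, so raising it above the default should return more rows; the value and the trimmed dimension list below are just examples:

```python
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest
)

property_id = "123456789"   # placeholder

request = RunReportRequest(
    property="properties/" + property_id,
    dimensions=[Dimension(name="date"), Dimension(name="eventName")],
    metrics=[Metric(name="purchaseRevenue"), Metric(name="totalPurchasers")],
    date_ranges=[DateRange(start_date="2023-02-01", end_date="2023-02-07")],
    limit=100000,           # raise the row limit above the 10K default
)
```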
2023-02-27 17:53:42
4
python,iterator,python-3.8
1
75,583,980
Python skips __next__ directive and returns a generator object
75,583,940
false
25
I'm trying to implement an iterable class, which I have done several times before, but I'm experiencing some unexpected behavior this time around, and I can't figure out why. My class contains the usual __iter__(self) method that returns self, and __next__(self) method that yields results, however, when I attempt to do the following: with VSIFile(params) as vsi: for roi in vsi: print(roi) The roi is in fact a generator object instead of a yielded result. After going into debug, I found that __next__ never triggers, only __iter__. I tested making an iterator with a simple number counting class and that one works well. I expect roi to be a numpy array. Here's the full code: vsi_file.py from typing import Tuple import javabridge import bioformats from tqdm import tqdm from cv2 import resize javabridge.start_vm(class_path=bioformats.JARS) class VSIFile: def __init__(self, vsi_file: str, roi_size: Tuple[int, int] = (1024, 1024), target_size: Tuple[int, int] = (256, 256), use_pbar: bool = True): self.file_path = vsi_file self.roi_size = roi_size self.target_size = target_size self.slide = None self.shape = None self.max_x_idx = None self.max_y_idx = None self.num_rois = None self.skip = [1, 2, 5, 11, 22, 45, 72] if use_pbar: self.pbar = tqdm() else: self.pbar = None def __enter__(self): self.slide = bioformats.ImageReader(self.file_path) self.shape = self.slide.rdr.getSizeY(), self.slide.rdr.getSizeX(), 3 self.max_x_idx = self.shape[1] // self.roi_size[1] self.max_y_idx = self.shape[0] // self.roi_size[0] self.num_rois = self.max_x_idx * self.max_y_idx if self.pbar is not None: self.pbar.total = self.num_rois self.pbar.refresh() return self def __exit__(self, exc_type, exc_val, exc_tb): self.slide.close() if self.pbar is not None: self.pbar.close() def __del__(self): self.slide.close() if self.pbar is not None: self.pbar.close() def __iter__(self): return self def __next__(self): while self.idx in self.skip: if self.idx == self.max_x_idx * self.max_y_idx: if self.pbar is not None: self.pbar.close() raise StopIteration self.idx += 1 if self.pbar is not None: self.pbar.update(1) if self.idx == self.max_x_idx * self.max_y_idx: if self.pbar is not None: self.pbar.close() raise StopIteration y = (self.idx // self.max_x_idx) * self.roi_size[0] x = (self.idx % self.max_x_idx) * self.roi_size[1] roi = self.get_roi(x, y, self.roi_size[0], self.roi_size[1]) roi = resize(roi, self.target_size) if self.target_size else roi yield roi self.idx += 1 if self.pbar is not None: self.pbar.update(1) process_vsi.py (relevant portion) with VSIFile(os.path.abspath(os.path.join(data_dir, file))) as vsi: for roi in vsi: print(roi) This prints <generator object VSIFile.__next__ at 0x000001AA3C4AFC80>.
0.664037
1
1
You're mixing two paradigms of how you could implement an iterator. When you implement __next__, it is supposed to return its elements, not yield them. If you want to use yield, do that directly inside __iter__.
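A minimal sketch of both paradigms, stripped of the file-handling details from the question:

```python
class ReturningIterator:
    """__iter__ returns self; __next__ *returns* one item per call."""
    def __init__(self, items):
        self.items = list(items)
        self.idx = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.idx >= len(self.items):
            raise StopIteration
        value = self.items[self.idx]
        self.idx += 1
        return value


class YieldingIterable:
    """__iter__ is a generator function, so it may use yield; no __next__ needed."""
    def __init__(self, items):
        self.items = list(items)

    def __iter__(self):
        for value in self.items:
            yield value


for obj in (ReturningIterator([1, 2, 3]), YieldingIterable([1, 2, 3])):
    print(list(obj))   # [1, 2, 3] both times
```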
2023-02-28 05:47:23
1
python,numpy,julia,eigenvalue
1
75,588,281
Unable to determine why eigenvalue results are different between Julia and Python for specific case
75,588,271
false
87
I'm using Julia to do some linear algebra calculations but it gave me negative eigenvalues when I know the matrix is positive definite. I'm fairly new to Julia so is there some reason the Julia code below would have such different behavior than the corresponding python code? Could it be the abs function? At this point I'm at a loss. The Julia code is: using LinearAlgebra time = collect(range(0.0, 10.0, length=400)) H = 0.8 N = length(time) C_N = Matrix{Float32}(undef,N,N) for i in 1:N for j in 1:N ti,tj = time[i], time[j] C_N[i,j] = 0.5*(ti^(2*H)+tj^(2*H) - abs(ti-tj)^(2*H)) end end Decomposition = eigen(C_N) eigen_vals = Decomposition.values has_negative = any(x -> x < 0.0, eigen_vals) if has_negative @show "Has Negative eigenvalue" else @show "Only positive eigenvalues" end has_negative The output is: "Has Negative eigenvalue" = "Has Negative eigenvalue" The corresponding python code is: import numpy as np H = 0.8 N =400 time = np.linspace(0.0,10.0,num=N) C_N = np.zeros((N,N)) print(time.shape) for i in range(N): for j in range(N): ti,tj = time[i],time[j] C_N[i,j] = 0.5*(ti**(2*H)+tj**(2*H) - np.abs(ti-tj)**(2*H)) w, V = np.linalg.eig(C_N) neg_mask = w < 0.0 if np.any(neg_mask): print("Negative eigenvalue found") else: print("Only positive eigenvalues") which outputs: "Only positive eigenvalues" For reference I am using Julia v"1.8.2".
0.197375
1
1
It was the Float32 which led to numerical errors. Changing to Float64 fixed it. I'll keep the question up for posterity.
2023-02-28 07:35:18
0
python,django,django-rest-framework,celery
1
75,589,329
pre_save django model update fith celery shared_task
75,589,052
false
46
I have Project model class Project(models.Model): id = models.UUIDField(primary_key=True, unique=True, default=uuid4, editable=False) logo = models.ImageField(validators=[validate_image_size], blank=True, null=True, default=None) name = models.CharField(max_length=64) description = models.TextField() @transaction.atomic def save(self, *args, **kwargs): super().save(*args, **kwargs) def __str__(self): return self.name And i want to compress logo field with reciever @receiver(post_save, sender=Project) def compress_project_logo(sender, instance, **kwargs): compress_image.apply_async((instance.id,)) with shared_task @shared_task def compress_image(project_id): from api.models import Project project = get_object_or_404(Project, id=project_id) compressed_image = Image.open(project.logo) compressed_image = compressed_image.convert("RGB") compressed_image = ImageOps.exif_transpose(compressed_image) image_io = BytesIO() compressed_image.save(image_io, "JPEG", quality=70) project.logo = InMemoryUploadedFile(image_io, "ImageField", project.logo.file.name, "image/jpeg", sys.getsizeof(image_io), None) project.save() And when i'm saving Project model through django admin i take this Traceback (most recent call last): 2023-02-28 10:27:45 File "/usr/local/lib/python3.8/site-packages/celery/app/trace.py", line 450, in trace_task R = retval = fun(\*args, \*\*kwargs) File "/usr/local/lib/python3.8/site-packages/celery/app/trace.py", line 731, in __protected_call__ return self.run(\*args, \*\*kwargs) File "/code/api/tasks/compress.py", line 14, in compress_image project = get_object_or_404(Project, id=project_id) File "/usr/local/lib/python3.8/site-packages/rest_framework/generics.py", line 19, in get_object_or_404 return \_get_object_or_404(queryset, \*filter_args, \*\*filter_kwargs) File "/usr/local/lib/python3.8/site-packages/django/shortcuts.py", line 78, in get_object_or_404 raise Http404('No %s matches the given query.' % queryset.model.\_meta.object_name) django.http.response.Http404: No Project matches the given query.
0
1
1
I think you should remove the transaction.atomic decorator from the save method: the post_save signal fires inside the transaction, so the Celery worker can query for the Project before the row is committed, which is why get_object_or_404 fails.
2023-02-28 09:17:20
0
python,7zip,py7zr
1
75,723,598
Open 7zip with no crc in python
75,590,010
true
77
I want to open an 7zip file in python and used the py7zr library but getting following error: CrcError: (3945015320, 1928216475, '1_Microsoft Outlook - Memoformat (3).tif') I tried the following code: archive= py7zr.SevenZipFile('path', mode='r',password="mypw") archive.reset() archive.extractall() archive.close() I checked with archive.test() and received None - In my understanding the crc value is missing.
1.2
1
1
The password from my client was incorrect - thank you for the help!
2023-02-28 11:18:33
0
python
1
75,591,859
Overwriting a method, during runtime, to raise an exception, on first pass only
75,591,314
false
28
I want to rewrite, during runtime, the print method of the class below so that the first pass raises an exception, but the second time it runs normally. I cannot change the code of the Test class. Assume it's in a file I don't have access to, and I cannot mock (nor use a deepcopy, since in reality I have socket objects which are not pickable). class Test: def __init__(self,a,b): self.a = a self.b = b def print(self): print(f"a-> {self.a}, and b-> {self.b}") I tried the following (doesn't work, with infinite recursion): test_class = Test(1,2) test_class.num_raise = 0 copy_test = test_class def raise_exception(): print("Inside save_clusters") if test_class.num_raise ==0: test_class.num_raise +=1 raise Exception("save_clusters method exception") elif test_class.num_raise ==1: return copy_test.print() test_class.print = raise_exception try: test_class.print() # should raise an exception except: test_class.print() # should print as normal
0
2
1
How about making a wrapper around Test? You can just subclass it and override the print method with code from your raise_exception function, replacing copy_test.print() with super().print() and adding num_raise as a parameter.
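A rough sketch of that idea, assuming the Test class from the question; the wrapper name and the exception text are illustrative:

class TestWrapper(Test):
    def __init__(self, a, b):
        super().__init__(a, b)
        self.num_raise = 0                 # track whether we have already raised once

    def print(self):
        if self.num_raise == 0:
            self.num_raise += 1
            raise Exception("save_clusters method exception")
        return super().print()             # second and later calls behave normally

test_class = TestWrapper(1, 2)
try:
    test_class.print()                     # first call raises
except Exception:
    test_class.print()                     # prints: a-> 1, and b-> 2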
2023-02-28 11:21:01
0
python,python-asyncio,aiohttp
2
75,591,681
Can I download a large file in the background using aiohttp?
75,591,339
false
293
I'd like to download a series of large (~200MB) files, and use the time while they're downloading to do some CPU intensive processing. I'm investigating asyncio and aiohttp. My understanding is I can use them to start a large download and then do some heavy computation on the same thread while the download continues in the background. What I am finding, however, is that the download is paused while the heavy CPU process continues, then resumes as soon as the calculation is done. I include a minimal example below. I visually monitor the process CPU and bandwidth while the script is running. It's clear the download pauses during the ~30s of computation. Am I doing something wrong? Or am I not understanding what aiohttp can do? import asyncio import time import aiofiles import aiohttp async def download(session): url = 'https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-s390x.sh' # 280 MB file async with session.get(url) as resp: async with aiofiles.open('./tmpfile', mode='wb') as f: print('Starting the download') data = await resp.read() print('Starting the file write') await f.write(data) print('Download completed') async def heavy_cpu_load(): await asyncio.sleep(5) # make sure the download has started print('Starting the computation') for i in range(200000000): # takes about 30 seconds on my laptop. i ** 0.5 print('Finished the computation') async def main(): async with aiohttp.ClientSession() as session: timer = time.time() tasks = [download(session), heavy_cpu_load()] await asyncio.gather(*tasks) print(f'All tasks completed in {time.time() - timer}s') if __name__ == '__main__': asyncio.run(main())
0
1
1
I think what happens is that both coroutines run on the same thread and the same event loop. The CPU-heavy for loop never awaits anything, so once it starts it blocks the event loop, and the download coroutine (including the resp.read() call) cannot make any progress until the computation finishes. If you put await asyncio.sleep(0) after i ** 0.5 it will work: the await hands control back to the event loop on every iteration, so anything else that is ready to run (like the download) gets a chance to proceed. Awaiting asyncio.sleep(0) is a common way to deliberately give up control inside a long loop.
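A sketch of that change to the heavy_cpu_load coroutine from the question:

async def heavy_cpu_load():
    await asyncio.sleep(5)          # make sure the download has started
    print('Starting the computation')
    for i in range(200000000):
        i ** 0.5
        await asyncio.sleep(0)      # yield control so the download coroutine can make progress
    print('Finished the computation')

Note that awaiting on every single iteration adds noticeable overhead; in practice you might only yield every N iterations (e.g. if i % 100000 == 0: await asyncio.sleep(0)), or move the computation to a thread or process executor so it never blocks the event loop at all.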
2023-02-28 13:30:43
0
python,image-processing,resolution,image-resizing,gaussianblur
1
75,593,965
Calculating Gaussian Kernel sigma and width to approximate a desired lower resolution pixel/m for satellite images
75,592,762
false
212
I am working with satellite images with different spatial resolutions, understood as pixel/meter. For experiments I want to artificially down-sample these images, keeping the image size constant. For example I have a 512x512 image with spatial resolution 0.3m/pixel. I want to downsample it to 0.5m/pixel 512x512. I got advised to apply a Gaussian kernel to blur the image. But how do I calculate the standard deviation and kernel size of a Gaussian kernel to approximate the desired lower resolution? I can't find a rigorous method to do that calculation. Any help really much appreciated! ChatGTP says that the formula is: sigma = (desired_resolution / current_resolution) / (2 * sqrt(2 * log(2))) and kernel_size = 2 * ceil(2 * sigma) + 1 But can't explain why. Can someone explain how standard deviation (sigma) and desired output resolution are connected? And how do I know which sigma to use? Oftentimes these existing resizing functions ask for a sigma, but in their documentation don't explain how to derive it.
0
1
1
I wonder where that equation for the sigma comes from; I have never seen it. It is hard to define a cutoff frequency for the Gaussian. The Gaussian filter is quite compact in both the spatial domain and the frequency domain, and therefore is an extremely good low-pass filter. But it has no clear point at which it attenuates all higher frequencies sufficiently to no longer produce visible aliasing artifacts, without also attenuating lower frequencies so much that the downsampled image looks blurry. Of course we can follow the tradition from the field of electronics, and define the cutoff frequency as the frequency above which the signal gets attenuated by at least 3 dB. I think this definition might have led to the equation in the OP, though I don't feel like attempting to replicate that computation. From personal experience, I find 0.5 times the subsampling factor to be a good compromise for regular images. For example, to downsample by a factor of 2, I'd apply a Gaussian filter with sigma 1.0 first. For OP's example of going from 0.3 to 0.5 m per pixel, the downsampling factor is 0.5/0.3 = 1.667, and half of that is 0.833. Note that a Gaussian kernel with a sigma below 0.8 cannot be sampled properly without excessive aliasing; applying a Gaussian filter with a smaller sigma should be done through multiplication in the frequency domain. Finally, the kernel size. The Gaussian is infinite in size, but it becomes nearly zero very quickly, and we can truncate it without too much loss. The calculation 2 * ceil(2 * sigma) + 1 takes the central portion of the Gaussian of at least four sigma, two sigma to either side. The ceiling operation is the "at least"; the size needs to be an integer, of course. The +1 accounts for the central pixel. This equation always produces an odd-size kernel, so it can be symmetric around the origin. However, two sigma is quite small for a Gaussian filter: it cuts off too much of the bell shape, affecting some of the good qualities of the filter. I always recommend using three sigma to either side: 2 * ceil(3 * sigma) + 1. For some applications the difference might not matter, but if your goal is to quantify, I would certainly try to avoid any sources of error.
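A small sketch of those rules of thumb (sigma of 0.5 times the subsampling factor, kernel truncated at three sigma to either side), assuming scipy is available; the random image stands in for the real satellite tile:

import math
import numpy as np
from scipy.ndimage import gaussian_filter

current_res = 0.3    # m per pixel, as in the question
desired_res = 0.5    # m per pixel

factor = desired_res / current_res             # ~1.667 downsampling factor
sigma = 0.5 * factor                           # ~0.833, the rule of thumb above
kernel_size = 2 * math.ceil(3 * sigma) + 1     # three sigma to either side -> 7 here

image = np.random.rand(512, 512)               # stand-in for the 512x512 satellite image
blurred = gaussian_filter(image, sigma=sigma)  # blur before resampling to the coarser grid
print(sigma, kernel_size, blurred.shape)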
2023-02-28 20:16:05
1
python,nlp,doc2vec
2
75,598,110
How can I iterate through a doc2vec model?
75,596,881
false
56
I have built a Doc2Vec model and am trying to get the vectors of all my testing set (176 points). The code below I can only see one vector at a time. I want to be able to do "clean_corpus[404:]" to get the entire data set but when I try that it still outputs one vector. model.save("d2v.model") print("Model Saved") from gensim.models.doc2vec import Doc2Vec model= Doc2Vec.load("d2v.model") #to find the vector of a document which is not in training data test_data = clean_corpus[404] v1 = model.infer_vector(test_data) print("V1_infer", v1) Is there a way to easily iterate over the model to get and save all 176 vectors?
0.099668
1
1
Because .infer_vector() takes a single text (list-of-words), you would want to call it multiple times in a loop if you need to infer many separate vectors for many different documents. Another option would be to include all the documents of interest in the Doc2Vec model training data, including your test set. Then, you can simply request the learned-during-training vectors for any document, by the unique tag you supplied during training. Whether this is an acceptable practice depends on other unstated aspects of your project goals. Doc2Vec is an unsupervised algorithm, so in some cases it can be appropriate to use all available text to improve its training. (It doesn't necessarily cause the same problems as contaminating the training of a supervised classifier with the same already-labeled examples you'll be testing it against.)
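A minimal sketch of the looping approach, assuming clean_corpus holds tokenized documents (lists of words) and model is the loaded Doc2Vec model from the question:

import numpy as np

test_docs = clean_corpus[404:]                              # the 176 held-out documents
test_vectors = np.vstack([model.infer_vector(doc) for doc in test_docs])
print(test_vectors.shape)                                   # (176, vector_size)
np.save("test_vectors.npy", test_vectors)                   # save all inferred vectors at once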
2023-02-28 20:23:46
2
python
1
75,597,039
Bouncing screensaver (ala DVD screensaver) sticking to edges of screen
75,596,943
true
136
x = randint(50, width-60) y = randint(50, height-60) while True: x_speed = 1 y_speed = 1 if (x + 74 >= width) or (x <= 0): x_speed *= -1 if (y + 38 >= height) or (y <= 0): y_speed *= -1 x += x_speed y += y_speed This code is meant to make the physics of a 'DVD screensaver'. The 74/38 numbers are the dimensions of the screensaver picture. The screensaver should bounce around the screen, colliding with the walls. However, when I ran the program, the picture I used stuck to the wall, oscillating back and forth one pixel as it moved across the wall, eventually stopping in a corner, where it vibrated similarly. It seems as though it keeps flipping the x_speed / y_speed variables from + to - repeatedly, which keeps it on the border, which makes it keep flipping. It should just flip once, and then bounce away. Here are the things I tried: Changing the dimensions of the screen: It just makes it stick to a point further away from the edge of the screen Adding a cooldown for when it can flip the speed variables (setting a variable to a number then incrementing it back to 0 each tick before it can re-flip): It goes past the edge of the screen, jittering back a pixel each time the cooldown ends. Changing the dimensions of the picture (not visually, but in the physics): This only made it stick slightly past the edge of the screen. I cannot think of anything else to try. I have looked for alternative code, but none of it works with my program/it isn't in Python. Can anyone see any problems with the code?
1.2
1
1
Because they are assigned inside the while loop, the variables x_speed and y_speed are reset to 1 on every iteration, so a bounce's sign flip is immediately undone on the next pass. Instead, initialize these variables once, outside the while loop, so the flipped speed persists between iterations.
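A sketch of that fix, reusing the randint/width/height setup from the question:

x = randint(50, width - 60)
y = randint(50, height - 60)
x_speed = 1                          # initialize once, before the loop
y_speed = 1

while True:
    if (x + 74 >= width) or (x <= 0):
        x_speed *= -1                # the flipped sign now persists across iterations
    if (y + 38 >= height) or (y <= 0):
        y_speed *= -1
    x += x_speed
    y += y_speed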
2023-03-01 07:12:47
0
python,android,windows,kivy
1
75,600,592
How to install Kivy with dependencies in Windows 10 using pip?
75,600,538
false
83
error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [10 lines of output] Collecting setuptools Using cached setuptools-67.4.0-py3-none-any.whl (1.1 MB) Collecting wheel Using cached wheel-0.38.4-py3-none-any.whl (36 kB) Collecting cython!=0.27,!=0.27.2,<=0.29.28,>=0.24 Using cached Cython-0.29.28-py2.py3-none-any.whl (983 kB) Collecting kivy_deps.gstreamer_dev~=0.3.3 Using cached kivy_deps.gstreamer_dev-0.3.3-cp311-cp311-win_amd64.whl (3.9 MB) ERROR: Could not find a version that satisfies the requirement kivy_deps.sdl2_dev~=0.4.5 (from versions: 0.5.1) ERROR: No matching distribution found for kivy_deps.sdl2_dev~=0.4.5 [end of output] I was trying to install kivy with "pip install kivy[full]" but isted of installing kivy suprocess error occored then I tryed installing subprocess with "pip install subprocess.run" it was installed sucessfully but again the same error is occruing
0
1
1
To fix this error, try to install Kivy using the pre-built wheel from the Kivy website. Download the wheel and install it using the pip command: pip install <path-to-wheel-file>
2023-03-01 11:17:33
2
python,pandas,apache-spark,pyspark,databricks
1
75,696,737
PySpark in Databricks error with table conversion to pandas
75,602,965
true
228
I'm using Databricks and want to convert my PySpark DataFrame to a pandas one using the df.toPandas() command. However, I keep getting this error: /databricks/spark/python/pyspark/sql/pandas/conversion.py:145: UserWarning: toPandas attempted Arrow optimization because 'spark.sql.execution.arrow.pyspark.enabled' is set to true, but has reached the error below and can not continue. Note that 'spark.sql.execution.arrow.pyspark.fallback.enabled' does not have an effect on failures in the middle of computation. 'DataFrame' object has no attribute 'dtype' warnings.warn(msg) AttributeError: 'DataFrame' object has no attribute 'dtype' I tried different things, including: spark.conf.set("spark.sql.execution.arrow.enabled", "false") But nothing worked so far (I also checked some of the other posts that have this issue, but none helped). UPDATE: result of df.printSchema(): flight_id: string (nullable = true) |-- flight_direction: string (nullable = true) |-- service_type: string (nullable = true) |-- flight_designator: string (nullable = true) |-- flight_number: string (nullable = true) |-- callsign: string (nullable = true) |-- scheduled_datetime: timestamp (nullable = true) |-- connecting_flight_designator: string (nullable = true) |-- airport_iata_codes: array (nullable = true) | |-- element: string (containsNull = true) |-- airline_name: string (nullable = true) |-- airport_names: array (nullable = true) | |-- element: string (containsNull = true) |-- country_number: long (nullable = true) |-- eu_category: string (nullable = true) |-- safe_town_indicator: boolean (nullable = true) |-- sibt: timestamp (nullable = true) |-- aibt: timestamp (nullable = true) |-- sobt: timestamp (nullable = true) |-- aibt: timestamp (nullable = true) |-- tsat: timestamp (nullable = true) |-- aircraft_name: string (nullable = true) |-- aircraft_registration: string (nullable = true) |-- ramp: string (nullable = true) |-- ramp_previous: string (nullable = true) |-- seats: long (nullable = true) |-- actual_total_pax: integer (nullable = true) |-- handler_apron: string (nullable = true) |-- occupancy_rate: double (nullable = false)
1.2
2
1
The problem turned out to be in the data filtering: the DataFrame contained duplicate column names (note that aibt appears twice in the schema above), which is what tripped up the toPandas() conversion. If anyone runs into a similar error in the future, check for duplicate columns first.
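A quick way to check for duplicate column names before converting, as a hedged sketch (df here stands for the PySpark DataFrame being passed to toPandas()):

from collections import Counter

duplicates = [name for name, count in Counter(df.columns).items() if count > 1]
print(duplicates)   # e.g. ['aibt'] for the schema shown in the question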
2023-03-01 19:52:19
4
python,error-handling,pip
12
76,614,012
How do I solve "error: externally-managed-environment" everytime I use pip3?
75,608,323
false
52,213
error: externally-managed-environment × This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. I use apt upgrade and update.
0.066568
46
4
Remove the marker file that triggers the check (run as root): # rm /usr/lib/python3.11/EXTERNALLY-MANAGED Be aware that this disables the PEP 668 protection system-wide, so packages installed with pip can then conflict with packages managed by apt.
2023-03-01 19:52:19
7
python,error-handling,pip
12
75,755,526
How do I solve "error: externally-managed-environment" everytime I use pip3?
75,608,323
false
52,213
error: externally-managed-environment × This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. I use apt upgrade and update.
1
46
4
That error comes from newer pip versions. Just run the following command to downgrade pip and the error goes away: pip install pip==22.3.1 --break-system-packages That should help.
2023-03-01 19:52:19
-1
python,error-handling,pip
12
76,209,085
How do I solve "error: externally-managed-environment" everytime I use pip3?
75,608,323
false
52,213
error: externally-managed-environment × This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. I use apt upgrade and update.
-0.016665
46
4
To install the package XXX, instead of pip install XXX try: sudo apt install python3-XXX
2023-03-01 19:52:19
-2
python,error-handling,pip
12
76,031,775
How do I solve "error: externally-managed-environment" everytime I use pip3?
75,608,323
false
52,213
error: externally-managed-environment × This environment is externally managed ╰─> To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. If you wish to install a non-Debian-packaged Python package, create a virtual environment using python3 -m venv path/to/venv. Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make sure you have python3-full installed. If you wish to install a non-Debian packaged Python application, it may be easiest to use pipx install xyz, which will manage a virtual environment for you. Make sure you have pipx installed. See /usr/share/doc/python3.11/README.venv for more information. note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. I use apt upgrade and update.
-0.033321
46
4
This happened to me after I changed the name of the folder containing my virtual environment. If you want to rename a virtual environment, simply renaming the folder doesn't work (its internal paths still point at the old location); what I did was re-create the environment.
2023-03-01 22:21:23
2
python-3.x,gstreamer,drawingarea
1
75,612,216
How to display a GStreamer pipeline inside a Gtk window with a specific size?
75,609,594
true
246
I'm working on a Python script that creates a GStreamer pipeline and displays the video output inside a Gtk window. The current code opens two windows: one with the title "Video Window" but with no content, and another one with the GStreamer pipeline inside. However, I'd like to have the GStreamer pipeline displayed inside the first window as a 200x200 box, without opening a second window. I've tried using a Gtk.DrawingArea widget and the GstVideo.VideoOverlay interface, but I'm not sure how to integrate them into my code. Here's the current code: import gi gi.require_version('Gtk', '3.0') gi.require_version('Gst', '1.0') from gi.repository import Gtk, Gst # Gtk win = Gtk.Window(title="Video Window") win.connect("destroy", Gtk.main_quit) win.set_default_size(600, 400) # DrawingArea to hold the video drawingarea = Gtk.DrawingArea() drawingarea.set_size_request(200, 200) win.add(drawingarea) # Gst Gst.init(None) pipeline = Gst.parse_launch("videotestsrc ! autovideoconvert ! gtksink") pipeline.set_state(Gst.State.PLAYING) # End win.show_all() Gtk.main() Can anyone help me modify this code to achieve my goal? I'm relatively new to Gtk and GStreamer, so a detailed explanation would be appreciated. Thank you in advance! I may add if the sink is not recommended gtksink please inform me. I'm using Raspberry Pi 4 and need as low latency as possible on this. So the best sink to use would help.
1.2
1
1
The gtksink element has a widget property holding the GTK widget it draws the video into. Get that widget from the property and add it to your GTK application's window hierarchy wherever you want the video to be displayed.
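A rough sketch of that wiring based on the question's code; it assumes giving the sink a name in the launch string so it can be looked up, and it replaces the empty DrawingArea with the sink's own widget:

pipeline = Gst.parse_launch("videotestsrc ! autovideoconvert ! gtksink name=sink")
sink = pipeline.get_by_name("sink")
video_widget = sink.props.widget           # the GtkWidget that gtksink draws into
video_widget.set_size_request(200, 200)    # the 200x200 box from the question

win.add(video_widget)                      # instead of the empty DrawingArea
pipeline.set_state(Gst.State.PLAYING)
win.show_all()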
2023-03-01 22:49:59
0
python,python-3.x
1
75,611,619
Not able to find a 'naked single' in a Sudoku board
75,609,775
false
39
For my 2nd newbie project I an coding a program to solve Sudoki puzzles. In the example code below there are two 9x9 Sudoku boards lists, the one in use results in 7 'naked singles found' while the other one results in 0 'naked singles found'. Both boards have 'naked singles' in them, but only one board reports this. I feel like I am missing a step in my logic somewhere. The 'logic' for finding 'naked singles' is defined as:- "A naked single is that what remains after you have applied your solving techniques, by eliminating other candidates. A naked single is the last remaining candidate in a cell. Alternative terms are Forced Digit and Sole Candidate." # 2nd board with hidden singles but none can be found # board = [ # [0, 0, 5, 3, 0, 0, 0, 0, 0], # [8, 0, 0, 0, 0, 0, 0, 2, 0], # [0, 7, 0, 0, 1, 0, 5, 0, 0], # [4, 0, 0, 0, 0, 5, 3, 0, 0], # [0, 1, 0, 0, 7, 0, 0, 0, 6], # [0, 0, 3, 2, 0, 0, 0, 8, 0], # [0, 6, 0, 5, 0, 0, 0, 0, 9], # [0, 0, 4, 0, 0, 0, 0, 3, 0], # [0, 0, 0, 0, 0, 9, 7, 0, 0] # ] # Board three, has hidden singles but none can be found # board = [ # [0, 0, 0, 7, 0, 6, 0, 0, 0], # [0, 0, 9, 0, 3, 0, 0, 0, 2], # [0, 6, 0, 9, 0, 0, 0, 0, 1], # [0, 0, 5, 0, 1, 0, 4, 0, 0], # [0, 0, 6, 0, 0, 0, 7, 0, 0], # [0, 3, 0, 0, 7, 4, 8, 0, 0], # [8, 0, 0, 0, 9, 0, 1, 0, 0], # [0, 0, 0, 0, 0, 0, 0, 0, 0], # [0, 0, 3, 0, 4, 0, 0, 0, 5] # ] def count_naked_singles(board): count = 0 for i in range(9): for j in range(9): if board[i][j] == 0: possibilities = set(range(1, 10)) # Check row for k in range(9): possibilities.discard(board[i][k]) # Check column for k in range(9): possibilities.discard(board[k][j]) # Check box box_row = (i // 3) * 3 box_col = (j // 3) * 3 for m in range(box_row, box_row + 3): for n in range(box_col, box_col + 3): possibilities.discard(board[m][n]) if len(possibilities) == 1: count += 1 return count # test board with 7 naked singles board = [ [0, 0, 0, 7, 0, 2, 9, 0, 0], [0, 9, 0, 0, 8, 1, 2, 0, 0], [8, 7, 2, 4, 5, 0, 0, 1, 3], [1, 0, 0, 0, 7, 0, 4, 2, 0], [9, 0, 0, 1, 0, 5, 0, 0, 8], [0, 4, 0, 0, 0, 0, 5, 6, 0], [0, 3, 5, 8, 0, 4, 0, 9, 6], [0, 8, 0, 0, 3, 6, 7, 0, 0], [0, 0, 0, 5, 0, 0, 0, 3, 2] ] count = count_naked_singles(board) print(f"Number of naked singles found is {count}") I have tried different Sudoku boards, three of which are included with the supplied code. Sometimes my function will find 'hidden singles' sometimes it misses them. The active 'board' with my code has been tested further and can be completed with a modiefied version of my function count_naked_singles(board) by just finding 'naked singles'. So my question is: Why is my function count_naked_singles(board): not able to find 'naked single(s)' in all Sudoku boards that have these features in them?
0
1
1
You've done a great job of eliminating the direct cases where the row, column or box cancels out possibilities. However, there is another kind of single that your function never detects (strictly speaking this one is usually called a hidden single rather than a naked single): if no other cell in your row can hold a 4, then YOU must hold the 4. The same applies to your column and to your box. So eliminations alone find the last-candidate singles, but you must also compare your candidates with the candidates of the cells around you: if one of your possibilities appears in no other cell of your row (or column, or box), then you are it. Save the possibilities for each position, then do another pass where you check each possibility against its row, column and box.
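A sketch of that second pass for rows only (columns and boxes follow the same pattern); it assumes a candidates dict, built with the elimination code from the question, mapping the (row, col) of each empty cell to its set of remaining possibilities:

def count_row_singles(candidates):
    count = 0
    for (i, j), poss in candidates.items():
        for value in poss:
            # can any other empty cell in row i still hold this value?
            elsewhere = any(value in candidates.get((i, k), set())
                            for k in range(9) if k != j)
            if not elsewhere:
                count += 1          # no other cell in this row can hold value -> single
                break
    return count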
2023-03-02 04:00:48
1
python,pandas
2
75,612,064
How to calculate percent change of a pandas dataframe across groups with a custom period?
75,611,207
false
49
I am trying to compute the percent change in a a column of a pandas dataframe. I can get it to work when I don't specify periods, but I would like to specify the number of periods to consider. It has to work across groups is the problem, I can get it to work without groups, but for some reason it's not working now? _df_ty['velocity_7'] = _df_ty.groupby('store')['sales'].apply(pd.Series.pct_change(periods=7)).abs() It yells at me with: TypeError: pct_change() missing 1 required positional argument: 'self' Which I don't understand how that's happening inside of apply? Google and the pandas docs aren't helping here. Further, all the stack overflow answers I can find are just about calculating the percent change to begin with, I cannot find an example of one with grouping and a non-1 period in use.
0.099668
1
1
Instead of passing pd.Series.pct_change to apply, call pct_change() directly as a method on the grouped Series; the grouping already restricts the calculation to each store: _df_ty['velocity_7'] = _df_ty.groupby('store')['sales'].pct_change(periods=7).abs().fillna(0) Note that fillna(0) at the end replaces the missing values (which occur for the first 7 rows of each store, where the percent change is undefined) with 0; drop it if you prefer to keep the NaNs.
2023-03-02 05:59:05
0
python,python-3.x,github,jupyter-notebook,file-sharing
2
75,613,225
How to share an interactive jupyter notebook?
75,611,793
true
174
i need to send the Jupyter notebook to another person, and enable him to open, enter inputs and running the notebook, to get the results. please explain in a steps what should i do to share an interactive jupyter notebook to the end user, so he can enter inputs and running the code.
1.2
2
1
Follow these steps: A. Save your Jupyter notebook file with a .ipynb extension. B. Make sure that all the necessary libraries and packages are installed in the environment where the notebook will be run. You can provide that person with a requirements.txt file so he/she can create an adequate environment. C. Download the notebook as an .ipynb file: File -> Download as -> Notebook (.ipynb). D. Share the downloaded .ipynb file. E. Instruct the other person to open the .ipynb file in Jupyter Notebook or whatever other environment/platform he/she uses. Of course, you have to make sure that the other person has the necessary software installed on their computer to run the notebook, including Python and any required packages. Once they open the notebook, they should be able to run the code cells and input their own data.
2023-03-02 15:25:14
0
python,django,django-models,django-views,django-forms
1
75,618,151
phonenumber field authentication error in django
75,617,505
false
89
I am making a login function in django where user will input their phone number and password, they can access the account. but when I run the server, I get error that invalid phone number and password, but I registered proper phone number and password. I did some troubleshooting and come to know that phone number data is not showing. here's my code models.py from django.db import models from django.contrib.auth.models import AbstractBaseUser, PermissionsMixin from phonenumber_field.modelfields import PhoneNumberField from .manager import MyUserManager # Create your models here. class MyUser(AbstractBaseUser, PermissionsMixin): username= models.CharField(max_length=50, null=True) phone_number = PhoneNumberField(unique=True) first_name = models.CharField(max_length=30, blank=True) last_name = models.CharField(max_length=30, blank=True) email = models.EmailField(unique=True) is_active = models.BooleanField(default=True) is_staff = models.BooleanField(default=False) is_superuser = models.BooleanField(default=False) USERNAME_FIELD = 'phone_number' REQUIRED_FIELDS = [] objects = MyUserManager() def __str__(self): return self.phone_number.as_e164 def has_perm(self, perm, obj=None): return True def has_module_perms(self, app_label): return True forms.py from django import forms from django.contrib.auth.forms import AuthenticationForm from phonenumber_field.formfields import PhoneNumberField from .models import * class MyLoginForm(AuthenticationForm): phone = PhoneNumberField() password = forms.CharField(label='Password', widget=forms.PasswordInput) views.py from django.contrib.auth import authenticate, login, logout from django.shortcuts import render, redirect from .forms import MyLoginForm # Create your views here. def my_login(request): if request.method == 'POST': form = MyLoginForm(request, request.POST) if form.is_valid(): phone_number = form.cleaned_data.get('phone') print(form.cleaned_data.get('phone_number')) password = form.cleaned_data.get('password') print(password) user = authenticate(request, phone_number=phone_number, password=password) if user is not None: login(request, user) return redirect('home') else: form.add_error(None, 'Invalid phone number or password.') else: form = MyLoginForm() return render(request, 'core/login.html', {'form': form}) can anyone tell me where I am doing wrong?
0
1
1
You should access the phone_number field, not phone, since you are authenticating with the phone_number field in authenticate() (rename the form field from phone to phone_number so cleaned_data contains that key), so your view code should be like this: def my_login(request): if request.method == 'POST': form = MyLoginForm(request, request.POST) if form.is_valid(): phone_number = form.cleaned_data.get('phone_number') print(form.cleaned_data.get('phone_number')) password = form.cleaned_data.get('password') print(password) user = authenticate(request, phone_number=phone_number, password=password) if user is not None: login(request, user) return redirect('home') else: form.add_error(None, 'Invalid phone number or password.') else: form = MyLoginForm() return render(request, 'core/login.html', {'form': form}) The irony is that when you print it as print(form.cleaned_data.get('phone_number')), it shows up exactly as you described in the question :)
2023-03-02 16:38:02
0
python,python-3.x
2
75,618,459
Why does ~True = -2 in python?
75,618,364
false
94
I am completely perplexed. We came across a bug, which we easily fixed, but we are perplexed as to why the value the bug was generating created the output it did. Specifically: Why does ~True equal -2 in python? ~True >> -2 Shouldn't the bitwise operator ~ only return binary? (Python v3.8)
0
2
1
Shouldn't the bitwise operator ~ only return binary? Well, technically EVERYTHING in a computer is binary. However, that is only part of the story. There are two important concepts here: the data types of values, and the representations of those values. For the first one: True is a boolean and -2 is an integer. In some cases, Python will convert a value from one type to another in order to perform certain operations, and ~ is a bitwise operator which only works on integers. Here an actual conversion isn't even necessary, since bool inherits from int, so the boolean value True can simply be treated as the integer value 1. Then ~ gives the bitwise inverse of that integer value. Now, if we represent the 1 in binary as 00000001 (yes, technically there are more 0s, but I'm not going to type them out... the concept still holds if we only use 8 bits instead of the actual 32 or 64), inverting every bit gives 11111110 (again, there are more leading 1s because there are really more bits). Note that the way I write the integer value here in binary is merely a representation of the value using characters I can type on a keyboard. That is, 00000001 in binary and 1 in decimal both represent the same underlying value in memory; similarly, 11111110 in binary and -2 in decimal (two's complement) both represent the same value. At the end, print() will just print the value in decimal, which is why you get -2. If you want to print the value in hex or binary, there are built-in functions to get those representations instead.
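A short demonstration of booleans being ints and of the two's-complement identity ~x == -x - 1:

print(isinstance(True, int))   # True -- bool is a subclass of int
print(int(True))               # 1
print(~True)                   # -2, i.e. -(1) - 1
print(~1)                      # -2, the same thing
print(bin(~1 & 0xFF))          # 0b11111110 -- the 8-bit view described above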
2023-03-02 17:28:47
0
python-3.x,google-cloud-platform,cloud-document-ai
1
75,619,504
How can I ensure that GCP Document AI model to output JSON with the same name as the input file?
75,618,908
true
153
I am using Python to BatchProcess PDFs through GCP Document AI ("DocAI"). The PDFs have long file names such as 71.169892_01-2022.10.15-21275188-1111.pdf. Often the only difference between the filenames are the last four digits before .pdf (such as 71.169892_01-2022.10.15-21275188-1111.pdf and 71.169892_01-2022.10.15-21275188-2547.pdf) When such a PDF is processed through DocAI, it outputs one or more JSON files with a shortened filename such as 71.169892_01-2022.10-0.json, 71.169892_01-2022.10-1.json, and so on. How can I ensure that DocAI does not cut off the filename? Is there an attribute I can add to BatchProcessing Request to ensure that the output preserves the full filename? This is important because when I process 2 PDFs with nearly identical filenames (e.g. 71.169892_01-2022.10.15-21275188-1111.pdf and 71.169892_01-2022.10.15-21275188-2547.pdf), the resulting JSONs end up with the same filename: 71.169892_01-2022.10-0.json. Which is a problem when such JSONs are moved from the folder where there are automatically stored by DocAI into the same folder (that is--the second JSON simply overwrites the first JSON which has the same name). The current state is as follows: Input PDF: 71.169892_01-2022.10.15-21275188-1111.pdf Output JSON: 71.169892_01-2022.10-0.json Expecting: Input PDF: 71.169892_01-2022.10.15-21275188-1111.pdf Output JSON: 71.169892_01-2022.10.15-21275188-1111.json
1.2
1
1
Currently, there isn't a way to specify the output filename from Document AI, other than the output bucket & folder. Batch Processing will always output JSON files with an extra -0 or another number since larger documents can be split up into multiple "shards". If it's possible, I would recommend sending the files that have nearly identical names in different requests to avoid the overwriting issue, since each request will output into a different folder named for the operation id. However, this is definitely an edge case that should be handled in the product, so I'll report this issue to the development team. Update: A fix has been made and it should be rolled out in the next couple of weeks. This should prevent the truncation of the filenames and the overwriting issue, but the output files will still have suffixes like -0.
2023-03-02 20:48:16
0
python,python-3.x,list,csv
2
75,623,696
Check for line in CSV, if not existant - append the line
75,620,670
true
96
I have the following problem. I am trying to match all existing rows of a CSV file with the current one. If the line already exists, the script should only show me it exists. If the row does not exist, the script should tell me that the row does not exist. However, the script always tells me that the row does not exist, even though I have checked that the row does exist. Here is my Code so far: # Imports from Library(s) from pathlib import Path import csv from windows_tools.installed_software import get_installed_software # Check if the csv file exists - if not: create it path = Path('./programms.csv') existingFile = [] if path.is_file() is not True: with open('programms.csv', 'w', newline='') as write1: w_object = csv.writer(write1) w_object.writerow(["Name", "Version", "Publisher"]) write1.close() # Lists all Software on the computer for software in get_installed_software(): csv_list = (software['name'], software['version'], software['publisher']) with open('programms.csv', 'r') as f1: existingFile = [line for line in csv.reader(f1, delimiter=',')] f1.close() #Checks if if csv_list in existingFile: print(str(csv_list) + "already is in the list") continue if csv_list not in existingFile: print("Current Object is not in the Existing lines") # # Open our existing CSV file in append mode # # Create a file object for this file # with open('programms.csv', 'a', newline='') as append1: # # Pass this file object to csv.writer() and create writer_object # writer_object = csv.writer(append1) # # Pass the list as an argument intothe writerow() # writer_object.writerow(csv_list) # # Close the file object # append1.close() print (existingFile) I already tried to specify the types i want to check it with: if str(csv_list) in list(existingFile): Sadly im just starting with python and I'm not quite sure how to tackle this one.
1.2
1
1
Thanks to you I fixed it! The comments from inspectorG4dget ("Try csv_list = [software['name'], software['version'], software['publisher']]") and Barmar ("You're setting csv_list to a tuple, not a list, because you're using () instead of []") were right: csv.reader yields lists, so the tuple built with () never compared equal to the existing rows. I just changed the tuple to a list and that was it. Thanks to all that replied so quickly! Best regards, Pr3adus
2023-03-03 00:46:59
2
python,airflow
1
75,622,359
Mark a task as a success on callback on intentional failure
75,622,228
true
94
In Airflow 2.3.4, I have a task I am intentionally failing and when it fails I want to mark it as a success in the callback but the below does not work., def intentional_failure(): raise AirflowException("this is a dummy failure") def handle_failure(context): context['task_instance'].state = State.SUCCESS dummy_failure = PythonOperator(task_id="intentional_failure", python_callable=intentional_failure, on_failure_callback=handle_failure) How would I programatically mark a task as success on an intentional failure?
1.2
2
1
Turns out it is a method call, not an attribute. So we need to call context['ti'].set_state(State.SUCCESS) in the on_failure_callback instead of assigning to .state.
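A hedged sketch of that callback with the import it needs; in Airflow 2.x, set_state opens its own database session, so calling it with just the target state should be enough here:

from airflow.utils.state import State

def handle_failure(context):
    # set_state is a method call; merely assigning to .state does not persist anything
    context['ti'].set_state(State.SUCCESS)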
2023-03-03 05:51:59
1
python
2
75,623,659
Why does Python backspace behave strange?
75,623,588
true
55
Python escape character \b (backspace) behaves in a strange way. Just look at the code below: print("12\b345") print("12345\b\b") print("12345\b\ba") output: 1345 12345 123a5 I was expecting: 1345 123 123a
1.2
1
1
The backspace escape character (\b) in Python only moves the cursor back one position; it does not delete anything. A character disappears from the output only if something else is printed over it afterwards. In the first line, "12\b345" prints "12", moves the cursor back onto the "2", and then prints "345", so the "3" overwrites the "2" and the output is "1345". In the second line, "12345\b\b" prints "12345" and then moves the cursor back two positions, but nothing is printed after that, so nothing is overwritten and you still see "12345". In the third line, "12345\b\ba" prints "12345", moves the cursor back two positions onto the "4", and then prints "a", which overwrites the "4" while the "5" to its right is untouched, giving "123a5". That is why you never see "123" or "123a": the backspace never erases, it only repositions the cursor.
2023-03-03 07:44:38
1
python,numpy,jupyter-notebook
1
75,624,517
I am getting 'numpy.ndarray' object is not callable when I use range() function
75,624,387
false
130
I am getting this error "TypeError: 'numpy.ndarray' object is not callable" in Jupyter notebook when I use the range() function in a very simple for loop. What's the problem? TypeError Traceback (most recent call last) Input In [60], in <cell line: 1>() ----> 1 for i in range(100): 2 print(I) TypeError: 'numpy.ndarray' object is not callable
0.197375
1
1
I think it is because earlier in the notebook you assigned a numpy array to a variable named range, which shadows the built-in range function. Rename that variable (or delete it with del range) if you want to use the built-in range() again.
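A small reproduction of the problem and the fix, assuming the shadowing happened at the notebook's top level:

import numpy as np

range = np.arange(10)        # oops: the name now shadows the built-in range

try:
    for i in range(100):     # calls the ndarray -> TypeError: ... object is not callable
        pass
except TypeError as e:
    print(e)

del range                    # remove the shadowing name; the built-in is visible again
for i in range(3):
    print(i)                 # 0 1 2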
2023-03-03 08:14:48
1
python,pyinstaller,scikit-image,easyocr
2
75,836,771
What is a .pyci file when compiling executables using pyinstaller?
75,624,637
false
328
I am trying to package a python application using PyInstaller. I used the following command: pyinstaller --noconfirm --onedir --windowed --icon "D:/Development/nikke-assistant/images/nikke_icon.ico" --add-data "C:/Program Files/Tesseract-OCR;Tesseract-OCR/" --hidden-import "skimage" --paths "C:/Windows/System32/downlevel" --hidden-import "easyocr" --collect-all "easyocr" --collect-all "scikit-image" --runtime-hook "D:/Development/nikke-assistant/hook.py" "D:/Development/nikke-assistant/nikke_interface.py" When running the executable, I run into the following error: Traceback (most recent call last): File "nikke_interface.py", line 11, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "nikke_agent.py", line 9, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "game_interaction_io.py", line 7, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\__init__.py", line 1, in <module> from .easyocr import Reader File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\easyocr.py", line 3, in <module> from .recognition import get_recognizer, get_text File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\recognition.py", line 10, in <module> from .utils import CTCLabelConverter File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\utils.py", line 13, in <module> from .imgproc import loadImage File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\imgproc.py", line 8, in <module> from skimage import io File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "skimage\__init__.py", line 74, in <module> File "lazy_loader\__init__.py", line 243, in attach_stub ValueError: Cannot load 
imports from non-existent stub 'D:\\Development\\nikke-assistant\\output\\nikke_interface\\skimage\\__init__.pyci' I've traced down the issue and apparently it caused by a chain of imports: imports easyocr easyocr imports skimage skimage has a module called io To import modules in the skimage package, it uses a library called lazy_loader to lazy load the dependencies where it does something like below in the __init__.py file: import lazy_loader as lazy __getattr__, __lazy_dir__, _ = lazy.attach_stub(__name__, __file__) The library lazy_loader looks at another stub file with the name of __init__.pyi when loading the submodules. The key question is: what is the skimage\\__init__.pyci it's looking for? I know .pyc or .pyi, is it looking for a compiled version of the stub file?
0.099668
1
2
I had the same issue too, and I solved it this way: put the __init__.pyi into the skimage dir in the dist folder BUT rename it to __init__.pyci, copy the skimage/data dir into the skimage dist dir, and again rename its __init__.pyi to .pyci. It works :) My config is Python 3.11, PyInstaller 3.9.
2023-03-03 08:14:48
1
python,pyinstaller,scikit-image,easyocr
2
75,627,692
What is a .pyci file when compiling executables using pyinstaller?
75,624,637
true
328
I am trying to package a python application using PyInstaller. I used the following command: pyinstaller --noconfirm --onedir --windowed --icon "D:/Development/nikke-assistant/images/nikke_icon.ico" --add-data "C:/Program Files/Tesseract-OCR;Tesseract-OCR/" --hidden-import "skimage" --paths "C:/Windows/System32/downlevel" --hidden-import "easyocr" --collect-all "easyocr" --collect-all "scikit-image" --runtime-hook "D:/Development/nikke-assistant/hook.py" "D:/Development/nikke-assistant/nikke_interface.py" When running the executable, I run into the following error: Traceback (most recent call last): File "nikke_interface.py", line 11, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "nikke_agent.py", line 9, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "game_interaction_io.py", line 7, in <module> File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\__init__.py", line 1, in <module> from .easyocr import Reader File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\easyocr.py", line 3, in <module> from .recognition import get_recognizer, get_text File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\recognition.py", line 10, in <module> from .utils import CTCLabelConverter File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\utils.py", line 13, in <module> from .imgproc import loadImage File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "easyocr\imgproc.py", line 8, in <module> from skimage import io File "<frozen importlib._bootstrap>", line 1027, in _find_and_load File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 688, in _load_unlocked File "PyInstaller\loader\pyimod02_importers.py", line 352, in exec_module File "skimage\__init__.py", line 74, in <module> File "lazy_loader\__init__.py", line 243, in attach_stub ValueError: Cannot load 
imports from non-existent stub 'D:\\Development\\nikke-assistant\\output\\nikke_interface\\skimage\\__init__.pyci' I've traced down the issue and apparently it caused by a chain of imports: imports easyocr easyocr imports skimage skimage has a module called io To import modules in the skimage package, it uses a library called lazy_loader to lazy load the dependencies where it does something like below in the __init__.py file: import lazy_loader as lazy __getattr__, __lazy_dir__, _ = lazy.attach_stub(__name__, __file__) The library lazy_loader looks at another stub file with the name of __init__.pyi when loading the submodules. The key question is: what is the skimage\\__init__.pyci it's looking for? I know .pyc or .pyi, is it looking for a compiled version of the stub file?
1.2
1
2
I had a similar problem today. I solved it by downgrading scikit-image to 0.18.3. I use PyInstaller 5.8. Good luck!
2023-03-03 14:50:35
1
html,python-3.x,pandas,selenium-webdriver,web-scraping
1
75,712,830
Selenium, the problem of not being able to obtain all price and date information of a product
75,628,559
false
257
the code that follows the price of the cheapest price product searched on the site at a certain time at a certain time interval for the same seller: import pandas as pd import undetected_chromedriver as uc from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time import matplotlib.pyplot as plt from time import sleep import inspect import os from bs4 import BeautifulSoup import requests # Get the search term and tracking period from the user search_term = input("Please enter the name of the product you want to search: ") months =input("Please enter the number of months you want to track the product: ") # To ensure that the user enters a non-string value while not months.isdigit(): print("Warning: Please enter a valid integer value for the number of months.") months = input("Please enter the number of months you want to track the product: ") months = int(months) # Start the web driver and go to the Hepsiburada homepage options = uc.ChromeOptions() options.add_argument('--blink-settings=imagesEnabled=false') # disable images for loading of page faster options.add_argument('--disable-notifications') prefs = {"profile.default_content_setting_values.notifications" : 2} options.add_experimental_option("prefs",prefs) driver = uc.Chrome(options=options) url = 'https://www.hepsiburada.com/' driver.get(url) wait = WebDriverWait(driver, 15) # close cookies bar wait.until(EC.element_to_be_clickable((By.ID, 'onetrust-accept-btn-handler'))).click() # Enter the search term in the search box and press Enter search_box = wait.until(EC.element_to_be_clickable((By.CLASS_NAME, 'theme-IYtZzqYPto8PhOx3ku3c'))) search_box.send_keys(search_term + Keys.RETURN) # load all products number_of_products = int(wait.until(EC.visibility_of_all_elements_located((By.CLASS_NAME, 'searchResultSummaryBar-AVnHBWRNB0_veFy34hco')))[1].text) ### visibility_of_all_elements_located is a wait strategy in Selenium that checks if all elements of a certain type are visible on the page and waits until they become visible before continuing. 
number_of_loaded_products = 0 while number_of_loaded_products < number_of_products: loaded_products = wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, 'li[class*=productListContent][id]'))) number_of_loaded_products = len(loaded_products) driver.execute_script('arguments[0].scrollIntoView({block: "center", behavior: "smooth"});', loaded_products[-1]) # Get the link, name, price and seller of all the products product = {key:[] for key in ['name','price','seller','url']} product['name'] = [h3.text for h3 in driver.find_elements(By.CSS_SELECTOR, 'h3[data-test-id=product-card-name]')] product['url'] = [a.get_attribute('href') for a in driver.find_elements(By.CSS_SELECTOR, 'a[class*=ProductCard]')] product['price'] = [float(div.text.replace('TL','').replace(',','.')) for div in driver.find_elements(By.CSS_SELECTOR, 'div[data-test-id=price-current-price]')] for i,url in enumerate(product['url']): print(f'Search seller names {i+1}/{number_of_loaded_products}', end='\r') driver.get(url) product['seller'] += [wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, '.seller a'))).text] product['url'][i] = driver.current_url # useful to replace some long urls # Sort by price in ascending order import pandas as pd product_list = pd.DataFrame(product).sort_values(by='price').to_dict('list') print(f"\nThe product selected from the search results is:"+ f"\nname: {product_list['name'][0]}"+ f"\nprice: {product_list['price'][0]}"+ f"\nseller: {product_list['seller'][0]}"+ f"\nurl: {product_list['url'][0]}") # Go to the page of the selected product driver.get(product_list['url'][0]) # Get the prices prices = [] dates = [] while len(prices) < months: price_elems = driver.find_elements(By.XPATH, "//div[@class='price-area']//strong[@itemprop='price']") print(price_elems) date_elems = driver.find_elements(By.XPATH, "//div[@class='product-info']//span[@class='product-info-date']") print(date_elems) for price_elem, date_elem in zip(price_elems, date_elems): price = float(price_elem.text.replace('.', '').replace(',', '.')) date = pd.to_datetime(date_elem.text, format='%d %B %Y, %H:%M') prices.append(price) dates.append(date) next_button = driver.find_element(By.XPATH, "//a[@class='page-next']") if 'disabled' in next_button.get_attribute('class'): break else: driver.execute_script("arguments[0].click();", next_button) # Create a DataFrame and select the data for the last X months df = pd.DataFrame({'Date': dates, 'Price': prices}) df['Hour'] = df['Date'].dt.hour df = df.groupby(['Date', 'Hour']).mean().reset_index() start_date = pd.Timestamp.today() - pd.DateOffset(months=months) end_date = pd.Timestamp.today() df = df.loc[(df['Date'] >= start_date) & (df['Date'] <= end_date)] # Create the plot plt.plot(df['Date'], df['Price']) plt.title('Price Changes of {} in the Last {} Months'.format(product_list['name'][0], months)) plt.xlabel('Date') plt.ylabel('Price (TL)') plt.show() I am trying to create a graph of the price of a product searched on the website Hepsiburada.com.tr with the cheapest price, for the same seller during a certain month, at the same time (For instance, let's say the product whose price we follow is "pınar süt 1lt"). However, I could not draw the graph because I could not obtain the "prices" and "dates" information.This list is empty. How can I obtain this graph? Focusing point:The piece of code under the '# Get the prices' comment is working incorrectly. The code up to this part is working properly.
0.197375
1
1
I'll try to direct you towards a solution. As I understand it, you need to track product price changes and process them somehow. You can do it by periodically running a script that collects product prices and stores the data somewhere for future analysis. I see that currently you use WebDriver to grab price data. My first suggestion is to try to use a Web API instead. Communication with a website via HTTP is much faster and more stable than via the UI. Automating the UI requires you to deal with different issues related to element inaccessibility, tricky waits, and unexpected overlapping controls. A Web API gives you a clear interface to request and receive the needed data without the overhead of handling the UI. Put shortly - UI is for humans, API is for machines. Use an API if possible. In case you need to stay with WebDriver, check the following to address data extraction issues: (1) The required data is displayed on the page before extraction. Make sure the steps leading to the target data page are completed successfully. It may happen that some automated interactions fail silently and the needed data is not shown. Button clicks may be skipped, element state waits may be wrong, thus letting the script extract data when it's not displayed. Watch your script execution in real time and make sure it successfully passes all steps before data extraction. If some steps fail, put sleeps in initially just to verify it is a page state issue, then replace them with custom waits if so. Try to use different click methods if some fail. (2) The required data is in view, not outside the visible page area. To extract data from some controls they need to be in view. Scroll the page if needed. (3) The parent iframe is selected before interaction with a child element. Some UI controls may be put inside iframes. If interaction with a control fails, check if it is inside an iframe. WebDriver needs to be switched to the parent iframe before interaction with controls inside. Use the web inspector in a browser to find out if a given UI control is inside an iframe. (4) Try to execute the script in another browser. Ideally, a WebDriver script should work the same for all browsers, but in fact issues happen and a button click that fails in one browser may work in another one. If your script looks perfect but element interaction still fails, try another browser.
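If you do stay on WebDriver, a minimal sketch of the wait/scroll/iframe points above could look like this (the selectors are only illustrations - the price selector is taken from the question's own code, while the iframe locator is a made-up placeholder):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 20)

# 1) wait until the target data is really rendered before reading it
price_el = wait.until(EC.visibility_of_element_located(
    (By.CSS_SELECTOR, 'div[data-test-id=price-current-price]')))

# 2) bring it into view if extraction needs it on screen
driver.execute_script('arguments[0].scrollIntoView({block: "center"});', price_el)

# 3) if a control sits inside an iframe, switch into it first and back out afterwards
# driver.switch_to.frame(driver.find_element(By.CSS_SELECTOR, 'iframe#hypothetical-frame'))
# ... interact with elements inside the frame ...
# driver.switch_to.default_content()

print(price_el.text)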
2023-03-04 01:13:37
5
python,image,image-processing,python-imaging-library
1
75,633,329
Why do image size differ when vertical vs horizontal?
75,633,075
true
81
Tried to create a random image with PIL as per the example: import numpy from PIL import Image a = numpy.random.rand(48,84) img = Image.fromarray(a.astype('uint8')).convert('1') print(len(img.tobytes())) This particular code will output 528. When we flip the numbers of the numpy array: a = numpy.random.rand(84,48) The output we get is 504. Why is that? I was expecting the byte count to be the same, since the numpy arrays are the same size.
1.2
3
1
When you call tobytes() on the boolean array*, the data is likely encoded per row. In your second example, there are 48 booleans in each row of img. So each row can be represented with 6 bytes (48 bits). 6 bytes * 84 rows = 504 bytes in img. However, in your first example, there are 84 pixels per row, which is not divisible by 8. In this case, the encoder represents each row with 11 bytes (88 bits). There are 4 extra bits of padding per row. So now the total size is 11 bytes * 48 rows = 528 bytes. If you test a bunch of random input shapes for a 2d boolean array to encode, you will find that when the number of elements per row is divisible by 8, the number of total bytes in the encoding is equal to the width * height / 8. However, when the row length is not divisible by 8, the encoding will contain more bytes because it has to pad each row with between 1 and 7 bits. In summary - ideally, we would want to store eight boolean values per byte, but this is complicated by the fact that the row length isn't always divisible by 8, and the encoder serializes the array by row. Edit for clarification: *the PIL.Image object in mode "1" (binary or "bilevel" image) effectively represents a boolean array. In mode 1, the original image (in this case, the numpy array a) is thresholded to convert it to a binary image.
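A quick way to check this row-padding explanation numerically (a small sketch, using the same conversion as in the question):

import math
import numpy as np
from PIL import Image

def packed_size(rows, cols):
    a = np.random.rand(rows, cols)
    img = Image.fromarray(a.astype('uint8')).convert('1')
    expected = math.ceil(cols / 8) * rows   # bytes per (padded) row * number of rows
    return len(img.tobytes()), expected

print(packed_size(48, 84))  # (528, 528): 84 bits per row pad out to 11 bytes, times 48 rows
print(packed_size(84, 48))  # (504, 504): 48 bits per row is exactly 6 bytes, times 84 rows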
2023-03-04 04:47:27
1
python,python-3.x,web-scraping,web-crawler
1
75,634,048
In Python, Downloading Png and Jpg images
75,633,708
true
41
I am writing a script to download images from a certain website. The website contains jpg and png images. I was expecting the code to run normally. But the png images are taking a while to download (very slow) while the jpg images are quick. img_data = requests.get(image_url, headers=headers).content imagename = '' if image_url.endswith('.jpg'): imagename = str(product) + str(f"{imagevalue:03d}") + '.jpg' elif image_url.endswith('.png'): imagename = str(product) + str(f"{imagevalue:03d}") + '.png' with open(imagename, 'wb') as file: file.write(img_data) This is the code, working but slowly. Am I Missing something here?
1.2
1
1
It is not a problem with your code; rather, .png files are typically much larger than .jpg files for the same image, so downloading them simply takes longer.
2023-03-04 05:33:51
1
python,algorithm,math
5
75,635,687
Is there an efficient way to determine if a sum of floats will be order invariant?
75,633,851
false
275
Due to precision limitations in floating point numbers, the order in which numbers are summed can affect the result. >>> 0.3 + 0.4 + 2.8 3.5 >>> 2.8 + 0.4 + 0.3 3.4999999999999996 This small error can become a bigger problem if the results are then rounded. >>> round(0.3 + 0.4 + 2.8) 4 >>> round(2.8 + 0.4 + 0.3) 3 I would like to generate a list of random floats such that their rounded sum does not depend on the order in which the numbers are summed. My current brute force approach is O(n!). Is there a more efficient method? import random import itertools import math def gen_sum_safe_seq(func, length: int, precision: int) -> list[float]: """ Return a list of floats that has the same sum when rounded to the given precision regardless of the order in which its values are summed. """ invalid = True while invalid: invalid = False nums = [func() for _ in range(length)] first_sum = round(sum(nums), precision) for p in itertools.permutations(nums): if round(sum(p), precision) != first_sum: invalid = True print(f"rejected {nums}") break return nums for _ in range(3): nums = gen_sum_safe_seq( func=lambda :round(random.gauss(3, 0.5), 3), length=10, precision=2, ) print(f"{nums} sum={sum(nums)}") For context, as part of a programming exercise I'm providing a list of floats that model a measured value over time to ~1000 entry-level programming students. They will sum them in a variety of ways. Provided that their code is correct, I'd like for them all to get the same result to simplify checking their code. I do not want to introduce the complexities of floating point representation to students at this level.
0.039979
2
1
The easiest way is to create random integers, and then divide (or multiply) them all by the same power of 2. As long as the sum of the absolute values of the original integers fits into 52 bits, you can add the resulting floats in any order without any rounding errors.
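A sketch of that idea (the integer range and the divisor are arbitrary choices for illustration, so the values land around 1-5 rather than following the Gaussian in the question):

import random

def gen_sum_safe_seq(length: int, scale: int = 1024) -> list[float]:
    # each value is an integer divided by a power of two, so it is exactly representable;
    # every partial sum is also an integer over the same power of two, well inside 53 bits,
    # so the total is identical regardless of summation order
    return [random.randint(1000, 5000) / scale for _ in range(length)]

nums = gen_sum_safe_seq(10)
for _ in range(100):
    shuffled = random.sample(nums, len(nums))
    assert sum(shuffled) == sum(nums)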
2023-03-04 05:46:09
-1
python,yahoo-finance
2
75,633,948
File size is getting progressively larger as I continue to download data
75,633,897
false
82
I am downloading financial data using to_hdf and I have noticed that each file gets larger and larger as it keeps downloading. What is happening? The first file was saved as 223 KB and the most recent where I stopped (67) was saved as 14,609 KB. The following is the code (some sections that are irrelevant have been removed): import pandas as pd import datetime as dt import yfinance as yf from pandas.tseries.holiday import USFederalHolidayCalendar import yahoo_fin.stock_info as si from pathlib import Path import os.path def main(): end = dt.datetime.now() start = end + dt.timedelta(days=-5) dr = pd.date_range(start=start, end=end) cal = USFederalHolidayCalendar() holidays = cal.holidays(start=dr.min(), end=dr.max()) a = dr[~dr.isin(holidays)] # not US holiday b = a[a.weekday != 5] b = b[b.weekday != 6] for year in set(b.year): tmp = b[b.year == year] for week in set(pd.Index(tmp.isocalendar().week)): temp = tmp[pd.Index(tmp.isocalendar().week) == week] start = temp[temp.weekday == temp.weekday.min()][0] # beginning of week end = temp[temp.weekday == temp.weekday.max()][0] # ending of week # get list of all index tickers ticker_strings = si.tickers_sp500() data_dir = 'data' x = 1 tickers_dir = './tickers' Index = '^GSPC' # initialize list for the following f(x) Df_list = list() ticker_data(ticker_strings, start, end, Df_list, data_dir, x) print("Complete") def ticker_data(ticker_strings, start, end, Df_list, data_dir, x): # find values for individual stocks for ticker in ticker_strings: loc_start = start while loc_start <= end: period_end = loc_start + dt.timedelta(days=1) intra_day_data = yf.download(ticker, loc_start, period_end, period="1d", interval="1m") extra_day_data = yf.download(ticker, loc_start, period_end, period="1d", interval="1m", prepost=True) Df_list.append(intra_day_data) Df_list.append(extra_day_data) loc_start = loc_start + dt.timedelta(days=1) df = pd.concat(Df_list) # creates file name filename = end.strftime('%F') + " " + ticker + ".h5" # saves file name to folder df.to_hdf(os.path.join(data_dir, filename), mode='w', key='df') #df.to_csv(os.path.join(data_dir, filename)) print(x, ticker) x += 1 if __name__ == "__main__": main()
-0.099668
1
2
This could be because it is constantly downloading new data.
2023-03-04 05:46:09
1
python,yahoo-finance
2
75,633,982
File size is getting progressively larger as I continue to download data
75,633,897
true
82
I am downloading financial data using to_hdf and I have noticed that each file gets larger and larger as it keeps downloading. What is happening? The first file was saved as 223 KB and the most recent where I stopped (67) was saved as 14,609 KB. The following is the code (some sections that are irrelevant have been removed): import pandas as pd import datetime as dt import yfinance as yf from pandas.tseries.holiday import USFederalHolidayCalendar import yahoo_fin.stock_info as si from pathlib import Path import os.path def main(): end = dt.datetime.now() start = end + dt.timedelta(days=-5) dr = pd.date_range(start=start, end=end) cal = USFederalHolidayCalendar() holidays = cal.holidays(start=dr.min(), end=dr.max()) a = dr[~dr.isin(holidays)] # not US holiday b = a[a.weekday != 5] b = b[b.weekday != 6] for year in set(b.year): tmp = b[b.year == year] for week in set(pd.Index(tmp.isocalendar().week)): temp = tmp[pd.Index(tmp.isocalendar().week) == week] start = temp[temp.weekday == temp.weekday.min()][0] # beginning of week end = temp[temp.weekday == temp.weekday.max()][0] # ending of week # get list of all index tickers ticker_strings = si.tickers_sp500() data_dir = 'data' x = 1 tickers_dir = './tickers' Index = '^GSPC' # initialize list for the following f(x) Df_list = list() ticker_data(ticker_strings, start, end, Df_list, data_dir, x) print("Complete") def ticker_data(ticker_strings, start, end, Df_list, data_dir, x): # find values for individual stocks for ticker in ticker_strings: loc_start = start while loc_start <= end: period_end = loc_start + dt.timedelta(days=1) intra_day_data = yf.download(ticker, loc_start, period_end, period="1d", interval="1m") extra_day_data = yf.download(ticker, loc_start, period_end, period="1d", interval="1m", prepost=True) Df_list.append(intra_day_data) Df_list.append(extra_day_data) loc_start = loc_start + dt.timedelta(days=1) df = pd.concat(Df_list) # creates file name filename = end.strftime('%F') + " " + ticker + ".h5" # saves file name to folder df.to_hdf(os.path.join(data_dir, filename), mode='w', key='df') #df.to_csv(os.path.join(data_dir, filename)) print(x, ticker) x += 1 if __name__ == "__main__": main()
1.2
1
2
You are appending new data to Df_list at every iteration of for ticker in ticker_strings and you are saving all of it every time, which means that every file will also contain the previous files' data. You should use a variable local to the for ticker in ticker_strings loop instead of a list passed in as a parameter.
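Roughly, the relevant part of ticker_data would become something like this (a sketch reusing the question's imports; the list and the counter move inside the function so each file only holds one ticker's data):

def ticker_data(ticker_strings, start, end, data_dir):
    x = 1
    for ticker in ticker_strings:
        df_list = []  # fresh list for every ticker
        loc_start = start
        while loc_start <= end:
            period_end = loc_start + dt.timedelta(days=1)
            df_list.append(yf.download(ticker, loc_start, period_end, period="1d", interval="1m"))
            df_list.append(yf.download(ticker, loc_start, period_end, period="1d", interval="1m", prepost=True))
            loc_start = loc_start + dt.timedelta(days=1)
        df = pd.concat(df_list)  # only this ticker's data now
        filename = end.strftime('%F') + " " + ticker + ".h5"
        df.to_hdf(os.path.join(data_dir, filename), mode='w', key='df')
        print(x, ticker)
        x += 1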
2023-03-04 08:36:12
0
python,django,django-views,django-urls,django-sessions
2
75,634,789
How to separate user login session and admin login session in django
75,634,517
false
242
I have created small ecommerce website. User can register and login also created custom admin panel for admin which can product add, update and delete. User and Admin both URLS is different. problem is that when user login into website after I'm hit admin URLS is directly redirect to admin dashboard that I want to prevent. Here my panel app which can handle admin site link admin can add and update the product def Login_Page(request): if request.user.is_authenticated: return redirect('dashboard') if request.method == "POST": username = request.POST.get('username') password = request.POST.get('password') try: user = User.objects.get(username = username) except: messages.error(request,'User Does Not Exist !') try: vendor_authe = User_login.objects.get(user_auth=user, is_vendor_user='y',is_customer_user='n') user = authenticate(request, username= username, password = password) if user is not None: login(request, user) return redirect('dashboard') else: messages.error(request,'Username and Password Does Not Match. !') except: messages.error(request,'User not Found !') else: pass context = { } return render(request,'panel/login.html',context) Here my base app view.py which can handle user side login # Create your views here. def User_Login_Page(request): if request.user.is_authenticated: return redirect('home') if request.method == "POST": username = request.POST.get('username') password = request.POST.get('password') try: user = User.objects.get(username = username) except: messages.error(request,'User Does Not Exist !') try: user_authe = User_login.objects.get(user_auth=user, is_vendor_user='n',is_customer_user='y') user = authenticate(request, username= username, password = password) if user is not None: login(request, user) return redirect('home') else: messages.error(request,'Username and Password Does Not Match. !') except: messages.error(request,'User not Found !') else: pass context = { 'form_type':'user_login' } return render(request,'base/login.html', context) Here base app urls.py from django.contrib import admin from django.urls import path, include from . import views urlpatterns = [ path('user-login/', views.User_Login_Page, name="user_login"), path('user-registration/', views.User_Registration, name="user_registration"), path('user-logout/', views.User_Logout, name="user_logout"), path('', views.HomePage, name="home"), ] Here panel app urls.py from django.contrib import admin from django.urls import path, include from . import views urlpatterns = [ path('', views.Login_Page, name="login_page"), path('logout/', views.Vendor_logout, name="logout_page"), path('dashbord/', views.Dashboard_Page, name="dashboard"), ]
0
1
1
To prevent the user from accessing the admin dashboard after logging in, you can add a check in your Login_Page view to see if the user logging in is an admin or not. If the user is an admin, then redirect them to the admin login page instead of the admin dashboard, like so: def Login_Page(request): if request.user.is_authenticated: if request.user.is_staff: return redirect('admin:login') else: return redirect('dashboard') if request.method == "POST": username = request.POST.get('username') password = request.POST.get('password') try: user = User.objects.get(username=username) except: messages.error(request,'User Does Not Exist !') return redirect('login_page') try: vendor_authe = User_login.objects.get(user_auth=user, is_vendor_user='y',is_customer_user='n') user = authenticate(request, username=username, password=password) if user is not None: login(request, user) if user.is_staff: # User is an admin, redirect to admin login page return redirect('admin:login') else: # User is a regular user, redirect to dashboard return redirect('dashboard') else: messages.error(request,'Username and Password Does Not Match. !') except: messages.error(request,'User not Found !') context = {} return render(request,'panel/login.html', context) Note: Function based views are generally written in snake_case not PascalCase, so it would be better to name it as login_page and user_login_page instead of Login_Page and User_Login_Page respectively.
2023-03-04 11:48:32
2
python,pythonanywhere,youtube-dl,pafy
1
75,641,696
Pafy module causing error in PythonAnywhere
75,635,467
true
90
I am using pafy in a Flask app. It is working fine on my local machine. But when I am trying to deploy and run on PythonAnywhere, it is throwing an error. Upon execution, it is showing the following ewrror: youtube_dl.utils.ExtractorError: Unable to download API page: <urlopen error Tunnel connection failed: 403 Forbidden> (caused by URLError(OSError('Tunnel connection failed: 403 Forbidden'))) NO MATCH During handling of the above exception, another exception occurred: NO MATCH Traceback (most recent call last): File "/home/Philomath/.local/lib/python3.8/site-packages/pafy/backend_youtube_dl.py", line 40, in _fetch_basic self._ydl_info = ydl.extract_info(self.videoid, download=False) File "/home/Philomath/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 808, in extract_info return self.__extract_info(url, ie, download, extra_info, process) File "/home/Philomath/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 824, in wrapper self.report_error(compat_str(e), e.format_traceback()) File "/home/Philomath/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 628, in report_error self.trouble(error_message, tb) File "/home/Philomath/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 598, in trouble raise DownloadError(message, exc_info) youtube_dl.utils.DownloadError: ERROR: Unable to download API page: <urlopen error Tunnel connection failed: 403 Forbidden> (caused by URLError(OSError('Tunnel connection failed: 403 Forbidden'))) I have also removed the lines for dislike count in backend_yoututbe_dl in PythonAnywhere (from bash). Still, no use.
1.2
1
1
That will not work on PythonAnywhere. Any youtube video downloads use unofficial methods and so PythonAnywhere will not add youtube to the allowlist for free accounts.
2023-03-04 13:44:51
1
python,odoo,odoo-13
2
75,637,966
Custom controller very slow for some users, really slow for others
75,636,041
false
74
We have built a custom controller for a customer which passed products data and loads their products in a custom url. The controllers are basically like this: class CustomProducts(http.Controller): @http.route('/custom', type='http', auth='public', website=True, methods=['GET']) def render_custom_products(self, **kw): if kw.get('order'): if kw.get('order') != "": order_by = kw.get('order') else: order_by = 'list_price desc' products = http.request.env['product.product'].sudo().search([('categ_id.name', 'ilike', 'Custom'), ('is_published', '=', 'true'), ('active', '=', 'true')], order= order_by, limit = 150) return http.request.render('custom_module.custom_products', { # pass products details to view 'products': products, }) EDIT: This has worked good for over 2 years but suddenly the page is very slow. The weird thing is that the route is very slow for some visitors and very fast (like before) for some. What could be the issue? Postgres sometimes gives this error: Could not serialize access due to concurrent update SELECT * FROM website_visitor where id = XX FOR NO KEY UPDATE NOAWAIT The customer only have between 30-50 products for sale at once.
0.099668
2
1
You should consider adding logging to note the times. http.request.env['product.product'].sudo().search seems to be the only expensive call you are making, so there can be only two likely causes of the issue: either your search has slowed down, or you have designed things in a way such that your system is not processing requests in parallel. You could try using FastAPI or some other framework to support async, or try to diagnose your search issue, depending on what you find.
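As a starting point for the logging suggestion, timing the search call would show whether the ORM query itself is what got slow (a rough sketch with the standard logging module; the order handling is condensed here and also covers the case where no order parameter is sent, which the original snippet left undefined):

import logging
import time

_logger = logging.getLogger(__name__)

class CustomProducts(http.Controller):
    @http.route('/custom', type='http', auth='public', website=True, methods=['GET'])
    def render_custom_products(self, **kw):
        order_by = kw.get('order') or 'list_price desc'
        t0 = time.monotonic()
        products = http.request.env['product.product'].sudo().search(
            [('categ_id.name', 'ilike', 'Custom'),
             ('is_published', '=', 'true'),
             ('active', '=', 'true')],
            order=order_by, limit=150)
        _logger.info("custom product search took %.3fs (order=%s)", time.monotonic() - t0, order_by)
        return http.request.render('custom_module.custom_products', {'products': products})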
2023-03-04 16:05:51
1
python,coding-style,naming-conventions,readability
1
75,636,977
Are there good naming conventions to get around long variable names, without having to rely on comments for better readability?
75,636,882
false
114
Basically I always try my best to avoid comments whenever possible. I watched many videos from Uncle Bob and regarding the things he wants to express, there are many things I agree to. He once mentioned that ideally speaking, whenever we tend to use comments to give more information on why we do things the way we are doing inside a certain function, then that is the point where we maybe should reflect on the quality of the code we actually have written. Of course I know that the background of certain things in bigger systems can be really hard to explain just by the code itself without having any comments inside. But still I feel like that maybe I just don't know enough yet and certain things can indeed be expressed with the actual code itself (given I use right naming conventions etc.) Here is a concrete example: I am working on a Telegram Bot which needs to retrieve data from messages inside channels and then forwards to other components of the rest of the system. Due to restrictions in that project I am limited to do this by only using the Selenium Webdriver. Currently I am really having trouble with chosing the right naming convention. I found that at the moment, I can check if there are urnead messages for a specific Telegram Channel by its clickable WebElement object. It has a string member variable "text" which contains 3 lines, given there is no unread message available. If there however are unread messages available, the member variable "text" will have 4 lines. The additional line in this case contains the number of unread messages. So the last thing I would like to do is writing a function like this: def isUnreadMessageAvailable(self): if len(self.__buttonElement.text.splitlines()) <= 3: return False return True What I certainly don't like in the first place is that hard coded "3" there. I may need to use that exact threshold in several other places inside other python files. Also this "3" may change at any time, so when adapting to a new value for the threshold, I obviously don't want to edit it in 100 diferent places. Instead I'd rather use something like this: def isUnreadMessageAvailable(self): if len(self.__buttonElement.text.splitlines()) <= Constants.AMOUNT_OF_LINES_IF_MESSAGES_READ: return False return True As you can see, the name of the variable I replaced the "3" with, is really long. I mean it contains six words. Recently I feel that I am struggling more with having good names for variables, files, and functions rather than writing the logic to get my program to do what I want it to do. I apologize for this long question, but I can't come with a way to provide readability to the code without using a variable which contains less words in that case. Any opinion of your experiences are appreciated. Maybe some of your opinions/solutions can help me in the future when I face something similiar.
0.197375
1
1
Your code is for three audiences: the machine running it, anybody else who has to work on it, and your future self. Once your code works, you are its most important audience. Write your code so your future self can understand it. The method you showed us has some assumptions baked into it. There's something special about your three lines, or two, or seven, or whatever in the future. A method header comment is appropriate here, Uncle Bob notwithstanding. It should explain what's special about three lines, and briefly say why or give a reference explaining why. Long, explanatory names are also helpful to your future self. One reason for that: they let you search all your code for references to them. And you can place a comment by the declaration of any named object to explain even further. So use both good declaration and method comments and meaningful names. Keep this in mind: it's twice as hard to debug your code as it is to write it. So if you employ all your cleverness writing it, you'll have a much harder time debugging it. Writing it for your future self helps cope with the difficulty of debugging. Modern IDEs, editors, and runtimes don't penalize long names. IDEs show you declaration comments if you hover over names. The two work together nicely.
2023-03-04 18:00:59
1
python,fastapi
2
75,638,414
The rest of the python code doesn't work with FastAPI
75,637,585
false
102
I want to make a discord bot that would work with data sent from a web application via a POST request. For that I need to make my own API containing that POST endpoint with FastAPI. My question is: how can I make my API work with the rest of the code? Consider this example: from fastapi import FastAPI, Request from pydantic import BaseModel app = FastAPI() dict = {"name": "","description": ""} class Item(BaseModel): name: str description: str @app.post("/item") async def create_item(request: Request, item: Item): result = await request.json() dict["name"] = result["name"] print(dict) print(dict) When I run the API, type the values in and print dict for the first time, it outputs something like this: {'name': 'some_name', 'desc': 'some_desc'} But when I run my file as python code, only {'name': '', 'desc': ''} gets printed out. I thought that after I type in values in dict on my API page (https://localhost:800/docs), python would output the exact values I typed, but it didn't happen. What do I do?
0.099668
1
1
When you call the api, you essentially call the create_item function, which would manipulate your dictionary. When you directly run your python code, you don't call the create_item function, therefore the dictionary isn't manipulated, and you get the original dictionary back, which is {'name': '', 'desc': ''}. When you run your server, it runs in it's own instance. When you run your file again with python, it's another instance. They're two seperate things that don't know each other's "dict" values. If you want to know about when your api manipulates something (if you want your data to persist regardless of your python instances), you should use a database and query said database (you can even use a .json file for that purpose)
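A minimal sketch of the ".json file" idea, so whatever the API receives survives beyond the server process and can be read by another script such as the bot (the file name and payload shape are arbitrary choices here):

import json
from pathlib import Path

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
STORE = Path("items.json")

class Item(BaseModel):
    name: str
    description: str

@app.post("/item")
async def create_item(item: Item):
    STORE.write_text(json.dumps(item.dict()))  # persist to disk instead of a module-level dict
    return item.dict()

# any other process (e.g. the discord bot) can then read the same file:
# data = json.loads(Path("items.json").read_text())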
2023-03-04 18:21:56
0
python
1
75,637,778
Convert integer to decimal and recover places after zero
75,637,722
false
121
I have array data as below: a = np.array([1.41607, 2.17922, -14.7047, -1852.51, -2713.39, -165.025]) a is a decimal number and I want to convert a to an integer number such as below: a = [1, 2, -15, -1853, -2713, -165] then after I convert a to an integer number, I want to restore the original data. I have tried using the code below, but I can't restore data to the original data. import numpy as np a = np.array([1.41607, 2.17922, -14.7047, -1852.51, -2713.39, -165.025]) # Define the number of decimal places to keep decimal_places = 0 # Round the values to the specified number of decimal places rounded_a = np.round(a, decimals=decimal_places) # Convert the rounded values to integers int_a = (rounded_a * 10**decimal_places).astype(int) # Convert the integers back to the original decimal values float_a = int_a / 10**decimal_places # Print the original values and the recovered values print("Original values:", a) print("DecimalToInteger:", int_a) print("Restored values:", float_a) Results Original values: [ 1.41607 2.17922 -14.7047 -1852.51 -2713.39 -165.025 ] DecimalToInteger: [ 1 2 -15 -1853 -2713 -165] Restored values: [ 1. 2. -15. -1853. -2713. -165.]
0
1
1
This is impossible. Once you convert a float to an integer, there is no going back. You can convert the integer back into a float, but you won't get back any of the decimal places you had before. This is because you're using a function that loses information. In your case, you used np.round(). np.round() tells the computer to round the number to the closest integer, then throw away all the information left in the places after the decimal point. Because you threw it away, there's no getting it back.
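If you need both views of the data, the practical consequence is simply to keep the original array around and hand out the rounded copy, e.g.:

import numpy as np

a = np.array([1.41607, 2.17922, -14.7047, -1852.51, -2713.39, -165.025])
int_a = np.round(a).astype(int)  # lossy view for whoever needs integers: [1 2 -15 -1853 -2713 -165]

# "restoring" just means going back to the array you kept; nothing can be recovered from int_a itself
restored = a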
2023-03-04 22:27:40
0
python,linux,pip,fastapi,centos7
2
75,686,832
Attempting to run a FastAPI application on CentOS 7 but getting a module 'asyncio' error
75,639,063
true
176
I am attempting to run a simple FastAPI application on CentOS 7 but getting some errors. I will include some more details for context: Python Version - 3.6.8 pip version - 9.0.3 I am running the application with this command: python3 -m uvicorn main:app I keep getting this error: Traceback (most recent call last): File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.6/site-packages/uvicorn/__main__.py", line 4, in <module> uvicorn.main() File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1128, in __call__ return self.main(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1053, in main rv = self.invoke(ctx) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 1395, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/local/lib/python3.6/site-packages/click/core.py", line 754, in invoke return __callback(*args, **kwargs) File "/usr/local/lib/python3.6/site-packages/uvicorn/main.py", line 435, in main run(app, **kwargs) File "/usr/local/lib/python3.6/site-packages/uvicorn/main.py", line 461, in run server.run() File "/usr/local/lib/python3.6/site-packages/uvicorn/server.py", line 67, in run return asyncio.run(self.serve(sockets=sockets)) AttributeError: module 'asyncio' has no attribute 'run' I was initially getting this error - /home/centos/fast_api/fastapi-tutorial/python3-venv/bin/python3: No module named uvicorn but after installing uvicorn via pip3 install uvicorn, I now get the "module 'asyncio' error" Any help would be great I have tried enabling a python virtual environment on the server but I still get the same error. Could this be an issue with the Python version?
1.2
1
1
Update on this. I was able to get this working by upgrading both python and pip. I upgraded python to 3.9 Upgraded pip using - /usr/local/bin/python3.9 -m pip install --upgrade pip Thanks for the help.
2023-03-05 07:00:16
0
python,tensorflow,visual-studio-code,jupyter-notebook,anaconda
1
76,205,291
jupyter notebook can't see GPU while it is available in conda environment
75,640,662
false
240
I'm trying to use tensorflow on GPU with jupyter notebook. I make SSH connection to a remote server with VScode and build up an anaconda environment(called 'myspace') where tensorflow(version 2.6.0) is successfully installed: (myspace) user@server:~$ python Python 3.6.13 |Anaconda, Inc.| (default, Jun 4 2021, 14:25:59) [GCC 7.5.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow as tf >>> print("Num GPUs:", len(tf.config.experimental.list_physical_devices('GPU'))) Num GPUs: 1 However, when I open a jupyter notebook and select the same conda environment as its kernel, the GPU can't be seen: Num GPUs: 0 I suppose that the CUDA toolkit and CUDNN library are successfully installed and I attach the output of tf.sysconfig.get_build_info() here: OrderedDict([('cpu_compiler', '/usr/bin/gcc-5'), ('cuda_compute_capabilities', ['sm_35', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'compute_80']), ('cuda_version', '11.2'), ('cudnn_version', '8'), ('is_cuda_build', True), ('is_rocm_build', False), ('is_tensorrt_build', True)]) I've read many related posts but none of them worked. I tried to get the os.environ["LD_LIBRARY_PATH"] and got a keyerror. I suppose it might be a problem of system path but I already set the paths in ~/.bashrc: export PATH=/home/user/cuda-11.2/bin:${PATH} export LD_LIBRARY_PATH=/home/user/cuda-11.2/lib64:${LD_LIBRARY_PATH}
0
1
1
The best way I could solve this problem was to switch to TensorFlow's GPU-enabled Jupyter image instead of the stock Jupyter image. From there you can start it from the command line: docker run -d -p 8888:8888 --name jupyter --mount type=bind,source="$(pwd)",target=/tf --gpus all -e CHOWN_EXTRA="/home/jovyan/work" --user root -e GRANT_SUDO=yes -e NB_GID=100 -e NB_USER=jovyan tensorflow/tensorflow:latest-gpu-jupyter
2023-03-05 11:15:20
0
python,regex,regex-group
4
75,642,053
Regex: How to match a string multiple times while capturing groups differently
75,641,828
false
79
I have a string that looks something like this: AB And I want to capture it using a regex that looks like this: (?P<GRP1>(A|D))?(?P<GRP2>(A|C))?B Is there a python function that would produce two matches for the string? The first by capturing A as a part of GRP1 and the second by capturing it as a part of GRP2? I tried functions like re.fullmatch and re.findall but all of them only capture A as part of GRP1. The function I'm looking for would produce one match when matching against DB or CB, but two matches for AB, one for each group that A can belong to.
0
2
1
That is not what regex is designed for. Regex is about matching a span of the input against a pattern. So any span of the input is either a match or it is not. The span is "consumed" in the process, that is, the same span cannot match the pattern twice. On the other hand a pattern can match multiple non-overlapping spans, which is what findall is about. There are some advanced regex techniques that do not consume any characters while matching (lookahead/lookbehind), but those inherently also do not capture any input, which makes them useless for you. Usually this is the point where one should start thinking about not using regex, or at least not using only regex. I see two possible solutions: (1) Use two regex patterns, one for each GRP. This would make the overall result a tuple representing 0-2 matches, i.e. either none, GRP1, GRP2 or GRP1+GRP2. (2) Use one regex pattern but restructure it like this (?P<GRP1>(D))?(?P<GRP2>(C))?(?P<GRP1_2>(A))?B i.e. have a separate group for each thing that may occur, with GRP1_2 representing the GRP1&GRP2 case. So, for example, if your goal is to count the GRP1 and GRP2 matches, then you would count GRP1_2 toward both.
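A sketch of the second option with Python's re module (the counting at the end is just one way to use the separate groups):

import re

pattern = re.compile(r'(?P<GRP1>D)?(?P<GRP2>C)?(?P<GRP1_2>A)?B')

def groups_hit(s):
    m = pattern.fullmatch(s)
    grp1 = m.group('GRP1') is not None or m.group('GRP1_2') is not None
    grp2 = m.group('GRP2') is not None or m.group('GRP1_2') is not None
    return grp1, grp2

print(groups_hit('AB'))  # (True, True)  - the shared 'A' counts toward both groups
print(groups_hit('DB'))  # (True, False)
print(groups_hit('CB'))  # (False, True)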
2023-03-05 16:50:42
0
python,json,django,sockets,django-channels
1
75,644,081
How to implement consumer for voice chat app on Django?
75,643,850
false
87
I am currently working on a voice chat app as a personal project and this app will be like Discord, where users can hop into a chat room and start talking to people in that room. I am only going to make it voice only for now (I will implement text messaging later). I have been looking at examples on Github and I am having a little trouble with how to make the Consumer. Here is what I have right now: import json from asgiref.sync import async_to_sync from channels.generic.websocket import WebsocketConsumer class VoiceConsumer(WebsocketConsumer): def connect(self): self.room_name = self.scope['url_route']['kwargs']['room_name'] self.room_group_name = "chat_%s" % self.room_name # join the room async_to_sync(self.channel_layer.group_add)( self.room_group_name, self.channel_name ) self.accept() def disconnect(self, closed_code): async_to_sync(self.channel_layer.group_discard)( self.room_group_name, self.channel_name ) def receive(self, text_data): text_data_json = json.loads(text_data) # Ask about this, how to receive voice ? voice_message = text_data_json['message'] async_to_sync(self.channel_layer.group_send)( self.room_group_name, {'type' : 'chat_message', 'message': voice_message} ) As you see, I am confused about the text_data_json['message'] part. This is based off an example I have seen where someone implements messages, but I want to implement voice only. What do I change about this to where I can implement voice only ? text_data_json = json.loads(text_data) # Ask about this, how to receive voice ? voice_message = text_data_json['message'] async_to_sync(self.channel_layer.group_send)( self.room_group_name, {'type' : 'chat_message', 'message': voice_message} ) I haven't seen any examples that do this for voice only.
0
1
1
Send the audio bytes as a Base64-encoded string inside the JSON message.
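A rough sketch of what that could look like in the consumer (the 'audio' key and the 'voice_message' event type are illustrative names, not a Channels convention):

import base64
import json

# methods inside VoiceConsumer:
def receive(self, text_data):
    payload = json.loads(text_data)
    audio_b64 = payload['audio']               # Base64 string produced on the client
    audio_bytes = base64.b64decode(audio_b64)  # decode only if the server needs the raw bytes
    async_to_sync(self.channel_layer.group_send)(
        self.room_group_name,
        {'type': 'voice_message', 'audio': audio_b64}  # keep it Base64 for forwarding
    )

def voice_message(self, event):
    # handler matching the 'type' above: relay the chunk to every client in the room
    self.send(text_data=json.dumps({'audio': event['audio']}))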
2023-03-05 20:10:10
1
python,gtk,gtk3,pygobject
1
76,044,172
Python program crashes when I assign tags to a Gtk.TextView
75,645,044
false
42
I am trying to write a program that will take data from a Gtk.TextView and assign tags for it. Basically a markdown formatter. But whenever I add a * * set for itallics the entire program crashes. This is the code I had. It's not the best looking code because I was experimenting with the idea. # Imports import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk, Pango class MarkdownFormatter: def __init__(self, textview): self.textview = textview self.text_buffer = self.textview.get_buffer() self.text_buffer.connect('insert-text', self.on_key_insert) self.tag_bold = self.text_buffer.create_tag('bold',weight=Pango.Weight.BOLD) self.tag_italic = self.text_buffer.create_tag('italic',style=Pango.Style.ITALIC, ) self.tag_underline = self.text_buffer.create_tag('underline',underline=Pango.Underline.SINGLE) self.bold= False self.italic = False self.italic_start_iter = Gtk.TextIter() self.italic_end_iter = Gtk.TextIter() self.underline = False def on_key_insert(self, buffer, inter, text, iteg): end= self.text_buffer.get_end_iter() start= self.text_buffer.get_start_iter() all_text = self.text_buffer.get_text(end, start, True) self.text_buffer.remove_all_tags(start, end) count = 0 self.italic = False for d in all_text: count+= 1 if '*' in d: if self.italic == False: self.italic = True self.italic_start_iter.set_offset(count) elif self.italic: self.italic = False self.italic_end_iter.set_offset(count) self.text_buffer.apply_tag(self.tag_italic, self.italic_start_iter, self.italic_end_iter) text = ''' *I is good.* she is smart. "Hi" he said. ''' if __name__ == '__main__': def quit(*args): Gtk.main_quit() window = Gtk.Window.new(Gtk.WindowType.TOPLEVEL) window.set_title('Markdown formatter') view = Gtk.TextView() view.get_buffer().set_text(text) formater = MarkdownFormatter(view) window.set_default_size(600, 400) window.add(view) window.show_all() window.connect('delete-event', quit) Gtk.main() Run the program and then when you try to type anything it'll crash (It only tries to format the text when you type.
0.197375
1
1
I found the answer. I have no idea why, so if someone can figure out why I'd be very happy. Basically, if you replace the set_offset calls with itername = buffer.get_iter_at_offset(count), it will run fine. My only guess is that it's something to do with reusing buffers, or there's an issue with set_offset.
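Spelled out inside on_key_insert, that change looks roughly like this (a sketch of only the italic-handling part of the loop; the pre-built Gtk.TextIter members from __init__ are no longer needed):

if '*' in d:
    if not self.italic:
        self.italic = True
        italic_start_iter = self.text_buffer.get_iter_at_offset(count)
    else:
        self.italic = False
        italic_end_iter = self.text_buffer.get_iter_at_offset(count)
        self.text_buffer.apply_tag(self.tag_italic, italic_start_iter, italic_end_iter)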
2023-03-05 20:26:16
0
python,syntax-error,srt,webvtt,openai-whisper
3
75,645,171
On Whisper API, when I try to use a python script for transcribing audio files in bulk, I can't get the correct response_format ('srt' or 'vtt') work
75,645,133
false
871
I'm using this code for connecting to Whisper API and transcribe in bulk all mp3 in a folder to both srt and vtt: import requests import os import openai folder_path = "/content/audios/" def transcribe_and_save(file_path, format): url = 'https://api.openai.com/v1/audio/transcriptions' headers = {'Authorization': 'Bearer MyToken'} files = {'file': open(file_path, 'rb'), 'model': (None, 'whisper-1'), 'response_format': format} response = requests.post(url, headers=headers, files=files) output_path = os.path.join(folder_path, os.path.splitext(filename)[0] + '.' + format) with open(output_path, 'w') as f: f.write(response.content.decode('utf-8')) for filename in os.listdir(folder_path): if filename.endswith('.mp3'): file_path = os.path.join(folder_path, filename) transcribe_and_save(file_path, 'srt') transcribe_and_save(file_path, 'vtt') else: print('mp3s not found in folder') When I use this code, I'm getting the following error: "error": { "message": "1 validation error for Request\nbody -> response_format\n value is not a valid enumeration member; permitted: 'json', 'text', 'vtt', 'srt', 'verbose_json' (type=type_error.enum; enum_values=[<ResponseFormat.JSON: 'json'>, <ResponseFormat.TEXT: 'text'>, <ResponseFormat.VTT: 'vtt'>, <ResponseFormat.SRT: 'srt'>, <ResponseFormat.VERBOSE_JSON: 'verbose_json'>])", "type": "invalid_request_error", "param": null, "code": null } I've tried with different values, but either don't work or I'm only receiving the transcription as a object in plain text, but no srt or vtt. I'm expecting to get srt and vtt files in the same folder as where audios are Thanks, Javi
0
1
1
I am not sure about the whisper api, but you seem to be using an already existing python function as a parameter name. Perhaps this could be a reason why it is not working, as the function format is being used when calling the endpoint instead of the parameter you passed in. Try changing the parameter name to something other than format and change the value being used for response_format.
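Concretely, the rename could look like this (only the parameter name changes, plus deriving the output name from file_path instead of the loop variable; whether the rename alone satisfies the API is untested here):

def transcribe_and_save(file_path, fmt):
    url = 'https://api.openai.com/v1/audio/transcriptions'
    headers = {'Authorization': 'Bearer MyToken'}
    files = {'file': open(file_path, 'rb'),
             'model': (None, 'whisper-1'),
             'response_format': fmt}
    response = requests.post(url, headers=headers, files=files)
    base = os.path.splitext(os.path.basename(file_path))[0]
    with open(os.path.join(folder_path, base + '.' + fmt), 'w') as f:
        f.write(response.content.decode('utf-8'))

If that alone doesn't help, it may also be worth passing the field the same way as model, i.e. 'response_format': (None, fmt), so requests sends it as a plain form field rather than a file part - but that is a guess, not something the error message confirms.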
2023-03-05 23:39:25
1
python,django,render
2
75,646,083
Django render third argument
75,646,043
false
87
I am confused with Django rendering. I usually pass a context dictionary as the third argument: return render(request,'zing_it/home.html', context=my_playlists) However, in the code below the third argument uses the list name both as a string key and as a variable. Can you please help me understand what "my_playlists" (the string) and the second my_playlists (the variable) are, and why it is written this way? from django.shortcuts import render my_playlists=[ {"id":1,"name":"Car Playlist","numberOfSongs":4}, {"id":2,"name":"Coding Playlist","numberOfSongs":2} ] def home(request): return render(request,'zing_it/home.html',{"my_playlists":my_playlists})
0.099668
1
1
context must be a dictionary. It cannot be a list, so the first version won't work assuming my_playlist is the same in both examples. Your second version works because it passes a dictionary correctly.
2023-03-06 06:30:11
5
python,pytest,flake8
1
75,651,823
How can i resolve flake8 "unused import" error for pytest fixture imported from another module
75,647,682
false
615
I wrote a pytest fixture in a fixtures.py file and use it in my main_test.py. But I am getting this error from flake8: F401 'utils_test.backup_path' imported but unused for this code: @pytest.fixture def backup_path(): ... from fixtures import backup_path def test_filename(backup_path): ... How can I resolve this?
0.761594
3
1
generally you should not do this: making fixtures available via import side-effects is an unintentional implementation detail of how fixtures work and may break in a future version of pytest. if you want to continue doing so, you can use # noqa: F401 on the import, telling flake8 to ignore the unused import (though the linter is telling you the right thing here!). the supported way to make reusable fixtures is to place them in a conftest.py which is in a directory above all the tests that need it; tests will have these fixtures "in scope" automatically. if that file is getting too large, you can write your fixtures in a plugin module and add that plugin module via pytest_plugins = ['module.name'] in your conftest.py. disclaimer: I'm the current flake8 maintainer, I'm also a pytest core dev
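A minimal sketch of the conftest.py approach (the fixture body here is illustrative, since the original one is elided):

# conftest.py  (next to, or above, main_test.py)
import pytest

@pytest.fixture
def backup_path(tmp_path):
    return tmp_path / "backup"

# main_test.py -- no import needed, pytest injects the fixture by name
def test_filename(backup_path):
    assert backup_path.name == "backup"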