Dataset schema (for string columns Min/Max are string lengths; for is_accepted the value column gives the number of classes). Each record below lists these fields in this order, one value per line.

Column           Type     Min     Max
CreationDate     string   19      19
Users Score      int64    -3      17
Tags             string   6       76
AnswerCount      int64    1       12
A_Id             int64    75.3M   76.6M
Title            string   16      149
Q_Id             int64    75.3M   76.2M
is_accepted      bool     2 classes
ViewCount        int64    13      82.6k
Question         string   114     20.6k
Score            float64  -0.38   1.2
Q_Score          int64    0       46
Available Count  int64    1       5
Answer           string   30      9.2k
2023-03-06 09:07:38
2
python,tensorflow,keras,early-stopping,auto-keras
1
75,675,359
tf keras autokeras with early stopping returns empty history
75,648,889
true
128
I am trying different models for the same dataset, being autokeras.ImageClassifier one of them. First I go for img_size = (100,120,3) train_dataset = get_dataset(x_train, y_train, img_size[:-1], 128) valid_dataset = get_dataset(x_valid, y_valid, img_size[:-1], 128) test_dataset = get_dataset(x_test, y_test, img_size[:-1], 128) For getting the dataset with a predefined function and then I fit the model with an early stopping callback: # - Crear la red model = ak.ImageClassifier(overwrite=True, max_trials=1, metrics=['accuracy']) # - Entrena la red early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=2) #El resto de valores por defecto history = model.fit(train_dataset, epochs=10, validation_data=valid_dataset, callbacks=[early_stop]) # - Evalúa la red model.evaluate(test_dataset) The problem is that when train stops because of the callback, history is None type, what means is an empty object. I have not been able to find anything similar in the internet, for everyone it seems to work properly. I know the problem is with the callback because I fit the model without any callback it works properly. The output when the train is ended by the callback is this one: Trial 1 Complete [00h 13m 18s] val_loss: 4.089305400848389 Best val_loss So Far: 4.089305400848389 Total elapsed time: 00h 13m 18s WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _update_step_xla while saving (showing 3 of 3). These functions will not be directly callable after loading. WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function. WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.1 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.2 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.3 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.4 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.5 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.6 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.7 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.8 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.9 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.10 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.11 WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer._variables.12
1.2
2
1
In case someone gets here looking for an answer, it seems to be a problem with the current version of autokeras and the callbacks.
2023-03-06 11:55:51
0
python,django,rest
1
75,650,698
Unittesting DRF Serializer validators one by one
75,650,569
false
43
We have an example Serializer class we'd like to test: from rest_framework import serializers class MySerializer(serializers.Serializer): fieldA = serializers.CharField() fieldB = serializers.CharField() def validate_fieldA(self,AVal): if len(AVal) < 3: raise serializers.ValidationError("AField must be at least 3 characters long") return AVal def validate_fieldB(self,BVal): if not len(BVal) < 3: raise serializers.ValidationError("BField must be at least 3 characters long") return BVal This is a greatly simplified scenario, but it should do. We want to write unittests for this Serializer class. My friend argues we should test using the .is_valid() method, like so class TestMySerializer(unittest.TestCase): def test_validate_fieldA_long_enough(self): ser = MySerializer(data={"fieldA":"I'm long enough","fieldB":"Whatever"}), self.assertTrue(ser.is_valid()) self.assertEqual(ser.validated_data["fieldA"],"I'm long enough") def test_validate_fieldA_too_short(self): ser = MySerializer(data={"fieldA":"x","fieldB":"Whatever"}) self.assertFalse(ser.is_valid()) #similarly tests for fieldB ... I argue that unittests are supposed to be atomic and test only one "thing" at a time. By using the .is_valid() method we run every validator in the class instead of just the one we want to test. This introduces an unwanted dependency, where tests for fieldA validators may fail if there's something wrong with fieldB validators. So instead I would write my tests like so: class TestMySerializer(unittest.TestCase): def setUp(self): self.ser = MySerializer() def test_fieldA_validator_long_enough(self): self.assertEqual(self.ser.validate_fieldA("I'm long enough"),"I'm long enough") def test_fieldA_validator_too_short(self): with self.assertRaises(serializers.ValidationError) as catcher: self.ser.validate_fieldA('x') self.assertEqual(str(catcher.exception),"AField must be at least 3 characters long") #same for fieldB validators ... Which approach is better and why?
0
1
1
You should not, in my opinion, specifically test the validation of your overly long chain. It makes no sense, because you are not the author of the code that performs this validation. You must test your serializer and its various validation and invalidation cases. The maximum length validator was written in Django Rest Framework and has already been tested. The unit test that you write verifies the validation of specific input data. So, in my opinion, you should call the method that is typically called for this purpose, which in your case is is_valid(). You could also call run_validators(), but that would only test the Django Rest Framework's own functionality, which is unnecessary in your case (to be clear, there are cases where testing an included library is relevant, just not here in my view).
2023-03-06 16:37:33
2
python,machine-learning,h2o
1
75,659,358
Storing H2o models/MOJO outside the file system
75,653,463
true
41
I'm investigating the possibility of storing MOJOs in cloud storage blobs and/or a database. I have proof-of-concept code working that saves the MOJO to a file then loads the file and stores to the target (and vice-versa for loading), but I'd like to know if there's any way to skip the file step? I've looked into python's BytesIO, but since the h2o mojo APIs all require a file-path I don't think I can use it.
1.2
1
1
It's possible using H2O's REST API. Have a look at model.download_mojo() for reference; it gets the model from the backend and then persists it using the _process_response() method. You can have a look at h2o.upload_mojo() for the uploading part.
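A minimal round-trip sketch of that idea, assuming a trained H2O model object named model and an h2o version that provides h2o.upload_mojo(); it still goes through a temporary directory rather than the raw REST calls, so treat it as an illustration, not the answer's exact implementation:

    import os
    import tempfile
    import h2o

    def mojo_to_bytes(model):
        # download_mojo() writes a zip to disk; keep it in a throwaway temp dir
        with tempfile.TemporaryDirectory() as tmp:
            mojo_path = model.download_mojo(path=tmp)
            with open(mojo_path, "rb") as f:
                return f.read()   # raw bytes, ready for a blob store or DB column

    def bytes_to_model(mojo_bytes):
        # write the bytes back to a temp file and hand the path to h2o.upload_mojo()
        with tempfile.TemporaryDirectory() as tmp:
            mojo_path = os.path.join(tmp, "model.zip")
            with open(mojo_path, "wb") as f:
                f.write(mojo_bytes)
            return h2o.upload_mojo(mojo_path)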
2023-03-06 18:38:13
1
python,pygsheets
1
75,656,023
pygsheets does not modify Date Format
75,654,608
true
96
I'm trying to convert a date in a Google Spreadsheet column from 3/3/2023 to Friday March 3, 2023 with pygsheets. The following code: client = pygsheets.authorize(service_account_file="credentials.json") test_sheet = client.open(titles[0]) test_worksheets = test_sheet.worksheets() active = test_worksheets[0] model_cell = pygsheets.Cell("A1") model_cell.set_text_format("fontSize",18) model_cell.set_vertical_alignment(pygsheets.VerticalAlignment.MIDDLE) model_cell.set_number_format(pygsheets.FormatType.DATE, 'dddd+ mmmm yyy') pygsheets.DataRange('A2', 'A', worksheet=active).apply_format(model_cell) successfully changes the fontSize and VerticalAlignment attributes but does not change the date format. What is wrong with the code? UPDATE: It seems there are two things at play here. First, my date format string isn't what I wanted and so it's possible that gsheets wasn't able to interpret it and just ignored. I'm skeptical that's the case but it's possible. The format string I need to use is 'dddd", "mmmm" "d", "yyyy'. Second and more importantly, I noticed that after I move the date values from one column to another there is a single quote (or possibly a tick mark) at the start of the date string. I remove this quote and the format changes. Seems like pygsheets is adding a tick mark at the beginning of non-numerical dates (for example 3/3/2023) i'm guessing to preserve the original formatting. But when you call the batch updater it doesn't remove the tick mark. I'm not entirely sure how to get around this but at least I know what I need to get around now.
1.2
1
1
This almost seems like it's a bug but it's maybe more of a feature request or at least an update to the documentation. What I didn't mention in my question was that before I tried to modify the format of the column, I moved the values from another column into column A. In the process, it seems, pygsheets adds a single quote or tick mark to the beginning of the date strings - presumably to maintain the original formatting. Unfortunately this single quote seems to get in the way of gsheets modifying the string into a Date format. When I remove the tick mark/single quote, the formatting started to show up. So, for my issue I just modified the Date format before moving the values.
2023-03-06 21:56:42
1
python,scipy,spline
1
75,657,315
partial derivatives of Scipy smooth spline not working
75,656,192
false
38
I tried to take the second x derivative of a fitted smooth spline in Scipy, like following: spline = SmoothBivariateSpline(x,y,z,kx=3,ky=1) splinedxx = spline.partial_derivative(2,0) This give me following error: File "C:\Users\xxxxx\AppData\Roaming\Python\Python39\site-packages\scipy\interpolate\_fitpack2.py", line 988, in partial_derivative newc, ier = dfitpack.pardtc(tx, ty, c, kx, ky, dx, dy) dfitpack.error: ((0 <= nuy) && (nux < ky)) failed for 7th argument nuy: pardtc:nuy=0 The same error is given when I try the first derivative. What did I do wrong? Or is it something caused by my data?
0.197375
1
1
Setting ky=3 solves the problem. It seems like the spline order in y also needs to be greater than the order of differentiation in x.
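A short sketch of the fix, assuming x, y and z are the scattered data arrays from the question:

    from scipy.interpolate import SmoothBivariateSpline

    spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3)   # ky raised from 1 to 3
    spline_dxx = spline.partial_derivative(2, 0)          # second derivative in x now works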
2023-03-07 00:25:46
1
python,pandas,group-by,outliers
1
75,657,073
Why do these different outlier methods fail to detect outliers?
75,657,025
true
34
I am trying to find the outliers by group for my dataframe. I have two groups: Group1 and Group2, and I am trying to find the best way to implement an outlier method data = {'Group1':['A', 'A', 'A', 'B', 'B', 'B','A','A','B','B','B','A','A','A','B','B','B','A','A','A','B','B','B','A','A','A','A','A','B','B'], 'Group2':['C', 'C', 'C', 'C', 'D', 'D','C','D','C','C','D', 'C', 'C', 'D', 'D','C', 'C','D','D','D', 'D','C','D','C','C', 'D','C','D','C','C'], 'Age':[20, 21, 19, 24, 11, 15, 18, 1, 17,23, 35,2000,22,24,24,18,17,19,21,22,20,25,18,24,17,19,16,18,25,23]} df = pd.DataFrame(data) groups = df.groupby(['Group1', 'Group2']) means = groups.Age.transform('mean') stds = groups.Age.transform('std') df['Flag'] = ~df.Age.between(means-stds*3, means+stds*3) def flag_outlier(x): lower_limit = np.mean(x) - np.std(x) * 3 upper_limit = np.mean(x) + np.std(x) * 3 return (x>upper_limit)| (x<lower_limit) df['Flag2'] = df.groupby(['Group1', 'Group2'])['Age'].apply(flag_outlier) df["Flag3"] = df.groupby(['Group1', 'Group2'])['Age'].transform(lambda x: (x - x.mean()).abs() > 3*x.std()) However, all 3 methods fail to detect obvious outliers - for example, when Age is 2000, none of these methods treat it as an outlier. Is there a reason for this? Or is it possible that my code for all three outlier detection models is incorrect? I have a strong feeling I've made a foolish mistake somewhere but I'm not sure where, so any help would be appreciated, thanks!
1.2
1
1
Within its group, that age of 2000 just isn't over 3 standard deviations away from the group mean. The group mean is 239.666667 and the group standard deviation is 660.129722. It might look like an obvious outlier to you, but you don't have enough data to label it an outlier by that standard.
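A quick way to verify this with the DataFrame from the question is to print the per-group statistics and see how 2000 inflates its own group's mean and standard deviation:

    grp = df.groupby(['Group1', 'Group2'])['Age']
    print(grp.mean())   # the group containing 2000 has a mean around 239.7
    print(grp.std())    # ... and a std around 660, so 2000 < mean + 3*std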
2023-03-07 05:16:16
2
python,block
1
75,658,277
How to wait but not block thread when getting a resource in python?
75,658,264
true
25
Suppose we need to create a database connection pool, the requirement is that when a client tries to get a connection, if all exisiting connections are busy, then need to wait for 30 second before giving up, hope some connections are released by other client. So the naive solution is def get_connection(): if all_conn_are_busy: time.sleep(30) try to get connection again else: return conn But since time.sleep(30) will block the thread, if 2 clients trying to get connection at the same time, it will block for 60 seconds. So is there any way to noblock it but also wait for some time?
1.2
1
1
(A.) You have already lost the battle if you're in this "waiting" regime. (B.) It sounds like you want to have a single thread which is exclusively dedicated to acquiring that next connection. And if it is already busy trying to acquire one, then the 2nd caller should promptly suffer a fatal error. To have dozens of threads all clamoring for the next connection from a terminally overloaded DB server sounds like a recipe for disaster. Consider pre-allocating connections at some slow rate, so there is always one available for the next requestor. Apparently you have some connection pool, and a policy for discarding idle connections. Re-evaluate how well suited that policy is in light of your current use cases.
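For the "wait up to 30 seconds without blocking other callers" part specifically, a minimal sketch (not necessarily this answer's preferred design) is to back the pool with queue.Queue, whose get() gives each calling thread its own independent timeout. Here make_conn is a placeholder for whatever function actually opens a database connection:

    import queue

    class ConnectionPool:
        def __init__(self, make_conn, size):
            self._pool = queue.Queue()
            for _ in range(size):              # pre-allocate the connections up front
                self._pool.put(make_conn())

        def get_connection(self, timeout=30):
            try:
                # blocks only the calling thread; other threads wait independently
                return self._pool.get(timeout=timeout)
            except queue.Empty:
                raise TimeoutError(f"no free connection within {timeout} seconds")

        def release(self, conn):
            self._pool.put(conn)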
2023-03-07 06:27:47
0
python,installation,virtualenv
1
76,243,180
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory
75,658,642
false
287
I created a conda v-env with Python 3.8.10 and when I do pip install -r requirements_demo.txt, I get the following error. Processing /home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work ERROR: Could not install packages due to an OSError: \[Errno 2\] No such file or directory: '/System/Volumes/Data/home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work' I didn't know why it succeeded on my windows machine but on my m1 mac it just refuses to work. I have tried adding sudo, -H, --user, and all those yielded the same error above. I don't know what is going on now.
0
1
1
I had a very similar issue and found this which solved it for me: pip list --format=freeze > requirements.txt
2023-03-07 07:24:54
0
python,list,numpy
4
75,659,082
Locating indices with element 1 and converting to a list in Python
75,659,031
false
49
I have an array A. I want to identify all indices with element 1 and print as a list. But I am getting an error. I present the expected output. import numpy as np A=np.array([[1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]]) A1=np.where(A[0]==1) A1.tolist() print(A1) The error is in <module> A1.tolist() AttributeError: 'tuple' object has no attribute 'tolist' The expected output is [[0, 2, 3, 5]]
0
1
1
The array is at the zeroth index of the tuple, so do [A1[0].tolist()] and you will have your expected output.
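Spelled out with the array from the question:

    import numpy as np

    A = np.array([[1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0]])
    A1 = np.where(A[0] == 1)      # np.where returns a tuple of index arrays
    out = [A1[0].tolist()]        # take the array at index 0, then convert it
    print(out)                    # [[0, 2, 3, 5]]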
2023-03-07 07:29:53
1
python,pandas,csv,export-to-csv
1
75,659,152
Compare Latest CSV with all CSV in directory and remove the matching from the latest and write new rows in new file with python
75,659,071
false
54
the code will not work properly e.g. when the name of files are something else. for example when the file name is carre123.csv, it wont compare correctly. but when I changed the file name to test123.csv it works fine. here is the code import os import pandas as pd # Set the directory where the CSV files are stored directory = '/PATH/csv-files' # Get a list of all the CSV files in the directory csv_files = [os.path.join(directory, f) for f in os.listdir(directory) if f.endswith('.csv')] #print(csv_files) # Sort the CSV files by modification time and select the last file as the latest file latest_file = sorted(csv_files, key=os.path.getmtime)[-1] #print(latest_file) # Read the contents of the latest CSV file into a pandas DataFrame latest_data = pd.read_csv(latest_file) #print(latest_data) # Iterate over all the previous CSV files for csv_file in csv_files[:-1]: # Read the contents of the previous CSV file into a pandas DataFrame prev_data = pd.read_csv(csv_file) #print(prev_data) # Identify the rows in the latest CSV file that match the rows in the previous CSV file matches = latest_data.isin(prev_data.to_dict('list')).all(axis=1) print(matches) # Remove the matching rows from the latest CSV file latest_data = latest_data[~matches] # Write the remaining rows in the latest CSV file to a new file latest_data.to_csv('/NEWPATH/diff.csv', index=False) when the file name is carre123.csv, it wont compare correctly. but when I changed the file name to test123.csv it works fine.
0.197375
1
1
I think your code has a bug, which may be what is causing the problem. The for loop is over csv_files[:-1] which is not sorted by modification time, so depending on the file names this may cause the loop to include latest_file. Try storing the sorted list, sorted(csv_files, key=os.path.getmtime), then select the last one for latest_file and loop over the remaining files. Maybe there is something else wrong too, but based on the example you provided, this looks like the only issue I can see that is obviously a problem.
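A sketch of that fix, reusing the imports and variables from the question: sort once, keep the sorted list, then iterate over everything except the latest file.

    sorted_files = sorted(csv_files, key=os.path.getmtime)
    latest_file = sorted_files[-1]
    latest_data = pd.read_csv(latest_file)

    for csv_file in sorted_files[:-1]:   # loop over the *sorted* list, excluding latest_file
        prev_data = pd.read_csv(csv_file)
        matches = latest_data.isin(prev_data.to_dict('list')).all(axis=1)
        latest_data = latest_data[~matches]

    latest_data.to_csv('/NEWPATH/diff.csv', index=False)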
2023-03-07 08:28:53
0
python,graphene-python
2
75,659,638
How to display all resulting values in Jupyter notebook?
75,659,545
false
48
I have the following code: import numpy as np import matplotlib.pyplot as plt d0 = 0.3330630630630631 a0 = 0.15469469469469468 theta = 2 nmax=15 # lattice vectors sublattice 1 a1= np.array([3/2*a0,3**0.5/2*a0,0]) a2= np.array([3/2*a0,-3**0.5/2*a0,0]) # lattice vectors sublattice 2 b1 = np.array([ np.cos(theta) * a1[0] - np.sin(theta) * a1[1], np.sin(theta) * a1[0] + np.cos(theta) * a1[1], 0 ]) b2 = np.array([ np.cos(theta) * a2[0] - np.sin(theta) * a2[1], np.sin(theta) * a2[0] + np.cos(theta) * a2[1], 0 ]) ## coordinates for the unrotated layer sublattice a&b coords1a = np.array([i * a1 + j * a2 for i in range(-nmax-1, nmax+1) for j in range(-nmax-1, nmax+1)]) coords1b = np.array([i * a1 + j * a2 + [a0,0.,0.] for i in range(-nmax-1, nmax+1) for j in range(-nmax-1, nmax+1)]) ## coordinates for the rotated layer sublattice a&b coords2a = np.array([i * b1 + j * b2 + [0.,0.,d0] for i in range(-nmax-1, nmax+1) for j in range(-nmax-1, nmax+1)]) coords2b = np.array([i * b1 + j * b2 + [np.cos(theta)*a0,np.sin(theta)*a0,d0] for i in range(-nmax-1, nmax+1) for j in range(-nmax-1, nmax+1)]) coords1 = np.concatenate((coords1a, coords1b)) coords2 = np.concatenate((coords2a, coords2b)) When I try to display my coords1 and coords2 lists, python give me a list than continue only 6 items with ", ... ," . I want to see all the values. How can I do that?
0
1
1
To quickly see all the elements, try print(*coords1) and print(*coords2).
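For example, with the arrays from the question (the second form is an alternative the answer does not mention, shown here only as an option):

    import sys
    import numpy as np

    print(*coords1)   # unpacking prints every row instead of the truncated repr
    print(*coords2)

    # Alternative: raise numpy's global print threshold so plain print() shows everything
    np.set_printoptions(threshold=sys.maxsize)
    print(coords1)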
2023-03-07 13:36:45
2
python,visual-studio-code,cvzone
2
75,663,797
ModuleNotFoundError: No module named 'cvzone' in visual studio code
75,662,682
false
362
I was trying to import cvzone and cv2 library in vs code but it gives this error no matter what I do. Tried to install this through cmd with pip and vs code (again, with pip) but nothing changes python version 3.11.2 pip version 22.3.1
0.197375
1
1
Well, it turns out making a virtual environment, installing wheel and only then installing cvzone solves this issue.
2023-03-07 14:05:08
0
python,pandas,datetime
2
75,663,161
How to include both ends of a pandas date_range()
75,663,011
false
131
From a pair of dates, I would like to create a list of dates at monthly frequency, including the months of both dates indicated. import pandas as pd import datetime # Option 1 pd.date_range(datetime(2022, 1, 13),datetime(2022, 4, 5), freq='M', inclusive='both') # Option 2 pd.date_range("2022-01-13", "2022-04-05", freq='M', inclusive='both') both return the list: DatetimeIndex(['2022-01-31', '2022-02-28', '2022-03-31'], dtype='datetime64[ns]', freq='M'). However, I am expecting the outcome with a list of dates (4 long) with one date for each month: [january, february, mars, april] If now we run: pd.date_range("2022-01-13", "2022-04-05", freq='M', inclusive='right') we still obtain the same result as before. It looks like inclusive has no effect on the outcome. Pandas version. 1.5.3
0
3
1
This is because of how the Month ('M') frequency is defined; if you use Day ('D') you see the difference. When you count in month ends there is no effect here, because neither boundary date is a month end. For inclusive: both: a <= x <= b (in math convention: [a, b]); neither: a < x < b (]a, b[); right: a < x <= b (]a, b]); left: a <= x < b ([a, b[). You can't include values beyond the limits.
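A small illustration of the difference, using a daily frequency where the boundary dates really are candidate values (the question's own dates are not month ends, which is why freq='M' never includes them):

    import pandas as pd

    pd.date_range("2022-01-13", "2022-01-16", freq="D", inclusive="both")
    # DatetimeIndex(['2022-01-13', '2022-01-14', '2022-01-15', '2022-01-16'], ...)

    pd.date_range("2022-01-13", "2022-01-16", freq="D", inclusive="right")
    # DatetimeIndex(['2022-01-14', '2022-01-15', '2022-01-16'], ...)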
2023-03-07 15:34:57
0
python,powershell
1
75,667,466
Powershell: use variable in command
75,664,029
false
63
I'm facing a basic and stupid problem on Windows PowerShell. I'm almost sure it has already been answered somewhere but I can't find something working for me. I simply would like to use a variable inside a command in PowerShell: $VENV_DIR='C:\venv\' python -m venv $VENV_DIR $VENV_DIR\Scripts\python.exe --version I expect to see Python 3.10.8 as result. But I have this error: PS C:\> $VENV_DIR\Scripts\python.exe --version At line:1 char:10 + $VENV_DIR\Scripts\python.exe --version + ~~~~~~~~~~~~~~~~~~~ Unexpected token '\Scripts\python.exe' in expression or statement. + CategoryInfo : ParserError: (:) [], ParentContainsErr orRecordException + FullyQualifiedErrorId : UnexpectedToken I tried a lot of different combinations, but none of them work 97 $VENV_DIR\Scripts\python.exe --version 98 `$VENV_DIR\Scripts\python.exe --version 99 $VENV_DIR\\Scripts\python.exe --version 100 $VENV_DIR\Scripts\python.exe --version 101 "$VENV_DIR"\Scripts\python.exe --version 102 "$VENV_DIR\Scripts\python.exe" --version 103 "$VENV_DIR\Scripts\python.exe --version" 104 ("$VENV_DIR\Scripts\python.exe --version") 105 $("$VENV_DIR\Scripts\python.exe --version") 106 -$("$VENV_DIR\Scripts\python.exe --version") 107 -$("$VENV_DIR")\Scripts\python.exe --version 108 -$($VENV_DIR)\Scripts\python.exe --version 109 $VENV_DIR\Scripts\python.exe --version 110 (echo $VENV_DIR)\Scripts\python.exe --version 111 echo $VENV_DIR\Scripts\python.exe --version 112 $(echo $VENV_DIR)\Scripts\python.exe --version 113 -$(echo $VENV_DIR)\Scripts\python.exe --version 114 -$("echo $VENV_DIR")\Scripts\python.exe --version 115 echo $VENV_DIR\Scripts\python.exe --version Could you please help? Thanks
0
1
1
Have you tried putting an ampersand before the python command?
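Concretely, that means using PowerShell's call operator & so the expanded variable is treated as a command path rather than parsed as an expression (a one-line sketch using the $VENV_DIR variable from the question):

    & "$VENV_DIR\Scripts\python.exe" --version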
2023-03-07 16:25:28
0
python,amazon-web-services,amazon-s3,boto3
4
76,351,923
Boto3 Exception when Creating s3 Bucket
75,664,629
false
189
I'm trying to create a bucket using: s3_client = boto3.client('s3', aws_access_key_id=api_id, aws_secret_access_key=apikey, region_name='us-east-1') bucket_name = 'test1' s3_client.create_bucket(Bucket=bucket_name) I've seen this same code everywhere all over SO and Github and it's meant to work but I'm getting this exception: botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to. Which from my research means I should specify the region (I did so in the client here), I also tried this line (using us-east-2 just to test a different region): s3_client.create_bucket(Bucket=bucket_name, CreateBucketConfiguration={'LocationConstraint': 'us-east-2'}) But that still throws an exception: botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The us-east-2 location constraint is incompatible for the region specific endpoint this request was sent to. I also tried this code: s3_client = boto3.client('s3', aws_access_key_id=api_id, aws_secret_access_key=apikey) bucket_name = 'test1' s3_client.create_bucket(Bucket=bucket_name) And my .aws config file specified: [default] region = us-east-1 output = json As far as I can tell, one of these methods should have worked. If it were an auth issue, I'd have gotten that exception instead. Can anyone point me in the right direction?
0
1
2
The error message displayed by the boto3 client is misleading, the error is actually due to the bucket name that you specified not being globally unique. Every bucket that is created has to have a unique name. Try doing the same but use a longer, more complex bucket name which is unlikely to collide with any existing bucket names.
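For example, reusing the s3_client from the question and generating a throwaway, hopefully-unique name (the exact naming scheme here is just an illustration):

    import uuid

    bucket_name = f"test1-{uuid.uuid4().hex[:12]}"   # e.g. 'test1-3f9c2a1b7d40'
    s3_client.create_bucket(Bucket=bucket_name)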
2023-03-07 16:25:28
0
python,amazon-web-services,amazon-s3,boto3
4
75,700,560
Boto3 Exception when Creating s3 Bucket
75,664,629
false
189
I'm trying to create a bucket using: s3_client = boto3.client('s3', aws_access_key_id=api_id, aws_secret_access_key=apikey, region_name='us-east-1') bucket_name = 'test1' s3_client.create_bucket(Bucket=bucket_name) I've seen this same code everywhere all over SO and Github and it's meant to work but I'm getting this exception: botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to. Which from my research means I should specify the region (I did so in the client here), I also tried this line (using us-east-2 just to test a different region): s3_client.create_bucket(Bucket=bucket_name, CreateBucketConfiguration={'LocationConstraint': 'us-east-2'}) But that still throws an exception: botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The us-east-2 location constraint is incompatible for the region specific endpoint this request was sent to. I also tried this code: s3_client = boto3.client('s3', aws_access_key_id=api_id, aws_secret_access_key=apikey) bucket_name = 'test1' s3_client.create_bucket(Bucket=bucket_name) And my .aws config file specified: [default] region = us-east-1 output = json As far as I can tell, one of these methods should have worked. If it were an auth issue, I'd have gotten that exception instead. Can anyone point me in the right direction?
0
1
2
It works as long as I don't try to write into us-east-1; any of the other regions work just fine.
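A pattern that reflects this (a sketch, not from the original answer): pass CreateBucketConfiguration for every region except us-east-1, which is the default region and rejects an explicit LocationConstraint:

    def create_bucket(s3_client, bucket_name, region):
        if region == "us-east-1":
            # the default region must not be passed as a LocationConstraint
            return s3_client.create_bucket(Bucket=bucket_name)
        return s3_client.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration={"LocationConstraint": region},
        )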
2023-03-07 16:43:41
1
python,deep-learning,data-science,lstm,recurrent-neural-network
1
75,712,452
LSTM Input Reshape
75,664,844
true
107
I am a deep learning beginner and working on LSTM for a multi-classification problem, my dataset has features=10, timestepts=1, and the target classes are 5 classes (0,1,2,3,4). I am not sure if I did the input reshape correctly because I got an error when I wanted to calculate the model latency + the good result I got makes me doubt. #Feature scaling from sklearn.preprocessing import StandardScaler scaler = StandardScaler() train_data= scaler.fit_transform(train_data) test_data= scaler.transform(test_data) epochs = 10 batch_size = 128 feature_num=10 # number of features timesteps=1 # Reshape the input to shape (num_instances, timesteps, num_features) train_data = np.reshape(train_data, (train_data.shape[0], timesteps, feature_num)) test_data=np.reshape(test_data, (test_data.shape[0], timesteps, feature_num)) # convert the target labels to one-hot encoded format train_labels = to_categorical(train_labels, num_classes=5) test_labels = to_categorical(test_labels, num_classes=5) # build the model model = Sequential() model.add(LSTM(64, input_shape=(timesteps,feature_num), return_sequences=True, activation='sigmoid')) model.add(Flatten()) model.add(Dense(5, activation='softmax')) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) callback = EarlyStopping(patience=3) history = model.fit(train_data, train_labels, epochs=epochs, batch_size=batch_size, validation_data=(test_data, test_labels), callbacks=[callback]) # Evaluate the model y_pred = model.predict(test_data) y_pred_classes = np.argmax(y_pred, axis=1) y_test_classes = np.argmax(test_labels, axis=1) print(classification_report(y_test_classes, y_pred_classes)) report = classification_report(y_test_classes, y_pred_classes, output_dict=True) # extract the class names and metrics from the report class_names = list(report.keys())[:-3] metrics = ['precision', 'recall', 'f1-score'] # calculate the confusion matrix conf_mat = confusion_matrix(y_test_classes, y_pred_classes) # create a heatmap of the confusion matrix sns.heatmap(conf_mat, annot=True, cmap='Blues') # set the axis labels and title plt.xlabel('Predicted Labels') plt.ylabel('True Labels') plt.title('Confusion Matrix') # show the plot plt.show() #Measure model Latency start_time = time.time() y_pred = model.predict(np.expand_dims(test_data, axis=0)) end_time = time.time() latency = end_time - start_time print(f"Latency: {latency} seconds") I got This error when i measure the latency 2023-03-07 15:31:53.584508: W tensorflow/core/framework/op_kernel.cc:1780] OP_REQUIRES failed at transpose_op.cc:142 : INVALID_ARGUMENT: transpose expects a vector of size 4. But input(1) is a vector of size 3 Thank you in Advance.
1.2
2
1
The error occurs when you try to measure the latency of model.predict(), as the error message suggests: you are expanding the dimensions of the test_data array with np.expand_dims(test_data, axis=0), which produces a 4D array, but model.predict() expects a 3D input with shape (num_instances, timesteps, num_features). To debug this issue, you can print the shapes of test_data and of the np.expand_dims() output to make sure they have the expected shapes. You can also simply remove the np.expand_dims() call and use the original test_data array to measure the latency.
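A sketch of the suggested fix, reusing model and test_data from the question:

    import time

    print(test_data.shape)   # already (num_instances, timesteps, num_features)

    start_time = time.time()
    y_pred = model.predict(test_data)      # no extra np.expand_dims() wrapper
    latency = time.time() - start_time
    print(f"Latency: {latency} seconds")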
2023-03-07 18:06:41
1
python,pandas,dataframe,data-cleaning
2
75,666,854
Thinking about the best way to merge two DataFrame
75,665,688
false
80
I'm looking for a way to merge df. However, I don't know what would be the best way to do this. first df - metro cities/population/teams Metropolitan area Population (2016 est.)[8] NHL Phoenix 4661537 Coyotes Los Angeles 13310447 Kings Ducks Toronto 5928040 Maple Leafs Boston 4794447 Bruins Edmonton 1321426 Oilers New York City 20153634 Rangers Islanders Devils Second df - team/wins/losses team w L Los Angeles Kings 46 28 Phoenix Coyotes 37 30 Toronto Maple Leafs 49 26 Boston Bruins 50 20 Edmonton Oilers 29 44 New York Islanders 34 37 I tried to merge across teams. However, I need to arrange this data so that it collides in the Merge. I don't know how I would do that without looking at it case by case. Note: The data set is much larger and with more cities and teams. I had a little trouble presenting the DF here, so I only put 6 rows and the main columns.
0.099668
1
1
If you are trying to get the city part of an NHL team name, you could, for example, make a hash map which contains all the possible city names, e.g. {"Toronto": "Toronto"}, then split the NHL team string and check whether the hash map contains any part of the string. If it does, that's the city name. With the limited number of possible city names that's not too bad. But I'm not exactly sure what you are trying to accomplish; you should elaborate and simplify your question.
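A rough sketch of that idea. The frame names df_cities (with a 'Metropolitan area' column) and df_teams (with a 'team' column) are placeholders for the question's two DataFrames, and names that don't share a literal prefix (e.g. "New York Islanders" vs "New York City") would still need manual mapping:

    cities = set(df_cities['Metropolitan area'])

    def find_city(team_name):
        words = team_name.split()
        # try progressively shorter prefixes: "Toronto Maple Leafs", "Toronto Maple", "Toronto"
        for i in range(len(words), 0, -1):
            candidate = " ".join(words[:i])
            if candidate in cities:
                return candidate
        return None

    df_teams['Metropolitan area'] = df_teams['team'].map(find_city)
    merged = df_teams.merge(df_cities, on='Metropolitan area', how='left')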
2023-03-07 18:38:31
0
python,tkinter
2
75,670,897
How to keep the capture of the previous window after closing another one?
75,665,963
true
40
I am making an application in which different windows open, and naturally, when the window is opened, the functionality of the previous window is not available until the current one closes. It would seem that everything is fine, but for example: we open three windows - window_1, window_2, window_3 - one by one from each window. We see that when window_3 is active, all others are inactive. Next, we close the active window, and we see that suddenly, along with window_2, window_1 is also active. How to solve this problem? I wrote a simple code to fully understand this task: from tkinter import * class Second_Child_Win(): def __init__(self): self.root = Toplevel() self.root.geometry('200x200') self.root.grab_set() def run(self, first_child_root): Button(self.root, text='Quit', command=self.root.destroy).pack() first_child_root.wait_window(self.root) class First_Child_Win(): def __init__(self): self.root = Toplevel() self.root.geometry('200x200') self.root.grab_set() def run(self, main_root): Button(self.root, text='click', command=self.open_new_win).pack() main_root.wait_window(self.root) def open_new_win(self): A = Second_Child_Win() A.run(self.root) class Main(): def __init__(self): self.root = Tk() self.root.geometry('200x200') def run(self): Button(self.root, text='click', command=self.open_new_win).pack() self.root.mainloop() def open_new_win(self): A = First_Child_Win() A.run(self.root) A = Main() A.run()
1.2
1
1
In general, the solution is quite simple: once the current window's session ends, I simply call grab_set again.
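Applied to the code in the question, that looks roughly like this inside First_Child_Win (a sketch, not the full rewritten program):

    def open_new_win(self):
        child = Second_Child_Win()
        child.run(self.root)      # wait_window() returns once the grandchild is destroyed
        self.root.grab_set()      # re-grab so this window becomes the only active one again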
2023-03-07 19:27:06
3
module,google-colaboratory,attributeerror,kaggle,python-traitlets
2
75,917,960
AttributeError: module 'IPython.utils.traitlets' has no attribute 'Unicode'
75,666,380
false
1,235
I am running a .ipynb notebook on a Kaggle server. At the first code cell, when importing modules, specifically cv2_imshow from google.patches as follows, from google.colab.patches import cv2_imshow I get this error: /opt/conda/lib/python3.7/site-packages/IPython/utils/traitlets.py:5: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package. warn("IPython.utils.traitlets has moved to a top-level traitlets package.") --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /tmp/ipykernel_27/1840971195.py in <module> 18 19 # Display images using OpenCV ---> 20 from google.colab.patches import cv2_imshow # Importing cv2_imshow from google.patches to display images 21 22 # Ignore warnings /opt/conda/lib/python3.7/site-packages/google/colab/__init__.py in <module> 24 from google.colab import _tensorflow_magics 25 from google.colab import auth ---> 26 from google.colab import data_table 27 from google.colab import drive 28 from google.colab import files /opt/conda/lib/python3.7/site-packages/google/colab/data_table.py in <module> 164 165 --> 166 class _JavascriptModuleFormatter(_IPython.core.formatters.BaseFormatter): 167 format_type = _traitlets.Unicode(_JAVASCRIPT_MODULE_MIME_TYPE) 168 print_method = _traitlets.ObjectName('_repr_javascript_module_') /opt/conda/lib/python3.7/site-packages/google/colab/data_table.py in _JavascriptModuleFormatter() 165 166 class _JavascriptModuleFormatter(_IPython.core.formatters.BaseFormatter): --> 167 format_type = _traitlets.Unicode(_JAVASCRIPT_MODULE_MIME_TYPE) 168 print_method = _traitlets.ObjectName('_repr_javascript_module_') 169 AttributeError: module 'IPython.utils.traitlets' has no attribute 'Unicode' After running from traitlets import * print(traitlets) <module 'traitlets.traitlets' from '/opt/conda/lib/python3.7/site-packages/traitlets/traitlets.py'> and re-running the problem line, to deal with the top part of the error message, /opt/conda/lib/python3.7/site-packages/IPython/utils/traitlets.py:5: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package. warn("IPython.utils.traitlets has moved to a top-level traitlets package.") This part of the error message dissappears but all else remains the same. google-colab 1.0.0
0.291313
2
1
The warning message is saying that traitlets is now a top level package, and not still under the IPython.utils package. You can fix the error by editing the data_table.py file ( /opt/conda/lib/python3.7/site-packages/google/colab/data_table.py). Change the line importing the traitlets package from from IPython.utils import traitlets as _traitlets to import traitlets as _traitlets I have no idea why this error arose, but this fixed it for me anyway.
2023-03-07 19:29:24
2
python,solidity,web3py,ganache
1
75,668,977
Sending multiple transactions as once
75,666,395
false
292
What are some ways to send multiple transactions at once to a ganache blockchain ? I am using the web3.py library.
0.379949
2
1
Ethereum does not support sending multiple transactions at once. The only way to do any kind of multi-transaction logic is to write your own smart contract. Alternatively you can use another blockchain, like Cosmos SDK based ones, which support sending multiple transactions at once.
2023-03-07 23:32:31
0
python-3.x,wxpython
1
75,727,774
Import error when importing wx on Apple silicon
75,668,149
false
92
I have a Python 3.8 project that was created on a Intel Mac that I'm trying to get working on my Apple silicon M2 MacBook Pro. wxPython installed fine using brew, but importing wx causes the following import error: Python 3.11.2 (main, Feb 16 2023, 02:55:59) [Clang 14.0.0 (clang-1400.0.29.202)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import wx Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/homebrew/lib/python3.11/site-packages/wx/__init__.py", line 17, in <module> from wx.core import * File "/opt/homebrew/lib/python3.11/site-packages/wx/core.py", line 12, in <module> from ._core import * ImportError: dlopen(/opt/homebrew/lib/python3.11/site-packages/wx/_core.cpython-311-darwin.so, 0x0002): symbol not found in flat namespace '__ZN10wxBoxSizer20InformFirstDirectionEiii' I've tried re-installing wxPython clean, but didn't help.
0
1
1
Installing python3@3.10 solved my issue. I guess wxPython isn't working with 3.11. Thanks!
2023-03-08 00:58:08
1
python,ssl,curl
1
75,668,875
Curl fails with SSL errors 56 and 35 when talking to a HTTPS Python web server
75,668,544
false
405
I have setup my own HTTPS server using Python: from http.server import HTTPServer, BaseHTTPRequestHandler import ssl class SimpleHTTPRequestHandler(BaseHTTPRequestHandler): def do_GET(self): self.send_response(200) self.end_headers() self.wfile.write(b'Hello, secure world!\n') httpd = HTTPServer(('127.0.0.1', 4443), SimpleHTTPRequestHandler) context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER) context.load_cert_chain( certfile="/etc/letsencrypt/live/.../fullchain.pem", keyfile="/etc/letsencrypt/live/.../privkey.pem") httpd.socket = context.wrap_socket(httpd.socket, server_side=True) httpd.serve_forever() For security purposes, my server runs in a VirtualBox VM with port forwarding from 443 to 4443. When I query the HTTPS server via curl locally on the VM, I get a response, but there is also an error: $ curl -k 'https://locahost:4443' Hello, secure world! curl: (56) OpenSSL SSL_read: error:0A000126:SSL routines::unexpected eof while reading, errno 0 However, when I try to query it from the host, I only get an error: $ curl -k 'https://127.0.0.1' curl: (35) Unknown SSL protocol error in connection to 127.0.0.1:443 As you can see, in both calls I disable cert verification because I query my server by IP address and not the actual domain. The certs are for my personal domain and are signed by letsencrypt.com. When I tried using an actual domain on the host, I get the same error (34). Why do I get errors in the curl call in the guest VM and why is it different when calling from the host?
0.197375
1
1
Found the bug in my code: HTTPServer needs to be constructed with ('0.0.0.0', 4443) to allow connections from other hosts.
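In other words, only the bind address in the question's script changes:

    # bind to all interfaces so the VirtualBox port forward can reach the server
    httpd = HTTPServer(('0.0.0.0', 4443), SimpleHTTPRequestHandler)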
2023-03-08 03:56:44
0
python,streamlit
1
75,669,593
Getting data from streamlit instance
75,669,330
false
113
I made a streamlit app that locally writes to a CSV on the instance, and just realized now i have no way to get that data. I am pretty sure if I push up a change via github it'll overwrite the DB (can I use a gitignore maybe?) Pretty dumb problem, open to any suggestions how to recover the data :)
0
1
1
To recover the data, you need to access the CSV file directly on the instance where the Streamlit app is running. You can try using a file transfer protocol (FTP) client to connect to the instance and download the CSV file: on Windows you'd use WinSCP, and for Linux and Mac there is FileZilla. For future purposes, it's better to use a database for storage and retrieval, like MySQL or PostgreSQL. You could also store the CSV in a storage service like S3 as a last resort, but databases are the way to go. Also, yes, you could add the CSV to your .gitignore file.
2023-03-08 12:15:18
0
python,sqlalchemy,alembic
2
75,674,301
Alembic raises ImportError cannot import name '_NONE_NAME'
75,672,917
false
379
When I rebuilt a Docker image for my Python application that uses SQLAlchemy and Alembic, I started getting the following error when running migrations: ImportError: cannot import name '_NONE_NAME' from 'sqlalchemy.sql.naming' I didn't change my pinned requirements in requirements.txt. What is causing the issue?
0
1
1
I had this same problem since yesterday. Reproducing the problem locally: I noticed that creating a new venv and reinstalling the requirements reproduced the problem just like in the container. Diagnosis: I created a new venv, reinstalled requirements.txt, ran pip freeze, and compared the new installations to the pip freeze from the old venv that was still working locally. I found that alembic had been installed with a different version. My solution: I downgraded to alembic==1.8.1 and now it works. Hope this helps!
2023-03-08 12:38:14
1
python,apache-kafka,confluent-kafka-python
1
75,674,699
Consuming messages from Kafka with different group IDs using confluent_kafka
75,673,129
false
159
I am using python and confluent_kafka I am building a Queue Management for Kafka where we can view the pending(uncommitted) messages of each topic, delete topic and purge topic. I am facing the following problems. I am using same group ID for all the consumers so that I can get the uncommitted messages. I have 2 consumers one (say consumer1) consuming and committing and another one (say consumer2) just consuming without committing. If I run consumer1 and consumer2 simultaneously only one consumer will start consuming and another just keep on waiting and hence cause heavy loading time in the frontend. If I assign different group Id for each it works but, the messages committed by consumer1 are still readable by consumer2. Example: If I have pushed 100 messages and say consumer1 consumed 80 messages and when I try to consume from consumer2 it should consume only remaining 20, but it is consuming all 100 including the messages committed by consumer1. How can I avoid that or solve?
0.197375
1
1
Unclear what you mean by uncommitted. Any message in a topic has been committed by a producer; from the consumer perspective, this isn't possible. Active Kafka consumers in the same group cannot be assigned the same partitions. More specifically, how would "consumer2" know when/if "consumer1" was "done consuming 80 records" without consumer1 becoming inactive? If you have an idle consumer with only two consumers in the same group, it sounds like you only have one partition. If you want both to be active at the same time, you'll need multiple partitions, but that won't help with any "visualizations" unless you persist your consumed data in some central location, at which point Kafka Connect might be a better solution than Python. If you want to view consumer lag (how far behind a consumer is processing), there are other tools to do this, such as Burrow with its REST API. Otherwise, you need to use the get_watermark_offsets() function to find the topic's offsets and compare them to the offset of the current polled record.
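If you go the get_watermark_offsets() route, a rough sketch with confluent_kafka (reusing the consumer and topic-metadata handling from the question; if no offset has been committed yet, the low watermark is used as the starting position) could look like this:

    from confluent_kafka import TopicPartition

    def pending_per_partition(consumer, topic):
        metadata = consumer.list_topics(topic=topic)
        lag = {}
        for partition_id in metadata.topics[topic].partitions:
            tp = TopicPartition(topic, partition_id)
            low, high = consumer.get_watermark_offsets(tp, timeout=10)
            committed = consumer.committed([tp], timeout=10)[0].offset
            position = committed if committed >= 0 else low
            lag[partition_id] = high - position   # messages not yet committed by the group
        return lag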
2023-03-08 16:18:14
0
python,apache-kafka,confluent-kafka-python
1
75,679,796
manual offset commit do not work as expected
75,675,557
true
53
I'm running confluent_kafka with 2.0.2 and test the manual offset commit code. I set up a confluentinc/cp-kafka:7.3.0 with a test-topic and 1 partition. I wrote a test script to test manual offset commmit with 'enable.auto.offset.store': False and 'enable.auto.commit': False, but the consumer seems to commit the offset after every poll(). The test code is as follows: from confluent_kafka import Consumer, TopicPartition LOCAL_TEST_TOPIC = "test-topic2" LOCAL_TEST_PRODUCER_CONFIG = { 'bootstrap.servers': '0.0.0.0:9092', 'group.id': "tail-grouptest", 'enable.auto.offset.store': False, 'enable.auto.commit': False, } consumer = Consumer(LOCAL_TEST_PRODUCER_CONFIG) topics = [LOCAL_TEST_TOPIC, ] def msg_process(msg): if not msg: print("EMPTY MESSAGE") return print(msg.value().decode('utf-8')) def basic_consume_loop(consumer, topics): try: consumer.subscribe(topics) topic = consumer.list_topics(topic='test-topic2') partitions = [TopicPartition('test-topic2', partition) for partition in list(topic.topics['test-topic2'].partitions.keys())] # msg1 msg1 = consumer.poll() msg_process(msg1) consumer.store_offsets(message=msg1) consumer.commit(asynchronous=False) print(consumer.position(partitions)) # msg2 msg2 = consumer.poll() msg_process(msg2) consumer.store_offsets(message=msg2) consumer.commit(asynchronous=False) print(consumer.position(partitions)) # msg3 msg3 = consumer.poll() msg_process(msg3) # msg4 msg4 = consumer.poll() msg_process(msg4) # msg5 msg5 = consumer.poll() msg_process(msg5) print(consumer.position(partitions)) # msg6 msg6 = consumer.poll() msg_process(msg6) print(consumer.position(partitions)) print("set back to msg3") # back to msg3 consumer.store_offsets(message=msg3) print("commit again") consumer.commit(message=msg3, asynchronous=False) print(consumer.position(partitions)) msg_new = consumer.poll() msg_process(msg_new) finally: # Close down consumer to commit final offsets. consumer.close() def main(): basic_consume_loop(consumer, topics) if __name__ == '__main__': main() If I run console producer with message 1 2 3 4 5 6 7 The script gives the following result: 1 [TopicPartition{topic=test-topic2,partition=0,offset=1,error=None}] 2 [TopicPartition{topic=test-topic2,partition=0,offset=2,error=None}] 3 4 5 [TopicPartition{topic=test-topic2,partition=0,offset=5,error=None}] 6 [TopicPartition{topic=test-topic2,partition=0,offset=6,error=None}] set back to msg3 commit again [TopicPartition{topic=test-topic2,partition=0,offset=6,error=None}] 7 I expect msg3 - msg6 are the same message(msg3), but it seems that the offset keeps adding up and the consumer continues to poll new messages. How did that happen? Anything wrong in my code?
1.2
1
1
Committing only stores the offset in the __consumer_offsets topic. You are still required to seek if you want to re-consume old messages.
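A minimal sketch of what that looks like with the objects from the question (seek() only works on a partition that is currently assigned to the consumer):

    from confluent_kafka import TopicPartition

    tp = TopicPartition(msg3.topic(), msg3.partition(), msg3.offset())
    consumer.seek(tp)           # move the fetch position back to msg3's offset
    msg_new = consumer.poll()   # now returns the message at msg3's offset again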
2023-03-08 16:21:13
1
python,playwright-python
2
76,269,082
Importing playwright fails with "DLL load failed while importing _greenlet"
75,675,599
false
1,221
Why does this error appear in the playwright file? ImportError: DLL load failed while importing _greenlet: The specified module could not be found. Here is my code: from playwright.sync_api import sync_playwright with sync_playwright() as p: browser = p.chrome.launch() page = browser.new_page() page.goto("https://www.youtube.com/watch?v=FK_5SQPq6nY&list=PLYDwWPRvXB8_W56h2C1z5zrlnAlvqpJ6A&index=1") page.screenshot(path="demo.png") browser.close()
0.099668
3
1
The issue is with the greenlet version; change the greenlet version to 1.1.2. I changed it and it works.
2023-03-08 21:31:54
1
python,windows
1
75,678,450
Access is denied when running a Python script from a command line in Windows 11
75,678,434
false
99
Suddenly during steps running python scripts I got an 'Access is denied' when running a script. E:\crs\bde2\src>C:\python27\ArcGIS10.7\python.exe validate_csv.py aims Access is denied. Just trying to start python with no script or using -interactive produces no response. However starting an IDE (pyscripter) and opening the file works as normal. I spent two days following up this error message on web sites changing system settings, ACLs, permissions, developer overrides, admin permissions with no result. Finally today I worked through yet another list of system settings and the last one suggested that I validate the system files (all fine) and then uninstall the app and reinstall. [https://windowsreport.com/access-denied-windows-11/][1]
0.197375
1
1
It turned out that the startup python.exe in the install directory c:\python27 had been set to zero bytes! All I had to do was to restore the file (27k) and it is working again. The Windows system message had nothing to do with the real cause and just put me off the scent. I do not know how the file was corrupted, but clearly it was.
2023-03-09 01:36:02
0
python,airflow,directed-acyclic-graphs
2
75,734,923
How do you stop the dag started with TriggerDagRunOperator?
75,679,685
false
80
In my parent dag, I have a child dag started with the TriggerDagRunOperator. That child dag goes ahead and does its own thing. my_trigger = TriggerDagRunOperator( task_id = "my_trigger", trigger_dag_id = "child_dag", wait_for_completion = True, execution_date="{{ logical_date }}", ) However, somewhere down the line in my parent dag, something fails and I just want to stop everything. There's no point in the child dag still running if something failed in the parent dag. I can set the state of the my_trigger task to State.FAILED and that will fail the task, but it won't stop the child dag from continuing to run until it completes. So my question is how do you do stop the dag started by the TriggerDagRunOperator? I looked around members of my_trigger but I couldn't really find anything that would let me stop the dag. Does anyone have any ideas?
0
1
1
Launch the UI and find the DAG in the DAG List. Select each Task instance that is running or queued or scheduled, etc. and change the status to "failed". Then select the DAG instance and set the status to "failed".
2023-03-09 07:27:29
0
python,python-asyncio
1
75,683,125
Use a pool of coroutine workers or one coroutine per task with Semaphore
75,681,549
false
78
Suppose I have one million web pages to download. Should I use a fixed number of coroutine workers to download them, or create a coroutine per url and use asyncio.Semaphore to limit the numbers of coroutines. First option, fixed workers: async def worker(queue): while True: try: url = queue.get_nowait() except QueueEmpty: break await download(url) async def main(): queue = asyncio.Queue() for url in urls: await queue.put(url) workers = [worker(queue) for _ in range(10)] await asyncio.wait(workers) asyncio.run(main()) Second option, use semaphore: async def worker(url, sema): async with sema: await download(url) async def main(): semaphore = asyncio.Semaphore(10) workers = [worker(url, semaphore) for url in urls] await asyncio.wait(workers) asyncio.run(main())
0
1
1
The first approach (Queue + 10 tasks) won't equally distribute the workload (1M URL downloads) among the fixed number of coroutines/tasks. If you add simple ordinal indices to identify each worker during processing, you may notice that some workers hold the queue for a longer time, so some workers download hundreds of URLs and others tens of thousands. The second approach (Semaphore) ensures that only a fixed number of URLs are downloaded at a time by a fixed number of active workers. Either way, you should also account for the remote server's maximum connection limit (in case a large proportion of the URLs have the same origin).
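One way to check this for yourself is to tag each queue worker with an index and count how many downloads it handles (a sketch only; urls and download() come from the question):

    import asyncio
    from collections import Counter

    counts = Counter()

    async def worker(worker_id, queue):
        while True:
            try:
                url = queue.get_nowait()
            except asyncio.QueueEmpty:
                break
            await download(url)
            counts[worker_id] += 1

    async def main():
        queue = asyncio.Queue()
        for url in urls:
            queue.put_nowait(url)
        await asyncio.gather(*(worker(i, queue) for i in range(10)))
        print(counts)   # per-worker download counts

    asyncio.run(main())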
2023-03-09 07:27:47
1
python,setuptools,setup.py,python-packaging
1
75,696,788
Setuptools is creating egg for UNKNOWN instead of setuptools-rust
75,681,551
true
311
I'm trying to build setuptools rust (v1.3) using python 3.10.4, openssl 3.0 and setuptools v58.2 These are the commands and outputs: (venv3.10) [/dbc/blr-dbc2112/abandaru/projects]$ python cayman_pyo3_setuptools-rust/pyo3_setuptools-rust/src/setup.py build running build (venv3.10) [/dbc/blr-dbc2112/abandaru/projects]$ python cayman_pyo3_setuptools-rust/pyo3_setuptools-rust/src/setup.py install running install running bdist_egg running egg_info writing UNKNOWN.egg-info/PKG-INFO writing dependency_links to UNKNOWN.egg-info/dependency_links.txt writing top-level names to UNKNOWN.egg-info/top_level.txt reading manifest file 'UNKNOWN.egg-info/SOURCES.txt' writing manifest file 'UNKNOWN.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib warning: install_lib: 'build/lib' does not exist -- no Python modules to install creating build/bdist.linux-x86_64/egg creating build/bdist.linux-x86_64/egg/EGG-INFO copying UNKNOWN.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO copying UNKNOWN.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying UNKNOWN.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying UNKNOWN.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO zip_safe flag not set; analyzing archive contents... creating 'dist/UNKNOWN-0.0.0-py3.10.egg' and adding 'build/bdist.linux-x86_64/egg' to it removing 'build/bdist.linux-x86_64/egg' (and everything under it) Processing UNKNOWN-0.0.0-py3.10.egg Copying UNKNOWN-0.0.0-py3.10.egg to /dbc/blr-dbc2112/abandaru/projects/venv3.10/lib/python3.10/site-packages Adding UNKNOWN 0.0.0 to easy-install.pth file Installed /dbc/blr-dbc2112/abandaru/projects/venv3.10/lib/python3.10/site-packages/UNKNOWN-0.0.0-py3.10.egg Processing dependencies for UNKNOWN==0.0.0 Finished processing dependencies for UNKNOWN==0.0.0 (venv3.10) [/dbc/blr-dbc2112/abandaru/projects]$ unzip venv3.10/lib64/python3.10/site-packages/UNKNOWN-0.0.0-py3.10.egg Archive: venv3.10/lib64/python3.10/site-packages/UNKNOWN-0.0.0-py3.10.egg inflating: EGG-INFO/PKG-INFO inflating: EGG-INFO/SOURCES.txt inflating: EGG-INFO/dependency_links.txt inflating: EGG-INFO/top_level.txt inflating: EGG-INFO/zip-safe Edit: Please find the relevant files here: setup.py: #!/usr/bin/env python from setuptools import setup if __name__ == "__main__": setup() pyproject.toml: [build-system] requires = ["setuptools>=58.0", "setuptools_scm[toml]>=3.4.3"] build-backend = "setuptools.build_meta" [tool.setuptools_scm] write_to = "setuptools_rust/version.py" [tool.isort] profile = "black" [tool.pytest.ini_options] minversion = "6.0" addopts = "--doctest-modules" setup.cfg: [metadata] name = setuptools-rust version = attr: setuptools_rust.__version__ author = Nikolay Kim author_email = fafhrd91@gmail.com license = MIT description = Setuptools Rust extension plugin keywords = distutils, setuptools, rust url = https://github.com/PyO3/setuptools-rust long_description = file: README.md long_description_content_type = text/markdown classifiers = Topic :: Software Development :: Version Control License :: OSI Approved :: MIT License Intended Audience :: Developers Programming Language :: Python :: 3 Programming Language :: Python :: 3.6 Programming Language :: Python :: 3.7 Programming Language :: Python :: 3.8 Programming Language :: Python :: 3.9 Development Status :: 5 - Production/Stable Operating System :: POSIX Operating System :: MacOS :: MacOS X Operating System :: Microsoft :: Windows [options] packages = setuptools_rust 
zip_safe = True install_requires = setuptools>=58.0; semantic_version>=2.8.2,<3; typing_extensions>=3.7.4.3 setup_requires = setuptools>=58.0; setuptools_scm>=6.3.2 python_requires = >=3.7 [options.entry_points] distutils.commands = clean_rust=setuptools_rust:clean_rust build_rust=setuptools_rust:build_rust distutils.setup_keywords = rust_extensions=setuptools_rust.setuptools_ext:rust_extensions I'm not sure why setuptools is building it for unknown and not setuptools-rust. Could someone please point to my mistake?
1.2
1
1
The issue seems to be that your current working directory has to be the directory containing the setup.py script. As far as I can tell anything like python path/to/setup.py will fail, and only python setup.py will succeed. So first cd into the project directory containing the setup.py script, then call setup.py commands.
2023-03-09 08:02:21
0
python,numpy
2
75,681,877
python binning the Numbers in numpy
75,681,843
true
52
I want to bin an array in the following way: [0.1, 0.2, 0.3, ... 1.0]. I already try: bins = np.array([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]) Digitalized_lables_train = np.digitize(lables_train, bins ) lables_train are numbers from 0-9. But after this operation i still get bins between 1-10.
1.2
1
1
Got the answer: just divide the result of np.digitize(lables_train, bins) by 10. But for the case of binning into other float values, I would like to know what other approaches there are.
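A minimal sketch of that divide-by-10 idea; the contents of lables_train are made up here for illustration, since the question does not show them:
import numpy as np

lables_train = np.array([0.05, 0.23, 0.57, 0.91])               # illustrative values only
bins = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])

# np.digitize returns integer bin indices 0-9; dividing by 10 turns those
# indices into the float bin values 0.0, 0.1, ..., 0.9.
binned = np.digitize(lables_train, bins) / 10
print(binned)                                                   # [0.  0.2 0.5 0.9]
For binning onto an arbitrary set of float values, indexing a value array with the digitize result (e.g. values[np.digitize(x, bins)], where values has one entry per possible index) avoids the divide-by-10 trick.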
2023-03-09 08:55:34
0
python,jira
1
75,705,253
"requests.exceptions.HTTPError: 400 Client Error" creating a Jira issue when using the package atlassian
75,682,320
true
281
I have this code on python 3.8: from atlassian import Jira jira = Jira( url='http://jira.mydomain.com/', username='login', password='password') summary = 'Test summary' description = 'Test description' current_date = datetime.date.today() duedate = datetime.datetime.strftime(current_date, "%Y-%m-%d") fields = {"project": {"key": 'ARL'}, "summary": summary, "description": description, "issuetype": {"name": "Task"}, "duedate": duedate, "labels": ["Demo"], "components": [{"name": "Selary"}] } _____________________________________ And it return me this error: Creating issue "Test summary" Traceback (most recent call last): File "C:\Users\Max\PycharmProjects\GSheets-Test\venv\lib\site-packages\atlassian\rest_client.py", line 436, in raise_for_status j.get("errorMessages", list()) + [k.get("message", "") for k in j.get("errors", dict())] File "C:\Users\Max\PycharmProjects\GSheets-Test\venv\lib\site-packages\atlassian\rest_client.py", line 436, in <listcomp> j.get("errorMessages", list()) + [k.get("message", "") for k in j.get("errors", dict())] AttributeError: 'str' object has no attribute 'get' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\Max\PycharmProjects\GSheets-Test\test.py", line 37, in <module> print(jira.issue_create(fields_arl)) File "C:\Users\Max\PycharmProjects\GSheets-Test\venv\lib\site-packages\atlassian\jira.py", line 1402, in issue_create return self.post(url, data={"fields": fields}) File "C:\Users\Max\PycharmProjects\GSheets-Test\venv\lib\site-packages\atlassian\rest_client.py", line 333, in post response = self.request( File "C:\Users\Max\PycharmProjects\GSheets-Test\venv\lib\site-packages\atlassian\rest_client.py", line 257, in request self.raise_for_status(response) File "C:\Users\Max\PycharmProjects\GSheets-Test\venv\lib\site-packages\atlassian\rest_client.py", line 440, in raise_for_status response.raise_for_status() File "C:\Users\Max\PycharmProjects\GSheets-Test\venv\lib\site-packages\requests\models.py", line 960, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http://jira.mydomain.com/rest/api/2/issue When i use project key from my second project - the issue is being created successfully According to information from the JIRA administrator these projects have the same business scheme and a set of required fields. The component in field exists too Same error rase if I pass an incorrect component name to the second project what causes the error? What project/field settings can I check? try another login-password - fail try second project - successful
1.2
1
1
I fetched the fields from an existing issue, as advised in the comments. The component name that I passed in the request differs from the actual one, even though I took the name from the Jira UI: in the UI it was "Selary", in the existing issue it was " Selary ", and in the DB it was " Selary" (note the stray whitespace).
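A small, hedged sketch of how one might list the project's real component names before creating the issue. It reuses the jira client from the question, assumes the client's raw .get() helper is available, and calls the standard /rest/api/2/project/{key}/components endpoint; the project key ARL is taken from the question:
components = jira.get('rest/api/2/project/ARL/components')
for component in components:
    # repr() makes stray leading/trailing spaces visible, e.g. ' Selary'
    print(repr(component['name']))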
2023-03-09 14:13:35
2
python,conda,sdlc,mamba
1
75,685,910
conda environments in team and shared Linux environment - best practices?
75,685,777
false
90
Conda is great, but it assumes that people code completely independently much like the complete lack of namespace hierarchy in public python modules. Looking for best practices with 6 people all developing in largely the same conda environment. Let's pretend there won't be exceptions. No containers, which would make this easier. I want to ask others what tips other have? Any other best practices anyone can recommend? Here's what we are working on currently: Having a "source" environment.yml and then having all developers work from an exported snapshot yml (unless they are developing the conda environment itself) so they are all developing in the exact same environment. you don't want surprises when someone's app goes to prod and you find out the environment it was developed under. This is just standard best practice in every language. Deploying shared conda environments to QA and Prod. Conda environments take up several GB of space, which is a minor concern, but it also takes a lot of time to install. It's a lot of machinery to install for every application update. Yes, need to be concerned about versioning that conda environment, but it is a less a problem to have 2 or 3 versions in use at the same time. And yes, there are occasionally bugs in conda/mamba. It's a pain to work around them
0.379949
2
1
I had been using conda for a couple of years but lately abandoned it after it put my team into deadlock situations a couple of times. The first suggestion I'd give, if you want to remain in the conda ecosystem, is at least to switch to mamba, since conda can be exceedingly slow at times. You did not specify what environments/languages you need to manage. For my team it's 95% Python, and we switched to poetry, which we find a lot faster and which makes it easier to manage a consistent environment via its TOML config. We also use it to separate the production project dependencies from the development environment, where the latter is equipped with the linters/testers used in either local (e.g. pre-commit) or remote (e.g. GitHub Actions) workflows. Of course, if you need to install non-Python dependencies that are not on PyPI, either conda or your platform's package manager (e.g. Homebrew for Mac, apt for Debian Linux and derivatives) will still be necessary. Hope any of this is useful.
2023-03-09 15:37:14
0
python,pytorch
1
76,470,352
Transformer encoder layer with pytorch : The shape of the 2D attn_mask is torch.Size([16, 512]), but should be (16, 16)
75,686,820
false
333
Here it's an minimal exemple of code : encoder_layers = nn.TransformerEncoderLayer(512, 8,2048 ,0.5) mask = torch.randint(0,2, (16,512)).bool() text = torch.randn(16,512) print(mask) print(text) encoder_layers(text,mask) This gives me the following error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-31-b326564457ab> in <module> 4 print(mask) 5 print(text) ----> 6 encoder_layers(text,mask) 5 frames /usr/local/lib/python3.9/dist-packages/torch/nn/functional.py in multi_head_attention_forward(query, key, value, embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias, bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight, out_proj_bias, training, key_padding_mask, need_weights, attn_mask, use_separate_proj_weight, q_proj_weight, k_proj_weight, v_proj_weight, static_k, static_v, average_attn_weights) 5067 correct_2d_size = (tgt_len, src_len) 5068 if attn_mask.shape != correct_2d_size: -> 5069 raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.") 5070 attn_mask = attn_mask.unsqueeze(0) 5071 elif attn_mask.dim() == 3: RuntimeError: The shape of the 2D attn_mask is torch.Size([16, 512]), but should be (16, 16). I don't understand why that doesn't work because the mask length will be equal to the number of the tokens ?
0
1
1
The mask represents which query vectors can attend to which key vectors in the attention layer. For example, in machine translation, the training batch has the entire sentence in the target language, but we don't want the queries at each word in the target language to attend to the keys for future words in that sentence. So at training time we apply a mask to filter out the future keys for each query. Therefore the attention mask should be of shape [len(queries), len(keys)]. In your example, len(queries) = len(keys) = 16.
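A minimal runnable sketch of a correctly shaped src_mask for the question's setup; the causal (upper-triangular) pattern is only an illustrative choice, not necessarily the masking the questioner needs:
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(512, 8, 2048, 0.5)
text = torch.randn(16, 512)                       # (sequence length, embedding dim)

# src_mask must be (len(queries), len(keys)) = (16, 16); True means "may not attend".
mask = torch.triu(torch.ones(16, 16, dtype=torch.bool), diagonal=1)

out = encoder_layer(text, mask)
print(out.shape)                                  # torch.Size([16, 512])
If the intent was instead to ignore padding tokens, src_key_padding_mask is the argument to reach for rather than the (queries, keys) attention mask.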
2023-03-09 16:25:03
1
python,pandas,visual-studio-code
1
75,689,131
How to upgrade pandas 2.0rc in VSCode
75,687,386
true
948
Trying to upgrade to pandas 2.0rc in VSCode using the Jupyter notebook extension. ! python -m pip install --upgrade pip ! pip install --pre pandas==2.0.0rc0 import pandas as pd pd.__version__ The result is, after the installation output: Successfully installed pandas-2.0.0rc0 So why does pd.__version__ still show '1.5.2'?
1.2
1
1
Easy answer: restart VSCode and then it works, because the running Jupyter kernel keeps the previously imported pandas until it is restarted.
2023-03-09 19:50:31
0
python,format,match,case
1
75,910,844
Ufmt Format doesn't work on Py scripts with match case
75,689,375
false
52
I know match case is a pretty recent addition to Python. ufmt (v2.0.1) still doesn't format files with match case statements in them. Any workarounds? Thanks. Here's the last thing I get from the terminal when I run ufmt format on such a file: libcst._exceptions.ParserSyntaxError: Syntax Error @ 11:19. Incomplete input. Encountered 'unit', but expected ';', or 'NEWLINE'. match unit: ^ I tried running ufmt format on a file with match case statements in it and expected the file to be formatted accordingly, but I got errors and the file wasn't formatted.
0
1
1
This happens because µsort (the import sorter that µfmt uses) relies on LibCST, which requires enabling its native Rust parser in order to support 3.10+ syntax, including match statements. This is fixed in the latest version of µfmt (2.1.0), which enables the LibCST native parser by default when formatting. If you can't (or prefer not to) upgrade µfmt, you can work around this issue by setting LIBCST_PARSER_TYPE=native in your environment before running µfmt or µsort.
2023-03-09 21:55:37
1
python,ssh,airflow
1
75,690,727
Airflow SSHOperator Command Timed Out when executing Python script
75,690,418
false
435
I created a DAG that successfully uses SSHOperator to execute a simple Python script from a server(note I have set cmd_timeout = None When I change the simple Python script to a more complex script, I get an error for "SSH command timed out" Additionally, if i log into the server, open CMD and execute C:/ProgramData/Python/Scripts/activate.bat && python C:/Users/Main/Desktop/Python/Python_Script.py, which is the one that "times out". It is successful, so I don't believe it is an issue with script or access. Is there an additional setting that must changed to avoid the SSH command time out when executing commands? Log: [2023-03-09, 21:31:53 UTC] {ssh.py:123} INFO - Creating ssh_client [2023-03-09, 21:31:53 UTC] {ssh.py:101} INFO - ssh_hook is not provided or invalid. Trying ssh_conn_id to create SSHHook. [2023-03-09, 21:31:53 UTC] {base.py:73} INFO - Using connection ID 'server' for task execution. [2023-03-09, 21:31:53 UTC] {transport.py:1874} INFO - Connected (version 2.0, client OpenSSH_for_Windows_8.1) [2023-03-09, 21:31:53 UTC] {transport.py:1874} INFO - Authentication (password) successful! [2023-03-09, 21:31:53 UTC] {ssh.py:465} INFO - Running command: call C:/ProgramData/Python/Scripts/activate.bat && python C:/Users/Main/Desktop/Python/Python_Script.py [2023-03-09, 21:32:03 UTC] {taskinstance.py:1772} ERROR - Task failed with exception Traceback (most recent call last): File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/ssh/operators/ssh.py", line 158, in execute result = self.run_ssh_client_command(ssh_client, self.command) File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/ssh/operators/ssh.py", line 143, in run_ssh_client_command exit_status, agg_stdout, agg_stderr = self.ssh_hook.exec_ssh_client_command( File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/ssh/hooks/ssh.py", line 526, in exec_ssh_client_command raise AirflowException("SSH command timed out") airflow.exceptions.AirflowException: SSH command timed out
0.197375
1
1
I was able to solve this. I had to change cmd_timeout = None to cmd_timeout = 7200 (seconds). You could choose a different, arbitrarily large number.
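A hedged sketch of that fix in DAG form; the dag_id, task_id and start_date are illustrative placeholders, while the connection id and command are taken from the question's log:
from datetime import datetime

from airflow import DAG
from airflow.providers.ssh.operators.ssh import SSHOperator

with DAG(dag_id="ssh_example", start_date=datetime(2023, 3, 1)) as dag:
    run_script = SSHOperator(
        task_id="run_python_script",
        ssh_conn_id="server",
        command=(
            "call C:/ProgramData/Python/Scripts/activate.bat && "
            "python C:/Users/Main/Desktop/Python/Python_Script.py"
        ),
        cmd_timeout=7200,   # seconds; a large finite timeout instead of None
    )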
2023-03-10 02:01:10
1
python,numpy
1
75,693,082
Error during install numpy version-1.20.3
75,691,655
false
505
I am working with BERTopic package and it requires certain version of numpy. However, I have a trouble with installing numpy with the metadata errors. How to resolve this issue? Collecting numpy==1.20.3 Downloading numpy-1.20.3.zip (7.8 MB) ---------------------------------------- 7.8/7.8 MB 7.5 MB/s eta 0:00:00 Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [272 lines of output] setup.py:66: RuntimeWarning: NumPy 1.20.3 may not yet support Python 3.10. warnings.warn( Running from numpy source directory. setup.py:485: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates run_build = parse_setuppy_commands() Processing numpy/random\_bounded_integers.pxd.in Processing numpy/random\bit_generator.pyx Processing numpy/random\mtrand.pyx Processing numpy/random\_bounded_integers.pyx.in Processing numpy/random\_common.pyx Processing numpy/random\_generator.pyx Processing numpy/random\_mt19937.pyx Processing numpy/random\_pcg64.pyx Processing numpy/random\_philox.pyx Processing numpy/random\_sfc64.pyx Cythonizing sources blas_opt_info: blas_mkl_info: No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils customize MSVCCompiler libraries mkl_rt not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE blis_info: libraries blis not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE openblas_info: libraries openblas not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']' customize GnuFCompiler Could not locate executable g77 Could not locate executable f77 customize IntelVisualFCompiler Could not locate executable ifort Could not locate executable ifl customize AbsoftFCompiler Could not locate executable f90 customize CompaqVisualFCompiler Could not locate executable DF customize IntelItaniumVisualFCompiler Could not locate executable efl customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize G95FCompiler Could not locate executable g95 customize IntelEM64VisualFCompiler customize IntelEM64TFCompiler Could not locate executable efort Could not locate executable efc customize PGroupFlangCompiler Could not locate executable flang don't know how to compile Fortran code on platform 'nt' NOT AVAILABLE atlas_3_10_blas_threads_info: Setting PTATLAS=ATLAS libraries tatlas not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE atlas_3_10_blas_info: libraries satlas not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in 
['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\system_info.py:1989: UserWarning: Optimized (vendor) Blas libraries are not found. Falls back to netlib Blas library which has worse performance. A better performance should be easily gained by switching Blas library. if self._calc_info(blas): blas_info: libraries blas not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\system_info.py:1989: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. if self._calc_info(blas): blas_src_info: NOT AVAILABLE C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\system_info.py:1989: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. if self._calc_info(blas): NOT AVAILABLE non-existing path in 'numpy\\distutils': 'site.cfg' lapack_opt_info: lapack_mkl_info: libraries mkl_rt not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE openblas_lapack_info: libraries openblas not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE openblas_clapack_info: libraries openblas,lapack not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE flame_info: libraries flame not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE atlas_3_10_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Users\tassa\anaconda3\envs\dementia\lib libraries tatlas,tatlas not found in C:\Users\tassa\anaconda3\envs\dementia\lib libraries lapack_atlas not found in C:\ libraries tatlas,tatlas not found in C:\ libraries lapack_atlas not found in C:\Users\tassa\anaconda3\envs\dementia\libs libraries tatlas,tatlas not found in C:\Users\tassa\anaconda3\envs\dementia\libs libraries lapack_atlas not found in C:\Users\tassa\anaconda3\Library\lib libraries tatlas,tatlas not found in C:\Users\tassa\anaconda3\Library\lib <class 'numpy.distutils.system_info.atlas_3_10_threads_info'> NOT AVAILABLE atlas_3_10_info: libraries lapack_atlas not found in C:\Users\tassa\anaconda3\envs\dementia\lib libraries satlas,satlas not found in 
C:\Users\tassa\anaconda3\envs\dementia\lib libraries lapack_atlas not found in C:\ libraries satlas,satlas not found in C:\ libraries lapack_atlas not found in C:\Users\tassa\anaconda3\envs\dementia\libs libraries satlas,satlas not found in C:\Users\tassa\anaconda3\envs\dementia\libs libraries lapack_atlas not found in C:\Users\tassa\anaconda3\Library\lib libraries satlas,satlas not found in C:\Users\tassa\anaconda3\Library\lib <class 'numpy.distutils.system_info.atlas_3_10_info'> NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries lapack_atlas not found in C:\Users\tassa\anaconda3\envs\dementia\lib libraries ptf77blas,ptcblas,atlas not found in C:\Users\tassa\anaconda3\envs\dementia\lib libraries lapack_atlas not found in C:\ libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas not found in C:\Users\tassa\anaconda3\envs\dementia\libs libraries ptf77blas,ptcblas,atlas not found in C:\Users\tassa\anaconda3\envs\dementia\libs libraries lapack_atlas not found in C:\Users\tassa\anaconda3\Library\lib libraries ptf77blas,ptcblas,atlas not found in C:\Users\tassa\anaconda3\Library\lib <class 'numpy.distutils.system_info.atlas_threads_info'> NOT AVAILABLE atlas_info: libraries lapack_atlas not found in C:\Users\tassa\anaconda3\envs\dementia\lib libraries f77blas,cblas,atlas not found in C:\Users\tassa\anaconda3\envs\dementia\lib libraries lapack_atlas not found in C:\ libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in C:\Users\tassa\anaconda3\envs\dementia\libs libraries f77blas,cblas,atlas not found in C:\Users\tassa\anaconda3\envs\dementia\libs libraries lapack_atlas not found in C:\Users\tassa\anaconda3\Library\lib libraries f77blas,cblas,atlas not found in C:\Users\tassa\anaconda3\Library\lib <class 'numpy.distutils.system_info.atlas_info'> NOT AVAILABLE lapack_info: libraries lapack not found in ['C:\\Users\\tassa\\anaconda3\\envs\\dementia\\lib', 'C:\\', 'C:\\Users\\tassa\\anaconda3\\envs\\dementia\\libs', 'C:\\Users\\tassa\\anaconda3\\Library\\lib'] NOT AVAILABLE C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\system_info.py:1849: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. return getattr(self, '_calc_info_{}'.format(name))() lapack_src_info: NOT AVAILABLE C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\system_info.py:1849: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. 
return getattr(self, '_calc_info_{}'.format(name))() NOT AVAILABLE numpy_linalg_lapack_lite: FOUND: language = c define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')] C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\dist.py:275: UserWarning: Unknown distribution option: 'define_macros' warnings.warn(msg) running dist_info running build_src build_src building py_modules sources creating build creating build\src.win-amd64-3.10 creating build\src.win-amd64-3.10\numpy creating build\src.win-amd64-3.10\numpy\distutils building library "npymath" sources Traceback (most recent call last): File "C:\Users\tassa\anaconda3\envs\dementia\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 351, in <module> main() File "C:\Users\tassa\anaconda3\envs\dementia\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 333, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "C:\Users\tassa\anaconda3\envs\dementia\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py", line 152, in prepare_metadata_for_build_wheel return hook(metadata_directory, config_settings) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\build_meta.py", line 157, in prepare_metadata_for_build_wheel self.run_setup() File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\build_meta.py", line 248, in run_setup super(_BuildMetaLegacyBackend, File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\build_meta.py", line 142, in run_setup exec(compile(code, __file__, 'exec'), locals()) File "setup.py", line 513, in <module> setup_package() File "setup.py", line 505, in setup_package setup(**metadata) File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\core.py", line 169, in setup return old_setup(**new_attr) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\__init__.py", line 165, in setup return distutils.core.setup(**attrs) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 148, in setup dist.run_commands() File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 967, in run_commands self.run_command(cmd) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command cmd_obj.run() File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\command\dist_info.py", line 31, in run egg_info.run() File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\egg_info.py", line 24, in run self.run_command("build_src") File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command cmd_obj.run() File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\build_src.py", line 144, in run self.build_sources() File 
"C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\build_src.py", line 155, in build_sources self.build_library_sources(*libname_info) File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\build_src.py", line 288, in build_library_sources sources = self.generate_sources(sources, (lib_name, build_info)) File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\build_src.py", line 378, in generate_sources source = func(extension, build_dir) File "numpy\core\setup.py", line 671, in get_mathlib_info st = config_cmd.try_link('int main(void) { return 0;}') File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 243, in try_link self._link(body, headers, include_dirs, File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\config.py", line 162, in _link return self._wrap_method(old_config._link, lang, File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\config.py", line 96, in _wrap_method ret = mth(*((self,)+args)) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 137, in _link (src, obj) = self._compile(body, headers, include_dirs, lang) File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\config.py", line 105, in _compile src, obj = self._wrap_method(old_config._compile, lang, File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\command\config.py", line 96, in _wrap_method ret = mth(*((self,)+args)) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 132, in _compile self.compiler.compile([src], include_dirs=include_dirs) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 401, in compile self.spawn(args) File "C:\Users\tassa\AppData\Local\Temp\pip-build-env-0qfm0c3u\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 505, in spawn return super().spawn(cmd, env=env) File "C:\Users\tassa\AppData\Local\Temp\pip-install-7vfmv9ja\numpy_b1d7b579ce1c4036a0a7723416dca8e5\numpy\distutils\ccompiler.py", line 90, in <lambda> m = lambda self, *args, **kw: func(self, *args, **kw) TypeError: CCompiler_spawn() got an unexpected keyword argument 'env' [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details.
0.197375
3
1
Open the command prompt as an administrator. Procedure 1: run the following commands: python -m pip install --upgrade pip, then python -m pip cache purge, then python -m pip install numpy==1.20.3. Procedure 2: if Procedure 1 does not work, run python -m pip install --no-cache-dir numpy==1.20.3. Procedure 3: if none of the above work, download the numpy-1.20.3 package manually from the official website and install it with python -m pip install path/to/numpy-1.20.3.whl.
2023-03-10 05:23:57
0
python,anaconda,pyodbc,miniconda
1
75,728,492
Anaconda 3 not installing pyodbc v5
75,692,525
true
74
I have installed Anaconda 3, which includes Python 3.9.16. When I install pyodbc in miniconda it is only version 4. I read it needs to be version 5 if Python is above 3.7. When I run my Python script I get the message ModuleNotFoundError: No module named 'pyodbc' even though it is installed. Is this because pyodbc is version 4 and should be version 5, or could it be another reason? I've tried all of the different flavours of installing pyodbc via Conda but I still get this error. I also have Python 3.11 installed manually. Could there be a conflict?
1.2
1
1
In the end, I installed Python 3.9 separately from the Anaconda install, installed pyodbc with pip, and it worked. I had assumed Anaconda used a different install of Python and would require a conda install.
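Not part of the answer, but a quick diagnostic sketch for the suspected conflict between the Anaconda interpreter and the manually installed Python 3.11: it prints which interpreter the script is actually running under and where pyodbc was found.
import sys

print(sys.version)
print(sys.executable)          # which Python the script is really using

import pyodbc                  # raises ModuleNotFoundError in the wrong environment
print(pyodbc.version, pyodbc.__file__)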
2023-03-10 07:14:54
1
python,lodash,pydash
1
75,693,477
Pydash: how to find using a object: py_.collections.find(DATA, OBJECT)
75,693,197
false
67
In lodash I can use the syntax find(ARRAY_OF_OBJECTS, OBJECT). This will return an object from the array if it meets the criteria of the passed object. In this case OBJECT would be e.g. { active: true, dimension: 'target' }. The objects in the array would contain e.g. active, dimension, status, etc. How can I do the same in pydash? I know I can do find(ARRAY_OF_OBJECTS, lambda x: x.active == True), but the thing is, the object I pass is built dynamically, so sometimes it might not have active (for example).
0.197375
1
1
Figured it out. I can do it with is_match from pydash. In a complete line of code it becomes the following, where target_data is an array of objects and source_row['dimensions'] is an object: py_.collections.find(target_data, lambda x: py_.predicates.is_match(x, source_row['dimensions']))
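A self-contained sketch of that is_match approach; the data values and the criteria dict are made up for illustration:
import pydash

target_data = [
    {"active": True, "dimension": "source", "status": "ok"},
    {"active": True, "dimension": "target", "status": "ok"},
]
criteria = {"active": True, "dimension": "target"}   # built dynamically in practice

# is_match only checks the keys present in `criteria`, so missing keys are fine.
match = pydash.find(target_data, lambda x: pydash.is_match(x, criteria))
print(match)   # {'active': True, 'dimension': 'target', 'status': 'ok'}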
2023-03-10 14:49:02
0
python,easyocr
4
75,697,682
trying to install easyocr
75,697,585
false
1,450
command run in py 3.11 PS C:\Users\lenovo\Documents\python\My Heroes> pip install easyocr output released PS C:\Users\lenovo\Documents\python\My Heroes> pip install easyocr Collecting easyocr Using cached easyocr-1.6.2-py3-none-any.whl (2.9 MB) Requirement already satisfied: torch in c:\users\lenovo\appdata\local\programs\python\python311\lib\site-packages (from easyocr) (2.1.0.dev20230310+cpu) Requirement already satisfied: torchvision>=0.5 in c:\users\lenovo\appdata\local\programs\python\python311\lib\site-packages (from easyocr) (0.15.0.dev20230310+cpu) Collecting opencv-python-headless<=4.5.4.60 Using cached opencv-python-headless-4.5.4.60.tar.gz (89.8 MB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [19 lines of output] Ignoring numpy: markers 'python_version == "3.6" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.7" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.8" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "linux" and platform_machine == "aarch64"' don't match your environment Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "darwin" and platform_machine == "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.9" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Collecting setuptools Using cached setuptools-67.6.0-py3-none-any.whl (1.1 MB) Collecting wheel Using cached wheel-0.38.4-py3-none-any.whl (36 kB) Collecting scikit-build Using cached scikit_build-0.16.7-py3-none-any.whl (79 kB) Collecting cmake Using cached cmake-3.25.2-py2.py3-none-win_amd64.whl (32.6 MB) Collecting pip Using cached pip-23.0.1-py3-none-any.whl (2.1 MB) ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11 ERROR: Could not find a version that satisfies the requirement numpy==1.21.2 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9 .0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1. 15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.1 9.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0rc1, 1.23.0rc2, 1.23.0rc3, 1.23.0, 1.23.1, 1.23.2, 1.2 3.3, 1.23.4, 1.23.5, 1.24.0rc1, 1.24.0rc2, 1.24.0, 1.24.1, 1.24.2) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. 
i installed torch with pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu as i found somewhere say nightly support 3.11 now i'm wondering it's torch problem or easyocr problem if someone know what pip to run for python 3.11, if even it's supported as i can't find what version it supports i tried different torch version, disinstalled numpy to see if pip install easyocr resolve it self but nothing i also tried pip install git+https://github.com/JaidedAI/EasyOCR.git and PS C:\Users\lenovo\Documents\python\My Heroes> pip install numpy==1.21.2 with this output ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11 ERROR: Could not find a version that satisfies the requirement numpy==1.21.2 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1. 9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1 .19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0rc1, 1.23.0rc2, 1.23.0rc3, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1 .23.4, 1.23.5, 1.24.0rc1, 1.24.0rc2, 1.24.0, 1.24.1, 1.24.2) ERROR: No matching distribution found for numpy==1.21.2 edit1: after someone suggested i runned by unistall my torch version first pip install --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html that's the pip show torch output Version: 2.1.0.dev20230311+cpu Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration Home-page: https://pytorch.org/ Author: PyTorch Team Author-email: packages@pytorch.org License: BSD-3 Location: C:\Users\lenovo\AppData\Local\Programs\Python\Python311\Lib\site-packages Requires: filelock, jinja2, networkx, sympy, typing-extensions Required-by: torchaudio, torchvision unfortunatly after running pip install easyocr the same error still reproduce, so it let me think it may be a easyOCR 3.11 support problem?
0
2
2
Pip is having trouble finding a version of numpy that matches the required version (numpy==1.21.2); the error indicates that one of the dependencies pulled in by the EasyOCR library pins that specific NumPy version. Try installing numpy 1.21.2 separately using pip install numpy==1.21.2 and then try installing easyocr again using pip install easyocr.
2023-03-10 14:49:02
0
python,easyocr
4
75,819,906
trying to install easyocr
75,697,585
false
1,450
command run in py 3.11 PS C:\Users\lenovo\Documents\python\My Heroes> pip install easyocr output released PS C:\Users\lenovo\Documents\python\My Heroes> pip install easyocr Collecting easyocr Using cached easyocr-1.6.2-py3-none-any.whl (2.9 MB) Requirement already satisfied: torch in c:\users\lenovo\appdata\local\programs\python\python311\lib\site-packages (from easyocr) (2.1.0.dev20230310+cpu) Requirement already satisfied: torchvision>=0.5 in c:\users\lenovo\appdata\local\programs\python\python311\lib\site-packages (from easyocr) (0.15.0.dev20230310+cpu) Collecting opencv-python-headless<=4.5.4.60 Using cached opencv-python-headless-4.5.4.60.tar.gz (89.8 MB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [19 lines of output] Ignoring numpy: markers 'python_version == "3.6" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.7" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.8" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "linux" and platform_machine == "aarch64"' don't match your environment Ignoring numpy: markers 'python_version <= "3.9" and sys_platform == "darwin" and platform_machine == "arm64"' don't match your environment Ignoring numpy: markers 'python_version == "3.9" and platform_machine != "aarch64" and platform_machine != "arm64"' don't match your environment Collecting setuptools Using cached setuptools-67.6.0-py3-none-any.whl (1.1 MB) Collecting wheel Using cached wheel-0.38.4-py3-none-any.whl (36 kB) Collecting scikit-build Using cached scikit_build-0.16.7-py3-none-any.whl (79 kB) Collecting cmake Using cached cmake-3.25.2-py2.py3-none-win_amd64.whl (32.6 MB) Collecting pip Using cached pip-23.0.1-py3-none-any.whl (2.1 MB) ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11 ERROR: Could not find a version that satisfies the requirement numpy==1.21.2 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9 .0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1. 15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.1 9.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0rc1, 1.23.0rc2, 1.23.0rc3, 1.23.0, 1.23.1, 1.23.2, 1.2 3.3, 1.23.4, 1.23.5, 1.24.0rc1, 1.24.0rc2, 1.24.0, 1.24.1, 1.24.2) [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. 
i installed torch with pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu as i found somewhere say nightly support 3.11 now i'm wondering it's torch problem or easyocr problem if someone know what pip to run for python 3.11, if even it's supported as i can't find what version it supports i tried different torch version, disinstalled numpy to see if pip install easyocr resolve it self but nothing i also tried pip install git+https://github.com/JaidedAI/EasyOCR.git and PS C:\Users\lenovo\Documents\python\My Heroes> pip install numpy==1.21.2 with this output ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11 ERROR: Could not find a version that satisfies the requirement numpy==1.21.2 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1. 9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1 .19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.22.0, 1.22.1, 1.22.2, 1.22.3, 1.22.4, 1.23.0rc1, 1.23.0rc2, 1.23.0rc3, 1.23.0, 1.23.1, 1.23.2, 1.23.3, 1 .23.4, 1.23.5, 1.24.0rc1, 1.24.0rc2, 1.24.0, 1.24.1, 1.24.2) ERROR: No matching distribution found for numpy==1.21.2 edit1: after someone suggested i runned by unistall my torch version first pip install --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html that's the pip show torch output Version: 2.1.0.dev20230311+cpu Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration Home-page: https://pytorch.org/ Author: PyTorch Team Author-email: packages@pytorch.org License: BSD-3 Location: C:\Users\lenovo\AppData\Local\Programs\Python\Python311\Lib\site-packages Requires: filelock, jinja2, networkx, sympy, typing-extensions Required-by: torchaudio, torchvision unfortunatly after running pip install easyocr the same error still reproduce, so it let me think it may be a easyOCR 3.11 support problem?
0
2
2
I'm seeing the exact same error on an M1 MacBook Pro. I assumed at first it was the Apple chip causing some issue, but I don't believe that is the case now.
2023-03-10 16:41:11
0
python,scipy,mathematical-optimization
2
75,701,005
How do I set simple linear constraints with dual_annealing?
75,698,797
false
144
I can set simple bounds to use with dual_annealing: E.g. upper_bound = 20 num_points = 30 bounds = [(0, upper_bound) for i in range(num_points)] res = dual_annealing(fun, bounds, maxiter=1000) But I would also like to constrain the variables so that x_i >= x_{i-1}+0.5 for each i. That is each variable should be at least 0.5 larger than the one preceding it. How can you do that? If scipy can't do it, are there other libraries with global optimizers that can?
0
2
1
You could try adding a Lagrangian-style penalty to the objective function whenever the constraints aren't satisfied. Alternatively, you can use the differential_evolution minimiser, which accepts explicit constraints.
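A small sketch of the penalty idea with dual_annealing; the objective fun and the penalty weight are placeholders, and the spacing rule x[i] >= x[i-1] + 0.5 comes from the question:
import numpy as np
from scipy.optimize import dual_annealing

def fun(x):
    return np.sum((x - 10.0) ** 2)     # placeholder objective

def penalized(x):
    # amount by which each consecutive pair violates x[i] >= x[i-1] + 0.5
    violation = np.maximum(0.0, (x[:-1] + 0.5) - x[1:])
    return fun(x) + 1e6 * np.sum(violation ** 2)

num_points = 5
bounds = [(0, 20)] * num_points
res = dual_annealing(penalized, bounds, maxiter=1000, seed=1)
print(res.x)                           # consecutive entries end up at least 0.5 apart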
2023-03-10 18:01:38
0
python,dataframe,pyspark
3
75,699,743
Joining 2 dataframes and collecting only the unique values in pyspark
75,699,585
false
42
I have 2 dataframes, the first one that is called questions_version1 and the second one called questions_version2. questions_version1 +-----------+----------+-----------+--------------+ | question |answer |version | total_answers +-----------+----------+-----------+--------------+ |eye color | blue | 1 | 15000 | |eye color | brown | 1 | 32000 | |eye color | green | 1 | 5000 | |hair color | brown | 1 | 47000 | |hair color | blonde | 1 | 3000 | |hair color | white | 1 | 2000 | +-----------+----------+-----------+--------------+ questions_version2 +-----------+----------+-----------+--------------+ | question |answer |version | total_answers +-----------+----------+-----------+--------------+ |eye color | blue | 1 | 15000 | |eye color | green | 1 | 5000 | |eye color | hazel | 2 | 9000 | |hair color | brown | 1 | 47000 | |hair color | white | 1 | 2000 | |hair color | red | 2 | 500 | +-----------+----------+-----------+--------------+ How do I join both so I can get all the values in questions_version2 that doesn't exist in questions_version1 without duplicating all the repeated values that exist in both? The final result would be something like this: questions_1_and_2_merged +-----------+----------+-----------+--------------+ | question |answer |version | total_answers +-----------+----------+-----------+--------------+ |eye color | blue | 1 | 15000 | |eye color | brown | 1 | 32000 | |eye color | green | 1 | 5000 | |eye color | hazel | 2 | 9000 | |hair color | brown | 1 | 47000 | |hair color | blonde | 1 | 3000 | |hair color | white | 1 | 2000 | |hair color | red | 2 | 500 | +-----------+----------+-----------+--------------+ Could you all help me with this please? Thanks!
0
1
1
I think you could loop through the second collection and do: if elem in questions_version1: continue else: questions_version1.append(elem). That keeps only the elements of questions_version2 that are not already present.
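The loop above is a plain-Python approach; since the question works with PySpark DataFrames, here is an alternative sketch (not the answerer's method) that produces the same merged result with union() plus dropDuplicates(), using tiny stand-in frames:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
columns = ["question", "answer", "version", "total_answers"]

v1 = spark.createDataFrame(
    [("eye color", "blue", 1, 15000), ("eye color", "brown", 1, 32000)], columns)
v2 = spark.createDataFrame(
    [("eye color", "blue", 1, 15000), ("eye color", "hazel", 2, 9000)], columns)

merged = v1.union(v2).dropDuplicates()     # rows present in both appear only once
merged.orderBy("question", "answer").show()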
2023-03-10 23:43:17
0
python,pygame
1
75,704,163
How to access or influence variables of other classes when you can't access their instances
75,701,915
false
37
I am building a game which has a level where bombs are thrown onto the player. I am having a bit of trouble in that. Ideally, the game should check whether the player is within the blast_radius of the bomb when the blast_time goes off (gets <= 0), and if yes, it should decrease a life from the player's lives. This needs a function which checks for (blast_time <= 0) and then looks if the player is within the blast radius of the bomb. If yes, then it just reduces a life from the player (self.lives -= 1). The problem I'm having is If I try to add the player_bomb_collision function into the Game class, I'm not able to access the blast_time for each bomb sprite (in the sprite Group). I can't directly access it using the instance because a new instance with the same name is created after "bomb_spawn_cooldown" goes off (<= 0). Thus reseting the "blast_time". And I can't implement a cooldown function in the game class since I need every bomb to have & follow their own cooldown. And if I add the player_bomb_collision function into the Bomb class, I can't tell the Game class to decrease a life. Since I don't have access to the "lives" variable (Yes i can take the lives var as a function argument but I can't return it because of how pygame works). Also, I can't create a new "lives" variable in the Bomb class because it would just reset the lives when a new Bomb instance is created. In both cases I am unable to access or influence variables in other classes due to the timer creating new instances, overwriting the same variable as the previous instance. The below code is the very core code related to the problem (or so i hope).I removed other functions unrelated to the problems such as movements or spawning control, positioning, etc import pygame pygame.init() screen = pygame.display.set_mode((screen_width, screen_height)) clock = pygame.time.Clock() screen_width = 1080 screen_height = 960 class Game(): import pygame pygame.init() screen = pygame.display.set_mode((screen_width, screen_height)) clock = pygame.time.Clock() screen_width = 1080 screen_height = 960 # Game Class class Game(): def __init__(self): # Player Setup player_sprite = Player(screen, pos = (500,500)) self.player = pygame.sprite.GroupSingle(player_sprite) #Bomb Setup self.bomb = pygame.sprite.Group() self.createBomb() self.bomb_spawn_cooldown = 20 # Score & Lives self.lives = 5 def createBomb(self): if self.bomb_spawn_cooldown <= 0: self.bomb_sprite = Bomb(screen) self.bomb.add(self.bomb_sprite) self.bomb_spawn_cooldown = 80 self.bomb_spawn_cooldown -= 1 def run(self): # self.player_bomb_collision() self.bomb.update() self.bomb.draw(screen) self.player.update() self.player.draw(screen) def gameloop(): exitgame = False FPS = 60 game = Game() while not exitgame: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() exit() screen.fill('Black') game.run() pygame.display.flip() clock.tick(FPS) if __name__ == "__main__": gameloop() # Bomb Class import pygame class Bomb(pygame.sprite.Sprite): def __init__(self, display_surface, blast_time = 200, size = (25,25), radius = 25): super().__init__() self.display_surface = display_surface self.image = pygame.Surface(size) self.image.fill('red') self.rect = self.image.get_rect(center = (520,520)) self.blast_time_remaining = blast_time self.blast_radius = radius def blast(self): if self.blast_time_remaining <= 0: print('blast!') self.kill() self.blast_time_remaining -= 1 def update(self): self.blast() def blast(self, player_sprite): if self.blast_time_remaining == 0: 
self.image.fill('red') hitbox = (self.rect.x - self.blast_radius, self.rect.y - self.blast_radius, self.rect.size[0] + (self.blast_radius*2), self.rect.size[1] + (self.blast_radius*2)) self.hitbox_rect = pygame.draw.rect(self.display_surface, 'orange', hitbox , 1) if pygame.Rect.colliderect(player_sprite.sprite.rect, self.hitbox_rect): print('Took a Bomb to the face') # self.lives -= 1 # self.is_damaged = True # print(self.lives) self.kill() self.blast_time_remaining -= 1 Above is the basic idea of what the function should do (note- this function is implemented in the Bomb class). What I am 90% sure of is that I cannot access/influence class variables without their respective instances. I have tried returning the variables but it did not work because of how inherting classes & pygame works. Only methods i can think of creating a global variable and importing it again & again (Neither do I like this method nor am I gonna do it). Thats about it. Please if you know anyway to solve this problem or there's anything to tell me, please do so. Thanks
0
1
1
An update to the question: I have figured out how to get around it, and honestly it was much simpler than I thought. Thank you VantaTree#2609 (on Discord) for pointing that out. I had to do a couple of things to solve it. First, I moved the implementation of the player's lives, damage, cooldown, etc. into the Player class (I don't know why they were in the Game class to begin with). Then I moved all obstacle collisions into their respective classes (yes, there are more obstacles than just bombs, but I was only having problems implementing the bomb mechanic). Afterwards I passed self.player_sprite (the Player class instance) as an argument to the obstacle classes (Bomb in this case) and passed self.player (the sprite group) as an argument to their respective update methods, for example self.bomb.update(self.player). And voilà, it worked. Please let me know if this was confusing or if you need to see the final code.
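A hedged sketch of the pattern described above, reusing the attribute names from the question's code. It is a fragment meant to slot into that code: it assumes Bomb.__init__ still sets blast_time_remaining, blast_radius and rect, and that the Player now owns a lives attribute.
import pygame

class Bomb(pygame.sprite.Sprite):
    def update(self, player_group):
        self.blast_time_remaining -= 1
        if self.blast_time_remaining <= 0:
            # grow the bomb rect by the blast radius on every side
            hitbox = self.rect.inflate(self.blast_radius * 2, self.blast_radius * 2)
            player = player_group.sprite          # GroupSingle exposes its one sprite
            if hitbox.colliderect(player.rect):
                player.lives -= 1                 # lives now live on the Player
            self.kill()

# In Game.run(): self.bomb.update(self.player)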
2023-03-11 03:02:47
0
python,pygame
1
75,702,651
I am not getting an option to close the window in pygame
75,702,535
true
42
So, this is a simple code written to make the window and the background image. I need help with 2 things. 1. I am not getting the option to close, minimize or maximize the window, the entire thing is covered with my background image. 2. I need help in not making my image so bloated, or stretched. It is quite high-res but I believe I have made it stretched. Code is below: import pygame as p import time import random import sys # Defining global variables HEIGHT = 800 WIDTH = 1000 # Adding a background image BACKGROUND = p.image.load("background_image.jpg") def draw(): WINDOW.blit(BACKGROUND, (0, 0)) p.display.update() # Making the window WINDOW = p.display.set_mode((WIDTH, HEIGHT)) p.display.set_caption("Space Dodge") # Window will quit after pressing the X def main(): run = True while run: for event in p.event.get(): if event.type == p.QUIT: run = False break WINDOW.fill((0, 0, 0)) draw() p.quit() sys.exit() if __name__ == "__main__": main()
1.2
1
1
There is a problem with the height of your window. The close/minimize/maximize options are available, but they are not visible on your screen because the window is too tall for the display, so the title bar sits off-screen. Solution: reduce HEIGHT and run the code again; it should work.
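A small sketch building on that answer, which also touches the second part of the question: pick a window size that fits the display, and scale the background image to exactly that size so it is not stretched by mismatched dimensions. The functions used are standard pygame; the window size here is just an illustrative choice.
import pygame as p

p.init()
WIDTH, HEIGHT = 800, 600                          # smaller than the original 1000x800
WINDOW = p.display.set_mode((WIDTH, HEIGHT))

# smoothscale resizes the high-res image cleanly to the window dimensions
BACKGROUND = p.transform.smoothscale(
    p.image.load("background_image.jpg").convert(), (WIDTH, HEIGHT))

WINDOW.blit(BACKGROUND, (0, 0))
p.display.update()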
2023-03-10 22:29:17
1
linux,bash,python,environment-variables,shared-libraries
1
75,711,225
How to keep `LD_LIBRARY_PATH` from getting overwritten to pwd when running python script
75,703,491
false
140
I have a python script which eventually calls a binary that requires certain shared libraries. However, I have been running into the following error: error while loading shared libraries: libmkl_rt.so: cannot open shared object file: No such file or directory I have narrowed down the cause of this error to the fact that python overwrites LD_LIBRARY_PATH to be the PWD. When executing the binary manually via the bash, everything works properly as this environment variable is properly set. Here are examples of the current behavior: $ echo $LD_LIBRARY_PATH /the/proper/paths $ python >>> import os >>> os.environ.get('LD_LIBRARY_PATH') /the/proper/paths $ python script.py /my/present/working/directory Here, script.py has the same contents as the second example. Other information asked for: $ which python ~/.pyenv/shims/python $ python --version Python 3.7.3 $ type python python is hashed (/home/ec2-user/.pyenv/shims/python) $ realpath `which python` /home/ec2-user/.pyenv/shims/python I would like to know if there's a way to see where this variable is getting changed and/or why this would be happening. I am aware that I could simply hard-code the in the script, but that is a band-aid solution, I would rather know exactly what is happening under the hood.
0.197375
3
1
I am aware that I could simply hard-code the in the script, but that is a band-aid solution, I would rather know exactly what is happening under the hood. What's happening under the hood is that some code in your script.py (or code which script.py imports) sets the environment variable. You can run gdb --args python script.py, set a breakpoint on setenv and putenv, and have GDB stop the program when the environment is reset. However, this will stop somewhere in the Python runtime, and figuring out which Python code is running might get tricky. That said, setting LD_LIBRARY_PATH is not the right way to fix the "binary that requires certain shared libraries" problem in the first place. A much better solution is to either link the binary with correct -rpath flag, or to fix already-linked binary using patchelf --set-rpath, so the binary runs correctly without LD_LIBRARY_PATH.
2023-03-11 12:49:16
0
python,python-3.x,django,django-views,abstract-class
1
75,719,740
Why some Django view classes had not been defined as abstract base class?
75,705,043
true
89
I'm writing a small and lightweight Django-like back-end framework for myself just for experiment. If we look at ProcessFormView view(and some other views): class ProcessFormView(View): def get(self, request, *args, **kwargs): return self.render_to_response(self.get_context_data()) def post(self, request, *args, **kwargs): form = self.get_form() if form.is_valid(): return self.form_valid(form) else: return self.form_invalid(form) ... To me it sounds like a valid case to define this class as an "abstract base class". After all, It needs that sub-classes provide render_to_response(), get_context_data(), get_form(), form_valid(), form_invalid(). (They will be provided by TemplateResponseMixin and FormMixin.) I can do something like this: class ProcessFormView(View, metaclass=ABCMeta): @abstractmethod def render_to_response(self, context): pass @abstractmethod def get_context_data(self): pass @abstractmethod def get_form(self): pass @abstractmethod def form_valid(self, form): pass @abstractmethod def form_invalid(self, form): pass def get(self, request, *args, **kwargs): return self.render_to_response(self.get_context_data()) def post(self, request, *args, **kwargs): form = self.get_form() if form.is_valid(): return self.form_valid(form) else: return self.form_invalid(form) ... Or even better we can factor these abstract methods out to another ABC class and inherit from it to clear things up. I know that of course it's a made decision and nothing is wrong with it. I mostly interested to know if do what I just showed, how can it cause problem in future? what would be the cons that I'm not aware of? The only disadvantage that I can think about is I should then write many abstract classes! This make the code base much bigger.
1.2
1
1
Aside from the mentioned fact that the code base would become much bigger, I think I found the main reason. It's true that if we subclass ProcessFormView and intend to use its get and post methods, we eventually have to provide those mixin methods somehow. But what if for some reason we only want to use its post method and provide our own custom get? The mixin methods are only called inside get and post, so in that case we no longer have to implement render_to_response() and get_context_data() at all. Defining them as abstract methods would unnecessarily force subclasses to implement them.
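For example, a minimal sketch (the subclass name and JSON response are my own illustration, assuming Django's ProcessFormView) of a view that replaces get() entirely and therefore never needs render_to_response() or get_context_data(), which an ABC would forbid:
from django.http import JsonResponse
from django.views.generic.edit import ProcessFormView

class PostOnlyFormView(ProcessFormView):
    # get() is fully replaced, so the template/context mixin methods are never called;
    # post() still expects the FormMixin methods to be mixed in by a further subclass,
    # as in Django's own FormView.
    def get(self, request, *args, **kwargs):
        return JsonResponse({"detail": "use POST to submit the form"})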
2023-03-11 14:03:18
1
python,django,django-models,django-views,fingerprint
1
75,705,638
How to save fingerprint data in django
75,705,469
true
128
So I have a project where I have a model called Beneficiary where I need to store the fingerprint. This is the model: class Beneficiary(models.Model): image=models.ImageField(upload_to='home/images/',blank=True,null=True) name = models.CharField(max_length=200) gender=models.CharField(max_length=6,choices=GENDER_CHOICES,default='male') dob=models.DateField() registration_date=models.DateField() I want to save the fingerprints of the beneficiaries in the model. I am using DigitalPersona2 fingerprint sensor. I am pretty new to fingerprints so can anyone help?
1.2
1
1
I don't know much about DigitalPersona2, but generally a fingerprint device comes with an SDK that must be installed on the system that uses the fingerprint scanner. DigitalPersona2 should have an SDK API that allows you to communicate with the device and capture the fingerprint data.
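If the SDK hands you the captured fingerprint template as raw bytes, one way to persist it is a BinaryField on the model — a hedged sketch (the field name is an assumption, not part of the question's model):
class Beneficiary(models.Model):
    # ... existing fields from the question ...
    # raw template bytes returned by the fingerprint SDK (field name is illustrative)
    fingerprint_template = models.BinaryField(null=True, blank=True)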
2023-03-11 14:43:25
-1
python,pip,guppy
1
75,711,283
Guppy install in Python 3.11 fails because of missing longintrepr.h file
75,705,697
false
340
Operating System: Windows I try: pip install guppy3 It runs successfully at first but fatally fails towards the end. The error message is: Collecting guppy3 Using cached guppy3-3.1.2.tar.gz (335 kB) Preparing metadata (setup.py) ... done Building wheels for collected packages: guppy3 Building wheel for guppy3 (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─> [95 lines of output] //so long so i delete it bitset.c src/sets/bitset.c(7): fatal error C1083: 無法開啟包含檔案: 'longintrepr.h': No such file or directory error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 By the way I have all ready update wheel 0.38.4 pip 23.0.1 setuptools 67.6.0 I really have no clue... Hope someone who can help me to fix this error!
-0.197375
1
1
I found the solution! I just downgraded my Python version to 3.9, which avoids the missing longintrepr.h problem that older guppy3 releases hit when building against Python 3.11.
2023-03-11 16:56:40
0
python-3.x,sqlite,flask-sqlalchemy,attributeerror
1
75,707,766
SQLAlchemy in flask automap_base not mapping classes?
75,706,511
false
141
long time listener first time caller. I'm making a flask app to create a small API off a SQLite db. Like the simplest thing in the world. I can query the db schema fine, so I know the engine is working. But as soon as I declare a class object with what is absolutely a class name, I get a AttributeError: 'function' object has no attribute 'WarID'. The error is so generic I haven't been able find anything. Help me, Obi-Wan Kenobi. Here's the app.py import numpy as np import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func from flask import Flask, jsonify ################################################# # Database Setup ################################################# try: engine = create_engine("sqlite:///static/data/iWars.sqlite") # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(autoload_with=engine) print("All about that base") print(Base) # # Save reference to the table # tribes = Base.classes.Tribes warsTable = Base.classes.Wars print("yes no maybe") except Exception as e: print("Hey what's up looks like something gnarly happened") print(e) ################################################# # Flask Setup ################################################# app = Flask(__name__) ################################################# # Flask Routes ################################################# @app.route("/") def welcome(): """List all available api routes.""" return ( f"Available Routes:<br/>" # f"/api/v1.0/tribes<br/>" f"/api/v1.0/listwars<br/>" ) @app.route("/api/v1.0/listwars") def warsRoute(): # Create our session (link) from Python to the DB session = Session(engine) # Query all results = session.query(warsTable.WarID).all() session.close() # Create a dictionary from the row data and append to a list of all_passengers all_passengers = [] for id in results: wars_dict = {} wars_dict["id"] = id all_passengers.append(wars_dict) return jsonify(all_passengers) if __name__ == '__main__': app.run(debug=True) Here's the run Notice that as soon as it hits the object declaration, it drops into the except block with just Wars as the exception, and then when I open the route I get the error shown. (mlenv) ianmacsmacbook:USIndigenousWars ianmacmoore$ python app.py All about that base <class 'sqlalchemy.ext.automap.Base'> Hey what's up looks like something gnarly happened Wars Serving Flask app "app" (lazy loading) * Environment: production WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: on * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) * Restarting with watchdog (fsevents) All about that base <class 'sqlalchemy.ext.automap.Base'> Hey what's up looks like something gnarly happened Wars * Debugger is active! 
* Debugger PIN: 955-667-953 127.0.0.1 - - [11/Mar/2023 13:19:44] "GET / HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:19:54] "GET /api/v1.0/listwars HTTP/1.1" 500 - Traceback (most recent call last): File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/app.py", line 2464, in __call__ return self.wsgi_app(environ, start_response) File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/app.py", line 2450, in wsgi_app response = self.handle_exception(e) File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/app.py", line 1867, in handle_exception reraise(exc_type, exc_value, tb) File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise raise value File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app response = self.full_dispatch_request() File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request rv = self.handle_user_exception(e) File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception reraise(exc_type, exc_value, tb) File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise raise value File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request rv = self.dispatch_request() File "/opt/anaconda3/envs/mlenv/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/Users/ianmacmoore/Documents/BluePlusRed/USIndigenousWars/app.py", line 56, in warsRoute results = session.query(warsTable.WarID).all() NameError: name 'warsTable' is not defined 127.0.0.1 - - [11/Mar/2023 13:19:54] "GET /api/v1.0/listwars?__debugger__=yes&cmd=resource&f=style.css HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:19:54] "GET /api/v1.0/listwars?__debugger__=yes&cmd=resource&f=debugger.js HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:19:54] "GET /api/v1.0/listwars?__debugger__=yes&cmd=resource&f=console.png HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:19:54] "GET /api/v1.0/listwars?__debugger__=yes&cmd=resource&f=ubuntu.ttf HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:19:54] "GET /api/v1.0/listwars?__debugger__=yes&cmd=resource&f=console.png HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:40:37] "GET /api/v1.0/wars?__debugger__=yes&cmd=resource&f=style.css HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:40:37] "GET /api/v1.0/wars?__debugger__=yes&cmd=resource&f=debugger.js HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:40:37] "GET /api/v1.0/wars?__debugger__=yes&cmd=resource&f=console.png HTTP/1.1" 200 - 127.0.0.1 - - [11/Mar/2023 13:40:37] "GET /api/v1.0/wars?__debugger__=yes&cmd=resource&f=ubuntu.ttf HTTP/1.1" 200 - Here're the class definitions used to initially write to the db class Against(Base): __tablename__ = 'Against' TribeID = Column(Integer, primary_key=True) AgainstCode = Column(Integer) WarID = Column(Integer) class Tribes(Base): __tablename__ = 'Tribes' TribeID = Column(Integer, primary_key=True) TribeName = Column(String(255)) TribeName2 = Column(String(255)) TribeName3 = Column(String(255)) class Wars(Base): __tablename__ = 'Wars' WarID = Column(Integer, primary_key=True) WarName = Column(String(255)) StartYear = Column(Integer) EndYear = Column(Integer) WikiLink = Column(String(255)) LengthYears = Column(Integer) class YearSum(Base): __tablename__ = 'YearSum' Year = Column(Integer, primary_key=True) SumWars = 
Column(Integer) y = Column(Integer) And here's the table object being created from a Jupyter Notebook on the same db wars = Table("Wars", metadata_obj, autoload_with=engine) wars Table('Wars', MetaData(bind=None), Column('index', BIGINT(), table=<Wars>), Column('WarID', BIGINT(), table=<Wars>), Column('War Name', TEXT(), table=<Wars>), Column('StartYear', BIGINT(), table=<Wars>), Column('EndYear', BIGINT(), table=<Wars>), Column('WikiLink', TEXT(), table=<Wars>), Column('LengthYears', BIGINT(), table=<Wars>), schema=None) Here's what I'm running conda list '^(python|flask|sqlal)' # packages in environment at /opt/anaconda3/envs/mlenv: # # Name Version Build Channel flask 1.1.2 pyhd3eb1b0_0 python 3.7.13 hdfd78df_0 python-dateutil 2.8.2 pyhd3eb1b0_0 python-fastjsonschema 2.16.2 py37hecd8cb5_0 python-libarchive-c 2.9 pyhd3eb1b0_1 python-lsp-black 1.2.1 py37hecd8cb5_0 python-lsp-jsonrpc 1.0.0 pyhd3eb1b0_0 python-lsp-server 1.5.0 py37hecd8cb5_0 python-slugify 5.0.2 pyhd3eb1b0_0 python-snappy 0.6.0 py37h23ab428_3 python.app 3 py37hca72f7f_0 sqlalchemy 1.4.39 py37hca72f7f_0 Thanks for your time and attention.
0
1
1
The automapped class and the route function are both declared with the same name, wars, in the module's top-level namespace, so the later definition of the route function overwrites the automapped class. The error message 'function' object has no attribute 'WarID' refers to the route function, not the automapped class. Choose a different name for one of them that does not conflict with any other name in the namespace.
2023-03-11 17:06:20
1
c,header-files,micropython
1
75,716,626
How to dynamically assign variable into #define statement in header file
75,706,566
false
75
I have EXTREMELY limited programming experience but am working on a custom Micropython build. A feature is enabled/disabled using a #define statement in a C header file e.g. #define MICROPY_HW_USB_MSC (1); I would like to make this dynamic as part of the code build. This defines whether the MCU supports USB Mass Storage at runtime. What I would like to do is to set this variable (?) from a script (or some other recommended means) that reads the state of a GPI pin on the MCU at boot. So if the pin is low, it effectively creates #define MICROPY_HW_USB_MSC (0), and conversely if the pin is high, #define MICROPY_HW_USB_MSC (1). I hope this makes sense and thanks in advance for any assistance. I have tried searching for a solution online but am not experienced enough to search for the correct terminology - hence turning to experts for assistance.
0.197375
1
1
Anything specified with #define is handled by the preprocessor, which runs at compile time. This means it cannot know what the state of a pin is at runtime. The only way to make the feature dynamic is to write actual C code that checks the state of the pin at boot and acts on it.
2023-03-11 19:36:42
1
python,android,android-uiautomator,pydroid
1
75,752,265
Error on android using uiautomator2 on Pydroid3
75,709,042
true
64
I'm trying to automate tasks on Android, but when I run my code on Pydroid3 on Android, it return me a error... My device is a Asus Zenfone Selfie 4 I'dont understand this error because the error is about a ADB EXE problem. Android have ADB EXE? The code: #MODULES IMPORT import os import cv2 import numpy as np from uiautomator2 import Device from uiautomator2 import connect #CLASS AND FUNCTIONS DEFINE class AndroidAuto: def _init_ (self, Serial = "Serial"): self.Serial = Serial def Android_Screenshot(self, dir = ".screenshots"): dir = dir if not os.path.exists(dir): os.makedirs(dir) device = Device(self.Serial) device.screenshot(f"{dir}/.screenshot_opencv.png") return (f"{dir}/.screenshot_opencv.png") def Android_Touch(self, x = 0, y = 0): device = Device(self.Serial) device.click(x, y) def Android_Get_Position(self, target = None): Device.toast("Starting ") while True: template = AndroidAuto().Android_Screenshot() w, h = target.shape[:-1] res = cv2.matchTemplate(template, target, cv2.TM_CCOEFF_NORMED) threshold = 0.8 loc = np.where(res >= threshold) if len(loc[0]) > 0: x, y = loc[::-1] return x, y else: x, y = None, None Device.toast("Trying again!") continue def Android_Find_Click(self, target = "", x = 0, y = 0): while True: match = AndroidAuto().Android_Get_Position(target) if match != None: AndroidAuto().Android_Touch(x, y = match) break else: continue AA = AndroidAuto() print(AA.Android_Screenshot()) Then when I play, it returns me this is the error: RuntimeError: No adb exe could be found. install adb on your system
1.2
1
1
That's what the site description says: python-uiautomator2 is a Python wrapper which allows scripting with Python on a computer and controlling the mobile from that computer, without a USB connection. According to this (and my guess), uiautomator2 relies on the adb executable to connect to the device, searches for it in the $PATH system variable, fails to find it, and shows you that error. So I guess it's not possible to use uiautomator2 in Pydroid3 on the phone itself the way you would on a PC.
2023-03-11 20:26:42
0
python,anagram
3
75,709,582
Anagram function returns False for unknown reason in some words - Python
75,709,320
false
71
I've tried to create a function that returns True if two words are anagrams (have the same letters) to each other. I realized that my counter counts to 13 but not sure what is the issue. def is_anagram (worda: str , wordb: str) -> bool: count_same_letters = 0 if len(worda) == len(wordb): for i in worda: for j in wordb: if i == j: count_same_letters = count_same_letters+1 print(count_same_letters) print(len(wordb)) return count_same_letters == len(wordb) else: return False print(is_anagram("anagram","nagaram")) while trying the string 'abc' abd 'bca' the output was True as I expected, but the strings 'anagram' and 'nagaram'returns False
0
2
1
You are comparing every letter of one word with every letter of the other, so you are going through each pair of letters. For instance, if you have two 5-letter words, you're going through 25 pairs of letters. If you input aaaab and baaaa, your count_same_letters counter will reach 4*4 + 1*1 = 17 (4*4 pairs of a's and 1*1 pair of b's). For 'anagram' and 'nagaram' it reaches 3*3 + 1 + 1 + 1 + 1 = 13, which is why you saw the counter stop at 13 and the comparison with the length 7 return False. Change your algorithm.
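A minimal sketch of a correct check: two words are anagrams if and only if their letter counts (or sorted letters) are equal:
from collections import Counter

def is_anagram(worda: str, wordb: str) -> bool:
    # Counter builds a multiset of letters, so order does not matter
    return Counter(worda) == Counter(wordb)

print(is_anagram("anagram", "nagaram"))  # True
print(is_anagram("abc", "abd"))          # False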
2023-03-11 20:42:02
0
python,pytorch,pytorch-geometric
2
75,709,515
What does it mean if -1 is returned for .get_device() for torch tensor?
75,709,399
false
136
I am using pytorch geometric to train a graph neural network. The problem that led to this question is the following error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm) So, I am trying to check which device the tensors are loaded on, and when I run data.x.get_device() and data.edge_index.get_device(), I get -1 for each. What does -1 mean? In general, I am a bit confused as to when I need to transfer data to the device (whether cpu or gpu), but I assume for each epoch, I simply use .to(device) on my tensors to add to the proper device (but as of now I am not using .to(device) since I am just testing with cpu). Additional context: I am running ubuntu 20, and I didn't see this issue until installing cuda (i.e., I was able to train/test the model on cpu but only having this issue after installing cuda and updating nvidia drivers). I have cuda 11.7 installed on my system with an nvidia driver compatible up to cuda 12 (e.g., cuda 12 is listed with nvidia-smi), and the output of torch.version.cuda is 11.7. Regardless, I am simply trying to use the cpu at the moment, but will use the gpu once this device issue is resolved.
0
1
1
-1 means the tensors are on the CPU. When you do .to(device), what is your device variable initialized as? If you want to use only the CPU, I suggest initializing device as device = torch.device("cpu") and running your Python code with CUDA_VISIBLE_DEVICES='' python your_code.py ... Typically, if you are passing your tensors to a model, PyTorch expects the tensors to be on the same device as the model. If you are passing multiple tensors to a method, such as your loss function nn.CrossEntropyLoss(), PyTorch expects both tensors (predictions and labels) to be on the same device.
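A minimal sketch of keeping everything on one device each step (assuming a PyTorch Geometric data batch and a model as in the question; the model call signature is an assumption):
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)              # move the model once
x = data.x.to(device)                 # move each batch's tensors every step
edge_index = data.edge_index.to(device)
out = model(x, edge_index)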
2023-03-11 21:39:04
2
python,directory,python-3.9
1
75,710,064
Pip is looking for python in wrong directory
75,709,706
false
138
Pip is looking for python in a directory that no longer exists. I downgraded from python 3.10 to python 3.9 using Microsoft app store after my original python 3.10 had an error running pip install pyqt5-tools. I have tried adding my new python to PATH but it doesn't work. I am trying to convert a UI file to a PY file using pyuic5. I tried C:\WINDOWS\system32>python --version It gave No Python at '"C:\Program Files\Python310\python.exe' I also tried where python and it gave C:\Users\Ponsar Kumzhi\PycharmProjects\pythonProject1\venv\Scripts\python.exe This suggests that Windows knows where the correct installation of python is located. I have tried adding this new directory to path and it doesn't work. What should I do?
0.379949
1
1
You probably missed out something or there is some issues with your computer. To update the PATH environment variable is probably the only solution to fix this issue. Search for "Environment Variables" in the search bar. Click on "Edit the system environment variables". In the System Properties window, click on the "Environment Variables" button. Scroll down to the "Path" variable in the "System Variables" section and select it. Click on the "Edit" button and "New" button and add the path to the Python 3.9 installation directory. For your case, it is: C:\Users\Ponsar Kumzhi\PycharmProjects\pythonProject1\venv\Scripts\python.exe. Click on "OK" lastly. After you finished the above mentioned steps, press Windows + R and type cmd, then press enter. Try typing "py --version" and it should print "Python [version]". You should then be able to use pip to install packages and use pyuic5 command.
2023-03-12 18:44:56
0
python,image,tensorflow,keras,tensor
2
75,716,225
AttributeError: EagerTensor object has no attribute 'astype'
75,715,563
true
309
I am trying to do a GradCAM Heatmap in Google Colab like so: import tensorflow as tf from tensorflow.keras import backend as K from tf_keras_vis.activation_maximization import ActivationMaximization from tf_keras_vis.utils.callbacks import Print def model_modifier(m): m.layers[-1].activation = tf.keras.activations.linear activation_maximization = ActivationMaximization(model, model_modifier) loss = lambda x: K.mean(x[:, 1]) activation = activation_maximization(loss, callbacks=[Print(interval=100)]) image = activation[0].astype(np.uint8) # <----- error f, ax = plt.subplots(figsize=(10, 5), subplot_kw={'xticks': [], 'yticks': []}) ax.imshow(image) plt.show() but I get an error AttributeError: EagerTensor object has no attribute 'astype'.
1.2
1
1
Convert to numpy first with activation[0].numpy(), then you should be able to use numpy array methods.
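Applied to the failing line from the question, a minimal sketch of the fix:
image = activation[0].numpy().astype(np.uint8)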
2023-03-12 20:32:39
1
python,numpy,bernoulli-probability
2
75,716,261
Numpy generating array from repeated function
75,716,218
false
57
I'm trying to generate an numpy array with randomly generated elements (it's similar like bernouilli distribution. The thing is I want to achieve it without using numpy.random.binomial function). Only function that fits after searching documentation and some forums is numpy.fromfunction. But I encountered a problem that this function doesn't generate n elements, but one. I expect output something like: [0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1] This function generates 1 element only, no matter what shape is in tuple: np.fromfunction(lambda i : np.random.choice([0, 1], p=[0.1, 0.9]), (20, )) #returns 1 or 0 np.fromfunction(lambda i ,j : np.random.choice([0, 1], p=[0.1, 0.9]), (20, 1)) #still returns 1 or 0 Though I tried implementing "i" into the output stupidest way possible but.. it changed something, but still didn't help: np.fromfunction(lambda i : i*0 + np.random.choice([0, 1], p=[0.1, 0.9]), (20, )) #returns array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]) It's closer to the shape I want, but this ones or zeros are just repeated on all the array, so nothing's changed really. To sum up: I have a function f() that generates randomly with some probability 0 and 1, and is there any other function in numpy that can repeat function f() on array, or maybe some way to repair the example above?
0.099668
2
1
Sure: np.random.choice([0, 1], p=[0.1, 0.9], size=(20,)) generates the whole array at once. There is also np.random.randint(low=0, high=2, size=(20,)), though that one draws 0 and 1 with equal probability, so it only fits the unweighted case.
2023-03-13 09:06:14
3
python,intern
1
75,721,318
Python string intern mechanism
75,719,812
true
93
I'm studying the python string intern mechanism. While I was doing some test like this: # short str list_short_str = ['0', str(0), chr(48), ''.join(['0']), '0'.join(('','')), '230'[-1:], ''+'0'+'', 'aaa0a'.strip('a')] print("short str id:") for item in list_short_str: print(id(item)) # long str list_long_str = ['hello', 'hel'+'lo', 'helloasd'[:5], ''.join(['h','e','l','l','o']), ' hello '.strip(' ')] print("long str id:") for item in list_long_str: print(id(item)) I got output like this: short str id: 2450307092400 2450856182064 2450307092400 2450307092400 2450298848880 2450307092400 2450307092400 2450307092400 long str id: 2450855173808 2450855173808 2450856182256 2450856182320 2450856182192 I have tried IDLE, PyCharm and Jupyter, and all of these IDE gave me the same output. More precisely, for short string '0', str(0) and '0'.join(('','')) use the different id(s), and the others share the same; for long string 'hello', 'hello' and 'hel'+'lo' share the same id(s), and the others are different. I access to information but haven't found out the answer. Could anyone please tell me why?
1.2
2
1
With Python's string interning mechanism, when we create two strings with the same value, instead of allocating memory for both of them only one string is actually committed to memory; the other name just points to that same memory location. However, the interning behavior in Python can differ based on several factors, including the version of Python, the implementation of the interpreter, and the context in which the string is created. As a result, identical string values may not always be interned, and the behavior can be difficult to predict in certain cases. One reason the result can differ for the same value is that Python interns only string literals (constants known at compile time) and not string objects that are created at runtime. If two equal strings are produced at compile time, they share the same reference; if one of them is built at runtime, its reference (id) will usually differ. For the short strings: the str() function is executed at runtime; it converts the given object into a string representation dynamically while the program runs, not while it is compiled. The '0'.join(('', '')) expression is also executed at runtime: it uses the join() method of the string '0' to join the elements of the tuple ('', ''), and since that tuple must exist at runtime for join() to operate on it, the expression must be evaluated at runtime; a join() with a non-empty delimiter is not folded at compile time. Likewise, ''.join(['0']) cannot be fully evaluated at compile time, although since it joins a single-element list with an empty delimiter, the interpreter may be able to optimize the expression partially. For the long strings: string concatenation with the + operator is generally executed at runtime, because the values of the operands may not be known until then. In certain cases, though, the interpreter can optimize concatenation and perform it at compile time: if both operands of + are string literals and the expression appears in a context where the result can be computed at compile time, the concatenation may be folded. For example: a = "hello" b = "world" c = a + b # this concatenation will be executed at runtime d = "hello" + "world" # this concatenation may be folded at compile time The remaining expressions, 'helloasd'[:5], ''.join(['h','e','l','l','o']) and ' hello '.strip(' '), are all executed at runtime, which is why they have different ids. I hope this helps at least a little.
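A small sketch of the literal-vs-runtime difference and of forcing sharing with sys.intern (the exact is results are CPython implementation details, not language guarantees):
import sys

a = ''.join(['h', 'e', 'l', 'l', 'o'])  # built at runtime, normally not interned
b = 'hello'                             # identifier-like literal, interned by CPython
print(a == b)              # True: same value
print(a is b)              # usually False: two distinct objects
print(sys.intern(a) is b)  # usually True: interning hands back the shared object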
2023-03-13 09:33:29
0
python,python-3.x,web-scraping,web,python-requests
1
75,905,488
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')), Python
75,720,088
true
385
I made a telegram bot and deployed it on Render. All works fine for like 3 hours then the bot gets down and throws this error: `raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))` Worth mentioning that the server keeps being Live despite the error, but the bot gets totally out of service. And when re-deploying it, it works fine for another 3 hours and then bot dies but server keeps Live. I tried adding this code: time.sleep(0.45) r = requests.get(get_url(name), headers=headers) if r.status_code == 200: soup = bs(r.content, features='html.parser') But did not help that much. Any solutions?
1.2
1
1
It turned out I just needed to use bot.infinity_polling() instead of bot.polling(), so the bot keeps reconnecting after connection errors instead of dying.
2023-03-13 10:00:27
1
python,excel,windows,csv,file
1
75,720,478
What is the maximum size of a csv file?
75,720,345
false
303
I have 70 files in format with an actual file size of 800 MB each. Total original file size: 56000 MB. Unzipping, moving, and processing (at least renaming the files) become quite challenging and time-consuming tasks in Windows i7. I was considering if it will be a good idea to concatenate all the files in one file. So, one giant file of size of 56000 MB. Is there any limitation on the maximum size a CSV file can have? Also, is it a good idea to work with a such big single file or shall I continue with 70 separate files?
0.197375
1
1
The only hard limitation is your storage: in theory a CSV file can be of unlimited size. But if it is too big it may become impractical to work with because of limitations on your read and write speed, available memory, and so on.
2023-03-13 14:11:35
1
python,python-3.x,conda,environment
1
75,723,127
Meaning of this error message python version conflicts
75,723,026
true
117
While trying to install package: line-profiler-pycharm, I install it using pip inside my conda environment called wcs. But when I try to run it, I get the following error: 22:01 Error running 'prep_network': Could not do Profile Lines execution because the python package 'line-profile-pycharm' is not installed to Python 3.8 (wcs): Python 3.10.5 (/Users/myusername/opt/anaconda3/envs/wcs/bin/python) Why does it show to python 3.8 and then python 3.10 after the colon :? I checked the python versions inside my ˜/anaconda3/wcs/bin/python , I can see python 3.11 and python3.10, Python3.8 does not even exist in there.
1.2
1
1
Type $ which pip and compare with $ which python. They likely come from different places. To avoid the mismatch, prefer $ python -m pip install ... Going forward, it would be best to add "line-profiler-pycharm" to your project's environment.yml, and then $ conda env update will properly install it for you. If your config gets messed up, it is always harmless to discard the environment (anaconda3/envs/wcs) and use $ conda env update to re-create it from scratch.
2023-03-13 18:28:48
1
python,python-3.x
2
75,725,676
How can I update my Python version and keeping all the packages?
75,725,658
false
298
I want to ask you a question about the Python update. If I wanted to update my Python version from the 3.10 to 3.11, what should I do? And, most important, do I have to reinstall all the packages like Pandas, Numpy, Matplotlib, ...? In your opinion, when is needed to update it?
0.099668
2
1
Unless there is some security threat or some particularly useful new functionality, I wouldn't update too eagerly between minor versions. That said, thanks to pip, managing Python packages is easy. You can use pip freeze with a specific interpreter, like: python3.10 -m pip freeze This will output a list of all libraries and their versions used by this specific binary. You can put that into a file: python3.10 -m pip freeze > /tmp/python3.10.pip.txt And then install it with another version: python3.11 -m pip install -r /tmp/python3.10.pip.txt
2023-03-13 20:41:00
2
python,python-asyncio
4
75,726,786
Confused by python async for loop--executes sequentially
75,726,719
true
164
I am new to asyncio and trying to understand basic for loop behavior. The code below executes sequentially, but my naive assumption was that while the sleeps are occurring, other items could be fetched via the for loop and start processing. But that doesn't seem to happen. For example, while the code is "doing something else with 1" it seems like it could fetch the next item from the loop and start working on it while waiting for the sleep to end on item 1. But when I run, it executes sequentially with pauses for the sleeps like a non-async program. What am I missing here? import asyncio class CustomIterator(): def __init__(self): self.counter = 0 def __aiter__(self): return self async def __anext__(self): if self.counter >= 3: raise StopAsyncIteration await asyncio.sleep(1) self.counter += 1 return self.counter async def f(item): print(f"doing something with {item}") await asyncio.sleep(3) async def f2(item): print(f"doing something else with {item}") await asyncio.sleep(2) async def do_async_stuff(): async for item in CustomIterator(): print(f"got {item}") await f(item) await f2(item) if __name__ == '__main__': asyncio.run(do_async_stuff()) Output: got 1 doing something with 1 doing something else with 1 got 2 doing something with 2 doing something else with 2 got 3 doing something with 3 doing something else with 3
1.2
3
2
I think you have a common misunderstanding of how async works. You have written your program to be synchronous. await foo() says: call foo(), and feel free to go do something else while we're waiting for foo to return its answer. Likewise, getting the next element from your custom iterator says: get the next element of this iterator, but feel free to go do something else while waiting for the result. In both cases you have given the event loop nothing else to do, so your code simply waits. If it is safe for two things in your code to run at once, it is your job to say so, using the appropriate primitives (for example asyncio.create_task or asyncio.gather).
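A minimal sketch of saying so (assuming the imports, CustomIterator, f and f2 from the question): schedule f() and f2() as tasks for each item instead of awaiting them one after the other, then wait for them all at the end:
async def do_async_stuff():
    tasks = []
    async for item in CustomIterator():
        print(f"got {item}")
        tasks.append(asyncio.create_task(f(item)))   # starts running concurrently
        tasks.append(asyncio.create_task(f2(item)))
    await asyncio.gather(*tasks)                      # wait for everything to finish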
2023-03-13 20:41:00
0
python,python-asyncio
4
75,733,656
Confused by python async for loop--executes sequentially
75,726,719
false
164
I am new to asyncio and trying to understand basic for loop behavior. The code below executes sequentially, but my naive assumption was that while the sleeps are occurring, other items could be fetched via the for loop and start processing. But that doesn't seem to happen. For example, while the code is "doing something else with 1" it seems like it could fetch the next item from the loop and start working on it while waiting for the sleep to end on item 1. But when I run, it executes sequentially with pauses for the sleeps like a non-async program. What am I missing here? import asyncio class CustomIterator(): def __init__(self): self.counter = 0 def __aiter__(self): return self async def __anext__(self): if self.counter >= 3: raise StopAsyncIteration await asyncio.sleep(1) self.counter += 1 return self.counter async def f(item): print(f"doing something with {item}") await asyncio.sleep(3) async def f2(item): print(f"doing something else with {item}") await asyncio.sleep(2) async def do_async_stuff(): async for item in CustomIterator(): print(f"got {item}") await f(item) await f2(item) if __name__ == '__main__': asyncio.run(do_async_stuff()) Output: got 1 doing something with 1 doing something else with 1 got 2 doing something with 2 doing something else with 2 got 3 doing something with 3 doing something else with 3
0
3
2
To add more context to the other great answers, I'd like to spell out a few (quite simplified) definitions which may help you better understand the issue: Concurrency refers to the ability to execute multiple tasks in seemingly the same period of time. Parallelism refers to the ability to execute multiple tasks in physically the same period of time. Asynchrony refers to concrete ways of dealing with concurrent tasks. There are many such ways at the application level, such as: callback-based approaches (callback functions), dispatch queues, the future and promise pattern, the async/await pattern, etc. Using one of these forms of asynchronous programming does not guarantee that your program will execute concurrently; you often have to state that explicitly. AsyncIO —as stated in the documentation— is "a library to write concurrent code using the async/await syntax", and as its name suggests, is mainly dedicated to dealing with asynchronous I/O (Input/Output) events (but not limited to them). To create a concurrent program using AsyncIO you need two things: the async/await syntax, and explicitly scheduling tasks concurrently, for instance with asyncio.gather or asyncio.create_task. Without that, your program will generally execute sequentially, as you observed in your example.
2023-03-14 02:56:58
1
python,pytorch
1
75,728,754
Can i initialize optimizer before changing the last layer of my model
75,728,717
true
37
Say I want to change the last layer of my code but my optimizer is defined on the top of my scripts, what is a better practise? batch_size = 8 learning_rate = 2e-4 num_epochs = 100 cnn = models.resnet18(weights='DEFAULT') loss_func = nn.CrossEntropyLoss() optimizer = optim.Adam(cnn.parameters(), lr=learning_rate) def main(): dataset= datasets.ImageFolder(root=mydir) loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True) num_classes = len(dataset.classes) num_ftrs = cnn.fc.in_features cnn.fc = nn.Linear(num_ftrs, num_classes) # Do i need to reinitialize the optimizer again here? if __name__ == '__main__': main()
1.2
1
1
No. The optimizer will not be aware of the parameters of the new last layer unless you initialize (or reinitialize) the optimizer after modifying the model. You should define the optimizer after creating the model and modifying any layers as needed.
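A minimal sketch of the order that works, reusing the names from the question:
cnn = models.resnet18(weights='DEFAULT')
cnn.fc = nn.Linear(cnn.fc.in_features, num_classes)          # replace the head first
optimizer = optim.Adam(cnn.parameters(), lr=learning_rate)   # then create the optimizer, so it sees the new fc parameters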
2023-03-14 08:00:40
0
python,python-3.x,python-asyncio
1
75,731,093
What happens when there are no await after create_task
75,730,400
false
54
I am trying to understand asyncio and I am quite confused about tasks. Specifically, I know a task will run when the current coroutine is suspended and controll is passed to the event loop. What if await is never called after a task is created, and the coroutine just exit? For example import asyncio async def print_stuff(s): print(s) async def print_wrapper(s): print(f"{s}: create task") asyncio.create_task(print_stuff(s)) print(f"{s}: sleeping") print(f"{s}: ending coroutine") async def sleep_wrapper(s): print(f"{s}: before sleep") await asyncio.sleep(2) print(f"{s}: after sleep") async def main(): asyncio.create_task(print_stuff("stuff")) asyncio.create_task(print_wrapper("wrapper")) asyncio.create_task(sleep_wrapper("abc")) asyncio.run(main()) print("after main") The output is stuff wrapper: create task wrapper: sleeping wrapper: ending coroutine abc: before sleep after main What I don't understand is why would I see any output from the tasks at all, after main ends, shouldn't the event loop ends with it? why "wrapper" and "abc: after sleep" were not printed while the other messages were?
0
1
1
I briefly looked at the asyncio source and I think the event loop makes one loop (or maybe two?) iterations and then it returns, because the stop condition (finished main) is true. There are 4 tasks at the beginning, 3 created in the main and the 4th is the main coroutine itself. In one loop iteration each task that is ready is run one "step", i.e. new tasks from the beginning, other tasks from the point where the previous step finished. A "step" always finishes at the point where the task cannot continue and that is either the end of the corresponding coroutine (a return or after the last instruction or an exception) or at an await that cannot immediately return a value. In the first case, the task becomes "done". In the latter case the task becomes not ready and waits for some pending future value. When the future becomes "done", the associated task can be scheduled to run again. In your code it is the sleep where the "step" ends.
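If the goal is to actually see all of the output, a minimal sketch of a fix (my own addition, assuming the coroutines from the question) is to keep main alive until the created tasks finish:
async def main():
    tasks = [
        asyncio.create_task(print_stuff("stuff")),
        asyncio.create_task(print_wrapper("wrapper")),
        asyncio.create_task(sleep_wrapper("abc")),
    ]
    await asyncio.gather(*tasks)  # asyncio.run(main()) now returns only after every task is done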
2023-03-14 10:37:29
1
python,pandas,jupyter-notebook
1
75,767,400
How do I force the usual "default" formatting that pandas dataframes output to?
75,732,052
true
68
I'm using a jupyter like platform that has truly terrible visual capabilities and one of the annoying things is that when simply seeing a dataframe by having it be the last thing in a cell all the usual pandas formatting is gone if I turn it into a styler. What I mean by that is: alternating rows white / grey hover over row gets me nothing instead of that blue highlighting numbers are aligned in weird ways I want to use styler because it allows me to format numbers easily (ie things like '{:,.0f}' or '{:.1%}'. Is there a way to force the formatting to match the "default" in some easy way? eg where is the default stored? df # this works fine as an output df.style # this loses all default formatting
1.2
1
1
Some editors rely on CSS classes to apply their own browser-specific styling. At one time Google Colab refused to auto-format a Styler, since the DataFrame HTML adds the dataframe class in <table class="dataframe">, whereas Styler doesn't. Try using df.style.set_table_attributes('class="dataframe"')
2023-03-14 15:10:49
2
python,amazon-web-services,amazon-s3,aws-lambda
1
75,736,362
Append S3 File in Lambda, Python
75,734,987
false
61
I understand it's not recommendable way but we need to update the same S3 file using Lambda. We'd like to write and add the count result in the same file in S3. Usually, it's counting and updating one by one in sequential order in the same job session. But we can't deny any possibility that the scheduled jobs using same Lambda function can run simultaneously, especially at the busy time. In such case, the file can be updated in unintended way or corrupsed in the worst senario. I think we can set the concurrency 0 of that Lambda function and make it always run in sequential order, but I'm afraid it might cause the performance issue instaed. If you have any better idea for concurrency prevention or any error countermeasure in Python code below, please advise me. Thank you for your help in advance. -- lambda_handler.py import json from s3 import * def lambda_handler(event, context): s3_append('yyyymmdd/result.txt', 'yyyymmdd/result.txt', 'append str') -- function S3.py import boto3 from datetime import datetime bucket_name = 's3-sachiko-aws3-new2' s3_resource = boto3.resource("s3") # S3 File Append def s3_append(target_path, append_path, append_str): target_context = s3_resource.Object(bucket_name, target_path).get()["Body"].read() s3_resource.Object(bucket_name, append_path).put(Body=target_context + b"\n" + bytes(append_str, 'utf-8'))
0.379949
1
1
If I understand your requirement, you want to do two things: Make a freeform update to an S3 file based on an upstream request Make a simple calculated update to the same file based on the result of Update 1. If this is the case, you can set up a SQS FIFO queue and push the requests from the upstream system into it. Then you can set up an Event Source Mapping lambda that will pull freeform updates one-by-one from the queue, make Updates 1 and 2, write the file back to S3 and then pick up the next item in the queue.
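A minimal sketch of what such an event-source-mapped Lambda handler could look like (reusing the s3_append helper from the question; the key names in the message body are illustrative):
def lambda_handler(event, context):
    # Messages from the SQS FIFO queue arrive in order per message group,
    # so the read-modify-write of the S3 file is effectively serialized.
    for record in event["Records"]:
        append_str = record["body"]
        s3_append('yyyymmdd/result.txt', 'yyyymmdd/result.txt', append_str)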
2023-03-14 16:56:00
3
python,algorithm,time-complexity
3
75,736,272
What is a quick way to count the number of pairs in a list where a XOR b is greater than a AND b?
75,736,181
false
253
I have an array of numbers, I want to count all possible combination of pairs for which the xor operation for that pair is greater than and operation. Example: 4,3,5,2 possible pairs are: (4,3) -> xor=7, and = 0 (4,5) -> xor=1, and = 4 (4,2) -> xor=6, and = 0 (3,5) -> xor=6, and = 1 (3,2) -> xor=1, and = 2 (5,2) -> xor=7, and = 0 Valid pairs for which xor > and are (4,3), (4,2), (3,5), (5,2) so result is 4. This is my program: def solve(array): n = len(array) ans = 0 for i in range(0, n): p1 = array[i] for j in range(i, n): p2 = array[j] if p1 ^ p2 > p1 & p2: ans +=1 return ans Time complexity is O(n^2) , but my array size is 1 to 10^5 and each element in array is 1 to 2^30. So how can I reduce time complexity of this program.
0.197375
1
1
Say a and b are integers. Then a^b > a&b iff a and b have different highest set bits. Solution: use a counting map where the keys are the highest set bits. Populate this in linear time. Then, process the keys. Say there are n total integers, and some key has r integers (with the same highest set bit). Then that key adds r * (n-r) to the count of pairs where xor > and. That is, each of r integers can be paired with each of (n-r) with a different highest set bit. This double-counts everything, so divide by 2 at the end. Example: Say we have 8 integers, 3 of which have the third bit as their highest set bit, 3 the fourth, and 2 the fifth. So per my algorithm, we have three buckets of sizes 3, 3, and 2, and a solution of [3*(8-3) + 3*(8-3) + 2*(8-2)] / 2 = 42 / 2 = 21. Here's a more detailed explanation of the approach: Any two integers within the same bucket have a higher value under the AND operation than the XOR operation because the AND operation preserves the max set bit, and the XOR operation turns it to 0. Now take two integers in separate buckets. One of them has a higher max set bit than the other. That means that the max set bit between the two numbers appears in one but not the other, so becomes a 0 under the AND operation, but is preserved under the XOR operation, thus XOR results in a higher max set bit. The number of pairs where XOR yields a higher result than AND is exactly the number of pairs which are not both in the same bucket. Say we have n integers total, and r in some bucket. Each of the r integers in that bucket can be paired with any of the (n-r) integers in the other buckets and not with any of the r-1 integers in the same bucket, contributing r * (n-r) to the count of pairs where XOR yields a higher integer than AND. However, this counts every pair that contributes to our count exactly twice. E.g., when we're comparing 1001 and 110, our analysis of both the bucket with the 4th and 3rd highest set bit being 1 will be incremented by 1 for this pair. Thus at the end we have to divide by 2. Further example: Here are all integers with the third bit as their highest set bit: 4, 5, 6, 7, or 100, 101, 110, and 111 in binary. Here are all with the second bit as their highest set bit: 2, 3 or 10, 11 in binary. Take any pair with the same highest set bit, arbitrarily I'll choose 6 and 7. AND(110, 111) = 110 = 6. XOR(110, 111) = 001. So the AND operation produces a higher result than XOR. In every case, XOR will convert the highest set bit from 1 to 0, and AND will keep it at 1, so in every case AND will result in a higher result than XOR. Taking pairs from separate bucket, whichever bit is the highest set bit among the pair is only set in one of the two integers (because this is what we're bucketing by), so under AND that bit becomes 0, and under XOR it remains 1. Since the XOR operation leaves the output with a bigger highest-set-bit than AND, the resulting integer is higher under XOR.
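A sketch of the O(n) counting approach described above, using int.bit_length() as the bucket key (valid here because every element is at least 1):
from collections import Counter

def solve(array):
    # bucket = position of the highest set bit
    buckets = Counter(x.bit_length() for x in array)
    n = len(array)
    # each element pairs with every element in a different bucket; divide by 2 to undo double counting
    return sum(r * (n - r) for r in buckets.values()) // 2

print(solve([4, 3, 5, 2]))  # 4, matching the example in the question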
2023-03-14 19:57:10
1
python,google-bigquery
2
76,011,889
Bases must be type error when running "from google.cloud import bigquery" in jupyter notebook in GCP
75,737,824
false
353
I tried running the following: from google.cloud import bigquery But from some reason, I keep getting this "Bases must be types" error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-3-1035661e8528> in <module> ----> 1 from google.cloud import bigquery /opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/__init__.py in <module> 33 __version__ = bigquery_version.__version__ 34 ---> 35 from google.cloud.bigquery.client import Client 36 from google.cloud.bigquery.dataset import AccessEntry 37 from google.cloud.bigquery.dataset import Dataset /opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/client.py in <module> 58 59 import google.api_core.client_options ---> 60 import google.api_core.exceptions as core_exceptions 61 from google.api_core.iam import Policy 62 from google.api_core import page_iterator /opt/conda/lib/python3.7/site-packages/google/api_core/exceptions.py in <module> 27 import warnings 28 ---> 29 from google.rpc import error_details_pb2 30 31 try: /opt/conda/lib/python3.7/site-packages/google/rpc/error_details_pb2.py in <module> 18 # source: google/rpc/error_details.proto 19 """Generated protocol buffer code.""" ---> 20 from google.protobuf import descriptor as _descriptor 21 from google.protobuf import descriptor_pool as _descriptor_pool 22 from google.protobuf import message as _message /opt/conda/lib/python3.7/site-packages/google/protobuf/descriptor.py in <module> 38 import warnings 39 ---> 40 from google.protobuf.internal import api_implementation 41 42 _USE_C_DESCRIPTORS = False /opt/conda/lib/python3.7/site-packages/google/protobuf/internal/api_implementation.py in <module> 102 try: 103 # pylint: disable=g-import-not-at-top --> 104 from google.protobuf.pyext import _message 105 sys.modules['google3.net.proto2.python.internal.cpp._message'] = _message 106 _c_module = _message TypeError: bases must be types I'm not sure what I did wrong. Maybe I install or uninstalled a package somewhere. But now I can't get this simple command to work. Any help would be greatly appreciated.
0.099668
2
2
The correct command in the terminal to change the protobuf version (my account was unable to comment on the above solution, so I had to add it as an answer instead): pip install --upgrade protobuf==3.20.1
2023-03-14 19:57:10
2
python,google-bigquery
2
75,922,541
Bases must be type error when running "from google.cloud import bigquery" in jupyter notebook in GCP
75,737,824
true
353
I tried running the following: from google.cloud import bigquery But from some reason, I keep getting this "Bases must be types" error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-3-1035661e8528> in <module> ----> 1 from google.cloud import bigquery /opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/__init__.py in <module> 33 __version__ = bigquery_version.__version__ 34 ---> 35 from google.cloud.bigquery.client import Client 36 from google.cloud.bigquery.dataset import AccessEntry 37 from google.cloud.bigquery.dataset import Dataset /opt/conda/lib/python3.7/site-packages/google/cloud/bigquery/client.py in <module> 58 59 import google.api_core.client_options ---> 60 import google.api_core.exceptions as core_exceptions 61 from google.api_core.iam import Policy 62 from google.api_core import page_iterator /opt/conda/lib/python3.7/site-packages/google/api_core/exceptions.py in <module> 27 import warnings 28 ---> 29 from google.rpc import error_details_pb2 30 31 try: /opt/conda/lib/python3.7/site-packages/google/rpc/error_details_pb2.py in <module> 18 # source: google/rpc/error_details.proto 19 """Generated protocol buffer code.""" ---> 20 from google.protobuf import descriptor as _descriptor 21 from google.protobuf import descriptor_pool as _descriptor_pool 22 from google.protobuf import message as _message /opt/conda/lib/python3.7/site-packages/google/protobuf/descriptor.py in <module> 38 import warnings 39 ---> 40 from google.protobuf.internal import api_implementation 41 42 _USE_C_DESCRIPTORS = False /opt/conda/lib/python3.7/site-packages/google/protobuf/internal/api_implementation.py in <module> 102 try: 103 # pylint: disable=g-import-not-at-top --> 104 from google.protobuf.pyext import _message 105 sys.modules['google3.net.proto2.python.internal.cpp._message'] = _message 106 _c_module = _message TypeError: bases must be types I'm not sure what I did wrong. Maybe I install or uninstalled a package somewhere. But now I can't get this simple command to work. Any help would be greatly appreciated.
1.2
2
2
Marking this as an answer for better visibility, as this resolved the question: pip installing protobuf v3.20.1, as @runner16 mentioned: pip install protobuf==3.20.1
2023-03-15 02:42:07
0
python,pandas,dataframe,type-conversion
3
75,740,287
Pandas dataframe - transform selected cell values based on their suffix
75,740,088
false
61
I have a dataframe as below: data_dict = {'id': {0: 'G1', 1: 'G2', 2: 'G3'}, 'S': {0: 35.74, 1: 36.84, 2: 38.37}, 'A': {0: 8.34, 1: '2.83%', 2: 10.55}, 'C': {0: '6.63%', 1: '5.29%', 2: 3.6}} df = pd.DataFrame(data_dict) I want to multiply all the values in the data frame with 10000 (except under the column 'id' - 1st column) if they endswith %: cols = df.columns[1:] for index, row in df.loc[:, df.columns != 'id'].iterrows(): for c in cols: if str(row[c]).endswith('%'): data_value = str(row[c]) data_value = data_value.replace('%',"") df.at[index,c]= float(data_value) * 10000 Finally, this sets all the columns values (except the first column) to numeric: df[cols[1:]] = df[cols[1:]].apply(pd.to_numeric, errors='coerce') Is there a simple way to convert the values instead of iterating the rows?
0
2
1
You can rewrite the code using DataFrame.applymap, which applies a function element-wise, so no explicit row iteration is needed.
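For example, a minimal sketch (the helper name is mine) that applies the same conversion element-wise to every column except id and then coerces the result to numeric, matching the original loop's logic:
def pct_to_number(v):
    # strings ending in '%' become float * 10000, everything else is left untouched
    if isinstance(v, str) and v.endswith('%'):
        return float(v.rstrip('%')) * 10000
    return v

cols = df.columns[1:]
df[cols] = df[cols].applymap(pct_to_number).apply(pd.to_numeric, errors='coerce')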
2023-03-15 06:22:28
1
python-hypothesis
1
75,816,263
How to create a hypothesis strategy to sample uniformly over a range?
75,741,139
false
34
I am doing something roughly like this: test_a.py import unittest import hypothesis import hypothesis.extra.numpy import numpy as np from hypothesis import strategies as st SHAPE = (10, ) ARRAY_STRATEGY = hypothesis.extra.numpy.arrays(float, SHAPE, elements=st.floats(min_value=-1, max_value=1)) ZERO_ONE_STRATEGY = st.floats(min_value=0, max_value=1) class TestMyClass(unittest.TestCase): @hypothesis.settings(max_examples=10) @hypothesis.given( value=ZERO_ONE_STRATEGY, arr1=ARRAY_STRATEGY, arr2=ARRAY_STRATEGY, arr3=ARRAY_STRATEGY, ) def test_something(self, value: float, arr1: np.ndarray, arr2: np.ndarray, arr3: np.ndarray) -> None: print(value) And I am often seeing behavior like this: $ pytest test_a.py --capture=no 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 I am testing behavior with respect to value, so sampling 0.0 10 times renders this test useless to me. I don't need to randomly sample value, in fact, I would like to sample it evenly over the range of ZERO_ONE_STRATEGY at the granularity of 1 / (max_examples - 1). How can I create a strategy to do that?
0.197375
1
1
Hypothesis does not offer control over the distribution of examples, because computers are better at this than humans. In your example, I'd bet that the other arguments are in fact varying between examples, and it's pretty reasonable to check what happens when value=0.0 and the arrays take on a variety of values. The distribution of inputs which finds bugs fastest-in-expectation is certainly not uniform; nor does it match the distribution of inputs seen in production. We make no promises about the distribution of examples, so that we can improve it in new versions and based on runtime observations of the behavior of code under test. Coverage-guided fuzzing with HypoFuzz or SMT-solving with CrossHair are extreme cases but Hypothesis itself has some feedback mechanisms too (as well as the target() function). We deliberately keep a small amount of redundancy, weighted towards the early examples, to check that your code does the same thing when fed the same inputs. So my advice overall is to relax, trust the system and research it's based on, and think about the properties to test and possible range of valid inputs rather than the distribution within that range.
2023-03-15 11:47:20
0
python,python-3.x,openai-api,gpt-3,chatgpt-api
2
75,744,748
How Can I make openAI API respond to requests in specific categories only?
75,744,277
false
1,349
I have created an openAI API using python, to respond to any type of prompt. I want to make the API respond to requests that are only related to Ad from product description and greetings requests only and if the user sends a request that's not related to this task, the API should send a message like I'm not suitable for tasks like this. import os import openai openai.api_key = os.getenv("OPENAI_API_KEY") response = openai.Completion.create( model="text-davinci-003", prompt="Write a creative ad for the following product to run on Facebook aimed at parents:\n\nProduct: Learning Room is a virtual environment to help students from kindergarten to high school excel in school.", temperature=0.5, max_tokens=100, top_p=1.0, frequency_penalty=0.0, presence_penalty=0.0 ) I want to update the code to generate a chat like this. make bot understand generating ADs and greetings requests and ignoring the others EX:- user:- Hello api:- Hello, How can I assist you today with your brand? user:- Write a social media post for the following product to run on Facebook aimed at parents:\n\nProduct: Learning Room is a virtual environment to help students from kindergarten to high school excel in school. api:- Are you looking for a way to give your child a head start in school? Look no further than Learning Room! Our virtual environment is designed to help students from kindergarten to high school excel in their studies. Our unique platform offers personalized learning plans, interactive activities, and real-time feedback to ensure your child is getting the most out of their education. Give your child the best chance to succeed in school with Learning Room! user:- where is the united states located? api:- I'm not suitable for this type of tasks. So, How can update my code?
0
1
1
A naive implementation would be to check a user prompt before you send it to GPT-3: compare every word in the prompt against a hash table of advertisement-related (and greeting-related) words. If the prompt contains enough related words, let it go through; otherwise, change the prompt to 'Say "I'm not suitable for this type of tasks."' or simply return that message without calling the API.
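A minimal sketch of that pre-filter (the keyword sets and threshold are assumptions for illustration, not part of the original answer):
AD_KEYWORDS = {"ad", "advert", "advertisement", "post", "product", "brand", "campaign", "social", "media", "facebook"}
GREETING_KEYWORDS = {"hello", "hi", "hey", "greetings"}

def is_supported(prompt: str) -> bool:
    words = set(prompt.lower().split())
    return len(words & (AD_KEYWORDS | GREETING_KEYWORDS)) >= 1

def answer(prompt: str) -> str:
    if not is_supported(prompt):
        return "I'm not suitable for this type of tasks."
    # otherwise send the prompt to openai.Completion.create(...) exactly as in the question
    ...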
2023-03-15 13:25:30
0
python,google-cloud-platform,jupyter-notebook,gcp-ai-platform-notebook
2
75,753,196
automatic reloading of jupyter notebook after crash
75,745,337
true
134
is there a way to reload automatically a jupyter notebook, each time it crashes ? I am actually running a notebook, that trains a Deep learning model (the notebook can reload the last state of model, with state of optimizer and scheduler, after each restart of the kernel ), so that reloading the notebook after a crash enables to get back the last state without a substantial loss of computations. I was wondering if there was a simple way to do that using the jupyter notebook API, or a signal from the jupyter notebook for example (maybe on logs). Also, I am running the notebook on google cloud platform (on compute engine), if you know any efficient way to do it, using the GCP troubleshooting services, and the logging agent, it might be interested for me and for others with the same issue. Thank you again for you time. I tried to look up for a solution on stack overflow, but I didn't find any similar question.
1.2
1
1
From your comment: "reloading the notebook after a crash enables to get back the last state without a substantial loss of computations." What do you call a crash? Does it generate logs that can be parsed from /var/log or another location (e.g. journalctl -u jupyter.service)? If so, you can manually create a shell script. With User Managed Notebooks you have the concept of a post-startup-script or startup-script. post-startup-script is the path to a Bash script that automatically runs after a notebook instance fully boots up. The path must be a URL or Cloud Storage path. Example: "gs://path-to-file/file-name". This script can be a loop that monitors the crash you mention.
2023-03-15 14:25:39
0
python,python-3.x
4
75,746,186
Can Classes Use Assignment Unpacking Similar to Iterable Lists?
75,746,077
false
42
I have list myList = [[1, 2], [3, 4], [5, 6]] which I can use assignment unpacking to get list_instance0, list_instance1, list_instance2 = myList If I have a class: class myClass(): def __init__(self, a: int, b: int): self.a = a self.b = b def printAB(self): return self.a, self.b Now If I want to assign the list_instance[i] I can do: classInstance0=myClass(*list_instance0) classInstance1=myClass(*list_instance1) classInstance2=myClass(*list_instance2) If I was to try subClass0, subClass1, subClass2 = myClass(*list_instance0, *list_instance1, *list_instance2) # error "myClass" is not iterable: __iter__ method not defined So I was wondering is there a way to assign classInstance[i] with list_instance[i] to myClass in a more dynamic way instead of having to assign each one individually?
0
1
1
I think you are misunderstanding inheritance. The items you are calling subClass are not subclasses of myClass; they are instances of myClass. So the short answer is no. Calling the constructor for myClass returns a single object, namely an instance of myClass, so it cannot be assigned to 3 names or unpacked like an iterable, because it is a single object.
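If the goal is simply to build one instance per inner list without writing each assignment by hand, a comprehension does that; this is a minimal sketch reusing the names from the question:
```python
myList = [[1, 2], [3, 4], [5, 6]]

class myClass():
    def __init__(self, a: int, b: int):
        self.a = a
        self.b = b

    def printAB(self):
        return self.a, self.b

# One instance per inner list, unpacked into the constructor.
instances = [myClass(*args) for args in myList]

# Unpack the list of instances if you really want three separate names.
classInstance0, classInstance1, classInstance2 = instances
print(classInstance1.printAB())  # (3, 4)
```
itertools.starmap(myClass, myList) is an equivalent spelling if you prefer a functional style.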
2023-03-15 15:26:37
1
python,for-loop,bioinformatics,fasta
2
75,746,898
I'm trying to convert a fasta document into a dictionary in python using a for loop, but only my last iteration gets captured
75,746,806
false
62
I'm trying to write a code to create a dictionary that reads fasta document for a dna sequence, where names of dna sequences are signified with a ">" at the start of the row containing the name. Until the next time a name is encountered, the dna sequence's bases will keep getting assigned to the dictionary entry. The for loop I've created only creates a dictionary for the last sequence, and I can't understand why this happens. Here's the code I wrote: def read_fasta(): with open('../data/problem_1_question_4_new.fasta', 'r') as fasta: for line in fasta: rows = line.split() sequencedict = {} sequence = '' if str(rows)[2] == '>': sequencename = str(rows)[3:-2] else: sequence += str(rows)[2:-2] sequencedict[sequencename] = sequence return(sequencedict) print(read_fasta()) I'm assuming that I have an error with the indentations, but I don't know where. Edit: I've solved the error. I moved the line "sequencedict = {}" outside of the for loop. My new code is: def read_fasta(): with open('../data/problem_1_question_4_new.fasta', 'r') as fasta: sequencedict = {} for line in fasta: rows = line.split() sequence = '' if str(rows)[2] == '>': sequencename = str(rows)[3:-2] else: sequence += str(rows)[2:-2] sequencedict[sequencename] = sequence return(sequencedict) print(read_fasta())
0.099668
1
1
You need to declare your dict outside the for loop. As it stands, your dict is re-created on every iteration, so only the last sequence survives.
2023-03-15 17:50:38
0
python,visual-studio-code,cvzone
2
75,753,445
Cannot import cvzone in VSCode though correctly installed
75,748,411
true
370
I am trying to work with cvzone library in Python using VSCode. I have installed the library using pip install cvzone. But when trying to import the library, ModuleNotFoundError is being raised. import cvzone Output ModuleNotFoundError: No module named 'cvzone' I have checked that the correct environment is activated in terminal Have checked that the interpreter is correctly selected Ran pip list and ensured that the library is installed Tried to import using python in terminal, and worked correctly I tried to run the same in Colab and it was successfully imported. But when trying to run in Jupyter Notebook in VSCode, it is not working. Using cvzone 1.5.6
1.2
1
1
I deleted the virtualenv, created a new one (with another name. First, I created one with the same name, but it showed the same issue), and reinstalled all the packages. Then it worked! I don't know what was wrong, but anyway... Thank you all for answering.
2023-03-15 20:09:28
1
python,string
1
75,749,894
Using is statement for strings
75,749,605
true
51
Can someone explain why I am getting following outputs? (I know this code has SyntaxWarning.) >>> a = 'book' >>> a is 'book' True >>> a = 'two book' >>> a is 'two book' False I was expecting same result for both code snippet.
1.2
1
1
When comparing any 2 objects with "is" in python, you actually check whether they are the same object in memory. When creating a string, python may optimize storage by reusing the same allocation for both, but this behavior isn't consistent and can differ between strings. I noticed the behavior changes when a space (" ") is part of the string. Anyway, you should compare objects with "==" and not "is". That way, the comparison uses the __eq__ method of the object's class, as it should. You may also implement this method yourself if you'd like to (using inheritance and overriding).
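A short demonstration of the difference; the exact interning behaviour is a CPython implementation detail and can vary between versions and between a script and the REPL, so treat the "is" results as illustrative only:
```python
import sys

a = 'two book'
print(a == 'two book')   # True  -- compares values via __eq__
print(a is 'two book')   # implementation-dependent, often False (emits a SyntaxWarning, as in the question)

# Explicit interning forces both names to point at the same object.
b = sys.intern('two book')
c = sys.intern('two book')
print(b is c)            # True
```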
2023-03-15 21:23:51
0
python,scikit-learn,regression,linear-regression,olsmultiplelinearregression
2
75,755,724
linear regression/ols regression with python
75,750,204
false
65
I want to run multiple linear regression models, and there are 5 independent variables (2 of them are categorical). Thus, I first applied onehotencoder to change categorical variables into dummies. These are dependent and independent variables y = df['price'] x = df[['age', 'totalRooms', 'elevator', 'floorLevel_bottom', 'floorLevel_high', 'floorLevel_low', 'floorLevel_medium','floorLevel_top', 'buildingType_bungalow', 'buildingType_plate', 'buildingType_plate_tower', 'buildingType_tower']] Next, I tried the following two methods, but found that their results are different only for the intercept and categorical variables. from sklearn.linear_model import LinearRegression mlr = linear_model.LinearRegression() mlr.fit(x, y) print('Intercept: \n', mlr_in.intercept_) print("Coefficients:") list(zip(x, mlr_in.coef_)) This gives Intercept: 35228.96453917408 Coefficients: [('age', 1046.5347118942063), ('totalRooms', -797.7667275033103), ('elevator', 11940.629576736419), ('floorLevel_bottom', 1011.5929167549165), ('floorLevel_high', 157.60625500592502), ('floorLevel_low', 483.89164772666277), ('floorLevel_medium', 630.9547280568961), ('floorLevel_top', -2284.0455475443687), ('buildingType_bungalow', 31610.88176756009), ('buildingType_plate', -9649.087529585862), ('buildingType_plate_tower', -8813.187607409624), ('buildingType_tower', -13148.606630564624)] import statsmodels.formula.api as smf x_in = sm.add_constant(x_in) model = sm.OLS(y, x_in).fit() print(model.summary()) but this gives Intercept 2.43e+04 age 1046.5347 totalRooms -797.7667 elevator 1.194e+04 floorLevel_bottom 5870.7604 floorLevel_high 5016.7738 floorLevel_low 5343.0592 floorLevel_medium 5490.1223 floorLevel_top 2575.1220 buildingType_bungalow 3.768e+04 buildingType_plate -3575.1281 buildingType_plate_tower -2739.2282 buildingType_tower -7074.6472 Now I don't understand the difference between them ;(
0
1
1
A few things to take care of, assuming you have done the data preprocessing identically for each run (from the variable names I suspect something else may have differed): Set the seed to the same number so that both runs start from the same random state. Avoid the dummy variable trap and use pd.get_dummies(x, columns=['floorLevel', 'buildingType'], drop_first=True).
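A rough sketch of what that looks like end to end, assuming df still holds the original floorLevel and buildingType text columns; with an identical, full-rank design matrix the two libraries should produce matching coefficients:
```python
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression

# Assumed raw frame with the original categorical columns from the question.
y = df['price']
x_raw = df[['age', 'totalRooms', 'elevator', 'floorLevel', 'buildingType']]

# Drop one level per categorical to avoid the dummy variable trap.
x = pd.get_dummies(x_raw, columns=['floorLevel', 'buildingType'],
                   drop_first=True, dtype=float)

mlr = LinearRegression().fit(x, y)

x_const = sm.add_constant(x)
ols = sm.OLS(y, x_const).fit()

# Both intercepts (and the remaining coefficients) should now agree.
print(mlr.intercept_, ols.params['const'])
```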
2023-03-16 03:53:19
0
python,python-3.x,django,list,django-templates
2
75,752,087
List data print in another for loop in Django template
75,752,026
false
102
{% for i in packages %} {% endfor %} I used this "for loop" in the Django template, in this loop I need to print another list. package_main_prices_list=[340, 180, 170, 95, 500] problem is I need to print one data from the "package_main_prices_list" list in one "Packages" loop in the Django template like this: {% for i in packages %} [340] #one data from "package_main_prices_list" {% endfor %} my views: package_main_prices_list = [] for i in packages: package_price = float(i.startup_payment) calculated_price = round(max(package_price, package_price * last_price / 28000)) package_main_prices_list.append(calculated_price) context = { 'packages': packages, 'package_main_prices_list': package_main_prices_list, } return render(request, 'pages/shop.html', context) The output will be: package name 340 date package name 180 date
0
1
1
Pass the data in the queryset. It will be much easier.
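One common way to read that advice - pairing each package with its computed price in the view so the template loops over a single sequence - is sketched below; the Package model, the shop view name and last_price are assumptions taken from the context of the question's code:
```python
# views.py -- hypothetical view name; Package and last_price come from the question's context.
from django.shortcuts import render

def shop(request):
    packages = Package.objects.all()
    prices = []
    for p in packages:
        package_price = float(p.startup_payment)
        prices.append(round(max(package_price, package_price * last_price / 28000)))

    context = {
        'packages_with_prices': zip(packages, prices),
    }
    return render(request, 'pages/shop.html', context)
```
In the template you would then write {% for package, price in packages_with_prices %} ... {{ price }} ... {% endfor %}, so each package is rendered next to its own price.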
2023-03-16 04:56:29
0
python,tensorflow,keras,tensorflow2.0
2
75,945,873
tf.keras.utils.get_file error: TypeError: '<' not supported between instances of 'int' and 'NoneType'
75,752,307
true
159
everyone, Recently I start to program with tensorflow 2.10.0, have the following codes in my ipynb file (Jupyter Notebook file): if not data_dir.exists(): tf.keras.utils.get_file('free-spoken-digit-dataset-master.zip',origin="https://codeload.github.com/Jakobovski/free-spoken-digit-dataset/zip/refs/heads/master",extract=True,cache_dir='.',cache_subdir='data') I want to download the file free-spoken-digit-dataset-master.zip from the URL https://codeload.github.com/Jakobovski/free-spoken-digit-dataset/zip/refs/heads/master, after running the codes the following error message is displayed: TypeError: '<' not supported between instances of 'int' and 'NoneType' Has anyone faced this issue or similar issue before? Also tried the following codes: tf.keras.utils.get_file(origin="https://github.com/Jakobovski/free-spoken-digit-dataset/archive/v1.0.9.tar.gz",extract=True,cache_dir='.',cache_subdir='data') The same error message was displayed: TypeError: '<' not supported between instances of 'int' and 'NoneType'
1.2
1
1
Thank you deeply, @TFer2. Stack Overflow is a good forum for software developers; people can find an answer to almost every question here. I have already moved on to other projects now. The correct way to load the spoken_digit dataset is: dataset, dataset_info = tfds.load("spoken_digit", split=['train'], as_supervised=True, with_info=True) or dataset = tfds.builder('spoken_digit').as_dataset()['train'].
2023-03-16 15:42:59
0
python,json,csv
2
75,758,582
How do I turn a 2-column csv into a json list of the key/value pairs using python?
75,758,469
false
48
I have a csv like this:
key spanish
no no
yes si
why porque
I want to make it into a json like this: { "no": "no", "yes": "si", "why": "porque" } All the examples I have found online create JSONs kind of like this: { "no": { "key": "no", "Spanish": "no" } } I would like to write a python script that will create the simple key/value pairs in a json file.
0
1
1
Load your csv into a dataframe, then use this: df.set_index('key')['spanish'].to_json('path/to/jsonfile.json') This neatly saves the dataframe in json format.
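Putting the whole thing together, assuming hypothetical file names:
```python
import pandas as pd

# Assumed input file with two columns: key, spanish
df = pd.read_csv('translations.csv')

# Produces one flat object: {"no": "no", "yes": "si", "why": "porque"}
df.set_index('key')['spanish'].to_json('translations.json', force_ascii=False)
```
If you'd rather avoid pandas, csv.reader plus json.dump over the key/value pairs achieves the same result with the standard library.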
2023-03-16 15:51:01
0
python,fastapi,blocking
2
75,758,890
Will using time.sleep(1) in a loop block other routes in my FastAPI server?
75,758,568
false
152
I have a route that sends a number of emails. To avoid rate limits, I am using time.sleep(1) between emails. If I understand correctly, the route will run in its own thread or coroutine and this will not block other requests, but I thought it would be good to confirm this with the community. Here is a code example (simplified to focus on the issue): @router.get("/send_a_bunch_of_emails") def send_a_bunch_of_emails(db: Session = Depends(get_db)): users = get_a_bunch_of_users(db) for user in users: send_email(to=user.email) time.sleep(1) # Maximum of 1 email per second I am just wanting to confirm, that if hypothetically, this sent 10 emails, it wouldn't block FastAPI for 10 seconds. Based on my testing this doesn't appear to be the case, but I'm wary of gotchas.
0
1
1
FastAPI runs on ASGI server implementations, where the A stands for Asynchronous. One can't be asynchronous while freezing on a plain sleep() call. This is very much by design; not exactly FastAPI's, but rather that of the lower-level frameworks it uses. You can test this directly by implementing a /sleep/{int} route, GETting /sleep/666 and trying other endpoints while it "hangs".
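A minimal sketch of that experiment; note that FastAPI treats plain def and async def endpoints differently (plain def routes are run in a threadpool), so it is worth trying both variants when testing whether other routes stay responsive:
```python
import time
import asyncio
from fastapi import FastAPI

app = FastAPI()

@app.get("/sleep_sync/{seconds}")
def sleep_sync(seconds: int):
    # Plain def: FastAPI runs this in a threadpool, so the event loop stays free.
    time.sleep(seconds)
    return {"slept": seconds}

@app.get("/sleep_async/{seconds}")
async def sleep_async(seconds: int):
    # In an async def route, time.sleep would block the loop; await asyncio.sleep does not.
    await asyncio.sleep(seconds)
    return {"slept": seconds}

@app.get("/ping")
def ping():
    return {"ok": True}
```
Run it with uvicorn, GET one of the sleep routes with a large value, and hit /ping from another client to see which variant keeps the server responsive.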
2023-03-16 18:31:24
0
python,pytorch
1
75,760,397
why is there tensor.dtype and tensor.type()?
75,760,325
false
114
Quick question. Why does a tensor have a type and a dtype attribute? t = torch.tensor([1,2,3,4]) print(t.dtype, t.type()) is it possible to change one of them without changing the other? if not why is there this redundancy? Does it serve any purpose?
0
1
1
The Tensor.type(dtype=None) function has the ability to cast the tensor to the dtype given as a parameter. If the dtype parameter is not provided, it just returns the type of the tensor. So, the difference is the added functionality of casting. Also, if the cast is to a new type, the function returns a copy of the tensor.
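A quick illustration of the two attributes side by side:
```python
import torch

t = torch.tensor([1, 2, 3, 4])
print(t.dtype)            # torch.int64
print(t.type())           # 'torch.LongTensor' (no argument: just reports the type)

# With an argument, type() returns a casted copy; the original is unchanged.
t_float = t.type(torch.float32)
print(t_float.dtype)      # torch.float32
print(t.dtype)            # still torch.int64
```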
2023-03-16 19:59:14
0
python,pandas,pandas-to-sql
1
75,769,139
What does a negative return value mean when calling to_sql() in Pandas?
75,761,128
false
98
I'm sending various data frames to Microsoft SQL Server using the Pandas function to_sql() and a mssql+pyodbc:// connection made with sqlalchemy.create_engine. Sometimes to_sql() returns the number of rows written, which is what I expect from the documentation on Returns: Number of rows affected by to_sql. None is returned if the callable passed into method does not return an integer number of rows. The number of returned rows affected is the sum of the rowcount attribute of sqlite3.Cursor or SQLAlchemy connectable which may not reflect the exact number of written rows as stipulated in the sqlite3 or SQLAlchemy. But in some cases I'm seeing it return negative values like -1, 2, -11, -56. If I use method="multi" this behavior goes away. Here I'm writing a table with 325 records: >>> PLSUBMITTALTYPE.to_sql("PLSubmittalType", con=data_lake, if_exists="replace") -1 >>> PLSUBMITTALTYPE.to_sql("PLSubmittalType", con=data_lake, if_exists="replace", method="multi", chunksize = 50) 325 What do those negative values mean? It appears to be succeeding in writing to the database in those cases.
0
1
1
You have a software stack that is not behaving in a completely reliable, predicted, documented way. I recommend changing your approach. Use some combination of CREATE TABLE / DELETE FROM / TRUNCATE to obtain a temp table that has zero rows in it. Often CREATE TABLE ... LIKE is a convenient approach for this. Invoke it with two args: .to_sql(temp_table, con=con). This successfully INSERTs and reports the number of rows without incident. Now submit a transaction to transfer those rows to the table of interest. You have several options: INSERT all rows (perhaps with errors ignored); UPDATE all rows; use a JOIN to segregate new / existing values and do separate INSERT / UPDATE; or some other vendor-specific UPSERT technique. When choosing an option you now have full flexibility to use any technique suggested by your DB vendor or the community, rather than having to funnel things through the narrow cross-vendor API offered by to_sql. This puts you back in the driver's seat, so you can implement the more reliable solution you desire.
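A rough sketch of that staging pattern with SQLAlchemy; PLSUBMITTALTYPE is the dataframe from the question, the staging table name is hypothetical, the connection string is elided, and the INSERT ... SELECT may need adjusting to your actual SQL Server schema:
```python
import sqlalchemy as sa

engine = sa.create_engine("mssql+pyodbc://...")   # connection string elided

with engine.begin() as con:
    # Stage the frame; len(PLSUBMITTALTYPE) is the authoritative row count,
    # not whatever to_sql happens to return.
    PLSUBMITTALTYPE.to_sql("PLSubmittalType_stage", con=con,
                           if_exists="replace", index=False)

    # Transfer rows to the real table in one vendor-SQL statement.
    result = con.execute(sa.text(
        "INSERT INTO PLSubmittalType SELECT * FROM PLSubmittalType_stage"))
    print(result.rowcount, "rows inserted")
```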
2023-03-16 20:59:36
0
python,sql,performance,amazon-sagemaker,duckdb
2
76,272,640
How to speed up processing of very large dataframe in python
75,761,587
false
100
I'm pretty new to working with very large dataframes (~550 million rows and 7 columns). I have raw data in the following format: df = Date|ID|Store|Brand|Category1|Category2|Age This dataframe is over 500 million rows and I need to pass it through a function that will aggregate it at a particular level (brand, category1, or caetgory2) and calculate market basket affinity metrics. Since several temp tables need to be made to get to the final metrics, I am using the pandasql function to do the calculations on the df. I have tried running my code on both my local computer and a large sagemaker instance, but the compute time is extremely long, and often the script does not finish/the kernel crashed. I have tried the following packages to try to speed up the code, but no luck so far: Vaex - I tried recreating the sql calculations in python but this did not seem to be promising at all in terms of speed. Dask - Not really sure if this one applied here but did not help Duckdb - since I am calling sql through python, this one seemed the most promising. It worked well when I took a subset of the data (10 mil rows) but will not finish processing when I try it on 300 mil rows...and I need it to work on 550 mil rows. Does anyone have suggestions on how I can speed things up to work more efficiently? Below is the python function that runs the df through the sql aggregations. ```def mba_calculation(df, tgt_level='CATEGORY_2', aso_level='CATEGORY_2', threshold=1000, anchor=[]): """ tgt_level - string, target level is one of three options: category 1, category 2, brand. Deafult: cat2 aso_level - string, association level is one of three options: category 1, catgeory 2, brand. Default: cat2 anchor - list containing either 0,1, or 2 category1/category2/brand depdending on tgt_level. Default: 0 threshold - co-occurence level of target and associated item; ranges from 1 to the max co-occurence. 
Default: 1000 """ #Case1: no anchor selected(default view) - display pairs if len(anchor) == 0: sql_mba = """ WITH combined AS (SELECT t.{} AS TGT_{}, a.{} AS ASO_{}, COUNT(DISTINCT t.ID) AS RCPTS_BOTH FROM {} t INNER JOIN {} a ON t.ID = a.ID and t.{} <> a.{} GROUP BY 1,2 --set minimum threshold for co-occurence HAVING COUNT(DISTINCT t.ID) >= {} ), target AS (SELECT {} AS TGT_{}, COUNT(DISTINCT ID) AS RCPTS_TGT FROM {} WHERE TGT_{} IN (SELECT DISTINCT(TGT_{}) FROM combined) GROUP BY 1 ), associated AS (SELECT {} AS ASO_{}, COUNT(ID) AS RCPTS_ASO FROM {} WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined) GROUP BY 1 ) SELECT combined.TGT_{}, combined.ASO_{}, RCPTS_BOTH, target.RCPTS_TGT, associated.RCPTS_ASO, RCPTS_ALL --calculate support, confidence, and lift ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL ) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT FROM combined LEFT JOIN target ON combined.TGT_{} = target.TGT_{} LEFT JOIN associated ON combined.ASO_{} = associated.ASO_{} LEFT JOIN (SELECT COUNT(DISTINCT ID) AS RCPTS_ALL FROM {}) ORDER BY MBA_LIFT DESC; """.format(tgt_level,tgt_level, aso_level, aso_level, df, df, tgt_level,aso_level, threshold, tgt_level, tgt_level, df, tgt_level, tgt_level, aso_level, aso_level, df, aso_level, aso_level, tgt_level, aso_level, tgt_level, tgt_level, aso_level,aso_level, df) mba_df = pysqldf(sql_mba) #print(mba_df.shape) #display(mba_df.head(50)) #Case2: 1 anchor selected - display pairs elif len(anchor) == 1: anchor_item = anchor[0] #need to make anchors be this format '%ORANGE JUICE%' sql_mba = """ WITH combined AS (SELECT t.{} AS TGT_{}, a.{} AS ASO_{}, COUNT(DISTINCT t.ID) AS RCPTS_BOTH FROM df t INNER JOIN df a ON t.ID = a.ID and t.{} <> a.{} --filter tgt to anchor WHERE UPPER(t.{}) LIKE '%{}%' GROUP BY 1,2 --set minimum threshold for co-occurence HAVING COUNT(DISTINCT t.ID) >= {} ), target AS (SELECT {} AS TGT_{}, COUNT(DISTINCT ID) AS RCPTS_TGT FROM df WHERE TGT_{} IN (SELECT DISTINCT(TGT_{}) FROM combined) GROUP BY 1 ), associated AS (SELECT {} AS ASO_{}, COUNT(DISTINCT ID) AS RCPTS_ASO FROM df WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined) GROUP BY 1 ) SELECT combined.TGT_{}, combined.ASO_{}, RCPTS_BOTH, target.RCPTS_TGT, associated.RCPTS_ASO, RCPTS_ALL --calculate support, confidence, and lift ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT FROM combined LEFT JOIN target ON combined.TGT_{} = target.TGT_{} LEFT JOIN associated ON combined.ASO_{} = associated.ASO_{} LEFT JOIN (SELECT COUNT(DISTINCT _ID) AS RCPTS_ALL FROM df) ORDER BY MBA_LIFT DESC """.format(tgt_level,tgt_level, aso_level, aso_level, tgt_level, aso_level, tgt_level, anchor_item, threshold, tgt_level, tgt_level, tgt_level, tgt_level, aso_level, aso_level, aso_level, aso_level, tgt_level, aso_level, tgt_level, tgt_level, aso_level,aso_level) mba_df = pysqldf(sql_mba) #Case3: 2 anchors selected - display trios elif len(anchor) == 2: anchor_item1 = anchor[0] anchor_item2 = anchor[1] #need to make anchors 
be this format '%ORANGE JUICE%' sql_mba = """ WITH combined AS (SELECT t1.{} AS TGT1_{}, t2.{} AS TGT2_{}, a.{} AS ASO_{}, COUNT(DISTINCT t1.ID) AS RCPTS_BOTH FROM df t1 INNER JOIN df t2 ON t1.ID = t2.ID AND t1.{} <> t2.{} INNER JOIN df a ON t1.ID = a.ID AND t2.ID = a.ID AND t1.{} <> a.{} AND t2.{} <> a.{} --filter to anchors WHERE ( (UPPER(TGT1_{}) LIKE '%{}%' OR UPPER(TGT1_{}) LIKE '%{}%') AND (UPPER(TGT2_{}) LIKE '%{}%' OR UPPER(TGT2_{}) LIKE '%{}%') ) GROUP BY 1,2,3 --set minimum threshold for co-occurence HAVING COUNT(DISTINCT t1.ID) > {} ), target AS (SELECT tgt1.{} AS TGT1_{}, tgt2.{} AS TGT2_{}, COUNT(DISTINCT tgt1.ID) AS RCPTS_TGT FROM df tgt1 INNER JOIN df tgt2 ON tgt1.ID = tgt2.RID AND tgt1.{} <> tgt2.{} WHERE TGT1_{} IN (SELECT DISTINCT(TGT1_{}) FROM combined) AND TGT2_{} IN (SELECT DISTINCT(TGT2_{}) FROM combined) AND --filter to anchors ( (UPPER(TGT1_{}) LIKE '%{}%' OR UPPER(TGT1_{}) LIKE '%{}%') AND (UPPER(TGT2_{}) LIKE '%{}%' OR UPPER(TGT2_{}) LIKE '%{}%') ) GROUP BY 1,2 ), associated AS (SELECT {} AS ASO_{}, COUNT(DISTINCT ID) AS RCPTS_ASO FROM df WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined) GROUP BY 1 ) SELECT combined.TGT1_{}, combined.TGT2_{},combined.ASO_{}, RCPTS_BOTH, target.RCPTS_TGT, associated.RCPTS_ASO, RCPTS_ALL --calculate support, confidence, and lift ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL ) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT FROM combined LEFT JOIN target ON combined.TGT1_{} = target.TGT1_{} AND combined.TGT2_{} = target.TGT2_{} LEFT JOIN associated ON combined.ASO_{} = associated.ASO_{} LEFT JOIN (SELECT COUNT(DISTINCT ID) AS RCPTS_ALL FROM df) ORDER BY MBA_LIFT DESC; """.format(tgt_level, tgt_level, tgt_level, tgt_level, aso_level, aso_level, tgt_level, tgt_level, tgt_level, aso_level, tgt_level, aso_level, tgt_level, anchor_item1, tgt_level, anchor_item2, tgt_level, anchor_item1, tgt_level, anchor_item2, threshold, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, anchor_item1, tgt_level,anchor_item2, tgt_level, anchor_item1, tgt_level, anchor_item2, aso_level, aso_level, aso_level, aso_level, tgt_level, tgt_level, aso_level, tgt_level, tgt_level, tgt_level, tgt_level, aso_level,aso_level) mba_df = pysqldf(sql_mba) return mba_df ```
0
1
2
My preferred tool for out-of-core aggregations of very large datasets is Vaex, but you would need to write your datasets to uncompressed hdf5 file(s). Polars is also pretty good. However, as you already have your code in SQL and a rewrite is probably painful, you may be able to use DuckDB if you optimise your datatypes. If you can get away with float32s or uint8s, for example, you may be able to reduce the size of the dataset, and this may be enough to get DuckDB to run on 550 million rows. Also, if any of your columns hold text, could you convert them into a category ID integer?
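A small sketch of the dtype optimisation idea in pandas before handing the frame to DuckDB; df and the column names are the ones listed in the question:
```python
import pandas as pd

# Shrink numeric columns and turn repeated text values into integer-coded categories.
df['Age'] = pd.to_numeric(df['Age'], downcast='unsigned')
for col in ['Store', 'Brand', 'Category1', 'Category2']:
    df[col] = df[col].astype('category')

# Check how much memory the frame needs after downcasting.
print(df.memory_usage(deep=True).sum() / 1e9, "GB")
```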
2023-03-16 20:59:36
1
python,sql,performance,amazon-sagemaker,duckdb
2
75,761,641
How to speed up processing of very large dataframe in python
75,761,587
false
100
I'm pretty new to working with very large dataframes (~550 million rows and 7 columns). I have raw data in the following format: df = Date|ID|Store|Brand|Category1|Category2|Age This dataframe is over 500 million rows and I need to pass it through a function that will aggregate it at a particular level (brand, category1, or caetgory2) and calculate market basket affinity metrics. Since several temp tables need to be made to get to the final metrics, I am using the pandasql function to do the calculations on the df. I have tried running my code on both my local computer and a large sagemaker instance, but the compute time is extremely long, and often the script does not finish/the kernel crashed. I have tried the following packages to try to speed up the code, but no luck so far: Vaex - I tried recreating the sql calculations in python but this did not seem to be promising at all in terms of speed. Dask - Not really sure if this one applied here but did not help Duckdb - since I am calling sql through python, this one seemed the most promising. It worked well when I took a subset of the data (10 mil rows) but will not finish processing when I try it on 300 mil rows...and I need it to work on 550 mil rows. Does anyone have suggestions on how I can speed things up to work more efficiently? Below is the python function that runs the df through the sql aggregations. ```def mba_calculation(df, tgt_level='CATEGORY_2', aso_level='CATEGORY_2', threshold=1000, anchor=[]): """ tgt_level - string, target level is one of three options: category 1, category 2, brand. Deafult: cat2 aso_level - string, association level is one of three options: category 1, catgeory 2, brand. Default: cat2 anchor - list containing either 0,1, or 2 category1/category2/brand depdending on tgt_level. Default: 0 threshold - co-occurence level of target and associated item; ranges from 1 to the max co-occurence. 
Default: 1000 """ #Case1: no anchor selected(default view) - display pairs if len(anchor) == 0: sql_mba = """ WITH combined AS (SELECT t.{} AS TGT_{}, a.{} AS ASO_{}, COUNT(DISTINCT t.ID) AS RCPTS_BOTH FROM {} t INNER JOIN {} a ON t.ID = a.ID and t.{} <> a.{} GROUP BY 1,2 --set minimum threshold for co-occurence HAVING COUNT(DISTINCT t.ID) >= {} ), target AS (SELECT {} AS TGT_{}, COUNT(DISTINCT ID) AS RCPTS_TGT FROM {} WHERE TGT_{} IN (SELECT DISTINCT(TGT_{}) FROM combined) GROUP BY 1 ), associated AS (SELECT {} AS ASO_{}, COUNT(ID) AS RCPTS_ASO FROM {} WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined) GROUP BY 1 ) SELECT combined.TGT_{}, combined.ASO_{}, RCPTS_BOTH, target.RCPTS_TGT, associated.RCPTS_ASO, RCPTS_ALL --calculate support, confidence, and lift ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL ) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT FROM combined LEFT JOIN target ON combined.TGT_{} = target.TGT_{} LEFT JOIN associated ON combined.ASO_{} = associated.ASO_{} LEFT JOIN (SELECT COUNT(DISTINCT ID) AS RCPTS_ALL FROM {}) ORDER BY MBA_LIFT DESC; """.format(tgt_level,tgt_level, aso_level, aso_level, df, df, tgt_level,aso_level, threshold, tgt_level, tgt_level, df, tgt_level, tgt_level, aso_level, aso_level, df, aso_level, aso_level, tgt_level, aso_level, tgt_level, tgt_level, aso_level,aso_level, df) mba_df = pysqldf(sql_mba) #print(mba_df.shape) #display(mba_df.head(50)) #Case2: 1 anchor selected - display pairs elif len(anchor) == 1: anchor_item = anchor[0] #need to make anchors be this format '%ORANGE JUICE%' sql_mba = """ WITH combined AS (SELECT t.{} AS TGT_{}, a.{} AS ASO_{}, COUNT(DISTINCT t.ID) AS RCPTS_BOTH FROM df t INNER JOIN df a ON t.ID = a.ID and t.{} <> a.{} --filter tgt to anchor WHERE UPPER(t.{}) LIKE '%{}%' GROUP BY 1,2 --set minimum threshold for co-occurence HAVING COUNT(DISTINCT t.ID) >= {} ), target AS (SELECT {} AS TGT_{}, COUNT(DISTINCT ID) AS RCPTS_TGT FROM df WHERE TGT_{} IN (SELECT DISTINCT(TGT_{}) FROM combined) GROUP BY 1 ), associated AS (SELECT {} AS ASO_{}, COUNT(DISTINCT ID) AS RCPTS_ASO FROM df WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined) GROUP BY 1 ) SELECT combined.TGT_{}, combined.ASO_{}, RCPTS_BOTH, target.RCPTS_TGT, associated.RCPTS_ASO, RCPTS_ALL --calculate support, confidence, and lift ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT FROM combined LEFT JOIN target ON combined.TGT_{} = target.TGT_{} LEFT JOIN associated ON combined.ASO_{} = associated.ASO_{} LEFT JOIN (SELECT COUNT(DISTINCT _ID) AS RCPTS_ALL FROM df) ORDER BY MBA_LIFT DESC """.format(tgt_level,tgt_level, aso_level, aso_level, tgt_level, aso_level, tgt_level, anchor_item, threshold, tgt_level, tgt_level, tgt_level, tgt_level, aso_level, aso_level, aso_level, aso_level, tgt_level, aso_level, tgt_level, tgt_level, aso_level,aso_level) mba_df = pysqldf(sql_mba) #Case3: 2 anchors selected - display trios elif len(anchor) == 2: anchor_item1 = anchor[0] anchor_item2 = anchor[1] #need to make anchors 
be this format '%ORANGE JUICE%' sql_mba = """ WITH combined AS (SELECT t1.{} AS TGT1_{}, t2.{} AS TGT2_{}, a.{} AS ASO_{}, COUNT(DISTINCT t1.ID) AS RCPTS_BOTH FROM df t1 INNER JOIN df t2 ON t1.ID = t2.ID AND t1.{} <> t2.{} INNER JOIN df a ON t1.ID = a.ID AND t2.ID = a.ID AND t1.{} <> a.{} AND t2.{} <> a.{} --filter to anchors WHERE ( (UPPER(TGT1_{}) LIKE '%{}%' OR UPPER(TGT1_{}) LIKE '%{}%') AND (UPPER(TGT2_{}) LIKE '%{}%' OR UPPER(TGT2_{}) LIKE '%{}%') ) GROUP BY 1,2,3 --set minimum threshold for co-occurence HAVING COUNT(DISTINCT t1.ID) > {} ), target AS (SELECT tgt1.{} AS TGT1_{}, tgt2.{} AS TGT2_{}, COUNT(DISTINCT tgt1.ID) AS RCPTS_TGT FROM df tgt1 INNER JOIN df tgt2 ON tgt1.ID = tgt2.RID AND tgt1.{} <> tgt2.{} WHERE TGT1_{} IN (SELECT DISTINCT(TGT1_{}) FROM combined) AND TGT2_{} IN (SELECT DISTINCT(TGT2_{}) FROM combined) AND --filter to anchors ( (UPPER(TGT1_{}) LIKE '%{}%' OR UPPER(TGT1_{}) LIKE '%{}%') AND (UPPER(TGT2_{}) LIKE '%{}%' OR UPPER(TGT2_{}) LIKE '%{}%') ) GROUP BY 1,2 ), associated AS (SELECT {} AS ASO_{}, COUNT(DISTINCT ID) AS RCPTS_ASO FROM df WHERE ASO_{} IN (SELECT DISTINCT(ASO_{}) FROM combined) GROUP BY 1 ) SELECT combined.TGT1_{}, combined.TGT2_{},combined.ASO_{}, RCPTS_BOTH, target.RCPTS_TGT, associated.RCPTS_ASO, RCPTS_ALL --calculate support, confidence, and lift ,CASE WHEN RCPTS_ALL = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_ALL END AS MBA_SUPPORT ,CASE WHEN RCPTS_TGT = 0 THEN 0 ELSE (RCPTS_BOTH*1.0) / RCPTS_TGT END AS MBA_CONFIDENCE ,CASE WHEN RCPTS_ALL = 0 OR RCPTS_TGT = 0 OR RCPTS_ASO = 0 THEN 0 ELSE ((RCPTS_BOTH*1.0) / RCPTS_ALL ) / ( ((RCPTS_TGT*1.0) / RCPTS_ALL) * ((RCPTS_ASO*1.0) / RCPTS_ALL) ) END AS MBA_LIFT FROM combined LEFT JOIN target ON combined.TGT1_{} = target.TGT1_{} AND combined.TGT2_{} = target.TGT2_{} LEFT JOIN associated ON combined.ASO_{} = associated.ASO_{} LEFT JOIN (SELECT COUNT(DISTINCT ID) AS RCPTS_ALL FROM df) ORDER BY MBA_LIFT DESC; """.format(tgt_level, tgt_level, tgt_level, tgt_level, aso_level, aso_level, tgt_level, tgt_level, tgt_level, aso_level, tgt_level, aso_level, tgt_level, anchor_item1, tgt_level, anchor_item2, tgt_level, anchor_item1, tgt_level, anchor_item2, threshold, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, tgt_level, anchor_item1, tgt_level,anchor_item2, tgt_level, anchor_item1, tgt_level, anchor_item2, aso_level, aso_level, aso_level, aso_level, tgt_level, tgt_level, aso_level, tgt_level, tgt_level, tgt_level, tgt_level, aso_level,aso_level) mba_df = pysqldf(sql_mba) return mba_df ```
0.099668
1
2
To conserve memory, prefer importing polars over the pandas library. If your records still do not fit in memory, use external storage. The to_sql function makes it very easy to send your rows to postgres, sqlite, or a similar relational database. Then you can use an on-disk data structure, an index, to make JOINs go quickly.
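A minimal sketch of the external-storage route using sqlite; df is the question's dataframe and the database and table names are hypothetical:
```python
import sqlite3

con = sqlite3.connect("transactions.db")          # on-disk store

# Stream the frame to disk in chunks instead of keeping it all in RAM.
df.to_sql("transactions", con, if_exists="replace", index=False, chunksize=100_000)

# An index makes the self-joins in the MBA queries usable.
con.execute("CREATE INDEX IF NOT EXISTS idx_transactions_id ON transactions (ID)")
con.commit()
```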
2023-03-17 01:23:46
1
python,numpy,scipy,windows-arm64
3
75,774,481
How to install SciPy and Numpy on Windows on ARM64
75,762,998
false
354
I need numpy and scipy to perform some signal analysis. Has anyone succeeded in doing this? (I am interested in running it natively, not via virtualenv). My ultimate goal is to build an exe from the python script that uses numpy and scipy that can be run in WinPE for tests. I have successfully installed python 3.11.2 and am able to get to numpy installation which also fails at this. INFO: unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src INFO: build_src INFO: building py_modules sources creating build creating build\src.win-arm64-3.11 creating build\src.win-arm64-3.11\numpy creating build\src.win-arm64-3.11\numpy\distutils INFO: building library "npymath" sources error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy Failed to build numpy
0.066568
3
2
This will be an interesting exercise. If you have experience in building software go straight ahead. If you're new to building packages, then you might need a different way. To build windows+ARM native packages of numpy and scipy: Not only will you need a C/C++ compiler, you will also need a Fortran compiler. I am not sure if gfortran is available for WinARM (that's what we typically use to build scipy wheels). ifort is another compiler to look into. scipy wheels are made with gcc/g++/gfortran. You will need to download + build a BLAS library from source. The scipy/numpy projects typically use OpenBLAS. You will need to build + install packages for all the build time dependencies. i.e. Cython, pybind11, (pythran), meson, meson-python, ninja. Some of those may have dependencies of their own. A couple of those have OS/arch independent wheels, but not all. Once you've built all of those you'll build numpy, then scipy. This is a large task, and even then some of the packages may have bugs that are exposed when you try and do Windows + ARM. Can you use an emulated python environment? e.g. does Windows+ARM permit x86_64 python interpreters to run? Alternatively could you use WSL to run a Python interpreter?
2023-03-17 01:23:46
0
python,numpy,scipy,windows-arm64
3
75,769,920
How to install SciPy and Numpy on Windows on ARM64
75,762,998
false
354
I need numpy and scipy to perform some signal analysis. Has anyone succeeded in doing this? (I am interested in running it natively, not via virtualenv). My ultimate goal is to build an exe from the python script that uses numpy and scipy that can be run in WinPE for tests. I have successfully installed python 3.11.2 and am able to get to numpy installation which also fails at this. INFO: unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src INFO: build_src INFO: building py_modules sources creating build creating build\src.win-arm64-3.11 creating build\src.win-arm64-3.11\numpy creating build\src.win-arm64-3.11\numpy\distutils INFO: building library "npymath" sources error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy Failed to build numpy
0
3
2
Numpy and Scipy don't provide pre-built binary wheels for Windows Arm64 yet, so they need to be built from source during installation. You will need to install Visual Studio 2019 with the C/C++ toolchains.
2023-03-17 01:29:33
1
python-3.x,unicode,cpython
2
75,763,478
Are they equally stored in memory in python3?
75,763,030
false
75
As we know that Python3 takes all string characters as Unicode code point. type('\x0d') <class 'str'> type(b'\x0d') <class 'bytes'> The ascii of b'\x0d' is 13,stored in memory in the form of 0000 0111,'\x0d' is stored in the same format of 0000 0111 or not?Are they equally stored in memory? To dig more to make me more confused: #My python version python3 --version Python 3.9.2 #in python cli len(b'\x0d') 1 import sys print(sys.getsizeof(b'\x0d')) 34 b\x0d is not stored in the form of 00000111 in memory? print(sys.getsizeof('\x0d')) 50 From using sys.getsizeof make me understand that: string and bytes are stored with different objects in python3. When we say that b\x0d is stored in the form of 00000111 in memory,it is based on some abstract level,in fact b\x0d is stored with 34 bytes in pc's memory for cython3?
0.099668
2
1
CPython currently encodes (text) strings using the minimum of 8, 16 or 32 bits per character (in LATIN-1, UTF-16(?), UTF-32) needed to fit all characters contained in the string. So, if all of your characters fit in the LATIN-1 encoding - which features a 1:1 match with Unicode code points < 256 - it will use only one byte per character internally. I've seen somewhere (sorry, it was a debate in comments on a question here, but I can't find it now) that there are future plans to keep text strings utf-8 encoded in memory, to avoid a large overhead in the great majority of text-related workflows, where a utf-8 byte string is decoded to the internal representation only to be used very little. PyPy currently uses that approach.
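You can see the three internal widths directly with sys.getsizeof; the exact numbers include a fixed per-object overhead and vary a little between CPython builds:
```python
import sys

ascii_s = "a" * 1000        # all code points < 256   -> ~1 byte per character
bmp_s   = "ф" * 1000        # code points < 65536     -> ~2 bytes per character
wide_s  = "𝄞" * 1000        # code points >= 65536    -> ~4 bytes per character

for s in (ascii_s, bmp_s, wide_s):
    print(len(s), sys.getsizeof(s))   # roughly 1049, 2074, 4076 on CPython 3.9
```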
2023-03-17 02:40:34
0
python-packaging,python-poetry
1
76,446,807
Poetry import package in test
75,763,328
false
154
I have the following project structure: / pyproject.toml poetry.lock tests/ test.py src/ my_pkg/ __init__.py ... test.py import my_pkg However, I get the error, ModuleNotFoundError: No module named 'my_pkg'. How do I properly import my_pkg into test.py?
0
1
1
My solution was odd. Exit the poetry shell using exit. Create a README.md (touch README.md) on the main package dir 🤷 poetry install (should write: "Installing the current project:") poetry shell && pytest.
2023-03-17 03:17:18
0
python,apache-spark,databricks,transformer-model,mlflow
1
76,177,239
Can't load pyspark.ml.Pipeline model with custom transformer in spark ml pipeline because of missing positional argument for class
75,763,468
false
175
I have a custom transformer python class that I use with a spark mllib pipeline. I want to save the model and load it in another session with spark. I'm able to log the model, but I'm not able to load it after because the confidence transformer requires the labels and it is missing the positional argument. I am using pyspark==3.3.0. from pyspark.ml import Transformer, PipelineModel from pyspark.ml.param import Param, Params from pyspark.ml.util import DefaultParamsWriter, DefaultParamsReader, MLReadable, MLWritable, MLWriter from pyspark.sql.functions import lit from pyspark.ml.feature import StringIndexer from pyspark.ml.classification import LogisticRegression class Confidence(Transformer, DefaultParamsReadable, DefaultParamsWritable): """ A custom Transformer which does some cleanup of the output of the model and creates a column a confidence metric based on a T distribution. """ labels = Param( Params._dummy(), "labels", "Count of labels for degrees of freedom", typeConverter=TypeConverters.toInt) def __init__(self, labels: int): super(Confidence, self).__init__() self._setDefault(labels=labels) def getLabels(self): return self.getOrDefault(self.labels) def setLabels(self, value): self._set(labels=value) def _transform(self, df): return df.withColumn("labelCount", lit(self.getLabels())) # String Indexer to convert feature column stringIndexer = StringIndexer(inputCol = "feature", outputCol = "label").fit(train) # Fit model lr = LogisticRegression() # Get count of labels from string indexer to pass to confidence labelCount = len(stringIndexer.labels) confidence = Confidence(labels = labelCount) # Create pipeline and fit model pipeline = Pipeline().setStages([stringIndexer, lr, confidence]) pipeline_model = pipeline.fit(train_df) basePath = "/tmp/mllib-persistence-example" pipeline_model.write().overwrite().save(basePath + "/model") model_loaded = Pipeline.load(basePath + "/model") I get this error message: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <command-1923614380916072> in <cell line: 1>() ----> 1 model_loaded = Pipeline.load(basePath + "/model") /databricks/spark/python/pyspark/ml/util.py in load(cls, path) 444 def load(cls, path: str) -> RL: 445 """Reads an ML instance from the input path, a shortcut of `read().load(path)`.""" --> 446 return cls.read().load(path) 447 448 /databricks/spark/python/pyspark/ml/pipeline.py in load(self, path) 247 return JavaMLReader(cast(Type["JavaMLReadable[Pipeline]"], self.cls)).load(path) 248 else: --> 249 uid, stages = PipelineSharedReadWrite.load(metadata, self.sc, path) 250 return Pipeline(stages=stages)._resetUid(uid) 251 /databricks/spark/python/pyspark/ml/pipeline.py in load(metadata, sc, path) 437 stageUid, index, len(stageUids), stagesDir 438 ) --> 439 stage: "PipelineStage" = DefaultParamsReader.loadParamsInstance(stagePath, sc) 440 stages.append(stage) 441 return (metadata["uid"], stages) /databricks/spark/python/pyspark/ml/util.py in loadParamsInstance(path, sc) 727 pythonClassName = metadata["class"].replace("org.apache.spark", "pyspark") 728 py_type: Type[RL] = DefaultParamsReader.__get_class(pythonClassName) --> 729 instance = py_type.load(path) 730 return instance 731 /databricks/spark/python/pyspark/ml/util.py in load(cls, path) 444 def load(cls, path: str) -> RL: 445 """Reads an ML instance from the input path, a shortcut of `read().load(path)`.""" --> 446 return cls.read().load(path) 447 448 /databricks/spark/python/pyspark/ml/util.py in load(self, path) 
638 metadata = DefaultParamsReader.loadMetadata(path, self.sc) 639 py_type: Type[RL] = DefaultParamsReader.__get_class(metadata["class"]) --> 640 instance = py_type() 641 cast("Params", instance)._resetUid(metadata["uid"]) 642 DefaultParamsReader.getAndSetParams(instance, metadata) TypeError: __init__() missing 1 required positional argument: 'labels'
0
1
1
If you're trying to load a fitted pipeline, shouldn't it be: model_loaded = PipelineModel.load(basePath + "/model")?
2023-03-17 08:09:30
8
python,jupyter-notebook,ethereum,web3py
2
75,781,513
'Web3' object has no attribute 'isConnected'
75,765,117
false
1,525
I am trying to connect to the Ethereum blockchain with Web3. When I install web3 using jupyter notebook I keep receiving the error that Web3 has no attribute. Can someone please advise on how to get connected to the Ethereum network? MY CODE: pip install web3 from web3 import Web3, EthereumTesterProvider w3 = Web3(EthereumTesterProvider()) w3.isConnected() ERROR: AttributeError Traceback (most recent call last) Input In [29], in <cell line: 3>() 1 from web3 import EthereumTesterProvider 2 w3 = Web3(EthereumTesterProvider()) ----> 3 w3.isConnected() AttributeError: 'Web3' object has no attribute 'isConnected' I have tried both web3 and capital Web3 and still receive the same error. I have also tried w3 = Web3(Web3.EthereumTesterProvider()) but same issues.
1
3
2
Please try is_connected() instead of isConnected().
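For example (assuming eth-tester is installed so the EthereumTesterProvider works):
```python
from web3 import Web3, EthereumTesterProvider

w3 = Web3(EthereumTesterProvider())
print(w3.is_connected())   # snake_case name used by web3.py v6+; returns True/False
```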
2023-03-17 08:09:30
0
python,jupyter-notebook,ethereum,web3py
2
76,291,739
'Web3' object has no attribute 'isConnected'
75,765,117
false
1,525
I am trying to connect to the Ethereum blockchain with Web3. When I install web3 using jupyter notebook I keep receiving the error that Web3 has no attribute. Can someone please advise on how to get connected to the Ethereum network? MY CODE: pip install web3 from web3 import Web3, EthereumTesterProvider w3 = Web3(EthereumTesterProvider()) w3.isConnected() ERROR: AttributeError Traceback (most recent call last) Input In [29], in <cell line: 3>() 1 from web3 import EthereumTesterProvider 2 w3 = Web3(EthereumTesterProvider()) ----> 3 w3.isConnected() AttributeError: 'Web3' object has no attribute 'isConnected' I have tried both web3 and capital Web3 and still receive the same error. I have also tried w3 = Web3(Web3.EthereumTesterProvider()) but same issues.
0
3
2
Web3.py has started using snake_case instead of camelCase, so try is_connected().
2023-03-17 16:08:51
0
python,matplotlib
1
75,788,130
How to prevent matplotlib to be shown in popup window?
75,769,820
true
91
I use matplotlib in my lib to display legend on a ipyleaflet map. In my CD/CI tests I run several checks on this legend (values displayed, colors etc...). My problem is when it's run on my local computer, matplotlib open a legend popup windows that stops the execution of the tests. Is it possible to force matplotlib to remain non-interactive when I run my pytest session ?
1.2
1
1
You can change the matplotlib backend to a non-graphical one by calling matplotlib.use('Agg') at the beginning of your scripts. This will prevent matplotlib from opening windows.
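For example, at the top of the test module or in a pytest conftest.py:
```python
import matplotlib
matplotlib.use("Agg")            # select the non-interactive backend before pyplot loads
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], label="demo")
ax.legend()
fig.savefig("legend_test.png")   # figures still render, just to files instead of windows
```
Setting the MPLBACKEND=Agg environment variable in your CI job achieves the same thing without touching the code.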
2023-03-17 19:20:14
0
python,bazel,bazel-python
1
75,831,970
One entry point in WORKSPACE for managing various python project requirements
75,771,422
false
224
My Bazel project structure is like this ├── BUILD ├── WORKSPACE ├── example.py ├── reqs.bzl └── requirements.txt I have created my WORKSPACE like this load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "rules_python", sha256 = "a30abdfc7126d497a7698c29c46ea9901c6392d6ed315171a6df5ce433aa4502", strip_prefix = "rules_python-0.6.0", url = "https://github.com/bazelbuild/rules_python/archive/0.6.0.tar.gz", ) load("reqs.bzl","load_reqs") load_reqs() I did that as I want to have one entry point in WORKSPACE from where I can manage dependencies for various python based projects. my reqs.bzl looks like this load("@rules_python//python:pip.bzl", "pip_parse") load("@rules_python//python:pip.bzl", "compile_pip_requirements") def load_reqs(): compile_pip_requirements( name = "pyreqs", requirements_in = "//:requirements.txt", visibility = ["/visbibility:public"] ) My BUILD file looks like load("@pyreqs//:requirements.bzl", "requirement") py_binary( name="example", srcs=["example.py"], deps=[ requirement("absl-py"), requirement("numpy"), requirement("scipy"), requirement("matplotlib") ], visibility=["//visibility:public"] ) This above setup is obviously failing to build my example. I tried compile_pip_requirement as I couldnt pip_parse, load and invoke install_deps() from the load_reqs() function of reqs.bzl Appreciate any input from the community on how this has to be done correctly. PS: Still climbing that Bazel learning curve and trying to figure out best practices. The primary motivation behind above project setup is, I have to add lot of python targets and I don't want to have a large WORKSPACE file and frequently make edits to it. requirements.txt absl-py==1.4.0 cycler==0.11.0 kiwisolver==1.4.4 numpy==1.24.2 pillow matplotlib==3.3.4 python-dateutil==2.8.2 pyparsing==3.0.9 pytz==2022.7.1 scipy==1.10.1 six==1.16.0 requests bazel build //:example gives error ERROR: Traceback (most recent call last): File "/home/ubuntu/bazel-tests/python-ex-2/WORKSPACE", line 11, column 10, in <toplevel> load_reqs() File "/home/ubuntu/bazel-tests/python-ex-2/reqs.bzl", line 5, column 29, in load_reqs compile_pip_requirements( File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/7d424a81034067ed3c4b3321d48bbdb4/external/rules_python/python/pip_install/requirements.bzl", line 40, column 21, in compile_pip_requirements native.filegroup( Error in filegroup: filegroup cannot be in the WORKSPACE file (used by //external:pyreqs) ERROR: Error computing the main repository mapping: error loading package 'external': Package 'external' contains errors
0
1
1
compile_pip_requirements is a rule that should be used in BUILD files, not in the WORKSPACE file. It seems there is no way to load a .bzl file other than load(). new_local_repository doesn't work because it won't evaluate the WORKSPACE in the external workspace. If you can accept a single load() and install_deps, maybe you can write a repository rule that merges all requirements.txt files into one file, then call pip_parse, load() and install_deps once in your workspace root. Otherwise, you can only figure out how pip_parse works and write your own.