Column           Type      Min / shortest     Max / longest
A_Id             int64     518                76.6M
AnswerCount      int64     1                  64
Question         string    15 chars           29.1k chars
Available Count  int64     1                  31
Tags             string    6 chars            105 chars
Q_Id             int64     337                75M
Answer           string    6 chars            11.6k chars
is_accepted      bool      2 classes
CreationDate     string    19 chars           23 chars
ViewCount        int64     6                  6.81M
Users Score      int64     -42                1.15k
Score            float64   -1                 1.2
Title            string    11 chars           150 chars
Q_Score          int64     0                  6.79k
74,944,132
2
How to loop through a directory in Python and open wave files that are good whilst ignoring bad (corrupted) ones? I want to open various wave files from a directory. However, some of these files may be corrupted, some may not be to specification. In particular there will be files in that directory which when trying to open them will raise the error: wave.Error: file does not start with RIFF id I want to ignore those files. I want to catch the error and continue with the loop. How can this be done? My code: for file_path in files: sig=0 file = str(file_path) sig, wave_params = DataGenerator.open_wave(file) if sig == 0: print( 'WARNING: Could not open wave file during data creation: ' + file) continue if wave_params[0] != 1: print("WARNING: Wrong NUMBER OF CHANNELS in " + file) txt.write( "WARNING: Wrong NUMBER OF CHANNELS in " + file + "\n") continue if wave_params[1] != 2: print("WARNING: Wrong SAMPLE WIDTH in " + file) txt.write("WARNING: Wrong SAMPLE WIDTH in " + file + "\n") continue if wave_params[2] != RATE: print("WARNING: Wrong FRAME RATE in " + file) txt.write("WARNING: Wrong FRAME RATE in " + file + "\n") continue if wave_params[3] != SAMPLES: print("WARNING: Wrong NUMBER OF SAMPLES in " + file) txt.write( "WARNING: Wrong NUMBER OF SAMPLES in " + file + "\n") continue if wave_params[4] != 'NONE': print("WARNING: Wrong comptype: " + file) txt.write("WARNING: Wrong comptype: " + file + "\n") continue if wave_params[5] != 'not compressed': print("WARNING: File appears to be compressed " + file) txt.write( "WARNING: File appears to be compressed " + file + "\n") continue if bit_depth != (wave_params[2] * (2**4) * wave_params[1]): print("WARNING: Wring bit depth in " + file) txt.write("WARNING: Wring bit depth in " + file + "\n") continue if isinstance(sig, int): print("WARNING: No signal in " + file) txt.write("WARNING: No signal in " + file + "\n") continue My code for opening the wave file: def open_wave(sound_file): """ Open wave file Links: https://stackoverflow.com/questions/16778878/python-write-a-wav-file-into-numpy-float-array https://stackoverflow.com/questions/2060628/reading-wav-files-in-python """ if Path(sound_file).is_file(): sig = 0 with wave.open(sound_file, 'rb') as f: n_channels = f.getnchannels() samp_width = f.getsampwidth() frame_rate = f.getframerate() num_frames = f.getnframes() wav_params = f.getparams() snd = f.readframes(num_frames) audio_as_np_int16 = np.frombuffer(snd, dtype=np.int16) sig = audio_as_np_int16.astype(np.float32) return sig, wav_params else: print('ERROR: File ' + sound_file + ' does not exist. BAD.') print("Problem with openng wave file") exit(1) The missing lines which scale the output of the wave file correctly is done on purpose. I am interested in how to catch the error mentioned above. A tipp of how to open wave files defensively would be nice, too. That is how can I simply ignore wave files that throw errors?
1
python,error-handling,wave,defensive-programming,riff
74,944,110
You could make use of a try/except block, where you 'try' opening the file and catch the potential exception. In the except branch you can simply 'pass' (or continue with the next file).
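A minimal sketch of that idea, reusing the files list and the DataGenerator.open_wave helper from the question (EOFError is included because the wave module can also raise it on truncated files):

```python
import wave

for file_path in files:
    file = str(file_path)
    try:
        sig, wave_params = DataGenerator.open_wave(file)
    except (wave.Error, EOFError) as err:
        # corrupted / non-RIFF file: report it and move on to the next one
        print('WARNING: skipping ' + file + ': ' + str(err))
        continue
    # ... the existing parameter checks from the question go here ...
```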
false
2022-12-28 19:09:28
152
0
0
How to ignore corrupted files?
1
74,946,457
1
I'm trying to run a very basic locust load testing which did work previously. from locust import HttpUser, between, task class QuickstartUser(HttpUser): wait_time = between(1, 5) @task def get_status(self): self.client.get("/status/") Running the following command: locust -f <package-name>/tests/load_tests.py -r 20 -u 400 -H http://localhost:8000 yields the following error message when trying to access the web interface: [2022-12-28 23:23:30,962] MacBook-Pro.fritz.box/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces) [2022-12-28 23:23:30,968] MacBook-Pro.fritz.box/INFO/locust.main: Starting Locust 2.14.0 Traceback (most recent call last): File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run File "/Users/<user>/Coding/PycharmProjects/<project>-fastapi/.venv/lib/python3.10/site-packages/gevent/baseserver.py", line 34, in _handle_and_close_when_done return handle(*args_tuple) File "/Users/<user>/Coding/PycharmProjects/<project>-fastapi/.venv/lib/python3.10/site-packages/gevent/pywsgi.py", line 1577, in handle handler.handle() File "/Users/<user>/Coding/PycharmProjects/<project>-fastapi/.venv/lib/python3.10/site-packages/gevent/pywsgi.py", line 464, in handle result = self.handle_one_request() File "/Users/<user>/Coding/PycharmProjects/<project>-fastapi/.venv/lib/python3.10/site-packages/gevent/pywsgi.py", line 656, in handle_one_request if self.rfile.CLOSED: AttributeError: '_io.BufferedReader' object has no attribute 'CLOSED' 2022-12-28T22:23:35Z <Greenlet at 0x106fbc3a0: _handle_and_close_when_done(<bound method WSGIServer.handle of <WSGIServer at , <bound method StreamServer.do_close of <WSGIServer, (<gevent._socket3.socket [closed] at 0x106fcb460 o)> failed with AttributeError The following versions are being used: $ poetry show locust --tree locust 2.14.0 Developer friendly load testing framework β”œβ”€β”€ configargparse >=1.0 β”œβ”€β”€ flask >=2.0.0 β”‚ β”œβ”€β”€ click >=8.0 β”‚ β”‚ └── colorama * β”‚ β”œβ”€β”€ itsdangerous >=2.0 β”‚ β”œβ”€β”€ jinja2 >=3.0 β”‚ β”‚ └── markupsafe >=2.0 β”‚ └── werkzeug >=2.2.2 β”‚ └── markupsafe >=2.1.1 (circular dependency aborted here) β”œβ”€β”€ flask-basicauth >=0.2.0 β”‚ └── flask * β”‚ β”œβ”€β”€ click >=8.0 β”‚ β”‚ └── colorama * β”‚ β”œβ”€β”€ itsdangerous >=2.0 β”‚ β”œβ”€β”€ jinja2 >=3.0 β”‚ β”‚ └── markupsafe >=2.0 β”‚ └── werkzeug >=2.2.2 β”‚ └── markupsafe >=2.1.1 (circular dependency aborted here) β”œβ”€β”€ flask-cors >=3.0.10 β”‚ β”œβ”€β”€ flask >=0.9 β”‚ β”‚ β”œβ”€β”€ click >=8.0 β”‚ β”‚ β”‚ └── colorama * β”‚ β”‚ β”œβ”€β”€ itsdangerous >=2.0 β”‚ β”‚ β”œβ”€β”€ jinja2 >=3.0 β”‚ β”‚ β”‚ └── markupsafe >=2.0 β”‚ β”‚ └── werkzeug >=2.2.2 β”‚ β”‚ └── markupsafe >=2.1.1 (circular dependency aborted here) β”‚ └── six * β”œβ”€β”€ gevent >=20.12.1 β”‚ β”œβ”€β”€ cffi >=1.12.2 β”‚ β”‚ └── pycparser * β”‚ β”œβ”€β”€ greenlet >=2.0.0 β”‚ β”œβ”€β”€ setuptools * β”‚ β”œβ”€β”€ zope-event * β”‚ β”‚ └── setuptools * (circular dependency aborted here) β”‚ └── zope-interface * β”‚ └── setuptools * (circular dependency aborted here) β”œβ”€β”€ geventhttpclient >=2.0.2 β”‚ β”œβ”€β”€ brotli * β”‚ β”œβ”€β”€ certifi * β”‚ β”œβ”€β”€ gevent >=0.13 β”‚ β”‚ β”œβ”€β”€ cffi >=1.12.2 β”‚ β”‚ β”‚ └── pycparser * β”‚ β”‚ β”œβ”€β”€ greenlet >=2.0.0 β”‚ β”‚ β”œβ”€β”€ setuptools * β”‚ β”‚ β”œβ”€β”€ zope-event * β”‚ β”‚ β”‚ └── setuptools * (circular dependency aborted here) β”‚ β”‚ └── zope-interface * β”‚ β”‚ └── setuptools * (circular dependency aborted here) β”‚ └── six * β”œβ”€β”€ 
msgpack >=0.6.2 β”œβ”€β”€ psutil >=5.6.7 β”œβ”€β”€ pywin32 * β”œβ”€β”€ pyzmq >=22.2.1,<23.0.0 || >23.0.0 β”‚ β”œβ”€β”€ cffi * β”‚ β”‚ └── pycparser * β”‚ └── py * β”œβ”€β”€ requests >=2.23.0 β”‚ β”œβ”€β”€ certifi >=2017.4.17 β”‚ β”œβ”€β”€ charset-normalizer >=2,<3 β”‚ β”œβ”€β”€ idna >=2.5,<4 β”‚ └── urllib3 >=1.21.1,<1.27 β”œβ”€β”€ roundrobin >=0.0.2 β”œβ”€β”€ typing-extensions >=3.7.4.3 └── werkzeug >=2.0.0 └── markupsafe >=2.1.1
1
python,locust
74,945,549
Not entirely sure why, but I had some other issues in the same environment (pip updates failing for example) and decided to delete the entire virtual env and create it from scratch using the lock file. Afterwards, the exact same code works perfectly fine.
true
2022-12-28 22:28:50
60
0
1.2
Locust - AttributeError when accessing locust web interface
1
74,950,938
2
I am trying to write a discord.py bot. The following is an abridgement of my full code. intents = discord.Intents.default() intents.message_content = True load_dotenv() TOKEN = os.getenv('DISCORD_TOKEN') bot = commands.Bot(intents=intents, command_prefix="!", sort_commands=False) client = discord.Client(intents=intents) @bot.event async def on_ready(): print(f'{bot.user.name} has connected to Discord!') channel = bot.get_channel(12345678912345678912) await channel.send("This is a test.") bot.run(TOKEN) Sometimes, the code above will work as intended, printing the connection message and sending the desired message into the target channel. This is not always the case, however! I am getting an issue where the bot.get_channel() command returns none instead of the correct data. I assume this means that the channel cannot be found, except for the fact that the channel does exist. The error that is sent to the console is; AttributeError: 'NoneType' object has no attribute 'send' I am very, very new to discord.py so I would appreciate any help I can get to better understand what's going on under the hood here. Thank you very much for your help.
1
python,discord,discord.py
74,946,142
The accepted answer is not recommended. The reason it doesn't work is because the cache isn't populated yet in on_ready, but you should not make API requests (fetch_X & send) in on_ready! Making API calls in there has a high chance for Discord to just disconnect your bot. Also, on_ready gets triggered multiple times, so you'll end up sending this message constantly even though you only started it once. There's rarely a reason to do anything at all in there. If you want something to run once on startup you can create a Task & start it in setup_hook.
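A minimal sketch of that suggestion for the code in the question, assuming discord.py 2.x (the channel id is the one from the question):

```python
import asyncio
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True

class MyBot(commands.Bot):
    async def setup_hook(self):
        # called once, before the gateway connection; schedule a one-off startup task
        asyncio.create_task(self.send_startup_message())

    async def send_startup_message(self):
        await self.wait_until_ready()  # wait until the channel cache is populated
        channel = self.get_channel(12345678912345678912)
        if channel is not None:
            await channel.send("This is a test.")

bot = MyBot(intents=intents, command_prefix="!")
bot.run(TOKEN)  # TOKEN as loaded in the question
```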
true
2022-12-29 00:17:39
382
0
1.2
bot.get_channel() occasionally returning None in discord.py
1
74,946,385
1
My Python import could not be found after I changed the directory and restarted VSCode. I installed the package via cmd (pip install ) and it was found in VSCode. I restarted VSCode because I changed the file location to another directory. The package hasn't been found since then. I uninstalled the package and installed it via PowerShell, but it wouldn't work. I updated the pip installer and created a new file in the directory where it had been before and installed the package again. VSCode doesn't recognize the package anymore: Import "" could not be resolved (Pylance(reportMissingImports)) Does anybody know why this behaviour appears and how to fix it? I haven't found a proper solution on here or another forum.
1
python,directory,python-import,pylance
74,946,334
In the bottom right of your VS code instance, you'll see something like 3.11.0 64-bit, which indicates the version of Python that VS code is referring to when running and linting your code. The problem is you installed the package with a different version of Python. If you click on the aforementioned button (that says 3.11.0 64-bit) you should see a list of options show up for the different Python versions installed. You need to change to the one that you installed the package on.
false
2022-12-29 01:00:16
22
0
0
Python import not found after VSCode restart
0
74,946,655
2
Hi, I'm doing some image processing and I have a problem when the code runs for a long time: for at least an hour the code runs well, but as time goes on my code gets slower and memory usage increases. I tried to find some information about these problems; people use list comprehensions or the map function. Are these the only solutions? x_array = np.array([]) y_array = np.array([]) present_x_array = np.array([]) present_y_array = np.array([]) cnt = 0 for x in range(curve.shape[1]): if np.max(curve[:, x]) > 200 : for y in range(curve.shape[0]): if curve[y,x] > 200 : present_x_array = np.append(present_x_array, x) present_y_array = np.append(present_y_array, y) else: continue if cnt == 0: x_array = present_x_array y_array = present_y_array cnt = cnt + 1 else : if abs(np.max(y_array) - np.max(present_y_array)) <= 10 : x_array =np.append(x_array, present_x_array) y_array =np.append(y_array, present_y_array) else : continue present_x_array = np.array([]) present_y_array = np.array([]) else: continue I tried to turn it into a comprehension but I got stuck handling 'cnt == 0' and 'cnt = cnt + 1'.
1
python,python-3.x,for-loop,list-comprehension
74,946,461
From my understanding, there are a couple of ways to do the trick: 1) replace those if statements and the inner loop with np.where(); 2) turn your whole loop into a function or class. Hope this helps. Have a nice coding day.
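A rough sketch of the np.where() idea, assuming curve is the 2-D array from the question. It collects all coordinates above the threshold in one call instead of growing arrays with np.append inside nested loops; the per-column grouping condition from the original loop would still need separate handling:

```python
import numpy as np

# row/column indices of every pixel whose value exceeds the threshold
ys, xs = np.where(curve > 200)

# columns whose maximum never exceeds 200 simply contribute no coordinates,
# so the outer "if np.max(curve[:, x]) > 200" check is implicit
x_array = xs.astype(float)
y_array = ys.astype(float)
```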
false
2022-12-29 01:28:35
97
1
0.099668
How to solve memory usage increase and slowdown in a for loop
1
76,487,979
2
I wanted to create a virtual environment in conda prompt: conda create --name name_of_venv I am getting error: Collecting package metadata (current_repodata.json): done Solving environment: done CondaSSLError: Encountered an SSL error. Most likely a certificate verification issue. Exception: HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/main/win-64/current_repodata.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)'))) The same error I get with anaconda navigator too.
1
python,anaconda,conda,virtual-environment
74,946,960
I encountered this too; I solved it by closing my VPN.
false
2022-12-29 03:28:25
2,526
0
0
Anaconda Prompt is having an issue with SSL certificates
1
75,643,080
1
This is my old code: # session initialiser opt = Options(); opt.add_argument("--remote-debugging-port=8989"); opt.add_argument("--user-data-dir=C:\\chromedriver\\chromeProfile"); service_obj = Service("C:\\chromedriver\\chromedriver.exe"); driver = Chrome((service = service_obj), (options = opt)); # realtime debugger opt = Options() opt.add_experimental_option("debuggerAddress", "localhost:8989") service_obj = Service('C:\\chromedriver\\chromedriver.exe') driver = webdriver.Chrome(service=service_obj, options=opt) what i do with this: initiate a browser with a static port number. run a separate code to connect to the previous session. With this i don't need to relaunch chrome instance every time i debug with this i can fast forward the initial going to the page, navigate to desired location find element(need the debugger here) and debug my xpath or anything else fast. can't Do this with Undetected chromedriver: can't set the custom port. can't even connect with the port mentioned in the UC session. What I need: A way to do the same(like my old code) with undetected chromedriver setup of uc options = uc.ChromeOptions() # options.add_argument("--remote-debugging-port=50620") options.add_experimental_option("debuggerAddress", "localhost:50620") driver = uc.Chrome(options=options) print("working") output: it is not connecting to the existing session by uc instead it lunching a new one.
1
python,selenium,undetected-chromedriver
74,947,178
Using the code below with the undetected_chromedriver module works the same as .add_experimental_option in Selenium. Although it will connect to the already opened Chrome, it will also open a new one, but it will still work on the target remote-connection Chrome: options = uc.ChromeOptions() options.debugger_address = "127.0.0.1:1688" driver = uc.Chrome(options=options, driver_executable_path=".\chromedriver.exe")
false
2022-12-29 04:15:16
487
0
0
Set and Connect to Debugging port undetected chromedriver
1
74,947,690
1
I tried installing dlib files I'm getting this error: cd C:\Users\Dnyaneshwar\AppData\Local\Programs\Python\Python311\Lib\site-packages PS C:\Users\Dnyaneshwar\AppData\Local\Programs\Python\Python311\Lib\site-packages> python setup.py install You must use Visual Studio to build a python extension on windows. If you are getting this error it means you have not installed Visual C++. Note that there are many flavors of Visual Studio, like Visual Studio for C# development. You need to install Visual Studio for C++. subprocess.check_call(cmake_setup, cwd=build_folder) File "C:\Users\Dnyaneshwar\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 413, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['cmake', 'C:\\Users\\Dnyaneshwar\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\tools\\python', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\\Users\\Dnyaneshwar\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\build\\lib.win-amd64-cpython-311', '-DPYTHON_EXECUTABLE=C:\\Users\\Dnyaneshwar\\AppData\\Local\\Programs\\Python\\Python311\\python.exe', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\\Users\\Dnyaneshwar\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\build\\lib.win-amd64-cpython-311', '-A', 'x64']' returned non-zero exit status 1. I have installed the C++ CMake tool for windows but I'm still getting this error.
1
python,cmake,pip,face-recognition,dlib
74,947,576
I had to downgrade my Python from 3.11 to 3.9, and then it worked.
false
2022-12-29 05:26:45
44
0
0
facing error while installing facerecognition and dlib
2
76,313,395
1
I am using summary method of torchinfo package for printing the network summary. I have defined a subclass of the nn.Module as follows import torch class aNN(torch.nn.Module): def __init__(self, layers_stack: tuple): super(PINN, self).__init__() self.model = torch.nn.Sequential(*layers_stack) return def forward(self, x): return self.model(x) Then, I am building the aNN model using the following lines prints total number of parameters as 501 (in stead of 921) and also recursive for hidden layer in the summary as shown next to the code. num_inputs, num_outputs = 2, 1 num_units = 20 num_hidden_layers = 2 activation_fun = torch.nn.Tanh() input_layer = (torch.nn.Linear(num_inputs, num_units), activation_fun,) hidden_layers = ((torch.nn.Linear(num_units, num_units), activation_fun)) * 2 output_layer = (torch.nn.Linear(num_units, num_outputs),) nn_model = aNN(input_layer + hidden_layers + output_layer) summary(nn_model) ================================================================= Layer (type:depth-idx) Param # ================================================================= PINN -- β”œβ”€Sequential: 1-1 -- β”‚ └─Linear: 2-1 60 β”‚ └─Tanh: 2-2 -- β”‚ └─Linear: 2-3 420 β”‚ └─Tanh: 2-4 -- β”‚ └─Linear: 2-5 (recursive) β”‚ └─Tanh: 2-6 -- β”‚ └─Linear: 2-7 21 ================================================================= Total params: 501 Trainable params: 501 Non-trainable params: 0 ================================================================= The same approach but explicit addition of hidden layers has resulted correct number of parameters as shown in below num_inputs, num_outputs = 2, 1 num_units = 20 num_hidden_layers = 2 activation_fun = torch.nn.Tanh() input_layer = (torch.nn.Linear(num_inputs, num_units), activation_fun,) hidden_layers = ((torch.nn.Linear(num_units, num_units), activation_fun)) + ((torch.nn.Linear(num_units, num_units), activation_fun)) output_layer = (torch.nn.Linear(num_units, num_outputs),) nn_model = aNN(input_layer + hidden_layers + output_layer) summary(nn_model) ================================================================= Layer (type:depth-idx) Param # ================================================================= PINN -- β”œβ”€Sequential: 1-1 -- β”‚ └─Linear: 2-1 60 β”‚ └─Tanh: 2-2 -- β”‚ └─Linear: 2-3 420 β”‚ └─Tanh: 2-4 -- β”‚ └─Linear: 2-5 420 β”‚ └─Tanh: 2-6 -- β”‚ └─Linear: 2-7 21 ================================================================= Total params: 921 Trainable params: 921 Non-trainable params: 0 ================================================================= What was wrong with the first approach? I am interested in the first approach as it more concise and easy to add multiple layers. Also, what does recursive mean?
1
python,deep-learning,pytorch,neural-network
74,947,598
The problem is with hidden_layers = ((torch.nn.Linear(num_units, num_units), activation_fun)) * 2: multiplying the tuple only duplicates references to the same Linear layer and activation function, so one layer is executed twice instead of two separate layers being constructed. That is also why torchinfo shows the repeated Linear as (recursive) and counts its parameters only once.
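A concise way to keep the first, loop-friendly approach while constructing a fresh Linear for every hidden layer (a sketch built from the variables in the question):

```python
import torch

num_inputs, num_outputs = 2, 1
num_units = 20
num_hidden_layers = 2

# build a brand-new Linear + Tanh pair on every iteration instead of repeating one pair
hidden_layers = tuple(
    layer
    for _ in range(num_hidden_layers)
    for layer in (torch.nn.Linear(num_units, num_units), torch.nn.Tanh())
)

input_layer = (torch.nn.Linear(num_inputs, num_units), torch.nn.Tanh())
output_layer = (torch.nn.Linear(num_units, num_outputs),)
nn_model = aNN(input_layer + hidden_layers + output_layer)  # aNN as defined in the question
```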
false
2022-12-29 05:28:59
210
0
0
What is recursive under the number of parameters in the summary of neural networks in PyTorch
1
76,183,833
4
I am trying to import the Top2Vec package for NLP topic modelling, but even after upgrading pip and numpy this error is coming. I tried pip install --upgrade pip and pip install --upgrade numpy. I was expecting to run from top2vec import Top2Vec and model = Top2Vec(FAQs, speed='learn', workers=8), but it is giving the mentioned error.
2
python,import,nlp,google-colaboratory
74,947,992
In my case, as for @CGFoX, I needed to uninstall and reinstall numba. The catch was that numba had been introduced by installing umap but then changed when I later imported scikit-image. After the latter import I had to reinstall numba-0.56.4 to avoid the error.
false
2022-12-29 06:37:49
44,305
0
0
How to remove the error "SystemError: initialization of _internal failed without raising an exception"
42
75,205,169
4
I am trying to import the Top2Vec package for NLP topic modelling, but even after upgrading pip and numpy this error is coming. I tried pip install --upgrade pip and pip install --upgrade numpy. I was expecting to run from top2vec import Top2Vec and model = Top2Vec(FAQs, speed='learn', workers=8), but it is giving the mentioned error.
2
python,import,nlp,google-colaboratory
74,947,992
For me it was not the numpy release, as I was already on version 1.23.5. I simply restarted the kernel and re-imported top2vec and it worked. P.S. I was on an AWS Linux machine.
false
2022-12-29 06:37:49
44,305
14
1
How to remove the error "SystemError: initialization of _internal failed without raising an exception"
42
74,950,478
3
class Parent: def __init__(self, name): self.name = name def printName(self): print(self.name) class Child(Parent): def __init__(self, name): Parent.__init__(name) bob = Child('Bob') bob.printName() It's working with super().__init__(name) but not with the class name, why?
1
python
74,950,432
It doesn't work since Parent.__init__ is defined to take two arguments: self and name, and you're passing just a single argument to it. Thus, if you want to call it like that, you need to use Parent.__init__(self, name). But there really is no point, and you should instead just use super().__init__(name), as you already know.
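For completeness, both working variants of the child constructor from the question:

```python
class Parent:
    def __init__(self, name):
        self.name = name

    def printName(self):
        print(self.name)

class Child(Parent):
    def __init__(self, name):
        super().__init__(name)           # preferred
        # Parent.__init__(self, name)    # also works, but self must be passed explicitly

bob = Child('Bob')
bob.printName()  # prints: Bob
```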
true
2022-12-29 11:17:19
331
2
1.2
TypeError: Parent.__init__() missing 1 required positional argument: 'name'
1
75,053,071
2
I am using imap-tools to download attachments from unread emails. I need mark as seen only those messages that contain attachments and have been downloaded. The code below works, but marks all unread messages as seen. import ssl from imap_tools import MailBox, AND from datetime import date context = ssl.create_default_context() today = date.today() with MailBox('imap.gmail.com', ssl_context=context).login('email', 'password', 'INBOX') as mailbox: for msg in mailbox.fetch(AND(seen=False), mark_seen = True, bulk = True): for att in msg.attachments: print(att.filename, today) if att.filename.lower().endswith('.xlsx'): with open('D:/pp/nf/mail/1.txt', 'a') as f: print(att.filename, today, file=f) with open('D:/pp/nf/mail/{}'.format(att.filename), 'wb') as f: f.write(att.payload)
1
python,imap-tools
74,950,536
Right answer: use the fetch argument mark_seen=False, so that fetching alone does not mark every unread message as seen, and then flag only the messages you actually processed.
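A sketch of how that could look for the loop in the question, assuming a recent imap-tools release where MailMessageFlags and MailBox.flag(uid, flag, value) are available (context is the SSL context from the question):

```python
from imap_tools import MailBox, AND, MailMessageFlags

with MailBox('imap.gmail.com', ssl_context=context).login('email', 'password', 'INBOX') as mailbox:
    for msg in mailbox.fetch(AND(seen=False), mark_seen=False, bulk=True):
        downloaded = False
        for att in msg.attachments:
            if att.filename.lower().endswith('.xlsx'):
                with open('D:/pp/nf/mail/{}'.format(att.filename), 'wb') as f:
                    f.write(att.payload)
                downloaded = True
        if downloaded:
            # mark as seen only the messages whose attachments were saved
            mailbox.flag(msg.uid, MailMessageFlags.SEEN, True)
```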
false
2022-12-29 11:27:13
147
0
0
Imap-tools. Mark as 'seen' only messages with attachments
1
74,952,009
1
I have some legacy python script to one way encrypt passwords for db storage import base64, hashlib def encrypt(passw): secret = "SECRET_KEY_HERE" passw = secret + passw passw = passw.encode('utf-8') m = hashlib.sha256() m.update(passw) encoded = base64.b64encode(m.digest()).decode('utf-8') return (encoded) I managed to put together a c# version for an existing 3rd party package we are using private static string Encrypt(string clearText) { SHA256 sHA256 = SHA256.Create(); byte[] sourceArray = sHA256.ComputeHash(Encoding.UTF8.GetBytes(EncryptionKey + clearText)); return Convert.ToBase64String(sourceArray); } These both return the same results. I am trying to put together a web front end using next and have added an encrypt function to the register / login page const crypto = require('crypto'); export const encrypt = (password: string) :string => { const key = process.env.PASS_KEY; return crypto.createHash('sha256').update(key + password).digest('base64') } this returns a different result to the other two functions. I have checked all the usual sources and all that I have found is that what I have put together should work fine. Can anyone please shed any light on why this is not working UPDATE: Just to add to my confusion, I added the js function to a react form in codesandbox and it returns the correct result. The function is currently only called via the nextauth authorize function to verify the login of a user like this const confirmPasswordHash = (plainPassword: string , hashedPassword: string) => { const res = plainPassword && hashedPassword.localeCompare(encrypt(plainPassword)) return res === 0 ? true:false }
1
javascript,python,node.js
74,951,124
Jonathan Ciapetti pointed me in the right direction to solve this. The problem did indeed lie within the process.env call. The key being used includes a dollar sign, which was, in turn, truncating part of the string being passed in. I solved this by escaping the dollar sign in the key and now it all works as expected.
true
2022-12-29 12:27:00
94
2
1.2
Hashing in python and javascript returning different results
2
76,180,455
1
I have a code where I need to save RAM usage so I've been tracing RAM usage through tracemalloc.get_traced_memory. However, I have found that what tracemalloc.get_traced_memory gives is very different from the RAM usage I see through htop. In particular, the usage appearing in htop is more than twice than the usage returned by tracemalloc.get_traced_memory[1] (which is supposed to return the peak value). I wonder why this is happening, and what would be a more accurate way to trace RAM usage other than tracemalloc.get_traced_memory? Ubuntu 20.04.4 LTS python version: 3.7.15
1
python,python-3.x,ram,htop,tracemalloc
74,951,396
It is important to note that tracemalloc only traces memory usage of Python objects that have been allocated through Python's memory manager, so it may not accurately reflect memory usage of other resources like file handles or sockets. Additionally, tracemalloc only tracks memory usage of the current process, so it does not account for memory used by child processes or other system resources.
false
2022-12-29 12:58:07
121
0
0
python3 -- RAM usage (htop and tracemalloc give different values)
1
74,954,038
2
Possibly because of my noobness, I can't get pylint and django management commands to agree about how to import files in my project. Setup # venv cd $(mktemp -d) virtualenv venv venv/bin/pip install django pylint pylint-django # django venv/bin/django-admin startproject foo touch foo/__init__.py touch foo/foo/models.py # management command mkdir -p foo/foo/management/commands touch foo/foo/management/__init__.py touch foo/foo/management/commands/__init__.py echo -e "import foo.models\nclass Command:\n def run_from_argv(self, options):\n pass" > foo/foo/management/commands/pa.py # install perl -pe 's/(INSTALLED_APPS = \[)$/\1 "foo",\n/' -i foo/foo/settings.py # testing venv/bin/python foo/manage.py pa venv/bin/pylint --load-plugins pylint_django --django-settings-module=foo.foo.settings --errors-only foo Result You'll note that manage.py is happy with import foo.models, but pylint isn't: ************* Module foo.foo.management.commands.pa foo/foo/management/commands/pa.py:1:0: E0401: Unable to import 'foo.models' (import-error) foo/foo/management/commands/pa.py:1:0: E0611: No name 'models' in module 'foo' (no-name-in-module) If I change it to import foo.foo.models, pylint passes but manage.py breaks: ... File "<frozen importlib._bootstrap_external>", line 883, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/tmp/tmp.YnQCNTrkbX/foo/foo/management/commands/pa.py", line 1, in <module> import foo.foo.models ModuleNotFoundError: No module named 'foo.foo' What am I missing?
1
python,django,pylint
74,952,320
Hm, okay, one suggested way was to add __init__.py to the topmost folder (though I really dislike this solution: your project folder is not a module). Another working solution is to rename the outer foo to bar, so the source of the trouble is the name collision. With the help of .pylintrc I managed to make sys.path equal for both python foo/manage.py pa and pylint --errors-only foo (just with debug prints everywhere). So it means that pylint does some weird lookup that is not completely equivalent to the lookup of Python itself, giving precedence to the current working directory: pylint resolves foo to the outer foo directory, while Python resolves it to the inner one. For some reason this changes when foo is part of another module, which doesn't make sense to me.
false
2022-12-29 14:25:42
48
1
0.099668
Django and pylint: inconsistent module reference?
1
74,953,370
3
I have a dataframe that looks like the following index 0 feature2 feature3 0 zipcode 0.10 y z 1 latitude 0.56 y z 2 longitude 0.39 y z I have used the following code to change the 0, the third column name to something else, but when I output the dataframe again nothing has changed and the column is still 0. df.rename(index = {0: 'feature_rank'}) # Or alternatively df.rename(column = {0: 'feature_rank'}) I would also like to know if it is possible to change the name of the second column to something else. Once again rename is not working for the 'index' column
1
python,pandas,dataframe
74,953,230
Hope this will work: df.rename(columns={'0': 'feature_rank'}, inplace=True). If the column label is the integer 0 rather than the string '0', use columns={0: 'feature_rank'} instead.
false
2022-12-29 15:49:11
542
2
0.132549
Rename column of dataframe from 0 to a string (rename not working)
2
74,954,206
1
Is there a neat solution to raise an error if a value is passed to the NamedTuple field that does not match the declared type? In this example, I intentionally passed page_count str instead of int. And the script will work on passing the erroneous value forward. (I understand that linter will draw your attention to the error, but I encountered this in a case where NamedTuple fields were filled in by a function getting values from config file). I could check the type of each value with a condition, but it doesn't look really clean. Any ideas? Thanks. from typing import NamedTuple class ParserParams(NamedTuple): api_url: str page_count: int timeout: float parser_params = ParserParams( api_url='some_url', page_count='3', timeout=10.0, )
1
python-3.x,python-typing,namedtuple
74,953,581
By design, Python is a dynamically typed language which means any value can be assigned to any variable. Typing is only supported as hints - the errors might be highlighted in your IDE, but they do not enforce anything. This means that if you need type checking you have to implement it yourself. On the upside, this can probably be automated, i.e. implemented only once instead of separately for every field. However, NamedTuple does not provide such checking out of the box.
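One possible do-it-yourself check - a sketch that validates the declared field types at runtime from the tuple's own annotations (plain isinstance checks only; nested generic types would need more work):

```python
from typing import NamedTuple, get_type_hints

class ParserParams(NamedTuple):
    api_url: str
    page_count: int
    timeout: float

def check_field_types(nt) -> None:
    # compare every field value against its annotated type
    for field, expected in get_type_hints(type(nt)).items():
        value = getattr(nt, field)
        if not isinstance(value, expected):
            raise TypeError(f"{field}={value!r} should be {expected.__name__}, "
                            f"got {type(value).__name__}")

parser_params = ParserParams(api_url='some_url', page_count='3', timeout=10.0)
check_field_types(parser_params)  # raises TypeError for page_count
```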
false
2022-12-29 16:20:41
72
1
0.197375
NamedTuple - checking types of fields at runtime
1
74,954,338
2
I use pickle to save a Python dictionary to a file. with open(FILENAME, 'wb') as f: pickle.dump(DATA, f, protocol=pickle.HIGHEST_PROTOCOL) It was all good until the disk run out of space on a shared server and my file became empty (0 byte). Traceback (most recent call last): File "****.py", line 81, in **** with open(FILENAME, 'wb') as f: OSError: [Errno 28] No space left on device What is the best solution to prevent overwriting the previous data if the above error occurs?
1
python,pickle,oserror
74,954,294
Write to a temporary file (on the same filesystem!), and move it to the real destination when finished. Maybe with an fsync in between, to make sure the new data is really written.
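A sketch of that pattern for the pickle dump in the question, using only the standard library (os.replace is atomic when source and destination are on the same filesystem):

```python
import os
import pickle
import tempfile

def safe_dump(data, filename):
    dir_name = os.path.dirname(os.path.abspath(filename))
    # write to a temporary file in the same directory (same filesystem)
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, 'wb') as f:
            pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
            f.flush()
            os.fsync(f.fileno())        # make sure the bytes actually hit the disk
        os.replace(tmp_path, filename)  # atomic swap; the old file survives any failure above
    except BaseException:
        os.unlink(tmp_path)
        raise

safe_dump(DATA, FILENAME)  # DATA and FILENAME as in the question
```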
true
2022-12-29 17:33:04
40
3
1.2
How to prevent data loss when writing to a file fails with "no space left on device"?
1
74,955,902
2
I'm trying to do something simple: just use a function to call loc from pandas and then print it to an Excel sheet, but I don't know why the output is empty. def update_output(farol,dias,prod,officer,anal,coord): df_f=df.loc[(df['FarolAging'].isin([farol])) & (df['Dias Pendentes'].isin([dias])) & (df['Produto'].isin([prod])) & (df['Officer'].isin([officer])) & (df['Analista'].isin([anal])) & (df['Coordenador'].isin([coord]))] df_f.to_excel('C:\\Users\\brechtl\\Downloads\\File.xlsx', index=False) update_output('vermelho', 'AtΓ© 20 dias','','Alexandre Denardi','Guilherme De Oliveira Moura','Anna Claudia') Edit: As asked in the comments, I created a dataframe similar to the one I'm using: df = pd.DataFrame(np.array([["Vermelho","Verde","Amarelo"],["20 dias","40 dias","60 dias"],["Prod1","Prod1","Prod2"], ["Alexandre Denardi","Alexandre Denardi","Lucas Fernandes"],["Guilherme De Oliveira Moura","Leonardo Silva","Julio Cesar"], ["Anna Claudia","Bruno","Bruno"]]), columns=["FarolAging","Dias Pendentes","Produto","Officer","Analista","Coord"])
1
python,pandas,dataframe
74,954,319
You have not passed a value for the third parameter when calling the function. Is that a mistake or intentional? It can result in NO data, because all the filter conditions are combined with AND, which means all of them must be true.
false
2022-12-29 17:35:03
79
0
0
How to filter data using a function? - Python
1
74,995,229
2
I am new to fabric. I am running a command as res = fabric.api.sudo(f"pip install {something}", user=user) and I expect the command to return stderr or abort when the package/version is not found, i.e. when pip install fails. However, I am getting res.return_code=0 and an empty res.stderr on an error condition. I do get the ERROR message on stdout. Is this expected behavior? How can I make stderr carry the error and get the correct return_code? Version: using Fabric3, version 1.14.post1. Any help would be great, thanks.
1
python,python-3.x,paramiko,invoke,fabric
74,955,225
The command string contained multiple commands joined with pipes, so I needed to leverage PIPESTATUS to get the right return code.
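A sketch of what that can look like, assuming the remote shell is bash (where PIPESTATUS holds the exit codes of the last pipeline); the log path is purely illustrative, and the doubled braces only escape the f-string:

```python
import fabric.api

# make the pipeline fail with the exit status of its first command (pip), not the last one
res = fabric.api.sudo(
    f"pip install {something} 2>&1 | tee /tmp/pip-install.log; exit ${{PIPESTATUS[0]}}",
    user=user,
)
print(res.return_code)  # now reflects the pip install exit status
```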
true
2022-12-29 19:16:34
19
0
1.2
fabric.api.sudo() returning empty stderr on error condition
0
74,978,246
1
import winreg REG_PATH = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" def set_reg(name, value): try: winreg.CreateKey(winreg.HKEY_CURRENT_USER, REG_PATH) registry_key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, REG_PATH, 0, winreg.KEY_WRITE) winreg.SetValueEx(registry_key, name, 0, winreg.REG_DWORD, value) winreg.CloseKey(registry_key) return True except WindowsError: return False def get_reg(name): try: registry_key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, REG_PATH, 0, winreg.KEY_READ) value, regtype = winreg.QueryValueEx(registry_key, name) winreg.CloseKey(registry_key) return value except WindowsError: return None #Read value print (get_reg('NoLogOff')) #Set Value (will write the value to reg, the changed val requires a win re-log to apply*) set_reg('NoLogOff',1) #will then apply the registry changes The code above will change the NoLogOff value to 1, but will not save/apply in the actual windows registry. Is there anything I can do to have this done in real-time??
1
python,winreg
74,956,810
The code will change the value in the Windows Registry. If you're using regedit, you must make sure you refresh in order to view the changes. Keep in mind that many changes to the Windows Registry require a computer restart (or at least a logout from the current user) in order to reload the settings and actually apply them. This is unavoidable for most values. If you wish to force the registry key changes onto the disk and bypass Windows' lazy flush, you can use winreg.FlushKey(registry_key) before closing. However, it still does not guarantee the settings will apply immediately without a restart, only that the registry will be saved to disk.
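For reference, this is where such a flush could go in the set_reg function from the question:

```python
import winreg

REG_PATH = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def set_reg(name, value):
    try:
        winreg.CreateKey(winreg.HKEY_CURRENT_USER, REG_PATH)
        registry_key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, REG_PATH, 0,
                                      winreg.KEY_WRITE)
        winreg.SetValueEx(registry_key, name, 0, winreg.REG_DWORD, value)
        winreg.FlushKey(registry_key)  # force the change to disk before closing
        winreg.CloseKey(registry_key)
        return True
    except WindowsError:
        return False
```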
false
2022-12-29 23:08:24
147
0
0
How to apply/save Python WINREG changes real-time
1
75,052,136
1
I've been stuck for a while with a Keras model structure issue. I'm using a dataset with grayscale images of size (181,181,1). Image paths are stored in pandas DF along with class names. Here's the datagenerator: trainData = train_generator.flow_from_dataframe( dataframe=train_df, x_col='Path', y_col='CLASS', image_size=(181,181), color_mode='grayscale', class_mode='categorical', batch_size=64, interpolation='nearest', shuffle=True, seed=42 ) valData = train_generator.flow_from_dataframe( dataframe=valid_df, x_col='Path', y_col='CLASS', image_size=(181,181), color_mode='grayscale', class_mode='categorical', batch_size=64, interpolation='nearest', shuffle=True, seed=42 ) Here's the model in question: model = Sequential([ layers.Rescaling(scale=1./255, offset = -1,input_shape=(181,181,1)), layers.Flatten(), Dense(units=90, activation='relu',kernel_initializer='he_normal',bias_initializer=biasInitializer), Dense(units=45, activation='relu',kernel_initializer='he_normal',bias_initializer=biasInitializer), Dense(units=20, activation='softmax',kernel_initializer='he_normal',bias_initializer=biasInitializer) ],name='myModel') Model Summary: model.summary() Model: "myModel" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= rescaling_19 (Rescaling) (None, 181, 181, 1) 0 flatten_17 (Flatten) (None, 32761) 0 dense_44 (Dense) (None, 90) 2948580 dense_45 (Dense) (None, 45) 4095 dense_46 (Dense) (None, 20) 920 ================================================================= Total params: 2,953,595 Trainable params: 2,953,595 Non-trainable params: 0 Model compilation: model.compile(optimizer=k.optimizers.Adam(learning_rate=0.001) , loss='categorical_crossentropy', metrics=['accuracy']) Model Training: epochs = 100 # train the model to the dataset model.fit(x=trainData, epochs=epochs, verbose=1, shuffle=True, validation_data=valData) Whenever I use flatten layer in the model, I get this error: Node: 'myModel/flatten_17/Reshape' Input to reshape is a tensor with 3145728 values, but the requested shape requires a multiple of 32761 [[{{node myModel/flatten_17/Reshape}}]] [Op:__inference_train_function_731522] Here's the full error transcript if that would help: InvalidArgumentError Traceback (most recent call last) Input In [79], in <cell line: 3>() 1 epochs = 65 2 # train the model to the dataset ----> 3 model.fit(x=trainData, 4 epochs=epochs, 5 verbose=1, 6 shuffle=True, 7 validation_data=valData) File /usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb File /usr/local/lib/python3.9/dist-packages/tensorflow/python/eager/execute.py:54, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 52 try: 53 ctx.ensure_initialized() ---> 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, 55 inputs, attrs, num_outputs) 56 except core._NotOkStatusException as e: 57 if name is not None: InvalidArgumentError: Graph execution error: Detected at node 'myModel/flatten_17/Reshape' defined at (most recent call last): File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.9/runpy.py", line 87, in _run_code exec(code, 
run_globals) File "/usr/local/lib/python3.9/dist-packages/ipykernel_launcher.py", line 17, in <module> app.launch_new_instance() File "/usr/local/lib/python3.9/dist-packages/traitlets/config/application.py", line 976, in launch_instance app.start() File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelapp.py", line 712, in start self.io_loop.start() File "/usr/local/lib/python3.9/dist-packages/tornado/platform/asyncio.py", line 215, in start self.asyncio_loop.run_forever() File "/usr/lib/python3.9/asyncio/base_events.py", line 601, in run_forever self._run_once() File "/usr/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once handle._run() File "/usr/lib/python3.9/asyncio/events.py", line 80, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 510, in dispatch_queue await self.process_one() File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 499, in process_one await dispatch(*args) File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 406, in dispatch_shell await result File "/usr/local/lib/python3.9/dist-packages/ipykernel/kernelbase.py", line 730, in execute_request reply_content = await reply_content File "/usr/local/lib/python3.9/dist-packages/ipykernel/ipkernel.py", line 383, in do_execute res = shell.run_cell( File "/usr/local/lib/python3.9/dist-packages/ipykernel/zmqshell.py", line 528, in run_cell return super().run_cell(*args, **kwargs) File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2881, in run_cell result = self._run_cell( File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 2936, in _run_cell return runner(coro) File "/usr/local/lib/python3.9/dist-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner coro.send(None) File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3135, in run_cell_async has_raised = await self.run_ast_nodes(code_ast.body, cell_name, File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3338, in run_ast_nodes if await self.run_code(code, result, async_=asy): File "/usr/local/lib/python3.9/dist-packages/IPython/core/interactiveshell.py", line 3398, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "/tmp/ipykernel_61/2241442519.py", line 3, in <cell line: 3> model.fit(x=trainData, File "/usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler try: File "/usr/local/lib/python3.9/dist-packages/keras/engine/training.py", line 1409, in fit `tf.distribute.experimental.ParameterServerStrategy`. File "/usr/local/lib/python3.9/dist-packages/keras/engine/training.py", line 1051, in train_function self.loss_tracker.reset_states() File "/usr/local/lib/python3.9/dist-packages/keras/engine/training.py", line 1040, in step_function def __init__(self, *args, **kwargs): File "/usr/local/lib/python3.9/dist-packages/keras/engine/training.py", line 1030, in run_step def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None): File "/usr/local/lib/python3.9/dist-packages/keras/engine/training.py", line 889, in train_step File "/usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler try: File "/usr/local/lib/python3.9/dist-packages/keras/engine/training.py", line 490, in __call__ # default. 
File "/usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler try: File "/usr/local/lib/python3.9/dist-packages/keras/engine/base_layer.py", line 1014, in __call__ RuntimeError: if `super().__init__()` was not called in the File "/usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler def error_handler(*args, **kwargs): File "/usr/local/lib/python3.9/dist-packages/keras/engine/sequential.py", line 374, in call def build(self, input_shape=None): File "/usr/local/lib/python3.9/dist-packages/keras/engine/functional.py", line 458, in call def _trackable_children(self, save_type="checkpoint", **kwargs): File "/usr/local/lib/python3.9/dist-packages/keras/engine/functional.py", line 596, in _run_internal_graph # Read final output shapes from layers_to_output_shapes. File "/usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler try: File "/usr/local/lib/python3.9/dist-packages/keras/engine/base_layer.py", line 1014, in __call__ RuntimeError: if `super().__init__()` was not called in the File "/usr/local/lib/python3.9/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler def error_handler(*args, **kwargs): File "/usr/local/lib/python3.9/dist-packages/keras/layers/reshaping/flatten.py", line 98, in call ) Node: 'myModel/flatten_17/Reshape' Input to reshape is a tensor with 3145728 values, but the requested shape requires a multiple of 32761 [[{{node myModel/flatten_17/Reshape}}]] [Op:__inference_train_function_731522] I'm using Python 3.9, tried Tensorflow-gpu 2.9.1 and 2.11.0 .. both having same issue. Have been trying all sorts of suggestions I could find online, but still no luck. Appreciate any suggestions Thanks!
1
python,pandas,tensorflow,keras
74,957,141
Well, it turned out the reason for this issue is that I was using the keyword image_size for setting the image size in the flow_from_dataframe calls instead of the correct keyword target_size, so flow_from_dataframe was using the default value for target_size, which is (256,256). Fixing this and setting the target_size keyword to the correct image size of (181,181) fixed the issue for me.
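In other words, the generator definitions from the question only need the keyword renamed, e.g.:

```python
trainData = train_generator.flow_from_dataframe(
    dataframe=train_df,
    x_col='Path',
    y_col='CLASS',
    target_size=(181, 181),   # was image_size=(181,181); target_size is the keyword that is actually used
    color_mode='grayscale',
    class_mode='categorical',
    batch_size=64,
    interpolation='nearest',
    shuffle=True,
    seed=42
)
```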
true
2022-12-30 00:19:49
73
0
1.2
Problem with tensor size in a TF Keras dense network, Flatten layer not working while training
1
74,971,625
2
OS: Linux 4.18.0-193.28.1.el8_2.x86_64 anaconda: anaconda3/2022.10 Trying to install RAPIDS, I get: $ conda install -c rapidsai rapids Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: | Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your system: - feature:/linux-64::__glibc==2.28=0 - feature:|@/linux-64::__glibc==2.28=0 - rapids -> cucim=22.12 -> __glibc[version='>=2.17|>=2.17,<3.0.a0'] Your installed version is: 2.28 $ As has been asked by others (but, as far as I can tell, not answered), why is "__glibc" version 2.28 not between 2.17 & 3.0?
2
python,conda,rapids
74,957,311
Please try using the full install command, as shown in the getting started guide, which pins the rapids, python, and cuda toolkit versions, as well as some of the channels to retrieve any supporting packages: conda install -c rapidsai -c conda-forge -c nvidia rapids=22.12 python=3.9 cudatoolkit=11.5
false
2022-12-30 01:00:09
373
0
0
conda error on install for RAPIDS fails due to incompatible glib
1
75,190,293
2
OS: Linux 4.18.0-193.28.1.el8_2.x86_64 anaconda: anaconda3/2022.10 Trying to install RAPIDS, I get: $ conda install -c rapidsai rapids Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: | Found conflicts! Looking for incompatible packages. This can take several minutes. Press CTRL-C to abort. failed UnsatisfiableError: The following specifications were found to be incompatible with each other: Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your system: - feature:/linux-64::__glibc==2.28=0 - feature:|@/linux-64::__glibc==2.28=0 - rapids -> cucim=22.12 -> __glibc[version='>=2.17|>=2.17,<3.0.a0'] Your installed version is: 2.28 $ As has been asked by others (but, as far as I can tell, not answered), why is "__glibc" version 2.28 not between 2.17 & 3.0?
2
python,conda,rapids
74,957,311
I've got the same problem. My solution was to create a new conda environment with Python 3.8/3.9 and use the pip installation recommended by RAPIDS (just create the env and install RAPIDS): pip install cudf-cu11 dask-cudf-cu11 --extra-index-url=https://pypi.ngc.nvidia.com pip install cuml-cu11 --extra-index-url=https://pypi.ngc.nvidia.com pip install cugraph-cu11 --extra-index-url=https://pypi.ngc.nvidia.com
true
2022-12-30 01:00:09
373
2
1.2
conda error on install for RAPIDS fails due to incompatible glib
1
74,959,264
2
I need to add an additional optional argument from_strings to the constructor. Does this seem right? Or do I need to add a default value for from_strings? def __init__(self, size=19, from_strings): assert 2 <= size <= 26, "Illegal board size: must be between 2 and 26." self.size = size self.grid = [['E'] * size for _ in range(size)] self.from_strings = from_strings Because the constructor should be taking this: b = Board(3, ["O.O", ".@.", "@O."]) Or should it be like this? def __init__(self, size=19, from_strings=[]): assert 2 <= size <= 26, "Illegal board size: must be between 2 and 26." self.size = size self.grid = [['E'] * size for _ in range(size)] self.from_strings = from_strings
1
python,oop
74,959,109
In your case it is better to use a default value, for predictable function behavior. The first version would also need its arguments swapped, because a parameter without a default value must not follow one that has a default.
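A sketch of the constructor with a safe optional argument - None is used instead of a mutable [] default, which is a common Python pitfall:

```python
class Board:
    def __init__(self, size=19, from_strings=None):
        assert 2 <= size <= 26, "Illegal board size: must be between 2 and 26."
        self.size = size
        self.grid = [['E'] * size for _ in range(size)]
        # fall back to a fresh list so instances never share one default object
        self.from_strings = from_strings if from_strings is not None else []

b = Board(3, ["O.O", ".@.", "@O."])  # the call from the question
```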
false
2022-12-30 07:24:48
46
1
0.099668
Should I include a default value for the argument passed into the constructor?
1
74,959,149
2
What is the difference between flip() and flipud() in NumPy? Both functions seem to do the same thing, so which one should I use?
1
python,arrays,numpy,numpy-ndarray
74,959,110
flipud can only flip an array along the vertical axis (axis 0), while flip can flip along any given axis (or along every axis when none is given). They are very similar: np.flipud(a) is equivalent to np.flip(a, axis=0).
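A quick illustration of that equivalence:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

print(np.flipud(a))        # [[3 4], [1 2]] - rows reversed
print(np.flip(a, axis=0))  # same result as flipud
print(np.flip(a, axis=1))  # [[2 1], [4 3]] - columns reversed (what fliplr does)
print(np.flip(a))          # both axes reversed: [[4 3], [2 1]]
```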
false
2022-12-30 07:24:55
69
-1
-0.099668
What is the difference between flip() and flipud() in NumPy?
0
74,962,843
1
I'm learning to use FastAPI, psycopg2 and SQLAlchemy with python, which has been working fine. Now for some reason whenever I run my web app, the SQLAlchemy module cannot be found. I am running this in a Pipenv, with python 3.11.1 and SQLAlchemy 1.4.45, and running pip freeze shows SQLAlchemy is definitely installed, and my source is definitely my pipenv environment, the same from which I'm running my fastAPI server. I have tried uninstalling and reinstalling SQLAlchemy with Pipenv, and when I run python in interactive mode, it is the expected python version and I'm able to import SQLAlchemy and check sqalalchemy.version . Any ideas why it's saying it can't import when I run FastAPI? Code from my models.py module being imported into main.py: from sqlalchemy import Column, Integer, String, Boolean from app.database import Base class Post(Base): __tablename__ = "posts" id = Column(Integer, primary_key=True, nullable=False) title = Column(String, nullable=False) content = Column(String, nullable=False) published = Column(Boolean, default=True) # timestamp = Column(TIMESTAMP, default=now()) main.py: from fastapi import FastAPI, Response, status, HTTPException, Depends from pydantic import BaseModel import psycopg2 from psycopg2.extras import RealDictCursor import time from app import models from sqlalchemy.orm import Session from app.database import engine, SessionLocal models.Base.metadata.create_all(bind=engine) # FastAPI initialisation app = FastAPI() # function to initialise SQlAlchemy DB session dependency def get_db(): db = SessionLocal() try: yield db finally: db.close() # psycopg2 DB connection initialisation while True: try: conn = psycopg2.connect(host="localhost", dbname="fastapi", user="postgres", password="*********", cursor_factory=RealDictCursor) cursor = conn.cursor() print('Database connection successful.') break except Exception as error: print("Connecting to database failed.") print("Error: ", error) print("Reconnecting after 2 seconds") time.sleep(2) # this class defines the expected fields for the posts extending the BaseModel class # from Pydantic for input validation and exception handling ==> a "schema" class Post(BaseModel): title: str content: str published: bool = True # this list holds posts, with 2 hard coded for testing purposes my_posts = [{"title": "title of post 1", "content": "content of post 1", "id": 1}, {"title": "title of post 2", "content": "content of post 2", "id": 2}] # this small function simply finds posts by id by iterating though the my_posts list def find_post(find_id): for post in my_posts: if post["id"] == find_id: return post def find_index(find_id): try: index = my_posts.index(find_post(find_id)) except: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"Post of id: {find_id} not found.") return index # these decorated functions act as routes for FastAPI. # the decorator is used to define the HTTP request verb (e.g. get, post, delete, patch, put), # as well as API endpoints within the app (e.g. "/" is root), # and default HTTP status codes. @app.get("/") async def root(): return {"message": "Hello World"} # "CRUD" (Create, Read, Update, Delete) says to use same endpoint # but with different HTTP request verbs for the different request types. # (e.g. using "/posts" for all four CRUD operations, but using POST, GET, PUT/PATCH, DELETE respectively.) 
@app.get("/posts") def get_data(): cursor.execute("SELECT * FROM posts") posts = cursor.fetchall() print(posts) return {"data": posts} @app.post("/posts", status_code=status.HTTP_201_CREATED) def create_posts(post: Post): cursor.execute("INSERT INTO posts (title, content, published) VALUES (%s, %s, %s) RETURNING *", (post.title, post.content, post.published)) new_post = cursor.fetchone() conn.commit() return {"created post": new_post} @app.delete("/posts/{id}") def delete_post(id: int): cursor.execute("DELETE FROM posts * WHERE id = %s RETURNING *", str(id)) deleted_post = cursor.fetchone() conn.commit() if deleted_post is None: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"post with id: {id} not found.") else: print("deleted post:", deleted_post) return Response(status_code=status.HTTP_204_NO_CONTENT) @app.get("/posts/{id}") def get_post(id: int): cursor.execute("SELECT * FROM posts WHERE id = %s", str(id)) post = cursor.fetchone() if post is None: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"post with id: {id} was not found.") return {"post_detail": post} @app.put("/posts/{id}") def update_post(id: int, put: Post): cursor.execute("UPDATE posts SET title = %s, content = %s, published= %s WHERE id = %s RETURNING *", (put.title, put.content, put.published, str(id))) updated_post = cursor.fetchone() if updated_post is None: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=f"post with id: {id} was not found.") return {"updated_post_detail": updated_post} @app.get("/sqlalchemy") def test_posts(db: Session = Depends(get_db)): return {"status": "success"} ERROR LOG: louisgreenhalgh@MacBook-Pro ξ‚° ~/PycharmProjects/FASTAPI ξ‚° uvicorn app.main:app --reload ξ‚² βœ” ξ‚² FASTAPI-3Pf2tu2f INFO: Will watch for changes in these directories: ['/Users/louisgreenhalgh/PycharmProjects/FASTAPI'] INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [32662] using WatchFiles Process SpawnProcess-1: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started target(sockets=sockets) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/server.py", line 60, in run return asyncio.run(self.serve(sockets=sockets)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/server.py", line 67, in serve config.load() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/config.py", line 477, in load self.loaded_app = import_from_string(self.app) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 24, in import_from_string raise exc from None File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string module = importlib.import_module(module_str) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1206, in _gcd_import File "<frozen importlib._bootstrap>", line 1178, in _find_and_load File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 690, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/Users/louisgreenhalgh/PycharmProjects/FASTAPI/app/main.py", line 6, in <module> from app import models File "/Users/louisgreenhalgh/PycharmProjects/FASTAPI/app/models.py", line 1, in <module> from sqlalchemy import Column, Integer, String, Boolean ModuleNotFoundError: No module named 'sqlalchemy' Pipenv Graph Output: fastapi==0.88.0 - pydantic [required: >=1.6.2,<2.0.0,!=1.8.1,!=1.8,!=1.7.3,!=1.7.2,!=1.7.1,!=1.7, installed: 1.10.4] - typing-extensions [required: >=4.2.0, installed: 4.4.0] - starlette [required: ==0.22.0, installed: 0.22.0] - anyio [required: >=3.4.0,<5, installed: 3.6.2] - idna [required: >=2.8, installed: 3.4] - sniffio [required: >=1.1, installed: 1.3.0] greenlet==2.0.1 psycopg2-binary==2.9.5 SQLAlchemy==1.4.45
1
python,sqlalchemy,fastapi,pipenv
74,962,787
The most likely issue is that the uvicorn executable is not present in the same Python (virtual) environment. When a Python process starts, it looks at the location of the binary (uvicorn in this case), determines the Python base location (either the same folder the binary is in, or one above), and finally adds the appropriate site-packages location based on that base location. So in your case, try pip (or pipenv) install uvicorn so that uvicorn is installed into the same virtual environment.
true
2022-12-30 14:48:33
237
2
1.2
SQLAlchemy module not found despite definitely being installed with Pipenv
3
75,134,319
1
I have correctly got a microbit working with serial communication via COM port USB. My aim is to use COM over bluetooth to do the same. Steps I have taken: (on windows 10) bluetooth settings -> more bluetooth settings -> COM ports -> add -> incoming in device manager changed the baud rate to match that of the microbit (115,200) paired and connected to the microbit tried to write to both the serial and uart bluetooth connection from the microbit to the PC (using a flashed python script) using Tera Term, setup -> serial port... -> COM(number - in my case 4), with all necessary values (including 115,200 baud rate) After doing all of these, I see no incoming message on Tera Term. Have I missed anything?
1
python,bluetooth,serial-port,bbc-microbit,spp
74,963,246
This is not directly possible via BLE UART communication because it uses different protocols (as mentioned above by ukBaz). You can, however, communicate via custom BLE libraries such as bleak. Bleak has some good examples on its GitHub repo of how to scan GATT services and characteristics to find the TX and RX characteristics of your BLE device. From there you can connect to the micro:bit directly over Bluetooth and read from and write to its GATT table, without using the proprietary serial protocols. I'll make a tutorial at some point and link it back here when it's done.
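A minimal bleak sketch of that approach; the device address and the Nordic-UART-style characteristic UUIDs below are assumptions that must be replaced with whatever a GATT scan of your micro:bit actually reports:
import asyncio
from bleak import BleakClient

ADDRESS = "XX:XX:XX:XX:XX:XX"                       # hypothetical micro:bit address from a scan
UART_RX = "6e400002-b5a3-f393-e0a9-e50e24dcca9e"    # assumed write (RX) characteristic
UART_TX = "6e400003-b5a3-f393-e0a9-e50e24dcca9e"    # assumed notify (TX) characteristic

def handle_tx(_, data: bytearray):
    print("micro:bit says:", data.decode(errors="replace"))

async def main():
    async with BleakClient(ADDRESS) as client:
        await client.start_notify(UART_TX, handle_tx)       # receive data from the micro:bit
        await client.write_gatt_char(UART_RX, b"hello\n")   # send data to the micro:bit
        await asyncio.sleep(10)                             # keep the connection open briefly

asyncio.run(main())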
true
2022-12-30 15:34:28
44
1
1.2
Using serial ports over bluetooth with micro bit
0
75,038,663
1
I'm using Christofides algorithm to calculate a solution for a Traveling Salesman Problem. The implementation is the one integrated in networkx library for Python. The algorithm accepts an undirected networkx graph and returns a list of nodes in the order of the TSP solution. I'm not sure if I understand the algorithm correctly yet, so I don't really know yet how it determines the starting node for the calculated solution. So, my assumption is: the solution is considered circular so that the Salesman returns to his starting node once he visited all nodes. end is now considered the node the Salesman visits last before returning to the start node. The start node of the returned solution is random. Hence, I understand (correct me if I'm wrong) that for each TSP solution (order of list of nodes) with N nodes that is considered circular like that, there are N actual solutions where each node could be the starting node with the following route left unchanged. A-B-C-D-E-F-G-H->A could also be D-E-F-G-H-A-B-C->D and would still be a valid route and basically the same solution only with a different starting node. I need to find that one particular solution of all possible starting nodes of the returned order that has the greatest distance between end and start - assuming that that isn't already guaranteed to be the solution that networkx.algorithms.approximation.christofides returns.
1
python,networkx,traveling-salesman
74,963,650
After reading up a bit more on Christofides, it seems that, because of the minimum spanning tree generated as the first step, the desired result (the first and last nodes visited being the ones along the path that are farthest apart) already tends to hold.
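If you want to enforce it explicitly anyway, here is a small sketch; it assumes a weighted graph G whose edge weights are stored under the 'weight' attribute and that christofides returns the tour as a closed cycle with the start node repeated at the end:
from networkx.algorithms.approximation import christofides

cycle = christofides(G)     # e.g. [A, B, C, ..., A]
tour = cycle[:-1]           # drop the repeated start node
n = len(tour)

# index i of the tour edge (tour[i], tour[i+1]) with the largest weight
heaviest = max(range(n), key=lambda i: G[tour[i]][tour[(i + 1) % n]]["weight"])

# start right after the heaviest edge, so it becomes the closing end-to-start edge
rotated = tour[heaviest + 1:] + tour[:heaviest + 1]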
false
2022-12-30 16:20:04
32
0
0
Christofides TSP; let start and end node be those that are the farthest apart
0
74,968,888
1
The problem: I have a large amount of product data that I need to POST to my website via a REST API. The data: [{'name': TOOL, LINER UPPER', 'description': 'TOOL, LINER UPPER', 'short_description': '<font size="+3">\r\nFits Engines:\r\n</font>NFAD6-(E)KMK\r\n NFAD6-KMK2\r\n NFAD7-KMK\r\n NFAD7-LEIK3\r\n NFAD7-LIK3\r\n NFAD8-(E)KMK\r\n NFD9-(E)KMK\r\n TF50\r\n TF55H-DI\r\n TF55L\r\n TF55R-DI\r\n TF60\r\n TF60B\r\n TF65H-DI\r\n TF65L\r\n TF65R-DI\r\n TF70\r\n TS105C-GBB\r\n TS190R-B\r\n TS60C-GB\r\n TS70C-GB\r\n TS80C-GBB', 'tags': [{'name': '101300-92010'}], 'sku': '98JCDOZE4P', 'categories': [{'id': 307931}], 'slug': 'tool-liner upper-yanmar/'}, {'name': ' SEAL, OIL', 'description': 'SEAL, OIL', 'short_description': '<font size="+3">\r\nFits Engines:\r\n</font>NFD13-MEA\r\n NFD13-MEAS\r\n NFD13-MEP\r\n NFD13-MEPA\r\n NFD13-MEPAS\r\n TF110M(E/H/L\r\n TF110N-L\r\n TF120M(E/H/L\r\n TF120ML-XA\r\n TF120N-L\r\n TF80-M(E/H/L\r\n TF90-M(E/H/L', 'tags': [{'name': '103288-02221'}], 'sku': 'PX8AKH5JDR', 'categories': [{'id': 307931}], 'slug': '-seal-oil-yanmar/'}] What I have tried: i. Using a for loop. from woocommerce import API wcapi = API(url=url, consumer_key=consumer_key, consumer_secret=consumer_secret,timeout=50) for product in my_json_list: print(wcapi.post("products", product).json()) This works, but it's going to take until next millennium because I have millions of product pages I want to create. ii. The concurent.futures module. def post_data(product): return wcapi.post("products", product).json() # protect the entry point if __name__ == '__main__': # create the thread pool with ThreadPoolExecutor() as ex: # issue many asynchronous tasks systematically futures = [ex.submit(post_data, page) for page in data] # enumerate futures and report results for future in futures: print(future.result()) When I try this I get a timeout error, which seems to be connected to the API (read timeout=50), and the status of the future object is pending. Here's the full error output. 
Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 449, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 444, in _make_request httplib_response = conn.getresponse() ^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1374, in getresponse response.begin() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 318, in begin version, status, reason = self._read_status() ^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 279, in _read_status line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/socket.py", line 705, in readinto return self._sock.recv_into(b) ^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1278, in recv_into return self.read(nbytes, buffer) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1134, in read return self._sslobj.read(len, buffer) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TimeoutError: The read operation timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( ^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 787, in urlopen retries = retries.increment( ^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/util/retry.py", line 550, in increment raise six.reraise(type(error), error, _stacktrace) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/packages/six.py", line 770, in reraise raise value File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib_response = self._make_request( ^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 451, in _make_request self._raise_timeout(err=e, url=url, timeout_value=read_timeout) File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout raise ReadTimeoutError( urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='new.turbo-diesel.co.uk', port=443): Read timed out. 
(read timeout=50) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/martinhewing/Downloads/Python_Code/Woo_web_pages/create_DEP_pages.py", line 74, in <module> futures = [ex.submit(wcapi.post("products", page).json()) for page in data] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/martinhewing/Downloads/Python_Code/Woo_web_pages/create_DEP_pages.py", line 74, in <listcomp> futures = [ex.submit(wcapi.post("products", page).json()) for page in data] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/woocommerce/api.py", line 110, in post return self.__request("POST", endpoint, data, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/woocommerce/api.py", line 92, in __request return request( ^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, **send_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/adapters.py", line 578, in send raise ReadTimeout(e, request=request) requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='new.turbo-diesel.co.uk', port=443): Read timed out. (read timeout=50) When I look at the ex.summit() function I can see that the status is pending. <Future at 0x124d1f7d0 state=pending> does anyone know what might be the problem here?
1
python,threadpool,woocommerce-rest-api,concurrent.futures
74,964,033
The problem is the number of concurrent connections the server will accept from one client IP, which is typically limited to a small number per client in modern web servers. When I tried to POST using the plain for loop I was blocked; using the concurrent method is seen by the server as a DoS attack.
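A hedged sketch of the usual mitigation, reusing wcapi and data from the question; the worker count of 4 is only a guess to be tuned against whatever limit the server enforces:
from concurrent.futures import ThreadPoolExecutor

def post_data(product):
    return wcapi.post("products", product).json()

if __name__ == '__main__':
    # a small pool keeps the number of simultaneous connections per client modest
    with ThreadPoolExecutor(max_workers=4) as ex:
        futures = [ex.submit(post_data, page) for page in data]
        for future in futures:
            print(future.result())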
false
2022-12-30 17:04:29
243
0
0
POST Multiple Requests concurrent.futures Status = Pending, Error = Timeout
1
75,045,210
1
after I enter the phone number in the console for the pyrogram, the bot user string app.run() pyrogram gives this error: Exception has occurred: ValueError <class 'pyrogram.raw.types.auth.sent_code_type_email_code.SentCodeTypeEmailCode'> is not a valid SentCodeType this was not the case before. earlier it doesnt happend and worked god this happens regardless of the code until the app.run() line. how to fix it? whole code from pyrogram import Client,filters api_id = my api id api_hash = my api hash app = Client(name="my_account",api_id=api_id,api_hash= api_hash) regions = ("Π”Π½Ρ–ΠΏΡ€ΠΎΠΏΠ΅Ρ‚Ρ€ΠΎΠ²ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","ΠœΠΈΠΊΠΎΠ»Π°Ρ—Π²ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π₯Π΅Ρ€ΡΠΎΠ½ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π—Π°ΠΏΠΎΡ€Ρ–Π·ΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","ОдСська_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","ΠšΠΈΡ—Π²ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π’Ρ–Π½Π½ΠΈΡ†ΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π§Π΅Ρ€ΠΊΠ°ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","ΠšΡ–Ρ€ΠΎΠ²ΠΎΠ³Ρ€Π°Π΄ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π”ΠΎΠ½Π΅Ρ†ΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π₯Π°Ρ€ΠΊΡ–Π²ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π’Π΅Ρ€Π½ΠΎΠΏΡ–Π»ΡŒΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","ΠŸΠΎΠ»Ρ‚Π°Π²ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π›ΡŒΠ²Ρ–Π²ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π†Π²Π°Π½ΠΎΠ€Ρ€Π°Π½ΠΊΡ–Π²ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π§Π΅Ρ€Π½Ρ–Π²Π΅Ρ†ΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π Ρ–Π²Π½Π΅Π½ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π’ΠΎΠ»ΠΈΠ½ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π–ΠΈΡ‚ΠΎΠΌΠΈΡ€ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π—Π°ΠΊΠ°Ρ€ΠΏΠ°Ρ‚ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π‘ΡƒΠΌΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π§Π΅Ρ€Π½Ρ–Π³Ρ–Π²ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π₯ΠΌΠ΅Π»ΡŒΠ½ΠΈΡ†ΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ","Π›ΡƒΠ³Π°Π½ΡΡŒΠΊΠ°_ΠΎΠ±Π»Π°ΡΡ‚ΡŒ") allert = None @app.on_message(filters.all) def allert_hendler(client, mess): global regions global allert txt = mess.text txt2 = txt.split(" ") txt3 = txt2[-1].split("#") region = txt3[1] time = txt2[1] if region in regions: if txt2[0] == "πŸ”΄": allert = ["Π’Ρ€ΠΈΠ²ΠΎΠ³Π°",1] elif txt2[0] == "🟒": allert = ["Π’Ρ–Π΄Π±Ρ–ΠΉ", 0] elif txt2[0] == "🟑": allert = ["частковий Π²Ρ–Π΄Π±Ρ–ΠΉ",0] app.send_message(chat_id=-786324633,text=f"{allert[1]},{region},{time}") app.run() I tryed to redownload pyrogram and tgcrypto. I tryed to change code and keep only part which code need to work, but it happend again
1
python,compiler-errors,telegram,pyrogram
74,964,636
In December 2022, Telegram introduced additional (email-based) sign-in verification to combat spam; see core.telegram.org/api/auth#email-verification.
false
2022-12-30 18:18:26
218
1
0.197375
is not a valid SentCodeType pyrogram
1
75,022,823
2
I've got a cloud function I deployed a while ago. It's running fine, but some of its dependent libraries were updated, and I didn't specify == in the requirements.txt, so now when I try to deploy again pip can't resolve dependencies. I'd like to know which specific versions my working, deployed version is using, but I can't just do a pip freeze of the environment as far as I know. Is there a way to see which versions of libraries the function's environment is using?
1
python,google-cloud-platform,google-cloud-functions
74,964,835
I am still not aware of a way to get this information directly from Google Cloud Platform; I think it may not be surfaced after deployment. But a coworker had a workaround if you've deployed from a CI pipeline: go back and look in that pipeline's logs to see which packages were installed during the deploy, since it's printed there. This didn't quite save me, because I'd deployed my function manually from a terminal, but it got me closer, because I could see which versions were being used around that time.
false
2022-12-30 18:43:56
55
0
0
Can I see library versions for a google cloud function?
2
74,965,612
1
I am gathering all the links from a single web page and trying to store only the links that contain the string in a list. I can get all the links using this: links=[] for link in soup.findAll('a') links.append(link.get('href')) That code works but returns a huge list of over 700 links. I want to get those down to only include the items in a list. I am trying to use the any function like this: list_of_keywords = ['word1', 'word2', 'word3'] links=[] for link in soup.findAll('a') if any(word in link for word in list_of_keywords): links.append(link.get('href')) But that returns 0. I know that the words in the list are included in the links. What am I doing wrong? Thanks for the help!
1
python,python-3.x
74,965,338
I figured it out. The link was being returned as a bs4.element.Tag. I had to perform the link.get('href') first. Once that was done, I could then check it against the list.
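A minimal sketch of the fix described above, assuming soup and list_of_keywords are defined as in the question:
links = []
for link in soup.findAll('a'):
    href = link.get('href')          # extract the URL string from the Tag first
    if href and any(word in href for word in list_of_keywords):
        links.append(href)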
true
2022-12-30 19:53:45
30
1
1.2
Python3 store link if it contains an item in a list
1
74,997,631
1
The error: /opencv-python/opencv/modules/gapi/include/opencv2/gapi/streaming/cap.hpp:26:10: fatal error: opencv2/videoio.hpp: No such file or directory #include <opencv2/videoio.hpp> The docker image command that fails: RUN pip wheel . --verbose Here are my cmake args: ENV CMAKE_ARGS="\ -D BUILD_JAVA=OFF \ -D BUILD_PERF_TESTS=ON \ -D BUILD_TESTS=ON \ -D BUILD_opencv_apps=OFF \ -D BUILD_opencv_freetype=OFF \ -D BUILD_opencv_calib3d=OFF \ -D BUILD_opencv_videoio=OFF \ -D BUILD_opencv_python2=OFF \ -D BUILD_opencv_python3=ON \ -D WITH_GSTREAMER=OFF \ -D VIDEOIO_ENABLE_PLUGINS=OFF \ -D ENABLE_FAST_MATH=1 \ -D ENABLE_PRECOMPILED_HEADERS=OFF \ -D INSTALL_C_EXAMPLES=OFF \ -D INSTALL_PYTHON_EXAMPLES=OFF \ -D INSTALL_TESTS=OFF" I realize that the file is not found because I have videoio off, but it should not be looking for the file in the first place. Any advice? I've tried -D WITH_GSTREAMER=OFF but no success.
1
python,docker,opencv
74,965,734
You can build OpenCV without videoio, but you must also specifically disable gapi: -D BUILD_opencv_videoio=OFF -D BUILD_opencv_gapi=OFF
true
2022-12-30 20:54:45
112
2
1.2
Building opencv-python from source with videoio off results in file not found error
1
74,966,025
3
I have a list a = ["Today, 30 Dec", "01:10", "02:30", "Tomorrow, 31 Dec", "00:00", "04:30", "05:30", "01 Jan 2023", "01:00", "10:00"] and would like to kind of forward fill this list so that the result looks like this b = ["Today, 30 Dec 01:10", "Today, 30 Dec 02:30", "Tomorrow, 31 Dec 00:00", "Tomorrow, 31 Dec 04:30", "Tomorrow, 31 Dec 05:30", "01 Jan 2023 01:00", "01 Jan 2023 10:00"]
1
python,list,fill,forward
74,965,989
Looks like that list contains dates and times. Any item that contains a space is a date value; otherwise it is a time value. Iterate over the list: if you see a date value, save it as the current date; if you see a time value, append it to the current date and add that combined value to the new list.
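A small sketch of that loop, using the space test described above to tell dates from times:
a = ["Today, 30 Dec", "01:10", "02:30", "Tomorrow, 31 Dec", "00:00",
     "04:30", "05:30", "01 Jan 2023", "01:00", "10:00"]

b = []
current_date = None
for item in a:
    if " " in item:                      # date value: remember it
        current_date = item
    else:                                # time value: attach it to the current date
        b.append(f"{current_date} {item}")
# b == ["Today, 30 Dec 01:10", "Today, 30 Dec 02:30", "Tomorrow, 31 Dec 00:00", ...]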
false
2022-12-30 21:38:24
84
1
0.066568
Python list forward fill elements according to thresholds
1
74,969,211
3
I am trying to install torch using pypy. when I run pypy -m pip install torch, I get this error: ERROR: Could not find a version that satisfies the requirement torch (from versions: none) ERROR: No matching distribution found for torch Why is this happening, and how can I successfully install torch? I want to install torch (above 1.6.0), I can't
2
python,windows,pip,pytorch,pypy
74,966,575
Do you have Python version 3.11? If yes, try using a lower version; it doesn't work with Python 3.11. When I had Anaconda installed with Python 3.9, it worked fine, but when I updated Python to 3.11, I was unable to install it and was getting the same error.
false
2022-12-30 23:31:35
161
0
0
Can't install torch (windows)
0
74,969,176
3
I am trying to install torch using pypy. when I run pypy -m pip install torch, I get this error: ERROR: Could not find a version that satisfies the requirement torch (from versions: none) ERROR: No matching distribution found for torch Why is this happening, and how can I successfully install torch? I want to install torch (above 1.6.0), I can't
2
python,windows,pip,pytorch,pypy
74,966,575
try: pip3 install torch pip install pytorch
false
2022-12-30 23:31:35
161
0
0
Can't install torch (windows)
0
74,970,781
3
My folder structure is: |-fastapi |-app |-calc.py |-tests |-mytest.py In mytest.py I'm trying to import calc.py, like this: from app import calc In mytest.py, app and calc are both highlighted green, and when I hover over them, it says (module). It seems to be recognized, but when I run it, I get the error. I know this has been asked before but I haven't found the solution.
1
python,python-import,importerror,modulenotfounderror
74,966,729
You should check and ensure you do not have an already existing file named 'calc.py'.
false
2022-12-31 00:09:38
156
-1
-0.066568
ModuleNotFoundError even though the module is recognized
3
74,969,198
2
@bot.tree.command(name="clear", description="admin only", guild=discord.Object(guildid)) async def clear(interaction: discord.Interaction, amount : int = None): if not interaction.user.guild_permissions.manage_messages: return if amount == None: embed = discord.Embed(title="**πŸ“› Error**", description=f"Please enter the amount to be deleted",color=0xff0000, timestamp = datetime.datetime.now()) await interaction.response.send_message(embed=embed) else: await interaction.channel.purge(limit=amount) embed = discord.Embed(title="**🧹 Chat Cleaning **", description=f"{amount} recent chats have been deleted", color = 0xFFFD05, timestamp = datetime.datetime.now()) await interaction.response.send_message(embed=embed) await asyncio. sleep(2) await interaction.channel.purge(limit=1) File "C:\Users\Heeryun\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\app_commands\tree.py", line 1242, in _call await command._invoke_with_namespace(interaction, namespace) File "C:\Users\Heeryun\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\app_commands\commands.py", line 887, in _invoke_with_namespace return await self._do_call(interaction, transformed_values) File "C:\Users\Heeryun\AppData\Local\Programs\Python\Python310\lib\site-packages\discord\app_commands\commands.py", line 880, in _do_call raise CommandInvokeError(self, e) from e discord.app_commands.errors.CommandInvokeError: Command 'clear' raised an exception: NotFound: 404 Not Found (error code: 10062): Unknown interaction clear system I think it is interaction error The message is deleted, but an error occurs
1
python,discord,discord.py,interaction
74,967,136
This happens when you're too slow to respond to an interaction. You must always respond within 3 seconds or your interaction will fail. If you need more than 3 seconds, you can use defer(), and then reply using followup. Considering the fact that you're purging messages before replying, you won't be in time anymore. You should first defer, then do what you want, and then send the follow-up message. Note that you can only respond once; you have to use followup (or channel.send) for consecutive messages or it will error as well. Also: instead of deleting the message manually after 2 seconds, you can make it ephemeral (so only the user can see it, and they can manually dismiss it). The first if-statement will never send a response, so it will always cause an error in Discord; it's better to send an actual error message to the user instead of letting the command fail (you can once again make it ephemeral). Use is instead of == for None checks.
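A hedged sketch of that defer/follow-up pattern for discord.py 2.x, reusing the command signature from the question (embeds replaced by plain text for brevity):
@bot.tree.command(name="clear", description="admin only", guild=discord.Object(guildid))
async def clear(interaction: discord.Interaction, amount: int = None):
    if not interaction.user.guild_permissions.manage_messages:
        await interaction.response.send_message("Missing permission.", ephemeral=True)
        return
    if amount is None:
        await interaction.response.send_message("Please enter the amount to be deleted.", ephemeral=True)
        return
    # acknowledge within 3 seconds, then take as long as the purge needs
    await interaction.response.defer(ephemeral=True)
    await interaction.channel.purge(limit=amount)
    await interaction.followup.send(f"{amount} recent chats have been deleted", ephemeral=True)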
false
2022-12-31 02:08:53
1,586
4
0.379949
discord.py NotFound: 404 Not Found (error code: 10062): Unknown interaction
2
74,967,545
1
I want to use the Scrapy Python module and because I have anaconda installed, scrapy is installed. However, I don't want to use the anaconda environment I want to use VSCode. I type "pip install scrapy" but it returns "Requirement already satisfied". How do I change the path way of this module because I can't run Scrapy in VSCode. It gives me a "report missing imports" error but I already have it installed. I already tried to uninstall and reinstall but that didn't work.
1
python,scrapy
74,967,378
Python allows you to modify the module search path at runtime by modifying the sys.path variable. This allows you to store module files in any folder of your choice. Since the sys.path is a list, you can append a search-path to it.
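For example, a short sketch (the path below is purely hypothetical; point it at wherever pip says Scrapy is actually installed):
import sys
sys.path.append("/path/to/anaconda/lib/site-packages")   # hypothetical install location
import scrapy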
false
2022-12-31 03:42:22
30
0
0
How change python module path?
0
74,968,968
2
I am using a python script that login to AWS account with an IAM user and MFA (multi-factor authentication) enabled. The script runs continuously and does some operations (IoT, fetching data from devices etc etc). As mentioned, the account needs an MFA code while starting the script, and it does perfectly. But the problem is script fails after 36 hours because the token expires. Can we increase the session token expiration time or automate this task not to ask MFA code again and again?
1
python-3.x,amazon-ec2,boto3,multi-factor-authentication
74,968,390
Unfortunately not, the value can range from 900 seconds (15 minutes) to 129600 seconds (36 hours). If you are using root user credentials, then the range is from 900 seconds (15 minutes) to 3600 seconds (1 hour).
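A sketch of requesting the maximum IAM-user session length with boto3's STS client; the MFA device ARN and token code are placeholders:
import boto3

sts = boto3.client("sts")
resp = sts.get_session_token(
    DurationSeconds=129600,                                # 36 hours, the IAM-user maximum
    SerialNumber="arn:aws:iam::123456789012:mfa/my-user",  # hypothetical MFA device ARN
    TokenCode="123456",                                    # current code from the MFA device
)
credentials = resp["Credentials"]   # AccessKeyId, SecretAccessKey, SessionToken, Expiration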
false
2022-12-31 08:26:46
56
0
0
Increase aws session token expiration time
0
74,969,341
2
I have a function that is supposed to take input, calculate the average and total as well as record count. The bug in the code is that: Even though I have added a try and except to catch errors, these errors are also being added to the count. How do I only count the integer inputs without making the "Invalid Input" part of the count? Code snippet count = 0 total = 0 avg = 0 #wrap entire function in while loop while True: #prompt user for input line = input('Enter a number: ') try: if line == 'done': break print(line) #function formulars for total, count, avg count = int(count) + 1 total = total + int(line) avg = total / count except: print('Invalid input') continue #print function results print(total, count, avg) With the above code the output for print(total, count, avg) for input i.e 5,4,7, bla bla car, done : will be 16, 4, 5.33333333 expected output 16, 3, 5.33333333
1
python,conditional-statements
74,969,261
When the line total = total + int(line) throws an error, the previous line count = int(count) + 1 has already been executed, which incremented the count. Swapping these two lines should solve the problem.
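A sketch of the loop body from the question with the two lines swapped, so the count only increments after int(line) succeeds:
try:
    if line == 'done':
        break
    total = total + int(line)   # may raise ValueError; nothing has been counted yet
    count = count + 1           # only reached for valid integer input
    avg = total / count
except ValueError:
    print('Invalid input')
    continue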
true
2022-12-31 11:39:21
63
2
1.2
How to only count valid inputs
1
74,969,724
2
Here's what the dataset I'm working on looks like: Type SubType Municipality Social Media Facebook New Castle Onground Campus Monroe Onground Cafe Kutlski Social Media Instagram New Castle Social Media Tiktok San Andreas Social Media Facebook New Castle Social Media Facebook San Andreas I want to group it by Type and SubType then further filter it by Municipality and then value_counts() it. Here's what I've tried: ab = df.groupby([df['Type'] == 'Social Media', df['SubType']]) ab['Municipality'].value_counts() I almost got what I want only that it shows everything, not just the result of the condition (under the Type column, it has 'true' and false' section. This is the result I'm looking for: Type SubType Municipality Social Media Facebook New Castle 2 San Andreas 1 Instagram New castle 1 TikTok San Andreas 1 But instead, this is my result: Type SubType Municipality True Facebook New Castle 2 San Andreas 1 Instagram New Castle 1 Titkok San Andreas 1 False Onground Cafe 1 Campus 1 and so on... .... .... ...
1
python,pandas,group-by,data-analysis
74,969,681
Just filter first and group only by SubType: df.query('Type == "Social Media"').groupby('SubType')['Municipality'].value_counts()
false
2022-12-31 12:59:09
55
0
0
Pandas groupby 2 coluns/conditions then value_counts() by another column?
2
75,129,165
4
In Python, trying to run the opencv package in an AWS lambda layer. Using opencv-python-headless but keep getting this error. Response { "errorMessage": "Unable to import module 'lambda_function': /lib64/libz.so.1: version `ZLIB_1.2.9' not found (required by /opt/python/lib/python3.8/site-packages/cv2/../opencv_python_headless.libs/libpng16-186fce2e.so.16.37.0)", "errorType": "Runtime.ImportModuleError", "stackTrace": [] } Have tried different versions of opencv to no avail. And different versions of python.
2
python,amazon-web-services,lambda,aws-lambda
74,972,995
In your requirements.txt file you probably didn't specify a specific version for opencv-python-headless - Thus each time you deploy a new image it installs the newest one. And... guess what... the newest release was 2 weeks ago - and it appears not to be compatible with your environment. So: Always specify the specific version you are using. Specify version 4.6.0.66, as @job-heersink suggested.
false
2023-01-01 03:31:14
2,374
0
0
OpenCV - AWS Lambda - /lib64/libz.so.1: version `ZLIB_1.2.9' not found
3
74,974,047
4
In Python, trying to run the opencv package in an AWS lambda layer. Using opencv-python-headless but keep getting this error. Response { "errorMessage": "Unable to import module 'lambda_function': /lib64/libz.so.1: version `ZLIB_1.2.9' not found (required by /opt/python/lib/python3.8/site-packages/cv2/../opencv_python_headless.libs/libpng16-186fce2e.so.16.37.0)", "errorType": "Runtime.ImportModuleError", "stackTrace": [] } Have tried different versions of opencv to no avail. And different versions of python.
2
python,amazon-web-services,lambda,aws-lambda
74,972,995
You can create a layer, or (if making a layer isn't strictly required) install the necessary libraries in the same directory as your Lambda code using pip install opencv-contrib-python -t . (the . means the current directory; change it if needed). After downloading all the libraries, zip them along with the Lambda code and store the archive in an S3 bucket. Then just point the Lambda at that zip file and you should be good to go. Best wishes.
false
2023-01-01 03:31:14
2,374
2
0.099668
OpenCV - AWS Lambda - /lib64/libz.so.1: version `ZLIB_1.2.9' not found
3
75,159,540
1
In installing and utilizing cURL (specifically curl 7.86.0 (Windows) libcurl/7.86.0; previously I said it was curl 7.83.1 (Windows) libcurl/7.83.1 but I was mistaken) to download .htm files in conjunction with/subordinate to a mass media-file downloading program called gallery-dl, I ran into a filenaming problem regarding how cURL deals with "weird" characters. Basically, it seems that at least for my version or install of cURL, when I try to use some kind of alternate version of a symbol such as Big Solidus ⧸ slash instead of normal slash in the filenaming command, cURL will create the .htm file but will replace that alternate symbol with an underscore. I know this isn't a problem with cURL interpreting the Big Solidus as a normal slash, since when I try to instead use a Fullwidth Solidus / slash it errors out the same way it would with a normal / slash. As a simple example, try running something like curl [url] -o C:\dir\ec\to\ry\test⧸.htm or curl [url] -o "test⧸.htm" yourself. For me, it outputs test_.htm. Is there anything I can do, anything I can attach to the "weird" characters to get cURL to avoid changing them to underscores? Or is this something version-related?
1
python,windows,curl,encoding
74,974,252
OK, so OP here, and I've since figured out a solution to this problem, although I'm not really certain of the actual nature of the solution. Linux versions of cURL did not seem to have this problem of changing characters like alternate versions of functional characters (and other characters I had since discovered mine was also changing, like Japanese characters) into underscores, while multiple Windows versions did. My friend decided to compile a Windows build of cURL himself from the current source code available on GitHub to run a debugger, and for some reason this version just doesn't have the changing-"odd"-characters-to-underscores problem. It simply doesn't have the problem at all. Ask it to create a file with a Big Solidus ⧸ or a Japanese character あ or anything like that and it does it just fine. Our only guess is that it comes down to slight differences created by different compilers: the compiler used for the official Windows builds creates the problem while some other compilers don't.
false
2023-01-01 11:04:53
93
0
0
cURL automatically replacing alternates to functional characters with underscore?
0
74,976,632
3
I have installed pytesseract successfully but still getting this error in vscode. I tried installing tesseract in my venv in vscode. and it was successfully installed. but still , I'm getting this error. I used a simple code i.e., from PIL import Image import pytesseract as pt def tesseract(): path_to_tesseract = r"C:\Program Files\Tesseract-OCR\tesseract.exe" image_path="test.jpg" pt.tesseract_cmd = path_to_tesseract text = pt.image_to_string(Image.open(image_path)) print(text) tesseract()
1
python,ocr,tesseract,libraries,importerror
74,975,253
Import errors occur either due to the package not being installed or being installed in a different path. Since you said you installed pytesseract, I’m guessing it’s the latter. Try running your script in verbose mode with the -v flag to see the path in which Python looks for your packages. Then you can manually check if pytesseract is installed inside there or somewhere else.
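One quick way to run that check from inside the failing script itself:
import sys
print(sys.executable)   # which interpreter is actually running the script
print(sys.path)         # the directories Python searches for packages like pytesseract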
false
2023-01-01 14:44:40
750
0
0
ImportError : No Module Named pytesseract
1
74,976,695
1
I have created the project and I want to export the program to make it as an application so when I start in the desktop it opens the output of the program. I tried to search for such an option in the vs (2019) but I didn't find it.I hope that you reply to me.
1
python,visual-studio
74,975,603
The easiest way is to simply compile it with the Release settings and then get the .exe from the Release folder. Go to Solution Configurations (the second button to the left of the green Build button) and select Release; I guess currently Debug is selected. Right-click the solution in the Solution Explorer on the right and select "Open Folder in File Explorer" (pretty far down). Go one folder up and into the Release folder, then copy your .exe and paste it somewhere; now you should be able to start it from anywhere. If you compile it with the x64 settings you first have to go into the x64 folder and then into the Release folder inside it. You could also do the same with the Debug version, but the Release option optimizes, and it is literally named Release.
false
2023-01-01 15:51:14
34
0
0
How to export a project in vs 2019?
0
74,980,226
1
I tried to fix this problem for hours but I can't solve it. I read through some similar questions but they couldn't help me. I want to use the selectolax HTMLParser module inside my AWS Lambda function. I import the module from a layer like this: from selectolax.parser import HTMLParser I always get the error: "errorMessage": "cannot import name 'parser' from partially initialized module 'selectolax' (most likely due to a circular import) The problem does not lie in the name of my function/file; I called it "Test123". As selectolax is a public module, I was afraid to change something after installing it with pip. I reinstalled the package at least 3 times and uploaded it again as a layer.
1
python,lambda,module,html-parsing
74,975,839
Reinstalling the package with an older version (0.3.11) solved the problem.
false
2023-01-01 16:30:16
33
0
0
Lambda Selectolax Import partially initialized module 'selectolax'
0
74,976,292
2
I want to display this context inside an invoice html page but I seem to be stuck at getting data. I want this data to be displayed at the invoice page. Currently, it is giving me Error:''QuerySet' object has no attribute 'product'' models.py: class PurchaseItem(models.Model): product = models.ForeignKey(Item, on_delete=models.CASCADE) quantity = models.PositiveSmallIntegerField() purchase_price = models.DecimalField(max_digits=6, decimal_places=2) paid_amount = models.DecimalField(max_digits=6, decimal_places=2) views.py: def get_context_data(request, **kwargs): purchases = PurchaseItem.objects.all() context={ "company": { "name": "Mc-Services", "address": "1080, Vienna, Austria", "phone": "(818) XXX XXXX", "email": "contact@mcservice---.com", 'product': purchases.product, 'price' : purchases.product.price, 'quantity' : purchases.quantity, }} return render(request, 'purchase/pdf_template.html', context) and the html file pdf_template.html: <tbody> {% for purchase in purchases %} <tr> <td class="no" >{{purchase.product}}</td> <td class="qty" >{{purchase.quantity}}</td> <td class="total" >${{purchase.product.price}}</td> </tr> {% endfor %} </tbody>
1
python,html,django,django-views
74,976,040
Here's the error: purchases = PurchaseItem.objects.all() In your code purchases is the set of all the purchases, but then you try to use it as if it was a single purchase. You need to take one single element from the set. For example, if you wanted the newest item you could use purchases = PurchaseItem.objects.last().
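A minimal sketch of the view following that suggestion; field names are taken from the question, and note that with a single object the {% for purchase in purchases %} loop in the template is no longer needed:
def get_context_data(request, **kwargs):
    purchase = PurchaseItem.objects.last()   # one purchase instead of the whole queryset
    context = {
        "company": {
            "name": "Mc-Services",
            "address": "1080, Vienna, Austria",
        },
        "product": purchase.product,
        "price": purchase.product.price,
        "quantity": purchase.quantity,
    }
    return render(request, 'purchase/pdf_template.html', context)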
true
2023-01-01 17:04:07
43
1
1.2
Getting context from database for views
1
74,976,625
2
there, I have two data frames like in the following table First is df_zero_purchase: Includes around 4700 rows. OrderItemSKU PurchasePrice TotalWeight 4188-DE 0.0 2.5 5300-MY 0.0 3.8 1889-XC 0.0 4.7 df_zero_purchase = pd.DataFrame({ "OrderItemSKU": ['4188-DE', '5300-MY', '1889-XC'], "PurchasePrice": [0, 0, 0], "TotalWeight":[2.5, 3.8, 4.5] }) And the second is df_purchase: Includes 4814 rows. OrderItemSKU PurchasePrice 4188-DE 5.5 5300-MY 8.3 1889-XC 2.1 df_purchase = pd.DataFrame({ "OrderItemSKU": ['4188-DE', '5300-MY', '1889-XC'], "PurchasePrice": [5.5, 8.3, 2.1], }) I just wanted to update the zero PurchasePrices on my first data frame .I tried the following code but at the and it gives as shape with almost 50000 rows. I don't understand why ? So I need your help... df_merged = pd.merge(df_zero_purchase, df_purchase[['OrderItemSKU', 'PurchasePrice']], on='ORDERITEMSKU')
1
python,pandas,dataframe,merge
74,976,430
You must have duplicate values in the "OrderItemSKU" column. Please remove the duplicates from both dataframes and then try to merge. To inspect them, use df[df.duplicated('OrderItemSKU')] on both dataframes. If you want to remove the duplicates, use new_df = df.drop_duplicates('OrderItemSKU', keep='first').
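A short sketch of checking for and dropping the duplicates before merging, using the dataframe and column names from the question:
# rows whose OrderItemSKU appears more than once
print(df_purchase[df_purchase.duplicated('OrderItemSKU', keep=False)])

# keep the first occurrence of each SKU, then merge
df_purchase_unique = df_purchase.drop_duplicates('OrderItemSKU')
df_merged = pd.merge(df_zero_purchase, df_purchase_unique[['OrderItemSKU', 'PurchasePrice']],
                     on='OrderItemSKU')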
false
2023-01-01 18:04:23
84
0
0
How can I merge these two dataframes correctly?
1
74,978,215
2
I am currently looking to speed up my code using the power of multiprocessing. However I am encountering some issues when it comes to calling the compiled code from python, as it seems that the compiled file disappears from the code's view when it includes any form of multiprocessing. For instance, with the following test code: #include <omp.h> int main() { int thread_id; #pragma omp parallel { thread_id = omp_get_thread_num(); } return 0; } Here, I compile the program, then turn it into a .so file using the command gcc -fopenmp -o theories/test.so -shared -fPIC -O2 test.c I then attempt to run the said code from test.py: from ctypes import CDLL import os absolute_path = os.path.dirname(os.path.abspath(__file__)) # imports the c libraries test_lib_path = absolute_path + '/theories/test.so' test = CDLL(test_lib_path) test.main() print('complete') I get the following error: FileNotFoundError: Could not find module 'C:\[my path]\theories\test.so' (or one of its dependencies). Try using the full path with constructor syntax. However, when I comment out the multiprocessing element to get the follwing code: #include <omp.h> int main() { int thread_id; /* #pragma omp parallel { thread_id = omp_get_thread_num(); } */ return 0; } I then have a perfect execution with the python program printing out "complete" at the end. I'm wondering how this has come to happen, and how the code can seemingly be compiled fine but then throw problems only once it's called from python (also I have checked and the file is in fact created). UPDATES: I have now checked that I have libgomp-1.dll installed I have uninstalled and reinstalled MinGW, with no change happening. I have installed a different, 64 bit version of gcc and, using a different (64 bit python 3.10) version of python have reproduced the same error. This also has libgomp-1.dll.
1
python,c,multiprocessing,ctypes,file-not-found
74,978,154
Note where the error message says "or one of its dependencies". Try running ldd on your test.so file to see whether it's completely linked. EDIT1: Generally, gcc requires binutils (on MS Windows, I guess they could be combined), which means you should have objdump. If you run objdump -x test.so | more, you should see some lines starting with "NEEDED" in the "Dynamic section"; those are the shared libraries this one needs.
false
2023-01-02 00:09:03
173
2
0.197375
Why does adding multiprocessing prevent python from finding my compiled c program?
4
74,980,147
1
So I have a project that has multiple files regular python, and I'm using a jupyter lab python file as the 'main' file that imports and runs all the rest of the code. But if I make changes to those python files, the jupyter lab file does not automatically respond to updates in those files, and it takes a long time before the code runs properly with the updates. The main problem is that I have a text file that I constantly update, and the jupyter lab file reads from that, but it takes forever before the changes in the text file are actually noticed and the code runs off that. Is this just a known issue with jupyter lab or?
1
python,jupyter-notebook,jupyter-lab
74,978,273
There is no code, so it is difficult to know exactly what is happening here. But how would the Jupyter environment "notice" those changes? You have to re-run the code, and you have to keep in mind that Jupyter keeps variables in memory until the kernel is restarted (because of Python's garbage collector). I've tried to erase variables with del, but Jupyter always kept a reference to the old value (I don't know why); for that reason I put my code inside a function's scope, so the variable dies when the function is done. This is the only way I found to deal with this problem. I always try to work with functions, because it is hard to debug code in Jupyter with stale variable values.
false
2023-01-02 00:45:37
25
0
0
jupyter notebook slow at responding to updates in code or text information
0
74,978,428
1
I am really new in programming, especially, in machine learning. Currently, I am training my dataset and I am using KNN, random forest, and decision tree as my algorithms. However, my accuracy, precision, recall, and f1 scores in random forest and decision tree are all 1.0, which means something is wrong. On the other hand, my KNN scores are low (Accuracy: 0.892 Recall: 0.452 Precision: 0.824 F1-score: 0.584). I already cleaned and split my dataset for training and testing, and imputed (median) my dataset, so I am really confused as to why the results are like this. What can I do to fix this? P.S. I am not really sure how to ask questions here, so if I am lacking any information necessary, just tell me. dataset image: https://i.stack.imgur.com/6FR1K.png distribution of dataset: https://i.stack.imgur.com/1uZzN.png #Convert 0's to NaN columns = ["Age", "Race", "Marital Status", "T Stage", "N Stage", "6th Stage", "Grade", "A Stage", "Tumor Size", "Estrogen Status", "Progesterone Status", "Regional Node Examined", "Reginol Node Positive", "Survival Months", "Status"] data[columns] = data[columns].replace({'0':np.nan, 0:np.nan}) #imputing using median imp_median.fit(data.values) imp_median.fit(data.values) data_median = imp_median.transform(data.values) data_median = pd.DataFrame(data_median) data_median.columns =["Age", "Race", "Marital Status", "T Stage ", "N Stage", "6th Stage", "Grade", "A Stage", "Tumor Size", "Estrogen Status", "Progesterone Status", "Regional Node Examined", "Reginol Node Positive", "Survival Months", "Status"] #scaling data median minmaxScale = MinMaxScaler() X = minmaxScale.fit_transform(data_median.values) transformedDF = minmaxScale.transform(X) data_transformedDF = pd.DataFrame(X) data_transformedDF.columns =["Age", "Race", "Marital Status", "T Stage ", "N Stage", "6th Stage", "Grade", "A Stage", "Tumor Size", "Estrogen Status", "Progesterone Status", "Regional Node Examined", "Reginol Node Positive", "Survival Months", "Status"] #splitting the dataset features = data_transformedDF.drop(["Status"], axis=1) outcome_variable = data_transformedDF["Status"] x_train, x_test, y_train, y_test = train_test_split(features, outcome_variable, test_size=0.20, random_state=7) #cross validation def cross_validation(model, _X, _y, _cv=10): ''' Function to perform 10 Folds Cross-Validation Parameters model: Python Class, default=None This is the machine learning algorithm to be used for training. _X: array This is the matrix of features (age, race, etc). _y: array This is the target variable (1 - Dead, 0 - Alive). cv: int, default=10 Determines the number of folds for cross-validation. Returns The function returns a dictionary containing the metrics 'accuracy', 'precision', 'recall', 'f1' for training/validation set. 
''' _scoring = ['accuracy', 'precision', 'recall', 'f1'] results = cross_validate(estimator=model, X=_X, y=_y, cv=_cv, scoring=_scoring, return_train_score=True) return {"Training Accuracy scores": results['train_accuracy'], "Mean Training Accuracy":results['train_accuracy'].mean()*100, "Mean Training Precision": results['train_precision'].mean(), "Mean Training Recall": results['train_recall'].mean(), "Mean Training F1 Score": results['train_f1'].mean(), } #KNN knn = KNeighborsClassifier() cross_validation(knn, x_train, y_train, 10) #DecisionTree from sklearn.tree import DecisionTreeClassifier dtc = DecisionTreeClassifier() cross_validation(dtc, x_train, y_train, 10) #RandomForest from sklearn.ensemble import RandomForestClassifier rfc = RandomForestClassifier() cross_validation(rfc, x_train, y_train, 10) # Test predictions for dtc dtc_fitted = dtc.fit(x_train, y_train) y_pred = dtc_fitted.predict(x_test) print(confusion_matrix(y_test, y_pred)) print('Accuracy: %.3f' % accuracy_score(y_test, y_pred) + ' Recall: %.3f' % recall_score(y_test, y_pred) + ' Precision: %.3f' % precision_score(y_test, y_pred) + ' F1-score: %.3f' % f1_score(y_test, y_pred))\ # Test predictions for rfc rfc_fitted = rfc.fit(x_train, y_train) y_pred = rfc_fitted.predict(x_test) print(confusion_matrix(y_test, y_pred)) print('Accuracy: %.3f' % accuracy_score(y_test, y_pred) + ' Recall: %.3f' % recall_score(y_test, y_pred) + ' Precision: %.3f' % precision_score(y_test, y_pred) + ' F1-score: %.3f' % f1_score(y_test, y_pred)) # Test predictions for knn knn_fitted = knn.fit(x_train, y_train) y_pred = knn_fitted.predict(x_test) print(confusion_matrix(y_test, y_pred)) print('Accuracy: %.3f' % accuracy_score(y_test, y_pred) + ' Recall: %.3f' % recall_score(y_test, y_pred) + ' Precision: %.3f' % precision_score(y_test, y_pred) + ' F1-score: %.3f' % f1_score(y_test, y_pred)) **For KNN** 'Mean Training Accuracy': 90.2971947134574, 'Mean Training Precision': 0.8457275536528337, 'Mean Training Recall': 0.44194341372912804, 'Mean Training F1 Score': 0.5804614758695162 test predictions for knn Accuracy: 0.872 Recall: 0.323 Precision: 0.707 F1-score: 0.443 **For Decision Tree** 'Mean Training Accuracy': 100.0, 'Mean Training Precision': 1.0, 'Mean Training Recall': 1.0, 'Mean Training F1 Score': 1.0 test predictions for dtc: Accuracy: 0.850 Recall: 0.528 Precision: 0.523 F1-score: 0.525 **For Random Forest** 'Mean Training Accuracy': 99.99309630652398, 'Mean Training Precision': 1.0, 'Mean Training Recall': 0.9995454545454546, test predictions for rtc: Accuracy: 0.896 Recall: 0.449 Precision: 0.803 F1-score: 0.576 from imblearn.over_sampling import SMOTE smote = SMOTE() # Oversample the training data X_train_resampled, y_train_resampled = smote.fit_resample(x_train, y_train) I ran knn, rfc, and dtc again after running the code for smote
1
python,machine-learning,random-forest,decision-tree,knn
74,978,343
This might not be a technical issue with the code but rather with something known as target leakage. That is, one of the features in your model is recorded after your label has occurred. For example, if you are predicting whether a patient is going to die, and there is a survival-date field, then most models can perfectly predict the outcome. KNN is a bit different because it is a memorization model: it doesn't learn the relationship between the variables and the label. So if it hasn't seen an observation before, it won't give a perfect prediction even in the presence of target leakage.
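If "Survival Months" is indeed such a post-outcome field (an assumption based only on the column names in the question), a one-line sketch of removing it before the split:
# drop the label and the suspected leaky feature before training
features = data_transformedDF.drop(["Status", "Survival Months"], axis=1)
outcome_variable = data_transformedDF["Status"]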
false
2023-01-02 01:05:32
93
1
0.197375
How to fix weirdly perfect testing scores in machine learning
1
75,177,995
3
Operating System: macOS Monterey 12.6 Chip: Apple M1 Python version: 3.11.1 I try: pip3 install gensim The install process starts well but fatally fails towards the end while running 'clang'. The error message is: clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch arm64 -arch x86_64 -g -I/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -I/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/numpy/core/include -c gensim/models/word2vec_inner.c -o build/temp.macosx-10.9-universal2-cpython-311/gensim/models/word2vec_inner.o gensim/models/word2vec_inner.c:217:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~~~~~~~~~ 1 error generated. error: command '/usr/bin/clang' failed with exit code 1 [end of output] This issue is raised in a couple of github postings and is attributed to some incompatibility between cython and python 3.11. However, no suggestion is forwarded as to what users should do until cython is updated. I may have misrepresented the details of the discussions on github but I think this is the gist of it. Can anyone help me in installing gensim in the meantime? Thanks. I updated cython and aiohttp. The latter because I had seen a post where the aiohttp install failed for the same reason as mine (missing "longintrepr.h"). No improvement. "pip install gensim" still fails and fails with the same message as copied above.
2
python-3.x,cython,gensim
74,979,674
I also faced the same issue with the gensim library on a Windows laptop while using Python 3.11.1. Changing to Python 3.10 worked for me.
false
2023-01-02 06:55:20
12,506
8
1
Gensim install in Python 3.11 fails because of missing longintrepr.h file
10
74,989,590
3
Operating System: macOS Monterey 12.6 Chip: Apple M1 Python version: 3.11.1 I try: pip3 install gensim The install process starts well but fatally fails towards the end while running 'clang'. The error message is: clang -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -arch arm64 -arch x86_64 -g -I/Library/Frameworks/Python.framework/Versions/3.11/include/python3.11 -I/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/numpy/core/include -c gensim/models/word2vec_inner.c -o build/temp.macosx-10.9-universal2-cpython-311/gensim/models/word2vec_inner.o gensim/models/word2vec_inner.c:217:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~~~~~~~~~ 1 error generated. error: command '/usr/bin/clang' failed with exit code 1 [end of output] This issue is raised in a couple of github postings and is attributed to some incompatibility between cython and python 3.11. However, no suggestion is forwarded as to what users should do until cython is updated. I may have misrepresented the details of the discussions on github but I think this is the gist of it. Can anyone help me in installing gensim in the meantime? Thanks. I updated cython and aiohttp. The latter because I had seen a post where the aiohttp install failed for the same reason as mine (missing "longintrepr.h"). No improvement. "pip install gensim" still fails and fails with the same message as copied above.
2
python-3.x,cython,gensim
74,979,674
It seems your issue may be due to the specifics of a fairly-new Python, and lagging library support, on a somewhat new system (a MacOS M1 machine) which has its own somewhat-unique build toolchains. Unless you absolutely need to use Python 3.11.1, I'd suggest using Gensim within a Python environment with a slightly-older Python interpreter, where the various packages you truly need may be a little more settled. For example, on many OS/architecture/Python combinations, a standard pip install will grab precompiled libraries – so build errors of the type you're seeing can't happen. That your installation is falling back to a local compilation (which hits a problem without an easy off-the-shelf solution) is a hint that something about the full configuration is still somewhat undersupported by one or more of the involved libraries. If you use the conda 3rd-party system for managing Python virtual environments, it also offers you the ability to explicitly choose which Python version will be used in each environment. That is, you're not stuck with the exact version, and installed libraries, that are default/global on your OS. You could easily try Python 3.10, or Python 3.9, which might work better. And, keeping your development/project virtual-environment distinct from the system's Python is often considered a "best practice" for other purposes, too. There's no risk that you'll cause damage to the system Python and any tools reliant on it, or face an issue where several of your Python projects need conflicting library versions. (You just use a separate environment for each project.) And, the exercise of rigorously specifying what needs to be in your project's environment helps keep its prerequisites/dependencies clear for any future relocations/installations elsewhere. When using the conda tool for this purpose, I usually start with the miniconda version, so that I have explicit control over exactly what packages are installed, and can thus keep each environment minimally-specified for its purposes. (The larger anaconda approach pre-installs tons of popular packages instead.)
false
2023-01-02 06:55:20
12,506
7
1
Gensim install in Python 3.11 fails because of missing longintrepr.h file
10
74,989,935
1
I am trying to learn pytorch and I am starting with Fashion Mnist dataset. I created a model and it was giving horrible results. I found out that if I rewrite the model without using nn.Sequential, it actually works. I have no idea where is actual difference between these two and why is the one with nn.Sequential not working properly. This version achieves around 10% class Down(nn.Module): def __init__(self, in_channels, out_channels): super().__init__() self.down = nn.Sequential( nn.Conv2d(in_channels, out_channels, kernel_size= 3, padding = 1), nn.BatchNorm2d(out_channels), nn.ReLU(), nn.MaxPool2d(2) ) def forward(self, x): return self.down(x) class MyNet(nn.Module): def __init__(self): super(MyNet, self).__init__() self.net = nn.Sequential( Down(1, 128), Down(128,256)) def forward(self, x): x = self.net(x) # print(x.size()) x = torch.flatten(x, start_dim=1) x = nn.Linear(12544, 10)(x) return F.log_softmax(x, dim = 1) And this model achieves around 90% class MyNet(nn.Module): def __init__(self): super(MyNet, self).__init__() self.conv1 = nn.Conv2d(1,128, kernel_size = 3, padding = 1) self.pool = nn.MaxPool2d(2) self.conv2 = nn.Conv2d(128, 256, kernel_size = 3, padding = 1) self.lin1 = nn.Linear(12544, 10) self.lin2 = nn.Linear(64, 10) self.norm1 = nn.BatchNorm2d(128) self.norm2 = nn.BatchNorm2d(256) self.relu = nn.ReLU() def forward(self, x): x = self.conv1(x) x = self.norm1(x) x = self.relu(x) x = self.pool(x) x = self.conv2(x) x = self.norm2(x) x = self.relu(x) x = self.pool(x) x = torch.flatten(x, start_dim = 1) x = self.lin1(x) return F.log_softmax(x, dim = 1) Thank you for any advice
1
python,pytorch,conv-neural-network
74,980,651
You're making a linear layer every time you pass an input to your network. Declaring nn.Linear at __init__ will fix your problem
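A sketch of the fixed module, reusing the Down block, imports and the flattened size 12544 from the question, with the linear layer created once in __init__ so its weights are actually trained:
class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(Down(1, 128), Down(128, 256))
        self.fc = nn.Linear(12544, 10)   # created once; its parameters persist across batches

    def forward(self, x):
        x = self.net(x)
        x = torch.flatten(x, start_dim=1)
        return F.log_softmax(self.fc(x), dim=1)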
true
2023-01-02 09:19:08
167
1
1.2
Why is the same model with nn.Sequential giving worse results?
1
75,629,317
1
I have fine-tuned the T5-base model (from hugging face) on a new task where each input and target are sentences of 256 words. The loss is converging to low values however when I use the generate method the output is always too short. I tried giving minimal and maximal length values to the method but it doesn't seem to be enough. I suspect the issue is related to the fact that the sentence length before tokenization is 256 and after tokenization, it is not constant (padding is used during training to ensure all inputs are of the same size). Here is my generate method: model = transformers.T5ForConditionalGeneration.from_pretrained('t5-base') tokenizer = T5Tokenizer.from_pretrained('t5-base') generated_ids = model.generate( input_ids=ids, attention_mask=attn_mask, max_length=1024, min_length=256, num_beams=2, early_stopping=False, repetition_penalty=10.0 ) preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids][0] preds = preds.replace("<pad>", "").replace("</s>", "").strip().replace(" ", " ") target = [tokenizer.decode(t, skip_special_tokens=True, clean_up_tokenization_spaces=True) for t in reference][0] target = target.replace("<pad>", "").replace("</s>", "").strip().replace(" ", " ") The inputs are created using tokens = tokenizer([f"task: {text}"], return_tensors="pt", max_length=1024, padding='max_length') inputs_ids = tokens.input_ids.squeeze().to(dtype=torch.long) attention_mask = tokens.attention_mask.squeeze().to(dtype=torch.long) labels = self.tokenizer([target_text], return_tensors="pt", max_length=1024, padding='max_length') label_ids = labels.input_ids.squeeze().to(dtype=torch.long) label_attention = labels.attention_mask.squeeze().to(dtype=torch.long)
1
python,pytorch,huggingface-transformers,huggingface-tokenizers
74,981,011
For anyone it may concern: I found out the issue was with the max_length argument of the generate method. It limits the maximum number of tokens including the input tokens. In my case it was required to set max_new_tokens=1024 instead of the argument provided in the question.
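A hedged sketch of the adjusted call, keeping the other arguments as in the question (the values are illustrative, not verified for this model):

generated_ids = model.generate(
    input_ids=ids,
    attention_mask=attn_mask,
    max_new_tokens=1024,   # limits only the newly generated tokens, not input plus output
    min_length=256,
    num_beams=2,
    early_stopping=False,
    repetition_penalty=10.0,
)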
true
2023-01-02 10:00:42
703
0
1.2
T5 model generates short output
2
74,981,811
1
My overall objective is to check whether each row of a big array exists in a small array. Using in, testing numpy arrays sometimes results in false positives, whereas it returns the correct result for python lists. item = [1, 2] small = [[0,2], [5, 0]] item in small # False import numpy as np item_array = np.array(item) small_array = np.array(small) item_array in small_array # True Why does in return a false positive when using numpy arrays? For context, the following is my attempt to check membership of items from one array in another array: big_array = np.array([[5, 0], [1, -2], [0, 2], [-1, 3], [1, 2]]) small_array = np.array([[0, 2], [5, 0]]) # false positive for last item [row in small_array for row in big_array] # [True, False, True, False, True]
1
python,numpy
74,981,692
Let's do the example: np.array([1, 2]) in small_array. NumPy implements in roughly as an element-wise comparison followed by .any(), so it will check if the 1 is anywhere in the small array in the first position (index 0). It is not. Then it checks if the 2 is anywhere in the small array in the second position (index 1). It is! As one of the two returns True, it will return True. So np.array([i, 2]) in small_array will always return True for any i.
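A small sketch illustrating this, plus a row-wise membership test that avoids the false positive (plain NumPy broadcasting, using the arrays from the question):

import numpy as np

small_array = np.array([[0, 2], [5, 0]])
item_array = np.array([1, 2])

# `in` behaves like an element-wise comparison followed by .any():
print((small_array == item_array).any())               # True -> the false positive

# To test whether the whole row is present, require every column to match:
print((small_array == item_array).all(axis=1).any())   # False

big_array = np.array([[5, 0], [1, -2], [0, 2], [-1, 3], [1, 2]])
print([(small_array == row).all(axis=1).any() for row in big_array])
# [True, False, True, False, False]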
true
2023-01-02 11:11:01
65
3
1.2
Why does `in` operator return false positive when used on numpy arrays?
1
74,983,359
2
I have a list of tensors t_list=[tensor([[1], [1], [1]]), tensor([[1], [1], [1]]), tensor([[1], [1], [1]])] and want to convert it to [tensor([[1,0,0], [1,0,0], [1,0,0]]), tensor([[1,0,0], [1,0,0], [1,0,0]]), tensor([[1,0,0], [1,0,0], [1,0,0]])] I tried this code import torch z= torch.zeros(1,2) for i, item in enumerate(t_list): for ii, item2 in enumerate(item): unsqueezed = torch.unsqueeze(item2,0) cat1 = torch.cat((unsqueezed,z),-1) squeezed = torch.squeeze(cat1,0) t[i][ii] = squeezed But got this error RuntimeError: expand(torch.FloatTensor{[5]}, size=[]): the number of sizes provided (0) must be greater or equal to the number of dimensions in the tensor (1) I am not sure how to get around this
1
python,pytorch
74,981,918
Well, I stored them in a new list. Maybe it is not the best way, but this is how I made it work.
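For anyone looking for one concrete way to do it, a minimal sketch (assuming every tensor has shape [3, 1] as in the question) is to right-pad the last dimension with zeros and collect the results in a new list:

import torch
import torch.nn.functional as F

t_list = [torch.ones(3, 1, dtype=torch.long) for _ in range(3)]

# Pad the last dimension with two zeros on the right: [3, 1] -> [3, 3]
new_list = [F.pad(t, (0, 2)) for t in t_list]
print(new_list[0])
# tensor([[1, 0, 0],
#         [1, 0, 0],
#         [1, 0, 0]])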
false
2023-01-02 11:34:48
82
0
0
replacing an item (tensor) in a list with another tensor but of different shape, using pytorch
1
74,985,196
1
I have an API (a Django app) that lots of people use, and I want this API to handle millions of requests. How can I make it distributed so the API can handle many requests? Should I put the producer and consumer in one file?
1
python,django,apache-kafka
74,982,240
You need an HTTP load balancer, not Kafka, to scale incoming API requests. Once a request is made, you can produce Kafka events, or try to do something with a consumer, as long as you aren't blocking the HTTP response. File organization doesn't really matter, but yes: use one producer instance per app, while multiple consumer threads can be started independently as needed.
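As a rough illustration of the "produce after the request, don't block the response" idea, here is a sketch using the kafka-python package (the topic, server address and payload fields are made up, and error handling is omitted):

from kafka import KafkaProducer
import json
from django.http import JsonResponse

# One producer instance per app/process, created once at import time.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def my_view(request):
    # send() is asynchronous; it buffers the record and returns immediately,
    # so the HTTP response is not blocked on Kafka.
    producer.send("api-events", {"path": request.path})
    return JsonResponse({"status": "accepted"})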
false
2023-01-02 12:06:21
40
0
0
Django-kafka. Distributed requests to an endpoint to handle millions of requests
0
74,982,440
1
I installed a package with poetry add X, and so now it shows up in the toml file and in the venv (mine's at .venv/lib/python3.10/site-packages/). Now to remove that package, I could use poetry remove X and I know that would work properly. But sometimes, it's easier to just go into the toml file and delete the package line there. So that's what I tried by removing the line for X. I then tried doing poetry install but that didn't do anything. When I do ls .venv/lib/python3.10/site-packages/, I still see X is installed there. I also tried poetry lock but no change with that either. So is there some command to take the latest toml file and clean up packages from being installed that are no longer present in the toml?
1
python,python-poetry
74,982,325
Whenever you manually edit the pyproject.toml you have to run poetry lock --no-update to sync the locked dependencies in the poetry.lock file. This is necessary because Poetry will use the resolved dependencies from the poetry.lock file on install if this file is available. Once the pyproject.toml and poetry.lock files are in sync, run poetry install --sync to get the venv in sync with the poetry.lock file.
true
2023-01-02 12:16:37
181
2
1.2
Poetry clean/remove package from env after removing from toml file
0
74,983,990
1
I installed Python, and then Django. I checked that Django is installed with the --version command. I installed a venv. Now I want to start a project, but django-admin startproject my_site doesn't work. I'm working with VS Code. What can I do?
1
python,python-3.x,django,python-3.10
74,983,204
This solution is for Windows. First you have to create your venv: python -m venv my_venv. Then you have to activate it (in my case, on Windows): my_venv\Scripts\activate. After you activate the virtual environment, run: pip install django. After that you can run either: django-admin.exe startproject my_site or: django-admin startproject my_site. In some cases it's: django-admin.py startproject my_site. I hope it helps.
true
2023-01-02 13:48:59
28
0
1.2
django-admin startproject mfdw_site doesn't work
0
74,983,781
1
From the start of using PyCharm I have been facing problems working with libraries. I tried reinstalling Python, pip, and PyCharm; adding and re-adding to PATH. I also tried using pipenv instead of virtualenv, and it worked once, but now it's happening again: I run $pip install numpy (as an example) in a cmd window and it says it was successfully installed. I go to PyCharm, type 'import numpy', and nothing happens. I know I can install manually (go to settings and so on), but it would be much better if packages installed with pip from cmd were instantly visible in PyCharm. Anybody, please, help.
1
python,pip,pycharm
74,983,670
Check if you have activated the virtual environment in which you installed the packages. For instance, you may have installed the package on the global Python version while running your program in a virtual environment, which will not work. So try activating your virtual environment before installing the packages. Step 1: activate {name_of_pipenv}, then pip install numpy.
false
2023-01-02 14:36:18
27
0
0
PyCharm doesn't see things installed with pip
0
74,983,804
1
An interviewer asked me about this. I tried to answer but just got confused 😕
1
python,data-science
74,983,767
.py files contain the source code of a program, whereas .pyc files contain the compiled bytecode of your program.
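If a quick demonstration helps, the standard-library py_compile module can produce a .pyc from a .py explicitly (the file name here is just an example):

import py_compile

# Compiles example.py and writes the bytecode file into __pycache__/ by default.
path_to_pyc = py_compile.compile("example.py")
print(path_to_pyc)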
false
2023-01-02 14:46:28
78
0
0
What is the difference between .py and .pyc files?
0
74,984,135
1
Am trying to query every candidate that belong to a specific position and loop through it using the django template in my html. If I have just one position/poll all candidate will display in my frontend, but once i add another position/poll then the list of the candidate will not display again def index(request): context = {} instruction = "" positions = Position.objects.order_by('priority').all() for position in positions: candidates = Candidate.objects.filter(position=position) for candidate in candidates: votes = Votes.objects.filter(candidate=candidate).count() if position.max_vote > 1: instruction = "You may select up to " + str(position.max_vote) + " candidates" else: instruction = "Select only one candidate" context = { 'positions': positions, 'candidates': candidates, 'votes': votes, 'instruction': instruction } return render(request, 'poll/index.html', context) {% block content %} <div class="row"> <div class="mt-5"> {% for p in positions %} {{ instruction }} <h1>{{ p.name }}</h1> <p>{{ p.description }}</p> {% for c in candidates %} <h1>{{ candidate.fullname }}</h1> {% endfor %} {% endfor %} </div> </div> {% endblock %}
1
python,html,django
74,983,907
Well, as far as I understand, you are querying objects inside the for loop and not storing the result of each iteration; whenever the next iteration happens, it overwrites the candidates and votes variables...
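One possible way to restructure the view along those lines (a sketch only, reusing the model names from the question) is to build a list that keeps each position together with its own candidates and instruction, and pass that to the template:

def index(request):
    positions_data = []
    for position in Position.objects.order_by('priority'):
        candidates = Candidate.objects.filter(position=position)
        if position.max_vote > 1:
            instruction = "You may select up to " + str(position.max_vote) + " candidates"
        else:
            instruction = "Select only one candidate"
        positions_data.append({
            'position': position,
            'candidates': candidates,
            'instruction': instruction,
        })
    return render(request, 'poll/index.html', {'positions_data': positions_data})

In the template you would then loop over positions_data and, inside that loop, over item.candidates, so each position only shows its own candidates.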
false
2023-01-02 14:58:47
44
0
0
How do I get all candidates in a position
1
74,984,745
2
Consider this class: class Product(models.Model): name = models.CharField(verbose_name="Product Name", max_length=255) class Meta: verbose_name = "Product Name" I looked at the Django docs and it says: For verbose_name in a field declaration: "A human-readable name for the field." For verbose_name in a Meta declaration: "A human-readable name for the object, singular". When would I see either verbose_name manifest at runtime? In a form render? In Django admin?
1
python,django,django-models
74,984,318
verbose_name in a field declaration is set when the field's name alone is not enough to explain to the user what that field is exactly. For example, if the field is called name but you really mean a short name, it is better to set verbose_name to "short name". verbose_name in a Meta declaration is set when you want the objects themselves to be shown under a different label to the user in the admin panel. Of course, we also have verbose_name_plural in the Meta class. It is used when Django cannot correctly derive the plural form of a word: Django pluralizes by appending an "s", but this is not always correct. For example, imagine you have a model called Child; it is better to set verbose_name_plural to "children" in the Meta class. When you use a language other than English in Django, the above is even more useful.
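A small illustrative model (the field and label text are invented for the example, not taken from the question):

from django.db import models

class Child(models.Model):
    # Shown as the field's label in forms and on the admin change form:
    nickname = models.CharField(verbose_name="short name", max_length=50)

    class Meta:
        # Shown wherever the admin refers to a single object or the object list:
        verbose_name = "child"
        verbose_name_plural = "children"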
false
2023-01-02 15:40:26
57
1
0.099668
In Django, what's the difference between verbose_name as a field parameter, and verbose_name in a Meta inner class?
2
74,986,028
2
I need to create a MLPClassifier with hidden_layer_sizes, that is a tuple specifying the number of neurons in the hidden layers. For example: (10,) means that there is only 1 hidden layer with 10 neurons. (10, 50,) means that there are 2 hidden layers, the first with 10 neurons, the second with 50 neurons and so on. I want to test each of them in sequence. I have passed this dictionary: hl_parameters = {'hidden_layer_sizes': [(10,), (50,), (10,10,), (50,50,)]} And defined MLPClassifier like this: mlp_cv = MLPClassifier(hidden_layer_sizes=hl_parameters['hidden_layer_sizes'], max_iter=300, alpha=1e-4, solver='sgd', tol=1e-4, learning_rate_init=.1, verbose=True, random_state=ID) mlp_cv.fit(X_train, y_train) But when I fit the model, I got this error: TypeError Traceback (most recent call last) Input In [65], in <cell line: 9>() 8 mlp_cv = MLPClassifier(hidden_layer_sizes=hl_parameters['hidden_layer_sizes'], max_iter=300, alpha=1e-4, solver='sgd', tol=1e-4, learning_rate_init=.1, verbose=True, random_state=ID) ----> 9 mlp_cv.fit(X_train, y_train) File ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/neural_network/_multilayer_perceptron.py:752, in BaseMultilayerPerceptron.fit(self, X, y) 735 def fit(self, X, y): 736 """Fit the model to data matrix X and target(s) y. 737 738 Parameters (...) 750 Returns a trained MLP model. 751 """ --> 752 return self._fit(X, y, incremental=False) File ~/opt/anaconda3/lib/python3.9/site-packages/sklearn/neural_network/_multilayer_perceptron.py:385, in BaseMultilayerPerceptron._fit(self, X, y, incremental) 383 # Validate input parameters. 384 self._validate_hyperparameters() --> 385 if np.any(np.array(hidden_layer_sizes) <= 0): 386 raise ValueError( 387 "hidden_layer_sizes must be > 0, got %s." % hidden_layer_sizes 388 ) 389 first_pass = not hasattr(self, "coefs_") or ( 390 not self.warm_start and not incremental 391 ) TypeError: '<=' not supported between instances of 'tuple' and 'int' I cannot find a solution. How do I solve this?
1
python,machine-learning,scikit-learn,deep-learning,neural-network
74,984,624
MLPClassifier(hidden_layer_sizes=hl_parameters['hidden_layer_sizes'], max_iter=300, alpha=1e-4, solver='sgd', tol=1e-4, learning_rate_init=.1, verbose=True, random_state=ID) That argument is the issue: you are providing a list of tuples as input for hidden_layer_sizes. MLPClassifier can only take a single tuple for hidden_layer_sizes. If you need 3 hidden layers with 10, 50 and 50 neurons, just put (10, 50, 50) for hidden_layer_sizes. If you are testing different configurations, you can make a list of tuples and loop through the different combinations one at a time instead of passing the full list.
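A sketch of that loop, reusing the dictionary from the question (the data and the other hyperparameters such as ID, X_train and y_train are assumed to exist as in the question):

from sklearn.neural_network import MLPClassifier

for sizes in hl_parameters['hidden_layer_sizes']:
    # One classifier per configuration, each receiving a single tuple.
    mlp = MLPClassifier(hidden_layer_sizes=sizes, max_iter=300, alpha=1e-4,
                        solver='sgd', tol=1e-4, learning_rate_init=.1,
                        verbose=True, random_state=ID)
    mlp.fit(X_train, y_train)
    print(sizes, mlp.score(X_train, y_train))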
false
2023-01-02 16:09:12
98
1
0.099668
How can I pass a combination of architectures to an MLPClassifier?
2
74,984,945
5
I have file names like ios_g1_v1_yyyymmdd ios_g1_v1_h1_yyyymmddhhmmss ios_g1_v1_h1_YYYYMMDDHHMMSS ios_g1_v1_g1_YYYY ios_g1_v1_j1_YYYYmmdd ios_g1_v1 ios_g1_v1_t1_h1 ios_g1_v1_ty1_f1 I would like to remove only the suffix when it matches the string YYYYMMDDHHMMSS OR yyyymmdd OR YYYYmmdd OR YYYY. My expected output would be ios_g1_v1 ios_g1_v1_h1 ios_g1_v1_h1 ios_g1_v1_g1 ios_g1_v1_j1 ios_g1_v1 ios_g1_v1_t1_h1 ios_g1_v1_ty1_f1 How can I achieve this in Python using regex? I tried something like the below, but it didn't work: word_trimmed_stage1 = re.sub('.*[^YYYYMMDDHHMMSS]$', '', filename)
1
python,string
74,984,788
Try removing everything after the last _ detected.
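Stripping after the last underscore alone would also remove suffixes like _h1 or _f1, so a guard is probably needed. Here is a sketch of the regex approach the question asks about, under the assumption that the real file names end in digit-only timestamps of length 14, 8 or 4 (the placeholders in the question suggest that):

import re

def strip_date_suffix(filename):
    # Remove a trailing _<digits> only when it looks like YYYYMMDDHHMMSS, YYYYMMDD or YYYY.
    return re.sub(r'_(?:\d{14}|\d{8}|\d{4})$', '', filename)

print(strip_date_suffix("ios_g1_v1_20230102"))   # ios_g1_v1
print(strip_date_suffix("ios_g1_v1_t1_h1"))      # ios_g1_v1_t1_h1 (unchanged)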
false
2023-01-02 16:24:56
106
-3
-0.119427
How to remove a string ending with a specific string
1
76,519,104
2
I have tried making an app in python - kivy. After compiling the application into an executable file, I get a large chunk of error text. The following is the beginning and end of that error text while trying to run the finalized exe file: Traceback (most recent call last): File "logging\__init__.py", line 1103, in emit AttributeError: 'NoneType' object has no attribute 'write' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "logging\__init__.py", line 1103, in emit AttributeError: 'NoneType' object has no attribute 'write' ... ... ... RecursionError: maximum recursion depth exceeded These are my program's code files' contents: main.py: import random import sys from kivy.app import App from kivy.uix.widget import Widget from kivy.lang import Builder from kivy.config import Config Config.set('graphics', 'resizable', '0') # Designate Our .kv design file Builder.load_file('main.kv') class MyLayout(Widget): def release(self): self.ids.my_button.background_color = 5 / 255, 225 / 255, 120 / 255, 1 self.ids.my_button.color = 1, 1, 1, 1 def press(self): # Create variables for our widget # Update the label deck = list(range(1, 43)) random.shuffle(deck) # Create list of 6 values, and assign each with a number between 1 and 42 random_numbers = [0, 1, 2, 3, 4, 5] for i in range(0, 6): random_numbers[i] = deck.pop() # Sort the array from lowest to highest random_numbers.sort() self.ids.my_button.background_color = 50 / 255, 225 / 255, 120 / 255, 1 self.ids.my_button.color = 180 / 255, 180 / 255, 180 / 255, 1 self.ids.name_label.text = f'{random_numbers[0]} ' \ f'{random_numbers[1]} ' \ f'{random_numbers[2]} ' \ f'{random_numbers[3]} ' \ f'{random_numbers[4]} ' \ f'{random_numbers[5]}' class AwesomeApp(App): def build(self): return MyLayout() sys.setrecursionlimit(2000) if __name__ == '__main__': AwesomeApp().run() main.kv: <MyLayout> BoxLayout: orientation: "vertical" size: root.width, root.height Label: font_name: "files/cambriab.ttf" id: name_label text: "If you had 530M dinars, what would you do with it?" font_size: 32 Button: id: my_button size_hint: .4, .2 font_size: 32 font_name: "files/cambriaz.ttf" text: "Make me rich!" pos_hint: {'center_x': 0.5} background_color: 5/255,225/255,120/255,1 on_press: root.press() on_press: hassanGIF.anim_delay = 1/50 on_press: hassanGIF._coreimage.anim_reset(True) on_release: root.release() Image: id: hassanGIF source: 'files/sequence.zip' anim_delay : -1 anim_loop: 1 center_x: self.parent.center_x center_y: self.parent.center_y+400 size: root.width-400, root.height-400 Any help with this error would be greatly appreciated. I have tried using different methods of converting the program into an executable file, but it was to no avail. I have also tried setting the recursion limit to a fixed value, but that also didn't work.
1
python,recursion,kivy,exe
74,984,815
I've seen this bug on builds made with PyInstaller 5.7.0. If you're using it too, you should try building your app with PyInstaller 5.6.2.
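If you want to try that, pinning the version before rebuilding is typically just pip install pyinstaller==5.6.2 (adjust to your environment), then re-running your usual build command.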
false
2023-01-02 16:27:46
350
0
0
RecursionError: maximum recursion depth exceeded in python kivy app as an executable file
1
76,072,298
1
I'm making an AI using the N.E.A.T. algorithm that plays a game. There is a reporting function in this algorithm that shows statistics about every generation, and one of them is about the best genome. I'm trying to understand what is meant by this line: Best fitness: 20201.00000 - size: (4, 7) - species 1 - id 2564, especially the size and id part. When I went to the file of the algorithm, I found out this is the printing statement: print('Best fitness: {0:3.5f} - size: {1!r} - species {2} - id{3}' .format(best_genome.fitness, best_genome.size(), best_species_id, best_genome.key)) but still, I can't understand what these two numbers mean
1
python,neat
74,985,612
The id is simply the ID of that particular genome. And size refers to (as per the docs): Returns genome complexity, taken to be (number of nodes, number of enabled connections); currently only used for reporters - some retrieve this information for the highest-fitness genome at the end of each generation.
false
2023-01-02 17:56:07
40
0
0
Understand `reporting()` class in N.E.A.T algorithm
1
74,986,201
2
I am trying to create two python programs namely A and B. A will access 'test.xlsx'(excel file), create a sheet called 'sheet1' and write to 'sheet1'. Python program B will access 'test.xlsx'(excel file), create a sheet called 'sheet2' and write to 'sheet2' simultaneously. Is it possible to do the above process?
1
python,excel,pandas,openpyxl
74,986,033
Generally, the operation of opening a file associates a stream object with a real file. Any input or output operation performed on this stream object will be applied to the physical file associated with it. The act of closing the file (actually, the stream) ends the association; the transaction with the file system is terminated, and input/output may no longer be performed on the stream. Python doesn't flush the buffer (that is, write data to the file) until it's sure you're done writing, and one way to signal that is to close the file. If you write to a file without closing it, the data won't make it to the target file. When we are finished with our input and output operations on a file we should close it so that the operating system is notified and its resources become available again. There are two ways you can pick: either you open/close the file synchronously, or you make a copy of your file and destroy it afterwards.
false
2023-01-02 18:50:54
60
0
0
Can two Python programs write to different sheets in the same .xlsx file simultaneously?
0
75,105,118
1
I deployed a model using an Azure ML managed endpoint, but I found a bottleneck. I'm using Azure ML Managed Endpoint to host ML models for object prediction. Our endpoint receives a URL of a picture and is responsible for downloading and predicting the image. The problem is the bottleneck: each image is downloaded one at a time (synchronously), which is very slow. Is there a way to download images asynchronously or to create multiple threads? I expected a way to make it faster.
1
python,machine-learning,azure-machine-learning-service,azure-machine-learning-studio
74,986,098
We recommend using Azure Blob Storage to host the images and then using the Blob Storage SDK to fetch them.
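Independently of where the images are hosted, the "multiple threads" part of the question can be sketched with the standard library. This is only an illustration (the URL list, timeout and file naming are invented, and it is not Azure-specific code):

import concurrent.futures
import requests

def download(url, index):
    # Download one image and write it to disk; a real scoring script
    # might keep the bytes in memory instead of writing a file.
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    path = f"/tmp/image_{index}.jpg"
    with open(path, "wb") as f:
        f.write(response.content)
    return path

def download_all(urls):
    # Fetch several images concurrently with a small thread pool.
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(download, urls, range(len(urls))))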
false
2023-01-02 19:00:09
42
0
0
How to use Async or Multithread on Azure Managed Endpoint
1
74,986,708
1
I would like to know if there is a way for FastAPI to receive a URL of a file as a parameter and save this file to disk? I know it is possible with the requests library using requests.get() method, but is it possible with FastAPI to receive the URL and save it directly? I tried using file: UploadFile = File(...), but then it doesn't download the file when the URL is sent.
1
python,download,fastapi,starlette
74,986,488
I don't believe so. I've come across this before and was unable to find a solution (and ended up using requests like you mentioned), but seeing this I wanted to check again more thoroughly. Reviewing the uvicorn and fastapi repositories by searching the code itself, I see no functions/code that reference requests or urllib (they do use urllib.parse/quote, etc., though), which would be the two likely suspects for building requests. They do use httpx.AsyncClient, but only in tests. I would expect to see some use of these libraries in the main uvicorn/fastapi libraries if they had code to make external requests. Seeing the above, I actually think I will change my code to use httpx.AsyncClient anyway, since it is already a dependency.
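For completeness, a sketch of that workaround: a FastAPI endpoint that accepts a URL, downloads the file with httpx.AsyncClient, and writes it to disk. The endpoint path, field name and target file name are all made up for the example:

import os
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class FileRequest(BaseModel):
    url: str

@app.post("/download")
async def download_file(req: FileRequest):
    # Fetch the remote file without blocking the event loop.
    async with httpx.AsyncClient() as client:
        response = await client.get(req.url)
        response.raise_for_status()
    filename = os.path.basename(req.url) or "downloaded_file"
    with open(filename, "wb") as f:
        f.write(response.content)
    return {"saved_as": filename}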
false
2023-01-02 19:50:55
55
1
0.197375
How to receive URL File as parameter and save it to disk using FastAPI?
1