Dataset columns (name: type, observed range):
CreationDate: string (length 19)
Users Score: int64 (-3 to 17)
Tags: string (length 6 to 76)
AnswerCount: int64 (1 to 12)
A_Id: int64 (75.3M to 76.6M)
Title: string (length 16 to 149)
Q_Id: int64 (75.3M to 76.2M)
is_accepted: bool (2 classes)
ViewCount: int64 (13 to 82.6k)
Question: string (length 114 to 20.6k)
Score: float64 (-0.38 to 1.2)
Q_Score: int64 (0 to 46)
Available Count: int64 (1 to 5)
Answer: string (length 30 to 9.2k)
Each record below lists its fields in this column order.
2023-03-29 17:47:02
0
python,macos,terminal,gradio,gr
1
76,593,567
How do I remedy this AttributeError in the terminal while trying to execute a Gradio interface?
75,880,531
false
436
in Mac terminal, gotten similar errors in Jupyter. When I try and run this python app. I keep getting these errors. line 3, in import gradio as gr line 6, in demo = gr.Interface(fn=greet, inputs="text", outputs="text") AttributeError: partially initialized module 'gradio' has no attribute 'Interface' (most likely due to a circular import) I haven't used gradio much so any advice would be much appreciated. from gpt_index import ( SimpleDirectoryReader, GPTListIndex, GPTSimpleVectorIndex, LLMPredictor, PromptHelper, ) from langchain import OpenAI import gradio as gr import sys import os os.environ["OPENAI_API_KEY"] = "My API Key" def construct_index(directory_path): max_input_size = 4096 num_outputs = 512 max_chunk_overlap = 20 chunk_size_limit = 600 prompt_helper = PromptHelper( max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit, ) llm_predictor = LLMPredictor( llm=OpenAI( temperature=0.7, model_name="text-davinci-003", max_tokens=num_outputs ) ) documents = SimpleDirectoryReader(directory_path).load_data() index = GPTSimpleVectorIndex( documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper ) index.save_to_disk("index.json") return index def chatbot(input_text): index = GPTSimpleVectorIndex.load_from_disk("index.json") response = index.query(input_text, response_mode="compact") return response.response iface = gr.Interface( fn=chatbot, inputs=gr.inputs.Textbox(lines=7, label="Enter your text"), outputs="text", title="Chat", ) index = construct_index("database") iface.launch(share=True) I tried the code above and was expecting it to launch the gradio interface.
0
1
1
Check your Python file name: if it's gradio.py, rename it to something else so it doesn't shadow the installed gradio package.
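A quick way to confirm the shadowing problem described above, without triggering the circular import again, is to ask the import machinery which file it would load; this is a minimal diagnostic sketch, not part of the original answer:

```python
import importlib.util

spec = importlib.util.find_spec("gradio")
# If this prints a path inside your own project (e.g. .../gradio.py) instead of
# a site-packages path, a local file is shadowing the real gradio library.
print(spec.origin)
```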
2023-03-29 18:34:06
0
python,telethon
2
75,881,086
Telegram bot using Telethon hangs after entering the code
75,880,917
false
275
I'm working on a Telegram bot using the Telethon library in Python to fetch the last message from a source channel and forward it to a target channel. The bot connects to my account and asks for the authentication code sent to my Telegram app. However, after entering the code, the script hangs indefinitely without any error messages or further output. Here's the code I've been using: import asyncio from telethon.sync import TelegramClient from telethon.sessions import StringSession import socks api_id = 'YOUR_API_ID' api_hash = 'YOUR_API_HASH' phone_number = 'YOUR_PHONE_NUMBER' # Include country code, e.g., +1XXXXXXXXXX source_channel = '@source_channel_username' target_channel = '@target_channel_username' # Proxy settings proxy_host = 'YOUR_SOCKS5_HOST' proxy_port = YOUR_SOCKS5_PORT proxy_username = 'PROXY_USERNAME' # If the proxy requires authentication proxy_password = 'PROXY_PASSWORD' # If the proxy requires authentication # Session storage session_string = '' async def main(): async with TelegramClient(StringSession(session_string), api_id, api_hash, proxy=(socks.SOCKS5, proxy_host, proxy_port, True, proxy_username, proxy_password), sequential_updates=True) as client: print("Connecting to account...") # Login with your phone number await client.start(phone_number) print("Successfully connected to account.") print("Fetching the last message...") # Fetch the last message from the source channel last_message = await client.get_messages(source_channel, limit=1) print("Last message fetched.") # Delay before sending the message await asyncio.sleep(3) # Copy and send the message to the target channel if last_message: message_text = last_message[0].text message_media = last_message[0].media print("Sending message...") if message_media is not None: await client.send_message(target_channel, message_text, file=message_media) else: await client.send_message(target_channel, message_text) print("Successfully copied and sent the last message.") else: print("No messages found in the source channel.") # Save the session string after a successful login saved_session_string = client.session.save() print("Session saved. Copy this string and use it on the next launch:") print(saved_session_string) # Run the asynchronous function main() using asyncio.run() asyncio.run(main()) The issue occurs after the following output: Please enter your phone (or bot token): +XXXXXXXXXXX Please enter the code you received: XXXXX After this point, there are no more messages, and the script seems to hang forever. My Telegram app shows that the bot is connected and has an active session. I've already tried adding a delay before sending the message and using "sequential_updates=True" when creating the "TelegramClient" object, but it didn't help. Any suggestions or solutions would be greatly appreciated. Thanks in advance!
0
1
1
telethon.sync is meant to be used without await. Since your code awaits the client calls, use from telethon import TelegramClient instead.
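A minimal sketch of the corrected import, keeping the asynchronous style of the question's code (the credentials and session string are placeholders):

```python
import asyncio
from telethon import TelegramClient  # not telethon.sync, since the code awaits calls
from telethon.sessions import StringSession

api_id = 12345              # placeholder
api_hash = "YOUR_API_HASH"  # placeholder

async def main():
    async with TelegramClient(StringSession(""), api_id, api_hash) as client:
        # With the plain telethon import, awaiting client methods works as expected.
        me = await client.get_me()
        print(me.username)

asyncio.run(main())
```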
2023-03-30 04:51:21
0
python,blockchain,solidity,brownie
1
75,916,271
When Installing brownie on Mac I get this error
75,884,234
false
72
I tried to install brownie and I'm getting this error(I'm watching the code academy blockchain, solidity tutorial). I feel like after trying all possibilities I've reached dead end from my side. Please help cause I need to move forward with the tutorial This is the error I'm getting when trying to install on MAC Fatal error from pip prevented installation. Full pip output in file: /Users/rafeliafernandes/.local/pipx/logs/cmd_2023-03-30_08.38.31_pip_errors.log pip failed to build packages: bitarray cytoolz lru-dict multidict psutil regex yarl Some possibly relevant errors from pip install: error: subprocess-exited-with-error xcrun: error: invalid active developer path (/Library/Developer/CommandLineTools), missing xcrun at: /Library/Developer/CommandLineTools/usr/bin/xcrun error: command '/usr/bin/clang' failed with exit code 1 Error installing eth-brownie. /Users/rafeliafernandes/.local/pipx/logs/cmd_2023-03-30_08.38.31_pip_errors.log shows: 190.2ms (setup:799): 2023-03-30 08:38:31 190.8ms (setup:800): /Users/rafeliafernandes/Library/Python/3.11/bin/pipx install eth-brownie 190.8ms (setup:801): pipx version is 1.2.0 190.9ms (setup:802): Default python interpreter is '/Library/Frameworks/Python.framework/Versions/3.11/bin/python3' 191.8ms (package_name_from_spec:322): Determined package name: eth-brownie 191.8ms (package_name_from_spec:323): Package name determined in 0.0s 192.3ms (run_subprocess:173): running /Library/Frameworks/Python.framework/Versions/3.11/bin/python3 -m venv --without-pip /Users/rafeliafernandes/.local/pipx/venvs/eth-brownie 266.8ms (run_subprocess:186): stdout: 266.9ms (run_subprocess:188): stderr: 267.0ms (run_subprocess:189): returncode: 0 267.5ms (run_subprocess:173): running /Users/rafeliafernandes/.local/pipx/venvs/eth-brownie/bin/python -c import sysconfig; print(sysconfig.get_path('purelib')) 313.1ms (run_subprocess:186): stdout: /Users/rafeliafernandes/.local/pipx/venvs/eth-brownie/lib/python3.11/site-packages 313.3ms (run_subprocess:189): returncode: 0 313.7ms (run_subprocess:173): running /Users/rafeliafernandes/.local/pipx/shared/bin/python -c import sysconfig; print(sysconfig.get_path('purelib')) 358.4ms (run_subprocess:186): stdout: /Users/rafeliafernandes/.local/pipx/shared/lib/python3.11/site-packages 358.6ms (run_subprocess:189): returncode: 0 359.2ms (run_subprocess:173): running /Users/rafeliafernandes/.local/pipx/venvs/eth-brownie/bin/python --version 374.4ms (run_subprocess:186): stdout: Python 3.11.2 374.6ms (run_subprocess:188): stderr: 374.6ms (run_subprocess:189): returncode: 0 375.0ms (_parsed_package_to_package_or_url:147): cleaned package spec: eth-brownie 375.7ms (run_subprocess:173): running /Users/rafeliafernandes/.local/pipx/venvs/eth-brownie/bin/python -m pip install eth-brownie 64916.4ms (run_subprocess:189): returncode: 1 64917.7ms (subprocess_post_check_handle_pip_error:335): '/Users/rafeliafernandes/.local/pipx/venvs/eth-brownie/bin/python -m pip install eth-brownie' failed 64918.8ms (subprocess_post_check_handle_pip_error:352): Fatal error from pip prevented installation. Full pip output in file: /Users/rafeliafernandes/.local/pipx/logs/cmd_2023-03-30_08.38.31_pip_errors.log 64922.8ms (analyze_pip_output:298): pip failed to build packages: bitarray cytoolz lru-dict multidict psutil regex yarl 64923.3ms (rmdir:55): removing directory /Users/rafeliafernandes/.local/pipx/venvs/eth-brownie 64925.5ms (cli:866): PipxError: Error installing eth-brownie. 
Traceback (most recent call last): File "/Users/rafeliafernandes/Library/Python/3.11/lib/python/site-packages/pipx/main.py", line 863, in cli return run_pipx_command(parsed_pipx_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/rafeliafernandes/Library/Python/3.11/lib/python/site-packages/pipx/main.py", line 214, in run_pipx_command return commands.install( ^^^^^^^^^^^^^^^^^ File "/Users/rafeliafernandes/Library/Python/3.11/lib/python/site-packages/pipx/commands/install.py", line 60, in install venv.install_package( File "/Users/rafeliafernandes/Library/Python/3.11/lib/python/site-packages/pipx/venv.py", line 253, in install_package raise PipxError( pipx.util.PipxError: Error installing eth-brownie. 64929.3ms (cli:874): pipx finished.
0
2
1
The solution for this is to downgrade from Python 3.11 to Python 3.10 and redo the installation steps as shown in freeCodeCamp.org's Solidity, Blockchain, and Smart Contract Course – Beginner to Expert Python Tutorial.
2023-03-30 08:17:25
0
python,django,django-anymail
2
75,889,176
Getting this error when trying to install using pip install anymail
75,885,707
false
72
ERROR: Could not find a version that satisfies the requirement anymail (from versions: none) ERROR: No matching distribution found for anymail Can anyone help me with this
0
1
1
Try upgrading your pip: pip install --upgrade pip. Note: you may need to check your other packages after the upgrade.
2023-03-30 08:59:07
0
python,machine-learning,optimization,deep-learning,pytorch
2
76,153,288
How should I use torch.compile properly?
75,886,125
false
784
I'm currently trying to use pytorch 2.0 for boosting training performance with my project. And I've heard that torch.compile might be boost some models. So my question (for now) is simple; how should I use torch.compile with large model? Such as, should I use torch.model like this? class BigModel(nn.Module): def __init__(self, ...): super(BigModel, self).__init__() self.model = nn.Sequential( SmallBlock(), SmallBlock(), SmallBlock(), ... ) ... class SmallBlock(nn.Module): def __init__(self, ...): super(SmallBlock, self).__init__() self.model = nn.Sequential( ...some small model... ) model = BigModel() model_opt = torch.compile(model) ,or like this? class BigModel(nn.Module): def __init__(self, ...): super(BigModel, self).__init__() self.model = nn.Sequential( SmallBlock(), SmallBlock(), SmallBlock(), ... ) ... class SmallBlock(nn.Module): def __init__(self, ...): super(SmallBlock, self).__init__() self.model = nn.Sequential( ...some small model... ) self.model = torch.compile(self.model) model = BigModel() model_opt = torch.compile(model) For summary, Should compile each layer? or torch.compile do this automatically? Is there any tips for using torch.compile properly? Thanks To be honest, I tried both, but there are no differences.. And also, it didn't speed up dramatically, I just checked speed up rate for my model just about 5 ~ 10%.
0
3
1
The default mode of torch.compile does not seem to work, but it has another mode that can really accelerate your model: torch.compile(your_model, mode="reduce-overhead")
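A minimal sketch of that suggestion; the model here is a small stand-in for the question's BigModel, and the actual speedup depends on your model and GPU:

```python
import torch
import torch.nn as nn

# Stand-in for the BigModel from the question.
model = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 10))

# Compiling the top-level module is enough; torch.compile traces through
# nested sub-modules, so the SmallBlocks do not need their own compile calls.
model_opt = torch.compile(model, mode="reduce-overhead")

x = torch.randn(32, 128)
out = model_opt(x)  # the first call triggers compilation; later calls are faster
```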
2023-03-30 10:09:49
0
python,pandas,dataframe,numpy,data-analysis
2
75,887,040
Why does using df[[column_name]] display a DataFrame?
75,886,847
false
63
I was just asking myself: I understand that calling df[column_name] displays a Series because a DataFrame is built of different arrays. Though, why does calling df[[column_name]] (column_name being only one column) return a DataFrame and not a Series? I'm not sure I understand the logic behind how Pandas was built here. Thanks :) I was trying to explain to my students why indexing with a list of one element displays a DataFrame and not a Series, but did not manage to.
0
1
1
It may happen because when you give a single column_name as a string, pandas performs the selection and returns a single column (a Series) based on that search key. But when you provide the same column_name contained in a list, it tries to fetch all the keys of the list, which is just one in this case, hence resulting in a DataFrame. I guess the standard logic is to return a DataFrame whenever a list is provided, irrespective of the length of the list. import pandas as pd df = pd.DataFrame(columns=["a","b","c"], data=[[1,4,7],[2,5,8],[3,6,9]]) column_name = "a" print(type(df[column_name])) print(type(df[[column_name]])) output: <class 'pandas.core.series.Series'> <class 'pandas.core.frame.DataFrame'>
2023-03-30 14:33:09
2
python
1
75,890,197
I can't run a Python script in Visual Studio Code
75,889,543
false
218
I have a problem running a Python script in Visual Studio Code; in other IDEs I don't have this issue and can run the same Python script without any problem. In the terminal I got this message conda activate base conda : The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 conda activate base + CategoryInfo : ObjectNotFound: (conda:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException
0.379949
1
1
Sounds like there is an issue with the interpreter. Open the Command Palette in Visual Studio Code by pressing Ctrl + Shift + P. Type "Python: Select Interpreter" and select it. Choose the conda environment that you want to use from the list of available interpreters. After selecting the interpreter, close and reopen VS Code (for the changes to take effect). Once you have the correct interpreter, you should be able to run the script in the terminal.
2023-03-30 18:22:53
1
python,discord
1
75,958,304
Python discord bot not joining voice channel, but without sending an error
75,891,722
true
306
I made a music bot for a discord server one month ago, using discord.py. It worked well until one week ago, but now when I use the join command the bot fails to connect to the voice channel. Here is the code of the command : class MonClient(discord.Client): async def on_ready(self): self.voice = None # contains the <VoiceClient> object created by <chan.connect> in the join command async def on_message(self,message): if message.author.bot: pass elif message.content[0] == "!": # the command prefix is "!" parts = message.content.split(" ") command = parts[0][1:] if command == "join": if message.author.voice != None and self.voice == None: chan = message.author.voice.channel await message.channel.send("before connecting to voice channel") self.voice = await chan.connect() await message.channel.send("after connecting to voice channel") else: await message.channel.send("Bot already connected to voice channel or user not connected to voice channel") More precisely, the bot successfully sends "before connecting to voice channel" in the text channel, then it is shown to have joined to the voice channel by the Discord UI (its image is shown in the list of members connected to the voice channel, there is the sound of connection to the voice channel). But the second message "after connecting to voice channel" is never sent. We can still use the bot's other commands, but the variable self.voice still contains None, so obviously the bot can't play music. I suppose the asynchronous connect method stops on something indefinitely, but I can't understand what, because there is no error message, not even the TimeoutError described by the documentation. The internet connection seems fine, and as I said the bot worked well for two or three weeks, so now I really don't know how to better understand what blocks the connection. There are some other questions on Stackoverflow for similar issues, but the proposed answers had no relation with my problem : an answer proposed another method to get the voice channel object to connect to, but I already get the good object since the bot is at least shown to connect to the voice channel ; and another answer had to do with the @client.command() lines, which I don't use.
1.2
1
1
Make sure your library is up to date; discord made a change recently. It worked for me: pip install -U discord
2023-03-30 18:50:05
1
python,redis,redis-py
1
75,900,584
How to decode a Redis ft search query result?
75,891,951
false
79
So I am new to redis and I am trying to learn everything I possible can and later want to add this for caching in my website. But, I am currently encountering a very small problem, I have no idea of any function how to decode the return query from the ft function. The code is below: import redis import json from redis.commands.json.path import Path from redis.commands.search.query import Query,NumericFilter r=redis.Redis(host="192.168.196.75",port=6379) user1 = { "name": "Paul John", "email": "paul.john@example.com", "age": 42, "city": "London", "id":1, "username":"Paul Walker", } # r.json().set("user:1",Path.root_path(),user1) loc=r.ft("Idx").search(query="@username:Paul") print(loc) The output I am getting is Result{1 total, docs: [Document {'id': 'user:1', 'payload': None, 'json': '{"name":"Paul John","email":"paul.john@example.com","age":42,"city":"London","id":1,"username":"Paul Walker"}'}]} As you can see everything is working fine, But I am not able to derive the dictioniary that has the information, I can try to make many steps like splitting and slicing but I think there is a function for this which I am not able to find. It would be very helpful If anyone knows what that is. Thank you for reading my question
0.197375
1
1
For retrieving just the documents you can use .docs, such as loc = r.ft("Idx").search(query="@username:Paul").docs to retrieve all results, or .docs[0] for the first element, and so on.
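Building on that, here is a small sketch (assuming the "Idx" index from the question exists and stores JSON documents) of turning the first matching document back into a Python dict:

```python
import json
import redis

r = redis.Redis(host="192.168.196.75", port=6379)

docs = r.ft("Idx").search(query="@username:Paul").docs
if docs:
    # Each result Document carries the stored JSON as a string in its .json
    # attribute (visible in the question's output), so json.loads recovers the dict.
    user = json.loads(docs[0].json)
    print(user["email"])
```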
2023-03-30 19:19:30
5
python,generator
2
75,892,197
What is the difference between exhausting a generator using "for i in generator" and next(generator)
75,892,172
true
60
I want to learn how to use the return value of a generator (but this not what I'm concerned with now). After searching, they said that I can get the return value from StopIteration when the generator is exhausted, so I tested it with the following code: def my_generator(): yield 1 yield 2 yield 3 return "done" def exhaust_generator(_gen): print("===============================================\n") print("exhaust_generator") try: while True: print(next(_gen)) except StopIteration as e: print(f"Return value: '{e.value}'") def exhaust_generator_iter(_gen): print("===============================================\n") print("exhaust_generator_iter") try: for i in _gen: print(i) print(next(_gen)) except StopIteration as e: print(f"Return value: {e.value}") gen = my_generator() gen2 = my_generator() exhaust_generator(gen) exhaust_generator_iter(gen2) =============================================== exhaust_generator 1 2 3 Return value: 'done' =============================================== exhaust_generator_iter 1 2 3 Return value: None As you can see, the return value is different between the two versions of exhausting the generator and I wonder why. Searched google but it has not been helpful.
1.2
2
1
In your first example with the while, you're catching the first StopIteration exception raised by the generator when it's initially exhausted. In your second example with the for, you're catching a subsequent StopIteration exception raised by calling next() a second time after the generator has already been exhausted (by the for, which caught that initial exception).
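To make this concrete, here is a small self-contained sketch of one common way to keep using iteration and still capture the return value: wrap the generator with yield from, which binds the inner generator's return value. This is standard Python behaviour, not something specific to the question's code:

```python
def my_generator():
    yield 1
    yield 2
    yield 3
    return "done"

def wrapper():
    # 'yield from' re-yields every item and binds the return value carried
    # by the generator's StopIteration to 'result'.
    result = yield from my_generator()
    print(f"Return value: {result!r}")

for item in wrapper():
    print(item)
# prints 1, 2, 3 and then: Return value: 'done'
```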
2023-03-31 09:25:54
0
python,api,facebook,flask,request
1
75,900,462
Facebook API is sending a GET request instead of a POST to my Chatbot
75,896,833
false
46
When using Facebook Messenger to send a chat message to my Chatbot, I don't get a reply. Checking the server logs show that I am receiving a GET request whereas it should be a POST request. This is weird because the chatbot works when tested by running the file locally, whereas upon deploying it on Digital Ocean Droplet, it receives it as a GET request. The deployed version works if i explicitly send a POST request via Postman (while simulating a facebook messaging request body) Is there any reason Facebook is sending / I am receiving it as a GET request instead of a POST? I tried to test the file locally and it works, and through simulating requests with Postman. It just doesn't work if the request is sent via a message on Facebook Messenger
0
1
1
I made a mistake before; it was indeed related to the redirections, as stated by CBroe. The POST endpoint was registered for a URL with a trailing slash, whereas the callback set on Facebook Developers was configured without a trailing slash. I just had to fix that and the redirection issue was resolved!
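One way to avoid this class of mismatch in Flask (a sketch, not part of the original fix; the route path is a placeholder) is to register the rule with strict_slashes=False, so both /webhook and /webhook/ reach the same POST handler without a redirect:

```python
from flask import Flask, request

app = Flask(__name__)

# strict_slashes=False lets the rule match with or without the trailing slash,
# so the webhook call is not bounced through a redirect (which some clients
# re-issue as a GET).
@app.route("/webhook", methods=["POST"], strict_slashes=False)
def webhook():
    payload = request.get_json(silent=True)
    return "ok", 200
```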
2023-03-31 11:00:54
1
python-3.x,networkx,nomachine-nx
2
76,401,892
AttributeError: module 'networkx' has no attribute 'info'
75,897,778
false
1,353
import networkx as nx G = nx.read_edgelist('webpages.txt', create_using=nx.DiGraph()) print(nx.info(G)) AttributeError: module 'networkx' has no attribute 'info' How can I run print(nx.info(G)) without the error?
0.099668
1
1
You can also just print(G) to get the graph properties
2023-03-31 11:58:04
3
python,prompt,openai-api,completion,chatgpt-api
5
76,221,115
OpenAI ChatGPT (GPT-3.5) API error 429: "You exceeded your current quota, please check your plan and billing details"
75,898,276
false
82,587
I'm making a Python script to use OpenAI via its API. However, I'm getting this error: openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details My script is the following: #!/usr/bin/env python3.8 # -*- coding: utf-8 -*- import openai openai.api_key = "<My PAI Key>" completion = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."} ] ) print(completion.choices[0].message.content) I'm declaring the shebang python3.8 because I'm using pyenv. I think it should work, since I did 0 API requests, so I'm assuming there's an error in my code.
0.119427
39
3
I was facing the same error, and for me the steps were: add a credit or debit card in payment methods; generate a new API key in user preferences; (optionally) delete the old API key. Be sure to set limits so you do not incur unwanted charges. These are the limits for gpt-3.5-turbo: RPM 3,500, TPM 90,000. Hope it helps.
2023-03-31 11:58:04
3
python,prompt,openai-api,completion,chatgpt-api
5
76,200,742
OpenAI ChatGPT (GPT-3.5) API error 429: "You exceeded your current quota, please check your plan and billing details"
75,898,276
false
82,587
I'm making a Python script to use OpenAI via its API. However, I'm getting this error: openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details My script is the following: #!/usr/bin/env python3.8 # -*- coding: utf-8 -*- import openai openai.api_key = "<My PAI Key>" completion = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."} ] ) print(completion.choices[0].message.content) I'm declaring the shebang python3.8 because I'm using pyenv. I think it should work, since I did 0 API requests, so I'm assuming there's an error in my code.
0.119427
39
3
Just create a new API key and use it; it worked for me.
2023-03-31 11:58:04
1
python,prompt,openai-api,completion,chatgpt-api
5
76,082,960
OpenAI ChatGPT (GPT-3.5) API error 429: "You exceeded your current quota, please check your plan and billing details"
75,898,276
false
82,587
I'm making a Python script to use OpenAI via its API. However, I'm getting this error: openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details My script is the following: #!/usr/bin/env python3.8 # -*- coding: utf-8 -*- import openai openai.api_key = "<My PAI Key>" completion = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."} ] ) print(completion.choices[0].message.content) I'm declaring the shebang python3.8 because I'm using pyenv. I think it should work, since I did 0 API requests, so I'm assuming there's an error in my code.
0.039979
39
3
I encountered a similar issue and found a solution that worked for me. I first canceled my paid account and renewed it with a different payment method. Next, I went to the 'API Keys' section, selected my organization under the 'Default Organizations' dropdown, and saved the changes. This action reset my soft limit, but I still needed to create a new API key to resolve the issue completely. In short: cancel the paid account and recreate it with a new payment method, confirm the organization, and create a new API key.
2023-03-31 14:40:53
1
python,django,datetime
1
75,900,449
Django datetime field prints out unsupported operand type error
75,899,810
true
38
I have a @property in my Django model which returns True or False based on a datetime difference. Even though it works when I am unit testing it, I get the error TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'datetime.date' class MyModel(models.Model): ... created_at = models.DateTimeField(auto_now_add=True) @property def is_expired(self): if datetime.now(tz=utc) - self.created_at > timedelta(hours=48): return True return False How do I fix this? None of the answers I checked helped me.
1.2
1
1
The traceback error is explaining the problem to you: unsupported operand type(s) for -: 'datetime.datetime' and 'datetime.date' This says that you cannot subtract a datetime.date from a datetime.datetime. In your case, self.created_at appears to be a date, but your use of datetime.now() and timedelta() suggests that you really want to be dealing with datetimes. Therefore, the solution is to either make sure that created_at is a datetime, or change datetime.now() to date.today() and change timedelta(hours=48) to timedelta(days=2). These will behave a little differently, so pick one depending on whether things expire after two calendar days or in 48 hours.
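A minimal sketch of the first option (keep everything as timezone-aware datetimes), using Django's timezone helper inside a Django app; this is an illustration rather than the original model:

```python
from datetime import timedelta
from django.db import models
from django.utils import timezone

class MyModel(models.Model):
    created_at = models.DateTimeField(auto_now_add=True)

    @property
    def is_expired(self):
        # With USE_TZ=True, created_at is an aware datetime, so this is
        # always datetime - datetime and the TypeError cannot occur.
        return timezone.now() - self.created_at > timedelta(hours=48)
```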
2023-03-31 15:07:21
1
python,tensorflow,keras,artificial-intelligence
1
75,901,480
ValueError: Shapes (None, 20, 9) and (None, 9) are incompatible
75,900,053
true
32
This is my code import tensorflow as tf from tensorflow import keras from keras import layers # definition question and answer questions = [ "你好嗎?", "今天天氣如何?", "你有什麼興趣?", "你喜歡什麼食物?" ] answers = [ "我很好,謝謝你。", "今天天氣非常好。", "我喜歡看書和打電子遊戲。", "我喜歡吃中式食物,特別是炒飯。" ] # 將問題和回答轉換成數字 tokenizer = keras.preprocessing.text.Tokenizer() tokenizer.fit_on_texts(questions + answers) question_seqs = tokenizer.texts_to_sequences(questions) answer_seqs = tokenizer.texts_to_sequences(answers) # 將問題和回答填充到相同的長度 max_len = 20 question_seqs_padded = keras.preprocessing.sequence.pad_sequences(question_seqs, maxlen=max_len) answer_seqs_padded = keras.preprocessing.sequence.pad_sequences(answer_seqs, maxlen=max_len) # 定義模型 model = keras.Sequential() model.add(layers.Embedding(len(tokenizer.word_index) + 1, 50, input_length=max_len)) model.add(layers.LSTM(64)) model.add(layers.Dense(len(tokenizer.word_index) + 1, activation='softmax')) # 編譯模型 model.compile(loss='categorical_crossentropy', optimizer='adam') # 訓練模型 model.fit(question_seqs_padded, keras.utils.to_categorical(answer_seqs_padded, num_classes=len(tokenizer.word_index)+1), epochs=100, batch_size=32) and this is it ran out of error ValueError: Shapes (None, 20, 9) and (None, 9) are incompatible I tryed to fix the Shapes (None, 20, 9) and (None, 9) are incompatible model.fit(question_seqs_padded, keras.utils.to_categorical(answer_seqs_padded, num_classes=len(tokenizer.word_index)+1), epochs=100, batch_size=32) I try to delete the answer_seqs_padded to incompatible (None, 9),but it is still not work.
1.2
1
1
In this case, you need to return the whole sequence in lstm, so just use: layers.LSTM(64, return_sequences=True) instead. If you don't use return_sequences=True, it will just return the last output.
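Applied to the model from the question, the change is a one-liner; the rest of this snippet is a trimmed-down stand-in for the original architecture:

```python
from tensorflow import keras
from keras import layers

vocab_size = 10   # placeholder for len(tokenizer.word_index) + 1
max_len = 20

model = keras.Sequential([
    layers.Embedding(vocab_size, 50, input_length=max_len),
    # return_sequences=True keeps one output per timestep, so the Dense layer
    # produces shape (batch, 20, vocab_size), matching the one-hot targets.
    layers.LSTM(64, return_sequences=True),
    layers.Dense(vocab_size, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
```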
2023-03-31 16:34:56
0
python,flask,tdd
3
76,024,944
TDD modifying my test to make my code pass
75,900,837
false
54
I'm learning Test Driven Development, i'm struggling a little bit, seems like my brain wants to build solution algorithm, and not the needed test to build my algorithm. I can barely formalize unit test, and i go back and forth to change my tests. I come to a point where i make some adjustement in my code, and then i'm trying to modify my tests to make them pass! It should be the other way around. i had initially this piece of code for test def test_is_zero_in_range(self): self.assertRaises(Exception, self._provServerPrepare.is_in_allow_range, 0) so i write it up the code needed def is_in_allow_range(self, value: str) -> int: value = int(value) if value in MULTIPLY_PROV_RANGE: return value raise Exception('Value out of range') as i'm working with flask, i changed my code to get a return with a message and an error code, rather than a raise exception def is_in_allow_range(self, value: str) -> int: value = int(value) if value in MULTIPLY_PROV_RANGE: return value return {"message": f"Value out of range"}, 400 In this situation i have to change my tests to make it work, i'm not sure, but i feel like i'm messing with something, this doesn't seem right to me to do it that way. Does anyone can help me on this, or have any resources for me to read/watch? Thx
0
1
2
You're discovering your actual requirements and evolving your way there, which is one of the primary goals and benefits of TDD. There is nothing wrong with changing your mind. If you have a dozen tests making a call, and then you need to change how that call works, you have two options: Use an IDE like PyCharm that supports the Change Signature refactoring. Extract a helper method on the test side that makes the call. Then you only have to change one place. Such helpers often move from test code to production code. Another thing to be aware of with TDD: Don't start with an edge case. Instead, capture a stripped-down subset of what you want. At Industrial Logic, we call this Essence First. Throwing an exception is likely not the essence of the behavior you want to TDD, so don't start there. Instead, start with a minimal expression of the "happy path."
2023-03-31 16:34:56
1
python,flask,tdd
3
75,909,400
TDD modifying my test to make my code pass
75,900,837
false
54
I'm learning Test Driven Development, i'm struggling a little bit, seems like my brain wants to build solution algorithm, and not the needed test to build my algorithm. I can barely formalize unit test, and i go back and forth to change my tests. I come to a point where i make some adjustement in my code, and then i'm trying to modify my tests to make them pass! It should be the other way around. i had initially this piece of code for test def test_is_zero_in_range(self): self.assertRaises(Exception, self._provServerPrepare.is_in_allow_range, 0) so i write it up the code needed def is_in_allow_range(self, value: str) -> int: value = int(value) if value in MULTIPLY_PROV_RANGE: return value raise Exception('Value out of range') as i'm working with flask, i changed my code to get a return with a message and an error code, rather than a raise exception def is_in_allow_range(self, value: str) -> int: value = int(value) if value in MULTIPLY_PROV_RANGE: return value return {"message": f"Value out of range"}, 400 In this situation i have to change my tests to make it work, i'm not sure, but i feel like i'm messing with something, this doesn't seem right to me to do it that way. Does anyone can help me on this, or have any resources for me to read/watch? Thx
0.066568
1
2
It happens all the time that after you write a test, you find that you need to change the return value from a function (or the exception that it raises). That's normal. The TDD way to address this is to change the test first to reflect the new correct return value. See that test fail, and then update your code to make it pass. So, in your example, when you switched to flask, or when you realized that you needed to return a message and error code rather than raise an exception, you would first modify test_is_zero_in_range to require the message and error code, see it fail, and then update is_in_allow_range to make it pass. "Test first." By the way, here are some alternative names for your test and your function, which you might like better: test_zero_is_out_of_range  or  test_zero_should_be_out_of_range is_in_allowed_range This test name is more self-explanatory, since it states the intent of the test.
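A self-contained sketch of that "test first" step for the new contract; MULTIPLY_PROV_RANGE and the free-function form are placeholders standing in for the question's class method:

```python
import unittest

MULTIPLY_PROV_RANGE = range(1, 10)  # placeholder for the real allowed range

def is_in_allow_range(value: str):
    value = int(value)
    if value in MULTIPLY_PROV_RANGE:
        return value
    return {"message": "Value out of range"}, 400

class TestAllowRange(unittest.TestCase):
    def test_zero_should_be_out_of_range(self):
        # The test now pins down the new contract: a (message, status) pair
        # instead of a raised Exception.
        self.assertEqual(is_in_allow_range(0),
                         ({"message": "Value out of range"}, 400))

if __name__ == "__main__":
    unittest.main()
```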
2023-03-31 18:05:05
1
python,python-polars
1
75,901,631
Polars Modify Many Columns Based On Value In Another Column
75,901,487
true
146
Say I have a DataFrame that looks like this: df = pl.DataFrame({ "id": [1, 2, 3, 4, 5], "feature_a": np.random.randint(0, 3, 5), "feature_b": np.random.randint(0, 3, 5), "label": [1, 0, 0, 1, 1], }) ┌─────┬───────────┬───────────┬───────┐ │ id ┆ feature_a ┆ feature_b ┆ label │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═══════════╪═══════════╪═══════╡ │ 1 ┆ 2 ┆ 0 ┆ 1 │ │ 2 ┆ 1 ┆ 1 ┆ 0 │ │ 3 ┆ 2 ┆ 2 ┆ 0 │ │ 4 ┆ 1 ┆ 0 ┆ 1 │ │ 5 ┆ 0 ┆ 0 ┆ 1 │ └─────┴───────────┴───────────┴───────┘ I want to modify all the features columns based on the value in the label column, producing a new DataFrame. ┌─────┬───────────┬───────────┐ │ id ┆ feature_a ┆ feature_b │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═══════════╪═══════════╡ │ 1 ┆ 1 ┆ 1 │ │ 2 ┆ 0 ┆ 0 │ │ 3 ┆ 0 ┆ 0 │ │ 4 ┆ 1 ┆ 1 │ │ 5 ┆ 1 ┆ 1 │ └─────┴───────────┴───────────┘ I know I can select all the features columns by using regex in the column selector pl.col(r"^feature_.*$") And I can use a when/then expression to evaluate the label column pl.when(pl.col("label") == 1).then(1).otherwise(0) But I can't seem to put the 2 together to modify all the selected columns in one fell swoop. It seems so simple, what am I missing?
1.2
1
1
Here's one way: Recently support was added for more ergonomic arguments in a lot of methods, including with_columns and select. Since they now can take any number of keyword arguments acting like an alias at the end (e.g. setting the new column name), we can construct a dict of the columns to overwrite and pass it in (with unpacking) like so: df.select('id', **{col : 'label' for col in df.columns if col.startswith('feature')}) In this simple case no when/then is needed for the label column, but in general any expression evaluating to a column of the same height as id can go into this dict comprehension.
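For the general case hinted at in that last sentence, the value in the dict comprehension can be any expression, e.g. a when/then over the label column. A sketch using the question's column names, assuming a recent polars version that accepts keyword expressions in select (as the answer notes):

```python
import polars as pl

df = pl.DataFrame({
    "id": [1, 2, 3],
    "feature_a": [2, 1, 0],
    "feature_b": [0, 1, 2],
    "label": [1, 0, 1],
})

# Overwrite every feature_* column with an expression derived from "label".
out = df.select(
    "id",
    **{
        col: pl.when(pl.col("label") == 1).then(1).otherwise(0)
        for col in df.columns if col.startswith("feature")
    },
)
print(out)
```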
2023-03-31 21:48:35
1
python,windows,list,spyder
2
75,985,497
Can someone explain this "for" loop in python?
75,902,985
false
62
I don't understand the meaning of this for loop, specifically the range arguments: for element in range(len(text1)-1,-1,-1): print(text1[element])
0.099668
2
2
The answer above says it all. But I can add that making the second argument of range -2 or -3 (and so on) extends the loop into negative indices. Since negative indices in Python wrap around to the end of the list, after printing every element in reverse the loop would print the last element (or the last two) again; it does not repeat the whole operation multiple times.
2023-03-31 21:48:35
2
python,windows,list,spyder
2
75,985,428
Can someone explain this "for" loop in python?
75,902,985
false
62
I don't understand the meaning of this for loop, specifically the range arguments: for element in range(len(text1)-1,-1,-1): print(text1[element])
0.197375
2
2
The loop prints the element of the list text1 at position element, and the range makes it start from the last element of text1 and walk backwards. The syntax of range is range(start, stop, step). The loop starts at len(text1)-1 because lists in Python are indexed starting from 0, not from 1, so a list with n elements is indexed from 0 to n-1. I suggest you try changing the values used in range to understand what is happening.
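A tiny runnable illustration of that range pattern (text1 here is an arbitrary example list):

```python
text1 = ["a", "b", "c", "d"]

# range(start=len-1, stop=-1, step=-1) yields 3, 2, 1, 0,
# so the elements are printed in reverse order.
for element in range(len(text1) - 1, -1, -1):
    print(text1[element])

# Equivalent, more idiomatic spelling:
for item in reversed(text1):
    print(item)
```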
2023-04-01 10:32:03
0
python,fastapi,starlette
3
75,905,654
FastAPI responds with required field is missing
75,905,638
false
240
I'm following a simple tutorial from the FastAPI docs. I was already using SQLAlchemy in this project just added the fastapi dependency and trying to run it, here's my code: import re import json import copy import traceback import urllib.parse from models import * from mangum import Mangum from datetime import datetime from sqlalchemy.orm import Session from sqlalchemy import * from sqlalchemy import create_engine from collections import defaultdict from fastapi import FastAPI, Depends, Request, Response app = FastAPI() @app.middleware("http") async def db_session_middleware(request, call_next): response = Response("Internal server error", status_code=500) try: engine = create_engine( "some db" ) base.metadata.create_all(engine) request.state.db = Session(engine) response = await call_next(request) finally: request.state.db.close() return response def get_db(request): return request.state.db @app.get("/") def get_root(): return {"Status": "OK"} @app.get("/products/{sku}") def get_product(sku, db=Depends(get_db)): pass @app.get("/products", status_code=200) def get_products(page: int = 1, page_size: int = 50, db: Session = Depends(get_db)): try: result, sku_list = [], [] for row in ( db.query(Product, Image) .filter(Product.sku == Image.sku) .limit(page_size) .offset(page * page_size) ): if row[0].sku not in sku_list: result.append( { "sku": row[0].sku, "brand": row[0].brand, "image": row[1].image_url, "title": row[0].product_title, "price": row[0].original_price, "reviewCount": row[0].total_reviews, "rating": row[0].overall_rating, } ) sku_list.append(row[0].sku) print(f"Result: {result}") return {"body": {"message": "Success", "result": result}, "statusCode": 200} except Exception as err: print(traceback.format_exc(err)) return { "body": {"message": "Failure", "result": traceback.format_exc(err)}, "statusCode": 500, } When I hit the products endpoint using this url: http://127.0.0.1:8000/products?page=1&page_size=100 I'm getting this response: {"detail":[{"loc":["query","request"],"msg":"field required","type":"value_error.missing"}]} I'm not sure what it means by query and request missing in the response. What am I doing wrong?
0
1
1
Declare request as an annotated argument: not async def endp(request, ...), but async def endp(request: Request, ...). Import Request from fastapi: from fastapi import Request
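Applied to the question's code, the dependency is where the annotation matters most: without it, FastAPI treats request as a required query parameter, which is exactly what the ["query","request"] location in the error message points to. A trimmed sketch:

```python
from fastapi import Depends, FastAPI, Request

app = FastAPI()

def get_db(request: Request):
    # Annotating with Request tells FastAPI to inject the incoming request
    # instead of expecting a ?request=... query parameter.
    return request.state.db

@app.get("/products", status_code=200)
def get_products(page: int = 1, page_size: int = 50, db=Depends(get_db)):
    return {"page": page, "page_size": page_size}
```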
2023-04-01 12:13:13
0
python,math,time-complexity,big-o
2
75,906,270
What's the time complexity of this code with a branch?
75,906,116
false
65
def p(a, b): if b == 0: return 1 if b % 2 == 0: return p(a, b//2) * p(a, b//2) return a * p(a, b-1) I know that it's O(n) if the code is: def p(a, b): if b == 0: return 1 return p(a, b//2) * p(a, b//2) However, when there is a return a * p(a, b-1) at the end, I'm not sure whether the time complexity will change.
0
1
1
OK, I did some maths, and you can prove (by induction) that, when you call p(a, b), the total number of calls to p() is <= 3b - 1 for b >= 1, which indeed makes the time complexity O(b). I won't give the full details, but the gist of it is that (calling N(b) the total number of calls to p()): N(1) = 2; N(b+1) = 1 + N(b) if b is even (b+1 is odd and takes the subtraction branch), and N(b+1) = 1 + 2 * N((b+1)/2) if b is odd (b+1 is even and takes the squaring branch). You can verify this, then do the induction. Interestingly, the worst-case scenario, where N(b) = 3b - 1, happens when b is a power of 2.
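A quick empirical check of that bound, counting the calls with a module-level counter (the power function is copied from the question):

```python
calls = 0

def p(a, b):
    global calls
    calls += 1
    if b == 0:
        return 1
    if b % 2 == 0:
        return p(a, b // 2) * p(a, b // 2)
    return a * p(a, b - 1)

for b in [1, 2, 7, 16, 64, 100]:
    calls = 0
    p(2, b)
    # The observed count never exceeds 3*b - 1, consistent with O(b).
    print(b, calls, 3 * b - 1)
```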
2023-04-01 15:43:22
3
python,jupyter-notebook,jupyter-lab
1
75,912,173
Jupyterlab extension gives a function not found error
75,907,222
true
721
I have issues with jupyter extensions on ArchLinux. In particular, I get the following error: [W 2023-04-01 18:34:36.504 ServerApp] A `_jupyter_server_extension_points` function was not found in jupyter_nbextensions_configurator. Instead, a `_jupyter_server_extension_paths` function was found and will be used for now. This function name will be deprecated in future releases of Jupyter Server. [W 2023-04-01 18:34:36.493 ServerApp] A `_jupyter_server_extension_points` function was not found in notebook_shim. Instead, a `_jupyter_server_extension_paths` function was found and will be used for now. This function name will be deprecated in future releases of Jupyter Server. How can I get rid of this error/warning? I tried removing the functions with Pip, but it did not work. Any ideas?
1.2
1
1
This is not an error, but a warning aimed at developers of notebook extensions: jupyter_nbextensions_configurator and notebook_shim respectively, not at a user like you. You do not need to do anything. It's worth pointing out that the next version of Jupyter Notebook (v7) will not require jupyter_nbextensions_configurator anymore as it will come with new extension manager (and a completely new extension ecosystem shared with JupyterLab), so there is very little incentive to address this deprecation warning in jupyter_nbextensions_configurator as it will cease to be relevant soon.
2023-04-02 12:26:33
1
python,arrays,pandas,dataframe,numpy
2
75,912,084
Taking the mean of a row of a pandas dataframe with NaN and arrays
75,912,013
false
39
Here is my reproducible example: import pandas as pd import numpy as np df = pd.DataFrame({'x' : [np.NaN, np.array([0,2])], 'y' : [np.array([3,2]),np.NaN], 'z' : [np.array([4,5]),np.NaN], 't' : [np.array([3,4]),np.array([4,5])]}) I would like to compute the mean array for each row excluding NaN I have tried df.mean(axis=1) which gives NaN for both row. This is particularly surprising to me as df.sum(axis=1) appears to be working as I would have expected. [df.loc[i,:].mean() for i in df.index] does work but I am sure there is a more straightforward solution.
0.099668
1
1
Your DataFrame uses the object dtype which is always a bit of a bodge. It's slower than native types, and doesn't always behave the way you'd expect. Since Pandas removed the "Panel" type which was used for 3D data, I'd recommend you not store this data in a DataFrame. Instead, store it in a 3D NumPy array, then you can use np.nanmean() to easily calculate averages while ignoring NaN.
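A sketch of that suggestion using the data from the question: stack the per-cell arrays into a (rows, columns, 2) float array, padding the missing cells with NaN, then average across columns with np.nanmean:

```python
import numpy as np

nan_pair = np.full(2, np.nan)
data = np.array([
    [nan_pair, [3, 2], [4, 5], [3, 4]],    # row 0: x is missing
    [[0, 2], nan_pair, nan_pair, [4, 5]],  # row 1: y and z are missing
], dtype=float)                            # shape (2, 4, 2)

# Mean array per row, ignoring the NaN cells entirely.
row_means = np.nanmean(data, axis=1)
print(row_means)  # [[3.33.. 3.66..] [2.  3.5]]
```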
2023-04-02 15:12:43
0
python,html,sqlite,flask
1
75,912,917
My HTML is not sending the POST method to the flask app function
75,912,858
true
89
What's happening: I have an HTML code that is sending a POST request to the flask code. It's for a Login page. (I'm using SQLite3 and Flask.) The HTML code: <div class="col"> <div class="login-box"> <h2>Register</h2> <form method="post" action="{{ url_for('register_post') }}"> <div class="user-box"> <input type="email" name="email" placeholder="Email Address" required> </div> <div class="user-box"> <input type="password" name="password" placeholder="Password" required> </div> <div class="user-box"> <input type="text" name="username" placeholder="Username" required> </div> <div class="button-form"> <a id="submit" href="{{url_for('register_post')}}">Submit</a> <div id="register"> Already Have An Account ? <a href="{{url_for('login')}}">Login Now !</a> </div> </div> </form> </div> </div> The Python code: @app.route('/register', methods=['POST']) def register_post(): print("Got a data") username = request.form['username'] email = request.form['email'] password = request.form['password'] c.execute('''INSERT INTO users (email, username, password) VALUES (?,?,?)''', (email, username, password)) conn.commit() conn.close() return redirect(url_for('login')) There isn't a traceback which (maybe) means the function hasn't even been called.
1.2
1
1
I have not worked with Flask, but I have with Django. I see you have an <a> tag with href="{{url_for('register_post')}}"; why are you trying to redirect to 'https://example.com/register'? By redirecting you are making a GET request, whereas your register_post function only handles the POST method. You declared the <form> with method="post" and action="{{url_for('register_post')}}", which means the function will run whenever someone submits the form; for now that only happens when you press the Enter key while focused on one of the form's inputs. The fix is to add <button type="submit">your text</button> or <input type="submit" value="your text" /> instead of the <a id="submit"> tag. There are only two HTTP request methods available in plain HTML, POST and GET. A GET request occurs when you navigate via a URL or submit a form with method="GET", which is the default and does not have to be specified. A POST request occurs when someone submits a form with the POST method declared. If you want to use any other HTTP methods such as PUT, DELETE, HEAD, OPTIONS or PATCH, you need JavaScript for that.
2023-04-02 16:52:48
16
python,websocket,rasa
1
75,913,833
ImportError: cannot import name 'CLOSED' from 'websockets.connection'
75,913,380
true
3,299
I installed rasa in a virtual environment on Windows. But when I try to check whether rasa is installed or not, it shows an error that says: ImportError: cannot import name 'CLOSED' from 'websockets.connection' I have reinstalled rasa and installed websockets, but I am still getting the error. The Python version is 3.10.2. Can anyone help me solve this problem?
1.2
5
1
Try installing websockets version 10.0 like this: pip install websockets==10.0 It should help (it helped me)
2023-04-02 16:59:40
0
javascript,python,pandas,flask,pyinstaller
1
76,095,358
NoneType Object has no attribute 'write' for Flask app with Pyinstaller
75,913,425
false
381
I wrote a multi-template Flask app and am trying to convert it to an executable with PyInstaller. The app has a form page that allows you to submit to a csv, and then it consumes that csv and converts it into a Pandas dataframe, which it displays. I have a button next to each rendered dataframe which 'downloads' the view, by dropping that particular dataframe to a csv using the build in pandas functions. Everything seems to work fine with the exception that when the app loads, I get the following error: Traceback (most recent call last): File "app.py", line 426, in <module> File "flask\app.py", line 915, in run File "flask\cli.py", line 680, in show_server_banner File "click\utils.py", line 298, in echo AttributeError: 'NoneType' object has no attribute 'write' And the app crashes when I push any of these 'download' buttons. Here is the relevant code: app.py from flask import Flask, abort, render_template, request, url_for, jsonify from markupsafe import escape import os import datetime from datetime import timedelta import calendar import csv from csv import DictWriter import pathlib import pandas as pd import json import sys import webbrowser from threading import Timer #This is how I can store my controller in the same directory as my master log file sys.path.insert(1, "C:/Users/sam/webdev/crosstraining") import PYcontroller pd.set_option('display.max_columns', 20) if getattr(sys, 'frozen', False): template_folder = os.path.join(sys._MEIPASS, "templates") static_folder = os.path.join(sys._MEIPASS, "static") app = Flask(__name__, template_folder=template_folder, static_folder=static_folder) else: app = Flask(__name__) app.config['TEMPLATES_AUTO_RELOAD'] = True computers = ['home', 'laptop', 'work', 'production'] location = computers[0] @app.route('/timekeeper/', methods=["GET","POST"]) def timekeeper(): timekeeperdata = { 'utc_dt':date_time_str, 'day':today, 'month':month, 'file':table, 'engineer':engineer, 'jobs':jobs, 'dates':dates } monthCategory = aggDF(master_crossTrainingLog, getMonth()) yearCategory = aggDF(master_crossTrainingLog, getYear()) crosstraining = crossTrainingDF(master_crossTrainingLog, engineer) admincrosstraining = admincrossTrainingDF(master_crossTrainingLog) #These lists are to make the download work all_dfs = [admincrosstraining, crosstraining, monthCategory, yearCategory, full_hours] all_df_names = ["Admin_view_crosstraining", "_crossTraining", "ThisMonthCrossTraining", "ThisYearCrossTraining", "MasterCrossTrainingFile"] #the df's in all_df's are ordered as they are used in timekeeper. 
if request.method == "POST": view = request.form.get("view") download_view(view, all_dfs, all_df_names) return render_template("timekeeper.html", timekeeperdata=timekeeperdata, #these are all of the pandas tables that have been made elsewhere tables=[admincrosstraining.to_html(classes='data'), crosstraining.to_html(classes='data'), monthCategory.to_html(classes='data'), yearCategory.to_html(classes='data'), full_hours.to_html(classes='data')], titles=[admincrosstraining.columns.values, crosstraining.columns.values, monthCategory.columns.values, yearCategory.columns.values, full_hours.columns.values]) def download_view(viewNum, data, dataNames): df = data[int(viewNum)] if viewNum == 1: filename = str(engineer[1][3]) + str(dataNames[int(viewNum)]) else: filename = str(dataNames[int(viewNum)]) filepath = str(pathlib.Path.home() / "Downloads") +"\\"+ filename + "_"+date_time_str[:10] + ".csv" df.to_csv(filepath) print("file downloaded to: "+ filepath) def open_browser(): webbrowser.open_new('http://127.0.0.1:5000/') if __name__=='__main__': Timer(1, open_browser).start() app.run() #<--- This is line 426 timekeeper template {% if timekeeperdata['engineer'][1][1] == 'admin' %} <h3> Team Hours to Date</h3> <div id="pandas" name="View0"> {{tables[0]|safe }} </div> <!--This is the Team hours--> <form action="{{ url_for('timekeeper')}}" method="post"> <input type="text" name="view" value=0 style="display:none;"> <button type="submit" class="btn" name='download' value="View0" >Download</button> <p id="submitMessage"> Download Complete!</p> </form> When you click the download button, it thinks for a minute and then returns a 500 Internal server error message. The Console in the dev tool says: Status 500 INTERNAL SERVER ERROR VersionHTTP/1.0 Transferred464 B (290 B size) Referrer Policystrict-origin-when-cross-origin Request PriorityHighest I dont know what headers I would need to send in this instance to remove the CORS issue, if that's actually what's causing the error. I'm just assuming the error on start and the way it throws an error when you try and 'download' a dataframe are related, but I'm not sure how. If they're unrelated, I'd love any advice on how to explore either. I'm not particularly versed in Pyinstaller or Flask. There's more code but I'm pretty sure this is all that's relevant. If you think something is missing from a troubleshooting perspective, please feel free to let me know. ***EDIT: The app seems to work when I omit the -w flag from pyinstaller.
0
3
1
I'm not sure, but I think it's something related to Python 3.10, because it works when I use Python 3.9.
2023-04-02 18:04:24
0
python,tensorflow,autoencoder
1
75,917,742
ValueError: in user code: ValueError: No gradients provided for any variable for Autoencoder Tensorflow
75,913,753
true
40
I have the above code and I have the below error during my fit ValueError: No gradients provided for any variable: (['dense_12/kernel:0', 'dense_12/bias:0', 'dense_13/kernel:0', 'dense_13/bias:0'],). Provided grads_and_vars is ((None, <tf.Variable 'dense_12/kernel:0' shape=(784, 2) dtype=float32>), (None, <tf.Variable 'dense_12/bias:0' shape=(2,) dtype=float32>), (None, <tf.Variable 'dense_13/kernel:0' shape=(2, 784) dtype=float32>), (None, <tf.Variable 'dense_13/bias:0' shape=(784,) dtype=float32>)). I tried to change encoder and decoder and It didn't solve the problem. I have an autoencoder problem that is why i used X_train and X_test dataset instead of y_train and y_test dataset. How can I solve this error? Thank you for your feedbacks! class autoencoder(Model): def __init__(self, output_dim = 2): super(autoencoder, self).__init__() self.output_dim = output_dim self.encoder = tf.keras.Sequential([ layers.Flatten(), layers.Dense(2, activation='relu') ]) self.decoder = tf.keras.Sequential([ layers.Dense(784, activation='relu') ]) def encoder_call(self, inputs): inputs = Flatten()(inputs) return self.encoder(inputs) def call(self, x, train = None): x = Flatten()(x) encoded = self.encoder(x) decoded = self.decoder(encoded) return tf.reshape(x,(-1,28,28)) model = autoencoder() model.compile(optimizer='adam', loss=losses.MeanSquaredError()) model.fit(X_train, X_train, epochs=1, shuffle=True, validation_data=(X_test, X_test)) ```
1.2
1
1
Instead of return tf.reshape(x,(-1,28,28)) it should be return tf.reshape(decoded,(-1,28,28)). Cause of the error: your input x is first passed through a Flatten layer, then through the encoder and decoder successively, but the call method returns a reshaped version of x, not the decoded output. So there is no path from the trainable variables to the loss, meaning no gradients can be computed and backpropagation is impossible.
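For reference, a trimmed sketch of the corrected class from the question; only the return line in call changes:

```python
import tensorflow as tf
from tensorflow.keras import layers, losses, Model

class Autoencoder(Model):
    def __init__(self):
        super().__init__()
        self.encoder = tf.keras.Sequential([layers.Flatten(),
                                            layers.Dense(2, activation="relu")])
        self.decoder = tf.keras.Sequential([layers.Dense(784, activation="relu")])

    def call(self, x):
        x = layers.Flatten()(x)
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        # Return the reconstruction, not the flattened input, so the loss
        # depends on the encoder/decoder weights and gradients can flow.
        return tf.reshape(decoded, (-1, 28, 28))

model = Autoencoder()
model.compile(optimizer="adam", loss=losses.MeanSquaredError())
```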
2023-04-03 05:37:43
0
python,tensorflow,machine-learning,deep-learning,prefetch
1
76,032,262
Using tf.data.Dataset prefetch makes model performance overfit?
75,916,309
true
60
I'm trying to train a simple LRCN model with some sequential image dataset in Tensorflow 2.5.0. Training performance was fine like increasing to 0.9x training & validation accuracy both in first 5 epochs and train & validation loss kept decreasing during the training. Then, I've tried to optimize data pipeline with using prefetch(). The dataset I use is sequential images (.png) that titles and information are written in .csv file. So I made the data generator like below : def setData(data): X, y = [], [] name = data.loc['fileName'].values.tolist()[0] info1 = data.loc['info1'].values.tolist()[0] info2 = data.loc['info2'].values.tolist()[0] info3 = data.loc['info3'].values.tolist()[0] if os.path.isfile(filepath + name) == False: print('No file for img') return try: img = np.load(filepath + fName) except: print(name) if info1 in info_list: X.append(img) if info2 == 'True': y.append(0) else: y.append(1) X = np.array(X) X = np.reshape(X, (3, 128, 128, 1)).astype(np.float64) y = np_utils.to_categorical(y, num_classes = 2) y = np.reshape(y, (2)).astype(np.float64) return X, y And I added the data generator load function like this : def generatedata(i): i = i.numpy() X_batch, y_batch = setData(pd.DataFrame(traindata.iloc[i])) return X_batch, y_batch Finally, I prefetched dataset using map z = list(range(len(traindata[]))) trainDataset = tf.data.Dataset.from_generator(lambda: z, tf.uint8) trainDataset = trainDataset.map(lambda i: tf.py_function(func = generatedata, inp = [i], Tout = [tf.float32, tf.float32]), num_parallel_calls = tf.data.AUTOTUNE) After I applied these steps, training accuracy goes 0.9x in first epoch, 1.0 in the first 3-5 epochs and validation accuracy stays at around 0.6x and validation loss kept growing over x.x. I believe that the prefetch only changes the data pipeline that do not affect to the model performance so I'm not sure what caused this overfitting(maybe?)-like results. I followed every step of the prefetch step that were denoted at the Tensorflow documentation. Though, since I'm not very familiar with tensorflow, there might be some mistakes. Is there any line that I missed? Any opinion would be really greatfull. Thanks in advance.
1.2
1
1
It turns out that py_function() kept stacking graph state on top of previous results, which led the model to this overfitting-like behaviour. I modified the input pipeline so that the dataset consumes the generator function directly instead of going through py_function, and now it works as it should. Although I checked the TensorFlow documentation, it did not really warn about this situation; I found it discussed on the TensorFlow GitHub page. To those who have the same problem as me: review the library functions you use carefully.
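A hedged sketch of what "consume the generator directly" can look like: build the tf.data.Dataset straight from a Python generator with an explicit output_signature, so no py_function/map step is needed. The shapes mirror the question's (3, 128, 128, 1) samples and 2-class labels; the actual setData logic is omitted:

```python
import numpy as np
import tensorflow as tf

def sample_generator():
    # Placeholder for iterating over traindata and calling setData(...)
    for _ in range(100):
        X = np.zeros((3, 128, 128, 1), dtype=np.float32)
        y = np.zeros((2,), dtype=np.float32)
        yield X, y

train_ds = (
    tf.data.Dataset.from_generator(
        sample_generator,
        output_signature=(
            tf.TensorSpec(shape=(3, 128, 128, 1), dtype=tf.float32),
            tf.TensorSpec(shape=(2,), dtype=tf.float32),
        ),
    )
    .batch(16)
    .prefetch(tf.data.AUTOTUNE)
)
```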
2023-04-03 09:04:17
0
python,numpy,sparse-matrix
1
75,918,964
Handling CSR data file
75,917,727
false
35
I have sparse matrix data stored in npy format. I am reading the file and storing it in CSR format. And the below code works perfectly: def load_adjacency_matrix_csr(folder: str, filename: str, suffix: str = "npy", row_idx: int = 1, num_rows: int = None, dtype: np.dtype = np.float32) -> sparse.csr_matrix: coo_indices = np.load(os.path.join(folder, f"{filename}.{suffix}")) rows = coo_indices[row_idx] cols = coo_indices[1 - row_idx] data = np.ones(len(rows), dtype=dtype) num_rows = num_rows or rows.max() + 1 if num_rows < rows.max() + 1: raise ValueError("The number of rows in the file is larger than the specified number of rows.") csr_mat = sparse.csr_matrix((data, (rows, cols)), shape=(num_rows, num_rows), dtype=dtype) return csr_mat Now I read the same file, write it to a new file (_temp) and then I read. But now when I read from this new file (_temp), I get an error Detected 1 oom-kill even(s). This out of memory happens when creating the CSR matrix. But this is the exact same number of rows/columns and data as when I was able to successfully execute when I directly from the file. I am unable to figure out what changed when writing to a new file (the rows/columns and data has not changed). def save_coo_matrix(npx: int, folder: str, filename: str, suffix: str = "npy", row_idx: int = 1, dtype: np.dtype = np.float32) -> None: file_path = os.path.join(folder, f"{filename}.{suffix}") coo_indices = np.load(file_path) rows = coo_indices[row_idx] cols = coo_indices[1 - row_idx] filename += "_temp" file_path = os.path.join(folder, f"{filename}.npz") np.savez(file_path, rows=rows, cols=cols) def load_adjacency_matrix_csr(npx: int, folder: str, filename: str, suffix: str = "npy", row_idx: int = 1, num_rows: int = None, dtype: np.dtype = np.float32) -> sparse.csr_matrix: file_path = os.path.join(folder, f"{filename}_padded.npz") with np.load(file_path) as data: rows = data["rows"] cols = data["cols"] num_rows = max(rows) + 1 num_cols = max(cols) + 1 values = np.ones(len(rows)) # fails here when creating the csr matrix (the num_rows is the same value as before) csr_mat = sparse.csr_matrix((values, (rows, cols)), shape=(num_rows, num_cols)) return csr_mat
0
1
1
There seems to be a mismatch in the file extensions you use when writing and reading the data. When you call load_adjacency_matrix_csr, make sure to set the suffix parameter, e.g. csr_matrix = load_adjacency_matrix_csr(npx, folder, filename, suffix="npz", ...). Also, npx is not used within the function, so you can remove that parameter from the function definition.
2023-04-03 09:53:31
0
python,pytorch,huggingface-transformers,huggingface
3
76,139,553
Getting RuntimeError: expected scalar type Half but found Float in AWS P3 instances in opt6.7B fine tune
75,918,140
false
917
I have a simple code which takes a opt6.7B model and fine tunes it. When I run this code in Google colab(Tesla T4, 16GB) it runs without any problem. But when I try to run the the same code in AWS p3-2xlarge environment (Tesla V100 GPU, 16GB) it gives the error. RuntimeError: expected scalar type Half but found Float To be able to run the fine tuning on a single GPU I use LORA and peft. which are installed exactly the same way (pip install) in both cases. I can use with torch.autocast("cuda"): and then that error vanishes. But the loss of the training becomes very strange meaning it does not gradually decrease rather it fluctuates within a large range (0-5) (and if I change the model to GPT-J then the loss always stays 0) whereas the loss is gradually decreasing for the case of colab. So I am not sure if using with torch.autocast("cuda"): is a good thing or not. The transfromeers version is 4.28.0.dev0 in both case. Torch version for colab shows 1.13.1+cu116 whereas for p3 shows - 1.13.1 (does this mean it does not have CUDA support? I doubt, on top of that doing torch.cuda.is_available() shows True) The only large difference I can see is that for colab, bitsandbytes has this following setup log ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64... CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so CUDA SETUP: Highest compute capability among GPUs detected: 7.5 CUDA SETUP: Detected CUDA version 118 Whereas for p3 it is the following ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues ================================================================================ CUDA SETUP: CUDA runtime path found: /opt/conda/envs/pytorch/lib/libcudart.so CUDA SETUP: Highest compute capability among GPUs detected: 7.0 CUDA SETUP: Detected CUDA version 117 CUDA SETUP: Loading binary /opt/conda/envs/pytorch/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so... What am I missing? I am not posting the code here. But it is really a very basic version that takes opt-6.7b and fine tunes it on alpaca dataset using LORA and peft. Why does it run in colab but not in p3? Any help is welcome :) -------------------- EDIT I am posting a minimal code example that I actually tried import os os.environ["CUDA_VISIBLE_DEVICES"]="0" import torch import torch.nn as nn import bitsandbytes as bnb from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "facebook/opt-6.7b", load_in_8bit=True, device_map='auto', ) tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b") for param in model.parameters(): param.requires_grad = False # freeze the model - train adapters later if param.ndim == 1: # cast the small parameters (e.g. 
layernorm) to fp32 for stability param.data = param.data.to(torch.float32) model.gradient_checkpointing_enable() # reduce number of stored activations model.enable_input_require_grads() class CastOutputToFloat(nn.Sequential): def forward(self, x): return super().forward(x).to(torch.float32) model.lm_head = CastOutputToFloat(model.lm_head) def print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. """ trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) from peft import LoraConfig, get_peft_model config = LoraConfig( r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM" ) model = get_peft_model(model, config) print_trainable_parameters(model) import transformers from datasets import load_dataset tokenizer.pad_token_id = 0 CUTOFF_LEN = 256 data = load_dataset("tatsu-lab/alpaca") data = data.shuffle().map( lambda data_point: tokenizer( data_point['text'], truncation=True, max_length=CUTOFF_LEN, padding="max_length", ), batched=True ) # data = load_dataset("Abirate/english_quotes") # data = data.map(lambda samples: tokenizer(samples['quote']), batched=True) trainer = transformers.Trainer( model=model, train_dataset=data['train'], args=transformers.TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, warmup_steps=100, max_steps=400, learning_rate=2e-5, fp16=True, logging_steps=1, output_dir='outputs' ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False) ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! 
trainer.train() And here is the full stack trace /tmp/ipykernel_24622/2601578793.py:2 in <module> │ │ │ │ [Errno 2] No such file or directory: '/tmp/ipykernel_24622/2601578793.py' │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/trainer.py:1639 in train │ │ │ │ 1636 │ │ inner_training_loop = find_executable_batch_size( │ │ 1637 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │ │ 1638 │ │ ) │ │ ❱ 1639 │ │ return inner_training_loop( │ │ 1640 │ │ │ args=args, │ │ 1641 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1642 │ │ │ trial=trial, │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/trainer.py:1906 in │ │ _inner_training_loop │ │ │ │ 1903 │ │ │ │ │ with model.no_sync(): │ │ 1904 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1905 │ │ │ │ else: │ │ ❱ 1906 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1907 │ │ │ │ │ │ 1908 │ │ │ │ if ( │ │ 1909 │ │ │ │ │ args.logging_nan_inf_filter │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/trainer.py:2662 in │ │ training_step │ │ │ │ 2659 │ │ │ loss = loss / self.args.gradient_accumulation_steps │ │ 2660 │ │ │ │ 2661 │ │ if self.do_grad_scaling: │ │ ❱ 2662 │ │ │ self.scaler.scale(loss).backward() │ │ 2663 │ │ elif self.use_apex: │ │ 2664 │ │ │ with amp.scale_loss(loss, self.optimizer) as scaled_loss: │ │ 2665 │ │ │ │ scaled_loss.backward() │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/_tensor.py:488 in backward │ │ │ │ 485 │ │ │ │ create_graph=create_graph, │ │ 486 │ │ │ │ inputs=inputs, │ │ 487 │ │ │ ) │ │ ❱ 488 │ │ torch.autograd.backward( │ │ 489 │ │ │ self, gradient, retain_graph, create_graph, inputs=inputs │ │ 490 │ │ ) │ │ 491 │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/__init__.py:197 in backward │ │ │ │ 194 │ # The reason we repeat same the comment below is that │ │ 195 │ # some Python versions print out the first line of a multi-line function │ │ 196 │ # calls in the traceback and some print out the last line │ │ ❱ 197 │ Variable._execution_engine.run_backward( # Calls into the C++ engine to run the bac │ │ 198 │ │ tensors, grad_tensors_, retain_graph, create_graph, inputs, │ │ 199 │ │ allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to ru │ │ 200 │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/function.py:267 in apply │ │ │ │ 264 │ │ │ │ │ │ │ "Function is not allowed. 
You should only implement one " │ │ 265 │ │ │ │ │ │ │ "of them.") │ │ 266 │ │ user_fn = vjp_fn if vjp_fn is not Function.vjp else backward_fn │ │ ❱ 267 │ │ return user_fn(self, *args) │ │ 268 │ │ │ 269 │ def apply_jvp(self, *args): │ │ 270 │ │ # _forward_cls is defined by derived class │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/utils/checkpoint.py:157 in backward │ │ │ │ 154 │ │ │ raise RuntimeError( │ │ 155 │ │ │ │ "none of output has requires_grad=True," │ │ 156 │ │ │ │ " this checkpoint() is not necessary") │ │ ❱ 157 │ │ torch.autograd.backward(outputs_with_grad, args_with_grad) │ │ 158 │ │ grads = tuple(inp.grad if isinstance(inp, torch.Tensor) else None │ │ 159 │ │ │ │ │ for inp in detached_inputs) │ │ 160 │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/__init__.py:197 in backward │ │ │ │ 194 │ # The reason we repeat same the comment below is that │ │ 195 │ # some Python versions print out the first line of a multi-line function │ │ 196 │ # calls in the traceback and some print out the last line │ │ ❱ 197 │ Variable._execution_engine.run_backward( # Calls into the C++ engine to run the bac │ │ 198 │ │ tensors, grad_tensors_, retain_graph, create_graph, inputs, │ │ 199 │ │ allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to ru │ │ 200 │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/function.py:267 in apply │ │ │ │ 264 │ │ │ │ │ │ │ "Function is not allowed. You should only implement one " │ │ 265 │ │ │ │ │ │ │ "of them.") │ │ 266 │ │ user_fn = vjp_fn if vjp_fn is not Function.vjp else backward_fn │ │ ❱ 267 │ │ return user_fn(self, *args) │ │ 268 │ │ │ 269 │ def apply_jvp(self, *args): │ │ 270 │ │ # _forward_cls is defined by derived class │ │ │ │ /opt/conda/envs/pytorch/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py:456 in │ │ backward │ │ │ │ 453 │ │ │ │ │ 454 │ │ │ elif state.CB is not None: │ │ 455 │ │ │ │ CB = state.CB.to(ctx.dtype_A, copy=True).mul_(state.SCB.unsqueeze(1).mul │ │ ❱ 456 │ │ │ │ grad_A = torch.matmul(grad_output, CB).view(ctx.grad_shape).to(ctx.dtype │ │ 457 │ │ │ elif state.CxB is not None: │ │ 458 │ │ │ │ │ │ 459 │ │ │ │ if state.tile_indices is None: (Sorry if this is a very novice question but I have no solution at the moment :( )
0
4
1
I have the same error as you: when I add the code below, with torch.autocast("cuda"): trainer.train() the loss is 0. I suspect that bitsandbytes does not support the V100 when using load_in_8bit=True together with fp16=True.
2023-04-03 11:00:42
1
python,tensorflow,predict
1
75,949,490
why predict with a tensorflow model don't give the same answer for each signal separatly and all signal at once?
75,918,731
true
87
Have created a tenforflow model that is taking 512 input samples (1 * N * 512) and I would like to make a prediction with new input. I have an s variable that has 19*512 signal if i predict the output of my model with one signal at a time [DLmodel( s[i,:][np.newaxis,np.newaxis,:] ).numpy()[0,:,0] for i in range(19)] I got this answer: [[0.41768566 0.5564939 0.30202574 0.35190994 0.27736259 0.28247398 0.2699227 0.33878434 0.35135144 0.31779674 0.3259031 0.3272484 0.32065392 0.33836302 0.31446803 0.26727855 0.29702038 0.30528304 0.32032394]] but if I predict directly with a 2D matrix (all signals) I get : DLmodel( s[np.newaxis,:,:] ).numpy()[0,:,0] [4.1768566e-01 3.5780075e-01 1.5305097e-01 9.7242827e-03 8.3400400e-06 2.6045337e-09 2.0279233e-11 1.0051511e-12 4.4332330e-13 2.3794513e-13 2.0760676e-13 1.8587506e-13 1.7166681e-13 1.7180506e-13 1.7025846e-13 1.5340669e-13 1.8261155e-13 1.4610023e-13 1.4570285e-13] I don't understand why the answers are not equal? I don't understand also why if i make a 2d matrix input with a sliding window of 5 signals with 1 sample shift, I don't get the correct answer: Signals = [] k=0 for i in range(int(437*Fs),int(437*Fs)+5): Signals.append(Sigs[10,(k+i):(k+i)+size]) Signals = np.array(Signals) Signals = np.expand_dims(Signals, axis=[0]) print(DLmodel(Signals).numpy()[0,:,0]) Signals = [] k=0 for i in range(int(437*Fs),int(437*Fs)+5): Signals.append(Sigs[10,(k+i+1):(k+i+1)+size]) Signals = np.array(Signals) Signals = np.expand_dims(Signals, axis=[0]) print(DLmodel(Signals).numpy()[0,:,0]) print this : [0.9198115 0.98681784 0.997053 0.9992207 0.9997619 ] [0.92536646 0.9863089 0.99667054 0.99903715 0.999721 ] I tab the second line so both up and down number should be the same.This is very confusing. Here's the model I used: DLmodel = Sequential() DLmodel.add(LSTM(units=size, return_sequences=True, input_shape=(None, size), activation='tanh')) # , kernel_regularizer=L2(0.01))) DLmodel.add(Dropout(0.3)) DLmodel.add(Dense(size // 2, activation="relu", kernel_initializer="uniform")) DLmodel.add(Dropout(0.3)) DLmodel.add(Dense(size // 4, activation="relu", kernel_initializer="uniform")) DLmodel.add(Dropout(0.3)) DLmodel.add(Dense(size // 8, activation="relu", kernel_initializer="uniform")) DLmodel.add(Dropout(0.3)) DLmodel.add(Dense(1, activation="sigmoid", kernel_initializer="uniform")) DLmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', 'mse'], run_eagerly=True)
1.2
1
1
You use an LSTM layer and this layer has an internal memory, that is initialized with an all-zero vector for every new input. The LSTM layer is used for (time-)series data, for example the traffic at a certain hour of the day, or the sequence of words within a sentence. Usually you input one or several samples into an LSTM layer, so that your data has the following shape: (num_samples, num_steps, num_features) The num_steps refers to the number of "sub-samples" within your data. For example: You have the sentence "I am at home", your first input will be a numeric representation of the word "I", the second step a numeric representation of the word "am". The data could also be 12 cars driving at a street between 5:00 and 6:00 am (time step 0) and 48 cars driving between 6:00 and 7:00 am (time step 1). For every time step passed into this layer, the memory is adjusted. In consequence: If you plug in time step 1 without having inserted time step 0 the result will be different from inserting time step 0 and time step 1 after another. And that is the entry point for your problem: In the upper part of your example you insert your data separated and the internal memory is set to zero in all cases. But in the second case, you insert the data as time series, therefore the first result is the same (it is calculated based on the all-zero memory), but all later results use the memory of the time steps before. So you actually produced two different cases. Unfortunately, you did not provide details about your data, but one of the cases is the "right" one, while the other one will not work as you intend. The two cases: Your first case: Start memory = [0, 0, 0, ...] input 10 samples shape = (1, 1, 512) # 1 sample, 1 time steps, 512 features insert 1st sample -> calculate output based on all-zero memory insert 2nd sample -> calculate output based on all-zero memory and so on Second case: Start memory = [0, 0, 0, ...] input shape = (1, 10, 512) # 1 sample, 10 time steps, 512 features insert whole input, calculation based on multiple sub steps: calculate 1st output, adjust memory calculate 2nd output based on adjusted memory, adjust memory again and so on
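For what it's worth, this memory effect can be seen with a tiny standalone sketch (a toy LSTM layer, not the question's model; the shapes are arbitrary):

import numpy as np
import tensorflow as tf

layer = tf.keras.layers.LSTM(4, return_sequences=True)
x = np.random.rand(1, 3, 8).astype("float32")  # 1 sample, 3 time steps, 8 features

# Case 1: feed each time step on its own -> the memory starts from zero on every call
separate = [layer(x[:, i:i + 1, :]).numpy()[0, 0] for i in range(3)]

# Case 2: feed the whole sequence at once -> the memory carries over between steps
together = layer(x).numpy()[0]

print(np.allclose(separate[0], together[0]))  # True: the first step sees a zero memory in both cases
print(np.allclose(separate[1], together[1]))  # False: later steps see different memory

The same pattern explains why the question's per-signal and all-at-once predictions only agree on the first element.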
2023-04-03 11:58:01
0
python,linux,azure
1
75,925,218
azure information protection remove label in linux
75,919,230
false
65
I am in a situation that requires decrypting/removing labels on Linux, e.g. Debian or Raspbian. I have looked into Azure products and they clearly state that there is no "official" support for Linux, but my question is: is there any workaround to achieve this on Linux? One silly way I could think of is to open the protected file on a Windows PC, decrypt it there, and send it to the Linux PC over scp, but this requires me to keep a sort of bypass PC running just for this, which seems pretty silly. Any ideas/suggestions? Thanks in advance.
0
1
1
Azure Information Protection doesn't have an official client for Linux. One solution would be to run a Windows virtual machine on your Linux system and install the AIP client in it; VirtualBox or VMware are good choices here. You can also use the AIP PowerShell module to automate the removal of labels.
2023-04-03 15:44:23
1
python,django,timezone,dst
1
75,930,819
Django and Time Zone for regions that no longer observe the Daylight Save Time (DST)
75,921,330
true
88
I have a Django project with time zone support enabled for multiple purposes, and my TIME_ZONE = 'America/Mexico_City'. However, since 2023 DST is no longer observed in this time zone. I use localtime to get the right date/time in some cases, but it still applies DST: >>> localtime(timezone.now()) datetime.datetime(2023, 4, 3, 10, 14, 49, 782365, tzinfo=<DstTzInfo 'America/Mexico_City' CDT-1 day, 19:00:00 DST>) >>> timezone.now() datetime.datetime(2023, 4, 3, 15, 14, 54, 953013, tzinfo=<UTC>) >>> datetime.now() datetime.datetime(2023, 4, 3, 9, 15, 7, 628038) datetime.now() has the correct date/time. Of course I could replace localtime with datetime.now(), but there are a lot of those calls, and I want to understand how I can "update" or "sync" my Django project so that it picks up the correct DST rules when a region changes whether or not it observes DST.
1.2
2
1
UPDATE: Solved. Thanks to @FObersteiner for the reference; my pytz version was old. Simply upgrade it with pip install --upgrade pytz and that will solve the issue.
2023-04-03 16:46:37
1
python,sql,flask
3
75,921,900
TypeError: delete_dog() missing 1 required positional argument: 'Dog_id' How do I get around this?
75,921,864
false
28
I have written this code, got this TypeError, and don't understand the error. What am I doing wrong? @app.route("/deletedog", methods=["GET", "POST"]) def delete_dog(Dog_id): p=dog.query.filter_by(Dog_id=Dog_id).first() if request.method == 'GET': db.session.delete(p) db.session.commit() return redirect(url_for('index')) return render_template("deletedog.html") I'm kind of scratching my head; I didn't think it needed a positional argument.
0.066568
2
1
I assume you are making a Flask app; kindly include those details in your question next time. To pass a parameter to a Flask view you need to declare it in the route URL, so changing the route URL to something like /deletedog/<dog_id> (and the function signature to match) should make this work.
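A minimal sketch of what that change could look like, reusing the names from the question (dog, db and the templates are assumed to exist as shown there); the int: converter assumes Dog_id is an integer column:

from flask import redirect, render_template, request, url_for

@app.route("/deletedog/<int:dog_id>", methods=["GET", "POST"])
def delete_dog(dog_id):
    # dog_id now comes from the URL, e.g. a GET request to /deletedog/3
    p = dog.query.filter_by(Dog_id=dog_id).first()
    if request.method == "GET":
        db.session.delete(p)
        db.session.commit()
        return redirect(url_for("index"))
    return render_template("deletedog.html")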
2023-04-03 18:46:38
0
python,powershell,visual-studio-code,terminal,command-prompt
2
75,922,821
Python wont run inside CMD, PS, or VS Code
75,922,801
false
106
I am a learner, so please forgive me if this is common knowledge I fixed my PATH files for python, I made the only PATH and sole instance of Python C:\Python This fixed the issue where VS code could not find a python interpreter. Now that I have done this, when when i try to run a python command, this happens: PS C:\Users\sajja\Documents\git\cs50w_repo\cs50w_lecturepractice\lecture2-python> python name.py but it never prompts me for the input, which is the whole point of this file. when i check the python install, vscode does recognize it, just wont run it PS C:\Users\sajja\Documents\git\cs50w_repo\cs50w_lecturepractice\lecture2-python> python Python 3.11.2 (tags/v3.11.2:878ead1, Feb 7 2023, 16:38:35) [MSC v.1934 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. If i switch to the output tab and do it from there instead, it runs the code without error, but never prompts me for the input, and therefore never prints the output [Running] python -u "c:\Users\sajja\Documents\git\cs50w_repo\cs50w_lecturepractice\lecture2-python\name.py" [Done] exited with code=0 in 0.057 seconds I was expecting for this code: name = input("Name: ") print(name) to prompt me to enter an input, and then print said input in the output or terminal
0
2
1
There are many similar questions, but I couldn't find an exact one. Here is the solution I found: instead of running the file in VS Code, I selected Run and Debug, which opened a different terminal (maybe?). This terminal accepts input and therefore printed the correct output. Here it is: PS C:\Users\sajja\Documents\git\cs50w_repo\cs50w_lecturepractice> & C:/Python/python.exe c:/Users/sajja/Documents/git/cs50w_repo/cs50w_lecturepractice/lecture2-python/name.py Name: sajjad sajjad The Output window still does not do what it is supposed to, but the Terminal window is behaving the way it should.
2023-04-03 23:59:47
1
python,continuous-integration,automated-tests,integration-testing
2
75,924,682
Testing Python Package Dependencies
75,924,550
false
111
Lets say I have a widely distributed/used python package called foo that's designed to work with the following dependencies: pandas>=1.3.0     pyarrow>=8.0 python>=3.8 How do I make sure that my foo package is actually compatible with all those dependencies so that people have a seamless experience with using my package? One idea that I had is to run my test suite against a whole bunch of environments with different versions of the dependent packages. For example, run the test suite 13 times under environments with the following dependency versions: pandas=1.3.0, pyarrow=11.0, python=3.11.2 pandas=1.4.0, pyarrow=11.0, python=3.11.2 pandas=1.5.0, pyarrow=11.0, python=3.11.2 pandas=2.0.0, pyarrow=11.0, python=3.11.2 pyarrow=8.0, pandas=2.0.0, python=3.11.2 pyarrow=9.0, pandas=2.0.0, python=3.11.2 pyarrow=10.0, pandas=2.0.0, python=3.11.2 pyarrow=11.0, pandas=2.0.0, python=3.11.2 python=3.8, pandas=2.0.0, pyarrow=11.0 python=3.9, pandas=2.0.0, pyarrow=11.0 python=3.10, pandas=2.0.0, pyarrow=11.0 python=3.11, pandas=2.0.0, pyarrow=11.0 python=3.11.2, pandas=2.0.0, pyarrow=11.0 Is there a more robust way to do it? For example, what if my foo package doesn't work with pandas version 1.5.3. I don't think testing every major and minor release for all the dependent packages is feasible.
0.099668
3
1
In general we may have significantly more than three deps, leading to combinatorial explosion. And mutual compatibility among the deps may be fragile, burdening you with tracking things like "A cannot import B between breaking change X and bugfix Y". Import renames will sometimes stir up trouble of that sort. Testing e.g. pandas 1.5.0 may be of limited interest once we know the bugfixes of 1.5.3 are out. Is there a more robust way to do it? I recommend you adopt a "point in time" approach, so test configs resemble real user configs. First pick a budget of K tests, and an "earliest" date. We will test between that date and current date, so initially we have 2 tests for those dates, with K - 2 remaining in the budget. For a given historic date, scan the deps for their release dates, compute min over them, and request installation of the corresponding version number. Allow flexibility so that you get e.g. pandas 1.4.4 installed ("< 1.5") rather than the less interesting 1.4.0. Run the test, watch it succeed. Report the test's corresponding date, which is max of dates for the installed dependencies. At this point there's two ways you could go. You might pick a single dep and constrain it (">= 1.5" or ">= 2.0") to simulate a user who wanted a certain released feature and updated that specific package. Likely a better way for you to spend the test budget is to bisect a range of your "reported" dates, locate when a dep bumped its minor version number, and adjust the constraints to pull that in. It may affect a single dep, but likely the install solver will uprev additional deps, as well, and that's fine. Report the test result, lather, rinse, consume the budget. Proudly publish the testing details on your website. Given that everything takes a dependency on the cPython interpreter, one way to do "point in time" is to simply pick K interpreter releases and constrain the install so it demands exact match on the release number, e.g. 3.10.8. Ratchet down the various minor version numbers as far as you can get away with, e.g. pandas "< 1.5" or "< 1.4".
2023-04-04 07:32:51
17
python,pandas,apache-spark,databricks,iteritems
4
75,926,954
Databricks: Issue while creating spark data frame from pandas
75,926,636
true
6,946
I have a pandas data frame which I want to convert into spark data frame. Usually, I use the below code to create spark data frame from pandas but all of sudden I started to get the below error, I am aware that pandas has removed iteritems() but my current pandas version is 2.0.0 and also I tried to install lesser version and tried to created spark df but I still get the same error. The error invokes inside the spark function. What is the solution for this? which pandas version should I install in order to create spark df. I also tried to change the runtime of cluster databricks and tried re running but I still get the same error. import pandas as pd spark.createDataFrame(pd.DataFrame({'i':[1,2,3],'j':[1,2,3]})) error:- UserWarning: createDataFrame attempted Arrow optimization because 'spark.sql.execution.arrow.pyspark.enabled' is set to true; however, failed by the reason below: 'DataFrame' object has no attribute 'iteritems' Attempting non-optimization as 'spark.sql.execution.arrow.pyspark.fallback.enabled' is set to true. warn(msg) AttributeError: 'DataFrame' object has no attribute 'iteritems'
1.2
9
2
It's related to the Databricks Runtime (DBR) version used - the Spark versions shipped in DBR up to 12.2 rely on the .iteritems function to construct a Spark DataFrame from a Pandas DataFrame. This issue was fixed in Spark 3.4, which is available as DBR 13.x. If you can't upgrade to DBR 13.x, then you need to downgrade Pandas to the latest 1.x version (1.5.3 right now) by using the %pip install -U pandas==1.5.3 command in your notebook. Although it's just better to use the Pandas version shipped with your DBR - it was tested for compatibility with the other packages in DBR.
2023-04-04 07:32:51
1
python,pandas,apache-spark,databricks,iteritems
4
75,995,044
Databricks: Issue while creating spark data frame from pandas
75,926,636
false
6,946
I have a pandas data frame which I want to convert into spark data frame. Usually, I use the below code to create spark data frame from pandas but all of sudden I started to get the below error, I am aware that pandas has removed iteritems() but my current pandas version is 2.0.0 and also I tried to install lesser version and tried to created spark df but I still get the same error. The error invokes inside the spark function. What is the solution for this? which pandas version should I install in order to create spark df. I also tried to change the runtime of cluster databricks and tried re running but I still get the same error. import pandas as pd spark.createDataFrame(pd.DataFrame({'i':[1,2,3],'j':[1,2,3]})) error:- UserWarning: createDataFrame attempted Arrow optimization because 'spark.sql.execution.arrow.pyspark.enabled' is set to true; however, failed by the reason below: 'DataFrame' object has no attribute 'iteritems' Attempting non-optimization as 'spark.sql.execution.arrow.pyspark.fallback.enabled' is set to true. warn(msg) AttributeError: 'DataFrame' object has no attribute 'iteritems'
0.049958
9
2
This issue occurs with pandas version >= 2.0: in pandas 2.0 the .iteritems function was removed. There are two solutions for this issue: downgrade pandas to a version < 2, for example pip install -U pandas==1.5.3, or use the latest Spark version, i.e. 3.4.
2023-04-04 08:04:36
0
python,pytest
1
75,957,867
Pytest AttributeError: 'TestMyTest' object has no attribute
75,926,918
true
146
I have multiple tests written in the following format. When the test is run, why do some tests fail with exception AttributeError: 'TestMyTest' object has no attribute 'var1'? var1 is defined as class variable in test_init(). Still the exception is thrown in test_1(). @pytest.fixture(scope="module") def do_setup(run_apps): """ Do setup """ # Do something yield some_data class TestMyTest(BaseTestClass): ids = dict() task_id = list() output_json = None def test_init(self, do_setup): TestMyTest.var1 = Feature1() def test_1(self): self._logger.info("self.var1: {}".format(self.var1))
1.2
1
1
var1 was not defined because test_init() failed before it could assign TestMyTest.var1.
2023-04-04 09:51:21
0
python,mathematical-optimization,linear-programming,gurobi
2
75,938,813
linear programming python gurobi
75,927,946
false
102
I am working on a single machine scheduling problem where the goal is to minimize weighted delay. I have created a model using Python/Gurobi, but I'm experiencing issues with the objective function as it is producing suboptimal results. Interestingly, when I transfer the model to OpenSolver in Excel, I get better results. import gurobipy as gp N = 15 # number of jobs # w: weights of jobs w = [1, 10, 9, 10, 10, 4, 3, 2, 10, 3, 7, 3, 1, 3, 10] # p: process time of jobs p = [26, 24, 79, 46, 32, 35, 73, 74, 14, 67, 86, 46, 78, 40, 29] # d: due dates of jobs d = [397, 405, 433, 443, 424, 372, 392, 461, 432, 409, 400, 385, 464, 411, 427] # Create a new model m = gp.Model() x = {} goal = m.addVar(vtype=gp.GRB.CONTINUOUS) #T : Tardiness #C : Completion time T = m.addVars(N,lb=0) C = m.addVars(N) K = m.addVars(N) weight = {j: w[j] for j in range(N)} #x[i,j]:The variable xij is a variable that takes a value of 1 when Job i is assigned to Sequence j, and takes a value of 0 in all other cases. for i in range(N): for j in range(N): x[i,j] = m.addVar(vtype=gp.GRB.BINARY) # Set objective function: "The sequencing of jobs in a way that minimizes the total weighted delay." m.setObjective((goal), gp.GRB.MINIMIZE) # Add constraints # Constraint1: Each job must be assigned to a sequence. for i in range(N): m.addConstr(gp.quicksum(x[i,j] for j in range(N)) == 1) #Constraint2: Each sequence must be assigned a job. for j in range(N): m.addConstr(gp.quicksum(x[i,j] for i in range(N)) == 1) #Constraint3: If Job i is assigned to Sequence j, then the processing time of Job i is assigned to Sequence j. for j in range(N): m.addConstr(gp.quicksum(p[i]*x[i,j] for i in range(N)) == K[j]) #Constraint4: If Job i is assigned to Sequence j, then the weight of Job i is assigned to Sequence j. for j in range(N): m.addConstr(gp.quicksum(w[i]*x[i,j] for i in range(N)) == weight[j]) #Constraint 5: The completion time of Sequence j is the sum of the completion time of the previous sequence and the processing time of the job assigned to Sequence j. C[0].lb = 0 for j in range(1, N): m.addConstr(C[j-1] + K[j] == C[j]) #Constraint 6: If the completion time is greater than the due date, then there is a tardiness. for j in range(N): m.addConstr(T[j] >= C[j] - gp.quicksum(x[i,j]*d[i] for i in range(N))) #goal: The sequencing of jobs in a way that minimizes the total weighted tardiness. m.addConstr(gp.quicksum(T[j]*weight[j] for j in range (N)) <= goal) m.addConstr(C[0] == K[0]) m.addConstr(T[0] >= C[0] - d[0]) # Solve the problem m.optimize() # Print solution print('Optimal objective value: %g' % m.objVal) print('Solution:') for i in range(N): for j in range(N): if x[i,j].X > 0.5: print('x[%d,%d] = %d' % (i,j,x[i,j].X)) I would appreciate any assistance in improving the objective function in my Python/Gurobi model to match the results obtained from OpenSolver in Excel.
0
1
1
You should write out the LP file for both approaches to see whether they differ at some point. This is generally recommended to double-check your formulation and see whether it actually matches the mathematical model.
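In case it helps with that comparison: gurobipy can dump the model it actually received, assuming m is the model built in the code above; the file name is arbitrary and the format is inferred from the extension.

m.update()                      # make sure all pending changes are flushed into the model
m.write("single_machine.lp")    # human-readable LP file to diff against the OpenSolver model
m.write("single_machine.mps")   # MPS file, if the other tool imports that more easily

Comparing the generated LP file with the OpenSolver formulation line by line usually reveals a missing or differing constraint quickly.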
2023-04-04 11:53:53
0
python,algorithm
2
75,929,141
One loop with all lists in one instead of two loops with separated lists
75,929,062
false
77
I have an algorithmic question; sorry if it seems silly to you. Consider that I have two lists: list_a = ["Red", "Blue", "Black"] list_b = ["Samsung", "Apple"] The question is: is iterating through the lists above with two for loops faster, or is merging them and using just one loop faster?
0
2
1
Imho it depends on what you're trying to accomplish with the iteration. If you need to perform an operation on each combination of elements from list_a and list_b (i.e. a Cartesian product), then a nested pair of for loops is necessary. In this case, merging the lists would not be useful and may even cause errors in your code. However, if you just need to visit all the elements in both lists sequentially, then merging (or chaining) them into a single for loop is the simpler option - either way each element is visited once, so the cost is O(len(list_a) + len(list_b)).
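To make the sequential case concrete, a small sketch using the two lists from the question; itertools.chain iterates over both lists without building a merged copy:

from itertools import chain

list_a = ["Red", "Blue", "Black"]
list_b = ["Samsung", "Apple"]

# Sequential pass: every element is visited once, O(len(list_a) + len(list_b))
for item in chain(list_a, list_b):
    print(item)

# Cartesian product: a nested loop is unavoidable, O(len(list_a) * len(list_b))
for color in list_a:
    for brand in list_b:
        print(color, brand)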
2023-04-04 12:19:56
1
python,unit-testing,mocking,monkeypatching
2
75,929,664
How to test exceptions in mock unittests for non-deterministic functions?
75,929,336
false
46
I've a chain of functions in my library that looks like this, from myfuncs.py import copy import random def func_a(x, population=[0, 0, 123, 456, 789]): sum_x = 0 for _ in range(x): pick = random.choice(population) if pick == 0: # Reset the sum. sum_x = 0 else: sum_x += pick return {'input': sum_x} def func_b(y): sum_x = func_a(y)['input'] scale_x = sum_x * 1_00_000 return {'a_input': sum_x, 'input': scale_x} def func_c(z): bz = func_b(z) scale_x = bz['b_input'] = copy.deepcopy(bz['input']) bz['input'] = scale_x / (scale_x *2)**2 return bz Due to the randomness in func_a, the output of fun_c is non-deterministic. So sometimes when you do: >>> func_c(12) {'a_input': 1578, 'input': 1.5842839036755386e-09, 'b_input': 157800000} >>> func_c(12) {'a_input': 1947, 'input': 1.2840267077555213e-09, 'b_input': 194700000} >>> func_c(12) --------------------------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) <ipython-input-121-dd3380e1c5ac> in <module> ----> 1 func_c(12) <ipython-input-119-cc87d58b0001> in func_c(z) 21 bz = func_b(z) 22 scale_x = bz['b_input'] = copy.deepcopy(bz['input']) ---> 23 bz['input'] = scale_x / (scale_x *2)**2 24 return bz ZeroDivisionError: division by zero Then I've modified func_c to catch the error and explain to users why ZeroDivisionError occurs, i.e. def func_c(z): bz = func_b(z) scale_x = bz['b_input'] = copy.deepcopy(bz['input']) try: bz['input'] = scale_x / (scale_x *2)**2 except ZeroDivisionError as e: raise Exception("You've lucked out, the pick from func_a gave you 0!") return bz And the expected behavior that raises a ZeroDivisionError now shows: --------------------------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) <ipython-input-123-4082b946f151> in func_c(z) 23 try: ---> 24 bz['input'] = scale_x / (scale_x *2)**2 25 except ZeroDivisionError as e: ZeroDivisionError: division by zero During handling of the above exception, another exception occurred: Exception Traceback (most recent call last) <ipython-input-124-dd3380e1c5ac> in <module> ----> 1 func_c(12) <ipython-input-123-4082b946f151> in func_c(z) 24 bz['input'] = scale_x / (scale_x *2)**2 25 except ZeroDivisionError as e: ---> 26 raise Exception("You've lucked out, the pick from func_a gave you 0!") 27 return bz Exception: You've lucked out, the pick from func_a gave you 0! I could test the func_c in a deterministic way to avoid the zero-division without iterating func_c multiple times and I've tried: from mock import patch from myfuncs import func_c with patch("myfuncs.func_a", return_value={"input": 345}): assert func_c(12) == {'a_input': 345, 'input': 7.246376811594203e-09, 'b_input': 34500000} And when I need to test the new exception, I don't want to arbitrarily iterate func_c such that I hit the exception, instead I want to mock the outputs from func_a directly to return the 0 value. Q: How do I get the mock to catch the new exception without iterating multiple time through func_c? I've tried this in my testfuncs.py file in the same directory as myfuncs.py: from mock import patch from myfuncs import func_c with patch("myfuncs.func_a", return_value={"input": 0}): try: func_c(12) except Exception as e: assert str(e).startswith("You've lucked out") Is how I'm checking the error message content the right way to check Exception in the mock test?
0.099668
1
1
What you have looks fine to me. Unit tests usually break each component down, test it in isolation, and run through all the possible paths of execution. Essentially, you want to make sure each line of code is run at least once to bring out any issues (instead of bringing out the issues when running in production). With that said, there are tools in the various test frameworks to aid with common test cases. One is pytest.raises() (or unittest's assertRaisesRegex), which asserts that the code within that context definitely raises the exception you expect. In your code, the test will still pass even if no exception is raised at all, when it should fail. You could fix that manually, but using the framework tools makes for easier-to-read and more concise code.
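A sketch of what the framework-assisted version of that test could look like; the module and function names are taken from the question, and the match pattern is an assumption about the exact message:

import pytest
from mock import patch

from myfuncs import func_c

def test_func_c_zero_pick():
    with patch("myfuncs.func_a", return_value={"input": 0}):
        # The test now fails if no exception (or a non-matching one) is raised.
        with pytest.raises(Exception, match=r"You've lucked out"):
            func_c(12)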
2023-04-04 12:44:02
0
python,modbus,modbus-tcp,pymodbus
2
75,930,438
Reading modbus registers
75,929,544
false
87
I have a Python script which reads registers from an energy meter and saves the values to a database. The script was working fine until today, when I tried to run it and got this error: AttributeError: 'ModbusIOException' object has no attribute 'registers' I can ping the device normally... This is my code (half of it) - even a simple print of the value doesn't work anymore: from pymodbus.client import ModbusTcpClient IP = "192.168.X.X" client = ModbusTcpClient(IP) reg = client.read_holding_registers(23322, 2) calc = round((reg.registers[0] * pow(2, 16) + reg.registers[1]) * 0.01 / 1000, 2) print(calc) What could be the problem?
0
1
1
Your device is returning an exception instead of the values. Have a look at the type and content of the exception (by inspecting your reg variable) to get a clue on how to fix it.
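A hedged sketch of that inspection with pymodbus (response objects expose isError(); the address, count and IP are the ones from the question):

from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.X.X")
if not client.connect():                 # False means the TCP connection itself failed
    raise SystemExit("Could not connect to the meter")

reg = client.read_holding_registers(23322, 2)
if reg.isError():                        # ModbusIOException / exception responses land here
    print("Modbus error:", reg)          # printing the object shows the reason
else:
    calc = round((reg.registers[0] * pow(2, 16) + reg.registers[1]) * 0.01 / 1000, 2)
    print(calc)

client.close()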
2023-04-04 13:25:38
0
python,python-requests,urllib3,python-logging,redaction
2
75,931,155
Redact sensitive info from urllib3 logger
75,929,920
false
108
I would like to apply a Filter to the urllib3 Loggers used in the requests module, so that the sensible info from all log strings would be redacted. For some reason, my filter is not applied to urllib3.connectionpool Logger when it's called by requests.get(). Reproducible example import logging import re import requests class Redactor(logging.Filter): """Filter subclass to redact patterns from logs.""" redact_replacement_string = "<REDACTED_INFO>" def __init__(self, patterns: list[re.Pattern] = None): super().__init__() self.patterns = patterns or list() def filter(self, record: logging.LogRecord) -> bool: """ Overriding the original filter method to redact, rather than filter. :return: Always true - i.e. always apply filter """ for pattern in self.patterns: record.msg = pattern.sub(self.redact_replacement_string, record.msg) return True # Set log level urllib_logger = logging.getLogger("urllib3.connectionpool") urllib_logger.setLevel("DEBUG") # Add handler handler = logging.StreamHandler() handler.setFormatter(logging.Formatter("logger name: {name} | message: {message}", style="{")) urllib_logger.addHandler(handler) # Add filter urllib_logger.info("Sensitive string before applying filter: www.google.com") sensitive_patterns = [re.compile(r"google")] redact_filter = Redactor(sensitive_patterns) urllib_logger.addFilter(redact_filter) urllib_logger.info("Sensitive string after applying filter: www.google.com") # Perform a request that's supposed to use the filtered logger requests.get("https://www.google.com") # Check if the logger has been reconfigured urllib_logger.info("Sensitive string after request: www.google.com") The result of this code is that the Handler is applied to all log strings, but the Filter is not applied to log strings emitted by the requests.get() function: logger name: urllib3.connectionpool | message: Sensitive string before applying filter: www.google.com logger name: urllib3.connectionpool | message: Sensitive string after applying filter: www.<REDACTED_INFO>.com logger name: urllib3.connectionpool | message: Starting new HTTPS connection (1): www.google.com:443 logger name: urllib3.connectionpool | message: https://www.google.com:443 "GET / HTTP/1.1" 200 None logger name: urllib3.connectionpool | message: Sensitive string after request: www.<REDACTED_INFO>.com What I'm expecting I would like the sensitive pattern ("google") to be redacted everywhere: logger name: urllib3.connectionpool | message: Sensitive string before applying filter: www.google.com logger name: urllib3.connectionpool | message: Sensitive string after applying filter: www.<REDACTED_INFO>.com logger name: urllib3.connectionpool | message: Starting new HTTPS connection (1): www.<REDACTED_INFO>.com:443 logger name: urllib3.connectionpool | message: https://www.<REDACTED_INFO>.com:443 "GET / HTTP/1.1" 200 None logger name: urllib3.connectionpool | message: Sensitive string after request: www.<REDACTED_INFO>.com What I tried I tried applying the same Filter to "root" Logger, to "urllib3" Logger and to all existing Loggers and get the same result (like this): all_loggers = [logger for logger in logging.root.manager.loggerDict.values() if not isinstance(logger, logging.PlaceHolder)] for logger in all_loggers: logger.addFilter(redact_filter) I tried applying the Filter to the Handler, not to the Logger, since it seems that the Handler is applied to all log strings. Still no luck. 
I know that I could subclass a Formatter and do the redactions in there, but I think formatting and redacting are two different functions and I would like to keep them separately. Also, it would be nice to understand the logic in the logging module that produces the results that I'm getting.
0
2
1
That's because the record passed to your filter function is not formatted yet. The URL you want to redact is in record.args, not in record.msg. You need to apply the redaction after the final message has been constructed (or redact record.args as well).
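A sketch of a filter method that works on the fully interpolated message, meant as a drop-in replacement for Redactor.filter from the question - record.getMessage() merges msg with args, and args is then cleared so the handler doesn't try to interpolate again:

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()   # msg with args already applied
        for pattern in self.patterns:
            message = pattern.sub(self.redact_replacement_string, message)
        record.msg = message
        record.args = ()                # avoid a second interpolation by the handler
        return True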
2023-04-04 15:30:52
0
python,pip,scapy
2
75,931,331
Successfully installed pip, but error displayed when viewing pip version
75,931,207
false
31
The command line displays a successful installation of pip, but when I query the pip version, it displays Script file 'D:\anaconda\Scripts\pip-script.py' is not present. I don't know how to resolve it. I have tried many methods, but it seems like I can't use pip at all. I wanted to use scapy, but I can't use pip anymore. Please help!
0
1
1
Did you check the PATH variable on your Windows machine? Check that the pip executable is in your PATH. To do this, open a command prompt and type echo %PATH%. Look for the directory where pip is installed (likely something like C:\Python3\Scripts). If pip is in your PATH, try running it directly from the command prompt by typing pip and pressing Enter. If this still gives an error, try specifying the full path to pip, or reinstall it.
2023-04-04 15:45:19
0
python,ubuntu,docker-compose
1
75,934,513
Python server is killed when started using docker-compose exec
75,931,332
true
31
I start a docker container and want to start a simple Python server in it. I can't use the ENTRYPOINT/CMD of the Dockerfile as it is already used for other things. I am trying to do it the following way: docker-compose exec <service_name> /bin/bash -c "./server.py &" But as soon as docker-compose exec ends, the server stops. Yet if I run an application written in C++ in the same way, it continues to run: docker-compose exec <service_name> /bin/bash -c "./app &" I inspected this using htop and found out that both server.py and app start as child processes, but after docker-compose exec ends, app reattaches and server.py stops. How can I make my Python server continue to run when starting it through docker-compose exec?
1.2
1
1
Using nohup, i.e. /bin/bash -c "nohup ./server.py &", as proposed in the comments, did help.
2023-04-04 15:49:25
2
python,fastapi,pydantic,sqlmodel
1
75,931,557
FastAPI - switching off a switch some time later
75,931,368
false
40
Let's say I have this sqlmodel.SQLModel with a table named Project. My intention is to feature a Project record for a definite period of time, e.g. 3 days, i.e. set its featured field to True and automatically set it back to False thereafter. class Project(SQLModel): featured: bool = False How can I achieve this behavior using FastAPI? Is it via background tasks or something else?
0.379949
2
1
Consider using something like a "featured until" field instead. The field contains the last date on which the project should be featured. Your feature page then looks for any projects with a "featured until" date greater than today. This way you don't need any background process, so your system will be simpler and easier to maintain. It also leaves you with the feature history in the database, which may be useful in the future.
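A rough sketch of that idea with SQLModel; the field name, the session handling and the 3-day window are assumptions, and the end date is treated as inclusive here:

from datetime import date, timedelta
from typing import Optional

from sqlmodel import Field, Session, SQLModel, select

class Project(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    featured_until: Optional[date] = None

def feature_project(session: Session, project: Project, days: int = 3) -> None:
    # "Switching on" is just stamping an end date; nothing has to switch it off later.
    project.featured_until = date.today() + timedelta(days=days)
    session.add(project)
    session.commit()

def featured_projects(session: Session):
    statement = select(Project).where(Project.featured_until >= date.today())
    return session.exec(statement).all()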
2023-04-04 15:58:19
1
python,mastodon
1
75,931,569
snscrape error when attempting to scrape from mastodon
75,931,466
false
139
I'm not sure what I'm doing wrong. I'm trying to test out snscrape for scraping posts from Mastodon, using @pomological@botsin.space as a test case. (A bot that posts public domain fruit illustrations.) I think I'm following the correct format but it keeps giving me this error. Can anyone help? It gives me a similar error when I try either the "mastodon-profile" or "mastodon-toots" scrapers. I have used snscrape to scrape tweets from twitter many times and haven't had this problem. The command I'm using is snscrape mastodon-profile @pomological@botsin.space. C:\Windows\system32>snscrape mastodon-profile @pomological@botsin.space 2023-04-04 11:49:23.086 CRITICAL snscrape._cli Dumped stack and locals to C:\Users\[me]\AppData\Local\Temp\snscrape_loca ls_3xbv51be Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "C:\Users\[me]\AppData\Local\Programs\Python\Python311\Scripts\snscrape.exe\__main__.py", line 7, in <module> File "C:\Users\[me]\AppData\Local\Programs\Python\Python311\Lib\site-packages\snscrape\_cli.py", line 320, in main for i, item in enumerate(scraper.get_items(), start = 1): File "C:\Users\[me]\AppData\Local\Programs\Python\Python311\Lib\site-packages\snscrape\modules\mastodon.py", line 279, in get_items yield from self._entries_to_items(soup.find('div', class_ = 'activity-stream').find_all('div', class_ = 'entry'), r.url) AttributeError: 'NoneType' object has no attribute 'find_all' I've done some minimal python programming, but I'm pretty out of my depth here. Any assistance would be appreciated. I may have missed something obvious!
0.197375
1
1
It feels like the Mastodon profile page of this account does not have the HTML structure that snscrape is looking for. Either that, or there might be server-side issues on Mastodon's end that are preventing snscrape from retrieving the data.
2023-04-04 16:08:47
0
python,ide
1
75,938,145
Debugger in Pycharm doesn't work properly
75,931,561
true
115
I'm using Pycharm and want to debug my project but when I go and hit the debug key I get the following message Python Debugger Extension Available Cython extension speeds up Python debugging When I click on install I get the following error message Copile Cython Extension Error: Non-zero exit code (1): Traceback (most recent call last): File "/opt/pycharm-professional/plugins/python/helpers/pydev/setup_cython.py", line 242, in main() File "/opt/pycharm-professional/plugins/python/helpers/pydev/setup_cython.py", line 233, in main with FrameEvalModuleBuildContext(): File "/opt/pycharm-professional/plugins/python/helpers/pydev/setup_cython.py", line 187, in __enter__ shutil.copy(compatible_c, self._c_file) File "/home/lz/.pyenv/versions/3.8.16/lib/python3.8/shutil.py", line 418, in copy copyfile(src, dst, follow_symlinks=follow_symlinks) File "/home/lz/.pyenv/versions/3.8.16/lib/python3.8/shutil.py", line 264, in copyfile with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst: PermissionError: [Errno 13] Permission denied: '/opt/pycharm-professional/plugins/python/helpers/pydev/_pydevd_frame_eval/pydevd_frame_evaluator.c' (Side notes: I'm using arch and also utilize pyenv for python version management) I tried to install cython both via Arch's package manager and via pip in my project but to no success
1.2
1
1
Run sudo chown <user> <path_to_dir>/_pydevd_frame_eval/, then open up PyCharm again as a normal user, click on install, and everything should work as intended.
2023-04-04 16:58:22
0
python,google-colaboratory,pickle,langchain
1
76,140,854
“AttributeError: Can't get attribute 'Document' on <module 'langchain.schema' on Google Colab
75,931,980
false
661
When I try to load a pickle file onto Google Colab like this: with open("file.pkl", "rb") as f: vectorstore = pickle.load(f) I either get: AttributeError: Can't get attribute 'Document' on <module 'langchain.schema' from '/usr/local/lib/python3.9/dist-packages/langchain/schema.py'> or ImportError: cannot import name 'Document' from 'langchain.schema' (/usr/local/lib/python3.9/dist-packages/langchain/schema.py) Reinstalling Langchain and other workarounds did not fix the issue. The .pkl file is already in my "files" tab, so it's not an issue with importation
0
1
1
I had the same error and recreating the pkl file solved it. If you updated your environment after creating the pkl file, it's likely a problem with differing versions of a module (I couldn't spot which one in my case).
2023-04-04 18:02:12
1
python,jupyter-notebook,ipython,jupyter-lab
1
75,933,461
How can I debug IPython display methods?
75,932,505
true
34
I'm writing _repr_latex_, _repr_html_, __repr__ methods. The problem that I'm facing is that I want to debug the call chain, because I bumped in to a situation like this. class A: def _get_text(self): return self.__class__.__name__ def _repr_html_(self): text = self._get_text() return f"<b>{text}</b>" class B(A): def __repr__(self): return f"<{self.__class__.__name__} >" class C(B): def _get_text(self): return "I'm C, an equivalent data structure with different meaning." And whenever I try to display by stating an object c = C at the end of a code block, it displays the __repr__ not the _repr_html_, corresponding to the class C. If anyone knows what is a common debugging process for this situation?
1.2
1
1
After examining all the methods mentioned, I realized that an exception was occurring in one of them, forcing a fallback to __repr__. The conclusion is to make sure that the _repr_{type}_ methods don't raise exceptions.
2023-04-04 19:57:19
1
python,pytorch
1
75,933,435
self() as function within class, what does it do?
75,933,354
true
65
Sorry for the poor title but I'm unsure how better to describe the question. So I recently watched Andrej Kaparthy's build GPT video which is awesome and now trying to reconstruct the code myself I notices that he uses self() as a function and was curious why and what exactly it does. The code is here and I'm curious in particular about the generate function: class BigramLanguageModel(nn.Module): def __init__(self, vocab_size): super().__init__() # each token directly reads off the logits for the next token from a lookup table self.token_embedding_table = nn.Embedding(vocab_size, vocab_size) def forward(self, idx, targets=None): # idx and targets are both (B,T) tensor of integers logits = self.token_embedding_table(idx) # (B,T,C) if targets is None: loss = None else: B, T, C = logits.shape logits = logits.view(B*T, C) targets = targets.view(B*T) loss = F.cross_entropy(logits, targets) return logits, loss def generate(self, idx, max_new_tokens): # idx is (B, T) array of indices in the current context for _ in range(max_new_tokens): # get the predictions logits, loss = self(idx) # focus only on the last time step logits = logits[:, -1, :] # becomes (B, C) # apply softmax to get probabilities probs = F.softmax(logits, dim=-1) # (B, C) # sample from the distribution idx_next = torch.multinomial(probs, num_samples=1) # (B, 1) # append sampled index to the running sequence idx = torch.cat((idx, idx_next), dim=1) # (B, T+1) return idx So to me it seems that he is calling the forward function defined within the class through using the self(). Is that correct? And if so why would he not use forward(idx) instead. Thank you for you help!
1.2
3
1
Meh, this is PyTorch. Remember that you can call the model like this: model(x) to run model.forward(x) - calling the instance goes through nn.Module.__call__, which runs any hooks and then calls forward. So inside the model class, self(x) is basically the same as doing self.forward(x).
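A tiny standalone sketch (a made-up module, not the BigramLanguageModel) showing that calling the instance routes through nn.Module.__call__ into forward:

import torch
import torch.nn as nn

class Doubler(nn.Module):
    def forward(self, x):
        return 2 * x

m = Doubler()
x = torch.arange(3)
print(m(x))          # tensor([0, 2, 4]) -- __call__ runs hooks, then calls forward
print(m.forward(x))  # same values, but bypasses the hook machinery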
2023-04-05 00:09:23
1
python-3.x,python-poetry
2
75,934,739
Poetry install ChefBuildError
75,934,738
false
1,582
Poetry install fails with ChefBuildError: Backend operation failed: HookMissing('build_editable') My poetry version is 1.4.2
0.099668
1
1
This worked for me. I believe this is caused by a change in how the build-backend is defined in pyproject.toml between poetry ^1.3 and poetry ^1.4. Assuming you have poetry ^1.4 installed, you have two options: (1) in your pyproject.toml, change build-backend = "poetry.masonry.api" to build-backend = "poetry.core.masonry.api"; or (2) if, like me, you have other code that assumes poetry ^1.3, simply downgrade your poetry version with poetry self update 1.3.2. If you go with option 2 you may get a bunch of RuntimeError hash for xxx errors. If that's the case you will also need to rm -r ~/.cache/pypoetry/artifacts and rm -r ~/.cache/pypoetry/cache.
2023-04-05 00:12:56
2
python-3.x,python-poetry
1
75,934,755
Poetry install hash for package.whl not found in known hashes
75,934,754
false
508
When running poetry install I get the following error Hash for some-package (a.b.c) from archive some-package-a.b.c....whl not found in known hashes poetry version ^1.3
0.379949
2
1
This worked for me: rm -r ~/.cache/pypoetry/cache and, failing that, maybe also try rm -r ~/.cache/pypoetry/artifacts. Then run poetry install again.
2023-04-05 00:54:43
7
python,algorithm,time-complexity
1
75,934,928
Increasing programming efficiency to O(n) from O(n^2)
75,934,901
false
77
Let's say you have an array of numbers and a queries array: array = [1,3,5,6,9] queries = [5,9,3] We need to find the number of operations required i.e. add 1 or subtract 1 from the array to bring all numbers of the array to match the element from the queries array. For example: For query 5, number of operations required to bring all elements of array to 5 is abs(1-5) + abs(3-5) + abs(5-5) + abs(6-5) + abs(9-5) = 4+2+0+1+4 = 11. For query 9, it would be 21. For query 3 it would be 13. Is there a way to do this with time complexity better than O(len(array) x len(queries)) ? My function to do this in python is as follows def find_min_costs(array,queries): for query in queries: cost = 0 for arr in array: cost+= abs(arr-query) print(cost,end = ' ') but the time complexity of my code is O(len(array) x len(queries)) I would like to do this with a better time complexity such as O(len(array))
1
3
1
Sort the input array (if it is not sorted already). Now, for every query q, find the position pos where the query number would be placed in the array (use binary search). Next, observe that, for all numbers to the left, array[1], ..., array[pos], their impact would be q - array[i]. The total is pos * q - sum_of_array_from_1_to_pos. Similarly, for all numbers to the right, array[pos+1], ..., array[n], their impact would be array[i] - q. The total is sum_of_array_from_pos+1_to_n - (n-pos) * q. If you compute prefix sums of the sorted array in advance, the above quantities can be found in O(1). In total, the preparation step is sorting the array in O(n log n), where n is the length of the array. Then computing prefix sums in O(n). For each query, the work can be done in O(log n) for binary search and then O(1) for computing the answer. The total is O(m log n), where m is the number of queries and n is the length of the array. How to use prefix sums: if we have prefix sums p[i] = array[1] + array[2] + ... + array[i], then a quantity like sum_of_array_from_x_to_y is just p[y] - p[x-1]. How to compute prefix sums: p[0] = 0, p[i] = p[i - 1] + array[i].
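A sketch of that approach in Python, using 0-based indexing instead of the 1-based description above; the array and queries are the ones from the question, and the expected costs (11, 21, 13) match the question:

from bisect import bisect_left
from itertools import accumulate

def find_min_costs(array, queries):
    arr = sorted(array)
    prefix = [0] + list(accumulate(arr))   # prefix[i] = arr[0] + ... + arr[i-1]
    total = prefix[-1]
    costs = []
    for q in queries:
        pos = bisect_left(arr, q)                              # elements before pos are < q
        left = q * pos - prefix[pos]                           # sum of (q - arr[i]) on the left
        right = (total - prefix[pos]) - q * (len(arr) - pos)   # sum of (arr[i] - q) on the right
        costs.append(left + right)
    return costs

print(find_min_costs([1, 3, 5, 6, 9], [5, 9, 3]))  # [11, 21, 13]

Sorting and the prefix sums cost O(n log n) once; each query is then O(log n) for the binary search plus O(1) arithmetic.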
2023-04-05 03:18:57
0
python,python-3.x,while-loop
2
75,935,712
Use while loop to check for the condition
75,935,385
false
46
I was trying to make a program in which, every time the user enters a number, if the number is in the range between 0 and 2 it returns the number the user has entered. Otherwise, it keeps printing the prompt "Enter selection: " to the console until the user has entered a value that is in the range. My problem is that my while loop does not seem to work: no matter what value I enter (even if it's outside the range 0 to 2), the value is still returned. Below is my existing code: def get_user_option_input(): number = input('Please enter a number: ') num = int(number) while not num > 0 and num < 2: print('Enter selection: ') return num number = get_user_option_input() print(number) Hope someone can help me figure this out. Much appreciated!
0
2
1
If the first number the user inputs is outside the range, then it will run print('Enter selection: ') forever, because num is never changed inside the loop. To fix this, you need to change print('Enter selection: ') to num = int(input('Enter selection: ')). Also, change while not num > 0 and num < 2: to while not (num > 0 and num < 2):. Otherwise, not only negates num > 0. By adding parentheses, not negates both num > 0 and num < 2.
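A short sketch of the corrected function, assuming the intended range check really is num > 0 and num < 2 as in the answer (adjust the comparison if the endpoints should count as valid):

    def get_user_option_input():
        num = int(input('Please enter a number: '))
        while not (num > 0 and num < 2):            # keep asking until the value is in range
            num = int(input('Enter selection: '))
        return num

    print(get_user_option_input())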
2023-04-05 08:16:38
0
python,nlp,gensim,word2vec
1
75,942,808
Error while importing gensim library value error
75,937,149
false
116
while i am importing the library gensim import gensim it throws this error --------------------------------------------------------------------------- ValueError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_14076/1427573704.py in <module> ----> 1 import gensim ~\AppData\Local\Programs\Python\Python310\lib\site-packages\gensim\__init__.py in <module> 9 import logging 10 ---> 11 from gensim import parsing, corpora, matutils, interfaces, models, similarities, utils # noqa:F401 12 13 ~\AppData\Local\Programs\Python\Python310\lib\site-packages\gensim\corpora\__init__.py in <module> 4 5 # bring corpus classes directly into package namespace, to save some typing ----> 6 from .indexedcorpus import IndexedCorpus # noqa:F401 must appear before the other classes 7 8 from .mmcorpus import MmCorpus # noqa:F401 ~\AppData\Local\Programs\Python\Python310\lib\site-packages\gensim\corpora\indexedcorpus.py in <module> 12 import numpy 13 ---> 14 from gensim import interfaces, utils 15 16 logger = logging.getLogger(__name__) ~\AppData\Local\Programs\Python\Python310\lib\site-packages\gensim\interfaces.py in <module> 17 import logging 18 ---> 19 from gensim import utils, matutils 20 21 ~\AppData\Local\Programs\Python\Python310\lib\site-packages\gensim\matutils.py in <module> 1028 try: 1029 # try to load fast, cythonized code if possible -> 1030 from gensim._matutils import logsumexp, mean_absolute_difference, dirichlet_expectation 1031 1032 except ImportError: ~\AppData\Local\Programs\Python\Python310\lib\site-packages\gensim\_matutils.pyx in init gensim._matutils() ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject` What is the solution of this error?
0
1
1
This likely means you've installed mismatched versions of libraries that closely depend on each other, like Gensim and Numpy. In such cases, as ~zanga suggests, starting from a fresh Python environment with no optional libraries yet installed is likely to help. As you install just what you need, you'll get the latest, compatible libraries. In general it is best to use an independent Python virtual environment for each of your coding projects, as supported by the Python 3 venv functionality, or equally well by the conda package/environment management tool. Then, you don't change the shared system Python environment into a mix of mismatched libraries, which risks: library choices made in an earlier project negatively impacting later projects; new libraries in a later project inadvertently breaking earlier projects, whose preferred versions are changed out from under them; and being unsure which libraries/versions a project needs when you move it elsewhere or share it with others (because sufficient libraries just happened to already be there without proper explicit declaration).
2023-04-05 10:29:13
0
python,sftp,paramiko,pysftp
2
75,940,168
Paramiko SFTPClient class throws "Authentication failed", but Transport class and WinSCP work
75,938,329
false
242
I have an issue using Paramiko and connecting to an SFTP server. The connecting work fine using same credentials in WinSCP. I am also able to use pysftp. But I would prefer to be able to use Paramiko, which gives access to better control of timeouts (removed in the example below). I get the following error in the last Paramiko example: paramiko.ssh_exception.AuthenticationException: Authentication failed. What is even more strange, the exact same code, works for my colleague. So anyone have a clue what I should look into? import pysftp from base64 import decodebytes import paramiko username = "my_username" password = "my_secret_password" hostname = "ftp.hostname.tld" directory = "/protected/directory" port = 22 keydata = b"""AAAAB...TuQ==""" key = paramiko.RSAKey(data=decodebytes(keydata)) # PYSFTP is working! cnopts = pysftp.CnOpts() cnopts.hostkeys.add(hostname, 'ssh-rsa', key) with pysftp.Connection(host=hostname, username=username, password=password, cnopts=cnopts) as sftp: files = sftp.listdir_attr(directory) test = "test" # This pure Paramiko is also working! with paramiko.Transport((hostname,port)) as transport: transport.connect( hostkey=key, username=username, password=password ) with paramiko.SFTPClient.from_transport(transport) as sftp: files = sftp.listdir_attr(directory) test = "test" # This version where the transport is established using sshclient, does not work. ssh_client = paramiko.SSHClient() ssh_client.get_host_keys().add(hostname=hostname, keytype="ssh-rsa", key=key) ssh_client.connect( hostname=hostname, username=username, password=password, port=port ) sftp = ssh_client.open_sftp() files = sftp.listdir_attr(directory) test = "test" Paramiko log from a failed session: DEBUG:paramiko.transport:starting thread (client mode): 0x5dad0e50 DEBUG:paramiko.transport:Local version/idstring: SSH-2.0-paramiko_3.1.0 DEBUG:paramiko.transport:Remote version/idstring: SSH-2.0-srtSSHServer_11.00 INFO:paramiko.transport:Connected (version 2.0, client srtSSHServer_11.00) DEBUG:paramiko.transport:=== Key exchange possibilities === DEBUG:paramiko.transport:kex algos: diffie-hellman-group14-sha256, diffie-hellman-group-exchange-sha256, diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha1, diffie-hellman-group1-sha1, diffie-hellman-group-exchange-sha512@ssh.com DEBUG:paramiko.transport:server key: ssh-rsa DEBUG:paramiko.transport:client encrypt: aes256-ctr, aes128-ctr, aes256-cbc, twofish256-cbc, aes128-cbc, twofish256-ctr, aes192-ctr, twofish192-ctr, twofish128-ctr, twofish128-cbc DEBUG:paramiko.transport:server encrypt: aes256-ctr, aes128-ctr, aes256-cbc, twofish256-cbc, aes128-cbc, twofish256-ctr, aes192-ctr, twofish192-ctr, twofish128-ctr, twofish128-cbc DEBUG:paramiko.transport:client mac: hmac-sha3-512, hmac-sha3-384, hmac-sha3-256, hmac-sha3-224, hmac-sha2-512, hmac-sha2-384, hmac-sha2-256, hmac-sha2-224, hmac-sha1 DEBUG:paramiko.transport:server mac: hmac-sha3-512, hmac-sha3-384, hmac-sha3-256, hmac-sha3-224, hmac-sha2-512, hmac-sha2-384, hmac-sha2-256, hmac-sha2-224, hmac-sha1 DEBUG:paramiko.transport:client compress: none DEBUG:paramiko.transport:server compress: none DEBUG:paramiko.transport:client lang: <none> DEBUG:paramiko.transport:server lang: <none> DEBUG:paramiko.transport:kex follows: False DEBUG:paramiko.transport:=== Key exchange agreements === DEBUG:paramiko.transport:Kex: diffie-hellman-group-exchange-sha256 DEBUG:paramiko.transport:HostKey: ssh-rsa DEBUG:paramiko.transport:Cipher: aes128-ctr DEBUG:paramiko.transport:MAC: hmac-sha2-256 
DEBUG:paramiko.transport:Compression: none DEBUG:paramiko.transport:=== End of kex handshake === DEBUG:paramiko.transport:Got server p (2048 bits) DEBUG:paramiko.transport:kex engine KexGexSHA256 specified hash_algo <built-in function openssl_sha256> DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Trying discovered key b'f28c2e56806b573d7fc45b9722242e5b' in C:\Users\my-user/.ssh/id_rsa DEBUG:paramiko.transport:userauth is OK DEBUG:paramiko.transport:Finalizing pubkey algorithm for key of type 'ssh-rsa' DEBUG:paramiko.transport:Our pubkey algorithm list: ['rsa-sha2-512', 'rsa-sha2-256', 'ssh-rsa'] DEBUG:paramiko.transport:Server did not send a server-sig-algs list; defaulting to our first preferred algo ('rsa-sha2-512') DEBUG:paramiko.transport:NOTE: you may use the 'disabled_algorithms' SSHClient/Transport init kwarg to disable that or other algorithms if your server does not support them! INFO:paramiko.transport:Authentication (publickey) failed. INFO:paramiko.transport:Disconnect (code 2): unexpected service request
0
1
1
Adding the setting look_for_keys=False to the connect() call makes the above code work. Thanks for pointing me in the right direction @martin-prikryl
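For reference, a sketch of the SSHClient variant from the question with that flag added (allow_agent=False is an extra, optional setting against the same kind of unwanted key lookup):

    ssh_client = paramiko.SSHClient()
    ssh_client.get_host_keys().add(hostname=hostname, keytype="ssh-rsa", key=key)
    ssh_client.connect(
        hostname=hostname,
        username=username,
        password=password,
        port=port,
        look_for_keys=False,   # do not try the keys discovered in ~/.ssh
        allow_agent=False,     # optionally skip the SSH agent as well
    )
    sftp = ssh_client.open_sftp()
    files = sftp.listdir_attr(directory)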
2023-04-05 15:11:54
0
python,performance,scikit-learn,jupyter-notebook,thread-priority
1
75,941,995
Speedup Sklearn model fitting
75,941,142
false
15
I am profiling Sklearn model: clf = GridSearchCV(..., n_jobs=-1) %time clf.fit(X_train, y_train) ... CPU times: user 2min 35s, sys: 3.07 s, total: 2min 38s Wall time: 8min 40s Wall Time is significantly larger than CPU total time. Does it mean, that Sklearn is not fully utilizing CPU resources? I haven't any programs on my PC started explicitly, except Jupyter Notebook. How do I can increase CPU priority for all processes, that Sklearn have started? OS: Kubuntu 22.04
0
1
1
Much higher wall time than CPU time usually indicates an I/O bottleneck, but that should not happen training a scikit-learn model on data you have in memory. The next thing I would try is setting n_jobs to the number of physical CPU cores.
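A sketch of that suggestion; psutil is an assumption here (it is not part of scikit-learn) and is only used to count physical cores, and the estimator and grid are placeholders:

    import psutil
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, random_state=0)
    n_physical = psutil.cpu_count(logical=False)        # physical cores, not hyperthreads
    clf = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, n_jobs=n_physical)
    clf.fit(X, y)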
2023-04-05 19:56:44
1
python,conda,cvxpy
2
75,951,249
Anaconda Python 3.9 (Windows): Why can't CVXPY find qdldl.dll?
75,943,511
false
291
I'm running Anaconda Python 3.9 on Windows 10: Python 3.9.16 (main, Mar 8 2023, 10:39:24) [MSC v.1916 64 bit (AMD64)] on win32 I installed CVXPY in an Anaconda environment using the command: conda install -c conda-forge cvxpy I then attempted to run the test suite via (qutip-env) C:\Users\ray>pytest --pyargs cvxpy.tests and I obtain ================================================= test session starts ================================================= platform win32 -- Python 3.9.16, pytest-7.1.2, pluggy-1.0.0 rootdir: C:\Users\ray plugins: rerunfailures-10.1 collecting ... (CVXPY) Apr 05 12:27:19 PM: Encountered unexpected exception importing solver OSQP: ImportError('DLL load failed while importing qdldl: The specified module could not be found.') collecting 0 items (CVXPY) Apr 05 12:27:20 PM: Encountered unexpected exception importing solver OSQP: ImportError('DLL load failed while importing qdldl: The specified module could not be found.') collecting 1189 items (CVXPY) Apr 05 12:27:20 PM: Encountered unexpected exception importing solver OSQP: ImportError('DLL load failed while importing qdldl: The specified module could not be found.') collected 1201 items I assumed that the problem was that the directory containing the DLL qdldl.dll wasn't on my PATH, so I updated my system environment variable to include it: C:\Users\ray\anaconda3\envs\qutip-env\Library\bin Closing and restarting the Anaconda prompt, I use (qutip-env) C:\Users\ray>echo %PATH% and I see this directory. Python sees it as well if I enter: (qutip-env) C:\Users\ray>python Python 3.9.16 (main, Mar 8 2023, 10:39:24) [MSC v.1916 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> print(os.environ['PATH']) But I see the same "failure to load" error when I test CVXPY again. Is there another way to tell Python/CVXPY where to look for this DLL? (I actually first saw this problem when trying to import the QuTiP toolbox in a Jupyter notebook, but it's easy to replicate in a command prompt.)
0.099668
1
1
The pointer that Jay provided gave the correct clue: I needed to also add C:\Users\ray\anaconda3\envs\qutip-env\Library\lib to my PATH. I don't know how general this solution is for Python-related DLL load failures on Windows.
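On Python 3.8+ there is also an in-process alternative to editing the system PATH; a sketch, assuming the Library\bin and Library\lib folders of the environment from the question:

    import os

    for d in (
        r"C:\Users\ray\anaconda3\envs\qutip-env\Library\bin",
        r"C:\Users\ray\anaconda3\envs\qutip-env\Library\lib",
    ):
        if os.path.isdir(d):
            os.add_dll_directory(d)   # Windows-only, Python 3.8+

    import cvxpy   # import only after the DLL directories are registered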
2023-04-06 05:45:02
0
python,autosave,isort
2
75,946,760
When using autosave with black and isort in vscode, isort does not work
75,946,225
false
404
I want to use autosave in Visual Studio Code and apply commonly used tools like flake8, mypy, isort, and black. Flake8, mypy, and black work fine, but isort doesn't work at all. Is there a way to solve this problem? My settings.json file looks like this: { "python.terminal.activateEnvInCurrentTerminal": false, "python.defaultInterpreterPath": "${workspaceFolder}\\.venv\\Scripts\\python.exe", "python.linting.enabled": true, "python.linting.lintOnSave": true, "python.linting.pylintEnabled": false, "python.linting.flake8Enabled": true, "python.linting.flake8Path": "${workspaceFolder}\\.venv\\Scripts\\pflake8.exe", "python.linting.mypyEnabled": true, "python.formatting.provider": "black", "[python]": { "editor.codeActionsOnSave": { "source.organizeImports": true }, "editor.formatOnSave": true } }
0
1
1
You can try adding the following line to your settings.json file: "python.sortImports.path": "${workspaceFolder}\\.venv\\Scripts\\isort.exe" After that, save the settings.json file and restart VS Code. Hope this helps.
2023-04-06 06:36:42
0
django,pdf,compression,wkhtmltopdf,python-pdfkit
2
75,998,327
Is there any way to highly compress PDF files through python?
75,946,538
false
443
I have generated a PDF file through python using pdfkit library running on django server. The size of pdf generated is 43.2 MB. Total pages in pdf = 15. Each page has 70 images, each image size = 623 bytes. Tech Stack version used- -Python = 3.8.16 -pdfkit = 1.0.0 -wkhtmltopdf = 0.12.6 -Django = 3.2.16 System OS = Ubuntu 22.04.2 LTS Requirement is to compress this pdf file without compromising with the quality of content, images in it. Any approach or suggestions for improvements in the things tried? Things tried: PDFNetPython3 python package- requires license/key file. Ghost-script command from system(ubuntu) terminal to reduce file. shrinkpdf.sh - file got from github. ps2pdf command from system terminal. pdfsizeopt cpdf Making zip of original file Changing dpi value in pdfkit options. File size after compression with quality of content maintained = 23.7 MB. Expectation is have more reduction in file size.
0
1
1
I had a similar problem: one report generated with pdfkit resulted in a 32 MB file that I couldn't deliver through Discord. I ended up changing my code to create the report using fpdf2, and the report is now less than 2 MB.
2023-04-06 13:55:40
1
python,class,instance
2
75,950,429
In Python, when to modify self and when to return new instance?
75,950,352
false
84
I am very often in a great dilemma, when should a class modify itself and when to return a new modified instance. Lets say I want to program a simple sequence library. class Seq: def __init__(self, seq): self.seq = seq and then I want to rename the elements in the sequence, there are two options for this method in Seq: def rename(lookup): self.seq = [lookup[e] for e in self.seq] or def rename(lookup): return Seq([lookup[e] for e in self.seq]) I will be dealing with more complicated structures, for exampke, if I am creating a graph and I want to put it into unique canonical (unique vertex enumeration) form, should it put itself in this form or return a new instance. What are most common practices about this and when to choose the two options? My thinking is this: you have to participate what the users will have most use of. But is this def rename(lookup, create_new_instance = False): if create_new_instance: return [lookup[e] for e in self.seq] else: self.seq = [lookup[e] for e in self.seq] a good practice? Or should I just implement two different methods, one as a static, that returns new object and one non-static that modifies itself? Like list.sort() and sorted(list)? But then there is a lot of overhead if I have a lot of methods that modify the object.
0.099668
1
2
Typically, for built-ins the rule is: If the class instances are logically mutable, and the method makes sense as a mutation method, only make mutating methods (and they always return None, so there is no confusion as to whether they returned themselves mutated, or a mutated copy) If the class instances are logically immutable, you have no choice, all such "mutating" methods return new objects (note: New objects of that type typically; rename returning an unrelated list instead of a new Seq would be weird) Do not have methods that vary in the return like that based on the argument, that's pointlessly confusing, especially if people pass it positionally; quick, what does obj.rename(xyz, True) mean as opposed to obj.rename(xyz, False)? It's more work for you, and more work for them, just don't do it. Since your class is logically mutable, follow the rules for #1. Don't make a bunch of variant methods though, just define slicing (so they can do myseq[:] for a shallow copy) and/or a copy method (list provides both, you may as well). If the user wants to preserve the original thing, they copy first, then mutate the result. Makes it much clearer in use, and avoids massive code duplication.
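A sketch of that convention applied to the Seq example: the mutating method returns None (like list.sort()), and a copy method lets callers keep the original:

    class Seq:
        def __init__(self, seq):
            self.seq = list(seq)

        def rename(self, lookup):
            """Mutate in place; returns None, like list.sort()."""
            self.seq = [lookup[e] for e in self.seq]

        def copy(self):
            return Seq(self.seq)

    s = Seq(["a", "b"])
    renamed = s.copy()
    renamed.rename({"a": "x", "b": "y"})
    print(s.seq, renamed.seq)   # ['a', 'b'] ['x', 'y']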
2023-04-06 13:55:40
0
python,class,instance
2
75,950,410
In Python, when to modify self and when to return new instance?
75,950,352
false
84
I am very often in a great dilemma, when should a class modify itself and when to return a new modified instance. Lets say I want to program a simple sequence library. class Seq: def __init__(self, seq): self.seq = seq and then I want to rename the elements in the sequence, there are two options for this method in Seq: def rename(lookup): self.seq = [lookup[e] for e in self.seq] or def rename(lookup): return Seq([lookup[e] for e in self.seq]) I will be dealing with more complicated structures, for exampke, if I am creating a graph and I want to put it into unique canonical (unique vertex enumeration) form, should it put itself in this form or return a new instance. What are most common practices about this and when to choose the two options? My thinking is this: you have to participate what the users will have most use of. But is this def rename(lookup, create_new_instance = False): if create_new_instance: return [lookup[e] for e in self.seq] else: self.seq = [lookup[e] for e in self.seq] a good practice? Or should I just implement two different methods, one as a static, that returns new object and one non-static that modifies itself? Like list.sort() and sorted(list)? But then there is a lot of overhead if I have a lot of methods that modify the object.
0
1
2
It mainly depends on what you want the user to be aware of in your structure. On one hand, modifying self.seq ensures that you keep entire control of your structure, because you can control how this field is accessed (through properties, for example). On the other hand, returning the collection directly allows the user to make any transformation on it, so you will not keep control of what happens to seq. It can entirely break your class logic if the right checks are not done.
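A small sketch of the first point: exposing seq through a read-only property keeps control over how the field is accessed, while rename still mutates the internal state:

    class Seq:
        def __init__(self, seq):
            self._seq = list(seq)

        @property
        def seq(self):
            return tuple(self._seq)   # callers get a read-only view, not the internal list

        def rename(self, lookup):
            self._seq = [lookup[e] for e in self._seq]

    s = Seq([1, 2])
    print(s.seq)   # (1, 2)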
2023-04-06 14:26:11
0
python,pandas
1
75,951,954
Exception: Reindexing only valid with uniquely valued Index objects - indexes are unique
75,950,671
true
43
I am attempting to concat two dataframes. When I tried doing so, I was hit with the error that the Reindexing only valid with uniquely valued Index objects while trying to run this code: df_3=pd.concat([df_1,df_2]). I have added the following checks and updates to try and identify the issue: df_1= df_1.reset_index(drop=True) df_2= df_2.reset_index(drop=True) try: df_3=pd.concat([df_1,df_2], ignore_index=True) except Exception as e: print(df_1.index.is_unique) print(df_1.index.to_list()) print(df_2.index.is_unique) print(df_2.index.to_list()) df_1.to_csv("../../df_1.csv") df_2.to_csv("../../df_2.csv") raise Exception(e) This results in the following output: True [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 8 7, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140] True [0, 1] Reindexing only valid with uniquely valued Index objects I do not know why this is occurring - I do not get the same issue when I am testing in a console. Please advise!
1.2
1
1
The error was that I had duplicate column names: two columns in one dataframe with the same name. Pandas throws the same error in that situation. Thank you to @Timeless for finding the issue!
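A quick way to check for this, sketched with a made-up frame that has the problem:

    import pandas as pd

    df = pd.DataFrame([[1, 2, 3]], columns=["a", "b", "a"])   # "a" appears twice
    dupes = df.columns[df.columns.duplicated()]
    print(list(dupes))   # ['a']  -> rename or drop these columns before concat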
2023-04-06 16:27:03
0
python,pandas,dataframe
2
75,951,867
How can I use pandas.query() to check if a string exists in a list within the dataframe?
75,951,737
false
53
Let's say I have a dataframe that looks like: col1 0 ['str1', 'str2'] 1 ['str3', 'str4'] 2 [] 3 ['str2', 'str4'] 4 ['str1', 'str3'] 5 [] I'm trying to craft a df.query() string that would be the equivalent of saying "'str3' in col1". So it would return: col1 1 ['str3', 'str4'] 4 ['str1', 'str3'] I've tried df.query("col1.str.contains('str3')") but that results in "None of [Float64Index([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan,\n ...\n nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],\n dtype='float64', length=652)] are in the [index]" I'm guessing because many of the lists in this column may be empty and convert to nan floats instead of strings? It's highly preferable that I use query strings for this since rather than list constructors, since I want this to be a script where other users can filter a dataframe using these query strings that they may craft.
0
1
1
Is it possible in your setup to use lambda functions? Because it seems that the data could be filtered with a boolean mask: df['col1'].apply(lambda x: 'str3' in x)
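Applied to the example data, the boolean mask would be used like this (a sketch):

    import pandas as pd

    df = pd.DataFrame({"col1": [["str1", "str2"], ["str3", "str4"], [],
                                ["str2", "str4"], ["str1", "str3"], []]})
    mask = df["col1"].apply(lambda x: "str3" in x)
    print(df[mask])   # rows 1 and 4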
2023-04-06 16:51:59
0
python,pip,python-module,python-packaging,pythonpath
2
75,952,494
Pip installs packages in the wrong directory
75,951,948
false
229
So I want to install the opencv-python package. I typed in pip install opencv-python and got this: Collecting opencv-python Downloading opencv_python-4.7.0.72-cp37-abi3-win_amd64.whl (38.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 38.2/38.2 MB 3.1 MB/s eta 0:00:00 Requirement already satisfied: numpy>=1.17.0 in c:\users\leo westerburg burr\appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\localcache\local-packages\python39\site-packages (from opencv-python) (1.24.2) Installing collected packages: opencv-python Successfully installed opencv-python-4.7.0.72 When I try and import the package (on IDLE) I get Traceback (most recent call last): File "<pyshell#1>", line 1, in <module> import cv2 ModuleNotFoundError: No module named 'cv2' This is the same for all the packages I have installed recently, like numpy. The thing is when I type sys.path into the IDLE I get C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\Lib\idlelib C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\python311.zip C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\DLLs C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\Lib C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311 C:\Users\Leo Westerburg Burr\AppData\Local\Programs\Python\Python311\Lib\site-packages Which are all in the AppData/Local/Programs directory, however the packages are stored in appdata\local\packages\pythonsoftwarefoundation.python.3.9_qbz5n2kfra8p0\... as you can see when I installed opencv-python - which I find weird; why are they installed there and not in programs\python ? I have tried reinstalling pip, and also downloading a newer version of python. What is weird is that I have Python311 and Python38 in my Python folder, but this weird folder that has the packages is python39? So my question is: how do I get pip to install packages in Programs\Python\Python311\..., rather than Packages\... ? Do I have to add something to my PATH?
0
2
1
It seems that you have both python3.9 and 3.11 installed. When just typing in pip install ... you probably install your package in python 3.9, whereas you run python 3.11 in IDLE. Try python -V in your command prompt; it will probably answer python3.11
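A quick way to confirm this from inside IDLE (or any interpreter) is to print which executable and site-packages directory it is actually using:

    import sys, site

    print(sys.version)              # e.g. 3.11.x here vs 3.9.x for the pip that installed cv2
    print(sys.executable)           # path of the interpreter that is running
    print(site.getsitepackages())   # where this interpreter looks for installed packages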
2023-04-06 22:24:00
1
python,python-3.x,windows,virtual-machine
1
75,954,507
ValueError: source code string cannot contain null bytes
75,954,211
true
178
I'm originally an Ubuntu user, but I have to use a Windows Virtual Machine for some reason. I was trying to pip-install a package using the CMD, however, I'm getting the following error: from pip._vendor.packaging.utils import canonicalize_name ValueError: source code string cannot contain null bytes I used pip install numpy and pip3 install numpy along with other commands I found while tying to fix the problem. I checked that pip is available and reinstalled Python to make sure the path is added. I've also made sure that I'm running everything as an administrator. Everything seems to be installed properly, but I keep getting that error. I've also checked almost all other StackOverflow questions related to this error message. How can I solve this?
1.2
2
1
The error occurred while using "Python 3.10.11 (64-bit)". Though I reinstalled it, the issue continued. When I downgraded to "Python 3.9.0 (64-bit)", the issue was solved.
2023-04-07 00:05:52
0
python,conda
3
76,227,643
conda error: zstandard could not be imported
75,954,582
false
3,357
My Conda (on Ubuntu 18.04) has the following error message but all functions are running correctly: /home/td7920/miniconda3/lib/python3.8/site-packages/conda_package_streaming/package_streaming.py:19: UserWarning: zstandard could not be imported. Running without .conda support. warnings.warn("zstandard could not be imported. Running without .conda support.") /home/td7920/miniconda3/lib/python3.8/site-packages/conda_package_handling/api.py:29: UserWarning: Install zstandard Python bindings for .conda support _warnings.warn("Install zstandard Python bindings for .conda support") Collecting package metadata (repodata.json): | /home/td7920/miniconda3/lib/python3.8/site-packages/conda_package_streaming/package_streaming.py:19: UserWarning: zstandard could not be imported. Running without .conda support. warnings.warn("zstandard could not be imported. Running without .conda support.") /home/td7920/miniconda3/lib/python3.8/site-packages/conda_package_handling/api.py:29: UserWarning: Install zstandard Python bindings for .conda support _warnings.warn("Install zstandard Python bindings for .conda support") I did pip install zstandard and conda install zstandard. When I do conda list, both are found, zstandard 0.19.0 py38h5945529_1 conda-forge zstd 1.5.5.1 pypi_0 pypi But still get the error.
0
4
1
I figured out a way to fix this issue. We need to import zstandard in the following two .py files, and everything will then run smoothly. /home/ubuntu/anaconda3/lib/python3.9/site-packages/conda_package_handling/api.py /home/ubuntu/anaconda3/lib/python3.9/site-packages/conda_package_streaming/package_streaming.py
2023-04-07 09:52:19
3
python,python-3.x,pandas,ipython,spyder
2
75,982,082
Spyder Console Throws "[SpyderKernelApp] ERROR | Exception in message handler" error
75,957,345
false
613
I am new to Python and use Spyder as my IDE, and I mainly use it for data analysis. A few days ago I reinstalled Spyder for some reasons. Now when I want to directly enter pandas-related commands such as df.info() or df.describe() in the iPython console, I see the error message after the output (I do get the right output though). It shows: [SpyderKernelApp] ERROR | Exception in message handler: Traceback (most recent call last): File "/opt/homebrew/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 409, in dispatch_shell await result File "/opt/homebrew/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 798, in inspect_request reply_content = self.do_inspect( ^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 555, in do_inspect bundle = self.shell.object_inspect_mime( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 1838, in object_inspect_mime return self.inspector._get_info( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/IPython/core/oinspect.py", line 738, in _get_info info_dict = self.info(obj, oname=oname, info=info, detail_level=detail_level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/homebrew/lib/python3.11/site-packages/IPython/core/oinspect.py", line 838, in info if info and info.parent and hasattr(info.parent, HOOK_NAME): File "/opt/homebrew/lib/python3.11/site-packages/pandas/core/generic.py", line 1466, in __nonzero__ raise ValueError( ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Weirdly, this only shows up when I attempt to type the commands manually, and only when I use pandas-related commands. When I run the commands from scripts, or even simply use the up arrow key to select commands in the console, the error message won't show; It only shows when I manually type the commands. I've looked the error up on the internet but found no discussion. How do I deal with this error? I reinstalled Spyder, Python, iPython, ipykernel, spyder-kernels, but nothing seemed to help.
0.291313
10
1
Same here. New-ish install of Spyder with the following setup details: Spyder 5.4.2 standalone Miniconda environment with Python 3.9.16, Pandas 1.5.3, and spyder-kernels 2.4.2 Windows 11 22H2 (OS build 22621.1485) Lenovo Thinkpad X1 Extreme with i7/3050Ti/32GB/1TB Some observations: it also happens with Series methods, e.g., df["column"].min() the ValueError message says "truth of a DataFrame is ambiguous" vs the Series error I see more commonly; not sure if this could help with troubleshooting there's NO error when using different slicing methods The 3rd point above does offer a workaround, if not a true solution (hence me posting as an answer versus comment). Specifically, df.loc[rows, columns].<method> works without issue. This would mean a DataFrame method like describe would be implemented as df.loc[:, :].describe().
2023-04-07 10:07:41
1
python,machine-learning,artificial-intelligence,yolov5,fitness
1
75,958,288
YOLOv5: does best.pt control for overfitting?
75,957,447
true
318
After each YOLOv5 training, two model files are saved: last.pt and best.pt. I'm aware that: last.pt is the latest saved checkpoint of the model. This will be updated after each epoch. best.pt is the checkpoint that has the best validation loss so far. It is updated whenever the model fitness improves. Fitness is defined as a weighted combination of mAP@0.5 and mAP@0.5:0.95 metrics: def fitness(x): # Returns fitness (for use with results.txt or evolve.txt) w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] return (x[:, :4] * w).sum(1) My question is, if the training continued for too many epochs (and last.pt is thus overfitted), is best.pt then a checkpoint from when the training was not yet overfit? In other words, does best.pt control for overfitting?
1.2
1
1
We can assume that best.pt performs well on non-training data as long as the model is regularized. However, I have seen researchers choose a model that does not have the strictly best result, within some limit on the validation loss. If training your model is not too costly, you could consider that option; otherwise, just pick best.pt. In addition, you can ensemble several of the saved models (the best, the second best, etc.): although their validation losses differ only slightly, together they can produce better performance than using the best model alone.
2023-04-07 10:13:09
0
python,attributeerror,jmespath
2
75,957,595
AttributeError: module 'jmespath' has no attribute 'search'
75,957,487
false
106
I encountered that error: import jmespath AttributeError: Module jmespath has no attribute search I used the version 1.0.1 of this module. I wanted to run the following code: import jmespath person = { "person": [ {'id': 1, 'name': 'ali', 'age': 42, 'children': [ {'name': 'sara', 'age': 7}, {'name': 'sima', 'age': 15}, {'name': 'sina', 'age': 2} ]}, {'id': 2, 'name': 'reza', 'age': 65, 'children': []} ] } print(jmespath.search('person[*].children[?age>`10`].name', person))
0
1
1
Which version of Python are you using? If it's an older version, you can try running the code with a higher version of Python.
2023-04-07 14:24:37
0
python,python-3.x,string
4
75,959,422
How do I change the value of a variable inside a string?
75,959,301
false
73
I'm trying to simplify the error message since on my project it's expected to print a lot of errors. I tried this simple example but nothing seems to work. How can I make it so 'n' iterates and actually prints the position where there is an error? It always prints as if 'n=1'. n = 1 error = f"Error word {n}" def test(word): if len(word)>5: print(error) ar = ['pizza', 'toasts', 'ham', 'mayo', 'houses'] for word in ar: test(word) n+=1 I've tried error = f"Error word {n}" error = 'Error word '+ str(n) error = "Error word {n}".format(n) But maybe I'm not looking at the issue the correct way?
0
1
1
That is because the f-string on line 2 is evaluated once, outside the loop: the code never updates the value of n inside error after that, so it stays as error = 'Error word 1'.
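One possible fix, sketched below: build the message where the current value of n is known, for example by passing the position into the function:

    def test(word, n):
        if len(word) > 5:
            print(f"Error word {n}")   # the f-string is evaluated here, with the current n

    ar = ['pizza', 'toasts', 'ham', 'mayo', 'houses']
    for n, word in enumerate(ar, start=1):
        test(word, n)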
2023-04-08 02:42:31
1
python,nginx,show,http-status-code-500,mlrun
1
76,088,114
mlrun cann't show the traning graph by AttributeError(\"'K8sHelper' object has no attribute 'v1api'\")
75,963,069
false
54
I have run mlrun by docker-compose. But when I finished the sample training, I couldn't see the results graph on the management pages. I get 500 error and the details likes: { "data": { "detail": "AttributeError(\"'K8sHelper' object has no attribute 'v1api'\")" }, "status": 500, "statusText": "Internal Server Error", "headers": { "connection": "keep-alive", "content-length": "76", "content-type": "application/json", "date": "Fri, 07 Apr 2023 09:45:51 GMT", "server": "nginx/1.21.6" }, "config": { "url": "projects/quick-tutorial-root/files", "method": "get", "headers": { "Accept": "application/json, text/plain, */*" }, "params": { "path": "/auto-trainer-train/0/confusion-matrix.html" }, "baseURL": "/mlrun/api/v1", "transformRequest": [ null ], "transformResponse": [ null ], "timeout": 0, "xsrfCookieName": "XSRF-TOKEN", "xsrfHeaderName": "X-XSRF-TOKEN", "maxContentLength": -1, "maxBodyLength": -1, "transitional": { "silentJSONParsing": true, "forcedJSONParsing": true, "clarifyTimeoutError": false } }, "request": {} } I do not find any info for this question, please help me, thanks!
0.197375
1
1
Yes, I have found a method to solve this problem myself. The problem is due to a Kubernetes config conflict, so I changed the code in the mlrun-api container and committed a new image: after line 66 in mlrun/api/api/endpoints/files.py I temporarily added "use_secrets=False". The graph now shows normally in the UI.
2023-04-08 07:08:33
0
python,procedure,python-pptx
1
75,963,852
pptx issues with fonts size and color from procedure
75,963,821
false
20
ive created a code to create PPT presentation from a dataframe. defined a procedure "addtext" to add text passing some parameters. somehow, procedure is ignoring text size and color .. why ? thanks! from pptx import Presentation from pptx.util import Inches, Pt prs = Presentation() blank_slide_layout = prs.slide_layouts[6] import pandas as pd data = { 'titolo': ['Pizza Margherita', 'Bolognese', 'Fish and Chips', 'Tacos', 'Sushi', 'Paella', 'Spaghetti Carbonara', 'Couscous', 'Falafel', 'Goulash'], 'commenti': ['pomodoro, mozzarella, basilico', 'pomodoro, carne macinata, carote, sedano, cipolla', 'pesce fritto, patatine fritte', 'carne, cipolle, peperoni, pomodoro, avocado', 'riso, pesce crudo, alga nori', 'riso, pollo, gamberi, verdure', 'uova, pancetta, pecorino romano', 'couscous, agnello, verdure', 'fagioli, cipolle, spezie', 'carne, paprika, patate'], 'numero': [8, 12, 10, 9, 15, 20, 14, 18, 6, 16], 'paese': ['Italia', 'Italia', 'Regno Unito', 'Messico', 'Giappone', 'Spagna', 'Italia', 'Marocco', 'Medio Oriente', 'Ungheria'] } df = pd.DataFrame(data) def addtext(titlee,L,T,SS,RR,GG,BB): txBox = slide.shapes.add_textbox(Inches(L),Inches(T),Inches(1),Inches(1)) tf = txBox.text_frame p = tf.paragraphs[0] run = p.add_run() run.text = titlee font.name = 'Calibri' font.size = Pt(SS) font.color.rgb = RGBColor(RR,GG,BB) for index, row in df.iterrows(): slide = prs.slides.add_slide(blank_slide_layout) addtext(row['titolo'],.2,1,42,0, 0, 255) addtext(row['commenti'],.2,2,22,102, 255, 102) addtext('Costo medio: ' + str(row['numero']) + ' €',.2,5,18,32, 32, 255) prs.save('test.pptx')
0
1
1
You haven't associated the font variable with the run (the run variable): font needs to be taken from run.font before its name, size and color are set.
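A sketch of the fix inside addtext, assuming the RGBColor import that the original snippet is also missing (slide still comes from the surrounding loop, as in the question):

    from pptx.dml.color import RGBColor
    from pptx.util import Inches, Pt

    def addtext(titlee, L, T, SS, RR, GG, BB):
        txBox = slide.shapes.add_textbox(Inches(L), Inches(T), Inches(1), Inches(1))
        run = txBox.text_frame.paragraphs[0].add_run()
        run.text = titlee
        font = run.font                    # this is the association that was missing
        font.name = 'Calibri'
        font.size = Pt(SS)
        font.color.rgb = RGBColor(RR, GG, BB)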
2023-04-08 11:51:23
3
python,gradio
1
75,971,565
How to specify constant inputs for Gradio click handler?
75,965,051
true
260
If I have the following code: submit = gr.Button(...) submit.click( fn=my_func, inputs=[ some_slider_1, # gr.Slider some_slider_2, # gr.Slider some_slider_3, # gr.Slider ], outputs=[ some_text_field ] ) How can I substitute let's say some_slider_2 for a constant value like 2? If I simply write 2 in there, I get an error: AttributeError: 'int' object has no attribute '_id'
1.2
3
1
If you want a constant number input which is not visible on the UI you can pass gr.Number(value=2, visible=False)
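Using the names from the question, a sketch might look like this:

    submit.click(
        fn=my_func,
        inputs=[
            some_slider_1,
            gr.Number(value=2, visible=False),   # constant second input, hidden from the UI
            some_slider_3,
        ],
        outputs=[some_text_field],
    )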
2023-04-08 14:21:41
0
python,barcode-scanner,pyserial,serial-communication
1
75,965,791
Datalogic 8500xt misses the last digit while scanning barcodes using pyserial
75,965,769
false
46
I'm trying to scan barcodes using a Datalogic 8500xt scanner with pyserial in my Python program. However, I'm facing an issue where the scanner misses the last digit of the barcode. For instance, if I scan a barcode that has seven digits, such as "1234567", the scanner only returns "123456". The same issue occurs when I scan a barcode that has five digits, such as "09876", and the scanner returns "0987". I have tried to adjust the scanner's settings to fix the issue, but with no success. I also tried to increase the sleep time between scans, but it didn't help either. I'm using pyserial to communicate with the scanner, and I'm wondering if there is something I'm missing in my code. Here is a code snippet that I'm using to read data from the scanner: try: # device_port= serial_ports()[0] device_port= 'COM3' print("device_port : ",device_port) print("Connecting device") ser = serial.Serial( port = device_port, timeout = 1, baudrate=9600, parity=serial.PARITY_EVEN, stopbits=serial.STOPBITS_ONE, bytesize=serial.SEVENBITS, ) i = True while i: # if wt == 'S14\x0D': ser.write('S01\x0D'.encode('utf-8')) print("connected") # ser.write('\x05'.encode('utf-8')) # time.sleep(1) barcode=None read_val = ser.read(size=128) read_val = str(read_val, 'UTF-8') res = list(read_val) print(res) res = res[4:-1] if len(res)>0: i=False barcode=res ser.close() return {"barcode": barcode} except: return {"message": "No Barcode Device Connected"}
0
1
1
I can see three possible culprits: Off by one logic error: try res = res[4:-1] -> res = res[4:] Insufficient timeout time, try timeout=1 -> timeout=5, # Increase timeout to allow more time for data transfer Insufficient read buffer size: try ser.read(size=128) -> ser.read(size=256) Also remove the bare except and replace it with a more specific one, to avoid hiding errors while debugging.
2023-04-08 14:54:45
1
python,list,numpy
2
75,966,696
Find the most similar subsequence in another sequence when they both numeric and huge
75,965,921
false
66
I have two numeric and huge np.arrays (let's call them S1 and S2, such that len(S1)>>len(S2)>>N where N is a very large number). I wish to find the most likely candidate part of S1 to be equal to S2. The naive approach would be to compute a running difference between S2 and parts of S1. This would take too long (about 170 hours for a single comparison). Another approach I thought about was to manually create a matrix of windows, M, where each row i of M is S1[i:(i+len(S2)]. Then, under this approach, we can broadcast a difference operation. It is also infeasible because it takes a long time (less than the most naive, but still), and it uses all the RAM I have. Can we parallelize it using a convolution? Can we use torch/keras to do something similar? Bear in mind I am looking for the best candidate, so the values of some convolution just have to preserve order, so the most likely candidate will have the smallest value.
0.099668
1
1
I am assuming you are doing this as a stepping stone to find the perfect match My reason for assuming this is that you say: I wish to find the most likely candidate part of S1 to be equal to S2. Start with the first value in the small array. Make a list of all indices of the big array, that match that first value of the small array. That should be very fast? Let's call that array indices, and it may have values [2,123,457,513, ...] Now look at the second value in the small array. Search through all positions indices+1 of the big array, and test for matches to that second value. This may be faster, as there are relatively few comparisons to make. Write those successful hits into a new, smaller, indices array. Now look at the third value in the small array, and so on. Eventually the indices array will have shrunk to size 1, when you have found the single matched position. If the individual numerical values in each array are 0-255, you might want to "clump" them into, say, 4 values at a time, to speed things up. But if they are floats, you won't be able to. Typically the first few steps of this approach will be slow, because it will be inspecting many positions. But (assuming the numbers are fairly random), each successive step becomes much faster. Therefore the determining factor in how long it will take, will be the first few steps through the small array. This would demand memory size as large as the largest plausible length of indices. (You could overwrite each indices list with the next version, so you would only need one copy.) You could parallelise this: You could give each parallel process a chunk of the big array (s1). You could make the chunks overlap by len(s2)-1, but you only need to search the first len(s1) elements of each chunk on the first iteration: the last few elements are just there to allow you to detect sequences that end there (but not start there). Proviso As @Kelly Bundy points out below, this won't help you if you are not on a journey that ultimately ends in finding a perfect match.
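A numpy sketch of this progressive filtering; it assumes exact equality is the goal and returns whatever candidate positions survive:

    import numpy as np

    def candidate_positions(s1, s2):
        limit = len(s1) - len(s2) + 1
        indices = np.flatnonzero(s1[:limit] == s2[0])    # positions matching the first value
        for k in range(1, len(s2)):
            if len(indices) <= 1:
                break
            indices = indices[s1[indices + k] == s2[k]]  # keep only positions that still match
        return indices

    s1 = np.array([5, 1, 2, 3, 9, 1, 2, 4])
    s2 = np.array([1, 2, 4])
    print(candidate_positions(s1, s2))   # [5]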
2023-04-08 19:43:20
0
python,networking,wifi,scapy,packet-sniffers
1
76,346,461
Using Scapy and a wireless network card to scan network - getting Network is down Error
75,967,347
false
73
Disclaimer: I can't use aircrack I am using Ubuntu (if that helps) I have a wireless network adapter USB device that I am using (tenda N150) to scan networks. I have used a tutorial and in the tutorial they asked to put the device in monitor mode sudo ifconfig wlan0 down sudo iwconfig wlan0 mode monitor The code that I am running is: from scapy.all import * from threading import Thread import pandas import time import os # initialize the networks dataframe that will contain all access points nearby networks = pandas.DataFrame(columns=["BSSID", "SSID", "dBm_Signal", "Channel", "Crypto"]) # set the index BSSID (MAC address of the AP) networks.set_index("BSSID", inplace=True) def callback(packet): if packet.haslayer(Dot11Beacon): # extract the MAC address of the network bssid = packet[Dot11].addr2 # get the name of it ssid = packet[Dot11Elt].info.decode() try: dbm_signal = packet.dBm_AntSignal except: dbm_signal = "N/A" # extract network stats stats = packet[Dot11Beacon].network_stats() # get the channel of the AP channel = stats.get("channel") # get the crypto crypto = stats.get("crypto") networks.loc[bssid] = (ssid, dbm_signal, channel, crypto) def print_all(): while True: os.system("clear") print(networks) time.sleep(0.5) def change_channel(): ch = 1 while True: os.system(f"iwconfig {interface} channel {ch}") # switch channel from 1 to 14 each 0.5s ch = ch % 14 + 1 time.sleep(0.5) if __name__ == "__main__": # interface name, check using iwconfig interface = "wlxc83a35c2e0bb" # start the thread that prints all the networks printer = Thread(target=print_all) printer.daemon = True printer.start() # start the channel changer channel_changer = Thread(target=change_channel) channel_changer.daemon = True channel_changer.start() # start sniffing sniff(prn=callback, iface=interface, monitor=True) The error that I am getting is: OSError: [Errno 100] Network is down This is the output of the iwconfig wlxc83a35c2e0bb IEEE 802.11 Mode:Monitor Tx-Power=20 dBm Retry short long limit:2 RTS thr:off Fragment thr:off Power Management:off Any Ideas why this might happen ? the code that I tried it written above
0
1
1
So, the problem was that I forgot to run sudo ifconfig wlan0 up, and because of that the interface stayed down.
2023-04-08 23:09:53
3
python,installation,anaconda,macos-ventura
5
75,988,377
I can't install Anaconda on a MacBook Pro M1 with Ventura 13.3.1
75,968,081
true
4,353
this is my first question here :) When i try to install Anaconda on my MacBook (M1) with Ventura 13.3.1 i receive the following error: "This package is incompatible with this version of macOS." I tried the arm64 installer and the x86 installer, both lead to the same error message. I used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. I also did a complete reset of my MacBook, nothing changed. Does anyone have the same issue or know how I can fix this? Thanks a lot
1.2
2
4
If you have Homebrew installed, you should be able to run "brew install anaconda".
2023-04-08 23:09:53
-1
python,installation,anaconda,macos-ventura
5
76,641,130
I can't install Anaconda on a MacBook Pro M1 with Ventura 13.3.1
75,968,081
false
4,353
this is my first question here :) When i try to install Anaconda on my MacBook (M1) with Ventura 13.3.1 i receive the following error: "This package is incompatible with this version of macOS." I tried the arm64 installer and the x86 installer, both lead to the same error message. I used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. I also did a complete reset of my MacBook, nothing changed. Does anyone have the same issue or know how I can fix this? Thanks a lot
-0.039979
2
4
thanks "brew install anaconda".
2023-04-08 23:09:53
4
python,installation,anaconda,macos-ventura
5
76,047,692
I can't install Anaconda on a MacBook Pro M1 with Ventura 13.3.1
75,968,081
false
4,353
this is my first question here :) When i try to install Anaconda on my MacBook (M1) with Ventura 13.3.1 i receive the following error: "This package is incompatible with this version of macOS." I tried the arm64 installer and the x86 installer, both lead to the same error message. I used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. I also did a complete reset of my MacBook, nothing changed. Does anyone have the same issue or know how I can fix this? Thanks a lot
0.158649
2
4
I had the same problem as you. I installed for my account only instead of on Macintosh HD, and it worked like a breeze.
2023-04-08 23:09:53
1
python,installation,anaconda,macos-ventura
5
75,985,899
I can't install Anaconda on a MacBook Pro M1 with Ventura 13.3.1
75,968,081
false
4,353
this is my first question here :) When i try to install Anaconda on my MacBook (M1) with Ventura 13.3.1 i receive the following error: "This package is incompatible with this version of macOS." I tried the arm64 installer and the x86 installer, both lead to the same error message. I used Anaconda on the same MacBook just a few days ago, but after the update from Ventura 13.2.1 to 13.3 I couldn't open Jupyter Notebooks from within the Anaconda Navigator. First I thought that the problem might be caused by Anaconda, so I uninstalled it. However, now here I am, unable to install it again. I also did a complete reset of my MacBook, nothing changed. Does anyone have the same issue or know how I can fix this? Thanks a lot
0.039979
2
4
Try selecting a different partition/folder for the installation.
2023-04-09 07:07:46
0
python,squish
1
75,975,777
Is there any possibility to run all test suites and generate report?
75,969,199
false
51
Is there any command with squish runner or any option available in squish to run all the test suites in single run and generate report of complete project?
0
2
1
Nothing that is built into squishrunner, but if you specify the same "html" report folder for each execution, the suite reports will be stored in it and linked in the index.html page, so you can view them conveniently. Executing one suite after the other via a shell script or batch file is simple: just add each squishrunner call into such a file. As for reporting, please look into Squish Test Center, for which you should have 2 licenses per Squish GUI Tester license. This is because the HTML report format is deprecated and lacks features.
2023-04-09 09:01:52
0
python,networkx,graph-theory
2
75,971,752
Return true if a graph is a disjoint union of paths
75,969,617
false
60
I am trying to make an if statement with the condition that a graph is a disjoint union of paths using networkx, but I'm not sure how to go about it. I tried deleting an edge (u,v) in each node iteration, then checking if the first node in the graph to the first node without the edge (node1 to u) is a path, then checking the same thing from the second node to the last node (v to node 2). for i in G1.nodes: node_count = node_count+1 nodes.append(i) p = nx.neighbors(G1, i) for o in p: neigh.append(o) for j in neigh: G1.remove_edge(i, j) no = len(nodes) for k in range(0, node_count-1): path.append(nodes[k]) if nx.is_path(G1, path): for l in range(node_count-1, no-1): path2.append(nodes[l]) if nx.is_path(G1, path2): return True
0
2
1
A finite graph G is the disjoint union of paths iff every vertex has degree <= 2 and there are no cycles. Observations: |E| < |V| There can be any number of isolated points (degree zero) Procedure: return true if all nodes have degree 0 or 1 return false if any node has degree > 2 return false if any node has degree 2 and there are no nodes of degree 1 return false if the graph has any cycles. return true Cycle detection is linear in this graph because |E| < |V|. One of several fast ways to find cycles is repeatedly using BFS (once per connected component).
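A compact sketch of that characterization with networkx: a graph is a disjoint union of paths exactly when it has no cycles (it is a forest) and no vertex of degree greater than 2 (an empty graph would need its own special case, since is_forest rejects it):

    import networkx as nx

    def is_disjoint_union_of_paths(G):
        return all(d <= 2 for _, d in G.degree()) and nx.is_forest(G)

    G = nx.Graph()
    G.add_edges_from([(1, 2), (2, 3), (4, 5)])    # two paths: 1-2-3 and 4-5
    print(is_disjoint_union_of_paths(G))          # True

    G.add_edge(3, 1)                              # 1-2-3 becomes a cycle
    print(is_disjoint_union_of_paths(G))          # False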
2023-04-09 09:37:35
0
python
2
75,969,858
Is __init__ a class attribute?
75,969,736
false
157
Let's consider the example below: class Student: school = "stackoverflow" def __init__(self, name, course): self.name = name self.course = course I understand that __init__ is the constructor. I understand that it is a method (since it's a function within a class). Or, to be more precise, as I've come to understand from the numerous tutorials - an attribute that holds a method. But is __init__ considered a class attribute in this example? Would that be correct to say? Or is it only school that is considered a class attribute? Edit: I do understand that considering the essence of my question, any method may replace __init __(). But I would like to get an understanding using __init __() as the example at hand.
0
1
1
Yes, __init__ is an instance method of the Student class, not a class attribute. In your example, the class attribute is school, which is a static property that is shared by all instances of the class. The __init__ method is a special method that is called when an instance of the class is created. It is used to initialize the attributes of the instance with the values passed to it as arguments. Since it is a method that operates on an instance of the class, it is not considered a class attribute. To clarify, a class attribute is a property of the class itself, not its instances. It is shared by all instances of the class and can be accessed using the class name. For example, Student.school would return the value of the school class attribute. In summary, __init__ is an instance method used to initialize the attributes of instances of the class, while school is a class attribute that is shared by all instances of the class. Edit: I understand your confusion. Although the __init__ method is called for each instance of the class, it is not considered a class attribute because it belongs to each instance of the class, not to the class itself. In other words, the __init__ method is defined in the class definition, but it is not a class attribute. Instead, it is a special method that is called on each instance of the class to initialize its attributes. The self parameter in the __init__ method refers to the specific instance of the class that is being initialized. To summarize, a class attribute is a property or method that belongs to the class itself and is shared by all instances of the class. The __init__ method, on the other hand, is a special method that is called on each instance of the class to initialize its attributes, and therefore it is not a class attribute.
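A short illustration of the distinction being drawn here, using the Student class from the question:

    class Student:
        school = "stackoverflow"           # shared by every instance

        def __init__(self, name, course):  # runs once per instance, filling per-instance data
            self.name = name
            self.course = course

    print(Student.school)                  # accessible on the class itself
    s = Student("Ada", "CS101")
    print(s.school, s.name, s.course)      # the shared value plus this instance's own attributes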
2023-04-09 15:37:12
0
python,wave,winsound
1
75,971,696
How do I write sound files in python using wave?
75,971,424
false
48
I am trying to make a text-to-sound-file converter with python, it reads the binary data of an input and writes a sound file that matches the data. 1 is a higher note (440 hz) and 0 is a lower note(330 hz). I have tried so many different things and my code is all over the place. Can someone please help me fix it? import os import numpy as np import winsound import wave import struct import math import random # Parameters sampleRate = 44100 # samples per second duration = 1 # sample duration (seconds) frequency = 440.0 # sound frequency (Hz) print(os.getcwd()) obj = wave.open('sound.wav','w') obj.setnchannels(1) # mono obj.setsampwidth(2) obj.setframerate(sampleRate) string=input("Enter text to turn into sound file") binary=' '.join(format(ord(i), 'b') for i in string) print(binary) for i in binary: if i=="1": winsound.Beep(440, 500) data=struct.pack('<h', 440) obj.writeframesraw( data ) elif i=="0": winsound.Beep(330, 500) data=struct.pack('<h', 330) obj.writeframesraw( data ) obj.close()
0
1
1
You may use the 'wb' mode when you open the file for writing (line 13).
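A sketch of writing real tone samples with wave, assuming 16-bit mono PCM is wanted; numpy is used here only to build the sine waves:

    import wave
    import numpy as np

    sample_rate = 44100
    duration = 0.5                                    # seconds per bit

    def tone(freq):
        t = np.arange(int(sample_rate * duration)) / sample_rate
        samples = (0.5 * np.sin(2 * np.pi * freq * t) * 32767).astype(np.int16)
        return samples.tobytes()

    with wave.open('sound.wav', 'wb') as obj:
        obj.setnchannels(1)
        obj.setsampwidth(2)                           # 2 bytes -> 16-bit samples
        obj.setframerate(sample_rate)
        for bit in '101':                             # replace with the real binary string
            obj.writeframes(tone(440 if bit == '1' else 330))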
2023-04-09 18:04:55
0
python,json,npm,ijson
2
76,020,221
Continue iteration to the next json record after encountering ijson.common.IncompleteJSONError
75,972,111
false
58
I have a large json file (about 11,600 records) and I am trying to parse it using ijson. However, the for loop breaks because of one faulty json record. Is there a way to continue the iteration by skipping that record and moving on using ijson or any other python library? Here's the code snippet. try: for row in ijson.items(json_file, 'rows.item'): data = row try: super_df=dependency_latest_version(data, version) except Exception as e: print(e) except ijson.common.IncompleteJSONError: traceback.print_exc() This generates the following error: for row in ijson.items(json_file, 'rows.item'): ijson.common.IncompleteJSONError: parse error: after array element, I expect ',' or ']' dmeFilename":"README.md"}}":{"integrity":"sha512-204Fg2wwe1Q (right here) ------^ I tried iterating through the json file line by line and then using json.loads(line) but it didn't help since the entire json file was being read as a single line. Are there any other alternatives? Thank you.
0
1
1
I faced a similar situation once. Later I had to fix the document before parsing, as @Rodrigo mentioned. It is pointless to try to fix the library, since the library is doing exactly what it is supposed to do, and it should not be able to parse the document unless it is a proper JSON document. The way I fixed it in my case was to look for the patterns where the supposed JSON document had improper formatting and write a script to fix those. After doing this preprocessing, it becomes a proper JSON document, and at that point it can be parsed with ijson or any similar library.
2023-04-09 22:34:54
1
python,python-requests,request,spotify,endpoint
1
75,975,526
Spotify API Only Returning Some of Users playlists
75,973,216
false
121
I have some Python code that is meant to return the user's playlists. However, it only returns some of my playlists. The documentation says it should return all playlists in my library (all the ones I've created and the ones I've liked). Also, when I use the 'try it' feature on Spotify's website, it works perfectly fine. I've made sure I have the right scopes, the user ID is correct and even made sure my playlists are public (not that it should matter). Here is my code calling the function: def getUsersPlaylists(self, user_id, offset): headers = { "Authorization" : f"Bearer {self.access_token}" } endpoint = 'https://api.spotify.com/v1/users/' + user_id + '/playlists?offset=' + str(offset) + '&limit=50' return requests.get(endpoint, headers=headers).json() and I've also tried this similar function that just uses the user that logged in with OAuth 2.0 rather than providing the user ID: def getUsersPlaylists(self, offset): headers = { "Authorization" : f"Bearer {self.access_token}" } endpoint = 'https://api.spotify.com/v1/me/playlists?offset=' + str(offset) + '&limit=50' return requests.get(endpoint, headers=headers).json() What would be causing this? As I said before, it shows some playlists but not others and I cant see a pattern in which playlists it does and doesn't show. Thanks to anyone who can help :)
0.197375
1
1
I just found the problem: I was requesting the user-read-private and playlist-read-collaborative scopes, but I also needed the playlist-read-private scope. Adding that scope fixed it.
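For reference, scopes are granted when the user authorizes the app, not when the playlists endpoint is called, so the new scope only takes effect after re-running the authorization step. A minimal sketch of building the authorize URL with the extra scope; the client ID and redirect URI are placeholders:

    from urllib.parse import urlencode

    params = {
        "client_id": "YOUR_CLIENT_ID",                      # placeholder
        "response_type": "code",
        "redirect_uri": "http://localhost:8888/callback",   # placeholder, must match your app settings
        "scope": "user-read-private playlist-read-collaborative playlist-read-private",
    }
    authorize_url = "https://accounts.spotify.com/authorize?" + urlencode(params)
    # Open this URL in a browser and re-consent so the token carries the new scope.
    print(authorize_url)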
2023-04-10 01:18:16
0
python,numpy,opencv,ffmpeg,urllib
1
75,976,717
moov atom not found (Extracting unique faces from youtube video)
75,973,692
false
126
I got the error below Saved 0 unique faces [mov,mp4,m4a,3gp,3g2,mj2 @ 0000024f505224c0] moov atom not found Trying to extract unique faces from a YouTube video with the code below which is designed to download the YouTube video and extract unique faces into a folder named faces. I got an empty video and folder. Please do check the Python code below import os import urllib.request import cv2 import face_recognition import numpy as np # Step 1: Download the YouTube video video_url = "https://www.youtube.com/watch?v=JriaiYZZhbY&t=4s" urllib.request.urlretrieve(video_url, "video.mp4") # Step 2: Extract frames from the video cap = cv2.VideoCapture("video.mp4") frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) frames = [] for i in range(frame_count): cap.set(cv2.CAP_PROP_POS_FRAMES, i) ret, frame = cap.read() if ret: frames.append(frame) cap.release() # Step 3: Detect faces in the frames detected_faces = [] for i, frame in enumerate(frames): face_locations = face_recognition.face_locations(frame) for j, location in enumerate(face_locations): top, right, bottom, left = location face_image = frame[top:bottom, left:right] cv2.imwrite(f"detected_{i}_{j}.jpg", face_image) detected_faces.append(face_image) # Step 4: Save the faces as separate images if not os.path.exists("faces"): os.makedirs("faces") known_faces = [] for i in range(len(detected_faces)): face_image = detected_faces[i] face_encoding = face_recognition.face_encodings(face_image)[0] known_faces.append(face_encoding) cv2.imwrite(f"faces/face_{i}.jpg", face_image) print("Saved", len(known_faces), "unique faces")
0
2
1
Your file is not an mp4/mov file; urlretrieve on a YouTube watch URL saves the web page, not the video stream, so there is nothing for OpenCV to decode. You can't just download a YouTube web page like that and expect to receive a video file.
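One common way to obtain the actual video file is the third-party pytube package (yt-dlp is another option); this is a suggestion of mine rather than something the answer prescribes:

    from pytube import YouTube

    video_url = "https://www.youtube.com/watch?v=JriaiYZZhbY"
    yt = YouTube(video_url)
    # Pick a progressive (audio + video) mp4 stream and save it where the rest of the script expects it.
    stream = yt.streams.filter(progressive=True, file_extension="mp4").get_highest_resolution()
    stream.download(filename="video.mp4")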
2023-04-10 05:56:41
0
python,anaconda3
1
76,443,378
Failed to initialize conda directories when trying to install Anaconda 3
75,974,612
false
246
I'm trying to install Anaconda 3 2023.03 and can't get anywhere with it. I get an error that says "Failed to initialize conda directories". I tried installing as an administrator. If I do this the install goes through but the system doesn't recognise the install anywhere. If I type "where conda" into the terminal it says it doesn't exist. I had a previous installation of python 3.11 which I deleted every trace I could find (I'm not competent to go fishing around the registry without clear instructions here) but still no joy. I've only just started learning Python and need to install Anaconda to follow along with my Udemy course, so please dumb it down for me.
0
1
1
When you install Anaconda, choose to install it for "All Users" instead of "Just for Me". This should solve the issue.
2023-04-10 08:14:20
0
python,pyqt,pyside6
1
75,978,843
How can I change the Main Window with a button and run the events from another window Pyside6
75,975,394
true
85
I want to change the Main Window through a button click, but when the main window change´s the event´s dont run. The both window´s are already been created in qt Designer here is the code: from PySide6.QtWidgets import * from PySide6.QtWidgets import QPushButton from PySide6.QtCore import QObject from PySide6.QtCore import QCoreApplication from PySide6.QtCore import * from PySide6.QtGui import * from PySide6 import QtWidgets, QtCore, QtGui from ui_app import Ui_MainWindow from ui_menu import Ui_MainWindow1 import sys class MainWindow(QMainWindow, Ui_MainWindow): def __init__(self): super(MainWindow, self).__init__() self.setupUi(self) self.setWindowTitle("Login") self.pushButton.clicked.connect(self.trocarpag) def trocarpag(self): if self.textuser.toPlainText() == "" : msg = QMessageBox() msg.setIcon(QMessageBox.Warning) msg.setText("Preencha o Username!!!") msg.exec() elif self.textpass.toPlainText() == "" : msg = QMessageBox() msg.setIcon(QMessageBox.Warning) msg.setText("Preencha a Password!!!") msg.exec() else: ###Change the Main Window####` self.ui = Ui_MainWindow1() self.ui.setupUi(self) if __name__ =="__main__": app = QApplication() window = MainWindow() window.show() app.exec()
1.2
1
1
The line self.ui = Ui_MainWindow1() just creates a member variable on your window object (an instance of MainWindow). There is no code that replaces the widgets in the current window or shows a new window instead.
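The usual fixes are either to re-run the generated setupUi on the same window and then reconnect the new page's signals, or to show a second window and keep a reference to it so it is not garbage-collected. A rough sketch of the second approach, where the MenuWindow class and the hide/show choreography are assumptions about the intent rather than the only way to do it:

    from PySide6.QtWidgets import QMainWindow
    from ui_menu import Ui_MainWindow1   # generated by Qt Designer, as in the question

    class MenuWindow(QMainWindow, Ui_MainWindow1):
        def __init__(self):
            super().__init__()
            self.setupUi(self)
            self.setWindowTitle("Menu")
            # Connect the menu page's signals here, e.g.:
            # self.someButton.clicked.connect(self.some_handler)

    # Inside MainWindow.trocarpag(), in the else branch:
    #     self.menu = MenuWindow()   # keep the reference on self, or the window is garbage-collected
    #     self.menu.show()
    #     self.hide()                # hide (or close) the login window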
2023-04-10 16:33:27
-1
python,spacy
2
76,606,634
spacy python package no longer runs
75,978,880
false
528
Running python 3.11.3 on macos, Intel. I had spacy working fine. I then decided to try adding gpu support with: pip install -U 'spacy[cuda113]' but started getting errors. I uninstalled with pip uninstall 'spacy[cuda113]' and then reinstalled spacy with just pip install spacy. However, I'm still getting the same errors when running a simple script with just import spacy in it: Traceback (most recent call last): File "/Users/steve/workshop/python/blah.py", line 4, in <module> import spacy File "/usr/local/lib/python3.11/site-packages/spacy/__init__.py", line 14, in <module> from . import pipeline # noqa: F401 ^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/spacy/pipeline/__init__.py", line 1, in <module> from .attributeruler import AttributeRuler File "/usr/local/lib/python3.11/site-packages/spacy/pipeline/attributeruler.py", line 6, in <module> from .pipe import Pipe File "spacy/pipeline/pipe.pyx", line 1, in init spacy.pipeline.pipe File "spacy/vocab.pyx", line 1, in init spacy.vocab File "/usr/local/lib/python3.11/site-packages/spacy/tokens/__init__.py", line 1, in <module> from .doc import Doc File "spacy/tokens/doc.pyx", line 36, in init spacy.tokens.doc File "/usr/local/lib/python3.11/site-packages/spacy/schemas.py", line 158, in <module> class TokenPatternString(BaseModel): File "/usr/local/lib/python3.11/site-packages/pydantic/main.py", line 369, in __new__ cls.__signature__ = ClassAttribute('__signature__', generate_model_signature(cls.__init__, fields, config)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/site-packages/pydantic/utils.py", line 231, in generate_model_signature merged_params[param_name] = Parameter( ^^^^^^^^^^ File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/inspect.py", line 2722, in __init__ raise ValueError('{!r} is not a valid parameter name'.format(name)) ValueError: 'in' is not a valid parameter name
-0.099668
1
1
I uninstalled pydantic and then installed it again; the issue was fixed with pydantic 1.10.
2023-04-10 17:44:21
0
python,docker,docker-compose,poppler,pdf2image
1
76,028,299
pdf2image fails in docker container
75,979,398
true
147
I have a Python project running in a docker container, but I can't get convert_from_path to work (from pdf2image library). It works locally on my Windows PC, but not in the linux-based docker container. The error I get each time is Unable to get page count. Is poppler installed and in PATH? Relevant parts of my code look like this from pdf2image import convert_from_path import os from sys import exit def my_function(file_source_path): try: pages = convert_from_path(file_source_path, 600, poppler_path=os.environ.get('POPPLER_PATH')) except Exception as e: print('Fail 1') print(e) try: pages = convert_from_path(file_source_path, 600) except Exception as e: print('Fail 2') print(e) try: pages = convert_from_path(file_source_path, 600, poppler_path=r'\usr\local\bin') except Exception as e: print('Fail 3') print(e) print(os.environ) exit('Exiting script') In attempt 1 I try to reference the original file saved on windows. Basically the path refers to '/code/poppler' which is a binded mount referring to [snippet from docker-compose.yml] - type: bind source: "C:/Program Files/poppler-0.68.0/bin" target: /code/poppler In attempt 2 I just try to leave the path empty. In attempt 3 I tried something I found that worked from some other users locally. Relevant parts of my Dockerfile look like this FROM python:3.10 WORKDIR /code # install poppler RUN apt-get update RUN apt-get install poppler-utils -y COPY ./requirements.txt ./ RUN pip install --upgrade pip RUN pip install --no-cache-dir -r requirements.txt COPY . . CMD ["python", "./app.py"]
1.2
2
1
So the issue was that my Docker image was not rebuilding correctly; after nuking the build cache and trying again, the middle option worked in combination with the Dockerfile above. In short, the combination of RUN apt-get install poppler-utils -y in the Dockerfile and calling convert_from_path without a poppler_path argument, i.e. pages = convert_from_path(file_source_path, 600), works, because installing poppler-utils puts the poppler binaries on the PATH automatically. The bind mount can also be removed from docker-compose.yml and from the .env file.
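A minimal sketch of the working call inside the container, assuming poppler-utils has been installed via apt as in the Dockerfile above; the output filenames are my own choice:

    from pdf2image import convert_from_path

    def pdf_to_images(file_source_path):
        # No poppler_path needed: apt-installed poppler-utils puts pdftoppm/pdfinfo on the PATH.
        pages = convert_from_path(file_source_path, dpi=600)
        for i, page in enumerate(pages):
            page.save(f"page_{i}.png", "PNG")   # each page is a PIL Image
        return pages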
2023-04-10 17:46:58
0
python,machine-learning,pytorch,huggingface-transformers,langchain
1
75,980,931
using llama_index with mac m1
75,979,420
true
582
Question #1: Is there a way of using Mac with M1 CPU and llama_index together? I cannot pass the bellow assertion: AssertionError Traceback (most recent call last) <ipython-input-1-f2d62b66882b> in <module> 6 from transformers import pipeline 7 ----> 8 class customLLM(LLM): 9 model_name = "google/flan-t5-large" 10 pipeline = pipeline("text2text-generation", model=model_name, device=0, model_kwargs={"torch_dtype":torch.bfloat16}) <ipython-input-1-f2d62b66882b> in customLLM() 8 class customLLM(LLM): 9 model_name = "google/flan-t5-large" ---> 10 pipeline = pipeline("text2text-generation", model=model_name, device=0, model_kwargs={"torch_dtype":torch.bfloat16}) 11 12 def _call(self, prompt, stop=None): ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, framework, revision, use_fast, use_auth_token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs) 868 kwargs["device"] = device 869 --> 870 return pipeline_class(model=model, framework=framework, task=task, **kwargs) ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/text2text_generation.py in __init__(self, *args, **kwargs) 63 64 def __init__(self, *args, **kwargs): ---> 65 super().__init__(*args, **kwargs) 66 67 self.check_model_type( ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/base.py in __init__(self, model, tokenizer, feature_extractor, modelcard, framework, task, args_parser, device, binary_output, **kwargs) 776 # Special handling 777 if self.framework == "pt" and self.device.type != "cpu": --> 778 self.model = self.model.to(self.device) 779 780 # Update config with task specific parameters ~/Library/Python/3.9/lib/python/site-packages/transformers/modeling_utils.py in to(self, *args, **kwargs) 1680 ) 1681 else: -> 1682 return super().to(*args, **kwargs) 1683 1684 def half(self, *args): ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in to(self, *args, **kwargs) 1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) 1144 -> 1145 return self._apply(convert) 1146 1147 def register_full_backward_pre_hook( ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in _apply(self, fn) 795 def _apply(self, fn): 796 for module in self.children(): --> 797 module._apply(fn) 798 799 def compute_should_use_set_data(tensor, tensor_applied): ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in _apply(self, fn) 818 # `with torch.no_grad():` 819 with torch.no_grad(): --> 820 param_applied = fn(param) 821 should_use_set_data = compute_should_use_set_data(param, param_applied) 822 if should_use_set_data: ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py in convert(t) 1141 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, 1142 non_blocking, memory_format=convert_to_format) -> 1143 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) 1144 1145 return self._apply(convert) ~/Library/Python/3.9/lib/python/site-packages/torch/cuda/__init__.py in _lazy_init() 237 "multiprocessing, you must use the 'spawn' start method") 238 if not hasattr(torch._C, '_cuda_getDeviceCount'): --> 239 raise AssertionError("Torch not compiled with CUDA enabled") 240 if _cudart is None: 241 raise AssertionError( AssertionError: Torch not compiled with CUDA enabled Obviously I've no Nvidia card, but I've read Pytorch 
is now supporting Mac M1 as well I'm trying to run the below example: from llama_index import SimpleDirectoryReader, LangchainEmbedding, GPTListIndex,GPTSimpleVectorIndex, PromptHelper from langchain.embeddings.huggingface import HuggingFaceEmbeddings from llama_index import LLMPredictor, ServiceContext import torch from langchain.llms.base import LLM from transformers import pipeline class customLLM(LLM): model_name = "google/flan-t5-large" pipeline = pipeline("text2text-generation", model=model_name, device=0, model_kwargs={"torch_dtype":torch.bfloat16}) def _call(self, prompt, stop=None): return self.pipeline(prompt, max_length=9999)[0]["generated_text"] def _identifying_params(self): return {"name_of_model": self.model_name} def _llm_type(self): return "custom" llm_predictor = LLMPredictor(llm=customLLM()) Question #2: Assuming the answer for the above is no - I don't mind using Google Colab with GPU, but once the index will be made, will it be possible to download it and use it on my Mac? i.e. something like: on Google Colab: service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, embed_model=embed_model) index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context) index.save_to_disk('index.json') ... and later on my Mac use load_from_file
1.2
2
1
Why are you passing device=0? If isinstance(device, int), PyTorch will assume device is the index of a CUDA device, hence the error. Try device="cpu" (or maybe simply removing the device kwarg), and this issue should disappear.
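A hedged sketch of that change; the device=-1 convention for CPU is standard transformers behaviour, while the MPS branch is an assumption that only applies if the installed torch build exposes Apple's Metal backend and the transformers version accepts a device string:

    import torch
    from transformers import pipeline

    model_name = "google/flan-t5-large"

    # In transformers, an integer device >= 0 means "CUDA device with that index",
    # while -1 means CPU, which is why device=0 triggers the CUDA assertion here.
    device = -1
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        device = "mps"   # recent torch/transformers versions accept a device string

    pipe = pipeline(
        "text2text-generation",
        model=model_name,
        device=device,
        model_kwargs={"torch_dtype": torch.float32},  # bfloat16 support on CPU/MPS varies
    )
    print(pipe("translate English to German: Hello", max_length=64)[0]["generated_text"])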
2023-04-10 19:05:15
0
python,python-3.x,installation,path,version
1
75,980,303
python --version shows old version
75,979,937
false
1,192
I'm using Windows, and in my path I have: C:\Users\Ant_P\AppData\Local\Programs\Python\Python311\ C:\Users\Ant_P\AppData\Local\Programs\Python\Python311\Scripts\ When I type python3 --version, it still shows me 3.10.8. I deleted the old python folder, but it still exists. I've tried looking online, but I'm not seeing how to fix this. I thought I just had to set my path, but apparently there is something else going on. Any clues?
0
1
1
If you are on Windows and you have installed Python 3.11, you should type python --version instead of python3 --version. Alternatively, you can try running py --version to see which version of Python is currently being used by your system. If you have set your PATH correctly and it still shows the wrong version of Python, you can try restarting your command prompt or shell to see if the changes take effect.
2023-04-11 00:45:07
4
python,fastapi,tortoise-orm
1
76,075,261
How to create db migrations for local tortoise project?
75,981,677
true
189
I have a FastAPI + tortose projects and I want to run the project locally with database postgres://lom:lom@localhost:5432/lom (database is created) My code # lom/app.py class App: storage: S3Storage def __init__(self): self.config = Config(_env_file=".env", _env_file_encoding="utf-8") self.__setup_sentry() ... def create_app(self, loop: asyncio.AbstractEventLoop) -> FastAPI: app = FastAPI() register_tortoise( app, modules={ "models": [ "lom.core", "aerich.models", ] } I want to apply current migrations and create new migrations I am trying aerich init -t <Don't understand path> What aerich command should I run and which parameters should I use? ├── lom │ ├── app.py │ ├── config.py │ ├── core │ │ ├── city.py │ │ ├── company.py ├── ├── migrations │ ├── 001_main.sql │ ├── 002_cities.sql │ ├── 003_cities_declination.sql
1.2
1
1
Initialize aerich by running the following command in the root directory of your project: aerich init -t tortoise.Tortoise -p lom.models. This creates an aerich.ini configuration file and a migrations directory in the project root. Next, configure aerich.ini to match your database connection settings; the defaults are for SQLite, so modify the [tortoise] section to use PostgreSQL instead: [tortoise] # database_url = sqlite://db.sqlite3 database = lom host = localhost port = 5432 user = lom password = lom modules = ['lom.models']. Then run aerich migrate to create a new migration file in the migrations directory, and aerich upgrade to apply all pending migrations to your database. Let me know if this helps.
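For what it's worth, recent aerich releases are configured through a TORTOISE_ORM dict rather than an aerich.ini section, so if the ini approach above does not match your aerich version, something like the following sketch may be closer; the module path lom.settings is hypothetical, while the database URL and model modules are taken from the question:

    # lom/settings.py  (hypothetical module; point "aerich init -t lom.settings.TORTOISE_ORM" at it)
    TORTOISE_ORM = {
        "connections": {
            "default": "postgres://lom:lom@localhost:5432/lom",
        },
        "apps": {
            "models": {
                "models": ["lom.core", "aerich.models"],  # aerich.models is required for migration tracking
                "default_connection": "default",
            },
        },
    }

    # Then, from the project root:
    #   aerich init -t lom.settings.TORTOISE_ORM
    #   aerich init-db        # first run only: creates the tables and the initial migration
    #   aerich migrate        # generate a new migration after model changes
    #   aerich upgrade        # apply pending migrations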