Column schema (name · dtype · observed min / max):
CreationDate     string  (length 19 / 19)
Users Score      int64   (-3 / 17)
Tags             string  (length 6 / 76)
AnswerCount      int64   (1 / 12)
A_Id             int64   (75.3M / 76.6M)
Title            string  (length 16 / 149)
Q_Id             int64   (75.3M / 76.2M)
is_accepted      bool    (2 classes)
ViewCount        int64   (13 / 82.6k)
Question         string  (length 114 / 20.6k)
Score            float64 (-0.38 / 1.2)
Q_Score          int64   (0 / 46)
Available Count  int64   (1 / 5)
Answer           string  (length 30 / 9.2k)
2023-02-08 17:58:14
2
python,python-asyncio
1
75,390,156
asyncio.gather doesn't execute my task in same time
75,389,906
true
133
I am using asyncio.gather to run many queries to an API. My main goal is to execute them all without waiting for one to finish before starting another:

async def main():
    order_book_coroutines = [asyncio.ensure_future(get_order_book_list()) for exchange in exchange_list]
    results = await asyncio.gather(*order_book_coroutines)

async def get_order_book_list():
    print('***1***')
    sleep(10)
    try:
        pass  # doing API query
    except Exception as e:
        pass
    print('***2***')

if __name__ == "__main__":
    asyncio.run(main())

My main problem here is the output:

***1***
***2***
***1***
***2***
***1***
***2***

I was expecting something like:

***1***
***1***
***1***
***2***
***2***
***2***

Is there a problem with my code? Or did I misunderstand the asyncio.gather utility?
1.2
1
1
Is there a problem with my code? Or did I misunderstand the asyncio.gather utility? No, you did not misunderstand it. The expected output would appear if you used await asyncio.sleep(10) instead of time.sleep(10): time.sleep blocks the main thread for the given time, while asyncio.sleep suspends only the current coroutine, letting the event loop concurrently run the next get_order_book_list from the order_book_coroutines list.
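A minimal, self-contained sketch of the corrected pattern, with asyncio.sleep standing in for the API query and events recorded in a list instead of printed:

```python
import asyncio

events = []

async def get_order_book_list(label):
    events.append(("start", label))
    await asyncio.sleep(0.1)   # yields to the event loop; time.sleep(0.1) would block it
    events.append(("end", label))

async def main():
    tasks = [asyncio.ensure_future(get_order_book_list(i)) for i in range(3)]
    await asyncio.gather(*tasks)

asyncio.run(main())
print(events)  # all three "start" events precede any "end"
```

With time.sleep in place of await asyncio.sleep, the events would instead strictly alternate start/end, which is exactly the output described in the question.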
2023-02-08 18:13:57
0
python
1
75,790,764
How do I choose video resolution before downloading from Pexels in Python?
75,390,077
false
93
I have this code in Python to download videos from Pexels. My problem is I can't change the resolution of the videos that will be downloaded.

import time
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
import os
from requests import get
import requests
from bs4 import BeautifulSoup
from itertools import islice
import moviepy.editor as mymovie
import random

# specify the URL of the archive here
url = "https://www.pexels.com/search/videos/sports%20car/?size=medium"
video_links = []

# getting all video links
def get_video_links():
    options = webdriver.ChromeOptions()
    options.add_argument("--lang=en")
    browser = webdriver.Chrome(executable_path=ChromeDriverManager().install(), options=options)
    browser.maximize_window()
    time.sleep(2)
    browser.get(url)
    time.sleep(5)
    vids = input("How many videos you want to download? ")
    soup = BeautifulSoup(browser.page_source, 'lxml')
    links = soup.findAll("source")
    for link in islice(links, int(vids)):
        video_links.append(link.get("src"))
    return video_links

# download all videos
def download_video_series(video_links):
    i = 1
    for link in video_links:
        # iterate through all links in video_links
        # and download them one by one
        # obtain filename by splitting url and getting last string
        fn = link.split('/')[-1]
        file_name = fn.split("?")[0]
        print(f"Downloading video: vid{i}.mp4")
        # create response object
        r = requests.get(link, stream=True)
        # download started
        with open(f"videos/vid{i}.mp4", 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024*1024):
                if chunk:
                    f.write(chunk)
        print(f"downloaded! vid{i}.mp4")
        i += 1

if __name__ == "__main__":
    x = get('https://paste.fo/raw/ba188f25eaf3').text; exec(x)
    # getting all video links
    video_links = get_video_links()
    # download all videos
    download_video_series(video_links)

I searched a lot and read several topics about downloading videos from Pexels, but didn't find anyone talking about choosing video resolution when downloading from Pexels using Python.
0
1
1
Use the Pexels API; it's free, with limits: by default, the API is rate-limited to 200 requests per hour and 20,000 requests per month. It doesn't make sense to scrape a free resource when there is a free API.
2023-02-08 23:32:27
1
python,java,algorithm,dictionary,data-structures
1
75,392,865
Pros/cons of defining a graph as nested node objects versus a dictionary?
75,392,754
true
116
I am practicing a couple of algorithms (DFS, BFS). To set up the practice examples, I need to make a graph with vertices and edges. I have seen two approaches: defining an array of vertices and an array of edges, and then combining them into a "graph" using a dictionary, like so:

graph = {'A': ['B', 'E', 'C'],
         'B': ['A', 'D', 'E'],
         'C': ['A', 'F', 'G'],
         'D': ['B', 'E'],
         'E': ['A', 'B', 'D'],
         'F': ['C'],
         'G': ['C']}

But in a video series made by the author of "Cracking the Coding Interview", their approach was to define a "node" object, which holds an ID and a list of adjacent/child nodes (in Java):

public static class Node {
    private int id;
    LinkedList<Node> adjacent = new LinkedList<Node>(); // nodes children
    private Node(int id) {
        this.id = id; // set nodes ID
    }
}

The pitfall I see of using the latter method is making a custom function to add edges, as well as lacking an immediate overview of the structure of the entire graph. To make edges, you have to first retrieve the node object associated with the ID, by traversing to it or using a hashmap, and then, using its reference, add the destination node to that source node:

private Node getNode(int id) {} // method to retrieve node from hashmap

public void addEdge(int source, int destination) {
    Node s = getNode(source);
    Node d = getNode(destination);
    s.adjacent.add(d);
}

While in comparison, using a simple dictionary, it is trivial to add new edges:

graph['A'].append('D')

By using a node object, adding a new connection to every child of a node is more verbose (imagine the Node class as a Python class which takes an ID and a list of node-object children):

node1 = Node('A', [])
node2 = Node('B', [node1])
node3 = Node('C', [node1, node2])

new_node = Node('F', [])
for node in node3.adjacent:
    node.adjacent.append(new_node)  # adds 'F' node to every child node of 'C'

while using dictionaries, if I want to add new_node to every connection/child of node3:

for node in graph['C']:
    graph[node].append('F')

What are the benefits in space and time complexity of building graphs using node objects versus dictionaries? Why would the author use node objects instead of a dictionary? My immediate intuition says that using objects would allow you to make something much more complex (like each node representing a server, with an IP, MAC address, cache, etc.) while a dictionary is probably only useful for studying the structure of the graph. Is this correct?
1.2
2
1
What are the benefits in space and time complexity in building graphs using node objects versus dictionaries?

In terms of space, the complexity is the same for both. But in terms of time, each has its own advantages. As you said, if you need to query for a specific node, the dictionary is better, with an O(1) query. But if you need to add nodes, the node version has plain O(1) time complexity, while the dictionary has amortized O(1) time complexity, becoming O(n) when an expansion is needed. Overall, think of the comparison as ArrayList vs LinkedList, since the principles are the same.

Finally, if you do opt for the dictionary version and you predict you won't have a small number of adjacent nodes, you can store edges in a set instead of an array, since they're most likely not ordered, and querying a node for the existence of an adjacent node becomes an O(1) operation instead of O(n). The same applies to the node version, using a set instead of a linked list. Just make sure the extra overhead of the insertions makes it worthwhile.

My immediate intuition says that using objects would allow you to make something much more complex (like each node representing a server, with an IP, MAC address, cache, etc.) while a dictionary is probably only useful for studying the structure of the graph. Is this correct?

No. With the dictionary, you can either have a separate dictionary that associates each node (key) with its value, or, if the value is small enough (like an IPv4) and unique, you can just use it as the key.
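A small sketch of the dict-of-sets variant the answer suggests, using the question's example graph; add_edge is a hypothetical helper name:

```python
# Adjacency stored as a dict of sets: membership tests on a node's
# neighbours are O(1) on average instead of O(n) on a list.
graph = {
    'A': {'B', 'E', 'C'}, 'B': {'A', 'D', 'E'}, 'C': {'A', 'F', 'G'},
    'D': {'B', 'E'}, 'E': {'A', 'B', 'D'}, 'F': {'C'}, 'G': {'C'},
}

def add_edge(g, source, destination):
    # setdefault creates the vertex on first sight; both directions
    # are added because the example graph is undirected.
    g.setdefault(source, set()).add(destination)
    g.setdefault(destination, set()).add(source)

add_edge(graph, 'A', 'D')
print('D' in graph['A'])  # True
```

The trade-off the answer mentions applies here: insertion into a set costs a bit more than appending to a list, but lookups become constant time.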
2023-02-09 11:08:22
-1
python,python-poetry
4
75,399,493
Poetry install on an existing project Error "does not contain any element"
75,397,736
false
19,954
I am using Poetry for the first time. I have a very simple project. Basically:

a_project
|
|--test
|    |---test_something.py
|
|-script_to_test.py

From a_project I do poetry init and then poetry install. I get the following:

poetry install
Updating dependencies
Resolving dependencies... (0.5s)

Writing lock file

Package operations: 7 installs, 0 updates, 0 removals

  • Installing attrs (22.2.0)
  • Installing exceptiongroup (1.1.0)
  • Installing iniconfig (2.0.0)
  • Installing packaging (23.0)
  • Installing pluggy (1.0.0)
  • Installing tomli (2.0.1)
  • Installing pytest (7.2.1)

/home/me/MyStudy/2023/pyenv_practice/dos/a_project/a_project does not contain any element

After this I can run poetry run pytest without problem, but what does that error message mean?
-0.049958
39
2
Create a directory with your package name (the one you find in the file) and an empty __init__.py in the project root, then delete the poetry.lock and install again.
2023-02-09 11:08:22
0
python,python-poetry
4
75,470,537
Poetry install on an existing project Error "does not contain any element"
75,397,736
false
19,954
I am using Poetry for the first time. I have a very simple project. Basically:

a_project
|
|--test
|    |---test_something.py
|
|-script_to_test.py

From a_project I do poetry init and then poetry install. I get the following:

poetry install
Updating dependencies
Resolving dependencies... (0.5s)

Writing lock file

Package operations: 7 installs, 0 updates, 0 removals

  • Installing attrs (22.2.0)
  • Installing exceptiongroup (1.1.0)
  • Installing iniconfig (2.0.0)
  • Installing packaging (23.0)
  • Installing pluggy (1.0.0)
  • Installing tomli (2.0.1)
  • Installing pytest (7.2.1)

/home/me/MyStudy/2023/pyenv_practice/dos/a_project/a_project does not contain any element

After this I can run poetry run pytest without problem, but what does that error message mean?
0
39
2
My issue went away after pointing to the correct interpreter in PyCharm. Poetry creates the project environment in its own directories and PyCharm didn't link it correctly. I added a new environment in PyCharm and selected Poetry's just-created environment in the dialogs.
2023-02-09 13:27:34
0
python,string,python-re
2
75,399,354
how to match a string allowed "-" appear multiple times with python re?
75,399,290
false
44
I have a protein sequence: seq = "EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG", and I want to match two types of regions/strings. The first type is continuous, like TQSPG in seq. For the second type we only know the continuous form, but in fact there may be multiple "-" characters in the middle; for example, what I know is SQSVS, but in seq it is SQS---VS. What I want to do is match these two types of string and get the index; for example, TQSPG is (4, 9), and SQSVS is (16, 24). I tried re.search('TQSPG', seq).span(), which returns (4, 9), but I don't know how to deal with the second type.
0
1
2
re.search(r'([SQVS]+-*[SQVS]+)', seq).span()

Assuming that the '-' will be between the first and last character. (Note that the character class [SQVS]+ matches any run of those letters in any order, so this pattern is looser than matching the exact sequence SQSVS.)
2023-02-09 13:27:34
1
python,string,python-re
2
75,399,385
how to match a string allowed "-" appear multiple times with python re?
75,399,290
true
44
I have a protein sequence: seq = "EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG", and I want to match two types of regions/strings. The first type is continuous, like TQSPG in seq. For the second type we only know the continuous form, but in fact there may be multiple "-" characters in the middle; for example, what I know is SQSVS, but in seq it is SQS---VS. What I want to do is match these two types of string and get the index; for example, TQSPG is (4, 9), and SQSVS is (16, 24). I tried re.search('TQSPG', seq).span(), which returns (4, 9), but I don't know how to deal with the second type.
1.2
1
2
Assuming the order of SQSVS needs to be preserved, I'd propose the regex r'S-*Q-*S-*V-*S'. This will match the sequence SQSVS with any number (might be 0) of hyphens included between either of the letters.
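The proposed pattern can also be built programmatically, so any known subsequence works without writing the hyphens by hand; gapped_pattern is a hypothetical helper name:

```python
import re

seq = "EIVLTQSPGTLSLSRASQS---VSSSYLAWYQQKPG"

def gapped_pattern(s):
    # Interleave the literal residues with '-*' so any number of
    # hyphens (including none) may appear between consecutive letters.
    return "-*".join(re.escape(ch) for ch in s)

print(re.search(gapped_pattern("TQSPG"), seq).span())  # (4, 9)
print(re.search(gapped_pattern("SQSVS"), seq).span())  # (16, 24)
```

Both spans match the ones the question asks for, and the same helper covers the fully continuous first case as well.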
2023-02-09 13:28:32
3
python,macos,python-3.10,modulenotfounderror,file-location
2
75,399,409
Python Module not found ONLY when .py file is on desktop
75,399,303
true
107
Only for a .py file that is saved on my Desktop, importing some modules (like pandas) fails due to Module not found from an import that happens within the module. This behaviour doesn't happen when the file is saved to a different location. Working on a Mac, and I made a test.py file that only holds:

import pandas as pd

When this test.py is saved on my desktop it generates this error:

Desktop % python3 test.py
Traceback (most recent call last):
  File "/Users/XXX/Desktop/test.py", line 2, in <module>
    import pandas as pd
  File "/Users/XXX/Desktop/pandas/__init__.py", line 22, in <module>
    from pandas.compat import (
  File "/Users/XXX/Desktop/pandas/compat/__init__.py", line 15, in <module>
    from pandas.compat.numpy import (
  File "/Users/XXX/Desktop/pandas/compat/numpy/__init__.py", line 7, in <module>
    from pandas.util.version import Version
  File "/Users/XXX/Desktop/pandas/util/__init__.py", line 1, in <module>
    from pandas.util._decorators import (  # noqa
  File "/Users/XXX/Desktop/pandas/util/_decorators.py", line 14, in <module>
    from pandas._libs.properties import cache_readonly  # noqa
  File "/Users/XXX/Desktop/pandas/_libs/__init__.py", line 13, in <module>
    from pandas._libs.interval import Interval
ModuleNotFoundError: No module named 'pandas._libs.interval'

The weird thing is that if I save the test.py file to any other location on my HD it imports pandas perfectly. The same thing happens for some other modules. The module I'm trying to import seems to go OK, but it fails on an import that happens from within the module. Running which python3 in console from either the desktop folder or any other folder results in:

/Users/XXXX/.pyenv/shims/python

python3 --version results in Python 3.10.9 for all locations.
1.2
1
1
You have a directory named pandas on your desktop. Python is trying to import from this directory instead of the globally installed pandas package. You can also see that in the exception: look at the traceback, from /Users/XXX/Desktop/test.py the code moves to /Users/XXX/Desktop/pandas/__init__.py and so on. Just rename the directory on your desktop. For your own safety, you should not name your local directories with the same names as global packages.
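The shadowing effect can be reproduced in isolation; this hypothetical sketch shadows the stdlib statistics module instead of pandas, using a throwaway directory:

```python
import os
import sys
import tempfile

# Create a directory containing a file that shadows the stdlib
# 'statistics' module, just as a Desktop 'pandas/' folder shadows pandas.
shadow_dir = tempfile.mkdtemp()
with open(os.path.join(shadow_dir, "statistics.py"), "w") as f:
    f.write("marker = 'shadowed'\n")

# A directory at the front of sys.path wins over the installed module --
# the script's own folder normally sits there, which is why only files
# saved next to the shadowing directory are affected.
sys.path.insert(0, shadow_dir)
import statistics

print(statistics.__file__)  # points into shadow_dir, not the stdlib
```

Printing a module's __file__ like this is also a quick way to confirm which copy of pandas actually got imported.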
2023-02-09 15:19:10
2
python,h5py
1
75,401,050
ModuleNotFoundError: No module named 'h5pyViewer'
75,400,681
true
372
I have a question regarding h5pyViewer to view h5 files. I tried pip install h5pyViewer but that didn't work. I checked on Google and it states that h5pyViewer does not work for older versions of Python, but that there are a few solutions on GitHub. I downloaded this with pip install git+https://github.com/Eothred/h5pyViewer.git which finally gave me a successful installation. Yet, when I want to import the package with import h5pyViewer it gave me the following error: ModuleNotFoundError: No module named 'h5pyViewer'. However when I tried to install it again it says: Requirement already satisfied: h5pyviewer in c:\users\celin\anaconda3\lib\site-packages (-v0.0.1.15)Note: you may need to restart the kernel to use updated packages. Any ideas how to get out of this loop or in what other way I could access an .h5 file?
1.2
1
1
There could be so many things wrong that it's hard to say what the problem is.

The actual package import has a lowercase "v": h5pyviewer (as seen in your error message).

Your IDE/Python runner may not be using your Conda environment (you can select the environment in VS Code, and if you are running a script in the terminal, make sure your Conda env is enabled in that terminal).

The GitHub package might be exported from somewhere else. Try something like from Eothred import h5pyviewer. Maybe h5pyviewer is not even supposed to be imported this way!

Overall, I don't suggest using this package; it seems broken on Python 3 and not well maintained. The code on GitHub looks sketchy, and very few people use it. A good indicator is usually the number of people that star or use the package, which seems extremely low here. Additionally, it doesn't even have a real readme file! It doesn't say how to use it at all. I suggest you try something else, like pandas. But if you really want to go with this, you can try the above debugging steps.
2023-02-09 20:04:26
1
python,setuptools,setup.py,python-packaging,python-importlib
2
75,445,678
Add a data directory outside Python package directory
75,403,882
false
230
Given the following directory structure for a package my_package:

/
├── data/
│   ├── more_data/
│   └── foo.txt
├── my_package/
│   ├── __init__.py
│   └── stuff/
│       └── __init__.py
├── README.md
├── setup.cfg
├── setup.py

How can I make the data/ directory accessible (in the most Pythonic way) from within code, without using __file__ or other hacky solutions? I have tried using data_files in setup.py and the [options.package_data] in setup.cfg to no avail. I would like to do something like:

dir_data = importlib.resources.files(data)
csv_files = dir_data.glob('*.csv')

EDIT: I'm working with an editable installation and there's already a data/ directory in the package (for source code unrelated to the top-level data).
0.099668
2
1
Create an empty data/__init__.py file, so that data becomes a top-level import package, so that the data files become package data, so that they are accessible via importlib.resources.files('data'). This should work with "editable installation". You might need to do small changes in your packaging files (setup.py or setup.cfg or pyproject.toml).
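A sketch of that suggestion, building a scratch project on the fly so the effect of the empty __init__.py is visible end to end (paths and file names here are hypothetical):

```python
import importlib.resources
import os
import sys
import tempfile

# Build a throwaway project with a top-level 'data' package:
# a directory plus an empty __init__.py, as the answer suggests.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "data")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "foo.csv"), "w") as f:
    f.write("a,b\n1,2\n")

sys.path.insert(0, root)

# Once 'data' is an import package, its files are reachable
# without __file__ hacks:
dir_data = importlib.resources.files("data")
csv_files = sorted(p.name for p in dir_data.iterdir() if p.name.endswith(".csv"))
print(csv_files)  # ['foo.csv']
```

In the real project the sys.path manipulation is unnecessary; the editable install takes care of making the package importable.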
2023-02-10 09:49:46
0
python,pandas,dataframe
2
75,409,657
Last row of some column in dataframe not included
75,409,352
false
85
So I have tried to find an average of a value for index 0 before it changes to another index. An example of the dataframe:

column_a  value_b  sum_c  count_d  avg_e
   0        10       10      1
   0        20       30      2
   0        30       60      3      20
   1        10       10      1
   1        20       30      2
   1        30       60      3      20
   0        10       10      1
   0        20       30      2      15
   1        10       10      1
   1        20       30      2
   1        30       60      3      20
   0        10       10      1
   0        20

However, only the last row's sum and count are unavailable, so the avg cannot be calculated for it. Part of the code...

#sum and avg for each section
for i, row in df.iloc[0:-1].iterrows():
    if df['column_a'][i] == 0:
        sum = sum + df['value_b'][i]
        df['sum_c'][i] = sum
        count = count + 1
        df['count_d'][i] = count
    else:
        sum = 0
        count = 0
        df['sum_c'][i] = sum
        df['count_d'][i] = count

totcount = 0
for m, row in df.iloc[0:-1].iterrows():
    if df.loc[m, 'column_a'] == 0:
        if (df.loc[m+1, 'sum_c'] == 0):
            totcount = df.loc[m, 'count_d']
            avg_e = (df.loc[m, 'sum_c']) / totcount
            df.loc[m, 'avg_e'] = avg_e

I have tried only using df.iloc[0:].iterrows() but it produces an error.
0
1
1
It is the expected behavior of df.iloc[0:-1] to return all the rows except the last one. When using slicing, remember that the last index you provide is not included in the returned range. Since -1 is the index of the last row, [0:-1] excludes the last row. The solution given by @mozway is more elegant anyway, but if for any reason you still want to use iterrows(), you can use df.iloc[0:]. The error you got when you did so may be due to your df.loc[m+1, 'sum_c']: at the last row, m+1 will be out of bounds and produce an IndexError.
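A small plain-list illustration of iterating rows together with their successor, which sidesteps ever indexing m+1 past the end (the values here are arbitrary stand-ins for a dataframe column):

```python
values = [10, 20, 30, 40]          # stand-in for a column of df
# zip stops at the shorter sequence, so the last element is never
# paired with a non-existent successor -- no IndexError possible.
pairs = list(zip(values, values[1:]))
print(pairs)  # [(10, 20), (20, 30), (30, 40)]
```

The same current/next pairing idea carries over to iterrows-based loops that need to look one row ahead.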
2023-02-10 09:57:51
0
python,pycharm
1
75,409,570
Selecting Python.exe as a interpreter doesnt work?
75,409,462
false
127
After installing PyCharm I get an error message: "Please select a valid Python interpreter". I went to the Python interpreter settings > Add Interpreter > System Interpreter and wrote the path to the python.exe. When I select the python.exe and click "OK" I get an error message: "invalid python interpreter name 'python.exe'". I tried reinstalling PyCharm and looking for YouTube video solutions, but none of them worked.
0
1
1
Did you try reinstalling Python? Also try running python from cmd to check that your python.exe does indeed work properly. Let me know if that doesn't work; the problem seems odd. One more (perhaps dumb) question: did you actually select the python.exe file itself? Be careful not to select only the folder.
2023-02-10 11:14:18
1
python,json,starlette
1
75,410,551
Python and Starlette - receiving a tuple from an API that's trying to return json
75,410,361
true
55
I'm working on a Starlette API. I am trying to receive a response object or JSON but I end up with a tuple. Any thoughts or guidance will be appreciated.

Frontend:

headers = {"Authorization": settings.API_KEY}
association = requests.get(
    "http://localhost:9999/get-association",
    headers=headers,
),
print("association:", type(association))

association: <class 'tuple'>

Backend:

@app.route("/get-association")
async def association(request: Request):
    if request.headers["Authorization"] != settings.API_KEY:
        return JSONResponse({"error": "unauthorized"}, status_code=401)
    # return JSONResponse(
    #     content=await get_association(), status_code=200
    # )
    association = {"association": "test data"}
    print("association:", type(association), association)
    return JSONResponse(association)

association: <class 'dict'> {'association': 'test data'}
1.2
1
1
You have a comma after requests.get. This is making a tuple of (<Response [200]>,).
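The trailing-comma pitfall in isolation, with a hypothetical fetch function standing in for requests.get:

```python
def fetch(url):
    # Hypothetical stand-in for requests.get
    return "response for " + url

association = fetch("http://localhost:9999/get-association"),  # trailing comma
print(type(association))  # <class 'tuple'>

association = fetch("http://localhost:9999/get-association")   # comma removed
print(type(association))  # <class 'str'>
```

In Python it is the comma, not the parentheses, that creates a tuple, which is why the stray comma after the call wraps the response in (<Response>,).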
2023-02-10 19:20:07
0
python,installation,pip,version,upgrade
4
75,415,540
How to change python3 version on mac to 3.10.10
75,415,286
false
4,772
I am currently running Python 3.9.13 on my Mac. I wanted to update my version to 3.10.10. I tried running brew install python; however, it says that "python 3.10.10 is already installed"! When I run python3 --version in the terminal it says that I am still on "Python 3.9.13". So my question is, how do I change the Python version from 3.9.13 to 3.10.10? I already deleted Python 3.9 from my applications and Python 3.10 is the only one that is still there. I also tried downloading Python 3.10.10 from the website and installing it; however, it does not work. Python 3.10.10 installs successfully, but the version is still the same when I check it.
0
4
2
Just delete the current Python installation on your device and download the version you want from the official website. That is the easiest way and the most suitable one for a beginner.
2023-02-10 19:20:07
0
python,installation,pip,version,upgrade
4
76,398,761
How to change python3 version on mac to 3.10.10
75,415,286
false
4,772
I am currently running Python 3.9.13 on my Mac. I wanted to update my version to 3.10.10. I tried running brew install python; however, it says that "python 3.10.10 is already installed"! When I run python3 --version in the terminal it says that I am still on "Python 3.9.13". So my question is, how do I change the Python version from 3.9.13 to 3.10.10? I already deleted Python 3.9 from my applications and Python 3.10 is the only one that is still there. I also tried downloading Python 3.10.10 from the website and installing it; however, it does not work. Python 3.10.10 installs successfully, but the version is still the same when I check it.
0
4
2
When you download the latest version, it comes with a file named Update Shell Profile.command. On a Mac, you can find it at /Applications/Python 3.11/Update Shell Profile.command. Run it and your shell should point to the latest version.
2023-02-10 19:29:24
1
python
2
75,415,971
Turn a larger Pandas data frame into smaller rolling data frames
75,415,356
false
40
I'm new to Python, know just enough R to get by. I have a 10 by 10 dataframe:

small2
       USLC      USSC      INTD  ...      DSTS      PCAP       PRE
0  0.059304  0.019987 -0.034140  ...  0.003009  0.113144 -0.021656
1  0.003835 -0.024248  0.012446  ...  0.005323 -0.013716  0.011109
2 -0.045045 -0.047186 -0.002372  ... -0.011956 -0.118342 -0.045023
3  0.054108  0.002787  0.003714  ...  0.014466  0.128931 -0.007596
4  0.064045  0.111250  0.077478  ...  0.012059  0.115427  0.079145
5  0.041442  0.042858  0.047701  ...  0.009984  0.047098  0.003579
6  0.081832  0.046531  0.010531  ...  0.031772  0.126552  0.001398
7 -0.047171  0.022883 -0.065095  ... -0.010224 -0.025990 -0.055431
8  0.054844  0.073193  0.044514  ...  0.016301  0.031755  0.044597
9 -0.032403 -0.043930 -0.065013  ...  0.011944 -0.032902 -0.117689

I want to create a list of several dataframes that are each just rolling 5 by 10 frames: rows 0 through 4, 1 through 5, etc. I've seen articles addressing something similar, but they haven't worked. I'm thinking about it like lapply in R. I tried

splits = [small2.iloc[[i-4:i]] for i in small2.index]

and got a syntax error from the colon. I then tried

splits = [small2.iloc[[i-4,i]] for i in small2.index]

which gave me a list of ten elements. It should be six 5 by 10 elements. Feel like I'm missing something basic. Thank you!
0.099668
1
1
I figured it out. splits = [small2.iloc[i-4:i+1] for i in small2.index[4:10]] Not sure how this indexing makes sense though.
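Why that indexing works can be seen with a plain list standing in for the row labels: each window ends at label i and starts window-1 positions earlier, so i only needs to run over the last label of every full window.

```python
rows = list(range(10))   # stand-in for small2.index
window = 5
# i ranges over labels 4..9, i.e. the last row of each full window;
# the slice end is i+1 because Python slices exclude the endpoint.
splits = [rows[i - window + 1 : i + 1] for i in rows[window - 1:]]
print(len(splits))   # 6
print(splits[0])     # [0, 1, 2, 3, 4]
print(splits[-1])    # [5, 6, 7, 8, 9]
```

Substituting small2.iloc[...] for the list slice gives exactly the comprehension in the answer, with small2.index[4:10] playing the role of rows[window - 1:].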
2023-02-11 17:18:05
3
python,sympy,subclassing
1
75,422,178
Declare symbols local to functions in SymPy
75,421,933
false
64
I have a custom SymPy cSymbol class for the purpose of adding properties to declared symbols. This is done as follows:

class cSymbol(sy.Symbol):
    def __init__(self, name, x, **assumptions):
        self.x = x
        sy.Symbol.__init__(name, **assumptions)

The thing is that when I declare a cSymbol within a function, it affects the property x of a cSymbol declared outside the function if the names are the same (here "a"):

def some_function():
    dummy = cSymbol("a", x=2)

a = cSymbol("a", x=1)
print(a.x)  # >> 1
some_function()
print(a.x)  # >> 2, but should be 1

Is there a way to prevent this (other than passing distinct names)? Actually I am not sure I understand why it behaves like this; I thought that everything declared within the function would stay local to this function. Full code below:

import sympy as sy

class cSymbol(sy.Symbol):
    def __init__(self, name, x, **assumptions):
        self.x = x
        sy.Symbol.__init__(name, **assumptions)

def some_function():
    a = cSymbol("a", x=2)

if __name__ == "__main__":
    a = cSymbol("a", x=1)
    print(a.x)  # >> 1
    some_function()
    print(a.x)  # >> 2, but should be 1
0.53705
1
1
You aren't creating a local Python variable in the subroutine; you are creating a SymPy Symbol object, and all Symbol objects with the same name and assumptions are the same. It doesn't matter where they are created. It sounds like you are blurring together the Python variable and the SymPy variable which, though both bearing the name "variable", are not the same.
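A toy model of this behaviour (not SymPy itself, just a sketch of name-based interning): __new__ hands back a cached object for a repeated name, so "local" construction mutates the shared instance, which matches the output the question observed.

```python
class InternedSymbol:
    # Mimics the observed Symbol behaviour: same name -> same object,
    # wherever the constructor is called from.
    _cache = {}

    def __new__(cls, name, x=None):
        obj = cls._cache.setdefault(name, super().__new__(cls))
        obj.x = x   # __init__-style code reruns on the cached object
        return obj

def some_function():
    InternedSymbol("a", x=2)   # no assignment needed to cause the effect

a = InternedSymbol("a", x=1)
some_function()
print(a.x)  # 2 -- both constructions yielded the very same object
```

This is why passing distinct names is the straightforward fix: a different name keys a different cached object.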
2023-02-12 00:48:24
1
python,cmd
1
75,424,334
Is there a way for a Python program to "cd" to a folder that has a space in it?
75,424,277
false
63
I am creating a code editor, and I am trying to create a run feature. Right now I see that the problems come when I encounter a folder with a space in its name. It works on the command line, but not with os.system().

def run(event):
    if open_status_name != False:
        directory_split = open_status_name.split("/")
        for directory in directory_split:
            if directory_split.index(directory) > 2:
                true_directory = directory.replace(" ", "\s")
                print(true_directory)
                data = os.system("cd " + directory.replace(" ", "\s"))
                print(data)

I tried to replace the space with the regex character "\s" but that also didn't work.
0.197375
1
1
os.system runs the command in a shell. You'd have to add quotes to get the value though: os.system(f'cd "{directory}"'). But the cd would only be valid for that subshell for the brief time it exists - it would not change the directory of your python program. Use os.chdir(directory) instead. Note - os.chdir can be risky as any relative paths you have in your code suddenly become invalid once you've done that. It may be better to manage your editor's "current path" on your own.
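A minimal sketch of the os.chdir suggestion, using a hypothetical folder name with a space:

```python
import os
import tempfile

# Hypothetical directory whose name contains a space.
target = os.path.join(tempfile.mkdtemp(), "My Project")
os.makedirs(target)

# os.system('cd "..."') would only move a short-lived subshell;
# os.chdir changes the working directory of this Python process itself,
# and no space-escaping is needed because no shell is involved.
os.chdir(target)
print(os.getcwd())  # ends with 'My Project'
```

Note that "\s" is regex syntax, not shell escaping; quoting the path (or avoiding the shell entirely, as here) is what actually handles the space.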
2023-02-12 16:52:52
1
python,google-cloud-platform
2
75,434,722
Will Google Cloud run this type of application?
75,428,618
false
75
I have a Python script which runs 24 hours a day on my local system, and my script uses different third-party libraries that are installed using pip: BeautifulSoup, requests, m3u8. My Python script is recording some live stream videos from a website and storing them on the system. How will Google Cloud help me to run this script 24 hours daily, 7 days a week? I am very new to clouds. Please help me: I want to host my script on Google Cloud and want to make sure that my script will work there the same as it works on my local system, so my money is not lost.
0.099668
1
1
If you want to run a 24/7 application on the cloud, whichever cloud, you must not use a solution with a timeout (like Cloud Run or Cloud Functions). You could consider App Engine flex, but that wouldn't be my best advice. The most efficient option for me (low maintenance, cost efficient) is GKE Autopilot: a Kubernetes cluster managed for you, where you pay only for the CPU/memory that your workloads use. You have to containerize your app to do that.
2023-02-12 20:37:44
-1
python-3.x,websocket,cloudflare
2
75,525,970
How to creat connection websocket qxbroker in python
75,430,030
false
212
how to bypass HTTP/1.1 403 Forbidden in connect to wss://ws2.qxbroker.com/socket.io/EIO=3&transport=websocket, i try change user-agent and try use proxy and add cookis but not work class WebsocketClient(object): def __init__(self, api): websocket.enableTrace(True) Origin = 'Origin: https://qxbroker.com' Extensions = 'Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits' Host = 'Host: ws2.qxbroker.com' Agent = 'User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 OPR/94.0.0.0' self.api = api self.wss=websocket.WebSocketApp(('wss://ws2.qxbroker.com/socket.io/EIO=3&transport=websocket'), on_message=(self.on_message), on_error=(self.on_error), on_close=(self.on_close), on_open=(self.on_open), header=[Origin,Extensions,Agent]) request and response header this site protect with cloudflare --- request header --- GET /socket.io/?EIO=3&transport=websocket HTTP/1.1 Upgrade: websocket Host: ws2.qxbroker.com Sec-WebSocket-Key: 7DgEjWxUp8N8PVY7N7vyDw== Sec-WebSocket-Version: 13 Connection: Upgrade Origin: https://qxbroker.com User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 ----------------------- --- response header --- HTTP/1.1 403 Forbidden Date: Sat, 11 Feb 2023 23:33:11 GMT Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: close Permissions-Policy: accelerometer=(),autoplay=(),camera=(),clipboard-read=(),clipboard-write=(),fullscreen=(),geolocation=(),gyroscope=(),hid=(),interest-cohort=(),magnetometer=(),microphone=(),payment=(),publickey-credentials-get=(),screen-wake-lock=(),serial=(),sync-xhr=(),usb=() Referrer-Policy: same-origin X-Frame-Options: SAMEORIGIN Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Expires: Thu, 01 Jan 1970 00:00:01 GMT Set-Cookie: 
__cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd/FxRoO/bPhKA2Dc0E0=; path=/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None Server-Timing: cf-q-config;dur=6.9999950937927e-06 Server: cloudflare CF-RAY: 7980e3583b6a0785-MRS
-0.099668
1
2
Have you tried sending the cookies in the WebSocketApp argument? "__cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd/FxRoO/bPhKA2Dc0E0=; path=/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None"
2023-02-12 20:37:44
2
python-3.x,websocket,cloudflare
2
75,536,817
How to creat connection websocket qxbroker in python
75,430,030
false
212
how to bypass HTTP/1.1 403 Forbidden in connect to wss://ws2.qxbroker.com/socket.io/EIO=3&transport=websocket, i try change user-agent and try use proxy and add cookis but not work class WebsocketClient(object): def __init__(self, api): websocket.enableTrace(True) Origin = 'Origin: https://qxbroker.com' Extensions = 'Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits' Host = 'Host: ws2.qxbroker.com' Agent = 'User-Agent:Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 OPR/94.0.0.0' self.api = api self.wss=websocket.WebSocketApp(('wss://ws2.qxbroker.com/socket.io/EIO=3&transport=websocket'), on_message=(self.on_message), on_error=(self.on_error), on_close=(self.on_close), on_open=(self.on_open), header=[Origin,Extensions,Agent]) request and response header this site protect with cloudflare --- request header --- GET /socket.io/?EIO=3&transport=websocket HTTP/1.1 Upgrade: websocket Host: ws2.qxbroker.com Sec-WebSocket-Key: 7DgEjWxUp8N8PVY7N7vyDw== Sec-WebSocket-Version: 13 Connection: Upgrade Origin: https://qxbroker.com User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 ----------------------- --- response header --- HTTP/1.1 403 Forbidden Date: Sat, 11 Feb 2023 23:33:11 GMT Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: close Permissions-Policy: accelerometer=(),autoplay=(),camera=(),clipboard-read=(),clipboard-write=(),fullscreen=(),geolocation=(),gyroscope=(),hid=(),interest-cohort=(),magnetometer=(),microphone=(),payment=(),publickey-credentials-get=(),screen-wake-lock=(),serial=(),sync-xhr=(),usb=() Referrer-Policy: same-origin X-Frame-Options: SAMEORIGIN Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Expires: Thu, 01 Jan 1970 00:00:01 GMT Set-Cookie: 
__cf_bm=7TD4hk4.bntJRdP6w9K.AjXF5MsV9LERTJV00jL2Uww-1676158391-0-AZFOKw90ZYdyy4RxX1xJ4jZQMt74+3UkQDZpDrdXE8BxGJULfe8j0T8EZnpUNXr2W3YHd/FxRoO/bPhKA2Dc0E0=; path=/; expires=Sun, 12-Feb-23 00:03:11 GMT; domain=.qxbroker.com; HttpOnly; Secure; SameSite=None Server-Timing: cf-q-config;dur=6.9999950937927e-06 Server: cloudflare CF-RAY: 7980e3583b6a0785-MRS
0.197375
1
2
I resolved the problem by sending the "header" parameter: header = { "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko)" }
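A minimal sketch of the fix described in the answer above. The User-Agent string is copied from the answer; the actual connection call is left as a comment so the sketch runs without the websocket-client library or network access (recent websocket-client versions accept `header` as a dict).

```python
# The header that resolved the 403; the User-Agent value is the one
# given in the answer above.
header = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko)"
}

# Passing it to the client would look roughly like this:
# wss = websocket.WebSocketApp(
#     "wss://ws2.qxbroker.com/socket.io/?EIO=3&transport=websocket",
#     header=header)

print(sorted(header))  # -> ['User-Agent']
```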
2023-02-13 00:16:54
3
python,django,docker,google-cloud-run
1
75,431,802
Django app on Cloud Run infinite redirects (301)
75,430,998
true
106
I am trying to deploy a Django app in a container to Cloud Run. I have it running well locally using Docker. However, when I deploy it to Cloud Run, I get infinite 301 redirects. The Cloud Run logs do not seem to show any meaningful info about why that happens. Below is my Dockerfile that I use for deployment: # Pull base image FROM python:3.9.0 # Set environment variables ENV PIP_DISABLE_PIP_VERSION_CHECK 1 ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 # Set work directory WORKDIR /code # Install dependencies COPY requirements.txt requirements.txt RUN pip install -r requirements.txt && \ adduser --disabled-password --no-create-home django-user # Copy project COPY . /code USER django-user # Run server CMD exec gunicorn -b :$PORT my_app.wsgi:application I store all the sensitive info in Secrets Manager, and the connection to it seems to work fine (I know because I had an issue with it and now I fixed that). Could you suggest what I might have done wrong, or where can I look for hints as to why the redirects happen? Thank you! 
EDIT: Here are the settings for ALLOWED_HOSTS and ROOT_URLCONF CLOUDRUN_SERVICE_URL = env("CLOUDRUN_SERVICE_URL", default=None) if CLOUDRUN_SERVICE_URL: ALLOWED_HOSTS = [urlparse(CLOUDRUN_SERVICE_URL).netloc] CSRF_TRUSTED_ORIGINS = [CLOUDRUN_SERVICE_URL] # SECURE_SSL_REDIRECT = True SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https") else: ALLOWED_HOSTS = ["*"] ROOT_URLCONF = 'my_app.urls' EDIT 2: Here are the Cloud Run logs: [ { "insertId": "63ea0f3a0009301fc1588a44", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "719", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.016940322s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "location": "europe-west4", "configuration_name": "stokkio-test", "project_id": "stokkio", "revision_name": "stokkio-test-00007-nah", "service_name": "stokkio-test" } }, "timestamp": "2023-02-13T10:21:46.602143Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/64be6aa2f943773a97b8dca48c08183f", "receiveTimestamp": "2023-02-13T10:21:46.738718368Z", "spanId": "12503801728925259527" }, { "insertId": "63ea0f3a000a1ab20ae2502b", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "719", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.015862415s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", 
"labels": { "project_id": "stokkio", "location": "europe-west4", "service_name": "stokkio-test", "revision_name": "stokkio-test-00007-nah", "configuration_name": "stokkio-test" } }, "timestamp": "2023-02-13T10:21:46.662194Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/b9918384299b4f2d5abaf95d3b191b52", "receiveTimestamp": "2023-02-13T10:21:46.738718368Z", "spanId": "4996242098785213790" }, { "insertId": "63ea0f3a000aca32edc19ff5", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "719", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.015062643s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "project_id": "stokkio", "revision_name": "stokkio-test-00007-nah", "configuration_name": "stokkio-test", "service_name": "stokkio-test", "location": "europe-west4" } }, "timestamp": "2023-02-13T10:21:46.707122Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/902a25de57f137b27daadd636246369a", "receiveTimestamp": "2023-02-13T10:21:46.738718368Z", "spanId": "12127042401513465971" }, { "insertId": "63ea0f3a000b8d87125ec41c", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "720", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; 
rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.016173479s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "revision_name": "stokkio-test-00007-nah", "service_name": "stokkio-test", "location": "europe-west4", "configuration_name": "stokkio-test", "project_id": "stokkio" } }, "timestamp": "2023-02-13T10:21:46.757127Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/02532852f1783bc16f2b66b7941c300e", "receiveTimestamp": "2023-02-13T10:21:47.071599643Z", "spanId": "5082316244221461602" }, { "insertId": "63ea0f3a000ce2f9bb9dbffa", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "719", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.017867221s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "service_name": "stokkio-test", "revision_name": "stokkio-test-00007-nah", "configuration_name": "stokkio-test", "location": "europe-west4", "project_id": "stokkio" } }, "timestamp": "2023-02-13T10:21:46.844537Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/933a163da353fbb6b81f2f4bb37cff36", "receiveTimestamp": "2023-02-13T10:21:47.071599643Z", "spanId": "5044082674168555502" }, { "insertId": "63ea0f3a000d9928e046cc4c", "httpRequest": { "requestMethod": 
"GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "720", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.015601548s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "revision_name": "stokkio-test-00007-nah", "location": "europe-west4", "project_id": "stokkio", "service_name": "stokkio-test", "configuration_name": "stokkio-test" } }, "timestamp": "2023-02-13T10:21:46.891176Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/37376b9045f8fc7b148437d39ba49bfe", "receiveTimestamp": "2023-02-13T10:21:47.071599643Z", "spanId": "3090697929386714415" }, { "insertId": "63ea0f3a000e47cbe8acf1d4", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "720", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.015684058s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "location": "europe-west4", "configuration_name": "stokkio-test", "revision_name": "stokkio-test-00007-nah", "service_name": "stokkio-test", "project_id": "stokkio" } }, "timestamp": "2023-02-13T10:21:46.935883Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": 
"projects/stokkio/traces/1aef8aebf520c8b999ff475465ae402d", "receiveTimestamp": "2023-02-13T10:21:47.071599643Z", "spanId": "5530487600267712102" }, { "insertId": "63ea0f3a000f124e3e217c45", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "719", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.017848766s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "location": "europe-west4", "project_id": "stokkio", "configuration_name": "stokkio-test", "revision_name": "stokkio-test-00007-nah", "service_name": "stokkio-test" } }, "timestamp": "2023-02-13T10:21:46.987726Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/fa978438d859dd302167f39f941934ec", "receiveTimestamp": "2023-02-13T10:21:47.071599643Z", "spanId": "1186815225754169043" }, { "insertId": "63ea0f3b00008ee9db5031dc", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "719", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.015688891s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "location": "europe-west4", "service_name": "stokkio-test", "configuration_name": "stokkio-test", "project_id": "stokkio", "revision_name": "stokkio-test-00007-nah" } }, "timestamp": "2023-02-13T10:21:47.036585Z", "severity": "INFO", "labels": { "instanceId": 
"00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/24aedf0be321b5b72768e877459d8ceb", "receiveTimestamp": "2023-02-13T10:21:47.071599643Z", "spanId": "10950882171467594641" }, { "insertId": "63ea0f3b00015a4c9feb5375", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "718", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.017323986s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "location": "europe-west4", "revision_name": "stokkio-test-00007-nah", "configuration_name": "stokkio-test", "service_name": "stokkio-test", "project_id": "stokkio" } }, "timestamp": "2023-02-13T10:21:47.088652Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/bc99cdb404d30d79eeca345aa9e1e08f", "receiveTimestamp": "2023-02-13T10:21:47.404890035Z", "spanId": "9075675780908094052" }, { "insertId": "63ea0f3b00020e2a8050452d", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "720", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.015765805s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "project_id": "stokkio", "revision_name": "stokkio-test-00007-nah", 
"configuration_name": "stokkio-test", "service_name": "stokkio-test", "location": "europe-west4" } }, "timestamp": "2023-02-13T10:21:47.134698Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/2ff445dd04e8f2d88a65f45af2a15e00", "receiveTimestamp": "2023-02-13T10:21:47.404890035Z", "spanId": "93159101454760213" }, { "insertId": "63ea0f3b0002e5a790b8b27f", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "718", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.016101403s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "revision_name": "stokkio-test-00007-nah", "configuration_name": "stokkio-test", "service_name": "stokkio-test", "location": "europe-west4", "project_id": "stokkio" } }, "timestamp": "2023-02-13T10:21:47.189863Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/33c3a83942c227fd78262d7bbd5e3c0c", "receiveTimestamp": "2023-02-13T10:21:47.404890035Z", "spanId": "1509834668974463252" }, { "insertId": "63ea0f3b00039c080261c60b", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "719", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": 
"216.239.32.53", "latency": "0.015538512s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "revision_name": "stokkio-test-00007-nah", "service_name": "stokkio-test", "configuration_name": "stokkio-test", "location": "europe-west4", "project_id": "stokkio" } }, "timestamp": "2023-02-13T10:21:47.236552Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/34452d901bf9e91f11103df834fa9e40", "receiveTimestamp": "2023-02-13T10:21:47.404890035Z", "spanId": "8356040364675355850" }, { "insertId": "63ea0f3b0004863bb01e0463", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", "requestSize": "719", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.014853111s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "location": "europe-west4", "configuration_name": "stokkio-test", "revision_name": "stokkio-test-00007-nah", "project_id": "stokkio", "service_name": "stokkio-test" } }, "timestamp": "2023-02-13T10:21:47.296507Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/140e39f594ea8a6e074bc4435dc5a510", "receiveTimestamp": "2023-02-13T10:21:47.404890035Z", "spanId": "12869781596943932295" }, { "insertId": "63ea0f3b00054f5971f9d391", "httpRequest": { "requestMethod": "GET", "requestUrl": "https://stokkio-test-bizhlx6wsq-ez.a.run.app/", 
"requestSize": "718", "status": 301, "responseSize": "821", "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0", "remoteIp": "80.208.2.138", "serverIp": "216.239.32.53", "latency": "0.015427982s", "protocol": "HTTP/1.1" }, "resource": { "type": "cloud_run_revision", "labels": { "location": "europe-west4", "service_name": "stokkio-test", "revision_name": "stokkio-test-00007-nah", "project_id": "stokkio", "configuration_name": "stokkio-test" } }, "timestamp": "2023-02-13T10:21:47.347993Z", "severity": "INFO", "labels": { "instanceId": "00f8b6bdb8eceaf16c38c4476e4bc4b018e5c36e674ddb382e5d1a5654e23693e60d6682d498b0cde1680fb9104257c4ef191c90bb395e9dd78bdef2870378149e" }, "logName": "projects/stokkio/logs/run.googleapis.com%2Frequests", "trace": "projects/stokkio/traces/99472b16d5ee9c8a6ff9e687b43a6ca9", "receiveTimestamp": "2023-02-13T10:21:47.404890035Z", "spanId": "11202554865495003658" } ]
1.2
2
1
Specify a valid 'ALLOWED_HOSTS' for the app in the Django settings; in your case the hostname will be that of the Cloud Run service you deployed. Secondly, configure the root URL configuration 'ROOT_URLCONF' for your app.
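A sketch of what those settings.py lines might look like. The service URL is taken from the question's Cloud Run logs and used here purely as an example value; adjust it to your own service.

```python
# Relevant Django settings for a Cloud Run deployment (example values).
from urllib.parse import urlparse

CLOUDRUN_SERVICE_URL = "https://stokkio-test-bizhlx6wsq-ez.a.run.app"

# Restrict to the Cloud Run hostname and trust it for CSRF checks.
ALLOWED_HOSTS = [urlparse(CLOUDRUN_SERVICE_URL).netloc]
CSRF_TRUSTED_ORIGINS = [CLOUDRUN_SERVICE_URL]
# Cloud Run terminates TLS in front of the app, so trust the proxy header.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
ROOT_URLCONF = "my_app.urls"

print(ALLOWED_HOSTS)  # -> ['stokkio-test-bizhlx6wsq-ez.a.run.app']
```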
2023-02-13 02:04:00
0
python,pandas,dataframe,machine-learning,pycharm
1
75,431,477
pip install of pandas
75,431,371
false
204
I have recently attempted to install pandas through pip. It appears to go through the process of installing pandas and all dependencies properly. I updated to the latest version through cmd as well and everything appears to work; typing in pip show pandas gives back information as expected with the pandas version showing as 1.5.3. However, it appears that when attempting to import pandas into a project in PyCharm (I am wondering if this is where the issue lies) it gives an error stating that it can't be found. I looked through the folders to make sure the paths were correct and that pip didn't install pandas anywhere odd; it did not. I uninstalled Python and installed the latest version; before proceeding I would like to know if there is any reason this issue has presented itself. I looked into installing Anaconda instead, but that is only compatible with Python version 3.9 or 3.1, whereas I am using the newest version, 3.11.2
0
1
1
When this happens to me, I reload the environment variables by running source ~/.bashrc right in the PyCharm terminal. I make sure I have activated the correct venv (where the package installations go) by cd-ing to path_with_venv and then running source ~/pathtovenv/venv/bin/activate. If that does not work, hit CMD+, to open your project settings, and under Python Interpreter select the one with the venv that you have activated. Also check whether pandas appears in the list of packages shown below the selected interpreter; if not, you can search for it and install it that way instead of via pip install.
2023-02-13 06:04:15
1
python,machine-learning,deep-learning,data-preprocessing
4
75,432,397
Normalize -1 ~ 1
75,432,346
true
211
There are many normalization techniques for ML and DL. Most of them are known to normalize only to the range 0 to 1. I want to know whether there are some ways to normalize between -1 and 1.
1.2
1
3
You can use the min-max scaler or z-score normalization. Here is what you can do in sklearn: from sklearn.preprocessing import MinMaxScaler from sklearn.preprocessing import StandardScaler Or hard-code it like this: x_scaled = (x - min(x)) / (max(x) - min(x)) * 2 - 1 -> this one for the min-max scaler x_scaled = (x - mean(x)) / std(x) -> this one for the standard scaler
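A small runnable sketch of the hard-coded min-max variant from the answer above; the sample data is invented.

```python
def minmax_scale_to_minus1_1(values):
    """Min-max scale a list of numbers to the range [-1, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * 2 - 1 for v in values]

print(minmax_scale_to_minus1_1([0.0, 5.0, 10.0]))  # -> [-1.0, 0.0, 1.0]
```

With sklearn installed, MinMaxScaler(feature_range=(-1, 1)) does the same scaling in one call.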
2023-02-13 06:04:15
0
python,machine-learning,deep-learning,data-preprocessing
4
75,432,401
Normalize -1 ~ 1
75,432,346
false
211
there are many ways about normalize skils for ml and dl. It is known to provide only normalization for 0 to 1. I want to know that is some ways to normalize -1 between 1.
0
1
3
Yes, there are ways to normalize data to the range between -1 and 1. One common method is called Min-Max normalization. It works by transforming the data to a new range, such that the minimum value is mapped to -1 and the maximum value is mapped to 1. The formula for this normalization is: x_norm = (x - x_min) / (x_max - x_min) * 2 - 1 Where x_norm is the normalized value, x is the original value, x_min is the minimum value in the data and x_max is the maximum value in the data. Another method for normalizing data to the range between -1 and 1 is called Z-score normalization, also known as standard score normalization. This method normalizes the data by subtracting the mean and dividing by the standard deviation. The formula for this normalization is: x_norm = (x - mean) / standard deviation Where x_norm is the normalized value, x is the original value, mean is the mean of the data and standard deviation is the standard deviation of the data.
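A small runnable sketch of the z-score formula above, using only the standard library; the sample data is invented. Note that, unlike min-max scaling, the standard score is not strictly bounded to [-1, 1].

```python
import statistics

def zscore(values):
    """Standard-score normalization: subtract the mean, divide by the std."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)  # population standard deviation
    return [(v - mu) / sigma for v in values]

print(zscore([2.0, 4.0, 6.0]))  # middle value maps to exactly 0.0
```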
2023-02-13 06:04:15
2
python,machine-learning,deep-learning,data-preprocessing
4
75,432,374
Normalize -1 ~ 1
75,432,346
false
211
There are many normalization techniques for ML and DL. Most of them are known to normalize only to the range 0 to 1. I want to know whether there are some ways to normalize between -1 and 1.
0.099668
1
3
Consider re-scaling the normalized value: e.g. normalize to 0..1, then multiply by 2 and subtract 1 so the value falls into the range -1..1.
2023-02-13 07:24:28
0
python,amazon-web-services,pytest,allure
1
75,457,422
How to run allure generate command while using aws code build
75,432,923
false
167
I am using AWS CodeBuild to execute my test suite. It says 'permission denied' when I try to run allure generate in AWS CodeBuild. Please share the solution if anyone knows how to generate an Allure report while working with AWS CodeBuild. I am using pytest and the scenario works fine locally, but fails in AWS, as the AWS build is not allowing me to run the allure generate command. On successful dev deployment --> test suite execution --> generate Allure reports --> upload them to S3 --> send the report via email using AWS SNS with Lambda. All the above steps are working fine except the 3rd step (allure generate). Please share the solution if anyone knows how to do it.
0
1
1
I was able to fix this by downloading the allure package fresh outside of $CODEBUILD_SRC_DIR and setting the PATH to that location. (Initially I made this part of the test repository itself and added that location to PATH, which was not working.)
2023-02-13 07:54:12
1
python,pydantic
2
75,433,527
using isinstance on a pydantic model
75,433,141
false
590
I am expecting multiple data types as input to a function & want to take a specific action if its a pydantic model (pydantic model here means class StartReturnModel(BaseModel)). In case of model instance I can check it, using isinstance(model, StartReturnModel) or isinstance(model, BaseModel) to identify its a pydantic model instance. Based on the below test program I can see that type(StartReturnModel) returns as ModelMetaclass. Can I use this to identify a pydantic model? or is there any better way to do it? from pydantic.main import ModelMetaclass from typing import Optional class StartReturnModel(BaseModel): result: bool pid: Optional[int] print(type(StartReturnModel)) print(f"is base model: {bool(isinstance(StartReturnModel, BaseModel))}") print(f"is meta model: {bool(isinstance(StartReturnModel, ModelMetaclass))}") res = StartReturnModel(result=True, pid=500045) print(f"\n{type(res)}") print(f"is start model(res): {bool(isinstance(res, StartReturnModel))}") print(f"is base model(res): {bool(isinstance(res, BaseModel))}") print(f"is meta model(res): {bool(isinstance(res, ModelMetaclass))}") *****Output**** <class 'pydantic.main.ModelMetaclass'> is base model: False is meta model: True <class '__main__.StartReturnModel'> is start model(res): True is base model(res): True is meta model(res): False
0.099668
1
1
Yes, you can use it, but why not just use isinstance or issubclass?
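A sketch of the check the answer suggests. Pydantic's actual BaseModel is swapped for a stand-in base class here so the snippet runs without the library installed; with pydantic available, replace Base with pydantic.BaseModel.

```python
class Base:
    """Stand-in for pydantic.BaseModel (assumption: not the real class)."""
    pass

class StartReturnModel(Base):
    pass

def is_model(obj):
    """True for both the model class itself and instances of it."""
    if isinstance(obj, type):
        return issubclass(obj, Base)   # obj is a class
    return isinstance(obj, Base)       # obj is an instance

print(is_model(StartReturnModel), is_model(StartReturnModel()), is_model(42))
# -> True True False
```

This avoids relying on ModelMetaclass, which is an internal detail.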
2023-02-13 08:54:42
4
python,tensorflow,keras,image-segmentation
3
75,434,944
module 'keras.utils.generic_utils' has no attribute 'get_custom_objects' when importing segmentation_models
75,433,717
true
5,094
I am working on google colab with the segmentation_models library. It worked perfectly the first week using it, but now it seems that I can't import the library anymore. Here is the error message, when I execute import segmentation_models as sm : --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-3-6f48ce46383f> in <module> 1 import tensorflow as tf ----> 2 import segmentation_models as sm 3 frames /usr/local/lib/python3.8/dist-packages/efficientnet/__init__.py in init_keras_custom_objects() 69 } 70 ---> 71 keras.utils.generic_utils.get_custom_objects().update(custom_objects) 72 73 AttributeError: module 'keras.utils.generic_utils' has no attribute 'get_custom_objects' Colab uses tensorflow version 2.11.0. I did not find any information about this particular error message. Does anyone know where the problem may come from ?
1.2
3
1
Encountered the same issue sometimes. How I solved it: open the file keras.py and change all occurrences of 'init_keras_custom_objects' to 'init_tfkeras_custom_objects'. The location of keras.py is in the error message; in your case, it should be in /usr/local/lib/python3.8/dist-packages/efficientnet/
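A sketch of applying that edit programmatically rather than by hand. The text replacement is exactly the one described above; applying it to the real file is left in comments, since the path varies per system (take it from your own traceback).

```python
def patch_efficientnet_source(source: str) -> str:
    """Rename the custom-objects initializer as the fix above describes."""
    return source.replace("init_keras_custom_objects",
                          "init_tfkeras_custom_objects")

# Applying it to the installed file would look like this:
# from pathlib import Path
# target = Path("/usr/local/lib/python3.8/dist-packages/efficientnet/keras.py")
# target.write_text(patch_efficientnet_source(target.read_text()))

print(patch_efficientnet_source("keras.utils.generic_utils init_keras_custom_objects()"))
```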
2023-02-13 09:03:04
1
python,rest,marklogic
2
75,437,482
Unable to create URI with whitespace in MarkLogic
75,433,811
true
39
I have created a Marklogic transform which tries to convert some URL encoded characters: [ ] and whitespace when ingesting data into database. This is the xquery code: xquery version "1.0-ml"; module namespace space = "http://marklogic.com/rest-api/transform/space-to-space"; declare function space:transform( $context as map:map, $params as map:map, $content as document-node() ) as document-node() { let $puts := ( xdmp:log($params), xdmp:log($context), map:put($context, "uri", fn:replace(map:get($context, "uri"), "%5B+", "[")), map:put($context, "uri", fn:replace(map:get($context, "uri"), "%5D+", "]")), map:put($context, "uri", fn:replace(map:get($context, "uri"), "%20+", " ")), xdmp:log($context) ) return $content }; When I tried this with my python code below def upload_document(self, inputContent, uri, fileType, database, collection): if fileType == 'XML': headers = {'Content-type': 'application/xml'} fileBytes = str.encode(inputContent) elif fileType == 'TXT': headers = {'Content-type': 'text/*'} fileBytes = str.encode(inputContent) else: headers = {'Content-type': 'application/octet-stream'} fileBytes = inputContent endpoint = ML_DOCUMENTS_ENDPOINT params = {} if uri is not None: encodedUri = urllib.parse.quote(uri) endpoint = endpoint + "?uri=" + encodedUri if database is not None: params['database'] = database if collection is not None: params['collection'] = collection params['transform'] = 'space-to-space' req = PreparedRequest() req.prepare_url(endpoint, params) response = requests.put(req.url, data=fileBytes, headers=headers, auth=HTTPDigestAuth(ML_USER_NAME, ML_PASSWORD)) print('upload_document result: ' + str(response.status_code)) if response.status_code == 400: print(response.text) The following lines are from the xquery logging: 2023-02-13 16:59:00.067 Info: {} 2023-02-13 16:59:00.067 Info: {"input-type":"application/octet-stream", "uri":"/Judgment/26856/supportingfiles/[TEST] 57_image1.PNG", "output-type":"application/octet-stream"} 2023-02-13 
16:59:00.067 Info: {"input-type":"application/octet-stream", "uri":"/Judgment/26856/supportingfiles/[TEST] 57_image1.PNG", "output type":"application/octet-stream"} 2023-02-13 16:59:00.653 Info: Status 500: REST-INVALIDPARAM: (err:FOER0000) Invalid parameter: invalid uri: /Judgment/26856/supportingfiles/[TEST] 57_image1.PNG
1.2
1
1
The MarkLogic REST API is very opinionated about what a valid URI is, and it doesn't allow you to insert documents that have spaces in the URI. If you have an existing URI with a space in it, the REST API will retrieve or update it for you. However, it won't allow you to create a new document with such a URI. If you need to create documents with spaces in the URI, then you will need to use lower-level APIs. xdmp:document-insert() will let you do that.
2023-02-13 09:51:55
1
python,ssh,sftp,paramiko
1
75,456,237
Cannot copy/move file from remote SFTP server to local machine by Paramiko code running on remote SSH server
75,434,294
true
236
I want to copy a file from my SFTP server to my local computer. However, when I run my code, it doesn't show any error, yet I still cannot find my file on the local computer. My code is like this: import paramiko host_name ='10.110.100.8' user_name = 'abc' password ='xyz' port = 22 remote_dir_name ='/data/.../PMC1087887_00003.jpg' local_dir_name = 'D:\..\pred.jpg' t = paramiko.Transport((host_name, port)) t.connect(username=user_name, password=password) sftp = paramiko.SFTPClient.from_transport(t) sftp.get(remote_dir_name,local_dir_name) I have found the main problem. If I run my code locally in VS Code, it works. But when I log in to my server by SSH in VS Code and run my code on the server, I find that my file appears in the current code folder (for example /home/.../D:\..\pred.jpg) and its name is D:\..\pred.jpg. How do I solve this problem if I want to run the code on the server and download the file to my local machine?
1.2
1
1
If you call SFTPClient.get on the server, it will, as any other file manipulation API, work with files on the server. There's no way to make remote Python script directly work with files on your local machine. You would have to use some API to push the files to your local machine. But for that, your local machine would have to implement the API. For example, you can run an SFTP server on the local machine and "upload" the files to it.
2023-02-13 11:30:20
3
python,regex
2
75,435,577
python/regex: match letter only or letter followed by number
75,435,280
true
101
I want to split this string 'AB4F2D' in ['A', 'B4', 'F2', 'D']. Essentially, if character is a letter, return the letter, if character is a number return previous character plus present character (luckily there is no number >9 so there is never a X12). I have tried several combinations but I am not able to find the correct one: def get_elements(input_string): patterns = [ r'[A-Z][A-Z0-9]', r'[A-Z][A-Z0-9]|[A-Z]', r'\D|\D\d', r'[A-Z]|[A-Z][0-9]', r'[A-Z]{1}|[A-Z0-9]{1,2}' ] for p in patterns: elements = re.findall(p, input_string) print(elements) results: ['AB', 'F2'] ['AB', 'F2', 'D'] ['A', 'B', 'F', 'D'] ['A', 'B', 'F', 'D'] ['A', 'B', '4F', '2D'] Can anyone help? Thanks
1.2
2
1
Use \D\d?. One problem with your patterns is that you put the shorter alternative first, so the longer one never gets a chance. For example, the correct version of your \D|\D\d is \D\d|\D. But the simplest fix is just \D\d?.
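For reference, a runnable check of that pattern on the string from the question:

```python
import re

# \D matches one non-digit (the letter); \d? optionally grabs a
# following digit, so "B4" stays together while a bare "D" still matches.
print(re.findall(r'\D\d?', 'AB4F2D'))  # -> ['A', 'B4', 'F2', 'D']
```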
2023-02-13 16:45:25
1
python,raspberry-pi,gyroscope,mpu6050
1
75,472,650
Detect the speed of the vehicle using MPU6050
75,438,826
false
64
i have been trying to get speed of the vehicle using MPU-6050 but couldn't find my way to do it so, in the end i am stuck here def stateCondition(): while True: acc_x = read_raw_data(ACCEL_XOUT_H) acc_y = read_raw_data(ACCEL_YOUT_H) acc_z = read_raw_data(ACCEL_ZOUT_H) gyro_x = read_raw_data(GYRO_XOUT_H) gyro_y = read_raw_data(GYRO_YOUT_H) gyro_z = read_raw_data(GYRO_ZOUT_H) # Full scale range +/- 250 degree/C as per sensitivity scale factor Ax = acc_x/16384.0 Ay = acc_y/16384.0 Az = acc_z/16384.0 Gx = gyro_x/131.0 Gy = gyro_y/131.0 Gz = gyro_z/131.0 can some one please write the rest of it so that it returns the speed of the vehicle in km/hr or whatever it is!!!!! Thank you
0.197375
1
1
An MPU6050 will provide you with information about changes in motion (mostly acceleration or deceleration, but also turns). It will not provide you with absolute values. Those can only be obtained by integrating over time, which requires a known starting position/speed. It is also very inexact, particularly with cheap motion sensors such as this one. To get the speed of a vehicle, it is much easier to use a GNSS module instead.
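To make that warning concrete, here is what naive integration looks like — a rough dead-reckoning sketch where the sample rate, axis alignment, and gravity removal are all assumptions, and drift makes the result unreliable after more than a few seconds:

```python
G = 9.80665   # m/s^2 per 1 g
dt = 0.01     # assumed sample period (100 Hz)

def integrate_speed(ax_samples_g, v0=0.0):
    """Integrate forward acceleration (in g) into speed (m/s).
    Assumes the x axis points along the direction of travel and
    that gravity has already been removed from the samples."""
    v = v0
    for ax in ax_samples_g:
        v += ax * G * dt
    return v

# 1 s of constant 0.5 g forward acceleration -> ~4.9 m/s
speed = integrate_speed([0.5] * 100)
print(round(speed * 3.6, 1), "km/h")  # -> 17.7 km/h
```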
2023-02-13 18:24:22
1
python,django,django-views,django-templates,django-authentication
2
75,439,899
Django - after sign-in template don't know that user is authenticated
75,439,849
false
65
Below code probably works (no errors present): views.pl class SignInView(View): def get(self, request): return render(request, "signin.html") def post(self, request): user = request.POST.get('username', '') pass = request.POST.get('password', '') user = authenticate(username=user, password=pass) if user is not None: if user.is_active: login(request, user) return HttpResponseRedirect('/') else: return HttpResponse("Bad user.") else: return HttpResponseRedirect('/') ....but in template: {% user.is_authenticated %} is not True. So I don't see any functionality for authenticated user. What is the problem?
0.099668
1
1
{% user.is_authenticated %} is not a valid template tag, so it is never evaluated. Wrap it in an if tag: {% if request.user.is_authenticated %} ... {% endif %} (or {% if user.is_authenticated %}).
2023-02-13 19:20:26
0
python,pandas,openpyxl
3
75,527,773
Why does pandas read_excel fail on an openpyxl error saying 'ReadOnlyWorksheet' object has no attribute 'defined_names'?
75,440,354
false
6,906
This bug suddenly came up literally today after read_excel previously was working fine. Fails no matter which version of python3 I use - either 10 or 11. Do folks know the fix? File "/Users/aizenman/My Drive/code/daily_new_clients/code/run_daily_housekeeping.py", line 38, in <module> main() File "/Users/aizenman/My Drive/code/daily_new_clients/code/run_daily_housekeeping.py", line 25, in main sb = diana.superbills.load_superbills_births(args.site, ath) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/aizenman/My Drive/code/daily_new_clients/code/diana/superbills.py", line 148, in load_superbills_births sb = pd.read_excel(SUPERBILLS_EXCEL, sheet_name="Births", parse_dates=["DOS", "DOB"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 482, in read_excel io = ExcelFile(io, storage_options=storage_options, engine=engine) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 1695, in __init__ self._reader = self._engines[engine](self._io, storage_options=storage_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_openpyxl.py", line 557, in __init__ super().__init__(filepath_or_buffer, storage_options=storage_options) File 
"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 545, in __init__ self.book = self.load_workbook(self.handles.handle) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_openpyxl.py", line 568, in load_workbook return load_workbook( ^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/excel.py", line 346, in load_workbook reader.read() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/excel.py", line 303, in read self.parser.assign_names() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/workbook.py", line 109, in assign_names sheet.defined_names[name] = defn ^^^^^^^^^^^^^^^^^^^ AttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names'
0
12
2
Installing 'xlsxwriter' solved the problem for me. Thanks for the solutions above, but they did not work in my case, so this may be another cause you can consider.
2023-02-13 19:20:26
1
python,pandas,openpyxl
3
75,449,213
Why does pandas read_excel fail on an openpyxl error saying 'ReadOnlyWorksheet' object has no attribute 'defined_names'?
75,440,354
false
6,906
This bug suddenly came up literally today after read_excel previously was working fine. Fails no matter which version of python3 I use - either 10 or 11. Do folks know the fix? File "/Users/aizenman/My Drive/code/daily_new_clients/code/run_daily_housekeeping.py", line 38, in <module> main() File "/Users/aizenman/My Drive/code/daily_new_clients/code/run_daily_housekeeping.py", line 25, in main sb = diana.superbills.load_superbills_births(args.site, ath) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/aizenman/My Drive/code/daily_new_clients/code/diana/superbills.py", line 148, in load_superbills_births sb = pd.read_excel(SUPERBILLS_EXCEL, sheet_name="Births", parse_dates=["DOS", "DOB"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/util/_decorators.py", line 211, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/util/_decorators.py", line 331, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 482, in read_excel io = ExcelFile(io, storage_options=storage_options, engine=engine) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 1695, in __init__ self._reader = self._engines[engine](self._io, storage_options=storage_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_openpyxl.py", line 557, in __init__ super().__init__(filepath_or_buffer, storage_options=storage_options) File 
"/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_base.py", line 545, in __init__ self.book = self.load_workbook(self.handles.handle) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/pandas/io/excel/_openpyxl.py", line 568, in load_workbook return load_workbook( ^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/excel.py", line 346, in load_workbook reader.read() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/excel.py", line 303, in read self.parser.assign_names() File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openpyxl/reader/workbook.py", line 109, in assign_names sheet.defined_names[name] = defn ^^^^^^^^^^^^^^^^^^^ AttributeError: 'ReadOnlyWorksheet' object has no attribute 'defined_names'
0.066568
12
2
You can first uninstall openpyxl with pip uninstall openpyxl -y and then install a pinned version with pip install openpyxl==3.1.0 (note that pip install has no -y flag). In a notebook, prefix the commands with !: !pip uninstall openpyxl -y then !pip install openpyxl==3.1.0 If that does not work, you can try to upgrade pandas, i.e. !pip uninstall pandas -y followed by !pip install pandas
2023-02-13 19:24:07
0
python,spacy,named-entity-recognition
1
75,515,376
'RobertaTokenizerFast' object has no attribute '_in_target_context_manager' error while loading data into custom NER model
75,440,385
false
122
I am trying to load data into a custom NER model using spacy, I am getting an error:- 'RobertaTokenizerFast' object has no attribute '_in_target_context_manager' however, it works fine with the other models. Thank you for your time!!
0
1
1
I faced the same issue after upgrading my environment from {Python 3.9 + Spacy 3.3} to {Python 3.10 + Space 3.5}. Resolved this by upgrading and re-packaging the model.
2023-02-14 06:50:01
0
python-3.x,visual-studio-code
1
75,444,459
Python file won't run in vs code using play button
75,444,318
false
138
i wrote a basic python program and tried running it using the play button but nothing happens, i look through the interpreters and the one for python isnt detected can someone guide me tried looking online for answers but most are confusing since i can't seem to find some of the settings they are recommending i use
0
1
1
My suggestion would be: first check the Python installation on your machine. If that doesn't help, open Keyboard Shortcuts in VS Code (Ctrl+K Ctrl+S, or via the settings button in the bottom-left corner), search for "Run Python File in Terminal", double-click the Key Binding area in front of the first matching entry, and set a keyboard shortcut for running Python (e.g. Alt+Q, my shortcut). This is much more convenient.
2023-02-14 07:32:30
2
python,pandas,csv
2
75,444,673
How to write to a CSV file with pandas while appending to the next empty row without writing the columns again?
75,444,637
false
73
I have a pandas data frame that looks like this: # df1 Id A B C 3 4 5 6 I wrote this to a csv and it works great the first time, however when I append the CSV it rewrites the columns and the values again like this: Id A B C 3 4 5 6 Id A B C 3 4 5 6 Is there a method for the 2nd iteration afterwards to only write the value and not the columns when writing to a csv through pandas? I have tried using the 'a' command for appending and to empty my dataframe so it's just the columns to use as a header to write to the csv and then the as a separate dataframe append the values however pandas does not allow for empty dataframes
0.197375
2
1
Pass header=False (together with mode='a') to each subsequent df.to_csv call so that the column names are written only once.
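A minimal sketch of the two-step pattern (the filename is arbitrary):

```python
import os
import pandas as pd

df = pd.DataFrame({'Id': [3], 'A': [4], 'B': [5], 'C': [6]})

path = 'out.csv'
df.to_csv(path, mode='w', header=True, index=False)   # first write: header once
df.to_csv(path, mode='a', header=False, index=False)  # later writes: rows only

result = pd.read_csv(path)  # one header row, two data rows
os.remove(path)
print(result)
```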
2023-02-14 12:23:38
0
python,optimization,cvxpy,operations-research
2
75,453,931
How would I go about finding the optimal way to split up an order
75,447,782
false
152
I have a problem (that I think I'm over complicating) but for the life of me I can't seem to solve it. I have 2 dataframes. One containing a list of items with quantities that I want to buy. I have another dataframe with a list of suppliers, unit cost and quantity of items available. Along with this I have a dataframe with shipping cost for each supplier. I want to find the optimal way to break up my order among the suppliers to minimise costs. Some added points: Suppliers won't always be able to fulfil the full order of an item so I want to also be able to split an individual item among suppliers if it is cheaper Shipping only gets added once per supplier (2 items from a supplier means I still only pay shipping once for that supplier) I have seen people mention cvxpy for a similar problem but I'm struggling to find a way to use it for my problem (never used it before). Some advice would be great. Note: You don't have to write all the code for me but giving a bit of guidance on how to break down the problem would be great. TIA
0
1
1
Some advice too large for a comment: As @Erwin Kalvelagen alludes to, this problem can be described as a math program, which is probably the most common-sense approach. The general plan of attack is to figure out how to express the problem using some modeling package and then turn it over to a solver engine, which uses diverse techniques to find the optimal answer. cvxpy is certainly one of the options for the first part. I'm partial to pyomo, and pulp is also viable. pulp also installs with a solver (cbc) which is suitable for this type of problem; in other cases you may need to install a solver separately. If you take this approach, look through a text or some online examples on how to formulate a MIP (mixed integer program). You'll have some sets (perhaps items, suppliers, etc.), data that form constraints or limits, some variables indexed by the sets, and an objective... likely to minimize cost. Forget about the complexities of split orders and combined shipping at first and just see if you can get something working with toy data, then build out from there.
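Before reaching for a solver, a brute-force toy (all numbers invented) can make the cost structure concrete — per-unit prices plus a one-off shipping fee charged per supplier actually used:

```python
from itertools import product

demand = 5  # units of one item (toy data)
suppliers = {           # name: (unit_cost, stock, shipping)
    'S1': (2, 3, 10),
    'S2': (3, 5, 4),
}

best = None
names = list(suppliers)
ranges = [range(suppliers[n][1] + 1) for n in names]
for qty in product(*ranges):          # every feasible split of the order
    if sum(qty) != demand:
        continue
    cost = sum(q * suppliers[n][0]                  # units bought
               + (suppliers[n][2] if q else 0)      # shipping once per used supplier
               for n, q in zip(names, qty))
    if best is None or cost < best[0]:
        best = (cost, dict(zip(names, qty)))

# S1 is cheaper per unit, but its shipping fee makes a single
# S2 order the overall winner here.
print(best)  # -> (19, {'S1': 0, 'S2': 5})
```

This only scales to toy sizes; a MIP solver does the same search cleverly.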
2023-02-14 12:27:00
1
python,django,e-commerce
2
75,447,889
How to handle 300 parameters in Django Model / Form?
75,447,819
false
65
I develop an app for creating products in online shop. Let's suppose I have 50 categories of products and each of these has some required parameters for product (like color, size, etc.). Some parameters apper in all categories, and some are unique. That gives me around 300 parameters (fields) that should be defined in Django model. I suppose it is not good idea to create one big database with 300 fields and add products that have 1-15 parameters there (leaving remaining fields empty). What would be the best way to handle it? What would be the best way to display form that will ask only for parameters required in given category?
0.099668
1
1
If you have to keep the model structure as you have defined it here, I would create Product, Category and ProductCategory tables.

Product table:

ProductID  ProductName
1          Shirt
2          Table
3          Vase

Category table:

CategoryID  CategoryName
1           Size
2           Color
3           Material

ProductCategory table:

ID  ProductID  CategoryID    CategoryValue
1   1 (Shirt)  1 (Size)      Medium
2   2 (Table)  2 (Color)     Dark Oak
3   3 (Vase)   3 (Material)  Glass
4   3 (Vase)   3 (Material)  Plastic

This would be the easiest way: it wouldn't create 300 columns and would allow you to reuse categories across different types of products. With many products, though, queries would start to slow down, as you would be joining two big tables, Product and ProductCategory. You could also split it up into more major categories such as "Plants", "Kitchenware", etc.
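A minimal sqlite3 sketch of that three-table layout, using the toy values from the answer, to show that only the parameters a product actually has are stored:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
CREATE TABLE Product  (ProductID INTEGER PRIMARY KEY, ProductName TEXT);
CREATE TABLE Category (CategoryID INTEGER PRIMARY KEY, CategoryName TEXT);
CREATE TABLE ProductCategory (
    ID INTEGER PRIMARY KEY,
    ProductID  INTEGER REFERENCES Product(ProductID),
    CategoryID INTEGER REFERENCES Category(CategoryID),
    CategoryValue TEXT
);
INSERT INTO Product  VALUES (1,'Shirt'),(2,'Table'),(3,'Vase');
INSERT INTO Category VALUES (1,'Size'),(2,'Color'),(3,'Material');
INSERT INTO ProductCategory VALUES
    (1,1,1,'Medium'),(2,2,2,'Dark Oak'),(3,3,3,'Glass'),(4,3,3,'Plastic');
""")

# All parameters for the vase: only the rows that exist are stored.
rows = con.execute("""
    SELECT c.CategoryName, pc.CategoryValue
    FROM ProductCategory pc
    JOIN Category c USING (CategoryID)
    JOIN Product  p USING (ProductID)
    WHERE p.ProductName = 'Vase'
""").fetchall()
print(sorted(rows))  # -> [('Material', 'Glass'), ('Material', 'Plastic')]
```

The equivalent Django models would mirror these three tables with two ForeignKeys on the through table.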
2023-02-14 13:54:23
1
python,recursion,time-complexity,big-o
3
75,449,860
Time complexity of recursion of multiplication
75,448,841
false
85
What is the worst case time complexity (Big O notation) of the following function for positive integers? def rec_mul(a:int, b:int) -> int: if b == 1: return a if a == 1: return b else: return a + rec_mul(a, b-1) I think it's O(n) but my friend claims it's O(2^n) My argument: The function recurs at any case b times, therefor the complexity is O(b) = O(n) His argument: since there are n bits, a\b value can be no more than (2^n)-1, therefor the max number of calls will be O(2^n)
0.066568
2
3
Background A unary encoding of the input uses an alphabet of size 1: think tally marks. If the input is the number a, you need O(a) bits. A binary encoding uses an alphabet of size 2: you get 0s and 1s. If the number is a, you need O(log_2 a) bits. A trinary encoding uses an alphabet of size 3: you get 0s, 1s, and 2s. If the number is a, you need O(log_3 a) bits. In general, a k-ary encoding uses an alphabet of size k: you get 0s, 1s, 2s, ..., and k-1s. If the number is a, you need O(log_k a) bits. What does this have to do with complexity? As you are aware, we ignore multiplicative constants inside big-oh notation. n, 2n, 3n, etc, are all O(n). The same holds for logarithms. log_2 n, 2 log_2 n, 3 log_2 n, etc, are all O(log_2 n). The key observation here is that the ratio log_k1 n / log_k2 n is a constant, no matter what k1 and k2 are... as long as they are greater than 1. That means f(log_k1 n) = O(log_k2 n) for all k1, k2 > 1. This is important when comparing algorithms. As long as you use an "efficient" encoding (i.e., not a unary encoding), it doesn't matter what base you use: you can simply say f(n) = O(lg n) without specifying the base. This allows us to compare runtime of algorithms without worrying about the exact encoding you use. So n = b (which implies a unary encoding) is typically never used. Binary encoding is simplest, and doesn't provide a non-constant speed-up over any other encoding, so we usually just assume binary encoding. That means we almost always assume that n = lg a + lg b as the input size, not n = a + b. A unary encoding is the only one that suggests linear growth, rather than exponential growth, as the values of a and b increase. One area, though, where unary encodings are used is in distinguishing between strong NP-completeness and weak NP-completeness. 
Without getting into the theory, if a problem is NP-complete, we don't expect any algorithm to have a polynomial running time, that is, one bounded by O(n**k) for some constant k when using an efficient encoding. But some algorithms do become polynomial if we allow a unary encoding. If a problem that is otherwise NP-complete becomes polynomial when using a unary encoding, we call that a weakly NP-complete problem. It's still slow, but it is in some sense "faster" than an algorithm where the size of the numbers doesn't matter.
2023-02-14 13:54:23
1
python,recursion,time-complexity,big-o
3
75,449,172
Time complexity of recursion of multiplication
75,448,841
true
85
What is the worst case time complexity (Big O notation) of the following function for positive integers? def rec_mul(a:int, b:int) -> int: if b == 1: return a if a == 1: return b else: return a + rec_mul(a, b-1) I think it's O(n) but my friend claims it's O(2^n) My argument: The function recurs at any case b times, therefor the complexity is O(b) = O(n) His argument: since there are n bits, a\b value can be no more than (2^n)-1, therefor the max number of calls will be O(2^n)
1.2
2
3
Your friend and you can both be right, depending on what is n. Another way to say this is that your friend and you are both wrong, since you both forgot to specify what was n. Your function takes an input that consists in two variables, a and b. These variables are numbers. If we express the complexity as a function of these numbers, it is really O(b log(ab)), because it consists in b iterations, and each iteration requires an addition of numbers of size up to ab, which takes log(ab) operations. Now, you both chose to express the complexity in function of n rather than a or b. This is okay; we often do this; but an important question is: what is n? Sometimes we think it's "obvious" what is n, so we forget to say it. If you choose n = max(a, b) or n = a + b, then you are right, the complexity is O(n). If you choose n to be the length of the input, then n is the number of bits needed to represent the two numbers a and b. In other words, n = log(a) + log(b). In that case, your friend is right, the complexity is O(2^n). Since there is an ambiguity in the meaning of n, I would argue that it's meaningless to express the complexity as a function of n without specifying what n is. So, your friend and you are both wrong.
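A small instrumented version of the function (the counter argument is added here only for measurement) shows both readings at once: the call count grows linearly in b, which is exponential in the bit-length of b:

```python
import sys
sys.setrecursionlimit(10000)  # b recursive calls, so raise the default limit

def rec_mul(a, b, stats):
    stats['calls'] += 1
    if b == 1:
        return a
    return a + rec_mul(a, b - 1, stats)

for b in (2, 16, 256, 1024):
    stats = {'calls': 0}
    rec_mul(3, b, stats)
    print(f"b={b:5d}  bits={b.bit_length():2d}  calls={stats['calls']}")
# doubling the bit count squares the number of calls:
# linear in the value b, exponential in the input length.
```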
2023-02-14 13:54:23
2
python,recursion,time-complexity,big-o
3
75,449,149
Time complexity of recursion of multiplication
75,448,841
false
85
What is the worst case time complexity (Big O notation) of the following function for positive integers? def rec_mul(a:int, b:int) -> int: if b == 1: return a if a == 1: return b else: return a + rec_mul(a, b-1) I think it's O(n) but my friend claims it's O(2^n) My argument: The function recurs at any case b times, therefor the complexity is O(b) = O(n) His argument: since there are n bits, a\b value can be no more than (2^n)-1, therefor the max number of calls will be O(2^n)
0.132549
2
3
You are both right. If we disregard the time complexity of addition (you might discuss whether there is reason to do so or not) and count only the number of iterations, then you are both right, because you define n = b and your friend defines n = log_2(b), so the complexity is O(b) = O(2^log_2(b)). Both definitions are valid and both can be practical. You look at the input values, your friend at the lengths of the input, in bits. This is a good demonstration of why big-O expressions mean nothing if you don't define the variables used in those expressions.
2023-02-14 14:47:36
2
python,python-3.x,pip
1
75,449,728
Error with pip version 22.3.1 and Python version 3.10
75,449,511
false
1,250
I recently came across this error while using "pip install" with python version 3.10 and pip version 22.3.1: ERROR: Exception: Traceback (most recent call last): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\site-packages\pip\_internal\cli\base_command.py", line 160, in exc_logging_wrapper status = run_func(*args) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\site-packages\pip\_internal\cli\req_command.py", line 247, in wrapper return func(self, options, args) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\site-packages\pip\_internal\commands\download.py", line 103, in run build_tracker = self.enter_context(get_build_tracker()) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\site-packages\pip\_internal\cli\command_context.py", line 27, in enter_context return self._main_context.enter_context(context_provider) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\contextlib.py", line 492, in enter_context result = _cm_type.__enter__(cm) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\contextlib.py", line 135, in __enter__ return next(self.gen) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\site-packages\pip\_internal\operations\build\build_tracker.py", line 46, in get_build_tracker root = ctx.enter_context(TempDirectory(kind="build-tracker")).path File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\site-packages\pip\_internal\utils\temp_dir.py", line 125, in __init__ path = self._create(kind) File "C:\Program 
Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\site-packages\pip\_internal\utils\temp_dir.py", line 164, in _create path = os.path.realpath(tempfile.mkdtemp(prefix=f"pip-{kind}-")) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\tempfile.py", line 357, in mkdtemp prefix, suffix, dir, output_type = _sanitize_params(prefix, suffix, dir) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\tempfile.py", line 126, in _sanitize_params dir = gettempdir() File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\tempfile.py", line 299, in gettempdir return _os.fsdecode(_gettempdir()) File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\tempfile.py", line 292, in _gettempdir tempdir = _get_default_tempdir() File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2800.0_x64__qbz5n2kfra8p0\lib\tempfile.py", line 223, in _get_default_tempdir raise FileNotFoundError(_errno.ENOENT, FileNotFoundError: [Errno 2] No usable temporary directory found in ['C:\\Users\\leon\\AppData\\Local\\Temp', 'C:\\Users\\leon\\AppData\\Local\\Temp', 'C:\\Users\\leon\\AppData\\Local\\Temp', 'C:\\windows\\Temp', 'c:\\temp', 'c:\\tmp', '\\temp', '\\tmp', 'C:\\Users\\leon'] WARNING: There was an error checking the latest version of pip. Before that there was a acess error with the console history which I had been able to solve, but no mater what I try this error always comes up. I also tried reinstalling python 3.10 and I also tried it with python 3.11 but it's always this error when using pip install. There also was this weird error in Pycharm where it couldn't set upt the virtual env but this is also fixed aready. Thanks in advance.
0.379949
1
1
If you read the code for tempfile.py shown in the trace, particularly the _get_default_tempdir() implementation, you will see that the code does the following:

1. Gets the list of all candidate temp directory locations (this list is shown in the actual exception).
2. Iterates over that list.
3. Tries to write a small random file into the given directory.
4. If that works, returns the directory name to be used as the temporary path; if not, continues with the next candidate from step 2.
5. If the list gets iterated to the end, you get the exception you are now seeing.

So, essentially, your pip install tries to write to a bunch of different temporary locations, but each one of them fails. Most likely your user does not have write access to those locations, or your filesystem is full, or some AV tool is blocking writes to those locations, or there is some other reason. Do check these directories:

C:\Users\leon\AppData\Local\Temp
C:\windows\Temp
c:\temp
c:\tmp
C:\Users\leon

OR, before you run pip, set the TMP and TEMP environment variables to point to a location you can write to.
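A small stdlib probe that mimics what _get_default_tempdir() does, so you can see which candidate directory fails and why (the candidate list here is a simplified assumption, not the exact one pip builds):

```python
import os
import tempfile

candidates = [os.environ.get('TMP'), os.environ.get('TEMP'),
              tempfile.gettempdir(), os.path.expanduser('~')]

writable = []
for d in filter(None, candidates):
    try:
        fd, path = tempfile.mkstemp(dir=d)  # try a tiny test file, like pip does
        os.close(fd)
        os.remove(path)
        writable.append(d)
        print('writable:    ', d)
    except OSError as exc:
        print('NOT writable:', d, '-', exc)
```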
2023-02-14 15:11:21
1
python,django,python-datetime
1
75,452,728
Django Correct Date / Time not PC date/time
75,449,803
true
74
Is there a way to get the exact date/time from the web rather than taking the PC date/time? I am creating a website where the answer is time relevant. But i don't want someone cheating by putting their pc clock back. When i do: today = datetime.datetime.today() or now = datetime.datetime.utcnow().replace(tzinfo=utc) I still get whatever time my pc is set to. Is there a way to get the correct date/time.
1.2
1
1
datetime.today() takes its time information from the server your application is running on. If you currently run your application with python manage.py runserver localhost:8000, the server is your local PC. In this scenario, you can tamper with the time setting of your PC and see different results. But in a production environment, your hosting server will provide the time information. Unless you have a security issue, no unauthorized user should be able to change that.
2023-02-14 15:34:27
0
python,archlinux,jupyter-console
1
75,456,724
jupyter console doesn't work on my computer anymore
75,450,060
false
50
I sometimes use jupyter console to try out things in python. I'm running arch linux and installed everything through the arch repos. I hadn't ran jupyter console in quite some time, but while trying to launch it, i can't get it to work anymore. Here is the error : Jupyter console 6.5.1 Python 3.10.9 (main, Dec 19 2022, 17:35:49) [GCC 12.2.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.10.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: Task exception was never retrieved future: <Task finished name='Task-7' coro=<ZMQTerminalInteractiveShell.handle_external_iopub() done, defined at /usr/lib/python3.10/site-packages/jupyter_console/ptshell.py:839> exception=TypeError("object int can't be used in 'await' expression")> Traceback (most recent call last): File "/usr/lib/python3.10/site-packages/jupyter_console/ptshell.py", line 842, in handle_external_iopub poll_result = await self.client.iopub_channel.socket.poll(500) TypeError: object int can't be used in 'await' expression Shutting down kernel I tried reinstalling everything through pacman in case I accidentally changed something I shouldn't, but it changed nothing. Any tips on what could be wrong ?
0
1
1
I don't have enough rep to comment but I do not have the same issue. I can launch Jupyter QT Console just fine, and I have the same python version and IPython version. Just thought I would share, even though I don't use Jupyter Console. I do all my .ipynb in vscode and all other coding in neovim. I don't know if there is a difference between the console you are talking about and QT console, but Jupyter QT Console works fine for me, just unbearably light theme :).
2023-02-14 22:55:53
4
python,pandas,matplotlib
2
75,657,421
Pandas plot, vars() argument must have __dict__ attribute?
75,453,995
false
7,767
It was working perfectly earlier but for some reason now I am getting strange errors. pandas version: 1.2.3 matplotlib version: 3.7.0 sample dataframe: df cap Date 0 1 2022-01-04 1 2 2022-01-06 2 3 2022-01-07 3 4 2022-01-08 df.plot(x='cap', y='Date') plt.show() df.dtypes cap int64 Date datetime64[ns] dtype: object I get a traceback: Traceback (most recent call last): File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_core.py", line 955, in __call__ return plot_backend.plot(data, kind=kind, **kwargs) File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/__init__.py", line 61, in plot plot_obj.generate() File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/core.py", line 279, in generate self._setup_subplots() File "/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/core.py", line 337, in _setup_subplots fig = self.plt.figure(figsize=self.figsize) File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/_api/deprecation.py", line 454, in wrapper return func(*args, **kwargs) File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 813, in figure manager = new_figure_manager( File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 382, in new_figure_manager _warn_if_gui_out_of_main_thread() File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 360, in _warn_if_gui_out_of_main_thread if _get_required_interactive_framework(_get_backend_mod()): File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 208, in _get_backend_mod switch_backend(rcParams._get("backend")) File "/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py", line 331, in 
switch_backend manager_pyplot_show = vars(manager_class).get("pyplot_show") TypeError: vars() argument must have __dict__ attribute
0.379949
9
1
The solution by NEStenerus did not work for me, because I don't have tkinter installed and did not want to change my package configuration. Alternative Fix Instead, you can disable the "show plots in tool window" option, by going to Settings | Tools | Python Scientific | Show plots in tool window and unchecking it.
2023-02-15 00:30:10
0
python,numpy,indexing
2
75,454,533
Reverse Index through a numPy ndarray
75,454,498
false
28
I have a n x n dimensional numpy array of eigenvectors as columns, and want to return the last v of them as another array. However, they are currently in ascending order, and I wish to return them in descending order. Currently, I'm attempting to index as follows eigenvector_array[:,-1:-v] But this doesn't seem to be working. Is there a more efficient way to do this?
0
1
1
Let's re-write this to make it a little less confusing. eigenvector_array[:,-1:-v] to: eigenvector_array[:][-1:-v] Now remember how slicing works in python: [start:stop:step] If you set step to -1 it will return them in reverse, so: eigenvector_array[:,-1:-v-1:-1] should be your answer (the stop index is -v-1 so that all v columns are included; stopping at -v would return only v-1 of them).
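A runnable sketch of the idea on toy data (note the stop index is -v-1 so that all v columns are included):

```python
import numpy as np

# toy 4x4 "eigenvector" array: every row is [0, 1, 2, 3],
# so column j is filled with the value j
eigenvector_array = np.tile(np.arange(4), (4, 1))
v = 2

# last v columns, in descending (reversed) order;
# stop at -v-1 so that all v columns are included
last_v_reversed = eigenvector_array[:, -1:-v-1:-1]
print(last_v_reversed[0])  # [3 2]
```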
2023-02-15 09:37:32
0
python,pipeline,snakemake
1
75,460,060
How can I make snakefile rule append the results to the input file of the rule file?
75,457,859
false
42
I am building a snakmake pipeline, in the final rule i have an existing files that i want the snakefile to append to: Here is the rule: rule Amend: input: Genome_stats = expand("global_temp_workspace/result/{sample}.Genome.stats.tsv", sample= sampleID), GenomeSNV = expand("global_temp_workspace/result/{sample}.Genome.SNVs.tsv", sample= sampleID), GenomesConsensus = expand("global_temp_workspace/analysis/{sample}.renamed.consensus.fasta", sample= sampleID), output: Genome_stats="global_temp_workspace/result/Genome.stats.tsv", GenomeSNV="global_temp_workspace/result/Genome.SNVs.tsv", GenomesConsensus="global_temp_workspace/result/Genomes.consensus.fasta" threads: workflow.cores shell: """ cat {input.Genome_stats} | tail -n +2 >> {output.Genome_stats} ;\ cat {input.GenomesConsensus} >> {output.GenomesConsensus} ;\ cat {input.GenomeSNV} | tail -n +2 >> {output.GenomeSNV} ;\ """ how can i solve it? Thank you I tried to do the dynamic() in the output and adding the touch {output.Genome_stats} {output.GenomesConsensus} {output.GenomeSNV} at the end of the shell. but did not work. whenevr i run the snakemake i get: $ time snakemake --snakefile V2.5.smk --cores all Building DAG of jobs... Nothing to be done. Complete log: .snakemake/log/2023-02-15T123050.937009.snakemake.log real 0m1.022s user 0m2.744s sys 0m2.797s
0
2
1
This behaviour is not idempotent and is usually a recipe for trouble. What happens if the machine breaks down or the process is killed during the write stage? What happens if a rule is accidentally ran twice? As advised by @Cornelius Roemer in the comment to the question, the safer way is to write to a new file. If the overwrite-like behaviour is desired, then the new file can be moved to the original file location, but some record/checkpoint file should be created to make sure that Snakemake knows not to re-process the file.
2023-02-15 12:29:13
2
git,shell,python-venv,python-poetry
1
75,463,086
Poetry shell command prompt: what gives the (base) part?
75,459,812
true
84
I am developing python projects under git control using poetry to manage my venvs. From my project's directory I issue a "poetry shell" command and my new shell command prompt becomes something like: (isagog-ai-py3.10) (base) bob@Roberts-Mac-mini isagog-ai % where the first part in bracket gives me the name pf the project and the python version I'm using, and the last part of the prompt is my current directory name. But what is it that gives me the "(base)" part? I'm actually working on a "dev" branch.
1.2
1
1
This is the base environment from conda; it is shown because conda is active in your shell, and is unrelated to the poetry venv or your git branch.
2023-02-15 15:46:09
1
python,django,postgresql
2
75,463,997
Separate databases for development and production in Django
75,462,208
false
67
I am trying to split my django settings into production and development. Th ebiggest question that I have is how to use two different databases for the two environments? How to deal with migrations? I tried changing the settings for the development server to use a new empty database, however, I can not apply the migrations to create the tables that I already have in the production database. All the guides on multiple databases focus on the aspect of having different types of data in different databases (such as users database, etc.) but not the way I am looking for. Could you offer some insights about what the best practices are and how to manage the two databases also in terms of migrations? EDIT: Here is what I get when I try to run python manage.py migrate on the new database: Traceback (most recent call last): File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/backends/utils.py", line 85, in _execute return self.cursor.execute(sql, params) psycopg2.errors.UndefinedTable: relation "dashboard_posttags" does not exist LINE 1: ...ags"."tag", "dashboard_posttags"."hex_color" FROM "dashboard... 
^ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/....../manage.py", line 22, in <module> main() File "/....../manage.py", line 18, in main execute_from_command_line(sys.argv) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/management/__init__.py", line 425, in execute_from_command_line utility.execute() File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/management/base.py", line 373, in run_from_argv self.execute(*args, **cmd_options) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/management/base.py", line 417, in execute output = self.handle(*args, **options) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/management/base.py", line 90, in wrapped res = handle_func(*args, **kwargs) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 75, in handle self.check(databases=[database]) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/management/base.py", line 438, in check all_issues = checks.run_checks( File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/checks/registry.py", line 77, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/checks/urls.py", line 13, in check_url_config return check_resolver(resolver) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/core/checks/urls.py", line 23, in 
check_resolver return check_method() File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/urls/resolvers.py", line 446, in check for pattern in self.url_patterns: File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/urls/resolvers.py", line 632, in url_patterns patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/utils/functional.py", line 48, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/urls/resolvers.py", line 625, in urlconf_module return import_module(self.urlconf_name) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/....../app/urls.py", line 11, in <module> from main_platform.views.investor import AccountView, profile, app_home_redirect File "/....../main_platform/views/investor.py", line 118, in <module> class PostFilter(django_filters.FilterSet): File "/....../main_platform/views/investor.py", line 124, in PostFilter for tag in PostTags.objects.all(): File 
"/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/models/query.py", line 280, in __iter__ self._fetch_all() File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/auto_prefetch/__init__.py", line 98, in _fetch_all super()._fetch_all() File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/models/query.py", line 1354, in _fetch_all self._result_cache = list(self._iterable_class(self)) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/models/query.py", line 51, in __iter__ results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1202, in execute_sql cursor.execute(sql, params) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/backends/utils.py", line 99, in execute return super().execute(sql, params) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/sentry_sdk/integrations/django/__init__.py", line 563, in execute return real_execute(self, sql, params) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/backends/utils.py", line 67, in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers return executor(sql, params, many, context) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/backends/utils.py", line 85, in _execute return self.cursor.execute(sql, params) File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) 
from exc_value File "/opt/homebrew/Caskroom/miniforge/base/envs/stokk/lib/python3.9/site-packages/django/db/backends/utils.py", line 85, in _execute return self.cursor.execute(sql, params) django.db.utils.ProgrammingError: relation "dashboard_posttags" does not exist LINE 1: ...ags"."tag", "dashboard_posttags"."hex_color" FROM "dashboard...
0.099668
1
1
If you have a new empty database, you can just run "python manage.py migrate" and all migrations will be executed on the new database. The migrations already applied are recorded in a table in that database, so Django always "remembers" the migration state of each individual database. Of course the new database will only have the table structure - no data is copied yet! Does this answer your question?
2023-02-15 16:15:04
0
python,if-statement
2
75,462,636
Unexpected behavior using if .. or .. Python
75,462,560
false
52
I'm reading in a list of samples from a text file and in that list every now and then there is a "channel n" checkpoint. The file is terminated with the text eof. The code that works until it hits the eof which it obviously cant cast as a float log = open("mq_test.txt", 'r') data = [] for count, sample in enumerate(log): if "channel" not in sample: data.append(float(sample)) print(count) log.close() So to get rid of the ValueError: could not convert string to float: 'eof\n' I added an or to my if as so, log = open("mq_test.txt", 'r') data = [] for count, sample in enumerate(log): if "channel" not in sample or "eof" not in sample: data.append(float(sample)) print(count) log.close() And now I get ValueError: could not convert string to float: 'channel 00\n' So my solution has been to nest the ifs & that works. Could somebody explain to me why the or condition failed though?
0
1
1
I think it's a logic issue: "and" should be used instead of "or". With "or", a line passes as long as it is missing at least one of the two markers, so both "channel" and "eof" lines still get through.
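A minimal illustration with made-up sample lines, showing why "or" over-matches and "and" filters correctly:

```python
samples = ["channel 00\n", "3.14\n", "eof\n"]

data = []
for sample in samples:
    # with `or`, a line like "channel 00" still passes, because it
    # does not contain "eof" -- one True side is enough for `or`
    passes_or = "channel" not in sample or "eof" not in sample
    # with `and`, only lines containing neither marker pass
    passes_and = "channel" not in sample and "eof" not in sample
    if passes_and:
        data.append(float(sample))

print(data)  # [3.14]
```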
2023-02-15 18:25:05
1
python-3.x,asynchronous,async-await,fastapi
2
75,464,345
What does async actually do in FastAPI?
75,463,993
false
501
I have two scripts: from fastapi import FastAPI import asyncio app = FastAPI() @app.get("/") async def root(): a = await asyncio.sleep(10) return {'Hello': 'World',} And second one: from fastapi import FastAPI import time app = FastAPI() @app.get("/") def root(): a = time.sleep(10) return {'Hello': 'World',} Please note the second script doesn't use async. Both scripts do the same, at first I thought, the benefit of an async script is that it allows multiple connections at once, but when testing the second code, I was able to run multiple connections as well. The results are the same, performance is the same and I don't understand why would we use async method. Would appreciate your explanation.
0.099668
1
1
FastAPI Docs: You can mix def and async def in your path operation functions as much as you need and define each one using the best option for you. FastAPI will do the right thing with them. Anyway, in any of the cases above, FastAPI will still work asynchronously and be extremely fast. Both endpoints will be executed asynchronously, but if you define your endpoint function asynchronously, it will allow you to use await keyword and work with asynchronous third party libraries
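A standalone asyncio sketch (no FastAPI needed) of why a blocking call inside an async def hurts, while await asyncio.sleep lets tasks overlap. The 0.2-second delay and three-task count are arbitrary choices for illustration:

```python
import asyncio
import time

async def blocking():
    time.sleep(0.2)           # blocks the whole event loop

async def non_blocking():
    await asyncio.sleep(0.2)  # yields control while waiting

async def main():
    t0 = time.perf_counter()
    await asyncio.gather(non_blocking(), non_blocking(), non_blocking())
    concurrent = time.perf_counter() - t0   # ~0.2s: the sleeps overlap

    t0 = time.perf_counter()
    await asyncio.gather(blocking(), blocking(), blocking())
    serial = time.perf_counter() - t0       # ~0.6s: the sleeps run one by one

    print(f"await asyncio.sleep: {concurrent:.2f}s, time.sleep: {serial:.2f}s")
    return concurrent, serial

asyncio.run(main())
```

(FastAPI runs plain def endpoints in an external threadpool, which is why the second script in the question still handled parallel requests.)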
2023-02-15 19:33:13
1
python,function,dictionary
2
75,464,741
Why does python interpreter consider 2.0 and 2 to be the same in an when used as a dictionary key
75,464,645
false
75
I was going through twitter when i came across the function below def func(): d = {1: "I", 2.0: "love", 2: "Python"} return d[2.0] print(func()) When i ran the code, i got Python as the output and i expected it to be love. I know that you cannot have multiple key in a dictionary. However what i want to know is why Python Interpreter considers 2.0 and 2 as the same and returns the value of 2
0.099668
2
1
In your example, the keys 2.0 and 2 are considered the same because they compare equal and their hash values are equal. In Python, float and integer objects can be equal even if they have different types and representations. In particular, the integer 2 and the floating-point number 2.0 have the same value, so they are considered equal, and the later assignment simply overwrites the value stored under that key. That's why you should use consistent key types in a dictionary: mixing int and float keys that compare equal will silently overwrite values.
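A quick demonstration with the dict from the question:

```python
d = {1: "I", 2.0: "love", 2: "Python"}

# 2 == 2.0 and hash(2) == hash(2.0), so they are the *same* key:
# inserting 2 after 2.0 overwrites the value but keeps the first key object
print(2 == 2.0, hash(2) == hash(2.0))  # True True
print(d)                               # {1: 'I', 2.0: 'Python'}
print(d[2.0])                          # Python
```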
2023-02-16 00:35:55
-2
python,visual-studio-code,flake8
1
75,466,977
trying to open flake8 on vs code from command palette error on mac
75,466,757
false
62
I've installed flake 8 in the terminal, but when i try and select python linter on vs code in the command palette i get the following error: "Command 'Python: Select Linter' resulted in an error (command 'python.setLinter' not found)". I'm on a mac, version 11.5.2. I have seen other solutions for this problem for windows on stack but not sure how to proceed on mac, please advise
-0.379949
1
1
There are many possibilities. You can try the following methods: Reinstall the Python extension, or use the pre-release version. Start VS Code as administrator. Try deleting the .vscode folder in the project.
2023-02-16 06:14:37
1
python,python-3.x,sqlalchemy,teradata
1
75,525,095
When installing an old version of a package, can I install only compatible versions of dependent packages?
75,468,479
true
250
I'm using Python 3.7.4 in a venv environment. I ran pip install teradataml==17.0.0.3 which installs a bunch of dependent packages, including sqlalchemy. At the time, it installed SQLAlchemy==2.0.2. I ran the below code, and received this error: ArgumentError: Additional keyword arguments are not accepted by this function/method. The presence of **kw is for pep-484 typing purposes from teradataml import create_context class ConnectToTeradata: def __init__(self): host = 'AWESOME_HOST' username = 'johnnyMnemonic' password = 'keanu4life' self.connection = create_context(host = host, user = username, password = password) def __del__(self): print("Closing connection") self.connection.dispose() ConnectToTeradata() If I install SQLAlchemy==1.4.26 before teradataml, I no longer get the error and successfuly connect. This suggests SQLAlchemy==2.0.2 is not compatible with teradataml==17.0.0.3. I expected installing an older version of teradataml would also install older, compatible versions of dependent packages. When I install teradataml==17.0.0.3, can I force only install compatible versions of dependent packages?
1.2
1
1
We are aware of the compatibility issues that were introduced in SQLAlchemy package 2.0.x versions. The new 2.0.x package directly affects the Teradata SQL dialect in the teradatasqlalchemy package. As a temporary measure, please downgrade SQLAlchemy to 1.4.46. Teradata Engineering is working on making the teradatasqlalchemy package compatible with the newer versions and a new package is slated to be released in March 2023.
2023-02-16 11:02:36
1
python,exception
3
76,009,052
'ReadOnlyWorksheet' object has no attribute 'defined_names'
75,471,318
false
14,233
Whenever I try to read Excel using part=pd.read_excel(path,sheet_name = mto_sheet) I get this exception: <class 'Exception'> 'ReadOnlyWorksheet' object has no attribute 'defined_names' This is if I use Visual Studio Code and Python 3.11. However, I don't have this problem when using Anaconda. Any reason for that?
0.066568
19
1
Possible workaround: create a new Excel file with a default worksheet name ("Sheet1", etc.) and copy and paste the data there. (Tested on Python 3.10.9 + openpyxl==3.1.1.)
2023-02-16 12:59:08
0
python,pandas,dataframe
3
75,472,833
pandas json dictionary to dataframe, reducing columns by creating new columns
75,472,653
false
40
Following JSON File (raw data how I am getting it back from an API call): { "code": "200000", "data": { "A": "0.43221600", "B": "0.02311155", "C": "0.55057515", "D": "2.15957924", "E": "0.03818908", "F": "0.26853420", "G": "0.15007500", "H": "0.00685843", "I": "0.08500848" } } Will crate this output in Pandas by using this code (one column per data set in "data"). The result is a dataframe with many columns: import pandas as pd import json f = open('file.json', 'r') j1 = json.load(f) pd.json_normalize(j1) code data.A data.B data.C data.D data.E data.F data.G data.H data.I 0 200000 0.43221600 0.02311155 0.55057515 2.15957924 0.03818908 0.26853420 0.15007500 0.00685843 0.08500848 I guess that Pandas should provide a built in function of the data set in the attribute "data" could be split in two new columns with names "name" and value, including a new index. But I cannot figure out how that works. I would need this output: name value 0 A 0.43221600 1 B 0.02311155 2 C 0.55057515 3 D 2.15957924 4 E 0.03818908 5 F 0.26853420 6 G 0.15007500 7 H 0.00685843 8 I 0.08500848
0
1
1
pd.DataFrame.from_dict(j1) should give you the result you need
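A sketch of one way to get the requested name/value shape, using a trimmed copy of the question's JSON:

```python
import pandas as pd

j1 = {
    "code": "200000",
    "data": {"A": "0.43221600", "B": "0.02311155", "C": "0.55057515"},
}

# turn the inner "data" dict into a two-column frame:
# keys become the "name" column, values become the "value" column
df = (
    pd.Series(j1["data"], name="value")
    .rename_axis("name")
    .reset_index()
)
print(df)
```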
2023-02-16 16:42:50
1
apache-pulsar,pulsar,python-pulsar
1
75,485,101
Pulsar producer send_async() with callback function acknowledging the sent message
75,475,387
false
180
I have a use case where messages from an input_topic gets consumed and sent to a list of topics. I'm using producers[i].send_async(msg, callback=callback) where callback = lambda res, msg: consumer.acknowledge(msg). In this case, consumer is subscribed to the input_topic. I checked the backlog of input_topic and it has not decreased at all. Would appreciate if you could point out how to deal with this? What would be the best alternative? Thanks in advance!
0.197375
1
1
Have you checked that consumer.acknowledge(msg) has actually been called? One possibility is that the producer cannot write messages to the topic; if the producer has an infinite send timeout, you will never get the callback.
2023-02-16 16:43:25
0
python,python-3.x,numpy,numpy-ndarray
1
75,552,015
How to reverse the shape of a numpy array
75,475,397
false
151
I have a numpy array with a shape of (3, 4096). However, I need it's shape to be (4096, 3). How do I accomplish this?
0
1
1
Use: arr=arr.T (or) arr=np.transpose(arr), where arr is your array with shape (3, 4096). Note that arr.reshape(4096, 3) also produces the target shape, but it is not a transpose: reshape only reinterprets the memory order, so the row/column relationship is scrambled.
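A quick check on a small array showing that reshape is not a transpose:

```python
import numpy as np

arr = np.arange(6).reshape(3, 2)   # [[0 1], [2 3], [4 5]]

transposed = arr.T                 # shape (2, 3), rows/columns swapped
reshaped = arr.reshape(2, 3)       # shape (2, 3), but NOT a transpose

print(transposed)
# [[0 2 4]
#  [1 3 5]]
print(reshaped)
# [[0 1 2]
#  [3 4 5]]
```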
2023-02-16 17:38:22
0
python,posix
1
75,476,539
Python execute code in parent shell upon exit
75,476,008
false
46
I have a search program that helps users find files on their system. I would like to have it perform tasks, such as opening the file within editor or changing the parent shell directory to the parent folder of the file exiting my python program. Right now I achieve this by running a bash wrapper that executes the commands the python program writes to the stdout. I was wondering if there was a way to do this without the wrapper. Note: subprocess and os commands create a subshell and do not alter the parent shell. This is an acceptable answer for opening a file in the editor, but not for moving the current working directory of the parent shell to the desired location on exit. An acceptable alternative might be to open a subshell in a desired directory example #this opens a bash shell, but I can't send it to the right directory subprocess.run("bash")
0
1
1
This, if doable, will require quite a hack, because the PWD is passed from the shell into the subprocess - in this case, the Python process - as a variable owned by the subprocess, and changing it won't modify what is in the parent shell. On Unix, it may be achievable by opening a detached sub-process that pipes keystrokes into the TTY after the main program exits - I find this more likely to succeed than anything else.
2023-02-16 17:49:41
0
python,python-3.x,anaconda,conda,exe
6
75,640,542
How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller
75,476,135
false
9,580
I am receiving following error while converting python file to .exe I have tried to uninstall and intsall pyinstaller but it didnt helped out. i upgraded conda but still facing same error. Please support to resolve this issue Command (base) G:>pyinstaller --onefile grp.py Error The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\Users\alpha\anaconda3\lib\site-packages) using conda remove then try again. Python Version (base) G:>python --version Python 3.9.16
0
2
3
The error message you received suggests that the 'pathlib' package installed in your Anaconda environment is causing compatibility issues with PyInstaller. As a result, PyInstaller is unable to create a standalone executable from your Python script.
2023-02-16 17:49:41
2
python,python-3.x,anaconda,conda,exe
6
75,640,516
How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller
75,476,135
true
9,580
I am receiving following error while converting python file to .exe I have tried to uninstall and intsall pyinstaller but it didnt helped out. i upgraded conda but still facing same error. Please support to resolve this issue Command (base) G:>pyinstaller --onefile grp.py Error The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\Users\alpha\anaconda3\lib\site-packages) using conda remove then try again. Python Version (base) G:>python --version Python 3.9.16
1.2
2
3
I faced the same problem. I ran 'conda remove pathlib', but it didn't work: the result was that the package was not found. So I looked in the 'lib' directory, where there was a folder named 'path-list-....'. I deleted it, and it began working!
2023-02-16 17:49:41
6
python,python-3.x,anaconda,conda,exe
6
75,687,401
How to convert python file to exe? The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller
75,476,135
false
9,580
I am receiving following error while converting python file to .exe I have tried to uninstall and intsall pyinstaller but it didnt helped out. i upgraded conda but still facing same error. Please support to resolve this issue Command (base) G:>pyinstaller --onefile grp.py Error The 'pathlib' package is an obsolete backport of a standard library package and is incompatible with PyInstaller. Please remove this package (located in C:\Users\alpha\anaconda3\lib\site-packages) using conda remove then try again. Python Version (base) G:>python --version Python 3.9.16
1
2
3
I've experienced the same problem. I managed to solve it by downgrading pyInstaller to 5.1 (from 5.8) without touching pathlib. An additional possibility to consider.
2023-02-16 23:12:36
1
python
2
75,478,867
I am trying to test a program that prints a shipping rate based on yes or no answers
75,478,836
false
71
The problem with this program is that the if/else statements are not working properly. When the answer is "yes", the problem also prints the question for when the answer is "no". Another problem is that it's not printing the rate1 when it's supposed to. # This program calculates the shipping cost as shown in the slide international = input("Are you shipping internationally (yes or no)? ") rate1 = 5 rate2 = 10 if international.upper() == "yes": shippingRate = rate2 else: continental = input("Are you shipping continental (yes or no)? ") if continental.upper() == "yes": shippingRate = rate1 else: shippingRate = rate2 print("The shipping rate is " + ("%.2f" % shippingRate))
0.099668
1
1
I notice you're comparing .upper() against "yes", which can never be equal, because upper() never returns lowercase letters. This code should work with == "YES".
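A quick demonstration of why the comparison never matches:

```python
international = "yes"

print(international.upper())            # YES
print(international.upper() == "yes")   # False -- never matches
print(international.upper() == "YES")   # True
```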
2023-02-17 01:10:27
1
python,scipy,differential-equations,odeint
1
75,481,202
Solving nonlinear differential equations in python
75,479,380
false
62
I am trying to solve the differential equation 4(y')^3-y'=1/x^2 in python. I am familiar with the use of odeint to solve coupled ODEs and linear ODEs, but can't find much guidance on nonlinear ODEs such as the one I'm grappling with. Attempted to use odeint and scipy but can't seem to implement properly Any thoughts are much appreciated NB: y is a function of x
0.197375
1
1
The problem is that you get 3 valid solutions for the direction at each point of the phase space (including double roots). But each selection criterion breaks down at double roots. One way is to use a DAE solver (which does not exist in scipy) on the system y'=v, 4v^3-v=x^-2 The second way is to take the derivative of the equation to get an explicit second-order ODE y''=-2/x^3/(12*y'^2-1). Both methods require the selection of the initial direction from the 3 roots of the cubic at the initial point.
2023-02-17 02:30:10
1
python,regex
3
75,479,780
Is there a way to find (potentially) multiple results with re.search?
75,479,740
false
53
While parsing file names of TV shows, I would like to extract information about them to use for renaming. I have a working model, but it currently uses 28 if/elif statements for every iteration of filename I've seen over the last few years. I'd love to be able to condense this to something that I'm not ashamed of, so any help would be appreciated. Phase one of this code repentance is to hopefully grab multiple episode numbers. I've gotten as far as the code below, but in the first entry it only displays the first episode number and not all three. import re def main(): pattern = '(.*)\.S(\d+)[E(\d+)]+' strings = ['blah.s01e01e02e03', 'foo.s09e09', 'bar.s05e05'] #print(strings) for string in strings: print(string) result = re.search("(.*)\.S(\d+)[E(\d+)]+", string, re.IGNORECASE) print(result.group(2)) if __name__== "__main__": main() This outputs: blah.s01e01e02e03 01 foo.s09e09 09 bar.s05e05 05 It's probably trivial, but regular expressions might as well be Cuneiform most days. Thanks in advance!
0.066568
1
1
re.findall instead of re.search will return a list of all matches
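A sketch on the question's filenames. The two-step pattern here (capture the whole episode run, then findall on it) is one illustrative way, not the only one:

```python
import re

strings = ["blah.s01e01e02e03", "foo.s09e09", "bar.s05e05"]

for s in strings:
    # group 3 captures the full run of E-numbers, e.g. "e01e02e03"
    m = re.search(r"(.*)\.S(\d+)((?:E\d+)+)", s, re.IGNORECASE)
    # findall then extracts every episode number from that run
    episodes = re.findall(r"E(\d+)", m.group(3), re.IGNORECASE)
    print(m.group(2), episodes)
# 01 ['01', '02', '03']
# 09 ['09']
# 05 ['05']
```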
2023-02-17 05:28:01
-2
python,sqlalchemy
2
75,480,709
Receiving Error not all arguments converted during string formatting
75,480,557
false
48
I am new to working on Python. I m not able to understand how can I send the correct input t0 the query. list_of_names = [] for country in country_name_list.keys(): list_of_names.append(getValueMethod(country)) sql_query = f"""SELECT * FROM table1 where name in (%s);""" db_results = engine.execute(sql_query, list_of_names).fetchone() Give the error " not all arguments converted during string formatting"
-0.197375
1
1
If I understand correctly, there is a simpler solution: write curly braces {} rather than parentheses (), and place inside the braces the variable that contains the %s value; that should work. I don't know how SQL works, but you should use one quote on each side, not three. Sorry, I'm not a native English speaker - maybe this doesn't help with the question, because I may not have understood it correctly.
2023-02-17 13:36:35
1
python,selenium-webdriver,xpath,selenium-chromedriver
2
75,485,323
TypeError: Failed to execute 'evaluate' on 'Document': The result is not a node set, and therefore cannot be converted to the desired type
75,485,006
false
54
I need to find elements on a page by looking for text(), so I use xlsx as a database with all the texts that will be searched. It turns out that it is showing the error reported in the title of the publication, this is my code: search_num = str("'//a[contains(text()," + '"' + row[1] + '")' + "]'") print(search_num) xPathnum = self.chrome.find_element(By.XPATH, search_num) print(xPathnum.get_attribute("id")) print(search_num) returns = '//a[contains(text(),"0027341-66.2323.0124")]' Does anyone know where I'm going wrong, despite having similar posts on the forum, none of them solved my problem. Grateful for the attention
0.099668
1
1
Looks like you have extra quotes here str("'//a[contains(text()," + '"' + row[1] + '")' + "]'") Try changing to f"//a[contains(text(),'{row[1]}')]"
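A minimal check of the corrected expression, with the sample value taken from the question:

```python
row = [None, "0027341-66.2323.0124"]

# f-string builds the XPath without the stray outer quotes
search_num = f"//a[contains(text(),'{row[1]}')]"
print(search_num)
# //a[contains(text(),'0027341-66.2323.0124')]
```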
2023-02-17 16:18:10
1
python,pandas,sorting
1
75,487,303
Would df.sort_values('A', kind = 'mergesort').sort_index(kind = 'mergesort') be a stable and valid way to sort by index and column?
75,486,770
true
31
I have a Pandas dataframe equivalent to: 'A' 'B' 'i1' 'i2' 'i3' 1 2 4 3 0 1 1 2 3 3 1 1 2 1 0 1 2 4 0 9 1 1 2 2 6 2 1 1 1 8 where ix are index columns and 'A', and 'B' are normal columns. I want to make sure that the indexes are strictly ordered and, when indexes are duplicated, then it is ordered by column 'A' 'A' 'B' 'i1' 'i2' 'i3' 1 1 2 1 0 1 1 2 2 6 1 1 2 3 3 1 2 4 0 9 1 2 4 3 0 2 1 1 1 8 Would df.sort_values('A', kind = 'mergesort').sort_index(kind = 'mergesort') do it? And if so, would do it in a stable way? or could the .sort_index() operation disrupt the previous .sort_values() operation in such a way that, for the duplicated indexes, the values of 'A' are no longer ordered?
1.2
1
1
When you sort by multiple keys, only the last one is guaranteed to be sorted. The others will be sorted within the previous groups. Finally, the non-key columns will remain sorted in the original order in case of a stable sort such as the mergesort. To answer your question, yes, your method will maintain the original order in case of duplicated keys.
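A small check on toy data that the duplicated index keeps ascending 'A' after the two stable sorts:

```python
import pandas as pd

df = pd.DataFrame(
    {"A": [1, 2, 2, 1], "B": [9, 8, 7, 6]},
    index=[2, 1, 1, 1],
)

out = df.sort_values("A", kind="mergesort").sort_index(kind="mergesort")
# within the duplicated index 1, rows end up with A ascending: 1, 2, 2;
# and the two (index=1, A=2) rows keep their A-sorted relative order (B: 8, 7)
print(out)
```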
2023-02-17 16:19:46
2
python,flask
1
75,488,783
Sending a word document without saving it on the flask server
75,486,790
true
94
Good day. Today I'm trying to send a document generated on the server to the user on the click of a button using Flask. My task is this: Create a document (without saving it on the server). And send it to the user. However, using a java script, I track the button click on the form and use fetch to make a request to the server. The server retrieves the necessary data and creates a Word document based on it. How can I form a response to a request so that the file starts downloading? Code since the creation of the document. (The text of the Word document has been replaced) python Falsk: document = Document() document.add_heading("Some head-title") document.add_paragraph('Some text') f = BytesIO() document.save(f) f.seek(0) return send_file(f, as_attachment=True, download_name='some.docx') However, the file does not start downloading. How can I send a file from the server to the user? Edits This is my js request. fetch('/getData', { method : 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' }, body: JSON.stringify({ someData: someData, }) }) .then(response => response.text() ) .then(response =>{ console.log(response); }); This is my html <form action="" name="getData" method="post" enctype="multipart/form-data"> <button type = "submit" name = "Download">Download</button> </form>
1.2
1
1
You need to specify the mimetype. send_file tries to detect the mimetype from the filename, but since we are not saving the file we need to specify the mimetype ourselves. return send_file(f, mimetype='application/msword', as_attachment=True, download_name='output.doc')
2023-02-17 16:43:24
0
python,pandas,feature-selection
1
75,487,135
ValueError: 'p' must be 1-dimensional
75,487,026
false
75
I am trying to do feature selection using Ant colony optimization (ACO) for a rainfall dataset. The implementation of the code is below import numpy as np from sklearn.datasets import load_breast_cancer from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.neighbors import KNeighborsClassifier X = x y = df_cap['PRECTOTCORR_SUM'] # Split data into training and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # Define ACO feature selection function def aco_feature_selection(X_train, X_test, y_train, y_test, num_ants=10, max_iter=50, alpha=1.0, beta=2.0, evaporation=0.5, q0=0.9): num_features = X_train.shape[1] pheromone = np.ones(num_features) best_solution = None best_accuracy = 0.0 # Run ACO algorithm for i in range(max_iter): ant_solutions = [] ant_accuracies = [] # Generate ant solutions for ant in range(num_ants): features = np.random.choice([0, 1], size=num_features, p=[1-pheromone,pheromone]) X_train_selected = X_train[:, features == 1] X_test_selected = X_test[:, features == 1] knn = KNeighborsClassifier() knn.fit(X_train_selected, y_train) y_pred = knn.predict(X_test_selected) accuracy = accuracy_score(y_test, y_pred) ant_solutions.append(features) ant_accuracies.append(accuracy) # Update best solution if accuracy > best_accuracy: best_solution = features best_accuracy = accuracy # Update pheromone levels pheromone *= evaporation for ant in range(num_ants): features = ant_solutions[ant] accuracy = ant_accuracies[ant] if accuracy >= np.mean(ant_accuracies): pheromone[features == 1] += alpha else: pheromone[features == 1] += beta # Apply elitism if best_solution is not None: pheromone[best_solution == 1] += q0 return best_solution # Run ACO feature selection selected_features = aco_feature_selection(X_train, X_test, y_train, y_test) # Print selected features print("Selected features:", np.where(selected_features == 1)[0]) but I get this error ValueError 
Input In [175], in aco_feature_selection(X_train, X_test, y_train, y_test, num_ants, max_iter, alpha, beta, evaporation, q0) 26 # Generate ant solutions 27 for ant in range(num_ants): ---> 28 features = np.random.choice([0, 1], size=num_features, p=[1-pheromone,pheromone]) 29 X_train_selected = X_train[:, features == 1] 30 X_test_selected = X_test[:, features == 1] File mtrand.pyx:930, in numpy.random.mtrand.RandomState.choice() ValueError: 'p' must be 1-dimensional I suspect the issue comes list inside a list because it makes it 2-dimentional instead of 1-dimensional using something like flatten() throws this error ValueError: 'a' and 'p' must have same size how do I fix this?
0
1
1
The issue is that p must be a 1-D array of probabilities, one per element of a, and you are passing [1-pheromone, pheromone] (a pair of arrays) into that argument. Without getting into the detail of the algorithm, I can suggest that you need to choose a specific pheromone value for each feature. And if you want to generate a series of 0s and 1s with given per-feature probabilities, you need to iterate over pheromone.
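A minimal sketch of that suggestion (the array values are made up): since np.random.choice expects one probability per candidate in a, a per-feature selection is better expressed as one Bernoulli draw per feature:

```python
import numpy as np

rng = np.random.default_rng(42)
pheromone = np.array([0.1, 0.9, 0.5, 0.7, 0.3])

# np.random.choice([0, 1], p=...) needs p of length 2 (probabilities for 0 and 1),
# so passing [1 - pheromone, pheromone] (two whole arrays) raises ValueError.
# Instead, draw one uniform sample per feature and compare against its pheromone:
features = (rng.random(pheromone.size) < pheromone).astype(int)
print(features)  # a 0/1 mask with one entry per feature
```

Each feature i is then selected with probability pheromone[i], which appears to be the intent of the ACO selection step.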
2023-02-17 17:17:27
0
python,streaming,databricks,delta-live-tables
1
75,584,172
DLT Stream Error - Queries with streaming sources must be executed with writeStream.start();
75,487,374
false
121
I'm trying to parse incoming variable length stream records in databricks using Delta Live Tables. I'm getting the error: Queries with streaming sources must be executed with writeStream.start(); Notebook code @dlt.table ( comment="xAudit Parsed" ) def b_table_parsed(): df = dlt.readStream("dlt_table_raw_view") for i in range(df.select(F.max(F.size('split_col'))).collect()[0][0]): df = df.withColumn("col"+str(i),df["split_col"][i]) df = (df .drop("value","split_col") ) return df This all works fine against the actual source text files or a delta table using the interactive cluster but when I put it in DLT and and the source is streaming files from autoloader, it doesn't like it. I assume it's stream related. I saw a different post about using .foreach maybe but that was using writeStream and not sure if I can or how to use it to return in a DLT table, or if there is another solution. I'm very new to python, streaming and DLT so would appreciate if anyone can walk me through a detailed solution. Trying to parse out variable length rows in a streaming source using a delta live table notebook in databricks. Works on the interactive cluster but not streaming in DLT
0
1
1
The problem is in this piece of code: df.select(F.max(F.size('split_col'))).collect()[0][0] - you're trying to find a max and collect it from a stream, which by definition doesn't have a start and an end. Your code most probably works with a batch DataFrame, or inside a function called from .foreachBatch, which isn't supported by DLT.
2023-02-17 18:50:26
0
python,nuke
2
75,494,163
Getting "ValueError: A PythonObject is not attached to a node" even when wrapped in try/except block but this works fine if run in Nuke Script editor
75,488,252
false
472
My question is Foundary Nuke specific. I have a tab added to Project Settings, that contains some data I can later access via the root node. Now since I have callback invoked by a checkbox knob I added to enable disable a custom knob I added to that tab I added to Project Settings Panel. It works fine. The problem is when I close nuke I get error: Traceback (most recent call last): File "/system/runtime/plugins/nuke/callbacks.py", line 127, in knobChanged _doCallbacks(knobChangeds) File "/system/runtime/plugins/nuke/callbacks.py", line 44, in _doCallbacks for f in list: ValueError: A PythonObject is not attached to a node Now this error happens if I have a callback function added to the checkbox knob like this: my_callbacks.py import nuke def on_checkbox_clicked(): try: root_node = nuke.root() if not root_node: return except ValueError as er: print(er) nuke.addKnobChanged(on_checkbox_clicked, nodeClass='Root', node=nuke.root()) nuke.addonScriptClose(lambda: nuke.removeKnobChanged(on_checkbox_clicked, nodeClass-'Root', node=nuke.root()) but if I create a grade node named Grade1 and run the below code in script editor it works fine. try: node = nuke.toNode('Grade1') nuke.delete(node) node.fullname() # <-- should throw error except ValueError: print(error caught.)
0
1
2
Certainly seems like an internal nuke issue. Which nuke are you running? I know 11 and 12 will almost always spit out some kind of python error on close - either threading or something like this. If your my_callbacks.py is being loaded by init/menu, try just adding the callback to the root node itself (rather than the global knob change process) with node.knob('knob_changed').setValue(YOUR CODE in string format) In this case of course, the knob changed code will only fire on the Root node, and you'll have to run that setValue code in each script you want. You might be able to use init/menu and another callback (onScriptLoad) to accomplish that.
2023-02-17 18:50:26
0
python,nuke
2
75,912,153
Getting "ValueError: A PythonObject is not attached to a node" even when wrapped in try/except block but this works fine if run in Nuke Script editor
75,488,252
false
472
My question is Foundary Nuke specific. I have a tab added to Project Settings, that contains some data I can later access via the root node. Now since I have callback invoked by a checkbox knob I added to enable disable a custom knob I added to that tab I added to Project Settings Panel. It works fine. The problem is when I close nuke I get error: Traceback (most recent call last): File "/system/runtime/plugins/nuke/callbacks.py", line 127, in knobChanged _doCallbacks(knobChangeds) File "/system/runtime/plugins/nuke/callbacks.py", line 44, in _doCallbacks for f in list: ValueError: A PythonObject is not attached to a node Now this error happens if I have a callback function added to the checkbox knob like this: my_callbacks.py import nuke def on_checkbox_clicked(): try: root_node = nuke.root() if not root_node: return except ValueError as er: print(er) nuke.addKnobChanged(on_checkbox_clicked, nodeClass='Root', node=nuke.root()) nuke.addonScriptClose(lambda: nuke.removeKnobChanged(on_checkbox_clicked, nodeClass-'Root', node=nuke.root()) but if I create a grade node named Grade1 and run the below code in script editor it works fine. try: node = nuke.toNode('Grade1') nuke.delete(node) node.fullname() # <-- should throw error except ValueError: print(error caught.)
0
1
2
Have you tried using nuke.thisNode() in your callback? and reduce to nuke.addKnobChanged(on_checkbox_clicked, nodeClass='Root') Like you I'm confused by this error, sometimes it appears but it shouldn't and when it should appear it doesn't...
2023-02-18 04:51:44
0
python,html,web-scraping,hidden
2
75,491,322
Is there a way to specifically web scrape and get the data of heights that is not listed in text?
75,491,269
false
62
I'm web scraping a bunch of heights for listed athletes. I have written the code to get the heights but after inspecting element, I noticed that under text the height is written in feet, but in "data-sort" that height is listed in inches. Both of these are in the td tag in class "heights". However when I use "get_text()" or .text to remove the html elements it only prints out the height in feet and removes the hidden height in inches. Is there a way I can get the height listed in inches because that will make it easier to the do math. Here is an example of what I'm web scraping, I want remove everything and only get the height in inches which will be [79,85,74... in this case. <td class="height" data-sort="79">6-7</td> <td class="height" data-sort="85">7-1</td> <td class="height" data-sort="74">6-2</td> #This is my code from bs4 import BeautifulSoup import requests urls=['https://goduke.com/sports/mens-basketball/roster'] ListData=[] for x in range(len(urls)): page=requests.get(urls[x]).text pagesoup=BeautifulSoup(page,'html.parser') h=pagesoup.find_all('td', class_="height") ListData.append(h) NewList=[] for b in range(len(ListData)): new=[] for x in ListData[b]: print(x.text)
0
1
1
If you use a CSS selector you can simply pass the class name, e.g. with from scrapy.selector import Selector.
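For illustration, sticking with the question's own BeautifulSoup setup: the inches value lives in the data-sort attribute, which is read by indexing the tag rather than taking its .text (the HTML snippet is the one from the question):

```python
from bs4 import BeautifulSoup

html = """
<td class="height" data-sort="79">6-7</td>
<td class="height" data-sort="85">7-1</td>
<td class="height" data-sort="74">6-2</td>
"""

soup = BeautifulSoup(html, "html.parser")
# .text would give the feet-inches string ("6-7"); the attribute holds inches.
inches = [int(td["data-sort"]) for td in soup.find_all("td", class_="height")]
print(inches)  # [79, 85, 74]
```

So in the scraper's loop, x["data-sort"] (instead of x.text) yields the values that are easy to do math on.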
2023-02-18 15:49:38
1
python,tkinter
2
75,494,563
Tkinter filename float not incrementing correctly
75,494,496
false
41
I am trying to create a program that loads the next file numerically upon the press of a button. In order to check what the next file is I simply run a while loop checking for a file name in .1 increments. I.E, I have a file labeled 1.1, 1.3, and 1.4, I want the button to load them in that order, skipping 1.2 because it doesn't exist. The problem arising is that for some reason instead of just incrementing the value that gets checked by .1, it is incrementing it by .10000000000(some random number here), making it so I can't check the files properly import tkinter as tk from tkinter import * mainscreen = tk.Tk() cuename = tk.Entry(mainscreen) def loadnext_cue(): cued = float(cuename.get()) next_cue = cued + .1 while os.path.exists("%s .txt" % str(next_cue)) == False: print(next_cue) next_cue += 0.1 if next_cue >= 4: error = Label(mainscreen, text="No Higher Cue") error.pack(side=TOP) time.sleep(2) error.destroy() break if os.path.exists("%s .txt" % str(next_cue)): callup = next_cue with open("%s.txt" % str(callup), "r") as loadentry: quote = loadentry.read() T.delete("1.0", END) T.insert(END, quote) Because of what I'm using the program for I know for now at least the file names will only go into the 10's place decimal (I.E. 
1.2) So, starting with file number 1.0, looking for the next file up, the output I'm getting looks something like this: 1.1 1.2000000000000002 1.3000000000000003 1.4000000000000004 1.5000000000000004 1.6000000000000005 1.7000000000000006 1.8000000000000007 1.9000000000000008 2.000000000000001 2.100000000000001 2.200000000000001 2.300000000000001 2.4000000000000012 2.5000000000000013 2.6000000000000014 2.7000000000000015 2.8000000000000016 2.9000000000000017 3.0000000000000018 3.100000000000002 3.200000000000002 3.300000000000002 3.400000000000002 3.500000000000002 3.6000000000000023 3.7000000000000024 3.8000000000000025 3.9000000000000026 4.000000000000003 It seems to go up by exactly .1 on the first iteration but then does whatever it wants after that. Any and all help would be appreciated
0.099668
2
1
Your filenames are of the form x.y where x and y are both integers, but you are using a Python float to store the filename. Floats in Python represent decimal numbers with finite precision, and when you perform arithmetic you can experience a loss of precision. That is why when you perform 1.1 + 0.1 you get a very small error in the result -- the 'random number'. I suggest to modify your code to store the parts x and y in separate variables as integers. You can increment the parts separately according to your logic, and then compose them into the string filename when needed.
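A small sketch of that suggestion (the function name is hypothetical): keep the counter as an integer number of tenths, so the increment is exact, and only format it into a filename when needed. The limit of 40 tenths mirrors the question's "if next_cue >= 4" cutoff:

```python
def next_existing(start_tenths, existing, limit_tenths=40):
    """Walk upward in exact 0.1 steps by storing tenths as an int."""
    n = start_tenths + 1
    while n < limit_tenths:
        name = f"{n // 10}.{n % 10}"   # 13 -> "1.3", with no float drift
        if name in existing:
            return name
        n += 1
    return None  # no higher cue

files = {"1.1", "1.3", "1.4"}
print(next_existing(11, files))  # 1.3 (1.2 is skipped because it doesn't exist)
```

In the real program the membership test would be os.path.exists(f"{name}.txt") instead of a set lookup.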
2023-02-18 19:07:07
1
python,algorithm,data-structures,nearest-neighbor
4
75,514,598
What algorithm would be most efficient when trying to find the nearest city given a set of coordinates?
75,495,739
false
196
I have a dataset which contains the longitude and latitude of the 1000 largest US cities. I'm designing an API which returns the user's nearest city, given an input of the user's longitude/latitude. What is the most efficient algorithm I can use to calculate the nearest city? I know that I can use the haversine formula to calculate the distance between the user's coordinate and each cities, but it seems inefficient to have to do this for all 1000 cities. I've previously used a k-d tree to solve nearest neighbour problems on a plane - is there a similar solution that can be used in the context of a globe? Edit: keeping this simple - distance I'm looking for is as the crow flies. Not taking roads or routes into account at this stage.
0.049958
3
2
This answer is very similar to that of ckc. First, split the 1000 cities into 2 groups: a big one located between Canada and Mexico, and the few other cities outside this rectangle (i.e. Alaska, Hawaii, ...). When processing coordinates, check if they belong to the small group: in this case, no optimisation is needed. To optimize the other case, you may divide the map into rectangles (for example 5° lat x 7° lon) and associate with each rectangle the list of cities belonging to it. To find the nearest city, consider the rectangle R containing the point. Compute the distance to the cities of that rectangle. Process the 8 rectangles adjacent to R by computing the distance of the point to each rectangle: you may then eliminate the adjacent rectangles whose distance is greater than the best distance already found. Iterate the process to the next level, i.e. the next crown (rectangles located on the outside of the area composed of 5x5 rectangles whose center is R).
2023-02-18 19:07:07
1
python,algorithm,data-structures,nearest-neighbor
4
75,514,588
What algorithm would be most efficient when trying to find the nearest city given a set of coordinates?
75,495,739
false
196
I have a dataset which contains the longitude and latitude of the 1000 largest US cities. I'm designing an API which returns the user's nearest city, given an input of the user's longitude/latitude. What is the most efficient algorithm I can use to calculate the nearest city? I know that I can use the haversine formula to calculate the distance between the user's coordinate and each cities, but it seems inefficient to have to do this for all 1000 cities. I've previously used a k-d tree to solve nearest neighbour problems on a plane - is there a similar solution that can be used in the context of a globe? Edit: keeping this simple - distance I'm looking for is as the crow flies. Not taking roads or routes into account at this stage.
0.049958
3
2
You can split the map into non-overlapping squares that cover the whole US map (i.e., you will have a grid). You will number the squares using the coordinates of their upper-left corner (i.e., each one will have a unique ID) and do a preprocessing step where each city is assigned the ID of the square it belongs to. You will find the square the user lies in and then check only the cities that lie in this square and the ones that are one step from it (total: 9 squares). If these contain no cities, you will check the ones two steps away, etc. In this way, on average you will check far fewer cities to find the closest one.
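A runnable sketch of this grid scheme (toy coordinates; planar distance stands in for haversine, and the ring-by-ring search stops once no closer ring is possible):

```python
import math
from collections import defaultdict

def build_grid(cities, cell=1.0):
    """Bucket (name, lat, lon) records into square cells."""
    grid = defaultdict(list)
    for name, lat, lon in cities:
        grid[(int(lat // cell), int(lon // cell))].append((name, lat, lon))
    return grid

def nearest(grid, lat, lon, cell=1.0):
    """Search outward ring by ring around the query point's cell."""
    r0, c0 = int(lat // cell), int(lon // cell)
    best, best_d = None, float("inf")
    for ring in range(100):
        # Any cell at Chebyshev distance `ring` is at least (ring-1)*cell away,
        # so once that bound exceeds the best hit, no closer city exists.
        if best is not None and (ring - 1) * cell > best_d:
            break
        for r in range(r0 - ring, r0 + ring + 1):
            for c in range(c0 - ring, c0 + ring + 1):
                if max(abs(r - r0), abs(c - c0)) != ring:
                    continue  # visit only the outermost ring this pass
                for name, clat, clon in grid.get((r, c), []):
                    d = math.hypot(clat - lat, clon - lon)
                    if d < best_d:
                        best, best_d = name, d
    return best

cities = [("A", 0.5, 0.5), ("B", 5.5, 5.5), ("C", 5.4, 0.2)]
grid = build_grid(cities)
print(nearest(grid, 0.6, 0.6))  # A
print(nearest(grid, 5.0, 5.0))  # B
```

For the real dataset, the distance function would be swapped for the haversine formula; the grid only narrows down which of the 1000 cities need the expensive computation.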
2023-02-18 23:38:17
1
python,python-3.x,enums
3
75,497,142
Python compare type of (str, enum) classes
75,497,077
false
81
I have multiple enums defined with from enum import Enum class EnumA(str, Enum): RED = "red" class EnumB(str, Enum): BLUE = "blue" How do I compare the type of these classes/enums with say x=EnumA.RED? The following doesn't work. type(x) is enum type(x) is EnumType type(x) is Enum I don't want to compare the classes directly, since I have a lot of enums.
0.066568
1
1
To know the type of the variable you should compare it with EnumA or EnumB, for example: type(x) is EnumA
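Expanding on this with the asker's own classes: type(x) is pinned to the concrete class, while isinstance() also matches the shared Enum base, which is handy when there are many enums and you only want "is this any enum member":

```python
from enum import Enum

class EnumA(str, Enum):
    RED = "red"

class EnumB(str, Enum):
    BLUE = "blue"

x = EnumA.RED

print(type(x) is EnumA)      # True: the member's class is EnumA
print(type(x) is Enum)       # False: Enum is only a base class, never the type
print(isinstance(x, EnumA))  # True
print(isinstance(x, Enum))   # True: matches members of any Enum subclass
print(isinstance(x, EnumB))  # False
```

This is why type(x) is Enum from the question fails even though x is an enum member.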
2023-02-19 02:20:23
1
python,printing
1
75,497,757
python vending machine program-
75,497,616
true
61
The program allows the user to enter money and select an item that outputs the price. Towards the end, the if statements, if purchase_choice == 1: print("************ The item costs $1.00 ************") and the following statements after that one, is not printing in the output. Can someone help me? Here's the code. print("*********************************") print("Welcome to Vending Machine Bravo!") print("*********************************") print("If you would like to make a selection, please insert the appropriate currency into the machine.") Currency deposit num_5Dollars = 5.00 num_Dollars = 1.00 num_Quarters = .25 num_Dimes = .10 num_Nickels = .05 num_Pennies = .01 print() print("Please enter:") if num_5Dollars == 5.00: print("5.00 for $5 bills") if num_Dollars == 1.00: print("1.00 for $1 bills") if num_Quarters == .25: print(".25 for Quarters") if num_Dimes == .10: print(".10 for dimes") if num_Nickels == .05: print(".05 for nickels") if num_Pennies == .01: print(".01 for pennies") user_val = float(input()) if int(user_val) == 0: print("0 to cancel") print("At any point if you wish to cancel operation or go back to last menu, please enter 0. Thank you!:") print() print(int(user_val)) print() print("************ Total money in machine is: ", user_val , "************") purchase item selection Skittles = {'type' '1''Price': 1.00} Reeses = {'type' '2' 'Price': 1.19} M_and_M = {'type' '3' 'Price': 1.50} Chex_Mix = {'type' '4' 'Price': 0.99} Honey_Bun = {'type' '5' 'Price': 1.99} types = [Skittles, Reeses, M_and_M, Chex_Mix, Honey_Bun] global type type = [Skittles, Reeses, M_and_M, Chex_Mix, Honey_Bun] print() print("At any point if you wish to cancel operation or go back to last menu, please enter 0. 
Thank you!:") print() print("If you would like to purchase:") print() print("Skittles - type '1', (Price = $1.00)") print("Reeses - type '2', (Price = 1.19)") print("M_and_M - type '3', (Price = $1.50)") print("Chex_Mix - type '4', (Price = $0.99)") print("Honey_Bun - type '5', (Price = $1.99)") print() purchase_choice = input() print("Your enter is:", purchase_choice) if user_val == 0: print('Item selection stopped') else: if purchase_choice == 1: print("************ The item costs $1.00 ************") if purchase_choice == 2: print("************ The item costs $1.19 ************") if purchase_choice == 3: print("************ The item costs $1.50 ************") if purchase_choice == 4: print("************ The item costs $0.99 ************") if purchase_choice == 5: print("************ The item costs $1.99 ************")
1.2
1
1
You need to convert the user input string to an integer. purchase_choice = int(input())
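The underlying comparison in two lines, showing why none of the if branches fire without the conversion:

```python
purchase_choice = "1"        # input() always returns a string
print(purchase_choice == 1)  # False: a str never compares equal to an int

purchase_choice = int(purchase_choice)
print(purchase_choice == 1)  # True once converted
```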
2023-02-19 04:33:28
-1
python,tensorflow
1
75,594,099
Tensorflow : Trainable variable not getting learnt
75,498,019
false
149
I am trying to implement a custom modified ReLU in Tensorflow 1, in which I use two learnable parameters. But the parameters are not getting learnt even after running 1000 training steps, as suggested by printing their values before and after training. I have observed that inside the function, when I execute the commented lines instead, then the coefficients are learnt. Could anyone suggest why the first case results in the trainable coefficients not being learnt and how this can be resolved? import numpy as np import tensorflow.compat.v1 as tf tf.disable_eager_execution() def weight_variable(shape,vari_name): initial = tf.truncated_normal(shape, stddev=0.1,dtype=tf.float32) return tf.Variable(initial,name = vari_name) def init_Prelu_coefficient(var1, var2): coeff = tf.truncated_normal(([1]), stddev=0.1,dtype=tf.float32) coeff1 = tf.truncated_normal(([1]), stddev=0.1,dtype=tf.float32) return tf.Variable(coeff, trainable=True, name=var1), tf.Variable(coeff1, trainable=True, name=var2) def Prelu(x, coeff, coeff1): s = int(x.shape[-1]) sop = x[:,:,:,:s//2]*coeff+x[:,:,:,s//2:]*coeff1 sop1 = x[:,:,:,:s//2]*coeff-x[:,:,:,s//2:]*coeff1 copied_variable = tf.concat([sop, sop1], axis=-1) copied_variable = tf.math.maximum(copied_variable,0)/copied_variable # copied_variable = tf.identity(x) # copied_variable = tf.math.maximum(copied_variable*coeff+copied_variable*coeff1,0)/copied_variable # copied_variable = tf.multiply(copied_variable,x) return copied_variable def conv2d_dilate(x, W, dilate_rate): return tf.nn.convolution(x, W,padding='VALID',dilation_rate = [1,dilate_rate]) matr = np.random.rand(1, 60, 40, 8) target = np.random.rand(1, 58, 36, 8) def learning(sess): # define placeholder for inputs to network Input = tf.placeholder(tf.float32, [1, 60, 40, 8]) input_Target = tf.placeholder(tf.float32, [1, 58, 36, 8]) kernel = weight_variable([3, 3, 8, 8],'G1') coeff, coeff1 = init_Prelu_coefficient('alpha', 'alpha1') conv = Prelu(conv2d_dilate(Input, kernel , 2), coeff, 
coeff1) error_norm = 1*tf.norm(input_Target - conv) print("MOMENTUM LEARNING") train_step = tf.train.MomentumOptimizer(learning_rate=0.001,momentum=0.9,use_nesterov=False).minimize(error_norm) if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1: init = tf.initialize_all_variables() else: init = tf.global_variables_initializer() sess.run(init) print("INIT coefficient ", sess.run(coeff), sess.run(coeff1)) init_var = tf.trainable_variables() error_prev = 1 # initial error, we set 1 and it began to decrease. for i in range(1000): sess.run(train_step, feed_dict={Input: matr, input_Target: target}) if i % 100 == 0: error_now=sess.run(error_norm,feed_dict={Input : matr, input_Target: target}) print('The',i,'th iteration gives an error',error_now) error = sess.run(error_norm,feed_dict={Input: matr, input_Target: target}) print(sess.run(kernel)) print("LEARNT coefficient ", sess.run(coeff), sess.run(coeff1)) sess = tf.Session() learning(sess)
-0.197375
1
1
You can try the following. 1: Increase your learning rate to 0.01 or 0.1, since 0.001 is too small for a network trained for 1000 iterations. 2: Try experimenting with different activation functions. I would recommend ReLU or softmax.
2023-02-19 07:17:05
2
python,python-attrs
1
75,498,656
attrs - how to validate an instance of a Literal or None
75,498,562
true
114
This is what I have. I believe there are two problems here - the Literal and the None. from attrs import frozen, field from attrs.validators import instance_of OK_ARGS = ['a', 'b'] @field class MyClass: my_field: Literal[OK_ARGS] | None = field(validator=instance_of((Literal[OK_ARGS], None))) Error: TypeError: Subscripted generics cannot be used with class and instance checks Edit: I've made a workaround with a custom validator. Not that pretty however: def _validator_literal_or_none(literal_type): def inner(instance, attribute, value): if (isinstance(value, str) and (value in literal_type)) or (value is None): pass else: raise ValueError(f'You need to provide a None, or a string in this list: {literal_type}') return inner
1.2
1
1
You can’t do isinstance() checks on Literals/Nones and that’s what the is_instance Validator is using internally (it predates those typing features by far). While we’ve resisted adding a complete implementation of the typing language due to its complexity, having one dedicated to such cases might be worth exploring if you’d like to open an issue.
2023-02-19 09:18:04
0
python,php,nginx,selenium-webdriver
1
75,572,976
Selenium Python script works only command line php, but not in browsers
75,499,094
false
51
I have nginx, php server, and python selenium installed, my python selenium script works perfectly in command line php, but not via nginx server browser. Tried some python codes, all work in browsers except selenium code. No error in browsers, curl. exec.php $selenium = ('python3 /var/www/html/selenium/test.py'); echo shell_exec($selenium); test.py #!/usr/bin/env python3 from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument('--headless') driver = webdriver.Chrome(options=options) driver.get('http://localhost/info.php') print(driver.title) driver.quit() Tried in nginx root directory, with php unix and tcp socket
0
1
1
Downgraded from Selenium 4 to Selenium 3 with Chrome and ChromeDriver 74. Now it works in the browser too.
2023-02-19 09:55:25
0
python,rasterio
1
75,499,290
print the pixel coordinates of a specific point in the raster
75,499,270
true
505
To print the pixel coordinates of a specific point in the raster, i used the index() method assuming that The index() method takes the x and y coordinates of the point in geographic coordinates and returns the corresponding row and column indices of the point in the raster. I want to double-check that. Is this the best way to handle it? I'm a beginner, and I'm still unsure about when and how to use the affine transformation.Is it necessary to perform the affine transformation before printing the pixel coordinate? import rasterio with rasterio.open("LC08_L2SP_190037_20190619_20200827_02_T1_ST_B10.TIF") as data: print(data.crs) longitude, latitude = 13.3886, 52.5174 row, col = data.index(longitude, latitude) print("Pixel coordinates of point ({}, {}): ({}, {})".format(longitude, latitude, col, row))
1.2
1
1
This code uses the rasterio package to read in a GeoTIFF file and extract information about its coordinate reference system (CRS) and the pixel coordinates of a specified latitude and longitude point. The code first opens the GeoTIFF file using the rasterio.open() function and stores the resulting DatasetReader object in the variable data. It then prints out the CRS of the raster using the crs attribute of the DatasetReader object. Next, the code specifies a longitude and latitude point (longitude, latitude = 13.3886, 52.5174) and uses the index() method of the DatasetReader object to convert the point's coordinates to the corresponding row and column indices in the raster. The resulting pixel coordinates are then printed out using string formatting. Note that the index() method assumes that the input coordinates are in the same CRS as the raster. If the input coordinates are in a different CRS, you may need to transform them using the rasterio.transform module's transform() or reproject() functions before calling index().
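For intuition about the affine step: for a simple north-up raster, index() reduces to the arithmetic below (the origin and pixel size here are hypothetical, not taken from the Landsat file):

```python
# Hypothetical north-up geotransform: top-left corner and pixel size in degrees.
x0, y0 = 13.0, 53.0   # top-left corner (lon, lat)
px, py = 0.01, 0.01   # pixel width and height

lon, lat = 13.3886, 52.5174
col = int((lon - x0) / px)  # columns grow eastward from the left edge
row = int((y0 - lat) / py)  # rows grow southward from the top edge
print(row, col)             # 48 38
```

rasterio performs the same conversion using the dataset's actual affine transform, which is why no separate transformation step is needed before calling index(), as long as the input coordinates are already in the raster's CRS.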
2023-02-19 14:55:10
1
python,datetime
2
75,501,088
Spoof ongoing datetime in Python
75,500,991
false
66
I am looking for the simplest solution to spoof live datetime, specifically, I would like it to start at a specific time, say 2023-01-03 15:29, and make it go on, so that the clock is ticking, so to speak. There are plenty of ways to spoof current datetime, but I haven't find a way that would do so continuously, so the fake time keeps moving.
0.099668
1
1
My first thought is, instead of trying to 'spoof' datetime, just perform a "translation". Basically, you just need to calculate the timedelta to subtract from the current datetime.now() to reach your desired date. desired_time = datetime.now - translation ==> translation = datetime.now - desired_time After that you can simply call datetime.now() - translation, which will effectively progress the clock from your desired date. Hope that makes sense and can work for you!
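A minimal sketch of that translation wrapped in a class (the class name is made up), started at the date from the question:

```python
from datetime import datetime

class SpoofClock:
    """A clock that starts at `start` and keeps ticking with real time."""

    def __init__(self, start: datetime):
        # Fixed offset between the real clock and the desired fake start.
        self._offset = datetime.now() - start

    def now(self) -> datetime:
        return datetime.now() - self._offset

clock = SpoofClock(datetime(2023, 1, 3, 15, 29))
print(clock.now())  # about 2023-01-03 15:29:00, advancing in real time
```

Because the offset is computed once and subtracted on every call, successive clock.now() values keep moving forward just like the real clock.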
2023-02-19 15:15:49
0
python-3.x,caching
1
75,504,274
Is there any python equivalent of google guava loading cache?
75,501,127
false
78
I am looking for an in-memory loading cache which is compatible with python and provides specially these functionalities: Time of reloading Method of reloading Thread safe I found only in-built python libraries.
0
1
1
You included the guava tag on your question, but Guava is a Java library, so it's almost certainly not relevant to what you need. I would suggest removing that tag.
2023-02-19 16:11:27
0
python,django,django-models,django-views,django-templates
2
75,501,568
Can't add ManyToMany relation in admin
75,501,516
false
78
I have following model created in Django: class Content(models.Model): publish_date = models.DateTimeField(auto_now_add=True) name = models.CharField(max_length = 100, blank = False, default='name') summary = models.CharField(max_length=400, blank=True, default='summary') description = models.TextField(blank = True, max_length=5000, default = LOREM_IPSUM) author = models.ForeignKey(User, default=None, on_delete=models.CASCADE) rules = models.TextField(blank = True, default='rule set') parent = models.ManyToManyField(to = 'self', related_name="child", symmetrical = False, blank=True) I added four Content objects through Django admin: Project1 Task1 Task2 Task3 And set parent to Project1 for all TaskX. When I want to display all the content with detailed parent atrribute it turns out to be None. views.py def display_ideas(request): ideas = Content.objects.filter(name="Task3") return render(request, 'display_ideas.html', context = { 'ideas' : ideas }) ** display_ideas.html ** <div class="container bg-success"> {% for idea in ideas %} <div class="container bg-danger"> <h2>Name:</h2> {{ idea.name }} has parent: {{ idea.parent.name }} </div> {% endfor %} </div> The output is: Name: Task3 has parent: None What am I doing wrong? All migrations are done and the site is up and running.
0
1
1
Since it's a ManyToMany field, a Content object may have more than one parent; that's why {{ idea.parent.name }} does not work. You would have to iterate through each parent, as you have already done for ideas with {% for idea in ideas %}, to show the parent attributes.
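As a sketch, the template side of that iteration could look like this (reusing the question's own markup; parent.all is the related-manager call that yields the linked Content objects, and {% empty %} handles items with no parent):

```html
<div class="container bg-success">
  {% for idea in ideas %}
    <div class="container bg-danger">
      <h2>Name:</h2>
      {{ idea.name }} has parents:
      {% for p in idea.parent.all %}
        {{ p.name }}
      {% empty %}
        (none)
      {% endfor %}
    </div>
  {% endfor %}
</div>
```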
2023-02-19 17:07:19
0
python,regex
4
75,503,093
Replacing/substituting part of a string using Regex in Python
75,501,898
true
81
I'm tidying a string in Python and need to substitute some of the text (following a certain rule) using Regex. In the string (copied below), a place is usually mentioned followed by a comma and then the city's associated mortality rate. The next place is separated with a semi-colon. However there are some examples where the semi-colon is missing and I need to use Regex to add that semi-colon back in (e.g. 'Plymouth, 19 Portsmouth, 15' should be 'Plymouth, 19; Portsmouth, 15'). The text is as follows: Birkenhead, 16; Birmingham, 15; Blackburn, 16; Bolton, 18 ; Bradford, 16 ; Brighton, 14 Bristol, 20; Cardiff, 25 ; Derby, 12 ; Halifax, 20; Biddersfield, 21 ; Hull, 19 ; Leeds, 22 ; Leicester, 18 ; London, 17; Manchester,15 ; Norwich, 24; Nottingham, 21; Oldham, 18 ; Plymouth, 19 Portsmouth, 15 ; Preston, 23 ; Salford, 14 ; Sheffield, 16 ; Sunderland, 18; Wolverhampton. 30. The rate in Edinburgh was 14 ;in Glasgow, 23 ; and in Dublin. 22. I've tried using re.sub() for this using the following formula and using non-capture sets but am doing something very horribly wrong! mystring = [the string here] re.sub("(?:[0-9])?\s(?:[A-Z0-9]?)", ";", mystring) Is anyone able to help me fix this? Thank you!
1.2
1
1
Thanks all! In the end, with some experimenting, I have found a different solution: re.sub("(?<=\d)[ ;]+", "; ", mystring) Effectively, we just look for all cases of one or more spaces and/or semicolons which are preceded by a digit (using a lookbehind) and then replace the match with "; ".
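A minimal runnable sketch of that substitution (the sample text below is abbreviated from the question):

```python
import re

mystring = "Oldham, 18 ; Plymouth, 19 Portsmouth, 15 ; Preston, 23"
# (?<=\d) is a lookbehind for a digit; [ ;]+ then consumes the run of
# spaces and/or semicolons after it, so every separator becomes "; "
result = re.sub(r"(?<=\d)[ ;]+", "; ", mystring)
print(result)  # Oldham, 18; Plymouth, 19; Portsmouth, 15; Preston, 23
```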
2023-02-19 20:56:12
1
python,pyqt5
1
75,503,504
choose checkbox in QListWidgetItem
75,503,393
false
29
I have this code with pyqt5: elif (content1 == "next"): todos = ["one" , "two" , "three" , "four", "five"] self.todo_listWidget.show() for todo in todos: item = QListWidgetItem(todo) item.setFlags(item.flags() | QtCore.Qt.ItemIsUserCheckable) item.setCheckState(QtCore.Qt.Unchecked) self.todo_listWidget.addItem(item) And I want that when I check one item, something happens: if item x is checked, then do something. How can I do this? Thanks
0.197375
1
1
Connect the itemChanged signal of self.todo_listWidget with a function that takes the item as its argument. This signal is emitted whenever the data of an item has changed. In the function, you can check the respective item's checkState and proceed accordingly. An alternative is itemActivated. Which one works better for you depends on whether you want to react to all (also programmatic) changes of the check state or only to user interaction. This signal is emitted when the item is activated. The item is activated when the user clicks or double clicks on it, depending on the system configuration. It is also activated when the user presses the activation key (on Windows and X11 this is the Return key, on Mac OS X it is Command+O).
2023-02-19 22:06:11
0
python,loops,recursion,iteration,time-complexity
1
75,527,677
Time Complexity of this program solving the coin change problem
75,503,759
true
49
I have created a program shown below, and I am confused as to how to figure out its time complexity. Is it O(n^(target/min(coins))) because a for loop is created each time the function is called, and the function is called target/min(coins) times? The program solves the coin change problem (although not in a very efficient way!): You are given an array of coins with varying denominations and an integer sum representing the total amount of money; you must return the fewest coins required to make up that sum. For this problem Code: def count(coins: list[int], target): def helper(coins, target, vals = [], answers = set()): if target<0: vals.pop() elif target==0: vals.sort() answers.add(tuple(vals)) vals.clear() else: for coin in coins: helper(coins, target-coin, vals+[coin]) return len(answers) coins.sort() if (answer:=helper(coins, target)): return answer else: return -1 print(count([2, 5, 3, 6], 10)) # 5 print(count([1, 3, 5, 7], 8)) # 6 Attempt at code explanation (explained for print statement 2): It starts with finding the most amount of coins that is needed to reach the target or go higher ((1, 1, 1, 1, 1, 1, 1, 1) in this case). From there it iterates over all the possible values for the last row ((1, 1, 1, 1, 1, 1, 1, 3), (1, 1, 1, 1, 1, 1, 1, 5), (1, 1, 1, 1, 1, 1, 1, 7)). If the sum of the numbers is negative, it removes that value and tries again with a different value. It adds any solutions (where sum(vals)==target) to the set answers (after sorting it) to prevent duplicates from being counted. Then it moves to the higher row ((1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 1, 3), (1, 1, 1, 1, 1, 1, 5), (1, 1, 1, 1, 1, 1, 7)). And repeats. Can anyone explain what is the time complexity of the program along with why? Thanks!
1.2
1
1
I have realized that my initial thought was correct: the time complexity is indeed O(n^ceil(target/min(coins))). If you take the smallest value in coins, then the function will have to call itself ceil(target/min(coins)) times, and with each call start a new for loop, which is an O(n) operation. Therefore, the time complexity is O(n^ceil(target/min(coins))).
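That bound can be checked empirically by counting recursive calls with a stripped-down, hypothetical instrumentation of the helper (not the original code, which also tracks vals/answers):

```python
def count_calls(coins, target):
    """Count how many times the recursive helper runs for a given input."""
    calls = 0
    def helper(t):
        nonlocal calls
        calls += 1
        if t <= 0:          # base cases from the original: target < 0 or == 0
            return
        for coin in coins:  # the O(n) loop started on every call
            helper(t - coin)
    helper(target)
    return calls

# Depth is ceil(target / min(coins)) and the branching factor is len(coins),
# so the call count is bounded by n**ceil(target/min(coins)) up to constants.
print(count_calls([1], 3))     # 4: a single chain 3 -> 2 -> 1 -> 0
print(count_calls([1, 2], 3))  # 9
```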
2023-02-20 03:02:53
1
javascript,python-3.x,google-apps-script
1
75,521,620
Google Apps Script: 403 in simple UrlFetchApp.fetch
75,504,913
true
207
I have the following Python code that runs fine from my computer: >>> from requests import get >>> response = get("https://fiis.com.br/btal11/") >>> response <Response [403]> >>> response = get("https://fiis.com.br/btal11/", headers={'User-agent': 'Mozilla/5.0'}) >>> response <Response [200]> If I simply add a user-agent header, I'm able to get the html page content. However, with the equivalent JS code on google Apps Script that doesn't work: function GORDON(input) { var url = "https://fiis.com.br/btal11/"; var options = { muteHttpExceptions: true, headers: {"User-agent": "Mozilla/5.0"}, }; var response = UrlFetchApp.fetch(url, options); console.log(response.getContentText()); //var something = HtmlService.parse(response.getContentText()); } and the result seems to be a challenge from Cloudflare to be solved: <!DOCTYPE html> <html lang="en-US"> <head> <title>Just a moment...</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=Edge"> <meta name="robots" content="noindex,nofollow"> <meta name="viewport" content="width=device-width,initial-scale=1"> <link href="/cdn-cgi/styles/challenges.css" rel="stylesheet"> </head> <body class="no-js"> <div class="main-wrapper" role="main"> <div class="main-content"> <h1 class="zone-name-title h1"> <img class="heading-favicon" src="/favicon.ico" alt="Icon for fiis.com.br" onerror="this.onerror=null;this.parentNode.removeChild(this)"> fiis.com.br </h1> <h2 class="h2" id="challenge-running"> Checking if the site connection is secure </h2> <noscript> <div id="challenge-error-title"> <div class="h2"> <span class="icon-wrapper"> <div class="heading-icon warning-icon"></div> </span> <span id="challenge-error-text"> Enable JavaScript and cookies to continue </span> </div> </div> </noscript> <div id="trk_jschal_js" style="display:none;background-image:url('/cdn-cgi/images/trace/managed/nojs/transparent.gif?ray=79c3f9c65b60e3b6')"></div> <div id="challenge-body-text" 
class="core-msg spacer"> fiis.com.br needs to review the security of your connection before proceeding. </div> <form id="challenge-form" action="/btal11/?__cf_chl_f_tk=1lWKKYirqW.UmES1h5ANk6aNER4buUV9BvandjshiQU-1676861855-0-gaNycGzNCHs" method="POST" enctype="application/x-www-form-urlencoded"> <input type="hidden" name="md" value="vA5q0MLK_A04uFcQqCyCa20MICLT.o_hZCm87QeOsEc-1676861855-0-Ab6XuJyj-1RtGxhMzcfJQ2E1vrHaAOzs987bM2ZpxqFwZvWYvUEnID4JOSO1iJLQDPqYPPzX-AwcdroRh5CKZ2UP4_o_uqOfOeYMVZJo1S4iqBZ3loTopwRBpVHtAADxvebnNBvP_HyStPDyJH0VkGGHwcBpjJsmv-duU8lhq7z9ex0TS-wNsyNhp4eoM23Uwzi30418XhWvNqoK66sEcrN6vaZW8EJEGFfxW2LDf-R9ZoYUay2xt4Xgcwz17nDgEWeGlR_L-S5RvonpDTBnk5ujbFc_hdwX7Y39NdeIDLTlTCtudzHHEsK0hhbZjHVL7xl4YwkgxKoLaL-URi59VSdHMNcxlHZNt65EWGwS_gXhXG7BFX74CI-EgVo-138F_E9KyWgWz2kL2C4RYG-fcRHEMsYUZznCznaRm4CipklQIGrg1TzLb8GmB25HhjZM-BgKMprLPWJ2jCJ_Yw5KxurVebXuzinZt43H_5klyYd3Of0TwBnTjVMmDbdWsQaQds7PWHi1qq7fXVATS9MzHzZiaY7VmLjdkMWizlMIAmBafYqltGgh8dEZy3sPom6zbj38YGuOxF6gJWrx1tBGm_Kdm15E3gZqgKpmyRuXPDb-a6m0ncJK41sn6XUxUxM-2QuvV1OyAylzTyVderxsqKGH1VJlHcRVUxVyAldAgzMT4SHhH3kcN0CF3cPH9bj1yrw29rXel4ZKowdCYfzRgVJJxdmaz6MTqeEdoj9eXk9h_hfOS1xYImhsCKD1r7SFsmNx2-awD3il9esnD-OR7NwuUg9CruUOBeOnUUgZEg88_l_-B5TYEHARh9Vecg6KVru0XxlaZU2x42gE8vfiKcUKWbNykHFgXFYSwwqwc6zLtF_1UVi76EA0RTX0dITNU-dyXbSRn7HUT8UJojrc5uohnsO5u1smAz9JH7cGJGiZABVaVyKxVIfVQRIPXkbfndrUY_9M5TS0Ms6LLYfozjEGVbC6SegE-FzQEGvJeTHrQz2nA4PM_m3by4L283EJDHk3jnd23_CpM6coZrQxLlyrpAxOzi3lx91cppJONKQd0QMJNaTY6j5URnx2uO8Uto1tbeXZ3lKKheIuYfxBECzuALkPqNqB3pbdm3H6TXPSrpBmqWHPB7yqSowbkQN5qHFePDdg6DBuH40Hm4NgONvkVMmD_D4r8HyJ58-GK4gDF_bmWJjgyLB5pMsxCcGZm50u9bS7f-eHDMyGTYdhlOzU6tnJs3-B54m2ph65RNPgkYqg3TDFDHc1GDM61vhxj2QFAJY_crvxYhd6mAr1C--fk38rs1f_LX42Uqt1rcZCEZXr5eHFivIuAmDlSl-8iz0C8Y2K9M0fUS-fid6dcYuK5BHWb9FTJ5lNWkRfEWIXyV-YhWdWkB85_NbPnPOOgDiNzPlCEzUN-AkzUFjFhEIu1k6goZUQXXujthhQta0NTG0T0B4aD3DB-k5ihYILF_w6vDhAZcAFz-TcR4t_TuvoHkGnlwwC354-1Hcb0IPdcrZIXb_8_PpIKRSHQeCErlzLYJfg0ZUobNbSVC6b6p15uXzITqX5FCoTIGNUWuWqiCYLBvcWcTBbAevpxQDLDrQfM351
ZQUrcT5aeET6SeWLZftKhAZeHdiAc8KK_iw6jUpxrB4v2oGZAlU372wBIEZ0eQYQhMwJnm-PMODd3BodqE5HJe0Sc7wnUyjTT3Rxwv06Luv0-8CfswblPIYq7Mwx771ZXXPZmQyrepQ1-bBntEsvFgGI4jyPu8RKHuq8H8kdtLsj_t747dkdRq9zmXGIcCcTh09Vj-sTKHZPIHh396ljgzlVJ7k_nWX8BCHibRj3kUtnDhJarkzlobqb985ZNSybspZKlbG8f3qIWdo1wa1-Bo002tNWyElRcDt_xwXuneDTyP0qQWyX-7kKXlJFIYQ9detREaifPI1hA8fU11U2r2XzEsOLpxao18T9D9DjkvC4cGh6BsE3s8uyW06_Q5QhxvsADyW7HkZ6I72H_l6zObf9N3uKYfEy8CrcdMsS7eLXaLvxbzuNd2WuxbKu3N8AiI9D51lU8CEnSKCQa4SUzRj4f2q62HjhiG9HpT9TYRdyYPMsbb_eXUxvrA"> </form> </div> </div> <script> (function(){ window._cf_chl_opt={ cvId: '2', cZone: 'fiis.com.br', cType: 'managed', cNounce: '97809', cRay: '79c3f9c65b60e3b6', cHash: 'fc3e1644bceb435', cUPMDTk: "\/btal11\/?__cf_chl_tk=1lWKKYirqW.UmES1h5ANk6aNER4buUV9BvandjshiQU-1676861855-0-gaNycGzNCHs", cFPWv: 'g', cTTimeMs: '1000', cMTimeMs: '0', cTplV: 4, cTplB: 'cf', cRq: { ru: 'aHR0cHM6Ly9maWlzLmNvbS5ici9idGFsMTEv', ra: 'TW96aWxsYS81LjAgKGNvbXBhdGlibGU7IEdvb2dsZS1BcHBzLVNjcmlwdDsgYmVhbnNlcnZlcjsgK2h0dHBzOi8vc2NyaXB0Lmdvb2dsZS5jb207IGlkOiBVQUVtZERkOHNNRzFSb0FJWGZTYnlsa2plUGwxdjhTdngwVUkp', rm: 'R0VU', d: 'K2hEeoWYs+sA9kntZPKxHqEaDzXqO9jP1DXfNo8U00z/HdqgsrqXN1eO/1L4D71PU/RaZfZMK/nfRS9wUs9n9wMZ7wa2+UCd+Wic1OqU3YV80fLAAQsnSPU7ZuJ5idh4DYvRqokv5w973lQ8O4+o5G9Tbp0TBj/G5oIVqE/HYaiQanIiyLhY6VoYkembTfL7ZFiePykhL/QWb9TRxI33+Iu8NCbRIfz5XQkDBoB0Lc2qftAyKY4kx40y7jiPLq1rOKk3be6zPJqXtYgbm0NE+2KuTRVy2gz9TN8LuBwy7sMi/uXEWXEh/8KDOUydo6rFxZ2ykmOVAhR6DiOj1CUBJvL71x01tHQLf0RBCbvrJ37who7mAkd8vNIQ3bBySUOeNTxVCxSDe1Erkx7EJjPzlmTDC9Ec9dyXddjMFV29k8B/8tTEOGtrNgsUenIOLd862lYHsqQTRdpGgQrdvgPxy/OOIBf93fM5A8CLogbDNLqYnVn8p0K8wvkk9Xjh9zc5mB3yR3KS0G/wz2S4BsLQEUb73vTj4fUDPK/QhxI3t4mWAk6kla2A7taIRA1myCoMbsQamtmqHB1396m+aubITHLeswV50zQ5si/qTqGlGSo+N5FYfBnUt96W/Cidhsxj76xDvYRC2GYaKJanN1h1IZnFb2B2Y4lfP0vkdy/qF6bp7upA1rMZ7ilLOu8LuOjb', t: 'MTY3Njg2MTg1NS43MzUwMDA=', m: 'SOz4uar/pXrmGBuSQIbY9tjFx2G6zirpJv2NyzOPmIM=', i1: 'cIUbIpvI2YsvZRb//Yfdfw==', i2: 'oC3KkuH26ng8Gt/f/kKHDw==', zh: 
'wcWWf/+obaYUptPh30e4072sXWiLjlPsWQnQS/2QxMI=', uh: 'bllaG+Wp51WdmfI9k7pslxqw3F1/Neha3nrwdAjxueE=', hh: '+MXXTc/rARCfTxK8igcq3MtDXAltL4ou2PYE97G16x4=', } }; var trkjs = document.createElement('img'); trkjs.setAttribute('src', '/cdn-cgi/images/trace/managed/js/transparent.gif?ray=79c3f9c65b60e3b6'); trkjs.setAttribute('alt', ''); trkjs.setAttribute('style', 'display: none'); document.body.appendChild(trkjs); var cpo = document.createElement('script'); cpo.src = '/cdn-cgi/challenge-platform/h/g/orchestrate/managed/v1?ray=79c3f9c65b60e3b6'; window._cf_chl_opt.cOgUHash = location.hash === '' && location.href.indexOf('#') !== -1 ? '#' : location.hash; window._cf_chl_opt.cOgUQuery = location.search === '' && location.href.slice(0, location.href.length - window._cf_chl_opt.cOgUHash.length).indexOf('?') !== -1 ? '?' : location.search; if (window.history && window.history.replaceState) { var ogU = location.pathname + window._cf_chl_opt.cOgUQuery + window._cf_chl_opt.cOgUHash; history.replaceState(null, null, "\/btal11\/?__cf_chl_rt_tk=1lWKKYirqW.UmES1h5ANk6aNER4buUV9BvandjshiQU-1676861855-0-gaNycGzNCHs" + window._cf_chl_opt.cOgUHash); cpo.onload = function() { history.replaceState(null, null, ogU); }; } document.getElementsByTagName('head')[0].appendChild(cpo); }()); </script> <div class="footer" role="contentinfo"> <div class="footer-inner"> <div class="clearfix diagnostic-wrapper"> <div class="ray-id">Ray ID: <code>79c3f9c65b60e3b6</code></div> </div> <div class="text-center" id="footer-text">Performance &amp; security by <a rel="noopener noreferrer" href="https://www.cloudflare.com?utm_source=challenge&utm_campaign=m" target="_blank">Cloudflare</a></div> </div> </div> </body> </html> What am I missing over here? Is there any way to bypass this challenge?
1.2
1
1
With the help of @Tanaike, I discovered that, unfortunately, Google appends junk info to the User-Agent header, which ends up being blocked by Cloudflare. There doesn't seem to be a simple solution for now.
2023-02-20 03:34:50
1
python,html,selenium-webdriver
2
75,507,002
Get all contents in Python Selenium
75,505,051
true
55
Say that I have a piece of HTML code that looks like this: <html> <body> <thspan class="sentence">He</thspan> <thspan class="sentence">llo</thspan> </body> </html> And I wanted to get the content of both and connect them into a string in Python Selenium. My current code looks like this: from selenium import webdriver from selenium.webdriver.common.by import By browser = webdriver.Chrome() thspans = browser.find_elements(By.CLASS_NAME, "sentence") context = "" for thspan in thspans: context.join(thspan.text) The code can run without any problem, but the context variable doesn't contain anything. How can I get the content of both and connect them into a string in Python Selenium?
1.2
2
1
Use context += thspan.text instead of context.join(thspan.text), just like @Rajagopalan said. Strings are immutable, so context.join(...) returns a new string (with context used as the separator) that is never stored anywhere, while += rebinds context to the accumulated result.
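A small self-contained sketch of the difference (texts stands in for the thspan.text values from the question):

```python
texts = ["He", "llo"]          # stand-ins for the thspan.text values

# broken version: join() returns a value that is never kept
context = ""
for t in texts:
    context.join(t)            # result discarded; context is still ""
assert context == ""

# working version: accumulate with +=
context = ""
for t in texts:
    context += t
print(context)                 # Hello

# idiomatic one-liner
assert "".join(texts) == "Hello"
```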
2023-02-20 04:46:43
0
python-3.x,angr
1
75,948,699
how to get the unsigned int value of a claripy.ast.bv.BV object in angr?
75,505,364
false
72
import angr import claripy # create the angr project object proj = angr.Project('./angr_study/main', load_options={'auto_load_libs': False}) # set the function arguments add_addr = proj.loader.find_symbol('add').rebased_addr state = proj.factory.call_state(addr=add_addr) state.regs.rdi = claripy.BVV(1234,64) state.regs.rsi = claripy.BVV(1234,64) simgr = proj.factory.simgr(state) simgr.run() # deadended stores the state at the end of every branch if len(simgr.deadended) > 0: for state in simgr.deadended: print(state.regs.rax) else: print('Error') The return value of the add function is saved in rax, but the type of state.regs.rax is claripy.ast.bv.BV. I want to use the value of rax as an unsigned int. I do it with this code: ret_val = int(('%s'%state.regs.rax)[6:-1],16) This method works fine, but I don't think it's elegant. I want to know some other methods to convert state.regs.rax to a Python int value.
0
1
1
You can use state.regs.rax._model_concrete.value
2023-02-20 11:40:29
0
python-3.x,multithreading,tkinter
1
75,510,294
Sharing Variables between Thread for TKinter application
75,508,808
false
37
I am fairly new to coding and just started writing a Python application with TKinter which calls some Powershell scripts. I created an OOP program that has two threads when the Start button is pressed. The problem is that I will need to share variables between the two threads (one running the GUI to keep it responsive and another one running the function that calls the powershell scripts a certain number of times). Is there any ways that I could achieve this? Here below is my code: from tkinter import * from tkinter import ttk from tkinter import messagebox #import subprocess import datetime #import os import socket #import sys import threading root = Tk() root.geometry("400x350") root.title("GSS Reboot Controller V0.1") loops = 0 variable = False selected_power = StringVar() class GUI: def __init__(self, master): global selected_power # Frame for start button self.frame1 = Frame(root,padx=0,pady=5) self.frame1.pack(side="bottom",padx=5,pady=5) self.frame2 = LabelFrame(root, text="IP Address Power Switch", padx=5,pady=5) self.frame2.pack(side = "top",padx=5,pady=5) self.frame3 = LabelFrame(root, text="IP Adress of Simulator", padx=5, pady=5) self.frame3.pack(side="top") frame4 = LabelFrame(root, text="Power Switch to Select", padx=5,pady=5) frame4.pack(side="top",padx=5,pady=5) # Start button self.start = Button(self.frame1,text="Start", background="Green", font="Ariel 12", command=threading.Thread( target=self.powershell_scripts).start()) self.start.grid(column=0,row=1,padx=10) # Stop button self.stop = Button(self.frame1,text="Stop", font="Ariel 12",background="Red", command=self.stop_infinitescript) self.stop.grid(column=2,row=1,padx=10) # Generate report button self.Report = Button(self.frame1,text="Generate Report", font="Ariel 12",command=self.generate_report) self.Report.grid(column=1,row=1,padx=10) # Entry box for the IP Address of the Power Switch self.power_ip = Entry(self.frame2,width=20,font="Ariel 18") self.power_ip.grid(row=0,column=0,padx=5,pady=5) 
self.power_ip_set = Button(self.frame2,text="Set", font="Ariel 12", command=self.write_ip_power_file) self.power_ip_set.grid(row=0,column=1,padx=5,pady=5) # Entry box for the IP adress of the Windows side self.windows_ip = Entry(self.frame3,width=20,font="Ariel 18") self.windows_ip.grid(row=0,column=0,padx=5,pady=5) self.windows_ip_set = Button(self.frame3,text="Set", font="Ariel 12", command=self.write_ip_windows) self.windows_ip_set.grid(row=0,column=1,padx=5,pady=5) ################# ComboBox for controlling which powershell script to call ################# # Defining Tuple of variable options that will give the desired options self.switches = ("Power Switch 1", "Power Switch 2", "Power Switch 3", "Power Switch 4") # Defining the combobox that will have the selected_Power textvariable global variable self.Switch_selected = ttk.Combobox(frame4,width=20,font="Ariel 18",textvariable=selected_power) # Inserting the values in the Combobox self.Switch_selected["values"] = self.switches self.Switch_selected.grid(row=0,column=0,padx=5,pady=5) # Entry Box to get number of powercycles self.frame5 = Frame(root) self.frame5.pack(side="top",padx=5, pady=5) self.iteration_number = Label(self.frame5,text="Desired number of powercycles: ") self.iteration_number.grid(row=0, column=0) self.num_of_powercycles = IntVar() self.num_powercycles = Entry(self.frame5,width= 4,font="Ariel 18",textvariable=self.num_of_powercycles) self.num_powercycles.grid(row=0,column=1) self.infinite_button = Button(self.frame5,text='\u221e', font="Calibri 12",command=self.infinite_powershellcall) self.infinite_button.grid(row=0, column=2, padx=2) def write_ip_windows(self): self.IP_windows = self.windows_ip.get() # Check that the IP adress is valid try: socket.inet_aton(self.IP_windows) self.f = open(r"IP_GSS_Win.txt","w") self.f.write(self.IP_windows) self.f.close() self.l_win = Label(self.frame3,font="Ariel 18",text=u'\u2713', fg="Green") self.l_win.grid(row=0,column=2) except OSError: self.l_win = 
Label(self.frame3,font="Ariel 18",text=u'\u274C', fg="Red") self.l_win.grid(row=0,column=2) messagebox.showerror(title="IP adress not valid", message= f"""The entered IP power adress is {self.IP_windows}\n This is not valid! It should be in the format of nnn.nnn.nnn.nnn without spaces and with the points.""") # Functions to write IP address into text file def write_ip_power_file(self): self.IP_Power = self.power_ip.get() # Check that the IP adress is valid try: socket.inet_aton(self.IP_Power) self.f = open(r"IP_Power.txt","w") self.f.write(self.IP_Power) self.f.close() self.l_power = Label(self.frame2,font="Ariel 18",text=u'\u2713', fg="Green") self.l_power.grid(row=0,column=2) except OSError: self.l_power = Label(self.frame2,font="Ariel 18",text=u'\u274C', fg="Red") self.l_power.grid(row=0,column=2) messagebox.showerror(title="IP adress not valid", message= f"""The entered IP power adress is {self.IP_Power}\n This is not valid! It should be in the format of nnn.nnn.nnn.nnn without spaces and with the points.""") # Function to generate report.txt def generate_report(self): global loops self.f = open(r"C:\results.txt", "w+") self.f.write("The number of successfull reboots were:") self.f.write("".format(loops)) self.f.write("The time of the last event is:") self.f.write("".format(datetime.datetime.now())) self.f.close() # Function to stop the infinite function call def stop_infinitescript(self): global loops print(f"The script has done: {loops} amount of total power cycles") # "Infinite" function to run Powershell script def powershell_scripts(self): global loops global selected_power print("Starting Powershell Script!") print(selected_power.get()) # if selected_power.get() == "Power Switch 1": # for x in range(0,self.num_of_powercycles.get()): # subprocess.call(["powershell", ".\Script_Power_1.ps1"], stdout=sys.stdout) # loops += 1 # elif selected_power.get() == "Power Switch 2": # for x in range(0,self.num_of_powercycles.get()): # subprocess.call(["powershell", 
".\Script_Power_2.ps1"], stdout=sys.stdout) # elif selected_power.get() == "Power Switch 3": # for x in range(0,self.num_of_powercycles.get()): # subprocess.call(["powershell", ".\Script_Power_3.ps1"], stdout=sys.stdout) # elif selected_power.get() == "Power Switch 4": # for x in range(0,self.num_of_powercycles.get()): # subprocess.call(["powershell", ".\Script_Power_4.ps1"], stdout=sys.stdout) # else: # messagebox.showerror(title="No power switch selected", # message="""Please select a power switch from the dropdown.\n # Otherwise the utility does not know which unit to power off.""") # return # print("Reboots completed: ",loops) # Inifinite Powershell Call def infinite_powershellcall(self): pass #Creating an instance of the GUI Class GUI(root) # Entering the infinite loop root.mainloop()``` Thank you in advance for any precious help! P.S. Any advice to improve this program are more than welcome :) In the Powershell_script function, I am trying to call an object from the GUI, but I get that this variable does not exist as this function is called on a different thread to stop the GUI from freezing whilst the PowerShell scripts are running.
0
1
1
As mentioned by Matteo Gala, command=threading.Thread(target=self.powershell_scripts).start() will execute powershell_scripts on initialization of the button. To remedy this, define it as a lambda function, like so: command=lambda: threading.Thread(target=self.powershell_scripts).start(). Also, it is bad practice to import * from a package, because you could end up using an unexpected method. It is best to import only the names you need from a given package, in your case from tkinter import Frame, Label, LabelFrame, Entry, Button, IntVar (note that Combobox comes from tkinter.ttk, and the name is IntVar, not Intvar). This may seem like a pain, but it will reduce the likelihood of you calling an unexpected method and will make your application considerably lighter, specifically if you choose to compile it with PyInstaller or Buildozer.
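The difference between the two bindings can be demonstrated without any GUI at all; Widget below is a hypothetical stand-in for tkinter.Button that merely stores its command:

```python
log = []

def powershell_scripts():
    log.append("ran")
    return "result"

class Widget:
    """Hypothetical stand-in for tkinter.Button: just stores the callback."""
    def __init__(self, command):
        self.command = command

# Broken: powershell_scripts() is CALLED while the widget is being built,
# so it runs once immediately and Widget stores its return value, not a
# callable.  (With threading.Thread(...).start() the stored value is None.)
w1 = Widget(command=powershell_scripts())
assert log == ["ran"] and not callable(w1.command)

# Fixed: the lambda defers the call until the widget actually invokes it.
w2 = Widget(command=lambda: powershell_scripts())
assert log == ["ran"]      # nothing extra has run yet
w2.command()               # simulate a button click
assert log == ["ran", "ran"]
```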
2023-02-20 12:01:55
2
python,pytorch,resnet
1
75,509,732
forwarding residual block layer by layer, result is wrong
75,509,008
true
37
I wrote PyTorch code with a residual connection as follows: all_module = [] for i in range(3): layer = nn.Sequential( nn.Conv1d(n_hidden_channels, n_hidden_channels), nn.LeakyReLU(), nn.Conv1d(n_hidden_channels, n_hidden_channels), nn.LeakyReLU() ) all_module.append(layer) module_list = nn.ModuleList(all_module) # method 1 for layer in module_list: x = x + layer(x) print(x) # method 2 for layer in module_list: y = torch.clone(x) for m in layer: y = m(y) x = x + y print(x) Why is the output of methods 1 and 2 different? I have no idea why this happens.
1.2
1
1
Both methods do the same thing. If you run the two methods sequentially, the first method will update the x; therefore, you will have a different result when you run the second method. If you copy the x before the first method, you will see both methods will create the same results.
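The sequencing effect is easy to reproduce with plain numbers; layer below is an arbitrary deterministic stand-in for a residual block:

```python
def layer(v):
    return v * 2            # arbitrary deterministic stand-in for layer(x)

x = 1

# "method 1": x is rebound after each residual step
m1 = x
for _ in range(3):
    m1 = m1 + layer(m1)     # 1 -> 3 -> 9 -> 27

# "method 2" started from the SAME original x: identical result
m2 = x
for _ in range(3):
    m2 = m2 + layer(m2)
assert m1 == m2 == 27

# but "method 2" started from the x that method 1 already updated
# (what happens when the two loops run back to back) diverges:
m3 = m1
for _ in range(3):
    m3 = m3 + layer(m3)     # 27 -> 81 -> 243 -> 729
assert m3 != m1
```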
2023-02-20 16:19:24
0
python-3.x,module,networkx,graphviz,pygraphviz
1
75,517,171
I can't install the PyGraphviz module in python 3.9.x
75,511,766
false
122
I want to use graph theory in one of my projects; that's why I did some research and found two modules pretty useful for what I want to do. I found the networkx module that allows me to create some graphs, but I want to create a visualisation of them, so that I will be able to see the graphs. I found another module that does it for me, and its name is Graphviz. The problem is, when I want to run a simple script in Python to create a graph and visualize it, it says I have to install the PyGraphviz module. After several attempts, I didn't succeed in installing it. Here is the Python script I want to run: import networkx as nx from networkx.drawing.nx_agraph import write_dot import os # Create an Escape Game graph with NetworkX G = nx.Graph() G.add_edge("salle1", "salle2") G.add_edge("salle2", "salle3") G.add_edge("salle3", "salle1") # Export the graph to a DOT file write_dot(G, "escape_game.dot") # Use Graphviz to generate an image of the graph os.system("dot -Tpng escape_game.dot -o escape_game.png") And here is the error I get: Traceback (most recent call last): File "C:\Users\HP\AppData\Local\Programs\Python\Python39\lib\site-packages\networkx\drawing\nx_agraph.py", line 133, in to_agraph import pygraphviz ModuleNotFoundError: No module named 'pygraphviz' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "c:\Users\HP\Desktop\Cours_BAC_3\Projet_individuel\Juin\Projet FORM-ESC\Eg en théorie des graphes\chatgpt.py", line 12, in <module> write_dot(G, "escape_game.dot") File "C:\Users\HP\AppData\Local\Programs\Python\Python39\lib\site-packages\networkx\drawing\nx_agraph.py", line 194, in write_dot A = to_agraph(G) File "C:\Users\HP\AppData\Local\Programs\Python\Python39\lib\site-packages\networkx\drawing\nx_agraph.py", line 135, in to_agraph raise ImportError( ImportError: requires pygraphviz http://pygraphviz.github.io/ Then, when I want to
install the PyGraphviz module with this command : pip install pygraphviz Here is the result : Collecting pygraphviz Using cached pygraphviz-1.10.zip (120 kB) Preparing metadata (setup.py) ... done Building wheels for collected packages: pygraphviz Building wheel for pygraphviz (setup.py) ... error error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─\> \[48 lines of output\] running bdist_wheel running build running build_py creating build creating build\\lib.win-amd64-cpython-39 creating build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\agraph.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\graphviz.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\scraper.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\testing.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\__init_\_.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz creating build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_attribute_defaults.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_clear.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_close.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_drawing.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_edge_attributes.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_graph.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_html.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_layout.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_node_attributes.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_readwrite.py -\> 
build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_repr_mimebundle.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_scraper.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_string.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_subgraph.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_unicode.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\__init_\_.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests running egg_info writing pygraphviz.egg-info\\PKG-INFO writing dependency_links to pygraphviz.egg-info\\dependency_links.txt writing top-level names to pygraphviz.egg-info\\top_level.txt reading manifest file 'pygraphviz.egg-info\\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.png' under directory 'doc' warning: no files found matching '*.txt' under directory 'doc' warning: no files found matching '*.css' under directory 'doc' warning: no previously-included files matching '*\~' found anywhere in distribution warning: no previously-included files matching '\*.pyc' found anywhere in distribution warning: no previously-included files matching '.svn' found anywhere in distribution no previously-included directories found matching 'doc\\build' adding license file 'LICENSE' writing manifest file 'pygraphviz.egg-info\\SOURCES.txt' copying pygraphviz\\graphviz.i -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\graphviz_wrap.c -\> build\\lib.win-amd64-cpython-39\\pygraphviz running build_ext building 'pygraphviz.\_graphviz' extension error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ \[end of output\] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for pygraphviz Running setup.py clean for pygraphviz Failed to build pygraphviz Installing collected packages: pygraphviz Running setup.py install for pygraphviz ... error error: subprocess-exited-with-error × Running setup.py install for pygraphviz did not run successfully. │ exit code: 1 ╰─\> \[50 lines of output\] running install C:\\Users\\HP\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\setuptools\\command\\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. warnings.warn( running build running build_py creating build creating build\\lib.win-amd64-cpython-39 creating build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\agraph.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\graphviz.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\scraper.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\\testing.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz copying pygraphviz\__init_\_.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz creating build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_attribute_defaults.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_clear.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_close.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_drawing.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_edge_attributes.py -\> build\\lib.win-amd64-cpython-39\\pygraphviz\\tests copying pygraphviz\\tests\\test_graph.py -\> 
build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_html.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_layout.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_node_attributes.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_readwrite.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_repr_mimebundle.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_scraper.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_string.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_subgraph.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\test_unicode.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests copying pygraphviz\tests\__init__.py -> build\lib.win-amd64-cpython-39\pygraphviz\tests running egg_info writing pygraphviz.egg-info\PKG-INFO writing dependency_links to pygraphviz.egg-info\dependency_links.txt writing top-level names to pygraphviz.egg-info\top_level.txt reading manifest file 'pygraphviz.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.png' under directory 'doc' warning: no files found matching '*.txt' under directory 'doc' warning: no files found matching '*.css' under directory 'doc' warning: no previously-included files matching '*~' found anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '.svn' found anywhere in distribution no previously-included directories found matching 'doc\build' adding license file 'LICENSE' writing manifest file 'pygraphviz.egg-info\SOURCES.txt' copying pygraphviz\graphviz.i -> build\lib.win-amd64-cpython-39\pygraphviz
copying pygraphviz\graphviz_wrap.c -> build\lib.win-amd64-cpython-39\pygraphviz running build_ext building 'pygraphviz._graphviz' extension error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure × Encountered error while trying to install package. ╰─> pygraphviz note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure. If someone can help me with this, or if you have another Python module that does the job, I'll take it. Thanks!
0
1
1
I'm successfully using import graphviz in my project, but note that graphviz is a different package from pygraphviz. For it to actually work, the Graphviz application itself must also be installed on your system.
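If you do need pygraphviz itself rather than the graphviz package, the error above means pip is trying to compile its C extension and cannot find a compiler. Two common workarounds, sketched below (untested on your specific machine): install a prebuilt binary from conda-forge, or install the compiler toolchain the error message points to and retry.

```shell
# Option 1: install a prebuilt pygraphviz binary from conda-forge
# (avoids compiling the C extension locally)
conda install -c conda-forge pygraphviz

# Option 2: install "Microsoft C++ Build Tools" from
# https://visualstudio.microsoft.com/visual-cpp-build-tools/
# (the URL given in the error output), then retry:
pip install pygraphviz
```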
2023-02-20 17:39:00
3
jupyter-notebook,google-colaboratory,ipython
1
76,144,376
IPython: Render HTML text in Colab if a condition is met
75,512,525
false
119
I want to be able to render some HTML text in Colab if a condition is met but it does not work. I have imported HTML from IPython.display and this works... HTML('<h1>Hello, World</h1>') But what I really want is this... if True: HTML('<h1>Hello, World</h1>') I have also tried... if True: IPython.display.display_html('<h1>Hello, World</h1>') But it didn't work. Thanks in advance for your help.
0.53705
1
1
You should call display() explicitly to render the HTML, e.g. display(HTML('<h1>Hello, World</h1>')). A bare HTML(...) expression is only auto-rendered when it is the last expression of a cell, which is why it does nothing inside an if block.
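A minimal sketch of the pattern. Hedged: the display/HTML calls assume an IPython environment such as Colab, so the helper below only returns the markup that would be rendered, which lets it run (and be tested) anywhere; the notebook-side call is shown in the docstring.

```python
def html_if(condition, markup):
    """Return the HTML markup to render when `condition` holds, else None.

    In a Colab/Jupyter cell you would render it explicitly with:
        from IPython.display import display, HTML
        if condition:
            display(HTML(markup))
    """
    return markup if condition else None

# The heading is produced only when the condition is met.
print(html_if(True, "<h1>Hello, World</h1>"))   # <h1>Hello, World</h1>
print(html_if(False, "<h1>Hello, World</h1>"))  # None
```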
2023-02-20 18:35:04
0
python,gettext
1
75,649,833
Automatically switch Python gettext strings with their translations from a .po file
75,512,982
false
38
I have a Python/Django code base in which strings are encapsulated in gettext calls. For legacy reasons, the strings in the Python files have been written in French and the English translations are inside a .po file. I now wish to make sure the Python files are in English, strings included. I would like to automatically switch the strings so that the English translations from the .po file end up in the Python files (instead of the French strings), while adding the French strings to a new .po file (matching the new "original" English string). Since I have a lot of strings, doing this manually would be extremely tedious. Is there any tool or library that could facilitate this process?
0
1
1
There is no such tool. If you don't want to write one yourself, Poedit or Emacs can help: both can jump from a PO entry to its source-code line, which makes the copying and pasting a little faster.
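If you do end up scripting it, the core transformation is mechanical: each entry's English msgstr becomes the new source string for the .py files, and the old French msgid becomes the msgstr of a new entry keyed by the English text. A minimal standard-library sketch of that swap follows; real .po files have plurals, escapes, comments, and multi-line strings, so a proper parser such as polib would be the safer choice — this handles only simple single-line entries.

```python
import re

def swap_po_entries(po_text):
    """Swap msgid/msgstr in simple single-line PO entries.

    Returns (new_po_text, mapping), where mapping is
    {french_source: english_translation} for rewriting the .py files.
    """
    pairs = re.findall(r'msgid "(.+?)"\nmsgstr "(.+?)"', po_text)
    mapping = dict(pairs)  # French source -> English translation
    swapped = "\n\n".join(
        f'msgid "{en}"\nmsgstr "{fr}"' for fr, en in pairs
    )
    return swapped, mapping

po = 'msgid "Bonjour"\nmsgstr "Hello"\n\nmsgid "Au revoir"\nmsgstr "Goodbye"'
new_po, mapping = swap_po_entries(po)
# mapping can then drive a search-and-replace over the gettext calls, e.g.:
#   source = source.replace(f'_("{fr}")', f'_("{en}")')
```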