CreationDate | Users Score | Tags | AnswerCount | A_Id | Title | Q_Id | is_accepted | ViewCount | Question | Score | Q_Score | Available Count | Answer |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2023-04-26 13:01:03 | 2 | python,api,fastapi | 1 | 76,114,810 | Can I send a DELETE API request directly in the browser URL? | 76,111,063 | false | 49 | I am using FastAPI, and this DELETE endpoint exists. For my hosted API, or on localhost, can I send a DELETE request directly in the browser via a URL like http://api/delete/2?
It gives me a "method not allowed" response, and the same happens for the update and PUT APIs as well.
Can I do it directly this way or not?
I was trying to send this DELETE API request through the URL, but the JSON response says "method not allowed". | 0.379949 | 1 | 1 | By default, the browser sends API requests as GET requests. To execute any other type of request, use curl or Postman. |
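Outside the browser you can issue any HTTP method. A minimal sketch in Python using only the standard library (the URL below is a placeholder; building the Request object does not contact any server):

```python
import urllib.request

# Hypothetical endpoint; nothing is sent until urlopen() is called.
req = urllib.request.Request("http://localhost:8000/delete/2", method="DELETE")
print(req.get_method())  # DELETE
```

The curl equivalent would be curl -X DELETE http://localhost:8000/delete/2.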
2023-04-26 13:54:49 | 2 | python,algorithm,task,python-itertools | 1 | 76,111,841 | The task of using itertools | 76,111,634 | false | 52 | Put + signs between some digits of 12345678910111213...N so that the sum equals M.
The input is two positive integers N and M.
Output: valid expressions, one per line.
Examples:
Input: 5 15.
Output: 1+2+3+4+5=15
Input: 4 46.
Output: 12+34=46
Input: 15 1117614.
Output: 12+3+4567+891+01112131+4+1+5
But the third example won't work with my code, so please tell me what to do.
from itertools import product

def find_expression(n, m):
    nums = [str(i) for i in range(1, n+1)]
    for p in product(['', '+'], repeat=n-1):
        expr = ''
        for i in range(n-1):
            expr += nums[i] + p[i]
        expr += nums[n-1]
        if eval(expr) == m:
            return expr
    return None

n = 15
m = 1117614
expression = find_expression(n, m)
if expression:
    print(expression + '=' + str(m))
else:
    print('No valid expression found.')

print(find_expression(5, 15))  # Output: 1+2+3+4+5
print(find_expression(4, 46))  # Output: 12+34
print(find_expression(15, 1117614))  # Expected: 12+3+4567+891+01112131+4+1+5, but returns None | 0.379949 | 1 | 1 | I believe the issue is that the answer requires 10, 14, and 15 to be split, but your code only allows a + to be inserted on either side of a whole number.
You'll need to change your nums variable to something like ''.join([str(n) for n in range(1,n+1)]) and change your for loop to iterate over the length of the whole string rather than over the count of numbers (i.e., 1 to 15 concatenates to 21 characters, not 15). |
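A sketch of that fix: iterate over the split points of the concatenated digit string, and sum with int() rather than eval(), since eval() in Python 3 rejects terms with leading zeros such as 01112131:

```python
from itertools import product

def find_expression(n, m):
    digits = "".join(str(i) for i in range(1, n + 1))
    # Try every way of inserting '+' between adjacent characters.
    for seps in product(["", "+"], repeat=len(digits) - 1):
        parts, current = [], digits[0]
        for sep, ch in zip(seps, digits[1:]):
            if sep == "+":
                parts.append(current)
                current = ch
            else:
                current += ch
        parts.append(current)
        # int() tolerates leading zeros, unlike eval() in Python 3.
        if sum(int(p) for p in parts) == m:
            return "+".join(parts)
    return None

print(find_expression(5, 15))  # 1+2+3+4+5
print(find_expression(4, 46))  # 12+34
```

For n=15 this searches 2^20 splits, so the third example takes noticeably longer but is now reachable.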
2023-04-26 17:48:59 | 1 | python,ethereum,blockchain,metamask | 1 | 76,117,493 | Python created ETH address doesn't match Metamask | 76,113,684 | false | 48 | I am working on a project working with the Ethereum blockchain.
Part of the project requires access to my own ETH wallet address. However, when I run a check on my code, I don't generate my own address that I use for Metamask. Any help would be appreciated.
from eth_account import Account
acct = Account.create('<my 12 seed words>')
acct.address
The address in acct.address is not mine. I have copied my seed words exactly (typed and pasted from the clipboard), and neither attempt returns my wallet address.
I tried to create my ETH wallet address in Python using the eth_account library and was unsuccessful | 0.197375 | 1 | 1 | Use Account.from_mnemonic(mnemonic) instead, which uses the same derivation format as MetaMask (BIP-39).
Account.create() does not take a mnemonic as an argument; its argument is just extra randomness mixed into key generation. |
2023-04-26 21:48:41 | 0 | python,html,css | 1 | 76,115,432 | Is it possible to do live CSS changes on an HTML page via Python? | 76,115,349 | true | 52 | I'm pretty new to Python and not sure where to start searching. For a small project I'm trying to figure out whether one can manipulate CSS via Python so that dynamic changes are visible in a browser without refreshing the page. Say you want to change the background of some DIV from green to red depending on a recurring Python expression/event after the page has been loaded, like a status indicator that changes over time.
Update: The project is a simple network scanner. As pings have to be handled asynchronously, I have to find a way to update the online status of a host in a table after the page has been loaded. It's an HTML page hosted on my Mac, which I open in a tool that displays it as a desktop wallpaper. | 1.2 | 1 | 1 | Short answer? Sort of.
What you see in a browser is some application (Chrome, Firefox, Safari), parsing an html file that may contain various text, tags, js, css, etc...and making the determination of how to render that content. In order for the content to update/change, the html must either re-load or have some javascript running that directly changes the Document Object Model (DOM). But currently, there is no mechanism for running python inside of a web page.
Now, that being said, what you could do is have some Javascript that listened or polled a server written in Python that emitted content/messages/events that your JS framework (in the webpage) responded to and would make the appropriate changes in your DOM. It's not uncommon and that's how a lot of web pages do what they do: Some 'Backend', could be Python or C# or even JS, that responds to requests from a 'Frontend' e.g. your webpage or browser, and sends content, sometimes as JSON instead of HTML. |
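A minimal sketch of that backend half in Python, using only the standard library (the host address and payload are made up). The page's JavaScript would poll an endpoint like this with fetch() on a setInterval and set the DIV's class from the returned JSON:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """Answers the page's polling request with the current host status."""
    def do_GET(self):
        payload = json.dumps({"host": "192.168.1.10", "online": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the console quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for the browser's fetch() call.
status = json.loads(urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/status").read())
print(status)
server.shutdown()
```

In the real page, the ping results would replace the hard-coded payload.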
2023-04-27 09:29:07 | 1 | python,dataframe,group-by | 1 | 76,118,939 | Python explain groupby | 76,118,834 | true | 48 | I'm working on a data-mining analysis. In it, the groupby function is used like this:
df.groupby('tshirts')['id'].count()
What does ['id'] really do? I understand that the function is grouping by tshirts, but I don't know what the brackets do.
Can you explain it for me, please? An example would be appreciated.
Best regards.
PS: df is a DataFrame. | 1.2 | 1 | 1 | In the square brackets after groupby() you place the column names you want to apply the following function to (in your case count()). So in your example, it groups by tshirts and then counts the non-null values of the 'id' column within each group.
If your code were something like df.groupby(['tshirts'])[['id', 'size']].count() then it would group by tshirts and apply the count() function to both the id and size columns.
Generally the basic template goes like this: df.groupby([list of cols to group by])[list of cols to apply function to].function()
If you want a different function for each column, try df.groupby([...]).agg({'col1': 'count', 'col2': 'sum'}) |
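A small illustration of both forms on toy data (the column values are made up):

```python
import pandas as pd

# Toy data standing in for the question's DataFrame.
df = pd.DataFrame({
    "tshirts": ["red", "red", "blue"],
    "id":      [1, 2, 3],
    "size":    ["S", "M", "S"],
})

# Group by tshirts, then count the non-null 'id' values per group.
counts = df.groupby("tshirts")["id"].count()
print(counts)
# blue    1
# red     2

# A different aggregation per column takes a dict (note the braces).
summary = df.groupby("tshirts").agg({"id": "count", "size": "nunique"})
print(summary)
```

The dict form lets count and nunique run side by side in one pass.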
2023-04-27 14:59:33 | 1 | python,postgresql,airflow | 1 | 76,159,580 | 'datetime.datetime' object has no attribute '__module__' when returning results from Postgres hook in Airflow | 76,121,894 | false | 175 | Given the following DAG definition:
from airflow.hooks.postgres_hook import PostgresHook
from airflow.decorators import dag, task
from airflow.utils.dates import days_ago

default_args = {
    'owner': 'airflow',
}

def get_data():
    sql_stmt = "SELECT * FROM table"
    pg_hook = PostgresHook(
        postgres_conn_id='postgres_connection',
        schema='postgres'
    )
    pg_conn = pg_hook.get_conn()
    cursor = pg_conn.cursor()
    cursor.execute(sql_stmt)
    return cursor.fetchall()

@dag(default_args=default_args, schedule_interval=None, start_date=days_ago(2), tags=['etl'])
def etl():
    @task()
    def extract():
        return get_data()
    data = extract()

etl_dag = etl()
When run testing the task (airflow tasks test etl extract) the following error is returned:
Traceback (most recent call last):
File "<path>/Airflow/venv/bin/airflow", line 8, in <module>
sys.exit(main())
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 52, in command
return func(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/cli.py", line 108, in wrapper
return f(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/cli/commands/task_command.py", line 576, in task_test
ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1677, in run
self._run_raw_task(
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1383, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1529, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1596, in _execute_task
self.xcom_push(key=XCOM_RETURN_KEY, value=xcom_value, session=session)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2298, in xcom_push
XCom.set(
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/session.py", line 72, in wrapper
return func(*args, **kwargs)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/xcom.py", line 234, in set
value = cls.serialize_value(
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/models/xcom.py", line 627, in serialize_value
return json.dumps(value, cls=XComEncoder).encode("UTF-8")
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/json.py", line 176, in encode
return super().encode(o)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "<path>/Airflow/venv/lib/python3.9/site-packages/airflow/utils/json.py", line 153, in default
CLASSNAME: o.__module__ + "." + o.__class__.__qualname__,
AttributeError: 'datetime.datetime' object has no attribute '__module__'
The error is being triggered by a timestamp field in the database table. Specifying the non-timestamp fields behaves as expected.
Is there a way of changing the type of the field in the returned data, or having XCom parse the field correctly? | 0.197375 | 1 | 1 | I had the same problem. I'm using Docker to run Airflow, and I got this error with the image apache/airflow:2.5.3-python3.10; I solved it by migrating to apache/airflow:2.6.0-python3.10. From what I read, before Python 3.7 datetime had a __module__ attribute giving the name of the module in which it was defined; however, as of Python 3.7 that attribute is no longer present, and the error occurs when Airflow's XCom JSON encoder tries to access it. |
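If upgrading is not an option, one common workaround (a sketch, not an Airflow-specific API) is to convert values the JSON encoder cannot handle, such as datetime, into strings before returning them from the task, so XCom never sees a raw datetime:

```python
from datetime import datetime

def serialize_rows(rows):
    # Make cursor.fetchall() output JSON-friendly before it is pushed to XCom.
    return [
        [v.isoformat() if isinstance(v, datetime) else v for v in row]
        for row in rows
    ]

rows = [(1, datetime(2023, 4, 27, 15, 3, 9)), (2, "ok")]
print(serialize_rows(rows))  # [[1, '2023-04-27T15:03:09'], [2, 'ok']]
```

In get_data() this would wrap the cursor.fetchall() result, i.e. return serialize_rows(cursor.fetchall()).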
2023-04-27 15:03:09 | 0 | python,pytube | 3 | 76,255,059 | can't retrieve youtube title using yt.title from pytube library | 76,121,926 | false | 589 | After running this:
from pytube import YouTube
yt = YouTube('http://youtube.com/watch?v=2lAe1cqCOXo')
print(yt.title)
python version 3.11.3
I receive this error:
Traceback (most recent call last):
File "C:\Users\jay\PycharmProjects\pythonProject\youtube.py", line 4, in <module>
print(yt.title)
^^^^^^^^
File "C:\Users\jay\PycharmProjects\pythonProject\venv\Lib\site-packages\pytube\__main__.py", line 346, in title
raise exceptions.PytubeError(
pytube.exceptions.PytubeError: Exception while accessing title of https://youtube.com/watch?v=2lAe1cqCOXo.
I was expecting to receive a title | 0 | 2 | 2 | Try going to pytube/__main__.py and, at line 244, change client = 'ANDROID' to client = 'WEB'.
This worked for me. |
2023-04-27 15:03:09 | 0 | python,pytube | 3 | 76,492,106 | can't retrieve youtube title using yt.title from pytube library | 76,121,926 | true | 589 | After running this:
from pytube import YouTube
yt = YouTube('http://youtube.com/watch?v=2lAe1cqCOXo')
print(yt.title)
python version 3.11.3
I receive this error:
Traceback (most recent call last):
File "C:\Users\jay\PycharmProjects\pythonProject\youtube.py", line 4, in <module>
print(yt.title)
^^^^^^^^
File "C:\Users\jay\PycharmProjects\pythonProject\venv\Lib\site-packages\pytube\__main__.py", line 346, in title
raise exceptions.PytubeError(
pytube.exceptions.PytubeError: Exception while accessing title of https://youtube.com/watch?v=2lAe1cqCOXo.
I was expecting to receive a title | 1.2 | 2 | 2 | I started using the yt-dlp library instead. |
2023-04-27 21:04:54 | 0 | python,selenium-webdriver | 1 | 76,124,689 | Click on a text with role button | 76,124,661 | false | 27 | I am trying to scrape some website which includes "Load more" text. Down you can see the whole html code.
I tried many solutions but nothing works.
I am using Python and Selenium webdriver. Could you please help me to solve this problem?
HTML code:
<td class="MuiTableCell-root-565 MuiTableCell-body-567 MuiTableCell-paddingNone-571" colspan="5">
<span class="MuiButtonBase-root-2050 MuiButton-root-2023 MuiButton-text-2025 MuiButton-textPrimary-2026" tabindex="0" role="button" aria-disabled="false" style="width:100%;padding:16px;text-align:center">
<span class="MuiButton-label-2024"><span>Load mor...</span>
</span>
<span class="MuiTouchRipple-root-2051"></span>
</span>
</td> | 0 | 1 | 1 | Install Selenium IDE using Chrome web browser
Press Record
Click about
The code will be generated for you.
Should give you what you need as a starter for 10 |
2023-04-27 21:32:32 | 0 | python,pandas,dataframe | 4 | 76,124,849 | How to take the cumulative maximum of a column based on another column | 76,124,814 | false | 36 | I have a DataFrame like this:
import pandas as pd
import numpy as np
df = pd.DataFrame({
    "realization_id": np.repeat([0, 1], 6),
    "sample_size": np.tile([0, 1, 2], 4),
    "num_obs": np.tile(np.repeat([25, 100], 3), 2),
    "accuracy": [0.8, 0.7, 0.8, 0.6, 0.7, 0.5, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7],
    "prob": [0.94, 0.96, 0.95, 0.98, 0.93, 0.92, 0.90, 0.92, 0.95, 0.9, 0.91, 0.92]
})
df["accum_max_prob"] = df.groupby(["realization_id", "num_obs"])["prob"].cummax()
And I want to know how to create a column with this output:
df["desired_accuracy"] = [0.8, 0.7, 0.7, 0.6, 0.6, 0.6, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7]
Each entry of desired_accuracy equals the accuracy value that corresponds to the row where the highest prob has been achieved so far by group (that is why I create accum_max_prob).
So, for example: the first value is 0.8 because there is no data prior to that, but then the next one is 0.7 because the prob of the second row is greater than the first. The third value stays the same, because the third prob is lower than the second one, so it does not update desired_accuracy. For each pair (realization_id, num_obs) the criteria resets.
How can I do it in a vectorized fashion using Pandas? | 0 | 2 | 1 | Do it like this:
df['desired_accuracy'] = df['accuracy'].where(df['accum_max_prob'].eq(df['prob'])).ffill() |
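Running the suggested one-liner on the question's data reproduces desired_accuracy. It works group-wise because the first row of every group always equals its own cumulative max, so the forward-fill cannot leak a value across a group boundary:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "realization_id": np.repeat([0, 1], 6),
    "sample_size": np.tile([0, 1, 2], 4),
    "num_obs": np.tile(np.repeat([25, 100], 3), 2),
    "accuracy": [0.8, 0.7, 0.8, 0.6, 0.7, 0.5, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7],
    "prob": [0.94, 0.96, 0.95, 0.98, 0.93, 0.92, 0.90, 0.92, 0.95, 0.9, 0.91, 0.92],
})
df["accum_max_prob"] = df.groupby(["realization_id", "num_obs"])["prob"].cummax()

# Keep accuracy only where a new group-wise max was set, then forward-fill.
df["desired_accuracy"] = df["accuracy"].where(df["accum_max_prob"].eq(df["prob"])).ffill()
print(df["desired_accuracy"].tolist())
# [0.8, 0.7, 0.7, 0.6, 0.6, 0.6, 0.6, 0.7, 0.8, 0.7, 0.9, 0.7]
```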
2023-04-28 00:41:07 | 0 | python,image,tkinter,canvas,tkinter-canvas | 1 | 76,125,558 | I need to put an image on the canvas in Tkinter; here's the code, but errors occur | 76,125,515 | true | 45 | c = tkinter.Canvas(width=275,height=300)
so_ramdom = tkinter.PhotoImage("/Users/vikranthracherla/Downloads/download.jpeg")
c.create_image(image=so_ramdom)
the error
Traceback (most recent call last):
File "d27.py", line 33, in <module>
c.create_image(image=so_ramdom)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 2325, in create_image
return self._create('image', args, kw)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 2309, in _create
cnf = args[-1]
IndexError: tuple index out of range | 1.2 | 1 | 1 | You forgot to add coordinates x and y
c.create_image(x, y, ..... Also noticed you are not passing in any coordinates onto. |
2023-04-28 10:05:45 | 2 | android,python-3.x,beeware | 1 | 76,171,333 | How to solve Invalid keystore format while building beeware android app | 76,128,496 | false | 106 | Tried to build an Android app using BeeWare on Windows.
Log file:
BriefcaseCommandError: Error while building project.
[12:18:50] console.py:305
>>> Extra information: console.py:306
subprocess.py:675
>>> Running Command: subprocess.py:676
>>> 'C:\Users\user\AppData\Local\BeeWare\briefcase\Cache\tools\android_sdk\cmdline-tools\latest\bin\sdkmanager.bat' --list_installed subprocess.py:677
>>> Working Directory: subprocess.py:684
>>> D:\Code\Python\BeWare\test\beware\helloworld subprocess.py:685
>>> Environment Overrides: subprocess.py:694
>>> ANDROID_SDK_ROOT=C:\Users\user\AppData\Local\BeeWare\briefcase\Cache\tools\android_sdk subprocess.py:696
>>> JAVA_HOME=C:\Users\user\AppData\Local\BeeWare\briefcase\Cache\tools\java subprocess.py:696
>>> Command Output: subprocess.py:701
>>> Loading package information... subprocess.py:703
>>> Loading local repository... subprocess.py:703
>>> [========= ] 25% Loading local repository... subprocess.py:703
>>> [========= ] 25% Fetch remote repository... subprocess.py:703
>>> [=======================================] 100% Fetch remote repository... subprocess.py:703
>>> Installed packages: subprocess.py:703
>>> Path | Version | Description | Location subprocess.py:703
>>> ------- | ------- | ------- | ------- subprocess.py:703
>>> build-tools;30.0.2 | 30.0.2 | Android SDK Build-Tools 30.0.2 | build-tools\30.0.2 subprocess.py:703
>>> emulator | 32.1.12 | Android Emulator | emulator subprocess.py:703
>>> patcher;v4 | 1 | SDK Patch Applier v4 | patcher\v4 subprocess.py:703
>>> platform-tools | 34.0.1 | Android SDK Platform-Tools | platform-tools subprocess.py:703
>>> platforms;android-33 | 2 | Android SDK Platform 33 | platforms\android-33 subprocess.py:703
>>> subprocess.py:703
>>> Return code: 0 subprocess.py:712
FAILURE: Build failed with an exception.
Console log:
* What went wrong:
Execution failed for task ':app:packageDebug'.
> A failure occurred while executing com.android.build.gradle.tasks.PackageAndroidArtifact$IncrementalSplitterRunnable
> com.android.ide.common.signing.KeytoolException: Failed to read key AndroidDebugKey from store "C:\Users\user\.android\debug.keystore": Invalid keystore format
just run briefcase build android | 0.379949 | 1 | 1 | You can take these steps:
Close Android Studio and any other applications that might be using the keystore.
Open the directory C:\Users\<username>\.android in Windows Explorer.
Locate the file "debug.keystore" and delete it. If you encounter an error trying to delete the file, you can try opening the Task Manager and ending any Java processes that might be using the file.
Run the build command again using briefcase build android.
After these steps, the build should succeed. Deleting the "debug.keystore" file and letting Android Studio regenerate a new one can often resolve issues related to invalid keystore formats. |
2023-04-28 13:40:51 | 0 | python,python-3.x,pyspark,apache-spark-sql,aws-s3-client | 1 | 76,203,924 | Unable to zip a .txt file using pyspark | 76,130,175 | false | 29 | I am trying to generate a file for a downstream consumer, for which we fetch data from an oracle table, and try to write a .txt.gz file into AWS S3.
The idea is as follows -
Generate multiple csv files, which get written into a single .txt file.
Zip this .txt file to produce a .txt.gz file to send to the consumer.
I am able to get through step 1, but not able to figure out step 2.
def execute_script(environment: str, logger, input_json_path: str = '', isdebug: bool = False):
    print('Started')
    config = ConfigurationManager(env=environment).oraclerds_conf
    sqlSelect = get_bcbsa_value_based_pgms_df(config)
    logger.info('sqlSelect df created.')
    sqlSelect.persist().count()
    s3_bucket_name = config.get('tgt_bucket_name')
    logger.info("total records : {}".format(sqlSelect.count()))
    folder_name = f'{CONTEXT_NAME}/{STRDATE}/'
    s3_bucket_name = config.get('tgt_bucket_name')
    s3_path = f's3://{s3_bucket_name}/{FOLDER_PATH}/'
    logger.info(f'Writing to {s3_path} as csv')
    sqlSelect \
        .replace("", None) \
        .coalesce(1) \
        .write \
        .mode('overwrite') \
        .format('csv') \
        .option("header", "false") \
        .option("sep", "|") \
        .option("quote", "") \
        .option("escape", "") \
        .option("nullValue", None) \
        .save(s3_path)
    my_bucket = s3.Bucket(s3_bucket_name)
    file_number = 0
    logger.info(f'Target path : {s3_bucket_name}/{TARGET_FILE}')
    for obj in my_bucket.objects.filter(Prefix=f'{FOLDER_PATH}'):
        source_filename = (obj.key).split('/')[-1]
        logger.info(f'Source file name: {source_filename}')
        copy_source = {
            'Bucket': s3_bucket_name,
            'Key': obj.key
        }
        file_number += 1
        s3.meta.client.copy(copy_source, s3_bucket_name, TARGET_FILE + ".txt")
        **zip_obj = s3.Object(bucket_name='s3_bucket_name', key='obj.key')
        buffer = BytesIO(zip_obj.get().read())
        z = zipfile.ZipFile(buffer)
        for filename in z.namelist():
            file_info = z.getinfo(filename)
            s3.meta.client.upload_fileobj(
                z.open(filename),
                Bucket=s3_bucket_name,
                Key=obj.key
            )**
        s3.Object(s3_bucket_name, obj.key).delete()
The part in bold is what I tried to do and was expecting a file with [FILE_NAME.txt.gz] in the desired S3 path.
But the error looks like this -
2023-04-28 12:56:43,414 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(73)): Error from Python:Traceback (most recent call last):
File "/tmp/glue_extract_job.py", line 213, in
execute_script(environment=EXEC_ENVIRONMENT,logger=logger, isdebug=False)
File "/tmp/glue_extract_job.py", line 131, in execute_script
buffer = BytesIO(zip_obj.get().read())
File "/home/spark/.local/lib/python3.7/site-packages/boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/boto3/resources/action.py", line 83, in call
response = getattr(parent.meta.client, operation_name)(*args, **params)
File "/home/spark/.local/lib/python3.7/site-packages/botocore/client.py", line 386, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/botocore/client.py", line 705, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied
Any help would be appreciated! | 0 | 1 | 1 | This is likely not a spark problem, but boto not having access to s3 in execute_script buffer = BytesIO(zip_obj.get().read())....GetObject operation: Access Denied
By the way, you can compress the csv to gzip with spark, just add .option("compression", "gzip") and you won't get to configure boto3 to zip the csv file. |
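If you do still need step 2 manually, gzipping an existing .txt before uploading, the standard library covers it; the paths here are hypothetical stand-ins for the downloaded S3 object:

```python
import gzip
import os
import shutil
import tempfile

# Hypothetical local file standing in for the downloaded S3 object.
src = os.path.join(tempfile.mkdtemp(), "extract.txt")
with open(src, "w") as f:
    f.write("a|b|c\n1|2|3\n")

# Compress extract.txt into extract.txt.gz.
dst = src + ".gz"
with open(src, "rb") as f_in, gzip.open(dst, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)

# Reading it back decompresses to the original text.
with gzip.open(dst, "rt") as f:
    print(f.read())
```

The resulting .gz file can then be uploaded with s3.meta.client.upload_file(dst, bucket, key).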
2023-04-28 15:55:00 | 0 | python-3.10,weaviate | 2 | 76,509,546 | Adding data objects to weaviate - {'error': [{'message': 'store is read-only'}]} | 76,131,291 | false | 287 | I am setting up a weaviate database using the docker-compose option. Starting up the db works fine, and I am able to create a class and add data objects in the REPL or when I am running it all in the same script (i.e., create weaviate class and add data in the same file). However, when I try to set up the weaviate class(es) in a different file or command and then try to add data to it, I get the following response:
{'error': [{'message': 'store is read-only'}]}
I've tried the following:
Start at the basics by following the weaviate Quickstart tutorial in a single function (Successful)
Adjust the function to create a Message class to accept a message from the user as input to be inserted (Successful)
Move the code to create the weaviate class to a separate file and function while keeping the code to accept the user message and add data to weaviate in the original file/function (Failed)
I've tried doing that last step in a variety of ways but to no avail. I always get the same error response.
Has anyone ran into this before or have an idea on how to resolve this?
Please let me know what other information would be helpful.
Here's a more detailed outline of what I am doing to produce the error:
Run ./build.sh setup_weaviate to create the class(es) found in a json file (completes successfully):
build.sh
setup_venv () {
    python3 -m venv venv
    source venv/bin/activate
    pip install --upgrade pip wheel
    pip install -r requirements.txt
}

setup_weaviate () {
    python3 src/weaviate_client.py
}

case "$1" in
    setup_venv)
        setup_venv
        ;;
    setup_weaviate)
        setup_weaviate
        ;;
    *)
        echo "Usage: $0 {setup}"
        exit 1
        ;;
esac
src/weaviate_client.py
import os
import yaml
from dotenv import load_dotenv
import weaviate
def get_client(url, api_key):
    client = weaviate.Client(
        url=url,
        additional_headers={"X-OpenAI-API-Key": api_key}
    )
    return client

def setup_weaviate(client):
    """Fetch the classes from the weaviate_classes.yml file and create them in Weaviate."""
    client.schema.delete_all()
    client.schema.create("resources/weaviate.json")
    print(client.schema.get())

if __name__ == "__main__":
    load_dotenv()
    OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
    WEAVIATE_URL = os.getenv("WEAVIATE_URL")
    client = get_client(WEAVIATE_URL, OPENAI_API_KEY)
    setup_weaviate(client)
    client._connection.close()
resources/weaviate.json
{"classes": [{"class": "Message", "invertedIndexConfig": {"bm25": {"b": 0.75, "k1": 1.2}, "cleanupIntervalSeconds": 60, "stopwords": {"additions": null, "preset": "en", "removals": null}}, "moduleConfig": {"text2vec-openai": {"model": "ada", "modelVersion": "002", "type": "text", "vectorizeClassName": true}}, "properties": [{"dataType": ["string"], "description": "The content of a message", "moduleConfig": {"text2vec-openai": {"skip": false, "vectorizePropertyName": false}}, "name": "content", "tokenization": "word"}], "replicationConfig": {"factor": 1}, "shardingConfig": {"virtualPerPhysical": 128, "desiredCount": 1, "actualCount": 1, "desiredVirtualCount": 128, "actualVirtualCount": 128, "key": "_id", "strategy": "hash", "function": "murmur3"}, "vectorIndexConfig": {"skip": false, "cleanupIntervalSeconds": 300, "maxConnections": 64, "efConstruction": 128, "ef": -1, "dynamicEfMin": 100, "dynamicEfMax": 500, "dynamicEfFactor": 8, "vectorCacheMaxObjects": 1000000000000, "flatSearchCutoff": 40000, "distance": "cosine", "pq": {"enabled": false, "bitCompression": false, "segments": 0, "centroids": 256, "encoder": {"type": "kmeans", "distribution": "log-normal"}}}, "vectorIndexType": "hnsw", "vectorizer": "text2vec-openai"}]}
Note that the weaviate.json file is just the output of the client.schema.get() command (after having once successfully created the class in the REPL).
Execute the message:handle_message function, which creates a message object and attempts to push it to weaviate:
message.py
import os
import asyncio
from dotenv import load_dotenv
from datetime import datetime
load_dotenv()
BATCH_SIZE = int(os.getenv("BATCH_SIZE"))
def handle_message(client, message, messages_batch=[]):
    """Save a message to the database."""
    data = [{
        "content": message.content,
    }]
    with client.batch as batch:
        batch.batch_size = 100
        for i, d in enumerate(data):
            properties = {
                "content": d["content"],
            }
            client.batch.add_data_object(properties, "Message")
    return True
I get the {'error': [{'message': 'store is read-only'}]} when I pass in a message to this function. Also, I understand that as the code is currently a batch will be executed each time a message is passed to the function -- this was intentional since I was trying to resolve this issue with just one message.
The only output I get when I execute the handle_message function is what I mentioned previously: {'error': [{'message': 'store is read-only'}]}
Here is also the output from client.schema.get() in case that is helpful, but is essentially the same as the resources/weaviate.json contents:
{'classes': [{'class': 'Message', 'invertedIndexConfig': {'bm25': {'b': 0.75, 'k1': 1.2}, 'cleanupIntervalSeconds': 60, 'stopwords': {'additions': None, 'preset': 'en', 'removals': None}}, 'moduleConfig': {'text2vec-openai': {'model': 'ada', 'modelVersion': '002', 'type': 'text', 'vectorizeClassName': True}}, 'properties': [{'dataType': ['string'], 'description': 'The content of a message', 'moduleConfig': {'text2vec-openai': {'skip': False, 'vectorizePropertyName': False}}, 'name': 'content', 'tokenization': 'word'}], 'replicationConfig': {'factor': 1}, 'shardingConfig': {'virtualPerPhysical': 128, 'desiredCount': 1, 'actualCount': 1, 'desiredVirtualCount': 128, 'actualVirtualCount': 128, 'key': '_id', 'strategy': 'hash', 'function': 'murmur3'}, 'vectorIndexConfig': {'skip': False, 'cleanupIntervalSeconds': 300, 'maxConnections': 64, 'efConstruction': 128, 'ef': -1, 'dynamicEfMin': 100, 'dynamicEfMax': 500, 'dynamicEfFactor': 8, 'vectorCacheMaxObjects': 1000000000000, 'flatSearchCutoff': 40000, 'distance': 'cosine', 'pq': {'enabled': False, 'bitCompression': False, 'segments': 0, 'centroids': 256, 'encoder': {'type': 'kmeans', 'distribution': 'log-normal'}}}, 'vectorIndexType': 'hnsw', 'vectorizer': 'text2vec-openai'}]} | 0 | 1 | 1 | I found that if I stopped mounting the PERSISTENCE_DATA_PATH to a local directory on the host, it solved the issue. |
2023-04-29 02:12:42 | 4 | python,kotlin,types | 1 | 76,134,311 | Type conversions inconsistent in Kotlin | 76,134,233 | true | 48 | I started coding on python and moved on to Kotlin/android. I'm curious: when you want to convert types in python, it is generally as simple as str(x), but in kotlin - depending on type - it is different:
Integer.toString(x)
x.toString()
There are probably more variants since I'm new to this, but am I doing this all wrong, or is it really this inconsistent? I'm trying to unlearn bad habits :-)
The reason Integer.toString() exists is that if you're targeting the JVM, then all the Java standard library is available as well. Java has the concept of "primitives" which are not classes so they don't have functions, and so it provides Integer.toString() to be able to convert a primitive integer into a String.
Java has primitive wrapper classes that you pretty much should never touch in Kotlin:
java.lang.Integer
java.lang.Long
java.lang.Float
etc. |
2023-04-29 05:27:49 | 1 | python | 3 | 76,134,752 | Roman to int conversion problem gives an string index out of range error | 76,134,654 | false | 62 | I have a simple task to convert roman to int. this is the code I have been trying.
def romanToInt(s):
    """
    :type s: str
    :rtype: int
    """
    j = 0
    print(len(s))
    for i in range(len(s)):
        k = s[i]
        l = i + 1
        if s[i] == "I":
            if s[l] == "V":
                j += 4
            elif s[l] == "X":
                j += 9
        elif s[i] == "X":
            if s[l] == "L":
                j += 40
            elif s[l] == "C":
                j += 90
        elif s[i] == "C":
            if s[l] == "D":
                j += 400
            elif s[l] == "M":
                j += 900
        else:
            match k:
                case "I":
                    j += 1
                case "V":
                    j += 5
                case "X":
                    j += 10
                case "L":
                    j += 50
                case "C":
                    j += 100
                case "D":
                    j += 500
                case "M":
                    j += 1000
    return j
a=input("Enter number")
b=romanToInt(a)
print(b)
This code gives this error:
Enter numberIII
3
Traceback (most recent call last):
File "C:\Users\admin\Desktop\Dekstop Folder\pycharm\python\Leetcode\Roman to int.py", line 46, in <module>
b=romanToInt(a)
^^^^^^^^^^^^^
File "C:\Users\admin\Desktop\Dekstop Folder\pycharm\python\Leetcode\Roman to int.py", line 13, in romanToInt
if s[l]=="V":
~^^^
IndexError: string index out of range
Process finished with exit code 1
I don't know why it says string index out of range.
I tried printing the length and it is in range, but I don't know why it does not work. Please help. | 0.066568 | 1 | 1 | As mentioned in the comments, your problem comes when i reaches len(s)-1, meaning l=i+1 is one past the end of the string.
Instead of using s[l], you could use a slice of s[l:l+1]. This will return an empty string when l is out of bounds instead of generating an error. |
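For reference, here's a compact sketch of that slicing idea - rewritten with the common "subtract if the next numeral is bigger" rule rather than the original pair-by-pair handling, so treat it as an illustration, not a drop-in patch:

```python
def roman_to_int(s):
    """Convert a Roman numeral to an int, using slices to avoid IndexError."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for i in range(len(s)):
        nxt = s[i + 1:i + 2]  # empty string past the end, never an IndexError
        if nxt and values[s[i]] < values[nxt]:
            total -= values[s[i]]  # subtractive pair: IV, IX, XL, ...
        else:
            total += values[s[i]]
    return total

print(roman_to_int("III"))      # 3
print(roman_to_int("MCMXCIV"))  # 1994
```

Because s[i + 1:i + 2] is just an empty string at the last character, the lookahead never raises.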
2023-04-29 05:44:38 | 0 | python,win32com,mainframe | 1 | 76,383,446 | How to connect to the active session of mainframe from Python | 76,134,704 | true | 50 | How to connect to the active session in mainframe from Python -
import win32com.client
passport = win32com.client.Dispatch("PASSPORT.Session")
session = passport.ActiveSession
if session.connected():
print("Session active")
else:
print("Session not active")
It is giving a DLL file missing error when activating the .zws file, and when the session is not active it gives the message, "Session not active".
Can someone please help?
I am using Python3.8 version | 1.2 | 1 | 1 | Discovered a possible solution -
we can create an active session in the mainframe using VBA code,
and then drive that macro from the Python code.
2023-04-29 06:11:06 | 0 | python,selenium-webdriver,codespaces | 2 | 76,276,698 | How can I write a python selenium webscraping script using chromedriver on github codespaces? | 76,134,787 | false | 198 | I have a GitHub Codespaces environment, and I have installed both selenium and the necessary chromedriver-binary using pip
pip install selenium chromedriver-binary
Here's an example of the Python web scraper I'm writing
import json
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
class PriceScraper:
def scrape(self):
input_url = "https://www.google.com"
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("--no-sandbox")
service = Service('/usr/bin/chromedriver')
driver = webdriver.Chrome(service=service, options=chrome_options)
driver.get(input_url)
if __name__ == '__main__':
scraper = PriceScraper()
scraper.scrape()
I have installed all the necessary pip packages and I have confirmed the installation of the chromium and the chromedriver by running:
(venv) $ sudo apt-get install -y chromium-browser chromium-chromedriver python3-selenium
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-selenium is already the newest version (4.0.0~a1+dfsg1-1.1).
chromium-browser is already the newest version (1:85.0.4183.83-0ubuntu0.20.04.3).
chromium-chromedriver is already the newest version (1:85.0.4183.83-0ubuntu0.20.04.3).
And checking by running
ls -l /usr/bin/chromedriver
But when I try to execute the python from my vscode codespaces terminal as follows:
python3 scrape_prices.py
It returns the following error:
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/chromium-browser is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Interestingly when I try to run chromedriver from the command line it says:
Command '/usr/bin/chromedriver' requires the chromium snap to be installed.
Please install it with:
snap install chromium
And when I attempt to install snap with snap install chromium
I get the following error
error: cannot communicate with the server: Post http://localhost/v2/snaps/chromium: dial unix /run/snapd.socket: connect: no such file or directory
I'm unsure how to get this working | 0 | 1 | 1 | The issue was that I hadn't installed chromium locally, and once that was done properly from within my .venv virtual environment, everything worked properly. |
2023-04-29 14:16:27 | 0 | python,dockerfile,appdynamics | 1 | 76,294,759 | Set AppDynamics integration with Python | 76,136,697 | false | 39 | I'm trying to set up my Python application to send data to AppDynamics. I have the AppDynamics controller up and running and on my local my application works, but no data is found on AppDynamics
I've been told to use this repo as a template (and I can confirm it works, sending data to the AppDynamics instance I'm working on) https://github.com/jaymku/py-k8s-init-scar/blob/master/kube/web-api.yaml
I have some doubts though, and they might be the cause of the issues that I'm having.
My Dockerfile used to end with a CMD like first.sh && python3 second, and I've changed it to ENTRYPOINT "first.sh && python3 second". Note the shell form (no []) here, and also that there are two concatenated commands.
On the value of the APP_ENTRY_POINT variable I'm trying just the same.
There are no errors when I run this, my application works correctly, except the data is not sent to AppDynamics. Nothing seems to fail, I can't find any error messages. Any ideas what I'm missing?
Also, where can I find out, within AppDynamics, the value that we need to set for the APPDYNAMICS_CONTROLLER_PORT variable? I'm pretty sure it will be 443 in our case, since we seem to be using that in other projects in AppDynamics that are working, but checking it would be a good idea. It might also be related to this issue, I don't know. | 0 | 1 | 1 | I managed to get this working by using CMD instead of ENTRYPOINT, with the command that is found inside the suggested entrypoint script. In other words, I did the same thing the entrypoint was supposed to do, but inserted the command myself
2023-04-29 14:30:13 | 0 | python,python-3.x,psychopy | 1 | 76,153,224 | How To Prevent Stimuli Overlap Prevention? | 76,136,750 | false | 21 | new to the scene.
I've created a code for a Visual Search task with a randomized number of Distractors (1/5/9) and a Target that may or may not appear (50% chance).
The coordinates for the placements of both Distractors and Target are randomized, but naturally that on its own doesn't prevent overlap, and both Distractors and Target keep clashing with each other.
Is there any way to prevent it?
I've considered trying to add each new random set of positions to a list and somehow making sure the list doesn't allow duplicates/close numbers (if duplicate/close, then randomize again), but I have no idea how to implement it.
Code provided: (D = Distractor, T = Target):
for i in range(20):
ClosedD_num = random.choice([1, 5, 9])
ClosedD_list = []
for i in range(ClosedD_num):
x_pos = random.uniform(-0.5, 0.5)
y_pos = random.uniform(-0.5, 0.5)
ClosedD = visual.ImageStim(
win=window,
name='ClosedD',
image='images/closed_d.png', mask=None, anchor='center',
ori=randint(1, 360), pos=(x_pos, y_pos), size=(0.1, 0.1),
color=[1, 1, 1], colorSpace='rgb', opacity=None,
flipHoriz=False, flipVert=False,
texRes=128.0, interpolate=True, depth=-3.0)
ClosedD_list.append(ClosedD)
for ClosedD in ClosedD_list:
ClosedD.setAutoDraw(True)
r_pos = random.uniform(-0.5, 0.5)
s_pos = random.uniform(-0.5, 0.5)
ClosedT = visual.ImageStim(
win=window,
name='ClosedT',
image='images/closed_t.png', mask=None, anchor='center',
ori=randint(1, 360), pos=(r_pos, s_pos), size=(0.1, 0.1),
color=[1, 1, 1], colorSpace='rgb', opacity=None,
flipHoriz=False, flipVert=False,
texRes=128.0, interpolate=True, depth=-3.0)
ClosedT_present = random.choice([0, 1])
if ClosedT_present == 0:
ClosedT.setAutoDraw(False)
else:
ClosedT.setAutoDraw(True)
window.flip()
P.S. I can provide further information if needed | 0 | 1 | 1 | To avoid overlap between many stimuli, you basically need to constrain the randomness. I'd recommend a different approach: create a grid of coordinates, with one element per grid position, and then apply a random jitter to each element's position that is smaller than the size of your imaginary boxes. This will still give the appearance of random positions, but means a) no overlaps and b) a more even distribution with no clumps of elements.
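Here's a minimal, PsychoPy-independent sketch of that grid-plus-jitter idea (the function name and numbers are just illustrative - 16 cells over the same -0.5..0.5 range the question uses):

```python
import random

def jittered_positions(n_items, grid_range=(-0.5, 0.5), cells_per_side=4, jitter=0.04):
    """Pick n_items distinct grid cells, then jitter each centre slightly.

    With jitter smaller than half the cell size, no two positions can overlap.
    """
    lo, hi = grid_range
    step = (hi - lo) / cells_per_side          # 0.25 with the defaults
    centres = [(lo + (i + 0.5) * step, lo + (j + 0.5) * step)
               for i in range(cells_per_side)
               for j in range(cells_per_side)]
    chosen = random.sample(centres, n_items)   # distinct cells, no repeats
    return [(x + random.uniform(-jitter, jitter),
             y + random.uniform(-jitter, jitter)) for x, y in chosen]

positions = jittered_positions(10)             # e.g. 9 distractors + 1 target
print(positions[:2])
```

Each returned (x, y) could then be fed to the pos= argument of an ImageStim; with 0.1-sized stimuli and 0.25-sized cells, any jitter below 0.075 keeps them from touching.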
2023-04-29 21:10:58 | 1 | python,mypy,python-typing | 2 | 76,138,555 | Incompatible types in assignment (expression has type "List[str]", variable has type "List[Union[str, int]]") | 76,138,438 | false | 226 | If I run this code through mypy:
if __name__ == '__main__':
list1: list[str] = []
list2: list[str | int] = []
list2 = list1
It gives me following error:
error: Incompatible types in assignment (expression has type "List[str]", variable has type "List[Union[str, int]]")
Why? Isn't a list[str] a subset of list[str | int]? If yes, then why can't I assign list1 which doesn't have wider range of possible types to list2? | 0.099668 | 1 | 1 | This is the covariant vs contravariant problem.
Let X be a subtype of Y be a subtype of Z.
If I have a method that expects list[Y], and only ever reads from that list, then it is perfectly safe to pass list[X]. The method expects to read a Y, and it gets an X instead, which is a subtype of Y. Everything is okay. This is called a covariant argument.
If I have a method that expects list[Y], and only ever writes to that list, then it is perfectly safe to pass list[Z]. Values of type Y can fit well into list[Z]. This is called a contravariant argument.
If the method expects to both read and write to the list, then I have to pass an argument of precisely type list[Y]. |
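A small sketch of the practical fix: annotate read-only parameters and variables with typing.Sequence, whose element type is covariant precisely because the type offers no mutating methods:

```python
from typing import List, Sequence, Union

def total_length(items: Sequence[Union[str, int]]) -> int:
    # Sequence has no append/__setitem__, so accepting a list with a
    # narrower element type can never let a wrong value be written in.
    return len(items)

names: List[str] = ["a", "b"]
seq: Sequence[Union[str, int]] = names  # OK for mypy: Sequence is covariant

print(total_length(names))  # 2 -- mypy also accepts the list[str] here
```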
2023-04-29 23:53:39 | 0 | python,fernet | 1 | 76,144,599 | Invalid Token when using Fernet library in python | 76,138,974 | false | 68 | I was trying to use a library I found in Python called Fernet, and I made a simple program encrypting and decrypting in Python and it worked, but when I tried to make it a GUI it kept spitting out the error "invalid token". I don't know what I did wrong, but it seems that it is only throwing an error in the GUI. Can someone help?
(Just so you know I am not the smartest in python so can you explain in simple terms)
Here is the full error:
line 121, in _get_unverified_token_data
raise InvalidToken
cryptography.fernet.InvalidToken
from cryptography.fernet import Fernet
from tkinter import *
import tkinter as tk
def Encrypt():
get=tk.StringVar()
message=get.get()
byte_message = str.encode(message)
f = Fernet(b'U_v31LOlh9SOe3eN-18CNUropIalPD02eqxgTxUnuAI=')
encrypted_message = f.encrypt(byte_message)
enmess.config(text = encrypted_message.decode(), font=('Poppins',5))
def Decrypt():
get=tk.StringVar()
message=get.get()
f = Fernet(b'U_v31LOlh9SOe3eN-18CNUropIalPD02eqxgTxUnuAI=')
message = str.encode(message)
encrypted_message = f.decrypt(message)
print(encrypted_message.decode())
#decrypt_message = f.decrypt(b'gAAAAABkTNLJxD-1doUad45Srrb0vku4QdC0sQ-KZLr4-Z5OdgJ5R118ApojM-3HQTc3_YcYLJ6FYNQgYylc1V9QE2G_lTt1_A==')
#print(decrypt_message)
root=tk.Tk()
root.geometry("600x400")
message=tk.StringVar()
name_label = tk.Label(root, text = "Enter Message", font=('Poppins',10, 'bold'))
name_entry = tk.Entry(root,textvariable = message, font=('Poppins',10,'normal'))
enmess = tk.Label(root, text = "will be updated soon", font=('Poppins',10))
encrypt_btn=tk.Button(root,text = 'Encrypt', command = Encrypt)
decrypt_btn=tk.Button(root,text = 'Decrypt', command = Decrypt)
name_label.grid(row=0,column=1)
name_entry.grid(row=0,column=2)
encrypt_btn.grid(row=2,column=1)
decrypt_btn.grid(row=2,column=2)
enmess.grid(row=3, column=0)
root.mainloop()
I fiddled around a lot and so the code may look a little weird but thank you for your help (: | 0 | 1 | 1 | The error message you're seeing is related to the cryptography library.
InvalidToken exception is raised when the token passed to Fernet.decrypt() method is either invalid or the message was tampered with.
Here are a few things you can check:
Make sure you are entering the correct token to decrypt. It should be the exact same token that was generated during encryption.
Ensure that the token is being passed correctly to the Fernet.decrypt() method.
Check if there is any change in the message or token during transfer or storage.
Also note: in the posted code, get = tk.StringVar() inside each handler creates a brand-new, empty variable, so get.get() returns an empty string - and calling f.decrypt() on an empty string always raises InvalidToken. Read the Entry's actual variable instead, e.g. message.get().
2023-04-30 01:22:53 | 0 | python,security,flask,restful-authentication | 1 | 76,147,821 | Secure Password Storage in Flask-based RESTful API using Python | 76,139,171 | false | 38 | I am building a Flask-based RESTful API in Python, and I need to securely store and encrypt user passwords for authentication. Currently, I am using a simple hashing algorithm to hash passwords with a salt, but I'm not sure if this is secure enough.
Here is an example of my current implementation for password hashing:
import hashlib
password = 'password123'
salt = 'somesalt'
hashed_password = hashlib.sha256((password + salt).encode('utf-8')).hexdigest()
print(hashed_password)
Can anyone suggest a more secure way to store and encrypt passwords for user authentication in a Flask-based API? Specifically, I would like to know which password hashing algorithm to use and how to use it in a Flask application. Any advice or suggestions would be greatly appreciated. | 0 | 1 | 1 | sha256 is good enough, but you're using the "salt" as a pepper
The difference is, you'll be using this same salt for every password - the value will be 'somesalt' every time. A fixed, shared value like that is actually called a pepper.
If you want to use a salt correctly, you have to randomize it for every password, and then save it into your database along with the password. (then retrieve it, and re-add it when checking the password's hash)
Ideally, you should be using both salt and pepper: the pepper, which is never saved into the database and cannot be figured out by someone who only has your database values, and the salt, which makes sure every password's hash is different from every other password's hash, even if the passwords are the same. You should also not put the salt or the pepper as clear text into your codebase.
I would say that
cleartext = very bad
hashed = not good enough
hashed + peppered or salted = mediocre / ok
hashed + peppered + salted = standard |
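For illustration, a stdlib-only sketch of that advice using hashlib.pbkdf2_hmac (a deliberately slow hash - bcrypt/argon2 are the usual production picks, but they're third-party), with a random per-user salt and an application-wide pepper; the names and values are just placeholders:

```python
import hashlib
import hmac
import os

PEPPER = b"app-wide-secret"  # illustrative; load from config/env, never the DB

def hash_password(password: str) -> tuple:
    salt = os.urandom(16)    # fresh random salt for every password
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode() + PEPPER, salt, 100_000)
    return salt, digest      # store both columns in the database

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode() + PEPPER, salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("password123")
print(verify_password("password123", salt, digest))    # True
print(verify_password("wrong-password", salt, digest)) # False
```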
2023-04-30 09:17:01 | 0 | python,queue,python-multiprocessing | 4 | 76,151,686 | Processes with python multiprocessing not ending | 76,140,451 | false | 102 | Trying to get multiprocessing working correctly. I have reviewed lots of posts on Stack Overflow, but none seem to fit my issue. I have a batch of pdfs that I am extracting the text data from, using multiprocessing with a queue to speed up the process. My script starts the processes and extracts the text from the pdfs.
There is a print statement in main.py that is never executed, which suggests there are processes that never finish.
print(f'Finished in {round(finish - start, 2)} seconds(s)')
My main.py read as
import multiprocessing
from multiprocessing import freeze_support, Queue # Add this line to support Windows platform
import ExtractPDFData as eD
import time
import os
if __name__ == '__main__':
freeze_support()
files_to_process = []
pdf_directory = 'PDFData'
for each_pdf_file in os.listdir(pdf_directory):
if each_pdf_file.endswith('.pdf'):
files_to_process.append(os.path.join(pdf_directory, each_pdf_file))
start = time.perf_counter()
q = Queue()
current_processes_array = []
for pdf_file in files_to_process:
p = multiprocessing.Process(target=eD.pdf_process, args=(pdf_file, q))
p.start()
current_processes_array.append(p)
print(f"Number of active children: {len(multiprocessing.active_children())}")
for process in current_processes_array:
process.join()
finish = time.perf_counter()
print(f'Finished in {round(finish - start, 2)} seconds(s)')
with open('OutputData/halo_data.txt', 'w', encoding='UTF-8', errors='ignore') as add_text_to_file:
while not q.empty():
add_text_to_file.write(q.get())
add_text_to_file.close()
with open('OutputData/halo_data.txt', 'r', encoding='UTF-8', errors='ignore') as f:
print(f'Number of char in the input file: {len(f.read())}')
The attaching ExtractedPDFData.py is
from pypdf import PdfReader
import os
# Extracts text from PDF file
def pdf_process(pdf_path, q):
with open(pdf_path, 'rb') as f:
reader = PdfReader(f)
cleaned_pages_array = []
for page in range(len(reader.pages)):
# Extract the text from each page
page_of_text = reader.pages[page].extract_text()
# Convert to lowercase
page_of_text = page_of_text.lower()
# Remove non-alphanumeric characters and extra whitespaces
cleaned_pages_array.append(page_of_text)
print('Number of pages in the array: ', len(cleaned_pages_array))
text = " ".join(cleaned_pages_array)
q.put(text)
print("Have written to file " + pdf_path)
print(f"Worker process ID: {os.getpid()}, which comes from the parent {os.getppid()}") | 0 | 1 | 1 | I took the point that the queue needs to be consumed before join(). I tested that premise, but my script still didn't work. The premise, however, was correct - it seems it is what I was putting in the queue that caused the issue: queue and Process with big text files aren't the right mix.
When I altered my code to put small string values into the queue instead of big text blobs, the script ran successfully. I then placed the big text into an object and put those object references on the queue, but that didn't work either. I really thought that would be successful.
So small values work but not large text files, or reference values.
Next step is to look at 'Pooling'. |
2023-04-30 09:49:21 | 0 | python,langchain | 1 | 76,161,492 | Error when calling GitLoader module when using LangChain in Python | 76,140,582 | false | 258 | Unfortunately I'm not a Python expert, and I have a problem when trying to use the GitLoader module from the LangChain project to load data from GitHub. I run the program in a Jupyter notebook.
Here's a fragment of my code:
kb_loader = GitLoader(
clone_url="https://github.com/neo4j-documentation/knowledge-base",
repo_path="./repos/kb/",
branch="master",
file_filter=lambda file_path: file_path.endswith(".adoc")
and "articles" in file_path,
)
kb_data = kb_loader.load()
I get an error like:
ImportError Traceback (most recent call last)
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\git\__init__.py:89
88 try:
---> 89 refresh()
90 except Exception as exc:
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\git\__init__.py:76, in refresh(path)
74 GIT_OK = False
---> 76 if not Git.refresh(path=path):
77 return
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\git\cmd.py:392, in Git.refresh(cls, path)
391 else:
--> 392 raise ImportError(err)
393 else:
ImportError: Bad git executable.
The git executable must be specified in one of the following ways:
- be included in your $PATH
- be set via $GIT_PYTHON_GIT_EXECUTABLE
- explicitly set via git.refresh()
All git commands will error until this is rectified.
This initial warning can be silenced or aggravated in the future by setting the
$GIT_PYTHON_REFRESH environment variable. Use one of the following values:
- quiet|q|silence|s|none|n|0: for no warning or exception
- warn|w|warning|1: for a printed warning
- error|e|raise|r|2: for a raised exception
Example:
export GIT_PYTHON_REFRESH=quiet
The above exception was the direct cause of the following exception:
ImportError Traceback (most recent call last)
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\document_loaders\git.py:33, in GitLoader.load(self)
32 try:
---> 33 from git import Blob, Repo # type: ignore
34 except ImportError as ex:
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\git\__init__.py:91
90 except Exception as exc:
---> 91 raise ImportError("Failed to initialize: {0}".format(exc)) from exc
92 #################
ImportError: Failed to initialize: Bad git executable.
The git executable must be specified in one of the following ways:
- be included in your $PATH
- be set via $GIT_PYTHON_GIT_EXECUTABLE
- explicitly set via git.refresh()
All git commands will error until this is rectified.
This initial warning can be silenced or aggravated in the future by setting the
$GIT_PYTHON_REFRESH environment variable. Use one of the following values:
- quiet|q|silence|s|none|n|0: for no warning or exception
- warn|w|warning|1: for a printed warning
- error|e|raise|r|2: for a raised exception
Example:
export GIT_PYTHON_REFRESH=quiet
`The above exception was the direct cause of the following exception:
ImportError Traceback (most recent call last)
Cell In [27], line 9
1 # Knowledge base
2 kb_loader = GitLoader(
3 clone_url="https://github.com/neo4j-documentation/knowledge-base",
4 repo_path="./repos/kb/",
(...)
7 and "articles" in file_path,
8 )
----> 9 kb_data = kb_loader.load()
10 print(len(kb_data))`
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\document_loaders\git.py:35, in GitLoader.load(self)
33 from git import Blob, Repo # type: ignore
34 except ImportError as ex:
---> 35 raise ImportError(
36 "Could not import git python package. "
37 "Please install it with `pip install GitPython`."
38 ) from ex
40 if not os.path.exists(self.repo_path) and self.clone_url is None:
41 raise ValueError(f"Path {self.repo_path} does not exist")
ImportError: Could not import git python package. Please install it with `pip install GitPython`.
I did the pip install GitPython as recommended and it seems to be correctly installed... nevertheless the error is still there.
Any idea on what I shall do to fix this?
Many thanks for your help! | 0 | 1 | 1 | This one will do - though note it only silences GitPython's import-time check; a working git executable still needs to be on your PATH (or set via GIT_PYTHON_GIT_EXECUTABLE) for GitLoader to actually clone anything:
import os
os.environ['GIT_PYTHON_REFRESH'] = 'quiet' |
2023-04-30 16:50:29 | -2 | python,numpy,uint64 | 5 | 76,181,263 | Unexpected uint64 behaviour 0xFFFF'FFFF'FFFF'FFFF - 1 = 0? | 76,142,428 | false | 4,950 | Consider the following brief numpy session showcasing uint64 data type
import numpy as np
a = np.zeros(1,np.uint64)
a
# array([0], dtype=uint64)
a[0] -= 1
a
# array([18446744073709551615], dtype=uint64)
# this is 0xffff ffff ffff ffff, as expected
a[0] -= 1
a
# array([0], dtype=uint64)
# what the heck?
I'm utterly confused by this last output.
I would expect 0xFFFF'FFFF'FFFF'FFFE.
What exactly is going on here?
My setup:
>>> sys.platform
'linux'
>>> sys.version
'3.10.5 (main, Jul 20 2022, 08:58:47) [GCC 7.5.0]'
>>> np.version.version
'1.23.1' | -0.07983 | 45 | 2 | The surprise here comes from type promotion, not from plain modular wraparound. Under NumPy 1.x casting rules (the version in the question), a uint64 scalar minus a Python int is computed in float64, because no integer dtype can represent both every uint64 value and a negative number. So a[0] -= 1 really does uint64 -> float64 -> subtract -> cast back to uint64.
In your example, a[0] starts at 0; 0 - 1 gives -1.0 as a float, and casting that back to uint64 wraps to 0xFFFFFFFFFFFFFFFF, as you saw. On the second step, 18446744073709551615 is not exactly representable as a double - it rounds to 2**64 - and subtracting 1 does not change it (the spacing between adjacent doubles at that magnitude is in the thousands). Casting 2**64 back to uint64 then overflows to 0.
To stay in pure integer arithmetic, subtract a value that is already uint64, e.g. a[0] -= np.uint64(1), which yields 0xFFFFFFFFFFFFFFFE as expected. (NumPy 2.0's NEP 50 promotion rules change this behaviour so that uint64 minus a Python int stays uint64.)
2023-04-30 16:50:29 | 12 | python,numpy,uint64 | 5 | 76,142,841 | Unexpected uint64 behaviour 0xFFFF'FFFF'FFFF'FFFF - 1 = 0? | 76,142,428 | false | 4,950 | Consider the following brief numpy session showcasing uint64 data type
import numpy as np
a = np.zeros(1,np.uint64)
a
# array([0], dtype=uint64)
a[0] -= 1
a
# array([18446744073709551615], dtype=uint64)
# this is 0xffff ffff ffff ffff, as expected
a[0] -= 1
a
# array([0], dtype=uint64)
# what the heck?
I'm utterly confused by this last output.
I would expect 0xFFFF'FFFF'FFFF'FFFE.
What exactly is going on here?
My setup:
>>> sys.platform
'linux'
>>> sys.version
'3.10.5 (main, Jul 20 2022, 08:58:47) [GCC 7.5.0]'
>>> np.version.version
'1.23.1' | 1 | 45 | 2 | a[0] - 1 is 1.8446744073709552e+19, a numpy.float64. That can't retain all the precision, so its value is 18446744073709551616 = 2**64. Which, when written back into a with dtype np.uint64, becomes 0.
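The rounding step can be checked directly with plain float64, independently of the uint64 round-trip (a quick demonstration, assuming numpy is installed):

```python
import numpy as np

big = 2**64 - 1                    # 0xFFFF_FFFF_FFFF_FFFF
print(np.float64(big))             # 1.8446744073709552e+19

# The nearest representable double is exactly 2**64: the low bits round away.
print(float(np.float64(big)) == 2.0**64)       # True

# Subtracting 1 changes nothing, because adjacent doubles at this
# magnitude are thousands apart.
print(float(np.float64(big) - 1) == 2.0**64)   # True
print(np.spacing(np.float64(big)))             # gap to the next double here
```

Casting that 2**64 back into a uint64 slot overflows to 0, which is the second surprise in the question.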
2023-04-30 19:22:15 | 1 | python | 2 | 76,143,181 | Loop Stops instead of continuing to next input after condition is met | 76,143,118 | true | 68 | I'm doing an assignment for class, and I chose to do something like a text-style adventure game, but there's something wrong as soon as I go into a battle. It's supposed to loop until either the enemy's health reaches zero or the player's health reaches zero, it basically does a few iterations of the loop but doesn't continue the code. Here is how I have the code set up for battles with an example of what's supposed to happen after, and then what's happening after I execute the program.
def game():
import random
att=random.randint(15,30)
health=100
defen=random.randint(10,20)
print("Your stats are:\n")
print("Attack Points:",att)
print("HP:", health)
print("Defense Points:", defen)
There's more code that continues on from here, but I will skip ahead to a battle
skel_att=random.randint(10,20)
skel_health=30
skel_def=random.randint(8,12)
while skel_health > 0 and health > 0:
print("You attack the skeleton!")
dam_at_skel=att-skel_def
skel_health=skel_health-dam_at_skel
print("The skeleton attacked!")
skel_dam=skel_att-defen
health=health-skel_dam
print("Oh no! Your health is:", health)
if skel_health > 0 or health > 0:
continue
elif health == 0:
print("The skeleton lays his final blow and you fall to the cold stone floor. The villagers will never find your body, but at least the skeleton will have a buddy in his afterlife!\nEnding Four: Spooky Scary Skeletons\nTry Again!")
elif skel_health == 0:
print("You land your final blow and the skeleton falls to pieces on the floor. You grab the EPIC SWORD and feel prepared to take on the dragon!")
new_heal=100
att=att+5
print("With the sword in hand you feel reinvigorated and energized. Your health has been healed in full!\nThe only places left to go is up or out the door. Where do you go?")
fourth_choice = str(input())
Here is what happens when I run the skeleton battle
game()
Your stats are:
Attack Points: 28
HP: 100
Defense Points: 11
You are the epic hero of legend. At least as epic as this text based adventure can get.
You approach the dark castle belonging to the evil dragon.
Upon entering the castle there's only a few directions you can go. Up the grand staircase, down the left passage, down the right passage, or back out the front door. Probably not a good idea to go upstairs yet since the dragon is there, but it's your funeral.
Will you go up, left, right, or leave?
Right
You go down the right corridor. Up on the wall there is an EPIC SWORD fit for an epic hero of legend like you. It's much more fitting than the frying pan you've been carrying at least! But oh no, a skeleton popped out! And he is standing between you and the EPIC SWORD. Time to send this skeleton back to the graveyard!
You attack the skeleton!
The skeleton attacked!
Oh no! Your health is: 91
You attack the skeleton!
The skeleton attacked!
Oh no! Your health is: 82
It basically just ends without continuing on to the next prompt. How do I fix this? | 1.2 | 2 | 1 | When the skeleton is dead, if skel_health > 0 or health > 0: is still true (your health is positive), so you hit continue - and then the while condition skel_health > 0 and health > 0 is false, so you exit the loop without ever reaching the elif branches. To fix it, just remove the first if (skel_health > 0 or health > 0) so the win/lose checks can run.
Also, you should probably change skel_health == 0 to skel_health <= 0, since the final blow may deal more damage than the skeleton's remaining HP (and likewise health <= 0).
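For what it's worth, here's a minimal standalone sketch of the corrected loop (stats hard-coded from the example run in the question, flavour text trimmed):

```python
import random

att, defen, health = 28, 11, 100
skel_att = random.randint(10, 20)
skel_def = random.randint(8, 12)
skel_health = 30

while skel_health > 0 and health > 0:
    skel_health -= att - skel_def   # you attack
    if skel_health <= 0:
        break                       # dead skeletons don't strike back
    health -= skel_att - defen      # skeleton attacks

# <= 0 rather than == 0: the final blow usually overshoots past zero
if health <= 0:
    print("Ending Four: Spooky Scary Skeletons")
else:
    print("You land your final blow and the skeleton falls to pieces!")
```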
2023-04-30 20:27:49 | -1 | python | 1 | 76,143,412 | Input python using sys | 76,143,385 | false | 57 | Is it possible to read the character of an input without consuming it?
Example:
while state < 100:
if read: c = sys.stdin.read(1)
else: read = True
if c=='/':
next_c=sys.stdin.peek(1)
if next_c=='/':
state = MT[state][filter('//')]
else:
state = MT[state][filter(c)]
else:
state = MT[state][filter(c)]
if state < 100 and state !=0: lexeme += c
I want that next_c doesn't consume the input character so that in the rest of the runs it is complete.
For example, if I give a/b it just prints a/ because the b was read. | -0.197375 | 2 | 1 | In Python, you can use the io.BufferedReader class to wrap the standard input and provide the peek() functionality. |
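sys.stdin.buffer is already an io.BufferedReader, so its peek() gives lookahead without consuming anything. A small sketch, using an in-memory stream instead of real stdin so it's easy to try (note that peek() works on bytes and may return more than one byte, hence the [:1]):

```python
import io

stream = io.BufferedReader(io.BytesIO(b"a/b"))  # stand-in for sys.stdin.buffer

c = stream.read(1)        # consumes b'a'
nxt = stream.peek(1)[:1]  # looks ahead WITHOUT consuming
rest = stream.read()      # still gets everything from b'/' onward

print(c, nxt, rest)       # b'a' b'/' b'/b'
```

On the real thing you'd call sys.stdin.buffer.read(1) / sys.stdin.buffer.peek(1) and decode the bytes as needed.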
2023-05-01 06:47:30 | 0 | python,generator | 1 | 76,145,297 | TypeError on Generator object with Linux Mint. Code works fine in Win11 | 76,145,177 | true | 41 | I have code for running a google search from terminal.
The weird thing is it works just fine in my win11, but when I launch it in Linux Mint I get a TypeError.
The code creates an error in line 13
Exception has occurred: TypeError
'generator' object is not subscriptable
for i in result[:num_results]:
TypeError: 'generator' object is not subscriptable
Any ideas why this is occuring and how to fix it?
Code below:
from googlesearch import search
import webbrowser, sys
import time
searching_for = input(("Input search words: "))
num_results = int(input("How many results : ") or "3")
result = search(searching_for)
for i in result[:num_results]:
webbrowser.open(i)
time.sleep(1) | 1.2 | 1 | 1 | search returns a generator.
Use:
for i in list(result)[:num_results]: |
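If you'd rather not pull down every result just to use the first few, itertools.islice takes a slice of any iterator lazily. A small sketch with a stand-in generator (not the real googlesearch call):

```python
from itertools import islice

def results():  # stand-in for search(...), which also yields lazily
    yield "https://a.example"
    yield "https://b.example"
    yield "https://c.example"
    yield "https://d.example"

first_three = list(islice(results(), 3))
print(first_three)
# ['https://a.example', 'https://b.example', 'https://c.example']
```

In the question's code that would be: for i in islice(result, num_results): ...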
2023-05-01 11:56:53 | 0 | python,tkinter,customtkinter,tkinter-text | 3 | 76,367,119 | Customtkinter: Change text color of switch | 76,146,738 | false | 454 | I want to change the text color of the switch widget text in customtkinter
I tried to use configure() with text_color, but it said there's no attribute of a switch called text_color...
By the way, when creating the switch, text_color works.
minimal reproducible example:
import customtkinter as ctk
root = ctk.CTk()
switch = ctk.CTkSwitch(master=root, text='This is a switch', text_color='yellow')
switch.pack()
switch.configure(text_color='red')
root.mainloop() | 0 | 1 | 1 | Configuring CTkSwitch.configure(text_color=...) works in CustomTkinter 5.1.3. |
2023-05-01 13:55:29 | 0 | python,algorithm,nearest-neighbor,pulp,traveling-salesman | 1 | 76,154,815 | TSP with only one key to identify a road | 76,147,508 | true | 33 | I need to solve Traveling Salesman Problem in Python using PuLP.
import pulp
import numpy as np
import matplotlib.pyplot as plt
n = 50
np.random.seed(42)
x = 1.5*np.random.rand(n)
y = np.random.rand(n)
Roads are declared like this:
roads = pulp.LpVariable.dicts("Road", (range(n), range(n)), 0, 1, pulp.LpInteger)
Distance calculation:
xi, xj = np.meshgrid(x, x)
yi, yj = np.meshgrid(y, y)
dist = np.hypot(xi - xj, yi - yj)
Constraints:
for i in range(n):
prob += pulp.lpSum([roads[i][j] for j in range(n)]) == 1
for j in range(n):
prob += pulp.lpSum([roads[i][j] for i in range(n)]) == 1
for i in range(n):
prob += roads[i][i] == 0
prob += pulp.lpSum([dist[i][j]*roads[i][j] for i in range(n) for j in range(n)]), "Objective Function"
def draw(x, y, n, path):
plt.plot(x, y, '*', markerfacecolor = 'red', markeredgecolor = 'red')
plt.axis('equal')
for i in range(n):
plt.plot((x[i], x[path[i]]), (y[i], y[path[i]]), '--b')
prob.solve()
def findPath(roads):
path = [0]*n
for i in range(n):
for j in range(n):
if pulp.value(roads[i][j]) == 1:
path[i] = j
return path
path = findPath(roads)
draw(x, y, n, path)
def findTours(path):
ipath = [True]*len(path)
tours = []
while True in ipath:
i0 = ipath.index(True)
ipath[i0] = False
tour = [i0]
i = path[i0]
while i != i0:
tour.append(i)
ipath[i] = False
i = path[i]
tours.append(tour)
return tours
itCount = 0
while True:
itCount += 1
print('#', itCount, end = '')
result = prob.solve()
if result != 1:
break
path = findPath(roads)
tours = findTours(path)
print(' => ', len(tours))
if len(tours) == 1:
break
for tour in tours:
prob += pulp.lpSum([roads[i][j] for i in tour for j in tour]) <= len(tour) - 1
draw(x, y, n, path)
print("Result = ", pulp.value(prob.objective))
This code works correctly, I need to use only one variable to identify a road, i.e.:
roads = pulp.LpVariable.dicts("Road", (range(n), range(n)), 0, 1, pulp.LpInteger)
I rewrote constraints like this:
for i in range(n):
prob += pulp.lpSum([roads[(i,j)] for j in range(i + 1, n)]) + pulp.lpSum([roads[(j,i)] for j in range(0, i)]) == 2
prob += pulp.lpSum([dist[i][j] * roads[i,j] for i in range(n) for j in range(i + 1, n)]), "Objective Function"
And path calculation like this:
def findPath(roads):
path = [[] for _ in range(n)]
for i in range(n):
for j in range(i + 1, n):
if pulp.value(roads[i, j]) == 1:
path[i].append(j)
return path
Now I have a problem with function findTours which combines connected components into one graph using the Nearest neighbour algorithm - obviously it does not work with 2D arrays and I don't see an efficient way to re-implement it to support 2D arrays.
For experiments I reduced n to 10 and rewrote the function like this, but it struggles to combine everything to one graph:
def findTours(path):
ipath = [True]*len(path)
tours = []
while True in ipath:
i0 = ipath.index(True)
ipath[i0] = False
tour = [i0]
u = path[i0]
if len(u) == 0: continue
i1, i2 = u
while i1 != i0:
print(i0, i1, i2)
tour.append(i1)
ipath[i1] = False
u = path[i1]
if len(u) == 0:
i0 = i1
i1 = i2
elif len(u) == 1:
i1 = u[0]
else:
i1, i2 = u
tours.append(tour)
return tours
I also changed the constraint in the loop to this, but I guess I am wrong:
prob += pulp.lpSum([roads[i,j] for i in tour for j in range(i + 1, len(tour))]) <= len(tour) - 1
Is there some correct mathematical way to use just one variable to represent a road, and where am I wrong? How should the findTours function work in this case, or do I need to come up with a way to simplify the findPath function? | 1.2 | 1 | 1 | You’re missing the subtour elimination constraints, so in general, the roads chosen will form many connected components. Probably the best way to handle these is to generate them on demand:
Solve the current model (initially with no subtour elimination constraints).
If the solution is connected, we’re done.
Otherwise, choose a connected component, add a constraint that at least two roads enter/exit the component, and try again. |
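For the undirected formulation in the question, finding the tours is plain graph traversal over degree-2 adjacency lists. This is a sketch (find_tours and its variable names are mine, not from the question); it assumes every node has exactly degree 2, which the == 2 constraint guarantees in any feasible solution:

```python
def find_tours(edges, n):
    # edges: list of undirected pairs (i, j); every node must have degree 2.
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen = [False] * n
    tours = []
    for start in range(n):
        if seen[start]:
            continue
        tour = [start]
        seen[start] = True
        prev, cur = start, adj[start][0]
        while cur != start:
            tour.append(cur)
            seen[cur] = True
            # Step to the neighbour we did not just come from.
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            prev, cur = cur, nxt
        tours.append(tour)
    return tours

# Two disjoint triangles are detected as two separate tours:
print(find_tours([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)], 6))
```

The edge list would come from the LP solution, e.g. [(i, j) for i in range(n) for j in range(i + 1, n) if pulp.value(roads[i, j]) == 1]. The matching subtour cut then counts only edges with both endpoints inside the tour: pulp.lpSum(roads[i, j] for i in tour for j in tour if i < j) <= len(tour) - 1. Note that j must iterate over tour members, not range(i + 1, len(tour)) as in the question's attempt, which mixes node ids with list positions.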
2023-05-01 17:49:59 | 1 | python,oop,parallel-processing,python-multiprocessing | 1 | 76,150,138 | How to filter a list of objects based on an attribute in parallel, using Pool | 76,149,018 | false | 30 | Given a list of objects, I want to be able to reduce that list based on the attributes of the objects. Suppose I have the following class:
class TestClass:
def __init__(self, x, y):
self.x = x
self.y = y
and a list of objects from that class:
N = 10
list_of_objects = []
for i in range(N):
x = random.randint(0, 10)
y = random.randint(0, 10)
tst = TestClass(x, y)
list_of_objects.append(tst)
I want to reduce list_of_objects such that I only have elements for which self.x > self.y. Since I will have many such objects, and the actual filtering will be much more resource-intensive, I want to parallelize this. I have tried the following, which gives me a list of Nones and objects that match the criterion:
import random
from multiprocessing import Pool
class TestClass:
def __init__(self, x, y):
self.x = x
self.y = y
def filter(test_object):
if test_object.x > test_object.y:
return test_object
else:
return None
def parallel_process(operation, input, pool):
result = pool.map(operation, input)
return result
if __name__ == "__main__":
N = 10
list_of_objects = []
for i in range(N):
x = random.randint(0, 10)
y = random.randint(0, 10)
tst = TestClass(x, y)
list_of_objects.append(tst)
process_count = 2
process_pool = Pool(process_count)
result = parallel_process(filter, list_of_objects, process_pool)
print(result)
which returns a list of Nones and objects that match the criterion:
[<__main__.TestClass object at 0x1033f0610>, <__main__.TestClass object at 0x103e9f8d0>, <__main__.TestClass object at 0x103e9fa90>, None, <__main__.TestClass object at 0x103e9fb10>, <__main__.TestClass object at 0x103e9fb50>, None, <__main__.TestClass object at 0x103e9fcd0>, None, <__main__.TestClass object at 0x103e9fd50>]
I could, in principle, then get rid of Nones:
result = [i for i in result if i is not None]
Is this the best way of doing this? By "best" here I mean: (i) doing it in a pythonic and concise way, and (ii) the most efficient way within the confines of (i). | 0.197375 | 1 | 1 | If you run your program, this is what will happen:
The Pool will create two secondary Processes.
Every item in list_of_objects (instances of TestClass) will be converted to bytes using the Python pickle protocol. These bytes will be transmitted to the secondary Processes through a Pipe or a Queue. In the secondary Process, the bytes will be un-pickled to reconstitute objects of TestClass.
The secondary Process will call filter and convert its returned value to bytes using the Python pickle protocol. These bytes will be transmitted back to the main Process through a Pipe or a Queue. In the main Process, the bytes will be un-pickled to reconstitute the object that was returned by filter - either a valid instance of TestClass or None. Finally, as you point out, the main Process will have to perform additional logic to remove the Nones.
This all happens behind the scenes, in the standard library. This machinery is necessary because Python Processes do not share an address space; in other words, each Python Process acts almost like an independent program, with its own objects and its own memory. The secondary Processes must receive data from the main Process and return the results somehow. Python does this using pickle, Pipes and Queues.
Can you make your program faster using this mechanism? For your simple test program the answer is certainly no. For your actual program the answer may be yes. But it seems to me that you are trying to optimize the performance of a program you haven't even written yet. Why not write the program in a simple, straightforward way and see if the performance is acceptable, before undertaking the difficult task of writing a multiprocessing-based application? |
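Following that advice, the straightforward single-process version is just a list comprehension. A minimal sketch using the question's TestClass (sample sizes made up):

```python
import random

class TestClass:
    def __init__(self, x, y):
        self.x = x
        self.y = y

objs = [TestClass(random.randint(0, 10), random.randint(0, 10))
        for _ in range(10)]
# Plain filtering: no pickling, no Pipe/Queue round-trips, no None cleanup.
kept = [o for o in objs if o.x > o.y]
print(len(kept), "of", len(objs), "objects kept")
```

Only if profiling shows the predicate itself dominates is the Pool version worth its serialization overhead.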
2023-05-01 21:35:12 | 0 | python,while-loop | 2 | 76,150,439 | While loop stops for a reason I did not anticipate in python | 76,150,319 | false | 83 | Am very new to python, I'm just using it to solve some of Euler's problems. I'm stuck on problem 4 as my code keeps stopping for no apparent reason. Its attempting to find the largest palindrome product of two three digit numbers.
x = 100
y = 100
while x != 1000:
test = x * y
if str(test)[::-1] == str(test):
ans = test
elif y<1000:
y += 1
else:
x += 1
y = 100
print(x)
print(ans)
If I run it all I see is that 1 is added to x only one time, though it should happen 899 times. I know the rest of the code is working as it finds 10201 which is 101 * 101 but nothing after that.
I've tried changing the loop to while a variable stop is false and setting it true when x == 999 but it still stops after x == 101. Why does my loop stop? | 0 | 1 | 1 | this line elif y<1000: should be an if not an elif: you rporgram is stuck in an infite loop not increasing x neither y once it finds the first palindrome.
Your while loop is not "stopping": you'd see your program finish, and get back to console, or whatever: as it is, it will keep running forever. Even for getting this one correct result, you likely had a different program than this: this won't print even the answer for "101".
After you fix that, move your print to inside the while loop, unless you want to see just the last result (although, IIRC the largest number is usually what Project Euler problems require - it would be ok then) |
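A corrected version as a sketch - restructured with for loops rather than the original while, and keeping the maximum rather than the last palindrome found:

```python
# Largest palindrome that is a product of two 3-digit numbers.
ans = 0
for x in range(100, 1000):
    for y in range(x, 1000):  # y >= x avoids checking each pair twice
        test = x * y
        if str(test) == str(test)[::-1] and test > ans:
            ans = test
print(ans)  # 906609
```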
2023-05-02 00:34:05 | 1 | python-3.x,django,django-models,django-rest-framework,django-views | 1 | 76,151,456 | select_related in view is not reducing queries with hybrid property in Django | 76,151,016 | true | 49 | I have included select_related and overall is reducing the number of queries and prefetching data as expected. However, I have a case which is querying separately. I am not sure if I am expecting a behaviour that would not be possible.
I have the following view:
list.py (View) where I am using select_related:
class OpportunitiesList(generics.ListAPIView):
serializer_class = OpportunityGetSerializer
def get_queryset(self):
queryset = Opportunity.objects.all()
queryset = queryset.filter(deleted=False)
return queryset.select_related('nearest_airport').order_by('id')
I have four objects in the models:
opportunity.py (Model):
class Opportunity(models.Model):
deleted = models.BooleanField(
default=False,
)
opportunity_name = models.TextField(blank=True, null=True)
nearest_airport = models.ForeignKey(AirportDistance,
on_delete=models.SET_NULL,
db_column="nearest_airport",
null=True)
class AirportDistance(models.Model):
airport_id = models.ForeignKey(Airport,
on_delete=models.SET_NULL,
db_column="airport_id",
null=True)
airport_distance = models.DecimalField(max_digits=16, decimal_places=4, blank=False, null=False)
@property
def airport_name(self):
return self.airport_id.name
location_assets.py (Model):
class Location(models.Model):
name = models.CharField(null=True, blank=True, max_length=255)
location_fuzzy = models.BooleanField(
default=False,
help_text=
"This field should be set to True if the `point` field was not present in the original dataset "
"and is inferred or approximated by other fields.",
)
location_type = models.CharField(null=True, blank=True, max_length=255)
class Airport(Location):
description = models.CharField(null=True, blank=True, max_length=255)
aerodrome_status = models.CharField(null=True, blank=True, max_length=255)
aircraft_access_ind = models.CharField(null=True, blank=True, max_length=255)
data_source = models.CharField(null=True, blank=True, max_length=255)
data_source_year = models.CharField(null=True, blank=True, max_length=255)
class Meta:
ordering = ("id", )
Lastly, I have the serializers to transform the data:
get.py: (Serializer)
class OpportunityGetSerializer(serializers.ModelSerializer):
nearest_airport = OpportunityAirportSerializer(required=False)
class Meta:
model = Opportunity
fields = (
"id",
"opportunity_name",
"nearest_airport",
)
distance.py (Serializer)
class OpportunityAirportSerializer(serializers.ModelSerializer):
class Meta:
model = AirportDistance
fields = ('airport_id', 'airport_distance','airport_name')
The above is creating two queries instead of one:
SELECT opportunity.id,
opportunity.opportunity_name,
opportunity.nearest_airport,
airportdistance.id,
airportdistance.airport_id,
airportdistance.airport_distance
FROM opportunity
LEFT OUTER JOIN airportdistance
ON (opportunity.nearest_airport = airportdistance.id)
WHERE opportunity.deleted = false
ORDER BY opportunity.id ASC
SELECT location.name
FROM airport
INNER JOIN location
ON (airport.location_ptr_id = location.id)
WHERE airport.location_ptr_id = 1
AirportDistance model is connected to Airport model which in turn is connected to Location model. In AirportDistance, I included a hybrid property to retrieve the name from Airport that is coming from Location model. I am not sure if the hybrid property in here would affect select_related.
Is possible just to get the following query and mix both queries:
SELECT opportunity.id,
opportunity.opportunity_name,
opportunity.nearest_airport,
airportdistance.id,
airportdistance.airport_id,
airportdistance.airport_distance,
location."name"
FROM opportunity
LEFT OUTER JOIN airportdistance
ON (opportunity.nearest_airport = airportdistance.id)
INNER join airport
ON (airport.location_ptr_id = airportdistance.airport_id)
INNER join location
ON (location.id = airport.location_ptr_id)
WHERE opportunity.deleted = false
ORDER BY opportunity.id ASC | 1.2 | 1 | 1 | you need to do select_related('nearest_airport__airport_id') instead of select_related('nearest_airport')
since AirportDistance accesses airport_id.name when resolving airport_name. |
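Applied to the view from the question, that is a one-line change (Django ORM fragment, not runnable outside the project):

```python
def get_queryset(self):
    queryset = Opportunity.objects.filter(deleted=False)
    # Follow the FK chain AirportDistance -> Airport (and its Location
    # parent) so the airport_name property resolves from the same query.
    return queryset.select_related('nearest_airport__airport_id').order_by('id')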
2023-05-02 07:13:55 | 0 | python,pandas,datetime,strptime | 2 | 76,152,586 | strptime unconverted data remains | 76,152,521 | false | 47 | I have a string I want to convert to a datetime in the following format:
'31-12-2022:24'
The last two digits being the hour. I tried the following to convert it:
dt = datetime.strptime(z[1], '%d-%m-%Y:%H').date()
But get the following error:
ValueError: unconverted data remains: 4
Why is it only selecting the first digit? I have a workaround, splitting the string and converting them separately before rejoining them but I'd like to just do it in one line.
What I would ideally like is a datetime object that allows me to do some maths on the difference between two dates (edit* and times), which I assume is easiest in datetime format? | 0 | 1 | 1 | the problem is with your date value, as there is no such hour as 24.
If your string will be correct, as e.g.:
'31-12-2022:12' it should be ok. |
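Since %H only accepts 00-23, an hour of 24 has to be handled manually. A small sketch (parse_hour24 is a made-up helper name) that normalizes hour 24 to midnight of the following day and returns a datetime you can do arithmetic on:

```python
from datetime import datetime, timedelta

def parse_hour24(s):
    # Hypothetical helper: split off the hour, parse only the date part,
    # then add the hour as a timedelta so 24 rolls over to the next day.
    date_part, hour_part = s.rsplit(":", 1)
    dt = datetime.strptime(date_part, "%d-%m-%Y")
    return dt + timedelta(hours=int(hour_part))

print(parse_hour24("31-12-2022:24"))  # 2023-01-01 00:00:00
print(parse_hour24("31-12-2022:12"))  # 2022-12-31 12:00:00
```

Subtracting two such datetimes gives a timedelta, which covers the "maths on the difference" requirement.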
2023-05-02 16:26:22 | 0 | python,java | 1 | 76,157,095 | Want to run a Python "print()" script from Java, but getting no output | 76,156,942 | false | 76 | I have a simple Python script:
def main():
print("Hello world")
if __name__ == "__main__":
main()
And I want to get the response in Java. The endpoint is:
@RestController
@CrossOrigin(origins = "http://localhost:4200")
public class TestPython {
@PostMapping("/endpoint")
public ResponseEntity<String> endpoint() throws IOException {
ProcessBuilder pb = new ProcessBuilder("path-to-venv", "path-to-python-script");
pb.redirectErrorStream(true);
Process p = pb.start();
BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
StringBuilder sb = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
sb.append(line);
}
String jsonResponse = sb.toString();
BufferedReader errorReader = new BufferedReader(new InputStreamReader(p.getErrorStream()));
StringBuilder errorSb = new StringBuilder();
String errorLine;
while ((errorLine = errorReader.readLine()) != null) {
errorSb.append(errorLine);
}
String errorOutput = errorSb.toString();
if (!errorOutput.isEmpty()) {
System.out.println("Error output: " + errorOutput);
}
System.out.println(jsonResponse);
System.out.println(errorOutput);
return ResponseEntity.ok(jsonResponse);
}
}
jsonResponse and errorOutput are both empty. My virtual environment is active and I am using it. When I execute the Python script via terminal, a cmd pops up and prints Hello World, but using Java is empty. I also tried to explicitly set the charset in java, but I get the same problem. Also I get a lot of these \u0000 characters when executing reader.cb in debug mode.
Any help is welcome. | 0 | 1 | 1 | Not 100%, but you can try to flush the output stream of your Python script. It's possible that the output is not being sent until the script completes, so flushing the stream may help. You can try adding sys.stdout.flush() at the end of your Python script to force the output to be flushed.
edit (I few other things to try):
Make sure that the Python script has execute permissions. If the Python script is not marked as executable, you may need to set the execute permission using the chmod command in your terminal.
Triple check the path's you're using in the ProcessBuilder, that object also has a few other constructors you could try that may work better.
You can also try using a different method for executing the Python script, such as Runtime.getRuntime().exec() |
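One quick sanity check from the Python side (a hypothetical minimal example, since the other snippets here are Python): capture a child process's stdout the same way the Java ProcessBuilder/BufferedReader pair does, with the explicit flush suggested above:

```python
import subprocess
import sys

# Launch a child Python process and capture its stdout, mirroring what the
# Java code does with ProcessBuilder and BufferedReader.
child_code = "import sys; print('Hello world'); sys.stdout.flush()"
result = subprocess.run([sys.executable, "-c", child_code],
                        capture_output=True, text=True)
print(repr(result.stdout.strip()))  # 'Hello world'
```

If this captures the output but the Java side still sees nothing, the problem is likely in the interpreter/script paths passed to ProcessBuilder rather than in the script itself.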
2023-05-02 21:26:43 | 0 | python,xpath,scrapy | 3 | 76,363,716 | Scrapy xpath error: 'Selector' object has no attribute '_default_type' | 76,158,987 | false | 185 | I'm using scrapy shell to run some sample test
after I enter 'response.xpath('//h1/text()').get()'
and it gives me AttributeError: 'Selector' object has no attribute '_default_type'
Can anyone help with this?
Thank you very much
below are the things that I entered in the terminal
scrapy shell
fetch("https://www.worldometers.info/world-population/population-by-country/")
r = scrapy.Request(url="https://www.worldometers.info/world-population/population-by-country/")
fetch(r)
response.xpath('//h1/text()').get()
Everything went good before the last line.
After I entered 'response.xpath('//h1/text()').get()'
it throw an error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-7a322073163e> in <module>
----> 1 response.xpath('//h1/text()').get()
~/anaconda3/envs/virtual_work/lib/python3.7/site-packages/scrapy/http/response/text.py in xpath(self, query, **kwargs)
117
118 def xpath(self, query, **kwargs):
--> 119 return self.selector.xpath(query, **kwargs)
120
121 def css(self, query):
~/anaconda3/envs/virtual_work/lib/python3.7/site-packages/scrapy/http/response/text.py in selector(self)
113 from scrapy.selector import Selector
114 if self._cached_selector is None:
--> 115 self._cached_selector = Selector(self)
116 return self._cached_selector
117
~/anaconda3/envs/virtual_work/lib/python3.7/site-packages/scrapy/selector/unified.py in __init__(self, response, text, type, root, _root, **kwargs)
84 % self.__class__.__name__)
85
---> 86 st = _st(response, type or self._default_type)
87
88 if _root is not None:
AttributeError: 'Selector' object has no attribute '_default_type' | 0 | 1 | 1 | What version of Scrapy are you on? Upgrading to the latest solved it for me. |
2023-05-02 21:29:16 | 0 | python,performance,tensorflow,keras | 1 | 76,159,288 | invalid results of process_time() when measuring model.fit() performance | 76,159,000 | true | 27 | I use the snippet below to measure and output the time spent during model fitting.
perf_counter_train_begin = time.perf_counter()
process_time_train_begin = time.process_time()
model.fit(data, ...)
perf_counter_train = time.perf_counter() - perf_counter_train_begin
process_time_train = time.process_time() - process_time_train_begin
print(f"System Time: {perf_counter_train}; Process Time: {process_time_train}")
It is expected that the system time (acquired from time.perf_counter()) might take much greater values than the process time (from time.process_time()) due to various factors like system calls, process scheduling and so on. On the other hand, when I run my neural network training script, I get results like this:
System Time: 51.13854772000013; Process Time: 115.725974476
Judging by my clock, the system time is measured correctly, and the process time is bogus. What am I doing wrong here? | 1.2 | 1 | 1 | The documentation of the various timing functions can in fact be a bit confusing, here is an explanation for both functions (skip to the end for a summary of the differences):
time.process_time
time.process_time() → float
Return the value (in fractional seconds)
of the sum of the system and user CPU time of the current process. It
does not include time elapsed during sleep. It is process-wide by
definition. The reference point of the returned value is undefined, so
that only the difference between the results of two calls is valid.
Use process_time_ns() to avoid the precision loss caused by the float
type.
New in version 3.3.
"Return the value (in fractional seconds) of the sum of the system and user CPU time of the current process": This means that time.process_time() returns the amount of CPU time used by the current process and all its child processes, expressed in seconds as a floating point number. This includes all the CPU time of the threads spawned by the process, each one measured individually and then they are all added up.
"It does not include time elapsed during sleep": This means that the time spent sleeping or waiting for I/O operations is not included in the value returned by time.process_time(). Only the time spent executing code on the CPU is included.
"It is process-wide by definition": it only includes time taken by this process (and its children, as mentioned before).
"The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid": This means that the absolute value returned by time.process_time() has no specific meaning, since the "zero" point in time is undefined. Instead, the value returned by two successive calls to time.process_time() can be subtracted to get the elapsed CPU time between the two calls, which is the only meaningful measure.
time.perf_counter
time.perf_counter() → float
Return the value (in fractional seconds) of a performance counter, i.e. a clock with the highest available resolution to measure a short duration. It does include time elapsed during sleep and is system-wide. The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid.
Use _counter_ns() to avoid the precision loss caused by the float type.
New in version 3.3.
Changed in version 3.10: On Windows, the function is now system-wide.
"Return the value (in fractional seconds) of a performance counter": This means that time.perf_counter() returns the value of a performance counter, which is a clock that is used to measure elapsed time with high accuracy and precision.
"It does include time elapsed during sleep and is system-wide": This means that the time elapsed during sleep is included and system-wide means that it also takes into account the time that other programs take while the main program is paused (such as waiting to receive a message from another running program).
"The reference point of the returned value is undefined, so that only the difference between the results of two calls is valid": This means that the absolute value returned by time.perf_counter() has no well-defined meaning, since the "zero" point in time is undefined. Instead, the value returned by two successive calls to time.perf_counter() can be subtracted to get the elapsed time between the two calls, which is the only meaningful measure. (This point is the same as for time.process_time).
Summary of the differences
time.process_time() measures the total CPU time used by the current process (a process running N busy threads accumulates CPU time N times as fast, because the time taken by each thread is added to the total), while time.perf_counter() is aligned with real-life (wall clock) time. In general we expect time.process_time() to be longer than time.perf_counter() for CPU-bound multi-threaded code, as you also have seen in your example. Furthermore, time.process_time() will generally be longer than the real-life time, as multiple threads can work concurrently and all of their computation time is added up; it will only be equal to the wall time for a single-threaded application.
2023-05-03 06:36:41 | 0 | tensorflow,pip,windows-10,conda,python-3.10 | 2 | 76,497,459 | How to install tensorflow-gpu | 76,161,038 | false | 1,300 | How to install tensorflow-gpu on windows 10 with Python 3.10
conda and pip not works
anyone have idea how to install tensorflow-gpu with Python 3.10 ?
Windows 10
Python 3.10.10
I installed:
cudnn-windows-x86_64-8.9.0.131_cuda11-archive
cuda_12.1.1_531.14_windows
add to path
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\libnvvp
conda create --name cuda
conda activate cuda
(cuda) C:\Users\xxx>python -V
Python 3.10.10
(cuda) C:\Users\xxx>conda install -c conda-forge tensorflow-gpu
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
(cuda) C:\Users\xxx>pip install -U tensorflow-gpu
Collecting tensorflow-gpu
Using cached tensorflow-gpu-2.12.0.tar.gz (2.6 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [39 lines of output]
Traceback (most recent call last):
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\_vendor\packaging\requirements.py", line 35, in __init__
parsed = parse_requirement(requirement_string)
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\_vendor\packaging\_parser.py", line 64, in parse_requirement
return _parse_requirement(Tokenizer(source, rules=DEFAULT_RULES))
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\_vendor\packaging\_parser.py", line 82, in _parse_requirement
url, specifier, marker = _parse_requirement_details(tokenizer)
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\_vendor\packaging\_parser.py", line 126, in _parse_requirement_details
marker = _parse_requirement_marker(
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\_vendor\packaging\_parser.py", line 147, in _parse_requirement_marker
tokenizer.raise_syntax_error(
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\_vendor\packaging\_tokenizer.py", line 163, in raise_syntax_error
raise ParserSyntaxError(
setuptools.extern.packaging._tokenizer.ParserSyntaxError: Expected end or semicolon (after name and no valid version specifier)
python_version>"3.7"
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\xxx\AppData\Local\Temp\pip-install-t4yr5nvl\tensorflow-gpu_48989e8e399a4c5da19c2d876e93f0d7\setup.py", line 40, in <module>
setuptools.setup()
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\__init__.py", line 106, in setup
_install_setup_requires(attrs)
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\__init__.py", line 77, in _install_setup_requires
dist.parse_config_files(ignore_option_errors=True)
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\dist.py", line 910, in parse_config_files
self._finalize_requires()
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\dist.py", line 607, in _finalize_requires
self._move_install_requirements_markers()
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\dist.py", line 647, in _move_install_requirements_markers
inst_reqs = list(_reqs.parse(spec_inst_reqs))
File "C:\Users\xxx\anaconda3\envs\cuda\lib\site-packages\setuptools\_vendor\packaging\requirements.py", line 37, in __init__
raise InvalidRequirement(str(e)) from e
setuptools.extern.packaging.requirements.InvalidRequirement: Expected end or semicolon (after name and no valid version specifier)
python_version>"3.7"
^
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
i found a solution with pycharm and ubuntu in windows but i don't want to do it. | 0 | 2 | 1 | I think you should downgrade python version to 3.9 and using conda install tensorflow-gpu |
2023-05-03 07:37:40 | 0 | python,py2exe,python-xmlschema | 1 | 76,161,543 | Why does my python exe file give an winerror 3 (missing modules py2exe) | 76,161,458 | false | 52 | Update:
I just tried to replicate my problem, and I found that there seems to be a problem when creating the exe file.
py2exe says that 98 modules are missing and one of these are what is causing my winerror. I cannot, however, find the solution to these missing modules.
INFO:runtime:Analyzing the code
INFO:runtime:Found 622 modules, 98 are missing, 0 may be missing
98 missing Modules
------------------
? __main__ imported from bdb, pdb
? _frozen_importlib imported from importlib, importlib.abc, zipimport
? _frozen_importlib_external imported from importlib, importlib._bootstrap, importlib.abc, zipimport
? _posixshmem imported from multiprocessing.resource_tracker, multiprocessing.shared_memory
? _winreg imported from platform
? asyncio.DefaultEventLoopPolicy imported from -
? converters.AbderaConverter imported from xmlschema
? converters.BadgerFishConverter imported from xmlschema
? converters.ColumnarConverter imported from xmlschema
? converters.ElementData imported from xmlschema, xmlschema.aliases, xmlschema.dataobjects, xmlschema.validators.elements, xmlschema.validators.groups
? converters.JsonMLConverter imported from xmlschema
? converters.ParkerConverter imported from xmlschema
? converters.UnorderedConverter imported from xmlschema
? converters.XMLSchemaConverter imported from xmlschema, xmlschema.aliases, xmlschema.dataobjects, xmlschema.validators.elements, xmlschema.validators.schemas
? cssselect imported from lxml.cssselect
? datatypes.AbstractBinary imported from elementpath.serialization
? datatypes.AbstractDateTime imported from elementpath.serialization, elementpath.xpath1._xpath1_operators, elementpath.xpath_tokens
? datatypes.AbstractQName imported from elementpath.compare
? datatypes.AnyAtomicType imported from elementpath.sequence_types, elementpath.serialization, elementpath.xpath_context, elementpath.xpath_tokens
? datatypes.AnyURI imported from elementpath.compare, elementpath.serialization, elementpath.xpath1._xpath1_functions, elementpath.xpath1._xpath1_operators, elementpath.xpath2._xpath2_functions, elementpath.xpath2._xpath2_operators, elementpath.xpath30._xpath30_functions, elementpath.xpath_tokens
? datatypes.ArithmeticProxy imported from elementpath.xpath1._xpath1_operators, elementpath.xpath2._xpath2_functions
? datatypes.AtomicValueType imported from elementpath.schema_proxy, elementpath.xpath2.xpath2_parser, elementpath.xpath_nodes, elementpath.xpath_tokens
? datatypes.Base64Binary imported from elementpath.xpath2._xpath2_constructors
? datatypes.BooleanProxy imported from elementpath.xpath2._xpath2_constructors
? datatypes.Date imported from elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions
? datatypes.Date10 imported from elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions, elementpath.xpath30._xpath30_functions, elementpath.xpath_tokens
? datatypes.DateTime imported from elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions
? datatypes.DateTime10 imported from elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions, elementpath.xpath30._xpath30_functions, elementpath.xpath_tokens
? datatypes.DateTimeStamp imported from elementpath.xpath2._xpath2_constructors
? datatypes.DayTimeDuration imported from elementpath.xpath1._xpath1_functions, elementpath.xpath1._xpath1_operators, elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions, elementpath.xpath_tokens
? datatypes.DoubleProxy imported from elementpath.xpath2._xpath2_functions, elementpath.xpath_tokens
? datatypes.DoubleProxy10 imported from elementpath.xpath2._xpath2_operators, elementpath.xpath_tokens
? datatypes.Duration imported from elementpath.xpath1._xpath1_functions, elementpath.xpath1._xpath1_operators, elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions, elementpath.xpath2._xpath2_operators, elementpath.xpath_tokens
? datatypes.Float10 imported from elementpath.xpath1._xpath1_functions, elementpath.xpath2._xpath2_functions
? datatypes.GregorianDay imported from elementpath.xpath2._xpath2_constructors
? datatypes.GregorianMonth imported from elementpath.xpath2._xpath2_constructors
? datatypes.GregorianMonthDay imported from elementpath.xpath2._xpath2_constructors
? datatypes.GregorianYear imported from elementpath.xpath2._xpath2_constructors
? datatypes.GregorianYear10 imported from elementpath.xpath2._xpath2_constructors
? datatypes.GregorianYearMonth imported from elementpath.xpath2._xpath2_constructors
? datatypes.GregorianYearMonth10 imported from elementpath.xpath2._xpath2_constructors
? datatypes.HexBinary imported from elementpath.xpath2._xpath2_constructors
? datatypes.Id imported from elementpath.xpath2._xpath2_functions
? datatypes.Integer imported from elementpath.xpath2._xpath2_operators, elementpath.xpath_tokens
? datatypes.Language imported from elementpath.xpath_context
? datatypes.NCName imported from elementpath.xpath2._xpath2_functions
? datatypes.NumericProxy imported from elementpath.sequence_types, elementpath.xpath1._xpath1_operators, elementpath.xpath2._xpath2_functions, elementpath.xpath30._xpath30_functions
? datatypes.QName imported from elementpath.exceptions, elementpath.sequence_types, elementpath.serialization, elementpath.xpath1.xpath1_parser, elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions, elementpath.xpath2._xpath2_operators, elementpath.xpath2.xpath2_parser, elementpath.xpath30._xpath30_functions, elementpath.xpath30._xpath30_operators, elementpath.xpath_tokens
? datatypes.StringProxy imported from elementpath.xpath1._xpath1_functions
? datatypes.Time imported from elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions, elementpath.xpath30._xpath30_functions
? datatypes.Timezone imported from elementpath.xpath_context, elementpath.xpath_tokens
? datatypes.UntypedAtomic imported from elementpath.compare, elementpath.serialization, elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions, elementpath.xpath2._xpath2_operators, elementpath.xpath2.xpath2_parser, elementpath.xpath30._xpath30_functions, elementpath.xpath_nodes, elementpath.xpath_tokens
? datatypes.YearMonthDuration imported from elementpath.xpath1._xpath1_functions, elementpath.xpath1._xpath1_operators, elementpath.xpath2._xpath2_constructors, elementpath.xpath2._xpath2_functions
? datatypes.get_atomic_value imported from elementpath.xpath2._xpath2_operators, elementpath.xpath_nodes
? datatypes.xsd10_atomic_types imported from elementpath.sequence_types, elementpath.xpath2._xpath2_constructors, elementpath.xpath30._xpath30_functions, elementpath.xpath_tokens
? datatypes.xsd11_atomic_types imported from elementpath.sequence_types, elementpath.xpath2._xpath2_constructors
? dummy.Process imported from multiprocessing.pool
? java.lang imported from platform
? org.python.core imported from copy, pickle
? os.path imported from ctypes._aix, distutils.file_util, elementpath.xpath2._xpath2_functions, os, pkgutil, py_compile, sysconfig, tracemalloc, unittest, unittest.util, xmlschema.resources
? readline imported from cmd, code, pdb
? regex.RegexError imported from elementpath, elementpath.xpath2._xpath2_functions, elementpath.xpath30._xpath30_functions
? regex.translate_pattern imported from elementpath, elementpath.xpath2._xpath2_functions, elementpath.xpath30._xpath30_functions, elementpath.xpath30.xpath30_helpers
? resource imported from test.support
? urllib2 imported from lxml.ElementInclude
? urlparse imported from lxml.ElementInclude
? validators.XMLSchema imported from xmlschema
? validators.XMLSchema10 imported from xmlschema, xmlschema.documents
? validators.XMLSchema11 imported from xmlschema
? validators.XMLSchemaBase imported from xmlschema, xmlschema.aliases, xmlschema.documents
? validators.XMLSchemaChildrenValidationError imported from xmlschema
? validators.XMLSchemaDecodeError imported from xmlschema
? validators.XMLSchemaEncodeError imported from xmlschema
? validators.XMLSchemaImportWarning imported from xmlschema
? validators.XMLSchemaIncludeWarning imported from xmlschema
? validators.XMLSchemaModelDepthError imported from xmlschema
? validators.XMLSchemaModelError imported from xmlschema
? validators.XMLSchemaNotBuiltError imported from xmlschema, xmlschema.xpath
? validators.XMLSchemaParseError imported from xmlschema
? validators.XMLSchemaTypeTableWarning imported from xmlschema
? validators.XMLSchemaValidationError imported from xmlschema, xmlschema.aliases, xmlschema.dataobjects, xmlschema.documents
? validators.XMLSchemaValidatorError imported from xmlschema
? validators.XsdAnyAttribute imported from xmlschema.aliases
? validators.XsdAnyElement imported from xmlschema.aliases
? validators.XsdAssert imported from xmlschema.aliases
? validators.XsdAttribute imported from xmlschema, xmlschema.aliases
? validators.XsdAttributeGroup imported from xmlschema.aliases
? validators.XsdComplexType imported from xmlschema.aliases
? validators.XsdComponent imported from xmlschema, xmlschema.aliases
? validators.XsdElement imported from xmlschema, xmlschema.aliases, xmlschema.converters.abdera, xmlschema.converters.badgerfish, xmlschema.converters.columnar, xmlschema.converters.default, xmlschema.converters.jsonml, xmlschema.converters.parker, xmlschema.converters.unordered, xmlschema.dataobjects
? validators.XsdGlobals imported from xmlschema
? validators.XsdGroup imported from xmlschema.aliases
? validators.XsdNotation imported from xmlschema.aliases
? validators.XsdSimpleType imported from xmlschema.aliases
? validators.XsdType imported from xmlschema
? xpath1.XPath1Parser imported from elementpath, elementpath.sequence_types, elementpath.xpath2.xpath2_parser, elementpath.xpath_selectors, elementpath.xpath_tokens
? xpath2.XPath2Parser imported from elementpath, elementpath.schema_proxy, elementpath.xpath30.xpath30_parser, elementpath.xpath_selectors, elementpath.xpath_tokens
? xpath30.XPath30Parser imported from elementpath.schema_proxy, elementpath.xpath_selectors, elementpath.xpath_tokens
Building 'dist\xml_validator.exe'.
I just created a Python exe file with py2exe; in the script I use the xmlschema library.
The exe was created correctly, as far as I can see. The freeze script used:
from py2exe import freeze
freeze(
windows=["xml_validator.py"],
options={
"includes": ["sys", "os"],
"packages": ["xmlschema", "lxml", "tkinter"]
},
version_info= {
"version": "1.0",
"description": "A xml validator for incomming files"
}
)
but when I run the program I get this error in the .log file:
Traceback (most recent call last):
File "urllib\request.pyc", line 1505, in open_local_file
FileNotFoundError: [WinError 3] Den angivne sti blev ikke fundet: 'C:\\Users\\fmk\\OneDrive - Fors A S\\Dokumenter\\XML validator\\Python\\dist\\library.zip\\xmlschema\\schemas\\XSD_1.0\\XMLSchema.xsd'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "xml_validator.py", line 4, in <module>
File "xmlschema\__init__.pyc", line 21, in <module>
File "xmlschema\dataobjects.pyc", line 24, in <module>
File "xmlschema\validators\__init__.pyc", line 38, in <module>
File "xmlschema\validators\schemas.pyc", line 2183, in <module>
File "xmlschema\validators\schemas.pyc", line 134, in __new__
File "xmlschema\validators\schemas.pyc", line 768, in create_meta_schema
File "xmlschema\validators\schemas.pyc", line 345, in __init__
File "xmlschema\resources.pyc", line 482, in __init__
File "xmlschema\resources.pyc", line 743, in parse
File "urllib\request.pyc", line 216, in urlopen
File "urllib\request.pyc", line 519, in open
File "urllib\request.pyc", line 536, in _open
File "urllib\request.pyc", line 496, in _call_chain
File "urllib\request.pyc", line 1483, in file_open
File "urllib\request.pyc", line 1522, in open_local_file
urllib.error.URLError: <urlopen error [WinError 3] Den angivne sti blev ikke fundet: 'C:\\Users\\fmk\\OneDrive - Fors A S\\Dokumenter\\XML validator\\Python\\dist\\library.zip\\xmlschema\\schemas\\XSD_1.0\\XMLSchema.xsd'>
When I look in the folder, it is correct that this file does not exist, but why is it looking for a file there?
It seems to me it is looking for a part of the library that is not there. | 0 | 1 | 1 | I'm pretty sure it is caused by a malformed path.
As you can see, it is trying to search for C:\\Users\\fmk\\OneDrive - Fors A S\\Dokumenter\\XML validator\\Python\\dist\\library.zip\\xmlschema\\schemas\\XSD_1.0\\XMLSchema.xsd
The folder OneDrive - Fors A S has some spaces in the name, which should be escaped, either by surrounding the path with "" or by putting \ before the spaces. I can't see the code snippet where you declare the path, but you could try a simple path without spaces, just to test. If this is the problem, you can then leverage the standard library to build a sanitized relative path.
If you don't declare any path at all, you should check the documentation to see whether it is possible to specify a new path, or try launching the script from somewhere else.
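If quoting does turn out to be the issue: the traceback goes through urllib.request, which treats the location as a local file URL, and in URLs spaces are percent-encoded rather than backslash-escaped. A small stdlib sketch with a purely hypothetical path:

```python
from urllib.request import pathname2url

# Hypothetical POSIX-style path containing spaces, like the OneDrive folder above.
path = "/Users/fmk/OneDrive - Fors A S/Dokumenter/XMLSchema.xsd"

# pathname2url percent-encodes characters such as spaces for use in file: URLs.
quoted = pathname2url(path)
print(quoted)
```

This only addresses the quoting of the path, not whether py2exe actually bundled the schema files.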
2023-05-03 08:16:47 | 1 | python,scikit-learn,nlp,gensim,word2vec | 1 | 76,169,399 | Fine tune a custom word2vec model with gensim 4 | 76,161,758 | true | 88 | I am new to gensim, especially gensim 4. To be honest, I found it quite hard to understand from the docs how to fine-tune a pre-trained word2vec model.
I have a binary pre-trained model saved locally. I would like to fine-tune this model on new data.
My questions are:
How do I create the vocab, merging both vocabularies?
Is this the correct approach to fine-tune a word2vec model?
So far I have created the following code:
import numpy as np
import pandas as pd
from gensim.models import KeyedVectors, Word2Vec

# path to pretrained model
pretrained_path = '../models/german.model'
# new data
sentences = df.stem_token_wo_sw.to_list() # Pandas column containing text data
# Create new model
w2v_de = Word2Vec(
min_count = min_count,
vector_size = vector_size,
window = window,
workers = workers,
)
# Build vocab
w2v_de.build_vocab(sentences)
# Extract number of examples
total_examples = w2v_de.corpus_count
# Load pretrained model
model = KeyedVectors.load_word2vec_format(pretrained_path, binary=True)
# Add previous words from pretrained model
w2v_de.build_vocab([list(model.key_to_index.keys())], update=True)
# Train model
w2v_de.train(sentences, total_examples=total_examples, epochs=2)
# create array of vectors
vectors = np.asarray(w2v_de.wv.vectors)
# create array of labels
labels = np.asarray(w2v_de.wv.index_to_key)
# create dataframe of vectors for each word
w_emb = pd.DataFrame(
index = labels,
columns = [f'X{n}' for n in range(1, vectors.shape[1] + 1)],
data = vectors,
)
After training I use PCA to reduce the dimensions from 300 to two, in order to plot the word-embedding space.
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline

# create pipeline
pipeline = Pipeline(
steps = [
# ('scaler', StandardScaler()),
('pca', PCA(n_components=2)),
]
)
# fit pipeline
pipeline.fit(w_emb)
# Transform vectors
vectors_transformed = pipeline.transform(w_emb)
w_emb_transformed = (
pd.DataFrame(
index = labels,
columns = ['PC1', 'PC2'],
data = vectors_transformed,
)
)
The labels and vectors only contain the new words, not the old + new words, and so do my plot and PCA values. | 1.2 | 1 | 1 | There are no official Gensim docs on how to fine-tune a Word2Vec model because there's no well-established/reliable way to do fine-tuning & be sure it's helping.
There's thus no direct support in Gensim, nor standard recipe that Gensim could recommend to non-expert users.
People have patched together approaches, reaching into Gensim steps/models directly, to try to accomplish fine-tuning. But the average quality of such write-ups that I've seen is very poor, with little evaluation of whether the steps are working, or discussion of the tradeoffs and considerations when expanding beyond the write-up's toy setup.
That is: they're often misleading the unaware into thinking this is a well-established process, with dependable results, when it's not.
Regarding your process, some comments:
Your initial creation of a vocabulary will get all of your corpus's words into the model, with accurate frequency counts based on your corpus. (Frequencies affect how a model does negative-sampling & frequent-word downsampling, and which words get ignored entirely because they appear fewer times than the configured min_count.)
You are then successfully requesting that the model's vocabulary expand with the .build_vocab(..., update=True) call - but by providing a mere list of the words in the new corpus, every word gets an effective occurrence-count of just 1. With sensible values of min_count (such as the default 5, or higher when your corpus is large enough), none of those words from the pre-trained model will be added to the vocabulary.
But even if you did fix this step – either setting min_count unwisely low, or artificially repeating the words – the build_vocab() step only makes slots for a word, & randomly initializes its vector to ready the word for training. You're not doing anything to copy over the actual vectors from the model into w2v_de. So all those 'borrowed' words will just be untrained noise in your actual model. And, these words don't have accurate frequency counts to participate properly in training.
When you train, on just your corpus, only your local-corpus words will appear in the corpus, and thus appear in the positive word-to-word training examples. But some of the imported words (if any) will occasionally be chosen as negative-examples, if you're using negative-sampling mode. (But, they won't be chosen at the typical frequencies - because of the lack of frequency info.) So you'll have a weird training run, primarily updating only your corpus's words, sometimes negative-example updating the other words (but never positive-updating them). The randomly-initialized imported words will thus be skewed further, but not in any useful way.
At the end, you might have passable vectors for your in-corpus words. (Though: epochs=2 is unlikely to be sufficient training unless your corpus is so very large that every word of interest appears in many, many diverse contexts.) But the words you tried to import will have just junk vectors, having been initialized randomly, never influenced in their weights by your pretrained model at all, just skewed a bit by sometimes appearing as negative examples.
In short: a mess, with the extra non-standard steps attempting fine-tuning doing nothing useful. (If you've copied this pattern from an online resource faithfully – that resource may have been offered by an author that didn't know what they were doing.)
A far surer approach, if you find your corpus is missing words, is to obtain a larger corpus. As one example, if your pretrained vectors were trained on something like Wikipedia, you can just mix your corpus with Wikipedia texts, to have a combined corpus with good usage examples of all the same words. (In some cases, you might be able to find corpus-extending materials that are more appropriate for your project/domain than generic Wikipedia reference text. Alternatively, you might choose to interleave & repeat your corpus to essentially give your texts greater weight, in the combined corpus.)
A straightforward from-scratch training-run on this new extended corpus will co-train all words in the same model, with accurate counts matching words' appearances in the combined corpus.
Another approach that's sometimes used to re-use word-vectors from elsewhere is to learn a projection between your new/small model, and the pretrained/larger model, based on words that are shared between the two models. Then, use that projection to move the extra words needed – in one or the other model – to new positions, that render them comparable, "in the same coordinate space", to the other imported vectors. There's an example of doing this in the Gensim TranslationMatrix class & demo notebook. |
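A minimal sketch of the combined-corpus idea above, with purely hypothetical toy corpora: repeat or interleave your own texts alongside the reference texts, then do one from-scratch training run (e.g. gensim's Word2Vec(combined, ...)) on the result.

```python
from itertools import chain

# Hypothetical tokenized corpora: lists of token lists.
reference_corpus = [["the", "city", "of", "berlin"], ["a", "generic", "reference", "text"]]
domain_corpus = [["incoming", "xml", "invoice", "validation"]]

# Repeat the small domain corpus to up-weight it, then combine for one
# from-scratch training run with accurate co-occurrence counts.
combined = list(chain(reference_corpus, domain_corpus * 3))
print(len(combined))
```

The repetition factor (3) is an arbitrary illustration of giving the smaller domain corpus more weight.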
2023-05-03 08:38:24 | -1 | python,opencv | 1 | 76,167,186 | Pycharm/Python OpenCV and CV2 importing error | 76,161,938 | false | 58 | Even after installing opencv using:
pip install opencv-python
I'm getting this error while importing it in pycharm
ERROR: Could not find a version that satisfies the requirement cv2 (from versions: none)
ERROR: No matching distribution found for cv2
I tried downloading it in different ways but it doesn't import | -0.197375 | 1 | 1 | I had the same problem once when I used VS Code. I tried to run the same code in PyCharm (with the opencv-python library included) and it worked with no problems!
2023-05-03 09:00:29 | 1 | jupyter,python-module,pyodide,jupyterlite | 2 | 76,208,774 | Jupyterlite: accessing functions and libraries in a separate file? | 76,162,095 | true | 88 | I have a JupyterLite/pyodide session running a nodebook called "Workshop1.ipynb`. In the same (root) directory as that notebook, I have a file containing useful functions which also does some imports, e.g.
# functions.py
import numpy as np

def a_useful_function(val):
    return np.array(val) * 2
I want to call import functions as the first line of my notebook, to use those functions in the notebook, but I then get:
ModuleNotFoundError: The module 'numpy' is included in the Pyodide distribution, but it is not installed.
You can install it by calling:
await micropip.install("numpy") in Python, or
await pyodide.loadPackage("numpy") in JavaScript
Is it possible to load modules inside a file which is then imported into the notebook, without calling e.g. import numpy and/or await micropip.install("numpy") in the notebook itself? | 1.2 | 1 | 1 | Is it possible to load modules inside a file which is then imported into the notebook, without calling e.g. import numpy and/or await micropip.install("numpy") in the notebook itself?
The short answer is no. Similar to regular Python you need to indicate which packages should be installed before running your code.
From a technical perspective, currently in Pyodide we can't install a package (which is an async operation) during the import (which is sync), unless we recursively parse imports, which has a performance overhead.
Beyond that, loading packages that are imported in a pyodide.runPythonAsync code snippet is a convenient functionality (which JupyterLite is using). However, I don't believe it would be a good idea to extend it to second-level imports, i.e. to follow your import functions and see what imports it has. For the same reasons, Python doesn't pip install packages on import in general.
In practice, the workaround proposed by @TachyonicBytes would work if you don't want to specify requirements in the notebook.
2023-05-03 09:54:30 | 0 | python,mysql,loops | 2 | 76,162,910 | Using a function in a loop in Python | 76,162,531 | false | 73 | My aim is to get the last price from a history table (History_price) and inject it into a parameter table (param_forex).
I have defined the following function:
mycursor.execute(f"UPDATE `param_forex` SET `Rate` = %s, `Update_Time` = CURRENT_TIME() WHERE `param_forex`.`Ticker` LIKE %s",
(Rate, Ticker))
mydb.commit()
mycursor.close
The function works on its own. It simply fills a table with new prices on some particular lines.
Now I try to incorporate it in my loop. It's actually a double loop because of how mycursor works in mysql.connector
def fillpricetable_assetid():
    for x in ['EURUSD','EURJPY','EURGBP']:
        mycursor.execute(f"SELECT `Price` FROM `History_price` WHERE `Ticker` LIKE '{x}' ORDER BY `History_price`.`Time` DESC LIMIT 1")
        for y in mycursor:
            updateForexdb(x.lower(), y[0])
And it does not work anymore...
I get
File "c:\...\library_import.py", line 116, in <module>
fillpricetable_assetid()
File "c:\...\library_import.py", line 89, in fillpricetable_assetid
updateForexdb(x.lower(),y[0])
NameError: name 'updateForexdb' is not defined
How is that possible, when I have just defined it above? | 0 | 1 | 1 | I inverted the order, moving the function definition above the code that uses it, and now it works fine.
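For readers hitting the same NameError: it comes from execution order, not from the position of the call inside a function body. A minimal sketch (names hypothetical):

```python
def fill_prices():
    # update_forex is looked up when this line runs, not when fill_prices
    # is defined, so a definition further down the file is fine ...
    return update_forex("eurusd", 1.234)

def update_forex(ticker, rate):
    return (ticker, rate)

# ... as long as both defs have executed before the first call happens.
result = fill_prices()
print(result)
```

A NameError only occurs when the call actually executes before the definition line has run.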
2023-05-03 09:58:19 | 0 | python,django,api,backend | 2 | 76,201,119 | Email address on Django API returns 404 | 76,162,565 | false | 69 | I have an endpoint created like this:
class OTPSViewSet(viewsets.ViewSet, mixins.RetrieveModelMixin, mixins.CreateModelMixin):
    """
    A simple ViewSet for listing or retrieving users.
    """
    throttle_classes = [OncePerDayUserThrottle]
    authentication_classes = []
    permission_classes = []

    def retrieve(self, request, pk=None):
        response = self.get_otp(pk)
        return response

    def create(self, request):
        serializer = OtpsForms(data=request.data)
        if serializer.is_valid():
            response = self.verify_otp(request)
            return response
        else:
            return response_error(serializer.errors)

    def get_otp(self, username):
        """Send OTP by email

        Args:
            username (string): username
        """
        username = username.strip().lower()
        user = _get_all_by_username(username)
        ...
What happens is that when I make a call to the endpoint with a "username" such as "javi" or "javi@gmail", the call ("GET /api/otps/javi/ HTTP/1.1" 200) returns a 200, but when the username is an email like "javi@gmail.com" the call ("GET /api/otps/javi@gmail.com/ HTTP/1.1" 404) returns a 404 Not Found status.
I think it has to do with having a dot (.) inside the URL, but I don't know how to fix it.
Thanks in advance! | 0 | 2 | 1 | I fixed it by adding lookup_value_regex = '[^/]+' at the beginning of the viewset, and now it is working.
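For context on why the dot mattered: DRF's routers build the URL pattern from the viewset's lookup_value_regex, whose documented default is [^/.]+, a pattern that stops at the first dot, so an email pk never matches the route. A quick re sketch of the two patterns:

```python
import re

default_lookup = re.compile(r"^[^/.]+$")  # DRF's default lookup_value_regex
relaxed_lookup = re.compile(r"^[^/]+$")   # the fix: allow dots in the pk

plain_ok = bool(default_lookup.match("javi"))
email_default = bool(default_lookup.match("javi@gmail.com"))
email_relaxed = bool(relaxed_lookup.match("javi@gmail.com"))
print(plain_ok, email_default, email_relaxed)
```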
2023-05-03 10:32:30 | 0 | python,pandas,scikit-learn,cross-validation | 1 | 76,163,383 | Time series cross-validation removing new categories from test | 76,162,845 | false | 37 | I want to use time series cross validation to evaluate my model. I'm trying to leverage TimeSeriesSplit and cross_validate from sklearn.
Let's say my model uses a categorical feature with categories A, B, and others. In practice I want my model to give predictions only for categories A and B seen during training; for all other categories it should raise an error.
How can I cross-validate my model while enforcing this behaviour? Could I still use the sklearn implementations and adapt them with minor changes, or would I have to build my cross-validation from scratch? | 0 | 1 | 1 | Import TimeSeriesSplit.
Then create an instance of TimeSeriesSplit (set the test size parameter to 1)
Then define a method, which filters out all the unwanted categories from the train and test dataset and outputs your filtered data.
Import cross_validate from scikit-learn, then make sure to call your custom function for every iteration of your cross_validate.
This way you can implement cross-validation with minor changes rather than implementing it all from scratch.
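A minimal pure-Python sketch of the procedure described above, with hypothetical data: expanding, time-ordered splits where test rows are dropped whenever their category was never seen in training (the same filtering could be wired around sklearn's TimeSeriesSplit/cross_validate):

```python
rows = [  # (time, category, value), already time-ordered; data is hypothetical
    (1, "A", 10), (2, "B", 11), (3, "A", 12),
    (4, "C", 13), (5, "B", 14), (6, "C", 15),
]

def time_ordered_splits(rows, n_splits=3):
    fold = len(rows) // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train = rows[: k * fold]
        test = rows[k * fold : (k + 1) * fold]
        seen = {cat for _, cat, _ in train}
        # Drop test rows whose category never appeared during training.
        yield train, [r for r in test if r[1] in seen]

splits = list(time_ordered_splits(rows))
for train, test in splits:
    print(len(train), [cat for _, cat, _ in test])
```

Category "C" first appears only in the test windows here, so its rows are filtered out of evaluation entirely.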
2023-05-03 12:06:24 | 1 | python,mongodb,pymongo,asyncmongo | 1 | 76,164,207 | Want to Use UpdateMany instead of UpdateOne for syncing MongoDB Collection | 76,163,629 | false | 23 | I am a new user of MongoDB. I am working with a large amount of transaction data which is updated periodically, after each day or week. I created my code for this use case, where I am using pymongo.UpdateOne, but as I have a large amount of data, I want to use UpdateMany. Here's a snippet of my code:
from datetime import datetime

import pandas as pd
import pymongo
from pymongo import MongoClient

client = MongoClient()
database = client['Eg']
collection = database['eg']
start = datetime.now()
df = pd.read_csv("eg.csv")
df['_id'] = df["Factory_Id"] + df["Order ID"]
data = df.to_dict(orient="records")
requests = []
for doc in data:
    filter = {'_id': doc['_id']}
    update = {'$set': doc}
    request = pymongo.UpdateOne(filter, update, upsert=True)
    requests.append(request)

if requests:
    result = collection.bulk_write(requests)
Here I am using UpdateOne for syncing but I want it to sync once and not use a for-loop with the help of UpdateMany. | 0.197375 | 1 | 1 | You can't use UpdateMany in your example, as each filter is different for each update; UpdateMany only works where you are applying the same filter for all the updates.
As you are already using bulk_write, there isn't a performance benefit to be gained anyway. |
2023-05-03 13:03:32 | -1 | python,c#,json,post | 1 | 76,164,517 | python:requests.post(url , headers=headers, data=json.dumps(order)) is work,when use C# return error | 76,164,168 | false | 35 | In Python, this works:
requests.post(url , headers=headers, data=json.dumps(order))
When I use C#, the API returns the message: Invalid HTTP Request Input
string serviceUrl = string.Format("{0}{1}", this._baseUri, uri);
//System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
//serviceUrl = "http://mj.qzdyyy.com/ChsMobilePay/ChsMobilePayApi/v1/GetPayAuthNo";
HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(serviceUrl);
byte[] buf = Encoding.GetEncoding("UTF-8").GetBytes(data);
myRequest.Method = "POST";
myRequest.ContentLength = buf.Length;
myRequest.KeepAlive = true;
myRequest.Accept = "*/*";
if (headerDic != null && headerDic.Count != 0)
{
foreach (var item in headerDic)
{
if (item.Key == "ContentType")
{
myRequest.ContentType = item.Value;
}
else
{
myRequest.Headers.Add(item.Key, item.Value );
}
}
}
//myRequest.Headers.Add("Content-Type", "application/json");
myRequest.ContentType = "application/json";
myRequest.AllowAutoRedirect = true;
using (var stream = myRequest.GetRequestStream())
{
stream.Write(buf, 0, buf.Length);
}
using (myResponse = (HttpWebResponse)myRequest.GetResponse())
{
StreamReader reader = new StreamReader(myResponse.GetResponseStream(), Encoding.UTF8);
//string returnXml = HttpUtility.UrlDecode(reader.ReadToEnd());
returnXml = reader.ReadToEnd();
reader.Close();
}
I tried using the Fiddler monitor; when I use C#, the headers also include "Content-Type: application/json". | -0.197375 | 1 | 1 | I solved it. In the API, some numbers should be converted to strings (toString).
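Assuming the fix was serializing the numeric fields as strings before posting, here is the same idea sketched on the Python side (field names are hypothetical):

```python
import json

# Hypothetical order payload with numeric fields.
order = {"orderId": 1231, "amount": 12.5, "desc": "test"}

# Serialize numbers as strings so both clients produce identical JSON bodies.
payload = json.dumps({k: (str(v) if isinstance(v, (int, float)) else v)
                      for k, v in order.items()})
print(payload)
```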
2023-05-03 14:18:36 | 1 | python,dictionary | 2 | 76,164,991 | Python nested dictionary on __setitem__ only. __getitem__ raises KeyError if item is not found | 76,164,925 | false | 41 | I am trying to create a flexible nested dictionary that will raise a key error if a user tries to access a key that doesn't exist.
For a fully nested dict I've used this in the past:
from collections import defaultdict

nested_dict = lambda: defaultdict(nested_dict)
But I don't like that when I try to access a key that doesn't exist nested_dict returns a defaultdict. I would like to be able to add key value pairs even if the key doesn't exist, but I would like it to raise a KeyError if I try to directly access a key that doesn't exist.
I've tried things like this:
class NestedDict(dict):
    def __getitem__(self, item):
        if type(self[item]) == NestedDict:
            raise KeyError
        return self[item]
But can't figure out how to break the recursion. Any ideas how to do this? | 0.099668 | 1 | 1 | What you're looking for isn't possible. The first part of executing d[a][b] = c is reading d[a]. If you want to forbid reading d[a] when that key isn't present, then you can't write to d[a][b] when d[a] isn't present. |
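The read-before-write behaviour is easy to verify by logging the dict's hooks: even a pure assignment d['a']['b'] = 1 triggers __getitem__ first.

```python
class LoggingDict(dict):
    calls = []

    def __getitem__(self, key):
        LoggingDict.calls.append(("get", key))
        return super().__getitem__(key)

    def __setitem__(self, key, value):
        LoggingDict.calls.append(("set", key))
        super().__setitem__(key, value)

d = LoggingDict(a={})     # the inner value is a plain dict
d["a"]["b"] = 1           # even pure assignment reads d["a"] first
print(LoggingDict.calls)  # only the read of "a" is logged on the outer dict
```

The inner assignment happens on the plain dict returned by the lookup, so any KeyError policy you put in __getitem__ necessarily applies to writes too.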
2023-05-04 00:01:07 | 5 | python,c,python-c-api | 1 | 76,168,841 | A memory leak in a simple Python C-extension | 76,168,695 | true | 100 | I have some code similar to the one below. That code leaks, and I don't know why. The thing that leaks is the simple creation of a Python class instance inside C code. The function I use to check the leak is create_n_times, defined below, which just creates new Python instances and dereferences them in a loop.
This is not an MWE per se, but part of an example. To make it easier to understand, what the code does is:
The Python code defines the dataclass and registers it into the C-extension using set_ip_settings_type.
Then, a C-extension function create_n_times is called and that function creates and destroys n instances of the Python dataclass.
Can anyone help?
In Python:
from dataclasses import dataclass

import c_api

@dataclass
class IpSettings:
    ip: str
    port: int
    dhcp: bool

c_api.set_ip_settings_type(IpSettings)
c_api.create_n_times(100000)
In C++ I have the following code that's compiled into a Python extension called c_api (it's a part of that library's definition):
#include <Python.h>
// ... Other functions including a "PyInit" function
extern "C" {
PyObject* ip_settings_type = NULL;
PyObject* set_ip_settings_type(PyObject* tp)
{
Py_XDECREF(ip_settings_type);
Py_INCREF(tp);
ip_settings_type = tp;
return Py_None;
}
PyObject* create_n_times(PyObject* n)
{
long n_ = PyLong_AsLong(n);
for (int i = 0; i < n_; ++i)
{
PyObject* factory_object = ip_settings_type;
PyObject* args = PyTuple_New(3);
PyTuple_SetItem(args, 0, PyUnicode_FromString("123.123.123.123"));
PyTuple_SetItem(args, 1, PyLong_FromUnsignedLong(1231));
PyTuple_SetItem(args, 2, Py_False);
PyObject* obj = PyObject_CallObject(factory_object, args);
Py_DECREF(obj);
}
return Py_None;
}
} | 1.2 | 3 | 1 | PyTuple_SetItem steals the reference to the supplied object, but Py_False is a singleton object. When the args tuple is destroyed, the reference count for Py_False is getting mangled.
Use PyBool_FromLong(0) to create a new reference to Py_False, like the other two calls to PyTuple_SetItem. (see docs.python.org/3/c-api/bool.html) |
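A Python-side illustration of why stealing a reference to Py_False in particular is so damaging: False is a process-wide singleton, so its reference count is shared by everything in the interpreter.

```python
import sys

# Every expression that evaluates to False yields the very same object ...
singleton_check = (1 > 2) is False

# ... so a great many references already point at it; a C extension that
# erroneously decrements this shared count corrupts interpreter-wide state.
refs = sys.getrefcount(False)
print(singleton_check, refs > 2)
```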
2023-05-04 00:46:27 | 1 | python,discord,discord.py,discord-buttons | 1 | 76,193,419 | How to make a discord bot button that can download the file to the client's local with discord.py | 76,168,845 | true | 117 | I have made a button in the view class like this:
# Create Download Button
downloadButton = Button(style=discord.ButtonStyle.grey, label='Download', row=1)
downloadButton.callback = self.download
self.add_item(downloadButton)

async def download(self, interaction: discord.Interaction):
    # Change button color and accessibility after clicking it
    self.children[-2].style = discord.ButtonStyle.blurple
    await interaction.response.edit_message(view=self)

    response = requests.get(self.image_url)
    with open('my_photo.png', 'wb') as f:
        f.write(response.content)
It does download the photos to my project directory but what I really want is when a user click that button it will start downloading the photo to their computer. Is it possible to implement with discord.py? | 1.2 | 1 | 1 | It's not possible with the discord.py api to make a download on a client pc. you can only upload the image in discord or give the user an url where he can download the image. If you use an url you can make it download directly depending on the browser. But the Discord Client won't download anything by clicking on a button. |
2023-05-04 09:06:31 | 1 | python,selenium-webdriver,webdriverwait | 2 | 76,172,208 | What is the best way to handle waiting for elements that are not always present with Selenium? | 76,171,426 | true | 49 | For example if I have a code like this:
<div id='parent'>
<div class='profile-pic'>...</div>
<p class='email'>I'm not always present</p>
<span class='name'>John Smith</span>
</div>
Now, this is a random example, but it should help to visualize what I mean. I want to grab the profile picture, the email and the name. None of them is necessarily always present on the page.
I could set a wait for each element separately like this:
try:
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'email')))
except:
pass
But that would mean that every time one or more elements are not present on the page, I would wait 10 seconds for each of them. Now let's say I have 20 elements like this that may or may not be present, and I have 1000 pages to go through. This approach would take forever.
Now that's where my question comes in. What is the best solution to handle cases like this? Or is this something that should not be done with Selenium at all?
What I have been doing so far is selecting the parent element and waiting for it to load, but as I have learned, that does not guarantee that all the children are loaded. | 1.2 | 1 | 1 | What you can do here is iterate over the parent elements.
For each parent element, wait for its presence.
I can't tell from the snippet whether the parent element is visible or not; in case it is visible, wait for its visibility instead.
Now, wait for the desired child element's visibility.
Here you can use a short timeout, since this wait only covers the case where the parent element is already present (but maybe still not fully loaded) while the child element still needs to load.
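This parent-then-child pattern is just two polls with different deadlines. Below is a minimal, Selenium-free sketch of the polling idea (WebDriverWait does essentially this internally; the `find` callables stand in for hypothetical `driver.find_element` lookups):

```python
import time

def wait_for(find, timeout, poll=0.1):
    """Poll `find` until it returns a truthy result or `timeout` expires.

    Returns the result, or None on timeout instead of raising, so a
    missing optional element only costs its own (short) timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = find()
        if result:
            return result
        if time.monotonic() >= deadline:
            return None
        time.sleep(poll)

# Long wait for the mandatory parent, short wait for each optional child:
parent = wait_for(lambda: "parent-div", timeout=10)  # found at once, no waiting
email = wait_for(lambda: None, timeout=0.3)          # optional element, short poll
print(parent, email)
```

With a real driver you would pass a lambda that wraps `find_element` (catching the not-found exception and returning None), using the long timeout only for elements that must exist.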
2-3 seconds should be more than enough here, unless you have very bad internet connectivity or loading is extremely slow. |
2023-05-04 11:46:19 | 1 | python,sqlalchemy,langchain | 3 | 76,176,790 | cannot import CursorResult from sqlalchemy | 76,172,817 | false | 900 | I am using the langchain tool with streamlit, and after running the py file I get an error that CursorResult cannot be imported from sqlalchemy. | 0.066568 | 1 | 2 | Try this:
pip install -U sqlalchemy |
2023-05-04 11:46:19 | 1 | python,sqlalchemy,langchain | 3 | 76,176,879 | cannot import CursorResult from sqlalchemy | 76,172,817 | false | 900 | I am using the langchain tool with streamlit, and after running the py file I get an error that CursorResult cannot be imported from sqlalchemy. | 0.066568 | 1 | 2 | Do pip install langchain==0.0.157
It will solve the issue. |
2023-05-04 21:20:11 | 0 | python,django,django-models,django-views | 1 | 76,177,518 | Reduction of queries in ManyToMany field with prefetch_related | 76,177,432 | false | 34 | I want to further reduce the number of queries. I used prefetch_related, which decreased the number of queries, and I was wondering if it is possible to get down to one query. Let me show the code involved:
I have a view with prefetch_related:
class BenefitList(generics.ListAPIView):
serializer_class = BenefitGetSerializer
def get_queryset(self):
queryset = Benefit.objects.all()
queryset = queryset.filter(deleted=False)
qs = queryset.prefetch_related('nearest_first_nations__reserve_id')
return qs
I have the models used by the serializers. In here, it is important to notice the hybrid property name which I want to display along with reserve_id and reserve_distance:
benefit.py:
class IndianReserveBandDistance(models.Model):
reserve_id = models.ForeignKey(IndianReserveBandName,
on_delete=models.SET_NULL,
db_column="reserve_id",
null=True)
reserve_distance = models.DecimalField(max_digits=16, decimal_places=4, blank=False, null=False)
@property
def name(self):
return self.reserve_id.name
class Benefit(models.Model):
banefit_name = models.TextField(blank=True, null=True)
nearest_first_nations = models.ManyToManyField(IndianReserveBandDistance,
db_column="nearest_first_nations",
blank=True,
null=True)
Name field is obtained in the model IndianReserveBandName.
indian_reserve_band_name.py:
class IndianReserveBandName(models.Model):
ID_FIELD = 'CLAB_ID'
NAME_FIELD = 'BAND_NAME'
name = models.CharField(max_length=127)
band_number = models.IntegerField(null=True)
Then, the main serializer using BenefitIndianReserveBandSerializer to obtain the fields reserve_id, reserve_distance and name:
get.py:
class BenefitGetSerializer(serializers.ModelSerializer):
nearest_first_nations = BenefitIndianReserveBandSerializer(many=True)
The serializer to obtain the mentioned fields:
distance.py:
class BenefitIndianReserveBandSerializer(serializers.ModelSerializer):
class Meta:
model = IndianReserveBandDistance
fields = ('reserve_id', 'reserve_distance', 'name')
The above is resulting in two queries which I would like to be one:
SELECT ("benefit_nearest_first_nations"."benefit_id") AS "_prefetch_related_val_benefit_id",
"indianreservebanddistance"."id",
"indianreservebanddistance"."reserve_id",
"indianreservebanddistance"."reserve_distance"
FROM "indianreservebanddistance"
INNER JOIN "benefit_nearest_first_nations"
ON ("indianreservebanddistance"."id" = "benefit_nearest_first_nations"."indianreservebanddistance_id")
WHERE "benefit_nearest_first_nations"."benefit_id" IN (1, 2)
SELECT "indianreservebandname"."id",
"indianreservebandname"."name"
FROM "indianreservebandname"
WHERE "indianreservebandname"."id" IN (678, 140, 627, 660, 214, 607)
ORDER BY "indianreservebandname"."id" ASC
I am expecting the following query:
SELECT ("benefit_nearest_first_nations"."benefit_id") AS "_prefetch_related_val_benefit_id",
"indianreservebanddistance"."id",
"indianreservebanddistance"."reserve_id",
"indianreservebanddistance"."reserve_distance",
"indianreservebandname"."name"
FROM "indianreservebanddistance"
INNER JOIN "benefit_nearest_first_nations"
ON ("indianreservebanddistance"."id" = "benefit_nearest_first_nations"."indianreservebanddistance_id")
inner JOIN "indianreservebandname"
on ("indianreservebandname"."id" = "indianreservebanddistance"."reserve_id")
WHERE "benefit_nearest_first_nations"."benefit_id" IN (1, 2)
Would you know if it is possible to get just one query? Am I missing something that is stopping Django from creating just one query?
Thanks a lot. | 0 | 1 | 1 | Am I missing something that is stopping Django from creating just one query?
Yes. The two-query behavior is deliberate. It prevents introducing data duplication, where the same values for the same columns are repeated many times. That duplication can blow up memory usage (on both the database side and the Django/Python side) and render the system unresponsive. In fact, it can even result in the out-of-memory (OOM) killer terminating the web application, the database, or another application. |
2023-05-05 04:51:42 | -1 | python,database,installation,snowflake-cloud-data-platform,dbt | 2 | 76,200,198 | Dbt: .dbt folder not found in default location | 76,178,930 | false | 748 | I was creating my dbt project, but when running the command dbt init <name_project> it returned the error that it can't find profiles.yml in the .dbt folder. So I tried to access the folder at the default location and it's not there. Is there a problem with installing dbt-snowflake inside a Python virtual env that changes the default installation behavior?
Created virtualenv:
python3 -m venv venv
Activated venv:
source venv/bin/activate
Updated pip:
pip install --upgrade pip wheel setuptools
Install dbt-snowflake:
pip install dbt-snowflake
Trying to create a project:
dbt init <name_project>
But I receive this error message:
Usage: dbt init [OPTIONS] [PROJECT_NAME]
Try 'dbt init -h' for help.
Error: Invalid value for '--profiles-dir': Path '/Users/brunofonseca/.dbt' does not exist.
I saw the default location is ~/.dbt, so I tried to access the folder, but it does not exist there. I also tried running the command dbt debug --config-dir to find where the folder is, but it returns:
04:40:15 Running with dbt=1.5.0
04:40:15 [OpenCommand]: Unable to parse dict {'open_cmd': 'open', 'profiles_dir': PosixPath('/Users/brunofonseca/.dbt')}
04:40:15 To view your profiles.yml file, run:
Finally, I tried to find the profiles.yml inside the venv folder, but it was not found.
OBS: the operating system is macOS. | -0.099668 | 3 | 2 | In the Terminal, go to the folder where you want to init your dbt project, then type:
dbt init <name_project> --profiles-dir . |
2023-05-05 04:51:42 | 2 | python,database,installation,snowflake-cloud-data-platform,dbt | 2 | 76,269,190 | Dbt: .dbt folder not found in default location | 76,178,930 | false | 748 | I was creating my dbt project, but when running the command dbt init <name_project> it returned the error that it can't find profiles.yml in the .dbt folder. So I tried to access the folder at the default location and it's not there. Is there a problem with installing dbt-snowflake inside a Python virtual env that changes the default installation behavior?
Created virtualenv:
python3 -m venv venv
Activated venv:
source venv/bin/activate
Updated pip:
pip install --upgrade pip wheel setuptools
Install dbt-snowflake:
pip install dbt-snowflake
Trying to create a project:
dbt init <name_project>
But I receive this error message:
Usage: dbt init [OPTIONS] [PROJECT_NAME]
Try 'dbt init -h' for help.
Error: Invalid value for '--profiles-dir': Path '/Users/brunofonseca/.dbt' does not exist.
I saw the default location is ~/.dbt, so I tried to access the folder, but it does not exist there. I also tried running the command dbt debug --config-dir to find where the folder is, but it returns:
04:40:15 Running with dbt=1.5.0
04:40:15 [OpenCommand]: Unable to parse dict {'open_cmd': 'open', 'profiles_dir': PosixPath('/Users/brunofonseca/.dbt')}
04:40:15 To view your profiles.yml file, run:
Finally, I tried to find the profiles.yml inside the venv folder, but it was not found.
OBS: the operating system is macOS. | 0.197375 | 3 | 2 | You just need to create the .dbt folder in your home directory beforehand, then run dbt init <name_project>, which will automatically create a profiles.yml in that .dbt folder. |
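A sketch of that fix, written in Python for clarity (it is equivalent to `mkdir -p ~/.dbt`; afterwards you still re-run `dbt init <name_project>` yourself, and dbt must already be installed):

```python
import os

# Create the per-user config directory that dbt looks for by default.
dbt_dir = os.path.expanduser("~/.dbt")
os.makedirs(dbt_dir, exist_ok=True)  # no-op if the folder already exists
print(os.path.isdir(dbt_dir))        # True: dbt init can now write profiles.yml here
```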
2023-05-05 07:17:22 | 0 | python,selenium-webdriver,selenium-chromedriver | 2 | 76,328,784 | Having this error : Message: unknown error: Runtime.callFunctionOn threw exception: Error: LavaMoat - property "JSON" | 76,179,670 | false | 761 | After updating Chrome to 114.0.5735.16, there is an error when I try to locate elements through XPath, id, etc.
The driver get function and refresh seem to work fine.
Is there something I need to change?
Examples of the calls:
element = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "password")))
or
driver.find_element(By.XPATH,"/html/body/div[1]/div/div[3]/div/div/button")
All the errors are the same as below:
Message: unknown error: Runtime.callFunctionOn threw exception: Error: LavaMoat - property "JSON" of globalThis is inaccessible under scuttling mode. To learn more visit https://github.com/LavaMoat/LavaMoat/pull/360.
at get (chrome-extension://nkbihfbeogaeaoehlefnkodbefgpgknn/runtime-lavamoat.js:11200:17)
at buildError (<anonymous>:323:13)
(Session info: chrome=114.0.5735.16)
Stacktrace:
Backtrace:
GetHandleVerifier [0x00496E73+48323]
(No symbol) [0x00429661]
(No symbol) [0x00335308]
(No symbol) [0x003384D6]
(No symbol) [0x00339991]
(No symbol) [0x00339A30]
(No symbol) [0x003607BC]
(No symbol) [0x00360CDB]
(No symbol) [0x0038E3D2]
(No symbol) [0x0037A924]
(No symbol) [0x0038CAC2]
(No symbol) [0x0037A6D6]
(No symbol) [0x0035847C]
(No symbol) [0x0035957D]
GetHandleVerifier [0x006FFD5D+2575277]
GetHandleVerifier [0x0073F86E+2836158]
GetHandleVerifier [0x007396DC+2811180]
GetHandleVerifier [0x005241B0+626688]
(No symbol) [0x0043314C]
(No symbol) [0x0042F4B8]
(No symbol) [0x0042F59B]
(No symbol) [0x004221B7]
BaseThreadInitThunk [0x77000099+25]
RtlGetAppContainerNamedObjectPath [0x77207B6E+286]
RtlGetAppContainerNamedObjectPath [0x77207B3E+238]
All the find element functions show the same error. | 0 | 3 | 1 | I have encountered the same problem; my workaround was to downgrade to an earlier Chrome version. |
2023-05-05 07:52:21 | 3 | python,django,amazon-elastic-beanstalk | 1 | 76,182,636 | Pycairo error while deploying django app on AWS | 76,179,953 | false | 401 | My app deployed on AWS Elastic Beanstalk was working fine until one hour ago, and now it is giving a 502 Bad Gateway error. When I tried to redeploy the application, it gave me the following error:
2023/05/05 07:41:44.917340 [ERROR] An error occurred during execution of command [self-startup] - [InstallDependency]. Stop running the command. Error: fail to install dependencies with requirements.txt file with error Command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt failed with error exit status 1. Stderr: error: subprocess-exited-with-error
× Building wheel for pycairo (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [15 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-cpython-38
creating build/lib.linux-x86_64-cpython-38/cairo
copying cairo/__init__.py -> build/lib.linux-x86_64-cpython-38/cairo
copying cairo/__init__.pyi -> build/lib.linux-x86_64-cpython-38/cairo
copying cairo/py.typed -> build/lib.linux-x86_64-cpython-38/cairo
running build_ext
Package cairo was not found in the pkg-config search path.
Perhaps you should add the directory containing `cairo.pc'
to the PKG_CONFIG_PATH environment variable
No package 'cairo' found
Command '['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pycairo
ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
I don't even have Pycairo in my requirements.txt. Is there any dependency using it in the background? If so, how can I find it and resolve this?
I added cairo to .ebextensions as well, but it's of no use.
requirements.txt
asgiref==3.2.7
Django==3.0.5
django-cors-headers==3.2.1
djangorestframework==3.11.0
djangorestframework-simplejwt==4.4.0
PyJWT==1.7.1
pytz==2020.1
sqlparse==0.3.1
djangorestframework-jwt
django-storages
boto3==1.11.4
zplgrf
reportlab
pandas==1.4.2
python-barcode
pretty_html_table
folium
googlemaps
pyexcel
pyexcel_xls
pyexcel-xlsx
openpyxl
gunicorn
apscheduler
django-smtp-ssl
xlsxwriter
django-simple-search
gmplot
awscrt
awsiotsdk
psycopg2-binary==2.8.6 | 0.53705 | 1 | 1 | If you have xhtml2pdf in your requirements, try installing first reportlab==3.6.13 and then xhtml2pdf==0.2.10 |
2023-05-05 13:19:32 | 3 | python,date | 2 | 76,182,684 | Trying to convert a date to another format gives error | 76,182,655 | false | 70 | I have a program where the user selects a date from a datepicker, and I need to convert this date to another format.
The original format is %d/%m/%Y and I need to convert it to %-d-%b-%Y
I made a small example of what happens
from datetime import datetime
# Import tkinter library
from tkinter import *
from tkcalendar import Calendar, DateEntry
win = Tk()
win.geometry("750x250")
win.title("Example")
def convert():
date1 = cal.get()
datetimeobject = datetime.strptime(date1, '%d/%m/%Y')
print(date1)
new_format = datetimeobject.strftime('%-d-%b-%Y')
print(new_format)
cal = DateEntry(win, width=16, background="gray61", foreground="white", bd=2, date_pattern='dd/mm/y')
cal.pack(pady=20)
btn = Button(win, command=convert, text='PRESS')
btn.pack(pady=50)
win.mainloop()
This gives me the following error
File "---------\date.py", line 15, in convert
new_format = datetimeobject.strftime('%-d-%b-%Y')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Invalid format string | 0.291313 | 3 | 2 | The error is due to the use of the %-d format specifier in the strftime() method. This specifier is not supported on Windows.
To resolve this issue, replace the %-d format specifier with plain %d, i.e. remove the -. |
2023-05-05 13:19:32 | 2 | python,date | 2 | 76,182,714 | Trying to convert a date to another format gives error | 76,182,655 | true | 70 | I have a program where the user selects a date from a datepicker, and I need to convert this date to another format.
The original format is %d/%m/%Y and I need to convert it to %-d-%b-%Y
I made a small example of what happens
from datetime import datetime
# Import tkinter library
from tkinter import *
from tkcalendar import Calendar, DateEntry
win = Tk()
win.geometry("750x250")
win.title("Example")
def convert():
date1 = cal.get()
datetimeobject = datetime.strptime(date1, '%d/%m/%Y')
print(date1)
new_format = datetimeobject.strftime('%-d-%b-%Y')
print(new_format)
cal = DateEntry(win, width=16, background="gray61", foreground="white", bd=2, date_pattern='dd/mm/y')
cal.pack(pady=20)
btn = Button(win, command=convert, text='PRESS')
btn.pack(pady=50)
win.mainloop()
This gives me the following error
File "---------\date.py", line 15, in convert
new_format = datetimeobject.strftime('%-d-%b-%Y')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Invalid format string | 1.2 | 3 | 2 | That gives an error because %-d only works on Unix machines (Linux, macOS). On Windows (or Cygwin), you have to use %#d. |
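A portable alternative is to avoid the non-standard flag altogether and build the day component from the integer `dt.day` (a sketch reusing the question's input pattern; the date string below is just a sample value):

```python
from datetime import datetime

date1 = "05/05/2023"  # sample input in the question's dd/mm/yyyy pattern
dt = datetime.strptime(date1, "%d/%m/%Y")

# dt.day is a plain int, so it never carries a leading zero; this works
# the same on Windows, Linux and macOS, with no %-d / %#d needed.
new_format = f"{dt.day}-{dt.strftime('%b-%Y')}"
print(new_format)  # e.g. 5-May-2023 (month abbreviation depends on locale)
```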