repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
pywinauto/pywinauto | automation | 1,155 | Is there a method to find a word on a web page? | ## Expected Behavior
I want to auto-follow Twitter accounts, so I need to know whether or not I am already Following this user's Twitter account.
## Actual Behavior
I am trying to find static text **Following** in Control Identifiers, but it is not there.
If I can't find it in Control Identifiers, then maybe **_there is a method to find a word on a web page?_**

## Steps to Reproduce the Problem
1. run Chrome browser
2. log in to my twitter account
3. open page with some twitter account
4. print_control_identifiers()
## Short Example of Code to Demonstrate the Problem
```python
import os
import time

from pywinauto.keyboard import send_keys

# run_browser and twitter_login are helper functions defined elsewhere in my script.

def twitter_follow(twit, login, passwrd):
    app = run_browser(url='https://twitter.com/i/flow/login')
    twitter_login(app=app, login=login, passwrd=passwrd)
    time.sleep(3)
    app['Pane']['Edit2'].set_text(f'https://twitter.com/{twit}')
    send_keys('{ENTER}')
    time.sleep(5)
    brw = app['Pane']
    brw.print_control_identifiers(filename=os.path.abspath('ident.txt'))
    try:
        app['Pane'][f'Читать @{twit}Button'].click()
    except Exception:
        app['Pane'][f'Follow @{twit}Button'].click()
```
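For reference, a hedged sketch of one way to probe for that text with pywinauto's UIA backend; the window title pattern and timeout below are assumptions, not taken from this report:
```python
from pywinauto import Application

# Attach to the running Chrome window showing the profile (title pattern assumed).
app = Application(backend="uia").connect(title_re=".*Twitter.*")
page = app.top_window()

# exists() returns False instead of raising when nothing matches, so it can be
# used to check whether a "Following" button is present on the page.
already_following = page.child_window(
    title_re="Following.*", control_type="Button"
).exists(timeout=5)
print("Already following:", already_following)
```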
## Specifications
- Pywinauto version: 0.6.8
- Python version: 3.10
- Platform and OS: win32
| open | 2021-12-09T20:26:42Z | 2021-12-20T05:33:52Z | https://github.com/pywinauto/pywinauto/issues/1155 | [] | DvarfInkviz | 1 |
JaidedAI/EasyOCR | machine-learning | 567 | Using custom model with different input size? (rgb: True) | I've successfully trained my own model, thank you so much for the guidance there, but now when I am trying to use the model I made with `rgb: true` (making it have an input of 3 channels) I get errors for size mismatches:
```
RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected
input[1, 1, 64, 256] to have 3 channels, but got 1 channels instead
```
This may be due to the downloaded model, or something like that? I played with `download_enabled` and with removing the downloaded model; that gives this error:
```
FileNotFoundError: Missing ./ocr_model/craft_mlt_25k.pth and downloads disabled
```
How do I use the reader **only using** my network? Or am I trying to do something that doesn't make any sense? If my training is making great predictions, I'm confused about why I need another model.
```py
reader = easyocr.Reader(
['en'],
# here's where my custom_model.pth file is
model_storage_directory="./ocr_model",
# here's where my custom_model.py and custom_model.yaml live
user_network_directory='./ocr_network',
recog_network='custom_model'
)
reader.readtext('test.jpg')
```
I believe I set `custom_model.yaml` up properly:
```yaml
network_params:
input_channel: 3
output_channel: 256
hidden_size: 256
imgH: 64
lang_list:
- 'en'
character_list: 0123456789!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
```
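One possible explanation (an assumption about EasyOCR's pipeline, not something confirmed in this thread): `readtext()` first runs the CRAFT detection model and only then the recognition network, so `craft_mlt_25k.pth` is still requested even with a custom `recog_network`. If the installed version supports the `detector` flag, a sketch of skipping detection and running only the custom recognizer on a pre-cropped image:
```python
import easyocr

# detector=False is an assumption about the installed EasyOCR version; if
# supported, it skips loading the CRAFT weights so only the custom
# recognition model is needed.
reader = easyocr.Reader(
    ['en'],
    model_storage_directory='./ocr_model',
    user_network_directory='./ocr_network',
    recog_network='custom_model',
    detector=False,
)

# recognize() runs only the recognition stage; 'cropped_line.jpg' is a
# hypothetical, already-cropped text line image.
result = reader.recognize('cropped_line.jpg', detail=0)
print(result)
```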
## EDIT
Doing the exact same training but with `rgb: False` and everything works, woo! am I missing out on much without rgb? | closed | 2021-10-13T00:31:36Z | 2022-12-22T02:26:54Z | https://github.com/JaidedAI/EasyOCR/issues/567 | [] | ckcollab | 4 |
stanfordnlp/stanza | nlp | 1,163 | NER error | English Stanza (1.5.0 beta) labels "August House" as a date, not an org (the publisher) or a facility (the building in South Africa)
| open | 2022-12-09T08:50:42Z | 2023-01-11T09:05:09Z | https://github.com/stanfordnlp/stanza/issues/1163 | [
"bug"
] | AngledLuffa | 1 |
PokeAPI/pokeapi | graphql | 454 | Missing some Sprites | Hello all,
It seems that some sprite resources are missing:
Tested from:
https://pokeapi.co/api/v2/pokemon/806/

Is this a source issue?
| closed | 2019-10-24T23:16:18Z | 2020-08-19T10:54:44Z | https://github.com/PokeAPI/pokeapi/issues/454 | [] | seelker | 4 |
dask/dask | scikit-learn | 11,415 | ⚠️ Upstream CI failed ⚠️ | [Workflow Run URL](https://github.com/dask/dask/actions/runs/11226930712)
<details><summary>Python 3.12 Test Summary</summary>
```
dask/dataframe/tests/test_groupby.py::test_groupby_value_counts_all_na_partitions[disk]: IndexError: cannot do a non-empty take from an empty axes.
dask/dataframe/tests/test_groupby.py::test_groupby_value_counts_all_na_partitions[tasks]: IndexError: cannot do a non-empty take from an empty axes.
```
</details>
| closed | 2024-10-07T14:52:03Z | 2024-10-08T10:17:02Z | https://github.com/dask/dask/issues/11415 | [
"upstream"
] | github-actions[bot] | 1 |
holoviz/panel | jupyter | 6,923 | FileInput default to higher websocket_max_message_size? | Currently, the default is 20 MBs, but this is pretty small for most use cases.
If it exceeds the 20 MBs, it silently disconnects the websocket (at least in notebook; when serving, it does show `2024-06-14 11:39:36,766 WebSocket connection closed: code=None, reason=None`). This leaves the user confused as to why nothing is happening (perhaps a separate issue).
Is there a good reason why the default is 20 MBs, or can we make it larger?
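In the meantime, a possible workaround (this assumes `pn.serve` forwards the option to the underlying Bokeh/Tornado server; not confirmed in this thread) is to raise the limit explicitly when serving:
```python
import panel as pn

file_input = pn.widgets.FileInput()

# Raise the websocket limit to ~150 MB; the keyword is assumed to be passed
# through to the Bokeh server that Panel starts under the hood.
pn.serve(file_input, websocket_max_message_size=150 * 1024 * 1024)
```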
For reference:
https://discourse.holoviz.org/t/file-upload-is-uploading-the-file-but-the-value-is-always-none/7268/7 | closed | 2024-06-14T18:59:54Z | 2024-06-25T11:23:18Z | https://github.com/holoviz/panel/issues/6923 | [
"wontfix",
"type: discussion"
] | ahuang11 | 1 |
man-group/notebooker | jupyter | 44 | Default from_email address is from a nonexistent domain | [This](https://github.com/man-group/notebooker/blob/master/notebooker/utils/notebook_execution.py#L19) email address belongs to a domain which doesn't exist. If someone responds either automatically or by mistake, a firewall may be triggered. This should be configurable by the user (perhaps as an attribute on the result object) and have a sensible default. | closed | 2021-07-08T12:57:20Z | 2022-01-06T16:35:28Z | https://github.com/man-group/notebooker/issues/44 | [
"bug",
"enhancement"
] | jonbannister | 1 |
robotframework/robotframework | automation | 4,923 | Can't connect to MySQL db that has SSL, can you please help? |
```robotframework
*** Settings ***
Library           DatabaseLibrary
Library           Collections
Library           OperatingSystem
Library           String
Library           DateTime

*** Variables ***
${DB Host}        ABC.us-east-1.rds.amazonaws.com
${DB Port}        3306
${DB Name}        dispatch
${DB User}        mayname
${DB Password}    mypass
${SSL Ca}         ./Certificates/us-east-1-bundle.pem
${SSL Cert}       ./Certificates/us-east-1-bundle.pem
${SSL Key}        ./Certificates/us-east-1-bundle.pem

*** Test Cases ***
Connect to MySQL Database with SSL
    ${db_params}=    Create Dictionary
    Set To Dictionary    ${db_params}    host=${DB Host}
    Set To Dictionary    ${db_params}    port=${DB Port}
    Set To Dictionary    ${db_params}    database=${DB Name}
    Set To Dictionary    ${db_params}    user=${DB User}
    Set To Dictionary    ${db_params}    password=${DB Password}
    Set To Dictionary    ${db_params}    use_unicode=true
    Set To Dictionary    ${db_params}    charset=utf8mb4
    Set To Dictionary    ${db_params}    ssl_ca=${SSL Ca}
    Set To Dictionary    ${db_params}    ssl_cert=${SSL Cert}
    Set To Dictionary    ${db_params}    ssl_key=${SSL Key}
    Connect To Database    pymysql    server=${db_params}
    Check If Exists In Database    SELECT first_name FROM dispatch.admin WHERE id=15157727
    Disconnect From Database
```
Getting the below error:
```
Connect to MySQL Database with SSL | FAIL |
OperationalError: (2003, 'Can\'t connect to MySQL server on \'"ABC.us-east-1.rds.amazonaws.com"\' ([Errno 11001] getaddrinfo failed)')
```
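As a side note, the error shows the hostname reaching the driver wrapped in literal quotation marks (`'"ABC.us-east-1.rds.amazonaws.com"'`), which could explain the `getaddrinfo` failure. A minimal plain-pymysql sketch (outside Robot Framework, credentials and paths copied from above) that can help isolate whether DNS/SSL or argument passing is the problem:
```python
import pymysql

# Same parameters passed directly to pymysql, to check DNS + SSL outside of
# DatabaseLibrary. Note: no extra quotation marks around the host value.
conn = pymysql.connect(
    host="ABC.us-east-1.rds.amazonaws.com",
    port=3306,
    user="mayname",
    password="mypass",
    database="dispatch",
    charset="utf8mb4",
    ssl={"ca": "./Certificates/us-east-1-bundle.pem"},
)
with conn.cursor() as cur:
    cur.execute("SELECT first_name FROM dispatch.admin WHERE id=%s", (15157727,))
    print(cur.fetchone())
conn.close()
```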
| closed | 2023-11-02T14:46:58Z | 2023-11-03T01:30:18Z | https://github.com/robotframework/robotframework/issues/4923 | [] | ghost | 2 |
Morizeyao/GPT2-Chinese | nlp | 1 | Is this pretraining a Chinese language model from scratch? | How large a corpus do you plan to use? What GPU(s) will you use? | closed | 2019-07-25T11:31:24Z | 2019-08-06T13:36:54Z | https://github.com/Morizeyao/GPT2-Chinese/issues/1 | [] | lexmen318 | 1 |
django-import-export/django-import-export | django | 1,842 | numeric fields exported as text by default since v4 to excel | Hello,
Not sure whether this is a bug or a feature.
In v3, exporting numeric fields (floats, decimal fields) produced numeric cells in Excel.
In version 4, numeric fields are exported as text by default.
If I add a field definition to the resource class, the issue disappears:
```python
class SensorResource(resources.ModelResource):
    float_factor = Field(attribute='float_factor', widget=NumberWidget())

    class Meta:
        model = Meter
```
Is that behavior intentional?
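For reference, the workaround spelled out with its imports; this is a sketch under the assumption that declaring each numeric column explicitly is the intended approach, and `Meter` is the model from the snippet above:
```python
from import_export import fields, resources, widgets


class SensorResource(resources.ModelResource):
    # An explicit Field with NumberWidget keeps the exported value numeric.
    # Whether the v4 widgets also expose a coerce_to_string switch for this is
    # worth checking in the docs; it is not verified here.
    float_factor = fields.Field(
        attribute="float_factor",
        widget=widgets.NumberWidget(),
    )

    class Meta:
        model = Meter
```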
| closed | 2024-05-18T08:57:47Z | 2024-10-22T16:18:58Z | https://github.com/django-import-export/django-import-export/issues/1842 | [
"question"
] | tobhv | 6 |
FactoryBoy/factory_boy | sqlalchemy | 735 | factory.Sequence is not incremented sequentially. | We've been using version **2.8.1** for a long time and it has worked smoothly, and we recently decided to upgrade to the latest version to make use of the random seeding/determinism improvements and fixes in **2.11.0** at least.
But upon upgrading, we noticed that `factory.Sequence` behaves differently than it did before.
Simplified model/factory class:
```python
class MyModel():
    value = models.IntegerField()


class MyFactory():
    value = factory.Sequence(lambda n: n)

    class Meta:
        model = MyModel
```
### Using the 2.8.1 version
#### Output if the specific test method containing these lines is run:
```python
MyFactory().value # 0
MyFactory().value # 1
MyFactory().value # 2
MyFactory().value # 3
MyFactory().value # 4
```
#### Output if the test method containing these lines is run as part of the suite:
```python
MyFactory().value # 3
MyFactory().value # 4
MyFactory().value # 5
MyFactory().value # 6
MyFactory().value # 7
```
### Using the 2.9.0 or later versions
#### Output if the specific test method containing these lines is run:
```python
MyFactory().value # 0
MyFactory().value # 33
MyFactory().value # 66
MyFactory().value # 99
MyFactory().value # 132
```
#### Output if the test method containing these lines is run as part of the suite:
```python
MyFactory().value # 47
MyFactory().value # 80
MyFactory().value # 113
MyFactory().value # 146
MyFactory().value # 179
```
Hence, it seems that the upgraded version increments by `33` instead of `1` for some reason. Are there code changes since **2.9.0** that could affect the sequencing? Note that there are no other setup/code changes except the `factory-boy` package's version. The behavior is the same even when invoking `MyFactory().value` manually and in succession in the Django shell. I also wonder whether other devs see the same thing, since it seems easy to replicate. Thanks :) | closed | 2020-05-16T21:25:07Z | 2020-08-21T15:01:16Z | https://github.com/FactoryBoy/factory_boy/issues/735 | [] | ranelpadon | 4 |
biolab/orange3 | scikit-learn | 6,548 | ROC Analysis Report Chart Inaccuracies |
**What's wrong?**
The ROC Analysis widget's report does not display the ROC curve accurately. The data and chart look fine when you open the widget, but the report shows different axis labels and the data does not overlay on those axes accurately. In the screenshot, the report is on the right; notice the differences between the images.

**How can we reproduce the problem?**
1. Generate an ROC graph.
2. Open the ROC Widget and examine the curve. Note the x- and y- axes and labels.
3. Click on the Report button. Note the chart image in the report is not accurate.
**What's your environment?**
- Operating system: Ventura 13.1
- Orange version: 3.35
- How you installed Orange: DMG downloaded from https://orangedatamining.com/download/#macos
| closed | 2023-08-24T00:21:16Z | 2023-10-03T10:38:41Z | https://github.com/biolab/orange3/issues/6548 | [
"bug report"
] | cnewell2 | 3 |
Esri/arcgis-python-api | jupyter | 2,139 | Reading hosted table in AGOL using .query() returns a dataframe with exactly 1000 less records than are in AGOL hosted table | I am trying to read a hosted AGOL table using the script below (.query). When the script runs, saves a CSV, and then publishes the table, the result has exactly 1,000 fewer records than the original hosted table. I don't have any other queries on the table and I don't think there are settings within the hosted table that would cause this. I am using this to compare the existing hosted table to a new table of AGOL members information so I can create an updated dashboard with removed accounts still present (hence comparing the two tables, DataFrames in this case).
Below is the code piece being used:
```python
# Existing item ID
item_id = 'xyz'

# Load the existing table from AGOL to check for removed members
existing_table_item = gis.content.get(item_id)

# Initialize DataFrame either from the existing table or as an empty DataFrame
# with the same columns as df
existing_table_df = (existing_table_item.tables[0].query().df
                     if existing_table_item else pd.DataFrame(columns=df.columns))

# Debugging: Check the columns of existing_table_df
print("Columns in existing_table_df:", existing_table_df.columns.tolist())

# Save the DataFrame to a CSV file
file_SaveName = 'Test_Export_Existing_Table_DF.csv'
existing_table_df.to_csv(file_SaveName, index=False)

# Define metadata for the CSV file
csv_properties = {
    'title': 'Test Export Existing Table DF',
    'type': 'CSV',
    'tags': 'AGOL Export, CSV',
    'description': 'CSV export of the existing table from AGOL',
}

# Upload the CSV file to AGOL content
csv_item = gis.content.add(item_properties=csv_properties, data=file_SaveName)
```
For what it's worth, the following code has the same result:
```python
# Access the previous values table
table_url = "xyz"
previous_values_table = Table(table_url)

# Retrieve existing values from the previous values table
existing_records = previous_values_table.query(where="1=1", out_fields="*").features

# Convert the list of features to a list of dictionaries (attributes)
records_dict_list = [feature.attributes for feature in existing_records]

# Convert the list of dictionaries to a pandas DataFrame
existing_records_df = pd.DataFrame(records_dict_list)

# Save the DataFrame to a CSV file
file_SaveName = 'Test_Export_Existing_Table_DF_20241022.csv'
existing_records_df.to_csv(file_SaveName, index=False)

# Define metadata for the CSV file
csv_properties = {
    'title': 'Test Export Existing Table DF 20241022',
    'type': 'CSV',
    'tags': 'AGOL Export, CSV',
    'description': 'CSV export of the existing table from AGOL',
}

# Upload the CSV file to AGOL content
csv_item = gis.content.add(item_properties=csv_properties, data=file_SaveName)
```
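Not part of the original report, but a hedged sketch of paging through the table explicitly, in case a per-request transfer limit is silently truncating a single query. The parameter names come from the ArcGIS API for Python `query()` signature and are assumed to apply to hosted tables; `previous_values_table` is reused from the snippet above:
```python
# Fetch the table in fixed-size pages instead of one query, in case a
# server-side record limit is silently truncating the result.
all_features = []
offset = 0
page_size = 1000  # assumed chunk size
while True:
    page = previous_values_table.query(
        where="1=1",
        out_fields="*",
        return_all_records=False,
        result_offset=offset,
        result_record_count=page_size,
    ).features
    if not page:
        break
    all_features.extend(page)
    offset += len(page)

print(f"Fetched {len(all_features)} records in total")
```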
```
error:
There is no error message.
```
**Expected behavior**
I would expect this piece of code to read the entire hosted table and create a DataFrame containing all the records.
**Platform (please complete the following information):**
- OS: Windows 11
- Browser: Google Chrome, running from AGOL Notebook
- Python API Version: ArcGIS Notebook, Python 3 Standard - 10.0 | closed | 2024-10-23T16:45:31Z | 2024-11-04T21:58:50Z | https://github.com/Esri/arcgis-python-api/issues/2139 | [
"bug"
] | theisenm12 | 22 |
pydata/pandas-datareader | pandas | 220 | Google data source broken for mutual funds | ``` python
import pandas_datareader.data as web
import datetime
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2013, 1, 27)
f = web.DataReader("F", 'google', start, end) # this works
f = web.DataReader("VFINX", 'google', start, end) # this dies
```
output
```
Traceback (most recent call last):
File "google_bug.py", line 9, in <module>
f = web.DataReader("VFINX", 'google', start, end)
File "C:\Anaconda2\lib\site-packages\pandas_datareader\data.py", line 105, in DataReader
session=session).read()
File "C:\Anaconda2\lib\site-packages\pandas_datareader\base.py", line 173, in read
df = self._read_one_data(self.url, params=self._get_params(self.symbols))
File "C:\Anaconda2\lib\site-packages\pandas_datareader\base.py", line 80, in _read_one_data
out = self._read_url_as_StringIO(url, params=params)
File "C:\Anaconda2\lib\site-packages\pandas_datareader\base.py", line 91, in _read_url_as_StringIO
response = self._get_response(url, params=params)
File "C:\Anaconda2\lib\site-packages\pandas_datareader\base.py", line 117, in _get_response
raise RemoteDataError('Unable to read URL: {0}'.format(url))
pandas_datareader._utils.RemoteDataError: Unable to read URL: http://www.google.com/finance/historical
```
Google supplies "Date, Open, High, Low, Close, Volume" columns for stocks, but only "Date, Close" for mutual funds.
| closed | 2016-08-01T19:00:18Z | 2018-01-23T10:16:04Z | https://github.com/pydata/pandas-datareader/issues/220 | [
"google-finance"
] | illbebach | 2 |
adamerose/PandasGUI | pandas | 42 | Not able to run in Ubuntu 20.04 for error related to PyQt5 |
Following [advice](https://github.com/adamerose/pandasgui/issues/34#issuecomment-701757849) from another closed issue in this repo, I tried this alternative way to run it.
```python
import pandas as pd
from pandasgui import show
from PyQt5 import QtCore, QtGui, QtWidgets, QtWebEngineWidgets
from PyQt5.QtWebEngineWidgets import QWebEngineView
app = QtWidgets.QApplication([])
df = pd.DataFrame(([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), columns=['a', 'b', 'c'])
show(df)
```
#### But I get the error below
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-8c40c842eba1> in <module>
1 import pandas as pd
----> 2 from pandasgui import show
3 from PyQt5 import QtCore, QtGui, QtWidgets, QtWebEngineWidgets
4 from PyQt5.QtWebEngineWidgets import QWebEngineView
5 app = QtWidgets.QApplication([])
~/.local/lib/python3.8/site-packages/pandasgui/__init__.py in <module>
----> 1 from pandasgui.gui import show
2
3 __all__ = ["show"]
~/.local/lib/python3.8/site-packages/pandasgui/gui.py in <module>
12 from pandasgui.store import Store, PandasGuiDataFrame
13 from pandasgui.utility import fix_ipython, fix_pyqt, get_logger, as_dict, delete_datasets
---> 14 from pandasgui.widgets.dataframe_explorer import DataFrameExplorer
15 from pandasgui.widgets.find_toolbar import FindToolbar
16 from pandasgui.widgets.json_viewer import JsonViewer
~/.local/lib/python3.8/site-packages/pandasgui/widgets/dataframe_explorer.py in <module>
7 from pandasgui.utility import get_logger, nunique
8 from pandasgui.widgets.dataframe_viewer import DataFrameViewer
----> 9 from pandasgui.widgets.grapher import Grapher
10 from pandasgui.widgets.reshaper import Reshaper
11 from pandasgui.widgets.filter_viewer import FilterViewer
~/.local/lib/python3.8/site-packages/pandasgui/widgets/grapher.py in <module>
13
14 from pandasgui.utility import flatten_df, get_logger
---> 15 from pandasgui.widgets.plotly_viewer import PlotlyViewer
16 from pandasgui.widgets.spinner import Spinner
17 from pandasgui.widgets.dragger import Dragger, ColumnArg, Schema
~/.local/lib/python3.8/site-packages/pandasgui/widgets/plotly_viewer.py in <module>
27 app.__init__(sys.argv)
28 else:
---> 29 raise e
30
31
~/.local/lib/python3.8/site-packages/pandasgui/widgets/plotly_viewer.py in <module>
15 # https://stackoverflow.com/a/57436077/3620725
16 try:
---> 17 from PyQt5 import QtWebEngineWidgets
18 except ImportError as e:
19 if e.msg == "QtWebEngineWidgets must be imported before a QCoreApplication instance is created":
ImportError: /lib/x86_64-linux-gnu/libQt5Network.so.5: undefined symbol: _ZN15QIPAddressUtils8toStringER7QStringPh, version Qt_5
```
And I have already installed and upgraded PyQt5 as below:
```
pip3 install --user --upgrade PyQt5
pip3 install --user --upgrade PyQt5-sip
``` | closed | 2020-10-21T07:19:17Z | 2020-10-23T12:25:08Z | https://github.com/adamerose/PandasGUI/issues/42 | [] | rohan-paul | 3 |
JaidedAI/EasyOCR | pytorch | 553 | OCR | closed | 2021-09-28T17:08:22Z | 2021-10-06T08:39:56Z | https://github.com/JaidedAI/EasyOCR/issues/553 | [] | bouzid-s | 0 |