organization      string
repo_name         string
base_commit       string
iss_html_url      string
iss_label         string
title             string
body              string
code              null
pr_html_url       string
commit_html_url   string
file_loc          string
own_code_loc      list
ass_file_loc      list
other_rep_loc     list
analysis          dict
loctype           dict
iss_has_pr        int64
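One record of this schema can be sketched as a Python structure. This is a minimal illustration, assuming the field names and dtypes listed above; the `IssueRecord` dataclass itself and the sample `file_loc` string are hypothetical, not part of the dataset. Note that `file_loc` is typed `string` but holds a Python-literal dict, so `ast.literal_eval` (not `json.loads`, which rejects single quotes) decodes it:

```python
import ast
from dataclasses import dataclass

# Hypothetical container for one record; field names mirror the schema above.
@dataclass
class IssueRecord:
    organization: str
    repo_name: str
    base_commit: str
    iss_html_url: str
    iss_label: str
    title: str
    body: str
    code: None
    pr_html_url: str
    commit_html_url: str
    file_loc: str      # stringified dict; decode with ast.literal_eval
    own_code_loc: list
    ass_file_loc: list
    other_rep_loc: list
    analysis: dict
    loctype: dict
    iss_has_pr: int

# Illustrative file_loc value (shortened, hypothetical contents):
raw = "{'base_commit': 'abc', 'files': [{'path': 'a.py', 'status': 'modified'}]}"
loc = ast.literal_eval(raw)   # single-quoted literals, so not valid JSON
print(loc["files"][0]["path"])  # a.py
```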
xtekky
gpt4free
c5691c5993f8595d90052e4a81b582d63fe81919
https://github.com/xtekky/gpt4free/issues/913
bug stale
TypeError: unhashable type: 'Model'
import g4f, asyncio async def run_async(): _providers = [ g4f.Provider.ChatgptAi, g4f.Provider.ChatgptLogin, g4f.Provider.DeepAi, g4f.Provider.Opchatgpts, g4f.Provider.Vercel, g4f.Provider.Wewordle, g4f.Provider.You, g4f.Provider.Yqcloud, ] responses =...
null
https://github.com/xtekky/gpt4free/pull/924
null
{'base_commit': 'c5691c5993f8595d90052e4a81b582d63fe81919', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [241, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 277]}}}, {'p...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "g4f/Provider/CodeLinkAva.py", "g4f/Provider/H2o.py", "g4f/Provider/ChatgptLogin.py", "g4f/Provider/Aivvm.py", "g4f/Provider/HuggingChat.py", "g4f/Provider/__init__.py", "g4f/__init__.py", "g4f/models.py", "g4f/Provider/Vitalentum.py", "g4f/Provider/Bard.py", "g...
1
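The `Loc` keys inside `file_loc` (e.g. `"(None, 'svds', 1540)"` in later records) appear to be stringified `(class_name, function_name, start_line)` tuples, with `None` marking module-level changes. Under that assumption they can be decoded as shown below; the helper name is hypothetical:

```python
import ast

def parse_loc_key(key: str):
    """Decode a Loc key such as "(None, 'svds', 1540)" into a
    (class_name, function_name, start_line) tuple.
    Assumes keys are valid Python tuple literals."""
    return ast.literal_eval(key)

cls, func, line = parse_loc_key("(None, 'svds', 1540)")
print(cls, func, line)  # None svds 1540
```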
xtekky
gpt4free
2dcdce5422cd01cd058490d4daef5f69300cca89
https://github.com/xtekky/gpt4free/issues/2006
bug stale
CORS not enabled for API
**Bug description** Run docker image Try to access the Completion API via Javascript console in Browser `fetch("http://localhost:1337/v1/chat/completions", { "headers": { "accept-language": "de-DE,de;q=0.9,en-DE;q=0.8,en;q=0.7,en-US;q=0.6", "cache-control": "no-cache", "content-type": "applicat...
null
https://github.com/xtekky/gpt4free/pull/2281
null
{'base_commit': '2dcdce5422cd01cd058490d4daef5f69300cca89', 'files': [{'path': 'g4f/api/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}, "(None, 'create_app', 24)": {'add': [26]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "g4f/api/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
xtekky
gpt4free
5d8e603095156303a016cc16e2811a8f2bc74f15
https://github.com/xtekky/gpt4free/issues/1338
bug
How to use providers via HTTP Request ?
I am trying to use the api version of this project, but, the providers option in my request is not working, am i doing something wrong? ```js const response = await axios.post( `${API_BASE}`, { provider: 'g4f.Provider.ChatgptAi', temperature:0.75, top_p: 0.6, model:...
null
https://github.com/xtekky/gpt4free/pull/1344
null
{'base_commit': '5d8e603095156303a016cc16e2811a8f2bc74f15', 'files': [{'path': 'g4f/api/__init__.py', 'status': 'modified', 'Loc': {"('Api', 'chat_completions', 71)": {'add': [86, 94, 100]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "g4f/api/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
xtekky
gpt4free
0d8e4ffa2c0706b0381f53c3985d04255b7170f5
https://github.com/xtekky/gpt4free/issues/2334
bug
Model "command-r+" returning 401 error: "You have to be logged in"
**Bug description** I'm experiencing an issue with the model "command-r+" not working. When attempting to use this model through the g4f API (running "g4f api"), I receive the following error: ``` ERROR:root:Request failed with status code: 401, response: {"error":"You have to be logged in."} Traceback (most re...
null
https://github.com/xtekky/gpt4free/pull/2313
null
{'base_commit': '0d8e4ffa2c0706b0381f53c3985d04255b7170f5', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [72], 'mod': [31, 169, 186, 197, 198, 199, 200, 293, 299, 305, 773, 776]}}}, {'path': 'docs/async_client.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add'...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "g4f/Provider/Ai4Chat.py", "g4f/Provider/ChatifyAI.py", "g4f/Provider/ChatGptEs.py", "g4f/Provider/DeepInfraChat.py", "g4f/Provider/AiMathGPT.py", "g4f/Provider/Allyfy.py", "g4f/Provider/ChatGpt.py", "g4f/Provider/Bing.py", "g4f/Provider/AIUncensored.py", "g4f/Provi...
1
xtekky
gpt4free
b2bfc88218d3ffb367c6a4bcb14c0748666d348f
https://github.com/xtekky/gpt4free/issues/1206
bug stale
OpenaiChat:\lib\asyncio\base_events.py", line 498, in _make_subprocess_transport raise NotImplementedError
![image](https://github.com/xtekky/gpt4free/assets/37258899/c14b81f6-a429-4cb9-9029-4fab53d0e812) this problem happens today after I update to the latest version!
null
https://github.com/xtekky/gpt4free/pull/1207
null
{'base_commit': 'b2bfc88218d3ffb367c6a4bcb14c0748666d348f', 'files': [{'path': 'g4f/Provider/needs_auth/OpenaiChat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "(None, 'get_arkose_token', 146)": {'mod': [147, 148, 149, 150, 151, 177, 178, 179, 180, 182, 186, 187]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "g4f/Provider/needs_auth/OpenaiChat.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
Z4nzu
hackingtool
b2cf73c8f414cd9c30d920beb2e7a000934c1f92
https://github.com/Z4nzu/hackingtool/issues/354
target not found yay and python-pip.19.1.1-1
i have a problem when i try to run bash install.sh it says error target not found yay, python-pip.19.1.1-1 , i have installed the yay and i have no idea how to install python-pip so i need help. OS: Arch linux 64x_86X shell: bash 5.1.6 ![image](https://user-images.githubusercontent.com/127435098/224668316-8f1bfd93...
null
https://github.com/Z4nzu/hackingtool/pull/355
null
{'base_commit': 'b2cf73c8f414cd9c30d920beb2e7a000934c1f92', 'files': [{'path': 'install.sh', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74, 96, 111]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "install.sh" ] }
1
Z4nzu
hackingtool
1e088ad35b66dda0ee9139a5220627f86cb54365
https://github.com/Z4nzu/hackingtool/issues/347
enhancement
Typos found by codespell
./tools/xss_attack.py:107: vulnerabilites ==> vulnerabilities ./tools/information_gathering_tools.py:87: Scaning ==> Scanning ./tools/information_gathering_tools.py:117: informations ==> information ./tools/information_gathering_tools.py:168: informations ==> information ./tools/forensic_tools.py:60: Aquire ==> Acq...
null
https://github.com/Z4nzu/hackingtool/pull/350
null
{'base_commit': '1e088ad35b66dda0ee9139a5220627f86cb54365', 'files': [{'path': '.github/workflows/lint_python.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25]}}}, {'path': 'tools/forensic_tools.py', 'status': 'modified', 'Loc': {"('Guymager', None, 59)": {'mod': [60]}}}, {'path': 'tools/informatio...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "tools/wireless_attack_tools.py", "tools/xss_attack.py", "tools/phising_attack.py", "tools/forensic_tools.py", "tools/others/socialmedia_finder.py", "tools/payload_creator.py", "tools/webattack.py", "tools/information_gathering_tools.py" ], "doc": [], "test": [], "c...
1
Z4nzu
hackingtool
0a4faeac9c4f93a61c937b0e57023b693beeca6f
https://github.com/Z4nzu/hackingtool/issues/174
SyntaxError: invalid syntax
Traceback (most recent call last): File "/home/kali/hackingtool/hackingtool.py", line 11, in <module> from tools.ddos import DDOSTools File "/home/kali/hackingtool/tools/ddos.py", line 29 "sudo", "python3 ddos", method, url, socks_type5.4.1, threads, proxylist, multiple, timer]) I'm getting this erro...
null
https://github.com/Z4nzu/hackingtool/pull/176
null
{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {"('ddos', 'run', 20)": {'mod': [29]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "tools/ddos.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
0e8e38e3b2f4b79f03fe8a3e655b9f506ab0f2a6
https://github.com/scikit-learn/scikit-learn/issues/768
Arpack wrappers fail with new scipy
I have scipy 0.11.0.dev-c1ea274. This does not seem to play well with the current arpack wrappers. I'm a bit out of my depth there, though.
null
https://github.com/scikit-learn/scikit-learn/pull/802
null
{'base_commit': '0e8e38e3b2f4b79f03fe8a3e655b9f506ab0f2a6', 'files': [{'path': 'sklearn/utils/arpack.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [55]}, "(None, 'svds', 1540)": {'add': [1598], 'mod': [1540]}, "(None, 'eigs', 1048)": {'mod': [1048]}, "(None, 'eigsh', 1264)": {'mod': [1264]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/utils/arpack.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
bb7e34bc52461749e6014787a05a9507eda11011
https://github.com/scikit-learn/scikit-learn/issues/21668
Build / CI cython
CI with boundscheck=False
I really dislike segmentation faults! Unfortunately, there are many issues reporting them. Findings in #21654, #21283 were easier with setting `boundscheck = True`. **Proposition** Set up one CI configuration that runs with `boundscheck = True` globally which should be easier now that #21512 is merged.
null
https://github.com/scikit-learn/scikit-learn/pull/21779
null
{'base_commit': 'c9e5067cb14de578ab48b64f399743b994e3ca94', 'files': [{'path': 'azure-pipelines.yml', 'status': 'modified', 'Loc': {'(None, None, 202)': {'add': [202]}}}, {'path': 'doc/computing/parallelism.rst', 'status': 'modified', 'Loc': {'(None, None, 216)': {'add': [216]}}}, {'path': 'sklearn/_build_utils/__init_...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/_build_utils/__init__.py" ], "doc": [ "doc/computing/parallelism.rst" ], "test": [], "config": [ "azure-pipelines.yml" ], "asset": [] }
1
scikit-learn
scikit-learn
64ab789905077ba8990522688c11177442e5e91f
https://github.com/scikit-learn/scikit-learn/issues/29358
Documentation
Sprints page
### Describe the issue linked to the documentation The following sprints are listed: https://scikit-learn.org/stable/about.html#sprints But, that is a small subset, given the list here: https://blog.scikit-learn.org/sprints/ Are the sprints posted on the "About Us" page of a certain criteria, such as Dev spr...
null
https://github.com/scikit-learn/scikit-learn/pull/29418
null
{'base_commit': '64ab789905077ba8990522688c11177442e5e91f', 'files': [{'path': 'doc/about.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [548, 549, 551, 552, 553, 554, 555, 557, 558, 559, 560, 561, 563, 564, 565]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [], "doc": [ "doc/about.rst" ], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
41e129f1a6eb17a39ff0b25f682d903d0ae3c5af
https://github.com/scikit-learn/scikit-learn/issues/5991
Easy Enhancement
PERF : StratifiedShuffleSplit is slow when using large number of classes
When using large number of classes (e.g. > 10000, e.g for recommender systems), `StratifiedShuffleSplit` is very slow when compared to `ShuffleSplit`. Looking at the code, I believe that the following part: ``` python for i, class_i in enumerate(classes): permutation = rng.permutation(clas...
null
https://github.com/scikit-learn/scikit-learn/pull/9197
null
{'base_commit': '41e129f1a6eb17a39ff0b25f682d903d0ae3c5af', 'files': [{'path': 'doc/whats_new.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [219]}}}, {'path': 'sklearn/model_selection/_split.py', 'status': 'modified', 'Loc': {"('StratifiedShuffleSplit', '_iter_indices', 1495)": {'add': [1523], 'mod'...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/model_selection/_split.py" ], "doc": [ "doc/whats_new.rst" ], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
4143356c3c51831300789e4fdf795d83716dbab6
https://github.com/scikit-learn/scikit-learn/issues/10336
help wanted
Should mixture models have a clusterer-compatible interface
Mixture models are currently a bit different. They are basically clusterers, except they are probabilistic, and are applied to inductive problems unlike many clusterers. But they are unlike clusterers in API: * they have an `n_components` parameter, with identical purpose to `n_clusters` * they do not store the `labe...
null
https://github.com/scikit-learn/scikit-learn/pull/11281
null
{'base_commit': '4143356c3c51831300789e4fdf795d83716dbab6', 'files': [{'path': 'doc/whats_new/v0.20.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [583]}}}, {'path': 'sklearn/mixture/base.py', 'status': 'modified', 'Loc': {"('BaseMixture', 'fit', 172)": {'add': [190], 'mod': [175, 243]}}}, {'path': '...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/mixture/base.py" ], "doc": [ "doc/whats_new/v0.20.rst" ], "test": [ "sklearn/mixture/tests/test_bayesian_mixture.py", "sklearn/mixture/tests/test_gaussian_mixture.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
d7795a431e30d23f7e8499bdbe89dbdc6e9a068e
https://github.com/scikit-learn/scikit-learn/issues/16001
Bug Easy good first issue help wanted
Possible infinite loop iterations in synthetic data sets generation module
Hello, I found two code snippets in https://github.com/scikit-learn/scikit-learn/blob/7e85a6d1f/sklearn/datasets/_samples_generator.py are susceptible to infinite loop iterations when using make_multilabel_classification(): 1) https://github.com/scikit-learn/scikit-learn/blob/7e85a6d1f/sklearn/datasets/_samples_g...
null
https://github.com/scikit-learn/scikit-learn/pull/16006
null
{'base_commit': 'd7795a431e30d23f7e8499bdbe89dbdc6e9a068e', 'files': [{'path': 'doc/whats_new/v0.23.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [65]}}}, {'path': 'sklearn/datasets/_samples_generator.py', 'status': 'modified', 'Loc': {"(None, 'make_multilabel_classification', 263)": {'add': [344]}}...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/datasets/_samples_generator.py" ], "doc": [ "doc/whats_new/v0.23.rst" ], "test": [ "sklearn/datasets/tests/test_samples_generator.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
0e3cbbdcdfeec1c6b10aea11524add6350a8f4e0
https://github.com/scikit-learn/scikit-learn/issues/933
Speed up tree construction
CC: @pprett @amueller @bdholt1 Hi folks, Everyone will agree that tree-based methods have shown to perform quite well (e.g., the recent achievement of Peter!) and are increasingly used by our users. However, the tree module still has a major drawback: it is slow as hell in comparison to other machine learning packag...
null
https://github.com/scikit-learn/scikit-learn/pull/946
null
{'base_commit': '0e3cbbdcdfeec1c6b10aea11524add6350a8f4e0', 'files': [{'path': 'doc/whats_new.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}}}, {'path': 'sklearn/ensemble/_gradient_boosting.c', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [381, 637, 673, 746, 931, 973, 4913, 5993...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/tree/_tree.pyx", "sklearn/ensemble/_gradient_boosting.pyx", "sklearn/ensemble/_gradient_boosting.c", "sklearn/ensemble/gradient_boosting.py", "sklearn/ensemble/forest.py", "sklearn/tree/tree.py" ], "doc": [ "doc/whats_new.rst" ], "test": [ "sklearn/tree/tes...
1
scikit-learn
scikit-learn
77aeb825b6494de1e3a2c1e7233b182e05d55ab0
https://github.com/scikit-learn/scikit-learn/issues/27982
Documentation good first issue help wanted
Ensure that we have an example in the docstring of each public function or class
We should make sure that we have a small example for all public functions or classes. Most of the missing examples are linked to functions. I could list the following classes and functions for which `numpydoc` did not find any example: - [x] sklearn.base.BaseEstimator - [x] sklearn.base.BiclusterMixin - [x] skl...
null
https://github.com/scikit-learn/scikit-learn/pull/28564
null
{'base_commit': 'd967cfe8124902181892411b18b50dce9921a32d', 'files': [{'path': 'sklearn/datasets/_samples_generator.py', 'status': 'modified', 'Loc': {"(None, 'make_low_rank_matrix', 1359)": {'add': [1413]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/datasets/_samples_generator.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
e11c4d21a4579f0d49f414a4b76e386f80f0f074
https://github.com/scikit-learn/scikit-learn/issues/19269
New Feature module:datasets
sklearn.datasets.load_files select file extension
<!-- If you want to propose a new algorithm, please refer first to the scikit-learn inclusion criterion: https://scikit-learn.org/stable/faq.html#what-are-the-inclusion-criteria-for-new-algorithms --> #### Describe the workflow you want to enable When using load_files in a directory where there are different ki...
null
https://github.com/scikit-learn/scikit-learn/pull/22498
null
{'base_commit': 'e11c4d21a4579f0d49f414a4b76e386f80f0f074', 'files': [{'path': 'doc/whats_new/v1.1.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [175]}}}, {'path': 'sklearn/datasets/_base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}, "(None, 'load_files', 99)": {'add': [108...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/datasets/_base.py" ], "doc": [ "doc/whats_new/v1.1.rst" ], "test": [ "sklearn/datasets/tests/test_base.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
cdd693bf955acd2a97cce48011d168c6b1ef316d
https://github.com/scikit-learn/scikit-learn/issues/8364
Easy Documentation Sprint
Matplotlib update on CI makes example look different
The examples look different on the current dev website, in particular the classifier comparison that's on the landing pages looks a bit odd now: http://scikit-learn.org/dev/auto_examples/classification/plot_classifier_comparison.html I suspect the culprit is the CI upgrading to matplotlib v2. I think we should go t...
null
https://github.com/scikit-learn/scikit-learn/pull/8516 https://github.com/scikit-learn/scikit-learn/pull/8369
null
{'base_commit': '676e8630243b894aa2976ef6fb6048f9880b8a23', 'files': [{'path': 'examples/svm/plot_separating_hyperplane.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15, 20], 'mod': [18, 19, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35, 36, 38, 39, 40, 41, 43, 44, 45]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "examples/svm/plot_separating_hyperplane.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
839b356f45fac7724eab739dcc129a0c8f650a23
https://github.com/scikit-learn/scikit-learn/issues/15005
API
Implement SLEP009: keyword-only arguments
[SLEP009](https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep009/proposal.html) is all but accepted. It proposes to make most parameters keyword-only. We should do this by first: * [x] Merging #13311 * [x] Perhaps getting some stats on usage of positional arguments as per https://github.co...
null
https://github.com/scikit-learn/scikit-learn/pull/17007 https://github.com/scikit-learn/scikit-learn/pull/17046 https://github.com/scikit-learn/scikit-learn/pull/17006 https://github.com/scikit-learn/scikit-learn/pull/17005 https://github.com/scikit-learn/scikit-learn/pull/13311 https://github.com/scikit-learn/scikit-l...
null
{'base_commit': '839b356f45fac7724eab739dcc129a0c8f650a23', 'files': [{'path': 'sklearn/datasets/_base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, "(None, 'load_files', 83)": {'mod': [83]}, "(None, 'load_wine', 270)": {'mod': [270]}, "(None, 'load_iris', 384)": {'mod': [384]}, "(None, 'load_...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/datasets/_openml.py", "sklearn/datasets/_california_housing.py", "sklearn/datasets/_base.py", "sklearn/datasets/_covtype.py", "sklearn/datasets/_twenty_newsgroups.py", "sklearn/datasets/_olivetti_faces.py", "sklearn/datasets/_samples_generator.py", "sklearn/dataset...
1
scikit-learn
scikit-learn
62d205980446a1abc1065f4332fd74eee57fcf73
https://github.com/scikit-learn/scikit-learn/issues/12779
Easy good first issue
Remove "from __future__ import XXX"
Given #12746, I think we should remove ``from __future__ import XXX``, right? @adrinjalali ``` $ git grep "from __future__ import" | wc -l 147 ```
null
https://github.com/scikit-learn/scikit-learn/pull/13079
null
{'base_commit': '62d205980446a1abc1065f4332fd74eee57fcf73', 'files': [{'path': 'sklearn/utils/_random.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [16]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/utils/_random.pyx" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
45019594938f92f3344c80bb0d351793dd91334b
https://github.com/scikit-learn/scikit-learn/issues/12306
module:impute
SimpleImputer to Crash on Constant Imputation with string value when dataset is encoded Numerically
#### Description The title kind of describes it. It might be pretty logical, but just putting it out here as it took a while for me to realize and debug what exactly happened. The SimpleImputer has the ability to impute missing values with a constant. If the data is categorical, it is possible to impute with a str...
null
https://github.com/scikit-learn/scikit-learn/pull/25081
null
{'base_commit': '45019594938f92f3344c80bb0d351793dd91334b', 'files': [{'path': 'sklearn/impute/_base.py', 'status': 'modified', 'Loc': {"('SimpleImputer', None, 142)": {'mod': [179, 180, 181]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/impute/_base.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
5ad3421a5b5759ecfaaab93406592d988f5d487f
https://github.com/scikit-learn/scikit-learn/issues/16556
New Feature module:ensemble
Add Pre-fit Model to Stacking Model
<!-- If you want to propose a new algorithm, please refer first to the scikit-learn inclusion criterion: https://scikit-learn.org/stable/faq.html#what-are-the-inclusion-criteria-for-new-algorithms --> #### Describe the workflow you want to enable Allow pre-fit models to stacking model such as `StackingClassif...
null
https://github.com/scikit-learn/scikit-learn/pull/22215
null
{'base_commit': '5ad3421a5b5759ecfaaab93406592d988f5d487f', 'files': [{'path': 'doc/whats_new/v1.1.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [372]}}}, {'path': 'sklearn/ensemble/_stacking.py', 'status': 'modified', 'Loc': {"('StackingClassifier', None, 281)": {'add': [328], 'mod': [309, 317, 366...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/ensemble/_stacking.py" ], "doc": [ "doc/whats_new/v1.1.rst" ], "test": [ "sklearn/ensemble/tests/test_stacking.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
9b2aac9e5c8749243c73f2377519d2f2c407b095
https://github.com/scikit-learn/scikit-learn/issues/7603
When min_samples_split and min_samples_leaf are greater than or equal to 1.0 and 0.5, no error is thrown.
<!-- Instructions For Filing a Bug: https://github.com/scikit-learn/scikit-learn/blob/master/CONTRIBUTING.md#filing-bugs --> #### Description This is a silent bug in version 0.18.0, as a result of the following change: "Random forest, extra trees, decision trees and gradient boosting estimator accept the parameter min...
null
https://github.com/scikit-learn/scikit-learn/pull/7604
null
{'base_commit': '9b2aac9e5c8749243c73f2377519d2f2c407b095', 'files': [{'path': 'sklearn/tree/tests/test_tree.py', 'status': 'modified', 'Loc': {"(None, 'test_error', 496)": {'add': [511, 523]}}}, {'path': 'sklearn/tree/tree.py', 'status': 'modified', 'Loc': {"('BaseDecisionTree', 'fit', 117)": {'add': [218, 220, 223, 2...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/tree/tree.py" ], "doc": [], "test": [ "sklearn/tree/tests/test_tree.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
86476582a3759b82fd163d27522bd2de6ad95b6c
https://github.com/scikit-learn/scikit-learn/issues/11568
TST: optics function is not tested
Related to https://github.com/scikit-learn/scikit-learn/pull/1984 that was merged: it seems that the `optics` function (that @amueller added to the `cluster/__init__.py` in https://github.com/scikit-learn/scikit-learn/pull/11567) is not tested (at least not in `test_optics.py`) (so the function `optics` that wraps t...
null
https://github.com/scikit-learn/scikit-learn/pull/13271
null
{'base_commit': '86476582a3759b82fd163d27522bd2de6ad95b6c', 'files': [{'path': 'doc/modules/classes.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [117]}}}, {'path': 'sklearn/cluster/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14, 35]}}}, {'path': 'sklearn/cluster/dbsca...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/cluster/__init__.py", "sklearn/cluster/optics_.py", "sklearn/cluster/dbscan_.py" ], "doc": [ "doc/modules/classes.rst" ], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
ebf2bf81075ae1f4eb47ea0f54981c512bda5ceb
https://github.com/scikit-learn/scikit-learn/issues/5022
Deprecate n_iter in SGDClassifier and implement max_iter.
We should implement a stopping condition based on the scaled norm of the parameter update as done in the new SAG solver for LogisticRegression / Ridge. The convergence check should be done at the end of the each epoch to avoid introducing too much overhead. Other classes sharing the same underlying implementation shou...
null
https://github.com/scikit-learn/scikit-learn/pull/5036
null
{'base_commit': 'ebf2bf81075ae1f4eb47ea0f54981c512bda5ceb', 'files': [{'path': 'benchmarks/bench_covertype.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [105]}}}, {'path': 'benchmarks/bench_sgd_regression.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [23, 27], 'mod': [1, 2, 4, 5, 6...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/linear_model/sgd_fast.pyx", "benchmarks/bench_sparsify.py", "examples/linear_model/plot_sgd_separating_hyperplane.py", "sklearn/linear_model/passive_aggressive.py", "examples/linear_model/plot_sgd_weighted_samples.py", "sklearn/utils/weight_vector.pyx", "sklearn/linear...
1
scikit-learn
scikit-learn
dc1cad2b3fddb8b9069d7cfd89cb1039260baf8e
https://github.com/scikit-learn/scikit-learn/issues/28976
Documentation help wanted
`min_samples` in HDSCAN
### Describe the issue linked to the documentation I find the description of the `min_samples` argument in sklearn.cluster.HDBSCAN confusing. It says "The number of samples in a neighborhood for a point to be considered as a core point. This includes the point itself." But if I understand everything correctly `m...
null
https://github.com/scikit-learn/scikit-learn/pull/29263
null
{'base_commit': 'dc1cad2b3fddb8b9069d7cfd89cb1039260baf8e', 'files': [{'path': 'sklearn/cluster/_hdbscan/_reachability.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [65, 66]}}}, {'path': 'sklearn/cluster/_hdbscan/hdbscan.py', 'status': 'modified', 'Loc': {"('HDBSCAN', None, 419)": {'mod': [444, 445]...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/cluster/_hdbscan/hdbscan.py", "sklearn/cluster/_hdbscan/_reachability.pyx" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
127415b209ca1df3f8502bdf74de56c33aff2565
https://github.com/scikit-learn/scikit-learn/issues/901
add predict and fit_predict to more clustering algorithms
We should add `predict` and `fit_predict` to other clustering algorithms than `KMeans`: they are useful to retrieve cluster labels independently of the underlying attribute names...
null
https://github.com/scikit-learn/scikit-learn/pull/907
null
{'base_commit': '127415b209ca1df3f8502bdf74de56c33aff2565', 'files': [{'path': 'sklearn/base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [365]}}}, {'path': 'sklearn/cluster/affinity_propagation_.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12]}, "('AffinityPropagation', None, 1...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/cluster/hierarchical.py", "sklearn/cluster/mean_shift_.py", "sklearn/base.py", "sklearn/cluster/affinity_propagation_.py", "sklearn/cluster/dbscan_.py", "sklearn/cluster/k_means_.py", "sklearn/cluster/spectral.py" ], "doc": [], "test": [ "sklearn/cluster/test...
1
scikit-learn
scikit-learn
9385c45c0379ceab913daa811b1e7d4128faee35
https://github.com/scikit-learn/scikit-learn/issues/4700
Bug
cross_val_predict AttributeError with lists
When calling the cross_val_predict with an X parameter that is a list type, an AttributeError is raised on line 1209. This is because it is checking for the shape of the X parameter, but a list does not have the shape attribute. The documentation says that this function supports lists so I am supposing that it isn't i...
null
https://github.com/scikit-learn/scikit-learn/pull/4705
null
{'base_commit': '9385c45c0379ceab913daa811b1e7d4128faee35', 'files': [{'path': 'sklearn/cross_validation.py', 'status': 'modified', 'Loc': {"(None, 'cross_val_predict', 958)": {'mod': [1027]}}}, {'path': 'sklearn/tests/test_cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1037]}, "('Mo...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/cross_validation.py", "sklearn/utils/mocking.py" ], "doc": [], "test": [ "sklearn/tests/test_cross_validation.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
053d2d1af477d9dc17e69162b9f2298c0fda5905
https://github.com/scikit-learn/scikit-learn/issues/19705
[RFC] Minimal scipy version for 1.0 (or 0.26) release
#### Proposal I'd like to propose to increase the minimal scipy version to 1.0. ```python SCIPY_MIN_VERSION = '1.0.0' ``` #### Reasoning 1. In case we should release scikit-learn 1.0, it would be a good fit:smirk: 2. Linear quantile regression #9978 could make it into the next release. It uses `scipy.optimiz...
null
https://github.com/scikit-learn/scikit-learn/pull/20069
null
{'base_commit': '053d2d1af477d9dc17e69162b9f2298c0fda5905', 'files': [{'path': '.circleci/config.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6, 11, 50, 99, 133]}}}, {'path': '.travis.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [43, 48, 49, 50, 51, 52, 53, 54]}}}, {'path': 'a...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/decomposition/_truncated_svd.py", "doc/conftest.py", ".circleci/config.yml", "sklearn/_min_dependencies.py" ], "doc": [ "doc/whats_new/v1.0.rst", "doc/tutorial/statistical_inference/supervised_learning.rst", "doc/modules/sgd.rst" ], "test": [ "sklearn/utils...
1
scikit-learn
scikit-learn
a0ba256dbe9380b5d2cf9cee133482fc87768267
https://github.com/scikit-learn/scikit-learn/issues/19304
New Feature Easy module:ensemble
Poisson criterion in RandomForestRegressor
#### Describe the workflow you want to enable I want to officially use the Poisson splitting criterion in `RandomForestRegressor`. #### Describe your proposed solution #17386 implemented the poisson splitting criterion for `DecisionTreeRegressor` and `ExtraTreeRegressor`. This also enabled&mdash;somewhat silently&...
null
https://github.com/scikit-learn/scikit-learn/pull/19464
null
{'base_commit': 'a0ba256dbe9380b5d2cf9cee133482fc87768267', 'files': [{'path': 'sklearn/ensemble/_forest.py', 'status': 'modified', 'Loc': {"('BaseForest', 'fit', 274)": {'add': [317]}, "('RandomForestRegressor', None, 1279)": {'mod': [1301, 1304, 1305, 1307, 1308]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/ensemble/_forest.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
8453daa6b983ee2fd73d537e81e58b3f6b0e3147
https://github.com/scikit-learn/scikit-learn/issues/4846
Bug
RidgeClassifier triggers data copy
RidgeClassifier always triggers a data copy even when not using sample weights. Regression introduced in #4838. See: https://github.com/scikit-learn/scikit-learn/pull/4838#discussion_r32090535
null
https://github.com/scikit-learn/scikit-learn/pull/4851
null
{'base_commit': '99d08b571e4813e8d91d809b851b46e8cd5dd88f', 'files': [{'path': 'sklearn/linear_model/ridge.py', 'status': 'modified', 'Loc': {"('RidgeClassifier', 'fit', 575)": {'mod': [593, 594, 601, 602, 603]}, "('RidgeClassifierCV', 'fit', 1053)": {'mod': [1073, 1074, 1080, 1081, 1082]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/linear_model/ridge.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
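The regression above came from unconditionally scaling targets by sample weights, which copies the data even when no weights were passed. A hedged sketch of the guard such a fix restores (plain Python lists, hypothetical helper name, not the actual `ridge.py` code):

```python
def apply_sample_weight(y, sample_weight=None):
    """Scale targets by sample weights, but only when weights are
    actually provided -- otherwise return y untouched (no copy)."""
    if sample_weight is None:
        return y
    return [yi * wi for yi, wi in zip(y, sample_weight)]


y = [1.0, -1.0, 1.0]
print(apply_sample_weight(y) is y)             # True: same object, no copy
print(apply_sample_weight(y, [2.0, 1.0, 0.5]))
```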
scikit-learn
scikit-learn
c9e227b70d64f73b953d8d60629d6ac63e02a91c
https://github.com/scikit-learn/scikit-learn/issues/7467
Bug
float numbers can't be set to RFECV's parameter "step"
#### Description When I use RFECV with parameter 'step' as a float number will cause warnings/errors "rfe.py:203: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future". And the analysis can't be finished until integer or 1/2. I read description of RFECV an...
null
https://github.com/scikit-learn/scikit-learn/pull/7469
null
{'base_commit': 'c9e227b70d64f73b953d8d60629d6ac63e02a91c', 'files': [{'path': 'sklearn/feature_selection/rfe.py', 'status': 'modified', 'Loc': {"('RFECV', 'fit', 378)": {'add': [398], 'mod': [427]}}}, {'path': 'sklearn/feature_selection/tests/test_rfe.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/feature_selection/rfe.py" ], "doc": [], "test": [ "sklearn/feature_selection/tests/test_rfe.py" ], "config": [], "asset": [] }
1
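The issue above asks RFECV to accept a fractional `step`, interpreting a float in (0, 1) as a fraction of the features. A sketch of that conversion under those assumptions (hypothetical helper, not the actual `rfe.py` code):

```python
def resolve_step(step, n_features):
    """Turn an RFE-style `step` into a whole number of features to drop:
    a float in (0, 1) means a fraction of n_features; ints pass through."""
    if 0.0 < step < 1.0:
        return max(1, int(step * n_features))
    if step >= 1:
        return int(step)
    raise ValueError("step must be > 0")


print(resolve_step(0.1, 25))  # 10% of 25 features -> 2
print(resolve_step(3, 25))    # integer step passes through -> 3
```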
scikit-learn
scikit-learn
9b42b0cc7d5cf6978805619bc2433e3888c38d0c
https://github.com/scikit-learn/scikit-learn/issues/17814
Bug
l1_ratio in sklearn.linear_model's ElasticNet greater than 1?
I accidentally ran ElasticNet (from sklearn.linear_model) for l1_ratio >1, and no error or warning was raised. From the docsstring, it says that ``0 < l1_ratio < 1``. Should we raise a ValueError or something? Found this with @mathurinm. If this turns out to be something to be done, I could help out if someone could...
null
https://github.com/scikit-learn/scikit-learn/pull/17846
null
{'base_commit': '9b42b0cc7d5cf6978805619bc2433e3888c38d0c', 'files': [{'path': 'sklearn/linear_model/_coordinate_descent.py', 'status': 'modified', 'Loc': {"('ElasticNet', 'fit', 719)": {'add': [757]}}}, {'path': 'sklearn/linear_model/tests/test_coordinate_descent.py', 'status': 'modified', 'Loc': {'(None, None, None)'...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/linear_model/_coordinate_descent.py" ], "doc": [], "test": [ "sklearn/linear_model/tests/test_coordinate_descent.py" ], "config": [], "asset": [] }
1
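The fix for the report above adds an explicit parameter check in `ElasticNet.fit`. A minimal sketch of such a range validation (hypothetical function name; the real check lives in `_coordinate_descent.py`, and the exact bounds used there may differ):

```python
def check_l1_ratio(l1_ratio):
    """Raise ValueError unless l1_ratio lies in the closed interval [0, 1]."""
    if not 0.0 <= l1_ratio <= 1.0:
        raise ValueError(
            f"l1_ratio must be between 0 and 1; got l1_ratio={l1_ratio}"
        )
    return l1_ratio


print(check_l1_ratio(0.5))
try:
    check_l1_ratio(1.5)   # out of range: should now raise
except ValueError as exc:
    print(type(exc).__name__)
```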
scikit-learn
scikit-learn
38c7e93b1edcbfb85060cf7c14cca3ab47b9267c
https://github.com/scikit-learn/scikit-learn/issues/8499
Bug
Memory leak in LogisticRegression
Dear all, while running many logistic regressions, I encountered a continuous memory increase on several (Debian) machines. The problem is isolated in this code: ```python import sklearn from sklearn.linear_model import LogisticRegression import numpy as np import time import psutil import os if __name__...
null
https://github.com/scikit-learn/scikit-learn/pull/9024
null
{'base_commit': '38c7e93b1edcbfb85060cf7c14cca3ab47b9267c', 'files': [{'path': 'sklearn/svm/src/liblinear/liblinear_helper.c', 'status': 'modified', 'Loc': {"(None, 'free_problem', 217)": {'add': [221]}}}, {'path': 'sklearn/svm/src/liblinear/linear.cpp', 'status': 'modified', 'Loc': {"(None, 'free_model_content', 2907)...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/svm/src/liblinear/liblinear_helper.c", "sklearn/svm/src/liblinear/linear.cpp" ], "doc": [], "test": [], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
e25e8e2119ab6c5aa5072b05c0eb60b10aee4b05
https://github.com/scikit-learn/scikit-learn/issues/29906
Bug
Incorrect sample weight handling in `KBinsDiscretizer`
### Describe the bug Sample weights are not properly passed through when specifying subsample within KBinsDiscretizer. ### Steps/Code to Reproduce ```python from sklearn.datasets import make_blobs from sklearn.preprocessing import KBinsDiscretizer import numpy as np rng = np.random.RandomState(42) # F...
null
https://github.com/scikit-learn/scikit-learn/pull/29907
null
{'base_commit': 'e25e8e2119ab6c5aa5072b05c0eb60b10aee4b05', 'files': [{'path': 'sklearn/ensemble/_hist_gradient_boosting/tests/test_gradient_boosting.py', 'status': 'modified', 'Loc': {"(None, 'make_missing_value_data', 564)": {'mod': [571]}}}, {'path': 'sklearn/inspection/tests/test_permutation_importance.py', 'status...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/utils/_indexing.py", "sklearn/utils/stats.py", "sklearn/preprocessing/_discretization.py", "sklearn/utils/_test_common/instance_generator.py" ], "doc": [], "test": [ "sklearn/preprocessing/tests/test_discretization.py", "sklearn/tests/test_docstring_parameters.py", ...
1
scikit-learn
scikit-learn
dcfb3df9a3df5aa2a608248316d537cd6b3643ee
https://github.com/scikit-learn/scikit-learn/issues/6656
New Feature module:ensemble
var.monotone option in GradientBoosting
Hi, is it possible to add the equivalent of the var.monotone option in R GBM package to the GradientBoostingClassifier/Regressor? Sometimes it is really useful when we know/want some factors to have monotonic effect to avoid overfitting and non-intuitive results. Thanks!
null
https://github.com/scikit-learn/scikit-learn/pull/15582
null
{'base_commit': 'dcfb3df9a3df5aa2a608248316d537cd6b3643ee', 'files': [{'path': 'doc/modules/ensemble.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1052], 'mod': [900]}}}, {'path': 'doc/whats_new/v0.23.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [186]}}}, {'path': 'sklearn/ense...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/ensemble/_hist_gradient_boosting/grower.py", "sklearn/ensemble/_hist_gradient_boosting/splitting.pyx", "sklearn/ensemble/_hist_gradient_boosting/common.pxd", "sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py" ], "doc": [ "doc/modules/ensemble.rst", "doc/wh...
1
scikit-learn
scikit-learn
417788c6a54c39614b82acf1a04b1f97f8a32199
https://github.com/scikit-learn/scikit-learn/issues/6783
"scoring must return a number" error with custom scorer
#### Description I'm encountering the same error (`ValueError: scoring must return a number, got [...] (<class 'numpy.core.memmap.memmap'>) instead.`) as #6147, despite running v0.17.1. This is because I'm creating my own scorer, following the example in this [article](http://bigdataexaminer.com/data-science/dealing-w...
null
https://github.com/scikit-learn/scikit-learn/pull/6789
null
{'base_commit': '417788c6a54c39614b82acf1a04b1f97f8a32199', 'files': [{'path': 'sklearn/cross_validation.py', 'status': 'modified', 'Loc': {"(None, '_score', 1645)": {'add': [1650]}}}, {'path': 'sklearn/model_selection/_validation.py', 'status': 'modified', 'Loc': {"(None, '_score', 298)": {'add': [303]}}}, {'path': 's...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/model_selection/_validation.py", "sklearn/cross_validation.py" ], "doc": [], "test": [ "sklearn/model_selection/tests/test_validation.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
3f49cee020a91a0be5d0d5602d29b3eefce9d758
https://github.com/scikit-learn/scikit-learn/issues/3722
Bug Easy
preprocessing.scale provides inconsistent results on arrays with zero variance
I'm using Python 2.7, NumPy 1.8.2 and scikit-learn 0.14.1 on x64 linux (all installed through Anaconda) and getting very inconsistent results for preprocessing.scale function: > print preprocessing.scale(np.zeros(6) + np.log(1e-5)) > [ 0. 0. 0. 0. 0. 0.] > > print preprocessing.scale(np.zeros(8) + np.log(1e-5)) ...
null
https://github.com/scikit-learn/scikit-learn/pull/4436
null
{'base_commit': 'ad26ae47057885415f74893d6329a481b0ce01bd', 'files': [{'path': 'doc/whats_new.rst', 'status': 'modified', 'Loc': {'(None, None, 231)': {'add': [231]}, '(None, None, 3378)': {'add': [3378]}}}, {'path': 'sklearn/preprocessing/_weights.py', 'status': 'modified', 'Loc': {}}, {'path': 'sklearn/preprocessing/...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/preprocessing/_weights.py", "sklearn/preprocessing/data.py" ], "doc": [ "doc/whats_new.rst" ], "test": [ "sklearn/preprocessing/tests/test_data.py" ], "config": [], "asset": [] }
1
scikit-learn
scikit-learn
eda99f3cec70ba90303de0ef3ab7f988657fadb9
https://github.com/scikit-learn/scikit-learn/issues/13362
Bug Blocker
return_intercept==True in ridge_regression raises an exception
<!-- If your issue is a usage question, submit it here instead: - StackOverflow with the scikit-learn tag: https://stackoverflow.com/questions/tagged/scikit-learn - Mailing List: https://mail.python.org/mailman/listinfo/scikit-learn For more information, see User Questions: http://scikit-learn.org/stable/support.ht...
null
https://github.com/scikit-learn/scikit-learn/pull/13363
null
{'base_commit': 'eda99f3cec70ba90303de0ef3ab7f988657fadb9', 'files': [{'path': 'doc/whats_new/v0.21.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [342]}}}, {'path': 'sklearn/linear_model/ridge.py', 'status': 'modified', 'Loc': {"(None, '_ridge_regression', 366)": {'mod': [371, 372, 373, 374, 375, 37...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "sklearn/linear_model/ridge.py" ], "doc": [ "doc/whats_new/v0.21.rst" ], "test": [ "sklearn/linear_model/tests/test_ridge.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
df2fb490a58f272067b33aad372bb4fe2393bb93
https://github.com/pandas-dev/pandas/issues/7261
Bug Missing-data Dtype Conversions
API: Should Index.min and max use nanmin and nanmax?
Index and Series `min` and `max` handles `nan` and `NaT` differently. Even though `min` and `max` are defined in `IndexOpsMixin`, `Series` doesn't use them and use `NDFrame` definitions. ``` pd.Index([np.nan, 1.0]).min() # nan pd.Index([np.nan, 1.0]).max() # nan pd.DatetimeIndex([pd.NaT, '2011-01-01']).min() # NaT ...
null
https://github.com/pandas-dev/pandas/pull/7279
null
{'base_commit': 'df2fb490a58f272067b33aad372bb4fe2393bb93', 'files': [{'path': 'doc/source/v0.14.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [67]}}}, {'path': 'pandas/core/base.py', 'status': 'modified', 'Loc': {"('IndexOpsMixin', 'max', 237)": {'mod': [239]}, "('IndexOpsMixin', 'min', 241)": {'...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/base.py", "pandas/tseries/index.py" ], "doc": [ "doc/source/v0.14.1.txt" ], "test": [ "pandas/tests/test_base.py" ], "config": [], "asset": [] }
1
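The record above is about `Index.min`/`max` returning NaN while `Series` skips it. A stdlib sketch of the NaN-skipping reduction the fix converges on (illustrative, not the pandas implementation):

```python
import math

def nanmin(values):
    """Minimum of `values` ignoring NaNs; NaN only if everything is NaN."""
    finite = [v for v in values if not math.isnan(v)]
    return min(finite) if finite else math.nan


print(nanmin([math.nan, 1.0]))   # NaN is skipped -> 1.0
print(nanmin([math.nan]))        # all-NaN input -> nan
```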
pandas-dev
pandas
abd5333e7a3332921707888de9621c52dd3408e6
https://github.com/pandas-dev/pandas/issues/7943
Enhancement API Design Timezones
tz_localize should support is_dst input array
When storing datetimes with timezone information in mysql I split out the is_dst flag into a separate column. Then when reconstructing the Timestamps I am either forced to iterate through each row and call pytz.timezone.localize on every Timestamp which is very slow or do some magic with localizing what I can and then...
null
https://github.com/pandas-dev/pandas/pull/7963
null
{'base_commit': 'abd5333e7a3332921707888de9621c52dd3408e6', 'files': [{'path': 'doc/source/timeseries.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1359, 1490, 1509], 'mod': [1492, 1493, 1494, 1503, 1507, 1511, 1512, 1513, 1514, 1516, 1517]}}}, {'path': 'doc/source/v0.15.0.txt', 'status': 'modified...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/generic.py", "pandas/tslib.pyx", "pandas/tseries/index.py" ], "doc": [ "doc/source/timeseries.rst", "doc/source/v0.15.0.txt" ], "test": [ "pandas/tseries/tests/test_timezones.py", "pandas/tseries/tests/test_tslib.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
a9421af1aac906cc38d025ed5db4a2b55cb8b9bc
https://github.com/pandas-dev/pandas/issues/16773
Performance Sparse
SparseDataFrame constructor has horrible performance for df with many columns
#### Code Sample This is an example taken directly from the [docs](https://pandas.pydata.org/pandas-docs/stable/sparse.html#sparsedataframe), only that I've changed the sparsity of the arrays from 90% to 99%. ```python import pandas as pd from scipy.sparse import csr_matrix import numpy as np arr = np.rando...
null
https://github.com/pandas-dev/pandas/pull/16883
null
{'base_commit': 'a9421af1aac906cc38d025ed5db4a2b55cb8b9bc', 'files': [{'path': 'asv_bench/benchmarks/sparse.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 29]}}}, {'path': 'doc/source/whatsnew/v0.21.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [137]}}}, {'path': 'pandas/core...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "asv_bench/benchmarks/sparse.py", "pandas/core/sparse/frame.py" ], "doc": [ "doc/source/whatsnew/v0.21.0.txt" ], "test": [ "pandas/tests/reshape/test_reshape.py", "pandas/tests/sparse/test_frame.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
ba48fc4a033f11513fa2dd44c946e18b7bc27ad2
https://github.com/pandas-dev/pandas/issues/26058
Docs CI
DOC: test new sphinx 2 release
The docs are currently being built with sphinx 1.8.5 (see eg https://travis-ci.org/pandas-dev/pandas/jobs/518832177 for a recent build on master). Sphinx has released 2.0.0 (http://www.sphinx-doc.org/en/master/changes.html#release-2-0-0-released-mar-29-2019), and it would be good to test our docs with this new relea...
null
https://github.com/pandas-dev/pandas/pull/26519
null
{'base_commit': 'ba48fc4a033f11513fa2dd44c946e18b7bc27ad2', 'files': [{'path': 'pandas/core/indexes/base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [54]}, "('Index', None, 165)": {'add': [2790]}}}, {'path': 'pandas/core/indexes/interval.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mo...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/indexes/interval.py", "pandas/core/indexes/base.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
pandas-dev
pandas
45d8d77f27cf0dbc8cefe932f8fb64f6982b9527
https://github.com/pandas-dev/pandas/issues/10078
good first issue Needs Tests
Pandas attempts to convert some strings to timestamps when grouping by a timestamp and aggregating?
I am working through logs of web requests, and when I want to find the most common, say, user agent string for a (disguised) user, I run something like the following: ``` from pandas import Series, DataFrame, Timestamp tdf = DataFrame({'day': {0: Timestamp('2015-02-24 00:00:00'), 1: Timestamp('2015-02-24 00:00:00'),...
null
https://github.com/pandas-dev/pandas/pull/30646
null
{'base_commit': '45d8d77f27cf0dbc8cefe932f8fb64f6982b9527', 'files': [{'path': 'pandas/tests/frame/test_constructors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2427], 'mod': [2]}}}, {'path': 'pandas/tests/frame/test_missing.py', 'status': 'modified', 'Loc': {"('TestDataFrameInterpolate', 'test_in...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [], "doc": [], "test": [ "pandas/tests/reshape/test_pivot.py", "pandas/tests/groupby/test_apply.py", "pandas/tests/indexing/test_loc.py", "pandas/tests/frame/test_constructors.py", "pandas/tests/indexing/multiindex/test_loc.py", "pandas/tests/groupby/test_groupby.py", "pandas...
1
pandas-dev
pandas
636dd01fdacba0c8f0e7b5aaa726165983fc861d
https://github.com/pandas-dev/pandas/issues/21356
IO JSON good first issue
JSON nested_to_record Silently Drops Top-Level None Values
xref https://github.com/pandas-dev/pandas/pull/21164#issuecomment-394510095 `nested_to_record` is silently dropping `None` values that appear at the top of the JSON. This is IMO unexpected and undesirable. #### Code Sample, a copy-pastable example if possible ```python In [3]: data = { ...: "id": None...
null
https://github.com/pandas-dev/pandas/pull/21363
null
{'base_commit': '636dd01fdacba0c8f0e7b5aaa726165983fc861d', 'files': [{'path': 'doc/source/whatsnew/v0.23.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33]}}}, {'path': 'pandas/io/json/normalize.py', 'status': 'modified', 'Loc': {"(None, 'nested_to_record', 24)": {'mod': [83, 84]}}}, {'path': 'pa...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/io/json/normalize.py" ], "doc": [ "doc/source/whatsnew/v0.23.1.txt" ], "test": [ "pandas/tests/io/json/test_normalize.py" ], "config": [], "asset": [] }
1
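The bug above is `nested_to_record` silently dropping top-level `None` values while flattening. A stdlib sketch of a flattener that keeps every scalar, `None` included, and only recurses into dicts (hypothetical helper, not the pandas `normalize.py` code):

```python
def flatten(record, sep="."):
    """Flatten nested dicts into a single level, preserving scalar
    values -- including None -- rather than dropping them."""
    out = {}
    for key, value in record.items():
        if isinstance(value, dict):
            for inner_key, inner_value in flatten(value, sep).items():
                out[f"{key}{sep}{inner_key}"] = inner_value
        else:
            out[key] = value  # None survives here
    return out


data = {"id": None, "location": {"country": {"state": "TX"}}}
print(flatten(data))
```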
pandas-dev
pandas
19f715c51d16995fc6cd0c102fdba2f213a83a0f
https://github.com/pandas-dev/pandas/issues/24607
Missing-data Complex
DES: Should util.is_nan check for complex('nan')?
It doesn't at the moment. A handful of functions in libs.missing _do_ check for complex nan, and could be simplified/de-duplicated if we make util.is_nan also catch the complex case.
null
https://github.com/pandas-dev/pandas/pull/24628
null
{'base_commit': 'd106e9975100cd0f2080d7b1a6111f20fb64f906', 'files': [{'path': 'pandas/_libs/missing.pyx', 'status': 'modified', 'Loc': {'(None, None, 15)': {'mod': [15]}, '(None, None, 23)': {'mod': [23, 24, 25, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]}, '(None, None, 65)': {'mod': [65, 66, 67, 68, 69, 70, ...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/_libs/missing.pyx", "pandas/_libs/tslibs/nattype.pyx", "pandas/_libs/tslibs/nattype.pxd", "pandas/_libs/tslibs/util.pxd" ], "doc": [], "test": [ "pandas/tests/dtypes/test_missing.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
a797b28c87d90a439dfa2c12b4a11e62bf0d6db2
https://github.com/pandas-dev/pandas/issues/7778
Bug Datetime Dtype Conversions Timedelta
BUG: df.apply handles np.timedelta64 as timestamp, should be timedelta
I think there may be a bug with the row-wise handling of `numpy.timedelta64` data types when using `DataFrame.apply`. As a check, the problem does not appear when using `DataFrame.applymap`. The problem may be related to #4532, but I'm unsure. I've included an example below. This is only a minor problem for my use-cas...
null
https://github.com/pandas-dev/pandas/pull/7779
null
{'base_commit': 'a797b28c87d90a439dfa2c12b4a11e62bf0d6db2', 'files': [{'path': 'doc/source/v0.15.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [189]}}}, {'path': 'pandas/core/frame.py', 'status': 'modified', 'Loc': {"('DataFrame', '_apply_standard', 3516)": {'add': [3541], 'mod': [3550]}}}, {'path...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/internals.py", "pandas/core/frame.py", "pandas/core/series.py" ], "doc": [ "doc/source/v0.15.0.txt" ], "test": [ "pandas/tests/test_frame.py", "pandas/tests/test_internals.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
fcb0263762a31724ba6db39bf1564569dda068a0
https://github.com/pandas-dev/pandas/issues/16991
Bug Indexing
ValueError on df.columns.isin(pd.Series())
#### Code Sample, a copy-pastable example if possible ```python df = pd.DataFrame(columns=list('ab')) s1 = pd.Series(['a']) s2 = pd.Series() df.columns.isin(s1) df.columns.isin(s2) ``` #### Problem description The second call to `df.columns.isin(s2)` fails with D:\Anaconda\env...
null
https://github.com/pandas-dev/pandas/pull/17006
null
{'base_commit': 'fcb0263762a31724ba6db39bf1564569dda068a0', 'files': [{'path': 'doc/source/whatsnew/v0.21.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [206]}}}, {'path': 'pandas/core/algorithms.py', 'status': 'modified', 'Loc': {"(None, '_ensure_data', 41)": {'add': [67]}}}, {'path': 'pandas/test...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/algorithms.py" ], "doc": [ "doc/source/whatsnew/v0.21.0.txt" ], "test": [ "pandas/tests/test_algos.py", "pandas/tests/indexes/test_base.py", "pandas/tests/series/test_analytics.py", "pandas/tests/frame/test_analytics.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
0e8331f85cde8db2841aad92054d8e896e88fcef
https://github.com/pandas-dev/pandas/issues/51236
Docs good first issue
DOC fix EX02 errors in docstrings
pandas has a script for validating docstrings https://github.com/pandas-dev/pandas/blob/ced983358b06576af1a73c3e936171cc6dc98a6d/ci/code_checks.sh#L560-L568 which can be run with ``` ./ci/code_checks.sh docstrings ``` Currently, many functions fail the EX02 check, and so are excluded from the check. The ...
null
https://github.com/pandas-dev/pandas/pull/51724
null
{'base_commit': 'ce3260110f8f5e17c604e7e1a67ed7f8fb07f5fc', 'files': [{'path': 'ci/code_checks.sh', 'status': 'modified', 'Loc': {'(None, None, 82)': {'mod': [82, 83]}, '(None, None, 560)': {'mod': [560, 561, 562, 563, 564, 565, 566, 567, 568]}}}, {'path': 'pandas/core/dtypes/common.py', 'status': 'modified', 'Loc': {"...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/plotting/_core.py", "pandas/plotting/_misc.py", "pandas/core/dtypes/common.py" ], "doc": [], "test": [], "config": [], "asset": [ "ci/code_checks.sh" ] }
1
pandas-dev
pandas
2e087c7841aec84030fb489cec9bfeb38fe8086f
https://github.com/pandas-dev/pandas/issues/10043
Indexing
iloc breaks on read-only dataframe
This is picking up #9928 again. I don't know if the behavior is expected, but it is a bit odd to me. Maybe I'm doing something wrong, I'm not that familiar with the pandas internals. We call `df.iloc[indices]` and that breaks with a read-only dataframe. I feel that it shouldn't though, as it is not writing. Minimal r...
null
https://github.com/pandas-dev/pandas/pull/10070
null
{'base_commit': '2e087c7841aec84030fb489cec9bfeb38fe8086f', 'files': [{'path': 'pandas/src/generate_code.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [148, 170], 'mod': [96, 97, 98, 99, 100, 101, 143, 145]}}}, {'path': 'pandas/tests/test_common.py', 'status': 'modified', 'Loc': {"('TestTake', '_test...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/src/generate_code.py" ], "doc": [], "test": [ "pandas/tests/test_common.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
89b3d6b201b5d429a202b5239054d5a70c8b5071
https://github.com/pandas-dev/pandas/issues/38495
Performance Regression
Major Performance regression of df.groupby(..).indices
I'm experiencing major performance regressions with pandas=1.1.5 versus 1.1.3 Version 1.1.3: ``` Python 3.7.9 | packaged by conda-forge | (default, Dec 9 2020, 20:36:16) [MSC v.1916 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 7.19.0 -- An enhanced Interactive Python. Ty...
null
https://github.com/pandas-dev/pandas/pull/38892
null
{'base_commit': '89b3d6b201b5d429a202b5239054d5a70c8b5071', 'files': [{'path': 'asv_bench/benchmarks/groupby.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [128]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "asv_bench/benchmarks/groupby.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
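The regression above concerns `groupby(...).indices`. Conceptually that mapping is one linear pass collecting row positions per key, which a `defaultdict` sketch captures (stdlib only; pandas does this in optimized Cython, so this is a model of the semantics, not the implementation):

```python
from collections import defaultdict

def group_indices(keys):
    """Map each distinct key to the list of row positions where it
    occurs -- a single pass, like groupby(...).indices."""
    indices = defaultdict(list)
    for position, key in enumerate(keys):
        indices[key].append(position)
    return dict(indices)


print(group_indices(["a", "b", "a", "c", "b", "a"]))
```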
pandas-dev
pandas
03e58585036c83ca3d4c86d7d3d7ede955c15130
https://github.com/pandas-dev/pandas/issues/37748
Bug Indexing
BUG: ValueError is mistakenly raised if a numpy array is assigned to a pd.Series of dtype=object and both have the same length
- [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandas. - [ ] (optional) I have confirmed this bug exists on the master branch of pandas. --- #### Code Sample, a copy-pastable example ```python import pandas as pd impor...
null
https://github.com/pandas-dev/pandas/pull/38266
null
{'base_commit': '03e58585036c83ca3d4c86d7d3d7ede955c15130', 'files': [{'path': 'doc/source/whatsnew/v1.2.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [680]}}}, {'path': 'pandas/core/indexers.py', 'status': 'modified', 'Loc': {"(None, 'is_scalar_indexer', 68)": {'add': [81]}}}, {'path': 'pandas/te...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/indexers.py" ], "doc": [ "doc/source/whatsnew/v1.2.0.rst" ], "test": [ "pandas/tests/indexing/test_indexers.py", "pandas/tests/indexing/test_loc.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
f09d514cf0b09e65baf210a836de04e69b208cef
https://github.com/pandas-dev/pandas/issues/49247
Bug Reshaping Warnings
BUG: Getting FutureWarning for Groupby.mean when using .pivot_table
### Pandas version checks - [X] I have checked that this issue has not already been reported. - [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [X] I have confirmed this bug exists on the main branch of pandas. ### Reproducible Example ...
null
https://github.com/pandas-dev/pandas/pull/49615
null
{'base_commit': 'f09d514cf0b09e65baf210a836de04e69b208cef', 'files': [{'path': 'pandas/core/reshape/pivot.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [23]}, "(None, '__internal_pivot_table', 113)": {'mod': [167]}}}, {'path': 'pandas/tests/reshape/test_pivot.py', 'status': 'modified', 'Loc': {"('Tes...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/reshape/pivot.py", "pandas/util/_exceptions.py" ], "doc": [], "test": [ "pandas/tests/reshape/test_pivot.py" ], "config": [], "asset": [] }
1
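The fix for the report above keeps the deprecation warning for direct user calls but silences it when `pivot_table` calls the deprecated path internally. A stdlib sketch of that pattern with `warnings.catch_warnings` (both function names are made up for illustration):

```python
import warnings

def mean_deprecated(values):
    """Stand-in for an API that warns on every call."""
    warnings.warn("numeric_only default will change", FutureWarning)
    return sum(values) / len(values)

def pivot_mean(values):
    """Internal caller: suppress the FutureWarning the user didn't cause."""
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", FutureWarning)
        return mean_deprecated(values)


print(pivot_mean([1.0, 2.0, 3.0]))  # no warning reaches the user
```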
pandas-dev
pandas
e226bacd9e0d69ce3a81abfa09ae850f4610f888
https://github.com/pandas-dev/pandas/issues/8169
Bug Groupby Dtype Conversions
BUG: groupby.count() on different dtypes seems buggy
from [SO](http://stackoverflow.com/questions/25648923/groupby-count-returns-different-values-for-pandas-dataframe-count-vs-describ) something odd going on here: ``` vals = np.hstack((np.random.randint(0,5,(100,2)), np.random.randint(0,2,(100,2)))) df = pd.DataFrame(vals, columns=['a', 'b', 'c', 'd']) df[df==2] = np.n...
null
https://github.com/pandas-dev/pandas/pull/8171
null
{'base_commit': 'e226bacd9e0d69ce3a81abfa09ae850f4610f888', 'files': [{'path': 'doc/source/v0.15.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [671]}}}, {'path': 'pandas/core/groupby.py', 'status': 'modified', 'Loc': {"(None, '_count_compat', 149)": {'mod': [150, 151, 152, 153]}, "('BaseGrouper', ...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/groupby.py" ], "doc": [ "doc/source/v0.15.0.txt" ], "test": [ "pandas/tests/test_groupby.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
9ea0d4485e77c95ff0d8766990ab55d43472b66e
https://github.com/pandas-dev/pandas/issues/4312
Indexing Dtype Conversions
BUG: astype assignment via iloc/loc not working
http://stackoverflow.com/questions/17778139/pandas-unable-to-change-column-data-type/17778560#17778560 This might be trying to coerce `object` dtype to a real dtype (int/float) and is failing. Should prob raise for now (or work). Not working with iloc/loc. ``` In [66]: df = DataFrame([['1','2','3','.4',5,6.,'foo']],co...
null
https://github.com/pandas-dev/pandas/pull/4624
null
{'base_commit': '9ea0d4485e77c95ff0d8766990ab55d43472b66e', 'files': [{'path': 'doc/source/release.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [267]}}}, {'path': 'pandas/core/common.py', 'status': 'modified', 'Loc': {"(None, '_possibly_downcast_to_dtype', 960)": {'add': [989]}, "(None, '_maybe_upc...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/internals.py", "pandas/core/common.py", "pandas/core/indexing.py", "pandas/core/groupby.py" ], "doc": [ "doc/source/release.rst" ], "test": [ "pandas/tests/test_common.py", "pandas/tests/test_indexing.py", "pandas/tests/test_frame.py" ], "config": [...
1
pandas-dev
pandas
70435eba769c6bcf57332306455eb70db9fa1111
https://github.com/pandas-dev/pandas/issues/40730
Bug cut NA - MaskedArrays
BUG: qcut fails with Float64Dtype
- [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandas. - [ ] (optional) I have confirmed this bug exists on the master branch of pandas. --- #### Code Sample, a copy-pastable example ```python series = pd.Series([1.0, 2...
null
https://github.com/pandas-dev/pandas/pull/40969
null
{'base_commit': '70435eba769c6bcf57332306455eb70db9fa1111', 'files': [{'path': 'doc/source/whatsnew/v1.3.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [698]}}}, {'path': 'pandas/core/reshape/tile.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [28], 'mod': [27]}, "(None, '_coerce_...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/reshape/tile.py" ], "doc": [ "doc/source/whatsnew/v1.3.0.rst" ], "test": [ "pandas/tests/reshape/test_qcut.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
38afa9310040f1bd4fb122008e96fe6d719b12a2
https://github.com/pandas-dev/pandas/issues/19787
Missing-data Categorical Clean good first issue
Clean: Categorical.fillna NaN in categories checking
We don't allow NaN in the categories anymore, so this block should be unreachable. https://github.com/pandas-dev/pandas/blob/8bfcddc7728deaf8e840416d83c8feda86630d27/pandas/core/arrays/categorical.py#L1622-L1628 If anyone wants to remove it and test things out.
null
https://github.com/pandas-dev/pandas/pull/19880
null
{'base_commit': '38afa9310040f1bd4fb122008e96fe6d719b12a2', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [63], 'mod': [93]}}}, {'path': 'pandas/core/arrays/categorical.py', 'status': 'modified', 'Loc': {"('Categorical', 'fillna', 1590)": {'mod': [1630, 1631, 1632, 1633, 1...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/arrays/categorical.py" ], "doc": [], "test": [], "config": [ ".gitignore" ], "asset": [] }
1
pandas-dev
pandas
2dad23f766790510d09e66f1e02b57a395d479b1
https://github.com/pandas-dev/pandas/issues/9570
Enhancement Timedelta
timedelta string conversion requires two-digit hour value
`Timedelta('00:00:00')` works fine whereas `Timedelta('0:00:00')` raises an error. Unsure whether to call this a bug, but under some circumstances the `datetime` module in pure python will produce timedelta strings without the leading 0.
null
https://github.com/pandas-dev/pandas/pull/9868
null
{'base_commit': '2dad23f766790510d09e66f1e02b57a395d479b1', 'files': [{'path': 'doc/source/whatsnew/v0.16.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [52]}}}, {'path': 'pandas/tseries/tests/test_timedeltas.py', 'status': 'modified', 'Loc': {"('TestTimedeltas', 'test_construction', 35)": {'add': ...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/tseries/timedeltas.py" ], "doc": [ "doc/source/whatsnew/v0.16.1.txt" ], "test": [ "pandas/tseries/tests/test_timedeltas.py" ], "config": [], "asset": [] }
1
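The record above tracks single-digit-hour parsing in `Timedelta`. A quick sketch of the behavior after the fix (holds on any reasonably recent pandas):

```python
import pandas as pd

# Both the zero-padded and the single-digit hour forms now parse.
assert pd.Timedelta("00:00:00") == pd.Timedelta(0)
assert pd.Timedelta("0:00:00") == pd.Timedelta(0)
assert pd.Timedelta("1:02:03") == pd.Timedelta(hours=1, minutes=2, seconds=3)
```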
pandas-dev
pandas
b03df731095154e94d23db51d11df5dd736622f8
https://github.com/pandas-dev/pandas/issues/3925
Datetime
Access DateTimeIndexed dataframe by timestamp
Hello, I am new to pandas and thanks for this great library! I have a data frame like this: ``` Gold_2012.head() open high low close volume date_time 2012-01-02 18:01:00 1571.0 1571.0 1569.1 1569.8 351 2012-01-02 18:02:00 1569.8 1570.0 1569.7 1569.8 ...
null
https://github.com/pandas-dev/pandas/pull/3931
null
{'base_commit': 'b03df731095154e94d23db51d11df5dd736622f8', 'files': [{'path': 'RELEASE.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [256, 357, 360]}}}, {'path': 'pandas/core/indexing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}}}, {'path': 'pandas/tseries/index.py', 'statu...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/tseries/index.py", "pandas/core/indexing.py" ], "doc": [ "RELEASE.rst" ], "test": [ "pandas/tseries/tests/test_timeseries.py" ], "config": [], "asset": [] }
1
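The record above asks how to select rows of a `DatetimeIndex`-indexed frame by timestamp. A minimal sketch using exact lookup and partial-string indexing (standard pandas behavior; the column name and values are made up to mirror the report):

```python
import pandas as pd

df = pd.DataFrame(
    {"close": [1569.8, 1569.8, 1571.2]},
    index=pd.to_datetime(
        ["2012-01-02 18:01:00", "2012-01-02 18:02:00", "2012-01-03 09:00:00"]
    ),
)

# Exact timestamp lookup returns the row as a Series.
row = df.loc["2012-01-02 18:01:00"]

# Partial-string indexing selects every row on that date.
day = df.loc["2012-01-02"]
```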
pandas-dev
pandas
f231c9a74a544ec94cd12e813cb2543fb5a18556
https://github.com/pandas-dev/pandas/issues/35331
good first issue Needs Tests
BUG: np.argwhere on pandas series
- [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandas. - [ ] (optional) I have confirmed this bug exists on the master branch of pandas. --- numpy/numpy#15555 reports an issue with `np.argwhere` on pandas Series. Reporting ...
null
https://github.com/pandas-dev/pandas/pull/53381
null
{'base_commit': 'f231c9a74a544ec94cd12e813cb2543fb5a18556', 'files': [{'path': 'pandas/tests/series/test_npfuncs.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 7]}, "(None, 'test_numpy_unique', 19)": {'add': [21]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [], "doc": [], "test": [ "pandas/tests/series/test_npfuncs.py" ], "config": [], "asset": [] }
1
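The record above concerns `np.argwhere` on a pandas Series. A sketch of the unambiguous path — converting to a NumPy array first — which sidesteps the array-wrapping behavior the report describes:

```python
import numpy as np
import pandas as pd

s = pd.Series([0, 1, 2])

# argwhere on the underlying ndarray returns positional indices
# of the non-zero entries, one row per hit.
positions = np.argwhere(s.to_numpy())
```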
pandas-dev
pandas
5de6b84f5117b005a8f010d4510a758b50f3d14e
https://github.com/pandas-dev/pandas/issues/12081
Reshaping Error Reporting
DataFrame.merge with Series should give nice error message
Right now trying this results in "IndexError: list index out of range". It should say can't merge DataFrame with a Series... I've known about this for quite a while now, but I still get trapped by it every once in a while. This would be very helpful for beginners. Other people also get confused: http://stackoverflow.com/question...
null
https://github.com/pandas-dev/pandas/pull/12112
null
{'base_commit': '5de6b84f5117b005a8f010d4510a758b50f3d14e', 'files': [{'path': 'doc/source/whatsnew/v0.18.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [205]}}}, {'path': 'pandas/tools/merge.py', 'status': 'modified', 'Loc': {"('_MergeOperation', '__init__', 157)": {'add': [186]}}}, {'path': 'pand...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/tools/merge.py" ], "doc": [ "doc/source/whatsnew/v0.18.0.txt" ], "test": [ "pandas/tools/tests/test_merge.py" ], "config": [], "asset": [] }
1
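The record above asks for a clearer error when merging a DataFrame with a Series. A sketch of the supported path — a *named* Series merges fine, while an unnamed one is rejected with an explicit `ValueError` in modern pandas:

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "b"], "x": [1, 2]})
named = pd.Series([10, 20], name="y")

# Merging on the index works because the Series has a name
# to use as the resulting column label.
merged = pd.merge(df, named, left_index=True, right_index=True)

# An unnamed Series is rejected (ValueError in modern pandas).
try:
    pd.merge(df, pd.Series([10, 20]), left_index=True, right_index=True)
except ValueError:
    pass
```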
pandas-dev
pandas
a3c0e7bcfb8bbe9ca45df7e571a305d403e0f066
https://github.com/pandas-dev/pandas/issues/44597
API Design Deprecate
API/DEPR: int downcasting in DataFrame.where
`Block.where` has special downcasting logic that splits blocks differently from any other Block methods. I would like to deprecate and eventually remove this bespoke logic. The relevant logic is only reached AFAICT when we have integer dtype (non-int64) and an integer `other` too big for this dtype, AND the pas...
null
https://github.com/pandas-dev/pandas/pull/45009
null
{'base_commit': 'a3c0e7bcfb8bbe9ca45df7e571a305d403e0f066', 'files': [{'path': 'doc/source/whatsnew/v1.4.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [547]}}}, {'path': 'pandas/core/internals/blocks.py', 'status': 'modified', 'Loc': {"('Block', 'where', 1138)": {'add': [1229]}}}, {'path': 'pandas...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/internals/blocks.py" ], "doc": [ "doc/source/whatsnew/v1.4.0.rst" ], "test": [ "pandas/tests/frame/methods/test_clip.py", "pandas/tests/frame/indexing/test_where.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
32f789fbc5d5a72d9d1ac14935635289eeac9009
https://github.com/pandas-dev/pandas/issues/52151
Bug Groupby Categorical
BUG: Inconsistent behavior with `groupby/min` and `observed=False` on categoricals between 2.0 and 2.1
### Pandas version checks - [X] I have checked that this issue has not already been reported. - [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/...
null
https://github.com/pandas-dev/pandas/pull/52236
null
{'base_commit': '32f789fbc5d5a72d9d1ac14935635289eeac9009', 'files': [{'path': 'pandas/core/groupby/ops.py', 'status': 'modified', 'Loc': {"('WrappedCythonOp', '_ea_wrap_cython_operation', 358)": {'add': [404]}}}, {'path': 'pandas/tests/groupby/test_min_max.py', 'status': 'modified', 'Loc': {"(None, 'test_min_max_nulla...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/groupby/ops.py" ], "doc": [], "test": [ "pandas/tests/groupby/test_min_max.py" ], "config": [], "asset": [] }
1
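The record above is about `groupby(...).min()` on categoricals with `observed=False`. A sketch of the expected behavior — unobserved categories stay in the result, filled with NaN:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "cat": pd.Categorical(["a", "a"], categories=["a", "b"]),
        "val": [1, 2],
    }
)

# observed=False keeps the unobserved category "b" in the output.
result = df.groupby("cat", observed=False)["val"].min()
```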
pandas-dev
pandas
8924277fa3dbe775f46e679ab8bd97b293e465ea
https://github.com/pandas-dev/pandas/issues/41556
Bug Groupby Algos
BUG: groupby.shift return keys filled with `fill_value` when `fill_value` is specified
- [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandas. - [ ] (optional) I have confirmed this bug exists on the master branch of pandas. --- **Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28...
null
https://github.com/pandas-dev/pandas/pull/41858
null
{'base_commit': '8924277fa3dbe775f46e679ab8bd97b293e465ea', 'files': [{'path': 'asv_bench/benchmarks/groupby.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [371]}}}, {'path': 'doc/source/whatsnew/v1.4.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [170, 264]}}}, {'path': 'pandas/c...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "asv_bench/benchmarks/groupby.py", "pandas/core/groupby/groupby.py" ], "doc": [ "doc/source/whatsnew/v1.4.0.rst" ], "test": [ "pandas/tests/groupby/test_groupby_shift_diff.py" ], "config": [], "asset": [] }
1
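The record above tracks `groupby.shift` with `fill_value`. A sketch of the fixed behavior — only the shifted-in slots get the fill value, group keys are untouched:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1, 2, 3]})

# Each group's first shifted slot receives fill_value=0.
shifted = df.groupby("g")["v"].shift(1, fill_value=0)
```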
pandas-dev
pandas
92093457ca13ba037257d0b8d41735268535c84f
https://github.com/pandas-dev/pandas/issues/3573
Bug Output-Formatting
Unintuitive default behavior with wide DataFrames in the IPython notebook
In the IPython notebook, HTML output is the default, and whether the summary view is displayed should not be governed by a hypothetical line width. I ran into this problem in a demo recently and it took me a minute to figure out what was wrong; definitely a bad change in 0.11.
null
https://github.com/pandas-dev/pandas/pull/3663
null
{'base_commit': '0ed4549ac857fbf2c7e975acdf1d987bacc3ea32', 'files': [{'path': 'RELEASE.rst', 'status': 'modified', 'Loc': {'(None, None, 65)': {'add': [65]}, '(None, None, 87)': {'add': [87]}, '(None, None, 141)': {'add': [141]}}}, {'path': 'doc/source/faq.rst', 'status': 'modified', 'Loc': {'(None, None, 38)': {'mod'...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/common.py", "pandas/core/frame.py", "pandas/core/config_init.py", "pandas/core/format.py" ], "doc": [ "RELEASE.rst", "doc/source/faq.rst" ], "test": [ "pandas/tests/test_format.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
a214915e241ea15f3d072d54930d0e0c8f42ee10
https://github.com/pandas-dev/pandas/issues/19482
Dtype Conversions Error Reporting Numeric Operations
Rank With 'method=first' Broken for Objects
Came across this working on #15779 ```python In []: df = pd.DataFrame({'key': ['a'] * 5, 'val': ['bar', 'bar', 'foo', 'bar', 'baz']}) In []: df.groupby('key').rank(method='first') Out []: Empty DataFrame Columns: [] Index: [] ``` #### Expected Output ```python Out[]: val 0 1.0 1 2.0 ...
null
https://github.com/pandas-dev/pandas/pull/19481
null
{'base_commit': 'a214915e241ea15f3d072d54930d0e0c8f42ee10', 'files': [{'path': 'doc/source/whatsnew/v0.23.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [583]}}}, {'path': 'pandas/_libs/algos.pxd', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}}}, {'path': 'pandas/_libs/algos.pyx...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/_libs/groupby.pyx", "pandas/_libs/groupby_helper.pxi.in", "pandas/_libs/algos.pyx", "pandas/core/groupby.py", "pandas/_libs/algos.pxd" ], "doc": [ "doc/source/whatsnew/v0.23.0.txt" ], "test": [ "pandas/tests/groupby/test_groupby.py" ], "config": [], "asset...
1
pandas-dev
pandas
679dbd021eccc238e422057009365e2ee1c04b25
https://github.com/pandas-dev/pandas/issues/21687
Docs Usage Question Algos Window
"on" argument of DataFrame.rolling only works for datetime columns
the `on=` argument of `DataFrame.rolling` only works for datetime columns. ``` df = pd.DataFrame([ [18, 0], [2, 0], [1, 0], [9, 1], [8, 1], ], columns=['value', 'roll']) ``` ``` df.roll = pd.to_datetime(df.roll, unit='s') df.rolling('1s', on='roll').value.max() ``` returns: ``...
null
https://github.com/pandas-dev/pandas/pull/27265
null
{'base_commit': '679dbd021eccc238e422057009365e2ee1c04b25', 'files': [{'path': 'pandas/core/window.py', 'status': 'modified', 'Loc': {"('Window', None, 489)": {'mod': [516, 517]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/window.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
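The record above concerns the `on=` argument of `DataFrame.rolling` with an offset window, which requires a datetime-like column. A sketch of the workaround from the report itself — convert the integer-seconds column before rolling:

```python
import pandas as pd

df = pd.DataFrame(
    [[18, 0], [2, 0], [1, 0], [9, 1], [8, 1]],
    columns=["value", "roll"],
)

# Offset-based windows ("1s") need a monotonic datetime-like column,
# so convert the integer seconds first.
df["roll"] = pd.to_datetime(df["roll"], unit="s")
rolled = df.rolling("1s", on="roll")["value"].max()
```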
pandas-dev
pandas
940104efc9e708bc93744dfaa36c9492b03b1ca4
https://github.com/pandas-dev/pandas/issues/20452
Reshaping API Design
BUG: New feature allowing merging on combination of columns and index levels drops levels of index
#### Code Sample, a copy-pastable example if possible ```python In [1]: import pandas as pd In [2]: pd.__version__ Out[2]: '0.23.0.dev0+657.g01882ba5b' In [3]: df1 = pd.DataFrame({'v1' : range(12)}, index=pd.MultiIndex.from_product([list('abc'),list('xy'),[1,2]], names=['abc','xy','num'])) ...: df1 ...
null
https://github.com/pandas-dev/pandas/pull/20475
null
{'base_commit': '940104efc9e708bc93744dfaa36c9492b03b1ca4', 'files': [{'path': 'doc/source/merging.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1202], 'mod': [1136, 1137, 1141, 1142, 1143, 1146, 1164]}}}, {'path': 'doc/source/whatsnew/v0.24.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/reshape/merge.py" ], "doc": [ "doc/source/whatsnew/v0.24.0.rst", "doc/source/merging.rst" ], "test": [ "pandas/tests/reshape/merge/test_merge.py", "pandas/tests/reshape/merge/test_join.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
13940c7f3c0371d6799bbd88b9c6546392b418a1
https://github.com/pandas-dev/pandas/issues/35650
good first issue Needs Tests
BUG: pd.factorize with read-only datetime64 numpy array raises ValueError
- [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandas. - [ ] (optional) I have confirmed this bug exists on the master branch of pandas. --- **Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28...
null
https://github.com/pandas-dev/pandas/pull/35775
null
{'base_commit': '13940c7f3c0371d6799bbd88b9c6546392b418a1', 'files': [{'path': 'pandas/tests/test_algos.py', 'status': 'modified', 'Loc': {"('TestFactorize', 'test_object_factorize', 245)": {'add': [253]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [], "doc": [], "test": [ "pandas/tests/test_algos.py" ], "config": [], "asset": [] }
1
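The record above is about `pd.factorize` raising on a read-only `datetime64` array. A sketch of the call that the linked fix made work:

```python
import numpy as np
import pandas as pd

arr = np.array(["2021-01-01", "2021-01-02"], dtype="datetime64[ns]")
arr.setflags(write=False)  # simulate a read-only buffer

# factorize no longer requires a writable array.
codes, uniques = pd.factorize(arr)
```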
pandas-dev
pandas
816f94575c9ec1af2169a28536217c4d16dd6b4b
https://github.com/pandas-dev/pandas/issues/16033
Docs
DOC: styler warnings in doc-build
https://travis-ci.org/pandas-dev/pandas/jobs/222779268 ``` /tmp/doc/source/generated/pandas.io.formats.style.Styler.rst:74: WARNING: failed to import template: /tmp/doc/source/generated/pandas.io.formats.style.Styler.rst:74: WARNING: toctree references unknown document 'generated/template:' ``` cc @TomAugspurg...
null
https://github.com/pandas-dev/pandas/pull/16094
null
{'base_commit': 'f0bd908336a260cafa9d83c8244dd1a0a056f72d', 'files': [{'path': 'pandas/tests/io/formats/test_css.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'test_css_parse_strings', 46)": {'mod': [48, 49, 50, 51]}, "(None, 'test_css_parse_invalid', 79)": {'mod': [80]}, "(None, 'test_...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [], "doc": [], "test": [ "pandas/tests/io/formats/test_css.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
2067d7e306ae720d455f356e4da21f282a8a762e
https://github.com/pandas-dev/pandas/issues/35811
Bug Usage Question API Design Series
BUG/QST: Series.transform with a dictionary
What is the expected output of passing a dictionary to `Series.transform`? For example: s = pd.Series([1, 2, 3]) result1 = s.transform({'a': lambda x: x + 1}) result2 = s.transform({'a': lambda x: x + 1, 'b': lambda x: x + 2}) The docs say that `dict of axis labels -> functions` is acceptable, but I...
null
https://github.com/pandas-dev/pandas/pull/35964
null
{'base_commit': '2067d7e306ae720d455f356e4da21f282a8a762e', 'files': [{'path': 'doc/source/whatsnew/v1.2.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [344]}}}, {'path': 'pandas/core/aggregation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [23], 'mod': [21]}, "(None, 'validate_...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/core/aggregation.py", "pandas/core/base.py", "pandas/core/generic.py", "pandas/core/frame.py", "pandas/core/series.py", "pandas/core/shared_docs.py" ], "doc": [ "doc/source/whatsnew/v1.2.0.rst" ], "test": [ "pandas/tests/series/apply/test_series_apply.py", ...
1
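The record above questions what `Series.transform` should do with a dictionary. In current pandas the dict keys label the columns of a DataFrame result; a sketch (assumes a recent pandas where this path is defined):

```python
import pandas as pd

s = pd.Series([1, 2, 3])

# Each dict key labels one transformed column of the result.
out = s.transform({"a": lambda x: x + 1, "b": lambda x: x + 2})
```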
pandas-dev
pandas
889c2ff67af14213e8ed065df2957b07e34ac95b
https://github.com/pandas-dev/pandas/issues/33810
Testing IO Parquet
TST: add Feather V2 round-trip test
Now that pyarrow 0.17 has landed, we should have a round-trip Feather V2 test to ensure we have dtype preservation (we can likely re-use some of our test frames from the parquet tests).
null
https://github.com/pandas-dev/pandas/pull/33422
null
{'base_commit': '889c2ff67af14213e8ed065df2957b07e34ac95b', 'files': [{'path': 'doc/source/conf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [418]}}}, {'path': 'doc/source/user_guide/io.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4586, 4588, 4589, 4595, 4596]}}}, {'path': 'doc...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/io/feather_format.py", "pandas/core/frame.py", "doc/source/conf.py" ], "doc": [ "doc/source/user_guide/io.rst", "doc/source/whatsnew/v1.1.0.rst" ], "test": [ "pandas/tests/io/test_feather.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
b6691127523f965003dbf877a358c81af5012989
https://github.com/pandas-dev/pandas/issues/15630
Numeric Operations Algos
Pandas (0.18) Rank: unexpected behavior for method = 'dense' and pct = True
I find the behavior of rank function with method = 'dense' and pct = True unexpected as it looks like, in order to calculate percentile ranks, the function is using the total number of observations instead of the number of _distinct_ observations. #### Code Sample, a copy-pastable example if possible ``` import ...
null
https://github.com/pandas-dev/pandas/pull/15639
null
{'base_commit': 'b6691127523f965003dbf877a358c81af5012989', 'files': [{'path': 'doc/source/whatsnew/v0.23.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [909]}}}, {'path': 'pandas/_libs/algos_rank_helper.pxi.in', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [216, 388]}}}, {'path': 'p...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/_libs/algos_rank_helper.pxi.in" ], "doc": [ "doc/source/whatsnew/v0.23.0.txt" ], "test": [ "pandas/tests/series/test_rank.py", "pandas/tests/frame/test_rank.py" ], "config": [], "asset": [] }
1
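The record above covers `rank(method='dense', pct=True)`. After the fix, the percentage divides by the number of *distinct* observations rather than the total count; a sketch:

```python
import pandas as pd

s = pd.Series([1, 1, 2, 3])

# Dense ranks are 1, 1, 2, 3; pct divides by the 3 distinct values.
ranks = s.rank(method="dense", pct=True)
```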
pandas-dev
pandas
95be01dbc060f405b7928cc6e4ba4d6d6181c22a
https://github.com/pandas-dev/pandas/issues/13420
Groupby Categorical
DataFrame.groupby(grp, axis=1) with categorical grp breaks
While attempting to use `pd.qcut` (which returned a Categorical) to bin some data in groups for plotting, I encountered the following error. The idea is to group a DataFrame by columns (`axis=1`) using a Categorical. #### Minimal breaking example ``` >>> import pandas >>> df = pandas.DataFrame({'a':[1,2,3,4], 'b':[-1,...
null
https://github.com/pandas-dev/pandas/pull/27788
null
{'base_commit': '54e58039fddc79492e598e85279c42e85d06967c', 'files': [{'path': 'pandas/tests/groupby/test_categorical.py', 'status': 'modified', 'Loc': {"(None, 'test_seriesgroupby_observed_apply_dict', 1159)": {'add': [1165]}}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [], "doc": [], "test": [ "pandas/tests/groupby/test_categorical.py" ], "config": [], "asset": [] }
1
pandas-dev
pandas
be61825986ba565bc038beb2f5df2750fc1aca30
https://github.com/pandas-dev/pandas/issues/13565
Docs Usage Question Timezones
Call unique() on a timezone aware datetime series returns non timezone aware result
Call unique() on a timezone aware datetime series returns non timezone aware result. #### Code Sample import pandas as pd import pytz import datetime In [242]: ts = pd.Series([datetime.datetime(2011,2,11,20,0,0,0,pytz.utc), datetime.datetime(2011,2,11,20,0,0,0,pytz.utc), datetime.datetime(2011,2,11,21,0,0,0,pytz.utc...
null
https://github.com/pandas-dev/pandas/pull/13979
null
{'base_commit': 'be61825986ba565bc038beb2f5df2750fc1aca30', 'files': [{'path': 'doc/source/whatsnew/v0.19.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [460, 906]}}}, {'path': 'pandas/core/base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [10, 11, 24]}, "('IndexOpsMixin', None,...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/indexes/base.py", "pandas/core/base.py", "pandas/core/series.py", "pandas/tseries/base.py", "pandas/indexes/category.py" ], "doc": [ "doc/source/whatsnew/v0.19.0.txt" ], "test": [ "pandas/util/testing.py", "pandas/tests/test_base.py", "pandas/tests/index...
1
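The record above is about `unique()` dropping the timezone from a tz-aware series. A sketch of the fixed behavior — the timezone survives the round trip:

```python
import pandas as pd

ts = pd.Series(
    pd.to_datetime(
        ["2011-02-11 20:00:00", "2011-02-11 20:00:00", "2011-02-11 21:00:00"],
        utc=True,
    )
)

# unique() now returns a tz-aware result rather than naive datetime64.
uniq = ts.unique()
```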
pandas-dev
pandas
c4a996adfc91f023b46ce3cb67e33fc8b2ca3627
https://github.com/pandas-dev/pandas/issues/9400
Visualization Error Reporting
Improve error message in plotting.py's _plot
This is a minor enhancement proposal. At the moment I cannot submit a pull request. I will probably have time to create one during the next week. This is a snippet from `tools/plotting.py`: https://github.com/pydata/pandas/blob/master/pandas/tools/plotting.py#L2269-2283 ``` python def _plot(data, x=None, y=None, subplo...
null
https://github.com/pandas-dev/pandas/pull/9417
null
{'base_commit': 'c4a996adfc91f023b46ce3cb67e33fc8b2ca3627', 'files': [{'path': 'pandas/tools/plotting.py', 'status': 'modified', 'Loc': {"(None, '_plot', 2269)": {'mod': [2269, 2277]}}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "pandas/tools/plotting.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
pandas-dev
pandas
53243e8ec73ecf5035a63f426a9c703d6835e9a7
https://github.com/pandas-dev/pandas/issues/54889
Build
BUILD: Race condition between .pxi.in and .pyx compiles in parallel build of 2.1.0
### Installation check - [X] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas). ### Platform Linux-6.4.7-gentoo-dist-x86_64-AMD_Ryzen_5_3600_6-Core_Processor-with-glibc2.38 ### Installation Method Built from source ### pandas Version...
null
https://github.com/pandas-dev/pandas/pull/54958
null
{'base_commit': '53243e8ec73ecf5035a63f426a9c703d6835e9a7', 'files': [{'path': 'pandas/_libs/meson.build', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [72]}}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "pandas/_libs/meson.build" ] }
1
meta-llama
llama
7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e
https://github.com/meta-llama/llama/issues/658
documentation
Confusion about the default max_seq_len = 2048
When reading the class Transformer, I found that the code use max_seq_len * 2 to prepare the rotary positional encoding, which confused me for a while. Then I realized that the default max_seq_len was set to 2048, and the 'max_seq_len * 2' aims to generate 4096 positional embeddings, corresponding to the 4K context len...
null
https://github.com/meta-llama/llama/pull/754
null
{'base_commit': '7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e', 'files': [{'path': 'llama/model.py', 'status': 'modified', 'Loc': {"('Transformer', '__init__', 414)": {'add': [450]}}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null }
{ "code": [ "llama/model.py" ], "doc": [], "test": [], "config": [], "asset": [] }
1
pallets
flask
f88765d504ce2fa9bc3926c76910b11510522892
https://github.com/pallets/flask/issues/1224
Starting up a public server.
I ran into this problem today with one of my applications trying to make it public to my local network. C:\Users\Savion\Documents\GitHub\Example-Flask-Website>flask\Scripts\python run.py - Running on http://127.0.0.1:5000/ - Restarting with reloader 10.101.37.124 - - [26/Oct/2014 15:51:23] "GET / HTTP/1.1" 404 - ...
null
null
null
{'base_commit': 'f88765d504ce2fa9bc3926c76910b11510522892', 'files': [{'path': 'flask/views.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1\n404 error", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "flask/views.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
pallets
flask
2d8a21c7321a9ead8e27208b49a18f4b8b27e2c1
https://github.com/pallets/flask/issues/834
How to get the serialized version of the session cookie in 0.10?
In version 0.9 I could simply get the value of the `session` like this: ``` flask.session.serialize() ``` But after upgrading to 0.10 this is not working anymore.. what's the alternative? How can I get the session value? (`flask.request.cookies.get('session')` is not good for me, because I would like to get the ses...
null
null
null
{'base_commit': '2d8a21c7321a9ead8e27208b49a18f4b8b27e2c1', 'files': [{'path': 'flask/sessions.py', 'Loc': {"('SecureCookieSessionInterface', 'get_signing_serializer', 308)": {'mod': []}, "('TaggedJSONSerializer', 'dumps', 60)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3\nhow to do …", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "flask/sessions.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
pallets
flask
22d82e70b3647ed16c7d959a939daf533377382b
https://github.com/pallets/flask/issues/4015
2.0.0: build requires ContextVar module
Simple I cannot find it. ```console + /usr/bin/python3 setup.py build '--executable=/usr/bin/python3 -s' Traceback (most recent call last): File "setup.py", line 4, in <module> setup( File "/usr/lib/python3.8/site-packages/setuptools/__init__.py", line 144, in setup return distutils.core.setup(**attr...
null
null
null
{'base_commit': '22d82e70b3647ed16c7d959a939daf533377382b', 'files': [{'path': 'setup.py', 'Loc': {'(None, None, None)': {'mod': [7]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "setup.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
pallets
flask
43e2d7518d2e89dc7ed0b4ac49b2d20211ad1bfa
https://github.com/pallets/flask/issues/2977
Serial port access problem in DEBUG mode.
### Expected Behavior Sending commands through the serial port. ```python app = Flask(__name__) serialPort = serial.Serial(port = "COM5", baudrate=1000000, bytesize=8, timeout=2, stopbits=serial.STOPBITS_ONE) lamp = { 1 : {'name' : 'n1', 'state' : True}, 2 : {'name' : 'n2'...
null
null
null
{}
[ { "Loc": [ 7 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
pallets
flask
1a7fd980f8579bd7d7d53c812a77c1dc64be52ba
https://github.com/pallets/flask/issues/1749
JSONEncoder and aware datetimes
I was surprised to see that though flask.json.JSONEncoder accepts datetime objects, it ignores the timezone. I checked werkzeug.http.http_date and it can handle timezone aware dates just fine if they are passed in, but the JSONEncoder insists on transforming the datetime to a timetuple, like this `return http_date(o....
null
null
null
{'base_commit': '1a7fd980f8579bd7d7d53c812a77c1dc64be52ba', 'files': [{'path': 'flask/json.py', 'Loc': {"('JSONEncoder', 'default', 60)": {'mod': [78]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "flask/json.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
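The record above concerns Flask's JSON encoder discarding the timezone by taking a naive timetuple before formatting. The underlying fix is to convert an aware datetime to UTC instead; a stdlib-only sketch of that conversion (illustrative, not Flask's actual code):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

# An aware datetime two hours east of UTC.
dt = datetime(2020, 1, 1, 12, 0, tzinfo=timezone(timedelta(hours=2)))

# Converting (rather than dropping tzinfo) preserves the instant.
as_utc = dt.astimezone(timezone.utc)
http_date = format_datetime(as_utc, usegmt=True)
```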
sherlock-project
sherlock
144d43830f663808c5fbca75b797350060acf7dd
https://github.com/sherlock-project/sherlock/issues/559
Results files saved to specific folder
Having just installed Sherlock I was surprised to see the results files are just jumbled in with everything else instead of being in their own Results folder. Having a separate folder would keep things cleaner especially as you use it more and the number of files increases.
null
null
null
{'base_commit': '144d43830f663808c5fbca75b797350060acf7dd', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 65)': {'mod': [65]}}, 'status': 'modified'}, {'path': 'sherlock/sherlock.py', 'Loc': {"(None, 'main', 462)": {'mod': [478]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\n+\nDoc" }
{ "code": [ "sherlock/sherlock.py" ], "doc": [ "README.md" ], "test": [], "config": [], "asset": [] }
null
sherlock-project
sherlock
7ec56895a37ada47edd6573249c553379254d14a
https://github.com/sherlock-project/sherlock/issues/1911
question
How do you search for usernames? New to this.
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE. ###################################################################### --> ## Checklist <!-- Put x into all boxes (like this [x]) once you hav...
null
null
null
{'base_commit': '7ec56895a37ada47edd6573249c553379254d14a', 'files': [{'path': 'README.md', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc" }
{ "code": [], "doc": [ "README.md" ], "test": [], "config": [], "asset": [] }
null
sherlock-project
sherlock
65ce128b7fd8c8915c40495191d9c136f1d2322b
https://github.com/sherlock-project/sherlock/issues/1297
bug
name 'requests' is not defined
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Put x into all boxes (like this [x]) once you ha...
null
null
null
{'base_commit': '65ce128b7fd8c8915c40495191d9c136f1d2322b', 'files': [{'path': 'sherlock/sites.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "sherlock/sites.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
sherlock-project
sherlock
f63e17066dc4881ee5a164aed60b6e8f1e9ab129
https://github.com/sherlock-project/sherlock/issues/462
environment
File "sherlock.py", line 24, in <module> from requests_futures.sessions import FuturesSession ModuleNotFoundError: No module named 'requests_futures'
File "sherlock.py", line 24, in <module> from requests_futures.sessions import FuturesSession ModuleNotFoundError: No module named 'requests_futures'
null
null
null
{'base_commit': 'f63e17066dc4881ee5a164aed60b6e8f1e9ab129', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
sherlock-project
sherlock
6c6faff416896a41701aa3e24e5b5a584bd5cb44
https://github.com/sherlock-project/sherlock/issues/236
question
No module named 'torrequest'
Hi, similar problem to module "requests_futures" Traceback (most recent call last): File "sherlock.py", line 25, in <module> from torrequest import TorRequest ModuleNotFoundError: No module named 'torrequest'
null
null
null
{'base_commit': '6c6faff416896a41701aa3e24e5b5a584bd5cb44', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\nDependency declaration" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
keras-team
keras
c0d95fd6c2cd8ffc0738819825c3065e3c89977c
https://github.com/keras-team/keras/issues/4954
TimeDistributed Wrapper not working with LSTM/GRU
Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on [StackOverflow](http://stackoverflow.com/questions/tagged/keras) or [join the Keras Slack channel](https://keras-slack-autojoin.herokuapp.com/) and ask there instead o...
null
null
null
{'base_commit': 'c0d95fd6c2cd8ffc0738819825c3065e3c89977c', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "Version" ] }
null
keras-team
keras
980a6be629610ee58c1eae5a65a4724ce650597b
https://github.com/keras-team/keras/issues/16234
type:support
Compiling model in callback causes TypeError
**System information**. - Have I written custom code (as opposed to using a stock example script provided in Keras): yes - TensorFlow version (use command below): 2.8.0 (2.4 too) - Python version: 3.7 **Describe the problem**. In a fine-tuning case I would like to do transfer-learning phase first (with fine-tu...
null
null
null
{'base_commit': '980a6be629610ee58c1eae5a65a4724ce650597b', 'files': [{'path': 'keras/engine/training.py', 'Loc': {"('Model', 'make_train_function', 998)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ "keras/engine/training.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
90f441a6a0ed4334cac53760289061818a68b7c1
https://github.com/keras-team/keras/issues/2893
Is the cifar10_cnn.py example actually performing data augmentation?
When `datagen.fit(X_train)` is called in the [`cifar10_cnn.py` example](https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py#L103), shouldn't it be (when `data_augmentation=True`): ``` python datagen.fit(X_train, augment=True) ``` as the [default value for `augment` is `False`](https://github.com/fch...
null
null
null
{'base_commit': '90f441a6a0ed4334cac53760289061818a68b7c1', 'files': [{'path': 'keras/preprocessing/image.py', 'Loc': {"('ImageDataGenerator', 'fit', 404)": {'mod': [419, 420, 421, 422, 423]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "keras/preprocessing/image.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
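The record above (keras #2893) asks whether `datagen.fit(X_train)` should have been `datagen.fit(X_train, augment=True)`, since `augment` defaults to `False`. The default-argument pitfall it raises can be shown in pure Python; the stub class below only mirrors the names from the issue and is not the real Keras API.

```python
# Illustration of the point in the issue: when `fit` defaults to
# augment=False, a caller that omits the keyword silently skips the
# augmentation-statistics branch.
class ImageDataGenerator:
    def __init__(self):
        self.augment_used = None

    def fit(self, x, augment=False, rounds=1):
        # Record which branch the caller actually selected.
        self.augment_used = augment
        return self

gen = ImageDataGenerator()
gen.fit([1, 2, 3])                 # default: augmentation path skipped
print(gen.augment_used)            # → False
gen.fit([1, 2, 3], augment=True)   # explicit opt-in
print(gen.augment_used)            # → True
```

This is why keyword defaults that gate behavior are worth passing explicitly in example code.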
keras-team
keras
654404c2ed8db47a5361a3bff9126a16507c9c4c
https://github.com/keras-team/keras/issues/1787
What happened to WordContextProduct?
``` python In [1]: import keras In [2]: keras.__version__ Out[2]: '0.3.2' In [3]: from keras.layers.embeddings import WordContextProduct Using Theano backend. /usr/local/lib/python3.5/site-packages/theano/tensor/signal/downsample.py:5: UserWarning: downsample module has been moved to the pool module. warnings.warn(...
null
null
null
{}
[ { "Loc": [ 54 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
8778add0d66aed64a8970c34576bf5800bc19170
https://github.com/keras-team/keras/issues/3335
Masking the output of a conv layer
Hi, I am trying to apply a given mask in the output of a conv layer. The simplest form of my problem can be seen in the img ![image](https://cloud.githubusercontent.com/assets/810340/17194147/e8728ad4-542c-11e6-8c60-b2949c288cec.png) The mask should be considered as an input when training/predicting. I have already t...
null
null
null
{'base_commit': '8778add0d66aed64a8970c34576bf5800bc19170', 'files': [{'path': 'keras/src/models/model.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "keras/src/models/model.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
ed07472bc5fc985982db355135d37059a1f887a9
https://github.com/keras-team/keras/issues/13101
type:support
model.fit : AttributeError: 'Model' object has no attribute '_compile_metrics'
**System information** - Have I written custom code (as opposed to using example directory): Yes - OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Mint 19.3 - TensorFlow backend (yes / no): yes - TensorFlow version: 2.0.0b1 - Keras version: 2.2.4-tf - Python version: 3.6 - CUDA/cuDNN version: / ...
null
null
null
{'base_commit': 'ed07472bc5fc985982db355135d37059a1f887a9', 'files': [{'path': 'keras/engine/training.py', 'Loc': {"('Model', 'compile', 40)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ "keras/engine/training.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
a3d160b9467c99cbb27f9aa0382c759f45c8ee66
https://github.com/keras-team/keras/issues/9741
Improve Keras Documentation User Experience for Long Code Snippets By Removing The Need For Horizontal Slide Bars
**Category**: documentation user-experience **Comment**: modify highlight.js <code></code> to wrap long documentation code snippets **Why**: eliminates the need for a user to manually click and slide a horizontal slider just to get a quick sense of what available parameters and their default values are **Context**...
null
null
null
{'base_commit': 'a3d160b9467c99cbb27f9aa0382c759f45c8ee66', 'files': [{'path': 'docs/autogen.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "docs/autogen.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
7a12fd0f8597760cf8e1238a9b021e247693517b
https://github.com/keras-team/keras/issues/2372
problem of save/load model
HI, Thanks for making such a wonderful tool! I'm using Keras 1.0. I want to save and load the model both the arch and the parameters. So I use the method in FAQ. Here is the code. ``` def save_model(self, model, options): json_string = model.to_json() open(options['file_arch'], 'w').write(json_string) m...
null
null
null
{'base_commit': '7a12fd0f8597760cf8e1238a9b021e247693517b', 'files': [{'path': 'keras/src/trainers/trainer.py', 'Loc': {"('Trainer', 'compile', 40)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "keras/src/trainers/trainer.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
284ef7b495a61238dccc6149996c4cb88fef1c5a
https://github.com/keras-team/keras/issues/933
Same model but graph gives bad performance
Hello, I am learning to use Graph as it seems more powerful so I implemented one of my previous model which uses Sequential. Here is the model using sequential (number of dimension set in random): ``` def build_generation_embedding_model(self, dim): print "Build model ..." input_model = Sequential() inpu...
null
null
null
{}
[ { "Loc": [ 36 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
c2b844ba2fe8d0d597da9ef6a9af3b20d18d0bec
https://github.com/keras-team/keras/issues/7603
Loss Increases after some epochs
I have tried different convolutional neural network codes and I am running into a similar issue. The network starts out training well and decreases the loss but after sometime the loss just starts to increase. I have shown an example below: Epoch 15/800 1562/1562 [==============================] - 49s - loss: 0.9050...
null
null
null
{'base_commit': 'c2b844ba2fe8d0d597da9ef6a9af3b20d18d0bec', 'files': [{'path': 'examples/cifar10_cnn.py', 'Loc': {'(None, None, None)': {'mod': [65]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "examples/cifar10_cnn.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
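The record above (keras #7603, training loss decreasing then rising after some epochs) describes a pattern commonly mitigated with early stopping. A framework-agnostic sketch of patience-based early stopping follows; the function name, signature, and sample losses are all illustrative, not the Keras `EarlyStopping` callback.

```python
# Patience-based early stopping: stop once the monitored loss has failed
# to improve for `patience` consecutive epochs.

def early_stop_index(val_losses, patience=3):
    """Return the epoch index at which training should stop: the first
    epoch after which the loss failed to improve for `patience`
    consecutive epochs, or the last index if that never happens."""
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch
    return len(val_losses) - 1

losses = [0.90, 0.75, 0.62, 0.58, 0.61, 0.66, 0.70, 0.81]
print(early_stop_index(losses))  # → 6 (loss rose 3 epochs in a row)
```

Monitoring a held-out validation loss rather than the training loss is the usual choice here, since a rising training loss mid-run can also indicate a learning-rate or data issue.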
keras-team
keras
530eff62e5463e00d73e72c51cc830b9ac3a14ab
https://github.com/keras-team/keras/issues/3997
Using keras for Distributed training raise RuntimeError("Graph is finalized and cannot be modified.")
I'm using keras for distributed training with following code: ``` python #!/usr/bin/env python # -*- coding:utf-8 -*- # Created by Enigma on 2016/9/26 import numpy as np import tensorflow as tf # Define Hyperparameters FLAGS = tf.app.flags.FLAGS # For missions tf.app.flags.DEFINE_string("ps_hosts", "", ...
null
null
null
{'base_commit': '530eff62e5463e00d73e72c51cc830b9ac3a14ab', 'files': [{'path': 'keras/engine/training.py', 'Loc': {"('Model', '_make_train_function', 685)": {'mod': []}, "('Model', '_make_test_function', 705)": {'mod': []}, "('Model', '_make_predict_function', 720)": {'mod': []}}, 'status': 'modified'}, {'path': 'keras...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "keras/engine/training.py", "keras/backend/tensorflow_backend.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
keras-team
keras
c2e36f369b411ad1d0a40ac096fe35f73b9dffd3
https://github.com/keras-team/keras/issues/4810
Parent module '' not loaded, cannot perform relative import with vgg16.py
just set up my ubuntu and have the python 3.5 installed, together with Keras...the following occurs: RESTART: /usr/local/lib/python3.5/dist-packages/keras/applications/vgg16.py Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/keras/applications/vgg16.py", line 14, in <module> ...
null
null
null
{'base_commit': 'c2e36f369b411ad1d0a40ac096fe35f73b9dffd3', 'files': [{'path': 'keras/applications/vgg16.py', 'Loc': {'(None, None, None)': {'mod': [14, 15, 16, 17, 18, 19, 20, 21]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "keras/applications/vgg16.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null