Dataset schema (one line per column: name, type, and observed range / distinct-value count):

url: string (length 59)
repository_url: string (1 distinct value)
labels_url: string (length 73)
comments_url: string (length 68)
events_url: string (length 66)
html_url: string (length 49)
id: int64 (782M – 1.89B)
node_id: string (length 18–24)
number: int64 (4.97k – 9.98k)
title: string (length 2–306)
user: dict
labels: list
state: string (2 distinct values)
locked: bool (1 distinct value)
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: string (4 distinct values)
active_lock_reason: null
body: string (length 0–63.6k)
reactions: dict
timeline_url: string (length 68)
performed_via_github_app: null
state_reason: string (3 distinct values)
draft: bool (0 distinct values)
pull_request: dict
is_pull_request: bool (1 distinct value)
https://api.github.com/repos/kubeflow/pipelines/issues/8688
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8688/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8688/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8688/events
https://github.com/kubeflow/pipelines/issues/8688
1,537,094,194
I_kwDOB-71UM5bnjIy
8,688
[chore] Investigate/implement automation of KFP SDK release note generation
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2023-01-17T22:05:13"
"2023-08-27T07:42:10"
null
MEMBER
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? KFP SDK uses [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/). Can we automate the creation of release notes to (1) ensure that they comprehensively document all SDK changes and (2) eliminate the need to ask OSS contributors to update the release notes by hand in [RELEASE.md](https://github.com/kubeflow/pipelines/blob/master/sdk/RELEASE.md). --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8688/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8684
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8684/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8684/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8684/events
https://github.com/kubeflow/pipelines/issues/8684
1,536,845,087
I_kwDOB-71UM5bmmUf
8,684
[feature] Multi-layer lineage graph with MLMD
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2023-01-17T18:29:00"
"2023-08-28T16:21:40"
null
COLLABORATOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? With MLMD upgrade to 1.4.0, we are able to list multi-layer lineage graph using the new MLMD API call. As a result, we can introduce this feature on KFP UI. <!-- Provide a description of this feature and the user experience. --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8684/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8683
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8683/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8683/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8683/events
https://github.com/kubeflow/pipelines/issues/8683
1,536,381,224
I_kwDOB-71UM5bk1Eo
8,683
[bug] 1.6 manifests leads to deployment issue with kubeflow pipelines
{ "login": "MatthewRalston", "id": 4308024, "node_id": "MDQ6VXNlcjQzMDgwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4308024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatthewRalston", "html_url": "https://github.com/MatthewRalston", "followers_url": "https://api.github.com/users/MatthewRalston/followers", "following_url": "https://api.github.com/users/MatthewRalston/following{/other_user}", "gists_url": "https://api.github.com/users/MatthewRalston/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatthewRalston/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatthewRalston/subscriptions", "organizations_url": "https://api.github.com/users/MatthewRalston/orgs", "repos_url": "https://api.github.com/users/MatthewRalston/repos", "events_url": "https://api.github.com/users/MatthewRalston/events{/privacy}", "received_events_url": "https://api.github.com/users/MatthewRalston/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @MatthewRalston , \r\n\r\nFirst of all `dsl.ContainerOp` has been deprecated as a user interface, you should have seen a deprecation warning on it.\r\nHowever, I think the real culprit for the error is `str(fastq1).rstrip(\".gz\")`, this is code not containerized. You can either wrap the code in a lightweight component, or handle the logic within `gunzip` component.", "Hi thanks for the quick response.\n\nI needed to use this interface to set the container requirements (CPU, memory request/limit) via kfp. Is there an alternative way to specify container limits via the v1.8 kfp on a 1.6 manifests Kubeflow instance?", "Also I can confirm that removal of the str(fastq) etc component is not the culprit of the bash builtin \"exec\" not being found in the Alpine Linux container. \n\nNew syntax is\n\n```python\ngunzip1 = gunzip (infile=fastq1).set_cpu_request('1') etc.\n```", "Hey @chensun, when you get a chance, would you mind revisiting this issue please? Again, I'm using a vanilla kubernetes instance (minikube; compliant) and deploying kubeflow pipelines via the v1.6 branch manifests from kubeflow/manifests. Again *I recognize that the ContainerOp invocation pattern has been deprecated and should migrate to new syntax*. But that has nothing to do with Kubeflow's exec builtin. 
Please, let me know if you have any questions.\r\n\r\n![Screenshot from 2023-01-12 14-28-26](https://user-images.githubusercontent.com/4308024/214159750-e462bba3-cb30-4170-87d7-2a7513c38b75.png)\r\n\r\n\r\n\r\n```python\r\n#!/bin/env python\r\n\r\nimport os\r\nimport sys\r\nimport argparse\r\n\r\nimport kfp.dsl as dsl\r\nimport kfp.components as comp\r\nimport kfp\r\n\r\n\r\n\r\nimport logging\r\nglobal logger\r\nlogger = None\r\n\r\ndef get_root_logger(level):\r\n levels=[logging.WARNING, logging.INFO, logging.DEBUG]\r\n if level < 0 or level > 2:\r\n raise TypeError(\"{0}.get_root_logger expects a verbosity between 0-2\".format(__file__))\r\n logging.basicConfig(level=levels[level], format=\"%(levelname)s: %(asctime)s %(funcName)s L%(lineno)s| %(message)s\", datefmt=\"%Y/%m/%d %I:%M:%S\")\r\n root_logger = logging.getLogger()\r\n return root_logger\r\n\r\n\r\n\r\ncomponents_dir = os.path.join(os.path.dirname(__file__), \"components\")\r\n\r\n#gunzip = comp.load_component_from_file(os.path.join(components_dir, \"gunzip.yaml\"))\r\n\r\n\r\nis_unzip_needed = comp.load_component_from_file(os.path.join(components_dir, \"is_unzip_needed.yaml\"))\r\n\r\ndef gunzip(infile:str):\r\n \"\"\" Infile like 'path/to/example.txt.gz', outfile like 'path/to/example.txt' \"\"\"\r\n\r\n return dsl.ContainerOp(\r\n name='gunzip',\r\n image='debian:latest',\r\n command=[\r\n '/bin/zcat $0 > input.txt ',\r\n '|| ',\r\n 'mv $0 input.txt'\r\n ],\r\n file_outputs={\r\n 'output': 'input.txt'\r\n }\r\n )\r\n\r\n\r\n@dsl.pipeline(\r\n name='kubeflow-barebone-demo',\r\n description='kubeflow demo with minimal setup'\r\n)\r\ndef rnaseq_pipeline(fastq1:str):\r\n # Step 1: training component\r\n\r\n gunzip1 = gunzip(infile=fastq1).set_cpu_request('1').set_cpu_limit('1').set_memory_request('512Mi').set_memory_limit('512Mi')\r\n\r\n\r\n\r\nif __name__ == \"__main__\":\r\n logger = get_root_logger(2)\r\n \r\n kfp.compiler.Compiler().compile(rnaseq_pipeline, 'pipeline.yaml')\r\n\r\n```", "\r\n> Also 
I can confirm that removal of the str(fastq) etc component is not the culprit of the bash builtin \"exec\" not being found in the Alpine Linux container.\r\n> \r\n> New syntax is\r\n> \r\n> ```python\r\n> gunzip1 = gunzip (infile=fastq1).set_cpu_request('1') etc.\r\n> ```\r\n\r\nSorry, I spoke too earlier without looking carefully at the error message--though the `str(fastq)` usage was also an error that would not give you the expected value.\r\n\r\nYou're right that the culprit is the about the exec path, to solve that you should add `sh -c` in front of your command. But then another issue is you're not really passing the value as you thought--`$0` doesn't magically map to `infile`. \r\n\r\nI would still suggest dropping `dsl.ContainerOp` so that you know what a proper component interface and data-passing story should look like.\r\n\r\nTry the following \r\n```python\r\nfrom kfp import components\r\nfrom kfp import dsl\r\n\r\nfoo = components.load_component_from_text(\"\"\"\r\n name: foo\r\n inputs:\r\n - {name: msg, type: String}\r\n implementation:\r\n container:\r\n image: debian:latest\r\n command: \r\n - sh\r\n - -c\r\n - /bin/echo $0\r\n args:\r\n - {inputValue: msg}\r\n\"\"\")\r\n\r\n@dsl.pipeline\r\ndef bar():\r\n foo('hello')\r\n```\r\n\r\n", "@chensun Done. New version uploaded. Fails to write outputs, which is distinctly a different error. Should I follow up on gitter, Slack, or just start a new issue?\r\n\r\n![image](https://user-images.githubusercontent.com/4308024/214195224-97f16a56-0f6d-4034-bc56-b656da247507.png)\r\n", "As a follow up, is this something that could be handle with a more graceful error checking process, to see if MinIO is reachable? or that the /tmp directory is available in the container before data is written? I'm pretty sure MinIO is part of my problem, because there's a 90% chance that it's a issue with deployment, and not bash syntax.", "Thanks for checking this issue out last week @chensun . 
I'm fairly confident the issue isn't my bash syntax related to the $0? I've since reworked the script as follows:\r\n\r\n* pipeline.py\r\n\r\n```python\r\n...\r\ngunzip = comp.load_component_from_file(\"gunzip.yaml\")\r\n\r\n@dsl.pipeline(name=\"kubeflow-barebone-demo\", description=\"\")\r\ndef pipeline(fastq1: str):\r\n gunzip1 = gunzip(infile=fastq1).set_cpu_request(...) # as above\r\n```\r\n\r\n* gunzip.yaml\r\n```yaml\r\nname: gunzip\r\ndescription: Gunzips an Input file to an Ouput filepath\r\ninputs:\r\n- {name: infile, type: String, description: 'Data for gzip decompression'}\r\noutputs:\r\n- {name: Output, type: String, description: 'Output decompressed plaintext'}\r\nimplementation:\r\n container:\r\n image: alpine:latest\r\n # command is a list of strings (command-line arguments). \r\n # The YAML language has two syntaxes for lists and you can use either of them. \r\n # Here we use the \"flow syntax\" - comma-separated strings inside square brackets.\r\n command: [\r\n sh,\r\n -c,\r\n zcat,\r\n {inputPath: infile},\r\n '>',\r\n {outputPath: Output}\r\n ]\r\n\r\n```", "Thank you to Chen Sun and Benjamin Tan for encouraging the `sh -c` prelude and the v1.8 syntax, respectively. Closing the original issue thanks to chensun , opened a new issue linked above." ]
"2023-01-17T13:27:32"
"2023-02-02T23:56:41"
"2023-02-02T23:56:41"
NONE
null
### Environment <!-- Please fill in those that seem relevant. --> Minikube v1.28.0 kubernetes 1.22.2 Kustomize v3.2.0 kubectl v1.25.5 Manifests v1.6.1 * How do you deploy Kubeflow Pipelines (KFP)? Locally, on minikube, via manifests v1.6.1. I've also experimented with Argoflow on my OS, but it's not relevant here. <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> kfp 1.8.17 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 ### Steps to reproduce Create pipeline.py from the following. ```python #!/bin/env python import os import sys import argparse import kfp.dsl as dsl import kfp.components as comp import kfp import logging global logger logger = None def get_root_logger(level): levels=[logging.WARNING, logging.INFO, logging.DEBUG] if level < 0 or level > 2: raise TypeError("{0}.get_root_logger expects a verbosity between 0-2".format(__file__)) logging.basicConfig(level=levels[level], format="%(levelname)s: %(asctime)s %(funcName)s L%(lineno)s| %(message)s", datefmt="%Y/%m/%d %I:%M:%S") root_logger = logging.getLogger() return root_logger components_dir = os.path.join(os.path.dirname(__file__), "components") #gunzip = comp.load_component_from_file(os.path.join(components_dir, "gunzip.yaml")) #is_unzip_needed = comp.load_component_from_file(os.path.join(components_dir, "is_unzip_needed.yaml")) def gunzip(infile:str, outfile:str): """ Infile like 'path/to/example.txt.gz', outfile like 'path/to/example.txt' """ return dsl.ContainerOp( name='gunzip', image='bitnami/minideb:latest', command=[ '/bin/zcat $0 > input.txt ', '|| ', 'mv $0 input.txt' 
], file_outputs={ 'output': 'input.txt' } ) @dsl.pipeline( name='kubeflow-barebone-demo', description='kubeflow demo with minimal setup' ) def pipeline(fastq1:str): # Step 1: training component gunzip1 = gunzip(infile=fastq1, outfile = str(fastq1).rstrip(".gz")).set_cpu_request('1').set_cpu_limit('1').set_memory_request('512Mi').set_memory_limit('512Mi') if __name__ == "__main__": logger = get_root_logger(2) kfp.compiler.Compiler().compile(rnaseq_pipeline, 'pipeline.yaml') ``` <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ### Expected result I expect that my 'gunzip' container should run, given that the executable listed can be verified to be installed in that container (alpine:latest or bitnami/minideb:latest) However, I am observing the following in the dashboard logs. ```bash failed to find name in PATH: exec: "echo $0": executable file not found in $PATH ``` <!-- What should the correct behavior be? --> The correct behavior should be to locate or understand the shell builtin 'exec'. ### Materials and reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> ![Screenshot from 2023-01-12 14-28-26](https://user-images.githubusercontent.com/4308024/212910879-c8bc76d2-2e03-4360-8cc0-778e4e2390eb.png) ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8683/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8682
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8682/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8682/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8682/events
https://github.com/kubeflow/pipelines/issues/8682
1,535,345,011
I_kwDOB-71UM5bg4Fz
8,682
[sdk] KFP v2 pipeline with error
{ "login": "TrevorM15", "id": 30201274, "node_id": "MDQ6VXNlcjMwMjAxMjc0", "avatar_url": "https://avatars.githubusercontent.com/u/30201274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TrevorM15", "html_url": "https://github.com/TrevorM15", "followers_url": "https://api.github.com/users/TrevorM15/followers", "following_url": "https://api.github.com/users/TrevorM15/following{/other_user}", "gists_url": "https://api.github.com/users/TrevorM15/gists{/gist_id}", "starred_url": "https://api.github.com/users/TrevorM15/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TrevorM15/subscriptions", "organizations_url": "https://api.github.com/users/TrevorM15/orgs", "repos_url": "https://api.github.com/users/TrevorM15/repos", "events_url": "https://api.github.com/users/TrevorM15/events{/privacy}", "received_events_url": "https://api.github.com/users/TrevorM15/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "The Error comes out because you failed to access to kfp client.\r\nTry like this\r\n```\r\nimport requests\r\n\r\nUSERNAME = \"user@example.com\"\r\nPASSWORD = \"12341234\" \r\nNAMESPACE = \"kubeflow-user-example-com\"\r\nHOST = \"http://127.0.0.1:8080\" # your istio-ingressgateway pod ip:8080\r\n\r\nsession = requests.Session()\r\nresponse = session.get(HOST)\r\n\r\nheaders = {\r\n \"Content-Type\": \"application/x-www-form-urlencoded\",\r\n}\r\n\r\ndata = {\"login\": \"user@example.com\", \"password\": \"12341234\"}\r\nsession.post(response.url, headers=headers, data=data)\r\nsession_cookie = session.cookies.get_dict()[\"authservice_session\"]\r\n\r\nclient = kfp.Client(\r\n host=f\"{HOST}/pipeline\",\r\n namespace=f\"{NAMESPACE}\",\r\n cookies=f\"authservice_session={session_cookie}\",\r\n)\r\n```", "We have our namespace set up to not require any arguments in the Client method. My pipeline worked in kfp v1.8, but I needed a newer version of the SDK to get a newer Kubernetes client version. ", "oic did you solve the error by upgrading SDK version?", "No, upgrading to kfp v2.0.0 is what's causing the issue. It appears from the diffs between 1.8.18 and 2.0.0b10 that all the changes were in the SDK, not in the kfp backend, so not sure what's causing these issues.", "well for me it works fine\r\n<img width=\"862\" alt=\"image\" src=\"https://user-images.githubusercontent.com/63439911/212796975-5914fb18-09c7-4a74-89aa-ed6e7c3c1efd.png\">\r\n\r\n", "@kimkihoon0515 @TrevorM15, if you `pip install kfp==2.0.0b9 kfp-pipeline-spec==0.1.16` (and pin these versions in your SDK environment) is the error resolved?", "@connor-mccarthy yes. There's no error on kfp 2.0 beta.", "@connor-mccarthy please ignore @kimkihoon0515, he does not speak for me. 
I did `pip install kfp==2.0.0b9 kfp-pipeline-spec==0.1.16`, but still got the error `Reason: Bad Request\r\nHTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Fri, 20 Jan 2023 15:46:05 GMT', 'content-length': '548', 'x-envoy-upstream-service-time': '1', 'server': 'envoy'})\r\nHTTP response body: {\"error\":\"Validate create run request failed.: InvalidInputError: Invalid IR spec format.: invalid character 'c' looking for beginning of value\",\"code\":3,\"message\":\"Validate create run request failed.: InvalidInputError: Invalid IR spec format.: invalid character 'c' looking for beginning of value\",\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"Invalid IR spec format.\",\"error_details\":\"Validate create run request failed.: InvalidInputError: Invalid IR spec format.: invalid character 'c' looking for beginning of value\"}]}` after compiling and attempting to create a run from the pipeline. ", "@TrevorM15, I was able to reproduce this with `kfp>=2.0.0b10`, but it resolved when I downgraded to `kfp==2.0.0b9`. This is because `kfp==2.0.0b10` (this release was yanked today, replaced by b11) began writing the `isOptional` field with https://github.com/kubeflow/pipelines/pull/8612 and https://github.com/kubeflow/pipelines/pull/8623 ([release notes](https://github.com/kubeflow/pipelines/blob/master/sdk/RELEASE.md#bug-fixes-and-other-changes-2)).\r\n\r\nCan you double check that this error is not resolved when you use 2.0.0b9? This `isOptional` field should not be present in your compiled YAML. If you still have the error, can you please run `pip freeze` and share the output here?", "Edit: This is the solution for #8734, not this bug.\r\n\r\nIn the short term, this is resolved by downgrading to `kfp==2.0.0b9` (and recompiling your pipeline with this version). 
In the long term, this will be resolved for `kfp>=2.0.0b10` by the upcoming KFP BE beta release.\r\n\r\ncc @chensun @gkcalat @Linchin ", "@connor-mccarthy the issue still persists for me when downgrading to 2.0.0b9", "@TrevorM15, can you confirm that the `isOptional` field is not present in your compiled YAML? Can you run `pip freeze and share the output`?", "`pip freeze` output:\r\n```python\r\nabsl-py==0.11.0\r\nadal==1.2.7\r\nanyio==3.1.0\r\nargon2-cffi==20.1.0\r\nasync-generator==1.10\r\nattrs==21.2.0\r\navro==1.11.0\r\nazure-common==1.1.28\r\nazure-storage-blob==2.1.0\r\nazure-storage-common==2.1.0\r\nBabel==2.9.1\r\nbackcall==0.2.0\r\nbleach==3.3.0\r\nblis==0.7.6\r\nbokeh==2.3.2\r\nbrotlipy==0.7.0\r\ncachetools==4.2.4\r\ncatalogue==2.0.6\r\ncertifi==2021.5.30\r\ncffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1613413861439/work\r\nchardet @ file:///home/conda/feedstock_root/build_artifacts/chardet_1610093490430/work\r\nclick==7.1.2\r\ncloudevents==1.2.0\r\ncloudpickle==2.2.1\r\ncolorama==0.4.4\r\nconda==4.10.1\r\nconda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1618231394280/work\r\nconfigparser==5.2.0\r\ncryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography_1616851476134/work\r\ncycler==0.11.0\r\ncymem==2.0.6\r\ndecorator==5.0.9\r\ndefusedxml==0.7.1\r\nDeprecated==1.2.13\r\ndeprecation==2.1.0\r\ndill==0.3.4\r\ndocstring-parser==0.13\r\nentrypoints==0.3\r\nfastai==2.4\r\nfastcore==1.3.29\r\nfastprogress==1.0.2\r\nfire==0.4.0\r\ngitdb==4.0.9\r\nGitPython==3.1.27\r\ngoogle-api-core==2.7.1\r\ngoogle-api-python-client==1.12.10\r\ngoogle-auth==1.35.0\r\ngoogle-auth-httplib2==0.1.0\r\ngoogle-cloud-core==2.3.2\r\ngoogle-cloud-storage==2.7.0\r\ngoogle-crc32c==1.3.0\r\ngoogle-resumable-media==2.3.2\r\ngoogleapis-common-protos==1.55.0\r\nhttplib2==0.20.4\r\nidna @ 
file:///home/conda/feedstock_root/build_artifacts/idna_1593328102638/work\r\nimageio==2.16.1\r\nipykernel==5.5.5\r\nipympl==0.7.0\r\nipython==7.24.1\r\nipython-genutils==0.2.0\r\nipywidgets==7.6.3\r\njedi==0.18.0\r\nJinja2==3.0.1\r\njoblib==1.1.0\r\njson5==0.9.5\r\njsonschema==3.2.0\r\njupyter-client==6.1.12\r\njupyter-core==4.7.1\r\njupyter-server==1.8.0\r\njupyter-server-mathjax==0.2.5\r\njupyterlab==3.0.16\r\njupyterlab-git==0.30.1\r\njupyterlab-pygments==0.1.2\r\njupyterlab-server==2.6.0\r\njupyterlab-widgets==1.0.2\r\nkfp==2.0.0b9\r\nkfp-pipeline-spec==0.1.16\r\nkfp-server-api==2.0.0a6\r\nkfserving==0.5.1\r\nkiwisolver==1.3.2\r\nkubernetes==12.0.1\r\nlangcodes==3.3.0\r\nMarkupSafe==2.0.1\r\nmatplotlib==3.4.2\r\nmatplotlib-inline==0.1.2\r\nminio==6.0.2\r\nmistune==0.8.4\r\nmurmurhash==1.0.6\r\nnbclassic==0.3.1\r\nnbclient==0.5.3\r\nnbconvert==6.0.7\r\nnbdime==3.1.1\r\nnbformat==5.1.3\r\nnest-asyncio==1.5.1\r\nnetworkx==2.7.1\r\nnotebook==6.4.0\r\nnumpy==1.20.3\r\noauthlib==3.2.0\r\npackaging==20.9\r\npandas==1.2.4\r\npandocfilters==1.4.3\r\nparso==0.8.2\r\npathy==0.6.1\r\npexpect==4.8.0\r\npickleshare==0.7.5\r\nPillow==9.0.1\r\npreshed==3.0.6\r\nprometheus-client==0.11.0\r\nprompt-toolkit==3.0.18\r\nprotobuf==3.19.4\r\nptyprocess==0.7.0\r\npyasn1==0.4.8\r\npyasn1-modules==0.2.8\r\npycosat @ file:///home/conda/feedstock_root/build_artifacts/pycosat_1610094800877/work\r\npycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1593275161868/work\r\npydantic==1.8.2\r\nPygments==2.9.0\r\nPyJWT==2.3.0\r\npyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1608055815057/work\r\npyparsing==2.4.7\r\npyrsistent==0.17.3\r\nPySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1610291447907/work\r\npython-dateutil==2.8.1\r\npytz==2021.1\r\nPyWavelets==1.2.0\r\nPyYAML==5.4.1\r\npyzmq==22.1.0\r\nrequests @ 
file:///home/conda/feedstock_root/build_artifacts/requests_1608156231189/work\r\nrequests-oauthlib==1.3.1\r\nrequests-toolbelt==0.9.1\r\nrsa==4.8\r\nruamel-yaml-conda @ file:///home/conda/feedstock_root/build_artifacts/ruamel_yaml_1611943339799/work\r\nscikit-image==0.18.1\r\nscikit-learn==0.24.2\r\nscipy==1.7.0\r\nseaborn==0.11.1\r\nSend2Trash==1.5.0\r\nsix @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work\r\nsmart-open==5.2.1\r\nsmmap==5.0.0\r\nsniffio==1.2.0\r\nspacy==3.2.3\r\nspacy-legacy==3.0.9\r\nspacy-loggers==1.0.1\r\nsrsly==2.4.2\r\nstrip-hints==0.1.10\r\ntable-logger==0.3.6\r\ntabulate==0.8.9\r\ntermcolor==1.1.0\r\nterminado==0.10.0\r\ntestpath==0.5.0\r\nthinc==8.0.13\r\nthreadpoolctl==3.1.0\r\ntifffile==2022.2.9\r\ntorch==1.8.1+cpu\r\ntorchaudio==0.8.1\r\ntorchvision==0.9.1+cpu\r\ntornado==6.1\r\ntqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1621890532941/work\r\ntraitlets==5.0.5\r\ntyper==0.4.0\r\ntyping-extensions==3.10.0.0\r\nuritemplate==3.0.1\r\nurllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1622056799390/work\r\nwasabi==0.9.0\r\nwcwidth==0.2.5\r\nwebencodings==0.5.1\r\nwebsocket-client==1.0.1\r\nwidgetsnbextension==3.5.2\r\nwrapt==1.13.3\r\nxgboost==1.4.2\r\n```\r\n\r\nThe yaml:\r\n```yaml\r\n# PIPELINE DEFINITION\r\n# Name: addition-pipeline\r\n# Inputs:\r\n# a: int [Default: 1.0]\r\n# b: int [Default: 2.0]\r\n# c: int [Default: 10.0]\r\ncomponents:\r\n comp-addition-component:\r\n executorLabel: exec-addition-component\r\n inputDefinitions:\r\n parameters:\r\n num1:\r\n parameterType: NUMBER_INTEGER\r\n num2:\r\n parameterType: NUMBER_INTEGER\r\n outputDefinitions:\r\n parameters:\r\n Output:\r\n parameterType: NUMBER_INTEGER\r\n comp-addition-component-2:\r\n executorLabel: exec-addition-component-2\r\n inputDefinitions:\r\n parameters:\r\n num1:\r\n parameterType: NUMBER_INTEGER\r\n num2:\r\n parameterType: NUMBER_INTEGER\r\n outputDefinitions:\r\n parameters:\r\n Output:\r\n 
parameterType: NUMBER_INTEGER\r\ndeploymentSpec:\r\n executors:\r\n exec-addition-component:\r\n container:\r\n args:\r\n - --executor_input\r\n - '{{$}}'\r\n - --function_to_execute\r\n - addition_component\r\n command:\r\n - sh\r\n - -c\r\n - \"\\nif ! [ -x \\\"$(command -v pip)\\\" ]; then\\n python3 -m ensurepip ||\\\r\n \\ python3 -m ensurepip --user || apt-get install python3-pip\\nfi\\n\\nPIP_DISABLE_PIP_VERSION_CHECK=1\\\r\n \\ python3 -m pip install --quiet --no-warn-script-location 'kfp==2.0.0-beta.9'\\\r\n \\ && \\\"$0\\\" \\\"$@\\\"\\n\"\r\n - sh\r\n - -ec\r\n - 'program_path=$(mktemp -d)\r\n\r\n printf \"%s\" \"$0\" > \"$program_path/ephemeral_component.py\"\r\n\r\n python3 -m kfp.components.executor_main --component_module_path \"$program_path/ephemeral_component.py\" \"$@\"\r\n\r\n '\r\n - \"\\nimport kfp\\nfrom kfp import dsl\\nfrom kfp.dsl import *\\nfrom typing import\\\r\n \\ *\\n\\ndef addition_component(num1: int, num2: int) -> int:\\n return num1\\\r\n \\ + num2\\n\\n\"\r\n image: python:3.7\r\n exec-addition-component-2:\r\n container:\r\n args:\r\n - --executor_input\r\n - '{{$}}'\r\n - --function_to_execute\r\n - addition_component\r\n command:\r\n - sh\r\n - -c\r\n - \"\\nif ! 
[ -x \\\"$(command -v pip)\\\" ]; then\\n python3 -m ensurepip ||\\\r\n \\ python3 -m ensurepip --user || apt-get install python3-pip\\nfi\\n\\nPIP_DISABLE_PIP_VERSION_CHECK=1\\\r\n \\ python3 -m pip install --quiet --no-warn-script-location 'kfp==2.0.0-beta.9'\\\r\n \\ && \\\"$0\\\" \\\"$@\\\"\\n\"\r\n - sh\r\n - -ec\r\n - 'program_path=$(mktemp -d)\r\n\r\n printf \"%s\" \"$0\" > \"$program_path/ephemeral_component.py\"\r\n\r\n python3 -m kfp.components.executor_main --component_module_path \"$program_path/ephemeral_component.py\" \"$@\"\r\n\r\n '\r\n - \"\\nimport kfp\\nfrom kfp import dsl\\nfrom kfp.dsl import *\\nfrom typing import\\\r\n \\ *\\n\\ndef addition_component(num1: int, num2: int) -> int:\\n return num1\\\r\n \\ + num2\\n\\n\"\r\n image: python:3.7\r\npipelineInfo:\r\n name: addition-pipeline\r\nroot:\r\n dag:\r\n tasks:\r\n addition-component:\r\n cachingOptions:\r\n enableCache: true\r\n componentRef:\r\n name: comp-addition-component\r\n inputs:\r\n parameters:\r\n num1:\r\n componentInputParameter: a\r\n num2:\r\n componentInputParameter: b\r\n taskInfo:\r\n name: addition-component\r\n addition-component-2:\r\n cachingOptions:\r\n enableCache: true\r\n componentRef:\r\n name: comp-addition-component-2\r\n dependentTasks:\r\n - addition-component\r\n inputs:\r\n parameters:\r\n num1:\r\n taskOutputParameter:\r\n outputParameterKey: Output\r\n producerTask: addition-component\r\n num2:\r\n componentInputParameter: c\r\n taskInfo:\r\n name: addition-component-2\r\n inputDefinitions:\r\n parameters:\r\n a:\r\n defaultValue: 1.0\r\n parameterType: NUMBER_INTEGER\r\n b:\r\n defaultValue: 2.0\r\n parameterType: NUMBER_INTEGER\r\n c:\r\n defaultValue: 10.0\r\n parameterType: NUMBER_INTEGER\r\nschemaVersion: 2.1.0\r\nsdkVersion: kfp-2.0.0-beta.9\r\n```", "I am unable to reproduce this on later versions of the KFP BE. 1.5 is a fairly old version of the KFP BE. 
I suggest you upgrade the BE and see if the issue continues.\r\n\r\nYou can find some [upgrade instructions here](https://googlecloudplatform.github.io/kubeflow-gke-docs/docs/pipelines/upgrade/) (specific to Google Cloud).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2023-01-16T18:32:28"
"2023-08-28T15:35:22"
"2023-08-28T15:35:21"
NONE
null
### Environment * KFP version: Kubeflow 1.5 * KFP SDK version: KFP v2.0.0b10 * All dependencies version: kfp 2.0.0b10 kfp-pipeline-spec 0.1.17 kfp-server-api 2.0.0a6 ### Steps to reproduce Tried upgrading from KFP 1.8 to 2.0 for higher version kubernetes sdk support. Tried copying the example from [here](https://www.kubeflow.org/docs/components/pipelines/v2/compile-a-pipeline/) ```python3 import kfp from kfp import compiler from kfp import dsl @dsl.component def addition_component(num1: int, num2: int) -> int: return num1 + num2 @dsl.pipeline(name='addition-pipeline') def my_pipeline(a: int=1, b: int=2, c: int = 10): add_task_1 = addition_component(num1=a, num2=b) add_task_2 = addition_component(num1=add_task_1.output, num2=c) cmplr = compiler.Compiler() cmplr.compile(my_pipeline, package_path='my_pipeline.yaml') client=kfp.Client() client.create_run_from_pipeline_package('my_pipeline.yaml',arguments={"a":1,"b":2}) ``` When I give it arguments I get the error `ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Mon, 16 Jan 2023 18:20:24 GMT', 'content-length': '190', 'x-envoy-upstream-service-time': '0', 'server': 'envoy'}) HTTP response body: {"error":"json: cannot unmarshal number into Go value of type map[string]json.RawMessage","code":3,"message":"json: cannot unmarshal number into Go value of type map[string]json.RawMessage"}` And without arguments I get the error `ApiException: (400) Reason: Bad Request HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Mon, 16 Jan 2023 18:30:38 GMT', 'content-length': '548', 'x-envoy-upstream-service-time': '1', 'server': 'envoy'}) HTTP response body: {"error":"Validate create run request failed.: InvalidInputError: Invalid IR spec format.: invalid character 'c' looking for beginning of value","code":3,"message":"Validate create run request failed.: InvalidInputError: Invalid IR spec format.: invalid character 'c' looking 
for beginning of value","details":[{"@type":"type.googleapis.com/api.Error","error_message":"Invalid IR spec format.","error_details":"Validate create run request failed.: InvalidInputError: Invalid IR spec format.: invalid character 'c' looking for beginning of value"}]}` ### Expected result Pipeline compiles and runs. Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8682/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8682/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8680
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8680/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8680/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8680/events
https://github.com/kubeflow/pipelines/issues/8680
1,534,951,739
I_kwDOB-71UM5bfYE7
8,680
What is Difference of pipelines between V1 and V2???
{ "login": "kimkihoon0515", "id": 63439911, "node_id": "MDQ6VXNlcjYzNDM5OTEx", "avatar_url": "https://avatars.githubusercontent.com/u/63439911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kimkihoon0515", "html_url": "https://github.com/kimkihoon0515", "followers_url": "https://api.github.com/users/kimkihoon0515/followers", "following_url": "https://api.github.com/users/kimkihoon0515/following{/other_user}", "gists_url": "https://api.github.com/users/kimkihoon0515/gists{/gist_id}", "starred_url": "https://api.github.com/users/kimkihoon0515/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kimkihoon0515/subscriptions", "organizations_url": "https://api.github.com/users/kimkihoon0515/orgs", "repos_url": "https://api.github.com/users/kimkihoon0515/repos", "events_url": "https://api.github.com/users/kimkihoon0515/events{/privacy}", "received_events_url": "https://api.github.com/users/kimkihoon0515/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @kimkihoon0515 \r\n\r\n> Does V2 still supports argo workflow or not??\r\n\r\nKFP v2 still supports argo workflow.\r\n\r\n> And for now V1 compiler compiles pipeline into yaml file but in V2 the compiler only supports json file format.\r\n> Are you guys planning to support a function that supports to be compiled as a yaml file??\r\n\r\nIn master branch, KFP SDK (currently at 2.0-beta) support compiling to YAML as well. However, it's worth noting that the file format itself isn't what matter, but the content format. In KFP v1, the YAML is of Argo Workflow CRD, while in KFP v2, the JSON/YAML is of PipelineSpec--and this leads to your third question.\r\n\r\n> Plus what is pipeline Spec?\r\n\r\nPipeline spec is the spec definition described via [this proto message](https://github.com/kubeflow/pipelines/blob/d1f1ee9f2bbd09df7ea6ab51b21f07ba5f86c871/api/v2alpha1/pipeline_spec.proto#L50). It's meant to be an Intermediate Representation (IR) of pipeline. KFP v2 creates this abstraction layer on top of Argo, the new IR spec is platform-agnostic. The goal is to support running the same pipeline across multiple platforms/engines: KFP, KFP on tekton, Vertex Pipelines, etc.\r\n\r\nThis [\"Understanding KFP v2\"](https://docs.google.com/presentation/d/1HzMwtI2QN67xQp2lSxmuXhitEsukLB7mvZx4KAPub3A/edit#slide=id.gb4a3fac3a8_7_1911) deck we presented at Kubeflow Pipelines community meeting may help you have a better view of KFP v2. You need to join [kubeflow-discuss](https://groups.google.com/g/kubeflow-discuss) Google group to gain access to the docs we've shared with the community. \r\n\r\n\r\n", "@chensun I saw the roadmap on YT. So you guys gonna stop releasing major version of v1? ", "> @chensun I saw the roadmap on YT. So you guys gonna stop releasing major version of v1?\r\n\r\nThat is correct. We will continue patching v1 (1.8) for security and vulnerability fixes, but there's no planned feature work/release for v1. 
", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2023-01-16T13:34:09"
"2023-08-28T07:42:20"
null
NONE
null
Does V2 still supports argo workflow or not?? And for now V1 compiler compiles pipeline into yaml file but in V2 the compiler only supports json file format. Are you guys planning to support a function that supports to be compiled as a yaml file?? Plus what is pipeline Spec?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8680/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8676
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8676/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8676/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8676/events
https://github.com/kubeflow/pipelines/issues/8676
1,533,811,531
I_kwDOB-71UM5bbBtL
8,676
[feature] conda-forge feedstock github repository for google-cloud-pipeline-components
{ "login": "tarrade", "id": 12021701, "node_id": "MDQ6VXNlcjEyMDIxNzAx", "avatar_url": "https://avatars.githubusercontent.com/u/12021701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tarrade", "html_url": "https://github.com/tarrade", "followers_url": "https://api.github.com/users/tarrade/followers", "following_url": "https://api.github.com/users/tarrade/following{/other_user}", "gists_url": "https://api.github.com/users/tarrade/gists{/gist_id}", "starred_url": "https://api.github.com/users/tarrade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tarrade/subscriptions", "organizations_url": "https://api.github.com/users/tarrade/orgs", "repos_url": "https://api.github.com/users/tarrade/repos", "events_url": "https://api.github.com/users/tarrade/events{/privacy}", "received_events_url": "https://api.github.com/users/tarrade/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false }
[ { "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false } ]
null
[ "Happy to help or to give more context on this FR", "just some context, there are already 62 google-cloud python packages available in conda-forge channel\r\n\r\nhttps://github.com/orgs/conda-forge/repositories?q=google-cloud-&type=all&language=&sort=\r\n**62 results for all repositories matching google-cloud- sorted by last updated**\r\n\r\nSome example of feedstock related to kfp:\r\nhttps://github.com/conda-forge/kfp-pipeline-spec-feedstock\r\nhttps://github.com/conda-forge/kfp-feedstock\r\n\r\nSome example from gcp python sdk:\r\nhttps://github.com/conda-forge/google-cloud-logging-feedstock\r\nhttps://github.com/conda-forge/google-cloud-aiplatform-feedstock\r\n\r\nstep by step:\r\nhttps://conda-forge.org/docs/maintainer/adding_pkgs.html\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2023-01-15T13:17:57"
"2023-08-28T07:42:22"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> /area components ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> Would like to have google-cloud-pipeline-components python package in the conda-forge channel. There is no conda forge feedstock available yet [link](https://github.com/orgs/conda-forge/repositories?q=google-cloud-&type=all&language=&sort=) ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> Mamba is a much better python package manager than what we can get with pip/pypi which works for example with conda-forge channel. Most of the open source libraries and gpc python lib (cloud storage, logging, hyperparameter, veretx ai ..) exist in the conda-forge channel like the kfp python sdk [link](https://anaconda.org/conda-forge/kfp) with the following conda-forge feedstock [link](https://github.com/conda-forge/kfp-feedstock). When a conda-forge feedstock is created for google-cloud-pipeline-components, I will be happy to do PR for new version as I did for kfp sdk but this is is a google/gcp package I guess the conda-forge feedstock repository need to be created by google/gcp as well asthe approval of the PR to avoid malicious actor to use it no ? Happy to help but never create a conda-forge feedstock from scratch ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> The package exist in pypi [link](https://pypi.org/project/google-cloud-pipeline-components/) --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8676/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8675
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8675/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8675/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8675/events
https://github.com/kubeflow/pipelines/issues/8675
1,533,198,027
I_kwDOB-71UM5bYr7L
8,675
[bug] <KFP v2 Metrics Doesn't show up>
{ "login": "kimkihoon0515", "id": 63439911, "node_id": "MDQ6VXNlcjYzNDM5OTEx", "avatar_url": "https://avatars.githubusercontent.com/u/63439911?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kimkihoon0515", "html_url": "https://github.com/kimkihoon0515", "followers_url": "https://api.github.com/users/kimkihoon0515/followers", "following_url": "https://api.github.com/users/kimkihoon0515/following{/other_user}", "gists_url": "https://api.github.com/users/kimkihoon0515/gists{/gist_id}", "starred_url": "https://api.github.com/users/kimkihoon0515/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kimkihoon0515/subscriptions", "organizations_url": "https://api.github.com/users/kimkihoon0515/orgs", "repos_url": "https://api.github.com/users/kimkihoon0515/repos", "events_url": "https://api.github.com/users/kimkihoon0515/events{/privacy}", "received_events_url": "https://api.github.com/users/kimkihoon0515/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @kimkihoon0515 , can you deploy Kubeflow Pipelines 2.0.0-alpha versions like this one? https://github.com/kubeflow/pipelines/releases/tag/2.0.0-alpha.6.\r\n\r\nAlso, please use KFP SDK version which is 2.0.0-beta like this one: https://github.com/kubeflow/pipelines/releases/tag/2.0.0b10", "@zijianjoy Hey Thx for your help. I tried kfp version 2.0.0b2 and i got the screen that i wanted :) " ]
"2023-01-14T09:42:34"
"2023-01-19T23:40:33"
"2023-01-19T23:40:33"
NONE
null
### Environment <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? I deployed Kubeflow using this command ``` minikube start --driver=docker --kubernetes-version=1.23.14 --memory=12g --cpus=4 ``` * KFP version: 1.8.18 ### Steps to reproduce I'm trying to get confusion matrix metric output by using kfp v2 like this ``` @component( packages_to_install=['scikit-learn'], base_image='python:3.9' ) def iris_sgdclassifier(test_samples_fraction: float, metrics: Output[ClassificationMetrics]): from sklearn import datasets, model_selection from sklearn.linear_model import SGDClassifier from sklearn.metrics import confusion_matrix iris_dataset = datasets.load_iris() train_x, test_x, train_y, test_y = model_selection.train_test_split( iris_dataset['data'], iris_dataset['target'], test_size=test_samples_fraction) classifier = SGDClassifier() classifier.fit(train_x, train_y) predictions = model_selection.cross_val_predict(classifier, train_x, train_y, cv=3) metrics.log_confusion_matrix( ['Setosa', 'Versicolour', 'Virginica'], confusion_matrix(train_y, predictions).tolist() # .tolist() to convert np array to list. ) ``` and it shows me a confusion matrix output in "Visualizations" tab. ![image](https://user-images.githubusercontent.com/63439911/212465593-fb041328-48eb-4e74-bb1b-41dadc8c3522.png) But nothing shows up in "Input/Output" tab. ![image](https://user-images.githubusercontent.com/63439911/212465617-d79e1f0a-37df-4300-ba1e-b40a000c998a.png) ### Expected result Actually I'm trying to compare confusion matrixes of several runs which is possible on kfp v2. 
<img width="749" alt="image" src="https://user-images.githubusercontent.com/63439911/212465776-cbeb5807-4294-4dac-91ba-6c0c52ec750e.png"> the log says ``` launcher.go:560] Local filepath "/minio/mlpipeline/v2/artifacts/pipeline/v2-metrics/58c160ea-4ef3-49c8-a08a-0e1c0347a031/iris-sgdclassifier/metrics" does not exist ``` I think that log comes out because theres no output file in the component. But idk how to do it. ### Materials and reference https://www.kubeflow.org/docs/components/pipelines/v1/sdk-v2/run-comparison/ --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8675/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8672
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8672/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8672/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8672/events
https://github.com/kubeflow/pipelines/issues/8672
1,532,822,877
I_kwDOB-71UM5bXQVd
8,672
[sdk] Notebook Executor pipeline component does not cause the pipeline to fail when a notebook execution fails
{ "login": "nturner-maritz", "id": 105460605, "node_id": "U_kgDOBkkzfQ", "avatar_url": "https://avatars.githubusercontent.com/u/105460605?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nturner-maritz", "html_url": "https://github.com/nturner-maritz", "followers_url": "https://api.github.com/users/nturner-maritz/followers", "following_url": "https://api.github.com/users/nturner-maritz/following{/other_user}", "gists_url": "https://api.github.com/users/nturner-maritz/gists{/gist_id}", "starred_url": "https://api.github.com/users/nturner-maritz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nturner-maritz/subscriptions", "organizations_url": "https://api.github.com/users/nturner-maritz/orgs", "repos_url": "https://api.github.com/users/nturner-maritz/repos", "events_url": "https://api.github.com/users/nturner-maritz/events{/privacy}", "received_events_url": "https://api.github.com/users/nturner-maritz/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false }
[ { "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2023-01-13T19:48:16"
"2023-08-28T07:42:25"
null
NONE
null
### Environment * KFP version: <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP SDK version: 1.8.12 * `google-cloud-pipeline-components 1.0.4` <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> `kfp 1.8.12` `kfp-pipeline-spec 0.1.14` `kfp-server-api 1.8.1` ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> I encountered this error when using the Kubeflow Pipelines SDK with Vertex AI Pipelines. Run a pipeline that uses the notebook executor Google Cloud pipeline component. When the notebook execution raises an error, the pipeline continues to run until stopped manually. This is due to a bug in executor.py that results in an infinite loop if the notebook job fails with an error. ### Expected result <!-- What should the correct behavior be? --> When the notebook throws an error, the pipeline should stop running and return an error message. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> components/google-cloud/google_cloud_pipeline_components/container/experimental/notebooks/executor.py https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/container/experimental/notebooks/executor.py#L220 By changing line 220 to the following: `if job_state.JobState(custom_job_state) in _STATES_COMPLETED:` I was able to get the expected behavior. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8672/timeline
null
null
null
null
false
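The fix proposed in issue #8672 above is to break the notebook-executor polling loop on any terminal job state, not only on success. A minimal sketch of that idea follows; `JobState`, `poll_until_done`, and `get_state` are hypothetical stand-ins, not the actual google-cloud-pipeline-components API. Only the membership test mirrors the suggested `in _STATES_COMPLETED` change.

```python
import enum

class JobState(enum.Enum):
    RUNNING = "RUNNING"
    SUCCEEDED = "SUCCEEDED"
    FAILED = "FAILED"
    CANCELLED = "CANCELLED"

# Terminal states: reaching ANY of these must end the polling loop,
# mirroring the proposed `in _STATES_COMPLETED` membership check.
_STATES_COMPLETED = {JobState.SUCCEEDED, JobState.FAILED, JobState.CANCELLED}

def poll_until_done(get_state, max_polls=100):
    """Polls get_state() until a terminal state is reached.

    Raises RuntimeError on FAILED/CANCELLED instead of spinning forever,
    which is the bug the issue describes (only success exited the loop).
    """
    for _ in range(max_polls):
        state = JobState(get_state())
        if state in _STATES_COMPLETED:
            if state is not JobState.SUCCEEDED:
                raise RuntimeError(f"job ended in state {state.value}")
            return state
    raise TimeoutError("job did not reach a terminal state")
```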
https://api.github.com/repos/kubeflow/pipelines/issues/8666
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8666/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8666/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8666/events
https://github.com/kubeflow/pipelines/issues/8666
1,528,626,831
I_kwDOB-71UM5bHP6P
8,666
[sdk] Accept higher PyYAML versions
{ "login": "mai-nakagawa", "id": 2883424, "node_id": "MDQ6VXNlcjI4ODM0MjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2883424?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mai-nakagawa", "html_url": "https://github.com/mai-nakagawa", "followers_url": "https://api.github.com/users/mai-nakagawa/followers", "following_url": "https://api.github.com/users/mai-nakagawa/following{/other_user}", "gists_url": "https://api.github.com/users/mai-nakagawa/gists{/gist_id}", "starred_url": "https://api.github.com/users/mai-nakagawa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mai-nakagawa/subscriptions", "organizations_url": "https://api.github.com/users/mai-nakagawa/orgs", "repos_url": "https://api.github.com/users/mai-nakagawa/repos", "events_url": "https://api.github.com/users/mai-nakagawa/events{/privacy}", "received_events_url": "https://api.github.com/users/mai-nakagawa/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-01-11T08:28:12"
"2023-01-17T18:46:44"
"2023-01-17T18:46:44"
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> The SDK currently requires `PyYAML>=5.3,<6`. I would like the SDK to accept higher versions, `>=6` specifically. ### What is the use case or pain point? I would like to use the kfp SDK from Google Cloud Composer. However, the latest Google Cloud Composer comes with `PyYAML==6.0` as per the [Cloud Composer version list](https://cloud.google.com/composer/docs/concepts/versioning/composer-versions). ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> There are workarounds, but I see no problem with the SDK accepting newer PyYAML versions. I double-checked the [change list of PyYAML 6.0](https://github.com/yaml/pyyaml/blob/6.0/CHANGES#L7-L18). I believe those changes have no impact on the kfp SDK. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8666/timeline
null
completed
null
null
false
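Issue #8666 above concerns the SDK's `PyYAML>=5.3,<6` pin rejecting PyYAML 6.0. As a small illustration of how such a range constraint evaluates, here is a stdlib-only sketch; real tooling uses the `packaging` library's `SpecifierSet`, and the `parse`/`satisfies` helpers below are purely illustrative (they handle only plain dotted versions).

```python
def parse(version):
    """Turns a plain dotted version string like '5.3.1' into an int tuple."""
    return tuple(int(p) for p in version.split("."))

def satisfies(version, lower="5.3", upper="6"):
    """True when lower <= version < upper, i.e. the old kfp pin's range.

    Tuple comparison gives the right ordering: (6, 0) sorts after (6,),
    so '6.0' correctly falls outside '<6'.
    """
    return parse(lower) <= parse(version) < parse(upper)
```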
https://api.github.com/repos/kubeflow/pipelines/issues/8661
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8661/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8661/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8661/events
https://github.com/kubeflow/pipelines/issues/8661
1,526,100,273
I_kwDOB-71UM5a9nEx
8,661
[sdk] `wait_for_run_completion` occasionally hangs forever in `RunServiceApi.get_run` call
{ "login": "jli", "id": 133466, "node_id": "MDQ6VXNlcjEzMzQ2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/133466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jli", "html_url": "https://github.com/jli", "followers_url": "https://api.github.com/users/jli/followers", "following_url": "https://api.github.com/users/jli/following{/other_user}", "gists_url": "https://api.github.com/users/jli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jli/subscriptions", "organizations_url": "https://api.github.com/users/jli/orgs", "repos_url": "https://api.github.com/users/jli/repos", "events_url": "https://api.github.com/users/jli/events{/privacy}", "received_events_url": "https://api.github.com/users/jli/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "@jli Thanks for reaching out. Can you provide how you observe the run status (GetRun API call)? Also, have you experienced this issue from UI as well? Can you provide a few screenshot of your logging?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2023-01-09T18:27:50"
"2023-08-28T07:42:27"
null
CONTRIBUTOR
null
### Environment * KFP version: 1.7.1 * KFP SDK version: 1.8.12 * All dependencies version: ``` kfp 1.8.12 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.3 ``` (I know we're on an older version of the kfp sdk, but it doesn't look like the relevant code has changed much since then.) ### Steps to reproduce - Launch a long-running run on KFP - Create a few processes that call `kfp.Client.wait_for_run_completion` with INFO logging enabled[1] - After ~10-15 minutes, I observe that ~2 out of 3 wait calls are hanging after the RunServiceApi.get_run call. The hanging continues even after the run completes. [1] enable INFO logging to see when `wait_for_run_completion` makes a new call ```python logging.basicConfig(format="%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S", force=True) logging.getLogger().setLevel(logging.INFO) ``` I also added more logging calls before and after the `get_run` API call here: https://github.com/kubeflow/pipelines/blob/f4588f31128d268974bdffb16e487ea170706011/sdk/python/kfp/client/client.py#L1330 ### Expected result `wait_for_run_completion` should continue polling until the run completes. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8661/timeline
null
null
null
null
false
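Issue #8661 above reports `wait_for_run_completion` hanging inside a single `get_run` call. A common client-side mitigation is to bound each poll with its own timeout so one stuck request cannot stall the whole wait. The sketch below assumes a generic `get_run` callable and `is_done` predicate; these are hypothetical, not the kfp client API. Note the trade-off: a genuinely hung call leaves its daemon thread alive until the process exits.

```python
import threading
import time

def call_with_timeout(fn, timeout):
    """Runs fn() in a daemon thread; returns (ok, value).

    ok is False when fn did not finish within `timeout` seconds, in which
    case the thread is abandoned (it stays alive but cannot block exit).
    """
    result = {}
    def target():
        result["value"] = fn()
    t = threading.Thread(target=target, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        return False, None
    return True, result.get("value")

def wait_for_completion(get_run, is_done, poll_interval=0.1,
                        per_call_timeout=5.0, overall_timeout=30.0):
    """Polls get_run() until is_done(run), bounding every individual poll."""
    deadline = time.monotonic() + overall_timeout
    while time.monotonic() < deadline:
        ok, run = call_with_timeout(get_run, per_call_timeout)
        if ok and is_done(run):
            return run
        time.sleep(poll_interval)
    raise TimeoutError("run did not complete before overall_timeout")
```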
https://api.github.com/repos/kubeflow/pipelines/issues/8660
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8660/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8660/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8660/events
https://github.com/kubeflow/pipelines/issues/8660
1,525,743,316
I_kwDOB-71UM5a8P7U
8,660
Contributing to pipeline CICD for ppc64le arch
{ "login": "pranavpandit1", "id": 99784213, "node_id": "U_kgDOBfKWFQ", "avatar_url": "https://avatars.githubusercontent.com/u/99784213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pranavpandit1", "html_url": "https://github.com/pranavpandit1", "followers_url": "https://api.github.com/users/pranavpandit1/followers", "following_url": "https://api.github.com/users/pranavpandit1/following{/other_user}", "gists_url": "https://api.github.com/users/pranavpandit1/gists{/gist_id}", "starred_url": "https://api.github.com/users/pranavpandit1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pranavpandit1/subscriptions", "organizations_url": "https://api.github.com/users/pranavpandit1/orgs", "repos_url": "https://api.github.com/users/pranavpandit1/repos", "events_url": "https://api.github.com/users/pranavpandit1/events{/privacy}", "received_events_url": "https://api.github.com/users/pranavpandit1/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, { "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false }, { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "@pranavpandit1 We suggest you to pull the changes from our website and make your own CICD pipeline, because our current testing infra in Google Cloud (GKE), and we don't have the access to IBM Power. Feel free to reopen this if you have any idea or suggestion.", "@jlyaoyuli \r\n we can add support for ppc64le using qemu. Checkout this YAML:\r\n[yaml file](https://github.com/GoogleCloudPlatform/solutions-build-multi-architecture-images-tutorial/blob/master/terraform/cloud-build/build-docker-image-trigger.yaml)\r\nbut I haven't tested this approach as I don't have access to google cloud platform. How can I get access to GCP?\r\n\r\nIf possible, I suggest that we should migrate to github action. this will make easier for anyone to contribute to CICD as github action doesn't require any subscription and it is free to use for individual account.\r\n\r\nwould like to know, why are we not using GITHUB action?\r\n", "@jlyaoyuli: I second Adil's thought about using GitHub actions as it will be easier to contribute.\r\nAny specific reasons for not using GitHub Actions? ", "/reopen\r\n\r\nI now added a design proposal to the tracking issue https://github.com/kubeflow/kubeflow/issues/6684. We want to target most of the KFP components next and already got good community feedback on this (see upvotes in https://github.com/kubeflow/kubeflow/issues/6684).\r\n\r\n@zijianjoy, can you review the design document? Thanks!\r\nhttps://docs.google.com/document/d/1nGUvLonahoLogfWCHsoUOZl-s77YtPEiCjWBVlZjJHo/edit?usp=sharing\r\n\r\n", "@lehrig: Reopened this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/8660#issuecomment-1434791318):\n\n>/reopen\r\n>\r\n>I now added a design proposal to the tracking issue https://github.com/kubeflow/kubeflow/issues/6684. We want to target most of the KFP components next and already got good community feedback on this (see upvotes in https://github.com/kubeflow/kubeflow/issues/6684).\r\n>\r\n>@zijianjoy, can you review the design document? Thanks!\r\n>https://docs.google.com/document/d/1nGUvLonahoLogfWCHsoUOZl-s77YtPEiCjWBVlZjJHo/edit?usp=sharing\r\n>\r\n>\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>", "@zijianjoy, @gkcalat - any news on this?", "@chensun - can you help driving this a bit quicker? my team is ready & would like to proceed...\r\n\r\nI think we need to get these 2 questions answered clearly:\r\n1. can & should we enable GitHub Action builds for KFP projects to improve consistency with the general Kubeflow ecosystem & make things like multi-arch easier possible?\r\n2. if we don't migrate to GitHub Actions, how can we help making multi-arch builds (x86 + ppc64le) in the current CI possible?\r\n\r\nNote that we have lots of support driving this forward (see https://github.com/kubeflow/kubeflow/issues/6684) but are struggling to get these questions answered clearly. \r\n\r\nThe design document & a detailed discussion is here: https://docs.google.com/document/d/1nGUvLonahoLogfWCHsoUOZl-s77YtPEiCjWBVlZjJHo/edit?usp=sharing\r\n \r\nThank you!", "We recently discussed this topic in the KubeFlow Pipelines community call:\r\n- notes are here: http://bit.ly/kfp-meeting-notes\r\n- recording is here: https://drive.google.com/file/d/1NNOHR205gaZgJBVk9qXBzLea6ygfGxaC/view\r\n\r\nWe were pointed to the postsubmits file to trigger builds: https://github.com/GoogleCloudPlatform/oss-test-infra/tree/master/prow/prowjobs/kubeflow/pipelines\r\n\r\nAccordingly, we are now assessing the integration ppc64le workers into this CI (prow). Here's the issue for that: https://github.com/GoogleCloudPlatform/oss-test-infra/issues/1972 " ]
"2023-01-09T14:50:34"
"2023-06-16T13:13:00"
null
NONE
null
Hello, my team is working on a request to port Kubeflow to IBM Power (ppc64le). As part of this effort, we want to contribute to CI/CD for the pipelines repository for the ppc64le architecture. Kindly provide details on how to access the existing CI builds, files, logs, etc. For more details, see the [Kubeflow Umbrella issue](https://github.com/kubeflow/kubeflow/issues/6684).
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8660/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8660/timeline
null
reopened
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8654
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8654/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8654/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8654/events
https://github.com/kubeflow/pipelines/issues/8654
1,520,574,513
I_kwDOB-71UM5aoiAx
8,654
[feature] Add filter for finished_at runs
{ "login": "knjk04", "id": 11173328, "node_id": "MDQ6VXNlcjExMTczMzI4", "avatar_url": "https://avatars.githubusercontent.com/u/11173328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/knjk04", "html_url": "https://github.com/knjk04", "followers_url": "https://api.github.com/users/knjk04/followers", "following_url": "https://api.github.com/users/knjk04/following{/other_user}", "gists_url": "https://api.github.com/users/knjk04/gists{/gist_id}", "starred_url": "https://api.github.com/users/knjk04/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/knjk04/subscriptions", "organizations_url": "https://api.github.com/users/knjk04/orgs", "repos_url": "https://api.github.com/users/knjk04/repos", "events_url": "https://api.github.com/users/knjk04/events{/privacy}", "received_events_url": "https://api.github.com/users/knjk04/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "Linchin", "id": 12806577, "node_id": "MDQ6VXNlcjEyODA2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/12806577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Linchin", "html_url": "https://github.com/Linchin", "followers_url": "https://api.github.com/users/Linchin/followers", "following_url": "https://api.github.com/users/Linchin/following{/other_user}", "gists_url": "https://api.github.com/users/Linchin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Linchin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Linchin/subscriptions", "organizations_url": "https://api.github.com/users/Linchin/orgs", "repos_url": "https://api.github.com/users/Linchin/repos", "events_url": "https://api.github.com/users/Linchin/events{/privacy}", "received_events_url": "https://api.github.com/users/Linchin/received_events", "type": "User", "site_admin": false }
[ { "login": "Linchin", "id": 12806577, "node_id": "MDQ6VXNlcjEyODA2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/12806577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Linchin", "html_url": "https://github.com/Linchin", "followers_url": "https://api.github.com/users/Linchin/followers", "following_url": "https://api.github.com/users/Linchin/following{/other_user}", "gists_url": "https://api.github.com/users/Linchin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Linchin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Linchin/subscriptions", "organizations_url": "https://api.github.com/users/Linchin/orgs", "repos_url": "https://api.github.com/users/Linchin/repos", "events_url": "https://api.github.com/users/Linchin/events{/privacy}", "received_events_url": "https://api.github.com/users/Linchin/received_events", "type": "User", "site_admin": false } ]
null
[]
"2023-01-05T11:20:36"
"2023-01-15T19:01:00"
"2023-01-15T19:01:00"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area sdk ### What feature would you like to see? Protocol buffer filtering for the `finished_at` field in runs. ### What is the use case or pain point? We are looking to archive runs that finished over 90 days ago. While we have a workaround (see below), if we have a filter (like you do currently for the `created_at` field), it would mean that our request payloads are substantially smaller. ### Is there a workaround currently? Our current approach is to get all non-archived runs (using a predicate filter), then iterate over the result to compare the `datetime` of the `finished_at` field to the `datetime` from 90 days ago. We then archive those runs that completed 90 days ago. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8654/timeline
null
completed
null
null
false
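The workaround described in issue #8654 above (fetch the non-archived runs, then compare each run's `finished_at` to a 90-day cutoff on the client) can be sketched like this. `Run` here is a hypothetical stand-in for the run objects the kfp client returns, not the actual API type.

```python
import datetime
from dataclasses import dataclass

@dataclass
class Run:
    """Minimal stand-in for a KFP run record (naive UTC timestamps)."""
    id: str
    finished_at: datetime.datetime

def runs_to_archive(runs, now=None, max_age_days=90):
    """Returns runs whose finished_at is more than max_age_days in the past."""
    now = now or datetime.datetime.utcnow()
    cutoff = now - datetime.timedelta(days=max_age_days)
    return [r for r in runs if r.finished_at < cutoff]
```

A server-side `finished_at` predicate filter (as exists for `created_at`) would make this list call return only the stale runs, shrinking the payload; the client-side pass above is the interim approach the issue describes.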
https://api.github.com/repos/kubeflow/pipelines/issues/8650
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8650/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8650/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8650/events
https://github.com/kubeflow/pipelines/issues/8650
1,518,810,483
I_kwDOB-71UM5ahzVz
8,650
[sdk] No longer possible to compile components that use PipelineTaskFinalStatus
{ "login": "suned", "id": 1228354, "node_id": "MDQ6VXNlcjEyMjgzNTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1228354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suned", "html_url": "https://github.com/suned", "followers_url": "https://api.github.com/users/suned/followers", "following_url": "https://api.github.com/users/suned/following{/other_user}", "gists_url": "https://api.github.com/users/suned/gists{/gist_id}", "starred_url": "https://api.github.com/users/suned/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suned/subscriptions", "organizations_url": "https://api.github.com/users/suned/orgs", "repos_url": "https://api.github.com/users/suned/repos", "events_url": "https://api.github.com/users/suned/events{/privacy}", "received_events_url": "https://api.github.com/users/suned/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Closed due to replication of #8649" ]
"2023-01-04T11:29:26"
"2023-01-05T23:51:24"
"2023-01-05T23:51:24"
CONTRIBUTOR
null
### Environment * KFP version: N/A <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP SDK version: 2.0.0b10 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ``` kfp 2.0.0b10 kfp-pipeline-spec 0.1.17 kfp-server-api 2.0.0a6 ``` ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> In version 1.x.x it was possible to compile components that use `PipelineTaskFinalStatus`, e.g. using: ```python from kfp.v2.dsl import component, PipelineTaskFinalStatus, pipeline, ExitHandler from kfp.components import load_component_from_file @component(output_component_file='example.yaml') def example(status: PipelineTaskFinalStatus): print(status) loaded_component = load_component_from_file('example.yaml') @pipeline def example_pipeline(): with ExitHandler(loaded_component()): pass ``` In version 2.0.0bx we expected this to still be possible, e.g. with ```python from kfp.dsl import component, PipelineTaskFinalStatus, pipeline, ExitHandler from kfp.compiler import Compiler from kfp.components import load_component_from_file @component def example(status: PipelineTaskFinalStatus): print(status) Compiler().compile(example, 'example.yaml') loaded_component = load_component_from_file('example.yaml') @pipeline def example_pipeline(): with ExitHandler(loaded_component()): pass ``` However, this is no longer possible for what seems like a variety of reasons. First of all, the SDK explicitly forbids it in https://github.com/kubeflow/pipelines/blob/fdf3ee7b68b2293d08e14c389b0dab9a57854e2a/sdk/python/kfp/compiler/pipeline_spec_builder.py#L333-L338 I tried commenting out the above check, which does enable us to compile valid component yaml without further changes: ```yaml # PIPELINE DEFINITION # Name: example # Inputs: # status: dict components: comp-example: executorLabel: exec-example inputDefinitions: parameters: status: isOptional: true parameterType: STRUCT deploymentSpec: executors: exec-example: container: args: - --executor_input - '{{$}}' - --function_to_execute - example command: - sh - -c - "\nif ! [ -x \"$(command -v pip)\" ]; then\n python3 -m ensurepip ||\ \ python3 -m ensurepip --user || apt-get install python3-pip\nfi\n\nPIP_DISABLE_PIP_VERSION_CHECK=1\ \ python3 -m pip install --quiet --no-warn-script-location 'kfp==2.0.0-beta.10'\ \ && \"$0\" \"$@\"\n" - sh - -ec - 'program_path=$(mktemp -d) printf "%s" "$0" > "$program_path/ephemeral_component.py" python3 -m kfp.components.executor_main --component_module_path "$program_path/ephemeral_component.py" "$@" ' - "\nimport kfp\nfrom kfp import dsl\nfrom kfp.dsl import *\nfrom typing import\ \ *\n\ndef example(status: PipelineTaskFinalStatus):\n print(status)\n\ \n" image: python:3.7 pipelineInfo: name: example root: dag: tasks: example: cachingOptions: enableCache: true componentRef: name: comp-example inputs: parameters: status: componentInputParameter: status taskInfo: name: example inputDefinitions: parameters: status: isOptional: true parameterType: STRUCT schemaVersion: 2.1.0 sdkVersion: kfp-2.0.0-beta.10 ``` Unfortunately, the input type of the `status` parameter is converted to `STRUCT` in the process, so loading in the component yaml results in a component that fails at runtime because the `status` argument is not provided by the backend as expected. ### Expected result <!-- What should the correct behavior be? --> It should be possible to compile components that use `PipelineTaskFinalStatus` for feature parity with version 1.x.x. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8650/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8649
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8649/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8649/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8649/events
https://github.com/kubeflow/pipelines/issues/8649
1,518,810,123
I_kwDOB-71UM5ahzQL
8,649
[sdk] No longer possible to compile components that use `PipelineTaskFinalStatus`
{ "login": "suned", "id": 1228354, "node_id": "MDQ6VXNlcjEyMjgzNTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1228354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suned", "html_url": "https://github.com/suned", "followers_url": "https://api.github.com/users/suned/followers", "following_url": "https://api.github.com/users/suned/following{/other_user}", "gists_url": "https://api.github.com/users/suned/gists{/gist_id}", "starred_url": "https://api.github.com/users/suned/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suned/subscriptions", "organizations_url": "https://api.github.com/users/suned/orgs", "repos_url": "https://api.github.com/users/suned/repos", "events_url": "https://api.github.com/users/suned/events{/privacy}", "received_events_url": "https://api.github.com/users/suned/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm impacted by this too, and not clear what the workaround is either. Is there a standard way I'm not finding in Vertex to report exit status?", "This is on our radar to fix, @berniecamus. No update/workaround yet.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.", "This is fixed in the latest version of the KFP SDK." ]
"2023-01-04T11:29:09"
"2023-08-28T15:35:47"
"2023-08-28T15:35:46"
CONTRIBUTOR
null
### Environment * KFP version: N/A <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP SDK version: 2.0.0b10 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ``` kfp 2.0.0b10 kfp-pipeline-spec 0.1.17 kfp-server-api 2.0.0a6 ``` ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> In version 1.x.x it was possible to compile components that use `PipelineTaskFinalStatus`, e.g. using: ```python from kfp.v2.dsl import component, PipelineTaskFinalStatus, pipeline, ExitHandler from kfp.components import load_component_from_file @component(output_component_file='example.yaml') def example(status: PipelineTaskFinalStatus): print(status) loaded_component = load_component_from_file('example.yaml') @pipeline def example_pipeline(): with ExitHandler(loaded_component()): pass ``` In version 2.0.0bx we expected this to still be possible, e.g. with ```python from kfp.dsl import component, PipelineTaskFinalStatus, pipeline, ExitHandler from kfp.compiler import Compiler from kfp.components import load_component_from_file @component def example(status: PipelineTaskFinalStatus): print(status) Compiler().compile(example, 'example.yaml') loaded_component = load_component_from_file('example.yaml') @pipeline def example_pipeline(): with ExitHandler(loaded_component()): pass ``` However, this is no longer possible for what seems like a variety of reasons. 
First of all the sdk explicitly forbids it in https://github.com/kubeflow/pipelines/blob/fdf3ee7b68b2293d08e14c389b0dab9a57854e2a/sdk/python/kfp/compiler/pipeline_spec_builder.py#L333-L338 I tried commenting out the above check, which does produce valid component yaml: ```yaml # PIPELINE DEFINITION # Name: example # Inputs: # status: dict components: comp-example: executorLabel: exec-example inputDefinitions: parameters: status: isOptional: true parameterType: STRUCT deploymentSpec: executors: exec-example: container: args: - --executor_input - '{{$}}' - --function_to_execute - example command: - sh - -c - "\nif ! [ -x \"$(command -v pip)\" ]; then\n python3 -m ensurepip ||\ \ python3 -m ensurepip --user || apt-get install python3-pip\nfi\n\nPIP_DISABLE_PIP_VERSION_CHECK=1\ \ python3 -m pip install --quiet --no-warn-script-location 'kfp==2.0.0-beta.10'\ \ && \"$0\" \"$@\"\n" - sh - -ec - 'program_path=$(mktemp -d) printf "%s" "$0" > "$program_path/ephemeral_component.py" python3 -m kfp.components.executor_main --component_module_path "$program_path/ephemeral_component.py" "$@" ' - "\nimport kfp\nfrom kfp import dsl\nfrom kfp.dsl import *\nfrom typing import\ \ *\n\ndef example(status: PipelineTaskFinalStatus):\n print(status)\n\ \n" image: python:3.7 pipelineInfo: name: example root: dag: tasks: example: cachingOptions: enableCache: true componentRef: name: comp-example inputs: parameters: status: componentInputParameter: status taskInfo: name: example inputDefinitions: parameters: status: isOptional: true parameterType: STRUCT schemaVersion: 2.1.0 sdkVersion: kfp-2.0.0-beta.10 ``` Unfortunately, the input type of the `status` parameter is converted to `STRUCT` in the process, so loading in the component yaml results in a component that fails at runtime because the `status` argument is not provided by the backend as expected. ### Expected result <!-- What should the correct behavior be? 
--> It should be possible to compile components that use `PipelineTaskFinalStatus` for feature parity with version 1.x.x ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
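To make the runtime failure above concrete, here is a minimal stdlib sketch. The field names are assumed to mirror kfp's documented `PipelineTaskFinalStatus` (`state`, `pipeline_job_resource_name`, `pipeline_task_name`, `error_code`, `error_message`); this is illustration, not kfp code. It shows what the exit task ultimately needs — a structured status payload injected by the backend — which a plain `STRUCT`-typed input never receives.

```python
import json
from dataclasses import dataclass
from typing import Optional

# Assumption: fields mirror kfp's PipelineTaskFinalStatus documentation.
@dataclass
class FinalStatus:
    state: str
    pipeline_job_resource_name: Optional[str] = None
    pipeline_task_name: Optional[str] = None
    error_code: Optional[int] = None
    error_message: Optional[str] = None

def parse_status(payload: str) -> FinalStatus:
    """Parse the JSON status payload a backend would inject for an exit task."""
    return FinalStatus(**json.loads(payload))

status = parse_status('{"state": "FAILED", "error_message": "boom"}')
print(status.state, status.error_message)  # FAILED boom
```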
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8649/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8649/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8648
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8648/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8648/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8648/events
https://github.com/kubeflow/pipelines/issues/8648
1,518,787,063
I_kwDOB-71UM5ahtn3
8,648
[sdk] jsonschema dependency pinning
{ "login": "bretttully", "id": 5830269, "node_id": "MDQ6VXNlcjU4MzAyNjk=", "avatar_url": "https://avatars.githubusercontent.com/u/5830269?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bretttully", "html_url": "https://github.com/bretttully", "followers_url": "https://api.github.com/users/bretttully/followers", "following_url": "https://api.github.com/users/bretttully/following{/other_user}", "gists_url": "https://api.github.com/users/bretttully/gists{/gist_id}", "starred_url": "https://api.github.com/users/bretttully/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bretttully/subscriptions", "organizations_url": "https://api.github.com/users/bretttully/orgs", "repos_url": "https://api.github.com/users/bretttully/repos", "events_url": "https://api.github.com/users/bretttully/events{/privacy}", "received_events_url": "https://api.github.com/users/bretttully/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }, { "login": "JOCSTAA", "id": 65559367, "node_id": "MDQ6VXNlcjY1NTU5MzY3", "avatar_url": "https://avatars.githubusercontent.com/u/65559367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JOCSTAA", "html_url": "https://github.com/JOCSTAA", "followers_url": "https://api.github.com/users/JOCSTAA/followers", "following_url": "https://api.github.com/users/JOCSTAA/following{/other_user}", "gists_url": "https://api.github.com/users/JOCSTAA/gists{/gist_id}", "starred_url": "https://api.github.com/users/JOCSTAA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JOCSTAA/subscriptions", "organizations_url": "https://api.github.com/users/JOCSTAA/orgs", "repos_url": "https://api.github.com/users/JOCSTAA/repos", "events_url": "https://api.github.com/users/JOCSTAA/events{/privacy}", "received_events_url": "https://api.github.com/users/JOCSTAA/received_events", "type": "User", "site_admin": false } ]
null
[ "Moving back a little in time seems to be fine -- the following runs without an issue\r\n\r\n```\r\nFROM mambaorg/micromamba:jammy as build-stage\r\n\r\nRUN micromamba install --name base --channel conda-forge \\\r\n \"python >=3.10\" \\\r\n \"kfp >=1.8\" \\\r\n \"jupyterlab 3.2.*\" \\\r\n \"jupyterlab_server 2.10.*\"\r\n\r\nARG MAMBA_DOCKERFILE_ACTIVATE=1 # (otherwise python will not be found)\r\nRUN python -c \"import kfp; print(kfp.__version__)\"\r\nRUN python -c \"import jupyterlab_server; print(jupyterlab_server.__version__)\"\r\n```", "Hi, @bretttully. I suspect there are no behavior changes in `jsonschema` v4 that would affect us based on the [changelog](https://github.com/python-jsonschema/jsonschema/blob/main/CHANGELOG.rst#v400). Please feel free to bump the version number in the PR in the following places:\r\n- https://github.com/kubeflow/pipelines/blob/sdk/release-1.8/sdk/python/setup.py#L47\r\n- https://github.com/kubeflow/pipelines/blob/sdk/release-1.8/sdk/python/requirements.in#L16\r\n\r\nThen run the command here: https://github.com/kubeflow/pipelines/blob/sdk/release-1.8/sdk/python/requirements.in#L3\r\n\r\nFrom there, we can run our tests against `kfp` with this new version.", "/assign @JOCSTAA", "Hi; are there any updates re this matter. I'm running in the exact same issue when I want to do environment management by means of [Poetry](https://python-poetry.org/)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2023-01-04T11:16:25"
"2023-08-28T07:42:31"
null
NONE
null
### Environment * KFP SDK version: `>=1.8` ### Steps to reproduce In the `requirements.in` file, `jsonschema >=3.0.1,<4`. A lot of other packages (e.g. [jupyterlab, via jupyterlab_server](https://github.com/conda-forge/jupyterlab_server-feedstock/blob/main/recipe/meta.yaml#L27)) have moved to `jsonschema >=4` ### Expected result It should be possible to install kfp in the same environment as jupyterlab. #### Attempt 1 Here is a docker file where the environment is created successfully, but `jupterlab_server` is broken because it didn't pin `jsonschema` properly. ``` FROM mambaorg/micromamba:jammy as build-stage RUN micromamba install --name base --channel conda-forge \ "python >=3.10" \ "kfp >=1.8" \ "jupyterlab >=3.5" ARG MAMBA_DOCKERFILE_ACTIVATE=1 # (otherwise python will not be found) RUN python -c "import kfp; print(kfp.__version__)" RUN python -c "import jupyterlab_server; print(jupyterlab_server.__version__)" ``` `docker build --progress plain --no-cache -f Dockerfile .` <details> <summary>Docker build output</summary> ```bash #1 [internal] load build definition from debugging.Dockerfile #1 sha256:72a7d0078219e5aabc2b20905f1eda6c042b0871faff96192fd91696fe22ae06 #1 transferring dockerfile: 54B done #1 DONE 0.0s #2 [internal] load .dockerignore #2 sha256:6f6399f5781af6f88a59e1b8fb6df788f99d62320b583d83e37211ae3e5917b2 #2 transferring context: 2B done #2 DONE 0.0s #3 [internal] load metadata for docker.io/mambaorg/micromamba:jammy #3 sha256:70f396090a53ab9b6a069cc5d6d0672a0924b076e550e08e2f4c414e429987cc #3 DONE 1.1s #4 [1/4] FROM docker.io/mambaorg/micromamba:jammy@sha256:3e3d2899397fb031d57d4269a19f7700d4eb72e802d1d1b8f6054acd93b6fe47 #4 sha256:4a94cba5ef489da4fe85e3887c63904a40cb788f5ac27ff5dff6a4f3998815af #4 CACHED #5 [2/4] RUN micromamba install --name base --channel conda-forge "python >=3.10" "kfp >=1.8" "jupyterlab >=3.5" #5 sha256:3855277fd177e022a0fd9b8478652f3563e2e3be3cbdb8c93c2e6b34728fe389 #5 0.276 #5 0.276 __ #5 0.276 __ ______ ___ ____ _____ 
___ / /_ ____ _ #5 0.276 / / / / __ `__ \/ __ `/ __ `__ \/ __ \/ __ `/ #5 0.276 / /_/ / / / / / / /_/ / / / / / / /_/ / /_/ / #5 0.276 / .___/_/ /_/ /_/\__,_/_/ /_/ /_/_.___/\__,_/ #5 0.276 /_/ #5 0.276 #5 24.28 #5 24.28 Transaction #5 24.28 #5 24.28 Prefix: /opt/conda #5 24.28 #5 24.28 Updating specs: #5 24.28 #5 24.28 - python[version='>=3.10'] #5 24.28 - kfp[version='>=1.8'] #5 24.28 - jupyterlab[version='>=3.5'] #5 24.28 #5 24.28 #5 24.31 Package Version Build Channel Size #5 24.31 ──────────────────────────────────────────────────────────────────────────────────────────────────── #5 24.31 Install: #5 24.31 ──────────────────────────────────────────────────────────────────────────────────────────────────── #5 24.31 #5 24.31 + _libgcc_mutex 0.1 conda_forge conda-forge/linux-64 3kB #5 24.31 + _openmp_mutex 4.5 2_gnu conda-forge/linux-64 24kB #5 24.31 + absl-py 1.3.0 pyhd8ed1ab_0 conda-forge/noarch 98kB #5 24.31 + aiohttp 3.8.3 py311hd4cff14_1 conda-forge/linux-64 561kB #5 24.31 + aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge/noarch 13kB #5 24.31 + anyio 3.6.2 pyhd8ed1ab_0 conda-forge/noarch 85kB #5 24.31 + argon2-cffi 21.3.0 pyhd8ed1ab_0 conda-forge/noarch 16kB #5 24.31 + argon2-cffi-bindings 21.2.0 py311hd4cff14_3 conda-forge/linux-64 36kB #5 24.31 + asttokens 2.2.1 pyhd8ed1ab_0 conda-forge/noarch 28kB #5 24.31 + async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge/noarch 9kB #5 24.31 + attrs 22.2.0 pyh71513ae_0 conda-forge/noarch 54kB #5 24.31 + babel 2.11.0 pyhd8ed1ab_0 conda-forge/noarch 7MB #5 24.31 + backcall 0.2.0 pyh9f0ad1d_0 conda-forge/noarch 14kB #5 24.31 + backports 1.0 pyhd8ed1ab_3 conda-forge/noarch 6kB #5 24.31 + backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge/noarch 9kB #5 24.31 + beautifulsoup4 4.11.1 pyha770c72_0 conda-forge/noarch 98kB #5 24.31 + bleach 5.0.1 pyhd8ed1ab_0 conda-forge/noarch 127kB #5 24.31 + blinker 1.5 pyhd8ed1ab_0 conda-forge/noarch 15kB #5 24.31 + brotlipy 0.7.0 py311hd4cff14_1005 conda-forge/linux-64 354kB #5 24.31 + bzip2 
1.0.8 h7f98852_4 conda-forge/linux-64 496kB #5 24.31 + ca-certificates 2022.12.7 ha878542_0 conda-forge/linux-64 146kB #5 24.31 + cachetools 4.2.4 pyhd8ed1ab_0 conda-forge/noarch 13kB #5 24.31 + certifi 2022.12.7 pyhd8ed1ab_0 conda-forge/noarch 151kB #5 24.31 + cffi 1.15.1 py311h409f033_3 conda-forge/linux-64 296kB #5 24.31 + charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge/noarch 36kB #5 24.31 + click 8.1.3 unix_pyhd8ed1ab_2 conda-forge/noarch 76kB #5 24.31 + cloudpickle 2.2.0 pyhd8ed1ab_0 conda-forge/noarch 26kB #5 24.31 + colorama 0.4.6 pyhd8ed1ab_0 conda-forge/noarch 25kB #5 24.31 + comm 0.1.2 pyhd8ed1ab_0 conda-forge/noarch 11kB #5 24.31 + commonmark 0.9.1 py_0 conda-forge/noarch 47kB #5 24.31 + cryptography 39.0.0 py311h9b4c7bb_0 conda-forge/linux-64 2MB #5 24.31 + dataclasses 0.8 pyhc8e2a94_3 conda-forge/noarch 10kB #5 24.31 + debugpy 1.6.4 py311ha362b79_0 conda-forge/linux-64 2MB #5 24.31 + decorator 5.1.1 pyhd8ed1ab_0 conda-forge/noarch 12kB #5 24.31 + defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge/noarch 24kB #5 24.31 + deprecated 1.2.13 pyh6c4a22f_0 conda-forge/noarch 12kB #5 24.31 + docstring_parser 0.15 pyhd8ed1ab_0 conda-forge/noarch 30kB #5 24.31 + entrypoints 0.4 pyhd8ed1ab_0 conda-forge/noarch 9kB #5 24.31 + executing 1.2.0 pyhd8ed1ab_0 conda-forge/noarch 25kB #5 24.31 + fire 0.4.0 pyh44b312d_0 conda-forge/noarch 81kB #5 24.31 + flit-core 3.8.0 pyhd8ed1ab_0 conda-forge/noarch 46kB #5 24.31 + frozenlist 1.3.3 py311hd4cff14_0 conda-forge/linux-64 46kB #5 24.31 + future 0.18.2 pyhd8ed1ab_6 conda-forge/noarch 365kB #5 24.31 + google-api-core 1.34.0 pyhd8ed1ab_0 conda-forge/noarch 77kB #5 24.31 + google-api-python-client 1.12.8 pyhd3deb0d_0 conda-forge/noarch 49kB #5 24.31 + google-auth 1.35.0 pyh6c4a22f_0 conda-forge/noarch 83kB #5 24.31 + google-auth-httplib2 0.1.0 pyhd8ed1ab_1 conda-forge/noarch 14kB #5 24.31 + google-cloud-core 1.3.0 py_0 conda-forge/noarch 25kB #5 24.31 + google-cloud-storage 1.30.0 pyh9f0ad1d_0 conda-forge/noarch 62kB #5 24.31 + 
google-crc32c 1.1.2 py311h98db957_4 conda-forge/linux-64 26kB #5 24.31 + google-resumable-media 1.3.3 pyh6c4a22f_0 conda-forge/noarch 42kB #5 24.31 + googleapis-common-protos 1.57.0 pyhd8ed1ab_3 conda-forge/noarch 112kB #5 24.31 + httplib2 0.21.0 pyhd8ed1ab_0 conda-forge/noarch 99kB #5 24.31 + idna 3.4 pyhd8ed1ab_0 conda-forge/noarch 57kB #5 24.31 + importlib-metadata 6.0.0 pyha770c72_0 conda-forge/noarch 25kB #5 24.31 + ipykernel 6.19.4 pyh210e3f2_0 conda-forge/noarch 108kB #5 24.31 + ipython 8.8.0 pyh41d4057_0 conda-forge/noarch 568kB #5 24.31 + ipython_genutils 0.2.0 py_1 conda-forge/noarch 22kB #5 24.31 + jedi 0.18.2 pyhd8ed1ab_0 conda-forge/noarch 804kB #5 24.31 + jinja2 3.1.2 pyhd8ed1ab_1 conda-forge/noarch 101kB #5 24.31 + json5 0.9.5 pyh9f0ad1d_0 conda-forge/noarch 21kB #5 24.31 + jsonschema 3.2.0 pyhd8ed1ab_3 conda-forge/noarch 46kB #5 24.31 + jupyter_client 7.4.8 pyhd8ed1ab_0 conda-forge/noarch 99kB #5 24.31 + jupyter_core 5.1.2 py311h38be061_0 conda-forge/linux-64 115kB #5 24.31 + jupyter_events 0.5.0 pyhd8ed1ab_0 conda-forge/noarch 66kB #5 24.31 + jupyter_server 2.0.6 pyhd8ed1ab_0 conda-forge/noarch 304kB #5 24.31 + jupyter_server_terminals 0.4.3 pyhd8ed1ab_0 conda-forge/noarch 19kB #5 24.31 + jupyterlab 3.5.2 pyhd8ed1ab_0 conda-forge/noarch 6MB #5 24.31 + jupyterlab_pygments 0.2.2 pyhd8ed1ab_0 conda-forge/noarch 17kB #5 24.31 + jupyterlab_server 2.16.6 pyhd8ed1ab_0 conda-forge/noarch 58kB #5 24.31 + kfp 1.8.14 pyhd8ed1ab_0 conda-forge/noarch 261kB #5 24.31 + kfp-pipeline-spec 0.1.17 pyhd8ed1ab_0 conda-forge/noarch 24kB #5 24.31 + kfp-server-api 1.8.4 pyhd8ed1ab_0 conda-forge/noarch 47kB #5 24.31 + ld_impl_linux-64 2.39 hcc3a1bd_1 conda-forge/linux-64 691kB #5 24.31 + libcrc32c 1.1.2 h9c3ff4c_0 conda-forge/linux-64 20kB #5 24.31 + libffi 3.4.2 h7f98852_5 conda-forge/linux-64 58kB #5 24.31 + libgcc-ng 12.2.0 h65d4601_19 conda-forge/linux-64 954kB #5 24.31 + libgomp 12.2.0 h65d4601_19 conda-forge/linux-64 466kB #5 24.31 + libnsl 2.0.0 h7f98852_0 
conda-forge/linux-64 31kB #5 24.31 + libprotobuf 3.20.2 h6239696_0 conda-forge/linux-64 3MB #5 24.31 + libsodium 1.0.18 h36c2ea0_1 conda-forge/linux-64 375kB #5 24.31 + libsqlite 3.40.0 h753d276_0 conda-forge/linux-64 810kB #5 24.31 + libstdcxx-ng 12.2.0 h46fd767_19 conda-forge/linux-64 4MB #5 24.31 + libuuid 2.32.1 h7f98852_1000 conda-forge/linux-64 28kB #5 24.31 + libzlib 1.2.13 h166bdaf_4 conda-forge/linux-64 66kB #5 24.31 + markupsafe 2.1.1 py311hd4cff14_2 conda-forge/linux-64 26kB #5 24.31 + matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge/noarch 12kB #5 24.31 + mistune 2.0.4 pyhd8ed1ab_0 conda-forge/noarch 69kB #5 24.31 + multidict 6.0.4 py311h2582759_0 conda-forge/linux-64 57kB #5 24.31 + nbclassic 0.4.8 pyhd8ed1ab_0 conda-forge/noarch 8MB #5 24.31 + nbclient 0.7.2 pyhd8ed1ab_0 conda-forge/noarch 62kB #5 24.31 + nbconvert 7.2.7 pyhd8ed1ab_0 conda-forge/noarch 8kB #5 24.31 + nbconvert-core 7.2.7 pyhd8ed1ab_0 conda-forge/noarch 199kB #5 24.31 + nbconvert-pandoc 7.2.7 pyhd8ed1ab_0 conda-forge/noarch 6kB #5 24.31 + nbformat 5.7.1 pyhd8ed1ab_0 conda-forge/noarch 101kB #5 24.31 + ncurses 6.3 h27087fc_1 conda-forge/linux-64 1MB #5 24.31 + nest-asyncio 1.5.6 pyhd8ed1ab_0 conda-forge/noarch 10kB #5 24.31 + notebook 6.5.2 pyha770c72_1 conda-forge/noarch 273kB #5 24.31 + notebook-shim 0.2.2 pyhd8ed1ab_0 conda-forge/noarch 15kB #5 24.31 + oauthlib 3.2.2 pyhd8ed1ab_0 conda-forge/noarch 92kB #5 24.31 + openssl 3.0.7 h0b41bf4_1 conda-forge/linux-64 3MB #5 24.31 + packaging 22.0 pyhd8ed1ab_0 conda-forge/noarch 40kB #5 24.31 + pandoc 2.19.2 h32600fe_1 conda-forge/linux-64 31MB #5 24.31 + pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge/noarch 12kB #5 24.31 + parso 0.8.3 pyhd8ed1ab_0 conda-forge/noarch 71kB #5 24.31 + pexpect 4.8.0 pyh1a96a4e_2 conda-forge/noarch 49kB #5 24.31 + pickleshare 0.7.5 py_1003 conda-forge/noarch 9kB #5 24.31 + pip 22.3.1 pyhd8ed1ab_0 conda-forge/noarch 2MB #5 24.31 + platformdirs 2.6.2 pyhd8ed1ab_0 conda-forge/noarch 17kB #5 24.31 + prometheus_client 
0.15.0 pyhd8ed1ab_0 conda-forge/noarch 51kB #5 24.31 + prompt-toolkit 3.0.36 pyha770c72_0 conda-forge/noarch 272kB #5 24.31 + protobuf 3.20.2 py311ha362b79_1 conda-forge/linux-64 369kB #5 24.31 + psutil 5.9.4 py311hd4cff14_0 conda-forge/linux-64 497kB #5 24.31 + ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge/noarch 17kB #5 24.31 + pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge/noarch 15kB #5 24.31 + pyasn1 0.4.8 py_0 conda-forge/noarch 54kB #5 24.31 + pyasn1-modules 0.2.7 py_0 conda-forge/noarch 61kB #5 24.31 + pycparser 2.21 pyhd8ed1ab_0 conda-forge/noarch 103kB #5 24.31 + pydantic 1.10.4 py311h2582759_0 conda-forge/linux-64 2MB #5 24.31 + pygments 2.14.0 pyhd8ed1ab_0 conda-forge/noarch 824kB #5 24.31 + pyjwt 2.6.0 pyhd8ed1ab_0 conda-forge/noarch 21kB #5 24.31 + pyopenssl 23.0.0 pyhd8ed1ab_0 conda-forge/noarch 127kB #5 24.31 + pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge/noarch 81kB #5 24.31 + pyrsistent 0.19.3 py311h2582759_0 conda-forge/linux-64 124kB #5 24.31 + pysocks 1.7.1 pyha2e5f31_6 conda-forge/noarch 19kB #5 24.31 + python 3.11.0 ha86cf86_0_cpython conda-forge/linux-64 38MB #5 24.31 + python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge/noarch 246kB #5 24.31 + python-fastjsonschema 2.16.2 pyhd8ed1ab_0 conda-forge/noarch 247kB #5 24.31 + python-json-logger 2.0.4 pyhd8ed1ab_0 conda-forge/noarch 13kB #5 24.31 + python-kubernetes 18.20.0 pyhd8ed1ab_0 conda-forge/noarch 484kB #5 24.31 + python_abi 3.11 3_cp311 conda-forge/linux-64 6kB #5 24.31 + pytz 2022.7 pyhd8ed1ab_0 conda-forge/noarch 186kB #5 24.31 + pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge/noarch 32kB #5 24.31 + pyyaml 5.4.1 py311hd4cff14_4 conda-forge/linux-64 212kB #5 24.31 + pyzmq 24.0.1 py311ha4b6469_1 conda-forge/linux-64 587kB #5 24.31 + readline 8.1.2 h0f457ee_0 conda-forge/linux-64 298kB #5 24.31 + requests 2.28.1 pyhd8ed1ab_1 conda-forge/noarch 55kB #5 24.31 + requests-oauthlib 1.3.1 pyhd8ed1ab_0 conda-forge/noarch 22kB #5 24.31 + requests-toolbelt 0.10.1 pyhd8ed1ab_0 conda-forge/noarch 43kB #5 24.31 + rich 12.6.0 
pyhd8ed1ab_0 conda-forge/noarch 175kB #5 24.31 + rsa 4.9 pyhd8ed1ab_0 conda-forge/noarch 30kB #5 24.31 + send2trash 1.8.0 pyhd8ed1ab_0 conda-forge/noarch 18kB #5 24.31 + setuptools 65.6.3 pyhd8ed1ab_0 conda-forge/noarch 634kB #5 24.31 + shellingham 1.5.0.post1 pyhd8ed1ab_0 conda-forge/noarch 14kB #5 24.31 + simplejson 3.18.1 py311h2582759_0 conda-forge/linux-64 135kB #5 24.31 + six 1.16.0 pyh6c4a22f_0 conda-forge/noarch 14kB #5 24.31 + sniffio 1.3.0 pyhd8ed1ab_0 conda-forge/noarch 14kB #5 24.31 + soupsieve 2.3.2.post1 pyhd8ed1ab_0 conda-forge/noarch 35kB #5 24.31 + stack_data 0.6.2 pyhd8ed1ab_0 conda-forge/noarch 26kB #5 24.31 + strip-hints 0.1.8 py_0 conda-forge/noarch 20kB #5 24.31 + tabulate 0.9.0 pyhd8ed1ab_1 conda-forge/noarch 36kB #5 24.31 + termcolor 2.1.1 pyhd8ed1ab_0 conda-forge/noarch 10kB #5 24.31 + terminado 0.17.1 pyh41d4057_0 conda-forge/noarch 21kB #5 24.31 + tinycss2 1.2.1 pyhd8ed1ab_0 conda-forge/noarch 23kB #5 24.31 + tk 8.6.12 h27826a3_0 conda-forge/linux-64 3MB #5 24.31 + tomli 2.0.1 pyhd8ed1ab_0 conda-forge/noarch 16kB #5 24.31 + tornado 6.2 py311hd4cff14_1 conda-forge/linux-64 885kB #5 24.31 + traitlets 5.8.0 pyhd8ed1ab_0 conda-forge/noarch 98kB #5 24.31 + typer 0.7.0 pyhd8ed1ab_0 conda-forge/noarch 58kB #5 24.31 + typing-extensions 4.4.0 hd8ed1ab_0 conda-forge/noarch 9kB #5 24.31 + typing_extensions 4.4.0 pyha770c72_0 conda-forge/noarch 30kB #5 24.31 + tzdata 2022g h191b570_0 conda-forge/noarch 108kB #5 24.31 + uritemplate 3.0.1 py_0 conda-forge/noarch 16kB #5 24.31 + urllib3 1.26.13 pyhd8ed1ab_0 conda-forge/noarch 111kB #5 24.31 + wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge/noarch 34kB #5 24.31 + webencodings 0.5.1 py_1 conda-forge/noarch 12kB #5 24.31 + websocket-client 1.4.2 pyhd8ed1ab_0 conda-forge/noarch 44kB #5 24.31 + wheel 0.38.4 pyhd8ed1ab_0 conda-forge/noarch 33kB #5 24.31 + wrapt 1.14.1 py311hd4cff14_1 conda-forge/linux-64 61kB #5 24.31 + xz 5.2.6 h166bdaf_0 conda-forge/linux-64 418kB #5 24.31 + yaml 0.2.5 h7f98852_2 
conda-forge/linux-64 89kB #5 24.31 + yarl 1.8.2 py311hd4cff14_0 conda-forge/linux-64 98kB #5 24.31 + zeromq 4.3.4 h9c3ff4c_1 conda-forge/linux-64 360kB #5 24.31 + zipp 3.11.0 pyhd8ed1ab_0 conda-forge/noarch 16kB #5 24.31 #5 24.31 Summary: #5 24.31 #5 24.31 Install: 174 packages #5 24.31 #5 24.31 Total download: 132MB #5 24.31 #5 24.31 ──────────────────────────────────────────────────────────────────────────────────────────────────── #5 24.31 #5 24.31 #5 24.31 Confirm changes: [Y/n] #5 24.31 Transaction starting #5 55.40 Linking _libgcc_mutex-0.1-conda_forge #5 55.40 Linking python_abi-3.11-3_cp311 #5 55.41 Linking libstdcxx-ng-12.2.0-h46fd767_19 #5 55.41 Linking ld_impl_linux-64-2.39-hcc3a1bd_1 #5 55.41 Linking ca-certificates-2022.12.7-ha878542_0 #5 55.42 Linking libgomp-12.2.0-h65d4601_19 #5 55.42 Linking _openmp_mutex-4.5-2_gnu #5 55.42 Linking libgcc-ng-12.2.0-h65d4601_19 #5 55.43 Linking libuuid-2.32.1-h7f98852_1000 #5 55.43 Linking libsodium-1.0.18-h36c2ea0_1 #5 55.44 Linking openssl-3.0.7-h0b41bf4_1 #5 55.51 Linking libffi-3.4.2-h7f98852_5 #5 55.51 Linking bzip2-1.0.8-h7f98852_4 #5 55.51 Linking yaml-0.2.5-h7f98852_2 #5 55.52 Linking ncurses-6.3-h27087fc_1 #5 59.34 Linking libcrc32c-1.1.2-h9c3ff4c_0 #5 59.34 Linking xz-5.2.6-h166bdaf_0 #5 59.38 Linking libzlib-1.2.13-h166bdaf_4 #5 59.39 Linking libnsl-2.0.0-h7f98852_0 #5 59.39 Linking zeromq-4.3.4-h9c3ff4c_1 #5 59.40 Linking readline-8.1.2-h0f457ee_0 #5 59.41 Linking pandoc-2.19.2-h32600fe_1 #5 59.41 Linking libsqlite-3.40.0-h753d276_0 #5 59.41 Linking libprotobuf-3.20.2-h6239696_0 #5 59.44 Linking tk-8.6.12-h27826a3_0 #5 59.54 Linking tzdata-2022g-h191b570_0 #5 59.63 Linking python-3.11.0-ha86cf86_0_cpython #5 60.54 Linking wheel-0.38.4-pyhd8ed1ab_0 #5 60.98 Linking setuptools-65.6.3-pyhd8ed1ab_0 #5 61.03 Linking pip-22.3.1-pyhd8ed1ab_0 #5 61.12 Linking blinker-1.5-pyhd8ed1ab_0 #5 61.12 Linking pyjwt-2.6.0-pyhd8ed1ab_0 #5 61.12 Linking pycparser-2.21-pyhd8ed1ab_0 #5 61.16 Linking pytz-2022.7-pyhd8ed1ab_0 
#5 61.25 Linking backports-1.0-pyhd8ed1ab_3 #5 61.25 Linking soupsieve-2.3.2.post1-pyhd8ed1ab_0 #5 61.26 Linking pyparsing-3.0.9-pyhd8ed1ab_0 #5 61.27 Linking pysocks-1.7.1-pyha2e5f31_6 #5 61.27 Linking certifi-2022.12.7-pyhd8ed1ab_0 #5 61.27 Linking future-0.18.2-pyhd8ed1ab_6 #5 61.33 Linking dataclasses-0.8-pyhc8e2a94_3 #5 61.33 Linking pyasn1-0.4.8-py_0 #5 61.35 Linking shellingham-1.5.0.post1-pyhd8ed1ab_0 #5 61.35 Linking colorama-0.4.6-pyhd8ed1ab_0 #5 61.36 Linking zipp-3.11.0-pyhd8ed1ab_0 #5 61.36 Linking parso-0.8.3-pyhd8ed1ab_0 #5 61.37 Linking json5-0.9.5-pyh9f0ad1d_0 #5 61.38 Linking mistune-2.0.4-pyhd8ed1ab_0 #5 61.39 Linking pandocfilters-1.5.0-pyhd8ed1ab_0 #5 61.39 Linking defusedxml-0.7.1-pyhd8ed1ab_0 #5 61.40 Linking webencodings-0.5.1-py_1 #5 61.40 Linking python-fastjsonschema-2.16.2-pyhd8ed1ab_0 #5 61.41 Linking pure_eval-0.2.2-pyhd8ed1ab_0 #5 61.41 Linking executing-1.2.0-pyhd8ed1ab_0 #5 61.42 Linking ptyprocess-0.7.0-pyhd3deb0d_0 #5 61.42 Linking charset-normalizer-2.1.1-pyhd8ed1ab_0 #5 61.43 Linking idna-3.4-pyhd8ed1ab_0 #5 61.44 Linking flit-core-3.8.0-pyhd8ed1ab_0 #5 61.47 Linking termcolor-2.1.1-pyhd8ed1ab_0 #5 61.47 Linking cachetools-4.2.4-pyhd8ed1ab_0 #5 61.48 Linking attrs-22.2.0-pyh71513ae_0 #5 61.48 Linking python-json-logger-2.0.4-pyhd8ed1ab_0 #5 61.49 Linking tabulate-0.9.0-pyhd8ed1ab_1 #5 61.49 Linking docstring_parser-0.15-pyhd8ed1ab_0 #5 61.50 Linking cloudpickle-2.2.0-pyhd8ed1ab_0 #5 61.50 Linking typing_extensions-4.4.0-pyha770c72_0 #5 61.51 Linking click-8.1.3-unix_pyhd8ed1ab_2 #5 61.51 Linking absl-py-1.3.0-pyhd8ed1ab_0 #5 61.52 Linking six-1.16.0-pyh6c4a22f_0 #5 61.53 Linking pygments-2.14.0-pyhd8ed1ab_0 #5 61.59 Linking backcall-0.2.0-pyh9f0ad1d_0 #5 61.60 Linking decorator-5.1.1-pyhd8ed1ab_0 #5 61.60 Linking nest-asyncio-1.5.6-pyhd8ed1ab_0 #5 61.60 Linking entrypoints-0.4-pyhd8ed1ab_0 #5 61.61 Linking traitlets-5.8.0-pyhd8ed1ab_0 #5 61.62 Linking websocket-client-1.4.2-pyhd8ed1ab_0 #5 61.63 Linking 
prometheus_client-0.15.0-pyhd8ed1ab_0 #5 61.64 Linking sniffio-1.3.0-pyhd8ed1ab_0 #5 61.64 Linking tomli-2.0.1-pyhd8ed1ab_0 #5 61.65 Linking packaging-22.0-pyhd8ed1ab_0 #5 61.65 Linking send2trash-1.8.0-pyhd8ed1ab_0 #5 61.66 Linking ipython_genutils-0.2.0-py_1 #5 61.67 Linking pickleshare-0.7.5-py_1003 #5 61.67 Linking strip-hints-0.1.8-py_0 #5 61.68 Linking babel-2.11.0-pyhd8ed1ab_0 #5 61.87 Linking backports.functools_lru_cache-1.6.4-pyhd8ed1ab_0 #5 61.88 Linking beautifulsoup4-4.11.1-pyha770c72_0 #5 61.89 Linking httplib2-0.21.0-pyhd8ed1ab_0 #5 61.89 Linking commonmark-0.9.1-py_0 #5 61.90 Linking pyasn1-modules-0.2.7-py_0 #5 61.93 Linking rsa-4.9-pyhd8ed1ab_0 #5 61.93 Linking importlib-metadata-6.0.0-pyha770c72_0 #5 61.94 Linking jedi-0.18.2-pyhd8ed1ab_0 #5 62.44 Linking tinycss2-1.2.1-pyhd8ed1ab_0 #5 62.44 Linking pexpect-4.8.0-pyh1a96a4e_2 #5 62.46 Linking typing-extensions-4.4.0-hd8ed1ab_0 #5 62.46 Linking asttokens-2.2.1-pyhd8ed1ab_0 #5 62.47 Linking pyu2f-0.1.5-pyhd8ed1ab_0 #5 62.48 Linking fire-0.4.0-pyh44b312d_0 #5 62.49 Linking python-dateutil-2.8.2-pyhd8ed1ab_0 #5 62.50 Linking jupyterlab_pygments-0.2.2-pyhd8ed1ab_0 #5 62.52 Linking comm-0.1.2-pyhd8ed1ab_0 #5 62.52 Linking matplotlib-inline-0.1.6-pyhd8ed1ab_0 #5 62.53 Linking anyio-3.6.2-pyhd8ed1ab_0 #5 62.54 Linking bleach-5.0.1-pyhd8ed1ab_0 #5 62.56 Linking wcwidth-0.2.5-pyh9f0ad1d_2 #5 62.56 Linking rich-12.6.0-pyhd8ed1ab_0 #5 62.58 Linking async-timeout-4.0.2-pyhd8ed1ab_0 #5 62.59 Linking platformdirs-2.6.2-pyhd8ed1ab_0 #5 62.59 Linking stack_data-0.6.2-pyhd8ed1ab_0 #5 62.60 Linking prompt-toolkit-3.0.36-pyha770c72_0 #5 62.66 Linking typer-0.7.0-pyhd8ed1ab_0 #5 62.66 Linking ipython-8.8.0-pyh41d4057_0 #5 62.76 Linking psutil-5.9.4-py311hd4cff14_0 #5 62.78 Linking debugpy-1.6.4-py311ha362b79_0 #5 62.94 Linking multidict-6.0.4-py311h2582759_0 #5 62.95 Linking frozenlist-1.3.3-py311hd4cff14_0 #5 62.96 Linking pyrsistent-0.19.3-py311h2582759_0 #5 62.97 Linking wrapt-1.14.1-py311hd4cff14_1 #5 62.98 
Linking simplejson-3.18.1-py311h2582759_0 #5 63.00 Linking pyzmq-24.0.1-py311ha4b6469_1 #5 63.06 Linking markupsafe-2.1.1-py311hd4cff14_2 #5 63.06 Linking tornado-6.2-py311hd4cff14_1 #5 63.09 Linking pyyaml-5.4.1-py311hd4cff14_4 #5 63.10 Linking cffi-1.15.1-py311h409f033_3 #5 63.11 Linking protobuf-3.20.2-py311ha362b79_1 #5 63.12 Linking pydantic-1.10.4-py311h2582759_0 #5 63.14 Linking jupyter_core-5.1.2-py311h38be061_0 #5 63.15 Linking yarl-1.8.2-py311hd4cff14_0 #5 63.16 Linking brotlipy-0.7.0-py311hd4cff14_1005 #5 63.16 Linking cryptography-39.0.0-py311h9b4c7bb_0 #5 63.20 Linking google-crc32c-1.1.2-py311h98db957_4 #5 63.21 Linking argon2-cffi-bindings-21.2.0-py311hd4cff14_3 #5 63.21 Linking aiosignal-1.3.1-pyhd8ed1ab_0 #5 63.21 Linking jsonschema-3.2.0-pyhd8ed1ab_3 #5 63.22 Linking deprecated-1.2.13-pyh6c4a22f_0 #5 63.23 Linking uritemplate-3.0.1-py_0 #5 63.23 Linking jinja2-3.1.2-pyhd8ed1ab_1 #5 63.24 Linking terminado-0.17.1-pyh41d4057_0 #5 63.25 Linking googleapis-common-protos-1.57.0-pyhd8ed1ab_3 #5 63.28 Linking jupyter_client-7.4.8-pyhd8ed1ab_0 #5 63.30 Linking oauthlib-3.2.2-pyhd8ed1ab_0 #5 63.34 Linking pyopenssl-23.0.0-pyhd8ed1ab_0 #5 63.35 Linking google-resumable-media-1.3.3-pyh6c4a22f_0 #5 63.37 Linking argon2-cffi-21.3.0-pyhd8ed1ab_0 #5 63.37 Linking nbformat-5.7.1-pyhd8ed1ab_0 #5 63.39 Linking jupyter_events-0.5.0-pyhd8ed1ab_0 #5 63.40 Linking jupyter_server_terminals-0.4.3-pyhd8ed1ab_0 #5 63.41 Linking kfp-pipeline-spec-0.1.17-pyhd8ed1ab_0 #5 63.41 Linking ipykernel-6.19.4-pyh210e3f2_0 #5 63.44 Linking urllib3-1.26.13-pyhd8ed1ab_0 #5 63.46 Linking nbclient-0.7.2-pyhd8ed1ab_0 #5 63.47 Linking requests-2.28.1-pyhd8ed1ab_1 #5 63.48 Linking kfp-server-api-1.8.4-pyhd8ed1ab_0 #5 63.50 Linking nbconvert-core-7.2.7-pyhd8ed1ab_0 #5 63.55 Linking requests-oauthlib-1.3.1-pyhd8ed1ab_0 #5 63.56 Linking requests-toolbelt-0.10.1-pyhd8ed1ab_0 #5 63.58 Linking nbconvert-pandoc-7.2.7-pyhd8ed1ab_0 #5 63.58 Linking jupyter_server-2.0.6-pyhd8ed1ab_0 #5 63.63 Linking 
nbconvert-7.2.7-pyhd8ed1ab_0 #5 63.63 Linking notebook-shim-0.2.2-pyhd8ed1ab_0 #5 63.64 Linking jupyterlab_server-2.16.6-pyhd8ed1ab_0 #5 63.66 Linking nbclassic-0.4.8-pyhd8ed1ab_0 #5 63.97 Linking notebook-6.5.2-pyha770c72_1 #5 64.01 Linking jupyterlab-3.5.2-pyhd8ed1ab_0 #5 64.12 Linking aiohttp-3.8.3-py311hd4cff14_1 #5 64.14 Linking google-auth-1.35.0-pyh6c4a22f_0 #5 64.15 Linking google-auth-httplib2-0.1.0-pyhd8ed1ab_1 #5 64.16 Linking google-api-core-1.34.0-pyhd8ed1ab_0 #5 64.17 Linking python-kubernetes-18.20.0-pyhd8ed1ab_0 #5 64.29 Linking google-cloud-core-1.3.0-py_0 #5 64.29 Linking google-api-python-client-1.12.8-pyhd3deb0d_0 #5 64.30 Linking google-cloud-storage-1.30.0-pyh9f0ad1d_0 #5 64.30 Linking kfp-1.8.14-pyhd8ed1ab_0 #5 65.65 Transaction finished #5 DONE 67.0s #6 [3/4] RUN python -c "import kfp; print(kfp.__version__)" #6 sha256:d26d993ae6308c1c74fae85095e667ae0e756b1b5b67798d4d3f4c3270dc397e #6 1.468 1.8.14 #6 DONE 1.7s #7 [4/4] RUN python -c "import jupyterlab_server; print(jupyterlab_server.__version__)" #7 sha256:9658a8dc9ad79f099f3bf704f1e72fbbf2bfdec8d42489868625cfa38c078749 #7 1.257 Traceback (most recent call last): #7 1.257 File "<string>", line 1, in <module> #7 1.257 File "/opt/conda/lib/python3.11/site-packages/jupyterlab_server/__init__.py", line 5, in <module> #7 1.257 from .app import LabServerApp #7 1.257 File "/opt/conda/lib/python3.11/site-packages/jupyterlab_server/app.py", line 6, in <module> #7 1.258 from jupyter_server.extension.application import ExtensionApp, ExtensionAppJinjaMixin #7 1.258 File "/opt/conda/lib/python3.11/site-packages/jupyter_server/extension/application.py", line 14, in <module> #7 1.258 from jupyter_server.serverapp import ServerApp #7 1.259 File "/opt/conda/lib/python3.11/site-packages/jupyter_server/serverapp.py", line 33, in <module> #7 1.259 from jupyter_events.logger import EventLogger #7 1.259 File "/opt/conda/lib/python3.11/site-packages/jupyter_events/__init__.py", line 3, in <module> #7 1.260 from 
.logger import EVENTS_METADATA_VERSION, EventLogger #7 1.260 File "/opt/conda/lib/python3.11/site-packages/jupyter_events/logger.py", line 19, in <module> #7 1.260 from .schema_registry import SchemaRegistry #7 1.260 File "/opt/conda/lib/python3.11/site-packages/jupyter_events/schema_registry.py", line 3, in <module> #7 1.260 from .schema import EventSchema #7 1.260 File "/opt/conda/lib/python3.11/site-packages/jupyter_events/schema.py", line 6, in <module> #7 1.260 from jsonschema.protocols import Validator #7 1.260 ModuleNotFoundError: No module named 'jsonschema.protocols' #7 ERROR: executor failed running [/usr/local/bin/_dockerfile_shell.sh python -c "import jupyterlab_server; print(jupyterlab_server.__version__)"]: exit code: 1 ------ > [4/4] RUN python -c "import jupyterlab_server; print(jupyterlab_server.__version__)": ------ executor failed running [/usr/local/bin/_dockerfile_shell.sh python -c "import jupyterlab_server; print(jupyterlab_server.__version__)"]: exit code: 1 ``` </details> #### Attempt 2 Here is a docker file where I have fixed the problem with `Attempt 1` by pinning to the fixed version of `jupyterlab_server` too. Now the environment doesn't build. 
``` FROM mambaorg/micromamba:jammy as build-stage RUN micromamba install --name base --channel conda-forge \ "python >=3.10" \ "kfp >=1.8" \ "jupyterlab >=3.5" \ "jupyterlab_server >=2.17" ``` `docker build --progress plain --no-cache -f Dockerfile .` <details> <summary>Docker build output</summary> ```bash #1 [internal] load build definition from debugging.Dockerfile #1 sha256:3a0dc4a0c95482ec6cc84124298afe0c2ebae9294c277bc91274793bf36aac2c #1 transferring dockerfile: 465B done #1 DONE 0.0s #2 [internal] load .dockerignore #2 sha256:c1c4fbf0b0e6bb3ce897f5e9c1fd0615466b85f3bacdae64fd25220cad1efed2 #2 transferring context: 2B done #2 DONE 0.0s #3 [internal] load metadata for docker.io/mambaorg/micromamba:jammy #3 sha256:70f396090a53ab9b6a069cc5d6d0672a0924b076e550e08e2f4c414e429987cc #3 DONE 2.1s #4 [1/4] FROM docker.io/mambaorg/micromamba:jammy@sha256:3e3d2899397fb031d57d4269a19f7700d4eb72e802d1d1b8f6054acd93b6fe47 #4 sha256:4a94cba5ef489da4fe85e3887c63904a40cb788f5ac27ff5dff6a4f3998815af #4 CACHED #5 [2/4] RUN micromamba install --name base --channel conda-forge "python >=3.10" "kfp >=1.8" "jupyterlab >=3.5" "jupyterlab_server >=2.17" #5 sha256:756e7ff99e52e8c6d7b872a5482a4c4db23a3c34732d11392cc3313276149405 #5 0.232 #5 0.232 __ #5 0.232 __ ______ ___ ____ _____ ___ / /_ ____ _ #5 0.232 / / / / __ `__ \/ __ `/ __ `__ \/ __ \/ __ `/ #5 0.232 / /_/ / / / / / / /_/ / / / / / / /_/ / /_/ / #5 0.232 / .___/_/ /_/ /_/\__,_/_/ /_/ /_/_.___/\__,_/ #5 0.232 /_/ #5 0.232 #5 19.09 error libmamba Could not solve for environment specs #5 19.09 Encountered problems while solving: #5 19.09 - package kfp-1.8.9-pyhd8ed1ab_0 requires jsonschema >=3.0.1,<4, but none of the providers can be installed #5 19.09 #5 19.09 The environment can't be solved, aborting the operation #5 19.09 #5 19.13 critical libmamba Could not solve for environment specs #5 ERROR: executor failed running [/usr/local/bin/_dockerfile_shell.sh micromamba install --name base --channel conda-forge "python >=3.10" 
"kfp >=1.8" "jupyterlab >=3.5" "jupyterlab_server >=2.17"]: exit code: 1 ------ > [2/4] RUN micromamba install --name base --channel conda-forge "python >=3.10" "kfp >=1.8" "jupyterlab >=3.5" "jupyterlab_server >=2.17": ------ executor failed running [/usr/local/bin/_dockerfile_shell.sh micromamba install --name base --channel conda-forge "python >=3.10" "kfp >=1.8" "jupyterlab >=3.5" "jupyterlab_server >=2.17"]: exit code: 1 ``` </details> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8648/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8648/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8641
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8641/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8641/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8641/events
https://github.com/kubeflow/pipelines/issues/8641
1,517,061,835
I_kwDOB-71UM5abIbL
8,641
Pipeline:<lambda>() missing 1 required positional argument: 'y'
{ "login": "leoliang1988", "id": 19922561, "node_id": "MDQ6VXNlcjE5OTIyNTYx", "avatar_url": "https://avatars.githubusercontent.com/u/19922561?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leoliang1988", "html_url": "https://github.com/leoliang1988", "followers_url": "https://api.github.com/users/leoliang1988/followers", "following_url": "https://api.github.com/users/leoliang1988/following{/other_user}", "gists_url": "https://api.github.com/users/leoliang1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/leoliang1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leoliang1988/subscriptions", "organizations_url": "https://api.github.com/users/leoliang1988/orgs", "repos_url": "https://api.github.com/users/leoliang1988/repos", "events_url": "https://api.github.com/users/leoliang1988/events{/privacy}", "received_events_url": "https://api.github.com/users/leoliang1988/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hi @leoliang1988, it looks like this error message comes from the function `model.fit()`, where it requires more input arguments than you provided. Maybe the `DummyClassifier`'s documentation can provide more info?" ]
"2023-01-03T08:17:30"
"2023-01-05T23:47:11"
"2023-01-05T23:47:10"
NONE
null
Hello, here is my model:

```python
model = Pipeline([
    ('scaler', FunctionTransformer(StandardScaler().fit_transform, validate=True)),
    ('dummy', FunctionTransformer(lambda X, y: dummy_fit_transform(X, y), validate=False)),
    ('reshape', FunctionTransformer(reshape_to_3d, validate=True)),
    ('bidirectional', KerasClassifier(build_fn=create_model, epochs=10, batch_size=64))
])

def dummy_fit_transform(X, y):
    model = DummyClassifier(strategy='most_frequent')
    model.fit(X, y)
    return model.predict(X)

def create_model():
    model = Sequential([
        Bidirectional(LSTM(128)),
        Dense(1, activation='sigmoid')
    ])
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def reshape_to_3d(X):
    return np.reshape(X, (X.shape[0], 1, X.shape[1]))
```

The question is that when I run:

```python
model.fit(x_train, y_train)
```

the system notifies me: `TypeError: <lambda>() missing 1 required positional argument: 'y'`. Please tell me how to fix it, thank you very much!
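The closing comment points at `model.fit()`, but the error is reproducible in isolation: scikit-learn's `FunctionTransformer` invokes its `func` with `X` only, so a two-argument lambda like the one in the `'dummy'` step fails as soon as `transform` runs. A minimal standalone sketch (not the reporter's full Keras pipeline; the second transformer is only an illustration of binding `y` through a closure):

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

X = np.arange(6).reshape(3, 2)
y = np.array([0, 1, 0])

# Reproduces the reported TypeError: transform() calls func(X) with X only,
# so a func that also requires y is missing its second positional argument.
ft = FunctionTransformer(lambda X, y: X, validate=False)
try:
    ft.fit_transform(X, y)  # y reaches fit(), but is never forwarded to func
except TypeError as e:
    print(e)

# One workaround: bind y outside the transformer so func only needs X.
ft_ok = FunctionTransformer(lambda X: X + y.reshape(-1, 1), validate=False)
print(ft_ok.fit_transform(X, y).shape)  # (3, 2)
```

A transformer that genuinely needs `y` during `fit` (like the `DummyClassifier` wrapper here) is usually better written as a small custom estimator with its own `fit`/`transform` methods rather than squeezed into a `FunctionTransformer`.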
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8641/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8627
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8627/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8627/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8627/events
https://github.com/kubeflow/pipelines/issues/8627
1,508,877,981
I_kwDOB-71UM5Z76ad
8,627
[sdk] dsl.Condition throws Attribute error on valid condition in kfp.compiler.compiler_utils L131
{ "login": "MatthewRalston", "id": 4308024, "node_id": "MDQ6VXNlcjQzMDgwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4308024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatthewRalston", "html_url": "https://github.com/MatthewRalston", "followers_url": "https://api.github.com/users/MatthewRalston/followers", "following_url": "https://api.github.com/users/MatthewRalston/following{/other_user}", "gists_url": "https://api.github.com/users/MatthewRalston/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatthewRalston/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatthewRalston/subscriptions", "organizations_url": "https://api.github.com/users/MatthewRalston/orgs", "repos_url": "https://api.github.com/users/MatthewRalston/repos", "events_url": "https://api.github.com/users/MatthewRalston/events{/privacy}", "received_events_url": "https://api.github.com/users/MatthewRalston/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "`str(fastq1).endswith(\".gz\")` is not a valid condition to `dsl.Condition`.\r\n`dsl.Condition` expects two operants and an operator:\r\n\r\nhttps://www.kubeflow.org/docs/components/pipelines/v2/author-a-pipeline/pipelines/#dslcondition" ]
"2022-12-23T04:40:38"
"2022-12-30T00:03:54"
"2022-12-30T00:03:54"
NONE
null
### Environment * KFP version: <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> Minikube v1.28.0 kubernetes 1.22.2 Kustomize v3.2.0 kubectl v1.25.5 Manifests v1.6.1 * KFP SDK version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. 2.0.0-beta.9 To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> kfp 2.0.0b9 kfp-pipeline-spec 0.1.16 kfp-server-api 2.0.0a6 ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> All you have to do is specify a `dsl.Condition` with a Python condition, and an attribute error is thrown when called on the 'bool' evaluated from the Python condition. 
```python
@dsl.container_component
def gunzip(infile:dsl.InputPath(str), outfile:dsl.OutputPath(str)):
    return dsl.ContainerSpec(
        image='bitnami/minideb:latest',
        command=[
            'zcat ',
            '$1 ',
            '> ',
            '$2 ',
            '||',
            'mv $1 $2'
        ],
        args=[infile, outfile]
    )


@dsl.pipeline(
    name='kubeflow-barebone-demo',
    description='kubeflow demo with minimal setup'
)
def rnaseq_pipeline(fastq1:str):
    # Step 1: checking file extension(s) for gzip compression
    with dsl.Condition(str(fastq1).endswith(".gz"), 'fastq1-needs-gunzip'):
        gunzip1 = gunzip(infile=fastq1)
```

#### Output

```python
Traceback (most recent call last):
  File "./pipeline.py", line 51, in <module>
    def rnaseq_pipeline(fastq1:dsl.Input[dsl.Dataset]):
  File "/home/matt/.pyenv/versions/miniconda3-4.7.12/lib/python3.7/site-packages/kfp/components/pipeline_context.py", line 67, in pipeline
    return component_factory.create_graph_component_from_func(func)
  File "/home/matt/.pyenv/versions/miniconda3-4.7.12/lib/python3.7/site-packages/kfp/components/component_factory.py", line 527, in create_graph_component_from_func
    name=component_name,
  File "/home/matt/.pyenv/versions/miniconda3-4.7.12/lib/python3.7/site-packages/kfp/components/graph_component.py", line 69, in __init__
    pipeline_outputs=pipeline_outputs,
  File "/home/matt/.pyenv/versions/miniconda3-4.7.12/lib/python3.7/site-packages/kfp/compiler/pipeline_spec_builder.py", line 1549, in create_pipeline_spec
    root_group)
  File "/home/matt/.pyenv/versions/miniconda3-4.7.12/lib/python3.7/site-packages/kfp/compiler/compiler_utils.py", line 146, in get_condition_channels_for_tasks
    _get_condition_channels_for_tasks_helper(root_group, [])
  File "/home/matt/.pyenv/versions/miniconda3-4.7.12/lib/python3.7/site-packages/kfp/compiler/compiler_utils.py", line 144, in _get_condition_channels_for_tasks_helper
    group, new_current_conditions_channels)
  File "/home/matt/.pyenv/versions/miniconda3-4.7.12/lib/python3.7/site-packages/kfp/compiler/compiler_utils.py", line 131, in _get_condition_channels_for_tasks_helper
    if isinstance(group.condition.left_operand,
AttributeError: 'bool' object has no attribute 'left_operand'
```

### Expected result

<!-- What should the correct behavior be? -->

I'm not sure what was intended by the `left_operand` call, but I expected more documentation on how to use a conditional in Kubeflow pipelines.

### Materials and Reference

<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->

---

<!-- Don't delete message below to encourage users to support your issue! -->

Impacted by this bug? Give it a 👍.
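The `AttributeError` becomes clearer once you notice that `str(fastq1).endswith(".gz")` is evaluated by Python while the pipeline is being *built*: `str()` flattens the input placeholder to text, `.endswith()` returns a plain `bool`, and a `bool` has no `left_operand` for the compiler to inspect. `dsl.Condition` instead wants a comparison between a task or parameter channel and a value, which yields a deferred condition object. A toy stand-in (the `Channel` class below is hypothetical, not kfp's real implementation) illustrates the difference:

```python
# Hypothetical stand-in for a KFP input placeholder: comparisons build a
# deferred condition object, but str() just yields the placeholder text.
class Channel:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return "{{channel:%s}}" % self.name

    def __eq__(self, other):
        # A real SDK returns a condition object here; a tuple stands in.
        return ("==", self.name, other)


fastq1 = Channel("fastq1")

# What the reporter wrote: evaluated at pipeline-construction time, long
# before any runtime value exists, so it is just a compile-time bool.
print(str(fastq1).endswith(".gz"))  # False

# What dsl.Condition can work with: a deferred comparison object.
print(fastq1 == "sample.fastq.gz")  # ('==', 'fastq1', 'sample.fastq.gz')
```

As the closing comment's docs link suggests, the practical fix is to compute the `.gz` check inside a component and hand `dsl.Condition` a comparison such as `check_task.output == True`.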
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8627/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8626
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8626/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8626/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8626/events
https://github.com/kubeflow/pipelines/issues/8626
1,508,525,416
I_kwDOB-71UM5Z6kVo
8,626
[bug] kfp does not allow introspection of PipelineParam values for debugging.
{ "login": "MatthewRalston", "id": 4308024, "node_id": "MDQ6VXNlcjQzMDgwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4308024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatthewRalston", "html_url": "https://github.com/MatthewRalston", "followers_url": "https://api.github.com/users/MatthewRalston/followers", "following_url": "https://api.github.com/users/MatthewRalston/following{/other_user}", "gists_url": "https://api.github.com/users/MatthewRalston/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatthewRalston/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatthewRalston/subscriptions", "organizations_url": "https://api.github.com/users/MatthewRalston/orgs", "repos_url": "https://api.github.com/users/MatthewRalston/repos", "events_url": "https://api.github.com/users/MatthewRalston/events{/privacy}", "received_events_url": "https://api.github.com/users/MatthewRalston/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "This is expected behavior. See [comment](https://github.com/kubeflow/pipelines/issues/8203#issuecomment-1366043593) for a relevant information.\r\n\r\nYou can achieve this by swapping out the arbitrary Python code in `dsl.Condition` for a task that does the check:\r\n\r\n```\r\nis_gz_task = is_gz(path=infile)\r\nwith dsl.Condition(is_gz_task.output == False, name='is_gunzip_not_needed'):\r\n```\r\n\r\nIn general, the pipeline body must contain only orchestration logic and cannot use runtime values or arbitrary user code. Component bodies can contain arbitrary user code.", "Will take a look. Thanks for the speedy response." ]
"2022-12-22T20:48:50"
"2022-12-27T18:51:21"
"2022-12-27T17:04:42"
NONE
null
### Environment <!-- Please fill in those that seem relevant. --> ```bash Minikube v1.28.0 kubernetes 1.22.2 Kustomize v3.2.0 kubectl v1.25.5 Manifests v1.6.1 kfp 1.8.17 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 ``` * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> I deploy it locally on a minikube cluster. * KFP version: kfp 1.8.17 * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> kfp 1.8.17 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> I expect to be able to use Python logic or properly use the `kfp.dsl.Condition` constructor to provide conditional logic (i.e. whether or not to decompress an input file) within a `@kfp.dsl.pipeline` pipeline. Instead, I cannot introspect `PipelineParam` objects in the pipeline! How do I do conditional validations? Seems like an essential function for a production grade pipeline. I'm sure it's user error and documentation. 
```python
#!/bin/env python
import os
import sys
import argparse

import kfp.dsl as dsl
import kfp.components as comp
import kfp

components_dir = os.path.join(os.path.dirname(__file__), "components")
#gunzip = comp.load_component_from_file(os.path.join(components_dir, "gunzip.yaml"))


@dsl.pipeline(
    name='kubeflow-barebone-demo',
    description='kubeflow demo with minimal setup'
)
def rnaseq_pipeline(fastq1: str) -> bool:
    # Step 1: training component
    gunzip = dsl.ContainerOp(
        name='gunzip',
        images='bitnami/minideb:latest',
        command=['gunzip', fastq1],
        file_outputs={'fastq1': fastq1.rstrip(".gz")}
    )

    with dsl.Condition(os.path.splitext(fastq)[1] is ".gz", name='gunzip_is_needed'):
        gunzip1 = gunzip(fastq)
        # I'd really like to view and print the components of gunzip1.outputs, or even understand when gunzip1.output
        # can be used as a convenient shortcut, or even debug what the expected key value should be. I'm getting KeyErrors before the step is complete.
        print("GUNZIPPED")

    with dsl.Condition(os.path.splitext(infile)[1] is not ".gz", name='is_gunzip_not_needed'):
        print("NOT GUNZIPPED")


if __name__ == "__main__":
    logger = get_root_logger(2)
    kfp.compiler.Compiler().compile(rnaseq_pipeline, 'pipeline.yaml')
```

### Expected result

<!-- What should the correct behavior be? -->

The desired behavior is either the proper way to create a function that can launch individual pipeline steps, checking outputs for certain desired behaviors in the path only. I'm trying to debug some very basic code, and I don't understand the desired `kfp` implementation semantics/syntax. Thanks for your help.

### Materials and reference

<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references.
--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8626/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8624
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8624/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8624/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8624/events
https://github.com/kubeflow/pipelines/issues/8624
1,507,111,121
I_kwDOB-71UM5Z1LDR
8,624
[frontend] brace-editor search module not imported or included in webpack
{ "login": "opalelement", "id": 18103988, "node_id": "MDQ6VXNlcjE4MTAzOTg4", "avatar_url": "https://avatars.githubusercontent.com/u/18103988?v=4", "gravatar_id": "", "url": "https://api.github.com/users/opalelement", "html_url": "https://github.com/opalelement", "followers_url": "https://api.github.com/users/opalelement/followers", "following_url": "https://api.github.com/users/opalelement/following{/other_user}", "gists_url": "https://api.github.com/users/opalelement/gists{/gist_id}", "starred_url": "https://api.github.com/users/opalelement/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/opalelement/subscriptions", "organizations_url": "https://api.github.com/users/opalelement/orgs", "repos_url": "https://api.github.com/users/opalelement/repos", "events_url": "https://api.github.com/users/opalelement/events{/privacy}", "received_events_url": "https://api.github.com/users/opalelement/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false }
[ { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[ "@Linchin You moved this issue to \"Needs More Info\" but I'm not sure what additional information would be necessary. What else is required to move forward with this?", "Pinging @jlyaoyuli and @chensun to see if I can get more context around the \"Needs More Info\" status", "@Linchin @jlyaoyuli @chensun I'm still not sure what more information is needed here. I've given the background for the issue, a link to one reference showing how to fix it (which can be corroborated as necessary in many other sources by searching for `webpack \"brace/ext/searchbox\"`), and I've included the steps to implement the fix in this project.\r\n\r\nBecause of this bug searching YAML editors in the UI is completely broken; this would be a very small change so it would be quick \"win\" towards improving user experience." ]
"2022-12-22T01:12:08"
"2023-08-10T15:37:11"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? * Minikube v1.28.0 * kubernetes 1.21.14 * Kustomize v3.2.0 * kubectl v1.26.0 * Manifests v1.6.1 * KFP version: dev_local from latest master (2.0.0-alpha6?) ### Steps to reproduce Open any page in UI which uses brace-editor (e.g. Pipelines -> [any pipeline] -> "YAML" tab) and press CTRL+F to search. No search box opens, and Javascript console logs the following (at least in Chrome; unsure of logs in other browser but search box still doesn't open in Edge or FF): ``` > GET http://192.168.0.202:8080/pipeline/ext-searchbox.js net::ERR_ABORTED 404 (Not Found) > Refused to execute script from 'http://192.168.0.202:8080/pipeline/ext-searchbox.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled. ``` Trying to load the referenced page gives `Cannot GET /pipeline/ext-searchbox.js`. ### Expected result The brace-editor search box should open. This is necessary as the view is buffered, meaning normal browser search functionality cannot find strings that are not currently visible (e.g. require scrolling to view). ### Materials and Reference This bug was reported for another library that uses brace-editor [here](https://github.com/fxmontigny/ng2-ace-editor/issues/53), and the suggested fix is to import the search module when the brace-editor is used. I've found three places where brace is used in the pipelines codebase: * frontend/src/components/DetailsTable.tsx * frontend/src/components/viewers/VisualizationCreator.tsx * frontend/src/pages/PipelineDetails.tsx I made the following change to each file as a test: ```diff import 'brace/ext/language_tools'; +import 'brace/ext/searchbox'; ``` After this, the JS console no longer logs an error, and the search box successfully appears: ![image](https://user-images.githubusercontent.com/18103988/209033165-78849e6f-bfbd-46ff-90b9-cc4dbe50260b.png) If this is an acceptable approach I can submit a pull request with those changes. 
--- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8624/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8605
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8605/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8605/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8605/events
https://github.com/kubeflow/pipelines/issues/8605
1,504,198,785
I_kwDOB-71UM5ZqECB
8,605
[bug] Max retries exceeded with connection refused
{ "login": "MatthewRalston", "id": 4308024, "node_id": "MDQ6VXNlcjQzMDgwMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4308024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatthewRalston", "html_url": "https://github.com/MatthewRalston", "followers_url": "https://api.github.com/users/MatthewRalston/followers", "following_url": "https://api.github.com/users/MatthewRalston/following{/other_user}", "gists_url": "https://api.github.com/users/MatthewRalston/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatthewRalston/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatthewRalston/subscriptions", "organizations_url": "https://api.github.com/users/MatthewRalston/orgs", "repos_url": "https://api.github.com/users/MatthewRalston/repos", "events_url": "https://api.github.com/users/MatthewRalston/events{/privacy}", "received_events_url": "https://api.github.com/users/MatthewRalston/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Update in status. I can verify the `istio-ingressgateway` internal address (1.2.3.4 here) but the external IP is still wonky.\r\n```bash\r\nistio-ingressgateway NodePort 1.2.3.4 <none> 1234:2345/TCP,80:987/TCP,443:789/TCP,5678:6789/TCP,91234:92234/TCP 15h\r\n\r\n```", "If I use the port described by `minikube service list -n istio-system` for `http2` and in the same host as `minikube ip`, I get a `400` error bad request with an HTML response and a div with class `dex-container`. I'm hoping I'm hitting the dex authentication suite I've created via Manifests.", "This looks to me it might be some minikube configuration issue, or it's possible there's not enough resource, were you able to list all the pods and see if any are not in a healthy state?\r\n\r\nPS: we don't officially support minikube.", "Yes, minikube is properly configured and I can run pipelines directly from the dashboard." ]
"2022-12-20T08:53:06"
"2023-01-17T13:15:06"
"2023-01-17T13:15:06"
NONE
null
### Environment <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? Minikube v1.28.0 kubernetes 1.22.2 Kustomize v3.2.0 kubectl v1.25.5 Manifests v1.6.1 ### Manifests were installed on a running minikube instance: ```bash # After clone.. while ! kustomize build example | minikube kubectl -- apply -f -; do echo "Retrying to apply resources"; sleep 10; done ``` ### Minikube started with compatible version of kubernetes and kustomize according to v1.6.1 Manifests `README.md` ``` minikube start --driver=docker --kubernetes-version=v1.22.2 --cpus=10 --memory=40gb --disk-size=50gb --container-runtime=containerd --wait-timeout=6m # Then verify that pods are running with minikube kubectl -- get pods -n kubeflow --all ``` * KFP version: dev_local ???? * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ```bash kfp 1.8.17 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 ``` ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ### Expected result <!-- What should the correct behavior be? --> The expected behavior is that the `kfp` Python client (v1.8.17) is able to connect with the following basic invocation in a Kubeflow Pipelines Python script, with `@dsl` declarations adorning the pipeline function called next. ```python kubeflow_host='192.168.111.222' kubeflow_port=443 client = kfp.Client(host="{0}:{1}/pipeline".format(kubeflow_host, kubeflow_port), namespace="kubeflow") client.create_run_from_pipeline_func(my_func, arguments={ ... some params }) ``` ### Materials and reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. 
--> #### Screenshot of the KFP version on the pipelines dashboard, which I am able to access with `minikube tunnel` ![Screenshot from 2022-12-19 21-24-32](https://user-images.githubusercontent.com/4308024/208567426-714cf92b-7192-41b2-a70b-332cc18ea45b.png) Screenshot showing the dev_local kfp version, running on localhost through `minikube tunnel` ```python py", line 788, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/home/matt/.pyenv/versions/miniconda3-4.7.12/lib/python3.7/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=443): Max retries exceeded with url: /pipeline/apis/v1beta1/experiments?filter=%7B%22predicates%22%3A+%5B%7B%22op%22%3A+1%2C+%22key%22%3A+%22name%22%2C+%22stringValue%22%3A+%22Default%22%7D%5D%7D&resource_reference_key.type=NAMESPACE&resource_reference_key.id=default (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fa5a794f050>: Failed to establish a new connection: [Errno 111] Connection refused')) ``` ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
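The connection error above typically means the client is pointing at an address where the KFP API is not actually exposed. A minimal workaround sketch, assuming a standard manifests install where the API is reachable through the `ml-pipeline-ui` service (the service name, namespace, and ports below are the usual defaults, not verified against this cluster):

```python
# First expose the UI/API locally (run in a separate shell):
#   kubectl port-forward svc/ml-pipeline-ui -n kubeflow 8080:80
def kfp_host_url(host: str = "localhost", port: int = 8080,
                 scheme: str = "http") -> str:
    """Build the base URL for kfp.Client; KFP serves its API under /pipeline."""
    return f"{scheme}://{host}:{port}/pipeline"

# With the port-forward running:
#   client = kfp.Client(host=kfp_host_url())
```

Pointing the client at `127.0.0.1:443` only works if something is actually terminating TLS and proxying to the API on that port, which a bare minikube install does not do.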
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8605/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8605/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8602
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8602/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8602/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8602/events
https://github.com/kubeflow/pipelines/issues/8602
1,503,522,406
I_kwDOB-71UM5Zne5m
8,602
[bug] component BigqueryQueryJobOp throws error "The KMS key does not contain a location" on Vertex AI
{ "login": "mattiasliljenzin", "id": 1239924, "node_id": "MDQ6VXNlcjEyMzk5MjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1239924?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mattiasliljenzin", "html_url": "https://github.com/mattiasliljenzin", "followers_url": "https://api.github.com/users/mattiasliljenzin/followers", "following_url": "https://api.github.com/users/mattiasliljenzin/following{/other_user}", "gists_url": "https://api.github.com/users/mattiasliljenzin/gists{/gist_id}", "starred_url": "https://api.github.com/users/mattiasliljenzin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mattiasliljenzin/subscriptions", "organizations_url": "https://api.github.com/users/mattiasliljenzin/orgs", "repos_url": "https://api.github.com/users/mattiasliljenzin/repos", "events_url": "https://api.github.com/users/mattiasliljenzin/events{/privacy}", "received_events_url": "https://api.github.com/users/mattiasliljenzin/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2022-12-19T20:08:29"
"2022-12-20T09:44:16"
"2022-12-20T09:44:16"
NONE
null
Code: ``` extract_output = BigqueryQueryJobOp( project=bq_project, location=bq_location, query=get_query() ) ``` Error message: > RuntimeError: The BigQuery job https://www.googleapis.com/bigquery/v2/projects/xxx/jobs/job_XXXXXXXYTxuWce7PjLzOHK1tEZ?location=EU failed. Error: {'errorResult': {'reason': 'invalid', 'message': 'The KMS key does not contain a location.'}, 'errors': [{'reason': 'invalid', 'message': 'The KMS key does not contain a location.'}], 'state': 'DONE'} Let me know if you need additional information.
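The BigQuery error suggests an encryption key was supplied without a full resource path: Cloud KMS keys are addressed as `projects/<project>/locations/<location>/keyRings/<ring>/cryptoKeys/<key>`, and the `location` segment is what the message reports as missing. A hedged sketch of a pre-flight check (the helper and regex are illustrative, not part of the component API):

```python
import re

# Full Cloud KMS key resource name, including the location segment that the
# BigQuery error reports as missing.
_KMS_KEY_RE = re.compile(
    r"^projects/[^/]+/locations/[^/]+/keyRings/[^/]+/cryptoKeys/[^/]+$")

def kms_key_has_location(key_name: str) -> bool:
    """Return True if key_name is a fully qualified KMS key resource name."""
    return bool(_KMS_KEY_RE.match(key_name))
```

Checking any `encryption_spec_key_name` passed to the pipeline against this format before submission would surface the misconfiguration at compile time rather than inside the BigQuery job.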
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8602/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8600
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8600/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8600/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8600/events
https://github.com/kubeflow/pipelines/issues/8600
1,503,480,284
I_kwDOB-71UM5ZnUnc
8,600
SDK: Drop support for Python 3.7 in Q2/Q3 2023
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-19T19:43:06"
"2023-08-28T07:42:33"
null
COLLABORATOR
null
Python 3.7 will reach its EOL on June 27, 2023: https://devguide.python.org/versions/#supported-versions We plan to drop Python 3.7 support in KFP SDK around the same time. Please feel free to comment or vote on this issue.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8600/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8599
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8599/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8599/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8599/events
https://github.com/kubeflow/pipelines/issues/8599
1,503,425,705
I_kwDOB-71UM5ZnHSp
8,599
[frontend] No graph to show with increasing number of pods
{ "login": "hkair", "id": 58126733, "node_id": "MDQ6VXNlcjU4MTI2NzMz", "avatar_url": "https://avatars.githubusercontent.com/u/58126733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hkair", "html_url": "https://github.com/hkair", "followers_url": "https://api.github.com/users/hkair/followers", "following_url": "https://api.github.com/users/hkair/following{/other_user}", "gists_url": "https://api.github.com/users/hkair/gists{/gist_id}", "starred_url": "https://api.github.com/users/hkair/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hkair/subscriptions", "organizations_url": "https://api.github.com/users/hkair/orgs", "repos_url": "https://api.github.com/users/hkair/repos", "events_url": "https://api.github.com/users/hkair/events{/privacy}", "received_events_url": "https://api.github.com/users/hkair/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false }
[ { "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false }, { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[ "This was a regression in the Pipeline frontend, it has been fixed in #8343, but has not been part of a release yet. If this feature is a must have, a custom image of the Pipeline frontend can be [built](https://github.com/kubeflow/pipelines/blob/master/developer_guide.md).\r\n\r\nSimilar issue for more context: https://github.com/kubeflow/pipelines/issues/7655", "Marking as Blocked. This will be resolved when the next KFP BE/FE releases.", "2.0.0-beta.0 has been [released](https://github.com/kubeflow/pipelines/releases/tag/2.0.0-beta.0). Please, upgrade the manifests and, if the issues will still be present, reopen this issue." ]
"2022-12-19T19:01:45"
"2023-02-08T23:58:41"
"2023-02-08T23:58:40"
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Cloud: AWS, EKS. * KFP version: Kubeflow - 1.6.1 KFP - 1.8.13 ### Steps to reproduce "No graph to show" error (even though the pipeline ran successfully) ![Screenshot 2022-12-19 at 2 11 38 PM](https://user-images.githubusercontent.com/58126733/208501754-ad9486f4-431e-47b1-800c-fe2f07db1ee7.png) ![Screenshot 2022-12-19 at 2 09 52 PM](https://user-images.githubusercontent.com/58126733/208501546-803b2fe8-cb9a-404b-b67e-10e2133d3049.png) ![Screenshot 2022-12-19 at 2 09 59 PM](https://user-images.githubusercontent.com/58126733/208501559-e7d458f1-4519-4e29-9283-fe9a8d944dca.png) - Currently, our pipeline is for our offline training architecture and kubeflow is running on top of AWS Elastic Kubernetes Service. - It seems the "No graph to show" problem occurs when too many pods are running for our pipeline. - The num_games parameter determines how many games we use to train our models. At num_games=500, we are able to successfully view the graphs as shown in the expected results section. - But beyond that, at num_games=750 and 1000, we are unable to see the pipeline graph and get a "No graph to show" message in the Kubeflow UI. ![image](https://user-images.githubusercontent.com/58126733/208497306-260a44c1-3723-4f82-bf76-3586cc338f9f.png) - This is our current kubeflow pipeline for our offline training at num_games=100. We are currently utilizing the ParallelFor method to parallelize and batch our preprocessing process. For each batch, **10 games are processed**, thus for num_games=100 there are 10 batches for preprocessing, and within each ParallelFor iteration a total of 3 components are run. - The number of pods is therefore linear in the num_games we are training on. With num_games = 100, there are 10 batches and in each batch there are 3 components/pods, so we can assume at least 30 pods have been initialized.
num_games = 500, 50x3 = 150 + 10 (constant pods) = 160 pods initialized num_games = 750, 75x3 = 225 + 10 (constant pods) = 235 pods initialized num_games = 1000, 100x3 = 300 + 10 (constant pods) = 310 pods initialized ### Expected result - The expected result should be to show the graph of the kubeflow pipeline - At a training session with num_games = 500, the graph is displayed correctly in the Kubeflow UI; past that, at 750 or 1000 games, no graph is shown. ![Screenshot 2022-12-19 at 1 31 33 PM](https://user-images.githubusercontent.com/58126733/208495666-221897f8-ee6e-42a1-ae1e-6ad8215fadc4.png) ![Screenshot 2022-12-19 at 2 10 09 PM](https://user-images.githubusercontent.com/58126733/208501623-7d6559d9-34f7-43dc-b31c-17c4e8e97f9e.png) ### Materials and Reference https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.dsl.html#kfp.dsl.ParallelFor Impacted by this bug? Give it a 👍.
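The pod arithmetic above follows a simple formula (one batch per 10 games, 3 component pods per batch, plus roughly 10 constant pods); a small sketch with the defaults taken from the description (the function name and parameters are illustrative):

```python
def estimated_pods(num_games: int, games_per_batch: int = 10,
                   components_per_batch: int = 3,
                   constant_pods: int = 10) -> int:
    """Estimate total pods: one ParallelFor batch per 10 games, 3 pods each."""
    batches = num_games // games_per_batch
    return batches * components_per_batch + constant_pods
```

This makes it easy to check where on the pod-count curve the UI stops rendering the graph, e.g. by bisecting num_games between the last working and first failing run.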
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8599/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8596
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8596/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8596/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8596/events
https://github.com/kubeflow/pipelines/issues/8596
1,501,840,996
I_kwDOB-71UM5ZhEZk
8,596
[sdk] How to apply PVCs to pipeline tasks when using KFP SDK v2
{ "login": "yshimizu37", "id": 80316060, "node_id": "MDQ6VXNlcjgwMzE2MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/80316060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yshimizu37", "html_url": "https://github.com/yshimizu37", "followers_url": "https://api.github.com/users/yshimizu37/followers", "following_url": "https://api.github.com/users/yshimizu37/following{/other_user}", "gists_url": "https://api.github.com/users/yshimizu37/gists{/gist_id}", "starred_url": "https://api.github.com/users/yshimizu37/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yshimizu37/subscriptions", "organizations_url": "https://api.github.com/users/yshimizu37/orgs", "repos_url": "https://api.github.com/users/yshimizu37/repos", "events_url": "https://api.github.com/users/yshimizu37/events{/privacy}", "received_events_url": "https://api.github.com/users/yshimizu37/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "@yshimizu37 PVC is not supported yet in SDK v2. It's on our roadmap to be supported once [platform-specific runtime support](https://docs.google.com/document/d/10Cx-B18V6gR35VOmTe8_8gB67srOFF_un7NN8qXP1T4/edit#heading=h.x9snb54sjlu9) is ready. (Join [kubeflow-discuss](https://groups.google.com/g/kubeflow-discuss) to gain access to the doc).\r\n\r\n/cc @connor-mccarthy ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.", "@yshimizu37, please take a look at the [`kfp-kubernetes`](https://kfp-kubernetes.readthedocs.io/) reference docs." ]
"2022-12-18T09:53:55"
"2023-08-28T15:34:57"
"2023-08-28T15:34:57"
NONE
null
### Environment * KFP version: I could not find out my KFP version. Not sure this is valuable information, but I am using Kubeflow v1.6 (manually installed). * KFP SDK version: 2.0.0b8 * All dependencies version: kfp 2.0.0b8 kfp-pipeline-spec 0.1.16 kfp-server-api 2.0.0a6 ### Steps to reproduce Here is my source code. ```python @dsl.component def data_preprocessing(): # do some data tasks @dsl.pipeline(name="AI Training Run") def ai_training_run( epochs: int=1, batch_size: int=64, learning_rate: float=0.001, dataset_pvc_name: str="dataset", ) -> Model: dataset_dir = "/mnt/dataset" # preprocessing dataset preprocess_data_task = data_preprocessing() preprocess_data_task.apply( onprem.mount_pvc(dataset_pvc_name, 'datavol', dataset_dir) ).set_display_name('Data preprocessing') ``` It compiled successfully when I was using KFP 1.8.17. ```sh $ python3 ai-training-run.py /opt/deepops/env/lib/python3.8/site-packages/kfp/compiler/compiler.py:79: UserWarning: V2_COMPATIBLE execution mode is at Beta quality. Some pipeline features may not work as expected. warnings.warn('V2_COMPATIBLE execution mode is at Beta quality.' ``` But in KFP 2.0.0b8, it doesn't work. ```sh $ python3 ai-training-run.py /opt/deepops/env/lib/python3.8/site-packages/kfp/deprecated/dsl/type_utils.py:28: FutureWarning: Module kfp.dsl.type_utils is deprecated and will be removed in KFP v2.0. Please use from kfp.components.types.type_utils instead. 
warnings.warn( Traceback (most recent call last): File "ai-training-run.py", line 58, in <module> def ai_training_run( File "/opt/deepops/env/lib/python3.8/site-packages/kfp/components/pipeline_context.py", line 67, in pipeline return component_factory.create_graph_component_from_func(func) File "/opt/deepops/env/lib/python3.8/site-packages/kfp/components/component_factory.py", line 501, in create_graph_component_from_func return graph_component.GraphComponent( File "/opt/deepops/env/lib/python3.8/site-packages/kfp/components/graph_component.py", line 56, in __init__ pipeline_outputs = pipeline_func(*args_list) File "ai-training-run.py", line 89, in ai_training_run preprocess_data_task.apply( AttributeError: 'PipelineTask' object has no attribute 'apply' ``` I know the 'PipelineTask' class no longer has an 'apply' method, but I am curious whether there are other ways to mount PVCs to pipeline tasks. ### Expected result Finding out how to mount PVCs to pipeline tasks. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8596/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8596/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8592
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8592/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8592/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8592/events
https://github.com/kubeflow/pipelines/issues/8592
1,500,850,560
I_kwDOB-71UM5ZdSmA
8,592
[sdk] TabularDatasetCreateOp Fails to Validate Labels because they are a string not dict
{ "login": "ccdefreitas", "id": 117761290, "node_id": "U_kgDOBwTlCg", "avatar_url": "https://avatars.githubusercontent.com/u/117761290?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccdefreitas", "html_url": "https://github.com/ccdefreitas", "followers_url": "https://api.github.com/users/ccdefreitas/followers", "following_url": "https://api.github.com/users/ccdefreitas/following{/other_user}", "gists_url": "https://api.github.com/users/ccdefreitas/gists{/gist_id}", "starred_url": "https://api.github.com/users/ccdefreitas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ccdefreitas/subscriptions", "organizations_url": "https://api.github.com/users/ccdefreitas/orgs", "repos_url": "https://api.github.com/users/ccdefreitas/repos", "events_url": "https://api.github.com/users/ccdefreitas/events{/privacy}", "received_events_url": "https://api.github.com/users/ccdefreitas/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false }
[ { "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-16T20:53:12"
"2023-08-29T07:41:51"
null
NONE
null
TabularDatasetCreateOp fails to validate labels because they arrive as a string, not a dict. This fails whether or not the labels are passed in. The underlying function `TabularDataset.create` works without issues. ### Environment * KFP version: <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> Vertex AI Pipelines * KFP SDK version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> Vertex AI Pipelines * All dependencies version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> google-cloud-pipeline-components==1.0.30 kfp==1.8.17 kfp-pipeline-spec==0.1.16 kfp-server-api==1.8.5 ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ```python @dsl.pipeline(name='test') def my_pipeline(): dataset_create_op = TabularDatasetCreateOp( project=PROJECT_ID, display_name="beans", bq_source = f"bq://aju-dev-demos.beans.beans1", labels = {'test': 'hi'} # Same result if this line is commented out ) from kfp.v2 import compiler compiler.Compiler().compile( pipeline_func=my_pipeline, package_path=TEMPLATE_PATH ) job = aip.PipelineJob( display_name=DISPLAY_NAME, template_path=TEMPLATE_PATH, pipeline_root=PIPELINE_ROOT, ) job.run() ``` Job fails with the following logs: ```console ... --method.labels {"test": "hi"} ... 
File "/opt/python3.7/lib/python3.7/site-packages/google/cloud/aiplatform/datasets/tabular_dataset.py", line 122, in create utils.validate_labels(labels) File "/opt/python3.7/lib/python3.7/site-packages/google/cloud/aiplatform/utils/__init__.py", line 242, in validate_labels for k, v in labels.items(): AttributeError: 'str' object has no attribute 'items' ``` ### Expected result <!-- What should the correct behavior be? --> The dataset should be created. This command works without any issue so it seems like the issue is in `google-cloud-pipeline-components` not the underlying datasets library. ```python from google.cloud.aiplatform.datasets.tabular_dataset import TabularDataset TabularDataset.create( display_name='beans', bq_source="bq://aju-dev-demos.beans.beans1", labels = {'test': 'hi'} ) ``` ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8592/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8592/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8591
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8591/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8591/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8591/events
https://github.com/kubeflow/pipelines/issues/8591
1,500,771,076
I_kwDOB-71UM5Zc_ME
8,591
[components] ModelBatchPredictOp params PipelineParams not iterable
{ "login": "telpirion", "id": 7399197, "node_id": "MDQ6VXNlcjczOTkxOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7399197?v=4", "gravatar_id": "", "url": "https://api.github.com/users/telpirion", "html_url": "https://github.com/telpirion", "followers_url": "https://api.github.com/users/telpirion/followers", "following_url": "https://api.github.com/users/telpirion/following{/other_user}", "gists_url": "https://api.github.com/users/telpirion/gists{/gist_id}", "starred_url": "https://api.github.com/users/telpirion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/telpirion/subscriptions", "organizations_url": "https://api.github.com/users/telpirion/orgs", "repos_url": "https://api.github.com/users/telpirion/repos", "events_url": "https://api.github.com/users/telpirion/events{/privacy}", "received_events_url": "https://api.github.com/users/telpirion/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false }
[ { "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false } ]
null
[ "@telpirion, can you verify whether you get the same result using the latest versions of KFP SDK v1 and GCPC v1? If so, can you provide a minimal reproducible example?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-16T19:43:50"
"2023-08-29T07:41:52"
null
NONE
null
## Environment * KFP version: Vertex Pipelines * KFP SDK version: v1.8.12 ### Steps to reproduce + Pass the output of another operation to the ModelBatchPredictOp, subscript into the `output` property + Compile the pipeline ### Expected result Pipeline compiles ### Actual result Receive error "PipelineParams object is not iterable" ### Materials and Reference Related: #2206 --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8591/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8590
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8590/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8590/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8590/events
https://github.com/kubeflow/pipelines/issues/8590
1,500,760,199
I_kwDOB-71UM5Zc8iH
8,590
docs(google-cloud-components): param names for ModelBatchPredictOp are wrong
{ "login": "telpirion", "id": 7399197, "node_id": "MDQ6VXNlcjczOTkxOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7399197?v=4", "gravatar_id": "", "url": "https://api.github.com/users/telpirion", "html_url": "https://github.com/telpirion", "followers_url": "https://api.github.com/users/telpirion/followers", "following_url": "https://api.github.com/users/telpirion/following{/other_user}", "gists_url": "https://api.github.com/users/telpirion/gists{/gist_id}", "starred_url": "https://api.github.com/users/telpirion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/telpirion/subscriptions", "organizations_url": "https://api.github.com/users/telpirion/orgs", "repos_url": "https://api.github.com/users/telpirion/repos", "events_url": "https://api.github.com/users/telpirion/events{/privacy}", "received_events_url": "https://api.github.com/users/telpirion/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false }
[ { "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-16T19:32:18"
"2023-08-29T07:41:54"
null
NONE
null
## Environment <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? Vertex AI Workbench * KFP version: Vertex Pipelines * KFP SDK version: 1.8.12 ## Repro When trying to create a `ModelBatchPredictOp` in my pipeline, I get a `gcs_source_uris` `KeyError`. After running `help(ModelBatchPredictOp)`, I see that the documentation for this component is incorrect. [Incorrect docs page](https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.30/google_cloud_pipeline_components.v1.batch_predict_job.html#google_cloud_pipeline_components.v1.batch_predict_job.ModelBatchPredictOp) Errors: - `gcs_source_uris` should be `gcs_source` - `gcs_destination_output_uri_prefix` should be `gcs_destination_prefix`
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8590/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8586
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8586/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8586/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8586/events
https://github.com/kubeflow/pipelines/issues/8586
1,500,022,962
I_kwDOB-71UM5ZaIiy
8,586
[sdk] Can't compile component that uses PipelineTaskFinalStatus: Enum value for ParameterTypeEnum must be an int, but got <class 'NoneType'> None.
{ "login": "suned", "id": 1228354, "node_id": "MDQ6VXNlcjEyMjgzNTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1228354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suned", "html_url": "https://github.com/suned", "followers_url": "https://api.github.com/users/suned/followers", "following_url": "https://api.github.com/users/suned/following{/other_user}", "gists_url": "https://api.github.com/users/suned/gists{/gist_id}", "starred_url": "https://api.github.com/users/suned/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suned/subscriptions", "organizations_url": "https://api.github.com/users/suned/orgs", "repos_url": "https://api.github.com/users/suned/repos", "events_url": "https://api.github.com/users/suned/events{/privacy}", "received_events_url": "https://api.github.com/users/suned/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2022-12-16T11:10:05"
"2022-12-19T18:48:48"
"2022-12-19T18:48:48"
CONTRIBUTOR
null
### Environment * KFP version: 2.0.0b8 <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP SDK version: 2.0.0b8 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: 2.0.0b8 <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Steps to reproduce When compiling a component that uses `PipelineTaskFinalStatus`, only the given parameter type is special cased during type checking, leading to an exception: ```python from kfp.dsl import component, PipelineTaskFinalStatus from kfp.compiler import Compiler @component def test(status: PipelineTaskFinalStatus): pass Compiler().compile(test, package_path='test.yaml') # TypeError: Enum value for ParameterTypeEnum must be an int, but got <class 'NoneType'> None. ``` ### Expected result It should be possible to compile components that use `PipelineTaskFinalStatus` to yaml. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8586/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8581
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8581/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8581/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8581/events
https://github.com/kubeflow/pipelines/issues/8581
1,498,781,827
I_kwDOB-71UM5ZVZiD
8,581
[feature] User defined Init and Wait container resources in Pipeline Spec
{ "login": "ameya-parab", "id": 75458630, "node_id": "MDQ6VXNlcjc1NDU4NjMw", "avatar_url": "https://avatars.githubusercontent.com/u/75458630?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ameya-parab", "html_url": "https://github.com/ameya-parab", "followers_url": "https://api.github.com/users/ameya-parab/followers", "following_url": "https://api.github.com/users/ameya-parab/following{/other_user}", "gists_url": "https://api.github.com/users/ameya-parab/gists{/gist_id}", "starred_url": "https://api.github.com/users/ameya-parab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ameya-parab/subscriptions", "organizations_url": "https://api.github.com/users/ameya-parab/orgs", "repos_url": "https://api.github.com/users/ameya-parab/repos", "events_url": "https://api.github.com/users/ameya-parab/events{/privacy}", "received_events_url": "https://api.github.com/users/ameya-parab/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-15T16:58:46"
"2023-08-29T07:41:56"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? We’re looking into ways to provide resource requests and limits for each pipeline’s init and wait containers for efficient compute utilization. ### What is the use case or pain point? I’m aware that Kubeflow currently allows for the modification of ‘init’ and ‘wait’ container limits via a global configmap (`workflow-controller-configmap`), but we’d prefer to provide these resources for each pipeline separately because each pipeline may have different resource requirements when handling artifacts. It appears that ‘Argo workflows’ allows for resource modification via the [PodSpecPatch](https://github.com/argoproj/argo-workflows/discussions/9203), but Kubeflow pipelines do not. ### Is there a workaround currently? Currently, we rely on global configuration parameters in the `workflow-controller-configmap` to tune the 'init' and 'wait' container resource requirements, but these do not scale well across pipelines. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8581/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8581/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8580
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8580/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8580/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8580/events
https://github.com/kubeflow/pipelines/issues/8580
1,498,434,779
I_kwDOB-71UM5ZUEzb
8,580
[bug] component VertexNotificationEmailOp fails when there are 2-level chained conditions
{ "login": "rikardocorp", "id": 2100779, "node_id": "MDQ6VXNlcjIxMDA3Nzk=", "avatar_url": "https://avatars.githubusercontent.com/u/2100779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rikardocorp", "html_url": "https://github.com/rikardocorp", "followers_url": "https://api.github.com/users/rikardocorp/followers", "following_url": "https://api.github.com/users/rikardocorp/following{/other_user}", "gists_url": "https://api.github.com/users/rikardocorp/gists{/gist_id}", "starred_url": "https://api.github.com/users/rikardocorp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rikardocorp/subscriptions", "organizations_url": "https://api.github.com/users/rikardocorp/orgs", "repos_url": "https://api.github.com/users/rikardocorp/repos", "events_url": "https://api.github.com/users/rikardocorp/events{/privacy}", "received_events_url": "https://api.github.com/users/rikardocorp/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false }
[ { "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @chongyouquan ", "Got the same issue. We should have a fix for Vertex AI on GCP. The fix will need 3 weeks to be deployed.\r\nhttps://issuetracker.google.com/issues/274611724", "On my side, it is now working in europe-west4. Look like GCP team fully deployed the fix", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-15T13:26:04"
"2023-08-29T07:41:58"
null
NONE
null
Message Error: <_InactiveRpcError of RPC that terminated with: status = StatusCode.INTERNAL details = "Internal error encountered." debug_error_string = "{"created":"@1671108949.152751032","description":"Error received from peer ipv4:142.250.125.95:443","file":"src/core/lib/surface/call.cc","file_line":966,"grpc_message":"Internal error encountered.","grpc_status":13}" > The above exception was the direct cause of the following exception:
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8580/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8580/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8578
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8578/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8578/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8578/events
https://github.com/kubeflow/pipelines/issues/8578
1,497,807,962
I_kwDOB-71UM5ZRrxa
8,578
[bug] ValueError: Cannot use IfPresentPlaceholder within ConcatPlaceholde
{ "login": "divjyotsinghanzx", "id": 117130589, "node_id": "U_kgDOBvtFXQ", "avatar_url": "https://avatars.githubusercontent.com/u/117130589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/divjyotsinghanzx", "html_url": "https://github.com/divjyotsinghanzx", "followers_url": "https://api.github.com/users/divjyotsinghanzx/followers", "following_url": "https://api.github.com/users/divjyotsinghanzx/following{/other_user}", "gists_url": "https://api.github.com/users/divjyotsinghanzx/gists{/gist_id}", "starred_url": "https://api.github.com/users/divjyotsinghanzx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/divjyotsinghanzx/subscriptions", "organizations_url": "https://api.github.com/users/divjyotsinghanzx/orgs", "repos_url": "https://api.github.com/users/divjyotsinghanzx/repos", "events_url": "https://api.github.com/users/divjyotsinghanzx/events{/privacy}", "received_events_url": "https://api.github.com/users/divjyotsinghanzx/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "@divjyotsinghanzx, `google-cloud-pipeline-components` is not yet compatible with `kfp==2.x.x` yet. This will work as expected once `google-cloud-pipeline-components` supports KFP v2.\r\n\r\ncc @chongyouquan " ]
"2022-12-15T05:20:20"
"2022-12-15T23:52:27"
"2022-12-15T23:52:27"
NONE
null
### Environment <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? : Creating Vertex AI pipeline, however the bug is while importing `google_cloud_pipeline_components` * KFP version: `2.0.0b8` ### Steps to reproduce Executing the following: `from google_cloud_pipeline_components.experimental.custom_job.utils import create_custom_training_job_op_from_component` produces following error: ``` File ~/pipelines/training_pipeline.py:4 ----> 4 from google_cloud_pipeline_components.experimental.custom_job.utils import ( 5 create_custom_training_job_op_from_component, 6 ) File ~/.venv/lib/python3.9/site-packages/google_cloud_pipeline_components/experimental/custom_job/utils.py:23 20 import tempfile 21 from typing import Callable, Dict, Optional, Sequence ---> 23 from google_cloud_pipeline_components.aiplatform import utils 24 from kfp import components 25 from kfp.components import structures File ~/.venv/lib/python3.9/site-packages/google_cloud_pipeline_components/aiplatform/__init__.py:171 163 ModelDeployOp = load_component_from_file( 164 os.path.join( 165 os.path.dirname(__file__), 'endpoint/deploy_model/component.yaml')) 167 ModelUndeployOp = load_component_from_file( 168 os.path.join( 169 os.path.dirname(__file__), 'endpoint/undeploy_model/component.yaml')) --> 171 ModelBatchPredictOp = load_component_from_file( 172 os.path.join(os.path.dirname(__file__), 'batch_predict_job/component.yaml')) 174 ModelUploadOp = load_component_from_file( 175 os.path.join( 176 os.path.dirname(__file__), 'model/upload_model/component.yaml')) 178 EndpointCreateOp = load_component_from_file( 179 os.path.join( 180 os.path.dirname(__file__), 'endpoint/create_endpoint/component.yaml')) File ~/.venv/lib/python3.9/site-packages/kfp/components/yaml_component.py:91, in load_component_from_file(file_path) 75 """Loads a component from a file. 76 77 Args: (...) 
88 components.load_component_from_file('~/path/to/pipeline.yaml') 89 """ 90 with open(file_path, 'r') as component_stream: ---> 91 return load_component_from_text(component_stream.read()) File ~/.venv/lib/python3.9/site-packages/kfp/components/yaml_component.py:70, in load_component_from_text(text) 60 def load_component_from_text(text: str) -> YamlComponent: 61 """Loads a component from text. 62 63 Args: (...) 67 Component loaded from YAML. 68 """ 69 return YamlComponent( ---> 70 component_spec=structures.ComponentSpec.load_from_component_yaml(text), 71 component_yaml=text) File ~/.venv/lib/python3.9/site-packages/kfp/components/structures.py:824, in ComponentSpec.load_from_component_yaml(cls, component_yaml) 821 if is_v1: 822 v1_component = v1_components._load_component_spec_from_component_text( 823 component_yaml) --> 824 return cls.from_v1_component_spec(v1_component) 825 else: 826 return ComponentSpec.from_pipeline_spec_dict(json_component) File ~/.venv/lib/python3.9/site-packages/kfp/components/structures.py:607, in ComponentSpec.from_v1_component_spec(cls, v1_component_spec) 601 container = component_dict['implementation']['container'] 602 command = [ 603 placeholders.maybe_convert_v1_yaml_placeholder_to_v2_placeholder( 604 command, component_dict=component_dict) 605 for command in container.get('command', []) 606 ] --> 607 args = [ 608 placeholders.maybe_convert_v1_yaml_placeholder_to_v2_placeholder( 609 command, component_dict=component_dict) 610 for command in container.get('args', []) 611 ] 612 env = { 613 key: 614 placeholders.maybe_convert_v1_yaml_placeholder_to_v2_placeholder( 615 command, component_dict=component_dict) 616 for key, command in container.get('env', {}).items() 617 } 618 container_spec = ContainerSpecImplementation.from_container_dict({ 619 'image': container['image'], 620 'command': command, 621 'args': args, 622 'env': env 623 }) File ~/.venv/lib/python3.9/site-packages/kfp/components/structures.py:608, in <listcomp>(.0) 601 container 
= component_dict['implementation']['container'] 602 command = [ 603 placeholders.maybe_convert_v1_yaml_placeholder_to_v2_placeholder( 604 command, component_dict=component_dict) 605 for command in container.get('command', []) 606 ] 607 args = [ --> 608 placeholders.maybe_convert_v1_yaml_placeholder_to_v2_placeholder( 609 command, component_dict=component_dict) 610 for command in container.get('args', []) 611 ] 612 env = { 613 key: 614 placeholders.maybe_convert_v1_yaml_placeholder_to_v2_placeholder( 615 command, component_dict=component_dict) 616 for key, command in container.get('env', {}).items() 617 } 618 container_spec = ContainerSpecImplementation.from_container_dict({ 619 'image': container['image'], ... 223 ) 225 # check that there is no illegal state found recursively 226 if isinstance(self.then, ConcatPlaceholder): ValueError: Cannot use IfPresentPlaceholder within ConcatPlaceholder when `then` and `else_` arguments to IfPresentPlaceholder are lists. Please use a single element for `then` and `else_` only. ``` ### Materials and reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8578/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8578/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8570
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8570/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8570/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8570/events
https://github.com/kubeflow/pipelines/issues/8570
1,494,680,592
I_kwDOB-71UM5ZFwQQ
8,570
[bug] KFP V2 does not support Volumes
{ "login": "StefanoFioravanzo", "id": 3354305, "node_id": "MDQ6VXNlcjMzNTQzMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/3354305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StefanoFioravanzo", "html_url": "https://github.com/StefanoFioravanzo", "followers_url": "https://api.github.com/users/StefanoFioravanzo/followers", "following_url": "https://api.github.com/users/StefanoFioravanzo/following{/other_user}", "gists_url": "https://api.github.com/users/StefanoFioravanzo/gists{/gist_id}", "starred_url": "https://api.github.com/users/StefanoFioravanzo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StefanoFioravanzo/subscriptions", "organizations_url": "https://api.github.com/users/StefanoFioravanzo/orgs", "repos_url": "https://api.github.com/users/StefanoFioravanzo/repos", "events_url": "https://api.github.com/users/StefanoFioravanzo/events{/privacy}", "received_events_url": "https://api.github.com/users/StefanoFioravanzo/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks, @StefanoFioravanzo. This is on our roadmap for before June 2023.\r\n\r\nWhile volumes are a general concept/feature, the implementation is typically platform-specific. For example, you'd configure a volume on Kubernetes differently than you would on GCP. KFP v2, however, compiles pipelines to a platform-agnostic pipeline representation protocol: PipelineSpec IR. \r\n\r\nFor this reason, we will be implementing support for volumes as a [platform-specific feature](https://docs.google.com/document/d/10Cx-B18V6gR35VOmTe8_8gB67srOFF_un7NN8qXP1T4/edit#heading=h.x9snb54sjlu9) to make support across platforms possible while keeping PipelineSpec IR decoupled from the executing platform.", "@connor-mccarthy Thanks for your quick reply! \r\n\r\nI wasn't aware of the platform-specific API proposal - it looks exciting and has a flexible solution to support different runtimes. \r\n\r\nI wonder how you define the success criteria for volumes support on Kubernetes. Would it make sense to define a list of Kubernetes volume-related capabilities the SDK v1 offers so we have a reference to claim that v2 offers feature parity?", "Thanks, @StefanoFioravanzo! That's exactly the plan. We're striving to ensure users can author pipelines in v2 that are functionally equivalent to those they could author in v1 using Kubernetes volume features ([relevant docs](https://www.kubeflow.org/docs/components/pipelines/v1/sdk/manipulate-resources/)).", "@connor-mccarthy That's fantastic! Cannot wait to see these updates come to v2 then and looking forward to more details", "> Thanks, @StefanoFioravanzo. This is on our roadmap for before June 2023.\r\n\r\nHi folks! Is there any update here? I was wanting to use ResourceOp today and saw it was deprecated for v2, and not sure if I even have any options. Are most people still using v1?", "The general `ResourceOp` is indeed removed in v2, but the KFP does support Volumes and Secrets via the `kfp-kubernetes` extension library. 
Please see the [Platform-specific features KFP SDK docs](https://www.kubeflow.org/docs/components/pipelines/v2/platform-specific-features/) and the [`kfp-kubernetes` reference documentation](https://kfp-kubernetes.readthedocs.io/en/kfp-kubernetes-1.0.0/).", "Could we please keep the issue open and scope it to be about resourceOp for custom resource definitions? There is still no support or design (that I can find) to support that.", "The original issue is primarily concerned with Volume support. Please consider opening another issue about `ResourceOp` support.", "Done https://github.com/kubeflow/pipelines/issues/9703 thank you!" ]
"2022-12-13T16:47:07"
"2023-07-05T17:37:25"
"2023-07-05T17:07:47"
MEMBER
null
### TL;DR Kubeflow Pipelines V2 does not support V1 `VolumeOp` and `PipelineVolume` to manipulate volume objects during the run and mount them on pipeline steps. Kubeflow Pipelines V2 doesn't seem to have a roadmap commitment to support volumes before GA in June 2023. ### Context Kubeflow Pipelines V1 provides the [`ResourceOp`](https://www.kubeflow.org/docs/components/pipelines/v1/sdk/manipulate-resources/#resourceop) construct to define arbitrary K8s resources as a pipeline step. [`VolumeOp`](https://www.kubeflow.org/docs/components/pipelines/v1/sdk/manipulate-resources/#volumeop) is an extension of `ResourceOp` and allows provisioning and mounting PVCs. ### Problem Kubeflow Pipelines V2 gets rid of those constructs (There is no `ResourceOp` or `VolumeOp` among the public attributes and objects of the V2 DSL https://github.com/kubeflow/pipelines/blob/4bb57e6723b7a5c2eb685536e5a293aea87bd3a1/sdk/python/kfp/dsl/__init__.py#L17-L47). Not having a `ResourceOp` is ok since one can emulate its behavior with a custom component, but KFP execution engine seems to miss the ability to declare a volume mount in the step's Pod in the first place. We are concerned that adding volume support may not just be a matter of extending the V2 DSL because of how the new execution model is designed. In the [Kubeflow Pipelines (KFP) v2 System Design](https://docs.google.com/document/d/1fHU29oScMEKPttDA1Th1ibImAKsFVVt2Ynr4ZME05i0) document a paragraph states the following > In this design, we support gcs, s3, minio object stores using their clients. However, a caveat is that the approach doesn't scale to support more cloud providers or custom solutions. After Kubernetes standardizes on CSI, I'd hope there will be more momentum on ReadWriteMany volume. Especially, volume drivers backed by object stores like csi-gcs, csi-s3 and azure-blob-csi may become a good fit as pipeline storages. We will continuously evaluate the space and consider supporting volumes as an alternative later. 
As stated in the comments, K8s has standardized on CSI since January 2019. We haven't seen other comments or design proposals targeting this missing functionality. ### Proposed Next Steps This is a fundamental breaking change from KFP V1 and may not find a straightforward resolution for many people relying on volumes in their production pipelines. We propose first-class support for volumes as a non-negotiable feature that needs to be part of KFP V2 before it becomes GA. We would love to hear some feedback from the Google folks who are contributing to this design and discuss together a way forward. We would be happy to step in and help solve this issue. @zijianjoy @james-jwu @chensun @connor-mccarthy @elikatsis /area backend /area sdk Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8570/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8570/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8565
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8565/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8565/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8565/events
https://github.com/kubeflow/pipelines/issues/8565
1,492,396,464
I_kwDOB-71UM5Y9Cmw
8,565
[backend] Cache Server referencing the wrong file in secret webhook-server-tls
{ "login": "ReggieCarey", "id": 10270182, "node_id": "MDQ6VXNlcjEwMjcwMTgy", "avatar_url": "https://avatars.githubusercontent.com/u/10270182?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ReggieCarey", "html_url": "https://github.com/ReggieCarey", "followers_url": "https://api.github.com/users/ReggieCarey/followers", "following_url": "https://api.github.com/users/ReggieCarey/following{/other_user}", "gists_url": "https://api.github.com/users/ReggieCarey/gists{/gist_id}", "starred_url": "https://api.github.com/users/ReggieCarey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ReggieCarey/subscriptions", "organizations_url": "https://api.github.com/users/ReggieCarey/orgs", "repos_url": "https://api.github.com/users/ReggieCarey/repos", "events_url": "https://api.github.com/users/ReggieCarey/events{/privacy}", "received_events_url": "https://api.github.com/users/ReggieCarey/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "NOTE: I deployed Kubeflow v1.6.0 and this problem did not appear. The problem appears to be a regression due to the updated version (in v.1.6.1) of the image used in CacheServer deployment.", "Any update on this bug? Is the tls.crt the file that that cache-server is looking for?", "I was affected by this as well.\r\nMy successful temporary workaround was to manually update the image version in the `cache-server` deployment to: `gcr.io/ml-pipeline/cache-server:2.0.0-beta.0 `. Now caching seems to work again fine.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-12T17:33:32"
"2023-08-29T07:41:59"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> KFP was deployed via Manifests with Kubeflow v1.6.1 onto K8S 1.24.8 on baremetal with Ubuntu 22.04 host OS. * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> See the below problem description for info on what images were used * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> See the below problem description. ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ``` git clone https://github.com/kubeflow/manifests.git pushd manifests git checkout tags/v1.6.1 export CD_REGISTRATION_FLOW=true while ! ../kustomize build example | kubectl $APPLY -f -; do echo "Retrying to $APPLY resources"; sleep 10; done git switch - ``` CacheServer (Server Pod) fails to execute properly. See the logs: ``` 2022/12/12 17:19:41 Initing client manager.... 
2022/12/12 17:19:41 Database created 2022/12/12 17:19:42 execution_caches 2022/12/12 17:19:42 open /etc/webhook/certs/cert.pem: no such file or directory ``` The file in question should be served up via a volume mount: ``` spec: containers: - name: server image: gcr.io/ml-pipeline/cache-server:2.0.0-alpha.5 args: - '--db_driver=$(DBCONFIG_DRIVER)' - '--db_host=$(DBCONFIG_HOST_NAME)' - '--db_port=$(DBCONFIG_PORT)' - '--db_name=$(DBCONFIG_DB_NAME)' - '--db_user=$(DBCONFIG_USER)' - '--db_password=$(DBCONFIG_PASSWORD)' - '--namespace_to_watch=$(NAMESPACE_TO_WATCH)' volumeMounts: - name: webhook-tls-certs readOnly: true mountPath: /etc/webhook/certs volumes: - name: webhook-tls-certs secret: secretName: webhook-server-tls defaultMode: 420 ``` Secret webhook-server-tls: ``` apiVersion: v1 kind: Secret metadata: name: webhook-server-tls namespace: kubeflow type: kubernetes.io/tls data: ca.crt: <hidden> tls.crt: <hidden> tls.key: <hidden> ``` As you can see, there is no `cert.pem` file. ### Expected result <!-- What should the correct behavior be? --> The correct behavior should be that the Cache Server reads the TLS certificate files that the mounted secret actually provides. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
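A small, self-contained sketch of the mismatch: Kubernetes mounts a `kubernetes.io/tls` secret as one file per data key, so the `cert.pem` path from the log (the filename comes from the error above; the server's actual config flags are not shown in this issue) can never exist under that mount:

```python
import os
import tempfile

# Simulate mounting the webhook-server-tls secret: each data key becomes a file.
secret_data = {'ca.crt': b'<ca>', 'tls.crt': b'<cert>', 'tls.key': b'<key>'}
mount_path = tempfile.mkdtemp()  # stands in for /etc/webhook/certs
for key, value in secret_data.items():
    with open(os.path.join(mount_path, key), 'wb') as f:
        f.write(value)

# The path the cache-server log says it tries to open.
print(os.path.exists(os.path.join(mount_path, 'cert.pem')))  # False
# The files that are actually present.
print(sorted(os.listdir(mount_path)))  # ['ca.crt', 'tls.crt', 'tls.key']
```

Either the server must be pointed at `tls.crt`/`tls.key`, or the secret must carry a `cert.pem` key, for the lookup to succeed.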
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8565/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8565/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8564
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8564/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8564/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8564/events
https://github.com/kubeflow/pipelines/issues/8564
1,490,891,419
I_kwDOB-71UM5Y3TKb
8,564
[New feature] Cannot use the output_loop_item as a display name for a task.
{ "login": "MinhManPham", "id": 65837910, "node_id": "MDQ6VXNlcjY1ODM3OTEw", "avatar_url": "https://avatars.githubusercontent.com/u/65837910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MinhManPham", "html_url": "https://github.com/MinhManPham", "followers_url": "https://api.github.com/users/MinhManPham/followers", "following_url": "https://api.github.com/users/MinhManPham/following{/other_user}", "gists_url": "https://api.github.com/users/MinhManPham/gists{/gist_id}", "starred_url": "https://api.github.com/users/MinhManPham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MinhManPham/subscriptions", "organizations_url": "https://api.github.com/users/MinhManPham/orgs", "repos_url": "https://api.github.com/users/MinhManPham/repos", "events_url": "https://api.github.com/users/MinhManPham/events{/privacy}", "received_events_url": "https://api.github.com/users/MinhManPham/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This is currently by design; `set_display_name` doesn't support runtime values at this moment.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-12T04:05:52"
"2023-08-29T07:42:01"
null
NONE
null
### Environment * KFP version: kfp==1.8.16 * KFP SDK version: kfp 1.8.16 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 ### Steps to reproduce ``` def pipeline(countries, start, end, model_types): get_countriest_task = get_countriest_op(countries=countries) with dsl.ParallelFor(get_countriest_task.output) as country_code: download_data_task = download_data_op( country_code=country_code, start_date=start, end_date=end ).set_display_name(str(country_code)) ``` ### Expected result I expect that in the UI, each component's name corresponds to the `country_code`. ### Materials and reference Code for download_data_op: ``` def get_countriest(countries: str) -> str: import json countries = countries.split(",") return json.dumps(countries) ``` ### Actual result <!-- Please include labels below by uncommenting them to help us better triage issues --> ![image](https://user-images.githubusercontent.com/65837910/206959170-00c14224-205f-4c01-843c-f2286160a434.png) --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
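A pure-Python sketch (no KFP required; the placeholder class and its string form below are illustrative, not the SDK's real internals) of why every task shows the same name: the loop variable is a compile-time placeholder object, and `set_display_name` bakes whatever string it receives into the compiled spec at definition time:

```python
class LoopArgumentPlaceholder:
    """Stands in for the object dsl.ParallelFor yields at *definition* time.
    Its concrete value only exists at pipeline runtime."""
    def __str__(self):
        # No concrete value exists yet, so rendering yields placeholder text.
        return '{{pipelineparam:op=get-countriest;name=output}}'

def set_display_name(name: str) -> str:
    # Display names become literal strings in the compiled pipeline spec.
    return name

country_code = LoopArgumentPlaceholder()
# str() runs while the pipeline is being compiled, so every iteration gets
# the same literal placeholder text instead of a per-country name.
print(set_display_name(str(country_code)))
```

This is why the maintainers describe it as by design: the display name is fixed at compile time, before any per-iteration value exists.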
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8564/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8558
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8558/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8558/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8558/events
https://github.com/kubeflow/pipelines/issues/8558
1,486,627,621
I_kwDOB-71UM5YnCMl
8,558
[sdk] Component parameters with a default value of `None` are incorrectly interpreted as required when compiling to yaml
{ "login": "suned", "id": 1228354, "node_id": "MDQ6VXNlcjEyMjgzNTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1228354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suned", "html_url": "https://github.com/suned", "followers_url": "https://api.github.com/users/suned/followers", "following_url": "https://api.github.com/users/suned/following{/other_user}", "gists_url": "https://api.github.com/users/suned/gists{/gist_id}", "starred_url": "https://api.github.com/users/suned/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suned/subscriptions", "organizations_url": "https://api.github.com/users/suned/orgs", "repos_url": "https://api.github.com/users/suned/repos", "events_url": "https://api.github.com/users/suned/events{/privacy}", "received_events_url": "https://api.github.com/users/suned/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @connor-mccarthy ", "Hi, @suned. https://github.com/kubeflow/pipelines/pull/8572 should resolve this issue, I believe." ]
"2022-12-09T11:55:22"
"2023-03-21T17:48:33"
"2023-03-21T17:48:32"
CONTRIBUTOR
null
### Environment * KFP version: 2.0.0b8 <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP SDK version: 2.0.0b8 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: * kfp 2.0.0b8 * kfp-pipeline-spec 0.1.16 * kfp-server-api 2.0.0a6 <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Steps to reproduce Example: ```python from typing import Optional from kfp.dsl import component from kfp.compiler import Compiler @component def example(optional_param: Optional[str] = None): pass Compiler().compile(example, 'example.yaml') ``` Output: ```yaml # example.yaml components: comp-example: executorLabel: exec-example inputDefinitions: parameters: optional_param: parameterType: STRING deploymentSpec: executors: exec-example: container: args: - --executor_input - '{{$}}' - --function_to_execute - example command: - sh - -c - "\nif ! 
[ -x \"$(command -v pip)\" ]; then\n python3 -m ensurepip ||\ \ python3 -m ensurepip --user || apt-get install python3-pip\nfi\n\nPIP_DISABLE_PIP_VERSION_CHECK=1\ \ python3 -m pip install --quiet --no-warn-script-location 'kfp==2.0.0-beta.8'\ \ && \"$0\" \"$@\"\n" - sh - -ec - 'program_path=$(mktemp -d) printf "%s" "$0" > "$program_path/ephemeral_component.py" python3 -m kfp.components.executor_main --component_module_path "$program_path/ephemeral_component.py" "$@" ' - "\nimport kfp\nfrom kfp import dsl\nfrom kfp.dsl import *\nfrom typing import\ \ *\n\ndef example(optional_param: Optional[str] = None):\n pass\n\n" image: python:3.7 pipelineInfo: name: example root: dag: tasks: example: cachingOptions: enableCache: true componentRef: name: comp-example inputs: parameters: optional_param: componentInputParameter: optional_param taskInfo: name: example inputDefinitions: parameters: optional_param: parameterType: STRING schemaVersion: 2.1.0 sdkVersion: kfp-2.0.0-beta.8 ``` ### Expected result `optional_param` in the above example should have a default value under the `defaultValue` key in the output yaml. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
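The fix requires the compiler to distinguish "no default" from "default is None". A small sketch using `inspect` (a hypothetical helper, not the actual KFP compiler code) shows that Python's signature machinery already makes this distinction available:

```python
import inspect
from typing import Optional

def example(required: str, optional_param: Optional[str] = None):
    pass

def input_definitions(func):
    """Emit a dict mirroring inputDefinitions: a parameter whose default is
    inspect.Parameter.empty is required; any other default -- including
    None -- marks it optional and records the default value."""
    defs = {}
    for name, param in inspect.signature(func).parameters.items():
        entry = {'parameterType': 'STRING'}
        if param.default is not inspect.Parameter.empty:
            entry['isOptional'] = True
            entry['defaultValue'] = param.default  # None is a real default
        defs[name] = entry
    return defs

print(input_definitions(example))
```

Under this scheme `optional_param` would be emitted with `isOptional: true` and an explicit (null) default, rather than appearing required as in the YAML above.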
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8558/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8558/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8555
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8555/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8555/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8555/events
https://github.com/kubeflow/pipelines/issues/8555
1,485,860,530
I_kwDOB-71UM5YkG6y
8,555
[Bazel support] <Will support bazel build for Kubeflow>
{ "login": "xushaoxiao", "id": 83151234, "node_id": "MDQ6VXNlcjgzMTUxMjM0", "avatar_url": "https://avatars.githubusercontent.com/u/83151234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xushaoxiao", "html_url": "https://github.com/xushaoxiao", "followers_url": "https://api.github.com/users/xushaoxiao/followers", "following_url": "https://api.github.com/users/xushaoxiao/following{/other_user}", "gists_url": "https://api.github.com/users/xushaoxiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/xushaoxiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xushaoxiao/subscriptions", "organizations_url": "https://api.github.com/users/xushaoxiao/orgs", "repos_url": "https://api.github.com/users/xushaoxiao/repos", "events_url": "https://api.github.com/users/xushaoxiao/events{/privacy}", "received_events_url": "https://api.github.com/users/xushaoxiao/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks, @xushaoxiao. We've supported bazel in the past, but it slowed down development velocity and required extra effort to maintain. This is not something we are looking to support at this time.\r\n\r\nHere is a bit of context: https://github.com/kubeflow/pipelines/issues/3250\r\n\r\ncc @chensun " ]
"2022-12-09T03:00:13"
"2022-12-15T23:58:27"
"2022-12-15T23:58:26"
NONE
null
### Bazel build is good Would you consider supporting a bazel build?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8555/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8554
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8554/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8554/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8554/events
https://github.com/kubeflow/pipelines/issues/8554
1,485,641,884
I_kwDOB-71UM5YjRic
8,554
Behavior of set_retry() - component marked as failed, pipeline marked as success
{ "login": "hydeta", "id": 9207425, "node_id": "MDQ6VXNlcjkyMDc0MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/9207425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hydeta", "html_url": "https://github.com/hydeta", "followers_url": "https://api.github.com/users/hydeta/followers", "following_url": "https://api.github.com/users/hydeta/following{/other_user}", "gists_url": "https://api.github.com/users/hydeta/gists{/gist_id}", "starred_url": "https://api.github.com/users/hydeta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hydeta/subscriptions", "organizations_url": "https://api.github.com/users/hydeta/orgs", "repos_url": "https://api.github.com/users/hydeta/repos", "events_url": "https://api.github.com/users/hydeta/events{/privacy}", "received_events_url": "https://api.github.com/users/hydeta/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "@hydeta, just to confirm: are you running this on the Kubeflow Pipelines open source BE? If so, which version (both KFP SDK and KFP BE versions)?\r\n\r\ncc @gkcalat ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-09T00:15:51"
"2023-08-29T07:42:03"
null
NONE
null
Hello, I am using the method `set_retry()` and occasionally I am seeing instances where the component is marked as a failure but the pipeline will still be marked as a success. How does this occur? Is it the case where if the component fails but succeeds on subsequent tries then the component will still be marked as a failure but the pipeline will succeed? Or is the component truly failing and the pipeline is proceeding when it should instead fail? Thanks.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8554/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8552
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8552/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8552/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8552/events
https://github.com/kubeflow/pipelines/issues/8552
1,485,616,850
I_kwDOB-71UM5YjLbS
8,552
MySQL logs disk full error
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "/cc @jlyaoyuli " ]
"2022-12-08T23:53:35"
"2022-12-09T03:51:11"
"2022-12-09T03:51:11"
COLLABORATOR
null
We noticed on some KFP cluster listing run failed consistently with 524 timeout error. Looking at the pod logs, metadata-writer and persistence-agent are in crash loop with error failed to connect to mysql. Checking mysql pod logs, it's flooded with > `disk is full writing './binlog.~rec~'` Did some research, looks like MySQL 8.0 changed the default log expiration from 10 days to 30 days: https://dba.stackexchange.com/questions/233048/my-disk-is-full-of-binlog-files
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8552/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8546
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8546/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8546/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8546/events
https://github.com/kubeflow/pipelines/issues/8546
1,483,062,768
I_kwDOB-71UM5YZb3w
8,546
[backend] <Wrong Parameter loading for google_cloud_pipeline_components.aiplatform.TabularDatasetCreateOp in v1.0.29>
{ "login": "qijia2045579", "id": 13291534, "node_id": "MDQ6VXNlcjEzMjkxNTM0", "avatar_url": "https://avatars.githubusercontent.com/u/13291534?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qijia2045579", "html_url": "https://github.com/qijia2045579", "followers_url": "https://api.github.com/users/qijia2045579/followers", "following_url": "https://api.github.com/users/qijia2045579/following{/other_user}", "gists_url": "https://api.github.com/users/qijia2045579/gists{/gist_id}", "starred_url": "https://api.github.com/users/qijia2045579/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qijia2045579/subscriptions", "organizations_url": "https://api.github.com/users/qijia2045579/orgs", "repos_url": "https://api.github.com/users/qijia2045579/repos", "events_url": "https://api.github.com/users/qijia2045579/events{/privacy}", "received_events_url": "https://api.github.com/users/qijia2045579/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false }
[ { "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @chongyouquan ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-07T23:10:01"
"2023-08-29T07:42:05"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? We use the Vertex AI managed notebook to create KFP and call the "google_cloud_pipeline_components.aiplatform.TabularDatasetCreateOp" component to create a Vertex AI dataset. * KFP version: V2 * KFP SDK version: google-cloud-pipeline-components-1.0.29 ### Steps to reproduce Under google-cloud-pipeline-components-1.0.29. Use simple pipelineJob to create the TabularDatasetCreateOp with normal "project" and "location" as parameters. This will raise the error. Did some research and found: The error is found in container execution. The remote_runner did not correctly pass the "project" and "location" as Str; they are recognized as "Dict()" here https://github.com/kubeflow/pipelines/blob/217cbd9e6e0c79316bb98c005e3154cea08afb71/components/google-cloud/google_cloud_pipeline_components/container/aiplatform/remote_runner.py#L211 ### Expected result This may be caused by the most recent change in the labeling part. Because we still have the correct results in the 1.0.27 version ### Materials and Reference --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8546/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8546/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8527
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8527/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8527/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8527/events
https://github.com/kubeflow/pipelines/issues/8527
1,477,298,221
I_kwDOB-71UM5YDcgt
8,527
[feature] Documentation around specifying pod configuration for components (e.g. volume mounts/pod affinity etc) in v2
{ "login": "elibixby", "id": 6596957, "node_id": "MDQ6VXNlcjY1OTY5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6596957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elibixby", "html_url": "https://github.com/elibixby", "followers_url": "https://api.github.com/users/elibixby/followers", "following_url": "https://api.github.com/users/elibixby/following{/other_user}", "gists_url": "https://api.github.com/users/elibixby/gists{/gist_id}", "starred_url": "https://api.github.com/users/elibixby/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elibixby/subscriptions", "organizations_url": "https://api.github.com/users/elibixby/orgs", "repos_url": "https://api.github.com/users/elibixby/repos", "events_url": "https://api.github.com/users/elibixby/events{/privacy}", "received_events_url": "https://api.github.com/users/elibixby/received_events", "type": "User", "site_admin": false }
[ { "id": 1260031624, "node_id": "MDU6TGFiZWwxMjYwMDMxNjI0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/samples", "name": "area/samples", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @connor-mccarthy ", "Hi, @elibixby. Please see a description of plans to support this in KFP SDK v2 here: https://github.com/kubeflow/pipelines/issues/8570#issuecomment-1349057570", "Do I read this correctly that the plan is to be platform agnostic even outside of Kubernetes? I'm really skeptical that a workflow library can be built that actually supports real use cases without providing access to at least the level of abstraction that Kubernetes already provides, and am disappointed to hear that kubeflow is moving in that direction.", "@elibixby, thanks for expressing your concern. Hopefully the following resolves some of your questions:\r\n\r\nThe _pipeline topology_ itself will be platform agnostic (this is represented in PipelineSpec IR). Each node/task in the pipeline can have features bound to it. Features that aren't specific to a particular executing backend (environment variables, display name, retry behavior, etc.) will be represented directly in IR.\r\n\r\nFeatures that are are tied to a particular backend (either by concept or by API/specification) will be will _still be exposed and configurable_, but not represented directly in IR YAML (they will be represented alongside IR YAML). This enables feature parity with v1 Kubeflow concepts in the KFP SDK, while still maintaining cross-platform portability of the core pipeline definition.\r\n\r\nIn the case of Kubernetes, Kubernetes concepts will be exposed via the `kfp.kubernetes` subpackage _within_ the KFP SDK library (see [example](https://docs.google.com/document/d/10Cx-B18V6gR35VOmTe8_8gB67srOFF_un7NN8qXP1T4/edit#heading=h.etyis0olf88a)).\r\n\r\nFrom a pipeline authoring standpoint, you will still be able to author the same pipeline using Kubernetes-specific abstractions once this work is complete, but your code will merely look a little different (slightly less object oriented) and the YAML file containing your pipeline (which is largely implementation detail) will represent these features differently.", "Does this mean KubeFlow will be moving away from using Argo as the workflow system? Because the comparative complexity of the Python SDK my team is strongly considering using KubeFlow operators for caching etc but authoring pipelines with another Argo frontend (e.g. Hera). This currently seems to work quite well.\r\n\r\nIt sounds like this will stop working if you move to a custom IR? If so I hope there are plans to pull out the components that add useful functionality to Argo as separate projects for those of us who don't care about platform agnosticism.", "> Does this mean KubeFlow will be moving away from using Argo as the workflow system?\r\n\r\nKFP SDK v2.x.x is moving to writing custom IR, which is submitted to the KFP BE and compiled to Argo by the KFP BE. This is already how the v2 namespace behaves in KFP SDK v1.x.x. KFP SDK v2.x.x promotes this to the primary authoring style.\r\n\r\n> I hope there are plans to pull out the components that add useful functionality to Argo as separate projects\r\n\r\nThe `sdk/release-1.8` will continue to hold the latest KFP SDK v1.8.x changes and, despite focusing development efforts on KFP SDK v2.x.x, we have continued to release KFP SDK v1 patch versions. You can continue to `pip install kfp==1.8`.\r\n\r\nFurthermore, the KFP v2 BE will still accept Argo workflow.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-12-05T19:42:27"
"2023-08-29T07:42:07"
null
NONE
null
### Feature Area /area samples ### What feature would you like to see? It's unclear from documentation to what extent you should be able to configure the pod that run a given task in v2. In v1 the [BaseOp](https://github.com/kubeflow/pipelines/blob/1.8.16/sdk/python/kfp/dsl/_container_op.py#L810) class seems to suggest that most aspects of a pod are configurable within the pipeline but this functionality seems to completely disappear in v2 with the move to `ComponentSpec` which seems to have no such configuration fields. Is this still possible? Some documentation on customizing pod fields would help here. ### What is the use case or pain point? Many steps for my use case require specific pod affinity, tolerances, and premounted volumes etc. It is unclear how to do that in the context of v2 pipelines. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8527/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8524
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8524/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8524/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8524/events
https://github.com/kubeflow/pipelines/issues/8524
1,474,265,308
I_kwDOB-71UM5X34Dc
8,524
[feature] Support different ServiceAccounts for each component (least privilege)
{ "login": "agilgur5", "id": 4970083, "node_id": "MDQ6VXNlcjQ5NzAwODM=", "avatar_url": "https://avatars.githubusercontent.com/u/4970083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agilgur5", "html_url": "https://github.com/agilgur5", "followers_url": "https://api.github.com/users/agilgur5/followers", "following_url": "https://api.github.com/users/agilgur5/following{/other_user}", "gists_url": "https://api.github.com/users/agilgur5/gists{/gist_id}", "starred_url": "https://api.github.com/users/agilgur5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agilgur5/subscriptions", "organizations_url": "https://api.github.com/users/agilgur5/orgs", "repos_url": "https://api.github.com/users/agilgur5/repos", "events_url": "https://api.github.com/users/agilgur5/events{/privacy}", "received_events_url": "https://api.github.com/users/agilgur5/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hello @agilgur5 , thank you so much for bringing this up and offering to contribution. If you can write a design doc about how you plan to implement the per-component service account feature, and get it reviewed by Pipelines Working Group (Join the Pipelines Community Meeting), we can review and approve it for enabling the implementation. Thank you!", "Hi @zijianjoy, thanks for the quick response!\r\n\r\n> If you can write a design doc about how you plan to implement the per-component service account feature\r\n\r\nSo _originally_ I was hoping to get some guidance on where in the codebase to look/start from for this feature (which I would need to understand a good bit before proposing a Design Doc, depending on the level of implementation details expected).\r\n\r\nBut I managed to find a decent bit myself, and I think this might actually be a simpler contribution than I first thought.\r\n\r\n### `podSpecPatch` is already used\r\n\r\nIn particular, I found that [`podSpecPatch`](https://github.com/kubeflow/pipelines/blob/217cbd9e6e0c79316bb98c005e3154cea08afb71/backend/src/v2/compiler/argocompiler/container.go#L210) is _already_ used by the compiler. The workaround for per-step `ServiceAccount`s in Argo Workflow [also uses `podSpecPatch`](https://github.com/argoproj/argo-workflows/issues/2053#issuecomment-827120183).\r\n\r\nI see #4877 requested it and I looked through the implementation to support GPU requests in #5972. \r\n\r\nBased on the code there and the existing support in the compiler, it seems like this feature might actually only require changes to the SDK to expose a `ServiceAccount` configuration (and not the compiler or backend as I originally anticipated). Does that sound about right to you?\r\n\r\nI need to read through a bit more, but if so, this might only require a small PR similar to #5972 to implement. If that is the case, would a Design Doc + Pipelines WG review be necessary for this? Or could I just go straight to PR if the source changes are limited in scope to, say, ~50 LoC source in the SDK only?\r\n\r\nI can probably work in a Design Doc + Community Meeting if needed, but that needs a bit more scheduling/synchronous coordination 😅 ", "We need to make sure that your contribution still works in KFP v2, would you like to take a closer look and see if the proposed change is valid in the KFPv2 syntax? https://www.kubeflow.org/docs/components/pipelines/v2/", "Yea as far as I could tell, a `set_service_account_name` function would be compatible with both versions", "I think there are two main points to be considered:\r\n1. In KFP v2, we are using PipelineSpec to define a pipeline template: https://github.com/kubeflow/pipelines/blob/217cbd9e6e0c79316bb98c005e3154cea08afb71/api/v2alpha1/pipeline_spec.proto#L50. In your design, you mention that we only need to change KFP SDK, how do we represent service_account for individual steps in PipelineSpec proto?\r\n2. Users currently can set service account for the whole pipeline run when you are creating a new run, that means you can execute the same pipeline template using different service account. Why do you think we should follow the pattern of `set_gpu_limit()` to set service account during pipeline compile time?", "Gonna answer these out of order as I think they're dependent on each other:\r\n\r\n> 2\\. Why do you think we should follow the pattern of `set_gpu_limit()` to set service account during pipeline compile time?\r\n\r\n2. For the same reason why something like `set_gpu_limit()` exists: one may set each _component_ to have a different limit, and the same for Service Accounts (per opening comment). Since each component can have a different Service Account for least privilege, a component needs to be able to be configured with a Service Account. Depending on the Pipeline itself it runs in and _where_ that Pipeline runs (etc), the component may need a different Service Account.\r\nAlso, the current way to set Service Accounts in Argo Workflows relies on using `podSpecPatch`.\r\nI don't think there's as much need for that to change based on other component's outputs/results like `set_gpu_limit`, but that use-case is possible too.\r\n\r\n> 1. In KFP v2, we are using PipelineSpec to define a pipeline template:\r\n\r\n1. Thanks for your guidance and the `proto` reference, I wasn't aware of that!\r\nBased on 2., it would be the similar to how `set_gpu_limit` is configured right now, ~which does not have an explicit named configuration~ **EDIT**: see next comment, this seems to be under `AcceleratorConfig` I believe.\r\nAs far as I understand -- please correct me if I'm wrong, I'm much more familiar with the underlying Argo Workflows spec than KFP's own spec and so may very well have gaps here -- ~this is just an `input` to the component (which can have a default value). Since a component effectively wraps a singular AW `template`, the `template` must accept as `input` anything that can be parametrized, as it can be used in many different Pipelines (workflows in AW terms).~\r\n**EDIT**: see next comment, it seems like these are defined under `PipelineContainerSpec`", "Correction on the above 1., I do seem to have misunderstood the `proto` a bit. My eyes seemed to have glossed over the `PipelineContainerSpec` last night because I woke up and immediately thought I missed something then found it 😅\r\nApologies for my misunderstanding as a first-time contributor to this project.\r\n\r\n1. Based on my re-read of the `proto`, it looks like a `ServiceAccount` -- `service_account_name` to be more specific -- would likely be defined under `PipelineContainerSpec`:\r\nhttps://github.com/kubeflow/pipelines/blob/217cbd9e6e0c79316bb98c005e3154cea08afb71/api/v2alpha1/pipeline_spec.proto#L652\r\nSame as where `cpu_limit` and `AcceleratorConfig` (i.e. GPU config) are currently.\r\nThe naming is a bit off though because a `serviceAccountName` is actually a field on the Pod, not a container... hmm... 💬\r\n\r\nBased on that, that does make it sound like it _may_ need a spec change for v2, unless I'm misunderstanding something again.\r\nWhen I looked at the compiler code before though, I did see that [`podSpecPatch` was already used](https://github.com/kubeflow/pipelines/blob/217cbd9e6e0c79316bb98c005e3154cea08afb71/backend/src/v2/compiler/argocompiler/container.go#L210) and defined as an `input` so I feel like I'm still missing something here.\r\n\r\nIf that _does_ require a spec change, then yea a Design Doc would absolutely be understood in that scenario. _Especially_ so if, for proper naming & nesting, a `PipelinePodSpec` would have to wrap the `PipelineContainerSpec`, as that's definitely a larger change than just a small SDK change 😅\r\nI might be missing something around how this implemented under-the-hood in KFP v1 vs KFP v2? (i.e. deeper than the external-facing syntax)\r\n\r\nWould appreciate any further guidance or links/references here! And again, please correct me if I'm wrong in my understanding anywhere above!", "Thank you @agilgur5 for the detailed answers and your thinking process!\r\n\r\nTo share more information, I would like to suggest this reading for KFPv2 system design: https://docs.google.com/document/d/1fHU29oScMEKPttDA1Th1ibImAKsFVVt2Ynr4ZME05i0/edit.\r\n\r\n------\r\n\r\nTo give a high level summary of difference between KFPv1 and v2:\r\n\r\nKFP V1:\r\n\r\nSDK compiles Pipeline Template (`ArgoWorkflow CR`) -> Backend deploys `ArgoWorkflow CR` with runtime parameters -> Finish.\r\n\r\n\r\nKFP V2:\r\n\r\nSDK compiles Pipeline Template (`PipelineSpec`) -> Backend translates `PipelineSpec` with `RuntimeConfig` to `ArgoWorkflow CR` -> Backend deploys `ArgoWorkflow CR` -> Finish. \r\n\r\n\r\n-------\r\n\r\nAs you have noticed, we are looking for information as followed in a design doc:\r\n1. Do we set service account at the compile time, or at runtime?\r\n2. What should be the level of granularity for service account setting? (component level, pod level, container level, pipeline level, or others?) Note that a pipeline can run within a pipeline (We sometimes call this concept as sub-DAG).\r\n3. When there is a conflict of service account configuration, which one takes higher priority? (You can set service account for the whole run at runtime currently. If a component inside has a different service account at compile time, which service account should it choose?)\r\n4. Based on the decision above, what is the proposed proto change in `PipelineSpec`?\r\n\r\n-------\r\n\r\nHope that this information is helpful! It will be easier to discuss these topics in a design doc.", "Just wanted to quickly note that I'm still engaged here and this is now on my main team's JIRA board.\r\n\r\nBeen fairly preoccupied the past 2 weeks as my family caught COVID -- bit of a hectic new year to say the least 😅\r\n\r\nI'll have a response soon with some questions and am working on a design doc in the next week or so", "heya, any status updates here?", "I got COVID shortly after my previous comment and got the rough end of it (few weeks sick + long COVID for a few months 😞 ). Then got re-org'd at work (again) and headed up a new team, on top of other corporate things, so this dropped pretty far down on the priority list as a result unfortunately 😕 \r\n\r\nI have been ramping back up on OSS work the past few weeks and did some work on the upstream issue recently, discovering that it seems to already be implemented(??): https://github.com/argoproj/argo-workflows/issues/2053#issuecomment-1597524593\r\n\r\nI'm hoping I can get back to Kubeflow work around ~next week (I also recently joined the newly formed Security WG with a few things on the list), but tbd" ]
"2022-12-03T22:53:24"
"2023-07-06T00:12:31"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> I would like to be able to configure different k8s `ServiceAccount`s for each component in my pipeline. ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> Main examples are for integrations: 1. Let's say I want to run the Spark Operator in one of my components, then I need to give that component RBAC access to the `SparkApplication` CR. _But_, I don't want to allow my other components access to `SparkApplication` CRs. 2. Let's say I want one of my components to access secrets from Vault's CSI Driver, then I need to give that component RBAC access to PSPs for CSI (in older k8s versions). _But_, I don't want to allow my other components access to CSI. Basically, limiting to **least privilege** RBAC per component. In secured environments with a component catalog, an example of usage here would be that anything that requires RBAC has to go through the component catalog -- i.e. any integrations that require RBAC are through secured components. ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> All components must be given the same `ServiceAccount` with RBAC allowed for all. So if I only have _one_ component that needs to integrate with Spark, _all_ components are given access to Spark. This does not fit least privilege security paradigms, causing many components to have extra unnecessary permissions, widening the attack surface area unnecessarily. 
## Additional Details

Argo Workflows does currently allow different `ServiceAccount`s per Workflow, and there exists a workaround to allow different `ServiceAccount`s per step: https://github.com/argoproj/argo-workflows/issues/2053. So this is already possible in Argo Workflows; it just needs support in Kubeflow Pipelines.

I would be more than happy to contribute this work as it's critical to a few teams that I work with. I may also contribute an official feature in Argo for this (so that the workaround is not necessary).

---

<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍.
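To illustrate the least-privilege shape this feature would enable, a per-component `ServiceAccount` scoped to `SparkApplication` CRs only could look roughly like the sketch below. All names and the namespace are hypothetical; the `sparkoperator.k8s.io` API group assumes the Spark Operator CRD is installed.

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark-component-sa        # hypothetical; bound to one component only
  namespace: my-kfp-namespace     # hypothetical
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-application-editor
  namespace: my-kfp-namespace
rules:
  # Access to SparkApplication CRs only -- no other component gets this.
  - apiGroups: ["sparkoperator.k8s.io"]
    resources: ["sparkapplications"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-component-binding
  namespace: my-kfp-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: spark-application-editor
subjects:
  - kind: ServiceAccount
    name: spark-component-sa
```

The missing piece the feature request asks for is a KFP-level way to attach `spark-component-sa` to a single pipeline task rather than to the whole Workflow.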
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8524/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8524/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8513
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8513/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8513/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8513/events
https://github.com/kubeflow/pipelines/issues/8513
1,469,973,409
I_kwDOB-71UM5XngOh
8,513
[feature] Export VertexModels in python function based components in Vertex Pipelines.
{ "login": "wardVD", "id": 2136274, "node_id": "MDQ6VXNlcjIxMzYyNzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2136274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wardVD", "html_url": "https://github.com/wardVD", "followers_url": "https://api.github.com/users/wardVD/followers", "following_url": "https://api.github.com/users/wardVD/following{/other_user}", "gists_url": "https://api.github.com/users/wardVD/gists{/gist_id}", "starred_url": "https://api.github.com/users/wardVD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wardVD/subscriptions", "organizations_url": "https://api.github.com/users/wardVD/orgs", "repos_url": "https://api.github.com/users/wardVD/repos", "events_url": "https://api.github.com/users/wardVD/events{/privacy}", "received_events_url": "https://api.github.com/users/wardVD/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @connor-mccarthy ", "@wardVD, thanks for your question. Can you let me know which version of the KFP SDK you're using? If it's `kfp==1.x.x` or `kfp<2.0.0b5` this is expected.\r\n\r\nFor `kfp>=2.0.0b5`, the `VertexModel` symbol will be defined, but the GCPC library is not yet compatible with KFP v2 (this is a work in progress) so this will still not work yet.", "Hi @connor-mccarthy, Are there any updates regarding this? I wanted to use `ModelUploadOp` component and pass the `parent_model` argument in order to upload a new version to an existing model. Based on the documentation this expects a `VertexModel` so wanted to create a custom python component that a model name and returns the existing `VertexModel`, but faced this error.\r\nWhat other ways can this be done as of now?", "Hi, @morningcloud. There is no elegant solution for this currently that allows you to use a `VertexModel` directly, but since the `dsl.Artifact` type is compatible with all other artifact types, returning a `dsl.Artifact` may be a useful workaround for the time being [[docs](https://github.com/kubeflow/pipelines/issues/8513)].", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-30T17:15:38"
"2023-08-30T07:41:50"
null
NONE
null
### Feature Area

I wanted to create a custom component from a python function that creates a VertexModel artifact when running with Vertex Pipelines. I thought it could be as simple as something like this (dummy code):

```
from google_cloud_pipeline_components.types.artifact_types import VertexModel

@component(
    base_image="python:3.9-slim",
)
def ModelUploadCustomOp(
    model_name: str,
    model: Output[VertexModel],
):
    model.uri = <MODEL_URI_STRING>
```

The compilation step works, but running gives me the following error:

```
model: Output[VertexModel],
NameError: name 'VertexModel' is not defined
```

Is there a way to create a python-function-based component that creates a VertexModel artifact in Vertex Pipelines?

---

<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8513/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8513/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8502
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8502/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8502/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8502/events
https://github.com/kubeflow/pipelines/issues/8502
1,467,179,111
I_kwDOB-71UM5Xc2Bn
8,502
[feature] S3 Integration for Artifact Storage (IRSA)
{ "login": "ryansteakley", "id": 37981995, "node_id": "MDQ6VXNlcjM3OTgxOTk1", "avatar_url": "https://avatars.githubusercontent.com/u/37981995?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ryansteakley", "html_url": "https://github.com/ryansteakley", "followers_url": "https://api.github.com/users/ryansteakley/followers", "following_url": "https://api.github.com/users/ryansteakley/following{/other_user}", "gists_url": "https://api.github.com/users/ryansteakley/gists{/gist_id}", "starred_url": "https://api.github.com/users/ryansteakley/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryansteakley/subscriptions", "organizations_url": "https://api.github.com/users/ryansteakley/orgs", "repos_url": "https://api.github.com/users/ryansteakley/repos", "events_url": "https://api.github.com/users/ryansteakley/events{/privacy}", "received_events_url": "https://api.github.com/users/ryansteakley/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }, { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": 
"https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @zijianjoy @jlyaoyuli Can you please ~~tag Jo Li/people from frontend side to~~ review this proposal\r\n\r\ncc @chensun, @james-jwu", "@jlyaoyuli ", "Hi @gkcalat, I believe @ryansteakley is interested to contribute this functionality. Shall we go ahead with the PR?\r\n@jlyaoyuli let us know if you have any comments", "/assign @chensun ", "resolved by #8651 merged, thanks everyone!" ]
"2022-11-28T22:27:53"
"2023-03-10T18:23:29"
"2023-03-10T18:23:29"
CONTRIBUTOR
null
## S3 Integration for Artifact Storage (IRSA)

As of the Kubeflow Pipelines 2.0-alpha6 release, using IRSA for artifact storage in S3 is not fully supported (details later in the doc). There has been a long-running thread with high interest for this functionality in the [Kubeflow Pipelines GitHub](https://github.com/kubeflow/pipelines/issues/3405). This proposal aims to address it.

To support using IRSA with S3, we want to add the [AWS JS SDK credentials](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/modules/_aws_sdk_credential_providers.html) module to the frontend container image and update the [aws-helper](https://github.com/kubeflow/pipelines/blob/4bb57e6723b7a5c2eb685536e5a293aea87bd3a1/frontend/server/aws-helper.ts) to use the credentialProviderChain function from the AWS JS SDK.

**Please read the following [design document](https://docs.google.com/document/d/1kWbenzENgj2SE8oKshiu_MMzZuBqfZ8Iv1LQSmwJD7E/edit?usp=sharing&resourcekey=0-DxxoE1CFqJkUeSYpEBKfmg) to better understand the reasoning and use case behind this feature**.

---

/area frontend

Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8502/reactions", "total_count": 10, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8502/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8500
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8500/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8500/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8500/events
https://github.com/kubeflow/pipelines/issues/8500
1,466,545,203
I_kwDOB-71UM5XabQz
8,500
[bug] Issue with ImageDatasetCreateOp
{ "login": "wardVD", "id": 2136274, "node_id": "MDQ6VXNlcjIxMzYyNzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2136274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wardVD", "html_url": "https://github.com/wardVD", "followers_url": "https://api.github.com/users/wardVD/followers", "following_url": "https://api.github.com/users/wardVD/following{/other_user}", "gists_url": "https://api.github.com/users/wardVD/gists{/gist_id}", "starred_url": "https://api.github.com/users/wardVD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wardVD/subscriptions", "organizations_url": "https://api.github.com/users/wardVD/orgs", "repos_url": "https://api.github.com/users/wardVD/repos", "events_url": "https://api.github.com/users/wardVD/events{/privacy}", "received_events_url": "https://api.github.com/users/wardVD/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Currently Google Cloud Pipeline Component hasn't been migrated to KFP SDK V2, so it is a work-in-progress.\r\n\r\n/cc @connor-mccarthy \r\n\r\n" ]
"2022-11-28T14:37:40"
"2022-12-15T07:56:42"
"2022-12-08T23:52:40"
NONE
null
### Environment

* KFP SDK version: 2.0.0b4

### Steps to reproduce

Run https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/pipelines/google_cloud_pipeline_components_automl_images.ipynb

No issues on `kfp==1.8.13`

Issue:

![image](https://user-images.githubusercontent.com/2136274/204304592-f6078988-9c68-43d7-8ff0-826e09f803f3.png)

---

<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8500/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8499
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8499/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8499/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8499/events
https://github.com/kubeflow/pipelines/issues/8499
1,464,518,232
I_kwDOB-71UM5XSsZY
8,499
[feature] Filter splits not possible for AutoMLTrainingJobOp
{ "login": "wardVD", "id": 2136274, "node_id": "MDQ6VXNlcjIxMzYyNzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2136274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wardVD", "html_url": "https://github.com/wardVD", "followers_url": "https://api.github.com/users/wardVD/followers", "following_url": "https://api.github.com/users/wardVD/following{/other_user}", "gists_url": "https://api.github.com/users/wardVD/gists{/gist_id}", "starred_url": "https://api.github.com/users/wardVD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wardVD/subscriptions", "organizations_url": "https://api.github.com/users/wardVD/orgs", "repos_url": "https://api.github.com/users/wardVD/repos", "events_url": "https://api.github.com/users/wardVD/events{/privacy}", "received_events_url": "https://api.github.com/users/wardVD/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false }
[ { "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-25T13:01:21"
"2023-08-30T07:41:52"
null
NONE
null
### Feature Area

I noticed that the `component.yaml` file for AutoMLTrainingJobOp mentions data filters in the description, but the parameters for filter splits are not yet present: https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/aiplatform/automl_training_job/automl_image_training_job/component.yaml

This unfortunately does not allow training an AutoML model by defining the training/validation/test splits when they are already defined in the dataset itself. This is possible with the Google Cloud Vertex SDK: https://github.com/googleapis/python-aiplatform/blob/main/google/cloud/aiplatform/training_jobs.py#L5332

---

<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8499/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8498
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8498/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8498/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8498/events
https://github.com/kubeflow/pipelines/issues/8498
1,464,357,609
I_kwDOB-71UM5XSFLp
8,498
How do I use an external variable, an external function, or an external class inside a pipeline component?
{ "login": "wfg314", "id": 102725106, "node_id": "U_kgDOBh918g", "avatar_url": "https://avatars.githubusercontent.com/u/102725106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wfg314", "html_url": "https://github.com/wfg314", "followers_url": "https://api.github.com/users/wfg314/followers", "following_url": "https://api.github.com/users/wfg314/following{/other_user}", "gists_url": "https://api.github.com/users/wfg314/gists{/gist_id}", "starred_url": "https://api.github.com/users/wfg314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wfg314/subscriptions", "organizations_url": "https://api.github.com/users/wfg314/orgs", "repos_url": "https://api.github.com/users/wfg314/repos", "events_url": "https://api.github.com/users/wfg314/events{/privacy}", "received_events_url": "https://api.github.com/users/wfg314/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "The Containerized Python Components approach is what you're looking for: https://www.kubeflow.org/docs/components/pipelines/v2/author-a-pipeline/components/#2-containerized-python-components.\r\n\r\nPlease feel free to re-open this you have any other questions." ]
"2022-11-25T10:43:58"
"2022-12-02T00:06:04"
"2022-12-02T00:06:03"
NONE
null
I defined a pipeline component in com.py like this:

```
@component(base_image="my-image")
def com1():
    a = 1

    def func1(aa):
        pass

    def func2():
        pass

    x = func1(a)
    y = func2()

return com1
```

I want to move a, func1, and func2 out of com1, because com1 is too long and the length hurts readability. So I want to define it like this:

```
a = 1

def func1(aa):
    pass

def func2():
    pass

@component(base_image="my-image")
def com1():
    x = func1(a)
    y = func2()

return com1
```

But when I compile it to a YAML file, the YAML file cannot find a, func1, and func2. So, what should I do?
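What's happening here: a lightweight Python component ships only the decorated function's own source text into the container, so module-level names never make it into the compiled YAML; the fix suggested in the replies is Containerized Python Components, which package the whole module into the image. A minimal stdlib sketch of the capture mechanism (the real KFP internals differ, but the effect is the same):

```python
# Demonstrates why module-level helpers are lost: a lightweight Python
# component re-executes only the decorated function's source inside the
# container; anything defined outside it is absent at runtime.
a = 1

def func1(aa):
    return aa

# What the compiler (roughly) captures is just the function text,
# without the surrounding module:
captured_source = """
def com1():
    x = func1(a)
    return x
"""

container_globals = {}  # stands in for the fresh interpreter in the container
exec(captured_source, container_globals)
try:
    container_globals["com1"]()
except NameError as e:
    print(e)  # name 'func1' is not defined
```

The same lookup failure is what surfaces as the "could not find" error when the compiled YAML runs.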
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8498/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8497
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8497/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8497/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8497/events
https://github.com/kubeflow/pipelines/issues/8497
1,464,356,741
I_kwDOB-71UM5XSE-F
8,497
How do I use an external variable, an external function, or an external class inside a pipeline component?
{ "login": "wfg314", "id": 102725106, "node_id": "U_kgDOBh918g", "avatar_url": "https://avatars.githubusercontent.com/u/102725106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wfg314", "html_url": "https://github.com/wfg314", "followers_url": "https://api.github.com/users/wfg314/followers", "following_url": "https://api.github.com/users/wfg314/following{/other_user}", "gists_url": "https://api.github.com/users/wfg314/gists{/gist_id}", "starred_url": "https://api.github.com/users/wfg314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wfg314/subscriptions", "organizations_url": "https://api.github.com/users/wfg314/orgs", "repos_url": "https://api.github.com/users/wfg314/repos", "events_url": "https://api.github.com/users/wfg314/events{/privacy}", "received_events_url": "https://api.github.com/users/wfg314/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-11-25T10:43:12"
"2022-11-26T06:45:48"
"2022-11-26T06:45:48"
NONE
null
I defined a pipeline component in com.py like this:

```
@component(base_image="my-image")
def com1():
    a = 1

    def func1(aa):
        pass

    def func2():
        pass

    x = func1(a)
    y = func2()

return com1
```

I want to move a, func1, and func2 out of com1, because com1 is too long and the length hurts readability. So I want to define it like this:

```
a = 1

def func1(aa):
    pass

def func2():
    pass

@component(base_image="my-image")
def com1():
    x = func1(a)
    y = func2()

return com1
```

But when I compile it to a YAML file, the YAML file cannot find a, func1, and func2. So, what should I do?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8497/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8495
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8495/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8495/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8495/events
https://github.com/kubeflow/pipelines/issues/8495
1,463,129,321
I_kwDOB-71UM5XNZTp
8,495
[bug] BigqueryCreateModelJob component fails when query includes double quotes (")
{ "login": "anifort", "id": 2149571, "node_id": "MDQ6VXNlcjIxNDk1NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/2149571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anifort", "html_url": "https://github.com/anifort", "followers_url": "https://api.github.com/users/anifort/followers", "following_url": "https://api.github.com/users/anifort/following{/other_user}", "gists_url": "https://api.github.com/users/anifort/gists{/gist_id}", "starred_url": "https://api.github.com/users/anifort/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anifort/subscriptions", "organizations_url": "https://api.github.com/users/anifort/orgs", "repos_url": "https://api.github.com/users/anifort/repos", "events_url": "https://api.github.com/users/anifort/events{/privacy}", "received_events_url": "https://api.github.com/users/anifort/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false }
[ { "login": "chongyouquan", "id": 48691403, "node_id": "MDQ6VXNlcjQ4NjkxNDAz", "avatar_url": "https://avatars.githubusercontent.com/u/48691403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chongyouquan", "html_url": "https://github.com/chongyouquan", "followers_url": "https://api.github.com/users/chongyouquan/followers", "following_url": "https://api.github.com/users/chongyouquan/following{/other_user}", "gists_url": "https://api.github.com/users/chongyouquan/gists{/gist_id}", "starred_url": "https://api.github.com/users/chongyouquan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chongyouquan/subscriptions", "organizations_url": "https://api.github.com/users/chongyouquan/orgs", "repos_url": "https://api.github.com/users/chongyouquan/repos", "events_url": "https://api.github.com/users/chongyouquan/events{/privacy}", "received_events_url": "https://api.github.com/users/chongyouquan/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-24T10:33:46"
"2023-08-30T07:41:54"
null
NONE
null
### Environment google-cloud-pipeline-components = "1.0.26" gcr.io/ml-pipeline/google-cloud-pipeline-components:1.0.26 * How did you deploy Kubeflow Pipelines (KFP)? Running on Vertex AI Pipelines <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: n/a * KFP SDK version: kfp = "^1.8.14" ### Steps to reproduce Install google-cloud-pipeline-components = "1.0.26" and create a pipeline with the component ``` create_model_task = BigqueryCreateModelJobOp( project = project_id, location = location, query = 'CREATE OR REPLACE MODEL `dev_models.model_tfidf_v1` OPTIONS ( MODEL_TYPE="LOGISTIC_REG", input_label_cols=["category"] ) AS SELECT body, category FROM `bigquery-public-data.bbc_news.fulltext` LIMIT 1000') ``` Error: ``` File "/opt/python3.7/lib/python3.7/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 203 (char 202) ``` Workaround: replaced all double quotes (") with single quotes (') Suggested solution: escape double quotes before serializing the query to JSON ### Expected result The create-model query should run successfully, as it does when executing the same query string in the BigQuery UI ### Materials and reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> /area backend /area sdk <!-- /area testing --> <!-- /area samples --> /area components --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8495/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8495/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8491
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8491/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8491/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8491/events
https://github.com/kubeflow/pipelines/issues/8491
1,460,653,686
I_kwDOB-71UM5XD852
8,491
[feature] kfp.component to accept tuples/lists of component outputs as args
{ "login": "sradc", "id": 17290057, "node_id": "MDQ6VXNlcjE3MjkwMDU3", "avatar_url": "https://avatars.githubusercontent.com/u/17290057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sradc", "html_url": "https://github.com/sradc", "followers_url": "https://api.github.com/users/sradc/followers", "following_url": "https://api.github.com/users/sradc/following{/other_user}", "gists_url": "https://api.github.com/users/sradc/gists{/gist_id}", "starred_url": "https://api.github.com/users/sradc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sradc/subscriptions", "organizations_url": "https://api.github.com/users/sradc/orgs", "repos_url": "https://api.github.com/users/sradc/repos", "events_url": "https://api.github.com/users/sradc/events{/privacy}", "received_events_url": "https://api.github.com/users/sradc/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @sradc. Thanks for suggesting this. We definitely want to enable expression of more complex topologies like the ones you show.\r\n\r\nRight now we're working on three separate designs/implementations related to this feature request:\r\n- Support for lists of artifacts\r\n- Support for fan-in from `dsl.ParallelFor`\r\n- Support for fanning-in outputs from multiple `dsl.Condition` branches (possibly support for elif/else conditions too)\r\n\r\nDo these WIP features collectively enable you to express the types of pipeline topologies you wish to express?", "Hey @connor-mccarthy, that sounds great; looking forward to it!", "Looks like this has been added (I haven't tried it yet); https://github.com/kubeflow/pipelines/pull/8631", "Hey, @sradc. Fan in of parameters from `dsl.ParallelFor` has been added. The other features (fan-in of artifacts) and collection from `dsl.Condition` are coming soon." ]
"2022-11-22T22:39:20"
"2023-02-10T16:02:56"
"2023-02-10T12:12:30"
NONE
null
### Feature Area /area sdk ### What feature would you like to see? The ability to pass in a list/tuple of `component.output` as an arg to a component. E.g. so that examples like the following work: ```python from kfp.v2 import compiler, dsl @dsl.component def my_transformer_op(item: str) -> str: return item + "_transformed" @dsl.component def my_aggregator_op(args: list) -> str: return " ".join(args) @dsl.pipeline("aggtest", "agg test") def dynamic_pipeline(): transformed_vals = [] for x in ["a", "b", "c"]: transformed_vals.append(my_transformer_op(x)) my_aggregator_op([x.output for x in transformed_vals]) compiler.Compiler().compile(pipeline_func=dynamic_pipeline, package_path="my_pipeline.yaml") ``` ``` TypeError: Object of type PipelineParam is not JSON serializable ``` (Looking at [this](https://stackoverflow.com/a/63219053) stackoverflow post, seems this kind of thing was possible in v1; but seems it's not possible in v2?) ### What is the use case or pain point? For aggregating results, e.g. see [this](https://sidsite.com/posts/kubeflow-hyperparam-opt/) example of hyperparameter optimization, and the screenshots attached below. ### Is there a workaround currently? *Yes*. There are workarounds. For example, I created a hacky workaround, that modifies the source code of a function, to expand args annotated with PseudoList into actual args of the function, and then add lines to the function def to collect these into tuples with the original name of the arg. (Using ast/inspect.) 
Source code here: https://github.com/sradc/kubeflow_hyperparam_opt_example/blob/b3ef4d7e01055e27011a4d1311cf9adccf37869e/pseudo_tuple_component.py Example of using it here (there's also an example in the blog post linked above): https://github.com/sradc/kubeflow_hyperparam_opt_example/blob/41bc48e31407f79fbbcd07b39ea82586a5a71562/pseudo_tuple_example.ipynb Will also reproduce the example here: ```python from kfp.v2 import compiler, dsl from pseudo_tuple_component import PseudoTuple, pseudo_tuple_component MY_LIST = ["a", "b", "c"] PIPELINE_NAME = "pseudo-tuple-example" @dsl.component def my_transformer_op(item: str) -> str: return item + "_transformed" @pseudo_tuple_component(globals_=globals(), locals_=locals()) def my_aggregator_op(args: PseudoTuple(len(MY_LIST), str)) -> str: return " ".join(args) @dsl.pipeline("aggtest", "agg test") def dynamic_pipeline(): transformed_vals = [] for x in MY_LIST: transformed_vals.append(my_transformer_op(x).output) my_aggregator_op(*transformed_vals) compiler.Compiler().compile(pipeline_func=dynamic_pipeline, package_path=f"{PIPELINE_NAME}.json") ``` To show more explicitly how the function is modified: ```python from pseudo_tuple_component import expand_PseudoTuple_annotated_args_to_str def my_aggregator_op(args: PseudoTuple(len(MY_LIST), str)) -> str: return " ".join(args) new_source_code_for_func = expand_PseudoTuple_annotated_args_to_str( my_aggregator_op, globals_=globals(), locals_=locals() ) print(new_source_code_for_func) ``` ``` def my_aggregator_op(args_0: str, args_1: str, args_2: str) -> str: args = (args_0, args_1, args_2) return ' '.join(args) ``` 
04](https://user-images.githubusercontent.com/17290057/203434169-909d39dd-dbfb-45eb-b0fd-b78a42751f03.png) ![pseudo_tuple_example](https://user-images.githubusercontent.com/17290057/203435052-47bd3f09-7d76-4afc-b79a-c95b4ae0f05b.png) --- Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8491/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8491/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8490
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8490/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8490/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8490/events
https://github.com/kubeflow/pipelines/issues/8490
1,460,581,612
I_kwDOB-71UM5XDrTs
8,490
[frontend] Image reference specifying tag with digest should work
{ "login": "doctapp", "id": 3598900, "node_id": "MDQ6VXNlcjM1OTg5MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/3598900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/doctapp", "html_url": "https://github.com/doctapp", "followers_url": "https://api.github.com/users/doctapp/followers", "following_url": "https://api.github.com/users/doctapp/following{/other_user}", "gists_url": "https://api.github.com/users/doctapp/gists{/gist_id}", "starred_url": "https://api.github.com/users/doctapp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/doctapp/subscriptions", "organizations_url": "https://api.github.com/users/doctapp/orgs", "repos_url": "https://api.github.com/users/doctapp/repos", "events_url": "https://api.github.com/users/doctapp/events{/privacy}", "received_events_url": "https://api.github.com/users/doctapp/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hi @doctapp,\r\nCan you share what client are you using and how you submitted the job?", "Hi @gkcalat,\r\nusing Python 3.10.6, google_cloud_aiplatform 1.18.3 and kfp 1.8.16.\r\n\r\nAs for how the image is set:\r\n```\r\n@dsl.component\r\ndef some_func(...): ...\r\n...\r\nsome_fun.component_spec.implementation.container.image = 'gcr.io/some-project-id/some-product:0.0.1@sha256:0123456789abcedf0123456789abcedf0123456789abcedf0123456789abcedf'\r\n...\r\n@dsl.pipeline(...):\r\ndef pipeline(...):\r\n some_func(...)\r\n...\r\nfrom kfp.v2 import compiler\r\nfrom google.cloud.aiplatform import PipelineJob\r\ncompiler.Compiler().compile(pipeline_func=pipeline, package_path='pipeline.json')\r\njob = PipelineJob(\r\n display_name='test',\r\n template_path='pipeline.json',\r\n parameter_values=...,\r\n enable_caching=enable_caching,\r\n )\r\njob.submit(service_account=...)\r\n```\r\n\r\nThat's the high-level :) Please let me know if you need more info.\r\n\r\nThanks", "Hello @doctapp , I am wondering what should the expected behavior to be when the image tag and digest are pointing to different images. It might cause confusion to which image is actually used in the pipeline run. I am leaning towards keeping only digest or tag when you are referring an image. Please let me know if you don't agree.\r\n\r\n/cc @chensun ", "@zijianjoy Please use digest when both are specified and no need to check (could log a warning if there's a mismatch if that's something you want to implement but don't need this check). It's way more useful to specify both tag and digest since it provides context particularly when debugging and is safe when the digest is added to the tag by automation. Leave it to users to decide how to interpret both and let kubeflow use digest when both are specified.\r\nThanks", "@doctapp, it looks like your error is surfacing from job submission via the Vertex SDK (`google.cloud.aiplatform`). 
Please consider opening an issue in that repository: https://github.com/googleapis/python-aiplatform.", "Raised https://github.com/googleapis/python-aiplatform/issues/1861" ]
"2022-11-22T21:34:00"
"2022-12-16T14:32:24"
"2022-12-15T23:39:45"
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Google Vertex AI * KFP version: 1.8.6 (Python 3.10.6) ### Steps to reproduce Use an image reference containing both tag and digest, e.g., `gcr.io/some-project-id/some-product:0.0.1@sha256:0123456789abcedf0123456789abcedf0123456789abcedf0123456789abcedf` ### Expected result The pipeline should run, but I receive `google.api_core.exceptions.InvalidArgument: 400 Invalid image URI gcr.io/some-project-id/some-product@sha256:0123456789abcedf0123456789abcedf0123456789abcedf0123456789abcedf` instead. ### Materials and Reference Using either the tag (e.g., `gcr.io/some-project-id/some-product:0.0.1`) or the digest (e.g., `gcr.io/some-project-id/some-product@sha256:0123456789abcedf0123456789abcedf0123456789abcedf0123456789abcedf`) alone works. An image reference that includes both tag and digest is very useful for identifying the correct image, which is hard with a digest alone. Example stack trace: ``` Traceback (most recent call last): File "/home/user/product/venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable return callable_(*args, **kwargs) File "/home/user/product/venv/lib/python3.10/site-packages/grpc/_channel.py", line 946, in __call__ return _end_unary_response_blocking(state, call, False, None) File "/home/user/product/venv/lib/python3.10/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking raise _InactiveRpcError(state) grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = "Invalid image URI gcr.io/some-project-id/some-product@sha256:0123456789abcedf0123456789abcedf0123456789abcedf0123456789abcedf." 
debug_error_string = "UNKNOWN:Error received from peer ipv4:172.217.13.106:443 {grpc_message:"Invalid image URI gcr.io/some-project-id/some-product@sha256:0123456789abcedf0123456789abcedf0123456789abcedf0123456789abcedf.", grpc_status:3, created_time:"2022-11-22T00:00:00.000000000+00:00"}" > The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/user/product/venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py", line 74, in error_remapped_callable raise exceptions.from_grpc_error(exc) from exc google.api_core.exceptions.InvalidArgument: 400 Invalid image URI gcr.io/some-project-id/some-product@sha256:0123456789abcedf0123456789abcedf0123456789abcedf0123456789abcedf. ``` Thanks! --- Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8490/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8483
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8483/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8483/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8483/events
https://github.com/kubeflow/pipelines/issues/8483
1,455,748,649
I_kwDOB-71UM5WxPYp
8,483
[frontend] Archived experiments do not show up in archived list
{ "login": "Linchin", "id": 12806577, "node_id": "MDQ6VXNlcjEyODA2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/12806577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Linchin", "html_url": "https://github.com/Linchin", "followers_url": "https://api.github.com/users/Linchin/followers", "following_url": "https://api.github.com/users/Linchin/following{/other_user}", "gists_url": "https://api.github.com/users/Linchin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Linchin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Linchin/subscriptions", "organizations_url": "https://api.github.com/users/Linchin/orgs", "repos_url": "https://api.github.com/users/Linchin/repos", "events_url": "https://api.github.com/users/Linchin/events{/privacy}", "received_events_url": "https://api.github.com/users/Linchin/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false }
[ { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[ "#8864 " ]
"2022-11-18T20:23:15"
"2023-05-18T00:01:17"
"2023-05-18T00:00:34"
COLLABORATOR
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? KFP standalone <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: Self built based on #8480 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> ### Steps to reproduce Since this bug only appeared after PR #8480, maybe it's more reasonable to investigate after it is merged. Here are the steps: 1. Create an experiment using v2 api 2. Archive this experiment 3. On the frontend, the archived experiment still appears in the active experiments list. It's not present in the archived experiments list 4. However, if you click into this archived experiment, the button says "Restore", which means FE acknowledges that the experiment is archived. 5. This might be because the v2 api uses a different `StorageState` that is generated from the v2 proto <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ### Expected result <!-- What should the correct behavior be? --> ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8483/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8460
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8460/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8460/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8460/events
https://github.com/kubeflow/pipelines/issues/8460
1,450,779,480
I_kwDOB-71UM5WeSNY
8,460
[pipelines] Why can users see all pipelines from other members?
{ "login": "hellobiek", "id": 2854520, "node_id": "MDQ6VXNlcjI4NTQ1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2854520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hellobiek", "html_url": "https://github.com/hellobiek", "followers_url": "https://api.github.com/users/hellobiek/followers", "following_url": "https://api.github.com/users/hellobiek/following{/other_user}", "gists_url": "https://api.github.com/users/hellobiek/gists{/gist_id}", "starred_url": "https://api.github.com/users/hellobiek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hellobiek/subscriptions", "organizations_url": "https://api.github.com/users/hellobiek/orgs", "repos_url": "https://api.github.com/users/hellobiek/repos", "events_url": "https://api.github.com/users/hellobiek/events{/privacy}", "received_events_url": "https://api.github.com/users/hellobiek/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "Hi @hellobiek , namespace for pipeline is working in progress. \r\nhttps://github.com/kubeflow/pipelines/issues/4197", "@jlyaoyuli thank you. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-16T03:32:27"
"2023-08-30T07:41:55"
null
NONE
null
Kubeflow 1.5, kfp 1.8.12. Why can a user see all pipelines from other members?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8460/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8459
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8459/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8459/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8459/events
https://github.com/kubeflow/pipelines/issues/8459
1,450,779,375
I_kwDOB-71UM5WeSLv
8,459
[sdk] Unable to use api/v2alpha1/pipeline_spec.proto with bazel and py_proto_library() intact
{ "login": "Writtic", "id": 11371498, "node_id": "MDQ6VXNlcjExMzcxNDk4", "avatar_url": "https://avatars.githubusercontent.com/u/11371498?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Writtic", "html_url": "https://github.com/Writtic", "followers_url": "https://api.github.com/users/Writtic/followers", "following_url": "https://api.github.com/users/Writtic/following{/other_user}", "gists_url": "https://api.github.com/users/Writtic/gists{/gist_id}", "starred_url": "https://api.github.com/users/Writtic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Writtic/subscriptions", "organizations_url": "https://api.github.com/users/Writtic/orgs", "repos_url": "https://api.github.com/users/Writtic/repos", "events_url": "https://api.github.com/users/Writtic/events{/privacy}", "received_events_url": "https://api.github.com/users/Writtic/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-16T03:32:20"
"2023-08-30T07:41:57"
null
NONE
null
### Environment * KFP version: 1.8.13 <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * All dependencies version: * kfp==1.8.13 * kfp-pipeline-spec==0.1.16 * kfp-server-api==1.8.4 <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Steps to reproduce Based on [kfp-pipeline-repro-codes-8459](https://github.com/Writtic/kfp-pipeline-repro-codes-8459) ```bash bazel run //tools:gen_proto python main.py ``` ### Expected result Using `py_proto_library()` from com_google_protobuf, I hope to reuse part of pipeline_spec.proto from [kubeflow/pipelines](https://github.com/kubeflow/pipelines) for reusability and integrity. However, the Protobuf descriptor in the `kfp-pipeline-spec` package has a different package path from the actual protobuf file path, so I can't use it as is. Because of this, I added a dependency on [kubeflow/pipelines](https://github.com/kubeflow/pipelines), along with Bazel patches and a build file for [kubeflow/pipelines](https://github.com/kubeflow/pipelines). As a result, I can generate [pipeline_config_pb2.py](reproduce/proto/pipeline_config_pb2.py) with gen_proto.sh, which simply copies the `*_pb2.py` file from the Bazel runtime environment. But `pipeline_config_pb2.py` ultimately refers to `kfp.pipeline_spec`, which lives in the `kfp-pipeline-spec` Python package at runtime. And the protobuf file of `kfp.pipeline_spec` has no subpackage path. This brings some problems, as below. 
- Protobuf file, [pipeline_config.proto](reproduce/proto/pipeline_config.proto), should use `import "pipeline_spec.proto";` - `pb2.py` file, [pipeline_config_pb2.py](reproduce/proto/pipeline_config_pb2.py), should use `from kfp.pipeline_spec import pipeline_spec` because of `kfp-pipeline-spec` ### Materials and Reference - Github Repository: [kfp-pipeline-repro-codes-8459](https://github.com/Writtic/kfp-pipeline-repro-codes-8459) <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8459/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8459/timeline
null
reopened
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8451
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8451/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8451/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8451/events
https://github.com/kubeflow/pipelines/issues/8451
1,449,261,523
I_kwDOB-71UM5WYfnT
8,451
kfp component can not access gpu (on premise)
{ "login": "TranThanh96", "id": 26323599, "node_id": "MDQ6VXNlcjI2MzIzNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/26323599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TranThanh96", "html_url": "https://github.com/TranThanh96", "followers_url": "https://api.github.com/users/TranThanh96/followers", "following_url": "https://api.github.com/users/TranThanh96/following{/other_user}", "gists_url": "https://api.github.com/users/TranThanh96/gists{/gist_id}", "starred_url": "https://api.github.com/users/TranThanh96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TranThanh96/subscriptions", "organizations_url": "https://api.github.com/users/TranThanh96/orgs", "repos_url": "https://api.github.com/users/TranThanh96/repos", "events_url": "https://api.github.com/users/TranThanh96/events{/privacy}", "received_events_url": "https://api.github.com/users/TranThanh96/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @TranThanh96 , \r\nHere is the instruction of enabling GPU: https://www.kubeflow.org/docs/distributions/gke/pipelines/enable-gpu-and-tpu/\r\nMaybe you were missing `gpu_op.add_node_selector_constraint()`" ]
"2022-11-15T06:52:26"
"2022-12-01T23:53:54"
"2022-12-01T23:53:54"
NONE
null
### Environment ubuntu 20-04 cluster k3s: 1 node gpu: https://www.kubeflow.org/docs/components/pipelines/v1/installation/localcluster-deployment/#1-setting-up-a-cluster-on-k3s deployed gpu-operator ### Steps to reproduce this is the check code: ``` import kfp from kfp import dsl def gpu_smoking_check_op(): return dsl.ContainerOp( name='check', image='tensorflow/tensorflow:latest-gpu', command=['sh', '-c'], arguments=['nvidia-smi'] ).set_gpu_limit(1) @dsl.pipeline( name='GPU smoke check', description='smoke check as to whether GPU env is ready.' ) def gpu_pipeline(): client = kfp.Client() arguments = {} client.create_run_from_pipeline_func( gpu_pipeline, arguments=arguments) ``` and this is the result: sh: 1: nvidia-smi: not found ![image](https://user-images.githubusercontent.com/26323599/201849120-27970509-3898-4e41-a1b8-6ca4d1920eb2.png) all pods are running: ![image](https://user-images.githubusercontent.com/26323599/201849255-ee024079-4089-4dd8-bdbf-ba34fdcf91ed.png) nvidia-docker check ok: ![image](https://user-images.githubusercontent.com/26323599/201849317-803140fb-0f80-4116-ae18-1c1f99dc654d.png) How can I access the GPU from KFP?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8451/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8450
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8450/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8450/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8450/events
https://github.com/kubeflow/pipelines/issues/8450
1,449,167,659
I_kwDOB-71UM5WYIsr
8,450
[sdk] latest releases of sdk are marked as "pre-release" on PyPI
{ "login": "thesuperzapper", "id": 5735406, "node_id": "MDQ6VXNlcjU3MzU0MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/5735406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thesuperzapper", "html_url": "https://github.com/thesuperzapper", "followers_url": "https://api.github.com/users/thesuperzapper/followers", "following_url": "https://api.github.com/users/thesuperzapper/following{/other_user}", "gists_url": "https://api.github.com/users/thesuperzapper/gists{/gist_id}", "starred_url": "https://api.github.com/users/thesuperzapper/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thesuperzapper/subscriptions", "organizations_url": "https://api.github.com/users/thesuperzapper/orgs", "repos_url": "https://api.github.com/users/thesuperzapper/repos", "events_url": "https://api.github.com/users/thesuperzapper/events{/privacy}", "received_events_url": "https://api.github.com/users/thesuperzapper/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This is by design. KFP v2 is currently in progress, and we're not ready to claim GA yet. \r\nThe 2.0 alpha version of KFP included in Kubeflow 1.6 fully supports KFP v1 feature with the addition of v2 feature in alpha phase. Users can continue using KFP SDK 1.8.* with Kubeflow 1.6 for v1 features. Alternatively, they may choose to try out v2 features by explicitly upgrading to KFP SDK 2.0-beta." ]
"2022-11-15T04:58:53"
"2022-11-21T23:52:39"
"2022-11-21T23:52:32"
MEMBER
null
Currently, the latest version of the KFP SDK (`2.0.0b6`) is [marked as a "pre-release" on PyPI](https://pypi.org/project/kfp/#history). This is problematic for 2 main reasons: 1. If a user runs `pip install kfp` they will get `1.8.14`, which will not include 2.0.0 features 1. Kubeflow 1.6 [includes Kubeflow Pipelines `v2.0.0-alpha5`](https://www.kubeflow.org/docs/releases/kubeflow-1.6/#component-versions)
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8450/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8449
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8449/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8449/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8449/events
https://github.com/kubeflow/pipelines/issues/8449
1,449,118,162
I_kwDOB-71UM5WX8nS
8,449
[frontend] v2 ui doesn't show error cause
{ "login": "stobias123", "id": 590677, "node_id": "MDQ6VXNlcjU5MDY3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/590677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stobias123", "html_url": "https://github.com/stobias123", "followers_url": "https://api.github.com/users/stobias123/followers", "following_url": "https://api.github.com/users/stobias123/following{/other_user}", "gists_url": "https://api.github.com/users/stobias123/gists{/gist_id}", "starred_url": "https://api.github.com/users/stobias123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stobias123/subscriptions", "organizations_url": "https://api.github.com/users/stobias123/orgs", "repos_url": "https://api.github.com/users/stobias123/repos", "events_url": "https://api.github.com/users/stobias123/events{/privacy}", "received_events_url": "https://api.github.com/users/stobias123/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false }
[ { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @stobias123 , thank you for reaching out. Currently KFP v2 is in the alpha phase, and the fix is what we are currently working on.", "Hello @stobias123, we are happy to share that the logs feature is already included in our latest release (https://github.com/kubeflow/pipelines/releases/tag/2.0.0-rc.1). Please give a try and let us know where we can improve on our system! Also, Feel free to re-open this issue if you have any question!" ]
"2022-11-15T03:51:47"
"2023-05-24T17:21:01"
"2023-05-24T17:21:01"
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? AWS manifests (see https://www.kubeflow.org/docs/pipelines/installation/overview/). * KFP version: gcr.io/ml-pipeline/frontend:2.0.0-alpha.5 (version number shown at the bottom of the KFP UI left sidenav). ### Steps to reproduce Run this pipeline: https://github.com/kubeflow/pipelines/blob/master/samples/v2/lightweight_python_functions_v2_pipeline/lightweight_python_functions_v2_pipeline.py After fixing a few parameter naming issues, it runs and then fails without showing why. (In my case it was because of bucket issues, but I had to go to the logs to find that out.) ### Expected result See logs and understand why it fails. --- Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8449/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8449/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8446
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8446/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8446/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8446/events
https://github.com/kubeflow/pipelines/issues/8446
1,447,095,500
I_kwDOB-71UM5WQOzM
8,446
[backend] Unimplemented desc = unknown method ListTasksV1 for service api.TaskService
{ "login": "tanvithakur94", "id": 54823948, "node_id": "MDQ6VXNlcjU0ODIzOTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/54823948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanvithakur94", "html_url": "https://github.com/tanvithakur94", "followers_url": "https://api.github.com/users/tanvithakur94/followers", "following_url": "https://api.github.com/users/tanvithakur94/following{/other_user}", "gists_url": "https://api.github.com/users/tanvithakur94/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanvithakur94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanvithakur94/subscriptions", "organizations_url": "https://api.github.com/users/tanvithakur94/orgs", "repos_url": "https://api.github.com/users/tanvithakur94/repos", "events_url": "https://api.github.com/users/tanvithakur94/events{/privacy}", "received_events_url": "https://api.github.com/users/tanvithakur94/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "TL;DR: This should have been resolved now. \r\n\r\nLast week, we updated these two images, which we shouldn't have referenced via `latest` from the beginning:\r\nhttps://github.com/kubeflow/pipelines/blob/6b3b4df3d2a0dae0d10195e3d2fea76ed478af0c/backend/src/v2/compiler/argocompiler/argo.go#L110-L111\r\nThe change caused a mismatch between the API client and the API server. We reverted the images by manually adding `latest` label to the last working/matching images. In our next release, we will fix the problematic `latest` reference in the above code.", "I'm still seeing this on a fresh install", "> I'm still seeing this on a fresh install\r\n\r\n@stobias123 Sorry, I found the \"latest\" label we added to the old images was accidentally moved. Changed it back, this should work now. Please retry running any pipeline.", "@chensun I am seeing a different error now. Looks like the argo executor is not part of kfp-launcher image \r\n\r\n```\r\n containerStatuses:\r\n - containerID: containerd://4585e4aa7d7f41937c2f6a70321555d2c65070666a507a0a75b4c6857e171bd6\r\n image: sha256:374e2eca14d7602115e4be44f60b0587d8192a45298c353df4cfac3bfc7854f9\r\n imageID: gcr.io/ml-pipeline-test/dev/kfp-launcher-v2@sha256:4513cf5c10c252d94f383ce51a890514799c200795e3de5e90f91b98b2e2f959\r\n lastState: {}\r\n name: main\r\n ready: false\r\n restartCount: 0\r\n started: false\r\n state:\r\n terminated:\r\n containerID: containerd://4585e4aa7d7f41937c2f6a70321555d2c65070666a507a0a75b4c6857e171bd6\r\n exitCode: 128\r\n finishedAt: \"2023-01-04T20:57:59Z\"\r\n message: 'failed to create containerd task: failed to create shim task: OCI\r\n runtime create failed: runc create failed: unable to start container process:\r\n exec: \"/var/run/argo/argoexec\": stat /var/run/argo/argoexec: no such file\r\n or directory: unknown'\r\n reason: StartError\r\n startedAt: \"1970-01-01T00:00:00Z\"\r\n ", "@chensun I still get same error on apiserver v2.0.0.alpha6. 
Is there a way to fix it?", "I'm getting the same error too. @sergeyshevch Have you figured any temporary workaround?\r\n", "> I'm getting the same error too. @sergeyshevch Have you figured any temporary workaround?\n> \n> \n\nI returned to use pipelines sdk v1\nIt works fine ", "I would prefer not to use sdk v1 since I'd like to use `dsl.importer`. @chensun Any thoughts on why this still might be happening? Happy to try to fix it.", "@v-raja sorry for the slow response. I noticed the v2 image label got moved again. Just moved it back, can you retry and see if you still hit the issue?" ]
"2022-11-13T21:34:40"
"2023-03-28T17:24:58"
"2022-11-14T20:42:51"
NONE
null
### Environment kfp v2 SDK kfp version 2.0.0b6 kfp-pipeline-spec version 0.1.16 kfp-server-api version 2.0.0a6 ### Steps to reproduce We are trying to run a simple example from the [docs](https://www.kubeflow.org/docs/components/pipelines/v1/sdk-v2/python-function-components/) in order to test the kfp v2 SDK in the kubeflow namespace, as shown below: ``` import kfp import kfp.dsl as dsl #from kfp.v2.dsl import component from kfp.dsl import component @component def add(a: float, b: float) -> float: '''Calculates sum of two arguments''' return a + b @dsl.pipeline( name='addition-pipeline', description='An example pipeline that performs addition calculations.', # pipeline_root='gs://my-pipeline-root/example-pipeline' ) def add_pipeline(a: float = 1, b: float = 7): add_task = add(a=a, b=b) if __name__ == '__main__': # Compiling the pipeline kfp.compiler.Compiler().compile(pipeline_func=add_pipeline, package_path='pipeline1.yaml') ``` Once we submit this pipeline, we see that the system-dag-driver pod ran successfully. However, the system-container-driver pod terminates with an error: `I1113 20:31:38.515160 27 main.go:213] output ExecutorInput:{ "inputs": { "parameterValues": { "parallelism": "2" } }, "outputs": { "parameters": { "Output": { "outputFile": "/tmp/kfp/outputs/Output" } }, "outputFile": "/tmp/kfp_outputs/output_metadata.json" } } F1113 20:31:38.515178 27 main.go:74] KFP driver: driver.Container(pipelineName=pipeline/stress-test-v2, runID=caf03c46-c9db-4398-afb9-cea0821a3c51, task="get-loop-args", component="comp-get-loop-args", dagExecutionID=1085086, componentSpec) failed: failure while getting executionCache: failed to list tasks: rpc error: code = Unimplemented desc = unknown method ListTasksV1 for service api.TaskService` Also, another issue we are facing is that when we submit the pipeline from the Kubeflow UI, we get the error `Cannot get MLMD objects from Metadata store. 
Cannot find context with {"typeName":"system.PipelineRun": Unknown Content-type received.` <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8446/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8446/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8445
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8445/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8445/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8445/events
https://github.com/kubeflow/pipelines/issues/8445
1,446,516,138
I_kwDOB-71UM5WOBWq
8,445
Kubeflow pipeline is displayed as single line json instead of formatted YAML file with KFP 1.8.5
{ "login": "hfarooqui", "id": 7764971, "node_id": "MDQ6VXNlcjc3NjQ5NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/7764971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hfarooqui", "html_url": "https://github.com/hfarooqui", "followers_url": "https://api.github.com/users/hfarooqui/followers", "following_url": "https://api.github.com/users/hfarooqui/following{/other_user}", "gists_url": "https://api.github.com/users/hfarooqui/gists{/gist_id}", "starred_url": "https://api.github.com/users/hfarooqui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hfarooqui/subscriptions", "organizations_url": "https://api.github.com/users/hfarooqui/orgs", "repos_url": "https://api.github.com/users/hfarooqui/repos", "events_url": "https://api.github.com/users/hfarooqui/events{/privacy}", "received_events_url": "https://api.github.com/users/hfarooqui/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-12T16:31:48"
"2023-08-30T07:41:58"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? KFP is deployed on an EKS cluster (1.22) with S3+RDS using the following procedure: https://github.com/kubeflow/pipelines/tree/master/manifests/kustomize/env/aws * KFP version: 1.8.5 ### Steps to reproduce After upgrading KFP from 1.7.1 to 1.8.5, we noticed that a pipeline.yaml uploaded from the Kubeflow dashboard shows up as JSON; as a result, it is displayed on a single line (instead of formatted YAML), making it hard to read. This worked fine with the older version (1.7.1+eks1.21). NOTE: This issue is also seen with a fresh deployment of KFP v1.8.5 ### Expected result The pipeline YAML should appear as formatted text with line breaks, as in older versions, instead of a single-line JSON file. Impacted by this bug? Give it a 👍 ![1 8 5 (bad)](https://user-images.githubusercontent.com/7764971/201484274-5a029ac4-5cc6-40ed-bb3a-4ade7a83fd14.png) ![1 7 1 (good)](https://user-images.githubusercontent.com/7764971/201484273-637d32f5-64ad-4200-8e1e-da3f28b7d31b.png)
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8445/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8445/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8436
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8436/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8436/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8436/events
https://github.com/kubeflow/pipelines/issues/8436
1,440,967,563
I_kwDOB-71UM5V42uL
8,436
Publish conformance test guidelines to the KF community
{ "login": "james-jwu", "id": 54086668, "node_id": "MDQ6VXNlcjU0MDg2NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/54086668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/james-jwu", "html_url": "https://github.com/james-jwu", "followers_url": "https://api.github.com/users/james-jwu/followers", "following_url": "https://api.github.com/users/james-jwu/following{/other_user}", "gists_url": "https://api.github.com/users/james-jwu/gists{/gist_id}", "starred_url": "https://api.github.com/users/james-jwu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/james-jwu/subscriptions", "organizations_url": "https://api.github.com/users/james-jwu/orgs", "repos_url": "https://api.github.com/users/james-jwu/repos", "events_url": "https://api.github.com/users/james-jwu/events{/privacy}", "received_events_url": "https://api.github.com/users/james-jwu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Will continue driving KF conformance through a project in KF repo: https://github.com/orgs/kubeflow/projects/52" ]
"2022-11-08T22:01:25"
"2022-11-08T22:06:20"
"2022-11-08T22:05:59"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8436/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8435
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8435/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8435/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8435/events
https://github.com/kubeflow/pipelines/issues/8435
1,440,965,753
I_kwDOB-71UM5V42R5
8,435
KFP conformance test design
{ "login": "james-jwu", "id": 54086668, "node_id": "MDQ6VXNlcjU0MDg2NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/54086668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/james-jwu", "html_url": "https://github.com/james-jwu", "followers_url": "https://api.github.com/users/james-jwu/followers", "following_url": "https://api.github.com/users/james-jwu/following{/other_user}", "gists_url": "https://api.github.com/users/james-jwu/gists{/gist_id}", "starred_url": "https://api.github.com/users/james-jwu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/james-jwu/subscriptions", "organizations_url": "https://api.github.com/users/james-jwu/orgs", "repos_url": "https://api.github.com/users/james-jwu/repos", "events_url": "https://api.github.com/users/james-jwu/events{/privacy}", "received_events_url": "https://api.github.com/users/james-jwu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Design doc: https://docs.google.com/document/d/1_til1HkVBFQ1wCgyUpWuMlKRYI4zP1YPmNxr75mzcps/edit" ]
"2022-11-08T21:59:53"
"2022-11-08T22:01:01"
"2022-11-08T22:01:01"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8435/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8429
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8429/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8429/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8429/events
https://github.com/kubeflow/pipelines/issues/8429
1,438,319,937
I_kwDOB-71UM5VuwVB
8,429
Need to replace Minio with s3 as an alternative for artifact storage in kubeflow
{ "login": "SoujanyaSan", "id": 117651360, "node_id": "U_kgDOBwM3oA", "avatar_url": "https://avatars.githubusercontent.com/u/117651360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SoujanyaSan", "html_url": "https://github.com/SoujanyaSan", "followers_url": "https://api.github.com/users/SoujanyaSan/followers", "following_url": "https://api.github.com/users/SoujanyaSan/following{/other_user}", "gists_url": "https://api.github.com/users/SoujanyaSan/gists{/gist_id}", "starred_url": "https://api.github.com/users/SoujanyaSan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SoujanyaSan/subscriptions", "organizations_url": "https://api.github.com/users/SoujanyaSan/orgs", "repos_url": "https://api.github.com/users/SoujanyaSan/repos", "events_url": "https://api.github.com/users/SoujanyaSan/events{/privacy}", "received_events_url": "https://api.github.com/users/SoujanyaSan/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hi @SoujanyaSan, thank you for bringing up the request. We have a similar feature request here #7878. Please see updates in this request." ]
"2022-11-07T12:55:00"
"2022-11-10T23:54:37"
"2022-11-10T23:54:37"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Pipeline manifests, metrics, and logs of each pipeline node should be exported to s3 from ml pipelines, and should also be retrievable from s3 for visualization in kubeflow ### What is the use case or pain point? Facing difficulty in disintegrating Minio from kubeflow (as by default kubeflow ships with Minio) ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8429/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8429/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8427
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8427/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8427/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8427/events
https://github.com/kubeflow/pipelines/issues/8427
1,437,252,717
I_kwDOB-71UM5Vqrxt
8,427
[Question] How to save GPU costs in kubeflow pipelines?
{ "login": "r-matsuzaka", "id": 76238346, "node_id": "MDQ6VXNlcjc2MjM4MzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/76238346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/r-matsuzaka", "html_url": "https://github.com/r-matsuzaka", "followers_url": "https://api.github.com/users/r-matsuzaka/followers", "following_url": "https://api.github.com/users/r-matsuzaka/following{/other_user}", "gists_url": "https://api.github.com/users/r-matsuzaka/gists{/gist_id}", "starred_url": "https://api.github.com/users/r-matsuzaka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/r-matsuzaka/subscriptions", "organizations_url": "https://api.github.com/users/r-matsuzaka/orgs", "repos_url": "https://api.github.com/users/r-matsuzaka/repos", "events_url": "https://api.github.com/users/r-matsuzaka/events{/privacy}", "received_events_url": "https://api.github.com/users/r-matsuzaka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @r-matsuzaka, thank you for your question! We offer component level support for node selection. See reference here: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.dsl.html#kfp.dsl.BaseOp.add_node_selector_constraint\r\nPlease let us know if you have any more questions." ]
"2022-11-06T01:40:52"
"2022-11-10T23:47:43"
"2022-11-10T23:47:42"
NONE
null
Excuse me, is it possible to run each step on a different node? For example, in this case `cifar-preproc` runs on a CPU node but `cifar-train` runs on a GPU node. To save costs, I want to use GPUs only when they are needed. ![image](https://user-images.githubusercontent.com/76238346/200150043-1d812f97-11e7-4182-96cf-4f244c2f6484.png) https://cloud.google.com/blog/topics/developers-practitioners/scalable-ml-workflows-using-pytorch-kubeflow-pipelines-and-vertex-pipelines?hl=en
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8427/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8424
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8424/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8424/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8424/events
https://github.com/kubeflow/pipelines/issues/8424
1,436,779,088
I_kwDOB-71UM5Vo4JQ
8,424
[bug] Kubeflow pipeline doesn't pick-up metrics from file /mlpipeline-metrics.json
{ "login": "crbl1122", "id": 30111494, "node_id": "MDQ6VXNlcjMwMTExNDk0", "avatar_url": "https://avatars.githubusercontent.com/u/30111494?v=4", "gravatar_id": "", "url": "https://api.github.com/users/crbl1122", "html_url": "https://github.com/crbl1122", "followers_url": "https://api.github.com/users/crbl1122/followers", "following_url": "https://api.github.com/users/crbl1122/following{/other_user}", "gists_url": "https://api.github.com/users/crbl1122/gists{/gist_id}", "starred_url": "https://api.github.com/users/crbl1122/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/crbl1122/subscriptions", "organizations_url": "https://api.github.com/users/crbl1122/orgs", "repos_url": "https://api.github.com/users/crbl1122/repos", "events_url": "https://api.github.com/users/crbl1122/events{/privacy}", "received_events_url": "https://api.github.com/users/crbl1122/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hi @crbl1122, the correct name for the metrics file should be `mlpipeline-ui-metadata.json`. See reference here: https://www.kubeflow.org/docs/components/pipelines/v1/sdk/output-viewer/#v1-sdk-writing-out-metadata-for-the-output-viewers\r\nPlease let us know if you have any questions. Also, if you upgrade to kfp v2, you won't need the json file to specify the metrics. See: https://www.kubeflow.org/docs/components/pipelines/v1/sdk/output-viewer/#v1-sdk-writing-out-metadata-for-the-output-viewers", "Hi @Linchin the above didn't work either. Please note that as written in the issue description, I am testing the metrics visualization from a YAML specified component, containerized and registered in GCP Artifact registry (AR), not a function based component neither one defined through containerOp. So what I have is the containerized module existing in AR and the YAML specification file. The KBF version I printed above (v1) is from the virtual machine where I define the pipeline. I am not aware if once containerized and registered in AR, it is possible that a component will use under the hood, a different kubeflow version, like v2?\r\nNevertheless, I was trying to follow the instructions from this document: [https://www.kubeflow.org/docs/components/pipelines/v1/sdk/pipelines-metrics/](url).\r\nNow, I tried also the document you proposed with a tabular visualization of the sklearn's classification report and it didn't work either.\r\nOpening the file exported by the component during pipeline run: gs://.../training_.../MLPipeline UI Metadata I observe that no metrics values are saved in it, only this text: `{\"outputs\": [{\"type\": \"table\", \"storage\": \"gcs\", \"format\": \"csv\", \"header\": [\"precision\", \"recall\", \"f1-score\", \"support\"], \"source\": \"/mlpipeline-ui-metadata.json\"}]}\r\n`\r\nI checked in the container and the source csv /mlpipeline-ui-metadata.json do contain the actual classification report.\r\nPlease could you send me the correct link 
for kfp v2 metrics visualization ? You've sent twice the same link for kfp v1 document.\r\n\r\nHere is the adjusted code:\r\n\r\n```\r\n`def table_vis(mlpipeline_ui_metadata_path, prediction, predict_proba, y_true, target_name):\r\n classif_report_dict = classification_report(y_true, prediction,\\\r\n target_names=[target_name + '_0', target_name + '_1'], output_dict=True)\r\n print(classif_report_dict)\r\n report_df = pd.DataFrame(classif_report_dict)\r\n report_csv = '/mlpipeline-ui-metadata.json'\r\n # save\r\n report_df.to_csv(report_csv, index = False)\r\n \r\n print('print csv content')\r\n import sys\r\n with open(report_csv, 'r') as f:\r\n contents = f.read()\r\n print (contents)\r\n\r\n \r\n metadata = {\r\n 'outputs' : [{\r\n 'type': 'table',\r\n 'storage': 'gcs',\r\n 'format': 'csv',\r\n #'header': [x['name'] for x in schema],\r\n 'header': ['precision', 'recall', 'f1-score', 'support'],\r\n 'source': report_csv\r\n }]\r\n }\r\n \r\n with open(mlpipeline_ui_metadata_path, 'w') as metadata_file:\r\n json.dump(metadata, metadata_file)`\r\n\r\nYAML specs:\r\n\r\n`%%writefile training.yaml\r\nname: training\r\ndescription: Scikit trainer. Receives the name of train and validation BQ tables. 
Train, evaluate and save a model.\r\n\r\ninputs:\r\n- {name: training_table, type: String, description: 'name of the BQ training table'}\r\n- {name: validation_table, type: String, description: 'name of the BQ validation table'}\r\n- {name: target_name, type: String, description: 'Name of the target variable'}\r\n- {name: max_depth, type: Integer, description: 'max depth'}\r\n- {name: learning_rate, type: Float, description: 'learning rate'}\r\n- {name: n_estimators, type: Integer, description: 'n estimators'}\r\n\r\noutputs:\r\n- {name: gcs_model_path, type: OutputPath, description: 'output directory where the model is saved'}\r\n# - {name: MLPipeline_Metrics, type: Metrics, description: 'output directory where the metrics are saved'}\r\n- {name: MLPipeline UI Metadata, type: Metrics, description: 'output directory where the metrics are saved'}\r\n\r\nimplementation:\r\n container:\r\n image: ..... train_comp:latest\r\n command: [\r\n /src/component/train.py,\r\n --training_table, {inputValue: training_table},\r\n --validation_table, {inputValue: validation_table},\r\n --target_name, {inputValue: target_name},\r\n --gcs_model_path, {outputPath: gcs_model_path},\r\n # --mlpipeline_metrics_path, {outputPath: MLPipeline_Metrics},\r\n --mlpipeline_ui_metadata_path, {outputPath: MLPipeline UI Metadata},\r\n --max_depth, {inputValue: max_depth},\r\n --learning_rate, {inputValue: learning_rate},\r\n --n_estimators, {inputValue: n_estimators}\r\n ]`\r\n\r\n```" ]
"2022-11-05T02:35:14"
"2022-11-11T12:41:02"
"2022-11-10T23:50:32"
NONE
null
### Environment <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> kfp 1.8.14 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 ### Steps to reproduce I've ensured that both the metrics artifact is correctly named: "mlpipeline-metrics" and the file saved in the root of the container: "mlpipeline-metrics.json". Still the Kubeflow pipeline doesn't display the metrics. I have seen this problem reported by others but seems to be a bug because no solution was documented. I am testing the metrics display in Kubeflow UI for a pipeline running on GCP. I use the reusable component method not python function based components. The component is containerized and pushed to artifact registry and I reference it through YAML specification. 
``` def produce_metrics( mlpipeline_metrics): accuracy = 0.9 metrics = { 'metrics': [{ 'name': 'accuracy-score', # 'numberValue': accuracy, # 'format': "PERCENTAGE", # }] } # save to mlpipeline-metrics.json file in the root with open('/mlpipeline-metrics.json', 'w') as f: json.dump(metrics, f) # save to artifact path with open(mlpipeline_metrics + '.json', 'w') as f: json.dump(metrics, f) def main_fn(arguments): training_table_bq = arguments.training_table validation_table_bq = arguments.validation_table schema_dict = generate_schema(training_table_bq) target_name = arguments.target_name gcs_model_path = arguments.gcs_model_path mlpipeline_metrics = arguments.mlpipeline_metrics # run train evaluate gcs_model_path = train_evaluate(training_table_bq, validation_table_bq, schema_dict, target_name, gcs_model_path, mlpipeline_metrics) return gcs_model_path if __name__ == '__main__': parser = argparse.ArgumentParser(description = "train evaluate model") parser.add_argument("--training_table", type = str, help = 'Name of the input training table') parser.add_argument("--validation_table", type = str, help = 'Name of the input validation table') parser.add_argument("--target_name", type = str, help = 'Name of the target variable') parser.add_argument("--gcs_model_path", type = str, help = 'output directory where the model is saved') # parser.add_argument('--ui_metadata_output_path', # type=str, # default='/mlpipeline-ui-metadata.json', # help='Local output path for the file containing UI metadata JSON structure.') parser.add_argument('--mlpipeline_metrics', type=str, required = False, default='/mlpipeline-metrics.json', help='Local output path for the file containing metrics JSON structure.') args = parser.parse_args() ``` ### Expected result <!-- What should the correct behavior be? --> ### Materials and reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. 
--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8424/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8420
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8420/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8420/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8420/events
https://github.com/kubeflow/pipelines/issues/8420
1,435,040,964
I_kwDOB-71UM5ViPzE
8,420
[sdk] bad generation of components path when building with target image in Windows
{ "login": "Sergiodiaz53", "id": 6660257, "node_id": "MDQ6VXNlcjY2NjAyNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6660257?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sergiodiaz53", "html_url": "https://github.com/Sergiodiaz53", "followers_url": "https://api.github.com/users/Sergiodiaz53/followers", "following_url": "https://api.github.com/users/Sergiodiaz53/following{/other_user}", "gists_url": "https://api.github.com/users/Sergiodiaz53/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sergiodiaz53/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sergiodiaz53/subscriptions", "organizations_url": "https://api.github.com/users/Sergiodiaz53/orgs", "repos_url": "https://api.github.com/users/Sergiodiaz53/repos", "events_url": "https://api.github.com/users/Sergiodiaz53/events{/privacy}", "received_events_url": "https://api.github.com/users/Sergiodiaz53/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "JOCSTAA", "id": 65559367, "node_id": "MDQ6VXNlcjY1NTU5MzY3", "avatar_url": "https://avatars.githubusercontent.com/u/65559367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JOCSTAA", "html_url": "https://github.com/JOCSTAA", "followers_url": "https://api.github.com/users/JOCSTAA/followers", "following_url": "https://api.github.com/users/JOCSTAA/following{/other_user}", "gists_url": "https://api.github.com/users/JOCSTAA/gists{/gist_id}", "starred_url": "https://api.github.com/users/JOCSTAA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JOCSTAA/subscriptions", "organizations_url": "https://api.github.com/users/JOCSTAA/orgs", "repos_url": "https://api.github.com/users/JOCSTAA/repos", "events_url": "https://api.github.com/users/JOCSTAA/events{/privacy}", "received_events_url": "https://api.github.com/users/JOCSTAA/received_events", "type": "User", "site_admin": false }
[ { "login": "JOCSTAA", "id": 65559367, "node_id": "MDQ6VXNlcjY1NTU5MzY3", "avatar_url": "https://avatars.githubusercontent.com/u/65559367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JOCSTAA", "html_url": "https://github.com/JOCSTAA", "followers_url": "https://api.github.com/users/JOCSTAA/followers", "following_url": "https://api.github.com/users/JOCSTAA/following{/other_user}", "gists_url": "https://api.github.com/users/JOCSTAA/gists{/gist_id}", "starred_url": "https://api.github.com/users/JOCSTAA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JOCSTAA/subscriptions", "organizations_url": "https://api.github.com/users/JOCSTAA/orgs", "repos_url": "https://api.github.com/users/JOCSTAA/repos", "events_url": "https://api.github.com/users/JOCSTAA/events{/privacy}", "received_events_url": "https://api.github.com/users/JOCSTAA/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-03T17:39:38"
"2023-08-31T07:42:19"
null
NONE
null
### Environment Windows 10 with python 3.9 running in a venv * KFP SDK version: 1.8.14 * All dependencies version: kfp 1.8.14 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 Hello there, I'm having some problems regarding the building of python-based components in containers. I'm trying to pack a set of components into the same image. ### Steps to reproduce My folder structure looks like this ``` src/generic_components/ |-- component_metadata/ |--- download_from_azure.yaml |--- download_from_ftp.yaml |--- visualize_dataset.yaml |-- download/ |-- download_from_azure/ |--- download_from_azure_comp.py |--- test/ |--- download_from_ftp/ |--- download_from_ftp_comp.py |--- test/ |-- visualization/ |--- visualize_dataset/ |--- visualize_dataset_comp.py |--- test/ |-- utils/ |--- modules that are used in the _comp files ``` All of them use the same target_image and the same base_image. So I'm using: `kfp components build src/generic_components/ --component-filepattern */*/*_comp.py` Then, both my kfp_config.ini and my dockerfile are generated using Windows paths instead of Linux paths (my base_image is based on Linux), so I have: **kfp_config.ini** ``` [Components] download_from_azure = download\download_from_azure_comp.py .... ``` **dockerfile** ``` # Generated by KFP. FROM python:3.8 WORKDIR \usr\local\src\kfp\components COPY requirements.txt requirements.txt RUN pip install --no-cache-dir -r requirements.txt RUN pip install --no-cache-dir kfp==1.8.14 COPY . . ``` This ended up creating the docker image whose first folder is: `'/usrlocalsrckfpcomponents/'` And looking for the component in: `'./download\download_from_azure\'` ### Expected result So, I'm bypassing the kfp command. I'm generating the component.yaml file by running the script, creating my own kfp_config.ini and my own dockerfile, and building/pushing the image myself. This way everything is working as expected in kubeflow. 
I can have several components in the same image, maintained in the same project and with shared modules between them. I expected that `kfp components build` would already generate this with the correct path, agnostic of the platform. Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8420/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8420/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8418
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8418/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8418/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8418/events
https://github.com/kubeflow/pipelines/issues/8418
1,434,958,638
I_kwDOB-71UM5Vh7su
8,418
[bug] <Component container doesn't start>
{ "login": "crbl1122", "id": 30111494, "node_id": "MDQ6VXNlcjMwMTExNDk0", "avatar_url": "https://avatars.githubusercontent.com/u/30111494?v=4", "gravatar_id": "", "url": "https://api.github.com/users/crbl1122", "html_url": "https://github.com/crbl1122", "followers_url": "https://api.github.com/users/crbl1122/followers", "following_url": "https://api.github.com/users/crbl1122/following{/other_user}", "gists_url": "https://api.github.com/users/crbl1122/gists{/gist_id}", "starred_url": "https://api.github.com/users/crbl1122/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/crbl1122/subscriptions", "organizations_url": "https://api.github.com/users/crbl1122/orgs", "repos_url": "https://api.github.com/users/crbl1122/repos", "events_url": "https://api.github.com/users/crbl1122/events{/privacy}", "received_events_url": "https://api.github.com/users/crbl1122/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hi @crbl1122, can you please share how you define the component YAML? Is `/src/components/program.py` coming from the YAML definition?", "Hi @chensun I realize that I overlooked the discrepancy in path definition with the YAML files. I was in error thinking that the reference to 'program.py' is made in the Dockerfile declaration of workdir and entry point (above). Will close this as it is not a bug.\r\n", "It is not a bug. It was due to a discrepancy in path definition between the YAML and Dockerfile files" ]
"2022-11-03T16:36:34"
"2022-11-04T04:43:26"
"2022-11-04T04:42:28"
NONE
null
### Environment <!-- Please fill in those that seem relevant. I am testing a yaml specified component in a GCP Kubeflow pipeline. The problem is that the component container doesn't start because it is searching for the "program.py" test module on a different path than the one specified in the Dockerfile as the working directory and where the module actually is placed. Please observe that the path reported in the error: "/src/components/program.py" is different from the one in the Dockerfile: "/pipelines/component/src". `the replica workerpool0-0 exited with a non-zero status of 127. Termination reason: ContainerCannotRun. Termination log message: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "/src/components/program.py": stat /src/components/program.py: no such file or directory: unknown FROM python:3.9 RUN mkdir -p /pipelines/component/src WORKDIR /pipelines/component/src COPY program.py /pipelines/component/src/program.py COPY input.txt /pipelines/component/src/input.txt ENTRYPOINT ["python", "program.py"]` --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> kfp 1.8.14 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> I am testing a yaml specified component in a GCP Kubeflow pipeline. 
The problem is that the component container doesn't start because it is searching for the "program.py" test module on a different path than the one specified in the Dockerfile as the working directory and where the module actually is placed. Please observe that the path reported in the error: "/src/components/program.py" is different from the one in the Dockerfile: "/pipelines/component/src". `the replica workerpool0-0 exited with a non-zero status of 127. Termination reason: ContainerCannotRun. Termination log message: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "/src/components/program.py": stat /src/components/program.py: no such file or directory: unknown FROM python:3.9 RUN mkdir -p /pipelines/component/src WORKDIR /pipelines/component/src COPY program.py /pipelines/component/src/program.py COPY input.txt /pipelines/component/src/input.txt ENTRYPOINT ["python", "program.py"]` ### Expected result <!-- What should the correct behavior be? --> ### Materials and reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8418/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8418/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8417
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8417/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8417/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8417/events
https://github.com/kubeflow/pipelines/issues/8417
1,434,942,596
I_kwDOB-71UM5Vh3yE
8,417
[bug] <Kubeflow Component container doesn't start >
{ "login": "crbl1122", "id": 30111494, "node_id": "MDQ6VXNlcjMwMTExNDk0", "avatar_url": "https://avatars.githubusercontent.com/u/30111494?v=4", "gravatar_id": "", "url": "https://api.github.com/users/crbl1122", "html_url": "https://github.com/crbl1122", "followers_url": "https://api.github.com/users/crbl1122/followers", "following_url": "https://api.github.com/users/crbl1122/following{/other_user}", "gists_url": "https://api.github.com/users/crbl1122/gists{/gist_id}", "starred_url": "https://api.github.com/users/crbl1122/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/crbl1122/subscriptions", "organizations_url": "https://api.github.com/users/crbl1122/orgs", "repos_url": "https://api.github.com/users/crbl1122/repos", "events_url": "https://api.github.com/users/crbl1122/events{/privacy}", "received_events_url": "https://api.github.com/users/crbl1122/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2022-11-03T16:25:08"
"2022-11-03T16:25:37"
"2022-11-03T16:25:37"
NONE
null
### Environment <!-- I am testing a container based component specified with a YAML file. While launching the pipeline in GCP, the container doesn't start because it cannot find the program.py module. I observe that it is searching for the program.py module on a different path than the one specified in the Dockerfile as the working directory and where the module actually is placed. I do not understand why it is searching for the module on a totally different path? The error message. Please note that the path inside the container, "/src/components/program.py" from the error, is different from the one specified in the Dockerfile: "/pipelines/component/src". `The replica workerpool0-0 exited with a non-zero status of 127. Termination reason: ContainerCannotRun. Termination log message: failed to create shim: OCI runtime create failed: runc create failed: unable to start container process: exec: "/src/components/program.py": stat /src/components/program.py: no such file or directory: unknown ## Dockerfile FROM python:3.9 RUN mkdir -p /pipelines/component/src WORKDIR /pipelines/component/src COPY program.py /pipelines/component/src/program.py COPY input.txt /pipelines/component/src/input.txt ENTRYPOINT ["python", "program.py"]` --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> kfp 1.8.14 kfp-pipeline-spec 0.1.16 kfp-server-api 1.8.5 ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. 
--> ### Expected result <!-- What should the correct behavior be? --> ### Materials and reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
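A hedged note on the report above: the error path `/src/components/program.py` suggests the component's YAML spec supplies its own command/args that override the image's `WORKDIR`-relative `ENTRYPOINT`. One common workaround (a sketch, not from the report — the paths assume the Dockerfile shown above) is to reference the script by absolute path so it resolves regardless of the working directory the runtime uses:

```dockerfile
FROM python:3.9
RUN mkdir -p /pipelines/component/src
WORKDIR /pipelines/component/src
COPY program.py /pipelines/component/src/program.py
COPY input.txt /pipelines/component/src/input.txt
# Absolute path instead of the WORKDIR-relative "program.py"
ENTRYPOINT ["python", "/pipelines/component/src/program.py"]
```

If the component YAML sets a `command`, that command must likewise use the absolute path, since it replaces the `ENTRYPOINT` entirely.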
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8417/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8417/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8413
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8413/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8413/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8413/events
https://github.com/kubeflow/pipelines/issues/8413
1,433,442,497
I_kwDOB-71UM5VcJjB
8,413
[bug] Compiling a component raises AttributeError: 'NoneType' object has no attribute '_set_metadata'
{ "login": "Rabelaiss", "id": 20670578, "node_id": "MDQ6VXNlcjIwNjcwNTc4", "avatar_url": "https://avatars.githubusercontent.com/u/20670578?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rabelaiss", "html_url": "https://github.com/Rabelaiss", "followers_url": "https://api.github.com/users/Rabelaiss/followers", "following_url": "https://api.github.com/users/Rabelaiss/following{/other_user}", "gists_url": "https://api.github.com/users/Rabelaiss/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rabelaiss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rabelaiss/subscriptions", "organizations_url": "https://api.github.com/users/Rabelaiss/orgs", "repos_url": "https://api.github.com/users/Rabelaiss/repos", "events_url": "https://api.github.com/users/Rabelaiss/events{/privacy}", "received_events_url": "https://api.github.com/users/Rabelaiss/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "solved installing V2 through `pip install kfp --pre`" ]
"2022-11-02T16:28:04"
"2022-11-03T14:39:28"
"2022-11-03T14:39:28"
NONE
null
Following the official documentation on Kubeflow V2 (link below), I'm trying to compile a simple component which just prints a string, but it raises an `AttributeError` which I don't understand. What's going on?

### Environment
Windows 11
Python 3.10.1
kfp 1.8.14
kfp-pipeline-spec 0.1.16
kfp-server-api 1.8.5

### Steps to reproduce

```python
from kfp import dsl

@dsl.component
def printer():
    print('----o----')

from kfp import compiler
cmplr = compiler.Compiler()
cmplr.compile(printer, package_path='printer.yaml')
```

```
----o----
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\gtu\AppData\Local\Programs\Python\Python310\lib\site-packages\kfp\compiler\compiler.py", line 1175, in compile
    self._create_and_write_workflow(
  File "C:\Users\gtu\AppData\Local\Programs\Python\Python310\lib\site-packages\kfp\compiler\compiler.py", line 1227, in _create_and_write_workflow
    workflow = self._create_workflow(pipeline_func, pipeline_name,
  File "C:\Users\gtu\AppData\Local\Programs\Python\Python310\lib\site-packages\kfp\compiler\compiler.py", line 1005, in _create_workflow
    pipeline_func(*args_list, **kwargs_dict)
  File "C:\Users\gtu\AppData\Local\Programs\Python\Python310\lib\site-packages\kfp\dsl\_component.py", line 118, in _component
    container_op._set_metadata(component_meta)
AttributeError: 'NoneType' object has no attribute '_set_metadata'
```

### Expected result
Generation of file `printer.yaml`.

### Materials and reference
https://www.kubeflow.org/docs/components/pipelines/v2/compile-a-pipeline/#compiling

### Labels
/area components

---

Impacted by this bug? Give it a 👍
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8413/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8413/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8411
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8411/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8411/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8411/events
https://github.com/kubeflow/pipelines/issues/8411
1,431,932,301
I_kwDOB-71UM5VWY2N
8,411
Pipelines WG Roadmap for KF 1.7
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "cc @jbottum @chensun @james-jwu ", "/Priority P1", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-01T19:00:34"
"2023-08-31T07:42:21"
null
COLLABORATOR
null
This is for tracking the progress for KFP deliverables in Kubeflow 1.7:

- [ ] Enter KFP v2 beta phase
- [ ] Introduce KFP APIv2 package, which is a new standard for communicating with the KFP server.
- [ ] Complete Sub-DAG visualization on the KFPv2 UI.
- [ ] [KFPv2 Documentation](https://www.kubeflow.org/docs/components/pipelines/v2/)
- [ ] [Tentative] Status IR in Backend Run API.
- [ ] SDK
  - [ ] Support parallelism setting in ParallelFor
  - [ ] Support pipeline as a component and exit handler.
  - [ ] Support [dynamic importer metadata](https://github.com/kubeflow/pipelines/pull/7660)
  - [ ] Authoring pipelines with a list of artifacts.
  - [ ] Support for parallel-for fan-in.
  - [ ] Multiple Condition fan-in.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8411/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8411/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8408
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8408/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8408/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8408/events
https://github.com/kubeflow/pipelines/issues/8408
1,431,708,642
I_kwDOB-71UM5VViPi
8,408
[backend] ML Metadata writer errors with - Received message larger than max
{ "login": "Sharathmk99", "id": 3970340, "node_id": "MDQ6VXNlcjM5NzAzNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/3970340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sharathmk99", "html_url": "https://github.com/Sharathmk99", "followers_url": "https://api.github.com/users/Sharathmk99/followers", "following_url": "https://api.github.com/users/Sharathmk99/following{/other_user}", "gists_url": "https://api.github.com/users/Sharathmk99/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sharathmk99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sharathmk99/subscriptions", "organizations_url": "https://api.github.com/users/Sharathmk99/orgs", "repos_url": "https://api.github.com/users/Sharathmk99/repos", "events_url": "https://api.github.com/users/Sharathmk99/events{/privacy}", "received_events_url": "https://api.github.com/users/Sharathmk99/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @Sharathmk99, I'm not sure if it's a good idea to allow configure the limit, there likely will be other system limitation, for example, at DB storage layer. \r\nCan you help explain a bit your use case, why did you get such large content for artifacts?", "@chensun I was not facing this issue initially, but started to face once Kubeflow has more than 5k pipeline runs. We are not logging any metadata outside metadata-writer.\r\nLooks like we have to implement pagination on how metadata-writer queries metadata-server. Please see the issue here https://github.com/google/ml-metadata/issues/74 and https://github.com/google/ml-metadata/issues/42", "@chensun @Sharathmk99 \r\nI would like to address my issue here as well since I think it has the same root cause.\r\n\r\nI am using `kfp v1.8.9` and I have bunch of recurring pipelines that run every 10 minutes; and after ~ 1 week (> 5000 runs), I get the following error:\r\n\r\n```\r\n──────────────────── Traceback (most recent call last) ──────────────────────╮\r\n│ /usr/local/lib/python3.9/site-packages/ml_metadata/metadata_store/metadata_s │\r\n│ tore.py:213 in _call_method │\r\n│ │\r\n│ 210 │ else: │\r\n│ 211 │ grpc_method = getattr(self._metadata_store_stub, method_name) │\r\n│ 212 │ try: │\r\n│ ❱ 213 │ │ response.CopyFrom(grpc_method(request, timeout=self._grpc_tim │\r\n│ 214 │ except grpc.RpcError as e: │\r\n│ 215 │ │ # RpcError code uses a tuple to specify error code and short │\r\n│ 216 │ │ # description. 
│\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/grpc/_channel.py:946 in __call__ │\r\n│ │\r\n│ 943 │ │ │ │ compression=None): │\r\n│ 944 │ │ state, call, = self._blocking(request, timeout, metadata, cre │\r\n│ 945 │ │ │ │ │ │ │ │ │ wait_for_ready, compression) │\r\n│ ❱ 946 │ │ return _end_unary_response_blocking(state, call, False, None) │\r\n│ 947 │ │\r\n│ 948 │ def with_call(self, │\r\n│ 949 │ │ │ │ request, │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/grpc/_channel.py:849 in │\r\n│ _end_unary_response_blocking │\r\n│ │\r\n│ 846 │ │ else: │\r\n│ 847 │ │ │ return state.response │\r\n│ 848 │ else: │\r\n│ ❱ 849 │ │ raise _InactiveRpcError(state) │\r\n│ 850 │\r\n│ 851 │\r\n│ 852 def _stream_unary_invocation_operationses(metadata, initial_metadata_ │\r\n╰──────────────────────────────────────────────────────────────────────────────╯\r\n_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:\r\n status = StatusCode.RESOURCE_EXHAUSTED\r\n details = \"Received message larger than max (5282972 vs. 4194304)\"\r\n debug_error_string = \"UNKNOWN:Error received from peer \r\nmetadata-grpc-service.kubeflow:8080 \r\n{created_time:\"2023-03-27T14:37:17.768468153+00:00\", grpc_status:8, \r\ngrpc_message:\"Received message larger than max (5282972 vs. 
4194304)\"}\"\r\n>\r\nDuring handling of the above exception, another exception occurred:\r\n╭───────────────────── Traceback (most recent call last) ──────────────────────╮\r\n│ /usr/local/lib/python3.9/runpy.py:197 in _run_module_as_main │\r\n│ │\r\n│ 194 │ main_globals = sys.modules[\"__main__\"].__dict__ │\r\n│ 195 │ if alter_argv: │\r\n│ 196 │ │ sys.argv[0] = mod_spec.origin │\r\n│ ❱ 197 │ return _run_code(code, main_globals, None, │\r\n│ 198 │ │ │ │ │ \"__main__\", mod_spec) │\r\n│ 199 │\r\n│ 200 def run_module(mod_name, init_globals=None, │\r\n│ │\r\n│ /usr/local/lib/python3.9/runpy.py:87 in _run_code │\r\n│ │\r\n│ 84 │ │ │ │ │ __loader__ = loader, │\r\n│ 85 │ │ │ │ │ __package__ = pkg_name, │\r\n│ 86 │ │ │ │ │ __spec__ = mod_spec) │\r\n│ ❱ 87 │ exec(code, run_globals) │\r\n│ 88 │ return run_globals │\r\n│ 89 │\r\n│ 90 def _run_module_code(code, init_globals=None, │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/zenml/entrypoints/step_entrypoint.py: │\r\n│ 62 in <module> │\r\n│ │\r\n│ 59 │\r\n│ 60 │\r\n│ 61 if __name__ == \"__main__\": │\r\n│ ❱ 62 │ main() │\r\n│ 63 │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/zenml/entrypoints/step_entrypoint.py: │\r\n│ 58 in main │\r\n│ │\r\n│ 55 │ entrypoint_config = entrypoint_config_class(arguments=remaining_arg │\r\n│ 56 │ │\r\n│ 57 │ # Run the entrypoint configuration │\r\n│ ❱ 58 │ entrypoint_config.run() │\r\n│ 59 │\r\n│ 60 │\r\n│ 61 if __name__ == \"__main__\": │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/zenml/entrypoints/step_entrypoint_con │\r\n│ figuration.py:628 in run │\r\n│ │\r\n│ 625 │ │ # Execute the actual step code. 
│\r\n│ 626 │ │ run_name = self.get_run_name(pipeline_name=pipeline_name) │\r\n│ 627 │ │ orchestrator = Repository().active_stack.orchestrator │\r\n│ ❱ 628 │ │ execution_info = orchestrator.run_step( │\r\n│ 629 │ │ │ step=step, run_name=run_name, pb2_pipeline=pb2_pipeline │\r\n│ 630 │ │ ) │\r\n│ 631 │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/zenml/orchestrators/base_orchestrator │\r\n│ .py:395 in run_step │\r\n│ │\r\n│ 392 │ │ # This is where the step actually gets executed using the │\r\n│ 393 │ │ # component_launcher │\r\n│ 394 │ │ repo.active_stack.prepare_step_run() │\r\n│ ❱ 395 │ │ execution_info = self._execute_step(component_launcher) │\r\n│ 396 │ │ repo.active_stack.cleanup_step_run() │\r\n│ 397 │ │ │\r\n│ 398 │ │ return execution_info │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/zenml/orchestrators/base_orchestrator │\r\n│ .py:419 in _execute_step │\r\n│ │\r\n│ 416 │ │ start_time = time.time() │\r\n│ 417 │ │ logger.info(f\"Step `{pipeline_step_name}` has started.\") │\r\n│ 418 │ │ try: │\r\n│ ❱ 419 │ │ │ execution_info = tfx_launcher.launch() │\r\n│ 420 │ │ │ if execution_info and get_cache_status(execution_info): │\r\n│ 421 │ │ │ │ logger.info(f\"Using cached version of `{pipeline_step_ │\r\n│ 422 │ │ except RuntimeError as e: │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/tfx/orchestration/portable/launcher.p │\r\n│ y:528 in launch │\r\n│ │\r\n│ 525 │ │ │ │ │ │ │ │ │ │ self._pipeline_runtime_spec │\r\n│ 526 │ │\r\n│ 527 │ # Runs as a normal node. │\r\n│ ❱ 528 │ execution_preparation_result = self._prepare_execution() │\r\n│ 529 │ (execution_info, contexts, │\r\n│ 530 │ is_execution_needed) = (execution_preparation_result.execution_in │\r\n│ 531 │ │ │ │ │ │ │ execution_preparation_result.contexts, │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/tfx/orchestration/portable/launcher.p │\r\n│ y:315 in _prepare_execution │\r\n│ │\r\n│ 312 │ input_artifacts = resolved_inputs[0] │\r\n│ 313 │ │\r\n│ 314 │ # 4. 
Registers execution in metadata. │\r\n│ ❱ 315 │ execution = self._register_or_reuse_execution( │\r\n│ 316 │ │ metadata_handler=m, │\r\n│ 317 │ │ contexts=contexts, │\r\n│ 318 │ │ input_artifacts=input_artifacts, │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/tfx/orchestration/portable/launcher.p │\r\n│ y:225 in _register_or_reuse_execution │\r\n│ │\r\n│ 222 │ exec_properties: Optional[Mapping[str, types.Property]] = None, │\r\n│ 223 ) -> metadata_store_pb2.Execution: │\r\n│ 224 │ \"\"\"Registers or reuses an execution in MLMD.\"\"\" │\r\n│ ❱ 225 │ executions = execution_lib.get_executions_associated_with_all_cont │\r\n│ 226 │ │ metadata_handler, contexts) │\r\n│ 227 │ if len(executions) > 1: │\r\n│ 228 │ raise RuntimeError('Expecting no more than one previous executio │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/tfx/orchestration/portable/mlmd/execu │\r\n│ tion_lib.py:290 in get_executions_associated_with_all_contexts │\r\n│ │\r\n│ 287 \"\"\" │\r\n│ 288 executions_dict = None │\r\n│ 289 for context in contexts: │\r\n│ ❱ 290 │ executions = metadata_handler.store.get_executions_by_context(cont │\r\n│ 291 │ if executions_dict is None: │\r\n│ 292 │ executions_dict = {e.id: e for e in executions} │\r\n│ 293 │ else: │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/ml_metadata/metadata_store/metadata_s │\r\n│ tore.py:1328 in get_executions_by_context │\r\n│ │\r\n│ 1325 │ │\r\n│ 1326 │ request = metadata_store_service_pb2.GetExecutionsByContextReques │\r\n│ 1327 │ request.context_id = context_id │\r\n│ ❱ 1328 │ return self._call_method_with_list_options('GetExecutionsByContex │\r\n│ 1329 │ │ │ │ │ │ │ │ │ │ │ 'executions', request, │\r\n│ 1330 │ │ │ │ │ │ │ │ │ │ │ list_options) │\r\n│ 1331 │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/ml_metadata/metadata_store/metadata_s │\r\n│ tore.py:980 in _call_method_with_list_options │\r\n│ │\r\n│ 977 │ if return_size and return_size < MAX_NUM_RESULT: │\r\n│ 978 │ │ request.options.max_result_size = 
return_size │\r\n│ 979 │ │\r\n│ ❱ 980 │ self._call(method_name, request, response) │\r\n│ 981 │ entities = getattr(response, entity_field_name) │\r\n│ 982 │ for x in entities: │\r\n│ 983 │ │ result.append(x) │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/ml_metadata/metadata_store/metadata_s │\r\n│ tore.py:188 in _call │\r\n│ │\r\n│ 185 │ avg_delay_sec = 2 │\r\n│ 186 │ while True: │\r\n│ 187 │ try: │\r\n│ ❱ 188 │ │ return self._call_method(method_name, request, response) │\r\n│ 189 │ except errors.AbortedError: │\r\n│ 190 │ │ num_retries -= 1 │\r\n│ 191 │ │ if num_retries == 0: │\r\n│ │\r\n│ /usr/local/lib/python3.9/site-packages/ml_metadata/metadata_store/metadata_s │\r\n│ tore.py:218 in _call_method │\r\n│ │\r\n│ 215 │ │ # RpcError code uses a tuple to specify error code and short │\r\n│ 216 │ │ # description. │\r\n│ 217 │ │ # \r\nhttps://grpc.github.io/grpc/python/_modules/grpc.html#Statu\r\n │\r\n│ ❱ 218 │ │ raise _make_exception(e.details(), e.code().value[0]) # pyty │\r\n│ 219 │\r\n│ 220 def _pywrap_cc_call(self, method, request, response) -> None: │\r\n│ 221 │ \"\"\"Calls method, serializing and deserializing inputs and outputs │\r\n╰──────────────────────────────────────────────────────────────────────────────╯\r\nResourceExhaustedError: Received message larger than max (5282972 vs. 4194304)\r\n``` \r\n\r\nI did read almost all the open/close related issues; and I am not sure if this can be done from the client side in my case; considering I am just using the `kfp.Client()` (if there is a way I would like to implement it until this issue is fixed upstream);\r\nThanks\r\n\r\n", "BTW, we upgraded to `kubeflow` version `1.7` and this issue is now fixed.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-11-01T16:19:50"
"2023-08-31T07:42:23"
null
NONE
null
### Environment

ML Metadata writer errors with the exception below. Looks like we need to change the default `max_receive_message_length` from 4194304 to a bigger number.

Error:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/ml_metadata/metadata_store/metadata_store.py", line 199, in _call_method
    response.CopyFrom(grpc_method(request, timeout=self._grpc_timeout_sec))
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 946, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.RESOURCE_EXHAUSTED
    details = "Received message larger than max (4765896 vs. 4194304)"
    debug_error_string = "{"created":"@1667318328.637859534","description":"Received message larger than max (4765896 vs. 4194304)","file":"src/core/ext/filters/message_size/message_size_filter.cc","file_line":206,"grpc_status":8}"
```

* How did you deploy Kubeflow Pipelines (KFP)? On-premises
* KFP version: 1.7.0
* KFP SDK version:

### Steps to reproduce

### Expected result

### Materials and Reference

---

Impacted by this bug? Give it a 👍.
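For context on the limit discussed above: gRPC caps inbound messages at 4194304 bytes (4 MiB) by default, and the cap is controlled by standard channel options. A minimal sketch of those options follows — whether ml-metadata lets you inject a pre-built channel is an assumption here, and the 64 MiB value and the `metadata-grpc-service.kubeflow:8080` endpoint are placeholders:

```python
# Standard gRPC channel options governing message-size limits.
# The 64 MiB figure below is a placeholder, not a recommendation.
MAX_MSG_BYTES = 64 * 1024 * 1024  # up from gRPC's 4194304-byte default

channel_options = [
    ("grpc.max_receive_message_length", MAX_MSG_BYTES),
    ("grpc.max_send_message_length", MAX_MSG_BYTES),
]

# With the plain `grpc` package this would be applied as (not executed here,
# since it needs a live MLMD endpoint):
# import grpc
# channel = grpc.insecure_channel(
#     "metadata-grpc-service.kubeflow:8080", options=channel_options)
```

Note that raising the client-side limit only helps if the server's send path and any intermediaries allow messages of that size too; as discussed in the comments, paginating the queries is the more robust fix.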
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8408/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8406
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8406/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8406/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8406/events
https://github.com/kubeflow/pipelines/issues/8406
1,431,618,888
I_kwDOB-71UM5VVMVI
8,406
[feature] Improved User Isolation in Kubeflow Pipelines
{ "login": "DomFleischmann", "id": 20354078, "node_id": "MDQ6VXNlcjIwMzU0MDc4", "avatar_url": "https://avatars.githubusercontent.com/u/20354078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DomFleischmann", "html_url": "https://github.com/DomFleischmann", "followers_url": "https://api.github.com/users/DomFleischmann/followers", "following_url": "https://api.github.com/users/DomFleischmann/following{/other_user}", "gists_url": "https://api.github.com/users/DomFleischmann/gists{/gist_id}", "starred_url": "https://api.github.com/users/DomFleischmann/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DomFleischmann/subscriptions", "organizations_url": "https://api.github.com/users/DomFleischmann/orgs", "repos_url": "https://api.github.com/users/DomFleischmann/repos", "events_url": "https://api.github.com/users/DomFleischmann/events{/privacy}", "received_events_url": "https://api.github.com/users/DomFleischmann/received_events", "type": "User", "site_admin": false }
[ { "id": 930619513, "node_id": "MDU6TGFiZWw5MzA2MTk1MTM=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p1", "name": "priority/p1", "color": "cb03cc", "default": false, "description": "" }, { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "/priority p1", "I think the there are three main tasks. \r\nFrom here https://github.com/kubeflow/kubeflow/issues/6662 the listed main problems are \r\n\r\n1. \"The pipeline UI allows reading other peoples artifacts. The artifact proxy in the user namespace is insecure and obsolete. In the UI you can just get the artifact link from another user, remove the ?namespace=xxx parameter at the end and the UI server will fake the corresponding user for you. So if you know the S3/GCS path you can read other guys artifacts.\"\r\n\r\n2. https://github.com/kubeflow/pipelines/pull/7725#issuecomment-1277334000\r\n\r\n3. The namespaced pipeline definitions will be implemented by Arrikto.", "Thanks for starting this issue! Looping in @elikatsis from our side as well", "@StefanoFioravanzo , @juliusvonkohout @DomFleischmann \r\nHi Team, Any update on this feature, in kf v1.7.0 also we can see this is not implemented. any workaround for the same available now.", "@subasathees artifacts are correctly isolated when using Kubeflow Pipelines on [deployKF](https://github.com/deployKF/deployKF) which is my new Kubeflow distribution that includes Kubeflow Pipelines.\r\n\r\ndeployKF achieves this isolation by using [object prefixes with profile/namespace at the beginning](https://github.com/deployKF/deployKF/blob/main/generator/default_values.yaml#L1147-L1150), and assigning a unique [IAM role for each profile](https://github.com/deployKF/deployKF/blob/v0.1.0/generator/helpers/kubeflow-pipelines--object-store.tpl#L212-L244).\r\n\r\nThere is also some crazy stuff going on to ensure the isolation of KFP V2 artifacts, but it all boils down to [creating the `ConfigMap/kfp-launcher` in each profile namespace](https://github.com/deployKF/deployKF/blob/v0.1.0/generator/templates/manifests/kubeflow-tools/pipelines/resources/generate-profile-resources-clusterpolicy.yaml#L394-L424) so that the [`defaultPipelineRoot` is set to a different 
value](https://github.com/deployKF/deployKF/blob/v0.1.0/generator/default_values.yaml#L1704-L1711) for each profile.\r\n\r\nHowever, deployKF is still limited by Kubeflow Pipelines putting all pipeline definitions under the `pipelines/` object prefix (regardless of the profile/namespace).\r\n\r\n---\r\n\r\nInterestingly, the `?namespace=` bypass described in https://github.com/kubeflow/pipelines/issues/8406#issuecomment-1299781389 does not work in deployKF because of a few factors:\r\n\r\n- there is a very advanced feature [that automatically redirects the `?namespace=` parameter](https://github.com/deployKF/deployKF/blob/main/generator/default_values.yaml#L1755-L1766) to account for the case of a cached result being in a different namespace (and this happens to have the side effect of always forcing a the `?namespace=` parameter to be set)\r\n- even when the redirect future is disabled (at least for a minio object store, not s3), the `ml-pipeline-ui` pod from the `kubeflow` namespace actually has a bug that prevents it from accessing the minio service (because in deployKF minio lives in a different namespace to kubeflow)", "@zijianjoy @james-jwu we really need to fix the `?namespace=` parameter bypass described in https://github.com/kubeflow/pipelines/issues/8406#issuecomment-1299781389.\r\n\r\nThe bypass is that artifact auth is ignored when no namespace parameter is set. This is because when no namespace parameter is set, it uses the `ml-pipeline-ui` pod from the `kubeflow` namespace, rather than proxying to `ml-pipeline-artifact` in the profile namespaces (to which istio will control access based on the user-id header, with the AuthorizationPolicy).\r\n\r\nI think the best option is to have the `ml-pipeline-ui` (KFP frontend pod), reject artifact requests that don't specify `?namespace=`. 
\r\n\r\nTo do this, we would need to update this code to reject when no namespace parameter is found:\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/79d31db90610e1965b702b258805939962b9a773/frontend/server/handlers/artifacts.ts#L342-L378", "> @zijianjoy @james-jwu we really need to fix the `?namespace=` parameter bypass described in [#8406 (comment)](https://github.com/kubeflow/pipelines/issues/8406#issuecomment-1299781389).\r\n> \r\n> The bypass is that artifact auth is ignored when no namespace parameter is set. This is because when no namespace parameter is set, it uses the `ml-pipeline-ui` pod from the `kubeflow` namespace, rather than proxying to `ml-pipeline-artifact` in the profile namespaces (to which istio will control access based on the user-id header, with the AuthorizationPolicy).\r\n> \r\n> I think the best option is to have the `ml-pipeline-ui` (KFP frontend pod), reject artifact requests that don't specify `?namespace=`.\r\n> \r\n> To do this, we would need to update this code to reject when no namespace parameter is found:\r\n> \r\n> https://github.com/kubeflow/pipelines/blob/79d31db90610e1965b702b258805939962b9a773/frontend/server/handlers/artifacts.ts#L342-L378\r\n\r\nCan you create a PR?", "@thesuperzapper \r\n\"However, deployKF is still limited by Kubeflow Pipelines putting all pipeline definitions under the pipelines/ object prefix (regardless of the profile/namespace).\"\r\nhttps://github.com/kubeflow/pipelines/pull/7725 also fixes the /pipelines minio access. Users should anyway not access that path. Although the PR is outdated and might still have too much permissions to make the minio UI work more user friendly. But one can easily fix that.\r\n\r\n@subasathees The namespaced pipeline definitions should be in 1.8 including the UI part. They are partially in 1.7.\r\n\r\nAll of this must be upstream. 
Having partial workarounds in downstream distributions is not a solution.", "@juliusvonkohout @thesuperzapper , Thanks for your detailed information, this will help.", "@juliusvonkohout I am pretty focused on deployKF right now, so don't have much time.\r\n\r\nThe change to reject artifact requests without `?namespace` should be relatively straightforward, but it could break stuff so we need to test carefully.", "@thesuperzapper we can also get rid of the per namespace artifact proxy and visualization server when doing this change. This would allow us to have zero overhead user namespaces. We just enforce ?namespace and use the already implemented direct way of ml-pipeline-ui to fetch artifacts from minio. Removing ?namespace from your query just uses that direct path by the way." ]
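The fix proposed in the thread above — have `ml-pipeline-ui` reject artifact requests that carry no `?namespace=` parameter instead of falling back to the shared `kubeflow`-namespace credentials — can be illustrated with a small sketch. This is plain Python rather than the frontend's actual TypeScript, and `reject_if_no_namespace` with its `(status, message)` return shape is a hypothetical stand-in, not code from `artifacts.ts`:

```python
from urllib.parse import parse_qs, urlparse

def reject_if_no_namespace(url):
    """Return an HTTP-style (status, message) pair for an artifact request.

    Requests without a ?namespace= query parameter are rejected, so the
    server never silently serves artifacts with the shared
    kubeflow-namespace credentials.
    """
    params = parse_qs(urlparse(url).query)
    if not params.get("namespace"):
        return 400, "namespace query parameter is required"
    return 200, "ok"
```

With this behavior, a request like `/artifacts/get?source=minio&namespace=team-a` would be proxied to the per-namespace artifact service as today, while the same URL without `namespace` would receive a 400 instead of bypassing authorization.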
"2022-11-01T15:25:55"
"2023-07-20T09:23:02"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend /area backend /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Authenticated and authorized users should be isolated by namespace and should not have access to other users' artifacts unless authorized. The solution should be handled in the frontend, backend, object storage and SDK. ### What is the use case or pain point? The current implementation allows users to access other users' artifacts; this is a big security risk and a limitation for enterprise adoption. ### Is there a workaround currently? Distributions are doing their own workarounds, or enterprise customers need to deploy separate clusters for different users, which is inefficient. This is a Roadmap Item for Kubeflow 1.7 requested by the 1.7 Release Team. @zijianjoy @juliusvonkohout @StefanoFioravanzo @jbottum @annajung @kimwnasptd --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8406/reactions", "total_count": 16, "+1": 16, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8406/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8399
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8399/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8399/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8399/events
https://github.com/kubeflow/pipelines/issues/8399
1,428,577,321
I_kwDOB-71UM5VJlwp
8,399
[sdk] Cannot put metrics
{ "login": "r-matsuzaka", "id": 76238346, "node_id": "MDQ6VXNlcjc2MjM4MzQ2", "avatar_url": "https://avatars.githubusercontent.com/u/76238346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/r-matsuzaka", "html_url": "https://github.com/r-matsuzaka", "followers_url": "https://api.github.com/users/r-matsuzaka/followers", "following_url": "https://api.github.com/users/r-matsuzaka/following{/other_user}", "gists_url": "https://api.github.com/users/r-matsuzaka/gists{/gist_id}", "starred_url": "https://api.github.com/users/r-matsuzaka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/r-matsuzaka/subscriptions", "organizations_url": "https://api.github.com/users/r-matsuzaka/orgs", "repos_url": "https://api.github.com/users/r-matsuzaka/repos", "events_url": "https://api.github.com/users/r-matsuzaka/events{/privacy}", "received_events_url": "https://api.github.com/users/r-matsuzaka/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hi @r-matsuzaka, sorry about the confusion, but KFP SDK 1.6.3 is not to be paired with KF manifest 1.6. To try out the v2 features, can you upgrade to KFP SDK 2.0.0b6? Thanks!", "@chensun \r\nHi. Thank you for your answer.\r\nExcuse me but I set up kubeflow by running the following and choose the Docker image from default list.\r\n```\r\ngit clone https://github.com/kubeflow/manifests.git -b v1.6-branch\r\ncd manifests\r\nbash hack/setup-kubeflow.sh\r\n```\r\nAnd I did not prepare Docker image for the kubeflow pipeline.\r\nSo you mean I need to prepare other Docker image which is installed `KFP SDK 2.0.0b6` and load it using custom image section? If Default Docker image does not work then I think manifests.git might need to be changed.\r\n\r\n\r\n![Screenshot from 2022-11-05 16-32-44](https://user-images.githubusercontent.com/76238346/200108473-aab1976b-d0d3-459a-9dbc-0cea28331a63.png)\r\n\r\n![Screenshot from 2022-11-05 16-42-55](https://user-images.githubusercontent.com/76238346/200109327-8e11eafc-9dce-44a6-95bd-4a0c6f9a6f1d.png)\r\n" ]
"2022-10-30T03:26:21"
"2022-11-05T07:45:16"
"2022-11-03T22:42:16"
NONE
null
### Environment * Ubuntu 20.04.5 LTS * kubernetes ``` $ kubectl version --short Client Version: v1.21.14 Server Version: v1.21.14 $ minikube version minikube version: v1.27.1 $ kustomize version --short 3.2.0 ``` * kubeflow/manifests v1.6-branch * KFP version: ``` $ pip list | grep kfp kfp 1.6.3 kfp-pipeline-spec 0.1.16 kfp-server-api 1.6.0 ``` ### Steps to reproduce ```python import os import kfp from kfp.v2 import dsl from kfp.v2.dsl import ( component, Output, ClassificationMetrics, Metrics, ) @component( packages_to_install=['sklearn'], base_image='python:3.9' ) def iris_sgdclassifier(test_samples_fraction: float, metrics: Output[ClassificationMetrics]): from sklearn import datasets, model_selection from sklearn.linear_model import SGDClassifier from sklearn.metrics import confusion_matrix iris_dataset = datasets.load_iris() train_x, test_x, train_y, test_y = model_selection.train_test_split( iris_dataset['data'], iris_dataset['target'], test_size=test_samples_fraction) classifier = SGDClassifier() classifier.fit(train_x, train_y) predictions = model_selection.cross_val_predict(classifier, train_x, train_y, cv=3) metrics.log_confusion_matrix( ['Setosa', 'Versicolour', 'Virginica'], confusion_matrix(train_y, predictions).tolist() # .tolist() to convert np array to list. 
) @dsl.pipeline( name='metrics-visualization-pipeline') def metrics_visualization_pipeline(): iris_sgdclassifier_op = iris_sgdclassifier(test_samples_fraction=0.3) import requests HOST = "http://12.324.678.910:80" # please change this value USERNAME = "user@example.com" PASSWORD = "12341234" NAMESPACE = "kubeflow-user-example-com" session = requests.Session() response = session.get(HOST) print(response) print(response.url) headers = { "Content-Type": "application/x-www-form-urlencoded", } data = {"login": USERNAME, "password": PASSWORD} session.post(response.url, headers=headers, data=data) session_cookie = session.cookies.get_dict()["authservice_session"] client = kfp.Client( host=f"{HOST}/pipeline", cookies=f"authservice_session={session_cookie}", namespace=NAMESPACE, ) client.create_run_from_pipeline_func( metrics_visualization_pipeline, arguments={}) ``` ### Expected result I should see confusion matrix as described in https://www.kubeflow.org/docs/components/pipelines/v1/sdk/output-viewer/ But I got this error. ![Screenshot from 2022-10-30 12-24-30](https://user-images.githubusercontent.com/76238346/198861142-0f3fb13e-772c-4e5b-a22c-a43dd7e948af.png) ### Materials and Reference - https://www.kubeflow.org/docs/components/pipelines/v1/sdk/output-viewer/ - https://github.com/kubeflow/pipelines/blob/sdk/release-1.8/samples/test/metrics_visualization_v2.py --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
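The sample above already hints at one common pitfall: the numpy confusion matrix must be converted with `.tolist()` because the metrics metadata is serialized to JSON, which cannot hold numpy arrays. A minimal sketch of that conversion in isolation (`to_plain_matrix` is a hypothetical helper, not part of the KFP SDK):

```python
import json

def to_plain_matrix(matrix):
    """Coerce a 2-D matrix (e.g. rows of a numpy array) into plain lists
    of ints, the JSON-serializable shape log_confusion_matrix needs."""
    return [[int(value) for value in row] for row in matrix]

categories = ["Setosa", "Versicolour", "Virginica"]
cm = to_plain_matrix([[10, 0, 1], [0, 12, 2], [1, 1, 9]])

# The metadata payload must survive a JSON round trip unchanged.
payload = {"categories": categories, "confusion_matrix": cm}
assert json.loads(json.dumps(payload)) == payload
```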
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8399/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8396
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8396/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8396/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8396/events
https://github.com/kubeflow/pipelines/issues/8396
1,426,627,478
I_kwDOB-71UM5VCJuW
8,396
[bug] component pull image from docker local registry
{ "login": "TranThanh96", "id": 26323599, "node_id": "MDQ6VXNlcjI2MzIzNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/26323599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TranThanh96", "html_url": "https://github.com/TranThanh96", "followers_url": "https://api.github.com/users/TranThanh96/followers", "following_url": "https://api.github.com/users/TranThanh96/following{/other_user}", "gists_url": "https://api.github.com/users/TranThanh96/gists{/gist_id}", "starred_url": "https://api.github.com/users/TranThanh96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TranThanh96/subscriptions", "organizations_url": "https://api.github.com/users/TranThanh96/orgs", "repos_url": "https://api.github.com/users/TranThanh96/repos", "events_url": "https://api.github.com/users/TranThanh96/events{/privacy}", "received_events_url": "https://api.github.com/users/TranThanh96/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hi @TranThanh96, I think pulling from local docker registry is out of scope for KFP, but a generic K8s issue. Maybe you can search for k8s tutorials on if this is possible and how to configure it." ]
"2022-10-28T03:58:08"
"2022-11-03T22:44:38"
"2022-11-03T22:44:38"
NONE
null
I am using Kubeflow deployed on k3AI. I have a local Docker registry at IP 1.2.3.4. I built a component image and pushed it to 1.2.3.4/hello-world:v0.0.1. I have a pipeline that uses the component above: ``` name: hello world description: hello world metadata: annotations: author: ThanhTM inputs: - {name: input, type: String, description: 'data'} implementation: container: image: 1.2.3.4/hello-world:v0.0.1 command: [python3, /pipelines/component/src/main.py] args: [ --input, {inputValue: input}, # input ] ``` but this error keeps happening: This step is in Pending state with this message: ImagePullBackOff: Back-off pulling image "1.2.3.4/hello-world:v0.0.1" Is there any tutorial to tackle this issue? Thank you.
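As the maintainer reply notes, this is a cluster-level concern rather than a KFP one: the node's container runtime must be told about the plain-HTTP local registry. A hedged sketch, assuming k3AI runs on top of k3s (the IP and image name come from the report above):

```yaml
# /etc/rancher/k3s/registries.yaml on each node (restart k3s afterwards).
# Declares 1.2.3.4 as a registry reachable over plain HTTP, so pulls of
# 1.2.3.4/hello-world:v0.0.1 are not rejected as insecure.
mirrors:
  "1.2.3.4":
    endpoint:
      - "http://1.2.3.4"
```

If the registry also requires credentials, the standard Kubernetes approach applies: create a `docker-registry` secret and reference it via `imagePullSecrets` on the service account the pipeline pods use.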
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8396/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8396/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8394
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8394/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8394/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8394/events
https://github.com/kubeflow/pipelines/issues/8394
1,426,120,402
I_kwDOB-71UM5VAN7S
8,394
[feature] Automatically pass pod outputs via file system to avoid size limitation
{ "login": "amrobbins", "id": 17150932, "node_id": "MDQ6VXNlcjE3MTUwOTMy", "avatar_url": "https://avatars.githubusercontent.com/u/17150932?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amrobbins", "html_url": "https://github.com/amrobbins", "followers_url": "https://api.github.com/users/amrobbins/followers", "following_url": "https://api.github.com/users/amrobbins/following{/other_user}", "gists_url": "https://api.github.com/users/amrobbins/gists{/gist_id}", "starred_url": "https://api.github.com/users/amrobbins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amrobbins/subscriptions", "organizations_url": "https://api.github.com/users/amrobbins/orgs", "repos_url": "https://api.github.com/users/amrobbins/repos", "events_url": "https://api.github.com/users/amrobbins/events{/privacy}", "received_events_url": "https://api.github.com/users/amrobbins/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "@amrobbins, can you check to see if you have the same issue in KFP v2? If so, can you please send a reproducible example?", "Yes, I'll be migrating over to kubeflow 2 and I'll provide an updates. Thanks.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-10-27T18:46:45"
"2023-09-01T07:42:13"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? I would like Kubeflow to pass op output via a file, transparently to the user. ### What is the use case or pain point? I need to pass a list of batches of SKUs as output from an op, but some stores have enough SKUs that this exceeds the maximum number of bytes that can be returned by an op, as shown below: ![metadata](https://user-images.githubusercontent.com/17150932/198372610-990b5f25-457f-47b4-a96d-b0525129d6ee.png) The reference to the list is then passed to a parallel for so that a pod is launched for each batch of SKUs. ### Is there a workaround currently? I am not aware of a workaround. I tried to write the SKUs into a file instead of returning a list, but it doesn't seem like I can iterate over the contents of the file using a parallel for, so I have no way of dynamically determining the batches of SKUs and launching ops for each. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
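Whichever mechanism ends up carrying the list between ops (inline value or file-backed artifact), the batching step itself is simple. A minimal sketch with a hypothetical `batch_skus` helper, so each parallel-for iteration receives one bounded-size chunk instead of the whole store's SKU list:

```python
def batch_skus(skus, batch_size):
    """Split a flat SKU list into fixed-size batches, one per parallel pod."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    return [skus[i:i + batch_size] for i in range(0, len(skus), batch_size)]

batches = batch_skus([f"sku-{n}" for n in range(7)], batch_size=3)
assert batches == [["sku-0", "sku-1", "sku-2"],
                   ["sku-3", "sku-4", "sku-5"],
                   ["sku-6"]]
```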
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8394/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8391
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8391/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8391/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8391/events
https://github.com/kubeflow/pipelines/issues/8391
1,424,390,177
I_kwDOB-71UM5U5ngh
8,391
[frontend] When execution graph gets too big, it is not displayed instead there is a spinning progress indicator that is displayed forever
{ "login": "amrobbins", "id": 17150932, "node_id": "MDQ6VXNlcjE3MTUwOTMy", "avatar_url": "https://avatars.githubusercontent.com/u/17150932?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amrobbins", "html_url": "https://github.com/amrobbins", "followers_url": "https://api.github.com/users/amrobbins/followers", "following_url": "https://api.github.com/users/amrobbins/following{/other_user}", "gists_url": "https://api.github.com/users/amrobbins/gists{/gist_id}", "starred_url": "https://api.github.com/users/amrobbins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amrobbins/subscriptions", "organizations_url": "https://api.github.com/users/amrobbins/orgs", "repos_url": "https://api.github.com/users/amrobbins/repos", "events_url": "https://api.github.com/users/amrobbins/events{/privacy}", "received_events_url": "https://api.github.com/users/amrobbins/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "@amrobbins, I suggest using KFP v2. Parallel for loops are represented as subDAGs in v2, which enables the UI to handle them layer-by-layer." ]
"2022-10-26T17:02:52"
"2022-10-27T22:47:12"
"2022-10-27T22:47:11"
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> I followed the instructions here: https://www.kubeflow.org/docs/distributions/gke/deploy/overview/ * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of the user interface, at the bottom of the KFP UI left sidenav. --> v1.5.0 ### Steps to reproduce 1. Create a pipeline that includes a parallel for loop 2. Increase the number of pods that the parallel for launches until the issue is observed. The issue appears as in this screenshot: ![spinning2](https://user-images.githubusercontent.com/17150932/198089321-522f3e78-6140-46d2-8636-816f58032d84.png) <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ### Expected result <!-- What should the correct behavior be? --> The graph would eventually display. This was the case for me prior to upgrading to v1.5.0. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8391/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8388
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8388/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8388/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8388/events
https://github.com/kubeflow/pipelines/issues/8388
1,422,580,898
I_kwDOB-71UM5Uytyi
8,388
Manage Jobs UI does not allow deletion of jobs with empty names
{ "login": "Poggecci", "id": 61938153, "node_id": "MDQ6VXNlcjYxOTM4MTUz", "avatar_url": "https://avatars.githubusercontent.com/u/61938153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Poggecci", "html_url": "https://github.com/Poggecci", "followers_url": "https://api.github.com/users/Poggecci/followers", "following_url": "https://api.github.com/users/Poggecci/following{/other_user}", "gists_url": "https://api.github.com/users/Poggecci/gists{/gist_id}", "starred_url": "https://api.github.com/users/Poggecci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Poggecci/subscriptions", "organizations_url": "https://api.github.com/users/Poggecci/orgs", "repos_url": "https://api.github.com/users/Poggecci/repos", "events_url": "https://api.github.com/users/Poggecci/events{/privacy}", "received_events_url": "https://api.github.com/users/Poggecci/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false }
[ { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions." ]
"2022-10-25T14:40:49"
"2023-09-02T07:42:21"
null
NONE
null
### Environment <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: [1.0.4](https://www.github.com/kubeflow/pipelines/commit/b604c6171244cc1cd80bfdc46248eaebf5f985d6) ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> Using the kubeflow API, create a recurring run for any existing pipeline with the job name being blank. In the UI, you will not be able to select this job to modify or delete it. ### Expected result <!-- What should the correct behavior be? --> I should be able to click on the row containing the recurring run (job) even if the name is blank. ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> /area frontend --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
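Until the UI handles nameless jobs, a client-side guard keeps them from being created through the API in the first place. `validated_job_name` below is a hypothetical helper to run before calling the recurring-run endpoint, not part of the KFP SDK:

```python
def validated_job_name(name):
    """Reject empty or whitespace-only job names before the API call,
    since the Manage Jobs UI cannot select rows with a blank name."""
    if name is None or not name.strip():
        raise ValueError("recurring run (job) name must be non-empty")
    return name.strip()

assert validated_job_name("  nightly-train  ") == "nightly-train"
```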
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8388/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8386
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8386/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8386/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8386/events
https://github.com/kubeflow/pipelines/issues/8386
1,421,996,695
I_kwDOB-71UM5UwfKX
8,386
Critical OS CVEs in gcr.io/ml-pipeline/frontend:1.8.5 CVE-2022-37434
{ "login": "tobiasimbus", "id": 46684886, "node_id": "MDQ6VXNlcjQ2Njg0ODg2", "avatar_url": "https://avatars.githubusercontent.com/u/46684886?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tobiasimbus", "html_url": "https://github.com/tobiasimbus", "followers_url": "https://api.github.com/users/tobiasimbus/followers", "following_url": "https://api.github.com/users/tobiasimbus/following{/other_user}", "gists_url": "https://api.github.com/users/tobiasimbus/gists{/gist_id}", "starred_url": "https://api.github.com/users/tobiasimbus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tobiasimbus/subscriptions", "organizations_url": "https://api.github.com/users/tobiasimbus/orgs", "repos_url": "https://api.github.com/users/tobiasimbus/repos", "events_url": "https://api.github.com/users/tobiasimbus/events{/privacy}", "received_events_url": "https://api.github.com/users/tobiasimbus/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1682717385, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzg1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/status/backlog", "name": "status/backlog", "color": "bc9090", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, { "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false } ]
null
[ "@tobiasimbus thank you for your contribution!\r\nWe will keep this in backlog until the next release." ]
"2022-10-25T07:29:32"
"2023-05-04T22:57:08"
null
NONE
null
### Environment KFP version:1.8.5 Critical OS CVEs reported by the anchore scanning service: CVE-2022-37434 ### Steps to reproduce `sudo docker run anchore/grype gcr.io/ml-pipeline/frontend:1.8.5 --only-fixed | grep Critical` ### Materials and reference ![image](https://user-images.githubusercontent.com/46684886/197710318-c1783d1a-babf-4606-a677-72a8e362885e.png) <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8386/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8385
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8385/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8385/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8385/events
https://github.com/kubeflow/pipelines/issues/8385
1,418,758,270
I_kwDOB-71UM5UkIh-
8,385
[sdk] Containerized Python Component module not found error
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @connor-mccarthy I am trying to follow the docs and I ended up with the same error as #8353 how can I help to close this issue?", "I can reproduce it with even one just module:\r\n\r\nworking directory:\r\n\r\n```bash\r\n├── main.py\r\n├── poetry.lock\r\n├── poetry.toml\r\n├── pyproject.toml\r\n└── utils.py\r\n```\r\n\r\nWhere:\r\n\r\n## main.py\r\n\r\n```\r\nfrom kfp import dsl\r\nfrom .utils import return_date\r\n\r\n@dsl.component(\r\n base_image='python:3.7',\r\n target_image='gcr.io/my-project/my-component:v1',\r\n)\r\ndef main():\r\n x = return_date()\r\n```\r\n\r\n## utils.py\r\n\r\n```\r\nfrom datetime import datetime\r\n\r\ndef return_date():\r\n return datetime.now().strftime('%Y-%m-%d')\r\n```\r\n\r\n\r\n## Results:\r\n\r\n```bash\r\n╰─>$ kfp component build .\r\nBuilding component using KFP package path: kfp==2.0.0-beta.11\r\nattempted relative import with no known parent package\r\n```\r\n\r\nAlternatively: \r\n\r\n```bash\r\n─>$ kfp component build bug/\r\nBuilding component using KFP package path: kfp==2.0.0-beta.11\r\nattempted relative import with no known parent package\r\n```\r\n\r\nAdding a `__init__.py` which is not in the docs:\r\n\r\n have the same results\r\n \r\n Moving the files to a `src` folder and trying to replicate the [docs](https://www.kubeflow.org/docs/components/pipelines/v2/author-a-pipeline/components/#2-containerized-python-components) I cannot seem to have the same results as the docs\r\n \r\n ## Update:\r\n \r\nEven with changing the `utils.py` as:\r\n\r\n```\r\ndef return_date():\r\n return \"today\"\r\n```\r\n\r\nstill get the same error.\r\n\r\nMy poetry just have the package installation `kfp = {version = \"2.0.0b11\", extras = [\"all\"]}`\r\n\r\n\r\n\r\n", "@Davidnet, thank you for the detailed reproduction notes. 
This is on our backlog.\r\n\r\n> how can I help to close this issue?\r\n\r\nIf you're interested, you're welcome to address this bug and submit a PR!", "@connor-mccarthy Awesome, thanks for the response, any ideas on where I could start looking?", "Thanks, @Davidnet. Some links:\r\n\r\n* [CLI entrypoint](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/cli/cli.py#L96)\r\n* [`kfp component build` command](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/cli/component.py)\r\n* [test code](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/cli/component_test.py)\r\n* [docs source](https://github.com/kubeflow/website/blob/master/content/en/docs/components/pipelines/v2/author-a-pipeline/components.md)", "Hey guys,\r\nI also had a look at this issue as it is quite annoying and I think I might have found the cause of the `No module named` problem.\r\nWhen running the `kfp` command, this resolves (`which kfp`) to a script in (in my case, depending on your python installation it will be different) the mamba environment bin directory `~/mambaforge/envs/kfp-env/bin/kfp`.\r\nThe file looks like this:\r\n```\r\n#!/home/b4sus/mambaforge/envs/kfp-env/bin/python3.11\r\n# -*- coding: utf-8 -*-\r\nimport re\r\nimport sys\r\nfrom kfp.cli.__main__ import main\r\nif __name__ == '__main__':\r\n sys.argv[0] = re.sub(r'(-script\\.pyw|\\.exe)?$', '', sys.argv[0])\r\n sys.exit(main())\r\n```\r\nI patched the file by adding `print(sys.path)` to see the path where python will be checking for modules. The thing is that the current directory (where `kfp` is executed from) is not there. That one is added only when executing python directly (do not quote me on this, but see [here](https://docs.python.org/3/library/sys.html?#sys.path)) and not through this shell approach.\r\n\r\nNow comes the funny part, bear with me here :)\r\nThe sdk will check the provided directory in search of components. 
It will list python files by using Path.glob and try to load each module ([here](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/cli/component.py#L173)).\r\nThis may or may not yield the error. It all depends on the order of files returned by the glob method.\r\nFor example, following the files from @connor-mccarthy setup, if the glob would return files in order `[module_two.py, module_one.py, component.py]`, the whole thing would work. Loading `module_two.py` would be no problem (no imports), loading `module_one.py` would be no problem (import of `module_two` would work because it was already loaded in), loading of `component.py` would be no problem (import of `module_one` would work because it was already loaded in).\r\n\r\nIn other words, python would never need to check `sys.path` for loading the modules.\r\n\r\nHowever, if the returned order from glob would be for example `[module_one.py, module_two.py, component.py]`, loading would fail during load of `module_one.py` which is importing `module_two`. This time python doesn't have `module_two` cached and needs to check `sys.path` and it cannot find it -> `No module named` error.\r\n\r\nAs a quickfix, I added following to the kfp file (`~/mambaforge/envs/kfp-env/bin/kfp`):\r\n```\r\n import os\r\n sys.path.append(os.getcwd())\r\n```\r\nThis should work when all the component python files are in the current directory where you execute `kfp`.\r\n\r\nIt also works when using slightly more of a structure, eg:\r\n```\r\nmy_project\r\n app\r\n components\r\n __init__.py\r\n my_component.py (having absolute import app.utils.helper\r\n utils\r\n __init__.py\r\n helper.py\r\n```\r\nThen from `my_project` directory execute:\r\n```\r\nkfp component build . --component-filepattern app/components/my_comp*\r\n```", "Thanks for this thorough investigation, @b4sus. 
This makes sense based on the errors we've observed.\r\n\r\n> As a quickfix, I added the following to the kfp file (~/mambaforge/envs/kfp-env/bin/kfp):\r\n\r\nIs this quickfix a final fix? Or is there a more robust fix you have in mind? If final, are you interested in submitting a PR? I would be happy to review promptly.", "Hey @connor-mccarthy,\r\nsome more considerations could be necessary, including having some recommended structure for the project. Additionally, I am no expert on the python import system, so that's something as well :)\r\n\r\nLet's assume a standard (I think) python project (let's call it `myproject`) layout:\r\n```\r\nproject_dir/\r\n myproject/\r\n __init__.py\r\n components/\r\n __init__.py\r\n my_comp.py\r\n pipelines/\r\n __init__.py\r\n my_pipelines.py\r\n pyproject.toml\r\n .gitignore\r\n```\r\nNow (considering my fix) you have to run `kfp` from `project_dir` like\r\n```\r\nkfp component build .\r\n```\r\nThis will work, but there are caveats:\r\n1. you have to use absolute imports (generally recommended afaik)\r\n2. you have to use `.` as input for `kfp` - it will be recursive - checking subdirectories as well - desirable I'd say\r\n3. if somewhere you also have lightweight components it will fail (because of missing base_image) - so you have to separate those into different packages - eg `myproject/components/lightweight` and `myproject/components/cont` - and then use `--component-filepattern` like `kfp component build . --component-filepattern \"myproject/components/cont/*.py\"`\r\n4. the complete content of `project_dir` will end up in the docker image\r\n5. the input for `kfp component build` (the `.` above) is basically useless - you cannot use anything else - otherwise the Dockerfile and other files will be generated elsewhere and the docker image will not have the correct structure copied into it\r\n6. 
possibly something else\r\n\r\nAll this is fine for me, but needs to be considered.", "@b4sus \r\n> including having some recommended structure for the project\r\n\r\nI agree with this. I'm working on a v2 docs refresh currently and will keep this in mind.\r\n\r\n> but there are caveats\r\n\r\nThank you for laying out these considerations. In general, as long as there are no regressions (all existing user code that uses Containerized Python Components will still work), I'm happy to eagerly merge a better but not perfect fix. It sounds like some of these constraints may have been introduced by the proposed fix, however (such as the requirement to use the cwd `.` as the path). If that's the case, perhaps we ought to wait.\r\n\r\nIf we do wait, I'll take a stab at this soon and certainly leverage your investigation. Thank you again for the thorough writeup -- this helps a lot.", "Thanks for the issue breakdown @b4sus and @connor-mccarthy.\r\n\r\nI have created a PR https://github.com/kubeflow/pipelines/pull/9157 to essentially add the module directory to sys.path within the `load_module` function.", "Just tested the fix with b15, nice to see progress on the topic :+1:\r\nHowever :), it is not working for me - I have a similar setup as mentioned above:\r\n```\r\nproject_dir/\r\n myproject/\r\n __init__.py\r\n components/\r\n __init__.py\r\n my_comp.py\r\n pipelines/\r\n __init__.py\r\n my_pipelines.py\r\n utils.py\r\n pyproject.toml\r\n .gitignore\r\n```\r\nand using absolute imports. From the component module I import some util function from another module (via absolute import, eg in file `myproject/components/my_comp.py` using the import `from myproject.utils import xyz`) -> the root package is not visible - `No module named 'myproject'` - which makes sense as it was never added to sys.path.\r\nI guess with relative imports only it's working, but it would be great to also support absolute imports.
Reopening.", "This bug seems to have been introduced in `kfp==1.8.12`", "@b4sus, I've been looking into a fix for this and my sense is that we may not want to support absolute imports at this time, since it requires that the KFP component executor `pip install` the user's package (in your example, the `myproject` package), which requires that KFP know something about the user's directory structure.\r\n\r\nIn a bit more detail, the `No module named 'myproject'` can be resolved at component build/compile time by running `pip install .` from `project_dir/` to install `myproject`. However, there will still be a `No module named 'myproject'` exception at component runtime, since KFP doesn't install the package in the component build (see the [Dockerfile](https://github.com/kubeflow/pipelines/blob/f9409a42d664cfbd2d6cb3f36997c14206a6586b/sdk/python/kfp/cli/component.py#L44) template) or the runtime commands/args that execute the component.\r\n\r\nThis is of course something that we could change by adding additional parameters to the component build process or doing something \"smart\" under the hood in the component build logic, but I'm not sure the cost of either (a) exposing those parameters to users or (b) maintaining that smart logic is worth the benefit. 
It seems reasonable that the module that contains the component definition(s) should be runnable as a script (with relative imports), rather than something that requires pip installation (with absolute imports).\r\n\r\nLet me know if you have other thoughts or if my understanding needs refinement.\r\n\r\nThank you, by the way, for the very helpful minimal reproducible example.", "Hey @connor-mccarthy ,\r\nI understand the script nature - I also tend to run a component as a script to try it (and I need to run it from `project_dir` for it to work).\r\nHowever, as the project grows and, let's say, you want to have proper tests (under `project_dir/test`) and run them simply by `pytest tests` from `project_dir`, you will run into problems (using relative imports).\r\n\r\nI might have another idea for a fix though :). What about adding the `components_directory` (cli) argument from the [build function](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/cli/component.py#L393) to the path? Like this:\r\n```python\r\ndef build(components_directory: str, component_filepattern: str, engine: str,\r\n kfp_package_path: Optional[str], overwrite_dockerfile: bool,\r\n build_image: bool, platform: str, push_image: bool):\r\n \"\"\"Builds containers for KFP v2 Python-based components.\"\"\"\r\n\r\n sys.path.append(components_directory)\r\n\r\n if build_image and engine != 'docker':\r\n```\r\nThen I can run `kfp build` from wherever with the argument pointing to `project_dir` and the absolute imports should (tested locally) work. This would solve absolute imports; relative ones should not be affected. I cannot think of a case when this would cause an issue, and it also shouldn't be maintenance heavy.\r\nWhat do you think?", "Hey @b4sus,\r\n\r\nI have a similar project structure as you mentioned above. I wonder how you handle the components within the pipelines and how you separate the components. If the pipeline consists of more than one component, let's say `preprocessing` and `training`. 
\r\n\r\n```\r\nproject_dir/\r\n myproject/\r\n __init__.py\r\n components/\r\n preprocessing/\r\n __init__.py\r\n component.py\r\n training/\r\n __init__.py\r\n component.py\r\n __init__.py\r\n pipelines/\r\n __init__.py\r\n my_pipelines.py\r\n utils.py\r\n pyproject.toml\r\n .gitignore\r\n```\r\n\r\nFor now I use a CI/CD pipeline to build and push the components and finally compile and submit the pipeline job (using Google Vertex AI). The component build command looks like the following. This might be unorthodox, but this was the only way I could find which works with absolute imports. \r\n\r\n```sh\r\nkfp component build . --component-filepattern ${{ matrix.component }}/component.py --push-image\r\n# The matrix.component variable looks for example like \"myproject/components/preprocessing\"\r\n```\r\n\r\nNow the problem is that the generated Dockerfile always has a `COPY . . ` command, which leads to a rebuild of the component every time I change anything in my project. This also means I can't change a component without rerunning the whole pipeline. That can be costly. \r\n\r\nBefore the containerized python components I used the docker components and only copied the parts of the project relevant to the component. But that required a lot of manual work and led to errors because of missing files. " ]
"2022-10-21T19:14:59"
"2023-08-16T12:33:02"
null
MEMBER
null
There is a bug when building a containerized Python component that happens (at least) in the case when the longest path of the import graph ending at the component involves >2 modules. ### Environment KFP SDK 2.0.0-beta.6 ### Steps to reproduce For example: ```python # component.py from module_one import one from kfp import dsl @dsl.component def comp(): ... ``` ```python # module_one.py from module_two import two one = 1 ``` ```python # module_two.py two = 2 ``` Then: `kfp component build .` You get a `No module named` error. ### Expected result Should build without an error. ### Materials and Reference Related: https://github.com/kubeflow/pipelines/issues/8353
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8385/timeline
null
reopened
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8382
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8382/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8382/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8382/events
https://github.com/kubeflow/pipelines/issues/8382
1,418,197,168
I_kwDOB-71UM5Uh_iw
8,382
[feature] Allow users to skip `docker build` in `kfp components build`
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "The feature request makes sense to me. In terms of implementation, have you considered adding support for different engines? https://github.com/kubeflow/pipelines/blob/1f5d8ff39759d1b305807939a90f3f58f4befb78/sdk/python/kfp/deprecated/cli/components.py#L72-L76", "Hi Chen! Thanks for your quick response!\r\n\r\nI took a brief look at other container build engines. It looks like there is no official Python package for kaniko (although there is the unofficial [PyKaniko](https://github.com/nxexox/pykaniko)). Cloud Build could be supported using the Cloud Build Python SDK, but there are a lot of potential build options to account for! There are also a number of other build engines that users might prefer for one reason or another e.g. buildah.\r\n\r\nI think an ideal solution would be to add support for more container build engines (tbd which ones we support in the SDK), and also to allow the user to skip the container build step and build the containers themselves (e.g. in a subsequent step of a CI/CD pipeline or bash script)\r\n\r\nWhat do you think?", "Makes sense to me. \r\n/cc @connor-mccarthy for a second opinion.", "Agreed. Even with other build engine options, I think it makes sense to enable the \"no engine\" option also.", "Thanks both! In terms of next steps, I propose\r\n\r\n- [x] Merge in #8383 - thanks!\r\n- [x] Add the \"no engine\" option to v2 in master - I will work on a PR for this\r\n- [ ] Implement a Cloud Build option\r\n\r\nLooking more at kaniko, I'm not sure how this would work as kaniko is designed to run in its own container.\r\n\r\nCloud Build seems like a useful option to have, particularly for users who can't run container locally, so I'm happy to work on that. As I mentioned, there are a number of options that would need to be passed to Cloud Build e.g. project ID, region, staging bucket etc. How would you like these to be exposed to the user? 
My initial thought is to add additional command-line flags `--cloudbuild-project`, `--cloudbuild-region`, `--cloudbuild-staging-bucket` etc. Does that sound reasonable, or do you have another preferred approach?", "Thanks, @browningjp-datatonic!\r\n\r\n> Add the \"no engine\" option to v2 in master - I will work on a PR for this\r\n\r\nWe probably ought to use the same `--no-build-image` to conform with #8383 and ease migration from KFP SDK v1 to v2.\r\n\r\n> How would you like these to be exposed to the user?\r\n\r\nI think adding engine-specific flags is a non-option, unfortunately, as parameterizing `kfp components build` for each engine will quickly bloat the CLI's API. Are there other options for exposing these?", "> We probably ought to use the same `--no-build-image` to conform with #8383 and ease migration from KFP SDK v1 to v2.\r\n> \r\n\r\nYes, I will make sure they match\r\n\r\n> I think adding engine-specific flags is a non-option, unfortunately, as parameterizing `kfp components build` for each engine will quickly bloat the CLI's API. Are there other options for exposing these?\r\n\r\nI am struggling a bit for good options tbh! These are the ones that come to mind:\r\n\r\n1. Engine-specific flags - will quickly bloat the API as you say\r\n2. Some kind of config file - this is then another spec to design, maintain, and implement for each engine\r\n3. Environment variables - if a particular engine accepts options as env variables already, great (nothing to do), otherwise we have a naming convention to work out, maintain, implement...\r\n4. A generic `--build-options` flag containing comma-separated key-value pairs? E.g. `--build-options=project=my-gcp-project,region=europe-west2,staging_bucket=gs://my-gcs-bucket`. I think I like this option the most as it doesn't bloat the API (directly at least) and gives the flexibility for whatever options an engine might need. The drawback I can see is that it is fiddly to document well.\r\n5. 
(Something else I've not thought of...)\r\n\r\nWhat do you think?", "@browningjp-datatonic\r\n\r\nI don't have any other options in mind.\r\n\r\nIs cloudbuild the engine you are using? I'm curious if there are many user requests for this engine. If not, perhaps we don't need to rush to implement this engine.", "Yes, we would certainly use Cloud Build. All our customers are on Google Cloud anyway, and most of them are enterprise customers that can't use Docker locally.\r\n\r\nThat said, it's very easy to just run `kfp components build --no-build-image . && gcloud builds submit ...` and wrap that in a script or Makefile, so I agree it's not an urgent need.\r\n\r\nI notice that in the master branch, the docstring for the `--engine` flag says it is deprecated. If we are considering implementing more engines, should we remove this before the next stable release, so that we don't end up having to un-deprecate the flag?", "I think `kfp components build --no-build-image . && gcloud builds submit ...` is a good approach.\r\n\r\nWe probably don't want to be in the business of maintaining integrations with many different build providers and the KFP SDK provides only a very thin wrapper around the commands you shared.\r\n\r\nFor example:\r\n`kfp components build --no-build-image . && gcloud builds submit --cloudbuild-project ...` becomes\r\n`kfp components build --no-build-image --cloudbuild-project ... .`. The diff is `gcloud builds submit`. There is not much typing saved and it comes at the cost of (1) transparency about what is going on and (2) debuggability for the user via an added layer of indirection.\r\n\r\nThank you for bringing this up @browningjp-datatonic! I will review #8387.", "I agree - don't think it is worth the maintenance burden to support additional engines. I'm happy that this issue can be closed/resolved by #8383 and #8387 ", "BTW - what is the usual cadence for new releases for 1.8? 
We have a current customer use case that this would be extremely useful for!", "@browningjp-datatonic, we don't have a scheduled cadence for 1.8 releases. We typically wait until features accumulate, to minimize having an abundance of patch releases with only a small number of changes.\r\n\r\nUntil the next release, you can pip install from GitHub HEAD. Does that unblock you for now?", "@connor-mccarthy no problem. The customer is not able to install Python packages from GitHub directly (everything goes through a private PyPI mirror), but we can make a start on development in a less restrictive environment.\r\n\r\nThanks for all your help!" ]
"2022-10-21T11:31:54"
"2022-10-31T10:18:35"
"2022-10-26T15:58:36"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> Building containerised components using the Python DSL, and packaging them using `kfp components build` is a really nice user experience - you get the benefits of the Python DSL without some of the drawbacks of "lightweight" components (e.g. having to install dependencies at runtime). However, there are some scenarios where building container images using Docker is not an option, for example: - Some organisations do not allow Docker to be installed on Data Scientists' laptops (very common for enterprise users) - Some CI/CD platforms do not support docker-in-docker, and use other mechanisms (such as kaniko) for building container images This feature would allow the user to skip the actual container image build, but still perform the other functions in `kfp components build` (generation of the different files). The user would then be responsible for building the container images using their preferred mechanism (e.g. Cloud Build, Kaniko) ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> Currently not all users are able to use the `kfp components build` command as they cannot run Docker (locally or in CI/CD) ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> Current workarounds: - Roll your own containerised components without the convenience of the `kfp components build` command - I suppose you could mock the `docker` library as is done in the tests? But this is quite hacky! --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8382/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8381
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8381/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8381/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8381/events
https://github.com/kubeflow/pipelines/issues/8381
1,415,375,551
I_kwDOB-71UM5UXOq_
8,381
[bug] GKE preemption inconsistently treated as WorkflowNodeError and WorkflowNodeFailed
{ "login": "pjo256", "id": 2179163, "node_id": "MDQ6VXNlcjIxNzkxNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/2179163?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pjo256", "html_url": "https://github.com/pjo256", "followers_url": "https://api.github.com/users/pjo256/followers", "following_url": "https://api.github.com/users/pjo256/following{/other_user}", "gists_url": "https://api.github.com/users/pjo256/gists{/gist_id}", "starred_url": "https://api.github.com/users/pjo256/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pjo256/subscriptions", "organizations_url": "https://api.github.com/users/pjo256/orgs", "repos_url": "https://api.github.com/users/pjo256/repos", "events_url": "https://api.github.com/users/pjo256/events{/privacy}", "received_events_url": "https://api.github.com/users/pjo256/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hello @pjo256 , looks like it is an Argo question, can you create the issue in their repository and link it here? Thank you so much!", "Created https://github.com/argoproj/argo-workflows/issues/9880" ]
"2022-10-19T17:55:11"
"2022-10-21T15:19:10"
null
NONE
null
Hello! Unsure if this should be filed with Argo, but we're seeing some inconsistency with error-handling for preemptions. For some context, we run a large number of jobs with preemptible node pools in GKE. We typically don't want to retry application failures, but we always want to retry preemptions. We added a retry policy for `OnError`, and successfully triggered this policy when manually pre-empting nodes (e.g. deleting a node in the node pool). When looking at Argo workflows in the cluster, this was recorded as a `WorkflowNodeError` with a `pod deleted` error message and triggered our retry policy. During actual pre-emption, we see `WorkflowNodeFailed` - `Pod was terminated due to imminent node shutdown` - this does not trigger our `OnError` retry policy. Typical application failures show as `WorkflowNodeFailed` and do not trigger our retry policy as expected. On a related note, it would be useful if we could inspect the workflow status before retrying, or conditionally retry for a subset of failure reasons - not sure if this is already possible. ### Environment <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? GCP AI Platform Managed Pipelines on GKE <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.8.5 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Expected result Node pre-emption should trigger the `OnError` retry policy. ### Materials and reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. 
--> <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8381/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8380
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8380/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8380/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8380/events
https://github.com/kubeflow/pipelines/issues/8380
1,415,228,996
I_kwDOB-71UM5UWq5E
8,380
[feature] Add i18n Support
{ "login": "Souheil-Yazji", "id": 35379392, "node_id": "MDQ6VXNlcjM1Mzc5Mzky", "avatar_url": "https://avatars.githubusercontent.com/u/35379392?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Souheil-Yazji", "html_url": "https://github.com/Souheil-Yazji", "followers_url": "https://api.github.com/users/Souheil-Yazji/followers", "following_url": "https://api.github.com/users/Souheil-Yazji/following{/other_user}", "gists_url": "https://api.github.com/users/Souheil-Yazji/gists{/gist_id}", "starred_url": "https://api.github.com/users/Souheil-Yazji/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Souheil-Yazji/subscriptions", "organizations_url": "https://api.github.com/users/Souheil-Yazji/orgs", "repos_url": "https://api.github.com/users/Souheil-Yazji/repos", "events_url": "https://api.github.com/users/Souheil-Yazji/events{/privacy}", "received_events_url": "https://api.github.com/users/Souheil-Yazji/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Please bring i18n support to Kubeflow community (For example: https://github.com/kubeflow/kubeflow/issues/6665).\r\n\r\nMy feedback is that it will bring difficulty to maintain the i18n up-to-date in long term. Therefore I suggest making i18n an extension than contributing to the main repos." ]
"2022-10-19T15:55:19"
"2022-10-20T22:51:18"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> Add i18n support to Kubeflow Pipelines to enable a multilingual UI. ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> Offering Kubeflow Pipelines in languages other than English will help non-native English speakers use the Kubeflow Pipelines UI. ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> We host Kubeflow Pipelines at Statistics Canada and are required to offer a French translation of the frontend, which we implement in our own [fork](https://github.com/StatCan/kubeflow-pipelines) using i18next. One major issue is keeping the fork synced with the upstream code base, as we try to keep our hosting of Kubeflow up to date. A solution which would benefit the upstream project, our team, and other users would be to merge the i18n work into the upstream repo. I've already seen [a PR for i18n](https://github.com/kubeflow/pipelines/pull/8137) which shows that other users of Kubeflow are interested in internationalization of Pipelines. Our fork's current state is 2.0.0-alpha.3, which should integrate with KF 1.6 successfully. We are currently implementing the i18n for that release. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8380/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8380/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8379
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8379/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8379/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8379/events
https://github.com/kubeflow/pipelines/issues/8379
1,414,360,940
I_kwDOB-71UM5UTW9s
8,379
Please provide information on how to generate or get API_Keys to use the pipeline endpoints in the documentation
{ "login": "JV-DHL", "id": 116144452, "node_id": "U_kgDOBuw5RA", "avatar_url": "https://avatars.githubusercontent.com/u/116144452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JV-DHL", "html_url": "https://github.com/JV-DHL", "followers_url": "https://api.github.com/users/JV-DHL/followers", "following_url": "https://api.github.com/users/JV-DHL/following{/other_user}", "gists_url": "https://api.github.com/users/JV-DHL/gists{/gist_id}", "starred_url": "https://api.github.com/users/JV-DHL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JV-DHL/subscriptions", "organizations_url": "https://api.github.com/users/JV-DHL/orgs", "repos_url": "https://api.github.com/users/JV-DHL/repos", "events_url": "https://api.github.com/users/JV-DHL/events{/privacy}", "received_events_url": "https://api.github.com/users/JV-DHL/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @JV-DHL , what do you mean by API_KEY and why do you need it? Closing this issue for now since I don't think it is related to KFP at the moment based on current information.", "I need the API_KEY to use the endpoints like run a pipeline , archive an experiment , get run details etc. Which can be done using kfp_spi_server in python. Though the method isn’t clear for me as in how the API’s can be used .\r\n\r\n\r\nFrom: James Liu ***@***.***>\r\nSent: Friday, October 21, 2022 4:15 AM\r\nTo: kubeflow/pipelines ***@***.***>\r\nCc: Jayavignesh P (DHL IT Services), external ***@***.***>; Mention ***@***.***>\r\nSubject: Re: [kubeflow/pipelines] Please provide information on how to generate or get API_Keys to us the pipeline endpoints in the documentation (Issue #8379)\r\n\r\n\r\nHello @JV-DHL<https://github.com/JV-DHL> , what do you mean by API_KEY and why do you need it? Closing this issue for now since I don't think it is related to KFP at the moment based on current information.\r\n\r\n—\r\nReply to this email directly, view it on GitHub<https://github.com/kubeflow/pipelines/issues/8379#issuecomment-1286240960>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/A3WDSRGKOGIOKRW4UCSBWBTWEHDPFANCNFSM6AAAAAARIYNCQM>.\r\nYou are receiving this because you were mentioned.Message ID: ***@***.******@***.***>>\r\n" ]
"2022-10-19T06:15:46"
"2022-10-21T04:47:17"
"2022-10-20T22:44:55"
NONE
null
Please provide information in the documentation on how to generate the API_KEY needed to use the Kubeflow Pipelines endpoints.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8379/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8374
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8374/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8374/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8374/events
https://github.com/kubeflow/pipelines/issues/8374
1,412,247,849
I_kwDOB-71UM5ULTEp
8,374
[feature] Support graph sharding to limit the size of each shard
{ "login": "amrobbins", "id": 17150932, "node_id": "MDQ6VXNlcjE3MTUwOTMy", "avatar_url": "https://avatars.githubusercontent.com/u/17150932?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amrobbins", "html_url": "https://github.com/amrobbins", "followers_url": "https://api.github.com/users/amrobbins/followers", "following_url": "https://api.github.com/users/amrobbins/following{/other_user}", "gists_url": "https://api.github.com/users/amrobbins/gists{/gist_id}", "starred_url": "https://api.github.com/users/amrobbins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amrobbins/subscriptions", "organizations_url": "https://api.github.com/users/amrobbins/orgs", "repos_url": "https://api.github.com/users/amrobbins/repos", "events_url": "https://api.github.com/users/amrobbins/events{/privacy}", "received_events_url": "https://api.github.com/users/amrobbins/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @amrobbins , I think your request is very similar to `ParallelFor` functionality, and you can define pipeline run inside a pipeline run in KFPv2. Closing this issue now and feel free to reopen it if it doesn't fulfill your requirement.\r\n\r\nReferences:\r\n1. Docs on ParallelFor: https://www.kubeflow.org/docs/components/pipelines/v2/author-a-pipeline/pipelines/#dslparallelfor\r\n1. Example pipeline: https://github.com/kubeflow/pipelines/blob/98c4a22691c31a7988aba41debb8b8f56466fc1b/sdk/python/test_data/pipelines/pipeline_with_parallelfor_parallelism.py\r\n1. Reference docs on ParallelFor: https://kubeflow-pipelines.readthedocs.io/en/2.0.0b6/source/dsl.html#kfp.dsl.ParallelFor\r\n", "Hi James,\r\n\r\nThis feature is actually intended to allow parallel for to scale to process larger jobs. In my case I need to run jobs for a changing set of hundreds of stores. If I were to do this in a single parallel for loop, I would hit the size limit in the screenshot below:\r\n\r\n![sizeLimit](https://user-images.githubusercontent.com/17150932/197290471-ffd1604a-0227-4f11-b16b-47f0637d02b3.png)\r\n\r\nThat issue is also tracked here: [https://github.com/kubeflow/pipelines/issues/3134](https://github.com/kubeflow/pipelines/issues/3134)\r\n\r\nIt would really help me if 3134 had a resolution, but even if that was fixed it would not solve my issue because I would not be able to really inspect a single execution graph of that size. To get around both issues I break the job into shards, but doing this manually has big drawbacks and if there was another level of page, containing each shard that I could click on to get to the graph, that would really solve the scaling issue that I am up against.\r\n", "Hi James,\n\nI responded in a comment with further detail. 
Would you consider re-opening this issue?\n\nThanks,\nAndrew\n________________________________\nFrom: James Liu ***@***.***>\nSent: Thursday, October 20, 2022 5:42 PM\nTo: kubeflow/pipelines ***@***.***>\nCc: amrobbins ***@***.***>; Mention ***@***.***>\nSubject: Re: [kubeflow/pipelines] [feature] Support graph sharding to limit the size of each shard (Issue #8374)\n\n\nClosed #8374<https://github.com/kubeflow/pipelines/issues/8374> as completed.\n\n—\nReply to this email directly, view it on GitHub<https://github.com/kubeflow/pipelines/issues/8374#event-7635612315>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AEC3HVGNSFGSBVETW3HMPNTWEHDGHANCNFSM6AAAAAARHO3H7I>.\nYou are receiving this because you were mentioned.Message ID: ***@***.***>\n", "Hi Andrew,\r\n\r\nThank you for the explanation. \r\n\r\n> To get around both issues I break the job into shards, but doing this manually has big drawbacks and if there was another level of page, containing each shard that I could click on to get to the graph, that would really solve the scaling issue that I am up against.\r\n\r\nI think the KFP v2 may help mitigate the issue, if not solving it. \r\nIn KFP v2, we added a new feature that enables users to use pipelines within another pipeline ([a simple example demonstrating the use case](https://github.com/kubeflow/pipelines/blob/master/sdk/python/test_data/pipelines/pipeline_in_pipeline_complex.py)). You could possibly do a first pass of sharding of your data, send each shard to run in a sub-pipeline (using ParallelFor), where sub-pipelines could do further sharding. On the UI, the sub-pipelines (aka sub-DAGs) are rendered as single units similar to a component, and you can zoom into/expand the sub-pipelines to see details inside (per current implementation, it will navigate to a separate \"tab\" UI. 
Would this be close to what you may wanted?\r\n\r\nTo try out KFP v2, you need to install KFP 2.0.0a6 (available via [Kubeflow 1.6.1 on GCP](https://github.com/GoogleCloudPlatform/kubeflow-distribution/releases/tag/v1.6.1)), and use KFP SDK 2.0.0beta (latest being [2.0.0b6](https://pypi.org/project/kfp/2.0.0b6/)) to compile your pipeline. ", "Thank you for the suggestion. I will try this out and follow up on this thread.", "Hi I know it has been a while since I requested this feature. I do now have a kubeflow v2 compatible instance (I am using kfp==2.0.0b11, kfp-pipeline-spec==0.2.0, kfp-server-api==2.0.0a6). The pipeline in pipelines feature does seem like it will help in my use case, but when running kubeflow v2, I noticed there were no output logs for ops anymore, here is a screenshot showing what is there:\r\n\r\n![kfpv2logs](https://user-images.githubusercontent.com/17150932/214160548-5271518f-6d49-41c9-ad62-29972a0c8717.png)\r\n\r\nwhereas previously there were more tabs including the \"Logs\" tab.\r\n\r\n![kflogs](https://user-images.githubusercontent.com/17150932/214160947-00b75475-d843-4e94-93f4-b9b430197e5a.png)\r\n\r\nCan you confirm that kubeflow 2 will still have the ability to view run logs? Maybe this is just because the version I am using is a beta? thanks.\r\n", "Support logs view in KFPv2 is on our radar and currently an work item. cc @jlyaoyuli " ]
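The suggestion in the comments above is a two-pass fan-out: a first pass splits the stores into coarse shards (one sub-pipeline per shard via KFP v2's `ParallelFor`), and each sub-pipeline can shard further. A pure-Python sketch of that partitioning logic — the function names are illustrative, not KFP APIs; `hashlib` is used because Python's builtin `hash()` of strings is randomized per process:

```python
# Illustrative two-level shard plan, assuming store names as keys.
import hashlib

def shard_of(key: str, num_shards: int) -> int:
    # Stable across processes, unlike builtin hash()
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % num_shards

def two_level_fanout(stores, outer_shards: int, inner_shards: int):
    """Map each store to an (outer, inner) bucket: outer shards become
    sub-pipelines, inner shards become work items within each sub-pipeline."""
    plan = {}
    for store in stores:
        outer = shard_of(store, outer_shards)
        inner = shard_of(store + "/inner", inner_shards)
        plan.setdefault(outer, {}).setdefault(inner, []).append(store)
    return plan

stores = [f"store-{i}" for i in range(12)]
plan = two_level_fanout(stores, outer_shards=4, inner_shards=2)
# Every store lands in exactly one (outer, inner) bucket
assert sum(len(v) for inner in plan.values() for v in inner.values()) == len(stores)
```

On the UI side, each outer shard would then render as a single collapsible sub-DAG, per the comment's description of how sub-pipelines appear in KFP v2.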
"2022-10-17T21:25:50"
"2023-02-19T18:49:36"
"2022-10-20T22:42:32"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Responded to all of these below. ### What is the use case or pain point? Responded to all of these below. ### Is there a workaround currently? For my use case I need to create demand forecasts each night for each of the SKUs that belong to a few hundred online stores. If I were to try to run this as a single pipeline, it would fail because it would surpass the size limits for the graph, but also it would be unwieldy due to its size. To work around this problem, my pipeline has 2 parameters named num_shards and selected_shard. I then use modulus with the hash of the store's name to shard the jobs. This has gotten me by for a while, but with a recent acquisition I now have so many stores that I would need to maintain maybe 50 shards. So my feature request is to build shard support into Kubeflow. The way I am thinking about it, I would specify how many shards a pipeline should be broken into, and then Kubeflow would create that many pipeline jobs, each one getting passed its shard number. At the top level, a sharded job would look like a normal pipeline run, but when I click on it, there is another level of page that would show maybe just a simple box representing each shard, with a status of passed, failed, or running. Then I could click on the box to drill down into the shard's graph to inspect the job. <!-- Without this feature, how do you accomplish your task today? --> I am considering trying to get 50 shards manually set up, but it will be kind of messy for this job. I expect that I will need to use a setup like this for a number of features, so it's hard to imagine that I'll be able to keep scaling this way for other future jobs that I will need to run for the online stores that we support.
--- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
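The workaround the reporter describes — `num_shards` and `selected_shard` pipeline parameters plus a hash of the store name — can be sketched in a few lines of pure Python. This is illustrative only; `hashlib.md5` stands in for whatever hash the reporter actually uses, chosen here because Python's builtin `hash()` of strings differs between processes:

```python
# Sketch of per-run shard selection, assuming store names as keys.
import hashlib

def stores_for_shard(stores, num_shards: int, selected_shard: int):
    """Return the subset of stores this pipeline run should process."""
    def h(name: str) -> int:
        return int(hashlib.md5(name.encode()).hexdigest(), 16)
    return [s for s in stores if h(s) % num_shards == selected_shard]

stores = [f"store-{i}" for i in range(10)]
shards = [stores_for_shard(stores, num_shards=3, selected_shard=k) for k in range(3)]
# The three shard runs together cover every store exactly once
assert sorted(s for shard in shards for s in shard) == sorted(stores)
```

The drawback the reporter hits is visible here: adding stores or changing `num_shards` reshuffles the partition, and every shard must be launched and monitored as a separate pipeline run by hand.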
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8374/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8370
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8370/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8370/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8370/events
https://github.com/kubeflow/pipelines/issues/8370
1,409,784,569
I_kwDOB-71UM5UB5r5
8,370
[sdk] Protobuf 4 Support
{ "login": "alanhdu", "id": 1914111, "node_id": "MDQ6VXNlcjE5MTQxMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1914111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alanhdu", "html_url": "https://github.com/alanhdu", "followers_url": "https://api.github.com/users/alanhdu/followers", "following_url": "https://api.github.com/users/alanhdu/following{/other_user}", "gists_url": "https://api.github.com/users/alanhdu/gists{/gist_id}", "starred_url": "https://api.github.com/users/alanhdu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alanhdu/subscriptions", "organizations_url": "https://api.github.com/users/alanhdu/orgs", "repos_url": "https://api.github.com/users/alanhdu/repos", "events_url": "https://api.github.com/users/alanhdu/events{/privacy}", "received_events_url": "https://api.github.com/users/alanhdu/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @connor-mccarthy ", "Any progress on this? New packages are starting to depend on protobuf 4, making dependency resolution impossible.", "We plan to release at least one version of `kfp==2.x.x` that depends on protobuf 3 before bumping to depend on protobuf 4.", "Any ETA here?", "bumping this up, any ETA?", "After more investigation, I'm not sure that we can bump major versions within KFP SDK v2. Doing so would require removing support for protobuf 3 per the protobuf [cross-version runtime guarantees](https://protobuf.dev/support/cross-version-runtime-guarantee/#major). This would prevent some users from upgrading KFP SDK versions. For a other smaller dependencies with greater compatibility across major versions, this might be acceptable. This is not the case for protobuf, unfortunately.\r\n\r\nPlease feel free to provide additional information if there is something I'm missing. I will close for now.", "> Please feel free to provide additional information if there is something I'm missing.\r\n\r\n@connor-mccarthy I feel like there must be something being missed here.\r\n\r\nA number of packages have successfully managed to allow simultaneous support of `protobuf` 3 and 4 (e.g. take a look at [`google-cloud-pubsub`'s requirements](https://github.com/googleapis/python-pubsub/blob/main/setup.py#L44)), so I don't follow why it's an impossibility for `kfp`.\r\n\r\n> This would prevent some users from upgrading KFP SDK versions.\r\n\r\nKeep in mind that it's equally true that some users are prevented from using the KFP SDK at all due to the lack of `protobuf` 4 support. By not allowing `protobuf` 4 at all, `kfp` introduces an impossible-to-resolve dependency conflict between itself and packages that depend on `protobuf` 4. More and more projects are beginning to require `protobuf` 4 (often transitively), so this conflict is becoming more and more problematic.", "I will reopen this for additional investigation and keep in our backlog. 
If feasible, I suspect we will need to update the protobuf version in quite a few libraries to implement this change, including [`kfp`](https://github.com/kubeflow/pipelines/tree/88e1045c116a6dc8adac83b5936821fe2ef9b263/sdk/python), [`kfp-kubernetes`](https://github.com/kubeflow/pipelines/tree/88e1045c116a6dc8adac83b5936821fe2ef9b263/kubernetes_platform/python), [`kfp-pipeline-spec`](https://github.com/kubeflow/pipelines/tree/88e1045c116a6dc8adac83b5936821fe2ef9b263/api/v2alpha1/python), and [`kfp_server_api`](https://github.com/kubeflow/pipelines/tree/master/backend/api/v2beta1/python_http_client), with careful attention to the version range that `kfp` specifies for each of these dependencies.", "Note that the major version change to 4.21.x was only for the Python support of `protobuf`. If you're installing from conda-forge, the underlying `libprotobuf` stayed at major version 3 back then. My understanding is that the serialization format remained compatible with older versions, but Python clients have to handle the switch to a different underlying package, [`upb`](https://github.com/protocolbuffers/protobuf/tree/main/upb) (micro protocol buffers). For Python clients that use the public Python API, that change should be transparent.\r\n\r\nhttps://github.com/protocolbuffers/protobuf/releases/tag/v21.0\r\n\r\nfrom conda-forge...\r\n```txt\r\nprotobuf 4.21.1 py310hd8f1fbe_0\r\n-------------------------------\r\n...\r\ndependencies: \r\n - libgcc-ng >=12\r\n - libprotobuf 3.21.1.*\r\n```\r\nThere was a major version change of `libprotobuf` with release 22.0. Afaict, the breaking changes there affect neither Python clients nor the serialization format.\r\nhttps://github.com/protocolbuffers/protobuf/releases/tag/v22.0" ]
"2022-10-14T19:49:47"
"2023-09-06T06:39:21"
null
NONE
null
### Environment The latest `kfp` version pins `protobuf < 4.0`: https://github.com/kubeflow/pipelines/blob/71c8e77d8a583c629e6f83dcd629fee8133f6d2a/sdk/python/requirements.in#L27 What would it take for `kfp` to support `protobuf==4.23`? It looks like Python's protobuf package went from `3.20 -> 4.21` and other packages are now starting to require `protobuf > 4`. The list of breaking changes is available at https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates.
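The conflict described above is purely a version-range intersection problem: `kfp` pins `protobuf<4.0`, so any package requiring `protobuf>=4` cannot be co-installed, while a widened range (the pattern `google-cloud-pubsub` follows, roughly `>=3.19,<5`) would admit both. A toy check with `(major, minor)` tuples — this is not pip's resolver, just the arithmetic of the pins:

```python
# Toy illustration of why protobuf 4.23 is rejected by kfp's current pin
# but would be accepted by a widened range; bounds are (major, minor) tuples.

def satisfies(version, lower, upper):
    """lower inclusive, upper exclusive, like 'protobuf>=X,<Y'."""
    return lower <= version < upper

proto_4_23 = (4, 23)
assert not satisfies(proto_4_23, (3, 13), (4, 0))  # kfp's protobuf<4.0 pin rejects it
assert satisfies(proto_4_23, (3, 19), (5, 0))      # a widened range would accept it
```

Whether widening is safe is the separate question raised in the comments: per the protobuf cross-version runtime guarantees, supporting both majors at runtime is what the other Google client libraries rely on.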
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8370/reactions", "total_count": 15, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 2, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/kubeflow/pipelines/issues/8370/timeline
null
reopened
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8366
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8366/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8366/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8366/events
https://github.com/kubeflow/pipelines/issues/8366
1,409,705,068
I_kwDOB-71UM5UBmRs
8,366
[backend] e2e test flakiness due to regression on v1 caching
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2022-10-14T18:25:51"
"2022-10-14T21:36:49"
"2022-10-14T21:36:49"
COLLABORATOR
null
E2e test often failed due to timeout a recursive sample never ends. The `flip-coin` component repeats the same result due to caching despite that caching being explicitly disabled by the pipeline author: https://github.com/kubeflow/pipelines/blob/65ee01a436758e0f7ca4741e517452c02182ec03/samples/core/recursion/recursion.py#L49-L50 Example run results: https://oss.gprow.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/8354/kubeflow-pipeline-e2e-test/1580250814025306112 ``` STEP TEMPLATE PODNAME DURATION MESSAGE ● recursive-loop-pipeline-2q672 recursive-loop-pipeline ├─✔ flip-coin flip-coin recursive-loop-pipeline-2q672-2911149689 7s └─● graph-flip-component-1 graph-flip-component-1 ├─✔ print print recursive-loop-pipeline-2q672-1450500999 3s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-4208770976 6s └─● condition-2 condition-2 └─● graph-flip-component-1 graph-flip-component-1 ├─✔ print print recursive-loop-pipeline-2q672-906617105 2s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-3449589790 1s └─● condition-2 condition-2 └─● graph-flip-component-1 graph-flip-component-1 ├─✔ print print recursive-loop-pipeline-2q672-3599713631 1s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-2157693544 1s └─● condition-2 condition-2 └─● graph-flip-component-1 graph-flip-component-1 ├─✔ print print recursive-loop-pipeline-2q672-1678845337 2s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-3451272790 2s └─● condition-2 condition-2 └─● graph-flip-component-1 graph-flip-component-1 ├─✔ print print recursive-loop-pipeline-2q672-3682909207 2s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-3689557328 2s └─● condition-2 condition-2 └─● graph-flip-component-1 graph-flip-component-1 ├─✔ print print recursive-loop-pipeline-2q672-3638286081 2s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-1308976046 2s └─● condition-2 condition-2 └─● graph-flip-component-1 graph-flip-component-1 ├─✔ print print 
recursive-loop-pipeline-2q672-2740954799 3s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-2458413912 2s └─● condition-2 condition-2 └─● graph-flip-component-1 graph-flip-component-1 ├─✔ print print recursive-loop-pipeline-2q672-4041224585 2s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-2080960678 2s └─● condition-2 condition-2 └─● graph-flip-component-1 graph-flip-component-1 [the print / flip-coin-2 / condition-2 / graph-flip-component-1 block repeats for dozens of further recursion levels, each with a new node ID] ├─✔ print print recursive-loop-pipeline-2q672-2946413463 1s ├─✔ flip-coin-2 flip-coin-2 recursive-loop-pipeline-2q672-1685612752 2s └─● condition-2 condition-2 .... ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8366/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8365
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8365/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8365/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8365/events
https://github.com/kubeflow/pipelines/issues/8365
1,409,691,584
I_kwDOB-71UM5UBi_A
8,365
[feature] Add ability to select runs by job ID in multi-user mode when calling get runs api endpoint
{ "login": "tarat44", "id": 32471142, "node_id": "MDQ6VXNlcjMyNDcxMTQy", "avatar_url": "https://avatars.githubusercontent.com/u/32471142?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tarat44", "html_url": "https://github.com/tarat44", "followers_url": "https://api.github.com/users/tarat44/followers", "following_url": "https://api.github.com/users/tarat44/following{/other_user}", "gists_url": "https://api.github.com/users/tarat44/gists{/gist_id}", "starred_url": "https://api.github.com/users/tarat44/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tarat44/subscriptions", "organizations_url": "https://api.github.com/users/tarat44/orgs", "repos_url": "https://api.github.com/users/tarat44/repos", "events_url": "https://api.github.com/users/tarat44/events{/privacy}", "received_events_url": "https://api.github.com/users/tarat44/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "How do you get the endpoint for V2??", "This request was based on the v1 api. I just started following the proposal for the v2 api and am curious about whether this feature will be included in v2. I know the proposal contains a plan to drop resource references, but will filtering by job id still be possible?" ]
"2022-10-14T18:11:11"
"2022-11-15T04:21:38"
null
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Add the ability to select runs by job ID in multi-user mode when calling the get runs api endpoint ### What is the use case or pain point? It is very difficult to gather all runs associated with a single job without this filter ### Is there a workaround currently? The runs api has to be called without this filter and then the results have to be parsed one by one to determine if they are associated with the given job. I would be happy to work on contributing this feature if the community thinks it would be helpful and aligned with the overall design path for kubeflow pipelines --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍.
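The workaround the request above describes (fetch all runs, then match each run to its recurring job by hand) can be sketched as a small client-side filter. This is a minimal illustration over plain dicts shaped like the v1 API's resource references; the function name `runs_for_job` and the exact dict fields are illustrative assumptions, not a guaranteed SDK contract.

```python
# Hedged sketch of the client-side workaround: list runs, then keep only
# those whose resource references point at the given recurring-run (job) ID.
# The dict shape loosely mirrors v1 ResourceReference objects (illustrative).

def runs_for_job(runs, job_id):
    """Return only the runs whose resource references point at job_id."""
    matched = []
    for run in runs:
        for ref in run.get("resource_references", []):
            key = ref.get("key", {})
            if key.get("type") == "JOB" and key.get("id") == job_id:
                matched.append(run)
                break
    return matched

# Toy data standing in for a ListRuns response.
runs = [
    {"id": "run-1", "resource_references": [{"key": {"type": "JOB", "id": "job-a"}}]},
    {"id": "run-2", "resource_references": [{"key": {"type": "JOB", "id": "job-b"}}]},
    {"id": "run-3", "resource_references": [{"key": {"type": "EXPERIMENT", "id": "exp-1"}}]},
]

print([r["id"] for r in runs_for_job(runs, "job-a")])  # → ['run-1']
```

A server-side filter, as requested, would avoid paging every run through the client just to discard most of them.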
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8365/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8365/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8358
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8358/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8358/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8358/events
https://github.com/kubeflow/pipelines/issues/8358
1,407,527,664
I_kwDOB-71UM5T5Srw
8,358
[sdk] Unable to retrieve runs / experiments / recurring jobs in pipeline component (500 error)
{ "login": "Lejboelle", "id": 21076664, "node_id": "MDQ6VXNlcjIxMDc2NjY0", "avatar_url": "https://avatars.githubusercontent.com/u/21076664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lejboelle", "html_url": "https://github.com/Lejboelle", "followers_url": "https://api.github.com/users/Lejboelle/followers", "following_url": "https://api.github.com/users/Lejboelle/following{/other_user}", "gists_url": "https://api.github.com/users/Lejboelle/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lejboelle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lejboelle/subscriptions", "organizations_url": "https://api.github.com/users/Lejboelle/orgs", "repos_url": "https://api.github.com/users/Lejboelle/repos", "events_url": "https://api.github.com/users/Lejboelle/events{/privacy}", "received_events_url": "https://api.github.com/users/Lejboelle/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "HI @Lejboelle!\r\nIt seems that you might have wrong client initialization. Please, follow [these instructions](https://www.kubeflow.org/docs/components/pipelines/v1/sdk/connect-api/).", "I fixed the issue - it wasn't the client initialization since the defaults point to the correct host and credentials.\r\nI did, however, enter a wrong audience value in the V1ServiceAccountTokenProjection (**pipeline**.kubeflow.org instead of **pipelines**.kubeflow.org).\r\n\r\nClosing." ]
"2022-10-13T10:11:55"
"2022-10-14T10:06:30"
"2022-10-14T10:06:29"
NONE
null
### Environment * KFP version: 2.0.0-alpha.5 (Kubeflow 1.6.1) * KFP SDK version: 1.8.14 ### Steps to reproduce My main goal is to retrieve a job name or job id in a recurring run. I tried the PIPELINE_JOB_ID_PLACEHOLDER from the v2 dsl, but that didn't work. Then I tried creating a component that uses the KFP SDK to retrieve the job id, however, it is not possible to even retrieve a list of runs, even though, the token is mounted. `from kfp import dsl, compiler from kfp.components import func_to_container_op import kubernetes.client as k8s ` `def _print(namespace: str): from kfp import Client client = Client() try: print("runs:") print(client.list_runs(namespace=namespace)) print() except Exception as e: print("failed to get runs") print(e) try: print("pipelines") print(client.list_pipelines()) except Exception: print("failed to get pipelines") try: print("experiments") print(client.list_experiments(namespace=namespace)) print() except Exception as e: print("failed to get experiments") print(e) print_op = func_to_container_op(_print, packages_to_install=["kfp==1.8.14"]) @dsl.pipeline( name="test-pipeline", ) def test_pipeline(): vol_sat = k8s.V1ServiceAccountTokenProjection(audience="pipeline.kubeflow.org", expiration_seconds=7200, path="token") vol_proj = k8s.V1ProjectedVolumeSource(sources=[k8s.V1VolumeProjection(service_account_token=vol_sat)]) vol = k8s.V1Volume(name="volume-kf-pipeline-token", projected=vol_proj) task = print_op(namespace="{{workflow.namespace}}") task.add_pvolumes({"/var/run/secrets/kubeflow/pipelines": vol}) ` ### Expected result The 'list_pipelines' works as expected: `{'next_page_token': None, 'pipelines': [...], 'total_size': 3}` But the other return: `(500) Reason: Internal Server Error HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Thu, 13 Oct 2022 09:52:55 GMT', 'x-envoy-upstream-service-time': '4', 'server': 'istio-envoy', 'x-envoy-decorator-operation': 
'ml-pipeline.kubeflow.svc.cluster.local:8888/*', 'transfer-encoding': 'chunked'}) HTTP response body: {"error":"Internal error: [Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header., Authentication failure: Unauthenticated: Review.Status.Authenticated is false: Failed to authenticate token review]\nFailed to authorize with API resource references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:287\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).canAccessRun\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:400\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:192\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2198\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2200\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1282\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1616\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:921\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581\nFailed to authorize with namespace resource 
reference.\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:287\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:194\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2198\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2200\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1282\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1616\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:921\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581","code":13,"message":"Internal error: [Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header., Authentication failure: Unauthenticated: Review.Status.Authenticated is false: Failed to authenticate token review]\nFailed to authorize with API resource 
references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:287\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).canAccessRun\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:400\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:192\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2198\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2200\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1282\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1616\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:921\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581\nFailed to authorize with namespace resource 
reference.\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:287\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:194\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2198\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2200\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1282\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1616\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:921\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581","details":[{"@type":"type.googleapis.com/api.Error","error_message":"Internal error: [Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header., Authentication failure: Unauthenticated: Review.Status.Authenticated is false: Failed to authenticate token review]\nFailed to authorize with API resource 
references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:287\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).canAccessRun\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:400\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:192\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2198\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2200\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1282\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1616\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:921\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581\nFailed to authorize with namespace resource 
reference.\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:287\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:194\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2198\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2200\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1282\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1616\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:921\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581","error_details":"Internal error: [Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header., Authentication failure: Unauthenticated: Review.Status.Authenticated is false: Failed to authenticate token review]\nFailed to authorize with API resource 
references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:287\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).canAccessRun\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:400\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:192\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2198\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2200\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1282\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1616\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:921\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581\nFailed to authorize with namespace resource 
reference.\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:287\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:194\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2198\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:2200\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1282\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:1616\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.44.0/server.go:921\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581"}]}` Also tried manually loading the credentials using kfp.auth.ServiceAccountTokenVolumeCredentials(path=None) and input those to the client constructor. When running a notebook using an access-ml-pipeline poddefault, there are no issues, and I'm able to list runs, experiments, etc. Not sure if this is a bug or a design choice. Is there no way to retrieve the job id / job name in a recurring run? <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
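Per the reporter's follow-up, the root cause was an audience typo (`pipeline.kubeflow.org` instead of `pipelines.kubeflow.org`) in the projected token volume. A minimal sketch of that volume spec, built as plain dicts mirroring the Kubernetes API fields rather than the `kubernetes` client objects, with a guard against the slip; the helper name `token_volume` and the guard itself are illustrative additions.

```python
# Sketch of the projected service-account-token volume from the report above,
# as plain dicts mirroring Kubernetes API field names. The guard catches the
# "pipeline" vs "pipelines" audience typo that caused the 500s here.

EXPECTED_AUDIENCE = "pipelines.kubeflow.org"

def token_volume(audience, expiration_seconds=7200):
    if audience != EXPECTED_AUDIENCE:
        raise ValueError(f"audience must be {EXPECTED_AUDIENCE!r}, got {audience!r}")
    return {
        "name": "volume-kf-pipeline-token",
        "projected": {
            "sources": [{
                "serviceAccountToken": {
                    "audience": audience,
                    "expirationSeconds": expiration_seconds,
                    "path": "token",
                }
            }]
        },
    }

vol = token_volume("pipelines.kubeflow.org")
print(vol["projected"]["sources"][0]["serviceAccountToken"]["audience"])  # → pipelines.kubeflow.org
```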
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8358/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8356
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8356/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8356/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8356/events
https://github.com/kubeflow/pipelines/issues/8356
1,406,862,021
I_kwDOB-71UM5T2wLF
8,356
[frontend] No metrics shown in Run Output tab
{ "login": "Kokkini", "id": 22306485, "node_id": "MDQ6VXNlcjIyMzA2NDg1", "avatar_url": "https://avatars.githubusercontent.com/u/22306485?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kokkini", "html_url": "https://github.com/Kokkini", "followers_url": "https://api.github.com/users/Kokkini/followers", "following_url": "https://api.github.com/users/Kokkini/following{/other_user}", "gists_url": "https://api.github.com/users/Kokkini/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kokkini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kokkini/subscriptions", "organizations_url": "https://api.github.com/users/Kokkini/orgs", "repos_url": "https://api.github.com/users/Kokkini/repos", "events_url": "https://api.github.com/users/Kokkini/events{/privacy}", "received_events_url": "https://api.github.com/users/Kokkini/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "The metrics is not shown along side each run in the **Experiments** tab either:\r\n\r\n\r\n<img width=\"1440\" alt=\"Screen Shot 2022-10-12 at 3 33 16 PM\" src=\"https://user-images.githubusercontent.com/22306485/195460346-01cb5f5c-db9a-4395-9181-b0d66e96c825.png\">\r\n", "This has been fixed in the recent patch release [https://github.com/kubeflow/manifests/releases/tag/v1.6.1](url)", "@Lejboelle thank you. I'll try it out.", "I too am having this same issue, but with KFP 1.8.16.\r\nBy the way the above link to release/tag/v1.6.1 points to nothing \"No results matched your search\".", "Hi @Kokkini , @CheaterAdams,\r\n\r\nA newer release of a platform-agnostic Kubeflow 1.6.1 is [here](https://github.com/kubeflow/manifests/releases/tag/v1.6.1).\r\n", "If the fix was available in 1.6.1, it is broken in 1.8.16. Can someone tell me what the fix is and are any changes to the above code needed?\r\nthanks", "I resolved my problem by using KFP V2 and metrics.log_metric as seen in the code snippet below. With an earlier attempt at using v2 I was not successful. Encountered JSON formatting error. I now believe that was due to not specifying V2 compatible mode when compiling and running from pipeline function. 
See at the end of this post.\r\n\r\n\r\n``@component(\r\n packages_to_install=['scikit-learn'],\r\n base_image='python:3.9',\r\n)\r\ndef digit_classification(metrics: Output[Metrics]):\r\n \r\n from sklearn import model_selection\r\n from sklearn.linear_model import LogisticRegression\r\n from sklearn import datasets\r\n from sklearn.metrics import accuracy_score\r\n from sklearn.metrics import precision_score\r\n...\r\n #split data\r\n X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=test_size, random_state=seed)\r\n #fit model\r\n model.fit(X_train, y_train)\r\n\r\n #accuracy on test set\r\n result = model.score(X_test, y_test)\r\n metrics.log_metric('accuracy', (result*100.0))\r\n``\r\n\r\nkfp.compiler.Compiler(mode=**kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE**).compile(\r\n pipeline_func=metrics_visualization_pipeline,\r\n package_path='pipeline.yaml')\r\n\r\nkfp_client.create_run_from_pipeline_func(\r\n metrics_visualization_pipeline,\r\n mode=**kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE**,\r\n # You can optionally override your pipeline_root when submitting the run too:\r\n # pipeline_root='gs://my-pipeline-root/example-pipeline',\r\n arguments={})\r\n", "Hi @Kokkini, does your comment regarding platform agnostic Kubeflow 1.6.1 imply the fix is provided in 1.6.1?\r\nI am still looking for a solution using KPF 1.8.16.", "Does anyone have solution for this? 
I'm having this issue also, KFP 1.8.18.", "\r\nI am using kubeflow 1.7 with minio hosted on separate namespace (Basically , I have setup minio and kubeflow into same EKS cluster along with Kubeflow with an overall aim to achieve to segregate artifacts and metadata per namespace per bucket )\r\n\r\n-> Artifacts and Pipeline logs are going into dedicated buckets\r\n-> Metrics are being generated\r\n\r\nProblem - I can not see those metrics on Run Output Tab ,instead of getting metrics on Output artifacts\r\n\r\n@yhwang : I am curious to know , How **Run Output** reads data for metrics to display on UI.\r\n\r\n[Commit for above fix ] (https://github.com/jlyaoyuli/pipelines/commit/48c985bf337dbd9e8a8ab7067a05388639e23e21) s merged for kubeflow 1.7", "here is the documentation about Run Output: https://www.kubeflow.org/docs/components/pipelines/v1/sdk/output-viewer/\r\nbut I don't know the underlying implementation. I was doing the abstract interface implementation if you are curious about that line of code." ]
"2022-10-12T22:08:23"
"2023-06-19T18:14:23"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? I installed it with the full kubeflow deployment. Kubeflow 1.6, installed with [juju](https://charmed-kubeflow.io/docs/install) * KFP version: **build version dev_local** (this shows on bottom of KFP UI left sidenav) ### Steps to reproduce Running this minimal pipeline that just saves a metrics ``` from kfp.components import OutputPath, create_component_from_func import kfp.dsl as dsl import kfp @dsl.pipeline( name='kubeflow-metric-demo') def metric_pipeline(): produce_metrics_op = create_component_from_func( produce_metrics, base_image='python:3.7', packages_to_install=[], output_component_file='component.yaml', ) produce_metrics_op() def produce_metrics( mlpipeline_metrics_path: OutputPath('Metrics') ): import json metrics = { 'metrics': [{ 'name': 'accuracy', 'numberValue': 0.9, 'format': "RAW" }] } with open(mlpipeline_metrics_path, 'w') as f: json.dump(metrics, f) print("metrics path") print(mlpipeline_metrics_path) if __name__ == "__main__": kfp.compiler.Compiler().compile(metric_pipeline, 'metric_pipeline.yaml') ``` ### Expected result I expect the metrics to be shown in the "Run output" tab in the UI. However, nothing is there as you can see in the image. <img width="1192" alt="Screen Shot 2022-10-12 at 3 03 14 PM" src="https://user-images.githubusercontent.com/22306485/195456680-858247f8-9f61-4283-a85d-964c71831de0.png"> However, the metrics did show up at the "output artifacts" section of the component <img width="1189" alt="Screen Shot 2022-10-12 at 3 07 34 PM" src="https://user-images.githubusercontent.com/22306485/195457044-ea0def8b-0d41-48b5-9734-8927e70b5faf.png"> ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
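Since a malformed `mlpipeline-metrics` file is a common reason metrics render under "output artifacts" but not in the Run output tab, a quick schema sanity check before writing the JSON can rule that out. This is a sketch, not the KFP UI's actual parser: the function `validate_metrics` is an illustrative helper, and it checks the fields the v1 metrics docs describe (`name`, `numberValue`, and an optional `format` of `RAW` or `PERCENTAGE`).

```python
import json

# Minimal sanity check for the v1 mlpipeline-metrics payload (sketch only;
# not the KFP implementation). Each entry needs a string "name", a numeric
# "numberValue", and, if "format" is given, RAW or PERCENTAGE.

ALLOWED_FORMATS = {"RAW", "PERCENTAGE"}

def validate_metrics(payload):
    entries = payload.get("metrics")
    if not isinstance(entries, list) or not entries:
        raise ValueError("payload must contain a non-empty 'metrics' list")
    for entry in entries:
        if not isinstance(entry.get("name"), str):
            raise ValueError("each metric needs a string 'name'")
        if not isinstance(entry.get("numberValue"), (int, float)):
            raise ValueError("each metric needs a numeric 'numberValue'")
        if entry.get("format", "RAW") not in ALLOWED_FORMATS:
            raise ValueError("'format' must be RAW or PERCENTAGE")
    return payload

metrics = {"metrics": [{"name": "accuracy", "numberValue": 0.9, "format": "RAW"}]}
print(json.dumps(validate_metrics(metrics)))
```

If the payload validates and still fails to render, the issue is more likely the UI/backend version mismatch discussed in the comments than the component's output.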
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8356/timeline
null
null
null
null
false
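The metrics issue above hinges on the shape of the v1 metrics artifact: the Run output tab only renders metrics written as a specific JSON structure. Below is a minimal sketch, with no KFP dependency, of writing such a file; the temp file stands in for the output path that KFP injects into a real component, so the path handling here is illustrative only.

```python
import json
import tempfile

# The v1 UI expects a top-level "metrics" list; each entry carries
# "name", "numberValue", and an optional "format" such as "RAW".
metrics = {
    "metrics": [
        {"name": "accuracy", "numberValue": 0.9, "format": "RAW"},
    ]
}

# In a real component KFP supplies the output path; a temp file
# stands in for it here.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(metrics, f)
    metrics_path = f.name

# Round-trip to confirm the structure survives serialization.
with open(metrics_path) as f:
    loaded = json.load(f)
print(loaded["metrics"][0]["name"])
```

If the file at the KFP-provided path parses like this but the Run output tab is still empty, the problem is usually on the artifact-storage side (as in the MinIO setup discussed in the comments), not in the component code.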
https://api.github.com/repos/kubeflow/pipelines/issues/8355
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8355/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8355/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8355/events
https://github.com/kubeflow/pipelines/issues/8355
1,406,742,634
I_kwDOB-71UM5T2TBq
8,355
[backend] Conflicting RetryRun and TTLStrategy in Argo
{ "login": "casassg", "id": 6912589, "node_id": "MDQ6VXNlcjY5MTI1ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/6912589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/casassg", "html_url": "https://github.com/casassg", "followers_url": "https://api.github.com/users/casassg/followers", "following_url": "https://api.github.com/users/casassg/following{/other_user}", "gists_url": "https://api.github.com/users/casassg/gists{/gist_id}", "starred_url": "https://api.github.com/users/casassg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/casassg/subscriptions", "organizations_url": "https://api.github.com/users/casassg/orgs", "repos_url": "https://api.github.com/users/casassg/repos", "events_url": "https://api.github.com/users/casassg/events{/privacy}", "received_events_url": "https://api.github.com/users/casassg/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, { "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @casassg! Thank you for bringing this up.\r\nThe KFP v1 backend does not support the retry op. This is on our roadmap for KFP v2, but it is not available yet.", "@gkcalat note that RetryRun is part of KFP v1. It's an already existing feature; it just breaks under the above scenario." ]
"2022-10-12T20:04:56"
"2022-10-31T21:37:51"
null
CONTRIBUTOR
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Kubeflow 1.4 on GKE. Custom Cluster * KFP version: 1.7.0 * KFP SDK version: 1.8.11 ### Steps to reproduce - Rough code: ```python import kfp from kfp.components import create_component_from_func @create_component_from_func def wait_for_n_seconds(seconds: int): import time time.sleep(seconds) raise Exception("Finished Sleeping") @kfp.dsl.pipeline() def my_waiting_pipeline(): wait_for_n_seconds(60) c = kfp.Client() conf = kfp.dsl.PipelineConf().set_ttl_seconds_after_finished(60) result = c.create_run_from_pipeline_func(pipeline_func=my_waiting_pipeline, pipeline_conf=conf, arguments={}) result.wait_for_run_completion() import time time.sleep(20) c.runs.retry_run(result.run_id) result.wait_for_run_completion() ``` - Running this code results in pipeline being stuck in running state forever ### Expected result - On retry it should attempt again to execute pipeline and then it will fail ### Materials and Reference Important logs from workflow-controller pod: ``` workflow-controller time="2022-10-12T19:51:04.079Z" level=info msg="Queueing Failed workflow gcasassaez/my-waiting-pipeline-v96t7 for delete in 1m0s" 2022-10-12 13:51:05.139 MDT workflow-controller time="2022-10-12T19:51:05.139Z" level=info msg="Queueing Failed workflow gcasassaez/my-waiting-pipeline-v96t7 for delete in 59s" 2022-10-12 13:51:05.140 MDT workflow-controller time="2022-10-12T19:51:05.139Z" level=info msg="Queueing Failed workflow gcasassaez/my-waiting-pipeline-v96t7 for delete in 59s" 2022-10-12 13:51:33.100 MDT workflow-controller time="2022-10-12T19:51:33.100Z" level=info msg="Processing workflow" namespace=gcasassaez workflow=my-waiting-pipeline-v96t7 2022-10-12 13:51:33.101 MDT workflow-controller time="2022-10-12T19:51:33.101Z" level=info msg="All of node my-waiting-pipeline-v96t7.wait-for-n-seconds dependencies [] completed" namespace=gcasassaez workflow=my-waiting-pipeline-v96t7 2022-10-12 13:51:33.102 MDT workflow-controller 
time="2022-10-12T19:51:33.102Z" level=info msg="Pod node my-waiting-pipeline-v96t7-2776729111 initialized Pending" namespace=gcasassaez workflow=my-waiting-pipeline-v96t7 2022-10-12 13:51:33.532 MDT workflow-controller time="2022-10-12T19:51:33.532Z" level=info msg="Created pod: my-waiting-pipeline-v96t7.wait-for-n-seconds (my-waiting-pipeline-v96t7-2776729111)" namespace=gcasassaez workflow=my-waiting-pipeline-v96t7 2022-10-12 13:51:33.549 MDT workflow-controller time="2022-10-12T19:51:33.549Z" level=info msg="Workflow update successful" namespace=gcasassaez phase=Running resourceVersion=2355618548 workflow=my-waiting-pipeline-v96t7 2022-10-12 13:51:43.075 MDT workflow-controller time="2022-10-12T19:51:43.075Z" level=info msg="Processing workflow" namespace=gcasassaez workflow=my-waiting-pipeline-v96t7 2022-10-12 13:51:43.076 MDT workflow-controller time="2022-10-12T19:51:43.076Z" level=info msg="Updating node my-waiting-pipeline-v96t7-2776729111 status Pending -> Running" namespace=gcasassaez workflow=my-waiting-pipeline-v96t7 2022-10-12 13:51:43.093 MDT workflow-controller time="2022-10-12T19:51:43.093Z" level=info msg="Workflow update successful" namespace=gcasassaez phase=Running resourceVersion=2355619126 workflow=my-waiting-pipeline-v96t7 2022-10-12 13:52:05.000 MDT workflow-controller time="2022-10-12T19:52:05.000Z" level=info msg="Deleting TTL expired workflow 'gcasassaez/my-waiting-pipeline-v96t7'" ``` Argo Workflows GC controller which does not check if the updated workflow should be GCed or not: https://github.com/argoproj/argo-workflows/blob/39b7f91392c4c0a0a7c167b5ad7c89b1382df68d/workflow/gccontroller/gc_controller.go#L201 Not sure if there is a way to clear the queue on Argo or wether this is something we can do from RetryRun which avoids this type of issues. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8355/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8355/timeline
null
null
null
null
false
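To make the race in the RetryRun/TTLStrategy report above concrete, here is a small, KFP-independent simulation (not Argo's actual code) contrasting a TTL garbage collector that queues a deletion when the workflow first fails and never re-checks, against one that re-checks the live phase at delete time. The first behaves like the logs in the report: the retried, now-running workflow is deleted when the timer fires.

```python
from dataclasses import dataclass


@dataclass
class Workflow:
    name: str
    phase: str  # "Running", "Failed", or "Succeeded"


def naive_gc_should_delete(queued_for_delete: bool) -> bool:
    # Mirrors the reported behavior: once queued at failure time,
    # the workflow is deleted when the TTL timer fires, regardless
    # of any status change in between.
    return queued_for_delete


def safe_gc_should_delete(queued_for_delete: bool, wf: Workflow) -> bool:
    # Re-check the live status before deleting.
    return queued_for_delete and wf.phase in ("Failed", "Succeeded")


wf = Workflow("my-waiting-pipeline-v96t7", "Failed")
queued = True          # TTL controller queues the failed workflow for delete

wf.phase = "Running"   # RetryRun restarts the workflow before the TTL fires

print(naive_gc_should_delete(queued))      # deletes the retried run
print(safe_gc_should_delete(queued, wf))   # would leave it alone
```

The linked `gc_controller.go` behaves like the naive variant here, which is why retrying a run inside its TTL window leaves it stuck: the pod keeps running but its Workflow object is garbage-collected underneath it.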
https://api.github.com/repos/kubeflow/pipelines/issues/8353
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8353/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8353/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8353/events
https://github.com/kubeflow/pipelines/issues/8353
1,406,346,127
I_kwDOB-71UM5T0yOP
8,353
[bug] Containerized Python Components
{ "login": "statmike", "id": 17235991, "node_id": "MDQ6VXNlcjE3MjM1OTkx", "avatar_url": "https://avatars.githubusercontent.com/u/17235991?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statmike", "html_url": "https://github.com/statmike", "followers_url": "https://api.github.com/users/statmike/followers", "following_url": "https://api.github.com/users/statmike/following{/other_user}", "gists_url": "https://api.github.com/users/statmike/gists{/gist_id}", "starred_url": "https://api.github.com/users/statmike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statmike/subscriptions", "organizations_url": "https://api.github.com/users/statmike/orgs", "repos_url": "https://api.github.com/users/statmike/repos", "events_url": "https://api.github.com/users/statmike/events{/privacy}", "received_events_url": "https://api.github.com/users/statmike/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "**Update**\r\n\r\nI upgraded to `kfp.__version__ = 2.0.0b5` and it fixed the error \"ERROR: No such command 'component'.\" and it matches the KFP API SDK.\r\n\r\nI still see the behavior of `ERROR: No KFP components found in file ...` for any module in the folder that does not contain a component definition. I tried using `--component-filepattern my_component.py` but then it fails to find my other Python modules and returns errors like `No module named ...` for each module being imported in `my_component.py`.\r\n\r\nTo check that my non-component Python modules work I:\r\n- used import statements in an interactive Python environment and tested the included functions\r\n- ran the entrypoint Python module from a command line and observed its successful completion\r\n\r\n", "Hi, @statmike. Can you please provide the file tree that contains `my_component.py` and the accompanying source code files? Can you also provide what `my_component.py` looks like (feel free to omit the component body -- it's probably not necessary)?", "Having the same issue. 
Here are mine for a second look:\r\n\r\n# my_component.py\r\nfrom kfp import dsl\r\n\r\n@dsl.component(\r\n base_image='python:3.9'\r\n , target_image='gcr.io/my-project/my-containerized_python_component:v1'\r\n , output_component_file=\"component_containerized_python_component.yaml\"\r\n)\r\ndef train_model(\r\n):\r\n # component functionality\r\n print(\"hello\")\r\n\r\n\r\n# my_helper_module.py\r\ndef adding_stuff(x, y):\r\n return x + y", "Hello @connor-mccarthy ,\r\nThis is the full example I have been setting up:\r\n\r\n**tree**\r\n- ./src\r\n - __init__.py\r\n - my_component.py\r\n - my_helper.py\r\n - train.py\r\n\r\n**__init__.py**\r\nis empty\r\n\r\n**my_component.py**\r\n```python\r\n# my_component.py\r\nfrom kfp.v2 import dsl\r\nimport train\r\n\r\n@dsl.component(\r\n base_image = 'python:3.7',\r\n target_image = 'us-central1-docker.pkg.dev/statmike-mlops-349915/statmike-mlops-349915/tips_kfp_kfp_component',\r\n packages_to_install = ['numpy', 'scikit-learn']\r\n)\r\ndef train_model(\r\n size: int,\r\n metrics: dsl.Output[dsl.Metrics],\r\n class_metrics: dsl.Output[dsl.ClassificationMetrics]\r\n):\r\n # run\r\n cm, auPRC = train.runner(size)\r\n \r\n # output\r\n metrics.log_metric('auPRC', auPRC)\r\n class_metrics.log_confusion_matrix(['Not Fraud', 'Fraud'], cm)\r\n```\r\n\r\n**train.py**\r\n```python\r\n# train.py\r\nfrom sklearn import metrics\r\nimport my_helper\r\n\r\ndef runner(size):\r\n # make data\r\n x, y, p = my_helper.make_dataset(size)\r\n\r\n # fit logistic regression\r\n y_pred = my_helper.fit_logistic(x, y)\r\n\r\n # gather metrics\r\n cm = metrics.confusion_matrix(y, y_pred)\r\n auPRC = metrics.accuracy_score(y, y_pred)\r\n \r\n return cm, auPRC\r\n\r\ncm, auPRC = runner(100)\r\n```\r\n\r\n**my_helper.py**\r\n```python\r\n# my_helper.py\r\nimport numpy as np\r\nfrom sklearn import linear_model\r\n\r\n# Make some data where y = 0, 1 for a range of x's - let y=1 be more likely as x increases\r\ndef make_dataset(size):\r\n x = 
np.random.randn(size)\r\n p = 1 / (1 + np.exp(-1 * (5 * x)))\r\n y = np.random.binomial(1, p, size) \r\n return x, y, p\r\n\r\n# fit logistic regression\r\ndef fit_logistic(x, y):\r\n logisticReg = linear_model.LogisticRegression()\r\n x2 = x.reshape(-1,1)\r\n fit = logisticReg.fit(x2, y)\r\n return fit.predict(x2)\r\n```\r\n", "@statmike The linked PR will fix the issue. \r\n\r\nI tested that your code would work after a minor fix.\r\n\r\nIn your `my_component.py`, the following line\r\n```python\r\nclass_metrics.log_confusion_matrix(['Not Fraud', 'Fraud'], cm)\r\n```\r\nshould be updated to\r\n```python\r\nclass_metrics.log_confusion_matrix(['Not Fraud', 'Fraud'], cm.tolist())\r\n```\r\nbecause `cm` is of type `np.ndarrary` which is not JSON serializable, and the method requires a list typed input here.", "Thank you @chensun for the very fast response and fix. I look forward to using this once it is merged!", "Hello @chensun \r\nI installed `kfp==2.0.0b6` and tried the code above with the fix to `cm` and it looks like the import of the `my_helper.py` file is not occurring in time to be used. \r\n\r\nRunning `kfp component build ./{DIR}/src` Returns: \r\n```\r\nBuilding component using KFP package path: kfp==2.0.0-beta.6\r\nNo module named 'my_helper'\r\n```", "@statmike, thank you for raising this. This is indeed a bug. After some investigation, it happens (at least) in the case when the longest path of the import graph ending at the component involves >2 modules. For example:\r\n\r\n```python\r\n# component.py\r\nfrom module_one import one\r\n\r\n@dsl.component\r\ndef comp(): ...\r\n```\r\n\r\n```python\r\n# module_one.py\r\nfrom module_two import two\r\none = 1\r\n```\r\n\r\n```python\r\n# module_two.py\r\ntwo = 2\r\n```\r\n\r\nThe fix isn't immediately clear, but I will add this as a bug and look into it.", "@connor-mccarthy thank you for the quick validation. Do I need to open another issue for tracking? 
I notice this one is closed.", "@statmike\r\nI have made one here: https://github.com/kubeflow/pipelines/issues/8385" ]
"2022-10-12T14:46:48"
"2022-10-21T19:16:05"
"2022-10-13T21:04:04"
NONE
null
`kfp.__version__ = 1.8.14` Trying the instructions at [2. Containerized Python components](https://www.kubeflow.org/docs/components/pipelines/v2/author-a-pipeline/components/#2-containerized-python-components): - the command `kfp component build ...` results in the error "Error: No such command 'component'." - KFP API SDK reference for [kfp component build](https://kubeflow-pipelines.readthedocs.io/en/master/source/cli.html#kfp-component-build) - changing `kfp component` to `kfp components` seems to get past the command issue - using `kfp components build` throws errors for `*.py` files that do not contain components: `ERROR: No KFP components found in file ...` - the other `*.py` files are modules imported by the file.py with component definitions, using relative imports. This seems to be exactly how the example is laid out in the [doc example](https://www.kubeflow.org/docs/components/pipelines/v2/author-a-pipeline/components/#2-containerized-python-components)
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8353/timeline
null
completed
null
null
false
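The import chain that @connor-mccarthy describes in the comments above (component -> module_one -> module_two, a path of more than two modules) is easy to reproduce outside of KFP. The sketch below, plain Python with no kfp install needed, builds that three-module layout in a temp directory and confirms that ordinary Python imports resolve it fine; the reported failure was specific to how `kfp component build` collected source files, not to the modules themselves. File names mirror the comment's example.

```python
import os
import sys
import tempfile

# Recreate the layout from the bug report: component -> module_one -> module_two.
src = tempfile.mkdtemp()
files = {
    "module_two.py": "two = 2\n",
    "module_one.py": "from module_two import two\none = 1 + two\n",
    "component.py": "from module_one import one\nresult = one\n",
}
for name, body in files.items():
    with open(os.path.join(src, name), "w") as f:
        f.write(body)

# A plain import walks the whole chain without trouble.
sys.path.insert(0, src)
import component

print(component.result)
```

This matches the reporter's observation that the modules import and run cleanly in an interactive session even when the component builder cannot collect them.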
https://api.github.com/repos/kubeflow/pipelines/issues/8352
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8352/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8352/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8352/events
https://github.com/kubeflow/pipelines/issues/8352
1,405,646,211
I_kwDOB-71UM5TyHWD
8,352
[bug] ValueError and KeyError when multiple dsl.Condition blocks have the same name
{ "login": "atch841", "id": 15893799, "node_id": "MDQ6VXNlcjE1ODkzNzk5", "avatar_url": "https://avatars.githubusercontent.com/u/15893799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/atch841", "html_url": "https://github.com/atch841", "followers_url": "https://api.github.com/users/atch841/followers", "following_url": "https://api.github.com/users/atch841/following{/other_user}", "gists_url": "https://api.github.com/users/atch841/gists{/gist_id}", "starred_url": "https://api.github.com/users/atch841/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atch841/subscriptions", "organizations_url": "https://api.github.com/users/atch841/orgs", "repos_url": "https://api.github.com/users/atch841/repos", "events_url": "https://api.github.com/users/atch841/events{/privacy}", "received_events_url": "https://api.github.com/users/atch841/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-10-12T06:30:25"
"2022-10-13T22:52:22"
null
NONE
null
### Environment * All dependencies version: kfp 1.8.11 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ### Steps to reproduce add the same name to dsl.Condition in sample code https://github.com/kubeflow/pipelines/blob/master/samples/tutorials/DSL%20-%20Control%20structures/DSL%20-%20Control%20structures.py ### Expected result successfully compiled ### Materials and Reference sample code: ```python #!/usr/bin/env python3 # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # %% [markdown] # # DSL control structures tutorial # Shows how to use conditional execution and exit handlers. 
# %% from typing import NamedTuple import kfp from kfp import dsl from kfp.components import func_to_container_op, InputPath, OutputPath # %% [markdown] # ## Conditional execution # You can use the `with dsl.Condition(task1.outputs["output_name"] = "value"):` context to execute parts of the pipeline conditionally # %% @func_to_container_op def get_random_int_op(minimum: int, maximum: int) -> int: """Generate a random number between minimum and maximum (inclusive).""" import random result = random.randint(minimum, maximum) print(result) return result @func_to_container_op def flip_coin_op() -> str: """Flip a coin and output heads or tails randomly.""" import random result = random.choice(['heads', 'tails']) print(result) return result @func_to_container_op def print_op(message: str): """Print a message.""" print(message) @dsl.pipeline( name='Conditional execution pipeline', description='Shows how to use dsl.Condition().' ) def flipcoin_pipeline(): flip = flip_coin_op() with dsl.Condition(flip.output == 'heads'): random_num_head = get_random_int_op(0, 9) with dsl.Condition(random_num_head.output > 5): print_op('heads and %s > 5!' % random_num_head.output) with dsl.Condition(random_num_head.output <= 5): print_op('heads and %s <= 5!' % random_num_head.output) with dsl.Condition(flip.output == 'tails'): random_num_tail = get_random_int_op(10, 19) with dsl.Condition(random_num_tail.output > 15): print_op('tails and %s > 15!' % random_num_tail.output) with dsl.Condition(random_num_tail.output <= 15): print_op('tails and %s <= 15!' 
% random_num_tail.output) # Submit the pipeline for execution: #kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(flipcoin_pipeline, arguments={}) # %% [markdown] # ## Exit handlers # You can use `with dsl.ExitHandler(exit_task):` context to execute a task when the rest of the pipeline finishes (succeeds or fails) # %% @func_to_container_op def fail_op(message): """Fails.""" import sys print(message) sys.exit(1) @dsl.pipeline( name='Conditional execution pipeline with exit handler', description='Shows how to use dsl.Condition() and dsl.ExitHandler().' ) def flipcoin_exit_pipeline(): exit_task = print_op('Exit handler has worked!') with dsl.ExitHandler(exit_task): flip = flip_coin_op() with dsl.Condition(flip.output == 'heads', name='test'): random_num_head = get_random_int_op(0, 9) with dsl.Condition(random_num_head.output > 5, name='test'): print_op('heads and %s > 5!' % random_num_head.output) with dsl.Condition(random_num_head.output <= 5, name='test'): print_op('heads and %s <= 5!' % random_num_head.output) with dsl.Condition(flip.output == 'tails', name='test'): random_num_tail = get_random_int_op(10, 19) with dsl.Condition(random_num_tail.output > 15, name='test'): print_op('tails and %s > 15!' % random_num_tail.output) with dsl.Condition(random_num_tail.output <= 15, name='test'): print_op('tails and %s <= 15!' 
% random_num_tail.output) with dsl.Condition(flip.output == 'tails', name='test'): fail_op(message="Failing the run to demonstrate that exit handler still gets executed.") if __name__ == '__main__': # Compiling the pipeline kfp.compiler.Compiler().compile(flipcoin_exit_pipeline, __file__ + '.yaml') ``` Error message: ``` Traceback (most recent call last): File "/Users/tobychen/Desktop/kipeline/kfp_generator/report_bug.py", line 123, in <module> kfp.compiler.Compiler().compile(flipcoin_exit_pipeline, __file__ + '.yaml') File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 1180, in compile self._create_and_write_workflow( File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 1232, in _create_and_write_workflow workflow = self._create_workflow(pipeline_func, pipeline_name, File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 1063, in _create_workflow workflow = self._create_pipeline_workflow( File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 791, in _create_pipeline_workflow templates = self._create_dag_templates(pipeline, op_transformers) File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 726, in _create_dag_templates inputs, outputs = self._get_inputs_outputs( File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 295, in _get_inputs_outputs self._get_uncommon_ancestors(op_groups, opsgroup_groups, upstream_op, op) File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 190, in _get_uncommon_ancestors raise ValueError(op2.name + ' does not exist.') ValueError: print-op-2 does not exist. 
``` In some other scenarios using the same name on multiple condition ops give KeyError: ``` Traceback (most recent call last): File "/Users/tobychen/Desktop/kipeline/kfp_generator/algorithm_readiness/easy_generator.py", line 656, in <module> kfp.compiler.Compiler().compile( File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 1180, in compile self._create_and_write_workflow( File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 1232, in _create_and_write_workflow workflow = self._create_workflow(pipeline_func, pipeline_name, File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 1063, in _create_workflow workflow = self._create_pipeline_workflow( File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 791, in _create_pipeline_workflow templates = self._create_dag_templates(pipeline, op_transformers) File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 726, in _create_dag_templates inputs, outputs = self._get_inputs_outputs( File "/Users/tobychen/opt/anaconda3/lib/python3.9/site-packages/kfp/compiler/compiler.py", line 316, in _get_inputs_outputs for group_name in op_groups[op.name][::-1]: KeyError: 'create-pv-3' ``` --- Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8352/timeline
null
null
null
null
false
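A practical workaround for the compiler errors reported above is to never reuse a `name=` across `dsl.Condition` blocks. A small, KFP-independent helper that a caller might use to deduplicate display names before passing them to `dsl.Condition(..., name=...)` is sketched below; the helper and its `-2`, `-3` suffix scheme are illustrative, not part of the KFP API.

```python
from collections import defaultdict


class NameDeduper:
    """Hand out unique names by suffixing repeats with -2, -3, ..."""

    def __init__(self):
        self._seen = defaultdict(int)

    def unique(self, name: str) -> str:
        # First request returns the name unchanged; later ones get a suffix.
        self._seen[name] += 1
        n = self._seen[name]
        return name if n == 1 else f"{name}-{n}"


deduper = NameDeduper()
names = [deduper.unique("test") for _ in range(3)]
print(names)
```

Passing `deduper.unique("test")` at each `dsl.Condition` call site would give the compiler the distinct group names it assumes, sidestepping the `ValueError`/`KeyError` until the duplicate-name handling is fixed upstream.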
https://api.github.com/repos/kubeflow/pipelines/issues/8350
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8350/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8350/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8350/events
https://github.com/kubeflow/pipelines/issues/8350
1,404,994,998
I_kwDOB-71UM5TvoW2
8,350
Pipeline running on GKE: is there any way to access a GPU from the local machine?
{ "login": "MLHafizur", "id": 45520794, "node_id": "MDQ6VXNlcjQ1NTIwNzk0", "avatar_url": "https://avatars.githubusercontent.com/u/45520794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MLHafizur", "html_url": "https://github.com/MLHafizur", "followers_url": "https://api.github.com/users/MLHafizur/followers", "following_url": "https://api.github.com/users/MLHafizur/following{/other_user}", "gists_url": "https://api.github.com/users/MLHafizur/gists{/gist_id}", "starred_url": "https://api.github.com/users/MLHafizur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MLHafizur/subscriptions", "organizations_url": "https://api.github.com/users/MLHafizur/orgs", "repos_url": "https://api.github.com/users/MLHafizur/repos", "events_url": "https://api.github.com/users/MLHafizur/events{/privacy}", "received_events_url": "https://api.github.com/users/MLHafizur/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi @MLHafizur!\r\nCould you please clarify what you mean by \"local machine\" and what purposes GPU access would serve?" ]
"2022-10-11T17:31:43"
"2022-10-13T22:48:14"
null
NONE
null
Pipeline running on GKE: is there any way to access a GPU from the local machine?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8350/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8348
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8348/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8348/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8348/events
https://github.com/kubeflow/pipelines/issues/8348
1,404,352,292
I_kwDOB-71UM5TtLck
8,348
[bug] visualization keeps showing nothing?
{ "login": "TranThanh96", "id": 26323599, "node_id": "MDQ6VXNlcjI2MzIzNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/26323599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TranThanh96", "html_url": "https://github.com/TranThanh96", "followers_url": "https://api.github.com/users/TranThanh96/followers", "following_url": "https://api.github.com/users/TranThanh96/following{/other_user}", "gists_url": "https://api.github.com/users/TranThanh96/gists{/gist_id}", "starred_url": "https://api.github.com/users/TranThanh96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TranThanh96/subscriptions", "organizations_url": "https://api.github.com/users/TranThanh96/orgs", "repos_url": "https://api.github.com/users/TranThanh96/repos", "events_url": "https://api.github.com/users/TranThanh96/events{/privacy}", "received_events_url": "https://api.github.com/users/TranThanh96/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "I have this problem, too.", "@TranThanh96 @Kokkini Please take a look at the workaround on this issue: https://github.com/awslabs/kubeflow-manifests/issues/117", "> @TranThanh96 @Kokkini Please take a look at the workaround on this issue: [awslabs/kubeflow-manifests#117](https://github.com/awslabs/kubeflow-manifests/issues/117)\r\n\r\nDone, thank you" ]
"2022-10-11T09:52:49"
"2022-10-13T05:11:48"
"2022-10-13T05:11:48"
NONE
null
### Environment Kubeflow 1.6 on AWS ### Steps to reproduce my code: ``` import argparse import json from pathlib import Path from collections import namedtuple def main(args): metadata = { 'outputs' : [{ 'type': 'web-app', 'storage': 'inline', 'source': '<h1>Hello, World!</h1>', }] } print('>> meta data: ', metadata) output = namedtuple('HelloWorldOutput', ['echo', 'mlpipeline_ui_metadata']) # return output(name + ': Hello, Wolrd!', json.dumps(metadata)) Path(args.mlpipeline_ui_metadata_path).parent.mkdir(parents=True, exist_ok=True) with open(args.mlpipeline_ui_metadata_path, 'w') as metadata_file: json.dump(metadata, metadata_file) return if __name__ == '__main__': parser = argparse.ArgumentParser(description='Argparse for com 1') parser.add_argument('--mlpipeline_ui_metadata_path', type=str, help='mlpipeline_ui_metadata_path, log output ') args = parser.parse_args() main(args) ``` my component: ``` name: download data description: download data from s3 bucket metadata: annotations: author: ThanhTM outputs: - {name: mlpipeline_ui_metadata_path, type: String, description: 'mlpipeline_ui_metadata_path visualize'} implementation: container: image: 80************03.dkr.ecr.us-east-1.amazonaws.com/cnndha@sha256:dc54c86e43ac49d3a543c3c0df725c715abfa4e06c928d449537c3454eb25de9 command: [python3, /pipelines/component/src/main.py] args: [ --mlpipeline_ui_metadata_path, {outputPath: mlpipeline_ui_metadata_path} ] ``` my code create pipeline: ``` visual_component = kfp.components.load_component_from_file('component.yaml') @kfp.dsl.pipeline( name="demo pipeline", description="hi" ) def my_pipeline( ): download_data_step = visual_component() arguments = {} client.create_run_from_pipeline_func(my_pipeline, arguments=arguments) ``` and my result: ![image](https://user-images.githubusercontent.com/26323599/195058505-beaada91-99e6-4b1c-a60c-dd63bcba94f3.png)
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8348/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8348/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8344
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8344/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8344/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8344/events
https://github.com/kubeflow/pipelines/issues/8344
1,402,996,790
I_kwDOB-71UM5ToAg2
8,344
[ScheduledWorkflow] Recurring Runs still scheduled disabled ScheduledWorkflow
{ "login": "hellobiek", "id": 2854520, "node_id": "MDQ6VXNlcjI4NTQ1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2854520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hellobiek", "html_url": "https://github.com/hellobiek", "followers_url": "https://api.github.com/users/hellobiek/followers", "following_url": "https://api.github.com/users/hellobiek/following{/other_user}", "gists_url": "https://api.github.com/users/hellobiek/gists{/gist_id}", "starred_url": "https://api.github.com/users/hellobiek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hellobiek/subscriptions", "organizations_url": "https://api.github.com/users/hellobiek/orgs", "repos_url": "https://api.github.com/users/hellobiek/repos", "events_url": "https://api.github.com/users/hellobiek/events{/privacy}", "received_events_url": "https://api.github.com/users/hellobiek/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[]
"2022-10-10T11:09:47"
"2022-10-10T11:09:47"
null
NONE
null
### Environment I deploy a kubeflow 1.5 system, and when I using "Recurring Runs" for my etl work in kubeflow. and "Recurring Runs" runs in the following timezone "0 30 1 ? * 1-5" I disabled a "Recurring Runs" 7 days ago. when I enabled the "Recurring Runs" today. I suppose in the passed 7 days, the "Recurring Runs" should not be rescheduled. the system also run the disabled timing "Recurring Runs" * How do you deploy Kubeflow Pipelines (KFP)? kubeflow 1.5 cache-deployer:1.8.1 metadata-envoy:1.8.1 metadata-writer:1.8.1 api-server:1.8.1 frontend:1.8.1 persistenceagent:1.8.1 scheduledworkflow:1.8.1 viewer-crd-controller:1.8.1 visualization-server:1.8.1 ml_metadata_store_server:1.5.0 argoexec:v3.2.3-license-compliance workflow-controller:v3.2.3-license-compliance * KFP version: 1.8.1 * KFP SDK version: 1.8.12 ### Steps to reproduce set a "Recurring Runs", set to run it in cron format for every minute. then disable it. after a few minutes later, enable it. then you will find some runs in the disabled time will still run. ### Expected result some runs in the disabled time not run. ### Materials and reference createdAt: "2022-10-10T05:16:41Z" finishedAt: "2022-10-10T05:38:03Z" index: 4 name: cplf2m4p5l-4-1993319229 namespace: xisc01 phase: Succeeded **scheduledAt: "2022-10-04T01:30:00Z"** **startedAt: "2022-10-10T05:16:41Z"** uid: 92f4bf3c-fd75-40a7-be7b-b627a704d650 ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8344/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8339
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8339/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8339/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8339/events
https://github.com/kubeflow/pipelines/issues/8339
1,400,456,760
I_kwDOB-71UM5TeUY4
8,339
[sdk] launcher component is not up to date
{ "login": "streamnsight", "id": 10981776, "node_id": "MDQ6VXNlcjEwOTgxNzc2", "avatar_url": "https://avatars.githubusercontent.com/u/10981776?v=4", "gravatar_id": "", "url": "https://api.github.com/users/streamnsight", "html_url": "https://github.com/streamnsight", "followers_url": "https://api.github.com/users/streamnsight/followers", "following_url": "https://api.github.com/users/streamnsight/following{/other_user}", "gists_url": "https://api.github.com/users/streamnsight/gists{/gist_id}", "starred_url": "https://api.github.com/users/streamnsight/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/streamnsight/subscriptions", "organizations_url": "https://api.github.com/users/streamnsight/orgs", "repos_url": "https://api.github.com/users/streamnsight/repos", "events_url": "https://api.github.com/users/streamnsight/events{/privacy}", "received_events_url": "https://api.github.com/users/streamnsight/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-10-07T00:22:06"
"2022-10-13T22:43:08"
null
NONE
null
There has been PRs to fix the delete function in launch_tfjob but the image has not been updated in the component This PR https://github.com/kubeflow/pipelines/issues/7984 fixes the launch_crd.py, but the component here: https://github.com/kubeflow/pipelines/blob/master/components/kubeflow/launcher/component.yaml references the image `nikenano/launchernew:latest` This image is over 2 years old and still has the bug: https://hub.docker.com/r/nikenano/launchernew/tags From the repo, the build_image.sh script reference the local image as `ml-pipeline-kubeflow-tfjob`, and the corresponding image `gcr.io/ml-pipeline/ml-pipeline-kubeflow-tfjob:1.8.5` does not work either; it does not have the code packaged installed under `/ml` properly, it has `/ml/src/launch_tfjob.py` and `/ml/common/launch_crd.py` When running it I get: ``` docker run -it gcr.io/ml-pipeline/ml-pipeline-kubeflow-tfjob:1.8.5 python: can't open file '/ml/launch_tfjob.py': [Errno 2] No such file or directory ``` running the launch_tfjob.py from the /ml/src/ dir does not work either because the launch_crd is not in the right folder structure. ```bash docker run -it --entrypoint /usr/local/bin/python gcr.io/ml-pipeline/ml-pipeline-kubeflow-tfjob:1.8.5 /ml/src/launch_tfjob.py Traceback (most recent call last): File "/ml/src/launch_tfjob.py", line 22, in <module> import launch_crd ModuleNotFoundError: No module named 'launch_crd' ``` The image needs to be rebuilt and pushed, and the component.yaml needs to be updated with the latest image. Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8339/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8339/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8338
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8338/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8338/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8338/events
https://github.com/kubeflow/pipelines/issues/8338
1,400,252,927
I_kwDOB-71UM5Tdin_
8,338
[backend] Silent failure for Rest API connection issue
{ "login": "jimbudarz", "id": 692898, "node_id": "MDQ6VXNlcjY5Mjg5OA==", "avatar_url": "https://avatars.githubusercontent.com/u/692898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jimbudarz", "html_url": "https://github.com/jimbudarz", "followers_url": "https://api.github.com/users/jimbudarz/followers", "following_url": "https://api.github.com/users/jimbudarz/following{/other_user}", "gists_url": "https://api.github.com/users/jimbudarz/gists{/gist_id}", "starred_url": "https://api.github.com/users/jimbudarz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jimbudarz/subscriptions", "organizations_url": "https://api.github.com/users/jimbudarz/orgs", "repos_url": "https://api.github.com/users/jimbudarz/repos", "events_url": "https://api.github.com/users/jimbudarz/events{/privacy}", "received_events_url": "https://api.github.com/users/jimbudarz/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[]
"2022-10-06T20:20:53"
"2022-10-06T22:48:14"
null
NONE
null
This is not a bug _per se_, but falls into the category of improvements. The current setup of the [Rest API](https://github.com/kubeflow/pipelines/blob/master/backend/api/python_http_client/kfp_server_api/rest.py)'s RestClientObject has a method called `request` that does not return an error in the case of a connection error. The only instance where it handles the error is SSLError. However, there are a number of additional connection errors that should throw errors instead of providing empty results (which can be misinterpreted by the end user.) Additional errors which can be handled are listed here: https://urllib3.readthedocs.io/en/stable/reference/urllib3.exceptions.html#urllib3.exceptions.HTTPError This became particularly evident when I tried to follow [these instructions](https://www.kubeflow.org/docs/components/pipelines/v1/sdk/connect-api/#standalone-kubeflow-pipelines-subfrom-outside-clustersub) to access Kubeflow Pipelines from outside my AWS cluster. In that case, running the following failed to return an error, even though it did not receive a result via GET from the server: ``` client = kfp.Client(host=f"{KUBEFLOW_ENDPOINT}/pipeline", cookies=auth_session["session_cookie"]) print(client.list_experiments()) ``` ### Environment * How did you deploy Kubeflow Pipelines (KFP)? N/A * KFP version: 1.3, but current master branch contains this issue * KFP SDK version: N/A ### Steps to reproduce From off-server on AWS: ``` import kfp KUBEFLOW_ENDPOINT = "http://localhost:8080" KUBEFLOW_USERNAME = "user@example.com" KUBEFLOW_PASSWORD = "12341234" auth_session = get_istio_auth_session( url=KUBEFLOW_ENDPOINT, username=KUBEFLOW_USERNAME, password=KUBEFLOW_PASSWORD ) client = kfp.Client(host=f"{KUBEFLOW_ENDPOINT}/pipeline", cookies=auth_session["session_cookie"]) print(client.list_experiments()) ``` ### Expected result Either a list of experiments on the server, or an error message stating the reason it failed to connect. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8338/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8332
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8332/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8332/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8332/events
https://github.com/kubeflow/pipelines/issues/8332
1,398,070,538
I_kwDOB-71UM5TVN0K
8,332
ml-pipeline unable to connect to mysql in kubeflow 1.6
{ "login": "psheorangithub", "id": 56983288, "node_id": "MDQ6VXNlcjU2OTgzMjg4", "avatar_url": "https://avatars.githubusercontent.com/u/56983288?v=4", "gravatar_id": "", "url": "https://api.github.com/users/psheorangithub", "html_url": "https://github.com/psheorangithub", "followers_url": "https://api.github.com/users/psheorangithub/followers", "following_url": "https://api.github.com/users/psheorangithub/following{/other_user}", "gists_url": "https://api.github.com/users/psheorangithub/gists{/gist_id}", "starred_url": "https://api.github.com/users/psheorangithub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/psheorangithub/subscriptions", "organizations_url": "https://api.github.com/users/psheorangithub/orgs", "repos_url": "https://api.github.com/users/psheorangithub/repos", "events_url": "https://api.github.com/users/psheorangithub/events{/privacy}", "received_events_url": "https://api.github.com/users/psheorangithub/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false }
[ { "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @psheorangithub!\r\n\r\nCould you please clarify where and how did you deploy Kubeflow 1.6? I see some placeholders in the yaml manifests. Were they replaced before applying?", "@psheorangithub Did you resolve the issue? it seems I have the same ", "@gkcalat I have the same issue, when I change minio to use gsc instead pvc. But I don't understand how it related to mysql connections from ml-pipeline ", "@gkcalat logs from ml-pipeline\r\nI0201 13:28:57.675273 7 client_manager.go:160] Initializing client manager\r\n10\r\nI0201 13:28:57.675379 7 config.go:57] Config DBConfig.ExtraParams not specified, skipping\r\n9\r\n[mysql] 2023/02/01 13:28:57 packets.go:37: unexpected EOF\r\n8\r\n[mysql] 2023/02/01 13:28:58 packets.go:37: unexpected EOF\r\n7\r\n[mysql] 2023/02/01 13:28:58 packets.go:37: unexpected EOF\r\n6\r\n[mysql] 2023/02/01 13:29:00 packets.go:37: unexpected EOF\r\n5\r\n[mysql] 2023/02/01 13:29:02 packets.go:37: unexpected EOF\r\n4\r\n[mysql] 2023/02/01 13:29:05 packets.go:37: unexpected EOF\r\n3\r\n[mysql] 2023/02/01 13:29:07 packets.go:37: unexpected EOF\r\n2\r\n[mysql] 2023/02/01 13:29:12 packets.go:37: unexpected EOF\r\n1\r\n[mysql] 2023/02/01 13:29:16 packets.go:37: unexpected EOF", "logs from mysql\r\n2023-02-01T09:24:18.770139Z 278 [Note] Aborted connection 278 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n299\r\n2023-02-01T11:10:37.355390Z 711 [Note] Aborted connection 711 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n298\r\n2023-02-01T11:10:38.763233Z 713 [Note] Aborted connection 713 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n297\r\n2023-02-01T11:10:57.104936Z 717 [Note] Aborted connection 717 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n296\r\n2023-02-01T11:11:26.189905Z 721 [Note] Aborted connection 721 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n295\r\n2023-02-01T11:12:09.084285Z 725 [Note] Aborted connection 725 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n294\r\n2023-02-01T11:13:33.124578Z 733 [Note] Aborted connection 733 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n293\r\n2023-02-01T11:16:18.158538Z 747 [Note] Aborted connection 747 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n292\r\n2023-02-01T11:21:21.536725Z 769 [Note] Aborted connection 769 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n291\r\n2023-02-01T11:26:28.264881Z 791 [Note] Aborted connection 791 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets)\r\n290\r\n2023-02-01T11:31:36.117026Z 813 [Note] Aborted connection 813 to db: 'mlpipeline' user: 'ro", "Hi @asahnovskiy-deloitte,\r\n\r\nWhich version of KFP are you using? What type of deployment is it (standalone, full Kubeflow, etc.)?\r\n\r\n>@gkcalat I have the same issue, when I change minio to use gsc instead pvc. But I don't understand how it related to mysql connections from ml-pipeline\r\n\r\nHow did you change the storage? Did you deploy a new instance or used `kubectl edit` on an existing cluster? Any chances you removed or changed MySQL's PVC? MySQL expects to find local files in `/var/lib/mysql` (`mysql-pv-claim`).\r\n\r\nCan you also provide logs from minio?" ]
"2022-10-05T16:33:04"
"2023-02-04T00:54:54"
null
NONE
null
### Environment <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? Deployed pipelines as part of kubeflow 1.6 * KFP version: 2.0.0-alpha.3 To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Steps to reproduce Deploy kubeflow manifest 1.6 ### Expected result ml-pipeline pod should come up ### Materials and reference Below is ml-pipeline anc mysql deployment manifest. ml-pipeline: --------------- apiVersion: apps/v1 kind: Deployment metadata: labels: app: ml-pipeline app.kubernetes.io/component: ml-pipeline app.kubernetes.io/name: kubeflow-pipelines application-crd-id: kubeflow-pipelines {{- toYaml .Values.labels | nindent 4 }} name: ml-pipeline namespace: kubeflow annotations: {{- toYaml .Values.annotations | nindent 4 }} spec: selector: matchLabels: app: ml-pipeline app.kubernetes.io/component: ml-pipeline app.kubernetes.io/name: kubeflow-pipelines application-crd-id: kubeflow-pipelines template: metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: 'true' {{- toYaml .Values.annotations | nindent 8 }} labels: app: ml-pipeline app.kubernetes.io/component: ml-pipeline app.kubernetes.io/name: kubeflow-pipelines application-crd-id: kubeflow-pipelines {{- toYaml .Values.labels | nindent 8 }} spec: containers: - env: - name: KUBEFLOW_USERID_HEADER value: kubeflow-userid - name: KUBEFLOW_USERID_PREFIX value: '' - name: AUTO_UPDATE_PIPELINE_DEFAULT_VERSION valueFrom: configMapKeyRef: key: autoUpdatePipelineDefaultVersion name: pipeline-install-config - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: OBJECTSTORECONFIG_SECURE value: "true" - name: MINIO_SERVICE_SERVICE_HOST value: '{{ .Values.s3.host }}' - name: MINIO_SERVICE_SERVICE_PORT value: '{{ .Values.s3.port | quote }}' - name: OBJECTSTORECONFIG_BUCKETNAME valueFrom: configMapKeyRef: key: bucketName name: pipeline-install-config - name: DBCONFIG_USER valueFrom: secretKeyRef: key: username name: mysql-secret - name: DBCONFIG_PASSWORD valueFrom: secretKeyRef: key: password name: mysql-secret - name: DBCONFIG_DBNAME valueFrom: configMapKeyRef: key: pipelineDb name: pipeline-install-config - name: DBCONFIG_HOST valueFrom: configMapKeyRef: key: dbHost name: pipeline-install-config - name: DBCONFIG_PORT valueFrom: configMapKeyRef: key: dbPort name: pipeline-install-config - name: DBCONFIG_CONMAXLIFETIME valueFrom: configMapKeyRef: key: ConMaxLifeTime name: pipeline-install-config - name: OBJECTSTORECONFIG_ACCESSKEY valueFrom: secretKeyRef: key: accesskey name: mlpipeline-ceph-s3-artifact - name: OBJECTSTORECONFIG_SECRETACCESSKEY valueFrom: secretKeyRef: key: secretkey name: mlpipeline-ceph-s3-artifact envFrom: - configMapRef: name: pipeline-api-server-config-f4t72426kt image: gcr.io/ml-pipeline/api-server:2.0.0-alpha.3 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - wget - -q - -S - -O - '-' - http://localhost:8888/apis/v1beta1/healthz initialDelaySeconds: 3 periodSeconds: 5 timeoutSeconds: 2 name: ml-pipeline-api-server ports: - containerPort: 8888 name: http - containerPort: 8887 name: grpc readinessProbe: exec: command: - wget - -q - -S - -O - '-' - http://localhost:8888/apis/v1beta1/healthz initialDelaySeconds: 3 periodSeconds: 5 timeoutSeconds: 2 resources: requests: cpu: 250m memory: 500Mi startupProbe: exec: command: - wget - -q - -S - -O - '-' - http://localhost:8888/apis/v1beta1/healthz failureThreshold: 12 periodSeconds: 5 timeoutSeconds: 2 volumeMounts: - name: ca-bundles mountPath: /etc/ssl/certs/ca-certificates.crt subPath: ca-certificates.crt serviceAccountName: ml-pipeline volumes: - name: ca-bundles configMap: name: ca-bundles defaultMode: 420 mysql: -------------- apiVersion: apps/v1 kind: Deployment metadata: labels: app: mysql application-crd-id: kubeflow-pipelines name: mysql namespace: kubeflow spec: selector: matchLabels: app: mysql application-crd-id: kubeflow-pipelines strategy: type: Recreate template: metadata: labels: app: mysql application-crd-id: kubeflow-pipelines spec: containers: - args: - --ignore-db-dir=lost+found - --datadir - /var/lib/mysql env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: key: password name: mysql-secret image: gcr.io/ml-pipeline/mysql:5.7-debian name: mysql ports: - containerPort: 3306 name: mysql resources: requests: cpu: 100m memory: 800Mi volumeMounts: - mountPath: /var/lib/mysql name: mysql-persistent-storage serviceAccountName: mysql volumes: - name: mysql-persistent-storage persistentVolumeClaim: claimName: mysql-pv-claim ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> No errors in ml-pipeline logs ----------- ml-pipeline-5fc565dfc4-2zljp 1/2 Running 55 3h27m bash-3.2$ k logs ml-pipeline-5fc565dfc4-2zljp -n kubeflow I1005 16:29:47.455100 8 client_manager.go:160] Initializing client manager I1005 16:29:47.455255 8 config.go:57] Config DBConfig.ExtraParams not specified, skipping bash-3.2$ k logs ml-pipeline-5fc565dfc4-2zljp -n kubeflow -p I1005 16:28:17.581634 7 client_manager.go:160] Initializing client manager I1005 16:28:17.581844 7 config.go:57] Config DBConfig.ExtraParams not specified, skipping bash-3.2$ mysql shows below errors: ----------------------- 2022-10-05T12:53:21.615125Z 0 [Note] - '::' resolves to '::'; 2022-10-05T12:53:21.615171Z 0 [Note] Server socket created on IP: '::'. 2022-10-05T12:53:21.617003Z 0 [Warning] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. 2022-10-05T12:53:21.635095Z 0 [Note] Event Scheduler: Loaded 0 events 2022-10-05T12:53:21.635645Z 0 [Note] mysqld: ready for connections. Version: '5.7.38' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL) 2022-10-05T12:56:07.554978Z 7 [Note] Aborted connection 7 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) 2022-10-05T12:57:37.555598Z 9 [Note] Aborted connection 9 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) 2022-10-05T12:59:07.557053Z 11 [Note] Aborted connection 11 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) 2022-10-05T13:00:37.551263Z 13 [Note] Aborted connection 13 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) 2022-10-05T13:02:07.552976Z 15 [Note] Aborted connection 15 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) 2022-10-05T13:03:37.547383Z 17 [Note] Aborted connection 17 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) 2022-10-05T13:07:57.543144Z 19 [Note] Aborted connection 19 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) 2022-10-05T13:14:37.538061Z 21 [Note] Aborted connection 21 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) 2022-10-05T13:16:07.554355Z 23 [Note] Aborted connection 23 to db: 'mlpipeline' user: 'root' host: '127.0.0.6' (Got an error reading communication packets) <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8332/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8329
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8329/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8329/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8329/events
https://github.com/kubeflow/pipelines/issues/8329
1,394,868,430
I_kwDOB-71UM5TJADO
8,329
[backend] Data downloaded in one pipeline step is inaccessible in the next step.
{ "login": "the5solae", "id": 61509694, "node_id": "MDQ6VXNlcjYxNTA5Njk0", "avatar_url": "https://avatars.githubusercontent.com/u/61509694?v=4", "gravatar_id": "", "url": "https://api.github.com/users/the5solae", "html_url": "https://github.com/the5solae", "followers_url": "https://api.github.com/users/the5solae/followers", "following_url": "https://api.github.com/users/the5solae/following{/other_user}", "gists_url": "https://api.github.com/users/the5solae/gists{/gist_id}", "starred_url": "https://api.github.com/users/the5solae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/the5solae/subscriptions", "organizations_url": "https://api.github.com/users/the5solae/orgs", "repos_url": "https://api.github.com/users/the5solae/repos", "events_url": "https://api.github.com/users/the5solae/events{/privacy}", "received_events_url": "https://api.github.com/users/the5solae/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2022-10-03T14:48:12"
"2022-10-05T09:00:07"
"2022-10-05T09:00:07"
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? **Full Kubeflow Deployment on OCI** <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: **1.6** <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> kfp==1.6.3 kfp-pipeline-spec==0.1.16 kfp-server-api==1.6.0 ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> - Create a Jupyter notebook without a volume attached to it. Create a pipeline using `components.func_to_container_op` with the following specifications: - Pipeline first step: download, process, and store data. The data path is passed as an output of the first step to the next pipeline step using _filePath_ - Pipeline second step: an error is thrown because the files referenced from the first step are not found ### Expected result - Expected the second step to be able to fetch the data, as it has been downloaded and processed. <!-- What should the correct behavior be? --> ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> <img width="1174" alt="Screen Shot 2022-10-03 at 6 47 34 PM" src="https://user-images.githubusercontent.com/61509694/193607018-dc21dc22-cbe1-4c7e-9601-fb79f0760ca7.png"> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
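The reproduction above hinges on how files move between steps: each KFP step runs in its own container, so a local path written in one step is not visible to the next unless it is declared as an output artifact. A minimal local sketch of the `OutputPath`/`InputPath` handoff pattern — the component names are hypothetical, and the handoff is simulated with a temp directory rather than a real KFP run (in real components the parameters would be annotated `OutputPath(str)` / `InputPath(str)` and wrapped with `func_to_container_op`):

```python
import os
import tempfile

def download_data(output_path: str):
    # In KFP this parameter would be OutputPath(str): KFP supplies the path,
    # and the component writes its result there so it survives the container.
    with open(output_path, "w") as f:
        f.write("processed data")

def consume_data(data_path: str) -> str:
    # In KFP this parameter would be InputPath(str): KFP materializes the
    # upstream artifact at this path inside the second container.
    with open(data_path, "r") as f:
        return f.read()

# Simulate what KFP does between steps: the output file of step 1 becomes
# the input file of step 2 (KFP copies it through the artifact store).
with tempfile.TemporaryDirectory() as d:
    artifact = os.path.join(d, "data.txt")
    download_data(artifact)
    print(consume_data(artifact))  # -> processed data
```

Plain local paths returned as strings do not get this treatment, which is consistent with the "file not found" behavior described in the issue.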
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8329/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8328
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8328/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8328/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8328/events
https://github.com/kubeflow/pipelines/issues/8328
1,394,529,001
I_kwDOB-71UM5THtLp
8,328
[bug] Component doesn't receive some inputs if all of them come from the same output of another component
{ "login": "vstrimaitis", "id": 14166032, "node_id": "MDQ6VXNlcjE0MTY2MDMy", "avatar_url": "https://avatars.githubusercontent.com/u/14166032?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vstrimaitis", "html_url": "https://github.com/vstrimaitis", "followers_url": "https://api.github.com/users/vstrimaitis/followers", "following_url": "https://api.github.com/users/vstrimaitis/following{/other_user}", "gists_url": "https://api.github.com/users/vstrimaitis/gists{/gist_id}", "starred_url": "https://api.github.com/users/vstrimaitis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vstrimaitis/subscriptions", "organizations_url": "https://api.github.com/users/vstrimaitis/orgs", "repos_url": "https://api.github.com/users/vstrimaitis/repos", "events_url": "https://api.github.com/users/vstrimaitis/events{/privacy}", "received_events_url": "https://api.github.com/users/vstrimaitis/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "@vstrimaitis, can you please share the YAML that is generated?", "The compiler doesn't allow me to generate a YAML file, but here's the JSON output for the very first example:\r\n\r\n```json\r\n{\r\n \"pipelineSpec\": {\r\n \"components\": {\r\n \"comp-read-data\": {\r\n \"executorLabel\": \"exec-read-data\",\r\n \"inputDefinitions\": {\r\n \"artifacts\": {\r\n \"data1\": {\r\n \"artifactType\": {\r\n \"schemaTitle\": \"system.Artifact\",\r\n \"schemaVersion\": \"0.0.1\"\r\n }\r\n },\r\n \"data2\": {\r\n \"artifactType\": {\r\n \"schemaTitle\": \"system.Artifact\",\r\n \"schemaVersion\": \"0.0.1\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"comp-write-data\": {\r\n \"executorLabel\": \"exec-write-data\",\r\n \"outputDefinitions\": {\r\n \"artifacts\": {\r\n \"destination\": {\r\n \"artifactType\": {\r\n \"schemaTitle\": \"system.Artifact\",\r\n \"schemaVersion\": \"0.0.1\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"deploymentSpec\": {\r\n \"executors\": {\r\n \"exec-read-data\": {\r\n \"container\": {\r\n \"args\": [\r\n \"--executor_input\",\r\n \"{{$}}\",\r\n \"--function_to_execute\",\r\n \"read_data\"\r\n ],\r\n \"command\": [\r\n \"sh\",\r\n \"-c\",\r\n \"\\nif ! 
[ -x \\\"$(command -v pip)\\\" ]; then\\n python3 -m ensurepip || python3 -m ensurepip --user || apt-get install python3-pip\\nfi\\n\\nPIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install --quiet --no-warn-script-location 'kfp==1.8.12' && \\\"$0\\\" \\\"$@\\\"\\n\",\r\n \"sh\",\r\n \"-ec\",\r\n \"program_path=$(mktemp -d)\\nprintf \\\"%s\\\" \\\"$0\\\" > \\\"$program_path/ephemeral_component.py\\\"\\npython3 -m kfp.v2.components.executor_main --component_module_path \\\"$program_path/ephemeral_component.py\\\" \\\"$@\\\"\\n\",\r\n \"\\nimport kfp\\nfrom kfp.v2 import dsl\\nfrom kfp.v2.dsl import *\\nfrom typing import *\\n\\ndef read_data(data1: Input[Artifact], data2: Input[Artifact]):\\n assert data1.path is not None\\n assert data2.path is not None\\n with open(data1.path, \\\"r\\\") as f:\\n print(f\\\"data1:\\\\n{f.read()}\\\")\\n with open(data2.path, \\\"r\\\") as f:\\n print(f\\\"data2:\\\\n{f.read()}\\\")\\n\\n\"\r\n ],\r\n \"image\": \"python:3.7\"\r\n }\r\n },\r\n \"exec-write-data\": {\r\n \"container\": {\r\n \"args\": [\r\n \"--executor_input\",\r\n \"{{$}}\",\r\n \"--function_to_execute\",\r\n \"write_data\"\r\n ],\r\n \"command\": [\r\n \"sh\",\r\n \"-c\",\r\n \"\\nif ! 
[ -x \\\"$(command -v pip)\\\" ]; then\\n python3 -m ensurepip || python3 -m ensurepip --user || apt-get install python3-pip\\nfi\\n\\nPIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install --quiet --no-warn-script-location 'kfp==1.8.12' && \\\"$0\\\" \\\"$@\\\"\\n\",\r\n \"sh\",\r\n \"-ec\",\r\n \"program_path=$(mktemp -d)\\nprintf \\\"%s\\\" \\\"$0\\\" > \\\"$program_path/ephemeral_component.py\\\"\\npython3 -m kfp.v2.components.executor_main --component_module_path \\\"$program_path/ephemeral_component.py\\\" \\\"$@\\\"\\n\",\r\n \"\\nimport kfp\\nfrom kfp.v2 import dsl\\nfrom kfp.v2.dsl import *\\nfrom typing import *\\n\\ndef write_data(destination: Output[Artifact]):\\n assert destination.path is not None\\n with open(destination.path, \\\"w\\\") as f:\\n f.write(\\\"test\\\")\\n\\n\"\r\n ],\r\n \"image\": \"python:3.7\"\r\n }\r\n }\r\n }\r\n },\r\n \"pipelineInfo\": {\r\n \"name\": \"pipeline-bug-test\"\r\n },\r\n \"root\": {\r\n \"dag\": {\r\n \"tasks\": {\r\n \"read-data\": {\r\n \"cachingOptions\": {\r\n \"enableCache\": true\r\n },\r\n \"componentRef\": {\r\n \"name\": \"comp-read-data\"\r\n },\r\n \"dependentTasks\": [\r\n \"write-data\"\r\n ],\r\n \"inputs\": {\r\n \"artifacts\": {\r\n \"data1\": {\r\n \"taskOutputArtifact\": {\r\n \"outputArtifactKey\": \"destination\",\r\n \"producerTask\": \"write-data\"\r\n }\r\n },\r\n \"data2\": {\r\n \"taskOutputArtifact\": {\r\n \"outputArtifactKey\": \"destination\",\r\n \"producerTask\": \"write-data\"\r\n }\r\n }\r\n }\r\n },\r\n \"taskInfo\": {\r\n \"name\": \"read-data\"\r\n }\r\n },\r\n \"write-data\": {\r\n \"cachingOptions\": {\r\n \"enableCache\": true\r\n },\r\n \"componentRef\": {\r\n \"name\": \"comp-write-data\"\r\n },\r\n \"taskInfo\": {\r\n \"name\": \"write-data\"\r\n }\r\n }\r\n }\r\n }\r\n },\r\n \"schemaVersion\": \"2.0.0\",\r\n \"sdkVersion\": \"kfp-1.8.12\"\r\n },\r\n \"runtimeConfig\": {}\r\n}\r\n```" ]
"2022-10-03T10:52:31"
"2022-10-11T10:57:51"
null
NONE
null
### Environment * How do you deploy Kubeflow Pipelines (KFP)? Vertex AI Pipelines on GCP * KFP version: Don't know - whatever Vertex AI uses 😁 * KFP SDK version: `1.8.12` ### Steps to reproduce Define the following pipeline: ```python from kfp.v2 import dsl from kfp.v2.dsl import Output, Artifact, Input @dsl.component def write_data(destination: Output[Artifact]): assert destination.path is not None with open(destination.path, "w") as f: f.write("test") @dsl.component def read_data(data1: Input[Artifact], data2: Input[Artifact]): assert data1.path is not None assert data2.path is not None with open(data1.path, "r") as f: print(f"data1:\n{f.read()}") with open(data2.path, "r") as f: print(f"data2:\n{f.read()}") @dsl.pipeline(name="pipeline-bug-test") def my_pipeline(): writer = write_data() reader = read_data(data1=writer.output, data2=writer.output) ``` Run it on Vertex AI. It will fail because either `data1.path` or `data2.path` will be `None` in the `read_data` step. ### Expected result The pipeline should run successfully. ### Materials and reference I also tried the following pipeline with the same result (probably because caching is enabled): ```python @dsl.pipeline(name="pipeline-bug-test") def my_pipeline(): writer1 = write_data() writer2 = write_data() reader = read_data(data1=writer1.output, data2=writer2.output) ``` I also tried this approach, which worked fine: ```python @dsl.component def write_data(text: str, destination: Output[Artifact]): assert destination.path is not None with open(destination.path, "w") as f: f.write(text) # ... @dsl.pipeline(name="pipeline-bug-test") def my_pipeline(): writer1 = write_data(text="abc") writer2 = write_data(text="def") reader = read_data(data1=writer1.output, data2=writer2.output) ``` Overall it seems like if you try to pass an output from component `A` into multiple inputs of component `B`, only one of those inputs is actually being passed in. 
We first encountered this when using components defined using the YAML spec, in which case one of the placeholders is not replaced (e.g. `{{$.inputs.artifacts['Train data path'].path}}` is passed into the component for one of the inputs whereas `/gcs/my-bucket/...` is passed in for the other input) ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> I'm actually not sure where the source of the bug is, so I'm not adding any labels for now. <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8328/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8328/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8327
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8327/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8327/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8327/events
https://github.com/kubeflow/pipelines/issues/8327
1,393,732,872
I_kwDOB-71UM5TEq0I
8,327
saving pickle file
{ "login": "riyaj8888", "id": 29457825, "node_id": "MDQ6VXNlcjI5NDU3ODI1", "avatar_url": "https://avatars.githubusercontent.com/u/29457825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riyaj8888", "html_url": "https://github.com/riyaj8888", "followers_url": "https://api.github.com/users/riyaj8888/followers", "following_url": "https://api.github.com/users/riyaj8888/following{/other_user}", "gists_url": "https://api.github.com/users/riyaj8888/gists{/gist_id}", "starred_url": "https://api.github.com/users/riyaj8888/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riyaj8888/subscriptions", "organizations_url": "https://api.github.com/users/riyaj8888/orgs", "repos_url": "https://api.github.com/users/riyaj8888/repos", "events_url": "https://api.github.com/users/riyaj8888/events{/privacy}", "received_events_url": "https://api.github.com/users/riyaj8888/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "@riyaj8888 Would you mind providing the code that get exception?" ]
"2022-10-02T11:29:28"
"2022-10-06T22:39:41"
null
NONE
null
Hi, I am trying to write a pickle file of my trained sklearn model, but I am unable to do it. I am defining `output_pickle_file: OutputPath(str)` for saving it, but I am not able to save using this. So I tried using a binary file handle; this time I was able to save it, but when I then tried to load it I got the following error: `TypeError: expected str, bytes or os.PathLike object, not _io.BufferedWriter` Does anyone know why?
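The `TypeError` above suggests a file object was passed where a path string was expected: `OutputPath(str)` hands the component a plain path, which the component must open itself in binary mode. A minimal local sketch of that pattern — the model dict is a stand-in for a trained sklearn model, and the temp directory simulates the path KFP would supply to an `OutputPath(str)` parameter:

```python
import os
import pickle
import tempfile

def train_and_save(output_pickle_file: str):
    # In a real component this parameter would be OutputPath(str);
    # open the *path string* in binary mode and dump into the handle.
    model = {"coef": [1.0, 2.0]}  # stand-in for a trained sklearn model
    with open(output_pickle_file, "wb") as f:
        pickle.dump(model, f)

def load_model(input_pickle_file: str):
    # Likewise, load by opening the path — never pass an open file handle
    # where a path is expected (that produces the _io.BufferedWriter error).
    with open(input_pickle_file, "rb") as f:
        return pickle.load(f)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "model.pkl")
    train_and_save(path)
    print(load_model(path))  # -> {'coef': [1.0, 2.0]}
```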
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8327/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/8318
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/8318/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/8318/comments
https://api.github.com/repos/kubeflow/pipelines/issues/8318/events
https://github.com/kubeflow/pipelines/issues/8318
1,389,973,836
I_kwDOB-71UM5S2VFM
8,318
[backend] Duplicate volumes are found when running on ACI
{ "login": "abishek-murali", "id": 14052278, "node_id": "MDQ6VXNlcjE0MDUyMjc4", "avatar_url": "https://avatars.githubusercontent.com/u/14052278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abishek-murali", "html_url": "https://github.com/abishek-murali", "followers_url": "https://api.github.com/users/abishek-murali/followers", "following_url": "https://api.github.com/users/abishek-murali/following{/other_user}", "gists_url": "https://api.github.com/users/abishek-murali/gists{/gist_id}", "starred_url": "https://api.github.com/users/abishek-murali/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abishek-murali/subscriptions", "organizations_url": "https://api.github.com/users/abishek-murali/orgs", "repos_url": "https://api.github.com/users/abishek-murali/repos", "events_url": "https://api.github.com/users/abishek-murali/events{/privacy}", "received_events_url": "https://api.github.com/users/abishek-murali/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hi @abishek-murali, thank you for reporting the bug. Maybe you can try upgrading the SDK to the version 1.8.14 and see if the error still exist. ", "@Linchin I upgraded to 1.8.14, and the error still exists" ]
"2022-09-28T21:38:17"
"2022-09-30T21:09:08"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? [Standalone deployment](https://www.kubeflow.org/docs/components/pipelines/v1/installation/standalone-deployment/) <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.8.5 (Kubeflow 1.6.0) <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: 1.2.0 (and 1.8.14) <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Steps to reproduce - Running kubeflow job on AKS with virtual node enabled. Trying to run the containers using Azure container instances (ACI) <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> Logs ``` api call to https://management.azure.com/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.ContainerInstance/containerGroups/xxxx?api-version=2021-07-01: got HTTP response status code 400 error code "DuplicateVolumes": Duplicate volumes 'kube-api-access-vn5q4' are found. ``` ### Expected result <!-- What should the correct behavior be? --> - Run container using ACI ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. 
--> Known [limitations](https://learn.microsoft.com/en-us/azure/aks/virtual-nodes#known-limitations) about ACI --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/8318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/8318/timeline
null
null
null
null
false