Dataset schema (one line per column: name, dtype, value stats):

url: stringlengths (59-59)
repository_url: stringclasses (1 value)
labels_url: stringlengths (73-73)
comments_url: stringlengths (68-68)
events_url: stringlengths (66-66)
html_url: stringlengths (49-49)
id: int64 (782M-1.89B)
node_id: stringlengths (18-24)
number: int64 (4.97k-9.98k)
title: stringlengths (2-306)
user: dict
labels: list
state: stringclasses (2 values)
locked: bool (1 class)
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: stringclasses (4 values)
active_lock_reason: null
body: stringlengths (0-63.6k)
reactions: dict
timeline_url: stringlengths (68-68)
performed_via_github_app: null
state_reason: stringclasses (3 values)
draft: bool (0 classes)
pull_request: dict
is_pull_request: bool (1 class)
https://api.github.com/repos/kubeflow/pipelines/issues/7095
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7095/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7095/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7095/events
https://github.com/kubeflow/pipelines/issues/7095
1,085,616,611
I_kwDOB-71UM5AtTHj
7,095
test: error: deployment "ml-pipeline" exceeded its progress deadline
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-21T09:26:06"
"2022-04-17T06:27:49"
null
CONTRIBUTOR
null
All sample/e2e tests are failing with either:
* metadata-grpc deployment cannot rollout
* ml-pipeline deployment cannot rollout, because of a connection timeout to the in-cluster MySQL DB.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7095/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7093
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7093/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7093/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7093/events
https://github.com/kubeflow/pipelines/issues/7093
1,085,591,326
I_kwDOB-71UM5AtM8e
7,093
[backend] cache-deployer generate CSR with wrong usage
{ "login": "jomenxiao", "id": 4003391, "node_id": "MDQ6VXNlcjQwMDMzOTE=", "avatar_url": "https://avatars.githubusercontent.com/u/4003391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jomenxiao", "html_url": "https://github.com/jomenxiao", "followers_url": "https://api.github.com/users/jomenxiao/followers", "following_url": "https://api.github.com/users/jomenxiao/following{/other_user}", "gists_url": "https://api.github.com/users/jomenxiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/jomenxiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jomenxiao/subscriptions", "organizations_url": "https://api.github.com/users/jomenxiao/orgs", "repos_url": "https://api.github.com/users/jomenxiao/repos", "events_url": "https://api.github.com/users/jomenxiao/events{/privacy}", "received_events_url": "https://api.github.com/users/jomenxiao/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "cc @chensun ", "I also bumped into the exact same issue while testing the KF 1.5 RC0 manifests https://github.com/kubeflow/manifests/issues/2099\r\n\r\nI think this has definitely something to do with KinD, but I couldn't get to the bottom of it. For me it was:\r\n* KinD cluster with K8s 1.20.7\r\n* `1.8.0-rc.1` commit\r\n\r\nBUT, when testing this with:\r\n* EKS with K8s 1.19\r\n* KFP 1.7.0\r\n\r\nThen the CertificateSigningRequest would get into `Approved` state, but the cache-deployer would still complain that a certificate would not appear.\r\n```\r\nERROR: After approving csr cache-server.kubeflow, the signed certificate did not appear on the resource. Giving up after 10 attempts.\r\n```" ]
"2021-12-21T08:59:01"
"2022-02-08T08:13:42"
"2022-02-08T08:13:42"
NONE
null
### Environment
* How did you deploy Kubeflow Pipelines (KFP)? kind
* KFP version: branch `master`

### Steps to reproduce
Follow the README `kustomize` instructions: https://github.com/kubeflow/pipelines/blob/master/manifests/kustomize/README.md
* error message: `"message": "invalid usage for client certificate: server auth",`

### describe csr
```
➜ .kind kubectl get csr cache-server.kubeflow
NAME                    AGE     SIGNERNAME                            REQUESTOR                                                             CONDITION
cache-server.kubeflow   6m38s   kubernetes.io/kube-apiserver-client   system:serviceaccount:kubeflow:kubeflow-pipelines-cache-deployer-sa   Approved,Failed
➜ .kind kubectl get csr cache-server.kubeflow -o json
{
  "apiVersion": "certificates.k8s.io/v1",
  "kind": "CertificateSigningRequest",
  "metadata": {
    "creationTimestamp": "2021-12-21T06:48:46Z",
    "name": "cache-server.kubeflow",
    "resourceVersion": "1485",
    "uid": "bece32dd-b0f2-4d31-9e1c-2aafa656945e"
  },
  "spec": {
    "extra": {
      "authentication.kubernetes.io/pod-name": [ "cache-deployer-deployment-578ffc9d46-5bjml" ],
      "authentication.kubernetes.io/pod-uid": [ "4866ee10-fb48-4034-9c36-bf519e0b81f1" ]
    },
    "groups": [ "system:serviceaccounts", "system:serviceaccounts:kubeflow", "system:authenticated" ],
    "request":
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQzlEQ0NBZHdDQVFBd0pERWlNQ0FHQTFVRUF3d1pZMkZqYUdVdGMyVnlkbVZ5TG10MVltVm1iRzkzTG5OMgpZekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLajcrZWtTRDA0Tm54YTczeXNLCnF6eVV4T1lnczEvUzJoZU9BMG1TVlVucXJsNm5ZVDRZWjBDWnViaFRvWUp0UEN2b0dSVXdoQ2ZsNE9nMFIzc1YKQ2VBdGpHT0RPR1BFdituUjJoTEgrSkFYZFIyM01sdzduOWhiQ0VhVWVONFNxWVdUK1BWbVhCV0RkdC90WVJPWQpuZnVzOFRqejJNbWgzN1ZUaUhqUGdDVlVoQTBIZHFkTms0VUMzb3UyL21PMHZYNXRLVTF5bWE2cEYyTUhMM1BtCnJzdDhzMmZGbnhkd0xlWFJlTzlPRm5xMlZzTHN2NGR5azE0SG5SWW82dER5eGwvWkpDYWNnTmNaYmVLeWNCVjcKYkhPOGVUbktoN20ydmppR0p2QnRnVG1NTzlpOWNLbzdzVS9YakROdyt0MVFMaFBxcDNHUkZITXVXUzEyMFAwNAovMFVDQXdFQUFhQ0JpakNCaHdZSktvWklodmNOQVFrT01Yb3dlREFKQmdOVkhSTUVBakFBTUFzR0ExVWREd1FFCkF3SUY0REFUQmdOVkhTVUVEREFLQmdnckJnRUZCUWNEQVRCSkJnTlZIUkVFUWpCQWdneGpZV05vWlMxelpYSjIKWlhLQ0ZXTmhZMmhsTFhObGNuWmxjaTVyZFdKbFpteHZkNElaWTJGamFHVXRjMlZ5ZG1WeUxtdDFZbVZtYkc5MwpMbk4yWXpBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQVRXL1dvdHpQRGtnNE1YbndyNUhxU2wxZ0FWOHBJM1dnCkpZRGdNMXRvdU1pTkZZaWZjVkRkRDZVNGhlblZBR0pTWSt2T1pXQkRoYTIvbVpNemROUXkybEZ1MDlOOHJ5eVYKcFpPTnBIWnRVZXloNTU1ZDdwWjVzNEtiRGFhd0RXV25tejBIWEk0SUJlY3FYSzRNVENsSVgrNXlLL2ZSZFd6RQo1RFNvcE9pWHQvMGxuMTJYNzZNcEV5WG5obDRQenpieG5wdFRJUjFPRU9Gb1pFUzdETkltbTcrQzRMNDVpSDhkCnBFSnhqQ0grcFVueVBWMkZLYUJnTHAzR2JHUDZlaTJ2eFdiRFVGWjhJdHNyNlpKdXNlU3Fib3pGV2lQOHBGYUMKdDhrUXNsRWlKSjFEellKdk5reXE3Wm5iMURRK09ESGZiV1IxdEtOdnJJaW1aSGJ4YVBxV3NBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==", "signerName": "kubernetes.io/kube-apiserver-client", "uid": "3796d0f3-65b1-443a-9b13-97e9a61e381b", "usages": [ "digital signature", "key encipherment", "server auth" ], "username": "system:serviceaccount:kubeflow:kubeflow-pipelines-cache-deployer-sa" }, "status": { "conditions": [ { "lastTransitionTime": "2021-12-21T06:48:46Z", "lastUpdateTime": "2021-12-21T06:48:46Z", "message": "This CSR was approved by kubectl certificate approve.", "reason": "KubectlApprove", "status": "True", "type": "Approved" }, { 
"lastTransitionTime": "2021-12-21T06:48:46Z", "lastUpdateTime": "2021-12-21T06:48:46Z", "message": "invalid usage for client certificate: server auth", "reason": "SignerValidationFailure", "status": "True", "type": "Failed" } ] } } ```
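The `SignerValidationFailure` above follows from Kubernetes' per-signer usage rules: the `kubernetes.io/kube-apiserver-client` signer only permits client-certificate usages, so a CSR that requests `server auth` is rejected even after approval. A minimal sketch of that check (the allowed-usage sets below are a simplified illustration based on the Kubernetes built-in signer rules, not code from this issue):

```python
# Simplified allowed key usages per built-in Kubernetes signer.
# Illustrative subset only; see the Kubernetes CSR signer docs for full rules.
ALLOWED_USAGES = {
    "kubernetes.io/kube-apiserver-client": {
        "digital signature", "key encipherment", "client auth",
    },
    "kubernetes.io/kubelet-serving": {
        "digital signature", "key encipherment", "server auth",
    },
}

def invalid_usages(signer_name, requested_usages):
    """Return the requested usages the given signer does not permit."""
    allowed = ALLOWED_USAGES.get(signer_name, set(requested_usages))
    return sorted(set(requested_usages) - allowed)

# The CSR in this issue requests "server auth" from the client-cert signer:
print(invalid_usages(
    "kubernetes.io/kube-apiserver-client",
    ["digital signature", "key encipherment", "server auth"],
))
```

With the signer and usages from the CSR above, the helper flags `server auth` as the offending usage, matching the error message in the status conditions.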
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7093/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7089
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7089/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7089/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7089/events
https://github.com/kubeflow/pipelines/issues/7089
1,084,398,941
I_kwDOB-71UM5Aop1d
7,089
test: tests fail with "AccessDeniedException: 403 There is an account problem for the requested project."
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Ohh, my guess was wrong. The project ml-pipeline-test itself is suspended.", "Recovered" ]
"2021-12-20T06:11:50"
"2021-12-20T07:15:14"
"2021-12-20T07:15:14"
CONTRIBUTOR
null
This includes the sample test and postsubmit tests. However, the samples-v2 test is still passing: https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/7088/kubeflow-pipeline-sample-test/1472757856003428352

Therefore, my initial guess is that the image we use is too outdated, so it can no longer authenticate with Google Cloud. Looking at https://github.com/GoogleCloudPlatform/oss-test-infra/blob/18c1b811dfaf8b07d83dccd73120049991424750/prow/prowjobs/kubeflow/pipelines/kubeflow-pipelines-presubmits.yaml, most tests use the gcr.io/k8s-testimages/kubekins-e2e:v20210113-cc576af-master image, but the samples v2 test uses the python:3.7 image.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7089/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7078
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7078/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7078/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7078/events
https://github.com/kubeflow/pipelines/issues/7078
1,082,387,910
I_kwDOB-71UM5Ag-3G
7,078
[feature] Flexible tensorboard images
{ "login": "casassg", "id": 6912589, "node_id": "MDQ6VXNlcjY5MTI1ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/6912589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/casassg", "html_url": "https://github.com/casassg", "followers_url": "https://api.github.com/users/casassg/followers", "following_url": "https://api.github.com/users/casassg/following{/other_user}", "gists_url": "https://api.github.com/users/casassg/gists{/gist_id}", "starred_url": "https://api.github.com/users/casassg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/casassg/subscriptions", "organizations_url": "https://api.github.com/users/casassg/orgs", "repos_url": "https://api.github.com/users/casassg/repos", "events_url": "https://api.github.com/users/casassg/events{/privacy}", "received_events_url": "https://api.github.com/users/casassg/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "any update on this? i have to include our antifactory images in our organization " ]
"2021-12-16T16:12:56"
"2022-09-14T15:38:34"
null
CONTRIBUTOR
null
### Feature Area
/area frontend

### What feature would you like to see?
Allow installations to modify the images available for Tensorboard. This would allow us to have images with tftext or Tensorboard expansions as needed. Currently the Tensorboard image list is HTML embedded in the frontend.

### What is the use case or pain point?
Use Tensorboard from KFP with different variations/versions and custom images.

### Is there a workaround currently?
Not really; only a manual deploy of an instance via kubectl.

---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7078/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7072
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7072/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7072/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7072/events
https://github.com/kubeflow/pipelines/issues/7072
1,081,883,211
I_kwDOB-71UM5AfDpL
7,072
how to set restartPolicy "Never" to init containers
{ "login": "changhoekim", "id": 33795112, "node_id": "MDQ6VXNlcjMzNzk1MTEy", "avatar_url": "https://avatars.githubusercontent.com/u/33795112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/changhoekim", "html_url": "https://github.com/changhoekim", "followers_url": "https://api.github.com/users/changhoekim/followers", "following_url": "https://api.github.com/users/changhoekim/following{/other_user}", "gists_url": "https://api.github.com/users/changhoekim/gists{/gist_id}", "starred_url": "https://api.github.com/users/changhoekim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changhoekim/subscriptions", "organizations_url": "https://api.github.com/users/changhoekim/orgs", "repos_url": "https://api.github.com/users/changhoekim/repos", "events_url": "https://api.github.com/users/changhoekim/events{/privacy}", "received_events_url": "https://api.github.com/users/changhoekim/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "As far as I'm aware, we don't support setting restartPolicy for either main container nor init container via SDK. \r\nBTW, ContainerOp is deprecated, you should received a warning for the code above.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-16T08:01:33"
"2022-04-17T06:27:34"
null
NONE
null
```
kfp.dsl.ContainerOp(
    name='spark-test',
    image='blahblah',
    init_containers=[
        kfp.dsl.UserContainer(
            name='sdfs', image='sdfs', command=['sh', '-c'], args=['sdfsf'])],
    command=['sh', '-c'])
```
This is my operation. I tested with a broken init container, and the pipeline retried it many times. I want it to try just once. How do I set restartPolicy on my pipeline container?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7072/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7064
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7064/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7064/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7064/events
https://github.com/kubeflow/pipelines/issues/7064
1,080,775,922
I_kwDOB-71UM5Aa1Ty
7,064
parse pipeline argument string 'None' to None
{ "login": "iuiu34", "id": 30587996, "node_id": "MDQ6VXNlcjMwNTg3OTk2", "avatar_url": "https://avatars.githubusercontent.com/u/30587996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iuiu34", "html_url": "https://github.com/iuiu34", "followers_url": "https://api.github.com/users/iuiu34/followers", "following_url": "https://api.github.com/users/iuiu34/following{/other_user}", "gists_url": "https://api.github.com/users/iuiu34/gists{/gist_id}", "starred_url": "https://api.github.com/users/iuiu34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iuiu34/subscriptions", "organizations_url": "https://api.github.com/users/iuiu34/orgs", "repos_url": "https://api.github.com/users/iuiu34/repos", "events_url": "https://api.github.com/users/iuiu34/events{/privacy}", "received_events_url": "https://api.github.com/users/iuiu34/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "pr #7065", "Thank you for raising the issue.\r\n\r\nKFP is language-agnostic pipeline system. `None` is a python-specific object that is not present in other languages. Im'm not sure it's a good idea to tie KFP to a specific language.\r\n\r\n>the component recieve 'None'.\r\n\r\nKFP orchestrates containerized command-line programs. Command-line programs, as defined by their POSIX standard, receive command-line arguments as an array of null-terminated strings. Command-line programs cannot receive \"None\" objects.\r\n\r\nThere is a way for a command-line program to distinguish between an empty string argument and a missing argument. Conditional placeholders in the component definitions help with that (`{if: {cond: {isPresent: Input1}, then: [\"--input-1\", {inputValue: Input1}}]}`).\r\n\r\nSDK automatically handles cases where an optional input argument was not specified when instantiating a component.\r\n\r\nHowever, as you've discovered, this might not work as well for pipeline arguments. The reason is that currently the conditional placeholders are evaluated at compile time, but the pipeline arguments can be specified after the compilation.\r\n\r\nThe specification of the compiled pipeline does not allow passing NoneType objects - only strings. So, if you want to be able to pass some value from pipeline parameter to a component and have the component interpret is as `None`, you have to do that on the component side.", "but when you say in the component side what do you mean?\r\na) manually inside the python code, adding `arg1 = None if arg1 == 'None' else arg1` in the pertinent functions\r\nb) automatically inside the component; in the container:command of the json. Which for py func would be something like adding a line inside the body as `locals() = {k:None if v == 'None' else v for v in locals()}` , but for components from yaml i don't know\r\nc) inside the inputs:parameters. 
Like the parsing that you do from str to bool (which i don't understand how it works). \r\n```\r\n\"input1\": { # string\r\n \"runtimeValue\": {\r\n \"constantValue\": {\r\n \"intValue\": \"1\"\r\n }\r\n }\r\n },```", "Alexey is correct that `None` is not something you can pass via command-line arguments.\r\nThat said, what you get is expected because you are passing string literal `\"None\"` which overrides the default value `None`. The way to receive a Python `None` object is to not pass anything at all, then the default value will be used.\r\n\r\nTry the following:\r\n```python\r\n@component\r\ndef task1(arg1: str = None):\r\n if arg1 is None:\r\n print('arg1 is None')\r\n@pipeline\r\ndef pipeline():\r\n task1()\r\n```", "But that's precisely the problem. The \"not passing anything at all\" is not an option in a function, it's only in the entrypoint.\r\n\r\nWhat I want is setting args with default None (not 'None') in the pipeline (not the component)\r\n\r\nBasically\r\n```py\r\n@component\r\ndef task1(arg1: str = None):\r\n if arg1 is None:\r\n print('arg1 is None')\r\n else:\r\n print(f\"arg1 value = '{arg1}'\")\r\n@pipeline\r\n pipeline(arg1: str = None):\r\n task1(arg1)\r\n```\r\npipeline() prints \"arg1 value = 'None'\" instead of the desired \"arg1 is None\"", "> What I want is setting args with default None (not 'None') in the pipeline (not the component)\r\n\r\nThat's not a supported scenario. As Alexey mentioned above, `None` is a Python-specific object, whereas KFP components are language-agnostic. \r\n\r\nIn your code sample, by passing object` None` to `task1` component, you're assuming that `task1` is a Python program. But that's not necessarily true. Albeit you write `task1` implementation using Python above, that doesn't mean all components are Python-based. 
In fact, we see quite a lot components written as bash commands.\r\n", "Yep, so agreed that in the pipeline arg1='None'.\r\n\r\nBut again; then, the parsing from 'None' to None, should be done (only) inside the py component? With something like `locals() = {k:None if v == 'None' else v for v in locals()} ` (option b)\r\n\r\nOr not parsing at all, and that py component should recieve 'None' and be treated by the user inside the function? (option a)", "> the parsing from 'None' to None, should be done (only) inside the py component?\r\n\r\nNo, I don't think we should ever convert `'None'` to `None`. It could be a legit user intention that they may want to pass `'None'` and consume it as string. We don't want to be \"oversmart\".\r\n\r\n> Or not parsing at all, and that py component should recieve 'None' and be treated by the user inside the function? (option a)\r\n\r\nYes, users can choose whatever they want, `'None'`, `'null'`, `'NoValue'`, etc. The system doesn't need to be aware of the user-chosen contract.\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-15T09:13:20"
"2022-04-17T06:27:23"
null
NONE
null
When defining a kfp pipeline with a str argument, if the value is 'None', the component receives the string 'None'. Wouldn't it be better to parse the string 'None' to None (NoneType)?

Now:
```py
@component
def task1(arg1: str = None):
    if arg1 is None or arg1 == 'None':
        print('arg1 is None')

@pipeline
def pipeline(arg1: str = 'None'):
    task1(arg1)
```
Expected:
```py
@component
def task1(arg1: str = None):
    if arg1 is None:
        print('arg1 is None')

@pipeline
def pipeline(arg1: str = 'None'):
    task1(arg1)
```
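As the maintainers argue in the comment thread, KFP passes only strings on the command line, so a 'None' sentinel is a user-chosen contract that must be handled inside the component. A hedged sketch of such a user-side helper (the name `parse_optional` and the sentinel constant are illustrative, not part of the KFP SDK):

```python
NONE_SENTINEL = "None"  # user-chosen contract; KFP itself does not interpret it

def parse_optional(value):
    """Map the sentinel string 'None' to Python None inside a component."""
    return None if value == NONE_SENTINEL else value

def task1(arg1: str = "None"):
    arg1 = parse_optional(arg1)
    if arg1 is None:
        print("arg1 is None")
    else:
        print(f"arg1 value = '{arg1}'")
```

This keeps the conversion at the component boundary, so the rest of the function body can treat `arg1` as a genuine optional value.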
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7064/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7063
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7063/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7063/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7063/events
https://github.com/kubeflow/pipelines/issues/7063
1,080,606,943
I_kwDOB-71UM5AaMDf
7,063
[backend] Runs triggered by jobs don't increase the Prometheus run counter
{ "login": "markwinter", "id": 4998112, "node_id": "MDQ6VXNlcjQ5OTgxMTI=", "avatar_url": "https://avatars.githubusercontent.com/u/4998112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/markwinter", "html_url": "https://github.com/markwinter", "followers_url": "https://api.github.com/users/markwinter/followers", "following_url": "https://api.github.com/users/markwinter/following{/other_user}", "gists_url": "https://api.github.com/users/markwinter/gists{/gist_id}", "starred_url": "https://api.github.com/users/markwinter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/markwinter/subscriptions", "organizations_url": "https://api.github.com/users/markwinter/orgs", "repos_url": "https://api.github.com/users/markwinter/repos", "events_url": "https://api.github.com/users/markwinter/events{/privacy}", "received_events_url": "https://api.github.com/users/markwinter/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Maybe using argo workflows `argo_workflows_count` is a better way to track runs" ]
"2021-12-15T06:05:41"
"2021-12-15T06:55:59"
"2021-12-15T06:39:12"
CONTRIBUTOR
null
### What steps did you take
1. Create a recurring run (seems to be called a Job in the backend?)
2. Enable the recurring run so that runs are generated

### What happened:
- `job_server_job_count` correctly increases when creating a recurring run
- `run_server_run_count` does not increase for runs started by recurring runs

### What did you expect to happen:
I expected `run_server_run_count` to also increase for each run started by a recurring run, so that I can track the total number of runs. Currently that counter only increases for one-off runs.

### Environment:
* How do you deploy Kubeflow Pipelines (KFP)? Initially Kubeflow 1.3, but we have since updated individual components
* KFP version: 1.7
* KFP SDK version: 1.7

### Labels
/area backend

---
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
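The expected accounting in this report can be modeled with a small toy class (purely illustrative; this is not KFP backend code, and the attribute names merely echo the metric names from the issue):

```python
class RunCounters:
    """Toy model of the two counters named in this report.

    The expectation stated in the issue: run_server_run_count grows for
    every run, whether one-off or started by a recurring job.
    """

    def __init__(self):
        self.job_server_job_count = 0
        self.run_server_run_count = 0

    def create_recurring_run(self):
        # Creating the recurring run (Job) bumps the job counter once.
        self.job_server_job_count += 1

    def record_run(self, triggered_by_job=False):
        # Desired behavior: count the run regardless of how it was triggered.
        self.run_server_run_count += 1

c = RunCounters()
c.create_recurring_run()
c.record_run(triggered_by_job=True)  # run generated by the recurring job
c.record_run()                       # one-off run
```

Under this model both runs are counted, which is the behavior the reporter expected but did not observe.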
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7063/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7058
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7058/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7058/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7058/events
https://github.com/kubeflow/pipelines/issues/7058
1,079,202,685
I_kwDOB-71UM5AU1N9
7,058
v2 - input parameter type auto conversion
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2186355346, "node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue", "name": "good first issue", "color": "fef2c0", "default": true, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "hi @Bobgy @zijianjoy, can I take this one? I'm looking for a first issue, to get familiar with the code." ]
"2021-12-14T01:20:58"
"2022-02-15T08:05:04"
null
CONTRIBUTOR
null
When a component receives input parameters, they may not exactly match the types required by the component's input spec. In such cases, we should consider adding automatic type conversions for possible type mismatches. Right now, string-to-other-type conversions have been implemented at [code link](https://github.com/kubeflow/pipelines/blob/ca6e05591d922f6958a7681827aea25c41b94573/v2/driver/driver.go#L634-L703). This issue tracks considering the implementation of other type conversions, such as bool to int.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7058/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/kubeflow/pipelines/issues/7058/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7048
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7048/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7048/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7048/events
https://github.com/kubeflow/pipelines/issues/7048
1,078,060,845
I_kwDOB-71UM5AQect
7,048
v2 control flow - iterator (parallel for) advanced features
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-13T04:07:19"
"2022-04-17T06:27:46"
null
CONTRIBUTOR
null
* [ ] P1 https://github.com/kubeflow/pipelines/issues/6161 * [ ] P2 https://github.com/kubeflow/pipelines/tree/master/samples/core/loop_parallelism
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7048/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7047
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7047/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7047/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7047/events
https://github.com/kubeflow/pipelines/issues/7047
1,078,027,893
I_kwDOB-71UM5AQWZ1
7,047
feat(backend): support resource requests
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, { "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/kubeflow/pipelines/milestones/11", "html_url": "https://github.com/kubeflow/pipelines/milestone/11", "labels_url": "https://api.github.com/repos/kubeflow/pipelines/milestones/11/labels", "id": 9154677, "node_id": "MI_kwDOB-71UM4Ai7B1", "number": 11, "title": "KFP 2.0.0-beta.3", "description": null, "creator": { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 1, "state": "open", "created_at": "2023-03-13T21:27:51", "updated_at": "2023-04-25T17:48:42", "due_on": null, "closed_at": null }
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen\r\n\r\n" ]
"2021-12-13T03:02:29"
"2023-04-10T00:51:12"
"2023-04-10T00:51:11"
CONTRIBUTOR
null
KFP pipeline spec only has resource limit fields right now, consider also adding support for request fields. https://github.com/kubeflow/pipelines/blob/1e032f550ce23cd40bfb6827b995248537b07d08/api/v2alpha1/pipeline_spec.proto#L632-L653
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7047/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7046
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7046/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7046/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7046/events
https://github.com/kubeflow/pipelines/issues/7046
1,078,027,307
I_kwDOB-71UM5AQWQr
7,046
feat(v2): support dynamic resource limits set by parameters
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-13T03:01:31"
"2022-04-17T07:27:22"
null
CONTRIBUTOR
null
These samples should pass: * [ ] https://github.com/kubeflow/pipelines/blob/master/samples/core/resource_spec/runtime_resource_request.py * [ ] https://github.com/kubeflow/pipelines/blob/master/samples/core/resource_spec/runtime_resource_request_gpu.py
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7046/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7043
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7043/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7043/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7043/events
https://github.com/kubeflow/pipelines/issues/7043
1,077,821,895
I_kwDOB-71UM5APkHH
7,043
feat(backend): support accelerator resources
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
closed
false
{ "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false }
[ { "login": "gkcalat", "id": 35157096, "node_id": "MDQ6VXNlcjM1MTU3MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/35157096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gkcalat", "html_url": "https://github.com/gkcalat", "followers_url": "https://api.github.com/users/gkcalat/followers", "following_url": "https://api.github.com/users/gkcalat/following{/other_user}", "gists_url": "https://api.github.com/users/gkcalat/gists{/gist_id}", "starred_url": "https://api.github.com/users/gkcalat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gkcalat/subscriptions", "organizations_url": "https://api.github.com/users/gkcalat/orgs", "repos_url": "https://api.github.com/users/gkcalat/repos", "events_url": "https://api.github.com/users/gkcalat/events{/privacy}", "received_events_url": "https://api.github.com/users/gkcalat/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen" ]
"2021-12-12T14:34:37"
"2023-03-07T01:22:14"
"2023-03-07T01:22:14"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7043/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7040
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7040/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7040/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7040/events
https://github.com/kubeflow/pipelines/issues/7040
1,077,273,630
I_kwDOB-71UM5ANeQe
7,040
[bug] Idempotency in kubeflow pipeline sagemaker component.
{ "login": "goswamig", "id": 3092152, "node_id": "MDQ6VXNlcjMwOTIxNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3092152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goswamig", "html_url": "https://github.com/goswamig", "followers_url": "https://api.github.com/users/goswamig/followers", "following_url": "https://api.github.com/users/goswamig/following{/other_user}", "gists_url": "https://api.github.com/users/goswamig/gists{/gist_id}", "starred_url": "https://api.github.com/users/goswamig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goswamig/subscriptions", "organizations_url": "https://api.github.com/users/goswamig/orgs", "repos_url": "https://api.github.com/users/goswamig/repos", "events_url": "https://api.github.com/users/goswamig/events{/privacy}", "received_events_url": "https://api.github.com/users/goswamig/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2415263031, "node_id": "MDU6TGFiZWwyNDE1MjYzMDMx", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components/aws/sagemaker", "name": "area/components/aws/sagemaker", "color": "0263f4", "default": false, "description": "AWS SageMaker components" } ]
open
false
null
[]
null
[ "@akartsky @surajkota @mbaijal @ryansteakley FYI.", "https://github.com/kubeflow/pipelines/issues/6465", "/area components/aws/sagemaker", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-10T22:14:18"
"2022-04-28T04:59:41"
null
CONTRIBUTOR
null
### What steps did you take If a node scales up/down, the SageMaker component tries to create the same job, which fails because SageMaker does not allow creating a job with the same name. The component controller should be able to detect this and resume the job from its existing state. ### What happened: The job hangs/fails. ### What did you expect to happen: I expect the job to resume from its previous state. ### Environment: kfp-1.6 <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7040/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7040/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7029
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7029/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7029/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7029/events
https://github.com/kubeflow/pipelines/issues/7029
1,075,415,737
I_kwDOB-71UM5AGYq5
7,029
[feature] upgrade MLMD to 1.5.0
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @zijianjoy \r\n/cc @haoxins ", "All clients for MLMD have been updated." ]
"2021-12-09T10:36:31"
"2022-01-01T19:26:12"
"2022-01-01T19:26:12"
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> instructions: https://github.com/kubeflow/pipelines/tree/master/third_party/ml-metadata Tasks * [x] #6996 * [x] regenerate golang client * [x] regenerate frontend client
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7029/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7028
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7028/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7028/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7028/events
https://github.com/kubeflow/pipelines/issues/7028
1,075,363,752
I_kwDOB-71UM5AGL-o
7,028
[feature] Make func_to_container_op functions ready for autogenerated docs
{ "login": "hahamark1", "id": 12664815, "node_id": "MDQ6VXNlcjEyNjY0ODE1", "avatar_url": "https://avatars.githubusercontent.com/u/12664815?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hahamark1", "html_url": "https://github.com/hahamark1", "followers_url": "https://api.github.com/users/hahamark1/followers", "following_url": "https://api.github.com/users/hahamark1/following{/other_user}", "gists_url": "https://api.github.com/users/hahamark1/gists{/gist_id}", "starred_url": "https://api.github.com/users/hahamark1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hahamark1/subscriptions", "organizations_url": "https://api.github.com/users/hahamark1/orgs", "repos_url": "https://api.github.com/users/hahamark1/repos", "events_url": "https://api.github.com/users/hahamark1/events{/privacy}", "received_events_url": "https://api.github.com/users/hahamark1/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "cc @chensun ", "Hi @hahamark1, \r\n\r\n`func_to_container_op` function is actually deprecated. To generate the docstrings, we need some amount of work to generate kfp-specific docstrings. We will work on this in the future. Thanks!", "Hi @ji-yaqi , came across this and was wondering whether there is a way users could find out what are the deprecated aspects of v1 SDK from documentation/library itself. Would you be amenable to a PR that adds docstring updates/deprecation warnings to this function/other deprecated functions?" ]
"2021-12-09T09:44:38"
"2022-02-17T20:26:00"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Currently, the function wrapper (`func_to_container_op`) does not copy the docstring of the function to the new wrapper object. Because of this, autogenerated docs cannot handle Kubeflow components. ### What is the use case or pain point? It would drastically reduce the time we spend writing docs. ### Is there a workaround currently? Currently, we write our docs by hand. ### How to solve it? By adding the following to the function `func_to_container_op` in `components._python_op.py`: ``` task_factory = _create_task_factory_from_component_spec(component_spec) task_factory.__name__ = func.__name__ task_factory.__doc__ = func.__doc__ return task_factory ``` --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7028/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7028/timeline
null
null
null
null
false
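The docstring-propagation fix proposed in the feature request above can be illustrated without any KFP dependency. This is a minimal sketch: `make_task_factory` is a hypothetical stand-in for `_create_task_factory_from_component_spec`, showing only how copying `__name__` and `__doc__` onto the wrapper makes the original metadata visible to autodoc tools.

```python
def make_task_factory(func):
    # Hypothetical stand-in for KFP's internal factory builder: it wraps
    # `func` in a new callable, which by default loses the original metadata.
    def task_factory(*args, **kwargs):
        return func(*args, **kwargs)

    # The fix proposed in the issue: propagate name and docstring so that
    # documentation generators see the wrapped function's metadata.
    task_factory.__name__ = func.__name__
    task_factory.__doc__ = func.__doc__
    return task_factory


def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


add_op = make_task_factory(add)
print(add_op.__name__)  # add
print(add_op.__doc__)   # Add two numbers.
```

The same effect is usually achieved in plain Python with `functools.wraps`, which also copies `__module__`, `__qualname__`, and annotations.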
https://api.github.com/repos/kubeflow/pipelines/issues/7023
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7023/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7023/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7023/events
https://github.com/kubeflow/pipelines/issues/7023
1,074,912,007
I_kwDOB-71UM5AEdsH
7,023
[sdk] Wrong IPython url links
{ "login": "juliadeclared", "id": 71798184, "node_id": "MDQ6VXNlcjcxNzk4MTg0", "avatar_url": "https://avatars.githubusercontent.com/u/71798184?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juliadeclared", "html_url": "https://github.com/juliadeclared", "followers_url": "https://api.github.com/users/juliadeclared/followers", "following_url": "https://api.github.com/users/juliadeclared/following{/other_user}", "gists_url": "https://api.github.com/users/juliadeclared/gists{/gist_id}", "starred_url": "https://api.github.com/users/juliadeclared/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juliadeclared/subscriptions", "organizations_url": "https://api.github.com/users/juliadeclared/orgs", "repos_url": "https://api.github.com/users/juliadeclared/repos", "events_url": "https://api.github.com/users/juliadeclared/events{/privacy}", "received_events_url": "https://api.github.com/users/juliadeclared/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "cc @chensun ", "Hi @juliadeclared I saw you closed the PR. Is the issue still valid? \r\n", "@chensun yes, the issue is still valid. For KF installations, the generated URI should include domain/_/pipeline since this is how its mapped for KF full installation.\r\n\r\nMaybe this can be a small doc update or something similar. But it was quite tricky for us to debug.", "> id. For KF installations, the generated URI should include domain/_/pipeline since this is how its mapped for KF full installation.\r\n\r\nI see. yes, I recall the URL for standalone KFP deployment and full fledge KF deployment are different. And currently we don't have a way to tell from the SDK client side, so we can only return one form of the URL. \r\nYou're probably right that this might need a doc update. Any suggestion where the change could be? Please feel free to create doc update PRs as well. Thanks!", "@chensun here is the doc update PR: https://github.com/kubeflow/website/pull/3105", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-08T22:49:30"
"2022-04-17T06:27:47"
null
NONE
null
### Environment Running on a full KF 1.4 installation on top of GKE. * KFP version: 1.7.1 <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP SDK version: 1.6.0 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> kfp 1.6.0 kfp-pipeline-spec 0.1.7 kfp-server-api 1.6.0 ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> Create an experiment with `kfp.Client.create_experiment()` in a notebook Click the generated link in IPython ### Expected result <!-- What should the correct behavior be? --> Currently generated link (incorrect): https://cluster_uri/#/experiments/details/88140703-5a6e-49ee-ba14-ed3b2a625877 Actual link (correct): https://cluster_uri/_/pipeline/?ns=namespace#/experiments/details/88140703-5a6e-49ee-ba14-ed3b2a625877 ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7023/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7023/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7021
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7021/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7021/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7021/events
https://github.com/kubeflow/pipelines/issues/7021
1,074,060,393
I_kwDOB-71UM5ABNxp
7,021
pod status is NotReady
{ "login": "ashissharma97", "id": 44938127, "node_id": "MDQ6VXNlcjQ0OTM4MTI3", "avatar_url": "https://avatars.githubusercontent.com/u/44938127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashissharma97", "html_url": "https://github.com/ashissharma97", "followers_url": "https://api.github.com/users/ashissharma97/followers", "following_url": "https://api.github.com/users/ashissharma97/following{/other_user}", "gists_url": "https://api.github.com/users/ashissharma97/gists{/gist_id}", "starred_url": "https://api.github.com/users/ashissharma97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashissharma97/subscriptions", "organizations_url": "https://api.github.com/users/ashissharma97/orgs", "repos_url": "https://api.github.com/users/ashissharma97/repos", "events_url": "https://api.github.com/users/ashissharma97/events{/privacy}", "received_events_url": "https://api.github.com/users/ashissharma97/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @ashissharma97 ,\r\n\r\nWould you like to get the events from failing pods to debug further\r\n\r\n```\r\nkubectl get <pod-id> -n <namespace> -o yaml\r\n```", "Hi @zijianjoy,\r\nThanks for replying\r\nHere is the yaml file of the pod \r\n```\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n labels:\r\n pipeline/runid: d5d18505-8c11-4caa-8e5a-168e14e35981\r\n pipelines.kubeflow.org/cache_enabled: \"true\"\r\n pipelines.kubeflow.org/cache_id: \"\"\r\n pipelines.kubeflow.org/metadata_context_id: \"3\"\r\n pipelines.kubeflow.org/metadata_execution_id: \"3\"\r\n workflows.argoproj.io/completed: \"false\"\r\n workflows.argoproj.io/workflow: \"[some_name]\"\r\n name: \"[name]\"\r\n namespace: \"[namespace]\"\r\n ownerReferences:\r\n - apiVersion: argoproj.io/v1alpha1\r\n blockOwnerDeletion: true\r\n controller: true\r\n kind: Workflow\r\n name: \"[some_name]\"\r\n uid: b6ff07e0-c798-406c-afc5-275ba61226f2\r\n resourceVersion: \"643080\"\r\n uid: e755c715-c613-401f-b699-a741fc07d35f\r\nspec:\r\n containers:\r\n - command:\r\n - argoexec\r\n - wait\r\n - --loglevel\r\n - info\r\n env:\r\n - name: ARGO_POD_NAME\r\n valueFrom:\r\n fieldRef:\r\n apiVersion: v1\r\n fieldPath: metadata.name\r\n - name: ARGO_CONTAINER_RUNTIME_EXECUTOR\r\n value: docker\r\n - name: GODEBUG\r\n value: x509ignoreCN=0\r\n - name: ARGO_CONTAINER_NAME\r\n value: wait\r\n - name: ARGO_INCLUDE_SCRIPT_OUTPUT\r\n value: \"false\"\r\n image: gcr.io/ml-pipeline/argoexec:v3.1.6-patch-license-compliance\r\n imagePullPolicy: IfNotPresent\r\n name: wait\r\n resources:\r\n limits:\r\n cpu: 500m\r\n memory: 512Mi\r\n requests:\r\n cpu: 10m\r\n memory: 32Mi\r\n terminationMessagePath: /dev/termination-log\r\n terminationMessagePolicy: File\r\n volumeMounts:\r\n - mountPath: /argo/podmetadata\r\n name: podmetadata\r\n - mountPath: /var/run/docker.sock\r\n name: docker-sock\r\n readOnly: true\r\n - mountPath: /argo/secret/mlpipeline-minio-artifact\r\n name: mlpipeline-minio-artifact\r\n readOnly: true\r\n - 
mountPath: /mainctrfs/obj\r\n name: \"[pvc]\"\r\n - mountPath: /var/run/secrets/kubernetes.io/serviceaccount\r\n name: kube-api-access-zgpz8\r\n readOnly: true\r\n - command:\r\n - python\r\n - print.py\r\n env:\r\n - name: ARGO_CONTAINER_NAME\r\n value: main\r\n - name: ARGO_INCLUDE_SCRIPT_OUTPUT\r\n value: \"false\"\r\n image: \"[imagelink]\"\r\n imagePullPolicy: Always\r\n name: main\r\n resources: {}\r\n terminationMessagePath: /dev/termination-log\r\n terminationMessagePolicy: File\r\n volumeMounts:\r\n - mountPath: /obj\r\n name: \"[volume_path]\"\r\n - mountPath: /var/run/secrets/kubernetes.io/serviceaccount\r\n name: kube-api-access-zgpz8\r\n readOnly: true\r\n dnsPolicy: ClusterFirst\r\n enableServiceLinks: true\r\n nodeName: gke-node\r\n preemptionPolicy: PreemptLowerPriority\r\n priority: 0\r\n restartPolicy: Never\r\n schedulerName: default-scheduler\r\n securityContext: {}\r\n serviceAccount: default-editor\r\n serviceAccountName: default-editor\r\n terminationGracePeriodSeconds: 30\r\n tolerations:\r\n - effect: NoExecute\r\n key: node.kubernetes.io/not-ready\r\n operator: Exists\r\n tolerationSeconds: 300\r\n - effect: NoExecute\r\n key: node.kubernetes.io/unreachable\r\n operator: Exists\r\n tolerationSeconds: 300\r\n volumes:\r\n - downwardAPI:\r\n defaultMode: 420\r\n items:\r\n - fieldRef:\r\n apiVersion: v1\r\n fieldPath: metadata.annotations\r\n path: annotations\r\n name: podmetadata\r\n - hostPath:\r\n path: /var/run/docker.sock\r\n type: Socket\r\n name: docker-sock\r\n - name: pvc_name\r\n persistentVolumeClaim:\r\n claimName: pvc_name\r\n - name: mlpipeline-minio-artifact\r\n secret:\r\n defaultMode: 420\r\n items:\r\n - key: accesskey\r\n path: accesskey\r\n - key: secretkey\r\n path: secretkey\r\n secretName: mlpipeline-minio-artifact\r\n - name: kube-api-access-zgpz8\r\n projected:\r\n defaultMode: 420\r\n sources:\r\n - serviceAccountToken:\r\n expirationSeconds: 3607\r\n path: token\r\n - configMap:\r\n items:\r\n - key: ca.crt\r\n 
path: ca.crt\r\n name: kube-root-ca.crt\r\n - downwardAPI:\r\n items:\r\n - fieldRef:\r\n apiVersion: v1\r\n fieldPath: metadata.namespace\r\n path: namespace\r\nstatus:\r\n conditions:\r\n - lastProbeTime: null\r\n lastTransitionTime: \"2021-12-10T19:19:28Z\"\r\n status: \"True\"\r\n type: Initialized\r\n - lastProbeTime: null\r\n lastTransitionTime: \"2021-12-10T19:19:28Z\"\r\n message: 'containers with unready status: [main]'\r\n reason: ContainersNotReady\r\n status: \"False\"\r\n type: Ready\r\n - lastProbeTime: null\r\n lastTransitionTime: \"2021-12-10T19:19:28Z\"\r\n message: 'containers with unready status: [main]'\r\n reason: ContainersNotReady\r\n status: \"False\"\r\n type: ContainersReady\r\n - lastProbeTime: null\r\n lastTransitionTime: \"2021-12-10T19:19:28Z\"\r\n status: \"True\"\r\n type: PodScheduled\r\n containerStatuses:\r\n - containerID: containerd://867d5a7e486aa4809f7090221d75c982b698b98cde66a57dd4c6b4cf1ba5eb52\r\n image: \"[container_link]\"\r\n imageID: \"[container_link_id]\"\r\n lastState: {}\r\n name: main\r\n ready: false\r\n restartCount: 0\r\n started: false\r\n state:\r\n terminated:\r\n containerID: containerd://867d5a7e486aa4809f7090221d75c982b698b98cde66a57dd4c6b4cf1ba5eb52\r\n exitCode: 0\r\n finishedAt: \"2021-12-10T19:20:36Z\"\r\n reason: Completed\r\n startedAt: \"2021-12-10T19:20:36Z\"\r\n - containerID: containerd://c78df00a2e6316fdf5ea69c8eddfa3c65adc358c2da852892ae2b7af7461f67f\r\n image: gcr.io/ml-pipeline/argoexec:v3.1.6-patch-license-compliance\r\n imageID: gcr.io/ml-pipeline/argoexec@sha256:44cf8455a51aa5b961d1a86f65e39adf5ffca9bdcd33a745c3b79f430b7439e0\r\n lastState: {}\r\n name: wait\r\n ready: true\r\n restartCount: 0\r\n started: true\r\n state:\r\n running:\r\n startedAt: \"2021-12-10T19:19:51Z\"\r\n hostIP: 10.1.0.31\r\n phase: Running\r\n podIP: 10.244.1.69\r\n podIPs:\r\n - ip: 10.244.1.69\r\n qosClass: Burstable\r\n startTime: \"2021-12-10T19:19:28Z\"\r\n```", "Can you share with me:\r\n\r\n1. 
Kubernetes version\r\n2. Kubeflow version\r\n3. What executor your are using? https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/", "Sure...\r\nKubernetes Version: 1.21.6-gke.1500\r\nKubeflow Version: v1.4.0\r\nThe executor I am using is the default one which is containerd.\r\n\r\nThanks\r\n\r\n", "Are you using emissary executor?\r\n\r\n`kubectl describe configmap workflow-controller-configmap | grep -A 2 containerRuntimeExecutor`", "Hi @zijianjoy,\r\nNo, I am using docker and this is happening when the task of the pod is completed instead of completed status it is showing NotReady.", "@ashissharma97 , KFP won't work with docker executor if you are using Kubernetes version > 1.19, because docker runtime has been deprecated by Kubernetes. You will have to switch to use emissary executor in order to make your pipeline working.\r\n\r\nIn order to use emissary executor: https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/#migrate-to-emissary-executor\r\n\r\nDetailed issue description: https://github.com/kubeflow/pipelines/issues/5714", "Thank You @zijianjoy, This resolved my issue." ]
"2021-12-08T06:30:36"
"2021-12-15T08:14:57"
"2021-12-15T08:14:57"
NONE
null
Hi, I have installed Kubeflow from Kubeflow manifest and when I am running a pipeline I am getting pod status as “NotReady”. Can somebody help with this? Thanks
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7021/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7015
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7015/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7015/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7015/events
https://github.com/kubeflow/pipelines/issues/7015
1,073,489,492
I_kwDOB-71UM4__CZU
7,015
set component ram request using pipeline parameter value
{ "login": "ypitrey", "id": 17247240, "node_id": "MDQ6VXNlcjE3MjQ3MjQw", "avatar_url": "https://avatars.githubusercontent.com/u/17247240?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ypitrey", "html_url": "https://github.com/ypitrey", "followers_url": "https://api.github.com/users/ypitrey/followers", "following_url": "https://api.github.com/users/ypitrey/following{/other_user}", "gists_url": "https://api.github.com/users/ypitrey/gists{/gist_id}", "starred_url": "https://api.github.com/users/ypitrey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ypitrey/subscriptions", "organizations_url": "https://api.github.com/users/ypitrey/orgs", "repos_url": "https://api.github.com/users/ypitrey/repos", "events_url": "https://api.github.com/users/ypitrey/events{/privacy}", "received_events_url": "https://api.github.com/users/ypitrey/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "cc @chensun ", "@ypitrey The fundamental issue is that `300 * run_duration` is a piece of code not captured in any container. Generally speaking, a pipeline may only use components as building blocks, where each component is a containerized app. There can't be code hanging outside a component. Does that make sense?\r\nI assume you're running on Kubeflow Pipelines rather than Vertex Pipelines, am I right? If so, I would suggest you have an additional pipeline input for memory request:\r\n```python\r\n@dsl.pipeline(name='example_pipeline', description='foo')\r\ndef pipeline(run_duration: float, memory_request: str):\r\n ...\r\n comp = my_component(run_duration)\r\n comp.set_memory_request(memory_request) \r\n ...\r\n```\r\n\r\n\r\n ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-07T15:31:08"
"2022-04-17T07:27:13"
null
NONE
null
I have a pipeline in which some components require an amount of RAM that is dependent on the run duration. Our runs can take anything from 10 minutes to 12 hours and we run mostly short runs (less than 30 minutes). I'm looking into adapting the memory request to have enough RAM for long runs while saving costs on short runs. The run duration in hours is specified as a pipeline parameter. Basically, I'd like to do something like this: ``` @dsl.pipeline(name='example_pipeline', description='foo') def pipeline(run_duration: float): ... comp = my_component(run_duration) comp.set_memory_request(f'{300 * run_duration}Mi') # 300 is an arbitrary value for this example comp.set_memory_limit('4Gi') # '4Gi' is an arbitrary value for this example ... ``` But I can't do that because the value of `run_duration` isn't available from the `pipeline` function. Is there a workaround for this? Or am I looking at the issue from the wrong perspective? Thanks!
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7015/timeline
null
null
null
null
false
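The workaround suggested in the comments on the issue above — compute the memory string client-side and pass it in as a pipeline input — can be sketched as a plain helper. This is an illustrative sketch only: `memory_request_for`, the per-hour rate, and the floor value are made-up names and numbers for this example, not part of the KFP SDK.

```python
def memory_request_for(run_duration_hours: float,
                       mib_per_hour: int = 300,
                       floor_mib: int = 512) -> str:
    # Scale the memory request with the expected run duration, but never
    # go below a floor so very short runs still get a workable request.
    mib = max(floor_mib, int(mib_per_hour * run_duration_hours))
    return f"{mib}Mi"


print(memory_request_for(12))   # 3600Mi  (long run)
print(memory_request_for(0.5))  # 512Mi   (short run hits the floor)
```

The resulting string would then be supplied as an extra pipeline parameter (e.g. `memory_request` in the maintainer's reply) at submission time, since Python expressions over pipeline parameters cannot be evaluated inside the pipeline function itself.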
https://api.github.com/repos/kubeflow/pipelines/issues/7014
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7014/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7014/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7014/events
https://github.com/kubeflow/pipelines/issues/7014
1,073,316,650
I_kwDOB-71UM4_-YMq
7,014
[backend] Forbid unarchive runs action if the run belongs to archived experiment
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @difince ", "This behavior sounds good to us, feel free to contribute to this fix!" ]
"2021-12-07T12:51:16"
"2022-02-09T23:24:33"
"2022-02-09T23:24:33"
MEMBER
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Add backend validation so that the user cannot activate a Run if it belongs to an Archived Experiment. When the [unarchive](https://www.kubeflow.org/docs/components/pipelines/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-runs--id-:unarchive-post) endpoint is called, check if the Run is under an **Archived** Experiment. If so, return an error 412 Precondition Failed <!-- Provide a description of this feature and the user experience. --> ### What is the use case or pain point? In this [issue-comment](https://github.com/kubeflow/pipelines/issues/5114#issuecomment-777735817), it was concluded that an **Archived Experiment** _cannot_ contain **Active Runs**. This raises the need for such validation. ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7014/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7014/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7013
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7013/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7013/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7013/events
https://github.com/kubeflow/pipelines/issues/7013
1,073,217,547
I_kwDOB-71UM4_-AAL
7,013
[sdk] Compile fails when the pipeline name is not specified
{ "login": "ysk24ok", "id": 3449164, "node_id": "MDQ6VXNlcjM0NDkxNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3449164?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ysk24ok", "html_url": "https://github.com/ysk24ok", "followers_url": "https://api.github.com/users/ysk24ok/followers", "following_url": "https://api.github.com/users/ysk24ok/following{/other_user}", "gists_url": "https://api.github.com/users/ysk24ok/gists{/gist_id}", "starred_url": "https://api.github.com/users/ysk24ok/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysk24ok/subscriptions", "organizations_url": "https://api.github.com/users/ysk24ok/orgs", "repos_url": "https://api.github.com/users/ysk24ok/repos", "events_url": "https://api.github.com/users/ysk24ok/events{/privacy}", "received_events_url": "https://api.github.com/users/ysk24ok/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "cc @chensun ", "Hi @ysk24ok, in V2, we ask that pipeline name should be specified for better usage. \r\n\r\nYour code should look like:\r\n```\r\n@pipeline(name=\"your_pipeline_name\")\r\ndef sample():\r\n do_nothing_op()\r\n```\r\n\r\nHope this explains!\r\n", "@ji-yaqi Thank you for your response.\r\nThen, how about making `name` arg of `pipeline` function required in v2 SDK?\r\nIt's a bit confusing that the type of`name` arg is `Optional[str]`.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "In kfp v2 (the stable release hasn't been made yet, though), the problem seems to be fixed.\r\n\r\n```console\r\n% cat pipeline.py\r\nfrom kfp.dsl import pipeline, component\r\nfrom kfp.compiler import Compiler\r\n\r\n\r\n@component\r\ndef do_nothing_op():\r\n pass\r\n\r\n\r\n@pipeline()\r\ndef sample_pipeline():\r\n do_nothing_op()\r\n\r\n\r\nCompiler().compile(\r\n pipeline_func=sample_pipeline,\r\n package_path=\"/tmp/pipeline.yaml\",\r\n)\r\n% python pipeline.py\r\n% yq '.pipelineInfo' < /tmp/pipeline.yaml\r\nname: sample-pipeline\r\n```\r\n\r\nThe pipeline name is inferred from the pipeline function name. Closing this issue." ]
"2021-12-07T11:05:12"
"2022-08-29T11:22:11"
"2022-08-29T11:22:10"
CONTRIBUTOR
null
### Environment * KFP version: <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP SDK version: master <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ``` kfp 1.8.9 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ``` ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ```console $ cat pipelines.py from kfp.v2.dsl import pipeline, component from kfp.v2.compiler import Compiler @component def do_nothing_op(): pass @pipeline() def sample(): do_nothing_op() Compiler().compile( pipeline_func=sample, package_path="/tmp/pipeline.json", ) $ python3 pipelines.py Traceback (most recent call last): File "pipelines.py", line 17, in <module> Compiler().compile( File "/home/ysk24ok/repos/pipelines/sdk/python/kfp/v2/compiler/compiler.py", line 94, in compile pipeline_spec = self._create_pipeline_v2( File "/home/ysk24ok/repos/pipelines/sdk/python/kfp/v2/compiler/compiler.py", line 184, in _create_pipeline_v2 pipeline_spec = self._create_pipeline_spec( File "/home/ysk24ok/repos/pipelines/sdk/python/kfp/v2/compiler/compiler.py", line 275, in _create_pipeline_spec self._validate_pipeline_name(pipeline.name) File "/home/ysk24ok/repos/pipelines/sdk/python/kfp/v2/compiler/compiler.py", line 393, in _validate_pipeline_name raise ValueError( ValueError: Invalid pipeline name: Sample. Please specify a pipeline name that matches the regular expression "^[a-z0-9][a-z0-9-]{0,127}$" using `dsl.pipeline(name=...)` decorator. ``` ### Expected result <!-- What should the correct behavior be? 
--> The pipeline can be compiled without specifying the name of the pipeline. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> https://github.com/kubeflow/pipelines/blob/c1b67154a2abb75bd7dea4d2c8c52a52e2e8f7b2/sdk/python/kfp/v2/compiler/compiler.py#L129 It seems this bug is caused here. If the pipeline name is not specified, `pipeline_name` is None and `pipeline_meta.name` is used as its name, which is the return value of `_python_function_name_to_component_name`. https://github.com/kubeflow/pipelines/blob/c1b67154a2abb75bd7dea4d2c8c52a52e2e8f7b2/sdk/python/kfp/v2/components/component_factory.py#L289-L291 This function capitalzies the first character and it does not match the rule in the compiler. So I think we should add a new function like `_python_function_name_to_pipeline_name` and use the result instead of `pipeline_meta.name`. Since the rule of components name and that of the pipeline name is different, we should use different functions. ``` pipeline_name = pipeline_name or _python_function_name_to_pipeline_name(pipeline_func) ``` If this approach is appropriate, I'll work on this. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7013/timeline
null
completed
null
null
false
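The fix sketched in the issue above — deriving a compliant pipeline name from the Python function name instead of merely capitalizing it — could look like the following. This is an illustrative sketch under stated assumptions: `function_name_to_pipeline_name` is a hypothetical helper, not the actual KFP implementation (which, per the closing comment, later resolved the issue in v2).

```python
import re

# The compiler's pipeline-name rule quoted in the error message above.
_PIPELINE_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{0,127}$")


def function_name_to_pipeline_name(func_name: str) -> str:
    # Lower-case and hyphenate the Python function name so it satisfies
    # the pipeline-name pattern, instead of capitalizing the first letter.
    candidate = func_name.lower().replace("_", "-")
    if not _PIPELINE_NAME_RE.match(candidate):
        raise ValueError(
            f"cannot derive a valid pipeline name from {func_name!r}")
    return candidate


print(function_name_to_pipeline_name("sample_pipeline"))  # sample-pipeline
```

This matches the behavior the final comment reports for kfp v2, where a function named `sample_pipeline` compiles to the pipeline name `sample-pipeline`.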
https://api.github.com/repos/kubeflow/pipelines/issues/7012
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7012/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7012/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7012/events
https://github.com/kubeflow/pipelines/issues/7012
1,072,589,464
I_kwDOB-71UM4_7mqY
7,012
[google-cloud-pipeline-components] Job Name in gcp_resource is not formatted correctly or is empty
{ "login": "Qingwt", "id": 92800575, "node_id": "U_kgDOBYgGPw", "avatar_url": "https://avatars.githubusercontent.com/u/92800575?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qingwt", "html_url": "https://github.com/Qingwt", "followers_url": "https://api.github.com/users/Qingwt/followers", "following_url": "https://api.github.com/users/Qingwt/following{/other_user}", "gists_url": "https://api.github.com/users/Qingwt/gists{/gist_id}", "starred_url": "https://api.github.com/users/Qingwt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qingwt/subscriptions", "organizations_url": "https://api.github.com/users/Qingwt/orgs", "repos_url": "https://api.github.com/users/Qingwt/repos", "events_url": "https://api.github.com/users/Qingwt/events{/privacy}", "received_events_url": "https://api.github.com/users/Qingwt/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "cc @IronPan ", "kfp -> 1.8.10\r\ngoogle-cloud-pipeline-components -> 0.2.1\r\n\r\nI'm experiencing the same issue when running tasks created from `google_cloud_pipeline_components.experimental.hyperparameter_tuning_job` \r\nAnd I located to the same place `google_cloud_pipeline_components\\container\\experimental\\gcp_launcher\\job_remote_runner.py` at line 80, where the inputs of `pattern` and `string` to `re.finall` are flipped.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi there\r\nUsing `ModelBatchPredictOp` I'm experiencing the same issue:\r\n\r\npython = 3.8\r\ngoogle-cloud = 0.34.0\r\ngoogle-cloud-aiplatform = 1.12.0\r\nkfp = 1.8.12\r\ngoogle-cloud-pipeline-components = 1.0.5\r\n\r\n\r\n```python\r\nprediction_task = ModelBatchPredictOp(\r\n project=project,\r\n location=location,\r\n job_display_name=segment.job_name,\r\n model=loaded_model_task.outputs[\"model\"],\r\n gcs_source_uris=segment.source,\r\n instances_format=\"jsonl\",\r\n gcs_destination_output_uri_prefix=segment.results_prefix,\r\n predictions_format=\"jsonl\",\r\n machine_type=machine_type,\r\n starting_replica_count=segment.starting_replica_count,\r\n max_replica_count=512,\r\n)\r\n```\r\n\r\nWhat happens?\r\n```logs\r\nTraceback (most recent call last):\r\n File \"/opt/python3.7/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\"\r\n \"__main__\", mod_spec)\"\r\n File \"/opt/python3.7/lib/python3.7/runpy.py\", line 85, in _run_code\"\r\n exec(code, run_globals)\"\r\n File \"/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/launcher.py\", line 229, in <module>\"\r\n main(sys.argv[1:])\"\r\n File \"/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/launcher.py\", line 225, in main\r\n _JOB_TYPE_TO_ACTION_MAP[job_type](**parsed_args)\"\r\n File 
\"/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/batch_prediction_job_remote_runner.py\", line 105, in create_batch_prediction_job\r\n job_name = remote_runner.check_if_job_exists()\r\n File \"/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/job_remote_runner.py\", line 104, in check_if_job_exists\r\n 'Job Name in gcp_resource is not formatted correctly or is empty.'\r\nValueError: Job Name in gcp_resource is not formatted correctly or is empty.\r\n```\r\n\r\nAny ideas or suggestions for dealing with this would be greatly appreciated.", "> Hi there Using `ModelBatchPredictOp` I'm experiencing the same issue:\r\n> \r\n> python = 3.8 google-cloud = 0.34.0 google-cloud-aiplatform = 1.12.0 kfp = 1.8.12 google-cloud-pipeline-components = 1.0.5\r\n> \r\n> ```python\r\n> prediction_task = ModelBatchPredictOp(\r\n> project=project,\r\n> location=location,\r\n> job_display_name=segment.job_name,\r\n> model=loaded_model_task.outputs[\"model\"],\r\n> gcs_source_uris=segment.source,\r\n> instances_format=\"jsonl\",\r\n> gcs_destination_output_uri_prefix=segment.results_prefix,\r\n> predictions_format=\"jsonl\",\r\n> machine_type=machine_type,\r\n> starting_replica_count=segment.starting_replica_count,\r\n> max_replica_count=512,\r\n> )\r\n> ```\r\n> \r\n> What happens?\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"/opt/python3.7/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\"\r\n> \"__main__\", mod_spec)\"\r\n> File \"/opt/python3.7/lib/python3.7/runpy.py\", line 85, in _run_code\"\r\n> exec(code, run_globals)\"\r\n> File \"/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/launcher.py\", line 229, in <module>\"\r\n> main(sys.argv[1:])\"\r\n> File \"/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/launcher.py\", line 225, in main\r\n> 
_JOB_TYPE_TO_ACTION_MAP[job_type](**parsed_args)\"\r\n> File \"/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/batch_prediction_job_remote_runner.py\", line 105, in create_batch_prediction_job\r\n> job_name = remote_runner.check_if_job_exists()\r\n> File \"/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/job_remote_runner.py\", line 104, in check_if_job_exists\r\n> 'Job Name in gcp_resource is not formatted correctly or is empty.'\r\n> ValueError: Job Name in gcp_resource is not formatted correctly or is empty.\r\n> ```\r\n> \r\n> Any ideas or suggestions for dealing with this would be greatly appreciated.\r\n\r\nI am currently having the same issue when trying to run a HyperparameterTuningJobRunOp. Did you find a solution?", "I have exactly the same issue, when trying to run batch prediction over the uploaded vertex model: `Job Name in gcp_resource is not formatted correctly or is empty.`\r\n\r\nManual batch prediction fails with: \r\n\r\n> Batch prediction job batch_preduct encountered the following errors: Model server terminated: model server container terminated: exit_code: 1 reason: \"Error\" started_at { seconds: 1661869830 } finished_at { seconds: 1661869831 } . \r\n\r\nThe same model works fine when deployed to an endpoint. Any hints or workarounds would be greatly appreciated!\r\n", "Update, according to [that commit](https://github.com/kubeflow/pipelines/commit/582aefa56d1850d3bd9f1cb5bff26bd25baa50dd), the issue was fixed in the code 19 days ago. It doesn't look like it is scheduled for release any time soon." ]
"2021-12-06T20:38:14"
"2022-08-31T13:33:23"
null
NONE
null
### Environment * KFP SDK version: 1.8.9 * All dependencies version: google_cloud_pipeline_components version: 0.2.0 ### Steps to reproduce Followed the tutorial of the Vertex AI pipeline and created a training job with the code below ``` training_op = gcc_aip.CustomContainerTrainingJobRunOp( display_name = "test", container_uri=container_uri, project = project, location = gcp_region, dataset = dataset_create_op.outputs["dataset"], staging_bucket = bucket, training_fraction_split = 0.8, validation_fraction_split = 0.1, test_fraction_split = 0.1, model_serving_container_image_uri = "custom container image uri", model_serving_container_health_route="/healthcheck", model_serving_container_predict_route="/predict", model_display_name = "scikit-tests", machine_type = "n1-standard-4", ) ``` After submitting the pipeline job, I received the error "Job Name in gcp_resource is not formatted correctly or is empty". By checking the source code, I suspect there is a bug in `google_cloud_pipeline_components\container\experimental\gcp_launcher\job_remote_runner.py` at line 80. The source code is ``` job_name_group = re.findall( job_resources.resources[0].resource_uri, f'{self.job_uri_prefix}(.*)') ``` To the best of my knowledge, the signature is `re.findall(pattern, string, flags=0)` ([python doc](https://docs.python.org/3/library/re.html#re.findall)), which indicates that the first parameter should be the pattern, while the code snippet above passes the string as the first parameter. Could you please confirm whether that is expected?
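The swapped-argument bug this report describes can be reproduced in isolation. The URI values below are hypothetical stand-ins for the launcher's runtime data (`self.job_uri_prefix` and the recorded `resource_uri`), not values taken from the library:

```python
import re

# Hypothetical values standing in for self.job_uri_prefix and the
# resource_uri stored in the gcp_resources proto.
job_uri_prefix = 'https://us-central1-aiplatform.googleapis.com/v1/'
resource_uri = job_uri_prefix + 'projects/p/locations/us-central1/customJobs/123'

# Buggy call (pattern and string swapped): the long URI is treated as the
# pattern and can never match the short template string, so nothing is found.
swapped = re.findall(resource_uri, f'{job_uri_prefix}(.*)')
print(swapped)  # []

# Correct call per re.findall(pattern, string, flags=0): pattern comes first,
# and the capture group extracts the job name suffix.
correct = re.findall(f'{job_uri_prefix}(.*)', resource_uri)
print(correct)  # ['projects/p/locations/us-central1/customJobs/123']
```

An empty match list is exactly what would make the downstream check raise "Job Name in gcp_resource is not formatted correctly or is empty", which is consistent with the error users report in the comments.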
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7012/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7012/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7009
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7009/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7009/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7009/events
https://github.com/kubeflow/pipelines/issues/7009
1,072,199,648
I_kwDOB-71UM4_6Hfg
7,009
[backend] kubeflow 1.6.0 no support onExit
{ "login": "yiyuanyu17", "id": 15135974, "node_id": "MDQ6VXNlcjE1MTM1OTc0", "avatar_url": "https://avatars.githubusercontent.com/u/15135974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yiyuanyu17", "html_url": "https://github.com/yiyuanyu17", "followers_url": "https://api.github.com/users/yiyuanyu17/followers", "following_url": "https://api.github.com/users/yiyuanyu17/following{/other_user}", "gists_url": "https://api.github.com/users/yiyuanyu17/gists{/gist_id}", "starred_url": "https://api.github.com/users/yiyuanyu17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yiyuanyu17/subscriptions", "organizations_url": "https://api.github.com/users/yiyuanyu17/orgs", "repos_url": "https://api.github.com/users/yiyuanyu17/repos", "events_url": "https://api.github.com/users/yiyuanyu17/events{/privacy}", "received_events_url": "https://api.github.com/users/yiyuanyu17/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "here is our test pipeline\r\n\r\n\r\n\r\n```\r\nimport os\r\nimport sys\r\n\r\nimport kfp.compiler\r\nimport kfp.components as comp\r\nimport kfp.dsl as dsl\r\nimport json\r\n\r\nstart_comp = comp.load_component_from_file('./start.yaml')\r\nwork_comp = comp.load_component_from_file('./work.yaml')\r\nexit_comp = comp.load_component_from_file('./exit.yaml')\r\n\r\n\r\n@dsl.pipeline(\r\n name=\"[test]exit test pipeline\"\r\n)\r\ndef pipeline(\r\n task_id: int\r\n):\r\n\r\n pipeline_conf = dsl.get_pipeline_conf()\r\n pipeline_conf.set_image_pull_secrets([{\"name\": \"rudder-image-secret\"}])\r\n pipeline_conf.set_image_pull_policy(\"Always\")\r\n pipeline_conf.set_ttl_seconds_after_finished(7200) # one day\r\n\r\n\r\n exit_op = exit_comp(str(task_id)).set_display_name(\"onExit\")\r\n\r\n with dsl.ExitHandler(exit_op):\r\n start_op = start_comp().set_display_name(\"start\")\r\n work_op = work_comp().set_display_name(\"work\")\r\n work_op.after(start_op)\r\n\r\n\r\n\r\nif __name__ == '__main__':\r\n\r\n kfp.compiler.Compiler().compile(pipeline, __file__.rstrip('.py') + '.yaml')\r\n```\r\n\r\nand the start.yaml, work.yaml, exit.yaml provided here\r\nstart.yaml\r\n\r\n```\r\nname: start\r\ninputs:\r\noutputs:\r\nimplementation:\r\n container:\r\n image: busybox\r\n args: [\r\n '/bin/sh',\r\n '-c',\r\n 'echo pipeline start'\r\n ]\r\n```\r\n\r\nwork.yaml\r\n\r\n```\r\nname: work\r\ninputs:\r\noutputs:\r\nimplementation:\r\n container:\r\n image: busybox\r\n args: [\r\n '/bin/sh',\r\n '-c',\r\n 'echo working... 
;sleep 10; echo done'\r\n ]\r\n```\r\n\r\nexit.yaml\r\n\r\n```\r\nname: exit\r\ninputs:\r\n - {name: task_id, type: String, description: 'task id'}\r\noutputs:\r\nimplementation:\r\n container:\r\n image: busybox\r\n args: [\r\n '/bin/sh',\r\n '-c',\r\n 'echo pipeline exit',\r\n {inputValue: task_id}\r\n ]\r\n```\r\n\r\nand use kfp 1.6.0 which can be uploaded in kubeflow 1.0, but can not be uploaded in kubeflow 1.6。and the error is \r\n```\r\n{\r\n \"error_message\": \"Error creating pipeline version: Create pipeline version failed: Failed to get parameters from the workflow: templates.exit inputs.parameters.task_id was not supplied\",\r\n \"error_details\": \"templates.exit inputs.parameters.task_id was not supplied\\ngithub.com/argoproj/argo-workflows/v3/errors.New\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/errors/errors.go:49\\ngithub.com/argoproj/argo-workflows/v3/errors.Errorf\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/errors/errors.go:55\\ngithub.com/argoproj/argo-workflows/v3/workflow/validate.(*templateValidationCtx).validateTemplate\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/workflow/validate/validate.go:340\\ngithub.com/argoproj/argo-workflows/v3/workflow/validate.(*templateValidationCtx).validateTemplateHolder\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/workflow/validate/validate.go:454\\ngithub.com/argoproj/argo-workflows/v3/workflow/validate.ValidateWorkflow\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/workflow/validate/validate.go:210\\ngithub.com/kubeflow/pipelines/backend/src/common/util.ValidateWorkflow\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/template_util.go:60\\ngithub.com/kubeflow/pipelines/backend/src/common/util.GetParameters\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/template_util.go:31\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).CreatePipelineVersion\\n\\t/go/src/g
ithub.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:1153\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineUploadServer).UploadPipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_upload_server.go:185\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2007\\ngithub.com/gorilla/mux.(*Router).ServeHTTP\\n\\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\\nnet/http.serverHandler.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2802\\nnet/http.(*conn).serve\\n\\t/usr/local/go/src/net/http/server.go:1890\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nFailed to get parameters from the workflow\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/common/util.GetParameters\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/template_util.go:33\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).CreatePipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:1153\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineUploadServer).UploadPipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_upload_server.go:185\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2007\\ngithub.com/gorilla/mux.(*Router).ServeHTTP\\n\\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\\nnet/http.serverHandler.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2802\\nnet/http.(*conn).serve\\n\\t/usr/local/go/src/net/http/server.go:1890\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nCreate pipeline version 
failed\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).CreatePipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:1155\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineUploadServer).UploadPipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_upload_server.go:185\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2007\\ngithub.com/gorilla/mux.(*Router).ServeHTTP\\n\\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\\nnet/http.serverHandler.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2802\\nnet/http.(*conn).serve\\n\\t/usr/local/go/src/net/http/server.go:1890\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nError creating pipeline version\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineUploadServer).UploadPipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_upload_server.go:199\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2007\\ngithub.com/gorilla/mux.(*Router).ServeHTTP\\n\\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\\nnet/http.serverHandler.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2802\\nnet/http.(*conn).serve\\n\\t/usr/local/go/src/net/http/server.go:1890\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\"\r\n}\r\n```", "hi ,is there any progress ?@ji-yaqi ", "Hi @yiyuanyu17, could you update your SDK version to 1.8.10 (latest)? We have fixed a bunch of bugs since 1.6.0. \r\nYour code can compile in 1.8.10 with no errors. 
", "> Hi @yiyuanyu17, could you update your SDK version to 1.8.10 (latest)? We have fixed a bunch of bugs since 1.6.0. Your code can compile in 1.8.10 with no errors.\r\n\r\nif we update sdk version to 1.8.10, the yaml will work with kubeflow 1.0?", "> \r\n\r\n@ji-yaqi we test sdk 1.8.10, and still have the error ", "@yiyuanyu17 Do you have compilation error or upload error? I started on a clean version of 1.8.10 and doesn't have the compilation error. Maybe try force-reinstall the package? ", "> @yiyuanyu17 Do you have compilation error or upload error? I started on a clean version of 1.8.10 and doesn't have the compilation error. Maybe try force-reinstall the package?\r\n\r\n@ji-yaqi \r\n![image](https://user-images.githubusercontent.com/15135974/145662849-cd8981a1-0ec4-4f02-b634-547682e9f051.png)\r\ni reinstall the kfp==1.8.10 , and upload to the kubeflow 1.6.0. and the error still is \r\n\r\n```\r\n{\r\n \"error_message\": \"Error creating pipeline version: Create pipeline version failed: Failed to get parameters from the workflow: templates.exit inputs.parameters.task_id was not supplied\",\r\n \"error_details\": \"templates.exit inputs.parameters.task_id was not 
supplied\\ngithub.com/argoproj/argo-workflows/v3/errors.New\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/errors/errors.go:49\\ngithub.com/argoproj/argo-workflows/v3/errors.Errorf\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/errors/errors.go:55\\ngithub.com/argoproj/argo-workflows/v3/workflow/validate.(*templateValidationCtx).validateTemplate\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/workflow/validate/validate.go:340\\ngithub.com/argoproj/argo-workflows/v3/workflow/validate.(*templateValidationCtx).validateTemplateHolder\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/workflow/validate/validate.go:454\\ngithub.com/argoproj/argo-workflows/v3/workflow/validate.ValidateWorkflow\\n\\t/go/pkg/mod/github.com/argoproj/argo-workflows/v3@v3.1.0/workflow/validate/validate.go:210\\ngithub.com/kubeflow/pipelines/backend/src/common/util.ValidateWorkflow\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/template_util.go:60\\ngithub.com/kubeflow/pipelines/backend/src/common/util.GetParameters\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/template_util.go:31\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).CreatePipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:1153\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineUploadServer).UploadPipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_upload_server.go:185\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2007\\ngithub.com/gorilla/mux.(*Router).ServeHTTP\\n\\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\\nnet/http.serverHandler.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2802\\nnet/http.(*conn).serve\\n\\t/usr/local/go/src/net/http/server.go:1890\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nFailed to get 
parameters from the workflow\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/common/util.GetParameters\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/template_util.go:33\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).CreatePipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:1153\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineUploadServer).UploadPipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_upload_server.go:185\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2007\\ngithub.com/gorilla/mux.(*Router).ServeHTTP\\n\\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\\nnet/http.serverHandler.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2802\\nnet/http.(*conn).serve\\n\\t/usr/local/go/src/net/http/server.go:1890\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nCreate pipeline version 
failed\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).CreatePipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:1155\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineUploadServer).UploadPipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_upload_server.go:185\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2007\\ngithub.com/gorilla/mux.(*Router).ServeHTTP\\n\\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\\nnet/http.serverHandler.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2802\\nnet/http.(*conn).serve\\n\\t/usr/local/go/src/net/http/server.go:1890\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nError creating pipeline version\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineUploadServer).UploadPipelineVersion\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_upload_server.go:199\\nnet/http.HandlerFunc.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2007\\ngithub.com/gorilla/mux.(*Router).ServeHTTP\\n\\t/go/pkg/mod/github.com/gorilla/mux@v1.8.0/mux.go:210\\nnet/http.serverHandler.ServeHTTP\\n\\t/usr/local/go/src/net/http/server.go:2802\\nnet/http.(*conn).serve\\n\\t/usr/local/go/src/net/http/server.go:1890\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\"\r\n}\r\n```", "I see. \r\n\r\nWe will need community help on this issue: add `task_id` to exit handler in KFP V1. \r\nWorkaround: manually add param in generated yaml file. 
\r\n\r\n", "> ssue: add\r\n\r\nit's ok if manually add param in generated yaml file, but it's our product environment, and our actual pipeline is more complex than the demo. so we need help to solve this bug by client. @ji-yaqi @james-jwu ", "> > ssue: add\r\n> \r\n> it's ok if manually add param in generated yaml file, but it's our product environment, and our actual pipeline is more complex than the demo. so we need help to solve this bug by client. @ji-yaqi @james-jwu\r\n\r\nwe want to upgrade our kubeflow to 1.6.0 to use the new functions of kf1.6.0, and we have many pipelines generate by python code, and we use exit hander to do some postprocess in any condition. If we manually add param in generated yaml file, it will be heavy work to modify each pipeline 。 @james-jwu ", "I investigate a bit, there's no behavior change from KFP SDK side.\r\nHere's a simplified yaml I tried, it works on kfp 1.4.1 but not on 1.7.0\r\n```yaml\r\napiVersion: argoproj.io/v1alpha1\r\nkind: Workflow\r\nmetadata:\r\n generateName: test-pipeline-\r\nspec:\r\n entrypoint: test-pipeline\r\n templates:\r\n - name: exit\r\n container:\r\n args: ['echo {{inputs.parameters.task_id}}']\r\n command: [/bin/sh, -c]\r\n image: busybox\r\n inputs:\r\n parameters:\r\n - {name: task_id}\r\n - name: exit-handler-1\r\n dag:\r\n tasks:\r\n - {name: start, template: start}\r\n - name: start\r\n container:\r\n args: [echo pipeline start]\r\n command: [/bin/sh, -c]\r\n image: busybox\r\n - name: test-pipeline\r\n dag:\r\n tasks:\r\n - {name: exit-handler-1, template: exit-handler-1}\r\n arguments:\r\n parameters:\r\n - name: task_id\r\n# if I add `value` for the parameter, then it can pass validation\r\n# value: something\r\n serviceAccountName: pipeline-runner\r\n onExit: exit\r\n```\r\nI believe this is a regression from Argo side, and opened https://github.com/argoproj/argo-workflows/issues/7424\r\n\r\nWhile waiting for Argo's fix, there's another workaround for you: give `task_id` a default value 
like such:\r\n```python\r\n@dsl.pipeline(\r\n name=\"[test]exit test pipeline\"\r\n)\r\ndef pipeline(\r\n task_id: int = 0 # just give it any dummy value, which will be override by the actual value you provide at submission time.\r\n):\r\n```\r\n\r\nThis should be much easier than modifying the generated yaml file.", "thanks, we will try to add the default value . @chensun ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I encountered a same issue. Setting default value to python function helps.\r\n\r\nDeployment:\r\n```\r\nkfp pipeline 1.8.1\r\ngcr.io/ml-pipeline/api-server:1.8.1\r\ngcr.io/ml-pipeline/argoexec:v3.2.3-license-compliance\r\n```\r\n\r\nClient:\r\n```\r\nkfp==1.8.12\r\nkfp-api-server==1.8.1\r\n```", "+1 here, do we know if there's any way to fix it from kfp side (by providing some sort of default when param is being used on ExitHandler?) Or by at least failing more gracefully on Compiler if ExitOp contains a param w no default?" ]
"2021-12-06T14:12:56"
"2022-10-07T00:54:16"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? kubeflow 1.6.0 ### Steps to reproduce I wrote a pipeline with an onExit component and can upload it to kubeflow 1.0, but the upload fails on kubeflow 1.6.0: ![image](https://user-images.githubusercontent.com/15135974/144860628-6fcc5c2a-8cee-4a42-a9c2-f71f4163b7cf.png) However, when the onExit handler is commented out, ![image](https://user-images.githubusercontent.com/15135974/144861131-e52a8abe-5eb0-4339-ba6a-9756d1c4088f.png) the pipeline can be uploaded to kubeflow 1.6.0, although the datasync component is then missing.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7009/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/kubeflow/pipelines/issues/7009/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7008
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7008/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7008/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7008/events
https://github.com/kubeflow/pipelines/issues/7008
1,072,096,659
I_kwDOB-71UM4_5uWT
7,008
[feature] Abstract global KFP compilation context/state
{ "login": "Udiknedormin", "id": 20307949, "node_id": "MDQ6VXNlcjIwMzA3OTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/20307949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Udiknedormin", "html_url": "https://github.com/Udiknedormin", "followers_url": "https://api.github.com/users/Udiknedormin/followers", "following_url": "https://api.github.com/users/Udiknedormin/following{/other_user}", "gists_url": "https://api.github.com/users/Udiknedormin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Udiknedormin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Udiknedormin/subscriptions", "organizations_url": "https://api.github.com/users/Udiknedormin/orgs", "repos_url": "https://api.github.com/users/Udiknedormin/repos", "events_url": "https://api.github.com/users/Udiknedormin/events{/privacy}", "received_events_url": "https://api.github.com/users/Udiknedormin/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "Tomcli", "id": 10889249, "node_id": "MDQ6VXNlcjEwODg5MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10889249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tomcli", "html_url": "https://github.com/Tomcli", "followers_url": "https://api.github.com/users/Tomcli/followers", "following_url": "https://api.github.com/users/Tomcli/following{/other_user}", "gists_url": "https://api.github.com/users/Tomcli/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tomcli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tomcli/subscriptions", "organizations_url": "https://api.github.com/users/Tomcli/orgs", "repos_url": "https://api.github.com/users/Tomcli/repos", "events_url": "https://api.github.com/users/Tomcli/events{/privacy}", "received_events_url": "https://api.github.com/users/Tomcli/received_events", "type": "User", "site_admin": false }
[ { "login": "Tomcli", "id": 10889249, "node_id": "MDQ6VXNlcjEwODg5MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10889249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tomcli", "html_url": "https://github.com/Tomcli", "followers_url": "https://api.github.com/users/Tomcli/followers", "following_url": "https://api.github.com/users/Tomcli/following{/other_user}", "gists_url": "https://api.github.com/users/Tomcli/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tomcli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tomcli/subscriptions", "organizations_url": "https://api.github.com/users/Tomcli/orgs", "repos_url": "https://api.github.com/users/Tomcli/repos", "events_url": "https://api.github.com/users/Tomcli/events{/privacy}", "received_events_url": "https://api.github.com/users/Tomcli/received_events", "type": "User", "site_admin": false } ]
null
[ "Maybe the following methods could be useful too, especially for testing and debugging purposes:\r\n\r\n```python\r\n@abstractmethod\r\ndef get_all_globals(self) -> Mapping[str, Any]: ...\r\n\r\n@abstractmethod\r\ndef reset_globals(self) -> None: ...\r\n\r\n@abstractmethod\r\ndef set_globals_map(self, values: Mapping[str, Any]) -> None: ...\r\n```\r\nand therefore `set_kfp_ctx` would be:\r\n```python\r\ndef set_kfp_ctx(ctx: KfpCompilationCtx):\r\n    global _kfp_ctx\r\n    current_globals = _kfp_ctx.get_all_globals()\r\n    ctx.reset_globals()\r\n    ctx.set_globals_map(current_globals)\r\n    _kfp_ctx = ctx\r\n```\r\nThanks to that, any globals that are set by default in KFP will be propagated to the new context when switching to it.", "cc @chensun ", "We are running a python server that generates the python DSL code and compiles them, so it would be great if we can run multi-thread with this approach.", "We have to change the `class Pipeline():` to set the global variables for different threads. If this approach is okay we can open a PR with the implementation.", "> This is very ineffective in case of using KFP DSL compilation in concurrent environments, which could be trivially avoided by moving the global state to some proxy.\r\n\r\n> We are running a python server that generates the python DSL code and compiles them, so it would be great if we can run multi-thread with this approach.\r\n\r\nPipeline compilation are pretty fast, usually instantaneous, so we don't generally think of the need for concurrent compilation. How many pipelines are you compiling? Is sequential compilation posing some performance bottleneck in your flow?", "@chensun\r\n\r\n> Pipeline compilation are pretty fast, usually instantaneous\r\n\r\nAn average compilation takes 0.1-0.2 for small inputs, so it can only really be considered \"instantaneous\" for a single user scenario --- as few as ten pipelines compiling sequentially generate 1s of delay, which is not that small. And that's for minimal cases, not very large pipelines, which may take longer to compile.\r\n\r\n> Is sequential compilation posing some performance bottleneck in your flow?\r\n\r\nTen small compilations per second or even fewer larger ones don't sound great. If one can afford multiple python processes to handle it, it could probably passable, but that's not always a viable solution.\r\n\r\nIt seems to me that the changes I proposed are quite simple for something that could give 3-4 times of a boost (which I think is reasonable to expect of it) in case of concurrent compilation.", "@chensun please let us know if there are any valid arguments why not to do this - else let's move this forward", "> @chensun\r\n> \r\n> > Pipeline compilation are pretty fast, usually instantaneous\r\n> \r\n> An average compilation takes 0.1-0.2 for small inputs, so it can only really be considered \"instantaneous\" for a single user scenario --- as few as ten pipelines compiling sequentially generate 1s of delay, which is not that small. And that's for minimal cases, not very large pipelines, which may take longer to compile.\r\n> \r\n> > Is sequential compilation posing some performance bottleneck in your flow?\r\n> \r\n> Ten small compilations per second or even fewer larger ones don't sound great. If one can afford multiple python processes to handle it, it could probably passable, but that's not always a viable solution.\r\n> \r\n> It seems to me that the changes I proposed are quite simple for something that could give 3-4 times of a boost (which I think is reasonable to expect of it) in case of concurrent compilation.\r\n\r\nYes, that sounds reasonable to me. \r\n@ji-yaqi is also looking into changing the compilation context for the graph component support. So I'll let @ji-yaqi comment on the implementation details.", "Hi @chensun @ji-yaqi, we just want to confirm that you are still okay with these changes? I know there's some work going for the V2 component, should we PR this change to the 1.8 sdk branch or should we PR to the master branch and cherry pick?", "/assign" ]
"2021-12-06T12:29:31"
"2022-04-26T23:21:53"
null
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Gathering all the mutable globals used during KFP compilation into one singleton. That way, the users could implement their own implementation to support any compilation state model they need, instead of a synchronous global-variables-based approach that is currently used, and trivially set it in a single one, globally replacing one singleton with another. Thanks to that, third-party plugins could provide KFP SDK support for frameworks such as Flask, Django or others, independently from the KFP itself. At the same time, the singleton doesn't need to be aware of all the state needed by KFP, but rather provide a generic mechanism of setting and retrieving the values for globals via unique keys (names?). ### What is the use case or pain point? Currently, KFP compilation requires using numerous global variables such as `Pipeline._default_pipeline`, `_components._container_task_constructor`, `_container_op._register_op_handler` --- which means that at least the [`compile_pipeline` function must be @synchronized (locked) between calls](https://github.com/kubeflow/kfp-tekton/blob/master/sdk/python/tests/performance_tests.py#L108-L134). This is very ineffective in case of using KFP DSL compilation in concurrent environments, which could be trivially avoided by moving the global state to some proxy. ### Is there a workaround currently? One could try to manually overwrite each individual global private variable mutated in KFP with proxies, but it has many disadvantages to it: it's not only very hacky, but because they're private, they can be removed or added at any time in subsequent versions of KFP. ### Implementation details I imagine that the base class of such global context may look as follows: ```python class KfpCompilationCtx: @abstractmethod def set_global(self, name: str, value: Any) -> None: ... @abstractmethod def get_global(self, name: str) -> Any: ... def get_set_global(self, name: str, value: Any) -> Any: try: old_value = self.get_global(name) except: old_value = None self.set_global(name, value) return old_value # could also be a static inside of the class, but that would enable other classes to overwrite it _kfp_ctx: 'KfpCompilationCtx' = None def set_kfp_ctx(ctx: KfpCompilationCtx): global _kfp_ctx _kfp_ctx = ctx def get_kfp_ctx() -> KfpCompilationCtx: return _kfp_ctx class KfpGlobalCtx(KfpCompilationCtx): def __init__(self): super().__init__(self) self._globals_dict: Dict[str, Any] = {} def set_global(self, name: str, value: Any) -> None: self._globals_dict[name] = value def get_global(self, name: str) -> Any: return self._globals_dict[name] # by default, the context is global-based set_kfp_ctx(KfpGlobalCtx()) ``` but it would also trivially enable the following implementation for Flask: ```python from flask import g class KfpFlaskCtx(KfpCompilationCtx): def set_global(self, name: str, value: Any) -> None: setattr(g, name, value) def get_global(self, name: str) -> Any: return getattr(g, name) ``` Now, the current implementation of `Pipeline` class would change to: ```python class Pipeline(): def __init_(self, name: str): self.name = name self.ops = {} # Add the root group. self.groups = [_ops_group.OpsGroup('pipeline', name=name)] self.group_id = 0 self.conf = PipelineConf() self._metadata = None @staticmethod def get_default_pipeline() -> Optional['Pipeline']: return get_kfp_ctx().get_global('default_pipeline') @staticmethod def _set_default_pipeline(value: Optional['Pipeline']) -> None: return get_kfp_ctx().set_global('default_pipeline', value) def __enter__(self) -> 'Pipeline': if self.get_default_pipeline(): raise Exception('Nested pipelines are not allowed.') self._set_default_pipeline(self) ctx = get_kfp_ctx() self._old_container_task_constructor = ctx.get_set_global( 'container_task_constructor', _component_bridge._create_container_op_from_component_and_arguments ) def register_op_and_generate_id(op): return self.add_op(op, op.is_exit_handler) self._old__register_op_handler = ctx.get_set_global( 'register_op_handler', register_op_and_generate_id ) return self def __exit__(self, *args): self._set_default_pipeline(None) ctx = get_kfp_ctx() ctx.set_global(self._old__register_op_handler) ctx.set_global(self._old_container_task_constructor) ... ``` ...which is trivially compatible with both implementations. Considering how in KFP it's usually only required to set and unset some global, maybe `set_global` and `unset_global` would be better instead --- certainly would be easier to implement global locking mechanisms. Maybe a `setdefault` could be useful etc. But the general idea is that of the above. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7008/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7008/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7003
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7003/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7003/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7003/events
https://github.com/kubeflow/pipelines/issues/7003
1,070,469,896
I_kwDOB-71UM4_zhMI
7,003
[feature] Add GetPipelineByName endpoint
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @difince ", "/assign @capri-xiyue ", "pipeline name is not unique in KFP. I think we can make the combination of name and namespace unique in KFP. FYI: https://github.com/kubeflow/pipelines/issues/3360#issuecomment-834450654", "Thank you for clarifying this @capri-xiyue. \r\nThis issue is actually inspired by #3360 and I see it as the first issue from a list to cover entirely #3360. \r\nTo your point of uniqueness - in the issue, I have written\r\n\r\n> Because the pipeline name is unique within a namespace - the namespace should be taken into consideration\r\n\r\nSo, now in the [PR,](https://github.com/kubeflow/pipelines/pull/7004#pullrequestreview-827574954) there is a discussion on what is the best way to pass the namespace - using the `resource_reference` to filter on it or to make the namespace part of the URL itself. ", "Note: After the backend change is merged, let's also follow up on the SDK side by using this new endpoint getPipelineByName.", "> Note: After the backend change is merged, let's also follow up on the SDK side by using this new endpoint getPipelineByName.\r\n\r\nWe might want to delay the change, because we know a lot people upgrading KFP SDK to latest all the time, if we switch the SDK method to rely on new API, their usage will break, because their deployment is not upgraded yet. Ideally, when we change existing SDK client implementation, we need to wait until the backend API is released for a while.", "@zijianjoy \r\n> let's also follow up on the SDK side by using this new endpoint getPipelineByName\r\n\r\nCould you clarify what exactly do you mean? \r\nI see that this new endpoint could be incorporated within the CLI. The command could look like this: \r\n`kfp pipeline get --name NAME` \r\nWDYT? \r\n\r\nIs there something else I should consider? \r\n\r\nOnce I know what is supposed to be implemented I will open a dedicated issue about it. \r\n> we need to wait until the backend API is released for a while\r\n\r\nIs this time gone or not yet? \r\n\r\n", "@difince \r\n\r\n> I see that this new endpoint could be incorporated within the CLI. The command could look like this:\r\n> kfp pipeline get --name NAME\r\n\r\nThis makes sense to me, but a bit correction about the order of subcommand: `kfp pipeline get --name NAME`.\r\n\r\n>> we need to wait until the backend API is released for a while\r\n> Is this time gone or not yet?\r\n\r\nI tend to think it is about time SDK can start adding this API with SDK version 2.0.0+. cc @connor-mccarthy who is recently working on KFP SDK CLI." ]
"2021-12-03T11:00:30"
"2022-05-19T07:29:53"
"2022-02-15T17:38:41"
MEMBER
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Add new endpoint - getPipelineByName Because the pipeline name is unique within a namespace - the namespace should be taken into consideration ### What is the use case or pain point? Simplify user experience and keep consistent with other tools like Kubernetes, DAGs in Argo, Airflow that use names as a reference instead of UUID This issue is inspired from #https://github.com/kubeflow/pipelines/issues/3360. ### Is there a workaround currently? Yes, use UUIDs instead --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7003/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6987
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6987/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6987/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6987/events
https://github.com/kubeflow/pipelines/issues/6987
1,068,023,708
I_kwDOB-71UM4_qL-c
6,987
[bug] upgrade from MLMD 1.0.0 to 1.4.0 stuck in crashloopbackoff
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "I tried reproducing this issue with a brand new MLMD grpc server, but I wasn't able to reproduce.", "I'm going to delete the MLMD db manually to unblock the upgrade for development in https://github.com/kubeflow/pipelines/pull/6983", "I connected to in-cluster mysql and deleted MLMD db in kfp-ci cluster.\r\n```\r\nkubectl run -it -n kubeflow --rm --image=mysql:8.0.12 --restart=Never mysql-client -- mysql -h mysql\r\n$ show databases;\r\n$ drop database metadb;\r\n```", "Thank you Yuan for finding this issue! Does it mean KFP 1.8 will have breaking change for KFP upgrade scenario?", "@zijianjoy we need to verify, if we cannot reproduce again, it might not be a breaking change.", "I have deployed a KFP 1.7.1, then upgrade to KFP 1.8.0 alpha 0. I am able to upgrade with no breaking change. \r\n\r\n### KFP 1.7.1 Artifacts (Look at version at bottom left)\r\n![mlmd171](https://user-images.githubusercontent.com/37026441/144732092-88c9bef0-710a-4b5f-a6a2-3e0b50e935c8.png)\r\n\r\n\r\n### KFP 1.8.0-alpha.0 Artifacts (Look at version at bottom left)\r\n![mlmd180](https://user-images.githubusercontent.com/37026441/144732094-07af10cf-5dab-42f3-b910-319e810e3935.png)\r\n\r\n\r\n### MLMD image version \r\n![mlmdimage140](https://user-images.githubusercontent.com/37026441/144732095-83c63d0a-b9ab-400d-b924-8e2d5c55de75.png)\r\n\r\n\r\nSeems that the issue you encountered is not common case? Note that the way I upgraded is to follow instruction: https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/#upgrading-kubeflow-pipelines. Then I chose emissary executor again https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-01T06:43:11"
"2023-03-21T17:52:11"
"2023-03-21T17:52:11"
CONTRIBUTOR
null
Upstream issue: https://github.com/google/ml-metadata/issues/135
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6987/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6986
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6986/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6986/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6986/events
https://github.com/kubeflow/pipelines/issues/6986
1,067,607,765
I_kwDOB-71UM4_ombV
6,986
[sdk] v2 Components not supported by LocalClient
{ "login": "gpoulin-hopper", "id": 86018365, "node_id": "MDQ6VXNlcjg2MDE4MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/86018365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpoulin-hopper", "html_url": "https://github.com/gpoulin-hopper", "followers_url": "https://api.github.com/users/gpoulin-hopper/followers", "following_url": "https://api.github.com/users/gpoulin-hopper/following{/other_user}", "gists_url": "https://api.github.com/users/gpoulin-hopper/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpoulin-hopper/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpoulin-hopper/subscriptions", "organizations_url": "https://api.github.com/users/gpoulin-hopper/orgs", "repos_url": "https://api.github.com/users/gpoulin-hopper/repos", "events_url": "https://api.github.com/users/gpoulin-hopper/events{/privacy}", "received_events_url": "https://api.github.com/users/gpoulin-hopper/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "This is known and expected. `run_pipeline_func_locally` was added as an alpha feature by a community contributor. It targets v1 only and has many limitations. \r\nThere's currently no plan to make it work for v2 components and pipelines.", "@chensun @gpoulin-hopper Actually I have implemented the local client for v2 which is only used internally for now. If needed I will be happy to pull a PR.", "@lynnmatrix yes please!", "@frederikfab I created a new project [kfp-local](https://github.com/lynnmatrix/kfp-local) for the local client v2.\r\n@chensun Is it necessary to merge it into kfp sdk?", "@gpoulin-hopper @frederikfab @lynnmatrix \r\nThis topic has been brought up from time to time, so I'd like to add some more contexts and thoughts here:\r\n- We view ML pipelines more than just executing code to completion. One critical area we build around is the ML Metadata. This is even more emphasized in the Kubeflow Pipelines v2 design [[doc](https://docs.google.com/document/d/1fHU29oScMEKPttDA1Th1ibImAKsFVVt2Ynr4ZME05i0/)]. \r\n- Kubeflow Pipelines supports [local deployment](https://www.kubeflow.org/docs/components/pipelines/installation/localcluster-deployment/) which would give you the full set of features. That would probably be a better choice if the goal is to test the end to end experience of running a pipeline in a local environment. \r\n- For quick \"unit tests\"-like testing/debugging, it is on our roadmap to support local execution of a component. But we currently don't plan the same for pipeline due to the aforementioned reasons and the complexity possibly involved in a pipeline: loops, conditions, exit handlers, importer, etc.\r\n\r\n@lynnmatrix Appreciate your effort to make local client work for v2. Going forward, we're proposing to require design docs for changes that affect UX in a nontrivial way. This would give us a more formal process of evaluating and discussing design choices. So if you'd like to merge [kfp-local](https://github.com/lynnmatrix/kfp-local) into KFP repo, could you please start with a design doc? (Consider what I mentioned above would probably be the comments you would receive in your design doc, you may want to address them in your design doc to justify adding the feature). Thanks!", "@chensun LocalClient is mostly for testing/debugging. I'll keep LocalClient in [kfp-local ](https://github.com/lynnmatrix/kfp-local) before figuring out how to write the design doc." ]
"2021-11-30T19:41:45"
"2022-04-24T10:32:45"
null
NONE
null
### Environment * KFP version: N/A * KFP SDK version: 1.8.9 * All dependencies version: ``` kfp 1.8.9 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ``` ### Steps to reproduce Running this python script ```python3 import kfp import kfp.v2.dsl as dsl @dsl.component() def simple(): print("allo") @dsl.pipeline(name="test", description="test") def pipeline(): simple() kfp.run_pipeline_func_locally(pipeline, {}) ``` produce: ``` ERROR:root:WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv Traceback (most recent call last): File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py", line 104, in <module> executor_main() File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/executor_main.py", line 94, in executor_main executor_input = json.loads(args.executor_input) File "/usr/local/lib/python3.7/json/__init__.py", line 348, in loads return _default_decoder.decode(s) File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/local/lib/python3.7/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1) ERROR:root:['docker', 'run', '-v', '/var/folders/99/4l_08z0d4yvfc3ywqvh42pr00000gq/T:/var/folders/99/4l_08z0d4yvfc3ywqvh42pr00000gq/T', 'python:3.7', 'sh', '-c', '\n\nif ! [ -x "$(command -v pip)" ]; then\n python3 -m ensurepip || python3 -m ensurepip --user || apt-get install python3-pip\nfi\n\nPIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install --quiet --no-warn-script-location \'kfp==1.8.9\' && "$0" "$@"\n', 'sh', '-ec', 'program_path=$(mktemp -d)\nprintf "%s" "$0" > "$program_path/ephemeral_component.py"\npython3 -m kfp.v2.components.executor_main --component_module_path "$program_path/ephemeral_component.py" "$@"\n', '\nimport kfp\nfrom kfp.v2 import dsl\nfrom kfp.v2.dsl import *\nfrom typing import *\n\ndef simple():\n print("allo")\n\n', '--executor_input', '{{$}}', '--function_to_execute', 'simple'] ``` ### Expected result To have the same result as if `simple` was a v1 component ```python3 import kfp from kfp.components import create_component_from_func import kfp.dsl as dsl @create_component_from_func def simple(): print("allo") @dsl.pipeline(name="test", description="test") def pipeline(): simple() ``` --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6986/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6986/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6981
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6981/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6981/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6981/events
https://github.com/kubeflow/pipelines/issues/6981
1,066,746,703
I_kwDOB-71UM4_lUNP
6,981
Argument type "google.VertexDataset" is incompatible with the input type "Dataset"
{ "login": "v-loves-avocados", "id": 52256615, "node_id": "MDQ6VXNlcjUyMjU2NjE1", "avatar_url": "https://avatars.githubusercontent.com/u/52256615?v=4", "gravatar_id": "", "url": "https://api.github.com/users/v-loves-avocados", "html_url": "https://github.com/v-loves-avocados", "followers_url": "https://api.github.com/users/v-loves-avocados/followers", "following_url": "https://api.github.com/users/v-loves-avocados/following{/other_user}", "gists_url": "https://api.github.com/users/v-loves-avocados/gists{/gist_id}", "starred_url": "https://api.github.com/users/v-loves-avocados/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-loves-avocados/subscriptions", "organizations_url": "https://api.github.com/users/v-loves-avocados/orgs", "repos_url": "https://api.github.com/users/v-loves-avocados/repos", "events_url": "https://api.github.com/users/v-loves-avocados/events{/privacy}", "received_events_url": "https://api.github.com/users/v-loves-avocados/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[ "any update on this? I am encountering something similar with using the output of Vertex AI model upload, so I have a `google.VertexModel` that does not match `Input[Model]` in a python component. ", "same problem with the VertexModel\r\nhave a customop defined:\r\n\r\nfrom kfp.v2.dsl import (\r\n component,\r\n Input,\r\n Model,\r\n)\r\ndef custom_deploy_to_endpoint(\r\n model: Input[Model],\r\n endpoint_display_name: str,\r\n project: str,\r\n):\r\n...\r\n\r\ncreating the model using \r\nmodel_upload_op = google_cloud_pipeline_components.aiplatform.ModelUploadOp(...)\r\nand passing to the custom op\r\n\r\ncustom_deploy_to_endpoint(\r\n model=model_upload_op.outputs['model'],\r\n endpoint_display_name=ENDPOINT_NAME,\r\n project=PROJECT_ID\r\n)\r\n\r\n' InconsistentTypeException: Incompatible argument passed to the input \"model\" of component \"Custom deploy to endpoint\": Argument type \"google.VertexModel\" is incompatible with the input type \"Model\" '\r\n\r\n\r\n\r\n\r\n\r\n", "try this:\r\n```\r\nhypertune_job_run_task = run_hptune(\r\n dataset=dataset_create_task.outputs[\"dataset\"].ignore_type(),\r\n hypertune_display_name=hypertune_display_name, \r\n image_uri=container_uri, \r\n staging_bucket=staging_bucket)`\r\n```" ]
"2021-11-30T03:29:19"
"2022-03-01T13:11:44"
null
NONE
null
### What steps did you take I created a KF pipeline in which: 1. First created a Vertex AI dataset 2. Defined a custom component, and pass the dataset as an input. I was expecting that the dataset_create_task.outputs["dataset"] would pass a dataset to run_hptune as input. However, I got this error: 'Argument type "google.VertexDataset" is incompatible with the input type "Dataset"' `def run_hptune( dataset: Input[Dataset], hypertune_display_name: str, image_uri: str, staging_bucket: str)->int:` `@dsl.pipeline(name=pipeline_name, pipeline_root='gs://xxx') def pipeline(): dataset_create_task = gcc_aip.TabularDatasetCreateOp( project=project_id, display_name=model_display_name, gcs_source=gcs_resource_uri ) hypertune_job_run_task = run_hptune( dataset=dataset_create_task.outputs["dataset"], hypertune_display_name=hypertune_display_name, image_uri=container_uri, staging_bucket=staging_bucket)`
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6981/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6981/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6977
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6977/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6977/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6977/events
https://github.com/kubeflow/pipelines/issues/6977
1,065,950,466
I_kwDOB-71UM4_iR0C
6,977
[feature] Provide a way to define a set of default annotations for tasks
{ "login": "johnbuluba", "id": 3648774, "node_id": "MDQ6VXNlcjM2NDg3NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3648774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnbuluba", "html_url": "https://github.com/johnbuluba", "followers_url": "https://api.github.com/users/johnbuluba/followers", "following_url": "https://api.github.com/users/johnbuluba/following{/other_user}", "gists_url": "https://api.github.com/users/johnbuluba/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnbuluba/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnbuluba/subscriptions", "organizations_url": "https://api.github.com/users/johnbuluba/orgs", "repos_url": "https://api.github.com/users/johnbuluba/repos", "events_url": "https://api.github.com/users/johnbuluba/events{/privacy}", "received_events_url": "https://api.github.com/users/johnbuluba/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Do you need to customize the annotation for pipeline tasks? For example, KFP provides a interface for user to define the annotations for specific pipeline tasks?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-29T11:57:39"
"2022-03-03T02:05:10"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? We would like to be able to define a set of annotations on the server-side, that will be automatically added to all pipeline tasks. ### What is the use case or pain point? We need this feature to enable Istio sidecar injection by default to all the submitted pipelines. We want by default to annotate all tasks with: ```yaml sidecar.istio.io/inject: true proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }' ``` Currently, all tasks are automatically annotated with `sidecar.istio.io/inject: false` and we need a way to change this default behavior. ### Is there a workaround currently? The current workaround is to explicitly set the annotations in each pipeline task. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
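A possible shape for the requested server-side default is a merge step applied to each task's pod metadata before submission. The sketch below is purely illustrative — `DEFAULT_ANNOTATIONS` and `apply_default_annotations` are invented names, not existing KFP backend code — and shows one reasonable merge semantic: server-side defaults fill in only where a task has not set its own annotation, so per-task overrides still win.

```python
# Hypothetical sketch of the requested behavior: a fixed set of default
# annotations merged into every task's pod metadata, with any task-level
# annotation taking precedence over the server-side default.
DEFAULT_ANNOTATIONS = {
    "sidecar.istio.io/inject": "true",
    "proxy.istio.io/config": '{ "holdApplicationUntilProxyStarts": true }',
}

def apply_default_annotations(pod_metadata: dict) -> dict:
    """Return pod metadata with server-side defaults filled in.

    Task-level annotations (if any) override the defaults.
    """
    merged = dict(DEFAULT_ANNOTATIONS)
    merged.update(pod_metadata.get("annotations") or {})
    return {**pod_metadata, "annotations": merged}
```

As a per-pipeline workaround today, the KFP v1 SDK's `dsl.get_pipeline_conf().add_op_transformer(...)` can call `op.add_pod_annotation(name, value)` on every task at compile time, which at least avoids annotating each task by hand.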
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6977/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6974
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6974/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6974/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6974/events
https://github.com/kubeflow/pipelines/issues/6974
1,065,592,462
I_kwDOB-71UM4_g6aO
6,974
[bug] TFX sample fails after upgrading to 1.4
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1260031624, "node_id": "MDU6TGFiZWwxMjYwMDMxNjI0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/samples", "name": "area/samples", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @zijianjoy @jiyongjung0 ", "@Bobgy: GitHub didn't allow me to assign the following users: jiyongjung0.\n\nNote that only [kubeflow members](https://github.com/orgs/kubeflow/people), repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.\nFor more information please see [the contributor guide](https://git.k8s.io/community/contributors/guide/first-contribution.md#issue-assignment-in-github)\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/6974#issuecomment-981279275):\n\n>/assign @zijianjoy @jiyongjung0 \n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>", "Failure error message coming from upgrading TFX from 1.2 to 1.4:\r\n\r\n```\r\nsample-test-7c4kw-3475281298: =========Argo Workflow Log=========\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: time=\"2021-11-26T14:58:03.147Z\" level=info msg=\"capturing logs\" argo=true\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: 2021-11-26 14:58:05.872653: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: usage: container_entrypoint.py [-h] --pipeline_root PIPELINE_ROOT\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: --kubeflow_metadata_config\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: KUBEFLOW_METADATA_CONFIG --serialized_component\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: SERIALIZED_COMPONENT --tfx_ir TFX_IR --node_id\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: NODE_ID [--runtime_parameter RUNTIME_PARAMETER]\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: container_entrypoint.py: error: the following arguments are required: --serialized_component\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: time=\"2021-11-26T14:58:15.441Z\" level=error msg=\"cannot save artifact /mlpipeline-ui-metadata.json\" argo=true error=\"stat /mlpipeline-ui-metadata.json: no such file or directory\"\r\nsample-test-7c4kw-3475281298: parameterized-tfx-oss-c6n2c-1861378462: Error: exit status 2\r\n```", "Thank you Yuan for finding this issue!\r\n\r\nWill it be related to the tfx version in the https://github.com/kubeflow/pipelines/blob/master/samples/core/parameterized_tfx_oss/parameterized_tfx_oss.py#L167?\r\n\r\nMaybe at least we should upgrade tfx version to 1.4.0 in the parameterized_tfx_oss sample first, then investigate further for the issue?\r\n\r\nTFX release changelog: https://github.com/tensorflow/tfx/blob/master/RELEASE.md", "Thanks! I agree that we need to upgrade the code @zijianjoy pointed.\r\nBy the way, how about using the [default value](https://github.com/tensorflow/tfx/blob/d00b80c92792ec202564e2e90d081b361835e314/tfx/orchestration/kubeflow/kubeflow_dag_runner.py#L54) to avoid this kind of failure? It uses the image in the docker hub, then we don't need to update the version every time. (Or we can use tfx.version.__version__ in the parameterized_tfx_oss.py.)", "> Thanks! I agree that we need to upgrade the code @zijianjoy pointed.\n> By the way, how about using the [default value](https://github.com/tensorflow/tfx/blob/d00b80c92792ec202564e2e90d081b361835e314/tfx/orchestration/kubeflow/kubeflow_dag_runner.py#L54) to avoid this kind of failure? It uses the image in the docker hub, then we don't need to update the version every time. (Or we can use tfx.version.__version__ in the parameterized_tfx_oss.py.)\n\nUsing version.__version__ sounds like a good idea, because we don't want new image releases to break the sample.", "Have come across the same problem with upgrading from 1.2 to 1.4. My kubeflow run fails with a final log of:\r\nusage: container_entrypoint.py [-h] --pipeline_root PIPELINE_ROOT\r\n --kubeflow_metadata_config\r\n KUBEFLOW_METADATA_CONFIG --serialized_component\r\n SERIALIZED_COMPONENT --tfx_ir TFX_IR --node_id\r\n NODE_ID [--runtime_parameter RUNTIME_PARAMETER]\r\ncontainer_entrypoint.py: error: the following arguments are required: --serialized_component\r\n\r\n@Bodgy Am I missing somthing but in the PR I'm not seeing how this is being fixed and the issue is getting closed? Is there a work around or will the PR above simply mean 1.5 shouldn't have the same issue?", "Hi, @nroberts1, I think that you have to upgrade the TFX version installed in the container image, too. [The fix](https://github.com/kubeflow/pipelines/commit/3cfff3db1da0906a46980ce4f2b7ceda7458272a) updated the image to the TFX version of the client environment.", "Yes that works I understand the fix now, thanks @jiyongjung0 " ]
"2021-11-29T04:18:29"
"2021-12-23T07:59:05"
"2021-12-02T10:00:06"
CONTRIBUTOR
null
### What steps did you take <!-- A clear and concise description of what the bug is.--> * MLMD & TFX upgrade: https://github.com/kubeflow/pipelines/pull/6910 ### What happened: * Check test: https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-integration-test/1464225919869652992 ### What did you expect to happen: We need to upgrade the TFX sample to 1.4.0 ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> /area samples <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6974/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6972
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6972/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6972/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6972/events
https://github.com/kubeflow/pipelines/issues/6972
1,065,193,798
I_kwDOB-71UM4_fZFG
6,972
[backend] Create Run in KFPv2 failing because of invalid name in argo workflow
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "I did some investigation, the root cause is because when creating pipeline, the pipeline api will override the name to pipeline/xxx format, FYI \r\n\r\nhttps://github.com/kubeflow/pipelines/blob/cfdfe71a7e4944a3d79f0731ea8d7170738d40e5/backend/src/apiserver/resource/resource_manager.go#L263https://github.com/kubeflow/pipelines/blob/cfdfe71a7e4944a3d79f0731ea8d7170738d40e5/backend/src/apiserver/resource/resource_manager.go#L263\r\nand https://github.com/kubeflow/pipelines/blob/cfdfe71a7e4944a3d79f0731ea8d7170738d40e5/backend/src/apiserver/template/v2_template.go#L105\r\n\r\nAnd the backend compiler generated name based on the pipeline name https://github.com/kubeflow/pipelines/blob/cfdfe71a7e4944a3d79f0731ea8d7170738d40e5/v2/compiler/argo.go#L61\r\n\r\nI found the backend db still store the pipeline name like `v2upload` instead of `namespace/xxx` or `pipeline/xxx`\r\nThe backend compiler currently does not work with such override logic https://github.com/kubeflow/pipelines/blob/cfdfe71a7e4944a3d79f0731ea8d7170738d40e5/backend/src/apiserver/template/v2_template.go#L105\r\n\r\nI think we need to revisit the backend compiler logic to make sure it works with the pipeline v2 api.\r\n\r\ncc @Bobgy \r\n\r\n\r\n" ]
"2021-11-28T05:36:41"
"2021-12-08T05:48:59"
"2021-12-08T05:48:59"
COLLABORATOR
null
During KFPv2, I tried to create KFPv2 run using existing pipeline template. The following is the request payload for create run: ``` { "description":"", "name":"Run of v2upload (6792a)", "pipeline_spec":{ "parameters":[ ] }, "resource_references":[ { "key":{ "id":"ba85c817-a301-4f9c-920d-b654c4a03c81", "type":"EXPERIMENT" }, "relationship":"OWNER" }, { "key":{ "id":"3943f4b9-2d33-487c-8a6e-0a17fbeff62c", "type":"PIPELINE_VERSION" }, "relationship":"CREATOR" } ], "service_account":"" } ``` The error is: ![runcreationfailed](https://user-images.githubusercontent.com/37026441/143730996-93da1a44-5948-4685-bb46-3470c6228556.png) Looks like the run name pattern `pipeline/v2upload-8d5ks` didn't pass the argo workflow validation because it shouldn't contain `/` symbol. cc @Bobgy @capri-xiyue
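Kubernetes object names (which Argo reuses for workflow names) must be valid DNS-1123 subdomains — lowercase alphanumerics plus `-` and `.`, at most 253 characters — which is why the `/` introduced by the `pipeline/v2upload-8d5ks` prefix fails validation. A hedged sketch of the kind of sanitization the backend would need before handing the name to Argo (the helper name is invented for illustration, not actual KFP backend code):

```python
import re

# Kubernetes resource names must be valid DNS-1123 subdomains, and Argo
# applies the same rule to workflow names. This hypothetical helper maps
# an arbitrary pipeline name (e.g. "pipeline/v2upload-8d5ks") onto a
# conforming one by replacing invalid character runs with "-".
DNS1123_INVALID = re.compile(r"[^a-z0-9.-]+")

def sanitize_workflow_name(name: str) -> str:
    cleaned = DNS1123_INVALID.sub("-", name.lower()).strip("-.")
    return cleaned[:253] or "workflow"  # fall back if nothing survives
```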
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6972/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6970
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6970/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6970/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6970/events
https://github.com/kubeflow/pipelines/issues/6970
1,064,850,622
I_kwDOB-71UM4_eFS-
6,970
[bug] Link on KubeFlow website is 404
{ "login": "haifeng-jin", "id": 5476582, "node_id": "MDQ6VXNlcjU0NzY1ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/5476582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haifeng-jin", "html_url": "https://github.com/haifeng-jin", "followers_url": "https://api.github.com/users/haifeng-jin/followers", "following_url": "https://api.github.com/users/haifeng-jin/following{/other_user}", "gists_url": "https://api.github.com/users/haifeng-jin/gists{/gist_id}", "starred_url": "https://api.github.com/users/haifeng-jin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haifeng-jin/subscriptions", "organizations_url": "https://api.github.com/users/haifeng-jin/orgs", "repos_url": "https://api.github.com/users/haifeng-jin/repos", "events_url": "https://api.github.com/users/haifeng-jin/events{/privacy}", "received_events_url": "https://api.github.com/users/haifeng-jin/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thank you for the report! I created the fix https://github.com/kubeflow/website/pull/3081" ]
"2021-11-26T23:16:56"
"2021-11-28T12:58:04"
"2021-11-28T12:58:04"
NONE
null
### What steps did you take <!-- A clear and concise description of what the bug is.--> Visit: https://www.kubeflow.org/ Click on the KubeFlow Pipelines link, which direct me to https://www.kubeflow.org/docs/components/pipelines/overview/pipelines-overview/ ### What happened: I see a 404 page. ### What did you expect to happen: display a page with contents. ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6970/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6966
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6966/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6966/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6966/events
https://github.com/kubeflow/pipelines/issues/6966
1,064,130,065
I_kwDOB-71UM4_bVYR
6,966
[sdk] component with optional input causes compile error in v2 compiler
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-26T06:03:28"
"2022-03-02T10:06:22"
null
CONTRIBUTOR
null
### Environment * KFP SDK version: master <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> * Migrate if_placeholder sample pipeline to v2: https://github.com/kubeflow/pipelines/blob/8aee62142aa13ae42b2dd18257d7e034861b7e5e/samples/test/placeholder_if.py by changing imports to `from kfp.v2 import components, dsl` * Compile * Got this error ``` Traceback (most recent call last): File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/components/structures.py", line 579, in load_from_component_yaml return ComponentSpec.parse_obj(json_component) File "pydantic/main.py", line 578, in pydantic.main.BaseModel.parse_obj File "pydantic/main.py", line 406, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for ComponentSpec inputs value is not a valid dict (type=type_error.dict) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/gongyuan_kubeflow_org/miniconda3/envs/v2/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/gongyuan_kubeflow_org/miniconda3/envs/v2/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/gongyuan_kubeflow_org/github/kf/pipelines/samples/test/placeholder_if_test.py", line 17, in <module> from .placeholder_if_v2 import pipeline_both as pipeline_both_v2, pipeline_none as pipeline_none_v2 File "/home/gongyuan_kubeflow_org/github/kf/pipelines/samples/test/placeholder_if_v2.py", line 46, in <module> ''') File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/components/yaml_component.py", line 36, in load_component_from_text structures.ComponentSpec.load_from_component_yaml(text)) File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/components/structures.py", line 583, in load_from_component_yaml return cls.from_v1_component_spec(v1_component) File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/components/structures.py", line 500, in from_v1_component_spec for spec in component_dict.get('outputs', []) File "pydantic/main.py", line 406, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for ComponentSpec __root__ 'NoneType' object is not iterable (type=type_error) ``` ### Expected result <!-- What should the correct behavior be? --> 1. Should this be supported? 2. If not, can we have a clearer error message? ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6966/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6956
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6956/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6956/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6956/events
https://github.com/kubeflow/pipelines/issues/6956
1,062,667,067
I_kwDOB-71UM4_VwM7
6,956
run vertex pipeline with py-func locally
{ "login": "iuiu34", "id": 30587996, "node_id": "MDQ6VXNlcjMwNTg3OTk2", "avatar_url": "https://avatars.githubusercontent.com/u/30587996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iuiu34", "html_url": "https://github.com/iuiu34", "followers_url": "https://api.github.com/users/iuiu34/followers", "following_url": "https://api.github.com/users/iuiu34/following{/other_user}", "gists_url": "https://api.github.com/users/iuiu34/gists{/gist_id}", "starred_url": "https://api.github.com/users/iuiu34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iuiu34/subscriptions", "organizations_url": "https://api.github.com/users/iuiu34/orgs", "repos_url": "https://api.github.com/users/iuiu34/repos", "events_url": "https://api.github.com/users/iuiu34/events{/privacy}", "received_events_url": "https://api.github.com/users/iuiu34/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
open
false
null
[]
null
[ "An start step would be next code playing with the source of the code to execute it dynamically. But would only work with single output's and you still need to import properly the components as functions (which is verbose)\r\n\r\nstart_step could be confusing and out of the current scope but since this is for debugging, seems a potential useful feature to me\r\n```py\r\ndef run_pipeline_local(start_step=None):\r\n func_steps_output = dict(\r\n step_1=\"step_1_output\",\r\n step_2=\"step_2_output\",\r\n ...)\r\n \r\n func_steps = list(func_steps_output.keys())\r\n\r\n if start_step is not None:\r\n if start_step not in func_steps:\r\n raise ValueError\r\n print(f\"start_step: {start_step}\")\r\n start_index = func_steps.index(start_step)\r\n else:\r\n start_index = -1\r\n\r\n func_imports = {k: f\" def {k}(*args, **kwargs): return {v}\" for k, v in func_steps_output.items()}\r\n func_imports = [v for k, v in func_imports.items() if func_steps.index(k) < start_index]\r\n func_imports = '\\n'.join(func_imports)\r\n from test_module.pipeline import pipeline as func\r\n func_source = inspect.getsource(func)\r\n # delete kfp stuff\r\n func_source = func_source.replace('.output', '')\r\n func_source = func_source.replace('.component_spec', '')\r\n func_source_index_def = func_source.index(\"def churn_pipeline\")\r\n func_source = func_source[func_source_index_def:]\r\n func_source_index_end_def = func_source.index(\"):\")\r\n func_source_def = func_source[:func_source_index_end_def + 3]\r\n func_source_body = func_source[func_source_index_end_def + 3:]\r\n kwargs = {}\r\n # kwargs['start_step'] = start_step\r\n # kwargs['experiment_id'] = experiment_id\r\n exec_source = f\"{func_source_def}\\n{func_imports}\\n{func_source_body}\\n{func.__name__}(**kwargs)\"\r\n print(exec_source)\r\n exec(exec_source)\r\n```", "Do you want to test and debug your python component in local environment?", "Yep, this is for components based on py, not prebuild ones like google_cloud_pipeline_components (but run them also in local would be cool)\r\nThis is in order to\r\na) avoid having 2 files: kfp-pipeline & local-pipeline\r\nb) enter debug mode\r\nc) avoid building the docker image and release for debugging", "cc @chensun ", "Components and pipelines are quite different--one is a containerized app with optional Python function implementation, the other is a set of components organized in a DAG. I'd suggest we clearly define the scope of this request first.\r\n\r\nIt's on our roadmap to support local debugging/testing of a primitive component, but we are not planning for local debugging of pipelines--pipeline requires an orchestration engine, to fully support running a pipeline locally is almost like reimplement KFP backend. ", "yep, the function `run_pipeline_local` defined above only covers the debugging of pipelines with only py-func components (i'm not sure if this is what you call primitive component)\r\n\r\nagree that to locally debug all types of components will be out of this scope, 'cause is far more complex.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@chensun Do you have any details/updates on `It's on our roadmap to support local debugging/testing of a primitive component`? Thanks much!" ]
"2021-11-24T17:01:06"
"2023-05-02T17:23:30"
null
NONE
null
For debugging, in kfp.V2, would be good to run the vertex_pipeline based on py-funcs you define locally (without docker build) Also would avoid creating an almost identical, pipeline_local_func function. For example with an ```py from kfp.v2 import compiler pipeline_filename = 'tmp/kfp_pipeline.json' compiler.Compiler().compile(pipeline_func=pipeline_func, package_path=pipeline_filename) job = aiplatform.PipelineJob( pipeline_filename , local = True) job.run() ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6956/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6953
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6953/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6953/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6953/events
https://github.com/kubeflow/pipelines/issues/6953
1,061,853,986
I_kwDOB-71UM4_Spsi
6,953
doc: rename compatibility matrix page to tfx?
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "Thank you for creating this issue!\r\n\r\n\r\nFor the discussion of this doc title, I have a couple questions:\r\n\r\n1. Is there any plan for compatibility matrix beyond tfx in the future? (Say PytorchX, MLMD, argo, etc.)\r\n2. How does user find the right combination of ML frameworks+KFP nowadays? (Or how do we expect users to do to find out such combination) For example: A Kubeflow Pipelines user wants to use TFX, therefore they search `What TFX version to use in Kubeflow Pipelines` at Google to find out.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-24T00:39:36"
"2022-03-02T10:06:20"
null
CONTRIBUTOR
null
@zijianjoy @kramachandran @Bobgy in the interest of getting this merged, I have removed that rename from this PR. However, I still think we should discuss renaming that page in the future. _Originally posted by @thesuperzapper in https://github.com/kubeflow/website/pull/3063#discussion_r755575916_
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6953/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6949
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6949/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6949/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6949/events
https://github.com/kubeflow/pipelines/issues/6949
1,061,427,264
I_kwDOB-71UM4_RBhA
6,949
[bug] Conditions used to select component
{ "login": "DanielNobbe", "id": 42846160, "node_id": "MDQ6VXNlcjQyODQ2MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/42846160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DanielNobbe", "html_url": "https://github.com/DanielNobbe", "followers_url": "https://api.github.com/users/DanielNobbe/followers", "following_url": "https://api.github.com/users/DanielNobbe/following{/other_user}", "gists_url": "https://api.github.com/users/DanielNobbe/gists{/gist_id}", "starred_url": "https://api.github.com/users/DanielNobbe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DanielNobbe/subscriptions", "organizations_url": "https://api.github.com/users/DanielNobbe/orgs", "repos_url": "https://api.github.com/users/DanielNobbe/repos", "events_url": "https://api.github.com/users/DanielNobbe/events{/privacy}", "received_events_url": "https://api.github.com/users/DanielNobbe/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "I encountered a similar issue (but not the same).\r\nWhen I ran:\r\n```\r\n@func_to_container_op\r\ndef op_0(flag: bool) -> bool:\r\n return flag\r\n\r\n@func_to_container_op\r\ndef op_1(flag: bool) -> bool:\r\n return flag\r\n\r\n@func_to_container_op\r\ndef op_2(flag: bool) -> bool:\r\n return flag\r\n\r\n@func_to_container_op\r\ndef op_3(flag: bool):\r\n print(f\"flag={flag}\")\r\n\r\n@dsl.pipeline(name=\"sample\", description=__doc__)\r\ndef sample_pipeline(\r\n flag: bool = True,\r\n) -> None:\r\n a = op_0(flag)\r\n with dsl.Condition(flag == False):\r\n b = op_1(a.output)\r\n with dsl.Condition(flag == True):\r\n b = op_2(a.output)\r\n op_3(b.output)\r\n```\r\n, it caused the following error messages 🤔 :\r\n```\r\nkfp_server_api.exceptions.ApiException: (500)\r\nReason: Internal Server Error\r\nHTTP response headers: HTTPHeaderDict({'X-Powered-By': 'Express', 'content-type': 'application/json', 'date': 'Wed, 08 Dec 2021 01:19:05 GMT', 'content-length': '790', 'connection': 'close'})\r\nHTTP response body: {\"error\":\"Failed to create a new run.: InternalServerError: Failed to validate workflow for (): templates.sample.tasks.condition-2 templates.condition-2.outputs failed to resolve {{tasks.op-2.outputs.parameters.op-2-Output}}\",\"code\":13,\"message\":\"Failed to create a new run.: InternalServerError: Failed to validate workflow for (): templates.sample.tasks.condition-2 templates.condition-2.outputs failed to resolve {{tasks.op-2.outputs.parameters.op-2-Output}}\",\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"Internal Server Error\",\"error_details\":\"Failed to create a new run.: InternalServerError: Failed to validate workflow for (): templates.sample.tasks.condition-2 templates.condition-2.outputs failed to resolve {{tasks.op-2.outputs.parameters.op-2-Output}}\"}]}\r\n```\r\n\r\nWhen I commented out the last line `op_3(b.output)`, it worked correctly.", "I have been facing this issue too, basically anytime a variable 
could have two possible values due to being the result of a condition, on one of the conditions the pipeline will fail with:\r\n`This step is in Error state with this message: Unable to resolve: {{tasks.condition-Full-Experiment-1.outputs.artifacts.split-dataset-output_train}}`\r\n\r\nIm not sure if this is supposed to be the expected behaviour, but the only workarounds I was able to find is to put the rest of the pipeline that would depend upon the output ,separated in the conditional branch or to encapsulate the \"steps\" and returning multiple values. It doesn't seem intuitive to me but I was not able to find any other workarounds..\r\nAn example:\r\n\r\n### Separating steps in the conditionals\r\n```\r\n@dsl.pipeline(\r\n name='addition-pipeline',\r\n description='An example pipeline that performs addition calculations.',\r\n)\r\ndef add_pipeline(a: float, b: float,\r\n c: float, d: float,\r\n is_true: bool = True\r\n):\r\n\r\n add_task = add1(a, b)\r\n\r\n with dsl.Condition(is_true == True, name='is-True'):\r\n add_task_2_cd = add2(c, d)\r\n add_task_3 = add3(a, add_task_2_cd.output)\r\n with dsl.Condition(is_true == False, name='is-False'):\r\n add_task_2_ac = add2(a, c)\r\n add_task_3 = add3(a, add_task_2_ac.output) \r\n```", "I have encountered the same stuff. Much like raphael, I'm not sure if it's a bug or not.", "+1 to this. In my case I was trying to do two branches, one for hyperparameter tuning and one for not, then deployment steps following. I get this same error when trying to bring both conditional paths back to a common op.", "+1, I'm facing the same issue. This is what I'm trying to do\r\n1. Parse choices\r\n2. Execute a container based on a condition and produce an output\r\n3. Execute another container based on another condition and consume the output from step 2 above. 
\r\n\r\nTriggering a run ends up with \r\nkfp_server_api.exceptions.ApiException: (500)\r\nReason: Internal Server Error\r\nHTTP response headers: HTTPHeaderDict({'X-Powered-By': 'Express', 'content-type': 'application/json',\r\nHTTP response body: {\"error\":\"Failed to create a new run.: InternalServerError: Failed to validate workflow for (): templates.sample.tasks.condition-2 templates.condition-2.outputs failed to resolve {{tasks.op-2.outputs.parameters.op-2-" ]
"2021-11-23T15:35:36"
"2023-01-31T04:07:36"
null
NONE
null
### What steps did you take I am trying to use a pipeline to train two types of models (which are in different frameworks). I select which model type to train through a boolean pipeline argument. When selecting `Model_A`, I want to run a data preparation step for `Model_A`, and when selecting `Model_B` I want to run its respective data preparation step. I am using `dsl.Condition` to perform these steps. After either data preparation step, I want to run a data export step, which is the same for both models. I place this step after the conditional steps for the data preparation. See below an example script: ``` import kfp import kfp.dsl as dsl from kfp import compiler from kfp.components import create_component_from_func @create_component_from_func def add1(a: float, b: float) -> float: '''Calculates sum of two arguments''' print(a + b) return a + b @create_component_from_func def add2(a: float, b: float) -> float: '''Calculates sum of two arguments''' print(a + b) return a + b @create_component_from_func def add3(a: float, b: float) -> float: '''Calculates sum of two arguments''' print(a + b) return a + b @dsl.pipeline( name='addition-pipeline', description='An example pipeline that performs addition calculations.', ) def add_pipeline(a: float, b: float, c: float, d: float, is_true: bool = True ): add_task = add1(a, b) with dsl.Condition(is_true == True, name='is-True'): add_task_2 = add2(c, d) with dsl.Condition(is_true == False, name='is-False'): add_task_2 = add2(a, c) add_task_3 = add3(a, add_task_2.output) client = kfp.Client() client.create_run_from_pipeline_func( add_pipeline, arguments={'a': 1, 'b': 2, 'c': 5, 'd': 12, 'is_true': False}, ) ``` This script has a similar structure to the problem I am trying to solve: First, I perform an addition of two numbers (which are selected through the boolean flag), then I use the output of the addition in a following step. 
<!-- A clear and concise description of what the bug is.--> ### What happened: An error is triggered in the UI: `invalid spec: templates.addition-pipeline.tasks.condition-is-False-2 templates.condition-is-False-2.outputs failed to resolve {{tasks.add2-2.outputs.parameters.add2-2-Output}}` And the graph corresponding to the pipeline shows that `Add3` would not be started if the condition is True. ![image](https://user-images.githubusercontent.com/42846160/143053104-e3736d1e-23f0-48f0-b8f9-d9ecde895824.png) ### What did you expect to happen: The pipeline should not trigger an error, and `Add3` should always be dependent on the output of `Add2`. ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> As part of a KubeFlow installation. * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> 1.5.0 * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> 1.8.9 ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> It seems that `kfp` does not reference the first `Add2` as an input to `Add3`, as if it is being overwritten by the second `Add2`. Is the `Condition` class not made for use cases such as this? In the examples, it is only used for leaf nodes of the DAG, but our use case does not seem very far-fetched. It would be great if someone can shed some light on this. 
### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6949/reactions", "total_count": 26, "+1": 26, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6949/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6948
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6948/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6948/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6948/events
https://github.com/kubeflow/pipelines/issues/6948
1,061,282,880
I_kwDOB-71UM4_QeRA
6,948
[frontend] taskname instead of component name in pipeline view or rename components on use
{ "login": "tomalbrecht", "id": 17781570, "node_id": "MDQ6VXNlcjE3NzgxNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/17781570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomalbrecht", "html_url": "https://github.com/tomalbrecht", "followers_url": "https://api.github.com/users/tomalbrecht/followers", "following_url": "https://api.github.com/users/tomalbrecht/following{/other_user}", "gists_url": "https://api.github.com/users/tomalbrecht/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomalbrecht/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomalbrecht/subscriptions", "organizations_url": "https://api.github.com/users/tomalbrecht/orgs", "repos_url": "https://api.github.com/users/tomalbrecht/repos", "events_url": "https://api.github.com/users/tomalbrecht/events{/privacy}", "received_events_url": "https://api.github.com/users/tomalbrecht/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Well. A colleague found the undocumented solution. In case someone else wonders how you can rename the task:\r\n\r\n```\r\n task_get_model = op_get_minio_file(\r\n minio_uri=minio_model_uri\r\n )\r\n task_get_model.apply(use_aws_secret(\r\n 'mlpipeline-minio-artifact',\r\n 'accesskey',\r\n 'secretkey'\r\n ))\r\n task_get_model.set_display_name('load modelfile')\r\n```" ]
"2021-11-23T13:22:18"
"2021-11-23T13:31:43"
"2021-11-23T13:31:42"
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? via manifests * KFP version: 1.8.9 ### Steps to reproduce - create a pipeline using the same component twice (different runtime parameters) ### Expected result - I would expect the task-name instead of the component name within kubeflow pipelines UI - I'd like to distinct the usage of components. E.g. write statistics for validation/test or loading model/dataset ### Materials and Reference - is it possible to plot the taskname instead of the component name? - or how do I rename the component? ![Bildschirmfoto 2021-11-23 um 14 18 11](https://user-images.githubusercontent.com/17781570/143031574-f6acf62a-c82b-463a-90ce-709db986e6be.png) --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6948/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6940
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6940/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6940/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6940/events
https://github.com/kubeflow/pipelines/issues/6940
1,060,452,705
I_kwDOB-71UM4_NTlh
6,940
[feature] Publish a wheel for package `kfp`
{ "login": "busunkim96", "id": 8822365, "node_id": "MDQ6VXNlcjg4MjIzNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/8822365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/busunkim96", "html_url": "https://github.com/busunkim96", "followers_url": "https://api.github.com/users/busunkim96/followers", "following_url": "https://api.github.com/users/busunkim96/following{/other_user}", "gists_url": "https://api.github.com/users/busunkim96/gists{/gist_id}", "starred_url": "https://api.github.com/users/busunkim96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/busunkim96/subscriptions", "organizations_url": "https://api.github.com/users/busunkim96/orgs", "repos_url": "https://api.github.com/users/busunkim96/repos", "events_url": "https://api.github.com/users/busunkim96/events{/privacy}", "received_events_url": "https://api.github.com/users/busunkim96/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-22T18:09:08"
"2022-03-02T10:06:24"
null
CONTRIBUTOR
null
### Feature Area /area sdk ### What feature would you like to see? Please publish a built distribution (wheel) for the package `kfp`. https://packaging.python.org/guides/analyzing-pypi-package-downloads/ says both files (source distribution and source archive) get created by default if you use the PyPA package `build`: ``` python3 -m pip install --upgrade build python3 -m build ``` ### What is the use case or pain point? I am one of the maintainers of `google-auth` and `google-cloud-*` libraries. This summer we released a v2 of `google-auth` and `google-api-core`, which we anticipated would cause diamond dependency conflicts for folks using multiple dependent packages. To mitigate this I proactively opened PRs like #6939 on many repositories. I was able to quickly discover dependent packages through the [PyPI Bigquery Dataset](https://packaging.python.org/guides/analyzing-pypi-package-downloads/). This dataset lists the `install_requires` as long as the package has published a wheel. https://dustingram.com/articles/2018/03/05/why-pypi-doesnt-know-dependencies/ explains in more detail why the wheel is needed to get this data. The wheel also makes it possible for websites like https://libraries.io/ to list dependents and dependencies of packages. Notice that https://libraries.io/pypi/kfp lists 0 dependencies. For comparison see https://libraries.io/pypi/google-cloud-vision. <!-- It helps us understand the benefit of this feature for your use case. --> ### Is there a workaround currently? I used two additional strategies to try to find dependent packages: 1. Looking through GItHub. This is more manageable when the search is scoped to a single organization, but gets more difficult when packages are in many organizations. 2. Ask folks on partner teams if they know of any projects that might depend directly on `google-auth`. Both options are imperfect and require human judgement. <!-- Without this feature, how do you accomplish your task today? 
--> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6940/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6936
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6936/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6936/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6936/events
https://github.com/kubeflow/pipelines/issues/6936
1,059,589,930
I_kwDOB-71UM4_KA8q
6,936
avoid declaring two times same function (and params) in vertex-pipeline with py-functions
{ "login": "iuiu34", "id": 30587996, "node_id": "MDQ6VXNlcjMwNTg3OTk2", "avatar_url": "https://avatars.githubusercontent.com/u/30587996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iuiu34", "html_url": "https://github.com/iuiu34", "followers_url": "https://api.github.com/users/iuiu34/followers", "following_url": "https://api.github.com/users/iuiu34/following{/other_user}", "gists_url": "https://api.github.com/users/iuiu34/gists{/gist_id}", "starred_url": "https://api.github.com/users/iuiu34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iuiu34/subscriptions", "organizations_url": "https://api.github.com/users/iuiu34/orgs", "repos_url": "https://api.github.com/users/iuiu34/repos", "events_url": "https://api.github.com/users/iuiu34/events{/privacy}", "received_events_url": "https://api.github.com/users/iuiu34/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "PR #7049", "IIUC, what you want to achieve after the PR is something like this:\r\n\r\n```python\r\nfrom kfp_test.test import test\r\n \r\ntest_op = create_component_from_func(func=test, base_image=base_image)\r\n```\r\n\r\nThere're a couple of issues with this approach:\r\n1. It requires installing the `kfp_test` package locally in order to compile, but that's really not necessary.\r\n2. It makes an assumption that the locally installed package is the same version as that in the base image. But that's not something we can verify here.\r\n3. If we want to try different versions of the base image--assuming there's interface changes to the test function--it would be a big pain as we have to remember to update the local package to match the image or switch among different virtual environments.\r\n", "yep, you're right.\r\n\r\nThen another approach would be to call the `base_image` inside py with an entrypoint like\r\n\r\n```sh\r\nFUNC_SOURCE=${python3 -m kfp.v2.component_factory.get_func_source(func)}\r\n```\r\n\r\nwith `kfp.v2.component_factory.get_func_source` as in the PR:\r\n\r\n```py\r\ndef get_func_source(func):\r\n name = func.__name__\r\n signature = inspect.signature(func)\r\n func_source = f' def {name}{signature}:\\n' \\\r\n f' kwargs = locals()\\n ' \\\r\n f'from {func.__module__} import {name}\\n ' \\\r\n f'return {name}(**kwargs)\\n'\r\n return func_source\r\n```\r\n\r\nWith this we solve your problems cause now the usage would be\r\n```py\r\ntest_op = create_component_from_func(func='main_module.test', base_image=base_image)\r\n````\r\n\r\nBut main problem that I see now is that you need the image up and running before compiling the json.\r\nSomething like\r\n\r\n* start container (in whatever your server is)\r\n* run entrypoint\r\n* compile json\r\n* run pipeline (in whatever your server is)\r\n\r\nIf you have a server that's able to run pipelines, you should be able to start containers too. 
But I admit, that this back and forth is not very clean.\r\n\r\nWhat are your thoughts?", "I'm not sure I understand. \r\nStarting a container involves running its entrypoint. The compiled json tells the backend system how to run a container.", "entrypoint would be\r\n```sh\r\npython3 -m kfp.v2.component_factory.get_func_source(func)\r\n```\r\n\r\nthe idea would be to start the container alone before the pipeline to get the func definition. And then with the info, define the components and run the pipeline.\r\n\r\nYou can also think it as running first a one-step pipeline, with the `get_func_source` . \r\nAnd then setting up the components args for the desired pipeline\r\n\r\nAs said, is a little bit back and forth...\r\n\r\n", "> entrypoint would be\r\n> \r\n> ```shell\r\n> python3 -m kfp.v2.component_factory.get_func_source(func)\r\n> ```\r\n> \r\n> the idea would be to start the container alone before the pipeline to get the func definition. And then with the info, define the components and run the pipeline.\r\n> \r\n> You can also think it as running first a one-step pipeline, with the `get_func_source` . And then setting up the components args for the desired pipeline\r\n> \r\n> As said, is a little bit back and forth...\r\n\r\nI see. Probably not a good idea- that's even heavier than the current state...", "yep...\r\nAnway, if someone out there is in a similar situation:\r\nWants to convert py-funcs in a custom py-package into kfp-py-func components. But without changing the code (i.e. without putting the imports inside, etc.) 
\r\nHere you have an util `create-wrapper-components` that will create a components.py file automatically [link](https://github.com/iuiu34/create-wrapper-components-kfp)\r\n\r\nRequirements:\r\n- your functions should have type hint for input and output args (and only kfp-py-func available types)\r\n- your base_image for the components and the image/pc where you run the `create-wrapper-components` should have the same `your_custom_package` version", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-22T02:04:07"
"2022-04-17T06:27:26"
null
NONE
null
context: With vertex-pipeline + py-functions We can use a base_image (docker) with our py library pre-installed. Therefore the lib functions are available to import inside the pipeline. problem: Assuming we want to use one of this functions (without any change) in the pipeline, is a little verbose to declare with @component the same function again, which is corresponding paramater types. Would be cleaner if you could just declare it once. And the way the code works, wouldn't be so much of a change. solution proposal: assuming our function is in our library: `from kfp_test.test import test` then we have the option 1) clean but verbose ``` @component(base_image=base_image) def test(x: int) -> int: kwargs = locals() from kfp_test.test import test return test(**kwargs)` ``` 2) not clean but not verbose ``` from inspect import getsource as getsource2 # deal with line 93 in component_factory.py: func_code = inspect.getsource(func) def getsource(func): if 'func_code' in func.__dict__.keys(): return func.func_code else: return getsource2(func) from kfp.v2 import compiler (...) from kfp_test.test import test func_code = f' def {inspect.signature(test)}:\n kwargs = locals()\n from kfp_test.test import test\n return test(**kwargs)\n' test.func_code = func_code test = create_component_from_func(func=test, base_image=base_image) ``` if we had a function in kfp like `create_component_from_func_code` that handled option 2 natively (create a wrapper string automatically from the func and ingestit in the create_component() without declaring it) , then we would have a non-verbose, clean-way to use functions from the library inside the pipeline
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6936/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6935
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6935/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6935/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6935/events
https://github.com/kubeflow/pipelines/issues/6935
1,058,923,115
I_kwDOB-71UM4_HeJr
6,935
[feature] Provide a way to filter for currently enabled recurring runs
{ "login": "jli", "id": 133466, "node_id": "MDQ6VXNlcjEzMzQ2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/133466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jli", "html_url": "https://github.com/jli", "followers_url": "https://api.github.com/users/jli/followers", "following_url": "https://api.github.com/users/jli/following{/other_user}", "gists_url": "https://api.github.com/users/jli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jli/subscriptions", "organizations_url": "https://api.github.com/users/jli/orgs", "repos_url": "https://api.github.com/users/jli/repos", "events_url": "https://api.github.com/users/jli/events{/privacy}", "received_events_url": "https://api.github.com/users/jli/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null }, { "id": 2186355346, "node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue", "name": "good first issue", "color": "fef2c0", "default": true, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "This sounds like a useful feature to add. \r\n\r\nHello Xiyue, would you like to take a look and see if we are able to support filtering by job status from backend? Reference: https://github.com/kubeflow/pipelines/blob/949cfdca3794416210805ca1579adbd44ebec592/backend/src/apiserver/storage/job_store.go#L59\r\n\r\n/assign @capri-xiyue ", "@jli , would you like to contribute to this feature? We are happy to provide resources and areas to change if you want to implement it.", "@zijianjoy I don't think I have the time at the moment to contribute unfortunately, sorry!", "@jli , that is okay, we will keep this feature open for anyone interested.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen" ]
"2021-11-19T21:16:13"
"2022-04-21T00:22:01"
null
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? The recurring runs frontend page could be more useful for my teams if we could filter for just the currently active recurring runs. A checkbox/button would be great for this. Alternatively, just being able to sort by the "Status" column could be an easy way to implement this. ### What is the use case or pain point? When my team merges PRs, our CI disables the previous production recurring runs and schedules new runs with the latest code, so there are a lot of recurring runs. It would be helpful for us to understand what recurring runs are currently active. ### Is there a workaround currently? Perhaps we could use the KFP API client to get this? But no workaround in the frontend UI. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6935/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6935/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6931
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6931/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6931/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6931/events
https://github.com/kubeflow/pipelines/issues/6931
1,058,627,123
I_kwDOB-71UM4_GV4z
6,931
How to use docker images from a private registry
{ "login": "konsloiz", "id": 22999070, "node_id": "MDQ6VXNlcjIyOTk5MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/22999070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/konsloiz", "html_url": "https://github.com/konsloiz", "followers_url": "https://api.github.com/users/konsloiz/followers", "following_url": "https://api.github.com/users/konsloiz/following{/other_user}", "gists_url": "https://api.github.com/users/konsloiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/konsloiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/konsloiz/subscriptions", "organizations_url": "https://api.github.com/users/konsloiz/orgs", "repos_url": "https://api.github.com/users/konsloiz/repos", "events_url": "https://api.github.com/users/konsloiz/events{/privacy}", "received_events_url": "https://api.github.com/users/konsloiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "How do you solve this problem?", "For version 1.4 from what I see driver image is hardcoded in driver.go " ]
"2021-11-19T15:04:47"
"2022-01-22T20:01:28"
"2021-11-30T13:55:30"
NONE
null
Hello! We are trying to deploy KFP in an enterprise environment with docker images from a private registry (for security reasons we are not allowed to use public registries such as gcr.io, so we pull the images and push them to our environment). The problem is that we cannot find where some of the images can be configured. More specifically, we would like to know where we can change 1. gcr.io/ml-pipeline/kfp-launcher 2. Python 3.7 3. gcr.io/ml-pipeline/kfp-driver Thank you for your help in advance.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6931/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6930
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6930/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6930/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6930/events
https://github.com/kubeflow/pipelines/issues/6930
1,058,496,812
I_kwDOB-71UM4_F2Es
6,930
Pipeline run does not write artifacts and stop
{ "login": "lenaherrmann-dfki", "id": 85176444, "node_id": "MDQ6VXNlcjg1MTc2NDQ0", "avatar_url": "https://avatars.githubusercontent.com/u/85176444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lenaherrmann-dfki", "html_url": "https://github.com/lenaherrmann-dfki", "followers_url": "https://api.github.com/users/lenaherrmann-dfki/followers", "following_url": "https://api.github.com/users/lenaherrmann-dfki/following{/other_user}", "gists_url": "https://api.github.com/users/lenaherrmann-dfki/gists{/gist_id}", "starred_url": "https://api.github.com/users/lenaherrmann-dfki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lenaherrmann-dfki/subscriptions", "organizations_url": "https://api.github.com/users/lenaherrmann-dfki/orgs", "repos_url": "https://api.github.com/users/lenaherrmann-dfki/repos", "events_url": "https://api.github.com/users/lenaherrmann-dfki/events{/privacy}", "received_events_url": "https://api.github.com/users/lenaherrmann-dfki/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Do you see any errors when the pipeline run? Can you share your pipeline yaml?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-19T12:52:34"
"2022-03-03T02:05:09"
null
NONE
null
Hey folks, recently I installed Kubeflow 1.4 with Multi User support. After a while, I managed to create a pipeline through a notebook server. The pipeline consists of different steps, each with input and output artifacts. Yet when I start the pipeline in a run, the first step does not stop, nor does it seem to write any of the files as output artifacts. If I look into the local minio, nothing has been written there. Do I need to grant the user explicit access to the kubeflow minio in order to write output artifacts? And if so, how do I grant this access? Any help is appreciated
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6930/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6928
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6928/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6928/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6928/events
https://github.com/kubeflow/pipelines/issues/6928
1,058,459,367
I_kwDOB-71UM4_Fs7n
6,928
samples: migrate v2 samples to hermetic v2 namespace
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }, { "login": "capri-xiyue", "id": 52932582, "node_id": "MDQ6VXNlcjUyOTMyNTgy", "avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4", 
"gravatar_id": "", "url": "https://api.github.com/users/capri-xiyue", "html_url": "https://github.com/capri-xiyue", "followers_url": "https://api.github.com/users/capri-xiyue/followers", "following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}", "gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}", "starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions", "organizations_url": "https://api.github.com/users/capri-xiyue/orgs", "repos_url": "https://api.github.com/users/capri-xiyue/repos", "events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}", "received_events_url": "https://api.github.com/users/capri-xiyue/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-11-19T12:07:54"
"2021-11-20T18:18:36"
"2021-11-20T18:18:36"
CONTRIBUTOR
null
With https://github.com/kubeflow/pipelines/pull/6890 merged, we need to migrate v2 samples, taking into account v2 breaking changes according to [KFP SDK 2.0: API Experience & Breaking Changes](https://docs.google.com/document/d/1nCUUVRXexXbQ0LDkGHsMIBDSu1WvJA9Upy1JzybNVMk/edit?usp=sharing). Samples in samples/test with V2_ENGINE mode should be migrated (copied) to samples/v2 with the same test. Samples with V2_COMPATIBLE mode should be cleaned up or only tested as v1 samples, according to https://github.com/kubeflow/pipelines/issues/6829. A search of samples we need to migrate: https://github.com/search?q=PipelineExecutionMode+V2_ENGINE++repo%3Akubeflow%2Fpipelines+filename%3A_test&type=Code&ref=advsearch&l=&l=
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6928/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6927
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6927/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6927/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6927/events
https://github.com/kubeflow/pipelines/issues/6927
1,058,416,460
I_kwDOB-71UM4_FidM
6,927
[sdk] dsl-compile-v2 fails to compile samples/v2/producer_consumer_param.py
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619528, "node_id": "MDU6TGFiZWw5MzA2MTk1Mjg=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/testing", "name": "area/testing", "color": "00daff", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This specific one is because of the mix use of v1 and v2 imports in the test sample file\r\nhttps://github.com/kubeflow/pipelines/blob/feebc6b66a719dc950c84500e00d23e36b37ae34/samples/v2/producer_consumer_param.py#L16-L19\r\n\r\nwhich I'm fixing in https://github.com/kubeflow/pipelines/pull/6932", "I wonder whether we can return a meaningful error message. This one is impossible to understand why.", "> I wonder whether we can return a meaningful error message. This one is impossible to understand why.\r\n\r\nAgree, will try to improve the message.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-19T11:14:39"
"2022-03-02T10:06:27"
null
CONTRIBUTOR
null
### Environment * KFP SDK version: master <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ```bash $ cd github.com/kubeflow/pipelines $ pip install -e sdk/python $ cd samples/v2 $ dsl-compile-v2 --py producer_consumer_param.py --out a.json Traceback (most recent call last): File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/components/_data_passing.py", line 235, in serialize_value serialized_value = serializer(value) File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/components/_data_passing.py", line 40, in _serialize_str str(str_value), str(type(str_value)))) TypeError: Value "{{channel:task=;name=text;type=String;}}" has type "<class 'kfp.v2.components.pipeline_channel.PipelineParameterChannel'>" instead of str. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/gongyuan_kubeflow_org/miniconda3/envs/v2/bin/dsl-compile-v2", line 33, in <module> sys.exit(load_entry_point('kfp', 'console_scripts', 'dsl-compile-v2')()) File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/compiler/main.py", line 164, in main type_check=not args.disable_type_check, File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/compiler/main.py", line 149, in compile_pyfile type_check=type_check, File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/compiler/main.py", line 102, in _compile_pipeline_function type_check=type_check) File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/compiler/compiler.py", line 99, in compile pipeline_parameters_override=pipeline_parameters, File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/v2/compiler/compiler.py", line 153, in _create_pipeline_v2 pipeline_func(*args_list) File "/home/gongyuan_kubeflow_org/github/kf/pipelines/samples/v2/producer_consumer_param.py", line 61, in producer_consumer_param_pipeline producer = producer_op(input_text=text) File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/components/_dynamic.py", line 53, in Producer return dict_func(locals()) # noqa: F821 TODO File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/components/_components.py", line 389, in create_task_object_from_component_and_pythonic_arguments component_ref=component_ref, File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/components/_components.py", line 327, in _create_task_object_from_component_and_arguments **kwargs, File "/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/components/_components.py", line 278, in _create_task_spec_from_component_and_arguments input_type) File 
"/home/gongyuan_kubeflow_org/github/kf/pipelines/sdk/python/kfp/components/_data_passing.py", line 248, in serialize_value str(e), ValueError: Failed to serialize the value "{{channel:task=;name=text;type=String;}}" of type "PipelineParameterChannel" to type "String". Exception: Value "{{channel:task=;name=text;type=String;}}" has type "<class 'kfp.v2.components.pipeline_channel.PipelineParameterChannel'>" instead of str. ``` ### Expected result The sample should compile. <!-- What should the correct behavior be? --> ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6927/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6926
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6926/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6926/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6926/events
https://github.com/kubeflow/pipelines/issues/6926
1,058,262,407
I_kwDOB-71UM4_E82H
6,926
[bug] Pipeline components do not fail as expected
{ "login": "pretidav", "id": 23082930, "node_id": "MDQ6VXNlcjIzMDgyOTMw", "avatar_url": "https://avatars.githubusercontent.com/u/23082930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pretidav", "html_url": "https://github.com/pretidav", "followers_url": "https://api.github.com/users/pretidav/followers", "following_url": "https://api.github.com/users/pretidav/following{/other_user}", "gists_url": "https://api.github.com/users/pretidav/gists{/gist_id}", "starred_url": "https://api.github.com/users/pretidav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pretidav/subscriptions", "organizations_url": "https://api.github.com/users/pretidav/orgs", "repos_url": "https://api.github.com/users/pretidav/repos", "events_url": "https://api.github.com/users/pretidav/events{/privacy}", "received_events_url": "https://api.github.com/users/pretidav/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
open
false
null
[]
null
[ "@pretidav Can you share your pipeline component yaml?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-19T08:16:37"
"2022-03-03T02:05:11"
null
NONE
null
### What steps did you take We have a pipeline with several components defined via .yaml descriptors. ### What happened: Whenever the python code (contained in the images executed by the component) raises an error, this does not translate into an error in the component execution, which is still marked with a green check. We tried several ways to raise the error, from a simple "exit(1)" to more specific python error-raising methods. ### What did you expect to happen: We expect the component to correctly catch the error from its container execution and stop the pipeline. This is crucial to ensure correct caching functionality. If the component does not fail as it should, a non-working component execution can be cached by other runs. ### Environment: * How do you deploy Kubeflow Pipelines (KFP)? part of kubeflow deployment * KFP version: 1.7.0 * KFP SDK version: 1.8.2 ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6926/reactions", "total_count": 11, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6926/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6925
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6925/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6925/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6925/events
https://github.com/kubeflow/pipelines/issues/6925
1,057,814,368
I_kwDOB-71UM4_DPdg
6,925
[feature] Submit spark job on AWS Databricks
{ "login": "pwzhong", "id": 15694079, "node_id": "MDQ6VXNlcjE1Njk0MDc5", "avatar_url": "https://avatars.githubusercontent.com/u/15694079?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pwzhong", "html_url": "https://github.com/pwzhong", "followers_url": "https://api.github.com/users/pwzhong/followers", "following_url": "https://api.github.com/users/pwzhong/following{/other_user}", "gists_url": "https://api.github.com/users/pwzhong/gists{/gist_id}", "starred_url": "https://api.github.com/users/pwzhong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pwzhong/subscriptions", "organizations_url": "https://api.github.com/users/pwzhong/orgs", "repos_url": "https://api.github.com/users/pwzhong/repos", "events_url": "https://api.github.com/users/pwzhong/events{/privacy}", "received_events_url": "https://api.github.com/users/pwzhong/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-18T21:25:18"
"2022-03-02T10:06:29"
null
NONE
null
There are Kubeflow pipeline Ops to submit jobs to Azure Databricks. Are we able to also submit jobs to AWS Databricks? If not, does ResourceOp support it?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6925/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6923
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6923/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6923/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6923/events
https://github.com/kubeflow/pipelines/issues/6923
1,057,805,899
I_kwDOB-71UM4_DNZL
6,923
[feature] [multi-user] Pipelines Profile Controller Supports Adding Labels to Created Resources
{ "login": "vinayan3", "id": 1034900, "node_id": "MDQ6VXNlcjEwMzQ5MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1034900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vinayan3", "html_url": "https://github.com/vinayan3", "followers_url": "https://api.github.com/users/vinayan3/followers", "following_url": "https://api.github.com/users/vinayan3/following{/other_user}", "gists_url": "https://api.github.com/users/vinayan3/gists{/gist_id}", "starred_url": "https://api.github.com/users/vinayan3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinayan3/subscriptions", "organizations_url": "https://api.github.com/users/vinayan3/orgs", "repos_url": "https://api.github.com/users/vinayan3/repos", "events_url": "https://api.github.com/users/vinayan3/events{/privacy}", "received_events_url": "https://api.github.com/users/vinayan3/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2852159180, "node_id": "MDU6TGFiZWwyODUyMTU5MTgw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/manifests", "name": "area/manifests", "color": "4CD0BE", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "There is no KFP profile controller image; it is by design that we allow users to customize profile controller behavior as follows.\r\n\r\nYou can change https://github.com/kubeflow/pipelines/blob/master/manifests/kustomize/base/installs/multi-user/pipelines-profile-controller/sync.py#L161 in the way that you want to append labels, then apply the manifests to achieve your goal. This file will be packaged into a configmap and executed as-is during runtime." ]
"2021-11-18T21:13:49"
"2021-11-19T01:14:22"
"2021-11-19T01:14:22"
NONE
null
### Feature Area /area manifests ### What feature would you like to see? The Kubeflow Pipelines deployment is placed into a cluster which has labeling requirements. The pipelines profile controller sync.py creates Kube resources in the profile's namespace. There needs to be a feature to allow adding labels to all the created resources. ### What is the use case or pain point? The labeling requirements on the Kubernetes cluster will cause the pods to be rejected when hard enforcement in Gatekeeper is turned on. ### Is there a workaround currently? Not currently. The only workaround is to fork the pipelines repo, build the pipelines-profile-controller image, and use that docker image. <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6923/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6922
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6922/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6922/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6922/events
https://github.com/kubeflow/pipelines/issues/6922
1,057,270,294
I_kwDOB-71UM4_BKoW
6,922
ml-pipeline, metadata-grpc, ml-pipeline-persistentagent restart constantly
{ "login": "konsloiz", "id": 22999070, "node_id": "MDQ6VXNlcjIyOTk5MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/22999070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/konsloiz", "html_url": "https://github.com/konsloiz", "followers_url": "https://api.github.com/users/konsloiz/followers", "following_url": "https://api.github.com/users/konsloiz/following{/other_user}", "gists_url": "https://api.github.com/users/konsloiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/konsloiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/konsloiz/subscriptions", "organizations_url": "https://api.github.com/users/konsloiz/orgs", "repos_url": "https://api.github.com/users/konsloiz/repos", "events_url": "https://api.github.com/users/konsloiz/events{/privacy}", "received_events_url": "https://api.github.com/users/konsloiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Can you look at the yaml definition, events and logs in `ml-pipeline`? \r\n\r\n```\r\nkubectl logs my-pod --previous \r\n```\r\n\r\nhttps://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods", "> kubectl logs my-pod --previous\r\n\r\nThanks for the reply. \r\n\r\n**ml-pipelines: logs**\r\n\r\n```bash\r\nI1119 07:39:09.049099 7 client_manager.go:160] Initializing client manager\r\nI1119 07:39:09.049182 7 config.go:57] Config DBConfig.ExtraParams not specified, skipping\r\n```\r\n\r\n**ml-pipelines: events**\r\n\r\n```bash\r\n25m Warning Unhealthy pod/ml-pipeline-f55c9d8cf-xbd2m Readiness probe failed:\r\n9m53s Warning BackOff pod/ml-pipeline-f55c9d8cf-xbd2m Back-off restarting failed container\r\n5m1s Warning Unhealthy pod/ml-pipeline-f55c9d8cf-xbd2m Liveness probe failed:\r\n```\r\n\r\nml-pipelines-apiserver-deployment.yaml\r\n\r\n```yaml\r\napiVersion: apps/v1\r\nkind: Deployment\r\nmetadata:\r\n labels:\r\n app: ml-pipeline\r\n name: ml-pipeline\r\nspec:\r\n selector:\r\n matchLabels:\r\n app: ml-pipeline\r\n template:\r\n metadata:\r\n labels:\r\n app: ml-pipeline\r\n annotations:\r\n cluster-autoscaler.kubernetes.io/safe-to-evict: \"true\"\r\n spec:\r\n securityContext:\r\n runAsUser: 1000\r\n containers:\r\n - env:\r\n - name: AUTO_UPDATE_PIPELINE_DEFAULT_VERSION\r\n valueFrom:\r\n configMapKeyRef:\r\n name: pipeline-install-config\r\n key: autoUpdatePipelineDefaultVersion\r\n - name: POD_NAMESPACE\r\n valueFrom:\r\n fieldRef:\r\n fieldPath: metadata.namespace\r\n - name: OBJECTSTORECONFIG_SECURE\r\n value: \"false\"\r\n - name: OBJECTSTORECONFIG_BUCKETNAME\r\n valueFrom:\r\n configMapKeyRef:\r\n name: pipeline-install-config\r\n key: bucketName\r\n - name: DBCONFIG_USER\r\n valueFrom:\r\n secretKeyRef:\r\n name: mysql-secret\r\n key: username\r\n - name: DBCONFIG_PASSWORD\r\n valueFrom:\r\n secretKeyRef:\r\n name: mysql-secret\r\n key: password\r\n - name: DBCONFIG_DBNAME\r\n valueFrom:\r\n configMapKeyRef:\r\n name: pipeline-install-config\r\n key: pipelineDb\r\n - name: DBCONFIG_HOST\r\n valueFrom:\r\n configMapKeyRef:\r\n name: pipeline-install-config\r\n key: dbHost\r\n - name: DBCONFIG_PORT\r\n valueFrom:\r\n configMapKeyRef:\r\n name: pipeline-install-config\r\n key: dbPort\r\n - name: DBCONFIG_CONMAXLIFETIMESEC\r\n valueFrom:\r\n configMapKeyRef:\r\n name: pipeline-install-config\r\n key: ConMaxLifeTimeSec\r\n - name: OBJECTSTORECONFIG_ACCESSKEY\r\n valueFrom:\r\n secretKeyRef:\r\n name: mlpipeline-minio-artifact\r\n key: accesskey\r\n - name: OBJECTSTORECONFIG_SECRETACCESSKEY\r\n valueFrom:\r\n secretKeyRef:\r\n name: mlpipeline-minio-artifact\r\n key: secretkey\r\n image: registry.app.corpintra.net/dna/ml-pipeline/api-server:dummy\r\n imagePullPolicy: IfNotPresent\r\n name: ml-pipeline-api-server\r\n ports:\r\n - name: http\r\n containerPort: 8888\r\n - name: grpc\r\n containerPort: 8887\r\n readinessProbe:\r\n exec:\r\n command:\r\n - wget\r\n - -q # quiet\r\n - -S # show server response\r\n - -O\r\n - \"-\" # Redirect output to stdout\r\n - http://localhost:8888/apis/v1beta1/healthz\r\n initialDelaySeconds: 3\r\n periodSeconds: 5\r\n timeoutSeconds: 2\r\n livenessProbe:\r\n exec:\r\n command:\r\n - wget\r\n - -q # quiet\r\n - -S # show server response\r\n - -O\r\n - \"-\" # Redirect output to stdout\r\n - http://localhost:8888/apis/v1beta1/healthz\r\n initialDelaySeconds: 3\r\n periodSeconds: 5\r\n timeoutSeconds: 2\r\n resources:\r\n requests:\r\n cpu: 250m\r\n memory: 500Mi\r\n serviceAccountName: ml-pipeline\r\n```\r\n\r\nAlso, I want to add here (maybe this can help) that the DBs are not created inside the MySQL Pod\r\n\r\n\r\n![image](https://user-images.githubusercontent.com/22999070/142585587-e5fbf748-425c-4262-b673-2afce4dcf927.png)\r\n\r\n\r\nLet me know if you need anything more. Thanks for helping me out.", "Closing this issue because it was a problem of our Network Policy. Thanks for the help.", "I'm having the exact same error, do you mind elaborating what your issue was? ", "I am also facing the same error. Would you elaborate on how you fixed the issue?", "Hello any idea, how to fix this error?\r\n", "Can you explain what was the fix?", "If you are looking for a working solution, we open-sourced our code. \r\n\r\nYou can find it in this repo: 👉 https://github.com/mercedes-benz/DnA" ]
"2021-11-18T11:56:17"
"2023-04-19T10:22:06"
"2021-11-19T15:22:14"
NONE
null
Hello everyone. We are trying to deploy KFP in an enterprise cluster. Unfortunately, the ml-pipeline, metadata-grpc, ml-pipeline-persistentagent pods are not initialized and they are constantly being restarted. Any ideas? **ml-pipeline-persistentagent logs** ```bash time="2021-11-18T11:51:41Z" level=fatal msg="Error creating ML pipeline API Server client: Failed to initialize pipeline client. Error: Waiting for ml pipeline API server failed after all attempts.: Get http://ml-pipeline:8888/apis/v1beta1/healthz: dial tcp 10.254.96.234:8888: connect: connection refused: Waiting for ml pipeline API server failed after all attempts.: Get http://ml-pipeline:8888/apis/v1beta1/healthz: dial tcp 10.254.96.234:8888: connect: connection refused" ``` **metadata-grpc logs** Empty **ml-pipeline logs** ```bash I1118 11:48:35.998899 7 client_manager.go:160] Initializing client manager I1118 11:48:35.998974 7 config.go:57] Config DBConfig.ExtraParams not specified, skipping ``` ![KFP pods](https://user-images.githubusercontent.com/22999070/142410060-542cc67c-2835-41cc-9499-eb03b358b0bd.png)
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6922/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6916
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6916/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6916/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6916/events
https://github.com/kubeflow/pipelines/issues/6916
1,055,633,855
I_kwDOB-71UM4-67G_
6,916
[feature] Upgrade argo-workflows to 3.2.x
{ "login": "jmcarp", "id": 1633460, "node_id": "MDQ6VXNlcjE2MzM0NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/1633460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmcarp", "html_url": "https://github.com/jmcarp", "followers_url": "https://api.github.com/users/jmcarp/followers", "following_url": "https://api.github.com/users/jmcarp/following{/other_user}", "gists_url": "https://api.github.com/users/jmcarp/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmcarp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmcarp/subscriptions", "organizations_url": "https://api.github.com/users/jmcarp/orgs", "repos_url": "https://api.github.com/users/jmcarp/repos", "events_url": "https://api.github.com/users/jmcarp/events{/privacy}", "received_events_url": "https://api.github.com/users/jmcarp/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @zijianjoy ", "@zijianjoy I gave this a try in #6920. Let me know if I'm on the right track--or if it's easier for you to do this instead.", "@jmcarp If you add me to edit your branch https://github.com/jmcarp/pipelines/tree/jmcarp/argo-workflows-v3.2, I can take care of the image upload part in https://github.com/kubeflow/pipelines/tree/master/third_party/argo#upgrade-argo-image", "@zijianjoy you might already have access because I checked `Allow edits by maintainers` on the PR, but just in case I also added you to my fork." ]
"2021-11-17T02:27:30"
"2021-12-01T06:19:14"
"2021-12-01T06:19:14"
CONTRIBUTOR
null
### Feature Area /area backend ### What feature would you like to see? Upgrade argo-workflows to 3.2.x. ### What is the use case or pain point? argo-workflows 3.2 includes some useful features, including conditional retries. ### Is there a workaround currently? Without conditional retries, we can either retry too often (retryPolicy: Always) and waste resources or not retry often enough (retryPolicy: OnError). cc @zijianjoy, since I see you upgraded argo-workflows recently. cc @jli and @joshlipschultz, who are also interested in the feature --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6916/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6916/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6915
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6915/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6915/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6915/events
https://github.com/kubeflow/pipelines/issues/6915
1,055,162,481
I_kwDOB-71UM4-5IBx
6,915
Change the way to read parameters based on google.protobuf.Value
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Switch to ts-proto: https://github.com/kubeflow/pipelines/commit/d60bc99bb61a9f23fce8eabd78634236e45a5dc1\r\n\r\nWe are pending for parameter for create run to be implemented.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen" ]
"2021-11-16T17:53:48"
"2022-08-24T05:40:16"
"2022-08-24T05:40:16"
COLLABORATOR
null
Reference: https://github.com/kubeflow/pipelines/pull/6804/files Parameters change: https://github.com/neuromage/pipelines/blob/35f8448ae97c327c490a6e22159edd3280635489/samples/test/util.py#L520
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6915/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6913
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6913/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6913/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6913/events
https://github.com/kubeflow/pipelines/issues/6913
1,054,892,370
I_kwDOB-71UM4-4GFS
6,913
Can't enable GPU inside a pod
{ "login": "daniel-beyond", "id": 85550395, "node_id": "MDQ6VXNlcjg1NTUwMzk1", "avatar_url": "https://avatars.githubusercontent.com/u/85550395?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daniel-beyond", "html_url": "https://github.com/daniel-beyond", "followers_url": "https://api.github.com/users/daniel-beyond/followers", "following_url": "https://api.github.com/users/daniel-beyond/following{/other_user}", "gists_url": "https://api.github.com/users/daniel-beyond/gists{/gist_id}", "starred_url": "https://api.github.com/users/daniel-beyond/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daniel-beyond/subscriptions", "organizations_url": "https://api.github.com/users/daniel-beyond/orgs", "repos_url": "https://api.github.com/users/daniel-beyond/repos", "events_url": "https://api.github.com/users/daniel-beyond/events{/privacy}", "received_events_url": "https://api.github.com/users/daniel-beyond/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hey how did you solve this?", "@kciy \r\n\r\nHey, the problem was the Kind cluster.\r\nFor some reason no pod was able to detect the GPU on the host, once I shifted to an AWS EKS the same exact configuration worked...\r\nI didn't find a way to use the \"--gpus all\" flag in Kind." ]
"2021-11-16T13:37:48"
"2021-12-20T15:28:57"
"2021-12-09T15:53:06"
NONE
null
### Environment Host - AWS ec2 instance with tesla k80 gpu - Kind cluster with 1 node - Nvidia driver is installed and nvidia-smi cmd works on host - Kfp version 1.8.9 ![image](https://user-images.githubusercontent.com/85550395/141992715-65103723-f00d-478e-9257-553f12bd2fbb.png) ### Steps to reproduce I have a docker container that runs a training model that needs GPU. When running on the host with the docker engine everything works fine with the cmd: "docker run -d --gpus all <image id>" and "nvidia-smi" works from within the container. The model will only run if I add the "--gpus all" flag; when I run the container without this flag, "nvidia-smi" doesn't work and the container won't recognize the GPU. ![image](https://user-images.githubusercontent.com/85550395/141994266-3aa21983-cf4c-47a1-a50d-f92bda7ae256.png) When running the model with kfp as an argo workflow the pod raises the same exception: ![image](https://user-images.githubusercontent.com/85550395/141994266-3aa21983-cf4c-47a1-a50d-f92bda7ae256.png) I already tried to add "set_gpu_limit(1)" to the containerOp but without success; the pod is stuck on pending and the UI shows: ![image](https://user-images.githubusercontent.com/85550395/141995168-b0ce5354-ad12-48ad-b25e-134675ab9359.png) **### Question** - **How do I enable "nvidia-smi" inside the pod through kfp** ??? - **Does this even work on KIND cluster or AWS?** Thanks a lot!
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6913/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6912
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6912/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6912/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6912/events
https://github.com/kubeflow/pipelines/issues/6912
1,054,703,867
I_kwDOB-71UM4-3YD7
6,912
[frontend] pipeline metrics not showing
{ "login": "tomalbrecht", "id": 17781570, "node_id": "MDQ6VXNlcjE3NzgxNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/17781570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomalbrecht", "html_url": "https://github.com/tomalbrecht", "followers_url": "https://api.github.com/users/tomalbrecht/followers", "following_url": "https://api.github.com/users/tomalbrecht/following{/other_user}", "gists_url": "https://api.github.com/users/tomalbrecht/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomalbrecht/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomalbrecht/subscriptions", "organizations_url": "https://api.github.com/users/tomalbrecht/orgs", "repos_url": "https://api.github.com/users/tomalbrecht/repos", "events_url": "https://api.github.com/users/tomalbrecht/events{/privacy}", "received_events_url": "https://api.github.com/users/tomalbrecht/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2186355346, "node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue", "name": "good first issue", "color": "fef2c0", "default": true, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "@tomalbrecht This feature is only supported in v1, but not v2 compatible. ", "@zijianjoy Could we get an update on this issue? \r\n\r\nIt looks like it received quite a few thumbs up but was not addressed in v2 compatible mode issue tacker https://github.com/kubeflow/pipelines/issues/6132\r\n\r\nI think you guys use thumbs up to help prioritizing issue, I just want to express that the issue I linked seems linked to this one and had a good amount of thumbs up that might not be reflected by this page.", "@AlexandreBrown Thank you for bringing this up!\r\n\r\nWe have put the V2 compatible work on hold and currently work fully to V2. We are currently tracking it in our KFP V2 project. That said, we are aware of this requirement and working towards it. The thumbs-up information helps us prioritize the work in the V2 timeline. ", "@zijianjoy : Please can you provide an update on this issue kubeflow/pipelines#7339", "I think we need to make some design decision in order to support this feature on KFPv2. It will be very helpful if you can provide information like this:\r\n\r\n1. What kind of metrics do you want to show on RunList? (For example: accuracy, roc score)\r\n2. How do you want to export these metrics in your pipeline? (For example: create a component with a fixed name that generates parameters with name accuracy/roc-score.)\r\n3. How do you want to use this feature on UI? (Show on RunList? Show on a different page? Do you need Filtering/Sorting?) Do you need any customization?", "We no longer support v2 compatible mode, and the general metrics visualization is support in v2. \r\nOne of the fixing PRs: https://github.com/kubeflow/pipelines/pull/7905" ]
"2021-11-16T10:24:55"
"2023-01-18T08:53:20"
"2023-01-18T08:53:19"
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? kubeflow manifests (branch 1.4) * KFP version: SDK 1.8.9 / build version dev_local ### Steps to reproduce - Copy/Paste Code from here: https://www.kubeflow.org/docs/components/pipelines/sdk/pipelines-metrics/ - Create pipeline and create_run from component ### Expected result - As documented here: https://www.kubeflow.org/docs/components/pipelines/sdk/pipelines-metrics/#view-the-metrics ### Materials and Reference - I ran the pipeline in 'mode=v1dsl.PipelineExecutionMode.V2_COMPATIBLE)' - I've tested both ways (json and file) ![image](https://user-images.githubusercontent.com/17781570/141967606-e29d58eb-7931-49a5-97f1-5e0e5ef2295a.png) ![image](https://user-images.githubusercontent.com/17781570/141967677-6a67d06f-7cce-4349-a890-347b1276fd7e.png) ![image](https://user-images.githubusercontent.com/17781570/141967733-163fcc93-6bd1-44d3-ba24-408dddc9df2e.png) [yamls.zip](https://github.com/kubeflow/pipelines/files/7545502/yamls.zip) --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6912/reactions", "total_count": 10, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6912/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6906
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6906/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6906/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6906/events
https://github.com/kubeflow/pipelines/issues/6906
1,052,590,374
I_kwDOB-71UM4-vUEm
6,906
11/12/2021 Presubmit kubeflow-pipelines-tfx-python37 failure
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The failure is caused by https://github.com/tensorflow/tfx/commit/f796bcb059afa612a3ea9734b7c5e58cb2f2c782 which depends on https://github.com/google/ml-metadata/commit/aa492c474f8b3a54dbb6461ddb7b35c7e69b4a47,\r\nyet none of the changes are in a released package.\r\n\r\nIdeally, the ml-metadata change should be released first, then tfx can make the change with the ml-metadata dependency update. \r\nAssuming the tfx change won't be reverted, which is likely the case, we'll need to wait for the next ml-metadata release and the following dependency updates in tfx.\r\n\r\n/cc @jiyongjung0 " ]
"2021-11-13T07:53:39"
"2021-11-13T09:19:40"
"2021-11-13T09:19:40"
COLLABORATOR
null
https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/6905/kubeflow-pipelines-tfx-python37/1459423904140365824 ``` 2021-11-13 07:39:25.976031: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-11-13 07:39:25.976081: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "/home/prow/go/src/github.com/kubeflow/pipelines/tfx/tfx/orchestration/kubeflow/kubeflow_dag_runner_test.py", line 23, in <module> from tfx.components.statistics_gen import component as statistics_gen_component File "/usr/local/lib/python3.7/site-packages/tfx/components/__init__.py", line 16, in <module> from tfx.components.bulk_inferrer.component import BulkInferrer File "/usr/local/lib/python3.7/site-packages/tfx/components/bulk_inferrer/component.py", line 18, in <module> from tfx import types File "/usr/local/lib/python3.7/site-packages/tfx/types/__init__.py", line 16, in <module> from tfx.types.artifact import Artifact File "/usr/local/lib/python3.7/site-packages/tfx/types/artifact.py", line 24, in <module> from tfx.types.system_artifacts import SystemArtifact File "/usr/local/lib/python3.7/site-packages/tfx/types/system_artifacts.py", line 38, in <module> class Dataset(SystemArtifact): File "/usr/local/lib/python3.7/site-packages/tfx/types/system_artifacts.py", line 40, in Dataset MLMD_SYSTEM_BASE_TYPE = mlmd_types.Dataset().system_type AttributeError: 'Dataset' object has no attribute 'system_type' ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6906/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6903
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6903/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6903/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6903/events
https://github.com/kubeflow/pipelines/issues/6903
1,052,384,784
I_kwDOB-71UM4-uh4Q
6,903
[bug] concurrent.futures.process.BrokenProcessPool using ProcessPoolExecutor
{ "login": "andrijaperovic", "id": 496024, "node_id": "MDQ6VXNlcjQ5NjAyNA==", "avatar_url": "https://avatars.githubusercontent.com/u/496024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andrijaperovic", "html_url": "https://github.com/andrijaperovic", "followers_url": "https://api.github.com/users/andrijaperovic/followers", "following_url": "https://api.github.com/users/andrijaperovic/following{/other_user}", "gists_url": "https://api.github.com/users/andrijaperovic/gists{/gist_id}", "starred_url": "https://api.github.com/users/andrijaperovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andrijaperovic/subscriptions", "organizations_url": "https://api.github.com/users/andrijaperovic/orgs", "repos_url": "https://api.github.com/users/andrijaperovic/repos", "events_url": "https://api.github.com/users/andrijaperovic/events{/privacy}", "received_events_url": "https://api.github.com/users/andrijaperovic/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @andrijaperovic , are you able to run a sample code using ProcessPoolExecutor in an independent Kubernetes pod successfully? It doesn't seem like an issue related to Kubeflow.", "Hi @zijianjoy , looks like the issue was caused primarily by the node pool where the pipeline task was being scheduled which has a limited number of cores (vCPU's). Once we allocated a node pool in our k8s cluster with a suitable number of vCPU's, the pipeline ran successfully. Will go ahead and close this issue. Thanks for your help." ]
"2021-11-12T20:26:39"
"2021-11-19T01:59:40"
"2021-11-19T01:59:40"
NONE
null
### What steps did you take Running the following example code to gather output results from model training. Results are failing to be retrieved from executor at runtime, which leads to downstream exception. ``` with ProcessPoolExecutor() as p_executor: for i in range(nprocs): futures.append(p_executor.submit(pool_single_model_train, Models._member_names_[i], self.train_data, self.test_data, user_ids_test)) resultdict = {} for f in as_completed(futures): try: resultdict.update(f.result()) except Exception as exc: LOGGER.exception('Future process generated an exception: %s' % (exc)) self.score, self.count = resultdict[Models.model.name][1][0], \ resultdict[Models.model.name][1][1] ``` ### What happened: Following exception is raised in the pipeline run: ``` {"stack": "Traceback (most recent call last): File \"/trainer.py\", line 61, in __init__ resultdict.update(f.result()) File \"/opt/conda/lib/python3.7/concurrent/futures/_base.py\", line 428, in result return self.__get_result() File \"/opt/conda/lib/python3.7/concurrent/futures/_base.py\", line 384, in __get_result raise self._exception concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending. "} ``` ### What did you expect to happen: Result should have been completed successfully by ProcessPoolExecutor. ### Environment: Azure AKS v1.21.2 running KFP 1.7.0-alpha.1 * How do you deploy Kubeflow Pipelines (KFP)? https://github.com/kubeflow/pipelines/blob/master/manifests/kustomize/env/azure/readme.md * KFP version: 1.7.0-alpha.1 * KFP SDK version: 1.6.2 ### Anything else you would like to add: N/A ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> /area components --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6903/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6901
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6901/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6901/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6901/events
https://github.com/kubeflow/pipelines/issues/6901
1,051,951,256
I_kwDOB-71UM4-s4CY
6,901
[bug] Everything is added to one item in the artifacts menu
{ "login": "jazzsir", "id": 4714923, "node_id": "MDQ6VXNlcjQ3MTQ5MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/4714923?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jazzsir", "html_url": "https://github.com/jazzsir", "followers_url": "https://api.github.com/users/jazzsir/followers", "following_url": "https://api.github.com/users/jazzsir/following{/other_user}", "gists_url": "https://api.github.com/users/jazzsir/gists{/gist_id}", "starred_url": "https://api.github.com/users/jazzsir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jazzsir/subscriptions", "organizations_url": "https://api.github.com/users/jazzsir/orgs", "repos_url": "https://api.github.com/users/jazzsir/repos", "events_url": "https://api.github.com/users/jazzsir/events{/privacy}", "received_events_url": "https://api.github.com/users/jazzsir/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @zijianjoy ", "Please guide on how to provide value pipeline/workspace and name in artifact via KFP library\r\nfacing same issue", "This is fixed by https://github.com/kubeflow/pipelines/pull/6989. We are planning to make new release for it." ]
"2021-11-12T13:08:41"
"2022-02-04T20:51:47"
null
NONE
null
### What steps did you take I deployed various types of pipelines, for example https://github.com/Building-ML-Pipelines/building-machine-learning-pipelines/tree/master/pipelines/kubeflow_pipelines https://github.com/jazzsir/kubeflow-pipelines/blob/master/simple-pipeline/mnist.ipynb https://github.com/jazzsir/kubeflow-pipelines/blob/master/simple-pipeline/kfp-v2.ipynb https://github.com/jazzsir/kubeflow-pipelines/blob/master/simple-pipeline/kfp-v2-artifact.ipynb https://github.com/CODAIT/flight-delay-notebooks <!-- A clear and concise description of what the bug is.--> ### What happened: In the Artifacts menu 1. All artifacts are added to one item whose "Pipeline/Workspace" field has no value. 2. I can't select artifacts generated by [pipelines(v2)](https://github.com/jazzsir/kubeflow-pipelines/blob/master/simple-pipeline/kfp-v2-artifact.ipynb) with Python typing of the inputs/outputs (Model, Dataset..) to view "Overview" and "Lineage Explorer" because they don't have a name (marked with a red rectangle in the picture). 3. Unlike Kubeflow 1.3 (KFP 1.6.0), [pipelines](https://github.com/jazzsir/kubeflow-pipelines/blob/master/simple-pipeline/mnist.ipynb) without Python typing of the inputs/outputs (Model, Dataset..) leave no artifacts. 4. Only pipelines generated by [kubeflow_dag_runner of TFX](https://github.com/Building-ML-Pipelines/building-machine-learning-pipelines/blob/master/pipelines/kubeflow_pipelines/pipeline_kubeflow.py) have a name In the Executions menu 1. "Run ID/Workspace/Pipeline" fields of Executions generated by [pipelines(v2)](https://github.com/jazzsir/kubeflow-pipelines/blob/master/simple-pipeline/kfp-v2-artifact.ipynb) with Python typing of the inputs/outputs (Model, Dataset..) have no value. ![www](https://user-images.githubusercontent.com/4714923/141599188-4938601f-583e-478c-b3e6-1bdfa3c4a6d3.png) ### What did you expect to happen: In the Artifacts menu 1. 
All artifacts are added as separate entries and their "Pipeline/Workspace" fields have a value. 2. All pipelines leave artifacts 3. All items have a name so that we can select them and view "Overview" and "Lineage Explorer" In the Executions menu 1. All "Run ID/Workspace/Pipeline" fields of Executions have a value. ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? Full Kubeflow deployment (Kubeflow 1.4) * KFP version: 1.7.0 * KFP SDK version: 1.7.2 and 1.8.9 ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> /area frontend /area backend <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6901/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6901/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6900
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6900/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6900/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6900/events
https://github.com/kubeflow/pipelines/issues/6900
1,051,558,911
I_kwDOB-71UM4-rYP_
6,900
[Python version upgrade] Should we consider to upgrade the base/default images from Python:3.7 to 3.8
{ "login": "haoxins", "id": 2569835, "node_id": "MDQ6VXNlcjI1Njk4MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/2569835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haoxins", "html_url": "https://github.com/haoxins", "followers_url": "https://api.github.com/users/haoxins/followers", "following_url": "https://api.github.com/users/haoxins/following{/other_user}", "gists_url": "https://api.github.com/users/haoxins/gists{/gist_id}", "starred_url": "https://api.github.com/users/haoxins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haoxins/subscriptions", "organizations_url": "https://api.github.com/users/haoxins/orgs", "repos_url": "https://api.github.com/users/haoxins/repos", "events_url": "https://api.github.com/users/haoxins/events{/privacy}", "received_events_url": "https://api.github.com/users/haoxins/received_events", "type": "User", "site_admin": false }
[ { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "cc @chensun ", "Hi @haoxins, in order not to break our existing users, we will not update the image to 3.8 until we reach a point of deprecating 3.7. Thanks for the advice!" ]
"2021-11-12T03:54:06"
"2021-12-13T19:45:26"
"2021-12-13T19:45:26"
CONTRIBUTOR
null
### Feature Area /area components ### What feature would you like to see? As `Python 3.10` has been released and `Python 3.8` has become most OSes' default `Python(3)` version, I suggest we consider upgrading the default Python version (such as the base images in the `Dockerfile` and the default images in the output pipeline yaml) to `3.8` --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6900/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6899
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6899/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6899/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6899/events
https://github.com/kubeflow/pipelines/issues/6899
1,051,386,870
I_kwDOB-71UM4-quP2
6,899
python Code sample for configmap , volume creation
{ "login": "kabilan6", "id": 13338461, "node_id": "MDQ6VXNlcjEzMzM4NDYx", "avatar_url": "https://avatars.githubusercontent.com/u/13338461?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kabilan6", "html_url": "https://github.com/kabilan6", "followers_url": "https://api.github.com/users/kabilan6/followers", "following_url": "https://api.github.com/users/kabilan6/following{/other_user}", "gists_url": "https://api.github.com/users/kabilan6/gists{/gist_id}", "starred_url": "https://api.github.com/users/kabilan6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kabilan6/subscriptions", "organizations_url": "https://api.github.com/users/kabilan6/orgs", "repos_url": "https://api.github.com/users/kabilan6/repos", "events_url": "https://api.github.com/users/kabilan6/events{/privacy}", "received_events_url": "https://api.github.com/users/kabilan6/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "Hi @kabilan6 . There aren't any specific code samples for config maps but It should work the same way as other Kubernetes objects. For creating a PVC you can leverage [VolumeOp](https://github.com/kubeflow/pipelines/tree/master/samples/core/volume_ops) which utilises ResourceOps behind the scene. You can also leverage pre-built components by the community (https://github.com/kubeflow/pipelines/tree/master/components/contrib/kubernetes). ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-11T21:46:06"
"2022-03-02T10:06:36"
null
NONE
null
Are there any code samples for ConfigMap and persistent volume creation? I tried ResourceOp but it's not working.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6899/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6896
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6896/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6896/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6896/events
https://github.com/kubeflow/pipelines/issues/6896
1,051,025,575
I_kwDOB-71UM4-pWCn
6,896
[bug] The pipeline is terminated immediately after startup
{ "login": "SeibertronSS", "id": 69496864, "node_id": "MDQ6VXNlcjY5NDk2ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/69496864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SeibertronSS", "html_url": "https://github.com/SeibertronSS", "followers_url": "https://api.github.com/users/SeibertronSS/followers", "following_url": "https://api.github.com/users/SeibertronSS/following{/other_user}", "gists_url": "https://api.github.com/users/SeibertronSS/gists{/gist_id}", "starred_url": "https://api.github.com/users/SeibertronSS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SeibertronSS/subscriptions", "organizations_url": "https://api.github.com/users/SeibertronSS/orgs", "repos_url": "https://api.github.com/users/SeibertronSS/repos", "events_url": "https://api.github.com/users/SeibertronSS/events{/privacy}", "received_events_url": "https://api.github.com/users/SeibertronSS/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Can you try to use emissary executor https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/#emissary-executor", "> Can you try to use emissary executor https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/#emissary-executor\r\nThanks for your suggestions, I will try. After I use emissary executor, can I still use docker image? My pipeline has problems only after it has all four components, and it can work normally when it has only four or less components. Why is this?\r\n", "@SeibertronSS \r\n\r\nHow does the pipeline run work after applying emissary executor? \r\n\r\nFor the error message of your issue, would you like to look at this SO post to see if this applies to your deployment resource yaml? https://stackoverflow.com/questions/64990279/docker-error-process-linux-go319-getting-the-final-childs-pid-from-pipe-cause\r\n\r\nIf your pipeline run still fails, would you like to share your pipeline template which has 4 components?", "> @SeibertronSS\r\n> \r\n> How does the pipeline run work after applying emissary executor?\r\n> \r\n> For the error message of your issue, would you like to look at this SO post to see if this applies to your deployment resource yaml? https://stackoverflow.com/questions/64990279/docker-error-process-linux-go319-getting-the-final-childs-pid-from-pipe-cause\r\n> \r\n> If your pipeline run still fails, would you like to share your pipeline template which has 4 components?\r\n\r\nAfter our investigation, we found that there was a problem with the routing table of our cluster, which caused duplicate ClusterIP to appear. I will close this issue. Thank you very much for your time." ]
"2021-11-11T14:16:41"
"2021-11-19T06:36:17"
"2021-11-19T06:36:17"
NONE
null
### What happened: I created a pipeline with four components, but it was terminated immediately after it was started, and the status of the pipeline was displayed as `Unknown`. ### What did you expect to happen: Pipeline can run normally. ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? I deployed it by kubeflow manifests * KFP version: 1.7.0 * KFP SDK version: I used the pythonSDK, the version is 1.8.9 ### Anything else you would like to add: The version of **kubernetes is v1.19.9**. The error message is displayed as: ``` Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4s default-scheduler Successfully assigned kubeflow/pipeline-func-2qv86-3719103562 to 11.xx.x.xxx Normal Pulled 2s kubelet Container image "gcr.io/ml-pipeline/argoexec:v3.1.6-patch-license-compliance" already present on machine Normal Created 1s kubelet Created container wait Warning Failed 0s kubelet Error: failed to start container "wait": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:319: getting the final child's pid from pipe caused \"EOF\"": unknown Normal Pulled 0s kubelet Container image "is.default.svc.explore.academy.jd.local/kubekit:0.2.3" already present on machine Warning Failed 0s kubelet Error: cannot find volume "pipeline-runner-token-7lcgk" to mount into container "main" ``` ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> /area components --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6896/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6896/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6895
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6895/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6895/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6895/events
https://github.com/kubeflow/pipelines/issues/6895
1,050,807,696
I_kwDOB-71UM4-og2Q
6,895
[sdk] List of input paths is consumed as list of input values
{ "login": "skogsbrus", "id": 17073827, "node_id": "MDQ6VXNlcjE3MDczODI3", "avatar_url": "https://avatars.githubusercontent.com/u/17073827?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skogsbrus", "html_url": "https://github.com/skogsbrus", "followers_url": "https://api.github.com/users/skogsbrus/followers", "following_url": "https://api.github.com/users/skogsbrus/following{/other_user}", "gists_url": "https://api.github.com/users/skogsbrus/gists{/gist_id}", "starred_url": "https://api.github.com/users/skogsbrus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skogsbrus/subscriptions", "organizations_url": "https://api.github.com/users/skogsbrus/orgs", "repos_url": "https://api.github.com/users/skogsbrus/repos", "events_url": "https://api.github.com/users/skogsbrus/events{/privacy}", "received_events_url": "https://api.github.com/users/skogsbrus/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I'm having this problem too, any update?", "If you are able to structure your code so that you pass a directory instead (containing the files you're interested in), I think that should work. Alternatively you can use mounted volumes to pass arbitrary files.\r\n\r\nBut AFAIK there's no fix for this specific issue.", "Hello, I'm very new to Kubeflow, just stumbled on this issue when I was looking for a solution for a similar issue. \r\nIn my case, I've a Modeling function that consumes a dataframe created by another function and also it requires to consume output path from a previous function that handles the download, so that it can read the files. I was wondering if there is an option to pass multiple InputPaths to a container. As I mentioned, I'm very new to ML and MLOps, so if anything that I've mentioned is clear, I can try to explain again. Please let me know", "Please ignore my question, I figured I can use InputPath twice in the same function and point them to different artifcats from previous containers. ", "Yup, if you just have two paths that works. It doesn't work well for a large number of paths / dynamic number of paths though." ]
"2021-11-11T10:07:09"
"2022-12-23T07:43:35"
null
NONE
null
### Environment * KFP version: 1.6.4 * KFP SDK version: KF 1.3 release * All dependencies version: ``` kfp==1.6.4 kfp-pipeline-spec==0.1.10 kfp-server-api==1.7.0 ``` ### Steps to reproduce I'm not confident that this isn't a user error, but I haven't found any documentation that describes my use case where a component consumes a list of input paths. 1. Create file outputs that are too large to pass as values. 2. Pass them as inputs to a component that expects a list of input paths 3. The components that created the files fail with the error `This step is in Error state with this message: failed to save outputs: Request entity too large: limit is 3145728`. According to https://github.com/kubeflow/pipelines/issues/3134, this suggests that the final component is trying to consume them as values as opposed to paths. ### Expected result The created paths should be passed as InputPaths, not as InputValues ### Materials and Reference Source code: ``` import kfp from kfp import dsl from kfp.components import InputPath, OutputPath from typing import List def create_component_from_func(): def decorator(func): return kfp.components.create_component_from_func(func=func) return decorator @create_component_from_func() def list_of_input_paths_op(input_paths: List[InputPath]): for p in input_paths: print(f"Got input path {p}") @create_component_from_func() def create_file_op(output_path: OutputPath()): from pathlib import Path Path(output_path).parent.mkdir(parents=True, exist_ok=True) with open(output_path, "wb") as out: out.seek((1024 * 1024 * 10) - 1) out.write(b'\0') @dsl.pipeline(name="list-of-input-paths", description="Demonstrates a bug") def pipeline(): a = create_file_op() b = create_file_op() list_of_input_paths_op([a.output, b.output]) if __name__ == "__main__": kfp.compiler.Compiler().compile(pipeline, "list_of_inputpaths.yml") ``` If I specify the type hint for `list_of_input_paths_op` as `List[InputPath()]` instead, I get an error: ``` TypeError: Parameters to generic types must be types. Got <kfp.components._python_op.InputPath object at 0x7fa54173caf0>. ``` <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6895/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6895/timeline
null
null
null
null
false
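The directory-based workaround suggested in the comments on the issue above (pass one directory instead of a variable-length list of file paths) can be sketched without KFP at all; `produce_dir` and `consume_dir` below are hypothetical stand-ins for the component bodies, not KFP SDK API:

```python
import tempfile
from pathlib import Path

def produce_dir(output_dir: str, n_files: int) -> None:
    """Stand-in for a producer component body: write several
    artifacts into ONE output directory instead of declaring a
    separate OutputPath per file."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i in range(n_files):
        (out / f"part-{i}.bin").write_bytes(b"\0" * 1024)

def consume_dir(input_dir: str) -> list:
    """Stand-in for the consumer: it receives a single path
    (one InputPath in real KFP) and iterates over the directory,
    so the number of files can vary at runtime."""
    return sorted(p.name for p in Path(input_dir).iterdir())

with tempfile.TemporaryDirectory() as d:
    produce_dir(d, n_files=3)
    print(consume_dir(d))  # ['part-0.bin', 'part-1.bin', 'part-2.bin']
```

Because only one path crosses the component boundary, the file contents are never inlined as parameter values, which sidesteps the `Request entity too large` limit described above.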
https://api.github.com/repos/kubeflow/pipelines/issues/6894
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6894/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6894/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6894/events
https://github.com/kubeflow/pipelines/issues/6894
1,050,695,987
I_kwDOB-71UM4-oFkz
6,894
[backend] Archived Main Log File is Empty
{ "login": "twolffpiggott", "id": 13351115, "node_id": "MDQ6VXNlcjEzMzUxMTE1", "avatar_url": "https://avatars.githubusercontent.com/u/13351115?v=4", "gravatar_id": "", "url": "https://api.github.com/users/twolffpiggott", "html_url": "https://github.com/twolffpiggott", "followers_url": "https://api.github.com/users/twolffpiggott/followers", "following_url": "https://api.github.com/users/twolffpiggott/following{/other_user}", "gists_url": "https://api.github.com/users/twolffpiggott/gists{/gist_id}", "starred_url": "https://api.github.com/users/twolffpiggott/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/twolffpiggott/subscriptions", "organizations_url": "https://api.github.com/users/twolffpiggott/orgs", "repos_url": "https://api.github.com/users/twolffpiggott/repos", "events_url": "https://api.github.com/users/twolffpiggott/events{/privacy}", "received_events_url": "https://api.github.com/users/twolffpiggott/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-11T08:23:02"
"2022-03-02T10:06:41"
null
NONE
null
### Expected result Log archiving with MinIO worked as expected at first, but at some point the archived main logs became empty files. The front-end displays the archived files, and these exist on MinIO; however, the main logs are always empty (0 bytes) on MinIO. The output artifacts (apart from the main logs) are correctly archived and non-empty on MinIO. #### Logs are non-empty for pipeline run ![Screenshot 2021-11-11 at 09 43 12](https://user-images.githubusercontent.com/13351115/141262816-5e0538cd-797f-446b-969b-5154812a5787.png) #### Wait container shows archive occurring ![Screenshot 2021-11-11 at 09 25 57](https://user-images.githubusercontent.com/13351115/141262385-27dce5b8-ac8e-40de-b020-42e51eba5d0a.png) Note that there is a "Successfully saved file" info log line after initiating saving the output artifact but not after initiating saving the main logs. #### Argo Workflow shows the expected archive ![Screenshot 2021-11-11 at 09 24 31](https://user-images.githubusercontent.com/13351115/141262284-9a2d108c-0783-493f-94d6-dd22b1d9d26e.png) #### Archived files (including main logs) display on frontend ![Screenshot 2021-11-11 at 09 42 40](https://user-images.githubusercontent.com/13351115/141262730-dd639591-090b-42ee-a577-7141c4a0eb3d.png) #### Expected log files exist on MinIO but are empty for the main logs ![Screenshot 2021-11-11 at 09 40 34](https://user-images.githubusercontent.com/13351115/141262661-3af58240-526b-4138-bfd7-fe1e3b083c66.png) ### Steps to reproduce Set Kubeflow to archive logs using MinIO as introduced in [this PR](https://github.com/kubeflow/pipelines/pull/2081), i.e. set the following env vars: In `ml-pipeline-apiserver-deployment`: ``` - name: OBJECTSTORECONFIG_BUCKETNAME value: mlpipeline - name: MINIO_SERVICE_SERVICE_HOST value: minio-service.kubeflow - name: MINIO_SERVICE_SERVICE_PORT value: "9000" ``` In `ml-pipeline-ui-deployment`: ``` - name: ARGO_ARCHIVE_LOGS value: "true" - name: ARGO_ARCHIVE_ARTIFACTORY value: minio - name: ARGO_ARCHIVE_BUCKETNAME value: mlpipeline - name: ARGO_ARCHIVE_PREFIX value: artifacts ``` This log archiving worked as expected at first, but at some point the archived main logs became empty files. The output artifacts are correctly archived throughout. ### Environment * How did you deploy Kubeflow Pipelines (KFP)? AWS EKS * KFP version: 1.1.0 (`988f5b0`) <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6894/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6888
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6888/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6888/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6888/events
https://github.com/kubeflow/pipelines/issues/6888
1,049,318,792
I_kwDOB-71UM4-i1WI
6,888
[feature] Abstract KFP Argo workflow client functions
{ "login": "Tomcli", "id": 10889249, "node_id": "MDQ6VXNlcjEwODg5MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10889249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tomcli", "html_url": "https://github.com/Tomcli", "followers_url": "https://api.github.com/users/Tomcli/followers", "following_url": "https://api.github.com/users/Tomcli/following{/other_user}", "gists_url": "https://api.github.com/users/Tomcli/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tomcli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tomcli/subscriptions", "organizations_url": "https://api.github.com/users/Tomcli/orgs", "repos_url": "https://api.github.com/users/Tomcli/repos", "events_url": "https://api.github.com/users/Tomcli/events{/privacy}", "received_events_url": "https://api.github.com/users/Tomcli/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "/cc @capri-xiyue @Bobgy ", "/cc @james-jwu ", "My 2 cents, we should also try to keep the interface clean enough so that not just argo or tekton but airflow or any work flow engine should be able to integrate well. ", "> My 2 cents, we should also try to keep the interface clean enough so that not just argo or tekton but airflow or any work flow engine should be able to integrate well.\r\n\r\nYes, we will try to keep Argo and Tekton names out of the pictures for this new interface. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-10T02:01:40"
"2022-03-02T10:06:18"
null
MEMBER
null
### Feature Area Currently, the KFP backend still relies heavily on the Argo golang client to run, get, update, and retry workflows. This remains true after implementing the v2 backend compiler, because the compiled workflow needs the Argo golang client to manage the Argo workflows on Kubernetes. Since the Argo golang client needs to be part of the KFP backend code, we are proposing to abstract some common Argo client functions into a common golang interface, so that other backend runtimes such as Tekton can also be plugged in this way. Here is a rough doc we put up: https://docs.google.com/document/d/1Kz0WEYUesxOU8YN9QeXlsJBsalllOyr9ygpTvcGkuvA/edit# We can take up this work if we agree on this approach. ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6888/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6888/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6885
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6885/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6885/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6885/events
https://github.com/kubeflow/pipelines/issues/6885
1,048,744,154
I_kwDOB-71UM4-gpDa
6,885
Trigger Airflow DAG from kubeflow V2 pipeline SDK
{ "login": "ajaykamal3", "id": 85946306, "node_id": "MDQ6VXNlcjg1OTQ2MzA2", "avatar_url": "https://avatars.githubusercontent.com/u/85946306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ajaykamal3", "html_url": "https://github.com/ajaykamal3", "followers_url": "https://api.github.com/users/ajaykamal3/followers", "following_url": "https://api.github.com/users/ajaykamal3/following{/other_user}", "gists_url": "https://api.github.com/users/ajaykamal3/gists{/gist_id}", "starred_url": "https://api.github.com/users/ajaykamal3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ajaykamal3/subscriptions", "organizations_url": "https://api.github.com/users/ajaykamal3/orgs", "repos_url": "https://api.github.com/users/ajaykamal3/repos", "events_url": "https://api.github.com/users/ajaykamal3/events{/privacy}", "received_events_url": "https://api.github.com/users/ajaykamal3/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "In kubeflow you can use exit handler https://github.com/kubeflow/pipelines/blob/927d2a9f2dfdb90ae156979b9e0d72afa14adcd6/samples/core/exit_handler/exit_handler.py#L47. Ref: https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.dsl.html#kfp.dsl.ExitHandler", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-09T15:30:55"
"2022-03-02T11:00:52"
"2022-03-02T11:00:52"
NONE
null
### Problem Currently I have a Vertex AI pipeline built using the Kubeflow v2 pipeline SDK (Python function based). The last step of the pipeline saves the data to a BigQuery table. I would like to trigger an existing Airflow DAG which consumes the data stored in the BigQuery table saved by the Vertex AI pipeline. How can I trigger the Airflow DAG right after the Vertex AI pipeline run has completed? How can I achieve this in Kubeflow? If it is not possible in Kubeflow or Vertex AI, any other way of solving this problem using other Google Cloud Platform services would be really appreciated. Please suggest a solution.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6885/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6883
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6883/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6883/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6883/events
https://github.com/kubeflow/pipelines/issues/6883
1,047,531,349
I_kwDOB-71UM4-cA9V
6,883
[sdk] Access token used in wait_for_run_completion expires before it is refreshed
{ "login": "hieuhc", "id": 8268223, "node_id": "MDQ6VXNlcjgyNjgyMjM=", "avatar_url": "https://avatars.githubusercontent.com/u/8268223?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hieuhc", "html_url": "https://github.com/hieuhc", "followers_url": "https://api.github.com/users/hieuhc/followers", "following_url": "https://api.github.com/users/hieuhc/following{/other_user}", "gists_url": "https://api.github.com/users/hieuhc/gists{/gist_id}", "starred_url": "https://api.github.com/users/hieuhc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hieuhc/subscriptions", "organizations_url": "https://api.github.com/users/hieuhc/orgs", "repos_url": "https://api.github.com/users/hieuhc/repos", "events_url": "https://api.github.com/users/hieuhc/events{/privacy}", "received_events_url": "https://api.github.com/users/hieuhc/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2021-11-08T14:39:30"
"2021-11-23T19:35:09"
"2021-11-23T19:35:09"
CONTRIBUTOR
null
### Environment * KFP version: <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> 1.8.9 * KFP SDK version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> 1.7.0 ### Steps to reproduce There is no concrete way to reproduce this bug. In our case, we'd like to check a long-running pipeline's status with `wait_for_run_completion`. This method tries to refresh the access token every `55` minutes, but my access token sometimes has a shorter expiration time, e.g. 40 or 45 minutes. The status-checking step then fails with `401 Unauthorized`. The pipelines are run in AI Platform in GCP, and the access tokens are generated from a service account. We have many different pipelines running under this same service account. <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ### Expected result `wait_for_run_completion` should work regardless of the access token's expiration time. <!-- What should the correct behavior be? --> <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍. I have a local fix that refreshes the access token whenever we receive a `401 Unauthorized` error; I can open a PR with it if that makes sense.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6883/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6883/timeline
null
completed
null
null
false
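The fix the reporter of the issue above describes — refreshing the credential when a request actually comes back `401` instead of on a fixed 55-minute timer — amounts to a small retry wrapper. A stand-alone sketch (the `Unauthorized`, `fetch_token`, and `call_api` names are hypothetical stand-ins, not KFP SDK API):

```python
class Unauthorized(Exception):
    """Stand-in for the 401 error raised by the API client."""

def with_token_refresh(call_api, fetch_token, max_retries=1):
    """Call `call_api(token)`; on a 401, fetch a fresh token and
    retry immediately, rather than refreshing on a wall-clock
    schedule that may not match the token's real lifetime."""
    token = fetch_token()
    for attempt in range(max_retries + 1):
        try:
            return call_api(token)
        except Unauthorized:
            if attempt == max_retries:
                raise  # still unauthorized after a fresh token
            token = fetch_token()

# Demo: the first token is already expired when it is used.
tokens = iter(["stale-token", "fresh-token"])
def fetch_token():
    return next(tokens)
def call_api(token):
    if token == "stale-token":
        raise Unauthorized
    return "run-status: Succeeded"

print(with_token_refresh(call_api, fetch_token))
```

The bounded retry matters: if a freshly issued token is also rejected, the error is something other than expiry and should propagate.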
https://api.github.com/repos/kubeflow/pipelines/issues/6880
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6880/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6880/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6880/events
https://github.com/kubeflow/pipelines/issues/6880
1,047,074,979
I_kwDOB-71UM4-aRij
6,880
Provide an option to increase/attach shared memory for specific ContainerOp
{ "login": "arunnalpet", "id": 23610651, "node_id": "MDQ6VXNlcjIzNjEwNjUx", "avatar_url": "https://avatars.githubusercontent.com/u/23610651?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arunnalpet", "html_url": "https://github.com/arunnalpet", "followers_url": "https://api.github.com/users/arunnalpet/followers", "following_url": "https://api.github.com/users/arunnalpet/following{/other_user}", "gists_url": "https://api.github.com/users/arunnalpet/gists{/gist_id}", "starred_url": "https://api.github.com/users/arunnalpet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arunnalpet/subscriptions", "organizations_url": "https://api.github.com/users/arunnalpet/orgs", "repos_url": "https://api.github.com/users/arunnalpet/repos", "events_url": "https://api.github.com/users/arunnalpet/events{/privacy}", "received_events_url": "https://api.github.com/users/arunnalpet/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Not sure if there's anything special with `/dev/shm`. But KFP DSL v1 supports `add_volume` and `add_volume_mount`. Would that work for you?", "Hi @arunnalpet, I encountered this problem too, and I resolved as follows.\r\n\r\n```python\r\nimport kfp.dsl as dsl\r\nimport kubernetes as k8s # should pip install kubernetes first\r\n\r\n@dsl.pipeline(name=\"test_training\")\r\ndef training_pipeline():\r\n volume = dsl.PipelineVolume(volume=k8s.client.V1Volume(\r\n name=\"shm\",\r\n empty_dir=k8s.client.V1EmptyDirVolumeSource(medium='Memory')))\r\n\r\n training_op.add_pvolumes({'/dev/shm': volume})\r\n\r\n```\r\n\r\nHope this helps.\r\n\r\n*Rerference: https://stackoverflow.com/a/63806311*\r\n*As for why medium is set to **Memory** is described [here](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)*", "Thanks @b02202050 .. let me try this out, and will close the issue subsequently.. ", "Adding @b02202050 code results in `This step is in Error state with this message: OOMKilled (exit code 137)` after training step in `wait` container.\r\n" ]
"2021-11-08T06:41:38"
"2022-02-24T15:42:25"
null
NONE
null
### What steps did you take While creating a ContainerOp, there is no option to attach a volume for /dev/shm. This is needed for training PyTorch models: training with a larger batch size throws the error below, while training with tiny batch sizes goes through just fine. ### What happened: Currently the code fails with: `Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit. ` ### What did you expect to happen: Either shared memory should be mounted automatically for container operations, or we should have an option to configure shared memory using the pipeline DSL. I do not see any examples around this either. ### Anything else you would like to add: This is taken care of when spinning up notebook instances with the "Enabled Shared Memory" option. The same option needs to be provided with ContainerOp in the pipeline DSL. The same is discussed in this [issue thread](https://github.com/kubeflow/kubeflow/issues/2522#issuecomment-520190909). ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> /area backend /area sdk <!-- /area testing --> <!-- /area samples --> /area components --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6880/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6880/timeline
null
null
null
null
false
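The workaround in the comments on the issue above boils down to mounting a memory-backed `emptyDir` at `/dev/shm`. Sketched as the plain pod-spec fragment it produces (a dict, so no `kubernetes` client is required; the helper name is hypothetical):

```python
def shm_volume_patch(size_limit=None):
    """Build the Kubernetes volume + mount that give a container
    a larger /dev/shm: an emptyDir with medium "Memory" (tmpfs).
    `size_limit` (e.g. "2Gi") caps the tmpfs; note that memory-backed
    emptyDir usage counts against the pod's memory limit, which is
    the likely cause of the OOMKilled report in the last comment."""
    empty_dir = {"medium": "Memory"}
    if size_limit is not None:
        empty_dir["sizeLimit"] = size_limit
    return {
        "volumes": [{"name": "shm", "emptyDir": empty_dir}],
        "volumeMounts": [{"name": "shm", "mountPath": "/dev/shm"}],
    }

patch = shm_volume_patch("2Gi")
print(patch["volumes"][0]["emptyDir"])  # {'medium': 'Memory', 'sizeLimit': '2Gi'}
```

In KFP DSL v1 the same spec can be attached to a ContainerOp via `add_volume` / `add_volume_mount` (or `add_pvolumes`, as the comment above shows); the dict mirrors what those calls serialize into the workflow.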
https://api.github.com/repos/kubeflow/pipelines/issues/6873
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6873/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6873/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6873/events
https://github.com/kubeflow/pipelines/issues/6873
1,046,085,034
I_kwDOB-71UM4-Wf2q
6,873
[frontend] "Pipelines" and "Experiments" icons disappear when left sidebar is collapsed
{ "login": "jli", "id": 133466, "node_id": "MDQ6VXNlcjEzMzQ2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/133466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jli", "html_url": "https://github.com/jli", "followers_url": "https://api.github.com/users/jli/followers", "following_url": "https://api.github.com/users/jli/following{/other_user}", "gists_url": "https://api.github.com/users/jli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jli/subscriptions", "organizations_url": "https://api.github.com/users/jli/orgs", "repos_url": "https://api.github.com/users/jli/repos", "events_url": "https://api.github.com/users/jli/events{/privacy}", "received_events_url": "https://api.github.com/users/jli/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks for the report, this is fixed in https://github.com/kubeflow/pipelines/pull/6440, I expect the next release to fix it." ]
"2021-11-05T17:33:52"
"2021-11-12T00:46:17"
"2021-11-12T00:46:17"
CONTRIBUTOR
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? standalone * KFP version: 1.7.1 ### Steps to reproduce Collapse the left sidebar. Observe that the first 2 icons, "Pipelines" and "Experiments", aren't visible: <img width="66" alt="Screen Shot 2021-11-05 at 13 30 20" src="https://user-images.githubusercontent.com/133466/140553518-b3eae248-ebc8-4c5f-8f6a-4ebe6279d86b.png"> ### Expected result Pipelines and Experiments icons are visible when the sidebar is collapsed. ### Materials and Reference - When the sidebar is expanded, the icons and text appear normally. - When the sidebar is collapsed, I can still click on the empty spaces where the icons should be, and navigate to the respective pages. - I took a quick look at the HTML changes when the sidebar is collapsed, and didn't notice anything obviously wrong. The first 2 sidebar buttons seem to have all the same classes as the rest of the buttons when expanded and collapsed. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6873/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6872
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6872/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6872/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6872/events
https://github.com/kubeflow/pipelines/issues/6872
1,045,595,790
I_kwDOB-71UM4-UoaO
6,872
[bug] Recurring runs with cron trigger disappearing
{ "login": "Nenrikido", "id": 23308096, "node_id": "MDQ6VXNlcjIzMzA4MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/23308096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nenrikido", "html_url": "https://github.com/Nenrikido", "followers_url": "https://api.github.com/users/Nenrikido/followers", "following_url": "https://api.github.com/users/Nenrikido/following{/other_user}", "gists_url": "https://api.github.com/users/Nenrikido/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nenrikido/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nenrikido/subscriptions", "organizations_url": "https://api.github.com/users/Nenrikido/orgs", "repos_url": "https://api.github.com/users/Nenrikido/repos", "events_url": "https://api.github.com/users/Nenrikido/events{/privacy}", "received_events_url": "https://api.github.com/users/Nenrikido/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-05T09:02:12"
"2022-03-02T10:06:47"
null
NONE
null
### What steps did you take I created a recurring run of a pipeline with a cron trigger; the cron value is `0 0 0 3 * *`, which is expected to run on the 3rd day of each month. Here's the entire setup of the recurring run trigger: Enabled Yes Trigger 0 0 0 3 * * Max. concurrent runs 1 Catchup false Start time 02/12/2021, 15:15:00 ### What happened: When the pipeline executes, sometimes the recurring run is deleted. I cannot find what is different about the run which makes the recurring run disappear. There is no bug happening during the pipeline execution, and all my other pipelines and recurring runs without crons are working just fine. ### What did you expect to happen: I expect the recurring run to stay up after each run. ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? KFP standalone on GKE <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.7.0 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: ``` kfp 1.6.2 kfp-pipeline-spec 0.1.7 kfp-server-api 1.5.0 ``` <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> I can't put any label on this issue since I don't know exactly where it comes from. Please don't hesitate to ask me for more information if there is any I can fetch. One thing I should note is that this issue happened several times, with several different start_time values. ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6872/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6870
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6870/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6870/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6870/events
https://github.com/kubeflow/pipelines/issues/6870
1,045,169,610
I_kwDOB-71UM4-TAXK
6,870
[frontend] Run details page prefetches all artifacts; initial page load transfers a lot of data
{ "login": "jli", "id": 133466, "node_id": "MDQ6VXNlcjEzMzQ2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/133466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jli", "html_url": "https://github.com/jli", "followers_url": "https://api.github.com/users/jli/followers", "following_url": "https://api.github.com/users/jli/following{/other_user}", "gists_url": "https://api.github.com/users/jli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jli/subscriptions", "organizations_url": "https://api.github.com/users/jli/orgs", "repos_url": "https://api.github.com/users/jli/repos", "events_url": "https://api.github.com/users/jli/events{/privacy}", "received_events_url": "https://api.github.com/users/jli/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hello @jli , we should probably change the fetching of all visualization artifacts when user switch to `Run output` tab on top of Run Details page. https://github.com/kubeflow/pipelines/blob/master/frontend/src/pages/RunDetails.tsx#L581-L615", "@zijianjoy, I also noticed that the run details page refreshes artifacts periodically until the run is finished, meaning that we load mostly the same artifacts many times. Can artifacts change for a given step after they're written? I was wondering if we needed to keep refetching artifacts for completed nodes.", "@jmcarp After Artifact is written, its state can still be changed (For example: MARK_FOR_DELETE). But we don't need to fetch Artifact if user is not on `Run Output` tab, so by default you are not fetching all artifacts on `Graph` tab. We can separate the fetching mechanism among tabs in `RunDetails.tsx` file. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-04T20:23:35"
"2022-03-23T06:11:04"
null
CONTRIBUTOR
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? our own deployment to GKE (not sure if standalone or full) * KFP version: 1.5.1 ### Steps to reproduce background: My team has a large experiment which trains ~15 models in parallel. Each model training process includes a QC/reporting step, which generates an HTML report with various metrics and visualizations. Depending on the model and the data, the reports can be between 1MB and 30MB (they can include many images). These reports are stored on GCS and rendered as `web-app` type visualizations. When opening the KFP run details page for these experiments, the frontend makes a bunch of requests to `/artifacts/get?source=minio&...mlpipeline-ui-metadata.tgz`, and then another bunch of requests to `/artifacts/get?source=gcs&...our_report_file.html`. Because these report files can be large, and there are many of them in this case, this is resulting in almost 500MB of data transfers every time someone opens the KFP UI page for this run: <img width="1105" alt="Screen Shot 2021-11-04 at 14 52 33" src="https://user-images.githubusercontent.com/133466/140413075-387a0d20-3b16-44ae-af8e-f139b62d471f.png"> This adds load on the minio and ml-pipeline-ui services and eats bandwidth. ### Expected result Instead of prefetching all the reports, only fetch them when the user clicks on the "Visualizations" tab. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6870/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6870/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6867
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6867/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6867/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6867/events
https://github.com/kubeflow/pipelines/issues/6867
1,044,536,562
I_kwDOB-71UM4-Qlzy
6,867
[Python SDK] Can we upgrade the Google API Python Client packages to v2?
{ "login": "haoxins", "id": 2569835, "node_id": "MDQ6VXNlcjI1Njk4MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/2569835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haoxins", "html_url": "https://github.com/haoxins", "followers_url": "https://api.github.com/users/haoxins/followers", "following_url": "https://api.github.com/users/haoxins/following{/other_user}", "gists_url": "https://api.github.com/users/haoxins/gists{/gist_id}", "starred_url": "https://api.github.com/users/haoxins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haoxins/subscriptions", "organizations_url": "https://api.github.com/users/haoxins/orgs", "repos_url": "https://api.github.com/users/haoxins/repos", "events_url": "https://api.github.com/users/haoxins/events{/privacy}", "received_events_url": "https://api.github.com/users/haoxins/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "cc @chensun ", "Hi @haoxins, thanks for opening this issue. \r\nWe're removing google-api-python-client for v2. \r\nPlease use https://github.com/googleapis/python-aiplatform for submitting pipeline jobs. " ]
"2021-11-04T09:53:14"
"2021-11-10T22:19:26"
"2021-11-10T22:19:26"
CONTRIBUTOR
null
### Feature Area /area sdk ### What feature would you like to see? * We should upgrade Google API packages to `v2` ### What is the use case or pain point? * For our use cases (which the users on GCP), we will get warning messages if we use both kfp and google-api-client ``` ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. kfp 1.8.9 requires google-api-python-client<2,>=1.7.8, but you have google-api-python-client 2.29.0 which is incompatible. ``` I think it's better to upgrade the dependency~ --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6867/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/kubeflow/pipelines/issues/6867/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6860
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6860/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6860/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6860/events
https://github.com/kubeflow/pipelines/issues/6860
1,044,094,046
I_kwDOB-71UM4-O5xe
6,860
11/03/2021 Presubmit kubeflow-pipelines-tfx-python37 failing
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks like this is related to keras 2.7.\r\nTest was passing previously when it installed keras 2.6\r\n https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/6854/kubeflow-pipelines-tfx-python37/1455697869322326016", "Root cause: https://github.com/keras-team/keras/issues/15579" ]
"2021-11-03T20:41:49"
"2021-11-03T22:51:22"
"2021-11-03T22:51:22"
COLLABORATOR
null
https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/6859/kubeflow-pipelines-tfx-python37/1455985756802650112 ``` 2021-11-03 19:55:53.822029: E tensorflow/core/lib/monitoring/collection_registry.cc:77] Cannot register 2 metrics with the same name: /tensorflow/api/keras/optimizers Traceback (most recent call last): File "/home/prow/go/src/github.com/kubeflow/pipelines/tfx/tfx/orchestration/kubeflow/kubeflow_dag_runner_test.py", line 23, in <module> from tfx.components.statistics_gen import component as statistics_gen_component File "/usr/local/lib/python3.7/site-packages/tfx/components/__init__.py", line 16, in <module> from tfx.components.bulk_inferrer.component import BulkInferrer File "/usr/local/lib/python3.7/site-packages/tfx/components/bulk_inferrer/component.py", line 19, in <module> from tfx.components.bulk_inferrer import executor File "/usr/local/lib/python3.7/site-packages/tfx/components/bulk_inferrer/executor.py", line 30, in <module> from tfx.types import standard_component_specs File "/usr/local/lib/python3.7/site-packages/tfx/types/standard_component_specs.py", line 16, in <module> import tensorflow_model_analysis as tfma File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/__init__.py", line 33, in <module> from tensorflow_model_analysis.api import tfma_unit as test File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/api/tfma_unit.py", line 71, in <module> from tensorflow_model_analysis.api import model_eval_lib File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/api/model_eval_lib.py", line 36, in <module> from tensorflow_model_analysis.evaluators import evaluator File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/evaluators/__init__.py", line 17, in <module> from tensorflow_model_analysis.evaluators.analysis_table_evaluator import AnalysisTableEvaluator File 
"/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/evaluators/analysis_table_evaluator.py", line 27, in <module> from tensorflow_model_analysis.evaluators import evaluator File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/evaluators/evaluator.py", line 25, in <module> from tensorflow_model_analysis.extractors import extractor File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/extractors/__init__.py", line 18, in <module> from tensorflow_model_analysis.extractors.example_weights_extractor import ExampleWeightsExtractor File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/extractors/example_weights_extractor.py", line 27, in <module> from tensorflow_model_analysis.extractors import extractor File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/extractors/extractor.py", line 26, in <module> from tensorflow_model_analysis.utils import util File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/utils/__init__.py", line 21, in <module> from tensorflow_model_analysis.utils.model_util import CombineFnWithModels File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/utils/model_util.py", line 31, in <module> from tensorflow_model_analysis.eval_saved_model import load File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/eval_saved_model/load.py", line 25, in <module> from tensorflow_model_analysis.eval_metrics_graph import eval_metrics_graph File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/eval_metrics_graph/eval_metrics_graph.py", line 44, in <module> from tensorflow_model_analysis.eval_saved_model import util File "/usr/local/lib/python3.7/site-packages/tensorflow_model_analysis/eval_saved_model/util.py", line 492, in <module> checkpoint_path: Optional[Text] = None) -> Optional[bytes]: File "/usr/local/lib/python3.7/site-packages/tensorflow/python/util/lazy_loader.py", line 62, in __getattr__ module = 
self._load() File "/usr/local/lib/python3.7/site-packages/tensorflow/python/util/lazy_loader.py", line 45, in _load module = importlib.import_module(self.__name__) File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/usr/local/lib/python3.7/site-packages/tensorflow_estimator/__init__.py", line 10, in <module> from tensorflow_estimator._api.v1 import estimator File "/usr/local/lib/python3.7/site-packages/tensorflow_estimator/_api/v1/estimator/__init__.py", line 10, in <module> from tensorflow_estimator._api.v1.estimator import experimental File "/usr/local/lib/python3.7/site-packages/tensorflow_estimator/_api/v1/estimator/experimental/__init__.py", line 10, in <module> from tensorflow_estimator.python.estimator.canned.dnn import dnn_logit_fn_builder File "/usr/local/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/canned/dnn.py", line 29, in <module> from tensorflow_estimator.python.estimator.canned import optimizers File "/usr/local/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/canned/optimizers.py", line 34, in <module> 'Adagrad': tf.keras.optimizers.Adagrad, File "/usr/local/lib/python3.7/site-packages/tensorflow/python/util/lazy_loader.py", line 62, in __getattr__ module = self._load() File "/usr/local/lib/python3.7/site-packages/tensorflow/python/util/lazy_loader.py", line 45, in _load module = importlib.import_module(self.__name__) File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/usr/local/lib/python3.7/site-packages/keras/__init__.py", line 25, in <module> from keras import models File "/usr/local/lib/python3.7/site-packages/keras/models.py", line 20, in <module> from keras import metrics as metrics_module File "/usr/local/lib/python3.7/site-packages/keras/metrics.py", line 26, in <module> from keras import activations File 
"/usr/local/lib/python3.7/site-packages/keras/activations.py", line 20, in <module> from keras.layers import advanced_activations File "/usr/local/lib/python3.7/site-packages/keras/layers/__init__.py", line 23, in <module> from keras.engine.input_layer import Input File "/usr/local/lib/python3.7/site-packages/keras/engine/input_layer.py", line 21, in <module> from keras.engine import base_layer File "/usr/local/lib/python3.7/site-packages/keras/engine/base_layer.py", line 43, in <module> from keras.mixed_precision import loss_scale_optimizer File "/usr/local/lib/python3.7/site-packages/keras/mixed_precision/loss_scale_optimizer.py", line 18, in <module> from keras import optimizers File "/usr/local/lib/python3.7/site-packages/keras/optimizers.py", line 26, in <module> from keras.optimizer_v2 import adadelta as adadelta_v2 File "/usr/local/lib/python3.7/site-packages/keras/optimizer_v2/adadelta.py", line 22, in <module> from keras.optimizer_v2 import optimizer_v2 File "/usr/local/lib/python3.7/site-packages/keras/optimizer_v2/optimizer_v2.py", line 37, in <module> "/tensorflow/api/keras/optimizers", "keras optimizer usage", "method") File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/monitoring.py", line 361, in __init__ len(labels), name, description, *labels) File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/monitoring.py", line 135, in __init__ self._metric = self._metric_methods[self._label_length].create(*args) tensorflow.python.framework.errors_impl.AlreadyExistsError: Another metric with the same name already exists. ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6860/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6858
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6858/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6858/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6858/events
https://github.com/kubeflow/pipelines/issues/6858
1,043,844,498
I_kwDOB-71UM4-N82S
6,858
[bug] Set a gpu limit on a ContainerOp using a pipeline parameter input
{ "login": "jdalbosc-cisco", "id": 85731466, "node_id": "MDQ6VXNlcjg1NzMxNDY2", "avatar_url": "https://avatars.githubusercontent.com/u/85731466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jdalbosc-cisco", "html_url": "https://github.com/jdalbosc-cisco", "followers_url": "https://api.github.com/users/jdalbosc-cisco/followers", "following_url": "https://api.github.com/users/jdalbosc-cisco/following{/other_user}", "gists_url": "https://api.github.com/users/jdalbosc-cisco/gists{/gist_id}", "starred_url": "https://api.github.com/users/jdalbosc-cisco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jdalbosc-cisco/subscriptions", "organizations_url": "https://api.github.com/users/jdalbosc-cisco/orgs", "repos_url": "https://api.github.com/users/jdalbosc-cisco/repos", "events_url": "https://api.github.com/users/jdalbosc-cisco/events{/privacy}", "received_events_url": "https://api.github.com/users/jdalbosc-cisco/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "@Bobgy @zijianjoy Please can you take a look at this issue.", "I think this is resolved in.the later version of the SDK.\r\n@jdalbosc-cisco Can you please upgrade your sdk and try again?", "@chensun Thank you for your reply.\r\n\r\nI've upgraded kfp to 1.8.9 and indeed the problem is solved. The content of the yaml argo workflow is now:\r\n```\r\n resources:\r\n limits: {nvidia: '{{inputs.parameters.num_gpus}}', memory: 4G}\r\n```\r\nWhich is what I wan't I guess (not sure about nvidia not being nvidia/gpu but apart from that it's ok).\r\n\r\nNow the problem is that when I try to load this yaml in Kubeflow 1.3 UI I have an error message complaining about the format of the quantity not being valid:\r\n\r\n**Pipeline version creation failed**\r\n_{\"error_message\":\"Error creating pipeline version: Create pipeline version failed: Failed to get parameters from the workflow: InvalidInputError: Failed to parse the parameter.: error unmarshaling JSON: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'\",\"error_details\":\"Error creating pipeline version: Create pipeline version failed: Failed to get parameters from the workflow: InvalidInputError: Failed to parse the parameter.: error unmarshaling JSON: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'\"}_\r\n\r\nI guess I might have to update to a newer version of Kubeflow like 1.4 (which I am not in control of)? Is there a compatibility matrix between Kfp and Kubeflow?", "Hello @jdalbosc-cisco , can you upload a simplified version of your pipeline version file? So we can try it out on the latest version of KFP backend v1.7.1 to see whether this issue is fixed.", "Hello @zijianjoy,\r\n\r\nThank you for your reply.\r\n\r\nHere are two files that enable to reproduce the problem (renamed to .txt to workaround Github extensions filtering). 
The first one is the code that compiles the pipeline, and the second one is the pipeline yaml file created with the latest version of Kfp 1.8.9.\r\n\r\nWhen I try to upload the gpu_sum.yaml file in the _Upload Pipeline or Pipeline version_ of the Kubeflow 1.3 UI and click on _Create_ to create the pipeline, I have the following error message that shows up in the UI that prevents me from uploading the pipeline:\r\n\r\n**Pipeline version creation failed**\r\n\r\n_{\"error_message\":\"Error creating pipeline: Create pipeline failed: Failed to get parameters from the workflow: InvalidInputError: Failed to parse the parameter.: error unmarshaling JSON: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'\",\"error_details\":\"Error creating pipeline: Create pipeline failed: Failed to get parameters from the workflow: InvalidInputError: Failed to parse the parameter.: error unmarshaling JSON: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'\"}_\r\n\r\nHope this helps.\r\nRegards,\r\n\r\n[compile.py.txt](https://github.com/kubeflow/pipelines/files/7481670/compile.py.txt)\r\n[gpu_sum.yaml.txt](https://github.com/kubeflow/pipelines/files/7481671/gpu_sum.yaml.txt)\r\n\r\n\r\n", "This seems like an argo issue, backend doesn't accept this format because of this line in gpu_sum.yaml:\r\n```\r\nlimits: {memory: 100M, nvidia: '{{inputs.parameters.num_gpus}}'} \r\n```\r\n\r\nIt doesn't fit in the format validation:\r\n\r\n```\r\n'^([+-]?[0-9.]+)([eEinumkKMGTP][-+]?[0-9])$'\"\r\n```\r\n\r\nI think either KFP backend or Argo failed to recognize this as parameter, cc @Bobgy to confirm.", "\r\n\r\n> This seems like an argo issue, backend doesn't accept this format because of this line in gpu_sum.yaml:\r\n> \r\n> ```\r\n> limits: {memory: 100M, nvidia: '{{inputs.parameters.num_gpus}}'} \r\n> ```\r\n> \r\n> It doesn't fit in the format validation:\r\n> \r\n> ```\r\n> 
'^([+-]?[0-9.]+)([eEinumkKMGTP][-+]?[0-9])$'\"\r\n> ```\r\n> \r\n> I think either KFP backend or Argo failed to recognize this as parameter, cc @Bobgy to confirm.\r\n\r\nI have the same issue on Kubeflow 1.4 (deployed by manifest), KFP backend v1.7.0. KFP SDK 1.8.9\r\n", "@zijianjoy \r\n\r\n> This seems like an argo issue, backend doesn't accept this format ...\r\n\r\nPlease try submitting this argo workflow which has pretty much the same GPU limit syntax and worked fine for me (argo version that comes with kubeflow 1.4):\r\n```yaml\r\napiVersion: argoproj.io/v1alpha1\r\nkind: Workflow\r\nmetadata:\r\n name: cuda-vector-add\r\nspec:\r\n entrypoint: main\r\n arguments:\r\n parameters:\r\n - name: gpus\r\n value: 0\r\n templates:\r\n - name: main\r\n container:\r\n image: \"k8s.gcr.io/cuda-vector-add:v0.1\"\r\n resources:\r\n limits: {memory: 100M, nvidia: '{{workflow.parameters.gpus}}'} \r\n```\r\n\r\n\r\nIf it's any help, please consider using this very simple pipeline as a minimum reproducible example:\r\n```python\r\nimport kfp\r\nfrom kfp import dsl\r\n\r\n@kfp.components.func_to_container_op\r\ndef test():\r\n print(\"test\")\r\n\r\n@dsl.pipeline(name=\"Test\")\r\ndef test_pipeline(gpus: int = 0):\r\n test_task = test()\r\n test_task.set_cpu_request(\"4\").set_memory_request(\"4Gi\").set_gpu_limit(gpus)\r\n test_task.container.image = \"k8s.gcr.io/cuda-vector-add:v0.1\"\r\n\r\nif __name__ == \"__main__\":\r\n kfp.compiler.Compiler().compile(\r\n pipeline_func=test_pipeline,\r\n package_path=__file__.replace(\".py\", \".yaml\"),\r\n )\r\n```\r\n\r\nThis bug is a major issue for our team, given that we upload pipelines in a GitOps sort of way. I hope you could look into it soon.\r\n", "Hello @ashrafgt , after some investigation, we now think that we should create PodSpecPatch for GPU resources. 
Refer to issues:\r\n\r\nhttps://github.com/kubeflow/pipelines/issues/1956\r\nhttps://github.com/kubeflow/pipelines/issues/4877\r\n\r\nPodSpecPatch example is in https://github.com/argoproj/argo-workflows/blob/master/examples/pod-spec-patch.yaml. Please note that both examples you provided (argo workflow yaml and KFP pipeline in python) have failed for the same error message on my side. This is why I think we should change to use podSpecPatch (which I suspect such functionality has already been provided by https://github.com/kubeflow/pipelines/pull/5972). cc @NikeNano ", "@zijianjoy Thank you for checking this! I think the argo example probably didn't work because you're using an older version. I remember that I had tl use pod spec patches before, but not anymore. Regardless of how it's implemented, it'll be great to be able to dynamically control GPU resources :)", "Thanks @zijianjoy, using kfp 1.8.9, **add_resource_request** and **add_resource_limit** methods instead of set_gpu_limit, I was able to specify a gpu limit using a pipeline input parameter. The updated code below that worked for me.\r\nThanks again! 
:slightly_smiling_face:\r\n\r\n```\r\nimport functools\r\nimport kfp\r\nimport kfp.components\r\n\r\n\r\ndef set_resources(cpu: str = '200m', memory: str = '100M', gpu: int = None):\r\n \"\"\"\r\n This is a decorator that applies CPU and memory settings to the specified component (pod).\r\n\r\n For security reasons, the following rules must be adhered to pass pod validations on the cluster:\r\n - CPU limits must not be set\r\n - CPU request must be set\r\n - Memory limits and requests must be set and equal\r\n \"\"\"\r\n def decorator(component_op):\r\n @functools.wraps(component_op)\r\n def wrapper(*args, **kwargs):\r\n component = component_op(*args, **kwargs)\r\n component.set_cpu_request(cpu)\r\n component.set_memory_request(memory)\r\n component.set_memory_limit(memory)\r\n if gpu is not None:\r\n component.set_gpu_limit(gpu)\r\n return component\r\n return wrapper\r\n return decorator\r\n\r\ndef gpu_sum_pipeline(num_gpus: int, a: float = 1., b: float = 2.):\r\n @set_resources(gpu=None)\r\n def gpu_sum_op(a: float, b: float) -> None:\r\n def gpu_sum(a: float, b: float) -> None:\r\n print(a+b)\r\n gpu_sum_task_factory=kfp.components.create_component_from_func(gpu_sum,\r\n base_image='python:3.9')\r\n return gpu_sum_task_factory(a, b)\r\n\r\n gpu_sum_task = gpu_sum_op(a, b)\r\n gpu_sum_task.add_resource_request('nvidia.com/gpu', num_gpus)\r\n gpu_sum_task.add_resource_limit('nvidia.com/gpu', num_gpus)\r\n\r\nkfp.compiler.Compiler().compile(\r\n pipeline_func=gpu_sum_pipeline,\r\n package_path='gpu_sum.yaml')\r\n```", "It is awesome to see your problem resolved!", "Can I use like this code snippet?\r\n```\r\n@dsl.pipeline()\r\ndef pipeline(\r\n timestamp: str, tag: str, language: str,\r\n):\r\n\r\nop = dsl.ContainerOp( # this session is not involved \r\n name=f'clustering-{timestamp}',\r\n image=f'{REGISTRY_IMAGE}/clustering_stage:{tag}',\r\n command=['python3'],\r\n arguments=[\r\n 'main.py', timestamp, 1, language,\r\n params_str,\r\n output_path\r\n ],\r\n 
)\r\n\r\nwith dsl.Condition(tag == 'prod'):\r\n if gpu:\r\n op.set_cpu_limit(\"2500m\")\r\n op.set_memory_limit(\"8000Mi\")\r\n else:\r\n op.set_cpu_limit(\"800m\")\r\n op.set_memory_limit(\"4000Mi\")\r\n\r\n if gpu:\r\n op.add_resource_limit('gpu.ailabs.tw/geforce-rtx-2080-ti', '1')\r\n op.set_cpu_request(\"20m\")\r\n op.set_memory_request(\"50Mi\")\r\n\r\n```\r\nI want to ONLY set resource limit when tag value is \"prod\". ( tag value is from params when `run_pipeline`\r\nI already do like above code.\r\nHowever, it does not work.\r\nCan anyone give me help? Thanks" ]
"2021-11-03T16:55:13"
"2022-04-27T17:20:54"
"2022-02-04T20:48:33"
NONE
null
### What steps did you take I’m using KFP 1.6.4 to define a pipeline. I would like to set a GPU limit on a ContainerOp using a pipeline parameter input. I have the following pipeline definition: ``` @dsl.pipeline(name="darknet-train-pipeline", description="Trains a darknet network") def darknet_train_pipeline(... num_gpus: int, ...) -> None: ... training_op = kfp.components.load_component_from_file( os.path.join(os.path.dirname(__file__), 'components/darknet_framework/training/component.yaml')) training_task = training_op(another_task.output) training_task.set_gpu_limit(num_gpus) # here a TypeError exception is raised ``` ### What happened: When I try to call set_gpu_limit on my ContainerOp, the following TypeError exception is raised: ``` File "/Users/jean-francois.dalbos/work/git/darknet_models/venv/lib/python3.9/site-packages/kfp/dsl/_container_op.py", line 380, in set_gpu_limit self._container_spec.resources.accelerator.count = int(gpu) TypeError: int() argument must be a string, a bytes-like object or a number, not 'PipelineParam' ``` ### What did you expect to happen: I want KFP to create a pipeline definition where the GPU limit depends on the pipeline input parameter named num_gpus. In the argo workflow, I would expect something like (not quite sure of the syntax though): limits: {memory: 4G, nvidia.com/gpu: {{inputs.parameters.num_gpus}} } ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? In Kubeflow 1.3 UI, uploading the pipeline as an argo workflow yaml file. * KFP version: 1.3 * KFP SDK version: 1.6.4 ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> /area sdk --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6858/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6858/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6857
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6857/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6857/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6857/events
https://github.com/kubeflow/pipelines/issues/6857
1,043,804,528
I_kwDOB-71UM4-NzFw
6,857
[feature] Explicit list of supported GPU architectures
{ "login": "jnatale11", "id": 9668658, "node_id": "MDQ6VXNlcjk2Njg2NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9668658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jnatale11", "html_url": "https://github.com/jnatale11", "followers_url": "https://api.github.com/users/jnatale11/followers", "following_url": "https://api.github.com/users/jnatale11/following{/other_user}", "gists_url": "https://api.github.com/users/jnatale11/gists{/gist_id}", "starred_url": "https://api.github.com/users/jnatale11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jnatale11/subscriptions", "organizations_url": "https://api.github.com/users/jnatale11/orgs", "repos_url": "https://api.github.com/users/jnatale11/repos", "events_url": "https://api.github.com/users/jnatale11/events{/privacy}", "received_events_url": "https://api.github.com/users/jnatale11/received_events", "type": "User", "site_admin": false }
[ { "id": 1260031624, "node_id": "MDU6TGFiZWwxMjYwMDMxNjI0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/samples", "name": "area/samples", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "@zijianjoy I've discussed this with Vedant Padwal and he thinks the sample may need to be updated.", "@jnatale11 Can you try to run the sample to validate if this is working? I don't think there should be limitation on the types of GPUs by Kubeflow. I think the comment in this `add_node_selector_constrain` is not listing all GPUs. You can run this command to find all GPUs `gcloud compute accelerator-types list`?\r\n\r\nhttp://cloud/kubernetes-engine/docs/how-to/gpus", "Thank you for your help James! It seems there's just some contradiction in the docs." ]
"2021-11-03T16:20:55"
"2021-11-03T21:46:30"
"2021-11-03T21:46:30"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area samples ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> I'd like to know if there's anywhere in the docs/samples which explicitly outline which GPUs are supported by Kubeflow. So far, I've found on the `kfp.dsl._container_op` [doc](https://kubeflow-pipelines.readthedocs.io/en/latest/_modules/kfp/dsl/_container_op.html) (at fn "add_node_selector_constraint") that only nvidia-tesla-k80 and tpu-v3 are supported. Yet, on the [GPU sample](https://github.com/kubeflow/pipelines/blob/master/samples/tutorials/gpu/gpu.ipynb) there's usage of `nvidia-tesla-p100`. Could someone clarify, or update the sample? ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> Clarity on which GPUs are supported ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6857/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6849
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6849/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6849/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6849/events
https://github.com/kubeflow/pipelines/issues/6849
1,041,875,382
I_kwDOB-71UM4-GcG2
6,849
[backend] Deprecated unittest aliases were removed in Python 3.11
{ "login": "tirkarthi", "id": 3972343, "node_id": "MDQ6VXNlcjM5NzIzNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/3972343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tirkarthi", "html_url": "https://github.com/tirkarthi", "followers_url": "https://api.github.com/users/tirkarthi/followers", "following_url": "https://api.github.com/users/tirkarthi/following{/other_user}", "gists_url": "https://api.github.com/users/tirkarthi/gists{/gist_id}", "starred_url": "https://api.github.com/users/tirkarthi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tirkarthi/subscriptions", "organizations_url": "https://api.github.com/users/tirkarthi/orgs", "repos_url": "https://api.github.com/users/tirkarthi/repos", "events_url": "https://api.github.com/users/tirkarthi/events{/privacy}", "received_events_url": "https://api.github.com/users/tirkarthi/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "cc @SinaChavoshi @IronPan ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-11-02T04:33:39"
"2022-03-02T15:05:10"
null
NONE
null
* `assertDictContainsSubset` was removed. * `assertEquals` was removed in favor of `assertEqual` ``` components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( components/google-cloud/tests/experimental/custom_job/unit/test_custom_job.py: self.assertDictContainsSubset( sdk/python/kfp/containers_tests/component_builder_test.py: self.assertEquals(actual_component_yaml, self._expected_component_yaml) ``` ### Expected result The deprecated aliases should be fixed ### Materials and Reference Ref : https://github.com/python/cpython/pull/28268
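Both removed aliases have stdlib replacements. A minimal sketch of the migration (standard library only; dict item views support subset comparison with `<=`, which `assertLessEqual` checks):

```python
import unittest


class MigratedAssertions(unittest.TestCase):
    def test_dict_subset(self):
        actual = {"a": 1, "b": 2, "c": 3}
        expected_subset = {"a": 1, "b": 2}
        # Before (removed alias): self.assertDictContainsSubset(expected_subset, actual)
        # After: item-view subset comparison keeps the same semantics.
        self.assertLessEqual(expected_subset.items(), actual.items())

    def test_equal(self):
        # Before (removed alias): self.assertEquals(actual, expected)
        self.assertEqual("x", "x")
```

An alternative for the subset case is `self.assertEqual(actual, {**actual, **expected_subset})`, which produces a full-dict diff on failure.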
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6849/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6848
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6848/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6848/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6848/events
https://github.com/kubeflow/pipelines/issues/6848
1,041,709,768
I_kwDOB-71UM4-FzrI
6,848
issue with google_cloud_pipeline_components 0.1.9 ModelUploadOp ("invalid value")
{ "login": "amygdala", "id": 115093, "node_id": "MDQ6VXNlcjExNTA5Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/115093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amygdala", "html_url": "https://github.com/amygdala", "followers_url": "https://api.github.com/users/amygdala/followers", "following_url": "https://api.github.com/users/amygdala/following{/other_user}", "gists_url": "https://api.github.com/users/amygdala/gists{/gist_id}", "starred_url": "https://api.github.com/users/amygdala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amygdala/subscriptions", "organizations_url": "https://api.github.com/users/amygdala/orgs", "repos_url": "https://api.github.com/users/amygdala/repos", "events_url": "https://api.github.com/users/amygdala/events{/privacy}", "received_events_url": "https://api.github.com/users/amygdala/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Here's what the set of input args looks like in the console: https://screenshot.googleplex.com/HdYMJLiz8oVhxjB\r\n/cc @SinaChavoshi @IronPan ", "@amygdala try configuring port as below - `serving_container_ports=[{\"containerPort\" : PORT}]`. The component follows k8s v1/core spec. Worked for me with above config.\r\n ", "Interesting -- the api spec says: `serving_container_ports (Optional[Sequence[int]]=None):` but maybe the docs need to be updated. \r\n", "We spent hours trying to find an answer to this. We need better alignment between the documentation in here https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-0.2.0/google_cloud_pipeline_components.aiplatform.html and here https://cloud.google.com/vertex-ai/docs/predictions/use-custom-container", "Stumbled upon this issue after more time I'd like to publicly admit. Passing the port argument as @jagadeeshi2i mentioned did the trick for me.\r\n\r\nI am now struggling to set `serving_container_environment_variables` as I think it might be suffering the same problem. I am currently passing env vars to the container in the following way ([docs][1] lists it as a `Optional[Dict[str, str]]`):\r\n```python\r\nmodel_upload_op = gcc_aip.ModelUploadOp(\r\n project=\"and-reporting\",\r\n location=\"us-west1\",\r\n display_name=\"session_model\",\r\n serving_container_image_uri=\"gcr.io/and-reporting/pred:latest\",\r\n # The following is creating troubles...\r\n serving_container_environment_variables={\"MODEL_BUCKET\": \"ml_session_model/model},\r\n serving_container_ports=[{\"containerPort\": 5000}],\r\n serving_container_predict_route=\"/predict\",\r\n serving_container_health_route=\"/health\",\r\n)\r\n```\r\nWhich produces the following error:\r\n```\r\nRuntimeError: Failed to create the resource. Error: {'code': 400, 'message': 'Invalid JSON payload received. Unknown name \"MODEL_BUCKET\" at \\'model.container_spec.env[0]\\': Cannot find field.', 'status': 'INVALID_ARGUMENT', 'details': [{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'model.container_spec.env[0]', 'description': 'Invalid JSON payload received. Unknown name \"MODEL_BUCKET\" at \\'model.container_spec.env[0]\\': Cannot find field.'}]}]}\r\n```\r\n\r\nI think `ModelUploadOp` might have been left out from the refresh in #5481.\r\n\r\n## Solution\r\n\r\nCan confirm `serving_container_environment_variables` documentation is outdated. Setting it accordingly to the [kubernetes docs][2] solved it for me:\r\n```python\r\nserving_container_environment_variables=[\r\n {\"name\": \"MODEL_BUCKET\", \"value\": \"ml_session_model/model\"}\r\n],\r\n```\r\n\r\n[1]: https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-0.2.2/google_cloud_pipeline_components.aiplatform.html#google_cloud_pipeline_components.aiplatform.CustomContainerTrainingJobRunOp\r\n[2]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvar-v1-core" ]
"2021-11-01T22:50:32"
"2022-02-03T12:52:24"
null
CONTRIBUTOR
null
with the v0.1.9 of `ModelUploadOp` I'm seeing the following error. I confirmed that **things work fine in 0.1.7.** `PORT` is set as follows: ``` PORT = 8080 ``` Then, in the pipeline definition, `ModelUploadOp` is configured as follows: ``` model_upload_op = gcc_aip.ModelUploadOp( project=project, display_name=model_display_name, serving_container_image_uri=build_image_task.outputs['serving_container_uri'], serving_container_predict_route="/predictions/{}".format(MAR_MODEL_NAME), serving_container_health_route="/ping", serving_container_ports=[PORT] ``` This is the error I see (again, I do NOT see this problem with v0.1.7): ``` 2021-11-01 13:01:17.976 PDTRuntimeError: Failed to create the resource. Error: {'code': 400, 'message': "Invalid value at 'model.container_spec.ports[0]' (type.googleapis.com/google.cloud.aiplatform.v1.Port), 8080", 'status': 'INVALID_ARGUMENT', 'details': [{'@type': 'type.googleapis.com/google.rpc.BadRequest', 'fieldViolations': [{'field': 'model.container_spec.ports[0]', 'description': "Invalid value at 'model.container_spec.ports[0]' (type.googleapis.com/google.cloud.aiplatform.v1.Port), 8080"}]}]} ``` I wondered if something has changed about the expected format of this arg, but from the documentation (`serving_container_ports (Optional[Sequence[int]]=None`) it doesn't seem so. Again, this code runs fine with 0.1.7.
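The shapes that commenters in this thread report working follow the Kubernetes v1/core spec (`ContainerPort`, `EnvVar` objects) rather than the documented `Sequence[int]` / `Dict[str, str]`. A minimal sketch of the two payload styles as plain Python literals — the "working" shapes are taken from the comments above, not verified here against a live Vertex AI endpoint:

```python
PORT = 8080

# Shape suggested by the documented signature (Optional[Sequence[int]]),
# which v0.1.9 rejects with INVALID_ARGUMENT per this report:
serving_container_ports_documented = [PORT]

# k8s v1/core-style shapes that commenters report working with v0.1.9+:
serving_container_ports = [{"containerPort": PORT}]
serving_container_environment_variables = [
    {"name": "MODEL_BUCKET", "value": "ml_session_model/model"}
]

print(serving_container_ports[0]["containerPort"])  # 8080
```

These dicts mirror the `model.container_spec.ports` and `model.container_spec.env` fields named in the error messages, which is consistent with the component now forwarding the values straight into the Vertex AI `Model` proto.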
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6848/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6845
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6845/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6845/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6845/events
https://github.com/kubeflow/pipelines/issues/6845
1,040,844,352
I_kwDOB-71UM4-CgZA
6,845
[backend] severe perf problem in archive experiment API
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/cc @capri-xiyue ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "We just today got hit by this issue (Kubeflow 1.3 deployment). Would really appreciate if this could get addressed in a future version. Right now, we have to tell our users to not archive experiments. \r\n\r\n**Edit**: Sorry, just saw that your original issue description already included my comment below\r\n\r\n_An unfortunate side effect of the performance issue is, that since this results in an `UPDATE` statement, the `run_details` and `resource_references` table are locked during this operation, leading to other failed API requests, e.g. updating a single run, etc._", "Fixed by @difince.\n\n/close", "@gkcalat: Closing this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/6845#issuecomment-1640333024):\n\n>Fixed by @difince.\n>\n>/close\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>" ]
"2021-11-01T07:12:28"
"2023-07-18T14:26:27"
"2023-07-18T14:26:24"
CONTRIBUTOR
null
### Environment * KFP version: master <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> 1. On a KFP cluster with many runs/experiments. 2. Use archive experiment API on any experiment, observe that the API takes >1 minute to process and may timeout + cause other queries to fail. Observed failure look like https://github.com/kubeflow/pipelines/issues/6815#issue-1037408014 The issue caused severe flakiness in test infra: https://github.com/kubeflow/pipelines/issues/6815, and was mitigated by skipping the archive experiment step in test script: https://github.com/kubeflow/pipelines/pull/6843 (We should revert the https://github.com/kubeflow/pipelines/pull/6843 after fixing the perf problem) ### Expected result There's no perf problem with experiment archive API. <!-- What should the correct behavior be? --> ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> This is caused by a known TODO item to improve SQL perf: https://github.com/kubeflow/pipelines/blob/74c7773ca40decfd0d4ed40dc93a6af591bbc190/backend/src/apiserver/storage/experiment_store.go#L266 I experimented with a SQL query improvement in https://github.com/kubeflow/pipelines/issues/6815#issuecomment-955935043, we still need to investigate how to turn that into our golang backend. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6845/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6845/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6839
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6839/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6839/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6839/events
https://github.com/kubeflow/pipelines/issues/6839
1,039,885,703
I_kwDOB-71UM49-2WH
6,839
[frontend] S3Error: 503 in ml-pipeline-ui-artifact
{ "login": "lightning-like", "id": 53789463, "node_id": "MDQ6VXNlcjUzNzg5NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/53789463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lightning-like", "html_url": "https://github.com/lightning-like", "followers_url": "https://api.github.com/users/lightning-like/followers", "following_url": "https://api.github.com/users/lightning-like/following{/other_user}", "gists_url": "https://api.github.com/users/lightning-like/gists{/gist_id}", "starred_url": "https://api.github.com/users/lightning-like/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lightning-like/subscriptions", "organizations_url": "https://api.github.com/users/lightning-like/orgs", "repos_url": "https://api.github.com/users/lightning-like/repos", "events_url": "https://api.github.com/users/lightning-like/events{/privacy}", "received_events_url": "https://api.github.com/users/lightning-like/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "problem solved.\r\nWe fixed destination rule in Istio", "@lightning-like could you please describe what you changed in Istio. I have the same issue.", "@holadepo have you fixed the problem? I am facing the same issue.\r\n@lightning-like Could you explain a bit more what istio changes you made?", "> @holadepo have you fixed the problem? I am facing the same issue. @lightning-like Could you explain a bit more what istio changes you made?\r\n\r\nI have not fixed it yet. Please let me know if you have a fix", "I have the same problem on an on-prem KF 1.4 with multi-tenancy. I managed to copy the secret `ml-pipeline-ui-artifact` from `kubeflow` namespace into my profile namespace, sothat the pipeline in my namespace can run. But the UI can't load `ml-pipeline-ui-artifact` from default minio. I got both `503` and `504` errors. Is there any workarround available, without upgrade to KFP 1.8.1? ", "@yingding \r\n\r\nAdd these lines to the ml-pipeline-ui-artifact deployment yaml file that you have within the namespace that you want to have this capability:\r\n\r\n```\r\nenv:\r\n - name: MINIO_ACCESS_KEY\r\n valueFrom:\r\n secretKeyRef:\r\n name: mlpipeline-minio-artifact\r\n key: accesskey\r\n - name: MINIO_SECRET_KEY\r\n valueFrom:\r\n secretKeyRef:\r\n name: mlpipeline-minio-artifact\r\n key: secretkey\r\n```\r\n\r\nThe workaround is to keep doing that with each new user/namespace added (I guess it can be automated via few kubectl commands).\r\n![obraz](https://user-images.githubusercontent.com/22456734/173168009-3b9ba3c4-2a5e-46d9-9cb7-2c22acbc2ba1.png)\r\n", "@Bartket Thanks for your hint\r\n> ... ml-pipeline-ui-artifact deployment yaml file that you have within the namespace ...\r\n\r\nI just want to double check, `ml-pipeline-ui-artifact` deployment is a KF 1.5 new feature, which I don't have in both my namespace and also not in `kubeflow` namespace using KF 1.4.0 right? 
Is it updated with KF 1.4.1 ?\r\n\r\nThe artifacts works on our KF 1.5 installation, but it doesn't work on the other KF 1.4.0, I was hoping to patch KF 1.4.0 without upgrading as workaround first.\r\n\r\n\r\n\r\n\r\n", "@lightning-like I am facing a very similar issue. Using an external on-prem minio server. Changed secrets in user profile namespaces. When I try to view the logs -- marked in the screenshot -- I get the response `Failed to get object in bucket mlpipeline at path artifacts/conditional-execution-pipeline-with-exit-handler-lpcnt/2022/06/17/conditional-execution-pipeline-with-exit-handler-lpcnt-2953709790/main.log: S3Error: 400` (Screenshot attached)\r\n\r\n<img width=\"1154\" alt=\"Screen Shot 2022-06-17 at 5 01 53 pm\" src=\"https://user-images.githubusercontent.com/47710229/174244192-70a82492-afa1-4c31-be0b-eca284a87718.png\">\r\n\r\n\r\n<img width=\"1154\" alt=\"Screen Shot 2022-06-17 at 5 05 25 pm\" src=\"https://user-images.githubusercontent.com/47710229/174244305-0df3c5f9-bf16-4c97-b48d-2b3cab1061fc.png\">\r\n\r\n\r\nI can see the pipelines logs pushed to my external minio but cant be retrieved from UI. 
I need to manually sign in to my minio to view the logs everytime\r\n\r\n<img width=\"943\" alt=\"Screen Shot 2022-06-17 at 5 07 31 pm\" src=\"https://user-images.githubusercontent.com/47710229/174244885-054a52a9-b9f7-41f9-8603-aa581b85c487.png\">\r\n\r\nFollowed this tutorial https://blog.min.io/how-to-kubeflow-minio/\r\n\r\nLogs from ml-pipeline-ui-artifact in kubeflow-user-example-com namespace\r\n\r\n```\r\nGET /pipeline/artifacts/minio/mlpipeline/artifacts/conditional-execution-pipeline-with-exit-handler-lpcnt/2022/06/17/conditional-execution-pipeline-with-exit-handler-lpcnt-2953709790/main.log\r\nGetting storage artifact at: minio: mlpipeline/artifacts/conditional-execution-pipeline-with-exit-handler-lpcnt/2022/06/17/conditional-execution-pipeline-with-exit-handler-lpcnt-2953709790/main.log\r\nS3Error: 400\r\n at getError (/server/node_modules/minio/dist/main/transformers.js:138:15)\r\n at /server/node_modules/minio/dist/main/transformers.js:158:14\r\n at DestroyableTransform._flush (/server/node_modules/minio/dist/main/transformers.js:80:10)\r\n at DestroyableTransform.prefinish (/server/node_modules/readable-stream/lib/_stream_transform.js:138:10)\r\n at DestroyableTransform.emit (events.js:400:28)\r\n at prefinish (/server/node_modules/readable-stream/lib/_stream_writable.js:619:14)\r\n at finishMaybe (/server/node_modules/readable-stream/lib/_stream_writable.js:627:5)\r\n at endWritable (/server/node_modules/readable-stream/lib/_stream_writable.js:638:3)\r\n at DestroyableTransform.Writable.end (/server/node_modules/readable-stream/lib/_stream_writable.js:594:41)\r\n at IncomingMessage.onend (internal/streams/readable.js:670:10) {\r\n code: 'UnknownError',\r\n amzRequestid: null,\r\n amzId2: null,\r\n amzBucketRegion: null\r\n}\r\n```\r\n\r\nCan i get a clue on this?\r\n\r\n\r\n", "@yingding We are using Kubeflow 1.4 (Kubeflow on AWS version).\r\n\r\nFYI: You should be good by applying this fix from PR below by replacing few lines in 
_**manifests/kustomize/base/installs/multi-user/pipelines-profile-controller/sync.py**_ and redeploying via Kustomize. I am yet to test it though.\r\n\r\n[gh pr checkout 5864](https://github.com/kubeflow/pipelines/pull/5864/files/764840ccd7ae85dd4f4c497d84602c40519c51e9#diff-997ca857ae89736a948d7bc38a69264639b3ddebfaf0eb172d13d35f6f535d62)", "@RakeshRaj97 hey I wrote that tutorial hahaha glad someone is using it, I just hit the same problem you are facing while doing a deployment on OpenShift, this doesn't happen on other vanilla environments, did you found a solution to this problem?", "error was caused because the MinIO external artifact storage was behind TLS so specifying port `443` is not enough `MINIO_SSL=\"true\"` as an environment variable to the pipelines ui deployment is also needed.", "@dvaldivia Thanks for that great tutorial. It is very helpful and saved me a ton of effort by centrally managing all data in one external minio. I highly appreciate and support your effort on making that tutorial and please continue to update it on a regular basis if possible.\r\n\r\nMINIO_SSL='true' in `ml-pipeline-ui` deployment still not solving the issue. 
My current image for `ml-pipeline-ui` is `gcr.io/ml-pipeline/frontend:1.8.2`\r\n\r\nI also see this error \r\n```\r\nFailed to get object in bucket mlpipeline at path artifacts/create-pipeline-4sgrp/2022/10/18/create-pipeline-4sgrp-2895230044/main.log: Error: getaddrinfo ENOTFOUND internal-minio.example.com\r\n```\r\n", "### > ---FIXED the above issue using the solution below----\r\nYour ml-pipeline-ui-artifact pod will have an istio-proxy sidecar that logs all the inbound and outbound information.\r\n\r\n`kubectl logs ml-pipeline-ui-artifact-3452345235-4563456 -c istio-proxy -n kubeflow-user-example-com`\r\n\r\n[2023-02-13T21:37:42.064Z] \"GET /mlpipeline?location HTTP/1.1\" 503 UF,_**URX upstream_reset_before_response_started{connection_failure,TLS_error:_268435703:SSL_routines:OPENSSL_internal:WRONG_VERSION_NUMBER}**_ - \"-\" 0 213 40 - \"-\" \"MinIO (linux; x64) minio-js/7.0.14\" \"5674567567-5402-5647-5675-5764674574\" \"minio-service.kubeflow:9000\" \"10.214.1.10:9000\" outbound|9000||minio-service.kubeflow.svc.cluster.local - 10.4.227.136:9000 10.224.1.134:57526 - default\r\n\r\n**FIX:** https://github.com/kubeflow/kubeflow/issues/5271\r\n\r\n kubectl edit destinationrule -n kubeflow ml-pipeline-minio\r\n\r\n Modify the tls.mode (the last line) from ISTIO_MUTUAL to DISABLE\r\n\r\n----------------------------------------------------------------------------------------------------------------------------------\r\n\r\n**Original Issue information below.**\r\n\r\nI'm seeing the same issue on a Kubeflow installation on Azure Kubernetes Service (AKS). Like the author said, if we remove the ?namespace in the link then we can download the artefact. 
I tried all the above solutions but the issue still persists.\r\n\r\nI tried these 2 too but they are NOT working.\r\n\r\nhttps://github.com/kubeflow/pipelines/issues/6839#issuecomment-1158605309\r\nhttps://github.com/kubeflow/pipelines/issues/6839#issuecomment-1284719116\r\n\r\n### Environment\r\n\r\nKubernetes: Azure Kubernetes Service (AKS)\r\nKubernetes: 1.22.11\r\nKubectl: 1.25.3\r\nKustomize: 4.5.7\r\nKubeflow Manifest: 1.6.0\r\ntitle=\"Build: dev_local \r\nDashboard: v0.0.2-39bd19\r\nIsolation-Mode: multi-user\"\r\n\r\n* How did you deploy Kubeflow Pipelines (KFP)?\r\nDownloaded manifest from link https://github.com/kubeflow/manifests/archive/refs/tags/v1.6.0.tar.gz\r\nwhile ! kustomize build example \\| kubectl apply -f -; do echo \"Retrying to apply resources\"; sleep 10; done\r\n\r\nhttps://www.kubeflow.org/docs/pipelines/installation/overview/. -->\r\n* KFP version: 1.6.0\r\n\r\n### Steps to reproduce\r\n\r\n1. Upload pipeline.\r\n2. Run pipeline.\r\n3. Go to run details page. (https://kubeflowurl.com/_/pipeline/?ns=kubeflow-user-example-com#/runs/details/24addcef-d05f-47be-afb1-4a57343c7e03)\r\n4. Go to Input/Output section.\r\n5. Under Artifacts section click on minio link.\r\n6. Should open up a new tab in your browser.\r\n7. File download should start.\r\n\r\n", "It works for me.\r\n\r\n```yaml\r\napiVersion: networking.istio.io/v1alpha3\r\nkind: DestinationRule\r\nmetadata:\r\n  labels:\r\n    application-crd-id: kubeflow-pipelines\r\n  name: ml-pipeline-minio\r\n  namespace: kubeflow\r\nspec:\r\n  host: minio-service.kubeflow.svc.cluster.local\r\n  trafficPolicy:\r\n    tls:\r\n      mode: DISABLE\r\n```\r\n" ]
"2021-10-29T19:56:35"
"2023-03-01T06:41:33"
"2021-12-04T19:40:50"
NONE
null
### How did you deploy Kubeflow Pipelines (KFP)? We are attempting to deploy the kubeflow pipeline over a local CentOS cluster using kustomize. We use multi-user isolation. k8s v1.19.6. On a custom Kubernetes deployment. KFP version: 1.7.0 KFP SDK version: kfp 1.8.6 kfp-pipeline-spec 0.1.11 kfp-server-api 1.7.0 ### Steps to reproduce To create a run we use `V2 compatible` mode and the default `package_root`. Try to open/load/view any artefact in the runs GUI: ``` Failed to get object in bucket mlpipeline at path v2/artifacts/pipeline/Georgy test/025b7274-585b-4aac-95c3-9ca2d609d47e/irs/output_dataset: S3Error: 503 ``` If we remove the ?namespace in the link, then we can download the artefact. Log from ml-pipeline-ui-artifact when we try to get an artefact with ?namespace: ``` GET /pipeline/artifacts/get?source=minio&peek=256&bucket=mlpipeline&key=v2%2Fartifacts%2Fpipeline%2FGeorgy+test%2F025b7274-585b-4aac-95c3-9ca2d609d47e%2Firs%2Fmarkdown_artifact Getting storage artifact at: minio: mlpipeline/v2/artifacts/pipeline/Georgy test/025b7274-585b-4aac-95c3-9ca2d609d47e/irs/markdown_artifact S3Error: 503 at getError (/server/node_modules/minio/dist/main/transformers.js:138:15) at /server/node_modules/minio/dist/main/transformers.js:158:14 at DestroyableTransform._flush (/server/node_modules/minio/dist/main/transformers.js:80:10) at DestroyableTransform.prefinish (/server/node_modules/readable-stream/lib/_stream_transform.js:138:10) at DestroyableTransform.emit (events.js:223:5) at prefinish (/server/node_modules/readable-stream/lib/_stream_writable.js:619:14) at finishMaybe (/server/node_modules/readable-stream/lib/_stream_writable.js:627:5) at endWritable (/server/node_modules/readable-stream/lib/_stream_writable.js:638:3) at DestroyableTransform.Writable.end (/server/node_modules/readable-stream/lib/_stream_writable.js:594:41) at IncomingMessage.onend (_stream_readable.js:693:10) { code: 'UnknownError', amzRequestid: null, amzId2: null, amzBucketRegion: null } ``` Log from ml-pipeline-ui when we 
try to get an artefact without ?namespace: ``` GET /pipeline/artifacts/minio/mlpipeline/v2/artifacts/pipeline/Georgy%20test/025b7274-585b-4aac-95c3-9ca2d609d47e/irs/output_dataset Getting storage artifact at: minio: mlpipeline/v2/artifacts/pipeline/Georgy test/025b7274-585b-4aac-95c3-9ca2d609d47e/irs/output_dataset ``` ### Materials and Reference We'd be grateful for any clue on what more we can look into. We tried: - setting env according to #6750 and #4649 <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6839/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6839/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6838
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6838/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6838/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6838/events
https://github.com/kubeflow/pipelines/issues/6838
1,039,805,460
I_kwDOB-71UM49-iwU
6,838
[bug] reenable cache v2 and importer v2 e2e test
{ "login": "capri-xiyue", "id": 52932582, "node_id": "MDQ6VXNlcjUyOTMyNTgy", "avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/capri-xiyue", "html_url": "https://github.com/capri-xiyue", "followers_url": "https://api.github.com/users/capri-xiyue/followers", "following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}", "gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}", "starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions", "organizations_url": "https://api.github.com/users/capri-xiyue/orgs", "repos_url": "https://api.github.com/users/capri-xiyue/repos", "events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}", "received_events_url": "https://api.github.com/users/capri-xiyue/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "cc @chensun @Bobgy ", "For the cache v2, looks like the feature is working. But there is something wrong with test infra util code. It does not pass correct input to run cache\r\n\r\nFor the importer, the feature is broken\r\n\r\nFYI: https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/6837/kubeflow-pipelines-samples-v2/1454141792524963840\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-29T17:58:22"
"2022-03-02T15:05:14"
null
CONTRIBUTOR
null
### What steps did you take cache e2e and importer e2e are broken after the https://github.com/kubeflow/pipelines/pull/6804 change. Need to figure out how to add protobuf.Value support in cache and importer v2 and reenable the cache v2 and importer v2 e2e tests as soon as possible. <!-- A clear and concise description of what the bug is.--> ### What happened: ### What did you expect to happen: ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6838/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6836
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6836/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6836/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6836/events
https://github.com/kubeflow/pipelines/issues/6836
1,039,693,971
I_kwDOB-71UM49-HiT
6,836
Unable to create experiments other namespaces in Kubeflow
{ "login": "Sushmita92feb", "id": 33658752, "node_id": "MDQ6VXNlcjMzNjU4NzUy", "avatar_url": "https://avatars.githubusercontent.com/u/33658752?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sushmita92feb", "html_url": "https://github.com/Sushmita92feb", "followers_url": "https://api.github.com/users/Sushmita92feb/followers", "following_url": "https://api.github.com/users/Sushmita92feb/following{/other_user}", "gists_url": "https://api.github.com/users/Sushmita92feb/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sushmita92feb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sushmita92feb/subscriptions", "organizations_url": "https://api.github.com/users/Sushmita92feb/orgs", "repos_url": "https://api.github.com/users/Sushmita92feb/repos", "events_url": "https://api.github.com/users/Sushmita92feb/events{/privacy}", "received_events_url": "https://api.github.com/users/Sushmita92feb/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "cc @aronchick ", "How are you submitting your pipeline? can you share the pipeline used?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-29T15:36:17"
"2022-03-02T10:06:45"
null
NONE
null
I have a Kubeflow platform installed on Azure Kubernetes that supports multi-user isolation. But still, I am not able to create experiments and submit pipelines in any other namespace. All the pipelines are getting run in the single kubeflow namespace. Can someone please guide me on how I can segregate the pipelines into different namespaces? Thanks in advance. Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6836/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6831
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6831/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6831/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6831/events
https://github.com/kubeflow/pipelines/issues/6831
1,039,115,119
I_kwDOB-71UM4976Nv
6,831
[feature] Add hooks to transform compiled template/workflow manifests
{ "login": "jmcarp", "id": 1633460, "node_id": "MDQ6VXNlcjE2MzM0NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/1633460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmcarp", "html_url": "https://github.com/jmcarp", "followers_url": "https://api.github.com/users/jmcarp/followers", "following_url": "https://api.github.com/users/jmcarp/following{/other_user}", "gists_url": "https://api.github.com/users/jmcarp/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmcarp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmcarp/subscriptions", "organizations_url": "https://api.github.com/users/jmcarp/orgs", "repos_url": "https://api.github.com/users/jmcarp/repos", "events_url": "https://api.github.com/users/jmcarp/events{/privacy}", "received_events_url": "https://api.github.com/users/jmcarp/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "Is it possible for you to add post-processing step after Pipeline SDK compiles to workflow template?\r\n\r\ncc @chensun for advice.", "Hi @jmcarp, that's an interesting idea. I wonder if a hook is added to `Compiler.compile` or `to PipelineContext`, then how different would this be from a post-compilation patching approach? (you can compile a pipeline using KFP SDK, and then modify the compilation result in using your custom code).\r\n\r\nNote that we're moving towards a direction that KFP SDK will produce a platform-agnostic format, which we call the Pipeline Intermediate Representation (IR). And we are still evaluating how to support platform specific features in this new world.", "> how different would this be from a post-compilation patching approach? (you can compile a pipeline using KFP SDK, and then modify the compilation result in using your custom code).\r\n\r\nIt would be similar I think. I'm already wrapping the kfp.compiler in order to implement this hack to reduce the generated workflow size: https://github.com/kubeflow/pipelines/issues/4170#issuecomment-655764762\r\n\r\nIt would just make it easier to do it. My wrapper function calls the kfp.compiler to write to a temp filename, reads the yaml back in, modifies the data, then writes back out to the final yaml file location. If KFP's compiler just took an argument that was a Callable, which gets called to mutate the generated workflow object before writing to disk, that would just save all users from having to do the wrapper function.\r\n\r\n", "Reducing the generated workflow size seems to be orthogonal to the hook. \r\nThat being said, we're moving towards v2, where compiler no longer produce Argo yaml but some intermediate representation defined by this proto: https://github.com/kubeflow/pipelines/blob/master/api/v2alpha1/pipeline_spec.proto\r\nSo I'm not sure if you would still need to a hook to modify the compilation result. 
By the time KFP v2 is ready, the conversion from this IR to Argo workflow yaml would happen in the backend.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-29T02:55:13"
"2022-04-17T07:27:12"
null
CONTRIBUTOR
null
### Feature Area /area sdk ### What feature would you like to see? Argo workflows has features that aren't exposed in KFP, such as synchronization, template defaults, and retry expressions. If I want to use an argo feature that isn't exposed in KFP, I can file a PR against this repo and wait for a new release, or fork or patch the sdk, or subclass the compiler and override private methods. None of these seem like good options! Instead, it would be useful if KFP included template- and workflow-level hooks allowing users to arbitrarily transform the compiled manifests. I don't have a strong opinion about where these hooks should go, but I think @jli and/or @maxhully suggested adding options to `Compiler.compile` or to `PipelineContext`. ### What is the use case or pain point? Use argo workflows options that aren't yet supported in KFP. ### Is there a workaround currently? Let me know if there's a better workaround, but all I've thought of so far is patching the sdk or overriding private methods. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6831/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6831/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6829
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6829/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6829/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6829/events
https://github.com/kubeflow/pipelines/issues/6829
1,039,064,545
I_kwDOB-71UM497t3h
6,829
End of support for v2 compatible mode in KFP SDK 1.9
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Can we also setup a plan to refactor the documents/websites?\r\n\r\nFor me, I need introduce Kubeflow pipelines to my colleagues (most are data scientists) in my company.\r\nThe Kubeflow website still be the most important source of truth for end users.\r\nWill be very impassioned if we can update the documents to the latest KFP.\r\n", "> Can we also setup a plan to refactor the documents/websites?\r\n> \r\n> For me, I need introduce Kubeflow pipelines to my colleagues (most are data scientists) in my company. The Kubeflow website still be the most important source of truth for end users. Will be very impassioned if we can update the documents to the latest KFP.\r\n\r\nYes, we have a push for documentation refresh in H1 2021." ]
"2021-10-29T00:49:17"
"2022-02-03T23:07:12"
null
COLLABORATOR
null
KFP v2 compatible mode (http://bit.ly/kfp-v2-compatible) was introduced to let KFP users experiment with KFP DSL v2 features on KFP before the KFP v2 engine is ready. The feature was first available in KFP SDK 1.6.2. Since then, we have collected valuable user feedback which helped us evaluate our design. Given the ongoing effort for KFP v2 engine development and the rapid development in the KFP SDK v2 code base, we have decided to drop v2 compatible support in the KFP SDK 1.9.0 release. Users can still try out v2 compatible mode via older versions of the KFP SDK (the latest one with v2 compatible support is KFP SDK 1.8.7). /cc @Bobgy /cc @james-jwu /cc @neuromage
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6829/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6829/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6821
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6821/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6821/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6821/events
https://github.com/kubeflow/pipelines/issues/6821
1,037,909,539
I_kwDOB-71UM493T4j
6,821
Automatically mount temporary directory when using the PNS executor in Argo Workflows
{ "login": "aoen", "id": 1592778, "node_id": "MDQ6VXNlcjE1OTI3Nzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1592778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aoen", "html_url": "https://github.com/aoen", "followers_url": "https://api.github.com/users/aoen/followers", "following_url": "https://api.github.com/users/aoen/following{/other_user}", "gists_url": "https://api.github.com/users/aoen/gists{/gist_id}", "starred_url": "https://api.github.com/users/aoen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aoen/subscriptions", "organizations_url": "https://api.github.com/users/aoen/orgs", "repos_url": "https://api.github.com/users/aoen/repos", "events_url": "https://api.github.com/users/aoen/events{/privacy}", "received_events_url": "https://api.github.com/users/aoen/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @aoen , currently we support Emissary Executor, would you like to try that instead of PNS executor for your pipeline workflow? https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/#emissary-executor.\r\n\r\nKFP doesn't guarantee PNS executor is working at this moment." ]
"2021-10-27T22:20:49"
"2021-10-29T00:02:27"
"2021-10-29T00:02:27"
NONE
null
### Feature Area /area sdk ### What feature would you like to see? A temporary directory should be mounted automatically by KFP when using the PNS executor in Argo Workflows. ### What is the use case or pain point? When using PNS executor in Argo Workflows, we noticed some KFP pipelines that were writing output artifacts to /tmp/ were failing with this error: "failed to save outputs: failed to chroot to main filesystem: operation not permitted". ### Is there a workaround currently? Yes, by adding a volume mount explicitly: ``` op = dsl.ContainerOp(....) volume_name = "output-empty-dir" volume = k8s_client.V1Volume(name=volume_name, empty_dir=k8s_client.V1EmptyDirVolumeSource()) volume_mount = k8s_client.V1VolumeMount(name=volume_name, mount_path=OUTPUT_DIR) op.container.add_volume_mount(volume_mount) op.add_volume(volume) ``` This work-around isn't great since it's not intuitive to users that they need to add this and it adds additional boilerplate to writing pipelines. --- Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6821/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6821/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6819
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6819/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6819/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6819/events
https://github.com/kubeflow/pipelines/issues/6819
1,037,787,996
I_kwDOB-71UM4922Nc
6,819
issue with `google_cloud_pipeline_components` and artifact URIs
{ "login": "amygdala", "id": 115093, "node_id": "MDQ6VXNlcjExNTA5Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/115093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amygdala", "html_url": "https://github.com/amygdala", "followers_url": "https://api.github.com/users/amygdala/followers", "following_url": "https://api.github.com/users/amygdala/following{/other_user}", "gists_url": "https://api.github.com/users/amygdala/gists{/gist_id}", "starred_url": "https://api.github.com/users/amygdala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amygdala/subscriptions", "organizations_url": "https://api.github.com/users/amygdala/orgs", "repos_url": "https://api.github.com/users/amygdala/repos", "events_url": "https://api.github.com/users/amygdala/events{/privacy}", "received_events_url": "https://api.github.com/users/amygdala/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "james-jwu", "id": 54086668, "node_id": "MDQ6VXNlcjU0MDg2NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/54086668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/james-jwu", "html_url": "https://github.com/james-jwu", "followers_url": "https://api.github.com/users/james-jwu/followers", "following_url": "https://api.github.com/users/james-jwu/following{/other_user}", "gists_url": "https://api.github.com/users/james-jwu/gists{/gist_id}", "starred_url": "https://api.github.com/users/james-jwu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/james-jwu/subscriptions", "organizations_url": "https://api.github.com/users/james-jwu/orgs", "repos_url": "https://api.github.com/users/james-jwu/repos", "events_url": "https://api.github.com/users/james-jwu/events{/privacy}", "received_events_url": "https://api.github.com/users/james-jwu/received_events", "type": "User", "site_admin": false }
[ { "login": "james-jwu", "id": 54086668, "node_id": "MDQ6VXNlcjU0MDg2NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/54086668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/james-jwu", "html_url": "https://github.com/james-jwu", "followers_url": "https://api.github.com/users/james-jwu/followers", "following_url": "https://api.github.com/users/james-jwu/following{/other_user}", "gists_url": "https://api.github.com/users/james-jwu/gists{/gist_id}", "starred_url": "https://api.github.com/users/james-jwu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/james-jwu/subscriptions", "organizations_url": "https://api.github.com/users/james-jwu/orgs", "repos_url": "https://api.github.com/users/james-jwu/repos", "events_url": "https://api.github.com/users/james-jwu/events{/privacy}", "received_events_url": "https://api.github.com/users/james-jwu/received_events", "type": "User", "site_admin": false } ]
null
[ "There are two aspects of this issue:\r\n\r\nUI: UI should try to exam the response code when trying to preview Artifact content, if failed, we should skip previewing this Artifact.\r\n\r\nLauncher: When trying to download artifact, if failed, continue with the execution. Because the Artifact URL is not a file.\r\n\r\nConsideration point: This fix applies to both v2 and v2compatible. Should we backport this fix to v2compatible?", "/assign @capri-xiyue ", "Update: for now, the team has dropped v2 compatible mode support. Instead, a v2 engine mode is in the works, ETA early next year for an alpha version.", "Reassigned it to James Wu to further discuss the priority", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have a similar issue with GCPC components.\r\nI'm calling the official ModelDeployOp GCPC component and the log is\r\n```\r\nI0722 02:27:58.939668 1 env.go:30] cannot find launcher configmap: name=\"kfp-launcher\" namespace=\"default\"\r\nI0722 02:27:58.939793 1 launcher.go:144] PipelineRoot defaults to \"minio://mlpipeline/v2/artifacts\".\r\nI0722 02:27:58.940124 1 cache.go:120] Connecting to cache endpoint 10.16.15.15:8887\r\nI0722 02:27:58.973459 1 launcher.go:193] enable caching\r\nF0722 02:27:59.035967 1 main.go:50] Failed to execute component: failed to fetch non default buckets: failed to parse bucketConfig for output artifact \"endpoint\" with uri \"\r\nhttps://us-central1-aiplatform.googleapis.com/v1/projects/620505276504/locations/us-central1/endpoints/5039484002562473984\r\n\": parse bucket config failed: unsupported Cloud bucket: \"https://us-central1-aiplatform.googleapis.com/v1/projects/620505276504/locations/us-central1/endpoints/5039484002562473984\"\r\nhttps://us-central1-aiplatform.googleapis.com/v1/projects/620505276504/locations/us-central1/endpoints/5039484002562473984\r\n\"\r\n```\r\n\r\n**Does this 
essentially mean that GCPC cannot be used with the current KFP backend?**\r\n\r\nWith the `V2_COMPATIBLE` mode I'm getting the aforementioned errors.\r\n\r\nWith the `V2_ENGINE` mode I'm getting the `ValueError: V2_ENGINE execution mode is not supported yet.` error." ]
"2021-10-27T19:31:52"
"2022-07-22T02:36:41"
null
CONTRIBUTOR
null
This is with the 1.7 Marketplace install of KFP on a GKE cluster, and running a pipeline that uses the 1P components via v2 compat mode. I'm using v0.1.8 of the `google_cloud_pipeline_components`. There's an apparent issue with the 1P component handling of artifact URIs in v2 compat mode. (This pipeline runs fine on Vertex Pipelines). I believe this is new-ish behaviour. While I haven't tested with v 0.17, it would be interesting to see if that version has the same issues. For these components, I'm seeing errors like this one for `model-upload`: ``` message: "Output Artifact "model" does not have a recognized storage URI "https://us-central1-aiplatform.googleapis.com/v1/projects/xxxxxxxx/locations/us-central1/models/5506217892460363776". Skipping uploading to remote storage." ``` The model is uploaded successfully-- it's just the Artifact handling that's not working. Then, looking at the `Input/Output' panel, I see this, possibly related: ``` { "error": { "code": 401, "message": "Request is missing required authentication credential. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole- ... ``` I see the same for e.g. the `endpoint-create` op. 
Then, a downstream op like `model-deploy` which takes as inputs the outputs from those ops, fails outright (unsurprisingly) as it can't get its inputs properly: ``` message: "Failed to execute component: failed to fetch non default buckets: failed to parse bucketConfig for output artifact "endpoint" with uri "https://us-central1-aiplatform.googleapis.com/v1/projects/xxxxxxxx/locations/us-central1/endpoints/1899964888889950208": parse bucket config failed: unsupported Cloud bucket: "https://us-central1-aiplatform.googleapis.com/v1/projects/xxxxxxxx/locations/us-central1/endpoints/1899964888889950208"" ``` ![image](https://user-images.githubusercontent.com/115093/139131178-ceebd036-176b-4ce5-9cfa-fe5095578b27.png) Note: I tried passing the default SA as an arg to the ops in case the 401 credentials issue is relevant, but that didn't seem to make a difference. Nor did passing the default SA as an arg to `create_run_from_pipeline_func` help (though the node pool is using the same SA already). (in fact doing that that threw a different error: `task 'keras-debug-pipeline-jwcmv.custompythonpackagetrainingjob-run' errored: pods "keras-debug-pipeline-jwcmv-2236201252" is forbidden: error looking up service account default/xxxxxxxx-compute@developer.gserviceaccount.com: serviceaccount "xxxxxxxx-compute@developer.gserviceaccount.com" not found` ) /cc @SinaChavoshi @Bobgy @IronPan
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6819/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6819/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6818
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6818/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6818/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6818/events
https://github.com/kubeflow/pipelines/issues/6818
1,037,652,740
I_kwDOB-71UM492VME
6,818
V2 Output Artifact Classes and Vertex Pipelines
{ "login": "ml6-liam", "id": 93276261, "node_id": "U_kgDOBY9IZQ", "avatar_url": "https://avatars.githubusercontent.com/u/93276261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ml6-liam", "html_url": "https://github.com/ml6-liam", "followers_url": "https://api.github.com/users/ml6-liam/followers", "following_url": "https://api.github.com/users/ml6-liam/following{/other_user}", "gists_url": "https://api.github.com/users/ml6-liam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ml6-liam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ml6-liam/subscriptions", "organizations_url": "https://api.github.com/users/ml6-liam/orgs", "repos_url": "https://api.github.com/users/ml6-liam/repos", "events_url": "https://api.github.com/users/ml6-liam/events{/privacy}", "received_events_url": "https://api.github.com/users/ml6-liam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Can you try using this approach to set artifact path?\r\n```\r\noutput_artifact.path = 'Path/to/fil'\r\n```\r\n\r\ncc @chensun for the rest of questions.", "Hi, I tried this way, it does not throw an error and the pipeline succeeds, however the URI is not updated when I view the Artifacts/Metadata of the run in Vertex AI Console post-completion. Maybe it is more of a Vertex Problem.", "Hi,\r\n\r\nI have found a way that works. In the end we used kfp sdk to generate a yaml file based on a `@component` decorated python function, we then adapted this format for our reusable components. Our component.yaml now looks like this:\r\n\r\n```\r\nname: predict\r\ndescription: Prepare and create predictions request\r\nimplementation:\r\n container:\r\n args:\r\n - --executor_input\r\n - executorInput: null\r\n - --function_to_execute\r\n - predict\r\n command:\r\n - python3\r\n - -m\r\n - kfp.v2.components.executor_main\r\n - --component_module_path\r\n - predict.py\r\n image: gcr.io/PROJECT_ID/kfp/components/predict:latest\r\ninputs: \r\n - name: input_1\r\n type: String\r\n - name: intput_2\r\n type: String\r\noutputs:\r\n - name: output_1\r\n type: Dataset\r\n - name: output_2\r\n type: Dataset\r\n```\r\nwith this change to the yaml, we can now successfully update the artifacts metadata dictionary, and uri through `artifact.path = '/path/to/file'`. These updates are displayed in the Vertex UI.\r\n\r\nI am still unsure why the component.yaml format specified in the Kubeflow documentation does not work - I think this may be a bug with Vertex Pipelines.", "Thanks for this @ml6-liam . I've had similar issues of not being able to save metadata with Artifacts while using containerized components. I tried what you posted above and am able to save metadata successfully now. 
I would agree this seems like a bug.", "@ml6-liam, glad you were able to figure it out.\r\n\r\n> I am still unsure why the component.yaml format specified in the Kubeflow documentation does not work - I think this may be a bug with Vertex Pipelines.\r\n\r\nThis isn't a bug but a new feature in v2. To support this feature we need to control the container entrypoint with `kfp.v2.components.executor_main` as you've already discovered. More specifically, the code that creates the artifact instances and saves the metadata is here: https://github.com/kubeflow/pipelines/blob/927d2a9f2dfdb90ae156979b9e0d72afa14adcd6/sdk/python/kfp/v2/components/executor.py\r\n\r\nSo if you had the legacy yaml component without this piece of code being injected in the container entrypoint, the functionality would be missing as expected.\r\n\r\nAdmit that we're currently short on documentation, our team is prioritizing documentation improvement in Q1 2022.\r\n", "Also, you might want to take a look at: https://github.com/kubeflow/pipelines/pull/6417#issue-977634071\r\nWhich would help you build your reusable components with full v2 features support." ]
"2021-10-27T17:02:50"
"2021-11-29T22:12:43"
"2021-11-29T22:12:43"
NONE
null
I am trying to create a vertex pipeline using the kfp SDK v2, I'm not sure if this is a vertex issue or a kfp issue, so forgive me if this is the wrong place for this query. I have a reusable component in my pipeline from which I want to return a Dataset Artifact. in the component.yaml I have the output specified: ``` outputs: - name: model_configuration description: output dataset describing model configuration type: Dataset ``` and as well in the command of the yaml: ``` --model_configuration, {outputPath: model_configuration} ``` Then in the function implementing the components logic, I declare a function parameter for the output like so: `output_model_configuration_output: Output[Dataset]` in the Artifact types class (declared here: https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/v2/components/types/artifact_types.py) I can see there is a method for setting the path of the Artifact with `output_artifact.path('Path/to/fil')`, but when I implement this method in my code (`output_model_configuration_output.path(f"{output_path}model_configuration.parquet")`), I am met with an error: `TypeError: 'NoneType' object is not callable` I tried writing the URI To the artifact object's uri variable directly like so: ```output_model_configuration_output.uri = f"{output_path}model_configuration.parquet"``` This didn't throw an error, but the URI Value of the artifact object displayed in the vertex pipeline was not updated in the UI when the pipeline completed. In addition, I tried adding some metadata to the artifact in this manner: ```output_model_configuration_output.metadata['num_rows'] = float(len(model_configuration))``` But I don't see this metadata reflected in the Vertex Pipeline UI When the pipeline run finishes, similar to the updated URI. Let me know if there is anymore information I can provide, or if their is a more appropriate channel for this query.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6818/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6815
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6815/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6815/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6815/events
https://github.com/kubeflow/pipelines/issues/6815
1,037,408,014
I_kwDOB-71UM491ZcO
6,815
[testing] kfp-ci cluster related tests flaky
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }, { "login": "capri-xiyue", "id": 52932582, "node_id": "MDQ6VXNlcjUyOTMyNTgy", "avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/capri-xiyue", "html_url": "https://github.com/capri-xiyue", "followers_url": "https://api.github.com/users/capri-xiyue/followers", "following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}", "gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}", "starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions", "organizations_url": "https://api.github.com/users/capri-xiyue/orgs", "repos_url": "https://api.github.com/users/capri-xiyue/repos", "events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}", "received_events_url": "https://api.github.com/users/capri-xiyue/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @capri-xiyue @Bobgy ", "/cc @chensun ", "Are there other causes for flakiness?", "Saw another cause of flakiness in https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/6796/kubeflow-pipelines-samples-v2/1453537694247292928\r\n\r\n```\r\nThis step is in Failed state with this message: OOMKilled (exit code 137)\r\n```\r\nIt happened on the xgboost sample.", "For the lock wait timeout issue, Looks like it is because the backend api has a lot of transactions involved in mysql part. Maybe deadlock happened or the mysql is not fined tuned. FYI: https://stackoverflow.com/questions/5836623/getting-lock-wait-timeout-exceeded-try-restarting-transaction-even-though-im", "The following comments try to solve the following problem, because it shows up the most often.\r\n```\r\n[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"Internal Server Error\",\"error_details\":\"Failed to create a new run.: InternalServerError: Failed to store run v2-sample-test-zcmzl to table: Error 1205: Lock wait timeout exceeded; try restarting transaction\"}]}\r\n```\r\n\r\nConnect to in-cluster mysql DB via:\r\n```\r\nkubectl run -it -n kubeflow --rm --image=mysql:8.0.12 --restart=Never mysql-client -- mysql -h mysql\r\n```\r\n\r\nHere's current DB size:\r\n\r\n```\r\nmysql> select count(*) from run_details;\r\n+----------+\r\n| count(*) |\r\n+----------+\r\n| 150004 |\r\n+----------+\r\n1 row in set (5.98 sec)\r\nmysql> select count(*) from experiments;\r\n+----------+\r\n| count(*) |\r\n+----------+\r\n| 65422 |\r\n+----------+\r\n1 row in set (0.07 sec)\r\n```\r\n\r\nSome queries on run seem to run for a very long time.\r\n\r\nI'm trying to use `show full processlist` to figure out which ones take a long time.", "This query stays in preparing state for more than 1 minute:\r\n```\r\nUPDATE run_details SET StorageState = ? WHERE UUID in (SELECT ResourceUUID FROM resource_references as rf WHERE (rf.ResourceType = ? AND rf.ReferenceUUID = ? 
AND rf.ReferenceType = ?))\r\n```\r\n\r\nBased on https://dba.stackexchange.com/a/121846, it seems our query should not use a select subquery, instead it should use JOIN for better performance.\r\n\r\nI verified an actual query is also slow:\r\n```\r\nmysql> UPDATE run_details set storageState = \"abc\" WHERE UUID in (SELECT ResourceUUID FROM resource_references as rf WHERE (rf.ResourceType = \"def\" AND rf.ReferenceUUID = \"ggg\" AND rf.ReferenceType = \"ccc\"));\r\nQuery OK, 0 rows affected (6.27 sec)\r\nRows matched: 0 Changed: 0 Warnings: 0\r\n```", "I tried to rewrite this query using JOIN, and it's now much faster:\r\n\r\n```\r\nmysql> explain UPDATE run_details as runs JOIN resource_references as rf ON runs.UUID = rf.ResourceUUID\r\n -> SET StorageState = \"def\"\r\n -> WHERE rf.ResourceType = \"c\" AND rf.ReferenceUUID = \"b\" AND rf.ReferenceType = \"c\";\r\n+----+-------------+-------+------------+--------+-------------------------+-----------------+---------+----------------------------+------+----------+-------------+\r\n| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |\r\n+----+-------------+-------+------------+--------+-------------------------+-----------------+---------+----------------------------+------+----------+-------------+\r\n| 1 | SIMPLE | rf | NULL | ref | PRIMARY,referencefilter | referencefilter | 771 | const,const,const | 1 | 100.00 | Using index |\r\n| 1 | UPDATE | runs | NULL | eq_ref | PRIMARY | PRIMARY | 257 | mlpipeline.rf.ResourceUUID | 1 | 100.00 | NULL |\r\n+----+-------------+-------+------------+--------+-------------------------+-----------------+---------+----------------------------+------+----------+-------------+\r\n2 rows in set (0.00 sec)\r\n\r\nmysql>\r\nmysql> UPDATE run_details as runs JOIN resource_references as rf ON runs.UUID = rf.ResourceUUID\r\n -> SET StorageState = \"def\"\r\n -> WHERE rf.ResourceType = \"c\" AND rf.ReferenceUUID = \"b\" AND 
rf.ReferenceType = \"c\";\r\nQuery OK, 0 rows affected (0.00 sec)\r\nRows matched: 0 Changed: 0 Warnings: 0\r\n```\r\n\r\nrewrote query\r\n```\r\nUPDATE run_details as runs JOIN resource_references as rf ON runs.UUID = rf.ResourceUUID\r\nSET StorageState = \"def\"\r\nWHERE rf.ResourceType = \"c\" AND rf.ReferenceUUID = \"b\" AND rf.ReferenceType = \"c\";\r\n```", "Found out the offending query in source code, actually jingzhang left a TODO to optimize it : ) https://github.com/kubeflow/pipelines/blob/74c7773ca40decfd0d4ed40dc93a6af591bbc190/backend/src/apiserver/storage/experiment_store.go#L266", "Temporarily skipped archive experiment step in tests to verify whether it helps resolve the flakiness.", "https://testgrid.k8s.io/googleoss-kubeflow-pipelines#kubeflow-pipelines-periodic-functional-test\r\nlooks like flakiness is resolved after workaround #6843", "Closing because I think we get the flakiness resolved, please reopen if not." ]
"2021-10-27T13:13:18"
"2021-11-02T00:18:37"
"2021-11-02T00:18:37"
CONTRIBUTOR
null
[example error log](https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/6746/kubeflow-pipelines-samples-v2/1450527267473068032#1:build-log.txt%3A224) > HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Tue, 19 Oct 2021 18:22:34 GMT', 'Vary': 'Origin', 'X-Content-Type-Options': 'nosniff', 'X-Frame-Options': 'SAMEORIGIN', 'X-Powered-By': 'Express', 'X-Xss-Protection': '0', 'Transfer-Encoding': 'chunked', 'Set-Cookie': 'S=cloud_datalab_tunnel=HMzjedbUtOfvCUxatmKXGynHsJuhMxw5of6HG3PVShE; Path=/; Max-Age=3600'}) HTTP response body: {"error":"Failed to create a new run.: InternalServerError: Failed to store run v2-sample-test-zcmzl to table: Error 1205: Lock wait timeout exceeded; try restarting transaction","code":13,"message":"Failed to create a new run.: InternalServerError: Failed to store run v2-sample-test-zcmzl to table: Error 1205: Lock wait timeout exceeded; try restarting transaction","details":[{"@type":"type.googleapis.com/api.Error","error_message":"Internal Server Error","error_details":"Failed to create a new run.: InternalServerError: Failed to store run v2-sample-test-zcmzl to table: Error 1205: Lock wait timeout exceeded; try restarting transaction"}]}
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6815/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6813
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6813/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6813/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6813/events
https://github.com/kubeflow/pipelines/issues/6813
1,037,358,605
I_kwDOB-71UM491NYN
6,813
[testing] v2 sample test failing at master
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/6804/kubeflow-pipelines-v2-go-test/1453335007127932928 logs all look good, but the test failed.\r\n\r\nEDIT: I was wrong, the unit test properly failed at line https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/6804/kubeflow-pipelines-v2-go-test/1453335007127932928#1:build-log.txt%3A44", "Sending a blank PR to test presubmit test on master branch: https://github.com/kubeflow/pipelines/pull/6814\r\n\r\nThe blank PR also fails for v2 sample test, so v2 sample test is broken on master.", "Updated what I found in [the issue body](https://github.com/kubeflow/pipelines/issues/6813#issue-1037358605).", "There were some flakiness before the broken change was introduced, I'll investigate that separately at https://github.com/kubeflow/pipelines/issues/6815.", "The last SDK related PR that got a passing v2 sample test was https://github.com/kubeflow/pipelines/pull/6472.\r\nAnd there was only one SDK PR after that -- https://github.com/kubeflow/pipelines/pull/6803, which is most likely the PR that breaks master.\r\n\r\n@chensun can you try to debug the error message?\r\n\r\nI understand the frustration with flaky tests, eventually you will stop trusting a flaky test, that's the natural behavior. So it's a proof that the flakiness issue has been very adversely affecting productivity. I'll prioritize fixing it with @capri-xiyue .", "I think the fix is to add this module to setup.py. Sending a quick PR to hopefully fix this.", "Thanks @Bobgy and @neuromage, that was a miss by me, but I wasn't expect the sample test to hit experimental code path. Realize that I have an accidental leak in `components/__init__.py`. Going to send another PR to fix this." ]
"2021-10-27T12:25:54"
"2021-10-27T16:29:09"
"2021-10-27T16:17:04"
CONTRIBUTOR
null
reported by @chensun at https://github.com/kubeflow/pipelines/pull/6804#issuecomment-952542710 UPDATE: after the investigations below, it seems to me that v2 sample test is not flaky, it's basically broken at master. The error message is https://4e18c21c9d33d20f-dot-datalab-vm-staging.googleusercontent.com/#/runs/details/bea48911-71d1-42ee-9dca-530bfee1f08e ``` Traceback (most recent call last): File "/usr/local/lib/python3.7/runpy.py", line 183, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.7/runpy.py", line 109, in _get_module_details __import__(pkg_name) File "/usr/local/lib/python3.7/site-packages/kfp/__init__.py", line 23, in <module> from . import dsl File "/usr/local/lib/python3.7/site-packages/kfp/dsl/__init__.py", line 16, in <module> from ._pipeline import Pipeline, PipelineExecutionMode, pipeline, get_pipeline_conf, PipelineConf File "/usr/local/lib/python3.7/site-packages/kfp/dsl/_pipeline.py", line 22, in <module> from kfp.dsl import _component_bridge File "/usr/local/lib/python3.7/site-packages/kfp/dsl/_component_bridge.py", line 30, in <module> from kfp.dsl import component_spec as dsl_component_spec File "/usr/local/lib/python3.7/site-packages/kfp/dsl/component_spec.py", line 21, in <module> from kfp.v2.components.types import type_utils File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/__init__.py", line 15, in <module> from kfp.v2.components.experimental.yaml_component import load_component_from_text File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/experimental/yaml_component.py", line 16, in <module> from kfp.v2.components.experimental import base_component File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/experimental/base_component.py", line 19, in <module> from kfp.v2.components.experimental import pipeline_task File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/experimental/pipeline_task.py", line 21, in <module> from 
kfp.v2.components.experimental import pipeline_channel File "/usr/local/lib/python3.7/site-packages/kfp/v2/components/experimental/pipeline_channel.py", line 22, in <module> from kfp.v2.components.types.experimental import type_utils ModuleNotFoundError: No module named 'kfp.v2.components.types.experimental' F1027 12:43:37.163580 1 main.go:50] Failed to execute component: exit status 1 ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6813/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6812
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6812/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6812/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6812/events
https://github.com/kubeflow/pipelines/issues/6812
1,037,219,896
I_kwDOB-71UM490rg4
6,812
[bug] create component launcher configmaps/kfp-launcher : Forbidden
{ "login": "lightning-like", "id": 53789463, "node_id": "MDQ6VXNlcjUzNzg5NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/53789463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lightning-like", "html_url": "https://github.com/lightning-like", "followers_url": "https://api.github.com/users/lightning-like/followers", "following_url": "https://api.github.com/users/lightning-like/following{/other_user}", "gists_url": "https://api.github.com/users/lightning-like/gists{/gist_id}", "starred_url": "https://api.github.com/users/lightning-like/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lightning-like/subscriptions", "organizations_url": "https://api.github.com/users/lightning-like/orgs", "repos_url": "https://api.github.com/users/lightning-like/repos", "events_url": "https://api.github.com/users/lightning-like/events{/privacy}", "received_events_url": "https://api.github.com/users/lightning-like/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Sorry for interrupting. We assume the issue https://github.com/kubeflow/pipelines/pull/6530 is the same.\r\n\r\nWe have added root in the base image, it's helped" ]
"2021-10-27T10:00:36"
"2021-10-27T16:23:33"
"2021-10-27T16:23:33"
NONE
null
### What steps did you take Try to run `client.create_run_from_pipeline_func` from sdk with mode `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` ### What happened: There is an error when we start pipeline v2 meanwhile v1 works fine ` F1026 18:45:17.472270 61 main.go:50] Failed to create component launcher: Get " https://100.128.0.1:443/api/v1/namespaces/ruasmg5/configmaps/kfp-launcher ": Forbidden ` ### Environment: We are attempting to deploy the kubeflow pipeline over a local cent-os cluster using kustomize We use multi-user isolation k8s v1.19.6 * How do you deploy Kubeflow Pipelines (KFP)? On a custom Kubernetes deployment. * KFP version: 1.7.0 * KFP SDK version: kfp 1.8.6 kfp-pipeline-spec 0.1.11 kfp-server-api 1.7.0 ### Anything else you would like to add: We'll be grateful if you can get any clue what we can look more. We tried to get full access to serves account predefine config map predefine `pipene_root` at `create_run_from_pipeline_func` ### Labels /area backend --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a :+1:. We prioritise the issues with the most :+1:.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6812/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6812/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6811
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6811/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6811/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6811/events
https://github.com/kubeflow/pipelines/issues/6811
1,037,218,290
I_kwDOB-71UM490rHy
6,811
[bug] Kubeflow Pipeline connection error
{ "login": "Deseram", "id": 48093298, "node_id": "MDQ6VXNlcjQ4MDkzMjk4", "avatar_url": "https://avatars.githubusercontent.com/u/48093298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Deseram", "html_url": "https://github.com/Deseram", "followers_url": "https://api.github.com/users/Deseram/followers", "following_url": "https://api.github.com/users/Deseram/following{/other_user}", "gists_url": "https://api.github.com/users/Deseram/gists{/gist_id}", "starred_url": "https://api.github.com/users/Deseram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Deseram/subscriptions", "organizations_url": "https://api.github.com/users/Deseram/orgs", "repos_url": "https://api.github.com/users/Deseram/repos", "events_url": "https://api.github.com/users/Deseram/events{/privacy}", "received_events_url": "https://api.github.com/users/Deseram/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hello @Deseram, there are two issues you are facing here.\r\n\r\nFor SDK issue: Please refer to https://www.kubeflow.org/docs/components/pipelines/multi-user/#when-using-the-sdk for specifying namespace when calling KFP API in multi-user mode. \r\n\r\nFor ml-pipeline-ui issue: Please check your istio configuration to see whether there has been any change since last successful access. \r\n\r\nWould you like to try them out and see whether it resolves your issue? Feel free to share more info when you are investigating.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-27T09:58:58"
"2022-03-02T15:05:15"
null
NONE
null
### What steps did you take We have had our kubeflow setup running for about 6 months now. But recently, we noticed that the sdk was having trouble communicating with KF Pipelines, this wasn't a regular incident. But on occasions, we would get the following errors: `List experiments failed.: InternalServerError: Failed to list experiments: invalid connection: invalid connection","message":"List experiments failed.: InternalServerError: Failed to list experiments: invalid connection: invalid connection","code":13,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Internal Server Error","error_details":"List experiments failed.: InternalServerError: Failed to list experiments: invalid connection: invalid connection` But, with the above error, the pipelines were started in the background; the sdk just failed. --- But the most recent event was that the entire Kubeflow Pipelines UI went down, it just presented the following text: `upstream connect error or disconnect/reset before headers. reset reason: connection failure` Due to this being a production issue, we did some investigation on the cluster itself. All the pods were healthy and didn't seem to have issues. The `ml-pipelines` pod logged the following errors: ``` I1027 08:05:53.056560 7 interceptor.go:29] /api.RunService/ListRuns handler starting I1027 08:05:53.056623 7 error.go:227] Invalid input error: ListRuns must filter by resource reference in multi-user mode. 
github.com/kubeflow/pipelines/backend/src/common/util.NewInvalidInputError /go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:174 github.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).ListRuns /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:76 github.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler.func1 /go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:1327 main.apiServerInterceptor /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30 github.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler /go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:1329 google.golang.org/grpc.(*Server).processUnaryRPC /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:995 google.golang.org/grpc.(*Server).handleStream /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1275 google.golang.org/grpc.(*Server).serveStreams.func1.1 /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:710 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357 /api.RunService/ListRuns call failed github.com/kubeflow/pipelines/backend/src/common/util.(*UserError).wrapf /go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:215 github.com/kubeflow/pipelines/backend/src/common/util.Wrapf /go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:240 main.apiServerInterceptor /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:32 github.com/kubeflow/pipelines/backend/api/go_client._RunService_ListRuns_Handler /go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:1329 google.golang.org/grpc.(*Server).processUnaryRPC /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:995 google.golang.org/grpc.(*Server).handleStream /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1275 google.golang.org/grpc.(*Server).serveStreams.func1.1 
/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:710 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357 ``` and ``` I1027 08:06:05.062235 7 error.go:227] Invalid input error: Invalid resource references for experiment. ListExperiment requires filtering by namespace. github.com/kubeflow/pipelines/backend/src/common/util.NewInvalidInputError /go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:174 github.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:68 github.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1 /go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:619 main.apiServerInterceptor /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30 github.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler /go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:621 google.golang.org/grpc.(*Server).processUnaryRPC /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:995 google.golang.org/grpc.(*Server).handleStream /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1275 google.golang.org/grpc.(*Server).serveStreams.func1.1 /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:710 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357 /api.ExperimentService/ListExperiment call failed github.com/kubeflow/pipelines/backend/src/common/util.(*UserError).wrapf /go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:215 github.com/kubeflow/pipelines/backend/src/common/util.Wrapf /go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:240 main.apiServerInterceptor /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:32 github.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler 
/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:621 google.golang.org/grpc.(*Server).processUnaryRPC /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:995 google.golang.org/grpc.(*Server).handleStream /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1275 google.golang.org/grpc.(*Server).serveStreams.func1.1 /go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:710 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357 I1027 08:06:13.842497 7 interceptor.go:29] /api.PipelineService/ListPipelines handler starting ``` Finally, we decided to restart all `ml-pipeline-*` pods, and once restarted, the problem seems to be resolved. We suspect the issue had something to do with `istio` and the proxies for each pod failing, but there were no logs to point these out. We would like to ensure that we don't have similar issues happening in production again; could we please have a solution for this. ### What happened: The SDK failed with: `List experiments failed.: InternalServerError: Failed to list experiments: invalid connection: invalid connection","message":"List experiments failed.: InternalServerError: Failed to list experiments: invalid connection: invalid connection","code":13,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Internal Server Error","error_details":"List experiments failed.: InternalServerError: Failed to list experiments: invalid connection: invalid connection` But, with the above error, the pipelines were started in the background; the SDK just failed. The Kubeflow UI displayed `upstream connect error or disconnect/reset before headers. reset reason: connection failure` ### What did you expect to happen: The SDK to connect to the client and start the experiment; the Kubeflow UI to show all pipelines and experiments. ### Environment: - AWS - Kubernetes version: 1.18 * How do you deploy Kubeflow Pipelines (KFP)? 
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.0.4 <!-- To find the version number, see the version number shown at the bottom of the KFP UI left sidenav. --> * KFP SDK version: 1.3.0 ### Labels <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6811/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6811/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6810
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6810/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6810/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6810/events
https://github.com/kubeflow/pipelines/issues/6810
1,037,076,029
I_kwDOB-71UM490IY9
6,810
[feature] Provide custom index_url to pip when installing packages in Pipeline components
{ "login": "valko073", "id": 22097398, "node_id": "MDQ6VXNlcjIyMDk3Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/22097398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/valko073", "html_url": "https://github.com/valko073", "followers_url": "https://api.github.com/users/valko073/followers", "following_url": "https://api.github.com/users/valko073/following{/other_user}", "gists_url": "https://api.github.com/users/valko073/gists{/gist_id}", "starred_url": "https://api.github.com/users/valko073/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/valko073/subscriptions", "organizations_url": "https://api.github.com/users/valko073/orgs", "repos_url": "https://api.github.com/users/valko073/repos", "events_url": "https://api.github.com/users/valko073/events{/privacy}", "received_events_url": "https://api.github.com/users/valko073/received_events", "type": "User", "site_admin": false }
[ { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "cc @chensun ", "Hi @valko073 , IIRC `packages_to_install` also accepts a full GitHub path, for instance `'git+https://github.com/kubeflow/pipelines#egg=kfp&subdirectory=sdk/python'`. Would that work for you?\r\n\r\nAlternatively, you may also consider containerizing your component--install the dependencies while building the container image.", "Hi @chensun, \r\nthe packages are hosted on a private Pypi server. I tried using the link to the server which didn't work for me. What did work was writing a custom image for the components that simply takes the bare Python:3.x image and adds a pip.conf that points to our server.", "> Hi @chensun, the packages are hosted on a private Pypi server. I tried using the link to the server which didn't work for me. What did work was writing a custom image for the components that simply takes the bare Python:3.x image and adds a pip.conf that points to our server.\r\n\r\nGot it. Thanks for the info.\r\nIf you don't mind hosting a custom image, I would suggest you pre-install the packages into the image--that way it's a one time installation cost, you would save some recurring runtime cost.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi @valko073, just want to let you know that this was fixed via https://github.com/kubeflow/pipelines/pull/7453, which was cherrypicked and released in KFP SDK 1.8.12.\r\n\r\n/cc @connor-mccarthy " ]
"2021-10-27T07:35:03"
"2022-04-18T17:29:52"
"2022-04-18T17:29:51"
NONE
null
### Feature Area /area sdk /area components ### What feature would you like to see? As a user I would like to be able to provide a custom URL to a self-hosted PyPI server for the components to install their _packages_to_install_ from. I could not find a way of doing so in the documentation / code. ### What is the use case or pain point? Due to firewall restrictions, it is not possible for me to reach the official PyPI server from my Kubeflow. When manually installing packages, I can point pip via the index_url to the self-hosted PyPI inside the network. I would love to have that same functionality when starting Pipeline components. ### Is there a workaround currently? As a workaround, we have some custom Docker images with the most commonly used packages preinstalled. However, this makes the images unnecessarily large and cannot cope with specific packages not preinstalled in the images. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
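The pip.conf-based workaround mentioned in the discussion — a thin image on top of `python:3.x` whose only addition is a pip configuration pointing at the internal index — can be sketched as follows; the URL and hostname are placeholders for the self-hosted server:

```ini
; /etc/pip.conf -- baked into the component base image so that every
; pip install inside the component resolves against the internal index.
[global]
index-url = https://pypi.internal.example.com/simple
trusted-host = pypi.internal.example.com
```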
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6810/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6810/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6799
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6799/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6799/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6799/events
https://github.com/kubeflow/pipelines/issues/6799
1,035,294,762
I_kwDOB-71UM49tVgq
6,799
[sdk] superfluous isinstance check in local run
{ "login": "feizerl", "id": 88751631, "node_id": "MDQ6VXNlcjg4NzUxNjMx", "avatar_url": "https://avatars.githubusercontent.com/u/88751631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/feizerl", "html_url": "https://github.com/feizerl", "followers_url": "https://api.github.com/users/feizerl/followers", "following_url": "https://api.github.com/users/feizerl/following{/other_user}", "gists_url": "https://api.github.com/users/feizerl/gists{/gist_id}", "starred_url": "https://api.github.com/users/feizerl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/feizerl/subscriptions", "organizations_url": "https://api.github.com/users/feizerl/orgs", "repos_url": "https://api.github.com/users/feizerl/repos", "events_url": "https://api.github.com/users/feizerl/events{/privacy}", "received_events_url": "https://api.github.com/users/feizerl/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "cc @lynnmatrix ", "@feizerl Thanks for this issue, and you are right, the check will always be true. \r\nWhat i really want to check is whether the item is an instance of primitive type, if not, convert the item to string so that it can be used in command line.\r\n" ]
"2021-10-25T15:44:31"
"2021-12-03T18:14:58"
"2021-12-03T18:14:58"
CONTRIBUTOR
null
Hello, I am trying to run a kubeflow pipeline locally, and am currently running into an issue due to this check: https://github.com/kubeflow/pipelines/blob/4abc4fd1874f7937a193d31dbbe650618c88ca95/sdk/python/kfp/_local_client.py#L475 My code (roughly) looks like this: ```py @container_op def upload(..) -> list: ... @container_op def download(path: String()): ... with ParallelFor(upload.output) as path: download(path) ``` The code works on a remote k8s run, but fails locally when run with LocalClient due to the extra layer of serialization caused by the above check. It changes path from `gs://...` to `"gs://..."`. (Note the extra pair of quotations.) What's the purpose of that check? Isn't it always going to succeed, since everything is an instance of `object` in Python anyway?
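To the reporter's final question: yes — in Python every value, including `None`, is an instance of `object`, so an `isinstance(..., object)` check can never filter anything out. A tiny self-contained demonstration (illustrative only, not the actual `_local_client.py` code):

```python
# Every Python value -- str, int, float, None, containers -- is an instance
# of `object`, so a check against object is taken unconditionally.
values = ["gs://bucket/path", 42, 3.14, None, [1, 2], {"k": "v"}]
always_true = all(isinstance(v, object) for v in values)
print(always_true)  # True
```

This is why the maintainer's reply clarifies that the intended check was against primitive types (to decide whether to stringify for the command line), not against `object`.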
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6799/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6793
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6793/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6793/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6793/events
https://github.com/kubeflow/pipelines/issues/6793
1,033,276,886
I_kwDOB-71UM49lo3W
6,793
Kubeflow component does not finish executing when running on **e2-highmem-4** machine type cluster
{ "login": "sharmahemlata", "id": 29674709, "node_id": "MDQ6VXNlcjI5Njc0NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/29674709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sharmahemlata", "html_url": "https://github.com/sharmahemlata", "followers_url": "https://api.github.com/users/sharmahemlata/followers", "following_url": "https://api.github.com/users/sharmahemlata/following{/other_user}", "gists_url": "https://api.github.com/users/sharmahemlata/gists{/gist_id}", "starred_url": "https://api.github.com/users/sharmahemlata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sharmahemlata/subscriptions", "organizations_url": "https://api.github.com/users/sharmahemlata/orgs", "repos_url": "https://api.github.com/users/sharmahemlata/repos", "events_url": "https://api.github.com/users/sharmahemlata/events{/privacy}", "received_events_url": "https://api.github.com/users/sharmahemlata/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @sharmahemlata , would you like to share more information about your cluster?\r\n\r\n1. K8s version on GKE \r\n2. node image type for Nodepool `custom-2-4352` and `e2-highmem-4`\r\n3. Are you using Docker executor or Emissary Executor? https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/#choosing-the-workflow-executor\r\n\r\nIf you are using `cos_containerd` in your `e2-highmem-4` node pool, you will need to switch to Emissary executor to work successfully. https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/#emissary-executor", "Hi @zijianjoy, \r\nThanks for responding. Switching to the Emissary executor gives me errors, so I need to use the docker executor. \r\n\r\nHowever, the machine image type for my nodes was `Container-Optimized OS with containerd`. \r\nMy Kubeflow components are packaged as docker containers, so now I have changed the node image type to `Container-Optimized OS with Docker` and it works perfectly well. \r\n\r\nThanks for all your help, it was first switching to the Emissary executor that steered me in the correct direction. :) " ]
"2021-10-22T07:40:58"
"2021-10-29T05:48:37"
"2021-10-29T05:48:37"
NONE
null
### What steps did you take On GCP, I created a kubeflow pipelines instance along with which a default cluster that uses machine **custom-2-4352** is created. I also added another node pool to the cluster of the machine type **e2-highmem-4** ### What happened: When kubeflow component pods are assigned to the **custom-2-4352** node pool, then the kubeflow components finish executing and the output artifacts are visible. However, when they are assigned to the **e2-highmem-4** node pool, the components do not finish. Interestingly, the logs confirm that the component code has finished executing. ### Environment: * How do you deploy Kubeflow Pipelines (KFP)? **on GCP** * KFP version: **1.7.0** * KFP SDK version: **1.6.3** ### Additional Information: I also tried to enter the pod and look for the artifacts using the kubectl command: `kubectl exec -c main -it decision-tree-pipeline-pslcc-834569436 -- /bin/bash` But I get this error:` error: unable to upgrade connection: container not found ("main")` The main container is there, the logs are available for it. ### Attachments: I am attaching two pod definition files: 1. For a pod that runs **successfully** on the **custom-2-4352** node pool 2. For a pod that runs **unsuccessfully** on the **e2-highmem-4** node pool ### Labels /area components Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍 [e2-highmem-4-nodepool-pod.txt](https://github.com/kubeflow/pipelines/files/7395157/e2-highmem-4-nodepool-pod.txt) [custom-2-4352-nodepool-pod.txt](https://github.com/kubeflow/pipelines/files/7395159/custom-2-4352-nodepool-pod.txt) .
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6793/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6792
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6792/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6792/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6792/events
https://github.com/kubeflow/pipelines/issues/6792
1,032,994,118
I_kwDOB-71UM49kj1G
6,792
How to retrieve component status and logs from pipeline SDK ?
{ "login": "ysozer", "id": 34837436, "node_id": "MDQ6VXNlcjM0ODM3NDM2", "avatar_url": "https://avatars.githubusercontent.com/u/34837436?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ysozer", "html_url": "https://github.com/ysozer", "followers_url": "https://api.github.com/users/ysozer/followers", "following_url": "https://api.github.com/users/ysozer/following{/other_user}", "gists_url": "https://api.github.com/users/ysozer/gists{/gist_id}", "starred_url": "https://api.github.com/users/ysozer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysozer/subscriptions", "organizations_url": "https://api.github.com/users/ysozer/orgs", "repos_url": "https://api.github.com/users/ysozer/repos", "events_url": "https://api.github.com/users/ysozer/events{/privacy}", "received_events_url": "https://api.github.com/users/ysozer/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "The workaround is to call GET /apis/v1beta1/runs/{run_id} https://www.kubeflow.org/docs/components/pipelines/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-runs--run_id--get and check the apiPipelineRuntime field.\r\nKFP V2 will support this feature.", "@capri-xiyue thank you for your message, do you know how can we encode the workflow_manifest? It seems like protobuf to_dict() is not available here. Thanks", "@ysozer\r\n\r\nThis Client API seems like what you want. You can check completion by run_id.\r\n\r\nhttps://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client.wait_for_run_completion\r\n\r\nOr you can use `get_run` and write your own script with `while` statement.\r\n\r\nhttps://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client.get_run", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@capri-xiyue is there active development on this feature currently? We were thinking of trying to contribute it upstream. ", "Any idea of how this would be done running the pipeline on Vertex AI? Can you access the state of the component/pipeline run somehow?", "Has this been resolved yet in KFP 2.0 version?", "I have a functional implementation on my subclass of `Client`. I can submit a PR if it seems useful to the maintainers. No pressure!\r\n\r\nGiven:\r\n```python\r\nclient = Client()\r\nlogs = client.get_logs(<run_id>)\r\n```\r\n\r\nThe result is a dictionary with a key for each logging backend and logs for each component listed in sequence. 
\r\n\r\nIt also optionally pretty prints all component logs to stdout:\r\n```\r\nKubernetes\r\n==========\r\n> logs-model\r\ntime=\"2022-07-19T19:35:53.435Z\" level=info msg=\"capturing logs\" argo=true\r\nhello world\r\n\r\nMinIO\r\n=====\r\n> logs-model\r\nhello world\r\n```\r\n\r\nThis pipeline only has one component, `logs-model`.", "> I have a functional implementation on my subclass of `Client`. I can submit a PR if it seems useful to the maintainers. No pressure!\r\n> \r\n> Given:\r\n> \r\n> ```python\r\n> client = Client()\r\n> logs = client.get_logs(<run_id>)\r\n> ```\r\n> \r\n> The result is a dictionary with a key for each logging backend and logs for each component listed in sequence.\r\n> \r\n> It also optionally pretty prints all component logs to stdout:\r\n> \r\n> ```\r\n> Kubernetes\r\n> ==========\r\n> > logs-model\r\n> time=\"2022-07-19T19:35:53.435Z\" level=info msg=\"capturing logs\" argo=true\r\n> hello world\r\n> \r\n> MinIO\r\n> =====\r\n> > logs-model\r\n> hello world\r\n> ```\r\n> \r\n> This pipeline only has one component, `logs-model`.\r\n\r\nHowever, does it return the component status though? I am mostly interested in the component status", "No, but that would be really trivial to add. Thanks for the feature suggestion! ", "Was wondering on the status of this, if a PR has been submitted or if some workaround exists for Kfp v1.7\r\n\r\nThe components in the pipeline I'm working with will output it's training status in the logs section. Intent here is to script something to grab the logs, parse them, and save them to eventually export as metrics. I don't have much control over the way the components input and output since I'm building with a set of images that I don't have modify access to. That's why I'd rather just script something on top to grab the logs.\r\n\r\nI've tried the apiPipelineRuntime field as suggested but the stuff I'm getting from that doesn't seem to be the same as the logs that I'm looking for. 
Screenshot of the specific kind/place I'm looking for:\r\n\r\n![image](https://user-images.githubusercontent.com/110224695/181717858-00faf908-4f5e-4eb6-a4eb-941618698d1f.png)\r\n", "Would ya'll prefer a table that looks like this?\r\n```python\r\n+---------------+-----------+\r\n| Component | Status |\r\n+---------------+-----------+\r\n| hello-world | Succeeded |\r\n| kf-greeting | Succeeded |\r\n| kf-greeting-2 | Succeeded |\r\n+---------------+-----------+\r\nProgress: 3/3\r\n```\r\n\r\nOr a DAG in ASCII that looks like this?\r\n```python\r\n+---------------------------+ +-------------------------+\r\n| kf-greeting-2 | succeeded | <-- | hello-world | succeeded |\r\n+---------------------------+ +-------------------------+\r\n |\r\n |\r\n v\r\n +-------------------------+\r\n | kf-greeting | succeeded |\r\n +-------------------------+\r\n```\r\n\r\nOr would you prefer some rendered DOT file output in a Jupyter notebook cell?" ]
"2021-10-21T22:31:51"
"2022-08-11T18:23:48"
null
NONE
null
### Feature Area Hello In our project, we are using kubeflow pipelines from the SDK and not using the UI at all. It is quite difficult to understand a pipeline failure. Is there any way to retrieve the component status and logs of an individual pipeline run? Maybe it is possible, but I could not find it. ### What feature would you like to see? I would like to retrieve component results and logs from the pipeline SDK. ### What is the use case or pain point? When a pipeline fails, we can't determine the reason for the failure without going to the Kubeflow GUI. ### Is there a workaround currently? No workaround --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
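Until first-class support exists, run completion can be polled through the existing client APIs (`kfp.Client.get_run` / `wait_for_run_completion`, as noted in the comments). A hedged, self-contained sketch of such a poller — the helper name and the terminal-state set are illustrative, not part of the KFP SDK:

```python
import time

# States assumed to be terminal for a KFP run; verify against your KFP version.
TERMINAL_STATES = {"Succeeded", "Failed", "Error", "Skipped"}

def wait_for_terminal_state(get_status, timeout=600.0, poll_interval=5.0):
    """Poll `get_status()` until it returns a terminal state or `timeout` elapses.

    `get_status` is any zero-argument callable; with the KFP SDK it could be
    something like `lambda: client.get_run(run_id).run.status` (assumed usage).
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"run still in state {status!r} after {timeout}s")
        time.sleep(poll_interval)
```

Pod logs for a failed step would still need to be fetched separately (e.g. via the Kubernetes API or the artifact store), which is the gap this feature request describes.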
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6792/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6792/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6791
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6791/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6791/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6791/events
https://github.com/kubeflow/pipelines/issues/6791
1,032,898,182
I_kwDOB-71UM49kMaG
6,791
[feature] install tar.gz package from bucket
{ "login": "LeoGrosjean", "id": 9574771, "node_id": "MDQ6VXNlcjk1NzQ3NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/9574771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeoGrosjean", "html_url": "https://github.com/LeoGrosjean", "followers_url": "https://api.github.com/users/LeoGrosjean/followers", "following_url": "https://api.github.com/users/LeoGrosjean/following{/other_user}", "gists_url": "https://api.github.com/users/LeoGrosjean/gists{/gist_id}", "starred_url": "https://api.github.com/users/LeoGrosjean/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeoGrosjean/subscriptions", "organizations_url": "https://api.github.com/users/LeoGrosjean/orgs", "repos_url": "https://api.github.com/users/LeoGrosjean/repos", "events_url": "https://api.github.com/users/LeoGrosjean/events{/privacy}", "received_events_url": "https://api.github.com/users/LeoGrosjean/received_events", "type": "User", "site_admin": false }
[ { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-21T20:08:38"
"2022-03-02T15:05:25"
null
NONE
null
### Feature Area /area sdk /area components ### What feature would you like to see? Hello ! It will be so awesome to allow this kind of package install (I know I can make my own image, but it would be easy to do a POC with this kind of import) ```python @component( packages_to_install = [ "pandas", "gs://pnmsyslog-vertex-kfp-test/package/pca_aiplatform-2.tar.gz" # <-- HERE ], ) def train_pca( dataset: Input[Dataset], model_artifact: Output[Model] ): from pca_aiplatform import get_model model = get_model() ... ``` ### What is the use case or pain point? Quickly test a package uploaded on GCS, S3 or Blobstorage ### Is there a workaround currently? Yes 😄 --- Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6791/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6791/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6787
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6787/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6787/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6787/events
https://github.com/kubeflow/pipelines/issues/6787
1,032,000,373
I_kwDOB-71UM49gxN1
6,787
mlpipeline_ui_metadata_path is not retrievable via a pipeline's run_result
{ "login": "drubinstein", "id": 577149, "node_id": "MDQ6VXNlcjU3NzE0OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/577149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drubinstein", "html_url": "https://github.com/drubinstein", "followers_url": "https://api.github.com/users/drubinstein/followers", "following_url": "https://api.github.com/users/drubinstein/following{/other_user}", "gists_url": "https://api.github.com/users/drubinstein/gists{/gist_id}", "starred_url": "https://api.github.com/users/drubinstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drubinstein/subscriptions", "organizations_url": "https://api.github.com/users/drubinstein/orgs", "repos_url": "https://api.github.com/users/drubinstein/repos", "events_url": "https://api.github.com/users/drubinstein/events{/privacy}", "received_events_url": "https://api.github.com/users/drubinstein/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "We don't currently provide visualization support in local runner: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.html?highlight=run_pipeline_func_locally#kfp.run_pipeline_func_locally (local runner as Alpha). Welcome PRs if you would like to contribute.", "Thanks for your response.\r\n\r\nI'm not trying to visualize the results. I only want to get the file with the `get_output_path` function and read its contents. Is that not possible? I can do it with other `OutputPath` arguments. Just not one with this name.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-21T02:49:32"
"2022-03-02T15:05:21"
null
NONE
null
### What steps did you take & What happened I am trying to create a python component for visualization. The component writes HTML to `mlpipeline_ui_metadata_path`. I then try to place the component in a pipeline using the local executor and run the local pipeline with the idea that I can take the pipeline's run result, call `get_output_file`, and check the written HTML to make sure it is what I expected. However, when I try to run this, all I see is ``` def _get_output_file_path( self, run_name: str, pipeline: dsl.Pipeline, op_name: str, output_name: str = None, ) -> str: """Get the file path of component output.""" op_dependency = pipeline.ops[op_name] if output_name is None and len(op_dependency.file_outputs) == 1: output_name = next(iter(op_dependency.file_outputs.keys())) > output_file = op_dependency.file_outputs[output_name] E KeyError: None ``` Even if I try to explicitly pass `mlpipeline_ui_metadata` as the key. When I try to print out all the possible outputs for the op, I see {} even though there is an output artifact if I try to print the op object from `run_result._pipeline.ops`. An example script: ```python3 import kfp import kfp.dsl from kfp.components import create_component_from_func,InputPath, OutputPath @create_component_from_func def do_something_with_ui(mlpipeline_ui_metadata_path: OutputPath()): import json with open(mlpipeline_ui_metadata_path, "w") as f: json.dump( { "outputs": [ { "type": "web-app", "storage": "inline", "source": f"<p>Hello</p>", } ] }, f, ) def test_ui(): def _pipeline(): x = do_something_with_ui() run_result = kfp.run_pipeline_func_locally( pipeline_func=_pipeline, arguments={}, execution_mode=kfp.LocalClient.ExecutionMode("local"), ) assert run_result.success print(run_result._pipeline.ops) output = run_result.get_output_file("do-something-with-ui") ``` ### What did you expect to happen: I expected to get the path to the mlpipeline_ui_metadata OutputPath that I could subsequently inspect. 
### Environment: * Local Runner * KFP version: 1.8.6 --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6787/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6782
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6782/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6782/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6782/events
https://github.com/kubeflow/pipelines/issues/6782
1,031,780,192
I_kwDOB-71UM49f7dg
6,782
[feature] viewer reconciler deletes oldest viewers down to actual MaxNumViewers number
{ "login": "davidxia", "id": 480621, "node_id": "MDQ6VXNlcjQ4MDYyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/480621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidxia", "html_url": "https://github.com/davidxia", "followers_url": "https://api.github.com/users/davidxia/followers", "following_url": "https://api.github.com/users/davidxia/following{/other_user}", "gists_url": "https://api.github.com/users/davidxia/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidxia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidxia/subscriptions", "organizations_url": "https://api.github.com/users/davidxia/orgs", "repos_url": "https://api.github.com/users/davidxia/repos", "events_url": "https://api.github.com/users/davidxia/events{/privacy}", "received_events_url": "https://api.github.com/users/davidxia/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-20T19:59:13"
"2022-03-02T15:05:25"
null
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? MaxNumViewers behavior currently only deletes the oldest viewer in a namespace when creating a new one. But it won't delete oldest viewers down to the max specified. For example, if there's already 50 viewers and you set the MaxNumViewers to 5, when you create a new one, the Reconciler will delete the oldest viewer and still leave you with 50. Would it be an improvement to have the Reconciler delete down to the max specified? If so, I can contribute a quick, small PR. https://github.com/kubeflow/pipelines/blob/74c7773ca40decfd0d4ed40dc93a6af591bbc190/backend/src/crd/controller/viewer/reconciler/reconciler.go#L56-L61 ### What is the use case or pain point? As a user I might expect the MaxNumViewers to be the actual max and that there's garbage collection down to this number. ### Is there a workaround currently? Manually kubectl delete viewers. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6782/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6780
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6780/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6780/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6780/events
https://github.com/kubeflow/pipelines/issues/6780
1,031,556,519
I_kwDOB-71UM49fE2n
6,780
Run List Page Buttons: Compare runs, Clone run, Archive and restore runs
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen" ]
"2021-10-20T15:36:41"
"2022-09-30T19:26:57"
"2022-09-30T19:26:57"
COLLABORATOR
null
null
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6780/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6779
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6779/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6779/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6779/events
https://github.com/kubeflow/pipelines/issues/6779
1,031,556,458
I_kwDOB-71UM49fE1q
6,779
Pipeline Detail Page Buttons: Create run, Upload version, Delete pipeline
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false }
[ { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-20T15:36:37"
"2023-05-24T17:23:01"
"2023-05-24T17:23:01"
COLLABORATOR
null
# User Journeys We need to validate a list of functionality regarding Pipeline Detail Page. 1. Ability to create a new run without parameters; it should be able to create a new run entity in the RunList. 2. Ability to create a new run with parameters; there are various types we should support using Protobuf.Value. 3. Ability to create a new run with a custom pipeline root. 4. Ability to create a recurring run. 5. Ability to upload a new version and navigate among multiple versions. 6. Ability to delete a version or a pipeline. # Related issue https://github.com/kubeflow/pipelines/issues/6674
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6779/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6778
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6778/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6778/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6778/events
https://github.com/kubeflow/pipelines/issues/6778
1,031,555,266
I_kwDOB-71UM49fEjC
6,778
Convert from PipelineJob to PipelineSpec when loading static DAG.
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-10-20T15:35:22"
"2021-11-19T13:12:33"
"2021-11-19T13:12:33"
COLLABORATOR
null
null
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6778/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6777
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6777/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6777/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6777/events
https://github.com/kubeflow/pipelines/issues/6777
1,031,358,741
I_kwDOB-71UM49eUkV
6,777
[bug] Error uploading pipeline by URL
{ "login": "dgajewski1", "id": 9049532, "node_id": "MDQ6VXNlcjkwNDk1MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/9049532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dgajewski1", "html_url": "https://github.com/dgajewski1", "followers_url": "https://api.github.com/users/dgajewski1/followers", "following_url": "https://api.github.com/users/dgajewski1/following{/other_user}", "gists_url": "https://api.github.com/users/dgajewski1/gists{/gist_id}", "starred_url": "https://api.github.com/users/dgajewski1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dgajewski1/subscriptions", "organizations_url": "https://api.github.com/users/dgajewski1/orgs", "repos_url": "https://api.github.com/users/dgajewski1/repos", "events_url": "https://api.github.com/users/dgajewski1/events{/privacy}", "received_events_url": "https://api.github.com/users/dgajewski1/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Can you help share the pipeline definition that you want to upload?", "Example file, I've tested upload from s3 without success.\r\nPlease unpack and test .yaml file.\r\n[test_pipeline_upload.yaml.zip](https://github.com/kubeflow/pipelines/files/7440440/test_pipeline_upload.yaml.zip)\r\n\r\n", "Bump", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-20T12:35:55"
"2022-04-17T06:27:24"
null
NONE
null
### Environment

- Full Kubeflow deployment on AWS EKS, Kubeflow 1.2.0, KFP 1.0.4, or
- local Pipelines deployment on Minikube, KFP 1.6.0

The same problem occurs on both deployments.

### Steps to reproduce

In the Kubeflow Pipelines UI, uploading a pipeline by URL raises an error:

<img width="593" alt="Screenshot 2021-10-20 at 12 56 45" src="https://user-images.githubusercontent.com/9049532/138093572-f94beec6-8528-4a02-89b7-b8c5804c28bc.png">

Logs from the ml-pipeline pod:

I1020 13:07:43.102924 8 error.go:227] error converting YAML to JSON: yaml: line 41: found unexpected end of stream
InvalidInputError: Failed to parse the parameter.
github.com/kubeflow/pipelines/backend/src/common/util.NewInvalidInputErrorWithDetails
/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:179
github.com/kubeflow/pipelines/backend/src/common/util.ValidateWorkflow
/go/src/github.com/kubeflow/pipelines/backend/src/common/util/template_util.go:51
github.com/kubeflow/pipelines/backend/src/common/util.GetParameters
/go/src/github.com/kubeflow/pipelines/backend/src/common/util/template_util.go:30
github.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).CreatePipelineVersion
/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:1007
github.com/kubeflow/pipelines/backend/src/apiserver/server.(*PipelineServer).CreatePipelineVersion
/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/pipeline_server.go:142
github.com/kubeflow/pipelines/backend/api/go_client._PipelineService_CreatePipelineVersion_Handler.func1
/go/src/github.com/kubeflow/pipelines/backend/api/go_client/pipeline.pb.go:1106
main.apiServerInterceptor
/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30
github.com/kubeflow/pipelines/backend/api/go_client._PipelineService_CreatePipelineVersion_Handler
/go/src/github.com/kubeflow/pipelines/backend/api/go_client/pipeline.pb.go:1108
google.golang.org/grpc.(*Server).processUnaryRPC
/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:995
google.golang.org/grpc.(*Server).handleStream
/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1275
google.golang.org/grpc.(*Server).serveStreams.func1.1
/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:710
runtime.goexit

The problem occurs intermittently, more often for larger files. Sometimes the upload does not fail, but the pipeline YAML is incomplete. This looks like a problem with connection timeouts or buffering (or something else). I have tried fetching the pipeline from an AWS S3 bucket, and even from a local fileserver without any chunking or streaming, with the same results as above. Uploading a file from the local machine works fine; the bug occurs only when uploading by URL.

### Expected result

Pipelines should be correctly uploaded by URL

---

<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6777/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6776
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6776/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6776/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6776/events
https://github.com/kubeflow/pipelines/issues/6776
1,030,973,199
I_kwDOB-71UM49c2cP
6,776
v2 control flow - design
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "The design is ready for review.\r\nhttps://docs.google.com/document/d/1TZeZtxwPzAImIu8Jk_e-4otSx467Ckf0smNe7JbPReE/edit?usp=sharing&resourcekey=0-lTeZGW_Ys78j1LU60CEARg", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-10-20T04:55:05"
"2022-04-17T06:27:44"
null
CONTRIBUTOR
null
Expand the bit.ly/kfp-v2 design with detailed design for control flow.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6776/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6772
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6772/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6772/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6772/events
https://github.com/kubeflow/pipelines/issues/6772
1,030,732,070
I_kwDOB-71UM49b7km
6,772
Action Required: Fix Renovate Configuration
{ "login": "forking-renovate[bot]", "id": 34481203, "node_id": "MDM6Qm90MzQ0ODEyMDM=", "avatar_url": "https://avatars.githubusercontent.com/in/7402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forking-renovate%5Bbot%5D", "html_url": "https://github.com/apps/forking-renovate", "followers_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/followers", "following_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/repos", "events_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/forking-renovate%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @zijianjoy " ]
"2021-10-19T20:50:13"
"2022-08-04T23:59:15"
"2022-08-04T23:59:15"
NONE
null
There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: `.github/renovate.json5`
Error type: Invalid JSON5 (parsing failed)
Message: `JSON5.parse error: JSON5: invalid character '\"' at 3:3`
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6772/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6766
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6766/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6766/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6766/events
https://github.com/kubeflow/pipelines/issues/6766
1,030,036,875
I_kwDOB-71UM49ZR2L
6,766
[bug] Address CVE on click lower than 8.x
{ "login": "quan-dang", "id": 25025782, "node_id": "MDQ6VXNlcjI1MDI1Nzgy", "avatar_url": "https://avatars.githubusercontent.com/u/25025782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/quan-dang", "html_url": "https://github.com/quan-dang", "followers_url": "https://api.github.com/users/quan-dang/followers", "following_url": "https://api.github.com/users/quan-dang/following{/other_user}", "gists_url": "https://api.github.com/users/quan-dang/gists{/gist_id}", "starred_url": "https://api.github.com/users/quan-dang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/quan-dang/subscriptions", "organizations_url": "https://api.github.com/users/quan-dang/orgs", "repos_url": "https://api.github.com/users/quan-dang/repos", "events_url": "https://api.github.com/users/quan-dang/events{/privacy}", "received_events_url": "https://api.github.com/users/quan-dang/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[ "@chensun Can you help take a look at this? Looks like click is the sdk dependency.", "@capri-xiyue I accidentally discovered this problem while trying to install Seldon Core ver 1.11.1, which uses click version 8.x instead.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Is there an ETA for updating the `click` dependency to >= 8? This is impacting the adoption of applications that use the KFP SDK. Thanks." ]
"2021-10-19T08:54:40"
"2022-03-18T14:23:50"
null
NONE
null
### What happened:

A CVE has been reported against click version 7.x on GitHub: https://github.com/pallets/click/issues/1833

### What did you expect to happen:

Please update click to resolve it!
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6766/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6766/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/6758
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/6758/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/6758/comments
https://api.github.com/repos/kubeflow/pipelines/issues/6758/events
https://github.com/kubeflow/pipelines/issues/6758
1,029,668,368
I_kwDOB-71UM49X34Q
6,758
[KFPv2] Handle the path from run list -> pipeline detail, which user created run without uploading pipeline.
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-10-18T22:42:50"
"2021-10-25T19:34:43"
"2021-10-25T19:34:43"
COLLABORATOR
null
null
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/6758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/6758/timeline
null
completed
null
null
false