Dataset columns (name: type, observed length range, value range, or number of distinct values):
url: string, lengths 59 to 59
repository_url: string, 1 distinct value
labels_url: string, lengths 73 to 73
comments_url: string, lengths 68 to 68
events_url: string, lengths 66 to 66
html_url: string, lengths 49 to 49
id: int64, 782M to 1.89B
node_id: string, lengths 18 to 24
number: int64, 4.97k to 9.98k
title: string, lengths 2 to 306
user: dict
labels: list
state: string, 2 distinct values
locked: bool, 1 class
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: string, 4 distinct values
active_lock_reason: null
body: string, lengths 0 to 63.6k
reactions: dict
timeline_url: string, lengths 68 to 68
performed_via_github_app: null
state_reason: string, 3 distinct values
draft: bool, 0 classes
pull_request: dict
is_pull_request: bool, 1 class
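The records that follow use exactly these columns, one field per line in schema order. As an illustration of how such a dump might be consumed, here is a minimal sketch; it assumes the rows have been exported to a local JSON Lines file (the file name `issues.jsonl` is hypothetical, not part of the source) and are read with the Hugging Face `datasets` library. Only column names taken from the schema above are referenced.

```python
# Minimal sketch, not from the source: assumes the issue records below were
# exported to a local JSON Lines file named "issues.jsonl" (hypothetical name).
from datasets import load_dataset

issues = load_dataset("json", data_files="issues.jsonl", split="train")

# Keep closed issues that are not pull requests, using columns from the schema.
closed_issues = issues.filter(
    lambda row: row["state"] == "closed" and not row["is_pull_request"]
)

# Print a few records; "comments" is a sequence of comment strings per issue.
for row in closed_issues.select(range(min(3, len(closed_issues)))):
    print(row["number"], row["title"], "comments:", len(row["comments"]))
```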
https://api.github.com/repos/kubeflow/pipelines/issues/5427
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5427/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5427/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5427/events
https://github.com/kubeflow/pipelines/issues/5427
850,290,167
MDU6SXNzdWU4NTAyOTAxNjc=
5,427
[feature] Support other gcp project's gcr images to be pulled by service account with workload identity
{ "login": "Matts966", "id": 28551465, "node_id": "MDQ6VXNlcjI4NTUxNDY1", "avatar_url": "https://avatars.githubusercontent.com/u/28551465?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Matts966", "html_url": "https://github.com/Matts966", "followers_url": "https://api.github.com/users/Matts966/followers", "following_url": "https://api.github.com/users/Matts966/following{/other_user}", "gists_url": "https://api.github.com/users/Matts966/gists{/gist_id}", "starred_url": "https://api.github.com/users/Matts966/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Matts966/subscriptions", "organizations_url": "https://api.github.com/users/Matts966/orgs", "repos_url": "https://api.github.com/users/Matts966/repos", "events_url": "https://api.github.com/users/Matts966/events{/privacy}", "received_events_url": "https://api.github.com/users/Matts966/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "A GKE nodepool has a VM service account. This is used to pull images. You need to grant storage permission to this SA. ", "Thank you for quick response!\r\n\r\nSo, workload identity cannot affect service account to pull images? I also bound the google service account that has enough permissions to some Kubeflow Pipelines service account such as `pipeline-runner`, but it does not work.\r\n\r\nIs it only the node pool SA that can pull images?\r\n\r\nIn addition, I found problem of `set_image_pull_secrets` of `Config` object. It does not change the secret form `pipeline-runner-token-*****`.\r\nMaybe related to https://github.com/kubeflow/pipelines/issues/3847. I will add another issue with more confirmation.", "> Is it only the node pool SA that can pull images?\r\n\r\nIn my organization we grant permission to nodepoolSA. I suppose docker pull secrets might work as well. \r\nHowever default-editor/pipeline-runner does not pull images and hence granting permission to it will not work.\r\n\r\n", "> In addition, I found problem of set_image_pull_secrets of Config object. It does not change the secret form pipeline-runner-token-*****.\r\n\r\nThis was my mistake. Maybe specifying `imagePullSecret` is the best way. ", "Please close this issue if this is out of scope.", "Closed because it sounds like GKE expected behavior" ]
"2021-04-05T11:04:11"
"2021-04-09T00:55:54"
"2021-04-09T00:55:54"
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> Current workload identity supports multiple project for calling normal APIs, but we cannot pull images on other project's container registry even if the google service account has enough storage permissions (ErrImagePull occurs). I don't know who can handle better (GCP's backend or Kubeflow Pipelines), but I confirmed that we can pull images after calling `gcloud configure-docker`. ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> With this supported, we can easily call other GCP project's API on Kubeflow Pipelines easily. ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> We can use normal kubernetes secret for imagePullSecret. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5427/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5426
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5426/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5426/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5426/events
https://github.com/kubeflow/pipelines/issues/5426
849,845,121
MDU6SXNzdWU4NDk4NDUxMjE=
5,426
Unable to use tensorboard viewer using S3 path
{ "login": "shrinath-suresh", "id": 63862647, "node_id": "MDQ6VXNlcjYzODYyNjQ3", "avatar_url": "https://avatars.githubusercontent.com/u/63862647?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shrinath-suresh", "html_url": "https://github.com/shrinath-suresh", "followers_url": "https://api.github.com/users/shrinath-suresh/followers", "following_url": "https://api.github.com/users/shrinath-suresh/following{/other_user}", "gists_url": "https://api.github.com/users/shrinath-suresh/gists{/gist_id}", "starred_url": "https://api.github.com/users/shrinath-suresh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shrinath-suresh/subscriptions", "organizations_url": "https://api.github.com/users/shrinath-suresh/orgs", "repos_url": "https://api.github.com/users/shrinath-suresh/repos", "events_url": "https://api.github.com/users/shrinath-suresh/events{/privacy}", "received_events_url": "https://api.github.com/users/shrinath-suresh/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "PatrickXYS", "id": 23116624, "node_id": "MDQ6VXNlcjIzMTE2NjI0", "avatar_url": "https://avatars.githubusercontent.com/u/23116624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PatrickXYS", "html_url": "https://github.com/PatrickXYS", "followers_url": "https://api.github.com/users/PatrickXYS/followers", "following_url": "https://api.github.com/users/PatrickXYS/following{/other_user}", "gists_url": "https://api.github.com/users/PatrickXYS/gists{/gist_id}", "starred_url": "https://api.github.com/users/PatrickXYS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PatrickXYS/subscriptions", "organizations_url": "https://api.github.com/users/PatrickXYS/orgs", "repos_url": "https://api.github.com/users/PatrickXYS/repos", "events_url": "https://api.github.com/users/PatrickXYS/events{/privacy}", "received_events_url": "https://api.github.com/users/PatrickXYS/received_events", "type": "User", "site_admin": false }
[ { "login": "PatrickXYS", "id": 23116624, "node_id": "MDQ6VXNlcjIzMTE2NjI0", "avatar_url": "https://avatars.githubusercontent.com/u/23116624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PatrickXYS", "html_url": "https://github.com/PatrickXYS", "followers_url": "https://api.github.com/users/PatrickXYS/followers", "following_url": "https://api.github.com/users/PatrickXYS/following{/other_user}", "gists_url": "https://api.github.com/users/PatrickXYS/gists{/gist_id}", "starred_url": "https://api.github.com/users/PatrickXYS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PatrickXYS/subscriptions", "organizations_url": "https://api.github.com/users/PatrickXYS/orgs", "repos_url": "https://api.github.com/users/PatrickXYS/repos", "events_url": "https://api.github.com/users/PatrickXYS/events{/privacy}", "received_events_url": "https://api.github.com/users/PatrickXYS/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @shrinath-suresh, can you explain the difference between --db and --logdir, are there any references?\n\nKFP doesn't currently support changing the argument, but we can make it configurable.\n\nWhat kind of configuration experience do you prefer? Do all s3 uris need to change the argument?\n\n\nOr we can add an input textbox on KFP UI where users may come and configure their args?\n\nOr do you have any other suggestions", "@Bobgy Thanks for the reply. \r\n\r\nWe were able to log the tensorboard events to S3. Only the Dashboard was having issues when opened with S3 Path. I was looking into couple of blog posts related to the tensorboard S3 integration. One of them suggested to have the paramter as `db` instead of `logdir` (i will try to see if i can get the link). However, i couldnt do it in KFP, as there is no option to set the `db` parameter.\r\n\r\nI have quick question before the suggestion. **Does KFP support Tensorboard with S3 urls as backend?**. \r\n\r\nIt would be helpful to have an extra parameter in `mlpipeline-ui-metadata.json` file. For Example:\r\n\r\n`{\"outputs\": [{\"type\": \"tensorboard\", \"source\": \"s3://<S3_path>\", \"type\" : \"db\"}]}`\r\n", "It doesn't work out of the box, you may refer to this issue how to configure credentials:\nhttps://github.com/kubeflow/pipelines/issues/4364", "The suggestion sounds reasonable, but I'd still want to confirm why db arg is used and what it means.", "@Bobgy \r\n\r\n`tensorboard --help` output shows the db argument help as below\r\n\r\n` --db URI [experimental] sets SQL database URI and enables DB backend mode, which is read-only unless --db_import is also passed.`\r\n\r\nLooks like it is used specifically for SQLITE databases. I couldnt find the link where db argument is suggested to be used for S3 URLs ", "> It doesn't work out of the box, you may refer to this issue how to configure credentials:\r\n> #4364\r\n\r\nWe have installed kubeflow-pipelines using this branch - https://github.com/e2fyi/kubeflow-aws/tree/master (1.4.1). And looks like, the configuration which is mentioned in the #4364 is already part of installation. ", "@shrinath-suresh thanks for these additional information\r\n\r\njust to be sure, I want to confirm this --\r\n\r\nwhen you see \"empty dashboard is shown\", can you try to get logs of the tensorboard instance? It may have more detailed error messages.", "It doesn't wanna work for me either on tensorflow 2.0.0 with following Error message: Unable to connect to endpoint. While it works on tensorflow 2.3.2. Is there a way to bump up tensorflow version for this viewer?", "@lechwolowski it's not currently, but I just filed https://github.com/kubeflow/pipelines/issues/5471 which tries to support customizing tensorboard image", "@Bobgy We have identified the solution. Tensorboard with S3 buckets works based on region. The bucket which we used was in different region. Once we updated the AWS_REGION variable in cluster, we are able to see the tensorboard UI", "Cool @shrinath-suresh it'll be helpful commenting what exactly is required (like an example patch of tensorboard), so other people finding this issue can get their problems solved", "Solution: Tensorboard with S3 URLs works **based on the region where bucket is placed**. Ensure to set `AWS_REGION` value as same as the bucket region. ", "@Bobgy It will be more helpful if the documentation is updated about the supported systems - https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#tensorboard . 
What do you think ?", "That's a very good point, do you want to contribute that?", "Besides that, IIUC `AWS_REGION` can be different per pipeline?\r\nSo I think it'll be ideal to support https://github.com/kubeflow/pipelines/issues/4714, e.g. you can configure the AWS_REGION in your mlpipeline-ui-metadata as a field, and let it auto picked up by KFP started tensorboard.", "WDYT @shrinath-suresh ", "@Bobgy @shrinath-suresh Generally, to use anything Tensorflow-based (e.g. TF serving, TFX) the following environment variables needs to be set:\r\nENDPOINT_URL\r\nS3_USE_HTTPS\r\nS3_VERIFY_SSL\r\nAWS_ACCESS_KEY_ID\r\nAWS_SECRET_ACCESS_KEY\r\n\r\nAdditionally, one might need to add own certificates if you're self-hosting your S3 solution (e.g. Minio, Ceph) on-prem.", "> That's a very good point, do you want to contribute that?\r\n\r\nAlready occupied with deliverables. I can pick it up after 2 weeks. I will make a note of it.", "> WDYT @shrinath-suresh\r\n\r\nIt makes more sense to have it as a parameter in the mlpipeline-ui-metatadata and it would be easy to configure as well. ", "Thanks for the feedback @ConverJens @shrinath-suresh!\r\nLet's move the discussion to https://github.com/kubeflow/pipelines/issues/4714" ]
"2021-04-04T10:26:33"
"2021-04-16T10:10:01"
"2021-04-16T06:59:38"
CONTRIBUTOR
null
### What steps did you take To view the output in tensorboard we have created the `mlpipeline-ui-metadata.json` file with following content (Followed the instructions from - https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#tensorboard ) ``` {"outputs": [{"type": "tensorboard", "source": "s3://<S3_path>"}]} ``` to view the training output in tensorboard. ### What happened: In the visualizations tab, `Start Tensorboard` button is appearing as expected. ![image](https://user-images.githubusercontent.com/63862647/113505495-e0718d00-955c-11eb-971e-f86c3acc1f6e.png) Once `Start Tensorboard` is clicked, `Open Tensorboard` is appearing as expected. ![image](https://user-images.githubusercontent.com/63862647/113505521-f8e1a780-955c-11eb-91dd-d6ba9fbe0f2f.png) But when the `Start Tensorboard` is clicked, an empty dashboard is shown. ![image](https://user-images.githubusercontent.com/63862647/113617492-00e03b00-9674-11eb-9fa8-92000c1368e9.png) ### What did you expect to happen: Tensorboard should be opened as below ![image](https://user-images.githubusercontent.com/63862647/113505605-8e7d3700-955d-11eb-9116-583fc0977bd8.png) ### Environment: <!-- Please fill in those that seem relevant. --> * KFP version: 1.4 To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: 0.5.2 ### Anything else you would like to add: To test the above behavious in local machine, we tried running ``` tensorboard --logdir s3://<S3_path> ``` Tensorboard is showing an empty dashboard. However, the dashboard is working fine with the db argument as below ``` tensorboard --db s3://<S3_path> ``` From the https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#tensorboard i couldnt figure out a way to change from `logdir` to `db`. Or is there any other way to achieve this in kubeflow `mlpipeline-ui-metadata.json` ? ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> /area frontend <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5426/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5425
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5425/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5425/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5425/events
https://github.com/kubeflow/pipelines/issues/5425
849,673,166
MDU6SXNzdWU4NDk2NzMxNjY=
5,425
[Testing] tests flaky when pulling from dockerhub
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "From PR https://github.com/kubeflow/pipelines/pull/5424", "Do you know what this error comes from? These images should be on google container registry as I understand, are there any rate limiting? ", "Let me elaborate, the above error occurs when we are building KFP docker images, we need to pull some common images from dockerhub as the base.", "Ok, thanks! Do you expect these errors to be due to the rate limit? Could we move the base images to gcr as well?", "I agree. Looks like it recovered, if it keeps happening, we should do that", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-04-03T14:52:23"
"2022-04-18T17:27:41"
"2022-04-18T17:27:41"
CONTRIBUTOR
null
See https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5424/kubeflow-pipeline-e2e-test/1378351440040300544#1:build-log.txt%3A1308 > Step #1 - "scheduledworkflow": Get https://registry-1.docker.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers) The same error message happens randomly during different image build process.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5425/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5423
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5423/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5423/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5423/events
https://github.com/kubeflow/pipelines/issues/5423
849,656,849
MDU6SXNzdWU4NDk2NTY4NDk=
5,423
[bug] api server panics on workflow parameter without value
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign", "The failing line was recently changed when upgrading argo in https://github.com/kubeflow/pipelines/pull/5041, it seems we need to check nil pointer there.\r\nhttps://github.com/kubeflow/pipelines/blob/14b447a07c975a21b7b525d49f6676058f020cd8/backend/src/common/util/workflow.go#L54\r\n/cc @NikeNano ", "I built a reproduction pipeline, when submitting it with any missing parameters.\r\nKFP API server crashes like above\r\n\r\n```python\r\nfrom kfp import dsl, components\r\n\r\necho = components.load_component_from_text(\r\n \"\"\"\r\nname: Echo\r\ninputs:\r\n- {name: text, type: String}\r\nimplementation:\r\n container:\r\n image: alpine\r\n command:\r\n - echo\r\n - {inputValue: text}\r\n\"\"\"\r\n)\r\n\r\n\r\n@dsl.pipeline(name='missing_parameter')\r\ndef pipeline(\r\n parameter:\r\n str # parameter should be specified when submitting, but we are missing it in the test\r\n):\r\n echo_op = echo(text=parameter)\r\n```" ]
"2021-04-03T13:20:03"
"2021-04-04T01:27:22"
"2021-04-04T01:27:22"
CONTRIBUTOR
null
### What steps did you take <!-- A clear and concise description of what the bug is.--> 1. create a run with a parameter with no values ### What happened: KFP api server panics with the following stacktrace: ``` I0403 13:01:11.747485 7 interceptor.go:29] /api.RunService/CreateRun handler starting panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x15e7999] goroutine 2100 [running]: github.com/kubeflow/pipelines/backend/src/common/util.(*Workflow).OverrideParameters(0xc0000c2e28, 0xc000ebf4a0) /go/src/github.com/kubeflow/pipelines/backend/src/common/util/workflow.go:54 +0x2f9 github.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).CreateRun(0xc000326ea0, 0xc0001e8fc0, 0x0, 0x0, 0xc00034c1a0) /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:362 +0x2ca github.com/kubeflow/pipelines/backend/src/apiserver/server.(*RunServer).CreateRun(0xc0000aa6f0, 0x1fb0360, 0xc000ebf0e0, 0xc000ebf110, 0xc0000aa6f0, 0x2d58e08, 0xc0001793b0) /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/run_server.go:135 +0xa6 github.com/kubeflow/pipelines/backend/api/go_client._RunService_CreateRun_Handler.func1(0x1fb0360, 0xc000ebf0e0, 0x1b97b40, 0xc000ebf110, 0xc000641ac8, 0x1, 0x1, 0x7f9af189b008) /go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:1501 +0x86 main.apiServerInterceptor(0x1fb0360, 0xc000ebf0e0, 0x1b97b40, 0xc000ebf110, 0xc000edb500, 0xc000edb520, 0xc000e78b58, 0x495b68, 0x1b87600, 0xc000ebf0e0) /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30 +0xf7 github.com/kubeflow/pipelines/backend/api/go_client._RunService_CreateRun_Handler(0x1bb5100, 0xc0000aa6f0, 0x1fb0360, 0xc000ebf0e0, 0xc0007c69c0, 0x1d96158, 0x1fb0360, 0xc000ebf0e0, 0xc001278000, 0x2ddac) /go/src/github.com/kubeflow/pipelines/backend/api/go_client/run.pb.go:1503 +0x14b google.golang.org/grpc.(*Server).processUnaryRPC(0xc00059e000, 0x1fd4720, 0xc000206c00, 0xc000416f00, 0xc00055e150, 0x2fa68a0, 0x0, 0x0, 0x0) /go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210 +0x4fd google.golang.org/grpc.(*Server).handleStream(0xc00059e000, 0x1fd4720, 0xc000206c00, 0xc000416f00, 0x0) /go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533 +0xd57 google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000290720, 0xc00059e000, 0x1fd4720, 0xc000206c00, 0xc000416f00) /go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871 +0xbb created by google.golang.org/grpc.(*Server).serveStreams.func1 /go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:869 +0x204 ``` ### What did you expect to happen: ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.5.0-rc.0 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. 
--> ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5423/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5422
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5422/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5422/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5422/events
https://github.com/kubeflow/pipelines/issues/5422
849,550,576
MDU6SXNzdWU4NDk1NTA1NzY=
5,422
[sdk] Task.after() on Condition OpsGroup vs ContainerOp
{ "login": "Tomcli", "id": 10889249, "node_id": "MDQ6VXNlcjEwODg5MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10889249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tomcli", "html_url": "https://github.com/Tomcli", "followers_url": "https://api.github.com/users/Tomcli/followers", "following_url": "https://api.github.com/users/Tomcli/following{/other_user}", "gists_url": "https://api.github.com/users/Tomcli/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tomcli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tomcli/subscriptions", "organizations_url": "https://api.github.com/users/Tomcli/orgs", "repos_url": "https://api.github.com/users/Tomcli/repos", "events_url": "https://api.github.com/users/Tomcli/events{/privacy}", "received_events_url": "https://api.github.com/users/Tomcli/received_events", "type": "User", "site_admin": false }
[ { "id": 1122445408, "node_id": "MDU6TGFiZWwxMTIyNDQ1NDA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/dsl/compiler", "name": "area/sdk/dsl/compiler", "color": "d2b48c", "default": false, "description": "" }, { "id": 1499519734, "node_id": "MDU6TGFiZWwxNDk5NTE5NzM0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/upstream_issue", "name": "upstream_issue", "color": "006b75", "default": false, "description": "" }, { "id": 1682717392, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzky", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question", "name": "kind/question", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "neuromage", "id": 206520, "node_id": "MDQ6VXNlcjIwNjUyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neuromage", "html_url": "https://github.com/neuromage", "followers_url": "https://api.github.com/users/neuromage/followers", "following_url": "https://api.github.com/users/neuromage/following{/other_user}", "gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}", "starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neuromage/subscriptions", "organizations_url": "https://api.github.com/users/neuromage/orgs", "repos_url": "https://api.github.com/users/neuromage/repos", "events_url": "https://api.github.com/users/neuromage/events{/privacy}", "received_events_url": "https://api.github.com/users/neuromage/received_events", "type": "User", "site_admin": false }
[ { "login": "neuromage", "id": 206520, "node_id": "MDQ6VXNlcjIwNjUyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neuromage", "html_url": "https://github.com/neuromage", "followers_url": "https://api.github.com/users/neuromage/followers", "following_url": "https://api.github.com/users/neuromage/following{/other_user}", "gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}", "starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neuromage/subscriptions", "organizations_url": "https://api.github.com/users/neuromage/orgs", "repos_url": "https://api.github.com/users/neuromage/repos", "events_url": "https://api.github.com/users/neuromage/events{/privacy}", "received_events_url": "https://api.github.com/users/neuromage/received_events", "type": "User", "site_admin": false }, { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/kind question", "cc @Udiknedormin ", "@Tomcli Per my understanding:\r\n\r\n* running an op after condition group should run whenever the predicate was passed or not, as I want run it after the whole DAG\r\n* running an op after some op in the group should only run if the predicate was passed, as I only want to run it if the actual conditional task was executed\r\n\r\nOf course it should work with data-passing too, in the op-after-op case (the op-after-DAG cannot use it, for obvious reasons). That is --- unless condition's status is exposed as a pipelineparam, which would introduce additional interesting possibilities. A component which runs after the condition could check if the condition DAG was run or not.", "/assign @chensun ", "/assign @neuromage ", ">We are wondering is it the expected DSL behavior where the task dependencies should point to the Argo condition task in both cases?\r\n\r\nThis is unexpected behavior.\r\n\r\nI did my own experiment and got and found several compiler bugs:\r\n\r\n```python\r\nimport kfp\r\nfrom kfp import components\r\n\r\n@components.create_component_from_func\r\ndef produce_str() -> str:\r\n return \"Hello world\"\r\n\r\n@components.create_component_from_func\r\ndef consume_str(message: str):\r\n print(message)\r\n\r\ndef my_pipeline():\r\n hello_str = produce_str().output\r\n with kfp.dsl.Condition(hello_str == \"Hello world\"):\r\n produce_task = produce_str()\r\n consume_str(produce_task.output)\r\n\r\nif __name__ == '__main__':\r\n kfp.compiler.Compiler().compile(my_pipeline, '2021-04-09 Check contitionals.yaml')\r\n```\r\n\r\nError: `invalid spec: templates.my-pipeline.tasks.condition-1 templates.condition-1.outputs failed to resolve {{tasks.produce-str-2.outputs.parameters.produce-str-2-Output}}`\r\nThis is a compiler bug - the data passing rewriter did not detect that `produce-str-2-Output` was used as parameter downstream, so only artifact remained.\r\n\r\nNow trying the same with artifacts:\r\n\r\n```python\r\nfrom kfp.components import InputPath, OutputPath, create_component_from_func\r\n@create_component_from_func\r\ndef produce_file(output_path: OutputPath()):\r\n from pathlib import Path\r\n Path(output_path).write_text(output_path)\r\n\r\n@create_component_from_func\r\ndef consume_file(message_path: InputPath()):\r\n from pathlib import Path\r\n message = Path(message_path).read_text()\r\n print(message)\r\n\r\ndef my_file_pipeline():\r\n hello_str = produce_str().output\r\n with kfp.dsl.Condition(hello_str == \"Hello world\"):\r\n produce_file_task = produce_file()\r\n consume_file(produce_file_task.output)\r\n\r\n with kfp.dsl.Condition(hello_str == \"Tails\"):\r\n produce_file2_task = produce_file()\r\n consume_file(produce_file2_task.output)\r\n\r\n \r\nif __name__ == '__main__':\r\n import kfp\r\n kfp.compiler.Compiler().compile(my_file_pipeline, '2021-04-09 Check conditionals.arifacts.yaml')\r\n```\r\n\r\nThe first part works fine, but the second consume task fails with `This step is in Error state with this message: Unable to resolve: {{tasks.condition-2.outputs.artifacts.produce-file-2-output}}`\r\n\r\n\r\nTo sum it up, I've found 3 issues:\r\n\r\n1) Argo seems to treat Skipped tasks same as Succeeded when evaluating `dependencies`. I consider this to be surprising and against the spirit of the dependency concept. We can try contacting Argo and changing this on their side. Meanwhile, as a workaround, we should fix that at compiler level. Dependency should mean \"dependency on success\". 
(Issue 3 complicates the fix though.)\r\n2) KFP compiler (data passing rewriter) seems to miss output value usage in `when`. This is a bug.\r\n3) KFP compiler seems to compile dependency on conditional task as dependency on condition DAG. This does not seem to be correct.\r\n\r\nP.S. I'm not sure using `.after` on `OpsGroup`/`Condition` was intended to be supported.\r\n", "Thanks @Ark-kun. Good to know this is a bug in the KFP Argo compiler. We are able to work around on the KFP Tekton compiler because we decided not to put conditional tasks into a sub-dag to optimize it for Tekton. \r\n\r\nWe want to make sure KFP Argo also compiles dependency on conditional task, so we are not diverging the runtime behavior with the same KFP DSL.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-04-03T01:24:39"
"2022-04-18T17:27:33"
"2022-04-18T17:27:33"
MEMBER
null
We have few questions on how the condition is constructed in Argo. e.g. using the below condition example https://github.com/kubeflow/pipelines/blob/master/samples/core/condition/condition.py When we try to define a containerOp that run after dsl.condition or containerOp inside the condition block, it creates the same result when compiles to Argo. We are wondering is it the expected DSL behavior where the task dependencies should point to the Argo condition task in both cases? ```python @dsl.pipeline( name='Conditional execution pipeline', description='Shows how to use dsl.Condition().' ) def flipcoin_pipeline(): flip = flip_coin_op() with dsl.Condition(flip.output == 'heads') as cond_a: random_num_head = random_num_op(0, 9) random_num_head2 = random_num_op(0, 9) with dsl.Condition(flip.output == 'tails') as cond_b: random_num_tail = random_num_op(10, 19) random_num_tail2 = random_num_op(10, 19) # print_output and print_output2 produce the same dependencies print_output = print_op('after cond_a cond_b').after(cond_a).after(cond_b) print_output2 = print_op('after random_num_head random_num_tail').after(random_num_head).after(random_num_tail) ``` In Argo dag: ``` dag: tasks: - name: condition-1 template: condition-1 when: '"{{tasks.flip-coin.outputs.parameters.flip-coin-output}}" == "heads"' dependencies: [flip-coin] - name: condition-2 template: condition-2 when: '"{{tasks.flip-coin.outputs.parameters.flip-coin-output}}" == "tails"' dependencies: [flip-coin] - {name: flip-coin, template: flip-coin} - name: print template: print dependencies: [condition-1, condition-2] - name: print-2 template: print-2 dependencies: [condition-1, condition-2] ``` Thanks. related docs: https://docs.google.com/document/d/1QPWKoeiPFDcI1JWH-nMe7x_xIJekL11bhu7Fp3fNT24/edit#
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5422/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5415
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5415/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5415/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5415/events
https://github.com/kubeflow/pipelines/issues/5415
848,775,716
MDU6SXNzdWU4NDg3NzU3MTY=
5,415
manifests: Istio AuthorizationPolicy is wrong for multi-user
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @yanniszark " ]
"2021-04-01T20:43:04"
"2021-04-02T09:43:20"
"2021-04-02T09:43:20"
CONTRIBUTOR
null
### What steps did you take Deploy KFP multi-user. ### What happened: The Istio AuthorizationPolicy resources are invalid. After fixing them, some policies were missing: - Allow `metadata-grpc-server` traffic in MetadataDB MySQL. - Allow all traffic to the Metadata GRPC Server. I will PR the amended AuthorizationPolicy asap.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5415/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5414
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5414/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5414/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5414/events
https://github.com/kubeflow/pipelines/issues/5414
848,720,246
MDU6SXNzdWU4NDg3MjAyNDY=
5,414
[manifests] metadata-envoy-deployment sidecar injection
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thank you!\nIt's a gateway to allow using grpcweb from KFP UI.", "Do we also need to apply the same change to `metadata-grpc-deployment` as well?\r\n\r\nMy deployment looks like:\r\n![metadatadeployment](https://user-images.githubusercontent.com/37026441/113367114-09511280-9310-11eb-811b-65ca37b02652.png)\r\n" ]
"2021-04-01T19:12:17"
"2021-04-02T09:43:20"
"2021-04-02T09:43:20"
CONTRIBUTOR
null
### What steps did you take Deploy multi-user pipelines manifests. The `metadata-envoy-deployment` is injected with a sidecar and crashes. The reason is that `metadata-envoy-deployment` runs an Envoy Pod and it seems 2 Envoys don't play well together. Thus, we need to disable injection for this Deployment. @Bobgy if possible, can you give some context how `metadata-envoy-deployment` is used? I will open a PR for adding a sidecar injection annotation for `metadata-envoy-deployment`.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5414/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5413
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5413/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5413/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5413/events
https://github.com/kubeflow/pipelines/issues/5413
848,717,321
MDU6SXNzdWU4NDg3MTczMjE=
5,413
[backend] metadata writer multi-user mode incorrect list
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false } ]
null
[ "I will be opening a PR with the aforementioned fix asap\r\n/assign @yanniszark " ]
"2021-04-01T19:07:31"
"2021-04-02T08:56:20"
"2021-04-02T08:56:20"
CONTRIBUTOR
null
The metadata writer was failing with a strange error: ``` Start watching Kubernetes Pods created by Argo Traceback (most recent call last): File "/kfp/metadata_writer/metadata_writer.py", line 145, in <module> _request_timeout=2000, # Sometimes HTTP GET gets stuck File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 142, in stream resp = func(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 12372, in list_namespaced_pod (data) = self.list_namespaced_pod_with_http_info(namespace, **kwargs) File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 12472, in list_namespaced_pod_with_http_info collection_formats=collection_formats) File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 344, in call_api _return_http_data_only, collection_formats, _preload_content, _request_timeout) File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 178, in __call_api _request_timeout=_request_timeout) File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 365, in request headers=headers) File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in GET query_params=query_params) File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 222, in request raise ApiException(http_resp=r) kubernetes.client.rest.ApiException: (403) Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Thu, 01 Apr 2021 19:01:38 GMT', 'Content-Length': '347'}) HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces \\"pods\\" is forbidden: User \\"system:serviceaccount:kubeflow:kubeflow-pipelines-metadata-writer\\" cannot get resource \\"namespaces\\" in API group \\"\\" in the namespace \\"pods\\"","reason":"Forbidden","details":{"name":"pods","kind":"namespaces"},"code":403}\n' ``` This is the line that failed in the code: ```python for event in k8s_watch.stream( k8s_api.list_namespaced_pod, namespace=namespace_to_watch, label_selector=ARGO_WORKFLOW_LABEL_KEY, timeout_seconds=1800, # Sometimes watch gets stuck _request_timeout=2000, # Sometimes HTTP GET gets stuck ): ``` If the `namespace_to_watch` variable is empty (which is for multi-user installations), then the K8s library makes a request to `/api/v1/namespaces//pods`. Here is the excerpt from the library, which @elikatsis found: ```python def list_namespaced_pod(self, namespace, **kwargs): # noqa: E501 """list_namespaced_pod # noqa: E501 ... """ kwargs['_return_http_data_only'] = True return self.list_namespaced_pod_with_http_info(namespace, **kwargs) # noqa: E501 def list_namespaced_pod_with_http_info(self, namespace, **kwargs): # noqa: E501 """list_namespaced_pod # noqa: E501 ... """ ... 
return self.api_client.call_api( '/api/v1/namespaces/{namespace}/pods', 'GET', # <------- namespace="": /api/v1/namespaces/pods path_params, query_params, header_params, body=body_params, post_params=form_params, files=local_var_files, response_type='V1PodList', # noqa: E501 auth_settings=auth_settings, async_req=local_var_params.get('async_req'), _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 _preload_content=local_var_params.get('_preload_content', True), _request_timeout=local_var_params.get('_request_timeout'), collection_formats=collection_formats) ``` Fixed by changing the API call to `list_pod_for_all_namespaces` in case the `namespace_to_watch` variable is empty. All the kudos go to @elikatsis for this one.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5413/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5411
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5411/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5411/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5411/events
https://github.com/kubeflow/pipelines/issues/5411
848,383,743
MDU6SXNzdWU4NDgzODM3NDM=
5,411
[backend] controller-manager crashes in 1.5.0-rc.1
{ "login": "juliusvonkohout", "id": 45896133, "node_id": "MDQ6VXNlcjQ1ODk2MTMz", "avatar_url": "https://avatars.githubusercontent.com/u/45896133?v=4", "gravatar_id": "", "url": "https://api.github.com/users/juliusvonkohout", "html_url": "https://github.com/juliusvonkohout", "followers_url": "https://api.github.com/users/juliusvonkohout/followers", "following_url": "https://api.github.com/users/juliusvonkohout/following{/other_user}", "gists_url": "https://api.github.com/users/juliusvonkohout/gists{/gist_id}", "starred_url": "https://api.github.com/users/juliusvonkohout/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juliusvonkohout/subscriptions", "organizations_url": "https://api.github.com/users/juliusvonkohout/orgs", "repos_url": "https://api.github.com/users/juliusvonkohout/repos", "events_url": "https://api.github.com/users/juliusvonkohout/events{/privacy}", "received_events_url": "https://api.github.com/users/juliusvonkohout/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "@DavidSpek @Bobgy it seems like someone wrote a new image that does not runasnonroot\r\nhttps://github.com/kubeflow/manifests/pull/1756\r\n\r\nCan you update to a newer version of gcr.io/ml-pipeline/application-crd-controller:1.0-beta-non-cluster-role ? It seems to be from here https://github.com/kubernetes-sigs/application according to https://github.com/kubeflow/pipelines/blob/1cd189567d4ba7e71aa1f31ddaf9a46c17ae8ae3/manifests/kustomize/third-party/application/application-controller-deployment.yaml#L27\r\nMaybe that will fix the issue. This image is very old (2019)", "The permissions in the base image seem to be wrong\r\n\r\n```\r\ndrwx------. 1 root root 14 Nov 11 2019 root\r\n```\r\nis the problem. Someone forgot to allow group 0 read on that directory\r\n\r\nFull output\r\n\r\n```\r\ndr-xr-xr-x. 1 root root 6 Apr 1 11:45 .\r\ndr-xr-xr-x. 1 root root 6 Apr 1 11:45 ..\r\ndrwxr-xr-x. 1 root root 902 Oct 29 2019 bin\r\ndrwxr-xr-x. 1 root root 0 Apr 24 2018 boot\r\ndrwxr-xr-x. 5 root root 360 Apr 1 11:45 dev\r\ndrwxr-xr-x. 1 root root 14 Oct 31 2019 etc\r\ndrwxr-xr-x. 1 root root 0 Apr 24 2018 home\r\ndrwxr-xr-x. 1 root root 84 May 23 2017 lib\r\ndrwxr-xr-x. 1 root root 40 Oct 29 2019 lib64\r\ndrwxr-xr-x. 1 root root 0 Oct 29 2019 media\r\ndrwxr-xr-x. 1 root root 0 Oct 29 2019 mnt\r\ndrwxr-xr-x. 1 root root 0 Oct 29 2019 opt\r\ndr-xr-xr-x. 514 root root 0 Apr 1 11:45 proc\r\ndrwx------. 1 root root 14 Nov 11 2019 root\r\ndrwxr-xr-x. 1 root root 14 Apr 1 11:45 run\r\ndrwxr-xr-x. 1 root root 14 Oct 31 2019 sbin\r\ndrwxr-xr-x. 1 root root 0 Oct 29 2019 srv\r\ndr-xr-xr-x. 13 root root 0 Mar 28 21:41 sys\r\ndrwxrwxrwt. 1 root root 0 Oct 29 2019 tmp\r\ndrwxr-xr-x. 1 root root 8 Oct 29 2019 usr\r\ndrwxr-xr-x. 1 root root 6 Oct 29 2019 var\r\n```", "@juliusvonkohout the image is only used to provide k8s application monitoring. It's an optional component. May be remove it from your deployment?", "kubectl apply -k github.com/kubeflow/pipelines/manifests/kustomize/env/playform-agnostic?ref=$PIPELINE_VERSION\n\nAnd it will no longer get installed", "@Bobgy Yes, the pipelines still work without it. But there are still two questions:\n\n1. Will it be included in the full kubeflow 1.3 by default? If yes it must be fixed.\n\n2. Should the documentation be updated then? Because more and more clusters will enforce security policies. In the end I strongly prefer a proper solution. Either removing it or fixing it.\n\nI would like to work on a solution, if you propose one.", "> @Bobgy Yes, the pipelines still work without it. But there are still two questions:\r\n> \r\n> 1. Will it be included in the full kubeflow 1.3 by default? If yes it must be fixed.\r\n\r\nNo, it won't for pipelines, but you may find another copy in kubeflow/manifests repo, it might be deployed by default, so some applications rely on it. I'd suggest confirm with @yanniszark the release manager about it.\r\n\r\n> 2. Should the documentation be updated then? Because more and more clusters will enforce security policies. In the end I strongly prefer a proper solution. Either removing it or fixing it.\r\n\r\nDo you mean KFP standalone documentation? Yes, I'd love to see an update. Do you want to contribute to this?\r\n\r\n> I would like to work on a solution, if you propose one.\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. 
Thank you for your contributions.\n", "Hi @juliusvonkohout, metacontroller is not application controller (this issue's topic). Did you confuse them together?", "So far it works in 1.4" ]
"2021-04-01T11:33:00"
"2021-10-15T13:30:19"
"2021-10-15T13:30:19"
MEMBER
null
### Environment ``` export PIPELINE_VERSION=1.5.0-rc.1 kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/cluster-scoped-resources?ref=$PIPELINE_VERSION" kubectl wait --for condition=established --timeout=60s crd/applications.app.k8s.io kubectl apply -k "github.com/kubeflow/pipelines/manifests/kustomize/env/dev?ref=$PIPELINE_VERSION" kubectl scale deployment/cache-deployer-deployment --replicas 0 -n kubeflow kubectl scale deployment/cache-server --replicas 0 -n kubeflow ``` ### Steps to reproduce ``` [julius@fedora kubeflow]$ kubectl -n kubeflow get pods,replicasets NAME READY STATUS RESTARTS AGE pod/controller-manager-6d7b565545-zrhrb 0/1 CrashLoopBackOff 8 18m pod/metadata-envoy-deployment-797b6886b7-r8478 1/1 Running 0 20m pod/metadata-grpc-deployment-76b64ffc4f-h99sf 1/1 Running 2 20m pod/metadata-writer-579f577c59-9xvrc 1/1 Running 0 20m pod/minio-5b65df66c9-wsjms 1/1 Running 0 20m pod/ml-pipeline-647d5c6c46-8qrd4 1/1 Running 1 20m pod/ml-pipeline-persistenceagent-75fb75b66c-rr4g2 1/1 Running 0 20m pod/ml-pipeline-scheduledworkflow-7cf474cd6d-qtm9x 1/1 Running 0 20m pod/ml-pipeline-ui-7f97cdb4cd-96z5c 1/1 Running 0 20m pod/ml-pipeline-viewer-crd-5f66b89768-httx2 1/1 Running 0 20m pod/ml-pipeline-visualizationserver-656d556bdc-trmcb 1/1 Running 0 20m pod/mysql-f7b9b7dd4-mwb49 1/1 Running 0 20m pod/sum-pipeline-x64k4-133507279 0/2 Completed 0 10m pod/sum-pipeline-x64k4-1823999632 0/2 Completed 0 10m pod/sum-pipeline-x64k4-2485853679 0/2 Completed 0 9m36s pod/sum-pipeline-x64k4-2652093435 0/2 Completed 0 11m pod/workflow-controller-5f9c8ff668-w7vgw 1/1 Running 0 4m15s ... [julius@fedora kubeflow]$ kubectl -n kubeflow logs pod/controller-manager-6d7b565545-zrhrb logs are hidden because volume is too excessive /bin/sh: 2: /root/manager: Permission denied ``` I modified the deployment to get at the logs --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5411/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5411/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5404
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5404/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5404/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5404/events
https://github.com/kubeflow/pipelines/issues/5404
847,503,479
MDU6SXNzdWU4NDc1MDM0Nzk=
5,404
[feature] Condition combinators for dsl.Condition
{ "login": "Udiknedormin", "id": 20307949, "node_id": "MDQ6VXNlcjIwMzA3OTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/20307949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Udiknedormin", "html_url": "https://github.com/Udiknedormin", "followers_url": "https://api.github.com/users/Udiknedormin/followers", "following_url": "https://api.github.com/users/Udiknedormin/following{/other_user}", "gists_url": "https://api.github.com/users/Udiknedormin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Udiknedormin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Udiknedormin/subscriptions", "organizations_url": "https://api.github.com/users/Udiknedormin/orgs", "repos_url": "https://api.github.com/users/Udiknedormin/repos", "events_url": "https://api.github.com/users/Udiknedormin/events{/privacy}", "received_events_url": "https://api.github.com/users/Udiknedormin/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1145992201, "node_id": "MDU6TGFiZWwxMTQ1OTkyMjAx", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/dsl", "name": "area/sdk/dsl", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "neuromage", "id": 206520, "node_id": "MDQ6VXNlcjIwNjUyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neuromage", "html_url": "https://github.com/neuromage", "followers_url": "https://api.github.com/users/neuromage/followers", "following_url": "https://api.github.com/users/neuromage/following{/other_user}", "gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}", "starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neuromage/subscriptions", "organizations_url": "https://api.github.com/users/neuromage/orgs", "repos_url": "https://api.github.com/users/neuromage/repos", "events_url": "https://api.github.com/users/neuromage/events{/privacy}", "received_events_url": "https://api.github.com/users/neuromage/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/area backend", "related issue: https://github.com/kubeflow/pipelines/issues/482", "/assign @neuromage @chensun ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n", "/reopen", "@maganaluis: You can't reopen an issue/PR unless you authored it or you are a collaborator.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5404#issuecomment-1205812558):\n\n>/reopen\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>", "workaround\r\n\r\n```py\r\n@component\r\ndef or_op(x: bool, y: bool) -> bool:\r\n return x or y\r\n\r\n\r\n@component\r\ndef and_op(x: bool, y: bool) -> bool:\r\n return x and y\r\n\r\n\r\n@component\r\ndef not_op(x: bool) -> bool:\r\n return not x\r\n\r\ndef pipeline_fn():\r\n ...\r\n with dsl.Condition(or_op(param1 == value1, not_op(param2 == value2).output).output == \"true\"):\r\n```", "Hi @hrsma2i \r\nI tried your approach. Since in my case param1 and param2 are output of task above this conditional statement, I get the following error:\r\n\r\n`ValueError: Constant argument inputs must be one of type ['String', 'Integer', 'Float', 'Boolean', 'List', 'Dict'] Got: ConditionOperator(operator='>', left_operand={{channel:task=fraud-model-evaluator;name=auPrc;type=Float;}}, right_operand=0.188) of type <class 'kfp.components.pipeline_channel.ConditionOperator'>.`\r\n![image](https://github.com/kubeflow/pipelines/assets/46483973/0edef3d1-922a-44d0-8ac6-13a2f23afe25)\r\n\r\nany idea?" ]
"2021-03-31T23:08:11"
"2023-09-13T15:03:35"
"2022-04-18T17:27:39"
CONTRIBUTOR
null
### Feature Area /area backend ### What feature would you like to see? Right now, `dsl.Condition` only allows to use a single comparison: ```python with dsl.Condition(param1 == value1): ... ``` Because `ConditionOperator` doesn't define any custom `__bool__` which would produce a warning, the following conditions silently work in unexpected ways: ```python with dsl.Condition(param1 == value1 and param2 == value2): # yields param2 == value2 ... with dsl.Condition(param1 == value1 or param2 == value2): # yields param1 == value1 ... ``` Because `and` and `or` Python cannot be overloaded, the only thing one can do to prevent those misleading behaviour is to add a warning to `__bool__` method of `ConditionOperator` (which would require turning it into an `attr` class instead of a `namedtuple`). The same kind of a problem does occur in e.g. [pandas](https://stackoverflow.com/questions/21415661/logical-operators-for-boolean-indexing-in-pandas), and cannot easily solved in Python. However, `&` (via `__and__`) and a custom method (e.g. `and_then`) are fine to define on `ConditionOperator`: ```python with dsl.Condition((param1 == value1) | (param2 == value2)): ... with dsl.Condition((param1 == value1).or_else(param2 == value2)): ... ``` As seen above, brackets are needed, due to operator precedence (no overwritable binary operator has lower precedence than `==`). Arguably, it's still well readable and also enables complex conditions. The generated yaml in such a case would include: ``` "$tasks.some-task.results.param1" == "value1" || "$tasks.some-task.results.param2" == "value2" ``` For more complex scenarios, braces would need to be used: ```python with dsl.Condition( ((param1 == value1) | (param2 == value2)) & (param3 == value3) ): ... ``` ``` ("$tasks.some-task.results.param1" == "value1" || "$tasks.some-task.results.param2" == "value2") && "$tasks.some-task.results.param3" == "value3" ``` For readability, if multiple operators of the same kind are used, those should not produce braces: ```python with dsl.Condition( (param1 == value1) | (param2 == value2) | (param3 == value3) ): ... ``` ``` "$tasks.some-task.results.param1" == "value1" || "$tasks.some-task.results.param2" == "value2" || "$tasks.some-task.results.param3" == "value3" ``` Note: considering how Argo is using govaluate, handling and/or shouldn't be a problem --- `__and__` (`&`) would compile to `&&` and `__or__` (`|`) would compile to `||`. The `ConditionOperator` would be treated as a binary tree with non-`ConditionOperator` values at its leaves. ### What is the use case or pain point? This feature allows users to create much more complex conditions, combining multiple simple predicates. Typical use case is error handling. Conjunction of conditions can be used e.g. if some handling (like logging) depends both on some status from component and the logging level defined in a pipeline parameter: ```python @dsl.pipeline(name="test error handling") def test_error_handling(log_errors_of_level: int = 3): op_lvl_2 = dsl.ContainerOp(...) with dsl.Condition( (op_lvl_2.errCode != "") & (log_errors_of_level <= 2) ): LogErrorOp(op_lvl_2.errCode) ``` Alternative of predicates can be used to execute error-handling code if any of parallel actions failed (or produced some undesired result): ```python @dsl.pipeline(name="test error handling") def test_error_handling(log_errors_of_level: int = 3): op1 = dsl.ContainerOp(...) op2 = dsl.ContainerOp(...) op3 = dsl.ContainerOp(...) 
with dsl.Condition( (op1.errCode != "") | (op2.errCode != "") | (op3.errCode != "") ): LogErrorOp(op1.errCode, op2.errCode, op3.errCode) ``` ### Is there a workaround currently? While one can emulate "and" using a hierarchy of conditions (although with poor readability both in the DSL and in the generated yaml): ```python with dsl.Condition(param1 == value1): with dsl.Condition(param2 == value2): ... ``` ...the same cannot be done with "or". --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
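A hypothetical sketch of how the proposed `__and__`/`__or__` combinators could build the combined expression string as a small expression tree; the `Condition` and `Param` classes below are illustrative stand-ins, not the SDK's actual `ConditionOperator`, and the suggested flattening of same-kind operators (to avoid redundant braces) is left out for brevity.

```python
class Condition:
    """Holds a govaluate-style expression string, as proposed above."""

    def __init__(self, expr: str):
        self.expr = expr

    def __and__(self, other: "Condition") -> "Condition":
        return Condition(f"({self.expr}) && ({other.expr})")

    def __or__(self, other: "Condition") -> "Condition":
        return Condition(f"({self.expr}) || ({other.expr})")


class Param:
    """Stand-in for a pipeline parameter reference."""

    def __init__(self, ref: str):
        self.ref = ref

    def __eq__(self, value):  # returns a Condition instead of a bool
        return Condition(f'"{self.ref}" == "{value}"')

    def __ne__(self, value):  # returns a Condition instead of a bool
        return Condition(f'"{self.ref}" != "{value}"')


p1 = Param("$tasks.some-task.results.param1")
p2 = Param("$tasks.some-task.results.param2")

combined = (p1 == "value1") | (p2 == "value2")
print(combined.expr)
# ("$tasks.some-task.results.param1" == "value1") || ("$tasks.some-task.results.param2" == "value2")
```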
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5404/reactions", "total_count": 17, "+1": 17, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5404/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5402
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5402/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5402/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5402/events
https://github.com/kubeflow/pipelines/issues/5402
847,113,869
MDU6SXNzdWU4NDcxMTM4Njk=
5,402
[bug] Argo kustomization does not replace namespace in ClusterRoleBinding ServiceAccount reference
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @yanniszark " ]
"2021-03-31T18:37:18"
"2021-04-01T16:16:53"
"2021-04-01T01:43:19"
CONTRIBUTOR
null
### What steps did you take ```sh kustomize build pipelines/manifests/kustomize/env/platform-agnostic-multi-user ``` ### What happened: The generated ClusterRoleBinding has the namespace set as `argo`, even though kustomize sets it as `kubeflow`. Here is the generated YAML: ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: application-crd-id: kubeflow-pipelines name: argo-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: argo-cluster-role subjects: - kind: ServiceAccount name: argo namespace: argo ``` ### What did you expect to happen: Kustomize should have replaced the namespace with `kubeflow`: ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: application-crd-id: kubeflow-pipelines name: argo-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: argo-cluster-role subjects: - kind: ServiceAccount name: argo namespace: kubeflow ``` The reason that didn't happen is because the ClusterRoleBinding is defined as: ```yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: argo-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: argo-cluster-role subjects: - kind: ServiceAccount name: argo namespace: argo ``` but the ServiceAccount is defined as: ```yaml apiVersion: v1 kind: ServiceAccount metadata: name: argo ``` because of that, kustomize doesn't register the ServiceAccount reference and doesn't substitute the namespace. We tried to include a patch that removes the namespace line from the `ClusterRoleBinding` and it worked, so we will PR that.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5402/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5401
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5401/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5401/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5401/events
https://github.com/kubeflow/pipelines/issues/5401
847,037,112
MDU6SXNzdWU4NDcwMzcxMTI=
5,401
[backend] Metrics Stopped Showing up when More than 5
{ "login": "tinahbu", "id": 7570863, "node_id": "MDQ6VXNlcjc1NzA4NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/7570863?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tinahbu", "html_url": "https://github.com/tinahbu", "followers_url": "https://api.github.com/users/tinahbu/followers", "following_url": "https://api.github.com/users/tinahbu/following{/other_user}", "gists_url": "https://api.github.com/users/tinahbu/gists{/gist_id}", "starred_url": "https://api.github.com/users/tinahbu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tinahbu/subscriptions", "organizations_url": "https://api.github.com/users/tinahbu/orgs", "repos_url": "https://api.github.com/users/tinahbu/repos", "events_url": "https://api.github.com/users/tinahbu/events{/privacy}", "received_events_url": "https://api.github.com/users/tinahbu/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "This is one of the issues we plan to solve together in bit.ly/kfp-v2. Putting it in IR-based KFP project", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "still waiting on a bug fix/feature release on this. im not able to view what's at bit.ly/kfp-v2 but would like to be notified when this is prioritized. thanks!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-03-31T17:47:02"
"2022-03-02T15:05:19"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? We used [`kfctl`](https://github.com/kubeflow/manifests/blob/master/distributions/kfdef/kfctl_istio_dex.v1.2.0.yaml) * KFP version: `v1.2.0` * KFP SDK version: ``` kfp==1.4.0 kfp-pipeline-spec==0.1.6 kfp-server-api==1.4.1 ``` ### Steps to reproduce This is not an exhaustive experiment by any means, but our metrics stopped showing up in the run table & "run output" tab when we write more than 5 metrics to `/mlpipeline-metrics.json`, and we are not sure if this is a bug or a restriction from Kubeflow. With up to 5 metrics, everything shows up: ![IMG_6526](https://user-images.githubusercontent.com/7570863/113187741-0ec63400-920e-11eb-9fc0-70aa6e8fadd2.JPG) When 6 metrics (or more) were sent, none showed up: * didn't show up in the table ![IMG_6527](https://user-images.githubusercontent.com/7570863/113187794-1ede1380-920e-11eb-9c90-5db0e4b94796.JPG) * didn't show up in "compare runs" (the whole metrics section is empty) ![IMG_6529](https://user-images.githubusercontent.com/7570863/113187836-2ac9d580-920e-11eb-8660-c99a33e79a37.jpg) ### Expected result * all metrics should be visible when using "compare runs" * it would be great to have more than 2 visible columns in the table too ### Materials and Reference I did some googling and it seems some people used ``` file_outputs={ "mlpipeline-metrics": "/mlpipeline-metrics.json", "mlpipeline-full-metrics": "/mlpipeline-full-metrics.json", "mlpipeline-ui-metadata": "/mlpipeline-ui-metadata.json", } ``` in their pipeline component. I can't find `mlpipeline-full-metrics` with `ag` in the pipelines repo though, so I'm not sure if this is what I should be doing when sending too many metrics. I couldn't really find where to look in this repo for this issue; would love any pointers.
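For reference, a minimal sketch of a component emitting six metrics via the `/mlpipeline-metrics.json` convention described above (the metric names and values are made up); whether the UI then renders all six is exactly the question raised in this issue.

```python
import json

# In a real component this would be /mlpipeline-metrics.json at the container
# root; /tmp is used here only so the sketch runs anywhere.
metrics_path = "/tmp/mlpipeline-metrics.json"

metrics = {
    "metrics": [
        {"name": f"metric-{i}", "numberValue": round(0.1 * i, 2), "format": "RAW"}
        for i in range(1, 7)  # six metrics, one more than the five that reportedly render
    ]
}

with open(metrics_path, "w") as f:
    json.dump(metrics, f)
```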
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5401/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5401/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5400
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5400/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5400/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5400/events
https://github.com/kubeflow/pipelines/issues/5400
847,006,652
MDU6SXNzdWU4NDcwMDY2NTI=
5,400
[bug] manifests: Include metacontroller manifests
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @yanniszark ", "metacontroller is also used in GCP distribution, but since KFP needs it, I agree we can put it here." ]
"2021-03-31T17:26:46"
"2021-04-01T01:43:19"
"2021-04-01T01:43:19"
CONTRIBUTOR
null
### What steps did you take The multi-user pipelines installation includes the metacontroller component. However, it doesn't include it under `third-party`, as it does for other apps. Since metacontroller is something that KFP uses, we should move it to the KFP repo under third-party and not have it under `contrib` in kubeflow/manifests.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5400/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5399
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5399/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5399/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5399/events
https://github.com/kubeflow/pipelines/issues/5399
846,970,631
MDU6SXNzdWU4NDY5NzA2MzE=
5,399
[bug] manifests: Platform-agnostic multi-user envs should set the kubeflow header
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @yanniszark " ]
"2021-03-31T17:00:55"
"2021-04-01T01:43:19"
"2021-04-01T01:43:19"
CONTRIBUTOR
null
Platform-agnostic multi-user envs should set the kubeflow header to a default value (`kubeflow-userid`), so that they are self-contained and deployable without any extra configuration.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5399/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5398
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5398/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5398/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5398/events
https://github.com/kubeflow/pipelines/issues/5398
846,957,837
MDU6SXNzdWU4NDY5NTc4Mzc=
5,398
[bug] manifests: Metadata VirtualService is missing
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @yanniszark " ]
"2021-03-31T16:50:06"
"2021-04-01T01:43:19"
"2021-04-01T01:43:19"
CONTRIBUTOR
null
### What steps did you take Install KFP from multi-user manifests. ### What happened: The KFP Metadata UI is erroring out because it is missing a VirtualService. We tested with the following VirtualService: ```yaml apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: metadata-grpc namespace: kubeflow spec: gateways: - kubeflow-gateway hosts: - '*' http: - match: - uri: prefix: /ml_metadata rewrite: uri: /ml_metadata route: - destination: host: ml-pipeline-ui.kubeflow.svc.cluster.local port: number: 80 ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5398/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5394
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5394/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5394/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5394/events
https://github.com/kubeflow/pipelines/issues/5394
845,441,219
MDU6SXNzdWU4NDU0NDEyMTk=
5,394
[feature] Use Tensorboard within Kubeflow Pipelines UI with AWS Sagemaker training component
{ "login": "ankitaggarwal011", "id": 4436850, "node_id": "MDQ6VXNlcjQ0MzY4NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/4436850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ankitaggarwal011", "html_url": "https://github.com/ankitaggarwal011", "followers_url": "https://api.github.com/users/ankitaggarwal011/followers", "following_url": "https://api.github.com/users/ankitaggarwal011/following{/other_user}", "gists_url": "https://api.github.com/users/ankitaggarwal011/gists{/gist_id}", "starred_url": "https://api.github.com/users/ankitaggarwal011/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankitaggarwal011/subscriptions", "organizations_url": "https://api.github.com/users/ankitaggarwal011/orgs", "repos_url": "https://api.github.com/users/ankitaggarwal011/repos", "events_url": "https://api.github.com/users/ankitaggarwal011/events{/privacy}", "received_events_url": "https://api.github.com/users/ankitaggarwal011/received_events", "type": "User", "site_admin": false }
[ { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null }, { "id": 2415263031, "node_id": "MDU6TGFiZWwyNDE1MjYzMDMx", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components/aws/sagemaker", "name": "area/components/aws/sagemaker", "color": "0263f4", "default": false, "description": "AWS SageMaker components" } ]
open
false
null
[]
null
[ "/assign @kubeflow/aws ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/area components/aws/sagemaker", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen" ]
"2021-03-30T23:15:08"
"2022-08-08T19:28:23"
null
NONE
null
### Feature Area /area components ### What feature would you like to see? An option to start/use TensorBoard from the Kubeflow Pipelines UI when using the AWS SageMaker training component. At the moment, that doesn't seem to be available in the SageMaker training component. The `mlpipeline-ui-metadata.json` file output is not exposed in the SageMaker training component, and the code to [dump metrics visualization metadata](https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#tensorboard) would need to be added to the Docker image (code not available) for the SageMaker components. ### What is the use case or pain point? Makes it easier to open TensorBoard and monitor training job metrics in the same Kubeflow UI without manual commands or going to the AWS console. ### Is there a workaround currently? Not that I know of. Maybe use an initContainer to dump the metadata file, or share volume storage with another step that dumps the metadata file, but that is more of a hack. --- Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
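A rough sketch of what the requested behaviour would amount to inside the training component, assuming the standard KFP output-viewer convention linked above; the S3 path is a placeholder, not an actual SageMaker component output.

```python
import json

# Placeholder log location that a SageMaker training job would write to.
tensorboard_logs = "s3://my-bucket/sagemaker-job/tensorboard-logs"

metadata = {
    "outputs": [
        {"type": "tensorboard", "source": tensorboard_logs}
    ]
}

# A real component would write /mlpipeline-ui-metadata.json at the container
# root and expose it via file_outputs; /tmp keeps the sketch runnable anywhere.
with open("/tmp/mlpipeline-ui-metadata.json", "w") as f:
    json.dump(metadata, f)
```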
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5394/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5394/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5381
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5381/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5381/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5381/events
https://github.com/kubeflow/pipelines/issues/5381
842,392,953
MDU6SXNzdWU4NDIzOTI5NTM=
5,381
[feature] CLI command to run a pipeline from python files
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "This would be nice. We based our KFP repo on [this example](https://github.com/ksalama/kubeflow-examples/tree/master/kfp-cloudbuild), which is referred to in this [GCP article](https://cloud.google.com/solutions/machine-learning/architecture-for-mlops-using-tfx-kubeflow-pipelines-and-cloud-build#cicd_architecture) about using KFP with Cloud Build.\r\n\r\nThe [Cloud Build pipeline](https://github.com/ksalama/kubeflow-examples/tree/master/kfp-cloudbuild#cloud-build-steps) from the example includes the following steps:\r\n\r\n> 6. Compile Pipeline: This executes dsl-compile to compile the pipeline defined in [workflow.py](https://github.com/ksalama/kubeflow-examples/blob/master/kfp-cloudbuild/pipeline/workflow.py), to generate pipeline.tar.gz package.\r\n> 7. Upload to GCS: This uploads the pipeline.tar.gz package to GCS.\r\n> 8. Deploy & Run Pipeline: This executes the deploy_pipeline method in the [helper.py](https://github.com/ksalama/kubeflow-examples/blob/master/kfp-cloudbuild/pipeline/helper.py) module, given the pipeline.tar.gz package and pipeline version. The method deploys the pipeline package to KFP. If an optional --run argument is passed, the method also creates and experiment in KFP, and run pipeline given the created experiment, compiled pipeline package, and loaded parameter values from the [settings.yaml](https://github.com/ksalama/kubeflow-examples/blob/master/kfp-cloudbuild/pipeline/settings.yaml) file.\r\n\r\nWe've modified the helper.py script a bit, but the core functionality of uploading and running a pipeline remains the same.\r\n\r\nA few comments:\r\n1. We are basically doing the same thing as the \"use dsl-compile and then kfp upload and then kfp run\" option mentioned in the description. I think this proposed feature would allow us to replace the helper.py script with a call to `kfp run`.\r\n2. helper.py loads a settings.yaml file, converts the key-value pairs in settings.yaml into a dict, then passes that dict into `kfp.Client.run_pipeline`'s `params` parameter. I would still want to store our pipeline parameters into a separate config file, so I would probably write a script that parses settings.yaml and pass them into the proposed `--params=key1=value1,key2=value2,...` command line option.\r\n3. By using helper.py, we don't need a `__main__` in our pipelines and don't need to call `kfp.Client`, similar to the proposed `kfp run`.\r\n4. Our CI pipeline retains step 7 from the example repo, \"Upload to GCS.\" We have a push trigger that will run CI when PRs are merged to our main branch, which will compile our KFP pipelines and upload them to a GCS bucket. We have a separate CI pipeline that runs each night and, instead of compiling the KFP pipelines, will download the latest compiled KFP pipeline from GCS and run that. So, even though the proposed `kfp run` command removes the need to call `dsl-compile`, it might still be nice to optionally produce the compiled pipeline, for archival purposes.\r\n", "Let me know if you need help. I have built exactly this in Go (but it calls `dsl-compile` for the compilation step for the time being). Doing it in Python should be even easier.", "This is super interesting to us! We're working on some higher level abstractions here too and would love to collaborate. Has this already been committed to? We'd love to sync offline and see what we could do together.", "@aronchick Is it possible to present a demo on KFP community meetings? 
Or you need to wait until it's open?", "I think i'd like to wait until it's open, but happy to give as many folks as would like private demos.", "Thanks for the heads up! I'd love to wait for a presentation in KFP community meeting when it's open.", "With the number of things on my hand for v2 and v2 compatible. I don't currently have bandwidth on this. 
Welcome anyone who's interested to pick up", "Cross posting for context, I think @aronchick is talking about [The SAME Project: A Cloud Native Approach to Reproducible Machine Learning - David Aronchick](https://www.youtube.com/watch?v=aQvvGKVX25I)", "This is correct 🙂 I'll do a demo in the coming weeks - as soon as i can find the time!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-27T02:13:46"
"2022-03-03T03:05:38"
"2022-03-03T03:05:38"
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> A kfp CLI command to directly run python pipelines. For example: ```bash kfp run sample_pipeline.py --host <host-name> --params=key1=value1,key2=value2,... ``` This command directly compiles a python pipeline and submits it to KFP instance with specified hostname. Expectation of `sample_pipeline.py` should be the same as `dsl-compile` command. Or what can be even better -- Set up KFP instance connection configuration including credentials: ```bash vim ~/.kfp/config.json # this stores all KFP related configuration in a local config file # or export KF_PIPELINES_ENDPOINT=<kfp-host> ``` Run samples you want: ```bash # all host and credentials information are loaded from either a config file or env vars # so these command do not need to repeat them kfp run sample_pipeline.py --params=key1=value1,key2=value2,... ``` ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> I have two options for doing the same thing: * write python script wrappers that calls `create_run_from_pipeline_func` * use `dsl-compile` and then `kfp upload` and then `kfp run` Both options involve some unnecessary steps, I think it's beneficial providing a CLI command to make this common command easier, because * this command can be very helpful during pipeline development * people do not need to write `__main__` function for every pipeline, and they do not need to support arguments like host, parameters. All of these interface can be standardized in kfp run CLI. * it's convenient for building a KFP component that runs KFP pipelines or hooking up a KFP pipeline in a CI step --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
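A minimal sketch of how such a command could be layered on top of the existing SDK — this is not the actual `kfp` CLI implementation; the module-loading helper, its name, and its arguments are illustrative assumptions only:

```python
# Hedged sketch only: load a pipeline function from a .py file and submit it with
# the existing SDK client. `func_name`, `host` and `params` are illustrative inputs,
# not flags of the real `kfp` CLI.
import importlib.util
import kfp


def run_pipeline_file(path: str, func_name: str, host: str, params: dict):
    spec = importlib.util.spec_from_file_location("user_pipeline", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)              # executes e.g. sample_pipeline.py
    pipeline_func = getattr(module, func_name)   # e.g. a @kfp.dsl.pipeline function
    client = kfp.Client(host=host)               # host can also come from KF_PIPELINES_ENDPOINT
    return client.create_run_from_pipeline_func(pipeline_func, arguments=params)


# Example usage (values are placeholders):
# run_pipeline_file("sample_pipeline.py", "pipeline", "<host-name>", {"key1": "value1"})
```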
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5381/reactions", "total_count": 12, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5381/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5380
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5380/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5380/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5380/events
https://github.com/kubeflow/pipelines/issues/5380
841,860,584
MDU6SXNzdWU4NDE4NjA1ODQ=
5,380
[feature] get output of conditional step
{ "login": "alvercau", "id": 24573258, "node_id": "MDQ6VXNlcjI0NTczMjU4", "avatar_url": "https://avatars.githubusercontent.com/u/24573258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvercau", "html_url": "https://github.com/alvercau", "followers_url": "https://api.github.com/users/alvercau/followers", "following_url": "https://api.github.com/users/alvercau/following{/other_user}", "gists_url": "https://api.github.com/users/alvercau/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvercau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvercau/subscriptions", "organizations_url": "https://api.github.com/users/alvercau/orgs", "repos_url": "https://api.github.com/users/alvercau/repos", "events_url": "https://api.github.com/users/alvercau/events{/privacy}", "received_events_url": "https://api.github.com/users/alvercau/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "We currently do not provide a way to re-join conditional branches.\r\n\r\nThis leads to some code duplication:\r\nhttps://github.com/kubeflow/pipelines/blob/a80421191db917322ff312626409526b0a76aa68/samples/core/continue_training_from_prod/continue_training_from_prod.py\r\n\r\nOne workaround might be to have the training process encapsulated in a `@dsl.graph_component` function. Then you can call it with or without passing the pre-trained model.\r\n(I'm not 100% sure it will work though.)\r\n\r\nYou could also do something similar with graph `component.yaml` components.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-26T11:33:15"
"2022-04-18T17:27:35"
"2022-04-18T17:27:35"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Currently it does not seem to be possible to get the output of a conditional step, like so: ``` with dsl.Condition(base_model == "", name="Skip-Pretraining"): pretraining_step = pretraining_op( image=image, ) outputs = pretraining_step.outputs["output"] ``` ### What is the use case or pain point? I have a pipeline with optional pretraining for transfer learning, and in further steps of my pipeline, I use either the pretrained model as a base model, whose path is stored in the outputs of the step, or a different model in case there was no pretraining. The further steps need to know which base model to use, depending on whether there was pretraining or not. ### Is there a workaround currently? Now I check if the pretrained model file exists in the mounted volume, but it's not ideal. --- Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
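Building on the workaround discussed in the maintainer comment above, here is a minimal hedged sketch in kfp v1 DSL of the duplication pattern: each branch of the condition calls the same training component, so the conditional step's output is only consumed inside the branch that produced it. The component files (`pretraining.yaml`, `training.yaml`) are placeholders, not real components.

```python
# Hedged sketch: only the branch structure matters here; the component specs,
# their file names, and their inputs/outputs are placeholder assumptions.
import kfp
from kfp import dsl

pretraining_op = kfp.components.load_component_from_file("pretraining.yaml")  # placeholder
training_op = kfp.components.load_component_from_file("training.yaml")        # placeholder


@dsl.pipeline(name="optional-pretraining")
def pipeline(base_model: str = ""):
    with dsl.Condition(base_model == "", name="With-Pretraining"):
        pretraining_step = pretraining_op()
        # the conditional step's output is only referenced inside its own branch
        training_op(base_model=pretraining_step.outputs["output"])
    with dsl.Condition(base_model != "", name="Skip-Pretraining"):
        training_op(base_model=base_model)
```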
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5380/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5380/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5373
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5373/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5373/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5373/events
https://github.com/kubeflow/pipelines/issues/5373
840,760,396
MDU6SXNzdWU4NDA3NjAzOTY=
5,373
[feature] Documentation for kubeflow component specification operations
{ "login": "Windrill", "id": 13634511, "node_id": "MDQ6VXNlcjEzNjM0NTEx", "avatar_url": "https://avatars.githubusercontent.com/u/13634511?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Windrill", "html_url": "https://github.com/Windrill", "followers_url": "https://api.github.com/users/Windrill/followers", "following_url": "https://api.github.com/users/Windrill/following{/other_user}", "gists_url": "https://api.github.com/users/Windrill/gists{/gist_id}", "starred_url": "https://api.github.com/users/Windrill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Windrill/subscriptions", "organizations_url": "https://api.github.com/users/Windrill/orgs", "repos_url": "https://api.github.com/users/Windrill/repos", "events_url": "https://api.github.com/users/Windrill/events{/privacy}", "received_events_url": "https://api.github.com/users/Windrill/received_events", "type": "User", "site_admin": false }
[ { "id": 1260031624, "node_id": "MDU6TGFiZWwxMjYwMDMxNjI0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/samples", "name": "area/samples", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-25T09:58:37"
"2022-04-18T17:27:36"
"2022-04-18T17:27:36"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> /area samples <!-- /area components --> ### What feature would you like to see? From Issue #2337 I learned about the 'concat' operation which I needed when writing the Kubeflow Component Specification for my pipeline. I am not sure where to find an exhaustive/relatively inclusive list of such operations for use, such as conditionals, comparators (string_var == ""), or of all the types of a defined input/output in 'inputs' and 'outputs', eg - {name: variable, type: String, description:"Description"} . Where would one look for these? <!-- Provide a description of this feature and the user experience. --> ### What is the use case or pain point? Difficult to find what are all the Specifications that Kubeflow Pipelines provide, and use them. <!-- It helps us understand the benefit of this feature for your use case. --> ### Is there a workaround currently? Google search to try to find Kubeflow specification that matches use case. <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
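Not an exhaustive list, but a hedged illustration of the two placeholders the question names (`concat`, and an `if`/`isPresent` conditional), embedded in a minimal component spec and loaded with the SDK. The image and echo command are placeholder assumptions; only the placeholder syntax is the point.

```python
# Hedged sketch: a minimal component spec exercising {inputValue}, {concat} and
# {if: {cond: {isPresent: ...}}} placeholders. Image and command are illustrative only.
import kfp.components as comp

component_text = """
name: Placeholder demo
inputs:
- {name: message, type: String, default: world, optional: true}
implementation:
  container:
    image: alpine
    command:
    - sh
    - -c
    - concat: ["echo ", {inputValue: message}]
    args:
    - if:
        cond: {isPresent: message}
        then: ["--message", {inputValue: message}]
        else: ["--no-message"]
"""
demo_op = comp.load_component_from_text(component_text)
```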
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5373/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5373/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5372
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5372/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5372/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5372/events
https://github.com/kubeflow/pipelines/issues/5372
840,182,060
MDU6SXNzdWU4NDAxODIwNjA=
5,372
[frontend] Sometimes the "choose file" button doesn't work when trying to upload pipeline
{ "login": "capri-xiyue", "id": 52932582, "node_id": "MDQ6VXNlcjUyOTMyNTgy", "avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/capri-xiyue", "html_url": "https://github.com/capri-xiyue", "followers_url": "https://api.github.com/users/capri-xiyue/followers", "following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}", "gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}", "starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions", "organizations_url": "https://api.github.com/users/capri-xiyue/orgs", "repos_url": "https://api.github.com/users/capri-xiyue/repos", "events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}", "received_events_url": "https://api.github.com/users/capri-xiyue/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "![Uploading Screen Shot 2021-03-24 at 2.21.49 PM.png…]()\r\n", "Pending reproduction", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-24T21:33:49"
"2022-04-18T17:27:37"
"2022-04-18T17:27:37"
CONTRIBUTOR
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.4.0 (In fact, it also happened in 1.0.x and 1.3.x versions) <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> ### Steps to reproduce It may not be easy to reproduce the bug. Sometimes the "choose file" button doesn't work when trying to upload a pipeline: the dialog that allows the user to choose the file to upload won't pop up. After I restarted Chrome, the "choose file" button started to work again. ### Expected result After clicking the "choose file" button when trying to upload a pipeline, a dialog pops up which allows the user to choose the file. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5372/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5368
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5368/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5368/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5368/events
https://github.com/kubeflow/pipelines/issues/5368
839,744,204
MDU6SXNzdWU4Mzk3NDQyMDQ=
5,368
[bug] kpt pkg get results in error message "wrong Node Kind for expected: MappingNode was SequenceNode"
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2852159180, "node_id": "MDU6TGFiZWwyODUyMTU5MTgw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/manifests", "name": "area/manifests", "color": "4CD0BE", "default": false, "description": "" } ]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "@Bobgy: The label(s) `area/deployment` cannot be applied, because the repository doesn't have them.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5368):\n\n>### What steps did you take\r\n>\r\n>```bash\r\n>kpt pkg get https://github.com/kubeflow/pipelines.git/manifests/kustomize@1.5.0-rc.0\r\n>```\r\n>\r\n>\r\n>### What happened:\r\n>> $ kpt pkg get https://github.com/kubeflow/pipelines.git/manifests/kustomize@1.5.0-rc.0 kfp-standalone-1/kustomize/upstream\r\n>fetching package /manifests/kustomize from https://github.com/kubeflow/pipelines to kfp-standalone-1/kustomize/upstream\r\n>error: /home/ext_gongyuan_google_com/github/kf/testing/test-infra/kfp/kfp-standalone-1/kustomize/upstream/third-party/argo/upstream/manifests/namespace-install/overlays/argo-server-deployment.yaml: wrong Node Kind for expected: MappingNode was SequenceNode: value: {- op: add\r\n> path: /spec/template/spec/containers/0/args/-\r\n> value: --namespaced}\r\n>\r\n>### What did you expect to happen:\r\n>There should be no error message when fetching using kpt.\r\n>\r\n>### Environment:\r\n>kpt version\r\n>0.37.1\r\n>\r\n>### Anything else you would like to add:\r\n>\r\n>\r\n>\r\n>### Labels\r\n>\r\n>\r\n>\r\n>\r\n>\r\n>\r\n>\r\n>\r\n>/area deployment\r\n>\r\n>---\r\n>\r\n>\r\n>Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.\r\n>\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>", "upstream issue: https://github.com/GoogleContainerTools/kpt/issues/1218\r\n\r\nthe fix we should apply: https://github.com/GoogleContainerTools/kpt/issues/1218#issuecomment-756472616\r\n\r\n> kpt can only parse and process valid Kubernetes Resource files. Since the file has .yaml extension and is not a valid kubernetes resource, you must intimate kpt to not parse it by creating a file with name .krmignore in the package directory and add the file name(list_only.yaml) to it. This way, kpt ignores the file and leave it untouched.", "Because we plan to use the package in KF 1.3, we should fix it before the final release.", "I'm seeing some problems with `.krmignore`.\r\nhttps://github.com/GoogleContainerTools/kpt/issues/1218#issuecomment-806305825", "Before there is a kpt improvement, I think we should avoid using YAML to represent arrays for JSON patch.", "Workaround merged, so this is no longer P0", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This comment is directed towards branch upgradekpt of the fork by zijianjoy. 
\r\n\r\nI assumed it is appropriate to address that here because there is no option to raise an issue in that repo, and that repo is the referenced by the official kubeflow getting started guide: https://www.kubeflow.org/docs/distributions/gke/deploy/deploy-cli/\r\nSpecifically, from the file https://github.com/kubeflow/gcp-blueprints/blob/master/kubeflow/apps/pipelines/pull-upstream.sh\r\n\r\nRunning the command in that file:\r\n```\r\nkpt pkg get https://github.com/zijianjoy/pipelines.git/manifests/kustomize/@upgradekpt upstream\r\n```\r\nfrom Google cloud command line shell results in this error:\r\n```\r\nError: /home/test_user/upstream/third-party/argo/upstream/manifests/namespace-install/overlays/argo-server-deployment.yaml: wrong Node Kind for expected: MappingNode was SequenceNode: value: {- op: add\r\n path: /spec/template/spec/containers/0/args/-\r\n value: --namespaced}\r\n```\r\nThis command is part of the scripts that deploy kubeflow, and as a result I cannot complete the process.\r\n\r\nTrying to run on the root of the repo\r\n```\r\nkpt pkg get https://github.com/zijianjoy/pipelines.git/@upgradekpt upstream\r\n```\r\nGives this error:\r\n```\r\nError: /home/test_user/upstream/backend/src/crd/samples/scheduledworkflow/invalid.yaml: yaml: line 22: did not find expected whitespace or line break\r\n```\r\n\r\nRunning\r\n```\r\nkpt pkg get https://github.com/kubeflow/pipelines.git/manifests/kustomize/@google-cloud-pipeline-components-0.1.4 upstream\r\n```\r\nwhich seems to be related to the version used by the getting started guide gives this error:\r\n```\r\nError: Kptfile at \"/tmp/kpt-get-448525949/manifests/kustomize/third-party/argo/upstream/manifests\" has an old version (\"v1alpha1\") of the Kptfile schema.\r\nPlease update the package to the latest format by following https://kpt.dev/installation/migration.\r\n```\r\n\r\nI am not sure how to proceed to install kubeflow." ]
"2021-03-24T13:49:39"
"2021-10-25T06:15:07"
"2021-10-25T06:15:07"
CONTRIBUTOR
null
### What steps did you take ```bash kpt pkg get https://github.com/kubeflow/pipelines.git/manifests/kustomize@1.5.0-rc.0 . ``` <!-- A clear and concise description of what the bug is.--> ### What happened: > $ kpt pkg get https://github.com/kubeflow/pipelines.git/manifests/kustomize@1.5.0-rc.0 kfp-standalone-1/kustomize/upstream fetching package /manifests/kustomize from https://github.com/kubeflow/pipelines to kfp-standalone-1/kustomize/upstream error: /home/ext_gongyuan_google_com/github/kf/testing/test-infra/kfp/kfp-standalone-1/kustomize/upstream/third-party/argo/upstream/manifests/namespace-install/overlays/argo-server-deployment.yaml: wrong Node Kind for expected: MappingNode was SequenceNode: value: {- op: add path: /spec/template/spec/containers/0/args/- value: --namespaced} ### What did you expect to happen: There should be no error message when fetching using kpt. ### Environment: kpt version 0.37.1 ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> /area deployment --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5368/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5368/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5367
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5367/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5367/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5367/events
https://github.com/kubeflow/pipelines/issues/5367
839,581,852
MDU6SXNzdWU4Mzk1ODE4NTI=
5,367
[frontend] Artifact preview can cause ml-pipeline-ui to crash
{ "login": "vstrimaitis", "id": 14166032, "node_id": "MDQ6VXNlcjE0MTY2MDMy", "avatar_url": "https://avatars.githubusercontent.com/u/14166032?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vstrimaitis", "html_url": "https://github.com/vstrimaitis", "followers_url": "https://api.github.com/users/vstrimaitis/followers", "following_url": "https://api.github.com/users/vstrimaitis/following{/other_user}", "gists_url": "https://api.github.com/users/vstrimaitis/gists{/gist_id}", "starred_url": "https://api.github.com/users/vstrimaitis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vstrimaitis/subscriptions", "organizations_url": "https://api.github.com/users/vstrimaitis/orgs", "repos_url": "https://api.github.com/users/vstrimaitis/repos", "events_url": "https://api.github.com/users/vstrimaitis/events{/privacy}", "received_events_url": "https://api.github.com/users/vstrimaitis/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 930619513, "node_id": "MDU6TGFiZWw5MzA2MTk1MTM=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p1", "name": "priority/p1", "color": "cb03cc", "default": false, "description": "" }, { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
open
false
null
[]
null
[ "Thank you for the report!\r\nContributions welcomed, the code path related is https://github.com/kubeflow/pipelines/blob/eb558423ecdca969f2dea55e0d1b0197fafbcb13/frontend/server/handlers/artifacts.ts#L200\r\nand development guide: https://github.com/kubeflow/pipelines/tree/master/frontend", "/cc @zijianjoy ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen" ]
"2021-03-24T10:34:28"
"2021-07-09T17:59:27"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? GCP Marketplace * KFP version: 1.4.1 ### Steps to reproduce 1. Upload a relatively large (e.g. ~5G) file to a GCS bucket 2. Create a pipeline run using the code snippet below 3. Open the run in the KFP UI 4. Click on the "Download from GCS" component 5. Continuously open and close the "Input/Output" tab (e.g. by clicking on the "x" of the side pane, then on the component, then on the "x" again, etc.; or by clicking on "Visualizations", then "Input/Output", then "Visualizations", etc.) 6. You should see the resource usage of the `ml-pipeline-ui` pod skyrocket and eventually the pod will be restarted Code snippet to build and run a pipeline: ```python import kfp from kfp.compiler import compiler LARGE_FILE_URI = "gs://..." KFP_HOST = "https://..." @kfp.dsl.pipeline( name="UI Crashing Pipeline", description="Pipeline to help reproduce the KFP UI crash", ) def pipeline(): download_op = kfp.components.load_component_from_url( "https://raw.githubusercontent.com/kubeflow/pipelines/master/components/google-cloud/storage/download_blob/component.yaml" ) _ = download_op(gcs_path=LARGE_FILE_URI,) if __name__ == "__main__": compiler.Compiler().compile(pipeline, "pipeline.zip") client = kfp.Client(KFP_HOST) experiment = client.create_experiment(name="example") run = client.run_pipeline(experiment.id, "ui-crashing-pipeline", "pipeline.zip") ``` Some metrics graphs of `ml-pipeline-ui` extracted from GKE (the bumps correspond to the times that I was clicking around as specified in step 5): ![image](https://user-images.githubusercontent.com/14166032/112295068-36604e00-8c9c-11eb-852b-7e4ec047849a.png) ![image](https://user-images.githubusercontent.com/14166032/112295090-3bbd9880-8c9c-11eb-9fda-78d385b5cab1.png) ![image](https://user-images.githubusercontent.com/14166032/112295107-3fe9b600-8c9c-11eb-8872-125e9402e560.png) ### Expected result The `ml-pipeline-ui` doesn't start using insane amounts of resources and doesn't crash. ### Materials and Reference As far as I've investigated, clicking on the "Inputs/Outputs" tab sends a request to the backend to get 255 bytes of each file to use as a preview. However, the actual pod seems to download the whole file, which causes the network and CPU spikes I've shown above. I'm not familiar enough with the implementation, but probably a thing to look into is whether it's possible to prevent the pod from downloading the whole file. Another option could probably be to use some sort of cache for the preview. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
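The KFP frontend server is TypeScript, so the following is not the actual fix; it is only a hedged Python sketch of the range-read idea the report points at — fetch just the preview bytes rather than the whole object — assuming the artifact lives in GCS.

```python
# Hedged sketch of a range read: only the first `num_bytes` of the object are
# transferred, instead of streaming a multi-gigabyte file to build a 255-byte preview.
from google.cloud import storage


def preview_gcs_object(bucket_name: str, blob_name: str, num_bytes: int = 255) -> bytes:
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)
    # start/end are inclusive byte offsets, so this requests bytes 0..num_bytes-1
    return blob.download_as_bytes(start=0, end=num_bytes - 1)
```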
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5367/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5367/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5366
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5366/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5366/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5366/events
https://github.com/kubeflow/pipelines/issues/5366
839,567,051
MDU6SXNzdWU4Mzk1NjcwNTE=
5,366
[feature] KFP UI log viewer should wrap when lines are long
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen", "Some challenges for implementing wrapping ---\r\n\r\nBecause we used react-virtualized to make log viewer work with large scale of logs, which seems like a reasonable request.\r\nIt's a very challenging problem if you want both virtualization and wrapping -- you cannot easily know how many lines some text will wrap into without actually rendering it, therefore it's hard to calculate height of each item. Most virtualization UI libraries only support fixed height items.\r\n\r\nHowever, KFP UI currently loads [the entire logs file as a string](https://github.com/kubeflow/pipelines/blob/cc83e1089b573256e781ed2e4ac90f604129e769/frontend/src/components/LogViewer.tsx#L71) into browser memory. I don't think it actually scales well. If people are not getting problems with the existing LogViewer, maybe we don't need so much scalability.", "## Proposal\r\n\r\n* Change LogViewer to not virtualize (virtualize means when there are 1000 lines of logs, only render the visible 20 lines in the browser).\r\n* If total log size is over a limit, hide the middle part of the log by default (because usually what's at the beginning and ending are most useful), and allow users to open a separate raw text window to check the full log\r\n* wrap log lines by default\r\n\r\nWith my proposal, this should be relatively easy to implement and shouldn't break when log volume is huge.", "It sounds good to me for skipping middle part of logs and opening raw text window for full log.\r\n\r\nI have a question about not to virtualize + wrap log lines: How do we know how to break a long line to multiple small lines? The side panel size is adjustable, and the chrome tab can be zoom in and out. Therefore number of characters each line is dynamic.", "@zijianjoy that's the default behavior of browser, if you put some text in a div, they wrap when the div width is not enough.\n\nThere is a css property that disables this behavior, you may want to search for it." ]
"2021-03-24T10:18:55"
"2021-07-29T00:41:19"
null
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> KFP UI log viewer should wrap when lines are long ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> When debugging pipelines, I often find log lines are too long so I couldn't easily browse its full content. Because it's typically a lot harder to scroll left and right than up and down. ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> Scroll horizontally to see the log --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5366/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5366/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5358
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5358/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5358/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5358/events
https://github.com/kubeflow/pipelines/issues/5358
838,099,529
MDU6SXNzdWU4MzgwOTk1Mjk=
5,358
[frontend] Upgrade Argo UI template to match original content
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1682717397, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzk3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/process", "name": "kind/process", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-03-22T20:52:00"
"2021-03-29T15:23:50"
"2021-03-29T15:23:50"
COLLABORATOR
null
We have upgraded the backend to use the newer Argo version `2.12.9`: https://github.com/kubeflow/pipelines/pull/5266. Based on https://github.com/kubeflow/pipelines/pull/5339#discussion_r598079077, we also need to update the remaining contents of `/kubeflow/pipelines/frontend/third_party/argo-ui/argo_template.ts` to match https://github.com/argoproj/argo-workflows/blob/80b5ab9b8e35b4dba71396062abe32918cd76ddd/ui/src/models/workflows.ts#L863-L873.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5358/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5355
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5355/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5355/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5355/events
https://github.com/kubeflow/pipelines/issues/5355
838,087,366
MDU6SXNzdWU4MzgwODczNjY=
5,355
03/22/2021 Presubmit kubeflow-pipelines-tfx-python36 failing
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-03-22T20:34:26"
"2021-03-22T23:43:43"
"2021-03-22T23:43:43"
COLLABORATOR
null
https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5349/kubeflow-pipelines-tfx-python36/1373883178296545280 Looks like the error is caused by the bazel version: ``` ERROR: /root/.cache/bazel/_bazel_root/a58ac343beec6d39a39bba9ecc91209d/external/org_tensorflow/tensorflow/version_check.bzl:47:13: Traceback (most recent call last): File "/tmp/pip-req-build-ziym2n3h/WORKSPACE", line 66 check_bazel_version_at_least(<1 more arguments>) File "/root/.cache/bazel/_bazel_root/a58ac343beec6d39a39bba9ecc91209d/external/org_tensorflow/tensorflow/version_check.bzl", line 47, in check_bazel_version_at_least fail(<1 more arguments>) Current Bazel version is 3.4.1, expected at least 3.7.2 ERROR: error loading package 'external': Package 'external' contains errors INFO: Elapsed time: 12.728s INFO: 0 processes. FAILED: Build did NOT complete successfully (0 packages loaded) ``` We'll try to update bazel here: https://github.com/kubeflow/pipelines/blob/053d62e80f99a34f66b5d4035d425aed8df6c8a9/test/presubmit-tests-tfx.sh#L36
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5355/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5353
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5353/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5353/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5353/events
https://github.com/kubeflow/pipelines/issues/5353
837,848,818
MDU6SXNzdWU4Mzc4NDg4MTg=
5,353
kfserving pipeline deployment is failing with the error below
{ "login": "tiru1930", "id": 12211287, "node_id": "MDQ6VXNlcjEyMjExMjg3", "avatar_url": "https://avatars.githubusercontent.com/u/12211287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tiru1930", "html_url": "https://github.com/tiru1930", "followers_url": "https://api.github.com/users/tiru1930/followers", "following_url": "https://api.github.com/users/tiru1930/following{/other_user}", "gists_url": "https://api.github.com/users/tiru1930/gists{/gist_id}", "starred_url": "https://api.github.com/users/tiru1930/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tiru1930/subscriptions", "organizations_url": "https://api.github.com/users/tiru1930/orgs", "repos_url": "https://api.github.com/users/tiru1930/repos", "events_url": "https://api.github.com/users/tiru1930/events{/privacy}", "received_events_url": "https://api.github.com/users/tiru1930/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "pvaneck", "id": 1868861, "node_id": "MDQ6VXNlcjE4Njg4NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1868861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvaneck", "html_url": "https://github.com/pvaneck", "followers_url": "https://api.github.com/users/pvaneck/followers", "following_url": "https://api.github.com/users/pvaneck/following{/other_user}", "gists_url": "https://api.github.com/users/pvaneck/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvaneck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvaneck/subscriptions", "organizations_url": "https://api.github.com/users/pvaneck/orgs", "repos_url": "https://api.github.com/users/pvaneck/repos", "events_url": "https://api.github.com/users/pvaneck/events{/privacy}", "received_events_url": "https://api.github.com/users/pvaneck/received_events", "type": "User", "site_admin": false }
[ { "login": "pvaneck", "id": 1868861, "node_id": "MDQ6VXNlcjE4Njg4NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1868861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvaneck", "html_url": "https://github.com/pvaneck", "followers_url": "https://api.github.com/users/pvaneck/followers", "following_url": "https://api.github.com/users/pvaneck/following{/other_user}", "gists_url": "https://api.github.com/users/pvaneck/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvaneck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvaneck/subscriptions", "organizations_url": "https://api.github.com/users/pvaneck/orgs", "repos_url": "https://api.github.com/users/pvaneck/repos", "events_url": "https://api.github.com/users/pvaneck/events{/privacy}", "received_events_url": "https://api.github.com/users/pvaneck/received_events", "type": "User", "site_admin": false } ]
null
[ "Can you provide more info such as sample pipeline code that prompts the error and the version of KFServing you have installed?", "```\r\nimport kfp.dsl as dsl\r\nimport kfp.compiler as compiler\r\nfrom kfp import components\r\n\r\nkfserving_op = components.load_component_from_url(\r\n \"https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/kfserving/component.yaml\"\r\n)\r\n\r\n\r\ndef deploy_inference_image_op(action=\"create\",\r\n model_name=\"kfserving-inference-model-v1\",\r\n namespace=\"kfserving-test\",\r\n framework=\"custom\",\r\n raw_yaml=\"\",\r\n watch_timeout=\"300\"):\r\n return kfserving_op(\r\n action=action,\r\n model_name=model_name,\r\n namespace=namespace,\r\n framework=framework,\r\n service_account=\"ass\",\r\n inferenceservice_yaml=raw_yaml,\r\n watch_timeout=watch_timeout\r\n )\r\n\r\n\r\n@dsl.pipeline(\r\n name=\"kfp-kfserving-model-deploy\", description=\"deploy into kubeflow with kfserving\",\r\n)\r\ndef deploy_kf_pipeline(model_name: str, kf_inference_yaml: str, watch_timeout: str = '300'):\r\n deploy_inference_image_op(model_name=model_name, raw_yaml=kf_inference_yaml, watch_timeout=watch_timeout)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n file_name = __file__ + \".tar.gz\"\r\n compiler.Compiler().compile(deploy_kf_pipeline, file_name)\r\n\r\n```\r\n\r\nkfserving 0.5.1", "Hmm, what is being passed in as the inferenceservice_yaml in your case? There could potentially be something there causing an issue. Might be worth checking if the yaml can be deployed normally using kubectl. \r\n\r\nAlso, if you need to use a specific service account for the predictor pods while using the 'inferenceservice_yaml` argument, currently you need to pass it in with the yaml itself with:\r\n\r\n```\r\nspec:\r\n predictor:\r\n serviceAccountName: \"some-service-account\"\r\n ...\r\n```\r\nPassing it in as an arg currently doesn't overwrite or update the YAML passed in (though maybe it should).", "@pvaneck i was able deploy inference yaml with kubectl CLI,. 
\r\n\r\nYes, though i have mentioned service account in above code, same service account is mentioned in yaml also, same configuration is working for below commit in pipelines \r\n\r\n``` \r\ncommit 65bed9b6d1d676ef2d541a970d3edc0aee12400d\r\nAuthor: juliusvonkohout <45896133+juliusvonkohout@users.noreply.github.com>\r\nDate: Wed Feb 24 21:54:14 2021 +0100\r\n\r\n Changes for kfserving 0.4.1 (#4479)\r\n \r\n * Changes for kfserving 0.4.0\r\n \r\n * update to kfserving==0.4.0\r\n\r\npip freeze | grep kfserving\r\nkfserving==0.4.1\r\n\r\n```\r\n\r\nSo with latest code, i,e kfserving==0.5.1\r\n\r\n````\r\nfrom kfserving import V1alpha2EndpointSpec\r\nfrom kfserving import V1alpha2PredictorSpec\r\nfrom kfserving import V1alpha2TensorflowSpec\r\nfrom kfserving import V1alpha2PyTorchSpec\r\nfrom kfserving import V1alpha2SKLearnSpec\r\nfrom kfserving import V1alpha2XGBoostSpec\r\nfrom kfserving.models.v1alpha2_onnx_spec import V1alpha2ONNXSpec\r\nfrom kfserving import V1alpha2TritonSpec\r\nfrom kfserving import V1alpha2CustomSpec\r\nfrom kfserving import V1alpha2InferenceServiceSpec\r\nfrom kfserving import V1alpha2InferenceService\r\n\r\n````\r\nabove imports are moved to \r\n\r\n```\r\nfrom kfserving import V1beta1InferenceService\r\nfrom kfserving import V1beta1InferenceServiceSpec\r\nfrom kfserving import V1beta1LightGBMSpec\r\nfrom kfserving import V1beta1ONNXRuntimeSpec\r\nfrom kfserving import V1beta1PMMLSpec\r\nfrom kfserving import V1beta1PredictorSpec\r\nfrom kfserving import V1beta1SKLearnSpec\r\nfrom kfserving import V1beta1TFServingSpec\r\nfrom kfserving import V1beta1TorchServeSpec\r\nfrom kfserving import V1beta1TritonSpec\r\nfrom kfserving import V1beta1XGBoostSpec\r\n```\r\ni,e for v1alpha2 to v1beta1 so if run kfserving deploy pipeline with latest code its was looking for v1beta1 API,which is not found so its giving 404 error \r\n\r\n{'api_version': 'serving.kubeflow.org/v1beta1',", "Ah, seems like your cluster doesn't have KFServing >= 0.5.0 installed which is required for using the v1beta1 API. You had the right idea with using the old commit. Just continue to use your code, but load the old components yaml file. i.e:\r\n\r\n```\r\nhttps://github.com/kubeflow/pipelines/blob/65bed9b6d1d676ef2d541a970d3edc0aee12400d/components/kubeflow/kfserving/component.yaml\r\n```\r\n\r\nSorry for the hiccup. I will update the README to make this a bit clearer." ]
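The maintainer's closing advice above is to keep the existing pipeline code but pin the pre-v1beta1 component from the quoted commit. A minimal sketch of that pinning, assuming the cluster still runs KFServing < 0.5 and using the raw-file form of the blob URL quoted in the thread:

```python
from kfp import components

# Raw-file form of the component.yaml URL quoted above (commit 65bed9b6...),
# which targets the v1alpha2 KFServing API used by KFServing 0.4.x clusters.
KFSERVING_V1ALPHA2_COMPONENT = (
    "https://raw.githubusercontent.com/kubeflow/pipelines/"
    "65bed9b6d1d676ef2d541a970d3edc0aee12400d/components/kubeflow/kfserving/component.yaml"
)

# Load the pinned component and keep calling it with the same arguments as the
# pipeline earlier in this thread; only the component URL changes.
kfserving_op = components.load_component_from_url(KFSERVING_V1ALPHA2_COMPONENT)
```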
"2021-03-22T15:37:11"
"2021-04-08T01:10:02"
"2021-04-08T01:10:02"
NONE
null
``` raceback (most recent call last): File "kfservingdeployer.py", line 414, in <module> main() File "kfservingdeployer.py", line 381, in main max_replicas=max_replicas File "kfservingdeployer.py", line 235, in perform_action watch=True, timeout_seconds=watch_timeout) File "kfservingdeployer.py", line 168, in submit_api_request outputs = custom_obj_api.create_namespaced_custom_object(*args, isvc) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api/custom_objects_api.py", line 225, in create_namespaced_custom_object return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api/custom_objects_api.py", line 358, in create_namespaced_custom_object_with_http_info collection_formats=collection_formats) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 353, in call_api _preload_content, _request_timeout, _host) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 184, in __call_api _request_timeout=_request_timeout) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 397, in request body=body) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 280, in POST body=body) File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 233, in request raise ApiException(http_resp=r) kubernetes.client.exceptions.ApiException: (404) Reason: Not Found HTTP response headers: HTTPHeaderDict({'Audit-Id': '6fb21efd-452a-4332-94ce-11cd581663b4', 'Cache-Control': 'no-cache, private', 'Content-Type': 'text/plain; charset=utf-8', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon, 22 Mar 2021 15:25:48 GMT', 'Content-Length': '19'}) HTTP response body: 404 page not found ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5353/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5350
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5350/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5350/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5350/events
https://github.com/kubeflow/pipelines/issues/5350
837,421,715
MDU6SXNzdWU4Mzc0MjE3MTU=
5,350
Dashboard page crashes when entering invalid value into Recurring run "Start time" input
{ "login": "tkislan", "id": 5370798, "node_id": "MDQ6VXNlcjUzNzA3OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/5370798?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tkislan", "html_url": "https://github.com/tkislan", "followers_url": "https://api.github.com/users/tkislan/followers", "following_url": "https://api.github.com/users/tkislan/following{/other_user}", "gists_url": "https://api.github.com/users/tkislan/gists{/gist_id}", "starred_url": "https://api.github.com/users/tkislan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tkislan/subscriptions", "organizations_url": "https://api.github.com/users/tkislan/orgs", "repos_url": "https://api.github.com/users/tkislan/repos", "events_url": "https://api.github.com/users/tkislan/events{/privacy}", "received_events_url": "https://api.github.com/users/tkislan/received_events", "type": "User", "site_admin": false }
[ { "id": 930619511, "node_id": "MDU6TGFiZWw5MzA2MTk1MTE=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p0", "name": "priority/p0", "color": "db1203", "default": false, "description": "" }, { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Thank you for reporting! I can validate that this issue happens on Safari browser, which doesn't enforce the time format as it does on Chrome: \r\n![enforce](https://user-images.githubusercontent.com/37026441/113336259-fbcd6580-92da-11eb-968b-e730ba4570f6.png)\r\n\r\nTrying to find for a fix for this usecase.\r\n", "Validated Safari support for native `date` and `time` picker in TextField: https://caniuse.com/input-datetime", "Explored the option to use @material-ui/pickers but that is going to be a huge change: https://material-ui.com/components/pickers/. Most of them are not related to the issue itself, but related to many breaking changes from `@material-ui/core` because of @material-ui/pickers has a requirement on `@material-ui/core@^4.0.0`. \r\n\r\nI am leaning towards changing the behavior of current TextField to unblock Safari usage.", "Closing this issue with the PR which prevented throwing error when datetime format is invalid." ]
"2021-03-22T07:39:05"
"2021-04-02T16:26:41"
"2021-04-02T16:26:41"
NONE
null
/kind bug **What steps did you take and what happened:** <img width="634" alt="image" src="https://user-images.githubusercontent.com/5370798/107193399-1a0b9900-69ef-11eb-839b-64c803abe362.png"> When I select whole value in the text input, and press "1" (meaning that I want to write full time by hand), the whole page crashes, and only top banner and white page is displayed <img width="1440" alt="image" src="https://user-images.githubusercontent.com/5370798/107193737-7373c800-69ef-11eb-88fd-a646711847f0.png"> Same thing happens if I just delete the `:` character, without even losing the focus on the input Also, "Date" fields are broken in a same way as well .. as soon as you delete the `-` character, whole page crashes again Attaching the console log as well, but this is so easy, and 100% reproducible issue. **What did you expect to happen:** I expected a proper validation error displayed, after I leave the field, not while typing it. Also, why not just use some time picker? Because validation is obviously just completely wrong .. if you type `09:92`, no error is displayed ... **Anything else you would like to add:** Safari console log: ``` [Error] Failed to load resource: the server responded with a status of 400 () (runs, line 0) [Error] Error: Invalid picker format xi — TriggerUtils.ts:146 (anonymous function) — Trigger.tsx:408 bi — react-dom.production.min.js:131:466 _i — react-dom.production.min.js:131:200 _l — react-dom.production.min.js:252 _l (anonymous function) — scheduler.production.min.js:18:437 vl — react-dom.production.min.js:244:440 il — react-dom.production.min.js:223:423 il (anonymous function) — react-dom.production.min.js:121:115 (anonymous function) — scheduler.production.min.js:18:437 Yo — react-dom.production.min.js:121 Ko — react-dom.production.min.js:120:496 (anonymous function) — react-dom.production.min.js:224:85 le — react-dom.production.min.js:285:132 pe — react-dom.production.min.js:27 Mn — react-dom.production.min.js:83:282 Bn — react-dom.production.min.js:84:473 Fn — react-dom.production.min.js:84:101 Fn (anonymous function) — scheduler.production.min.js:18:437 se — react-dom.production.min.js:285 Tn — react-dom.production.min.js:82:280 Tn ds (2.729a2889.chunk.js:2:2176548) (anonymous function) (2.729a2889.chunk.js:2:2182311) bi (2.729a2889.chunk.js:2:2149218) _i (2.729a2889.chunk.js:2:2148952) _l (2.729a2889.chunk.js:2:2200695) _l (anonymous function) (2.729a2889.chunk.js:2:2219939) vl (2.729a2889.chunk.js:2:2196681) il (2.729a2889.chunk.js:2:2187008) il (anonymous function) (2.729a2889.chunk.js:2:2144991) (anonymous function) (2.729a2889.chunk.js:2:2219939) Yo (2.729a2889.chunk.js:2:2144937) Ko (2.729a2889.chunk.js:2:2144872) (anonymous function) (2.729a2889.chunk.js:2:2214205) le (2.729a2889.chunk.js:2:2214209) pe (2.729a2889.chunk.js:2:2104545) Mn (2.729a2889.chunk.js:2:2128859) Bn (2.729a2889.chunk.js:2:2129787) Fn (2.729a2889.chunk.js:2:2129075) Fn (anonymous function) (2.729a2889.chunk.js:2:2219939) se (2.729a2889.chunk.js:2:2214040) Tn (2.729a2889.chunk.js:2:2128498) Tn [Error] Error: Invalid picker format Yo (2.729a2889.chunk.js:2:2145080) Ko (2.729a2889.chunk.js:2:2144872) (anonymous function) (2.729a2889.chunk.js:2:2214205) le (2.729a2889.chunk.js:2:2214209) pe (2.729a2889.chunk.js:2:2104545) Mn (2.729a2889.chunk.js:2:2128859) Bn (2.729a2889.chunk.js:2:2129787) Fn (2.729a2889.chunk.js:2:2129075) Fn (anonymous function) (2.729a2889.chunk.js:2:2219939) se (2.729a2889.chunk.js:2:2214040) Tn (2.729a2889.chunk.js:2:2128498) Tn ``` **Environment:** - 
Kubeflow version: v1beta1 - kfctl version: kfctl v1.2.0-0-gbc038f9 - Kubernetes platform: EKS - Kubernetes version: v1.18.9-eks-d1db3c - OS: EKS managed nodes Copied from https://github.com/kubeflow/kubeflow/issues/5587
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5350/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5348
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5348/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5348/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5348/events
https://github.com/kubeflow/pipelines/issues/5348
837,347,857
MDU6SXNzdWU4MzczNDc4NTc=
5,348
ScheduledWorkflow is not deleted along with RecurringRun
{ "login": "TrsNium", "id": 11388424, "node_id": "MDQ6VXNlcjExMzg4NDI0", "avatar_url": "https://avatars.githubusercontent.com/u/11388424?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TrsNium", "html_url": "https://github.com/TrsNium", "followers_url": "https://api.github.com/users/TrsNium/followers", "following_url": "https://api.github.com/users/TrsNium/following{/other_user}", "gists_url": "https://api.github.com/users/TrsNium/gists{/gist_id}", "starred_url": "https://api.github.com/users/TrsNium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TrsNium/subscriptions", "organizations_url": "https://api.github.com/users/TrsNium/orgs", "repos_url": "https://api.github.com/users/TrsNium/repos", "events_url": "https://api.github.com/users/TrsNium/events{/privacy}", "received_events_url": "https://api.github.com/users/TrsNium/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "@hilcj hello. I think this issue should be labeld to `area/backend`", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-22T05:56:41"
"2022-04-18T17:27:46"
"2022-04-18T17:27:46"
CONTRIBUTOR
null
### What steps did you take: Removed Recurring Run via python's KFP package. (The Recurring Runs to be deleted are all those associated with a particular Experiment.) ### What happened: The job information of the RecurringRun were removed from mysql, but the ScheduledWorkflow (CRD) was not removed. ### What did you expect to happen: Both the job information of the RecurringRun on mysql and the ScheduledWorkflow (CRD) should be deleted. ### Environment: kubeflow pipelines are running on GKE. GKE version: 1.18.12-gke.1210 mysql version: MySQL 8.0 How did you deploy Kubeflow Pipelines (KFP)? It is built on an infrastructure environment managed by terraform. Deploying a resource generated by manifest( of gcp-blue prints) with some editing. KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> 1.2 KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> (pypi)kfp 1.3.0 ### Anything else you would like to add: I read the source code, and there is no mechanism to rollback when the process fails. This causes inconsistencies between the DB and other resources such as CRD, so I think a transactional mechanism should be implemented. Also, I think there should be a mechanism to synchronize the ScheduledWorkflow (CRD) with the job information in mysql. /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
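For context, a minimal sketch of the deletion flow described above (kfp 1.x SDK, with a hypothetical experiment name); per this report, the job rows are removed from MySQL but the backing ScheduledWorkflow CRD can survive if that step fails:

```python
import kfp

client = kfp.Client()  # host/auth arguments omitted

# "my-experiment" is a placeholder for the experiment whose recurring runs are removed.
experiment = client.get_experiment(experiment_name="my-experiment")
jobs = client.list_recurring_runs(experiment_id=experiment.id, page_size=100).jobs or []

for job in jobs:
    # Deletes the recurring-run (job) record via the API server; the corresponding
    # ScheduledWorkflow CRD should be cleaned up as part of this call.
    client.delete_job(job.id)
```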
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5348/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5347
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5347/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5347/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5347/events
https://github.com/kubeflow/pipelines/issues/5347
837,284,112
MDU6SXNzdWU4MzcyODQxMTI=
5,347
failed to save outputs: fork/exec /usr/local/bin/docker: exec format error
{ "login": "xybuty", "id": 44302306, "node_id": "MDQ6VXNlcjQ0MzAyMzA2", "avatar_url": "https://avatars.githubusercontent.com/u/44302306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xybuty", "html_url": "https://github.com/xybuty", "followers_url": "https://api.github.com/users/xybuty/followers", "following_url": "https://api.github.com/users/xybuty/following{/other_user}", "gists_url": "https://api.github.com/users/xybuty/gists{/gist_id}", "starred_url": "https://api.github.com/users/xybuty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xybuty/subscriptions", "organizations_url": "https://api.github.com/users/xybuty/orgs", "repos_url": "https://api.github.com/users/xybuty/repos", "events_url": "https://api.github.com/users/xybuty/events{/privacy}", "received_events_url": "https://api.github.com/users/xybuty/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "I think the main reason for the error is that you're not using full paths in `file_outputs`. The system cannot find those files.\r\nCan you try specifying the full paths everywhere?\r\n\r\nAdditionally I see couple of other issues (although they should not be leading to errors).\r\n\r\n* Directly constructing `ContainerOp` objects is deprecated. Please create components in `component.yaml` format. Its very easy and is supported. See https://github.com/kubeflow/pipelines/blob/master/samples/tutorials/Data%20passing%20in%20python%20components.ipynb and https://www.kubeflow.org/docs/pipelines/sdk/component-development/\r\n* We encourage users to specify the program command in the component definition (and not just relying on the container ENTRYPOINT)", "Found another root cause: `np.save` adds extension to the file path:\r\nHere is the documentation for `np.save`\r\n```\r\nIf file is a string or Path, a ``.npy``\r\n extension will be appended to the filename if it does not already\r\n have one.\r\n```", "I've tried to recreate your case using the best practices:\r\n\r\n```python\r\n# Component:\r\nfrom kfp.components import create_component_from_func, InputPath, OutputPath\r\n\r\ndef sklearn_datasets_load_boston(\r\n x_train_path: OutputPath('NumPy'),\r\n y_train_path: OutputPath('NumPy'),\r\n x_test_path: OutputPath('NumPy'),\r\n y_test_path: OutputPath('NumPy'),\r\n test_size: float = 0.33,\r\n): \r\n import os\r\n import numpy as np\r\n from sklearn import datasets\r\n from sklearn.model_selection import train_test_split \r\n\r\n x, y = datasets.load_boston(return_X_y=True)\r\n x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=test_size)\r\n\r\n with open(x_train_path, 'wb') as x_train_file:\r\n np.save(x_train_file, x_train)\r\n with open(y_train_path, 'wb') as y_train_file:\r\n np.save(y_train_file, y_train)\r\n with open(x_test_path, 'wb') as x_test_file:\r\n np.save(x_test_file, x_test)\r\n with open(y_test_path, 'wb') as y_test_file:\r\n np.save(y_test_file, y_test)\r\n \r\n\r\n\r\nif __name__ == '__main__':\r\n sklearn_datasets_load_boston_op = create_component_from_func(\r\n sklearn_datasets_load_boston,\r\n packages_to_install=['scikit-learn==0.24.1'],\r\n output_component_file='component.yaml',\r\n )\r\n\r\n\r\n# Pipeline:\r\ndef load_data_pipeline():\r\n load_data_task = sklearn_datasets_load_boston_op(test_size=0.33)\r\n\r\n\r\nif __name__ == '__main__':\r\n import kfp\r\n kfp_endpoint = None\r\n kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(\r\n load_data_pipeline,\r\n arguments={}\r\n )\r\n```", "> I've tried to recreate your case using the best practices:\r\n> \r\n> ```python\r\n> # Component:\r\n> from kfp.components import create_component_from_func, InputPath, OutputPath\r\n> \r\n> def sklearn_datasets_load_boston(\r\n> x_train_path: OutputPath('NumPy'),\r\n> y_train_path: OutputPath('NumPy'),\r\n> x_test_path: OutputPath('NumPy'),\r\n> y_test_path: OutputPath('NumPy'),\r\n> test_size: float = 0.33,\r\n> ): \r\n> import os\r\n> import numpy as np\r\n> from sklearn import datasets\r\n> from sklearn.model_selection import train_test_split \r\n> \r\n> x, y = datasets.load_boston(return_X_y=True)\r\n> x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=test_size)\r\n> \r\n> with open(x_train_path, 'wb') as x_train_file:\r\n> np.save(x_train_file, x_train)\r\n> with open(y_train_path, 'wb') as y_train_file:\r\n> np.save(y_train_file, y_train)\r\n> with open(x_test_path, 'wb') as x_test_file:\r\n> np.save(x_test_file, 
x_test)\r\n> with open(y_test_path, 'wb') as y_test_file:\r\n> np.save(y_test_file, y_test)\r\n> \r\n> \r\n> \r\n> if __name__ == '__main__':\r\n> sklearn_datasets_load_boston_op = create_component_from_func(\r\n> sklearn_datasets_load_boston,\r\n> packages_to_install=['scikit-learn==0.24.1'],\r\n> output_component_file='component.yaml',\r\n> )\r\n> \r\n> \r\n> # Pipeline:\r\n> def load_data_pipeline():\r\n> load_data_task = sklearn_datasets_load_boston_op(test_size=0.33)\r\n> \r\n> \r\n> if __name__ == '__main__':\r\n> import kfp\r\n> kfp_endpoint = None\r\n> kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(\r\n> load_data_pipeline,\r\n> arguments={}\r\n> )\r\n> ```\r\n\r\nThank you so much for your reply,I‘m trying to figure out the difference between our code.\r\n**And I have another question:how to point kfp_endpoint?**\r\nMy KF-cluster is deployed on two local virtual machines,I wrote code in the local python environment and upload it to the cluster through the UI interface;How should I point the kfp_endpoint?\r\nThe master node'ip is 192.168.52.10,the svc state like these\r\n![image](https://user-images.githubusercontent.com/44302306/112102703-a363dd00-8be3-11eb-9458-a63a6b0dc166.png)\r\n", ">I‘m trying to figure out the difference between our code.\r\n\r\nThe root issue in your code was probably using non-absolute paths in `file_outputs`.\r\n\r\nWhile fixing the issue I've also discovered that `np.save(data_path, data)` writes to `data_path + \".npy\"`, which was leading to a similar error message in my code. To fix that I've used `np.save` with pre-opened files.", ">how to point kfp_endpoint?\r\n>My KF-cluster is deployed on two local virtual machines,I wrote code in the local python environment and upload it to the cluster through the UI interface;How should I point the kfp_endpoint?\r\n\r\nI'm not sure the direct connection to the API serve with `kfp_endpoint = \"http://10.100.203.15:8888\"` will work.\r\nYou'll probably need to port forward the api server `kubectl port-forward svc/ml-pipeline 8888` and use `kfp_endpoint \r\n = http://127.0.0.1:8888`\r\n\r\n/cc @Bobgy ", "> > I‘m trying to figure out the difference between our code.\r\n> \r\n> The root issue in your code was probably using non-absolute paths in `file_outputs`.\r\n> \r\n> While fixing the issue I've also discovered that `np.save(data_path, data)` writes to `data_path + \".npy\"`, which was leading to a similar error message in my code. To fix that I've used `np.save` with pre-opened files.\r\n\r\nthank you so much for your reply ,After I improved my code according to your guidance,I found maybe my problem is not same as this problem [https://github.com/kubeflow/pipelines/issues/750].Because the load_data image can save data to the path in containter,like this\r\n![image](https://user-images.githubusercontent.com/44302306/112306006-4a2aa500-8cda-11eb-8d08-715399950994.png)\r\nThen I check the pod logs I get the following error.\r\n![image](https://user-images.githubusercontent.com/44302306/112306332-aa214b80-8cda-11eb-95b3-d8d15ae999d0.png)\r\nLook forward your help!", "> > I‘m trying to figure out the difference between our code.\r\n> \r\n> The root issue in your code was probably using non-absolute paths in `file_outputs`.\r\n> \r\n> While fixing the issue I've also discovered that `np.save(data_path, data)` writes to `data_path + \".npy\"`, which was leading to a similar error message in my code. 
To fix that I've used `np.save` with pre-opened files.\r\n\r\nThe screenshot is not clear,here is the information of the error\r\n`time=\"2021-03-24T19:04:42Z\" level=info msg=\"Creating a docker executor\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Executor (version: v2.3.0, build_date: 2019-05-20T22:10:54Z) initialized (pod: kubeflow/load-data-2-mctsc-3737213461) with template:\\n{\\\"name\\\":\\\"load-data\\\",\\\"inputs\\\":{},\\\"outputs\\\":{\\\"artifacts\\\":[{\\\"name\\\":\\\"load-data-X_train\\\",\\\"path\\\":\\\"/X_train.npy\\\"}]},\\\"metadata\\\":{},\\\"container\\\":{\\\"name\\\":\\\"\\\",\\\"image\\\":\\\"load_data2:v0.0.2\\\",\\\"command\\\":[\\\"python\\\",\\\"load_data.py\\\"],\\\"resources\\\":{}},\\\"archiveLocation\\\":{\\\"s3\\\":{\\\"endpoint\\\":\\\"minio-service.kubeflow:9000\\\",\\\"bucket\\\":\\\"mlpipeline\\\",\\\"insecure\\\":true,\\\"accessKeySecret\\\":{\\\"name\\\":\\\"mlpipeline-minio-artifact\\\",\\\"key\\\":\\\"accesskey\\\"},\\\"secretKeySecret\\\":{\\\"name\\\":\\\"mlpipeline-minio-artifact\\\",\\\"key\\\":\\\"secretkey\\\"},\\\"key\\\":\\\"artifacts/load-data-2-mctsc/load-data-2-mctsc-3737213461\\\"}}}\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Waiting on main container\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"main container started with container ID: 9f20576bc2d9b749429bc259cf10c932a4636fbdd1041043c8cc5550d2d74158\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Starting annotations monitor\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"docker wait 9f20576bc2d9b749429bc259cf10c932a4636fbdd1041043c8cc5550d2d74158\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Starting deadline monitor\"\r\ntime=\"2021-03-24T19:04:42Z\" level=error msg=\"executor error: fork/exec /usr/local/bin/docker: exec format error\\ngithub.com/argoproj/argo/errors.Wrap\\n\\t/go/src/github.com/argoproj/argo/errors/errors.go:88\\ngithub.com/argoproj/argo/errors.InternalWrapError\\n\\t/go/src/github.com/argoproj/argo/errors/errors.go:71\\ngithub.com/argoproj/argo/workflow/common.RunCommand\\n\\t/go/src/github.com/argoproj/argo/workflow/common/util.go:397\\ngithub.com/argoproj/argo/workflow/executor/docker.(*DockerExecutor).Wait\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/docker/docker.go:95\\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).Wait\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/executor.go:867\\ngithub.com/argoproj/argo/cmd/argoexec/commands.waitContainer\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:32\\ngithub.com/argoproj/argo/cmd/argoexec/commands.NewWaitCommand.func1\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:16\\ngithub.com/spf13/cobra.(*Command).execute\\n\\t/go/src/github.com/spf13/cobra/command.go:766\\ngithub.com/spf13/cobra.(*Command).ExecuteC\\n\\t/go/src/github.com/spf13/cobra/command.go:852\\ngithub.com/spf13/cobra.(*Command).Execute\\n\\t/go/src/github.com/spf13/cobra/command.go:800\\nmain.main\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/main.go:17\\nruntime.main\\n\\t/usr/local/go/src/runtime/proc.go:201\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1333\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"No sidecars\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"No output parameters\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Saving output artifacts\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Staging artifact: load-data-X_train\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Copying 
/X_train.npy from container base image layer to /argo/outputs/artifacts/load-data-X_train.tgz\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Archiving 9f20576bc2d9b749429bc259cf10c932a4636fbdd1041043c8cc5550d2d74158:/X_train.npy to /argo/outputs/artifacts/load-data-X_train.tgz\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"sh -c docker cp -a 9f20576bc2d9b749429bc259cf10c932a4636fbdd1041043c8cc5550d2d74158:/X_train.npy - | gzip > /argo/outputs/artifacts/load-data-X_train.tgz\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Annotations monitor stopped\"\r\ntime=\"2021-03-24T19:04:42Z\" level=warning msg=\"path /X_train.npy does not exist (or /X_train.npy is empty) in archive /argo/outputs/artifacts/load-data-X_train.tgz\"\r\ntime=\"2021-03-24T19:04:42Z\" level=error msg=\"executor error: path /X_train.npy does not exist (or /X_train.npy is empty) in archive /argo/outputs/artifacts/load-data-X_train.tgz\\ngithub.com/argoproj/argo/errors.New\\n\\t/go/src/github.com/argoproj/argo/errors/errors.go:49\\ngithub.com/argoproj/argo/errors.Errorf\\n\\t/go/src/github.com/argoproj/argo/errors/errors.go:55\\ngithub.com/argoproj/argo/workflow/executor/docker.(*DockerExecutor).CopyFile\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/docker/docker.go:66\\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).stageArchiveFile\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/executor.go:344\\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).saveArtifact\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/executor.go:245\\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).SaveArtifacts\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/executor.go:231\\ngithub.com/argoproj/argo/cmd/argoexec/commands.waitContainer\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:54\\ngithub.com/argoproj/argo/cmd/argoexec/commands.NewWaitCommand.func1\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:16\\ngithub.com/spf13/cobra.(*Command).execute\\n\\t/go/src/github.com/spf13/cobra/command.go:766\\ngithub.com/spf13/cobra.(*Command).ExecuteC\\n\\t/go/src/github.com/spf13/cobra/command.go:852\\ngithub.com/spf13/cobra.(*Command).Execute\\n\\t/go/src/github.com/spf13/cobra/command.go:800\\nmain.main\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/main.go:17\\nruntime.main\\n\\t/usr/local/go/src/runtime/proc.go:201\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1333\"\r\ntime=\"2021-03-24T19:04:42Z\" level=info msg=\"Alloc=4108 TotalAlloc=11155 Sys=70590 NumGC=4 Goroutines=8\"\r\ntime=\"2021-03-24T19:04:42Z\" level=fatal msg=\"path /X_train.npy does not exist (or /X_train.npy is empty) in archive 
/argo/outputs/artifacts/load-data-X_train.tgz\\ngithub.com/argoproj/argo/errors.New\\n\\t/go/src/github.com/argoproj/argo/errors/errors.go:49\\ngithub.com/argoproj/argo/errors.Errorf\\n\\t/go/src/github.com/argoproj/argo/errors/errors.go:55\\ngithub.com/argoproj/argo/workflow/executor/docker.(*DockerExecutor).CopyFile\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/docker/docker.go:66\\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).stageArchiveFile\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/executor.go:344\\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).saveArtifact\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/executor.go:245\\ngithub.com/argoproj/argo/workflow/executor.(*WorkflowExecutor).SaveArtifacts\\n\\t/go/src/github.com/argoproj/argo/workflow/executor/executor.go:231\\ngithub.com/argoproj/argo/cmd/argoexec/commands.waitContainer\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:54\\ngithub.com/argoproj/argo/cmd/argoexec/commands.NewWaitCommand.func1\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/commands/wait.go:16\\ngithub.com/spf13/cobra.(*Command).execute\\n\\t/go/src/github.com/spf13/cobra/command.go:766\\ngithub.com/spf13/cobra.(*Command).ExecuteC\\n\\t/go/src/github.com/spf13/cobra/command.go:852\\ngithub.com/spf13/cobra.(*Command).Execute\\n\\t/go/src/github.com/spf13/cobra/command.go:800\\nmain.main\\n\\t/go/src/github.com/argoproj/argo/cmd/argoexec/main.go:17\\nruntime.main\\n\\t/usr/local/go/src/runtime/proc.go:201\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1333\"`", "Do you still get the error when you run the pipeline code I've posted? It worked for me.", "> Do you still get the error when you run the pipeline code I've posted? It worked for me.\r\n\r\nYES,I still got the error (T_T)", "Duplicate of https://github.com/kubeflow/pipelines/issues/1654\n\nI think this is the problem -- kubernetes 1.19 has dropped built-in docker support. So you should switch to use argo pns executor.", "> Duplicate of #1654\r\n> \r\n> I think this is the problem -- kubernetes 1.19 has dropped built-in docker support. So you should switch to use argo pns executor.\r\nYES,You are right,this problem didn't reappear after Ichange rhe k8s version。", "Looks like this particular issue is resolved." ]
"2021-03-22T03:38:33"
"2021-04-10T04:09:16"
"2021-04-10T04:09:16"
NONE
null
### What steps did you take: [A clear and concise description of what the bug is.] **I upload a pipeline to load data,the function's code like this** ![image](https://user-images.githubusercontent.com/44302306/111936955-b605e580-8b01-11eb-9d30-e1f306cd0c65.png) **Dockerfile** ![image](https://user-images.githubusercontent.com/44302306/111937036-e8afde00-8b01-11eb-8457-8676e254dd18.png) And the pipeline's code like this ![image](https://user-images.githubusercontent.com/44302306/111937087-ffeecb80-8b01-11eb-9a2b-55083dd64d0f.png) ### What happened: The load-data pipeline was uploaded successful,but when I create a run based on the load-data pipeline,it occur this error ![image](https://user-images.githubusercontent.com/44302306/111937325-7390d880-8b02-11eb-9c05-526063c72c5e.png) ### Environment: <!-- Please fill in those that seem relevant. --> My environment is docker version :1.13.1 k8s : v1.19.4 kubeflow:1.0.2 pipelines:0.2.5 How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> ![image](https://user-images.githubusercontent.com/44302306/111937500-d1bdbb80-8b02-11eb-8e4d-b77bb5a88adf.png) ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5347/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5343
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5343/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5343/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5343/events
https://github.com/kubeflow/pipelines/issues/5343
836,855,994
MDU6SXNzdWU4MzY4NTU5OTQ=
5,343
KFServing component missing request timeout support
{ "login": "midhun1998", "id": 24776450, "node_id": "MDQ6VXNlcjI0Nzc2NDUw", "avatar_url": "https://avatars.githubusercontent.com/u/24776450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/midhun1998", "html_url": "https://github.com/midhun1998", "followers_url": "https://api.github.com/users/midhun1998/followers", "following_url": "https://api.github.com/users/midhun1998/following{/other_user}", "gists_url": "https://api.github.com/users/midhun1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/midhun1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/midhun1998/subscriptions", "organizations_url": "https://api.github.com/users/midhun1998/orgs", "repos_url": "https://api.github.com/users/midhun1998/repos", "events_url": "https://api.github.com/users/midhun1998/events{/privacy}", "received_events_url": "https://api.github.com/users/midhun1998/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Bump @pvaneck ", "Just a note that any of the PodSpec fields can be used with the component if you pass in yaml directly:\r\n```python\r\n isvc_yaml = '''\r\n apiVersion: \"serving.kubeflow.org/v1beta1\"\r\n kind: \"InferenceService\"\r\n metadata:\r\n name: \"sklearn-iris2\"\r\n spec:\r\n predictor:\r\n timeout: 123\r\n sklearn:\r\n storageUri: \"gs://kfserving-samples/models/sklearn/iris\"\r\n '''\r\n kfserving_op(\r\n action='apply',\r\n inferenceservice_yaml=isvc_yaml\r\n```\r\n\r\nHowever, I'd be open to having timeout be an actual arg of the component as it seems to be a field that may be commonly used.", "/assign @kubeflow/wg-serving-leads ", "Hi @Bobgy. I will be opening a PR soon. I have added it and I'm currently testing.", "@pvaneck Shall I wait for PR #5386 to be merged? Also, shall I push the image with tag `v0.5.1` or `v0.5.1.1`. Please guide.", "@midhun1998 You can go ahead and submit a PR as it can always be rebased, and I don't foresee any conflicts. I will build and push the image with your changes once it passes the review process. " ]
"2021-03-20T16:41:13"
"2021-04-08T03:25:57"
"2021-04-08T03:25:57"
MEMBER
null
/kind bug /kind feature Hi. Recently the KFServing component for the pipeline was updated to v0.5.1 supporting v1beta1 API. This component is still missing the `timeout` field in PredictorSpec which specifies the number of seconds to wait before timing out a request to the component. A couple of issues were raised in this regard: - [https://github.com/kubeflow/kfserving/issues/892]( https://github.com/kubeflow/kfserving/issues/892) - [https://github.com/kubeflow/kfserving/issues/1129](https://github.com/kubeflow/kfserving/issues/1129) I would be happy to submit a PR as seems like a very small change. :heart: Proposed solution: Add an extra args named `requestTimeout` to component yaml and use this as parameter in the `create_predictor_spec` function.
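As noted in the comments, the field can already be passed through the raw InferenceService YAML until a dedicated argument exists. A minimal sketch, reusing the v1beta1 component from this repo and the example spec from the thread (model name and storageUri are illustrative):

```python
from kfp import components

kfserving_op = components.load_component_from_url(
    "https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/kfserving/component.yaml"
)

# Predictor-level `timeout` (request timeout in seconds) is set inside the raw YAML,
# because the component does not yet expose it as a separate input.
ISVC_YAML = """
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
  name: "sklearn-iris2"
spec:
  predictor:
    timeout: 123
    sklearn:
      storageUri: "gs://kfserving-samples/models/sklearn/iris"
"""

def deploy_with_request_timeout():
    return kfserving_op(action="apply", inferenceservice_yaml=ISVC_YAML)
```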
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5343/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5335
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5335/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5335/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5335/events
https://github.com/kubeflow/pipelines/issues/5335
835,619,546
MDU6SXNzdWU4MzU2MTk1NDY=
5,335
Kubeflow Pipelines recurring run fails intermittently due to minio service
{ "login": "rexshang", "id": 13360422, "node_id": "MDQ6VXNlcjEzMzYwNDIy", "avatar_url": "https://avatars.githubusercontent.com/u/13360422?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rexshang", "html_url": "https://github.com/rexshang", "followers_url": "https://api.github.com/users/rexshang/followers", "following_url": "https://api.github.com/users/rexshang/following{/other_user}", "gists_url": "https://api.github.com/users/rexshang/gists{/gist_id}", "starred_url": "https://api.github.com/users/rexshang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rexshang/subscriptions", "organizations_url": "https://api.github.com/users/rexshang/orgs", "repos_url": "https://api.github.com/users/rexshang/repos", "events_url": "https://api.github.com/users/rexshang/events{/privacy}", "received_events_url": "https://api.github.com/users/rexshang/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1682717392, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzky", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question", "name": "kind/question", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "HI @rexshang,\r\n\r\nI can think of three ways to improve:\r\n* set up a retry strategy, so that the pipeline can auto retry a few times when the instability happens (you should always expect other services may fail)\r\n* use managed storage, gcs is probably stabler than minio\r\n* upgrade minio server, the default one is very old", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi @Bobgy,\r\n\r\nI have the exact same problem and it's happening really often. I already have a retry strategy but it's not retrying when this fail.\r\nI'm on KFP 1.7.1 deployed on GKE via the marketplace, using managed storage and managed sql.\r\n\r\nCan I upgrade minio without breaking KFP ? Is there anything else I could do ?\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-03-19T06:40:14"
"2022-04-18T15:26:52"
null
NONE
null
### What steps did you take: I'm running kubeflow 1.0.4 in GCP. There is a recurring job that runs every 15 min. It would fail about 1 time a day and generate the following error. Does anyone have any pointers to help me troubleshoot? ### What happened: "time="2021-03-18T04:55:22Z" level=warning msg="Failed to put file: Get http://minio-service.default:9000/kubeflow-artifact/?location=: dial tcp x.x.x.x:9000: connect: connection refused" "time="2021-03-18T04:55:22Z" level=error msg="executor error: timed out waiting for the condition" ### Environment: Google Cloud Platform Kubeflow 1.0.4 How did you deploy Kubeflow Pipelines (KFP)? using GCP AI Platform Pipelines "NEW INSTANCE" process KFP version: 1.0.4 KFP SDK version: N/A ### Anything else you would like to add: /kind bug /area backend <!-- // /area frontend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5335/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5333
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5333/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5333/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5333/events
https://github.com/kubeflow/pipelines/issues/5333
835,415,209
MDU6SXNzdWU4MzU0MTUyMDk=
5,333
[Doc] Operator best practices doc
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619540, "node_id": "MDU6TGFiZWw5MzA2MTk1NDA=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/docs", "name": "area/docs", "color": "d2b48c", "default": false, "description": null }, { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen", "superceded by https://github.com/kubeflow/pipelines/issues/6204" ]
"2021-03-19T01:01:48"
"2021-08-25T06:35:13"
"2021-08-25T06:35:13"
CONTRIBUTOR
null
We are missing documentation introducing how to configure KFP to run stably in a production environment.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5333/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5329
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5329/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5329/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5329/events
https://github.com/kubeflow/pipelines/issues/5329
834,862,040
MDU6SXNzdWU4MzQ4NjIwNDA=
5,329
Database Connection Errors
{ "login": "maganaluis", "id": 15258405, "node_id": "MDQ6VXNlcjE1MjU4NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/15258405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maganaluis", "html_url": "https://github.com/maganaluis", "followers_url": "https://api.github.com/users/maganaluis/followers", "following_url": "https://api.github.com/users/maganaluis/following{/other_user}", "gists_url": "https://api.github.com/users/maganaluis/gists{/gist_id}", "starred_url": "https://api.github.com/users/maganaluis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maganaluis/subscriptions", "organizations_url": "https://api.github.com/users/maganaluis/orgs", "repos_url": "https://api.github.com/users/maganaluis/repos", "events_url": "https://api.github.com/users/maganaluis/events{/privacy}", "received_events_url": "https://api.github.com/users/maganaluis/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "Thank you for the suggestion @maganaluis ! It makes sense to change the lifetime for DB connection, and make it configurable. Welcome to contribute this feature!", "@zijianjoy I created the PR. After changing this configuration we did not experience any of these errors anymore, so seems to be the correct approach here. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "For anyone coming across this issue, any time you see an error like `[mysql] XXXX/XX/XX XX:XX:XX packets.go:36: unexpected EOF` in the `cache-server` pods, you are almost certainly dealing with a network-level issue between your pods and the MySQL database.\r\n\r\nThe most likely case is that there is some __asymmetric routing__ going on. That is, your Pods might be able to create a connection to MySQL, but MySQL might not have a route back to your Pod. Note, [MySQL connections are complex](https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_connection_phase.html) and will sometimes initiate new TCP connections back to the client, which will fail in the previous case.\r\n\r\nI have a more detailed write up on this issue: https://github.com/kubeflow/pipelines/issues/3763#issuecomment-1547009188" ]
"2021-03-18T14:11:06"
"2023-05-14T21:59:38"
"2021-08-04T21:28:23"
CONTRIBUTOR
null
### What steps did you take: We recently switched from using the MySQL database that Kubeflow Pipelines comes with, to a managed database on Azure. We started observing errors such as the one below, the errors are not consistent but they do happen often enough to impact our workloads. I have not been able to replicate the issue outside of Kubeflow, more specifically this ```unexpected EOF``` error. As it is suggested by methane https://github.com/go-sql-driver/mysql/issues/674#issuecomment-345661869 the general advice is to set ```DB.SetConnMaxLifetime(time.Second)``` to something small like 1-3 minutes, but this is not a setting that's enabled by env variables by Kubeflow Pipelines. I believe it should be because I'm sure others will run into the same issue as Kubeflow gains more Enterprise Adoption. We are going to test if this change reduces or hopefully eliminates these errors, but at least from the research I've done it looks like the right approach. ```golang func (c *ClientManager) init() { glog.Info("Initializing client manager") db := initDBClient(common.GetDurationConfig(initConnectionTimeout)) db.SetConnMaxLifetime(time.Second * 120) ``` ``` [mysql] 2021/03/16 07:27:11 packets.go:36: unexpected EOF [mysql] 2021/03/16 07:27:11 connection.go:141: write tcp 10.244.24.107:50550->40.71.8.203:3306: write: broken pipe E0316 07:27:11.976783 6 experiment_store.go:78] Failed to start transaction to list jobs I0316 07:27:11.976819 6 error.go:243] invalid connection InternalServerError: Failed to list experiments: invalid connection github.com/kubeflow/pipelines/backend/src/common/util.NewInternalServerError /go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:142 github.com/kubeflow/pipelines/backend/src/apiserver/storage.(*ExperimentStore).ListExperiments.func1 /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/storage/experiment_store.go:49 github.com/kubeflow/pipelines/backend/src/apiserver/storage.(*ExperimentStore).ListExperiments /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/storage/experiment_store.go:79 github.com/kubeflow/pipelines/backend/src/apiserver/resource.(*ResourceManager).ListExperiments /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/resource/resource_manager.go:145 github.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:162 github.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1 /go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:619 main.apiServerInterceptor /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30 github.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler /go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:621 google.golang.org/grpc.(*Server).processUnaryRPC /go/pkg/mod/google.golang.org/grpc@v1.28.0/server.go:1082 google.golang.org/grpc.(*Server).handleStream /go/pkg/mod/google.golang.org/grpc@v1.28.0/server.go:1405 google.golang.org/grpc.(*Server).serveStreams.func1.1 /go/pkg/mod/google.golang.org/grpc@v1.28.0/server.go:746 runtime.goexit /usr/local/go/src/runtime/asm_amd64.s:1357 List experiments failed. ``` ### What happened: After switching Kubeflow to a managed database, we have been experiencing sporadic connection errors in the Kubeflow API service. 
### What did you expect to happen: I would expect not to have any database connection issues. ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> - Kustomize KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> ~ 1.3 KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> ~ 1.3 ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5329/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5329/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5328
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5328/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5328/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5328/events
https://github.com/kubeflow/pipelines/issues/5328
834,793,202
MDU6SXNzdWU4MzQ3OTMyMDI=
5,328
New API to get Pipeline ID from its name
{ "login": "AntPeixe", "id": 21229140, "node_id": "MDQ6VXNlcjIxMjI5MTQw", "avatar_url": "https://avatars.githubusercontent.com/u/21229140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AntPeixe", "html_url": "https://github.com/AntPeixe", "followers_url": "https://api.github.com/users/AntPeixe/followers", "following_url": "https://api.github.com/users/AntPeixe/following{/other_user}", "gists_url": "https://api.github.com/users/AntPeixe/gists{/gist_id}", "starred_url": "https://api.github.com/users/AntPeixe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntPeixe/subscriptions", "organizations_url": "https://api.github.com/users/AntPeixe/orgs", "repos_url": "https://api.github.com/users/AntPeixe/repos", "events_url": "https://api.github.com/users/AntPeixe/events{/privacy}", "received_events_url": "https://api.github.com/users/AntPeixe/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "Thank you @AntPeixe for this suggestion! Currently we are able to filter by pipeline name, but we might need to take an action to give some examples about how to use it, and improve reference documentation.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-18T13:13:06"
"2022-04-18T17:27:49"
"2022-04-18T17:27:49"
NONE
null
/kind feature **Why you need this feature:** Right now it isn't possible to (easily) get the pipeline ID by its name. When creating a new pipeline, it may happen that the name already exists and the POST request returns an error. I'd like to be able to then get the ID of that existing pipeline using its name. Currently there is a workaround: list all pipelines and filter the results, but this isn't as handy. **Describe the solution you'd like:** A new API endpoint to find a single pipeline ID by providing its name.
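A minimal sketch of the workaround described above, assuming the kfp v1 Python SDK and its `kfp.Client`: page through the existing list API and match the name client-side. The host URL and pipeline name are placeholders, not values from the issue.

```python
# Sketch only: resolve a pipeline ID by name with the existing list API,
# assuming the kfp v1 SDK; adjust host/auth for your deployment.
import kfp


def find_pipeline_id_by_name(client: kfp.Client, name: str):
    """Page through pipelines and return the ID of the first exact name match."""
    page_token = ''
    while True:
        resp = client.list_pipelines(page_token=page_token, page_size=100)
        for p in resp.pipelines or []:
            if p.name == name:
                return p.id
        page_token = resp.next_page_token
        if not page_token:
            return None  # no pipeline with that name


client = kfp.Client(host='http://<kfp-host>/pipeline')  # placeholder host
pipeline_id = find_pipeline_id_by_name(client, 'my-pipeline')
```

Newer kfp SDK releases also expose a `Client.get_pipeline_id(name)` helper that pushes the name filter to the server, which is the cleaner option when it is available.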
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5328/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5328/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5324
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5324/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5324/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5324/events
https://github.com/kubeflow/pipelines/issues/5324
834,559,908
MDU6SXNzdWU4MzQ1NTk5MDg=
5,324
Kubeflow pipeline cannot get graph and state of a run
{ "login": "Windrill", "id": 13634511, "node_id": "MDQ6VXNlcjEzNjM0NTEx", "avatar_url": "https://avatars.githubusercontent.com/u/13634511?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Windrill", "html_url": "https://github.com/Windrill", "followers_url": "https://api.github.com/users/Windrill/followers", "following_url": "https://api.github.com/users/Windrill/following{/other_user}", "gists_url": "https://api.github.com/users/Windrill/gists{/gist_id}", "starred_url": "https://api.github.com/users/Windrill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Windrill/subscriptions", "organizations_url": "https://api.github.com/users/Windrill/orgs", "repos_url": "https://api.github.com/users/Windrill/repos", "events_url": "https://api.github.com/users/Windrill/events{/privacy}", "received_events_url": "https://api.github.com/users/Windrill/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "Duplicate of https://github.com/kubeflow/pipelines/issues/3763", "Can you confirm?", "Confirmed this fixes the issue\r\n`kubectl delete pod ml-pipeline-persistenceagent-xxxxxxx-xxxx -n kubeflow`\r\n\r\nAlso observed that the persistence-agent was looping over an old copy of pipelines possibly from a previous deployment of Kubeflow on Kubernetes that has already been reset by kubeadm on the same machine.\r\n\r\nAfter the restart, the correct pipelines are being read correctly and the status of new runs can be displayed.", "Thanks for the confirmation, if you have additional information how this situation could happen, please comment on #3763, that will be very helpful. I could not reproduce it any more.\r\n\r\nI am closing this issue to allow all discussion happen in the same thread" ]
"2021-03-18T09:12:12"
"2021-03-19T02:58:36"
"2021-03-19T02:58:36"
NONE
null
This is a deployment of Kubeflow on Kubernetes. Previously, Kubeflow was running fine. Then, from one day onward, all newly created runs show no status other than 'Unknown'. Inside a run, the 'Graph' tab is stuck loading forever. Run outputs stay at none, and the 'Config' tab shows everything as expected. From the Argo UI, every run still functions correctly, and the logs and artifacts are accessible as normal. ![image](https://user-images.githubusercontent.com/13634511/111600541-7b076780-880c-11eb-85bb-cae3a94a8b2f.png) ![image](https://user-images.githubusercontent.com/13634511/111600605-8b1f4700-880c-11eb-9b55-6a5bb4311c3e.png) The Kubeflow version (the version of the kubeflow source code specified in the manifest file used to deploy Kubeflow) is around 1.1. If the graph is left loading, the browser console shows a few failed requests, in case they are relevant: GET http://[ip]:31380/pipeline/apis/v1beta1/runs/01d0ca69-931d-42c2-afa7-4bf0f72f08cf 500 Utils.tsx:32 Response for path: system/project-id was not 'ok' Response was: upstream request timeout What communication could have gone wrong?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5324/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5322
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5322/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5322/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5322/events
https://github.com/kubeflow/pipelines/issues/5322
834,407,625
MDU6SXNzdWU4MzQ0MDc2MjU=
5,322
[UI] visualization of failed pipelines inaccurate after upgrading argo
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }, { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "/cc @zijianjoy ", "Example pipeline\r\n\r\n```\r\nHOST = '<redacted>'\r\n\r\nimport kfp\r\nimport kfp.components as comp\r\n\r\n\r\ndef fail():\r\n import sys\r\n println(\"failed\")\r\n sys.exit(1)\r\n\r\n\r\ndef echo():\r\n println(\"hello world\")\r\n\r\n\r\ndef main():\r\n fail_op = comp.func_to_container_op(fail)\r\n echo_op = comp.func_to_container_op(echo)\r\n\r\n @kfp.dsl.pipeline(name='fail sample')\r\n def fail_sample():\r\n fail_task = fail_op()\r\n echo_task = echo_op()\r\n echo_task.after(fail_task)\r\n\r\n client = kfp.Client(host=HOST)\r\n client.create_run_from_pipeline_func(fail_sample, {})\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```", "However, here is the pipeline visualization of the failed pipeline\r\n\r\n![image](https://user-images.githubusercontent.com/4957653/111728523-fec26200-88a7-11eb-8ecd-d8c4508af8f5.png)\r\n\r\nThe pipeline has two tasks, the first fails, so the second one is omitted.\r\nWhat happened?\r\nHowever, the already completed and failed pipeline visualization shows the last task as pending further execution.\r\nand the echo op status is unknown.\r\n\r\nExpected: there should be no unfinished sign after the last step. and echo op should be shown as omitted", "Fixed by #5339 \r\n@zijianjoy FYI, if you use `Fixes #5322` in your PR's title. This issue will be auto closed when your PR is merged. That's best practice for keeping the issue lifecycle up-to-date." ]
"2021-03-18T05:29:16"
"2021-03-24T08:28:23"
"2021-03-24T08:28:23"
CONTRIBUTOR
null
### What steps did you take: [A clear and concise description of what the bug is.] ### What happened: It seems there was a change that all skipped nodes in the pipeline are also added to workflow status, so they also render in KFP UI now. However, it doesn't show properly. TODO: I'll upload a screenshot and give an example here. ### What did you expect to happen: The skipped nodes should either be hidden, or we remove the unfinished sign after the last unfinished node. ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5322/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5321
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5321/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5321/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5321/events
https://github.com/kubeflow/pipelines/issues/5321
834,328,025
MDU6SXNzdWU4MzQzMjgwMjU=
5,321
Envoy proxy is NOT ready on ml-pipeline-ui and ml-pipeline-visualizationserver
{ "login": "fahadh4ilyas", "id": 37577369, "node_id": "MDQ6VXNlcjM3NTc3MzY5", "avatar_url": "https://avatars.githubusercontent.com/u/37577369?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fahadh4ilyas", "html_url": "https://github.com/fahadh4ilyas", "followers_url": "https://api.github.com/users/fahadh4ilyas/followers", "following_url": "https://api.github.com/users/fahadh4ilyas/following{/other_user}", "gists_url": "https://api.github.com/users/fahadh4ilyas/gists{/gist_id}", "starred_url": "https://api.github.com/users/fahadh4ilyas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fahadh4ilyas/subscriptions", "organizations_url": "https://api.github.com/users/fahadh4ilyas/orgs", "repos_url": "https://api.github.com/users/fahadh4ilyas/repos", "events_url": "https://api.github.com/users/fahadh4ilyas/events{/privacy}", "received_events_url": "https://api.github.com/users/fahadh4ilyas/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Duplicate of https://github.com/kubeflow/pipelines/issues/5223", "No one maintains Kubeflow 1.2 istio-dex distribution" ]
"2021-03-18T02:23:18"
"2021-03-19T01:06:49"
"2021-03-19T01:06:49"
NONE
null
### What steps did you take: [A clear and concise description of what the bug is.] I'm just installing Kubeflow with kfctl_istio_dex.yaml config in my Kubernetes local cluster following [this instruction](https://www.kubeflow.org/docs/started/k8s/kfctl-istio-dex/). Every pods are running successfully except `ml-pipeline-ui` and `ml-pipeline-visualizationserver`. The containers are running and ready except `istio-proxy` container ### What happened: Here is the log of `istio-proxy` from `ml-pipeline-ui` pod ``` [2021-03-18 02:16:55.627][21][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, no healthy upstream [2021-03-18 02:16:55.627][21][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:50] Unable to establish new stream 2021-03-18T02:16:55.733771Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:16:57.734035Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:16:59.733651Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:17:01.733671Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:17:03.733793Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:17:05.733660Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:17:07.733940Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected ``` And here is log of `istio-proxy` from `ml-pipeline-visualizationserver` pod ``` [2021-03-18 02:19:24.289][22][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.99.171.76:8060: connect: connection timed out" 2021-03-18T02:19:26.055989Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:19:28.056171Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:19:30.056271Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:19:32.056020Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:19:34.056093Z info Envoy proxy is NOT ready: config not received 
from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected [2021-03-18 02:19:35.179][22][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure 2021-03-18T02:19:36.056009Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected [2021-03-18 02:19:37.414][22][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure 2021-03-18T02:19:38.056105Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:19:40.056148Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected [2021-03-18 02:19:41.892][22][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 2, failed to get root cert 2021-03-18T02:19:42.056084Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:19:44.055754Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected 2021-03-18T02:19:46.056156Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected ``` ### What did you expect to happen: Every pods are running and ready ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? I'm just following [this instruction](https://www.kubeflow.org/docs/started/k8s/kfctl-istio-dex/) KFP version: 1.2.0
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5321/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5318
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5318/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5318/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5318/events
https://github.com/kubeflow/pipelines/issues/5318
834,169,753
MDU6SXNzdWU4MzQxNjk3NTM=
5,318
PIPELINES CRASHING
{ "login": "prasadkyp7", "id": 80782583, "node_id": "MDQ6VXNlcjgwNzgyNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/80782583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prasadkyp7", "html_url": "https://github.com/prasadkyp7", "followers_url": "https://api.github.com/users/prasadkyp7/followers", "following_url": "https://api.github.com/users/prasadkyp7/following{/other_user}", "gists_url": "https://api.github.com/users/prasadkyp7/gists{/gist_id}", "starred_url": "https://api.github.com/users/prasadkyp7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prasadkyp7/subscriptions", "organizations_url": "https://api.github.com/users/prasadkyp7/orgs", "repos_url": "https://api.github.com/users/prasadkyp7/repos", "events_url": "https://api.github.com/users/prasadkyp7/events{/privacy}", "received_events_url": "https://api.github.com/users/prasadkyp7/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
open
false
null
[]
null
[ "> i think the kubeflow is triggering the pipelines to be in a single node. \r\n\r\nAre you sure this is the case? KFP compiler creates and Argo Workflow object and Argo controller just creates Kubernetes Pods. Do you see anything strange with the pods that Argo creates? Perhaps the pods are not allowed to be scheduled on the auto-scaling node pool.", "Yes, I think Argo Controller is creating all the pods for pipelines under a single nod. i checked it from kubernetes side.\r\nso stressing a single nod is killing the pods. why Argo Controller is putting all the pods under a single node. if so , how can we change that while creating the pipeline using pipeline compiler", "This issue happens when Kubernetes always thinks that current node's resource is enough for the pipeline job, so it keeps assigning to the same node. \r\n\r\nSolution: Set memory/CPU requests/limit on pipeline steps to guarantee they are not evicted when the cluster is under resource constraints. They can be set using KFP DSL. After you get container ops, they have a method for setting memory/CPU limit/request: https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.dsl.html#kfp.dsl.Sidecar.set_memory_limit . Once Kubernetes knows that your job's resource requirement cannot be met with current node, it will assign pipeline to another node and that is when autoscaling plays a role.\r\n\r\nWould you like to give it a try and reply if this resolves your issue?", "Hello, \r\nI faced the same issue and then set memory_request and cpu_request. I also randomly increased the number of nodes in my cluster and enabled auto-scaling. Now, the pipelines do not crash, but they are taking too long to execute( the pipeline has been running for two days) . Is there anyway of knowing how many nodes my pipeline will need? Also, each node has 2 CPUs, can we increase the CPUs? ", "@sharmahemlata \r\n\r\nYou can add some logging to your pipeline, so you can dig in further which step in your component is taking a long time to finish. Argo workflow handles the execution so it might be better to follow up on Argo workflow side. You can set https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.dsl.html#kfp.dsl.Sidecar.set_cpu_request (set_cpu_request) to increase the minimum CPUs you use.", "@zijianjoy Thanks for responding. There is logging throughout the program. As soon as the container starts, 'Inside the container' is logged. I have increased the number of CPUs to a point where the CPUs are underutilised. \r\n\r\nI recently got the kubernetes application developer certificate to try and understand yaml files in case the issue was with the argo workflow controller. I have used the updated version of the argo workflow controller. and changed some configuration according to the issues on the argo workflow issues page, nothing there has been useful either. \r\n\r\nI noticed something interested lately, when my container has errors (eg: syntax errors), then the logging happens up until the error occurs. If there are no errors in my code, there is no logging whatsoever. ", "> @zijianjoy Thanks for responding. There is logging throughout the program. As soon as the container starts, 'Inside the container' is logged. I have increased the number of CPUs to a point where the CPUs are underutilised.\r\n> \r\n> I recently got the kubernetes application developer certificate to try and understand yaml files in case the issue was with the argo workflow controller. I have used the updated version of the argo workflow controller. 
and changed some configuration according to the issues on the argo workflow issues page, nothing there has been useful either.\r\n> \r\n> I noticed something interested lately, when my container has errors (eg: syntax errors), then the logging happens up until the error occurs. If there are no errors in my code, there is no logging whatsoever.\r\n\r\nAs it turns out, my python code was not logging anything because I need to flush the output stream (sys.stdout.flush()). \r\nMy issue seems to inside the container source code. Sorry for the bother. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-03-17T20:53:31"
"2022-03-03T00:05:15"
null
NONE
null
When I run many pipelines from the Kubeflow UI, some of them crash with the error "This step is in Failed state with this message: failed with exit code 137". I think this is an out-of-memory error. When we investigated from the Kubernetes side, we saw that all the pipeline pods were running on a single node. Even though we have enabled autoscaling of the nodes for Kubeflow pipelines, the pods still land on the same node, so it looks as if Kubeflow forces the pipelines onto a single node. I don't know if this is a bug; if not, I want to know how we can spread pipeline pods across the nodes from the Kubeflow side.
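A minimal sketch of the mitigation suggested in the comments above, assuming the kfp v1 DSL: declare CPU/memory requests and limits on each step so the scheduler stops packing every pod onto one node and the autoscaler can add nodes. The image, step, and resource figures are illustrative placeholders, not values from this issue.

```python
# Sketch only: per-step resource requests/limits in a kfp v1 pipeline.
# Image name and resource values are placeholders.
import kfp
from kfp import dsl, compiler


@dsl.pipeline(name='resourced-pipeline')
def resourced_pipeline():
    train = dsl.ContainerOp(
        name='train',
        image='python:3.8',  # placeholder image
        command=['python', '-c', 'print("train step")'],
    )
    # With explicit requests, Kubernetes stops scheduling new steps onto a node
    # once its allocatable CPU/memory is exhausted, which lets autoscaling kick in.
    train.container.set_cpu_request('1').set_cpu_limit('2')
    train.container.set_memory_request('2G').set_memory_limit('4G')


if __name__ == '__main__':
    compiler.Compiler().compile(resourced_pipeline, 'resourced_pipeline.yaml')
```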
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5318/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5316
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5316/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5316/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5316/events
https://github.com/kubeflow/pipelines/issues/5316
833,982,945
MDU6SXNzdWU4MzM5ODI5NDU=
5,316
[frontend] Add pagination to Artifact list
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello!\r\n\r\nThere are two old issues about this, so let's link to them:\r\n* https://github.com/kubeflow/pipelines/issues/3226\r\n* https://github.com/kubeflow/pipelines/issues/3731\r\n\r\nMoreover, to fix this I suggest we rely on existing pagination support of the backend, thus making the the UI backwards compatible.\r\n\r\n1. MLMetadata supports pagination in `GetArtifacts`, `GetExecutions`, and `GetContexts` APIs since 0.23.0 ([release notes](https://github.com/google/ml-metadata/releases/tag/v0.23.0))\r\n2. MLMetadata support pagination in `GetExecutionsByContext` and\r\n`GetArtifactsByContext` APIs since 0.25.1 ([release notes](https://github.com/google/ml-metadata/releases/tag/v0.25.1))\r\n3. KFP manifests deploy MLMetadata version 0.25.1 ([source](https://github.com/kubeflow/pipelines/blob/5990285894d655b0b52ca4d9a2f08756a99f1ec8/manifests/kustomize/base/metadata/metadata-grpc-deployment.yaml#L25))\r\n\r\nTherefore, it's just a matter of\r\n1. updating the client used by the UI, and\r\n2. update the UI code to show pages similar to runs list, experiments list, ...\r\n\r\nIn the end, there will be no breaking changes, no backend API changes, or different metadata store communication. What do you think?", "Yes, the original plan was to support pagination when backend supports that. And the backend support has already been released as @elikatsis pointed out.\n\nSuggest close either this issue or the existing issue, so that we do not have two threads for the same topic", "Thank you @elikatsis and @Bobgy for the reference! It makes sense to change client since metadata has already supported pagination. I will close this issue here. Since we have linked this issue to the previous ones, we can find the reference for our decision on making client changes for supporting pagination." ]
"2021-03-17T16:45:07"
"2021-03-18T17:38:03"
"2021-03-18T17:38:03"
COLLABORATOR
null
### What is the issue Discussion: https://github.com/kubeflow/pipelines/pull/5311#pullrequestreview-613890507 It is hard to scale when there are many artifacts to show on the [Artifact list](https://github.com/zijianjoy/pipelines/blob/master/frontend/src/pages/ArtifactList.tsx). We should implement an API that accepts pageSize and pageToken, like [listPipelines](https://github.com/zijianjoy/pipelines/blob/master/frontend/src/pages/PipelineList.tsx#L174-L179). It might require an update to the backend API, or we can implement this during the KFP v2 implementation, because it touches the metadata store communication. /kind bug /area frontend @Bobgy
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5316/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5315
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5315/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5315/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5315/events
https://github.com/kubeflow/pipelines/issues/5315
833,978,606
MDU6SXNzdWU4MzM5Nzg2MDY=
5,315
[frontend] Make clickable item styling distinguishable from non-clickable items
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1116354964, "node_id": "MDU6TGFiZWwxMTE2MzU0OTY0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/size/S", "name": "size/S", "color": "ededed", "default": false, "description": null } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Moving this to v2 compatible project, because this especially affects MLMD artifacts etc, they are important in v2 compatible mode" ]
"2021-03-17T16:40:04"
"2021-07-15T20:36:38"
"2021-07-15T20:36:38"
COLLABORATOR
null
### Issue description Discussion: https://github.com/kubeflow/pipelines/pull/5311#pullrequestreview-613890507 We received reports from users that the clickable items on the Artifact list page are hard to discover because they are grey. You can only tell that they are clickable when you hover over an item, which turns it blue and underlined. (Clickable items are a darker grey while non-clickable items are a lighter grey.) Our mitigation is to add underlines strictly to clickable items on this page, but that breaks styling consistency with other pages. ### What decision we want to make We need to settle on one clickable style and apply it to all pages. This style needs to be `accessible`, `discoverable` and `visually friendly`. /kind bug /area frontend @Bobgy
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5315/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5307
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5307/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5307/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5307/events
https://github.com/kubeflow/pipelines/issues/5307
832,310,627
MDU6SXNzdWU4MzIzMTA2Mjc=
5,307
pipeline cannot read data from pvc
{ "login": "summerisc", "id": 21230663, "node_id": "MDQ6VXNlcjIxMjMwNjYz", "avatar_url": "https://avatars.githubusercontent.com/u/21230663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/summerisc", "html_url": "https://github.com/summerisc", "followers_url": "https://api.github.com/users/summerisc/followers", "following_url": "https://api.github.com/users/summerisc/following{/other_user}", "gists_url": "https://api.github.com/users/summerisc/gists{/gist_id}", "starred_url": "https://api.github.com/users/summerisc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/summerisc/subscriptions", "organizations_url": "https://api.github.com/users/summerisc/orgs", "repos_url": "https://api.github.com/users/summerisc/repos", "events_url": "https://api.github.com/users/summerisc/events{/privacy}", "received_events_url": "https://api.github.com/users/summerisc/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm not sure this is related to Kubeflow Pipelines. Have you checked how the created pod looks like? You can also replace the command with `sleep 9999` and then use `kubectl exec` to probe and debug the pod.\r\n\r\nWe advice our users to avoid using volumes directly (and creating ContainerOp directly as well).\r\n\r\n```python\r\npreprocess_op = kfp.components.load_component_from_text('''\r\nname: process_data\r\ninputs:\r\n- {name: input1}\r\noutputs:\r\n- {name: output1}\r\nimplementation:\r\n container:\r\n image: image:latest\r\n command:\r\n - sh\r\n - -exc\r\n - cd /home && python test.py\r\n - --input-path\r\n - - {inputPath: input1}\r\n - --output-path\r\n - - {outputPath: output1}\r\n''')\r\n```\r\n\r\nIs there a reason this won't work for you?\r\n\r\nIf the data is too large for the standard artifact passing, then there is a compilation option that seamlessly replaces the intermediate artifact storage with a volume without changing the pipeline code.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-16T00:45:04"
"2022-04-18T17:27:50"
"2022-04-18T17:27:50"
NONE
null
This is more of a question than a bug. Currently, I have a pipeline that contains an op like: ``` def preprocess_op(input_path, output_path): op = dsl.ContainerOp( name='process data', image='image:latest', command=[ 'sh', '-c', ], arguments=[ f'cd /home && python test.py' f'--input-path {input_path}' f'--output-path {output_path}' ], file_outputs={} ) op.apply(onprem.mount_pvc(pvc_name="flow-test", volume_name="pipeline", volume_mount_path="/mnt/data")) return op ``` I have a PVC called flow-test already bound. ![Screen Shot 2021-03-15 at 8 43 41 PM](https://user-images.githubusercontent.com/21230663/111239233-22975680-85cf-11eb-96c7-bbeaf743b7cf.png) When I run `client.create_run_from_pipeline_func(test_pipeline, arguments={'input_path': '/mnt/data', 'output_path': '/'}, namespace='namespaceName')` it shows `No such file or directory`, so it seems /mnt/data cannot be found by the ContainerOp. Is this the correct way to read data from a PVC inside a pipeline? If not, could anyone share what the correct way would be? I am using Kubeflow on on-prem servers with k8s deployed on them.
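As a hedged comparison to the op above, here is a sketch of the same step with the shell argument spacing fixed (the concatenated f-strings in the original produce `test.py--input-path ...` with no separators) and the PVC mounted the same way. The image, paths, and PVC names are the reporter's values and remain assumptions about their setup, not a verified fix.

```python
# Sketch only: same op, written as a single shell string with explicit spaces
# between the script and its flags, plus the PVC mount for /mnt/data.
from kfp import dsl
from kfp import onprem


def preprocess_op(input_path, output_path):
    op = dsl.ContainerOp(
        name='process data',
        image='image:latest',  # placeholder image from the report
        command=['sh', '-c'],
        arguments=[
            # Note the spaces that the original concatenated f-strings dropped.
            f'cd /home && python test.py --input-path {input_path} --output-path {output_path}'
        ],
    )
    # Mount the bound PVC so /mnt/data exists inside the step's container.
    op.apply(onprem.mount_pvc(pvc_name='flow-test',
                              volume_name='pipeline',
                              volume_mount_path='/mnt/data'))
    return op
```

The maintainer's comment earlier in this issue recommends loading a component from a YAML spec instead of constructing `ContainerOp` directly; the sketch above only mirrors the reporter's original approach with the obvious spacing issue removed.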
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5307/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5306
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5306/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5306/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5306/events
https://github.com/kubeflow/pipelines/issues/5306
832,225,674
MDU6SXNzdWU4MzIyMjU2NzQ=
5,306
E2E-mnist sample not up-to-date with Katib launcher
{ "login": "chinhuang007", "id": 10040293, "node_id": "MDQ6VXNlcjEwMDQwMjkz", "avatar_url": "https://avatars.githubusercontent.com/u/10040293?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chinhuang007", "html_url": "https://github.com/chinhuang007", "followers_url": "https://api.github.com/users/chinhuang007/followers", "following_url": "https://api.github.com/users/chinhuang007/following{/other_user}", "gists_url": "https://api.github.com/users/chinhuang007/gists{/gist_id}", "starred_url": "https://api.github.com/users/chinhuang007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chinhuang007/subscriptions", "organizations_url": "https://api.github.com/users/chinhuang007/orgs", "repos_url": "https://api.github.com/users/chinhuang007/repos", "events_url": "https://api.github.com/users/chinhuang007/events{/privacy}", "received_events_url": "https://api.github.com/users/chinhuang007/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
closed
false
{ "login": "andreyvelich", "id": 31112157, "node_id": "MDQ6VXNlcjMxMTEyMTU3", "avatar_url": "https://avatars.githubusercontent.com/u/31112157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andreyvelich", "html_url": "https://github.com/andreyvelich", "followers_url": "https://api.github.com/users/andreyvelich/followers", "following_url": "https://api.github.com/users/andreyvelich/following{/other_user}", "gists_url": "https://api.github.com/users/andreyvelich/gists{/gist_id}", "starred_url": "https://api.github.com/users/andreyvelich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreyvelich/subscriptions", "organizations_url": "https://api.github.com/users/andreyvelich/orgs", "repos_url": "https://api.github.com/users/andreyvelich/repos", "events_url": "https://api.github.com/users/andreyvelich/events{/privacy}", "received_events_url": "https://api.github.com/users/andreyvelich/received_events", "type": "User", "site_admin": false }
[ { "login": "andreyvelich", "id": 31112157, "node_id": "MDQ6VXNlcjMxMTEyMTU3", "avatar_url": "https://avatars.githubusercontent.com/u/31112157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andreyvelich", "html_url": "https://github.com/andreyvelich", "followers_url": "https://api.github.com/users/andreyvelich/followers", "following_url": "https://api.github.com/users/andreyvelich/following{/other_user}", "gists_url": "https://api.github.com/users/andreyvelich/gists{/gist_id}", "starred_url": "https://api.github.com/users/andreyvelich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andreyvelich/subscriptions", "organizations_url": "https://api.github.com/users/andreyvelich/orgs", "repos_url": "https://api.github.com/users/andreyvelich/repos", "events_url": "https://api.github.com/users/andreyvelich/events{/privacy}", "received_events_url": "https://api.github.com/users/andreyvelich/received_events", "type": "User", "site_admin": false } ]
null
[ "This example is no longer maintained in this repo. I tried to fix it for kubeflow 1.0 at https://github.com/kubeflow/pipelines/pull/4126 a while ago, but seems like the original author is no longer active", "cc @andreyvelich : What is the current status of this component? Should we perform a fix or remove this example?", "Thank you for mentioning me @zijianjoy.\r\nYes, I have plans to refactor e2e mnist example with the new launchers for Katib, KFServing and TF-Operator.\r\nI want to make it after the Kubeflow 1.3 release.\r\n\r\n/assign @andreyvelich ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-03-15T21:58:34"
"2021-07-27T18:22:44"
"2021-07-27T18:22:44"
NONE
null
### What steps did you take: Run the e2e mnist notebook ### What happened: Got TypeError: Katib - Launch Experiment() got an unexpected keyword argument 'parallel_trial_count' Trying to execute ``` katib_experiment_launcher_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/kubeflow/katib-launcher/component.yaml') op1 = katib_experiment_launcher_op( experiment_name=name, experiment_namespace=namespace, parallel_trial_count=3, max_trial_count=12, objective=str(objectiveConfig), algorithm=str(algorithmConfig), trial_template=str(trialTemplate), parameters=str(parameters), metrics_collector=str(metricsCollectorSpec), # experiment_timeout_minutes=experimentTimeoutMinutes, delete_finished_experiment=False) ``` ### What did you expect to happen: Not throwing an error ### Environment: kubeflow 1.2 How did you deploy Kubeflow Pipelines (KFP)? part of kubeflow 1.2 on IBM cloud KFP SDK version: kfp 1.0.4 kfp-pipeline-spec 0.1.6 kfp-server-api 1.4.1 kfp-tekton 0.4.0 ### Anything else you would like to add: Seems affected by this PR https://github.com/kubeflow/pipelines/pull/4798 /kind bug
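Since the launcher's argument names changed with the linked PR, one way to avoid guessing is to inspect the component's declared inputs before wiring it into the pipeline. A small sketch, assuming only `requests` and `pyyaml` are installed; the URL is the same one used in the notebook.

```python
# Sketch only: print the input names the katib-launcher component currently accepts,
# so the failing call can be matched against the live component.yaml.
import requests
import yaml

COMPONENT_URL = ('https://raw.githubusercontent.com/kubeflow/pipelines/'
                 'master/components/kubeflow/katib-launcher/component.yaml')

spec = yaml.safe_load(requests.get(COMPONENT_URL).text)
for inp in spec.get('inputs', []):
    print(inp['name'], '-', inp.get('default', '<no default>'))
```

The kfp SDK typically exposes each declared input as a snake_case keyword argument on the loaded component, so comparing this list with the call above shows which keywords need renaming.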
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5306/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5305
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5305/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5305/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5305/events
https://github.com/kubeflow/pipelines/issues/5305
832,056,768
MDU6SXNzdWU4MzIwNTY3Njg=
5,305
Better document the "concat" and "if" placeholders
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "id": 930619540, "node_id": "MDU6TGFiZWw5MzA2MTk1NDA=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/docs", "name": "area/docs", "color": "d2b48c", "default": false, "description": null }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-15T17:58:13"
"2022-04-18T17:27:48"
"2022-04-18T17:27:48"
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5305/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5304
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5304/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5304/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5304/events
https://github.com/kubeflow/pipelines/issues/5304
831,878,336
MDU6SXNzdWU4MzE4NzgzMzY=
5,304
ML-Engine deploy model component missing region parameter
{ "login": "Amrit-Mann", "id": 49272096, "node_id": "MDQ6VXNlcjQ5MjcyMDk2", "avatar_url": "https://avatars.githubusercontent.com/u/49272096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Amrit-Mann", "html_url": "https://github.com/Amrit-Mann", "followers_url": "https://api.github.com/users/Amrit-Mann/followers", "following_url": "https://api.github.com/users/Amrit-Mann/following{/other_user}", "gists_url": "https://api.github.com/users/Amrit-Mann/gists{/gist_id}", "starred_url": "https://api.github.com/users/Amrit-Mann/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Amrit-Mann/subscriptions", "organizations_url": "https://api.github.com/users/Amrit-Mann/orgs", "repos_url": "https://api.github.com/users/Amrit-Mann/repos", "events_url": "https://api.github.com/users/Amrit-Mann/events{/privacy}", "received_events_url": "https://api.github.com/users/Amrit-Mann/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "NikeNano", "id": 22057410, "node_id": "MDQ6VXNlcjIyMDU3NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikeNano", "html_url": "https://github.com/NikeNano", "followers_url": "https://api.github.com/users/NikeNano/followers", "following_url": "https://api.github.com/users/NikeNano/following{/other_user}", "gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions", "organizations_url": "https://api.github.com/users/NikeNano/orgs", "repos_url": "https://api.github.com/users/NikeNano/repos", "events_url": "https://api.github.com/users/NikeNano/events{/privacy}", "received_events_url": "https://api.github.com/users/NikeNano/received_events", "type": "User", "site_admin": false }
[ { "login": "NikeNano", "id": 22057410, "node_id": "MDQ6VXNlcjIyMDU3NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikeNano", "html_url": "https://github.com/NikeNano", "followers_url": "https://api.github.com/users/NikeNano/followers", "following_url": "https://api.github.com/users/NikeNano/following{/other_user}", "gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions", "organizations_url": "https://api.github.com/users/NikeNano/orgs", "repos_url": "https://api.github.com/users/NikeNano/repos", "events_url": "https://api.github.com/users/NikeNano/events{/privacy}", "received_events_url": "https://api.github.com/users/NikeNano/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign", "@zijianjoy should we aim for making [all arguments](https://cloud.google.com/sdk/gcloud/reference/ml-engine/versions/create) available for the user or keep a selected few(I see that we currently only support some options)? ", "After looking in to this I believe this is actually already possible @Amrit-Mann, if you look at the options: for `model` and `version` the take a json following:\r\n- model, https://cloud.google.com/ml-engine/reference/rest/v1/projects.models\r\n- version , https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions \r\n\r\nwhere in you have the options to specify the region that you like to deploy to. \r\n\r\n> @zijianjoy should we aim for making [all arguments](https://cloud.google.com/sdk/gcloud/reference/ml-engine/versions/create) available for the user or keep a selected few(I see that we currently only support some options)?\r\n\r\nAll available options should already be possible to set. Let me know if you still has some issues @Amrit-Mann ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Thanks @NikeNano for the research! Since you can use ML-engine REST API, looks like you should be able to call this endpoint to specify regions.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-03-15T14:37:51"
"2022-03-03T00:05:29"
null
NONE
null
Hi, I am using the ml-engine component to deploy a ML model to AI Platform Prediction (from a Kubeflow deployment in AI Platform Pipelines), however there is no available parameter to deploy the model into a particular region. This results in the model deployment defaulting to the us-central1 region. Is there any way to deploy a model using this component to a specific region? Any help on this will be much appreciated!. Thanks. Link to component: https://github.com/kubeflow/pipelines/tree/master/components/gcp/ml_engine/deploy **What steps did you take:** 1 - Trained a model using AI Platform Pipelines 2 - Saved this model to Google Cloud Storage as a .pkl file 3 - Tried to use the deploy component to deploy this model to AI Platform Prediction, however **What happened:** There is no region parameter for this deploy component hence the deployment happens in the default region (us-central1) **What did you expect to happen:** Be able to specify the region parameter in the input for this component. (Since we can do this when deploying the model from either gcloud commands/GCP Console **Anything else you would like to add:** /kind feature /kind component
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5304/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5303
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5303/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5303/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5303/events
https://github.com/kubeflow/pipelines/issues/5303
831,795,881
MDU6SXNzdWU4MzE3OTU4ODE=
5,303
Kubeflow / AI Platform Pipelines runtime context missing when output is taken from cache
{ "login": "SaschaHeyer", "id": 1991664, "node_id": "MDQ6VXNlcjE5OTE2NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1991664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaschaHeyer", "html_url": "https://github.com/SaschaHeyer", "followers_url": "https://api.github.com/users/SaschaHeyer/followers", "following_url": "https://api.github.com/users/SaschaHeyer/following{/other_user}", "gists_url": "https://api.github.com/users/SaschaHeyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaschaHeyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaschaHeyer/subscriptions", "organizations_url": "https://api.github.com/users/SaschaHeyer/orgs", "repos_url": "https://api.github.com/users/SaschaHeyer/repos", "events_url": "https://api.github.com/users/SaschaHeyer/events{/privacy}", "received_events_url": "https://api.github.com/users/SaschaHeyer/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @SaschaHeyer, thank you for the report!\n\nWhat is your TFX version? KFP cache shouldn't run on TFX pipelines. This is a bug.", "Hi @Bobgy \r\nissue can be reproduced with 0.27.0 and 0.28.0", "@SaschaHeyer can you provide a minimum reproducible TFX pipeline with a requirements.txt for all dependencies versions?\r\nI tried TFX 0.28.0 with KFP 1.5.0-rc.0 (which should be very similar to 1.4.1 you tested on), and cannot reproduce this problem.", "Hi @Bobgy \r\nhere is the example you can use https://github.com/SaschaHeyer/Sentiment-Analysis-TFX", "Thank you! I will try it out today", "Hi @SaschaHeyer, sorry I do not know how to compile and run your pipeline.\r\nCan you document that?", "An important thing to verify is whether the following annotation exists on your pods:\r\n![image](https://user-images.githubusercontent.com/4957653/115177640-379d6300-a102-11eb-8d61-b756b431b098.png)\r\n\r\n`pipelines.kubeflow.org/pipeline-sdk-type: tfx`", "@Bobgy I have the same issue, and I have the annotation `pipelines.kubeflow.org/pipeline-sdk-type: tfx` on my pods.\r\nCould you explain how this annotation affect if it exists?", "@judyliou sorry for the late reply, the annotation is used to distinguish tfx pipelines from KFP pipelines.\r\nFor TFX pipelines, they should not be cached by KFP cache controller, because TFX come with their first party caching feature. This bug is caused by KFP caching TFX pipelines.\r\n\r\nCan you confirm your KFP backend is upgraded to 1.5.0+? The fix was released in 1.5.0, see https://github.com/kubeflow/pipelines/commit/76cad52d3b405f44b987e9e3a9d27a40d2b0c0d5.", "Ohh, so that explains this original issue, KFP backend is 1.4.1 and the fix was only released in 1.5.0", "Google Marketplace does not offer 1.5.0 yet, only 1.4.1. Is there a workaround to bring back visualisations (as described in [this ticket](https://github.com/kubeflow/pipelines/issues/5302), which leads here)?", "@axeltidemann one workaround is to upgrade cache server after install \n\n```\n# 1. connect to cluster kubectl\n\nkubectl edit deployment cache-server -n namespace\n\n# edit the image to gcr.io/ml-pipeline/cache-server:1.5.0\n```", "@Bobgy That would not work. Google Marketplace does not have any container images with a version 1.5.0 tag, you can check that [here](https://console.cloud.google.com/gcr/images/cloud-marketplace/GLOBAL/google-cloud-ai-platform/kubeflow-pipelines/cacheserver?gcrImageListsize=30). ", "@aneeshc-c you are right, I fixed the instruction to use oss image repo instead", "That did not bring back visualisations, @Bobgy . Any other suggestions?", "@axeltidemann can you report your ML Metadata tab screenshot after the fix? (And it should be a new run too)", "Sure, @Bobgy. You can tell that the Outputs are wrong, it should be Type **ExampleStatistics**.\r\n![Screenshot 2021-06-01 at 08 58 45](https://user-images.githubusercontent.com/6352118/120280202-aea84880-c2b7-11eb-8c46-e79eb9aa6ac5.png)\r\n", "@axeltidemann thanks for the extra info, that reminded me https://github.com/kubeflow/pipelines/pull/5364 fix is also required. (the metadata you showed are logged from KFP metadata-writer, but we should show metadata of native TFX pipelines).\r\n\r\nRefer to https://www.kubeflow.org/docs/components/pipelines/installation/overview/#choosing-an-installation-option, I'd suggest deploying Kubeflow Pipelines standalone 1.6.0 to get all the fixes, before GCP AI Platform Pipelines has an update. 
Feature diff between KFP Standalone and GCP AI Platform Pipelines is very minimal, you only get extra features from KFP standalone, because of the flexibility.", "Thank you for that suggestion, @Bobgy. We are a bit hesitant to not use the Google Marketplace one. But thanks for letting us know we should wait for 1.6.0.", "I'm curious why, but I'm working on blockers to release the latest version. Rough ETA would be 2 weeks", "@Bobgy roughly 2 weeks to get 1.6.0 on Google Marketplace? That is great!", "@Bobgy We also have the standalone pipeline installation through Google Cloud Marketplace, now I'm wondering what's the proper upgrade strategy? We have had 3 clusters so far, `0.4`, `1.0` and `1.4`, and everytime `upgrade` for us has been:\r\n* Go through the market place installation\r\n* Create a new cluster and install the newer version \r\n* Try to deprecate the older one and migrate pipelines into the newer one\r\n\r\n1. Is there anyway to `upgrade` instead of `fresh re-install` from the Marketplace package?\r\n2. Considering there's no way to `upgrade` from Market place, do you recommend us follow the manual upgrade docs [here](https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/#upgrading-kubeflow-pipelines) for gcp market place installations or the same `install and migrate` strategy?\r\n", "@AlirezaSadeghi did you read the docs: https://www.kubeflow.org/docs/distributions/gke/pipelines/upgrade/?", "Regarding this issue, I also just learned that tfx templates may override to use `tfx-template` as label value: https://github.com/tensorflow/tfx/blob/985105d58d04292163fd1e25d51a319d32448b9a/tfx/experimental/templates/penguin/kubeflow_runner.py#L66.\r\n\r\nSo KFP should match loosely, any label value with the word tfx should be ignored by KFP cacher and metadata-writer." ]
"2021-03-15T13:10:14"
"2021-07-14T05:24:59"
"2021-07-14T05:24:59"
NONE
null
### What steps did you take: 1. deploy pipeline with one component 2. run pipeline with one component (👍 works) 3. add another component 4. run the pipeline (this time the output is taken from cache) (👎 fails) ### What happened: The pipeline runs into the following error ``` Traceback (most recent call last): File "/opt/conda/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/opt/conda/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.7/site-packages/tfx/orchestration/kubeflow/container_entrypoint.py", line 360, in <module> main() File "/opt/conda/lib/python3.7/site-packages/tfx/orchestration/kubeflow/container_entrypoint.py", line 353, in main execution_info = launcher.launch() File "/opt/conda/lib/python3.7/site-packages/tfx/orchestration/launcher/base_component_launcher.py", line 198, in launch self._exec_properties) File "/opt/conda/lib/python3.7/site-packages/tfx/orchestration/launcher/base_component_launcher.py", line 167, in _run_driver component_info=self._component_info) File "/opt/conda/lib/python3.7/site-packages/tfx/dsl/components/base/base_driver.py", line 270, in pre_execution driver_args, pipeline_info) File "/opt/conda/lib/python3.7/site-packages/tfx/dsl/components/base/base_driver.py", line 158, in resolve_input_artifacts producer_component_id=input_channel.producer_component_id) File "/opt/conda/lib/python3.7/site-packages/tfx/orchestration/metadata.py", line 948, in search_artifacts pipeline_info) RuntimeError: Pipeline run context for PipelineInfo(pipeline_name: sentiment4, pipeline_root: gs://sascha-playground-doit-kubeflowpipelines-default/sentiment4, run_id: sentiment4-qnknl) does not exist ``` Assume the second component doesn't find the cached data because the component did not exist in the first run. First run: <img width="1216" alt="1run" src="https://user-images.githubusercontent.com/1991664/111158263-1e9c1200-8598-11eb-969f-370aec9c4be2.png"> Second run with additional component <img width="1216" alt="2run" src="https://user-images.githubusercontent.com/1991664/111158281-22c82f80-8598-11eb-8e2a-ec3b070163f0.png"> ### What did you expect to happen: Pipeline run completes without errors ### Environment: AI Platform Pipelines How did you deploy Kubeflow Pipelines (KFP)? AI Platform Pipelines KFP version: https://github.com/kubeflow/pipelines/commit/d79071c0bef19442483abc101769a0d893e72f42 KFP SDK version: no pip in AI Platform Pipelines ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5303/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5302
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5302/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5302/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5302/events
https://github.com/kubeflow/pipelines/issues/5302
831,792,710
MDU6SXNzdWU4MzE3OTI3MTA=
5,302
Kubeflow / AI Platform Pipelines component visualization missing
{ "login": "SaschaHeyer", "id": 1991664, "node_id": "MDQ6VXNlcjE5OTE2NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1991664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaschaHeyer", "html_url": "https://github.com/SaschaHeyer", "followers_url": "https://api.github.com/users/SaschaHeyer/followers", "following_url": "https://api.github.com/users/SaschaHeyer/following{/other_user}", "gists_url": "https://api.github.com/users/SaschaHeyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaschaHeyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaschaHeyer/subscriptions", "organizations_url": "https://api.github.com/users/SaschaHeyer/orgs", "repos_url": "https://api.github.com/users/SaschaHeyer/repos", "events_url": "https://api.github.com/users/SaschaHeyer/events{/privacy}", "received_events_url": "https://api.github.com/users/SaschaHeyer/received_events", "type": "User", "site_admin": false }
[ { "id": 930619511, "node_id": "MDU6TGFiZWw5MzA2MTk1MTE=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p0", "name": "priority/p0", "color": "db1203", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "Duplicate of #5303", "This has the same root cause, so I am closing it" ]
"2021-03-15T13:06:45"
"2021-03-27T00:58:41"
"2021-03-27T00:58:41"
NONE
null
### What steps did you take: 1. run the pipeline (visualization is rendered) 2. run the pipeline again (visualization is missing) ### What happened: On the second run, the StatisticsGen component has missing static HTML and metadata. This issue can be reproduced only if the run output is taken from the cache. Runs not taken from cache work as expected. I assume this is an issue related to the cache. <img width="1211" alt="1" src="https://user-images.githubusercontent.com/1991664/111157846-a59cba80-8597-11eb-9d35-b95f77582e09.png"> <img width="1201" alt="11" src="https://user-images.githubusercontent.com/1991664/111157853-a6cde780-8597-11eb-99d0-66fdbac7c6b4.png"> ### What did you expect to happen: The visualization for the TFX StatisticsGen component is rendered ### Environment: Google AI Platform Pipelines How did you deploy Kubeflow Pipelines (KFP)? Using Google AI Platform Pipelines KFP version: https://github.com/kubeflow/pipelines/commit/d79071c0bef19442483abc101769a0d893e72f42 KFP SDK version: no pip in AI Platform Pipelines ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5302/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5296
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5296/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5296/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5296/events
https://github.com/kubeflow/pipelines/issues/5296
830,890,815
MDU6SXNzdWU4MzA4OTA4MTU=
5,296
[sdk] KFP SDK fail to use argo CLI >=2.12 to lint workflows
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @Ark-kun @chensun " ]
"2021-03-13T12:41:59"
"2021-03-19T04:10:17"
"2021-03-19T04:10:17"
CONTRIBUTOR
null
### What happened: After https://github.com/kubeflow/pipelines/pull/5266, I noticed in tests, the warning > Cannot validate the compiled workflow. Found the argo program in PATH, but it's not usable. argo v2.4.3 should work. ### What did you expect to happen: KFP SDK can use latest argo CLI to validate workflows locally. ### Environment: <!-- Please fill in those that seem relevant. --> KFP SDK version: 1.4.0 ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> /area sdk <!-- // /area frontend // /area backend // /area testing // /area engprod --> The root cause seems to be that, KFP SDK tries to lint empty input using argo CLI to verify if argo CLI works. However, the default behavior of Argo CLI changed, it gives an error when validating empty string. We should update to validate an empty workflow instead. See https://github.com/kubeflow/pipelines/blob/21e3ef8311308c748c0b1656a5f8a06df0ea3045/sdk/python/kfp/compiler/compiler.py#L1094
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5296/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5295
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5295/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5295/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5295/events
https://github.com/kubeflow/pipelines/issues/5295
830,774,598
MDU6SXNzdWU4MzA3NzQ1OTg=
5,295
Feature Request: Enable custom runtime parameters from user transforms
{ "login": "aoen", "id": 1592778, "node_id": "MDQ6VXNlcjE1OTI3Nzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1592778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aoen", "html_url": "https://github.com/aoen", "followers_url": "https://api.github.com/users/aoen/followers", "following_url": "https://api.github.com/users/aoen/following{/other_user}", "gists_url": "https://api.github.com/users/aoen/gists{/gist_id}", "starred_url": "https://api.github.com/users/aoen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aoen/subscriptions", "organizations_url": "https://api.github.com/users/aoen/orgs", "repos_url": "https://api.github.com/users/aoen/repos", "events_url": "https://api.github.com/users/aoen/events{/privacy}", "received_events_url": "https://api.github.com/users/aoen/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "We currently have the following support for time formatting: https://github.com/kubeflow/pipelines/blob/0795597562e076437a21745e524b5c960b1edb68/backend/src/crd/samples/scheduledworkflow/parameterized.yaml#L29-L33.\r\n\r\nDoes it fit your use case?", "Hi @zijianjoy tried exploring the documentation. \r\n\r\nLet me provide a deeper detail:\r\n- We are looking to parametrize our TFX pipeline to pick up ScheduledTime so that it runs on the correct partition for that run. \r\n- For that, we need some way to extract days since epoch (which is what TFX defines as `span` as seen [here](https://github.com/tensorflow/tfx/blob/v0.28.0/tfx/components/example_gen/utils.py#L240-L242) )\r\n- In addition, we need to parse it as an integer which is problematic, since when specified it's a string on TFX RuntimParameter. This is due to TFX needing an integer [here](https://github.com/tensorflow/tfx/blob/v0.28.0/tfx/proto/range_config.proto#L21)\r\n\r\n\r\n\r\n\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-13T01:59:16"
"2022-04-18T17:27:53"
"2022-04-18T17:27:53"
NONE
null
We are trying to use KFP's [ScheduledTime] parameter in TFX Pipelines runtime parameters. [We have discussed](https://github.com/tensorflow/tfx/issues/3354) with @1025KB from the TFX project and he suggested we file this ticket to enable this functionality on the KFP side by allowing users to be able to specify transforms on KFP runtime parameters like [ScheduledTime], and have these transformed values accessible by the components in the running pipeline. e.g. a user could specify a transform like iso_date_to_tfx_span("[ScheduledTime]") which would convert the scheduled time in ISO format to an integer representing the corresponding TFX span using the TFX utility libraries. This is an example interface, and will leave the details for the interface and the actual parameter storage and retrieval interface to TFX/KFP folks to suss out. cc @casassg
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5295/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5292
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5292/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5292/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5292/events
https://github.com/kubeflow/pipelines/issues/5292
830,284,073
MDU6SXNzdWU4MzAyODQwNzM=
5,292
Feature request: don't start run if obligatory parameters are missing
{ "login": "alvercau", "id": 24573258, "node_id": "MDQ6VXNlcjI0NTczMjU4", "avatar_url": "https://avatars.githubusercontent.com/u/24573258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvercau", "html_url": "https://github.com/alvercau", "followers_url": "https://api.github.com/users/alvercau/followers", "following_url": "https://api.github.com/users/alvercau/following{/other_user}", "gists_url": "https://api.github.com/users/alvercau/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvercau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvercau/subscriptions", "organizations_url": "https://api.github.com/users/alvercau/orgs", "repos_url": "https://api.github.com/users/alvercau/repos", "events_url": "https://api.github.com/users/alvercau/events{/privacy}", "received_events_url": "https://api.github.com/users/alvercau/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-12T16:41:48"
"2022-04-18T17:28:01"
"2022-04-18T17:28:01"
NONE
null
/kind feature **Why you need this feature**: Currently it is possible to start a run even if some parameters that are obligatory in a step, are missing. It would be easier to have the option to make some parameters obligatory to begin with (just like pipeline, pipeline version, run name, experiment), making it impossible to start a run if these obligatory parameters are not passed. This would avoid creating a whole container that simply immediately raises an error because a parameter is missing. **Describe the solution you'd like:** Have an option to make a parameter obligatory before starting a run.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5292/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5288
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5288/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5288/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5288/events
https://github.com/kubeflow/pipelines/issues/5288
829,727,011
MDU6SXNzdWU4Mjk3MjcwMTE=
5,288
[Bug] KeyValueStore fails to check the cached data with new data
{ "login": "lynnmatrix", "id": 174119, "node_id": "MDQ6VXNlcjE3NDExOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/174119?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lynnmatrix", "html_url": "https://github.com/lynnmatrix", "followers_url": "https://api.github.com/users/lynnmatrix/followers", "following_url": "https://api.github.com/users/lynnmatrix/following{/other_user}", "gists_url": "https://api.github.com/users/lynnmatrix/gists{/gist_id}", "starred_url": "https://api.github.com/users/lynnmatrix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lynnmatrix/subscriptions", "organizations_url": "https://api.github.com/users/lynnmatrix/orgs", "repos_url": "https://api.github.com/users/lynnmatrix/repos", "events_url": "https://api.github.com/users/lynnmatrix/events{/privacy}", "received_events_url": "https://api.github.com/users/lynnmatrix/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "cc @Ark-kun ", "Thank you @lynnmatrix ! This bug makes sense and would you like to contribute a PR for the fix?", "@zijianjoy, My colleague @michaelliqx has fixed it in PR #5290.", "Thank you for the discovery and the fix.\r\nI wonder how this worked before....?\r\n\r\nUpdate: I see that this code was somewhat unused - it was only there to check the cache value being overwritten. I guess with the bug it was truncating the file, getting 0 for the `old_data` and then overwriting the file with the new value.", "This bug raises exception: `TypeError: write_bytes() missing 1 required positional argument: 'data'`.\r\n\r\nYou are right, this code was unused.\r\nKeyValueStore is only used in ComponentStore._refresh_component_cache, and it will check the existence of the key before writing, so there is no chance to overwrite." ]
"2021-03-12T03:37:16"
"2021-03-24T06:08:06"
"2021-03-24T00:03:44"
MEMBER
null
Typo bug. ``` def store_value_bytes(self, key: str, data: bytes) -> str: ... if cache_value_file_path.exists(): old_data = cache_value_file_path.write_bytes() ... ``` should be: ``` def store_value_bytes(self, key: str, data: bytes) -> str: ... if cache_value_file_path.exists(): old_data = cache_value_file_path.read_bytes() ... ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5288/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5285
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5285/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5285/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5285/events
https://github.com/kubeflow/pipelines/issues/5285
829,102,215
MDU6SXNzdWU4MjkxMDIyMTU=
5,285
Orchestration - Flakiness in small samples when using PNS executor
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1499519734, "node_id": "MDU6TGFiZWwxNDk5NTE5NzM0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/upstream_issue", "name": "upstream_issue", "color": "006b75", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "neuromage", "id": 206520, "node_id": "MDQ6VXNlcjIwNjUyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neuromage", "html_url": "https://github.com/neuromage", "followers_url": "https://api.github.com/users/neuromage/followers", "following_url": "https://api.github.com/users/neuromage/following{/other_user}", "gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}", "starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neuromage/subscriptions", "organizations_url": "https://api.github.com/users/neuromage/orgs", "repos_url": "https://api.github.com/users/neuromage/repos", "events_url": "https://api.github.com/users/neuromage/events{/privacy}", "received_events_url": "https://api.github.com/users/neuromage/received_events", "type": "User", "site_admin": false }
[ { "login": "neuromage", "id": 206520, "node_id": "MDQ6VXNlcjIwNjUyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neuromage", "html_url": "https://github.com/neuromage", "followers_url": "https://api.github.com/users/neuromage/followers", "following_url": "https://api.github.com/users/neuromage/following{/other_user}", "gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}", "starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neuromage/subscriptions", "organizations_url": "https://api.github.com/users/neuromage/orgs", "repos_url": "https://api.github.com/users/neuromage/repos", "events_url": "https://api.github.com/users/neuromage/events{/privacy}", "received_events_url": "https://api.github.com/users/neuromage/received_events", "type": "User", "site_admin": false }, { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }, { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", 
"events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @chensun @neuromage \r\n/cc @Ark-kun @capri-xiyue \r\n\r\nI think we need to discuss this problem, from KFP side, we could automatically mount emptyDir for users.", "/cc @jessesuen @alexec\r\nDo you have any suggestions? Is my above understanding of the workarounds accurate?", "Hmm, strange, I'm not seeing flakiness anymore\r\n\r\nI'll keep observing this problem", "I see postsubmit flakyness. https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-integration-test/1371547567694811136", "I'll revert back to docker as default for current release, but we should evaluate possibility of mounting emptyDir volumes for artifact paths for users.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "We now recommend emissary executor instead: https://github.com/kubeflow/pipelines/issues/1654#issuecomment-903224028" ]
"2021-03-11T12:22:54"
"2021-08-22T07:03:03"
"2021-08-22T07:03:03"
CONTRIBUTOR
null
In https://github.com/kubeflow/pipelines/pull/5273, I switched to PNS executor by default. After that, it seems lightweight component sample fail more frequently than before. (It failed 3 times consecutively in an example I found, but it seems to me that the last two failures should be fixed by https://github.com/kubeflow/pipelines/pull/5284, we can observe the actual flakiness rate after the change) Symptom, pipeline components that run too fast fail with: > failed to save outputs: could not chroot into main for artifact collection: container may have exited too quickly https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/4147/kubeflow-pipeline-sample-test/1369916972707352576#1:build-log.txt%3A5681 Root cause seems to be: https://github.com/argoproj/argo-workflows/issues/1256#issuecomment-481471276 And workarounds can be: * (hacky) let the main container sleep for a while * (stable fix) mount the artifacts on an emptyDir
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5285/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5283
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5283/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5283/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5283/events
https://github.com/kubeflow/pipelines/issues/5283
829,080,364
MDU6SXNzdWU4MjkwODAzNjQ=
5,283
[Testing] sample-test failing with unknown shorthand flag "-w"
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "> Examples:\r\n> \\# Print the logs of a workflow:\r\n> \r\n> argo logs my-wf" ]
"2021-03-11T11:53:09"
"2021-03-12T03:39:24"
"2021-03-12T03:39:24"
CONTRIBUTOR
null
> sample-test-bpnv9-4034248067: unknown shorthand flag: 'w' in -w See https://storage.googleapis.com/oss-prow/pr-logs/pull/kubeflow_pipelines/4147/kubeflow-pipeline-sample-test/1369916972707352576/build-log.txt in https://github.com/kubeflow/pipelines/pull/4147. It seems that after a certain version, the argo CLI no longer has the `-w` flag; it accepts the workflow name directly as a positional argument. (Note: this code path is only triggered when a pipeline fails, so once this issue is fixed there should be a detailed error log whenever pipeline samples fail.) /cc @chensun @zijianjoy @neuromage /assign I'll try to fix this.
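As an illustration only (not the actual test-infra change), a small Python helper could try the newer positional syntax first and fall back to the legacy `-w` flag; the workflow name and namespace below are placeholders.

```python
# Illustration only (not the actual sample-test change): try the newer
# positional syntax first, then fall back to the legacy `-w` flag. The
# workflow name and namespace below are placeholders.
import subprocess


def get_workflow_logs(workflow_name: str, namespace: str = 'kubeflow') -> str:
    """Fetch Argo workflow logs, handling both old and new CLI syntaxes."""
    # Newer argo CLI: `argo logs WORKFLOW`
    result = subprocess.run(
        ['argo', 'logs', workflow_name, '-n', namespace],
        capture_output=True, text=True)
    if result.returncode == 0:
        return result.stdout
    # Older argo CLI: `argo logs -w WORKFLOW`
    result = subprocess.run(
        ['argo', 'logs', '-w', workflow_name, '-n', namespace],
        capture_output=True, text=True)
    return result.stdout


if __name__ == '__main__':
    print(get_workflow_logs('sample-test-bpnv9'))
```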
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5283/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5282
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5282/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5282/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5282/events
https://github.com/kubeflow/pipelines/issues/5282
828,875,083
MDU6SXNzdWU4Mjg4NzUwODM=
5,282
Unexpected behaviour given multiple optional inputs in component YAML
{ "login": "amyxst", "id": 5170374, "node_id": "MDQ6VXNlcjUxNzAzNzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5170374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyxst", "html_url": "https://github.com/amyxst", "followers_url": "https://api.github.com/users/amyxst/followers", "following_url": "https://api.github.com/users/amyxst/following{/other_user}", "gists_url": "https://api.github.com/users/amyxst/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyxst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyxst/subscriptions", "organizations_url": "https://api.github.com/users/amyxst/orgs", "repos_url": "https://api.github.com/users/amyxst/repos", "events_url": "https://api.github.com/users/amyxst/events{/privacy}", "received_events_url": "https://api.github.com/users/amyxst/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1493369148, "node_id": "MDU6TGFiZWwxNDkzMzY5MTQ4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/status/triaged", "name": "status/triaged", "color": "18f440", "default": false, "description": "Whether the issue has been explicitly triaged" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "@Bobgy is it possible to confirm what the expected behavior is with optional params?", "/assign @chensun @Ark-kun ", "Thank you for the detailed issue report.\r\n\r\nThe behavior is by design. There is no special/different behavior for multiple optional arguments. What you see is mostly the way you program parses the command-line.\r\n\r\nThe question is a bit unclear regarding what behavior you are expecting any why.\r\n\r\n`optional: true` means that the pipeline author may skip passing any argument to the optional input.\r\nIn current implementation, the input placeholder can be replaced with *nothing* if no argument was passed, but TBH, an attempt to resolve a placeholder for a missing input is more like an error. We should probably add a warning for that case.\r\n\r\nCommand-line programs can be made to handle such CLI arguments.\r\nIf your program cannot handle them, you can either change the program or change the specification.\r\n\r\n>set a default value of null \r\n\r\nThe container component specification describes command-line programs. Command-lines have no concept of \"null\". Command line arguments are strings. This is an OS limitation. Additionally, when parsing the YAML we treat `default: null` same as unspecified.\r\n\r\nYou could set default value to an empty string BTW, if that works for you.\r\n\r\n>optional params get passed as flags instead of as omitted arguments as expected\r\n>assigns 40 to A\r\n>passes in --A as a flag;\r\n>passed both arguments in as flags instead of arguments\r\n\r\nLet's try to understand what KFP does. KFP does not \"assign\" anything to the variables of your python program. Nor does it quite \"pass\" things to programs. It does not look into your program code. What KFP does is execute command-line programs after replacing the placeholders. (You can easily check the command-line arguments of your program in the \"Pod\" tab. You can also debug programs by using the `command: [\"bash\", \"-c\", 'echo \"$0\" \"$@\"']`).\r\nCommand-lines do not have concepts like \"assigning\", \"passing\" or \"flags\". Command line consists of executable name and an array of arguments (which are null-terminated C strings).\r\nWhen you omit arguments for optional inputs, you get a pretty expected command-line: `python3 /src/myprint.py --param1 --param2`. Then you program interprets that the way you've observed.\r\n\r\nAn important thing to understand is that the interface is the command-line. An array of strings. Your program must be able to interpret its command-line. 
If you want your program to receive special values like `None`, `-Infinity` , `<built-in function>` or `dict`, you need to think about how you're going to represent them on the command-line.\r\n\r\n### Solution\r\n\r\nIf you want to add/remove parts of command-line based on the presence of argument for an optional input, just use the `if` placeholder:\r\n```yaml\r\nname: myprint\r\ninputs:\r\n- {name: A, optional: true, type: String}\r\n- {name: B, optional: true, type: String}\r\nimplementation:\r\n container:\r\n image: gcr.io/.../mycomp\r\n command: [python3, /src/myprint.py]\r\n args:\r\n - if:\r\n cond: {isPresent: A}\r\n then: [--param1, {inputValue: A}]\r\n - if:\r\n cond: {isPresent: B}\r\n then: [--param2, {inputValue: B}]\r\n```\r\n\r\nPlease check how components generated by `kfp.components.create_component_from_func` look like: https://github.com/kubeflow/pipelines/blob/a80421191db917322ff312626409526b0a76aa68/components/json/Build_list/component.yaml#L78", "@Ark-kun Thank you for your detailed response! I just want to clarify,\r\n\r\n## 1. \r\nI think the main issue that I've seen is with regards to what you mention about:\r\n> Additionally, when parsing the YAML we treat default: null same as unspecified.\r\n\r\nIf we have a program `myprint.py` that can optionally accept an option `--param1`, a user running the program in the normal course of action can choose to omit specifying a value for the option by omitting it from the program execution altogther: running `python a.py` without `--param1=some_arg`.\r\n\r\nBut in the YAML spec, there doesn't appear to be a way to have KFP _not_ to pass in `--param1`, if the user doesn't supply the argument for the component in the pipeline. e.g. Given the YAML:\r\n```\r\nname: myprint\r\ninputs:\r\n- {name: A, optional: true, type: String}\r\nimplementation:\r\n container:\r\n image: gcr.io/.../mycomp\r\n command: [python3, /src/myprint.py]\r\n args:\r\n - --param1\r\n - {inputValue: A}\r\n```\r\nOmitting `param1` for the component in the pipeline:\r\n```\r\n@kfp.dsl.pipeline()\r\ndef pipeline():\r\n first_add_task = myprint_op()\r\n```\r\nThis seems to produce the equivalent of calling (command line receives 3 tokens):\r\n`python myprint.py --param1`\r\n\r\nInstead of what is expected (command line receives 2 tokens):\r\n`python myprint.py`\r\n\r\nThis behaviour seems a little incongruent to me. Especially in the case of when we specify param1 to take on a **non-null** default value:\r\n`{name: A, optional: true, default: \"hello\", type: String}`\r\n\r\nIn this case, when we omit param1 from the pipeline, the default gets passed as expected:\r\n> python myprint.py --param1=hello\r\n\r\nBut if we specify a null default value in the YAML spec:\r\n`{name: A, optional: true, default: ~, type: String}`\r\n\r\nTo stay consistent with the non-null case, it _should_ yield:\r\n`python myprint.py`\r\n\r\nBut instead, we get:\r\n`python myprint.py --param1`\r\n\r\nFrom the perspective of the command line parser, the params take on completely different meanings. The former being that since no `--param1` was found, the parser will assign it some value. In other words, param1 interpreted as a **key** in a key-value pair. 
But in the latter case, param1 is interpreted as a boolean flag.\r\n\r\n**I just want to confirm if this is indeed the expected behaviour?** If so, that in order to omit an option from being passed in altogether if it's optional, that I should be employing the conditionals in the YAML component spec as described?\r\n\r\n## 2.\r\n\r\nLastly, I want to double check the version of KFP that supports those conditional statements in the YAML.\r\n\r\nWhenever I have specified them, I'm unable to load the component:\r\n```Error: Structure \"OrderedDict(['container', OrderedDict([...]), ('args', OrderedDict([('if', None), ('cond', OrderedDict([('isPresent', 'A')])), ('then', ['--param1', OrderedDict([('inputValue', 'A')])])])]) is incompatible with type \"typing.Union[kfp.components._structures.ContainerImplementation, kfp.components._structures.GraphImplementation, NoneType]\" - none of the types in Union are compatible.```", "Hey @amyxst, did you check https://github.com/kubeflow/pipelines/blob/master/samples/test/placeholder_if.py?\n\nCurrently, KFP doesn't really support runtime if placeholder, so suggest wrapping your command call in bash or python and add the conditional there", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n", "@amyxst \r\n\r\n>I don't think TC interface is deterministic, because it looks like mouse and focus based multi modal.\r\n\r\nI've listed an example that does just that using the `if` placeholder.\r\n\r\n```\r\nname: myprint\r\ninputs:\r\n- {name: A, optional: true, type: String}\r\n- {name: B, optional: true, type: String}\r\nimplementation:\r\n container:\r\n image: gcr.io/.../mycomp\r\n command: [python3, /src/myprint.py]\r\n args:\r\n - if:\r\n cond: {isPresent: A}\r\n then: [--param1, {inputValue: A}]\r\n - if:\r\n cond: {isPresent: B}\r\n then: [--param2, {inputValue: B}]\r\n```\r\n\r\n>I just want to confirm if this is indeed the expected behaviour?\r\n\r\nYes. In your case, the command line is `[\"python3\", \"/src/myprint.py\", \"--param1\", A]. There are 4 elements in the list. Each one is independent. `\"--param1\"` is completely independent from `A`.\r\n\r\nIf you want more complex behavior, use the `if` placeholder created e\\specifically for this case.\r\n\r\n>Lastly, I want to double check the version of KFP that supports those conditional statements in the YAML.\r\n\r\nYes.\r\n\r\n>Lastly, I want to double check the version of KFP that supports those conditional statements in the YAML.\r\n\r\nYour YAML data likely has a formatting issue. Please post the full component YAML and the full error message.\r\n\r\n" ]
"2021-03-11T07:36:42"
"2022-03-13T01:10:31"
"2022-03-03T06:05:32"
NONE
null
### What steps did you take: When an input param is given as `optional: True` in the component YAML, and the Python program uses a CLI library such as argparse or click to accept options, the optional params get passed as **flags** instead of as **omitted arguments** as expected. Given a `component.yaml` file such as: ``` name: myprint inputs: - {name: A, optional: true, type: String} - {name: B, optional: true, type: String} implementation: container: image: gcr.io/.../mycomp command: [python3, /src/myprint.py] args: [ --param1, {inputValue: A}, --param2, {inputValue: B}, ] ``` And `myprint.py` that was built into the image above: ``` import argparse, json def myprint(param_1: str, param_2: str) -> float: print(json.dumps(locals(), indent=4)) parser = argparse.ArgumentParser() parser.add_argument('--param1', type=str, default="default1") parser.add_argument('--param2', type=str, default="default2") args = parser.parse_args() myprint(args.param1, args.param2) ``` ### What happened: **1) When we omit both arguments in the pipeline** (Using the component spec from above) ``` myprint_op = kfp.components.load_component_from_file("component.yaml") @dsl.pipeline() def myprint_pipeline(): first_add_task = myprint_op() # neither input `a` nor `b` are supplied to myprint_op run = client.create_run_from_pipeline_func(myprint_pipeline, arguments=arguments) ``` The run is compiled and submitted successfully. But we see from the output that both params were passed in as **flags** instead of **arguments**, and **worse, `--param1` got passed in as the _value_ of `--param1`.** ``` # console output { "param_1": "--param2", "param_2": null } ``` By indicating both inputs as optional in the YAML spec, it seemed to have instead passed both arguments in as flags instead of arguments. i.e. equivalent of: `python3 /src/myprint.py --param1 --param2` instead of `python3 /src/myprint.py` (which would set param1 and param2 __argument__ values to None, allowing argparse to assign default values.) **2. Programs with one optional argument** In cases where our `.py` program only has only one optional argument, say `--param1`, and described as: `{name: param1, optional: true, type: String}`. If we omit the param in the pipeline similar to before, the run compiles successfully. But during component runtime, we get the error: ``` Error: --param1 option requires an argument ``` **3. Adding defaults to optional inputs** It seems that it's not possible to set a default value of null for any inputs via the YAML file. There seems to be some incongruence between how non-null and null default values are treated: 1a. `{name: A, optional: false, default: 40}` - Pipeline compiles successfully when A is omitted, assigns 40 to A, as expected. 1b. `{name: A, optional: false, default: null}` - Pipeline fails to compile, complaining A is missing; but if it's in keeping with the previous case, the expected behaviour should be to assign null as the value of A 2a. `{name: A, optional: true, default: 40}` - Pipeline compiles successfully when A is omitted, and assigns 40 to A - this seems to be the exact same behaviour as in the case of 1a.? 2b. `{name: A, optional: true, default: null}` - Compiles successfully, but passes in --A as a flag; in this case it seems `{name: A, optional: true, default: null}` and `{name: A, optional: true}` behave in exactly in the same way. In general, there doesn't seem to be an agreement as to when default values are assigned when optional is set to true or false. 
And this is especially the case when we attempt to set `default: null` (or `default: ~`). ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? KFP version: https://github.com/kubeflow/pipelines/commit/743746b96e6efc502c33e1e529f2fe89ce09481c KFP SDK version: ``` kfp 1.4.0 kfp-pipeline-spec 0.1.6 kfp-server-api 1.4.1 ``` ### Anything else you would like to add: **All in all,** there seems to be two main ways of resolving this: 1. Change in the KFP library: Allow that when `optional: true`, and no default is set, do NOT send a flag to the component program - so that argparse/click within the component program can handle empty arguments properly. 2. With argparse/click library: Find a way to somehow have these CLI libraries parse an input as both a flag and an option. So that when KFP passes a flag into the component program, the CLI library can interpret the flag as if it's an option with a None value. Have dug around on this possibility, but this doesn't seem to be something that these libraries support. Finally a temporary workaround we've been using: 3. I've created a wrapper around the .py component, and used the wrapper component as the entrypoint instead to the component in the YAML spec. The wrapper effectively removes any flags it receives, before invoking the actual component. e.g. assuming we omitted param1 and param2, KFP now calls `myprint_wrapper.py` with `python3 /src/myprint_wrapper.py --param1 --param2`. The wrapper then removes the flags --param1 --param2, before invoking the actual component without those erroneous flags: `python3 /src/myprint.py` /kind bug /area sdk <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
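For completeness, the sketch below shows how the `if`/`isPresent` placeholder suggested in the discussion above keeps the flags off the command line when the optional inputs are omitted. The image and the inline `python3 -c` command are placeholders that simply echo the arguments the container receives.

```python
# Sketch of the `if`/`isPresent` placeholder approach from the discussion
# above. The image and the inline `python3 -c` command are placeholders that
# simply echo the arguments the container receives.
import kfp
from kfp import dsl

myprint_op = kfp.components.load_component_from_text('''
name: myprint
inputs:
- {name: A, optional: true, type: String}
- {name: B, optional: true, type: String}
implementation:
  container:
    image: python:3.7
    command: [python3, -c, 'import sys; print(sys.argv[1:])']
    args:
    - if:
        cond: {isPresent: A}
        then: [--param1, {inputValue: A}]
    - if:
        cond: {isPresent: B}
        then: [--param2, {inputValue: B}]
''')


@dsl.pipeline(name='optional-inputs-sketch')
def pipeline():
    myprint_op(a='hello')  # only "--param1 hello" ends up on the command line
    myprint_op()           # neither flag is added at all


if __name__ == '__main__':
    kfp.compiler.Compiler().compile(pipeline, 'optional_inputs_sketch.yaml')
```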
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5282/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5281
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5281/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5281/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5281/events
https://github.com/kubeflow/pipelines/issues/5281
828,797,430
MDU6SXNzdWU4Mjg3OTc0MzA=
5,281
Feature gap between `ContainerOp` and YAML component: hydra argument support
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "id": 1122445895, "node_id": "MDU6TGFiZWwxMTIyNDQ1ODk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/components", "name": "area/sdk/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Interesting. The hydra command-line format is different from the normal way of specifying command-line arguments that 99% of other command-line tools use (separating flag and value, instead of joining them with `=`).\r\n\r\nHowever, the `ComponentSpec`/`component.yaml` format supports such command lines via the `concat`: placeholders:\r\n\r\n```yaml\r\nname: TransformOp\r\ninputs:\r\n- {name: template_prj, type: String}\r\n- {name: env, type: String}\r\n- {name: project_api_key, type: String}\r\n- {name: data_api_key, type: String}\r\n- {name: model_id, type: String}\r\nimplementation:\r\n container:\r\n image: gcr.io/my-org/kubeflow-projects/causal_discovery\r\n args:\r\n - concat: [\"component=\", {inputValue: template_prj}, \"/transformer\"]\r\n - concat: [\"common=\", {inputValue: env}]\r\n - concat: [\"common.template_prj=\", {inputValue: template_prj}]\r\n - concat: [\"common.project_api_key=\", {inputValue: project_api_key}]\r\n - concat: [\"common.data_api_key=\", {inputValue: data_api_key}]\r\n - concat: [\"common.model_id=\", {inputValue: model_id}]\r\n```\r\n\r\n>This is because when we load and parse the YAML, common.project_api_key={inputValue is taken as a key of a dictionary, while project_api_key} is taken as the the value for the key, which is apparently wrong.\r\nSo we have a limitation here that a placeholder like {inputValue: project_api_key} must be on its own line (the list item).\r\n\r\nI'm not sure that's the case. `- common.project_api_key={inputValue: project_api_key}` is just wrong YAML syntax. An item should wither be a string or dict, but not string-o-dict. Using quoting prevents parts of the value from being parsed as dict by the YAML parser. `- \"common.project_api_key={inputValue: project_api_key}\"`. However this does not solve the user issue - a string is just a string. The placeholder won't be substituted. We decided not to invent a new placeholder language that needs to be parsed. We use the YAML language instead. Placeholders are clearly distinguished from the verbatim strings by their type: verbatim strings are strings while placeholders are maps. For a proper solution, see the above code.\r\n\r\nP.S. YAML looks nice and compact, but the syntax may be harder to learn. Fortunately, YAML is superset of JSON, so you can just use JSON to define the component:\r\n\r\n```json\r\n{\r\n \"name\": \"TransformOp\",\r\n \"inputs\": [\r\n {\"name\": \"template_prj\", \"type\": \"String\"},\r\n {\"name\": \"env\", \"type\": \"String\"},\r\n {\"name\": \"project_api_key\", \"type\": \"String\"},\r\n {\"name\": \"data_api_key\", \"type\": \"String\"},\r\n {\"name\": \"model_id\", \"type\": \"String\"}\r\n ],\r\n \"implementation\": {\r\n \"container\": {\r\n \"image\": \"gcr.io/my-org/kubeflow-projects/causal_discovery\",\r\n \"args\": [\r\n {\r\n \"concat\": [\r\n \"component=\",\r\n {\"inputValue\": \"template_prj\"},\r\n \"/transformer\"\r\n ]\r\n },\r\n {\r\n \"concat\": [\r\n \"common=\",\r\n {\"inputValue\": \"env\"}\r\n ]\r\n },\r\n {\r\n \"concat\": [\r\n \"common.template_prj=\",\r\n {\"inputValue\": \"template_prj\"}\r\n ]\r\n },\r\n {\r\n \"concat\": [\r\n \"common.project_api_key=\",\r\n {\"inputValue\": \"project_api_key\"}\r\n ]\r\n },\r\n {\r\n \"concat\": [\r\n \"common.data_api_key=\",\r\n {\"inputValue\": \"data_api_key\"}\r\n ]\r\n },\r\n {\r\n \"concat\": [\r\n \"common.model_id=\",\r\n {\"inputValue\": \"model_id\"}\r\n ]\r\n }\r\n ]\r\n }\r\n }\r\n}\r\n```", "The customer has verified that the `concat` solution works. 
I've created an issue to improve the `concat` placeholder documentation: https://github.com/kubeflow/pipelines/issues/5305" ]
"2021-03-11T06:13:45"
"2021-03-15T17:59:03"
"2021-03-15T17:57:21"
COLLABORATOR
null
Using `ContainerOp` to define a component, users can pass [hydra](https://hydra.cc/) arguments as shown below. ``` import kfp.dsl as dsl import kfp.gcp as gcp import kfp.compiler as compiler .... transform = dsl.ContainerOp( name="transform", image="gcr.io/{}/image:latest".format(PROJECT), arguments=[ "component={}/transformer".format(TEMPLATE_PRJ), "common={}".format(ENV), "common.template_prj={}".format(TEMPLATE_PRJ), "common.project_api_key={}".format(project_api_key), "common.data_api_key={}".format(data_api_key), "common.model_id={}".format(model_id), ], ) ``` However, this is not easily achievable via a YAML component. The following spec results in a compile error when loading the component YAML. ``` name: TransformOp inputs: - {name: project_api_key, type: String} - {name: data_api_key, type: String} - {name: model_id, type: String} implementation: container: image: gcr.io/%s/kubeflow-projects/causal_discovery args: - component=%s/transformer - common=%s - common.template_prj=%s - common.project_api_key={inputValue: project_api_key} - common.data_api_key={inputValue: data_api_key} - common.model_id={inputValue: model_id} ``` This is because when we load and parse the YAML, `common.project_api_key={inputValue` is taken as a key of a dictionary, while `project_api_key}` is taken as the value for the key, which is apparently wrong. So we have a limitation here that a placeholder like `{inputValue: project_api_key}` must be on its own line (the list item). /kind feature /area sdk
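As a hedged illustration of how the `concat` placeholder solution discussed above is consumed from the SDK, the snippet below loads a trimmed-down version of the component and compiles a pipeline; the image, the reduced input list, and the default values are placeholders.

```python
# A sketch of consuming the `concat` placeholder solution from the SDK. The
# image, the reduced input list, and the default values are placeholders; the
# container command just echoes the hydra-style "key=value" tokens it gets.
import kfp
from kfp import dsl

transform_op = kfp.components.load_component_from_text('''
name: TransformOp
inputs:
- {name: template_prj, type: String}
- {name: env, type: String}
- {name: model_id, type: String}
implementation:
  container:
    image: python:3.7
    command: [python3, -c, 'import sys; print(sys.argv[1:])']
    args:
    - concat: ["component=", {inputValue: template_prj}, "/transformer"]
    - concat: ["common=", {inputValue: env}]
    - concat: ["common.model_id=", {inputValue: model_id}]
''')


@dsl.pipeline(name='hydra-args-sketch')
def pipeline(template_prj: str = 'demo', env: str = 'dev', model_id: str = 'm1'):
    transform_op(template_prj=template_prj, env=env, model_id=model_id)


if __name__ == '__main__':
    kfp.compiler.Compiler().compile(pipeline, 'hydra_args_sketch.yaml')
```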
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5281/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5281/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5280
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5280/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5280/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5280/events
https://github.com/kubeflow/pipelines/issues/5280
828,767,334
MDU6SXNzdWU4Mjg3NjczMzQ=
5,280
Constructing dsl.ContainerOp instance directly in user code doesn't always raise the warning.
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @chensun \r\n/cc @Ark-kun ", "Hmm.\r\nInteresting. We had this issue before and @munagekar fixed it by changing the warning type from `DeprecationWarning` to `FutureWarning`. https://github.com/kubeflow/pipelines/pull/4658\r\n\r\nIt looks like that warning is still not show to the users even when we see it in tests.", "This is really weird. Unlike `DeprecationWarning`, `FutureWarning` should always be shown: https://docs.python.org/3/library/warnings.html#default-warning-filter\r\n\r\nI can reproduce the behavior. I also see that when the warning is shown it's only shown once (in a Jupyter session).", "> I also see that when the warning is shown it's only shown once (in a Jupyter session).\r\n\r\nThis is the usual behavior with python warnings. Warning in a loop get called once only.\r\n\r\n> This is really weird. Unlike DeprecationWarning, FutureWarning should always be shown\r\n\r\nhttps://docs.python.org/3/library/warnings.html#default-warning-filter\r\nThis is correct, however this is not a python issue.\r\n\r\nI have investigated this further, the reason the compiler doesn't produce the warning is because the code in the first case does not reach the warning. The attached screenshot explains it further.\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/a12e88d1da57b897a3a25b5c44540fc7d3c9a40e/sdk/python/kfp/dsl/_container_op.py#L1104-L1105\r\n\r\nThe core issue is that `__eq__` in `kfp.dsl._pipeline_param.PipelineParam` is incorrectly returning NamedTuple. `__eq__ `should return either True or False. A named tuple is assumed to be True.\r\nhttps://github.com/kubeflow/pipelines/blob/67afca4938c35e6b8be29e61df3883af11554220/sdk/python/kfp/dsl/_pipeline_param.py#L227-L228\r\n\r\n![Screen Shot 2021-03-13 at 2 11 15](https://user-images.githubusercontent.com/10258799/110974425-db784e00-83a1-11eb-9201-7e9420070c61.png)\r\n\r\n\r\n", "Thank you for clear explanation. We are facing this issue and want to solve it.\r\n\r\nLet me try to fix this issue, I'll create PR.", "I created PR https://github.com/kubeflow/pipelines/pull/5299", "> The core issue is that `__eq__` in `kfp.dsl._pipeline_param.PipelineParam` is incorrectly returning NamedTuple. `__eq__ `should return either True or False. A named tuple is assumed to be True.\r\n\r\nOh my! Thank you for the great investigation." ]
"2021-03-11T05:38:46"
"2021-03-30T09:56:18"
"2021-03-30T09:56:18"
COLLABORATOR
null
### What steps did you take: Compile the following pipeline ``` import kfp def _sample_component_op( name: str, run_id: str, sample_metric_value: float ) -> kfp.dsl.ContainerOp: arguments = [run_id, sample_metric_value] return kfp.dsl.ContainerOp( name=name, image='library/hello-world:latest', arguments=arguments, ) @kfp.dsl.pipeline(name='minimul') def sample_pipeline( run_id: str = kfp.dsl.RUN_ID_PLACEHOLDER, sample_metric_value: float = 0.5, ): """Execute main procedure of the pipeline.""" _sample_component_op("component", run_id, sample_metric_value) ``` ### What happened: No warning raised on using `ContainerOp` directly. ### What did you expect to happen: Seeing the following warning: ``` /usr/local/lib/python3.7/dist-packages/kfp/dsl/_container_op.py:1039: FutureWarning: Please create reusable components instead of constructing ContainerOp instances directly. Reusable components are shareable, portable and have compatibility and support guarantees. Please see the documentation: https://www.kubeflow.org/docs/pipelines/sdk/component-development/#writing-your-component-definition-file The components can be created manually (or, in case of python, using kfp.components.create_component_from_func or func_to_container_op) and then loaded using kfp.components.load_component_from_file, load_component_from_uri or load_component_from_text: https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.components.html#kfp.components.load_component_from_file category=FutureWarning, ``` KFP SDK version: 1.4.0 ### Anything else you would like to add: For comparison, compiling the following example raises the warning as expected. ``` import kfp @kfp.dsl.pipeline(name='hello-world-pipeline') def sample_pipeline(): kfp.dsl.ContainerOp( name='hello-world', image='library/hello-world:latest', arguments=[], ) ``` /kind bug /area sdk
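The discussion above traces the missing warning to an `__eq__` that returns an always-truthy NamedTuple. The following standalone sketch, with made-up class names rather than KFP's real ones, illustrates why such comparisons can never evaluate to False and how returning a real bool restores the expected behavior.

```python
# Standalone illustration of the root cause described in the comments: an
# __eq__ that returns a NamedTuple is always truthy, so comparisons guarding
# the warning can never evaluate to False. Class names here are made up and
# are not KFP's real classes.
from typing import Any, NamedTuple


class ConditionOperator(NamedTuple):
    operator: str
    operand1: Any
    operand2: Any


class BrokenParam:
    def __init__(self, name: str):
        self.name = name

    def __eq__(self, other):
        # Returns a 3-element NamedTuple, which is always truthy.
        return ConditionOperator('==', self, other)


class FixedParam(BrokenParam):
    def __eq__(self, other):
        # Returning a real bool restores the expected comparison semantics.
        return isinstance(other, BrokenParam) and self.name == other.name


if __name__ == '__main__':
    print(bool(BrokenParam('a') == BrokenParam('b')))  # True, surprisingly
    print(bool(FixedParam('a') == FixedParam('b')))    # False, as expected
```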
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5280/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5279
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5279/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5279/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5279/events
https://github.com/kubeflow/pipelines/issues/5279
828,299,878
MDU6SXNzdWU4MjgyOTk4Nzg=
5,279
Python func component renames function arguments
{ "login": "yuhuishi-convect", "id": 74702693, "node_id": "MDQ6VXNlcjc0NzAyNjkz", "avatar_url": "https://avatars.githubusercontent.com/u/74702693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuhuishi-convect", "html_url": "https://github.com/yuhuishi-convect", "followers_url": "https://api.github.com/users/yuhuishi-convect/followers", "following_url": "https://api.github.com/users/yuhuishi-convect/following{/other_user}", "gists_url": "https://api.github.com/users/yuhuishi-convect/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuhuishi-convect/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuhuishi-convect/subscriptions", "organizations_url": "https://api.github.com/users/yuhuishi-convect/orgs", "repos_url": "https://api.github.com/users/yuhuishi-convect/repos", "events_url": "https://api.github.com/users/yuhuishi-convect/events{/privacy}", "received_events_url": "https://api.github.com/users/yuhuishi-convect/received_events", "type": "User", "site_admin": false }
[ { "id": 1122445895, "node_id": "MDU6TGFiZWwxMTIyNDQ1ODk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/components", "name": "area/sdk/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "neuromage", "id": 206520, "node_id": "MDQ6VXNlcjIwNjUyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neuromage", "html_url": "https://github.com/neuromage", "followers_url": "https://api.github.com/users/neuromage/followers", "following_url": "https://api.github.com/users/neuromage/following{/other_user}", "gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}", "starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neuromage/subscriptions", "organizations_url": "https://api.github.com/users/neuromage/orgs", "repos_url": "https://api.github.com/users/neuromage/repos", "events_url": "https://api.github.com/users/neuromage/events{/privacy}", "received_events_url": "https://api.github.com/users/neuromage/received_events", "type": "User", "site_admin": false }, { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @Ark-kun @neuromage @chensun ", "There seems to be an error in your function definition. I'm not fully sure what is the behavior you've intended, but you probably should not be using the `InputPath` annotation given your component function code:\r\nJust use `def hydrate_schema(synced_local_path: str`. Or better maybe even call it `synced_uri`, since it seems to be a URI not local path.\r\n\r\nCan you try this solution and tell us whether it has fixed your problem?\r\n\r\nP.S. If you tried to call your component, you'd have noticed that `synced_local_path` does not really receive what you expected. It would have been a real local path (`/tmp/inputs/synced_local/data`) containing whatever data the component received from the upstream (probably a URI). This is what `InputPath` does.\r\n\r\n### Long explanation:\r\n\r\nThe behavior is: When using create_component_from_func/func_to_container_op: When a function parameter uses `InputPath` or `OutputPath` annotation and the parameter name ends with `_path` or `_file`, that part is stripped when generating the input/output name.\r\n\r\nLet me try to explain why this design was chosen.\r\n\r\nWhen you use `create_component_from_func`, there are two separate architecture layers: component layer and python function layer.\r\nOn the pipeline level, the author passes artifacts between components. The pipeline author does not manually pass URIs or local paths. Instead they just connect outputs to inputs.\r\nHowever you function has slightly different interface and gets some data from local files, a concept not existing for the pipeline authors.\r\n`create_component_from_func` generates glue command-line program code to bridge between those layers. Annotations like `InputPath` and `OutputPath` influence the way that bridge is constructed.\r\n\r\n`InputPath` means \"write the passed artifact contents to a local file and give me path to that file instead of the content itself\". When the component receives a \"Dataset\" (big text file in CSV format), you function receives a \"Dataset path\" (a small string with local path). These are very different kinds of objects, so it's natural that the names are different.\r\n\r\nThis difference becomes especially apparent if you consider numbers:\r\nNotice how the function expects a string path, but the component input has type `Integer`\r\nFunction: \r\n```python\r\ndef consume_num(number_path: InputPath(int)):\r\n open(number_path) as f:\r\n number = f.read()\r\n```\r\nComponent:\r\n```yaml\r\ninputs:\r\n- {name: Number, type: Integer}\r\n....\r\nimplementation:\r\n container:\r\n args:\r\n - --number-path, {inputPath: Number}\r\n```\r\nPipeline:\r\n```python\r\ndef my_pipeline():\r\n consume_num_op(number=42)\r\n```\r\n\r\nObserve the flow:\r\n1. The pipeline author passes value `42` to the input `Number` using `number=42`\r\n2. The component specification says that the value for the Number input needs to be written to a local file (`/tmp/outputs/Number/data`)\r\n3. The program receives the path as an argument: `--number-path` `/tmp/outputs/Number/data`\r\n4. The path is parsed from the command-line and passed to the function: `number_path=\"/tmp/outputs/Number/data\"`\r\n5. 
The function code uses the path to read the number from the file\r\n\r\nIf the `create_component_from_func` did not strip `_path` when naming the inputs, this would look wrong and weird for the pipeline author:\r\n```python\r\ndef my_pipeline():\r\n consume_num_op(number_path=42)\r\n```\r\n`number_path=42` looks wrong since 42 is not a valid path - it's an integer.", "@Ark-kun Thanks for the very comprehensive explanation. \r\n\r\n> The behavior is: When using create_component_from_func/func_to_container_op: When a function parameter uses InputPath or OutputPath annotation and the parameter name ends with _path or _file, that part is stripped when generating the input/output name.\r\n\r\nThis is something I was not aware of. I used `InputPath` since `synced_local_path` is expecting some output data from a previous component. But I was not aware of the auto-stripping mechanism so that the argument name needs to be changed when calling the function from the pipeline.\r\n\r\nThat answers my question. Is there a way this is documented somewhere in the user guide?\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-10T20:32:53"
"2022-04-18T17:27:32"
"2022-04-18T17:27:32"
NONE
null
### What steps did you take: Converting a python func to compoent: ```python import kfp def hydrate_schema( synced_local_path: kfp.components.InputPath(str), data_schema: str ) -> str: import re if not synced_local_path.endswith("/"): synced_local_path += "/" return re.sub(r"s3:.+\/", synced_local_path, data_schema) hydrate_schema_op = kfp.components.create_component_from_func(hydrate_schema, output_component_file="replace_schema.yaml") ``` In the generated component yaml ```yaml name: Hydrate schema inputs: - {name: synced_local, type: String} - {name: data_schema, type: String} outputs: - {name: Output, type: String} implementation: container: image: python:3.7 command: - sh - -ec - | program_path=$(mktemp) echo -n "$0" > "$program_path" python3 -u "$program_path" "$@" - | def hydrate_schema( synced_local_path, data_schema ): import re if not synced_local_path.endswith("/"): synced_local_path += "/" return re.sub(r"s3:.+\/", synced_local_path, data_schema) def _serialize_str(str_value: str) -> str: if not isinstance(str_value, str): raise TypeError('Value "{}" has type "{}" instead of str.'.format(str(str_value), str(type(str_value)))) return str_value import argparse _parser = argparse.ArgumentParser(prog='Hydrate schema', description='') _parser.add_argument("--synced-local", dest="synced_local_path", type=str, required=True, default=argparse.SUPPRESS) _parser.add_argument("--data-schema", dest="data_schema", type=str, required=True, default=argparse.SUPPRESS) _parser.add_argument("----output-paths", dest="_output_paths", type=str, nargs=1) _parsed_args = vars(_parser.parse_args()) _output_files = _parsed_args.pop("_output_paths", []) _outputs = hydrate_schema(**_parsed_args) _outputs = [_outputs] _output_serializers = [ _serialize_str, ] import os for idx, output_file in enumerate(_output_files): try: os.makedirs(os.path.dirname(output_file)) except OSError: pass with open(output_file, 'w') as f: f.write(_output_serializers[idx](_outputs[idx])) args: - --synced-local - {inputPath: synced_local} - --data-schema - {inputValue: data_schema} - '----output-paths' - {outputPath: Output} ``` The argument `synced_local_path` is replaced with `synced_local`. Therefore, when using the component as ```python local_data_schema = hydrate_schema_op( data_schema=data_schema, synced_local_path=data_sync.output ) ``` Compiler complains that ``` TypeError: Hydrate schema() got an unexpected keyword argument 'synced_local_path' ``` ### What happened: The python func argument `synced_local_path` is renamed to `synced_local` ### What did you expect to happen: The argument names are preserved after conversion. ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> Local cluster deployment using Kind KFP version: 1.2.0 <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> KFP SDK version: 1.2.0 <!-- Please attach the output of this shell command: $pip list | grep kfp --> ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
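Following the advice in the comments above, a hedged sketch of the simplest fix is to annotate the parameter as a plain `str` (the function only needs the URI string), so no `_path` stripping happens and the keyword argument keeps its name at the call site; the names are taken from the issue and the pipeline defaults are illustrative.

```python
# Sketch of the simplest fix suggested in the comments: annotate the parameter
# as a plain `str` (the function only needs the URI string), so no `_path`
# stripping happens and the call-site keyword keeps its name. The default
# values in the pipeline signature are illustrative.
import kfp
from kfp import dsl


def hydrate_schema(synced_local_path: str, data_schema: str) -> str:
    import re
    if not synced_local_path.endswith('/'):
        synced_local_path += '/'
    return re.sub(r's3:.+/', synced_local_path, data_schema)


hydrate_schema_op = kfp.components.create_component_from_func(hydrate_schema)


@dsl.pipeline(name='hydrate-schema-sketch')
def pipeline(synced_local_path: str = '/data/',
             data_schema: str = 's3://bucket/prefix/schema.json'):
    hydrate_schema_op(
        synced_local_path=synced_local_path,  # name is preserved now
        data_schema=data_schema)


if __name__ == '__main__':
    kfp.compiler.Compiler().compile(pipeline, 'hydrate_schema_sketch.yaml')
```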
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5279/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5276
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5276/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5276/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5276/events
https://github.com/kubeflow/pipelines/issues/5276
828,034,789
MDU6SXNzdWU4MjgwMzQ3ODk=
5,276
[Manifests] how to organize argo artifact repository config without duplication?
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619535, "node_id": "MDU6TGFiZWw5MzA2MTk1MzU=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p2", "name": "priority/p2", "color": "fc9915", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "For long term 3 sound like the way to go. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-10T16:51:51"
"2022-04-18T17:27:53"
"2022-04-18T17:27:53"
CONTRIBUTOR
null
## The problem Realized the problem from https://github.com/kubeflow/pipelines/pull/5273. Argo artifact repository config is highly coupled to minio, so it's best put inside minio's manifests and get it reused in manifests envs that use minio. However, if we save the config as a patch, it's only allowed to use patches in the kustomization.yaml 's same or sub-folders. So if we do not include the minio-argo-artifact-repository patch by default, all downstream manifests that use argo with minio will have to duplicate the config patch. For Kubeflow, we have set up best practices for always using `--load_restrictor none`, so that there's no such restriction. However, the argument is not available in `kubectl kustomize`. ## Potential Solutions ### Solution 1 - Wait for kubectl to upgrade kustomize (from upstream issue, looks like there will soon be a release) Pros: * consistent with Kubeflow Cons: * requiring an extra argument is still a common confusion, and sort of hacky * we need to wait for next kubectl release and people using the latest kubectl version (can be a quarter) ### Solution 2 - Use configmap called `artifact-repositories` to choose default artifact repository. Ref: https://argoproj.github.io/argo-workflows/configure-artifact-repository/ Pros: * we no longer need to reuse the config as a patch, it can be a resource, so not limited by load_restrictor anymore Cons: * it only takes effect in the namespace that has it, so multi-user mode KFP doesn't work very well with this approach * the feature was added in argo 3.0, so we need to upgrade first ### Solution 3 - Use kustomize components https://kubectl.docs.kubernetes.io/guides/config_management/components/ this seems like exactly the canonical solution Pros: * this kustomize feature is exactly built to solve this problem Cons: * we need to upgrade to kustomize 3.7+, so we can no longer ask users to `kubectl apply -k manifests` * kustomize components seem to be an alpha API: `kustomize.config.k8s.io/v1alpha1` ### Analysis of the solutions Solution 1 & 2 has fatal problems. It seems we should take Solution 3, which should work long term (but it's also an alpha API). A prerequisite is either of the following: * Wait for kubectl release that includes latest kustomize * Update all documentation that were using `kubectl apply -k` to `kubectl apply -f <prehydrated-manifest>` or `kustomize build xxx | kubectl apply -f -` (when the manifest needs some var config, so it requires local hydration) and introduce kustomize installation process. Maybe the only acceptable short term solution is to default workflow-controller-configmap's artifact repository config to use minio, so that most of the manifests do not need to change it.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5276/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5276/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5272
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5272/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5272/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5272/events
https://github.com/kubeflow/pipelines/issues/5272
826,881,274
MDU6SXNzdWU4MjY4ODEyNzQ=
5,272
[Testing] kubeflow-pipeline-postsubmit-integration-test flaky 2021-3-10
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "For aiplatform, here is a sample of test failure log:\r\n\r\n```\r\n\r\nsample-test-92wvs-4170907803: Using endpoint [https://ml.googleapis.com/]\r\nsample-test-92wvs-4170907803: ERROR: (gcloud.ai-platform.predict) HTTP request failed. Response: {\r\nsample-test-92wvs-4170907803: \"error\": {\r\nsample-test-92wvs-4170907803: \"code\": 404,\r\nsample-test-92wvs-4170907803: \"message\": \"Field: name Error: Online prediction is unavailable for this version. Please verify that CreateVersion has completed successfully.\",\r\nsample-test-92wvs-4170907803: \"status\": \"NOT_FOUND\",\r\nsample-test-92wvs-4170907803: \"details\": [\r\nsample-test-92wvs-4170907803: {\r\nsample-test-92wvs-4170907803: \"@type\": \"type.googleapis.com/google.rpc.BadRequest\",\r\nsample-test-92wvs-4170907803: \"fieldViolations\": [\r\nsample-test-92wvs-4170907803: {\r\nsample-test-92wvs-4170907803: \"field\": \"name\",\r\nsample-test-92wvs-4170907803: \"description\": \"Online prediction is unavailable for this version. Please verify that CreateVersion has completed successfully.\"\r\nsample-test-92wvs-4170907803: }\r\nsample-test-92wvs-4170907803: ]\r\nsample-test-92wvs-4170907803: }\r\nsample-test-92wvs-4170907803: ]\r\nsample-test-92wvs-4170907803: }\r\nsample-test-92wvs-4170907803: }\r\nsample-test-92wvs-4170907803: \r\n```\r\n\r\n", "Possible related: https://github.com/kubeflow/pipelines/blob/master/samples/core/ai_platform/ai_platform.ipynb", "it was because of argo pns executor, after reverting back to docker executor the first postsubmit succeeded", "ohhh, you are right. There's another flakiness in aiplatform sample. keep this open", "Regarding `container_build`:\r\n\r\nError message is as followed:\r\n\r\n```\r\n\r\nsample-test-6gw5p-2308294888: KFP API host is 302db4d8b0ea18f8-dot-us-east1.pipelines.googleusercontent.com\r\nsample-test-6gw5p-2308294888: Run the sample tests...\r\nsample-test-6gw5p-2308294888: Traceback (most recent call last):\r\nsample-test-6gw5p-2308294888: File \"/python/src/github.com/kubeflow/pipelines/test/sample-test/sample_test_launcher.py\", line 260, in <module>\r\nsample-test-6gw5p-2308294888: main()\r\nsample-test-6gw5p-2308294888: File \"/python/src/github.com/kubeflow/pipelines/test/sample-test/sample_test_launcher.py\", line 256, in main\r\nsample-test-6gw5p-2308294888: 'component_test': ComponentTest\r\nsample-test-6gw5p-2308294888: File \"/usr/local/lib/python3.7/dist-packages/fire/core.py\", line 141, in Fire\r\nsample-test-6gw5p-2308294888: component_trace = _Fire(component, args, parsed_flag_args, context, name)\r\nsample-test-6gw5p-2308294888: File \"/usr/local/lib/python3.7/dist-packages/fire/core.py\", line 471, in _Fire\r\nsample-test-6gw5p-2308294888: target=component.__name__)\r\nsample-test-6gw5p-2308294888: File \"/usr/local/lib/python3.7/dist-packages/fire/core.py\", line 681, in _CallAndUpdateTrace\r\nsample-test-6gw5p-2308294888: component = fn(*varargs, **kwargs)\r\nsample-test-6gw5p-2308294888: File \"/python/src/github.com/kubeflow/pipelines/test/sample-test/sample_test_launcher.py\", line 183, in run_test\r\nsample-test-6gw5p-2308294888: nbchecker.check()\r\nsample-test-6gw5p-2308294888: File \"/python/src/github.com/kubeflow/pipelines/test/sample-test/check_notebook_results.py\", line 74, in check\r\nsample-test-6gw5p-2308294888: experiment_id = client.get_experiment(experiment_name=experiment).id\r\nsample-test-6gw5p-2308294888: File \"/usr/local/lib/python3.7/dist-packages/kfp-1.4.1rc1-py3.7.egg/kfp/_client.py\", line 462, in 
get_experiment\r\nsample-test-6gw5p-2308294888: raise ValueError('No experiment is found with name {}.'.format(experiment_name))\r\nsample-test-6gw5p-2308294888: ValueError: No experiment is found with name container_build-test.\r\n```\r\n\r\nSuspecting it has something to do with backend not able to return the experiment in request, or this experiment doesn't exist.", "Example logs: https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-integration-test/1374652399771193344", "[Prow](https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-integration-test/1374652399771193344)\r\n\r\n[Pod](\r\nhttps://00f74ba44b420418175dd31cb3c371cfc9ef2b71ab-apidata.googleusercontent.com/download/storage/v1/b/oss-prow/o/logs%2Fkubeflow-pipeline-postsubmit-integration-test%2F1374652399771193344%2Fartifacts%2Fpods_info%2Fkaniko-7v2gs.txt?jk=AFshE3WgSYrbaJ_tnqAP2uHklKC4QfRLQr2DCqSwJD9nn-3B25X_SpBMabvOAGh5QTEVwpFX7qqxyf1xZdrtLcciykBwxsSebvvHSvYAk8BtaSfFfN6ShsF6nb5CjV6DpyOUDen76mPvGGHR4ov0a1S-OHrB3UySfVJ9ur9XqFJJ6iowcb1yjFtol14JsxgtUnaHcxWIiZ7IKhVQGcx30Lp-YU--1H1efLZXu1eANHH8uwKp1nSQ6XYG8JnnzD-zlbuIuA0r93YmYvZpJXL6xcVeagEPniE4FHGjIC38jOz2zreM2ZZjKyRoGNnwAvWgk93EdrmFXyfw1Qd50ZC9iWjr6_AWl8Oake2pY2m4CYJ_3cVDAwKOeO6f8HC4fFA7Qff_WqBqqNz-F9EipRvcBLZ2yqve-zvb2u5WBNDmzKo-CdPdGFMDBtXuo36CQJnTP8BqqbLCciw6qwZqWRdMokhD705dvTUuTPJ3TM46WB5Cjwt5_5784BVBUyN96nRPgMwydMgOdOmoRwTGyScbjHDi9eFMNPPOXbwTUyAuOgKHuc8pkac9n-D3ya-J6sSjWrWZXoUeBkygOytxn9QVh_VxNSjx-cu0cnyrzlTeroWwr_nPqhjVNk5zE3FTavD1cDxknd_fB3qTgKQP9nRVEH6MrCnf0M13Uqtl46nma3Ql45LfJuZZISFA00N8u904Jr4MnU6ST2QXT8Q2c0dpGrECI5eN7G-M6I0bvT6gH5HZ6KyQkl_1Ys8wBW36e-BIyTHct7o-3X3CqAIy3uWFVi_fdUi2ip8R1Bg-xoPeR8q-QAJR1LN-RzcnJk55jMfyFQLup7ejzncOz0pqJgJ9_RC93UVYx3YrdlqBkTzImjY0axORxMcL7C7uLWdLixS9UfFXaZ94S1SglUdf8PGflzDcWgO22MTpqXcEosmeBCW-rI9PfviEkdvRWKI6pZpbjXHIDTzknCo6Uh1b8NiSnFC5pc7uua3I0zjqCAJq-lf0QN6xrtRJRXvZ0ESyo2dMN5XL0J4SQy1phSKAL4gTqJV8bB4tuMbkHvoSNCivFb33hrd4Mcxw0QgV3_YsSLVN5Bj2X3p5S0U&isca=1)\r\n\r\n[Kaniko logs](\r\nhttps://pantheon.corp.google.com/logs/query;query=resource.type%3D%22k8s_container%22%0Aresource.labels.project_id%3D%22ml-pipeline-test%22%0Aresource.labels.location%3D%22us-east1-b%22%0Aresource.labels.cluster_name%3D%22sample-d9c0196-6848%22%0Aresource.labels.namespace_name%3D%22kubeflow%22%0Aresource.labels.pod_name%3D%22kaniko-7v2gs%22;timeRange=P7D;cursorTimestamp=2021-03-24T10:28:06.449898604Z?project=ml-pipeline-test)\r\n\r\nMaybe it has failed to download the image.\r\n```\r\n\u001b[36mINFO\u001b[0m[0001] Resolved base name gcr.io/deeplearning-platform-release/tf-cpu.1-14 to gcr.io/deeplearning-platform-release/tf-cpu.1-14\r\n\u001b[36mINFO\u001b[0m[0001] Resolved base name gcr.io/deeplearning-platform-release/tf-cpu.1-14 to gcr.io/deeplearning-platform-release/tf-cpu.1-14\r\n\u001b[36mINFO\u001b[0m[0001] Downloading base image gcr.io/deeplearning-platform-release/tf-cpu.1-14\r\n\u001b[36mINFO\u001b[0m[0001] Error while retrieving image from cache: getting file info: stat /cache/sha256:bd464d284317fdd4fedea19b72b6424710ed2a7695ce391f718d74278a3ab97a: no such file or directory\r\n\u001b[36mINFO\u001b[0m[0001] Downloading base image gcr.io/deeplearning-platform-release/tf-cpu.1-14\r\n\u001b[36mINFO\u001b[0m[0002] Built cross stage deps: map[]\r\n\u001b[36mINFO\u001b[0m[0002] Downloading base image gcr.io/deeplearning-platform-release/tf-cpu.1-14\r\n\u001b[36mINFO\u001b[0m[0002] Error while retrieving image from cache: getting file info: stat 
/cache/sha256:bd464d284317fdd4fedea19b72b6424710ed2a7695ce391f718d74278a3ab97a: no such file or directory\r\n\u001b[36mINFO\u001b[0m[0002] Downloading base image gcr.io/deeplearning-platform-release/tf-cpu.1-14\r\n\u001b[36mINFO\u001b[0m[0002] Using files from context: [/kaniko/buildcontext/requirements.txt]\r\n\u001b[36mINFO\u001b[0m[0002] Checking for cached layer gcr.io/ml-pipeline-test/7841878234294039750/kfp_container/cache:ba7a7556fbb481e0dac58fa95852cd1657d89697c8e490d6d4a4299955fb85a6...\r\n\u001b[36mINFO\u001b[0m[0002] No cached layer found for cmd RUN python3 -m pip install -r requirements.txt\r\n\u001b[36mINFO\u001b[0m[0002] Unpacking rootfs as cmd RUN python3 -m pip install -r requirements.txt requires it.\r\n\u001b[36mINFO\u001b[0m[0275] Taking snapshot of full filesystem...\r\n```\r\n\r\nPerhaps we should use a smaller image for that sample/test.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Obsolete" ]
"2021-03-10T00:58:54"
"2021-08-22T07:00:13"
"2021-08-22T07:00:13"
CONTRIBUTOR
null
kubeflow-pipeline-postsubmit-integration-test has been failing consecutively. I did a quick check; the following samples are both flaky: * aiplatform * container_build Creating this issue to track solving this. /assign @zijianjoy
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5272/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5270
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5270/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5270/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5270/events
https://github.com/kubeflow/pipelines/issues/5270
826,683,485
MDU6SXNzdWU4MjY2ODM0ODU=
5,270
parallelfor_item_argument_resolving test failed to run
{ "login": "Tomcli", "id": 10889249, "node_id": "MDQ6VXNlcjEwODg5MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10889249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tomcli", "html_url": "https://github.com/Tomcli", "followers_url": "https://api.github.com/users/Tomcli/followers", "following_url": "https://api.github.com/users/Tomcli/following{/other_user}", "gists_url": "https://api.github.com/users/Tomcli/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tomcli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tomcli/subscriptions", "organizations_url": "https://api.github.com/users/Tomcli/orgs", "repos_url": "https://api.github.com/users/Tomcli/repos", "events_url": "https://api.github.com/users/Tomcli/events{/privacy}", "received_events_url": "https://api.github.com/users/Tomcli/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "Good catch.\r\nSome time ago the return value of the component function was not passed completely correctly. So the components had to whap output in a tuple when outputting a single list. The problem was fixed in the SDK, but the test was not updated." ]
"2021-03-09T21:47:37"
"2021-03-11T22:06:25"
"2021-03-11T22:06:25"
MEMBER
null
### What steps did you take: Run the parallelfor_item_argument_resolving compiler test example on Kubeflow Pipeline, the pipeline will fail because item arguments are wrapped into list of list of items instead of list of items https://github.com/kubeflow/pipelines/blob/master/sdk/python/tests/compiler/testdata/parallelfor_item_argument_resolving.py ### What happened: The `func_to_container_op` wraps the containerOp result into a list. Because the containerOp output is already a list of items, `func_to_container_op` warps the output into a list of list of items. e.g. ```python @func_to_container_op def produce_list_of_dicts() -> list: return ([{"aaa": "aaa1", "bbb": "bbb1"}, {"aaa": "aaa2", "bbb": "bbb2"}],) # Actual output in containerOp: [[{"aaa": "aaa1", "bbb": "bbb1"}, {"aaa": "aaa2", "bbb": "bbb2"}]] ``` <img width="1076" alt="Screen Shot 2021-03-09 at 1 44 30 PM" src="https://user-images.githubusercontent.com/10889249/110542224-a3d97f80-80dd-11eb-8b0a-134d04ec8870.png"> ### What did you expect to happen: The compiler example could be a typo and only need to remove the `[ ]` from the return statement. ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> 1.4.0 KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> 1.4.0 ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
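Per the discussion on this issue, the tuple wrapping around the single list return value is what produces the nested list-of-lists output. A minimal corrected sketch of the first test component, assuming a current SDK where a single return value is passed through unchanged:

```python
from kfp.components import func_to_container_op

@func_to_container_op
def produce_list_of_dicts() -> list:
    # Return the list directly; wrapping it as ([...],) is what produced the
    # [[{...}, {...}]] output shown in the screenshot above.
    return [{"aaa": "aaa1", "bbb": "bbb1"},
            {"aaa": "aaa2", "bbb": "bbb2"}]
```

With this change, a `dsl.ParallelFor` over the task's output should iterate over the two dicts themselves rather than over a single nested list.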
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5270/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5269
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5269/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5269/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5269/events
https://github.com/kubeflow/pipelines/issues/5269
826,428,560
MDU6SXNzdWU4MjY0Mjg1NjA=
5,269
userid propagation issue between pipeline services
{ "login": "raffaelespazzoli", "id": 6179036, "node_id": "MDQ6VXNlcjYxNzkwMzY=", "avatar_url": "https://avatars.githubusercontent.com/u/6179036?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raffaelespazzoli", "html_url": "https://github.com/raffaelespazzoli", "followers_url": "https://api.github.com/users/raffaelespazzoli/followers", "following_url": "https://api.github.com/users/raffaelespazzoli/following{/other_user}", "gists_url": "https://api.github.com/users/raffaelespazzoli/gists{/gist_id}", "starred_url": "https://api.github.com/users/raffaelespazzoli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raffaelespazzoli/subscriptions", "organizations_url": "https://api.github.com/users/raffaelespazzoli/orgs", "repos_url": "https://api.github.com/users/raffaelespazzoli/repos", "events_url": "https://api.github.com/users/raffaelespazzoli/events{/privacy}", "received_events_url": "https://api.github.com/users/raffaelespazzoli/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "I'm also getting this error in the main landing page:\r\n```json\r\n{error: \"Invalid input error: ListRuns must filter by resource reference in multi-user mode.\",…}\r\ncode: 3\r\ndetails: [{@type: \"type.googleapis.com/api.Error\",…}]\r\n0: {@type: \"type.googleapis.com/api.Error\",…}\r\n@type: \"type.googleapis.com/api.Error\"\r\nerror_details: \"Invalid input error: ListRuns must filter by resource reference in multi-user mode.\"\r\nerror_message: \"ListRuns must filter by resource reference in multi-user mode.\"\r\nerror: \"Invalid input error: ListRuns must filter by resource reference in multi-user mode.\"\r\nmessage: \"Invalid input error: ListRuns must filter by resource reference in multi-user mode.\"\r\n```\r\n![image](https://user-images.githubusercontent.com/6179036/110525029-1fd5c680-80e2-11eb-9e1d-6b307bd7c6c4.png)\r\n\r\nI think it's related to the same underlying root cause. ", "we can actually see that the userid header is propagated to the `ml-pipeline` service:\r\n![image](https://user-images.githubusercontent.com/6179036/110526069-5cee8880-80e3-11eb-99ca-bff2a8a3061f.png)\r\n\r\nany advice on what might be failing?", "looking more at the issue and the code it looks like the code where the issue arise is here:\r\nhttps://github.com/kubeflow/pipelines/blob/5445ce82c7cd79bd517565750f72cf6a893c8cee/backend/src/apiserver/server/run_server.go#L179\r\n\r\nwhere `ReferenceKey` must be null. \r\nwhy might it be happening?", "@raffaelespazzoli I'm not familiar with k8s_istio config, but can you confirm whether the following are configured properly for your env?\r\n\r\nEnv var config on ml-pipeline-ui deployment:\r\nhttps://github.com/kubeflow/pipelines/blob/4c51b576f577e337049599cb85b098006664265f/frontend/server/configs.ts#L106-L113\r\n\r\nEnv var config on ml-pipeline deployment:\r\nhttps://github.com/kubeflow/pipelines/blob/d9c019641ef9ebd78db60cdb78ea29b0d9933008/backend/src/apiserver/common/config.go#L31-L32", "@Bobgy both variable are set in both services via a shared configmap. I can prove that the header is propagated looking at the traces with Jaeger.", "@raffaelespazzoli sorry I didn't see the notification, maybe you can try Kubeflow 1.3 soon.\r\n\r\nNow they have verified KFP multi-user mode working properly for installation on K8s cluster only.\r\nhttps://github.com/kubeflow/manifests", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-09T18:29:51"
"2022-03-03T06:05:33"
"2022-03-03T06:05:33"
NONE
null
### What steps did you take: I'm running a multiuser pipeline environment with kubeflow 1.2.0. From what I am seeing the `ml-pipeline-ui` service calls the `ml-pipeline` service when some actions on the ui are take like for example: start a pipeline run or create an experiment I'm getting this error on the ui: ``` {"error":"Failed to authorize the request.: Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header.","message":"Failed to authorize the request.: Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header.","code":10,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Request header error: there is no user identity header.","error_details":"Failed to authorize the request.: Failed to authorize with API resource references: Bad request.: BadRequestError: Request header error: there is no user identity header.: Request header error: there is no user identity header."}]} ``` and I can see this on the `ml-pipeline` pod log: ``` I0309 16:31:31.191130 6 error.go:227] Request header error: there is no user identity header. github.com/kubeflow/pipelines/backend/src/apiserver/server.getUserIdentity /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/util.go:304 github.com/kubeflow/pipelines/backend/src/apiserver/server.isAuthorized /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/util.go:377 github.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment /go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:74 ``` from what I understand the `ml-pipline-ui service` should forward the userid via the header configured in the `kubeflow-config` configmap, but that does not seem to be happening. Is there a flag to enable this behavior? Am I missing something? ### What did you expect to happen: If the userid header field was passed I'd expect not to get the error and to be able to launch a pipeline. ### Environment: kubeflow 1.2.0 on OCP 4.6 How did you deploy Kubeflow Pipelines (KFP)? I used kfctl, with the `kfctl_k8s_istio.v1.2.0.yaml` config with a few adjustment for OCP. ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug /area frontend /area backend
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5269/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5263
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5263/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5263/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5263/events
https://github.com/kubeflow/pipelines/issues/5263
825,113,268
MDU6SXNzdWU4MjUxMTMyNjg=
5,263
Postsubmit tests are failing
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "id": 930619511, "node_id": "MDU6TGFiZWw5MzA2MTk1MTE=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p0", "name": "priority/p0", "color": "db1203", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1122445408, "node_id": "MDU6TGFiZWwxMTIyNDQ1NDA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/dsl/compiler", "name": "area/sdk/dsl/compiler", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-03-09T00:50:22"
"2021-03-09T19:56:23"
"2021-03-09T19:56:23"
CONTRIBUTOR
null
https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-integration-test/1368057462506131456 ``` sample-test-bg6w4-565595884: https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/bigquery/query/component.yaml in Bigquery - Query(query, project_id, dataset_id, table_id, output_gcs_path, dataset_location, job_config) sample-test-bg6w4-565595884:  sample-test-bg6w4-565595884: /usr/local/lib/python3.7/dist-packages/kfp-1.4.1-py3.7.egg/kfp/components/_components.py in create_task_object_from_component_and_pythonic_arguments(pythonic_arguments) sample-test-bg6w4-565595884:  432 component_spec=component_spec, sample-test-bg6w4-565595884:  433 arguments=arguments, sample-test-bg6w4-565595884: --> 434 component_ref=component_ref, sample-test-bg6w4-565595884:  435 ) sample-test-bg6w4-565595884:  436  sample-test-bg6w4-565595884:  sample-test-bg6w4-565595884: /usr/local/lib/python3.7/dist-packages/kfp-1.4.1-py3.7.egg/kfp/components/_components.py in _create_task_object_from_component_and_arguments(component_spec, arguments, component_ref, **kwargs) sample-test-bg6w4-565595884:  380 arguments=arguments, sample-test-bg6w4-565595884:  381 component_ref=component_ref, sample-test-bg6w4-565595884: --> 382 **kwargs, sample-test-bg6w4-565595884:  383 ) sample-test-bg6w4-565595884:  384  sample-test-bg6w4-565595884:  sample-test-bg6w4-565595884: /usr/local/lib/python3.7/dist-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/_component_bridge.py in _create_container_op_from_component_and_arguments(component_spec, arguments, component_ref) sample-test-bg6w4-565595884:  143 task.execution_options.caching_strategy.max_cache_staleness = 'P0D' sample-test-bg6w4-565595884:  144  sample-test-bg6w4-565595884: --> 145 _attach_v2_specs(task, component_spec, original_arguments) sample-test-bg6w4-565595884:  146  sample-test-bg6w4-565595884:  147 return task sample-test-bg6w4-565595884:  sample-test-bg6w4-565595884: /usr/local/lib/python3.7/dist-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/_component_bridge.py in _attach_v2_specs(task, component_spec, arguments) sample-test-bg6w4-565595884:  345 input_name, component_spec.inputs) sample-test-bg6w4-565595884:  346 importer_specs[input_name] = importer_node.build_importer_spec( sample-test-bg6w4-565595884: --> 347 input_type_schema=type_schema, constant_value=argument_value) sample-test-bg6w4-565595884:  348 elif isinstance(argument_value, int): sample-test-bg6w4-565595884:  349 pipeline_task_spec.inputs.parameters[ sample-test-bg6w4-565595884:  sample-test-bg6w4-565595884: /usr/local/lib/python3.7/dist-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/importer_node.py in build_importer_spec(input_type_schema, pipeline_param_name, constant_value) sample-test-bg6w4-565595884:  42 """ sample-test-bg6w4-565595884:  43 assert bool(pipeline_param_name) != bool(constant_value), ( sample-test-bg6w4-565595884: ---> 44 'importer spec should be built using either pipeline_param_name or ' sample-test-bg6w4-565595884:  45 'constant_value.') sample-test-bg6w4-565595884:  46 importer_spec = pipeline_spec_pb2.PipelineDeploymentConfig.ImporterSpec() sample-test-bg6w4-565595884:  sample-test-bg6w4-565595884: AssertionError: importer spec should be built using either pipeline_param_name or constant_value. sample-test-bg6w4-565595884: KFP API host is 3edeebea8e9ac0e0-dot-us-east1.pipelines.googleusercontent.com sample-test-bg6w4-565595884: Run the sample tests... 
sample-test-bg6w4-565595884: Traceback (most recent call last): sample-test-bg6w4-565595884: File "/python/src/github.com/kubeflow/pipelines/test/sample-test/sample_test_launcher.py", line 260, in <module> sample-test-bg6w4-565595884: main() sample-test-bg6w4-565595884: File "/python/src/github.com/kubeflow/pipelines/test/sample-test/sample_test_launcher.py", line 256, in main sample-test-bg6w4-565595884: 'component_test': ComponentTest sample-test-bg6w4-565595884: File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 141, in Fire sample-test-bg6w4-565595884: component_trace = _Fire(component, args, parsed_flag_args, context, name) sample-test-bg6w4-565595884: File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 471, in _Fire sample-test-bg6w4-565595884: target=component.__name__) sample-test-bg6w4-565595884: File "/usr/local/lib/python3.7/dist-packages/fire/core.py", line 681, in _CallAndUpdateTrace sample-test-bg6w4-565595884: component = fn(*varargs, **kwargs) sample-test-bg6w4-565595884: File "/python/src/github.com/kubeflow/pipelines/test/sample-test/sample_test_launcher.py", line 183, in run_test sample-test-bg6w4-565595884: nbchecker.check() sample-test-bg6w4-565595884: File "/python/src/github.com/kubeflow/pipelines/test/sample-test/check_notebook_results.py", line 74, in check sample-test-bg6w4-565595884: experiment_id = client.get_experiment(experiment_name=experiment).id sample-test-bg6w4-565595884: File "/usr/local/lib/python3.7/dist-packages/kfp-1.4.1-py3.7.egg/kfp/_client.py", line 454, in get_experiment sample-test-bg6w4-565595884: raise ValueError('No experiment is found with name {}.'.format(experiment_name)) ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5263/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5257
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5257/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5257/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5257/events
https://github.com/kubeflow/pipelines/issues/5257
824,311,950
MDU6SXNzdWU4MjQzMTE5NTA=
5,257
VolumeOp was not able to create PVC
{ "login": "yuhuishi-convect", "id": 74702693, "node_id": "MDQ6VXNlcjc0NzAyNjkz", "avatar_url": "https://avatars.githubusercontent.com/u/74702693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuhuishi-convect", "html_url": "https://github.com/yuhuishi-convect", "followers_url": "https://api.github.com/users/yuhuishi-convect/followers", "following_url": "https://api.github.com/users/yuhuishi-convect/following{/other_user}", "gists_url": "https://api.github.com/users/yuhuishi-convect/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuhuishi-convect/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuhuishi-convect/subscriptions", "organizations_url": "https://api.github.com/users/yuhuishi-convect/orgs", "repos_url": "https://api.github.com/users/yuhuishi-convect/repos", "events_url": "https://api.github.com/users/yuhuishi-convect/events{/privacy}", "received_events_url": "https://api.github.com/users/yuhuishi-convect/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "elikatsis", "id": 14970053, "node_id": "MDQ6VXNlcjE0OTcwMDUz", "avatar_url": "https://avatars.githubusercontent.com/u/14970053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elikatsis", "html_url": "https://github.com/elikatsis", "followers_url": "https://api.github.com/users/elikatsis/followers", "following_url": "https://api.github.com/users/elikatsis/following{/other_user}", "gists_url": "https://api.github.com/users/elikatsis/gists{/gist_id}", "starred_url": "https://api.github.com/users/elikatsis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elikatsis/subscriptions", "organizations_url": "https://api.github.com/users/elikatsis/orgs", "repos_url": "https://api.github.com/users/elikatsis/repos", "events_url": "https://api.github.com/users/elikatsis/events{/privacy}", "received_events_url": "https://api.github.com/users/elikatsis/received_events", "type": "User", "site_admin": false }
[ { "login": "elikatsis", "id": 14970053, "node_id": "MDQ6VXNlcjE0OTcwMDUz", "avatar_url": "https://avatars.githubusercontent.com/u/14970053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elikatsis", "html_url": "https://github.com/elikatsis", "followers_url": "https://api.github.com/users/elikatsis/followers", "following_url": "https://api.github.com/users/elikatsis/following{/other_user}", "gists_url": "https://api.github.com/users/elikatsis/gists{/gist_id}", "starred_url": "https://api.github.com/users/elikatsis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elikatsis/subscriptions", "organizations_url": "https://api.github.com/users/elikatsis/orgs", "repos_url": "https://api.github.com/users/elikatsis/repos", "events_url": "https://api.github.com/users/elikatsis/events{/privacy}", "received_events_url": "https://api.github.com/users/elikatsis/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @elikatsis \r\nCan you help with this issue?", "Kubeflow cache's steps, so given the same inputs, it skips the step and instead fetches outputs from minio. \r\nVolume op has no outputs in Minio so ideally VolumeOp should not be cached. \r\n\r\nIn this case you would need to change the name of the PVC. \r\n\r\nYou can also refer to https://github.com/kubeflow/pipelines/issues/5055#issue-796655777. ", "Does the following work ?\r\n```python\r\n vop = kfp.dsl.VolumeOp(\r\n name=\"volume_creation\",\r\n resource_name=f\"{{{{workflow.name}}}}-sharedpvc\",\r\n size=\"5Gi\",\r\n modes=[\"RWO\"]\r\n )\r\n```\r\n\r\nThis will hopefully change the input to the ResourceOp and hence prevent caching.\r\n \r\n Update: This workound doesn't seem to work with Kubeflow 1.3/latest kfp version. I think this workaround worked with Kubeflow 1.2. ", "> \r\n> \r\n> Does the following work ?\r\n> \r\n> ```python\r\n> vop = kfp.dsl.VolumeOp(\r\n> name=\"volume_creation\",\r\n> resource_name=f\"{{{{workflow.name}}}}-sharedpvc\",\r\n> size=\"5Gi\",\r\n> modes=[\"RWO\"]\r\n> )\r\n> ```\r\n> \r\n> This will hopefully change the input to the ResourceOp and hence prevent caching.\r\n\r\nYes, the problem is fixed after I disabled the cache. Thanks for the help.", "@Bobgy sorry I had totally missed this.\r\n\r\nI think the problem here is that\r\n1. the mechanism is caching steps that it shouldn't do so\r\n2. there is no \"user\" selection on whether to cache some specific step or not, only global API server configuration. The API server overrides any configuration: https://github.com/kubeflow/pipelines/blob/601b104c0beba8049dfee493d0b07c0aea390b78/backend/src/apiserver/resource/resource_manager.go#L377", "I'll reopen this issue as it needs some fix apart from globally disabling the cache\r\n\r\n/reopen", "@elikatsis: Reopened this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5257#issuecomment-852327111):\n\n>I'll reopen this issue as it needs some fix apart from globally disabling the cache\r\n>\r\n>/reopen\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>", "Can we rename this issue? The typo makes it hard to find.", "> Does the following work ?\r\n> \r\n> ```python\r\n> vop = kfp.dsl.VolumeOp(\r\n> name=\"volume_creation\",\r\n> resource_name=f\"{{{{workflow.name}}}}-sharedpvc\",\r\n> size=\"5Gi\",\r\n> modes=[\"RWO\"]\r\n> )\r\n> ```\r\n> \r\n> This will hopefully change the input to the ResourceOp and hence prevent caching.\r\n\r\nHi, for some reason, this workaround does not work for me. KF version: 1.6.0, Kubeflow SDK 1.6.3. I used the same pipeline from the issue description. My pcv was created only once.\r\n\r\nThank you", "Facing the same issue. 
`VolumeOp` fails to create PVC with log message `This step output is taken from cache.`\r\n\r\nTried to change `resource_name`, but that workaround didn't work (if it worked before).", "I think KFP v1 caching should not cache volume op / resource op, because the side effect is intended.\r\n/cc @Ark-kun \r\n\r\nWelcome contributions to fix this.\r\nCaching webhook: https://github.com/kubeflow/pipelines/tree/master/backend/src/cache.\r\n\r\nBesides that, caching for KFP v2 compatible mode should no longer cache volume ops. You can also consider trying it out too when it's released and documented for your env. (currently documented for KFP standalone, but not full Kubeflow.)", "Updated workaround for getting around VolumeOp Caching. This seems to work with kfp version 1.7.2. \r\n\r\nREF: https://github.com/kubeflow/pipelines/issues/4857#issuecomment-740279537\r\n\r\n```python\r\n test_vop = kfp.dsl.VolumeOp(\r\n name=\"volume\",\r\n resource_name=\"pvc-name\",\r\n modes=['RWO'],\r\n storage_class=\"standard\",\r\n size=\"10Gi\"\r\n ).add_pod_annotation(name=\"pipelines.kubeflow.org/max_cache_staleness\", value=\"P0D\")\r\n```", "Not sure this is the correct place to bring this up. And I am not familiar with v2 component configuration yet but it looks like the mutation webhook in cache-server backend is looking for key pipelines.kubeflow.org/enable_caching while the python sdk is creating key pipelines.kubeflow.org/cache_enabled when using set_cache_enabled.\r\n\r\nSee following files:\r\nhttps://github.com/kubeflow/pipelines/blob/master/backend/src/cache/server/mutation.go#L36\r\nhttps://github.com/kubeflow/pipelines/blob/dec03067ca1f89f1ca23c7397830d60201448fa6/sdk/python/kfp/compiler/_op_to_template.py#L186\r\n\r\nAlthough it was mentioned to use .add_pod_annotation(name=\"pipelines.kubeflow.org/max_cache_staleness\", value=\"P0D\") which looks like it should work from the code in mutation.go. If that's the case will the base_op function set_caching_enabled be deprecated in the future. \r\n\r\nThe change on cache-server to allow the function of set_caching_enabled to work will be easy. I made the changes and tested them on my forked version of this repo. I can make a PR if the set_caching_enabled is not going to be deprecated. ", "> \r\n> \r\n> Updated workaround for getting around VolumeOp Caching. This seems to work with kfp version 1.7.2.\r\n> \r\n> REF: [#4857 (comment)](https://github.com/kubeflow/pipelines/issues/4857#issuecomment-740279537)\r\n> \r\n> ```python\r\n> test_vop = kfp.dsl.VolumeOp(\r\n> name=\"volume\",\r\n> resource_name=\"pvc-name\",\r\n> modes=['RWO'],\r\n> storage_class=\"standard\",\r\n> size=\"10Gi\"\r\n> ).add_pod_annotation(name=\"pipelines.kubeflow.org/max_cache_staleness\", value=\"P0D\")\r\n> ```\r\n\r\nThank you very much, i was experiencing the same issue", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-03-08T08:24:43"
"2022-03-02T18:05:18"
null
NONE
null
### What steps did you take: A simple pipeline with one vol_op and one simple step that mounts the pvc created. ```python # minimal example import kfp @kfp.dsl.pipeline( name="data download and upload", description="Data IO test" ) def volume_pipeline(): # shared vol vop = kfp.dsl.VolumeOp( name="volume_creation", resource_name="sharedpvc", size="5Gi", modes=["RWO"] ) # mount the vol simple_task = kfp.dsl.ContainerOp( name="simple task", image="bash", arguments=[ "echo", "hello", ">/data/hello.text" ] ).add_pvolumes({ "/data": vop.volume }) # run the pipeline client = kfp.Client() client.create_run_from_pipeline_func(volume_pipeline, arguments={}) ``` ### What happened: The VolumeOp was not able to create the PVC, therefore the depending task complains about not finding the PVC. `kubectl get pvc -n kubeflow | grep sharedpvc` didn't return any results. ### What did you expect to happen: The VolumeOp shall create a PVC named `sharedpvc`. ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? Deploying Kubeflow Pipelines on a local kind cluster <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> KFP version: 1.2.0 <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> KFP SDK version: 1.4.0 <!-- Please attach the output of this shell command: $pip list | grep kfp --> ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] The log of the VolumneOp indicates ``` This step output is taken from cache. ``` I was trying to prevent it from using the cache but didn't succeed. The manifest from the VolumneOp ``` apiVersion: v1 kind: PersistentVolumeClaim metadata: name: '{{workflow.name}}-sharedpvc' spec: accessModes: - RWO resources: requests: storage: 5Gi ``` The log of the depending task indicates ``` This step is in Pending state with this message: Unschedulable: persistentvolumeclaim "{{tasks.volume-creation.outputs.parameters.volume-creation-name}}" not found ``` Storage class used ``` yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"standard"},"provisioner":"rancher.io/local-path","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"} storageclass.kubernetes.io/is-default-class: "true" creationTimestamp: "2020-12-29T06:31:17Z" name: standard resourceVersion: "195" selfLink: /apis/storage.k8s.io/v1/storageclasses/standard uid: 5dbc1bff-b488-4d3a-b45f-e710cf96a415 provisioner: rancher.io/local-path reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer ``` /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
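As the comments on this issue point out, the `volume_creation` step is being answered from the result cache, so the PVC never gets (re)created. Below is a minimal sketch of the same pipeline with that step opted out of caching via the `max_cache_staleness` pod annotation; it also uses `kfp.dsl.VOLUME_MODE_RWO` instead of the literal `"RWO"` so the generated PVC manifest carries the full `ReadWriteOnce` access mode.

```python
import kfp

@kfp.dsl.pipeline(name="data download and upload", description="Data IO test")
def volume_pipeline():
    vop = kfp.dsl.VolumeOp(
        name="volume_creation",
        resource_name="sharedpvc",
        size="5Gi",
        # Full access-mode name so the PVC manifest is valid for Kubernetes.
        modes=kfp.dsl.VOLUME_MODE_RWO,
    )
    # Opt this step out of result caching so the PVC is actually created on every
    # run instead of the step being served from a previous cached execution.
    vop.add_pod_annotation(
        name="pipelines.kubeflow.org/max_cache_staleness", value="P0D"
    )

    kfp.dsl.ContainerOp(
        name="simple task",
        image="bash",
        command=["sh", "-c"],
        arguments=["echo hello > /data/hello.txt"],
    ).add_pvolumes({"/data": vop.volume})
```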
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5257/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5257/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5254
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5254/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5254/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5254/events
https://github.com/kubeflow/pipelines/issues/5254
823,672,141
MDU6SXNzdWU4MjM2NzIxNDE=
5,254
Pip package kfp 1.4.1 and kfp-server-api 1.4.0 missing
{ "login": "DavidSpek", "id": 28541758, "node_id": "MDQ6VXNlcjI4NTQxNzU4", "avatar_url": "https://avatars.githubusercontent.com/u/28541758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavidSpek", "html_url": "https://github.com/DavidSpek", "followers_url": "https://api.github.com/users/DavidSpek/followers", "following_url": "https://api.github.com/users/DavidSpek/following{/other_user}", "gists_url": "https://api.github.com/users/DavidSpek/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavidSpek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavidSpek/subscriptions", "organizations_url": "https://api.github.com/users/DavidSpek/orgs", "repos_url": "https://api.github.com/users/DavidSpek/repos", "events_url": "https://api.github.com/users/DavidSpek/events{/privacy}", "received_events_url": "https://api.github.com/users/DavidSpek/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/cc @chensun \n\nRecently, KFP sdk has been released in separate schedule than backend. There hasn't been enough communication about it.\n\nKfp-server-api 1.4.0 missing was a mistake, but I no longer see the need to release it when 1.4.1 is out.", "Thanks for confirming this. This is something I wanted to clear up for the new Notebook images. Are you expecting to release new versions before the release still? ", "@chensun do you have a suggestion for David?", "I'm now running into a dependency issue when installing the latest `kfserving` package (0.5.1) and `kfp` 1.4.0: `kfp 1.4.0 requires kubernetes<12.0.0,>=8.0.0, but you have kubernetes 12.0.1 which is incompatible`\r\n\r\nIt would be nice to have this solved so the latest packages can be installed in the notebook images. ", "/assign @chensun ", "The kubernetes version issue is fixed by https://github.com/kubeflow/pipelines/pull/5349.\r\n\r\nAlso I think the question is answered.", "@Bobgy Will this require a push to PyPi?", "@DavidSpek yes, it will be in the next release", "Thanks for confirming!", "@Bobgy It seems like https://github.com/kubeflow/pipelines/pull/5349 is not included in kfp 1.4.1rc1. Is there a timeline for an updated release of the kfp package?", "@neuromage to comment on this", "@neuromage I could you chime in on this? It would be nice to have an updated version of the kfp SDK on PyPI so the notebook images can be updated. ", "We're planning an SDK release this week. Probably have an RC version out mid-week at least.\r\n\r\n/cc @chensun " ]
"2021-03-06T15:15:10"
"2021-05-10T00:50:48"
"2021-03-24T08:29:54"
CONTRIBUTOR
null
While updating some images I noticed a discrepancy between the releases of `kfp` and `kfp-server-api` on PyPI. `kfp` 1.4.1 is not available on PyPI but `kfp-server-api` 1.4.1 is available. Conversely, `kfp` 1.4.0 is available while `kfp-server-api` 1.4.0 is not. Is this intentional (were kfp 1.4.1 and kfp-server-api 1.4.0 never released) or has something gone wrong with pushing the packages? /cc @Bobgy
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5254/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5252
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5252/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5252/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5252/events
https://github.com/kubeflow/pipelines/issues/5252
823,281,436
MDU6SXNzdWU4MjMyODE0MzY=
5,252
I can't get sorted runs based on a metric using the kfp python package
{ "login": "rafaelalou", "id": 68055241, "node_id": "MDQ6VXNlcjY4MDU1MjQx", "avatar_url": "https://avatars.githubusercontent.com/u/68055241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelalou", "html_url": "https://github.com/rafaelalou", "followers_url": "https://api.github.com/users/rafaelalou/followers", "following_url": "https://api.github.com/users/rafaelalou/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelalou/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelalou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelalou/subscriptions", "organizations_url": "https://api.github.com/users/rafaelalou/orgs", "repos_url": "https://api.github.com/users/rafaelalou/repos", "events_url": "https://api.github.com/users/rafaelalou/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelalou/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2021-03-05T17:28:43"
"2021-03-08T15:17:59"
"2021-03-08T15:17:59"
NONE
null
### What steps did you take: I am trying to get runs sorted on a particular metric using the kfp python package. I am calling the `list_runs` function and passing the `sort_by` parameter. i.e. ```list_runs(sort_by='metric:accuracy_score asc')``` ### What happened: Getting the following error: ``` HTTP response body: {"error":"Failed to create list options: Invalid input error: Invalid sorting field: \"metric:accuracy_score\": %!s(\u003cnil\u003e)","message":"Failed to create list options: Invalid input error: Invalid sorting field: \"metric:accuracy_score\": %!s(\u003cnil\u003e)","code":3,"details":[{"@type":"type.googleapis.com/api.Error","error_message":"Invalid sorting field: \"metric:accuracy_score\": %!s(\u003cnil\u003e)","error_details":"Failed to create list options: Invalid input error: Invalid sorting field: \"metric:accuracy_score\": %!s(\u003cnil\u003e)"}]} ``` ### Environment: <!-- Please fill in those that seem relevant. --> KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> kfp 1.4.0 kfp-pipeline-spec 0.1.6 kfp-server-api 1.4.1 ### Anything else you would like to add: I was expecting this https://github.com/kubeflow/pipelines/blob/master/backend/src/apiserver/model/run.go#L107 to return true. Any ideas what the problem may be? /kind bug /area backend /area sdk
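A possible client-side workaround, sketched below, is to list the runs and sort them locally on the metric value. This is only a sketch under assumptions: the host URL is a placeholder, and it assumes each returned `ApiRun` exposes a `metrics` list whose entries have `name` and `number_value` fields.

```python
# Sketch of a client-side workaround (not a fix for the server-side sort_by).
# Assumptions: placeholder host, and ApiRun objects carrying a `metrics` list
# whose entries have `name` and `number_value` attributes.
import kfp

client = kfp.Client(host='http://localhost:8080')
runs = client.list_runs(page_size=100).runs or []

def metric_value(run, metric_name='accuracy_score'):
    # Return the run's metric value, or -inf when the metric is missing.
    for m in (run.metrics or []):
        if m.name == metric_name:
            return m.number_value
    return float('-inf')

# Ascending order, mirroring the intent of 'metric:accuracy_score asc'.
for run in sorted(runs, key=metric_value):
    print(run.id, run.name, metric_value(run))
```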
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5252/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5247
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5247/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5247/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5247/events
https://github.com/kubeflow/pipelines/issues/5247
822,788,555
MDU6SXNzdWU4MjI3ODg1NTU=
5,247
dsl ResourceOp with parameterized manifest is throwing validation error
{ "login": "deepk2u", "id": 1802638, "node_id": "MDQ6VXNlcjE4MDI2Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1802638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deepk2u", "html_url": "https://github.com/deepk2u", "followers_url": "https://api.github.com/users/deepk2u/followers", "following_url": "https://api.github.com/users/deepk2u/following{/other_user}", "gists_url": "https://api.github.com/users/deepk2u/gists{/gist_id}", "starred_url": "https://api.github.com/users/deepk2u/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deepk2u/subscriptions", "organizations_url": "https://api.github.com/users/deepk2u/orgs", "repos_url": "https://api.github.com/users/deepk2u/repos", "events_url": "https://api.github.com/users/deepk2u/events{/privacy}", "received_events_url": "https://api.github.com/users/deepk2u/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @chensun @Ark-kun ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Any update on this?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-03-05T06:24:52"
"2022-03-02T10:06:30"
null
CONTRIBUTOR
null
### What steps did you take: I am trying to pass the output of one step to resourceOp instead of the actual manifest ``` c_op = dsl.ContainerOp( name='patch', image='docker.intuit.com/data-mlplatform/mlp-components/service/patch', command=['python3','mlp/components/patch.py'], arguments=[ "--manifest", manifest, "--patch", patch], file_outputs={'train': '/output.txt'} ) #create training job op = dsl.ResourceOp( name="train", action="create", k8s_resource=c_op.outputs['train'], success_condition="status.trainingJobStatus==Completed,status.secondaryStatus==Completed", failure_condition="status.trainingJobStatus==Failed,status.secondaryStatus==Failed", attribute_outputs={ "job-name": "{.status.sageMakerTrainingJobName}", "model-path": "{.status.modelPath}", "cloud-watch-url": "{.status.cloudWatchLogUrl}" }) ``` ### What happened: it throws following error during workflow validation `Error: time="2021-03-04T22:09:12-08:00" level=fatal msg="/dev/stdin: templates.test-pipeline.tasks.train templates.train.resource.manifest must be a valid yaml"` After checking the workflow manifest, manifest value is separated by | (see below) which is the actual reason behind the issue, workflow writer should not have appended this | in case if the manifest is parameterized. ``` - name: train resource: action: create successCondition: status.trainingJobStatus==Completed,status.secondaryStatus==Completed failureCondition: status.trainingJobStatus==Failed,status.secondaryStatus==Failed manifest: | '{{inputs.parameters.patch-train}}' inputs: parameters: - {name: patch-train} outputs: parameters: - name: train-cloud-watch-url valueFrom: {jsonPath: '{.status.cloudWatchLogUrl}'} - name: train-job-name valueFrom: {jsonPath: '{.status.sageMakerTrainingJobName}'} - name: train-manifest valueFrom: {jsonPath: '{}'} - name: train-model-path valueFrom: {jsonPath: '{.status.modelPath}'} - name: train-name valueFrom: {jsonPath: '{.metadata.name}'} ``` ### What did you expect to happen: The compiler should write the value of manifest as `manifest: '{{inputs.parameters.patch-train}}'` in case if manifest value is parameterized. ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? 
local KFP version: 1.4.0 KFP SDK version: 1.4.0 ### Anything else you would like to add: When I fixed the workflow manually by removing | and ran the `argo lint` command which workflow validator does, it passes /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod --> pipeline.yaml ``` import os from kfp import dsl, components from training_component import mlp_train_op training_manifest = """ apiVersion: sagemaker.aws.amazon.com/v1 kind: TrainingJob metadata: name: xgboost-mnist spec: roleArn: SAGEMAKER_EXECUTION_ROLE_ARN region: us-east-1 algorithmSpecification: trainingImage: 811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest trainingInputMode: File outputDataConfig: s3OutputPath: s3://BUCKET_NAME/xgboost-mnist/models/ inputDataConfig: - channelName: train dataSource: s3DataSource: s3DataType: S3Prefix s3Uri: s3://BUCKET_NAME/xgboost-mnist/train/ s3DataDistributionType: FullyReplicated contentType: text/csv compressionType: None - channelName: validation dataSource: s3DataSource: s3DataType: S3Prefix s3Uri: s3://BUCKET_NAME/xgboost-mnist/validation/ s3DataDistributionType: FullyReplicated contentType: text/csv compressionType: None resourceConfig: instanceCount: 1 instanceType: ml.m4.xlarge volumeSizeInGB: 5 hyperParameters: - name: max_depth value: "5" - name: eta value: "0.2" - name: gamma value: "4" - name: min_child_weight value: "6" - name: silent value: "0" - name: objective value: multi:softmax - name: num_class value: "10" - name: num_round value: "10" stoppingCondition: maxRuntimeInSeconds: 86400 """ @dsl.pipeline( name="Test Pipeline", description="Kubeflow test pipeline") def pipeline(): train = mlp_train_op(manifest=training_manifest, patch='') if __name__ == "__main__": kfp.compiler.Compiler().compile(pipeline, __file__ + ".zip") ``` training_component.yaml ``` import logging import os import argparse from kfp import dsl, components import yaml def mlp_train_op(manifest, patch): c_op = dsl.ContainerOp( name='patch', image='docker.intuit.com/data-mlplatform/mlp-components/service/patch', command=['python3','mlp/components/patch.py'], arguments=[ "--manifest", manifest, "--patch", patch], file_outputs={'train': '/output.txt'} ) #create training job op = dsl.ResourceOp( name="train", action="create", k8s_resource=c_op.outputs['train'], success_condition="status.trainingJobStatus==Completed,status.secondaryStatus==Completed", failure_condition="status.trainingJobStatus==Failed,status.secondaryStatus==Failed", attribute_outputs={ "job-name": "{.status.sageMakerTrainingJobName}", "model-path": "{.status.modelPath}", "cloud-watch-url": "{.status.cloudWatchLogUrl}" }) return op ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5247/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5247/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5244
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5244/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5244/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5244/events
https://github.com/kubeflow/pipelines/issues/5244
822,627,271
MDU6SXNzdWU4MjI2MjcyNzE=
5,244
Cache Webhook "disabled" by default in Azure
{ "login": "aabbccddeeffgghhii1438", "id": 35978194, "node_id": "MDQ6VXNlcjM1OTc4MTk0", "avatar_url": "https://avatars.githubusercontent.com/u/35978194?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aabbccddeeffgghhii1438", "html_url": "https://github.com/aabbccddeeffgghhii1438", "followers_url": "https://api.github.com/users/aabbccddeeffgghhii1438/followers", "following_url": "https://api.github.com/users/aabbccddeeffgghhii1438/following{/other_user}", "gists_url": "https://api.github.com/users/aabbccddeeffgghhii1438/gists{/gist_id}", "starred_url": "https://api.github.com/users/aabbccddeeffgghhii1438/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aabbccddeeffgghhii1438/subscriptions", "organizations_url": "https://api.github.com/users/aabbccddeeffgghhii1438/orgs", "repos_url": "https://api.github.com/users/aabbccddeeffgghhii1438/repos", "events_url": "https://api.github.com/users/aabbccddeeffgghhii1438/events{/privacy}", "received_events_url": "https://api.github.com/users/aabbccddeeffgghhii1438/received_events", "type": "User", "site_admin": false }
[ { "id": 930619525, "node_id": "MDU6TGFiZWw5MzA2MTk1MjU=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/platform/azure", "name": "platform/azure", "color": "2515fc", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "@dtzar @eedorenko @sudivate", "In azure, the problem is that istio pods are not being injected to the pods in kubeflow. \r\n\r\nAzure enforces that the `MutatingWebhookConfiguration - istio-sidecar-injector` in kubeflow is being automatically edited by AKS to add the following match expression in the namespaceSelector:\r\n\r\n```\r\nmatchExpressions:\r\n- key: control-plane\r\n operator: DoesNotExist\r\n```\r\n\r\nSo the MutatingWebhookConfiguration looks like this:\r\n\r\n```\r\nnamespaceSelector:\r\n\tmatchExpressions:\r\n - key: control-plane\r\n operator: DoesNotExist\r\n\tmatchLabels:\r\n \t\tistio-injection: enabled\r\n```\r\n\r\nThis will exclude the kubeflow namespace, since the namespace have the label:\r\n\r\n```\r\nlabels:\r\n control-plane: kubeflow\r\n```\r\n\r\nTo solve this issue, you need to deactivate the admission enforcer from aks, using the following annotation in the MutatingWebhookConfiguration:\r\n\r\n```\r\napiVersion: admissionregistration.k8s.io/v1\r\nkind: MutatingWebhookConfiguration\r\nmetadata:\r\n annotations:\r\n admissions.enforcer/disabled: \"true\"\r\n name: istio-sidecar-injector\r\n```\r\n\r\nI believe, that the following issues are all related to this, so you don't need to disable istio (changing the DestinationRules from ISTIO_MUTUAL to DISABLE):\r\n\r\nhttps://github.com/kubeflow/pipelines/issues/4469\r\nhttps://github.com/kubeflow/kubeflow/issues/5271\r\nhttps://github.com/kubeflow/kubeflow/issues/5277\r\nhttps://github.com/Azure/AKS/issues/1771\r\n\r\n+info:\r\nhttps://docs.microsoft.com/en-us/azure/aks/faq#can-i-use-admission-controller-webhooks-on-aks\r\n", "Yes, the point of my issue was suggesting we can update docs by explaining the need for adding `admissions.enforcer/disabled: \"true\"`, or possibly add it in the azure overlays.", "@andre-lx I updated the istio-sidecar-injector MutatingWebhookConfiguration by adding the annotation as you described but istio sidecars were not injected with the pipeline components. I tried deleting the pods, restarting the deployments and even deleted and applied the pipeline component. Adding a label instead of an annotation also seemed to have no effect.\r\n\r\nThen I edited the istio-sidecar-injector and removed the matchExpression added by AKS in the namespaceSelector and that solved the issue.\r\n\r\nDid you recommend to add the annotation / label in the istio-sidecar-injector MutatingWebhookConfiguration only or is there more to it?\r\n\r\nI am installing Kubeflow 1.3 by installing individual components as described [here](https://github.com/kubeflow/manifests) \r\n", "Hi @danishsamad. Not sure, actually. \r\n\r\nIn my case, deleting the match expression didn't solve the issue, since AKS automatically add that match expression. \r\n\r\nI'm not sure if I did the manual delete or if the annotation deleted it.\r\n\r\nMaybe I added the annotation and eliminated the match expression manually.\r\n\r\nAnyway, in new clusters, the annotation worked as expected -> the match expression is not added and the istio pods are injected to the pipelines pods. ", "@andre-lx Thanks for the fix. I'll be looking at adding this to the Istio installation used by [ArgoFlow-Azure](https://github.com/argoflow/argoflow-azure), which is still in the very early stages but will hopefully have more integrations with Azure soon. 
Any help to add Azure integrations to this deployment are greatly appreciated.", "I just ran into the same problem installing manifests 1.3.1 using kustomize.\r\nAdding the label `admissions.enforcer/disabled: \"true\"` at runtime (even after restrting all cluster pods) did not work for me.\r\nAdding the annotation before the installation, however, **did work** (i.e. modfying or overlaying the MutatingWebhookConfiguration in common/istio-1-9/istio-install/base/install.yaml). No other modifications needed.\r\nThanks!", "Hi, k8s newbie here, do you mind providing more details on how to solve this issue @TobiasGoerke ?", "Hello @Roman-Ka, there's a file in your manifests folder at `common/istio-1-9/istio-install/base/install.yaml` that you will somehow need to modify. The easiest (but least maintainable way) is to just edit the file directly. \r\nThere a resource hidden in there that looks like this:\r\n\r\n```\r\napiVersion: admissionregistration.k8s.io/v1beta1\r\nkind: MutatingWebhookConfiguration\r\nmetadata:\r\n name: istio-sidecar-injector\r\n labels:\r\n istio.io/rev: default\r\n install.operator.istio.io/owning-resource: unknown\r\n operator.istio.io/component: \"Pilot\"\r\n app: sidecar-injector\r\n release: istio\r\n ....\r\n```\r\n\r\nHere you will need to add the annotation so that it looks like this:\r\n```\r\napiVersion: admissionregistration.k8s.io/v1beta1\r\nkind: MutatingWebhookConfiguration\r\nmetadata:\r\n name: istio-sidecar-injector\r\n annotations: \r\n admissions.enforcer/disabled: \"true\"\r\n labels:\r\n istio.io/rev: default\r\n install.operator.istio.io/owning-resource: unknown\r\n operator.istio.io/component: \"Pilot\"\r\n app: sidecar-injector\r\n release: istio\r\n ....\r\n```\r\n\r\nThen install Kubeflow using that folder and you should be good.\r\n\r\nHope that helps!\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-03-05T00:30:56"
"2022-03-11T06:04:49"
"2022-03-11T06:04:49"
NONE
null
### What happened: [A clear and concise description of what the bug is.] Right now, mutating webhooks are used for components such as the cache-server. Previously there was an issue with the knative webhook, so the label "control-plane" was attached to prevent the webhook from triggering all the time. (Refer to https://github.com/kubeflow/kubeflow/issues/4511). However, Azure by default adds the below namespace selector to mutatingwebhooks to prevent applying to AKS internal namespaces. (https://github.com/Azure/AKS/issues/1771) ``` namespaceSelector: matchExpressions: - key: control-plane operator: DoesNotExist ``` As the KF namespace comes with "control-plane: kubeflow", this causes the cache server to fail to mutate any pods in Kubeflow. ### What did you expect to happen: It seems unfair to expect Kubeflow to fix this issue, as this dependency is inherently caused by Azure upstream. Perhaps we can update the Azure docs / default deploy to tell the users that these components won't work as intended? ### Environment: <!-- Please fill in those that seem relevant. --> Azure How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> 1.2 KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> 1.4 ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5244/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5240
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5240/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5240/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5240/events
https://github.com/kubeflow/pipelines/issues/5240
822,412,063
MDU6SXNzdWU4MjI0MTIwNjM=
5,240
Presubmit test failing 03/04/2021
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-03-04T19:00:01"
"2021-03-04T20:43:48"
"2021-03-04T20:43:48"
COLLABORATOR
null
Error: ``` Traceback (most recent call last): File "/samples/core/train_until_good/train_until_good.py", line 1, in <module> import kfp; kfp.components.default_base_image_or_builder="gcr.io/google-appengine/python:2020-03-31-141326" File "/usr/local/lib/python3.7/site-packages/kfp-1.4.1-py3.7.egg/kfp/__init__.py", line 23, in <module> from . import dsl File "/usr/local/lib/python3.7/site-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/__init__.py", line 17, in <module> from ._pipeline import Pipeline, pipeline, get_pipeline_conf, PipelineConf File "/usr/local/lib/python3.7/site-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/_pipeline.py", line 20, in <module> from kfp.dsl import _component_bridge File "/usr/local/lib/python3.7/site-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/_component_bridge.py", line 26, in <module> from kfp.dsl import component_spec as dsl_component_spec File "/usr/local/lib/python3.7/site-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/component_spec.py", line 21, in <module> from kfp.dsl import type_utils File "/usr/local/lib/python3.7/site-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/type_utils.py", line 26, in <module> 'model': ontology_artifacts.Model.get_artifact_type(), File "/usr/local/lib/python3.7/site-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/ontology_artifacts.py", line 37, in get_artifact_type with open(schema_file_path) as schema_file: FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.7/site-packages/kfp-1.4.1-py3.7.egg/kfp/dsl/ontology_type_schemas/model.yaml' The command '/bin/sh -c set -e; < /samples/sample_config.json jq .[].file --raw-output | while read pipeline_yaml; do pipeline_py="${pipeline_yaml%.yaml}"; mv "$pipeline_py" "${pipeline_py}.tmp"; echo 'import kfp; kfp.components.default_base_image_or_builder="gcr.io/google-appengine/python:2020-03-31-141326"' | cat - "${pipeline_py}.tmp" > "$pipeline_py"; dsl-compile --py "$pipeline_py" --output "$pipeline_yaml" || python3 "$pipeline_py"; done' returned a non-zero code: 1 ERROR ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1 ``` The yaml file was added by #5197 I think the fix should be include these yaml files in setuptools. /cc @dushyanthsc @capri-xiyue
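One plausible way to bundle the missing schema files is via `package_data` in `setup.py`. The sketch below is illustrative only, not the fix that actually landed; the package path is inferred from the traceback above rather than taken from the real setup.py.

```python
# Illustrative sketch only: ship the ontology type schema YAML files with the
# package so they resolve at runtime. The path is inferred from the
# FileNotFoundError above; the actual fix may differ.
from setuptools import setup, find_packages

setup(
    name='kfp',
    packages=find_packages(),
    package_data={
        'kfp.dsl': ['ontology_type_schemas/*.yaml'],
    },
    include_package_data=True,
)
```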
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5240/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5238
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5238/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5238/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5238/events
https://github.com/kubeflow/pipelines/issues/5238
821,826,483
MDU6SXNzdWU4MjE4MjY0ODM=
5,238
Issue in decoding the output artifact fetched using self._run_api.read_artifact() in Kubeflow client
{ "login": "vishalsmb", "id": 30661709, "node_id": "MDQ6VXNlcjMwNjYxNzA5", "avatar_url": "https://avatars.githubusercontent.com/u/30661709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishalsmb", "html_url": "https://github.com/vishalsmb", "followers_url": "https://api.github.com/users/vishalsmb/followers", "following_url": "https://api.github.com/users/vishalsmb/following{/other_user}", "gists_url": "https://api.github.com/users/vishalsmb/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishalsmb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishalsmb/subscriptions", "organizations_url": "https://api.github.com/users/vishalsmb/orgs", "repos_url": "https://api.github.com/users/vishalsmb/repos", "events_url": "https://api.github.com/users/vishalsmb/events{/privacy}", "received_events_url": "https://api.github.com/users/vishalsmb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@vishalsmb I think https://github.com/kubeflow/pipelines/issues/4327#issuecomment-687255001 is probably a good example" ]
"2021-03-04T07:01:24"
"2021-03-30T02:42:53"
"2021-03-30T02:42:53"
NONE
null
I'm fetching an artifact output using the Kubeflow client. When I try to decompress the returned value, it returns junk characters along with the actual content. Attaching the sample code. Please let me know if anything is wrong with the decoding that I'm doing here. Code: `from kfp import Client import base64 import gzip kubeflowService = Client(host='') artifact = kubeflowService._run_api.read_artifact(run_id=run_id, node_id=key, artifact_name=artifact_name) value = artifact.data print(gzip.decompress(base64.b64decode(value)))`
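If the stray characters are tar headers — Argo archives artifacts as gzipped tarballs by default, which is an assumption here — then extracting the member with `tarfile` instead of only gunzipping should give the clean content. A minimal sketch follows; the host, `run_id`, `key`, and `artifact_name` are placeholders carried over from the snippet above.

```python
# Minimal sketch, assuming the artifact data is a base64-encoded .tar.gz
# (Argo's default archive format); the junk characters would be tar headers.
import base64
import io
import tarfile

from kfp import Client

kubeflowService = Client(host='')  # placeholder host, as in the snippet above
artifact = kubeflowService._run_api.read_artifact(
    run_id=run_id, node_id=key, artifact_name=artifact_name)

raw = base64.b64decode(artifact.data)
with tarfile.open(fileobj=io.BytesIO(raw), mode='r:gz') as tar:
    member = tar.getmembers()[0]  # single-file artifacts have one member
    content = tar.extractfile(member).read().decode('utf-8')
print(content)
```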
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5238/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5236
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5236/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5236/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5236/events
https://github.com/kubeflow/pipelines/issues/5236
821,698,248
MDU6SXNzdWU4MjE2OTgyNDg=
5,236
[Testing] postsubmit mkp test failure 2021.3.4
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "The failure: https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-mkp-e2e-test/1365452157549023232#1:build-log.txt%3A7816\r\n\r\nStep #1 - \"verify\": 37s Warning FailedScheduling pod/ml-pipeline-55bbc45946-m7s66 0/3 nodes are available: 2 Insufficient memory, 3 Insufficient cpu.", "It seems that the added resource request make it impossible to schedule ml-pipeline pod in the cluster.", "The default cluster for marketplace has 3 nodes each with 2 CPUs and 3GB memory allocatable.\r\nSo the new requirements set in #5158 seems too large, it marked 4GB memory for ml-pipeline server. I assume it was prepared more for large scale production env.\r\n\r\nWe need to reduce the request to fit into the default cluster.", "/cc @NikeNano ", "/assign\r\n\r\nI'll fix this", "Another issue to address is that, mkp test should be triggered on presubmit if a PR touches MKP manifest. Let me add the auto trigger.", "Some reading on requests & limits: https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-resource-requests-and-limits", "Some investigation into other OSS projects, argo doesn't provide default for requests/limits for the most part: https://github.com/argoproj/argo-workflows/tree/master/manifests.\r\nThere's a doc page to introduce how to improve cost optimization: https://argoproj.github.io/argo-workflows/cost-optimisation/.", "I think we'll need an operator manual documentation to tell people how to adjust their resource requests, but maybe as a next step.\r\nAdding some requests as default only make it better than before -- without setting it, it's assumed to be 0. Also, when requested number is reached, the Pod is not killed like a limit.", "> I think we'll need an operator manual documentation to tell people how to adjust their resource requests, but maybe as a next step.\r\n> \r\n\r\nI think this sounds good, also if we could provide some ball park figures. I guess however if we set them to high we will request more resources than actually used for most people. ", "Caused by https://github.com/kubeflow/pipelines/issues/5148" ]
"2021-03-04T02:50:29"
"2021-04-01T02:07:40"
"2021-03-04T06:03:41"
CONTRIBUTOR
null
First failing mkp test: https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-mkp-e2e-test/1365452157549023232 Root cause seems to be: https://github.com/kubeflow/pipelines/pull/5158
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5236/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5234
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5234/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5234/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5234/events
https://github.com/kubeflow/pipelines/issues/5234
821,046,204
MDU6SXNzdWU4MjEwNDYyMDQ=
5,234
Delete pipeline pods after running
{ "login": "andre-lx", "id": 44682155, "node_id": "MDQ6VXNlcjQ0NjgyMTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44682155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andre-lx", "html_url": "https://github.com/andre-lx", "followers_url": "https://api.github.com/users/andre-lx/followers", "following_url": "https://api.github.com/users/andre-lx/following{/other_user}", "gists_url": "https://api.github.com/users/andre-lx/gists{/gist_id}", "starred_url": "https://api.github.com/users/andre-lx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andre-lx/subscriptions", "organizations_url": "https://api.github.com/users/andre-lx/orgs", "repos_url": "https://api.github.com/users/andre-lx/repos", "events_url": "https://api.github.com/users/andre-lx/events{/privacy}", "received_events_url": "https://api.github.com/users/andre-lx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @andre-lx, is https://github.com/kubeflow/pipelines/issues/3938 the corresponding config?", "I think: \r\n```\r\n.set_ttl_seconds_after_finished(seconds: int)\r\n```\r\nis what you are looking for if you like to remove the underlying pipeline after if is finished for a specific pipeline, reference posted by Bobgy above will be for the all pipeline in the cluster. Docs can be found [here](https://kubeflow-pipelines.readthedocs.io/en/latest/_modules/kfp/dsl/_pipeline.html#PipelineConf.set_ttl_seconds_after_finished)", "Hi. \r\n\r\nI had already tested this, but I tried again with both gpu and non gpu pods, with this config:\r\n\r\n```python\r\n@dsl.pipeline(\r\n name=\"test\",\r\n description=\"test\",\r\n)\r\n\r\ndef some_func():\r\n some_task = kfp_generator().set_gpu_limit(1, 'nvidia')\r\n\r\n dsl.get_pipeline_conf().set_ttl_seconds_after_finished(20)\r\n```\r\n\r\nBut nothing happen after the 20 seconds. \r\n\r\nI am missing something? \r\n\r\nThanks @Bobgy and @NikeNano ", "I tested the following pipeline: \r\n\r\n```python\r\nimport kfp\r\nfrom kfp import dsl\r\n\r\n\r\ndef random_num_op(low, high):\r\n \"\"\"Generate a random number between low and high.\"\"\"\r\n return dsl.ContainerOp(\r\n name='Generate random number',\r\n image='python:alpine3.6',\r\n command=['sh', '-c'],\r\n arguments=['python -c \"import random; print(random.randint($0, $1))\" | tee $2', str(low), str(high), '/tmp/output'],\r\n file_outputs={'output': '/tmp/output'}\r\n )\r\n\r\n\r\ndef flip_coin_op():\r\n \"\"\"Flip a coin and output heads or tails randomly.\"\"\"\r\n return dsl.ContainerOp(\r\n name='Flip coin',\r\n image='python:alpine3.6',\r\n command=['sh', '-c'],\r\n arguments=['python -c \"import random; result = \\'heads\\' if random.randint(0,1) == 0 '\r\n 'else \\'tails\\'; print(result)\" | tee /tmp/output'],\r\n file_outputs={'output': '/tmp/output'}\r\n )\r\n\r\n\r\ndef print_op(msg):\r\n \"\"\"Print a message.\"\"\"\r\n return dsl.ContainerOp(\r\n name='Print',\r\n image='alpine:3.6',\r\n command=['echo', msg],\r\n )\r\n \r\n\r\n@dsl.pipeline(\r\n name='Conditional execution pipeline',\r\n description='Shows how to use dsl.Condition().'\r\n)\r\ndef flipcoin_pipeline():\r\n flip = flip_coin_op()\r\n with dsl.Condition(flip.output == 'heads'):\r\n random_num_head = random_num_op(0, 9)\r\n with dsl.Condition(random_num_head.output > 5):\r\n print_op('heads and %s > 5!' % random_num_head.output)\r\n with dsl.Condition(random_num_head.output <= 5):\r\n print_op('heads and %s <= 5!' % random_num_head.output)\r\n\r\n with dsl.Condition(flip.output == 'tails'):\r\n random_num_tail = random_num_op(10, 19)\r\n with dsl.Condition(random_num_tail.output > 15):\r\n print_op('tails and %s > 15!' % random_num_tail.output)\r\n with dsl.Condition(random_num_tail.output <= 15):\r\n print_op('tails and %s <= 15!' % random_num_tail.output)\r\n dsl.get_pipeline_conf().set_ttl_seconds_after_finished(20)\r\n\r\n```\r\n\r\nWatching the kubeflow namespace for the pods: \r\n\r\n```\r\nwatch kubectl get pods -n kubeflow\r\n```\r\n\r\nI could confirm that the pods where deleted ~20 seconds from completion. Also checked removing the line: \r\n\r\n```\r\n\r\ndsl.get_pipeline_conf().set_ttl_seconds_after_finished(20)\r\n```\r\nwhich resulted in that the pods where not deleted. \r\n\r\nCould you @andre-lx share your version of `kubeflow pipelines` and a full example where it is not working for you so I can try to reproduce it? ", "Hi @NikeNano. Thanks for the full example. 
I also tried with that example, and it didn't work. The only thing I added is the part of compiling and running the pipeline. \r\n\r\n```python\r\npipeline_func = flipcoin_pipeline\r\npipeline_filename = pipeline_func.__name__ + \".pipeline.tar.gz\"\r\n\r\nimport kfp.compiler as comp\r\n\r\ncomp.Compiler().compile(pipeline_func, pipeline_filename)\r\n\r\nclient = kfp.Client()\r\nmy_experiment = client.create_experiment(name='demo')\r\nmy_run = client.run_pipeline(my_experiment.id, pipeline_func.__name__, \r\n pipeline_filename)\r\n```\r\n\r\nAfter that, the pipeline run successful:\r\n\r\n![image](https://user-images.githubusercontent.com/44682155/111140363-2a78db80-857a-11eb-9550-7be0e8eed67a.png)\r\n\r\nBut the pods didn't get deleted:\r\n\r\n```cli\r\nkubectl get pods -n xx\r\nNAME READY STATUS RESTARTS AGE\r\nconditional-execution-pipeline-bh8ln-1858453776 0/2 Completed 0 2m48s\r\nconditional-execution-pipeline-bh8ln-2630484688 0/2 Completed 0 2m44s\r\nconditional-execution-pipeline-bh8ln-3054820269 0/2 Completed 0 2m51s\r\n```\r\n\r\nI can also see the \"flag\" in the workflow yaml:\r\n\r\n```yaml\r\napiVersion: argoproj.io/v1alpha1\r\nkind: Workflow\r\nmetadata:\r\n ...\r\nspec:\r\n ...\r\n ttlSecondsAfterFinished: 20\r\n\r\n```\r\n\r\nI'm using the following versions:\r\n\r\n```\r\npip freeze | grep kfp\r\nkfp==1.4.0\r\nkfp-pipeline-spec==0.1.6\r\n```\r\n\r\nAnd:\r\n\r\n```\r\nimages:\r\n- name: gcr.io/ml-pipeline/frontend\r\n newTag: 1.3.0\r\n- name: gcr.io/ml-pipeline/api-server\r\n newTag: 1.3.0\r\n- name: gcr.io/ml-pipeline/persistenceagent\r\n newTag: 1.3.0\r\n- name: gcr.io/ml-pipeline/scheduledworkflow\r\n newTag: 1.3.0\r\n- name: gcr.io/ml-pipeline/viewer-crd-controller\r\n newTag: 1.3.0\r\n- name: gcr.io/ml-pipeline/visualization-server\r\n newTag: 1.3.0\r\n```", "I can not recreate this issues @andre-lx, I am running Kubeflow pipelines 1.3 as well but get the pods get killed. In between different attempts it take some time(minutes extra when I try it out). \r\n\r\nComplete example:\r\n``` \r\nimport kfp\r\nfrom kfp import dsl\r\n\r\n\r\ndef random_num_op(low, high):\r\n \"\"\"Generate a random number between low and high.\"\"\"\r\n return dsl.ContainerOp(\r\n name='Generate random number',\r\n image='python:alpine3.6',\r\n command=['sh', '-c'],\r\n arguments=['python -c \"import random; print(random.randint($0, $1))\" | tee $2', str(low), str(high), '/tmp/output'],\r\n file_outputs={'output': '/tmp/output'}\r\n )\r\n\r\n\r\ndef flip_coin_op():\r\n \"\"\"Flip a coin and output heads or tails randomly.\"\"\"\r\n return dsl.ContainerOp(\r\n name='Flip coin',\r\n image='python:alpine3.6',\r\n command=['sh', '-c'],\r\n arguments=['python -c \"import random; result = \\'heads\\' if random.randint(0,1) == 0 '\r\n 'else \\'tails\\'; print(result)\" | tee /tmp/output'],\r\n file_outputs={'output': '/tmp/output'}\r\n )\r\n\r\n\r\ndef print_op(msg):\r\n \"\"\"Print a message.\"\"\"\r\n return dsl.ContainerOp(\r\n name='Print',\r\n image='alpine:3.6',\r\n command=['echo', msg],\r\n )\r\n \r\n\r\n@dsl.pipeline(\r\n name='Conditional execution pipeline',\r\n description='Shows how to use dsl.Condition().'\r\n)\r\ndef flipcoin_pipeline():\r\n flip = flip_coin_op()\r\n with dsl.Condition(flip.output == 'heads'):\r\n random_num_head = random_num_op(0, 9)\r\n with dsl.Condition(random_num_head.output > 5):\r\n print_op('heads and %s > 5!' % random_num_head.output)\r\n with dsl.Condition(random_num_head.output <= 5):\r\n print_op('heads and %s <= 5!' 
% random_num_head.output)\r\n\r\n with dsl.Condition(flip.output == 'tails'):\r\n random_num_tail = random_num_op(10, 19)\r\n with dsl.Condition(random_num_tail.output > 15):\r\n print_op('tails and %s > 15!' % random_num_tail.output)\r\n with dsl.Condition(random_num_tail.output <= 15):\r\n print_op('tails and %s <= 15!' % random_num_tail.output)\r\n dsl.get_pipeline_conf().set_ttl_seconds_after_finished(20)\r\n```\r\n\r\n```\r\npipeline_func = flipcoin_pipeline\r\npipeline_filename = pipeline_func.__name__ + \".pipeline.tar.gz\"\r\n\r\nimport kfp.compiler as comp\r\n\r\ncomp.Compiler().compile(flipcoin_pipeline, pipeline_filename)\r\n\r\nclient = kfp.Client(host=\"http://localhost:8080/\")\r\nmy_experiment = client.create_experiment(name='demo')\r\nmy_run = client.run_pipeline(my_experiment.id, pipeline_func.__name__, \r\n pipeline_filename)\r\n```\r\n\r\nI run this on a minkube cluster, using the following set up with commit c484cfa46cfae1e9d11b5d00e0799b8e52c15e33, instructions can be found here; https://github.com/kubeflow/pipelines/tree/1.3.0/manifests/kustomize#option-1-install-it-to-any-k8s-cluster", "Thanks for the quick response. \r\n\r\nI'm doing a few debug around the workflow and the pipelines code. And I notice a few things, that can maybe help solving my issue. \r\n\r\nI also downgrade the kfp sdk to version 1.3.0 to match all the resources:\r\n\r\n```\r\npip freeze | grep kfp\r\nkfp==1.3.0\r\nkfp-pipeline-spec==0.1.7\r\n```\r\n\r\nChanging the default value in the persistenceagent deployment, it works well, the pods are deleted after the 20 seconds for example:\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/19830416e77e2e9327e5865dc54da088cc73f55b/manifests/kustomize/base/pipeline/ml-pipeline-persistenceagent-deployment.yaml#L24-L25\r\n\r\nThen, I set again the env variable to the default of 86400.\r\n\r\n\r\nI checked the logs, and I'm getting the following logs for the example you provided:\r\n\r\n```\r\nlevel=info msg=\"Skip syncing Workflow (conditional-execution-pipeline-mkr59): workflow marked as persisted.\"\r\nlevel=info msg=\"Success while syncing resource (xxxx/conditional-execution-pipeline-mkr59)\"\r\n```\r\n\r\nAfter checking the code, I saw that the logs are printed by the function:\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/0795597562e076437a21745e524b5c960b1edb68/backend/src/agent/persistence/worker/workflow_saver.go#L63-L68\r\n\r\nAnalysing the workflow yaml, I can see that the label, the ttl and the finish at exists:\r\n\r\n```yaml\r\napiVersion: argoproj.io/v1alpha1\r\nkind: Workflow\r\nmetadata:\r\n annotations:\r\n pipelines.kubeflow.org/kfp_sdk_version: 1.3.0\r\n pipelines.kubeflow.org/pipeline_compilation_time: ...\r\n pipelines.kubeflow.org/pipeline_spec: '...'\r\n labels:\r\n pipeline/persistedFinalState: \"true\"\r\n pipeline/runid: e8d0c54a-57fe-44d2-a242-8880e170ac92\r\n pipelines.kubeflow.org/kfp_sdk_version: 1.3.0\r\n workflows.argoproj.io/completed: \"true\"\r\n workflows.argoproj.io/phase: Succeeded\r\n name: conditional-execution-pipeline-mkr59\r\n namespace: xxxx\r\nspec:\r\n arguments: {}\r\n entrypoint: conditional-execution-pipeline\r\n serviceAccountName: default-editor\r\n templates:\r\n - dag:\r\n ...\r\n - dag:\r\n .\r\n .\r\n .\r\n - container:\r\n ...\r\n - container:\r\n .\r\n .\r\n .\r\n ttlSecondsAfterWorkflowFinished: 20\r\nstatus:\r\n finishedAt: \"2021-03-20T16:45:04Z\"\r\n nodes:\r\n ...\r\n phase: Succeeded\r\n startedAt: \"2021-03-20T16:44:54Z\"\r\n\r\n```\r\n\r\nSo, looking to the function, assuming for example 
that the run finished at 00h:10m and the time now, is 00h:12m, 12-10=2 minutes = 120 seconds, and 120 seconds are > than the 20 seconds defined for the ttl. So, why are this log being printed? \r\n\r\nI looks like that the ttl is not being passed, or is overrided by the default value. \r\n\r\n**Additional notes:**\r\n\r\nI also made the same tests without the env variable set in the deployment.\r\n\r\nI also try it out with resources version 1.4.1 and kfp 1.4.0:\r\n\r\n```\r\npip freeze | grep kfp\r\nkfp==1.4.0\r\nkfp-pipeline-spec==0.1.7\r\n```\r\n\r\n```yaml\r\nName: conditional-execution-pipeline-xsbfw\r\nNamespace: xxxx\r\nLabels: pipeline/persistedFinalState=true\r\n pipeline/runid=1bfdfa01-31c2-4fde-8892-0c78193b7b3e\r\n pipelines.kubeflow.org/kfp_sdk_version=1.4.0\r\n workflows.argoproj.io/completed=true\r\n workflows.argoproj.io/phase=Succeeded\r\nAnnotations: pipelines.kubeflow.org/kfp_sdk_version: 1.4.0\r\n pipelines.kubeflow.org/pipeline_compilation_time: ...\r\n pipelines.kubeflow.org/pipeline_spec: ...\r\n pipelines.kubeflow.org/run_name: flipcoin_pipeline\r\nAPI Version: argoproj.io/v1alpha1\r\nKind: Workflow\r\nMetadata:\r\n Creation Timestamp: 2021-03-20T18:50:26Z\r\n Generate Name: conditional-execution-pipeline-\r\nSpec:\r\n Arguments:\r\n Entrypoint: conditional-execution-pipeline\r\n Service Account Name: default-editor\r\n Templates:\r\n Dag:\r\n .\r\n .\r\n .\r\n Container:\r\n .\r\n .\r\n .\r\n Ttl Seconds After Finished: 20\r\nStatus:\r\n Finished At: 2021-03-20T18:50:34Z\r\n Nodes:\r\n ...\r\n Phase: Succeeded\r\n Started At: 2021-03-20T18:50:26Z\r\n````\r\n\r\nNo changes at all. \r\n\r\nThanks ", "I will give it a look and come back. Is this something you know more about @Bobgy? ", "Quick first notes, there are a two types of garbage collection at play here, the one `argo` does and the one tha persistence agent does. I am not sure though why it seems for you like the one related to the persistence agent `TTL_SECONDS_AFTER_WORKFLOW_FINISH ` affects the one set for argo: `ttlSecondsAfterWorkflowFinished` in this case I think the shorter one should decide when the workflow gets deleted but might be missing something. ", "I'd suggest to use `TTL_SECONDS_AFTER_WORKFLOW_FINISH`, and avoid using the argo one, because `TTL_SECONDS_AFTER_WORKFLOW_FINISH` will only GC the workflow after persistence agent has backed it up in KFP mysql DB. 
If you use the argo parameter and the workflow was not backed up before it was GCed, the record would be lost forever.\r\n\r\n> level=info msg=\"Skip syncing Workflow (conditional-execution-pipeline-mkr59): workflow marked as persisted.\"\r\nlevel=info msg=\"Success while syncing resource (xxxx/conditional-execution-pipeline-mkr59)\"\r\n\r\nThe logs are expected, because persistence agent loops over these workflows periodically, it won't take any actions until the TTL has reached.\r\n", "The conversation has been pretty long, are there any remaining questions not addressed?", "Found it!\r\n\r\nAfter checking the `workflow-controller` logs, I found the error:\r\n\r\n```\r\nE0324 14:29:49.195776 1 ttlcontroller.go:124] error deleting 'namespace_name/workflow_name': workflows.argoproj.io \"workflow_name\" is forbidden: User \"system:serviceaccount:kubeflow:argo\" cannot delete resource \"workflows\" in API group \"argoproj.io\" in the namespace \"namespace_name\"\r\n```\r\n\r\nhttps://github.com/kubeflow/manifests/blob/3e08dc102059def5a0b0d04560c7d119959bf506/contrib/argo/base/cluster-role.yaml#L36-L46\r\n\r\nSo, after add the missing verb, everything worked out with the `set_ttl_seconds_after_finished` config:\r\n\r\n\r\n```yaml \r\napiVersion: rbac.authorization.k8s.io/v1beta1\r\nkind: ClusterRole\r\nmetadata:\r\n labels:\r\n app: argo\r\n name: argo\r\nrules:\r\n- apiGroups:\r\n...\r\n- apiGroups:\r\n - argoproj.io\r\n resources:\r\n - workflows\r\n - workflows/finalizers\r\n verbs:\r\n - get\r\n - list\r\n - watch\r\n - update\r\n - patch\r\n - delete\r\n- apiGroups:\r\n...\r\n```\r\n\r\nThanks to both! \r\n" ]
"2021-03-03T12:02:46"
"2021-08-11T19:26:12"
"2021-03-24T14:41:52"
NONE
null
Hi. I was trying to delete the pods of the pipelines after they run. This would help a lot, for example with GPU pipeline functions. As an example: ```python def some_func(): some_task = kfp_generator().set_gpu_limit(1, 'nvidia') ``` In our environment, this pipeline step starts a GPU node in our pool, but the only way to delete this node is to delete the run. In the Argo project, I can see that this is possible using `podGC`: https://github.com/Duske/argo/blob/bd4750fbb9413eea8ced0ca642664f54fb5b3c47/examples/pod-gc-strategy.yaml#L9-L15 Is there currently any way of doing this through the kfp pipeline in Python? Thanks
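As the comments in this thread point out, `dsl.get_pipeline_conf().set_ttl_seconds_after_finished()` covers this from the SDK side. Below is a minimal self-contained sketch; the image, command, and TTL value are arbitrary, and note the later finding that the argo ClusterRole also needs the `delete` verb on workflows for the cleanup to actually happen.

```python
# Minimal sketch: ask Argo to garbage-collect the workflow (and its pods)
# shortly after the run finishes. Image, command, and TTL value are arbitrary.
import kfp
from kfp import dsl

def gpu_op():
    return dsl.ContainerOp(
        name='train',
        image='nvidia/cuda:11.0-base',
        command=['nvidia-smi'],
    ).set_gpu_limit(1, 'nvidia')

@dsl.pipeline(name='ttl-example', description='Delete pods after the run finishes')
def ttl_pipeline():
    gpu_op()
    # The workflow (and its pods) are deleted 60 seconds after completion.
    dsl.get_pipeline_conf().set_ttl_seconds_after_finished(60)

if __name__ == '__main__':
    kfp.compiler.Compiler().compile(ttl_pipeline, 'ttl_pipeline.yaml')
```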
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5234/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5232
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5232/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5232/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5232/events
https://github.com/kubeflow/pipelines/issues/5232
820,897,119
MDU6SXNzdWU4MjA4OTcxMTk=
5,232
Upgrade argo image to 2.12+
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign", "I found that there's now a maintained tool to extract licenses from go libraries: https://github.com/google/go-licenses.\r\n\r\nWe can use it instead of our home-made tool.\r\n\r\nUPDATE: I took a look and it seems this new tool doesn't support go modules, so it could only get licenses using a library's latest version IIUC. See https://github.com/google/go-licenses/issues/33.", "I'll have to keep using https://github.com/kubeflow/testing/tree/master/py/kubeflow/testing/go-license-tools", "Noticed a problem with kubeflow go-license-tools, although\r\n\r\n> The command `go list -m all` lists the current module and all its dependencies:\r\n\r\nI found that when using this command in argo repo, it lists a dependency: `github.com/fasthttp-contrib/websocket`. However, this repo doesn't have a license file.\r\n\r\nSo I kept investigating, and found that argo is not using this module. It's mentioned in `go.sum`, but not used anywhere in the repo.\r\n\r\nTherefore, I can conclude that `go list -m all` lists too many dependencies, including those not used.\r\n\r\nUPDATE: I take it back.\r\n\r\nI learned that I can use `go mod graph` to search for dependency graph. https://stackoverflow.com/a/63779350/8745218\r\n\r\nAnd I figured out that `fasthttp-contrib/websocket` is introduced by `github.com/gavv/httpexpect/v2 v2.0.3`\r\n\r\n```\r\n$ go mod graph | grep fasthttp-contrib/websocket\r\ngithub.com/gavv/httpexpect/v2@v2.0.3 github.com/fasthttp-contrib/websocket@v0.0.0-20160511215533-1f3b11f56072\r\n$ go mod graph | grep httpexpect\r\ngithub.com/argoproj/argo/v2 github.com/gavv/httpexpect/v2@v2.0.3\r\n```\r\n\r\nhttpexpect is only used in tests, but not built in the binary, so it's not a license issue for us.", "In summary, two dependencies are problematic:\r\n\r\n* https://github.com/yudai/pp mentions it is licensed under MIT, but it doesn't keep a copy of MIT license in the repo.\r\n* https://github.com/fasthttp-contrib/websocket doesn't have a license\r\n\r\nBoth of them were imported by httpexpect v2.0.3 (a dev dependency), ~~so I guess httpexpect isn't very well maintained either~~.\r\nWell, httpexpect fixed this problem for fasthttp-contrib/websocket in https://github.com/gavv/httpexpect/issues/74, and released 2.1.0 and 2.2.0 after that.\r\n\r\nBecause they are dev dependencies not built into images, we don't need to worry about them.", "There are several images I'll need to generate license info, so I built a better tool to automate this: https://github.com/Bobgy/go-mod-licenses.", "@Bobgy Just came across this, thought it would be relevant. 
https://blog.argoproj.io/argo-workflows-v3-0-4d0b69f15a6e", "@DavidSpek thanks, rationale for choosing 2.12 was recorded in https://github.com/kubeflow/pipelines/issues/4553#issuecomment-768018127", "Hmm, I just discovered https://github.com/github/licensed, it's written in Ruby, but it seems to support a wide range of languages and a robust workflow for working with licenses & cache.", "I'm starting to notice some limitations of my approach:\r\n* dev dependencies are pulled in as well -- this greatly increases number of licenses and their problems to deal with (but in fact they may not be in the binary).\r\n* only supports repos using one single license (not yet seeing an exception)", "/reopen\r\nI also need to update gcp marketplace manifests", "@Bobgy: Reopened this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5232#issuecomment-795302385):\n\n>/reopen\r\n>I also need to update gcp marketplace manifests\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>" ]
"2021-03-03T08:56:23"
"2021-03-11T06:03:11"
"2021-03-11T06:03:10"
CONTRIBUTOR
null
Part of https://github.com/kubeflow/pipelines/issues/4553 /cc @NikeNano
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5232/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5232/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5230
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5230/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5230/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5230/events
https://github.com/kubeflow/pipelines/issues/5230
820,769,343
MDU6SXNzdWU4MjA3NjkzNDM=
5,230
403 error or empty response when using KFP SDK to connect to AI Platform Pipelines / KFP standalone on GCP
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@Bobgy I also found this problem today. The solution you suggested it solves well thanks.\r\n\r\nI want to know why this changes happened, and where I can find out before this kind of changes occurs.", "Tnx @Bobgy , we also had the problem starting today. Your solution fixed it, though we had to use kfp==1.4.0\r\n\r\nThis was the stacktrace:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/thoma/git/fednot/valuation-kfp/pipelines/compile_and_upload.py\", line 60, in <module>\r\n main(parser.parse_args())\r\n File \"/home/thoma/git/fednot/valuation-kfp/pipelines/compile_and_upload.py\", line 49, in main\r\n upload.upload_pipelines(yaml_files)\r\n File \"/home/thoma/git/fednot/valuation-kfp/pipelines/helpers/upload.py\", line 91, in upload_pipelines\r\n client.pipeline_uploads.upload_pipeline(file_path, name=name)\r\n File \"/home/thoma/env_fednot_kfp/lib/python3.7/site-packages/kfp_server_api/api/pipeline_upload_service_api.py\", line 56, in upload_pipeline\r\n (data) = self.upload_pipeline_with_http_info(uploadfile, **kwargs) # noqa: E501\r\n File \"/home/thoma/env_fednot_kfp/lib/python3.7/site-packages/kfp_server_api/api/pipeline_upload_service_api.py\", line 139, in upload_pipeline_with_http_info\r\n collection_formats=collection_formats)\r\n File \"/home/thoma/env_fednot_kfp/lib/python3.7/site-packages/kfp_server_api/api_client.py\", line 330, in call_api\r\n _preload_content, _request_timeout)\r\n File \"/home/thoma/env_fednot_kfp/lib/python3.7/site-packages/kfp_server_api/api_client.py\", line 161, in __call_api\r\n _request_timeout=_request_timeout)\r\n File \"/home/thoma/env_fednot_kfp/lib/python3.7/site-packages/kfp_server_api/api_client.py\", line 373, in request\r\n body=body)\r\n File \"/home/thoma/env_fednot_kfp/lib/python3.7/site-packages/kfp_server_api/rest.py\", line 275, in POST\r\n body=body)\r\n File \"/home/thoma/env_fednot_kfp/lib/python3.7/site-packages/kfp_server_api/rest.py\", line 228, in request\r\n raise ApiException(http_resp=r)\r\nkfp_server_api.rest.ApiException: (403)\r\nReason: Forbidden\r\nHTTP response headers: HTTPHeaderDict({'Content-Length': '1449', 'Content-Type': 'text/html; charset=utf-8', 'Date': 'Thu, 04 Mar 2021 15:53:04 GMT', 'Vary': 'Origin', 'X-Content-Type-Options': 'nosniff', 'X-Frame-Options': 'SAMEORIGIN', 'X-Xss-Protection': '0', 'Set-Cookie': 'S=cloud_datalab_tunnel=piSB961yvViWsUe-wYMAzq0HPAsm9e-Bb7n7dSGcfGw; Path=/; Max-Age=3600'})\r\nHTTP response body: \r\n<!DOCTYPE html>\r\n<html lang=en>\r\n <meta charset=utf-8>\r\n <meta name=viewport content=\"initial-scale=1, minimum-scale=1, width=device-width\">\r\n <title>Error 403 (Forbidden)!!1</title>\r\n <style>\r\n *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54.png) no-repeat;margin-left:-5px}@media only screen and 
(min-resolution:192dpi){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}\r\n </style>\r\n <a href=//www.google.com/><span id=logo aria-label=Google></span></a>\r\n <p><b>403.</b> <ins>That’s an error.</ins>\r\n <p> <ins>That’s all we know.</ins>\r\n ```", "@Bobgy thank you! I was poking around for quite a while this morning and now wish I bothered to look at the most recent issues first! 😄 \r\n\r\nI confirmed it works both with `kfp==1.0.4` and `kfp==1.4.0`. \r\n\r\nThis was on an AI Platforms Pipelines instance version `1.0.4`.", "Also, for GitHub search indexing purposes, the empty response we were seeing when calling `list_pipelines()` for us was:\r\n\r\n```\r\n{'next_page_token': None, 'pipelines': None, 'total_size': None}\r\n```\r\n\r\n_(that's what I was searching for initially when looking for solutions)_", "@Bobgy thanks so much for the fix!\r\n\r\nInteresting note: we have a script for uploading and running pipelines that was creating a KFP Client in the following way (replace * with the KFP instance’s hostname):\r\n\r\n```\r\nkfp.Client(host='*.pipelines.googleusercontent.com')\r\n```\r\n\r\nAs you suggested, adding https:// fixes the problem. However, the problem only started appearing for us yesterday when we were running the script manually. Our Cloud Build pipelines use the same script and were unaffected.", "@hahns0lo you should also change your Cloud Build pipelines as suggested here for code consistency concerns.", "Issue has been resolved. Please follow guidance if you encounter this issue. " ]
"2021-03-03T05:50:35"
"2021-04-09T00:36:41"
"2021-04-09T00:36:41"
CONTRIBUTOR
null
The problem just started happening yesterday. To resolve the problem, ensure you are doing BOTH of the following: 1. Use "kfp" SDK version 0.5.2 or >= 1.0.4: e.g. `pip install kfp==0.5.2` or `pip install kfp==1.0.4` or `pip install --upgrade kfp`. You can check which version you are using via `pip list | grep kfp`. 2. Use explicit https protocol, e.g. from Python: ``` import kfp client = kfp.Client(host='https://*.pipelines.googleusercontent.com') ``` when following [the documentation to connect to KFP host in AI Platform Pipelines / KFP standalone on GCP](https://cloud.google.com/ai-platform/pipelines/docs/connecting-with-sdk).
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5230/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5230/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5229
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5229/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5229/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5229/events
https://github.com/kubeflow/pipelines/issues/5229
820,125,839
MDU6SXNzdWU4MjAxMjU4Mzk=
5,229
missing "hidden input" type for enabling credential input to pipeline
{ "login": "klaimane", "id": 23376544, "node_id": "MDQ6VXNlcjIzMzc2NTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/23376544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/klaimane", "html_url": "https://github.com/klaimane", "followers_url": "https://api.github.com/users/klaimane/followers", "following_url": "https://api.github.com/users/klaimane/following{/other_user}", "gists_url": "https://api.github.com/users/klaimane/gists{/gist_id}", "starred_url": "https://api.github.com/users/klaimane/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klaimane/subscriptions", "organizations_url": "https://api.github.com/users/klaimane/orgs", "repos_url": "https://api.github.com/users/klaimane/repos", "events_url": "https://api.github.com/users/klaimane/events{/privacy}", "received_events_url": "https://api.github.com/users/klaimane/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "@klaimane I'd suggest mounting k8s secret into related steps, so it's not handle by the pipeline system.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-02T15:28:41"
"2022-04-28T18:00:19"
"2022-04-28T18:00:19"
NONE
null
I need to pass credentials to the pipeline in order to authenticate, but currently all pipeline input parameters are visible in the UI and in the pipeline logs. We need a dedicated input type that is hidden (e.g. displays asterisks) to enable users to authenticate securely without the credential showing up in the pipeline UI or logs.
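A hedged sketch of the workaround suggested in the comment thread above (mount a Kubernetes secret into the affected step instead of passing the credential as a pipeline parameter). The secret name, key, step name, image, and `my-tool` command are illustrative placeholders.

```python
from kfp import dsl
from kubernetes import client as k8s_client

@dsl.pipeline(name='hidden-credential-example')
def pipeline():
    task = dsl.ContainerOp(
        name='authenticate',
        image='alpine:3.12',
        command=['sh', '-c', 'my-tool login --token "$API_TOKEN"'],
    )
    # The credential is injected from a pre-created k8s secret, so it never
    # appears as a pipeline parameter in the UI or in the run's parameter list.
    task.add_env_variable(
        k8s_client.V1EnvVar(
            name='API_TOKEN',
            value_from=k8s_client.V1EnvVarSource(
                secret_key_ref=k8s_client.V1SecretKeySelector(
                    name='my-credentials', key='api-token'))))
```

Because the value comes from the cluster-side secret, it never shows up among the run parameters in the UI, although the component itself must still avoid echoing it to its own logs.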
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5229/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5223
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5223/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5223/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5223/events
https://github.com/kubeflow/pipelines/issues/5223
819,261,868
MDU6SXNzdWU4MTkyNjE4Njg=
5,223
[Kubeflow Dex Distribution] KF Pipelines 100% Unusable - MULTIPLE PEOPLE REPORTING
{ "login": "ReggieCarey", "id": 10270182, "node_id": "MDQ6VXNlcjEwMjcwMTgy", "avatar_url": "https://avatars.githubusercontent.com/u/10270182?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ReggieCarey", "html_url": "https://github.com/ReggieCarey", "followers_url": "https://api.github.com/users/ReggieCarey/followers", "following_url": "https://api.github.com/users/ReggieCarey/following{/other_user}", "gists_url": "https://api.github.com/users/ReggieCarey/gists{/gist_id}", "starred_url": "https://api.github.com/users/ReggieCarey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ReggieCarey/subscriptions", "organizations_url": "https://api.github.com/users/ReggieCarey/orgs", "repos_url": "https://api.github.com/users/ReggieCarey/repos", "events_url": "https://api.github.com/users/ReggieCarey/events{/privacy}", "received_events_url": "https://api.github.com/users/ReggieCarey/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false } ]
null
[ "Note most users of Kubeflow experiencing this bug cannot afford to wait 6-9 months for the next release which may or may not address this problem. This bug breaks this product. We need a solution immediately. This part of the product is considered STABLE. ", "Kubernetes installed via Kubespray ", "Hi @ReggieCarey, sorry for your bad experience.\r\n\r\nKF 1.2.0 with Dex on K8s is a community maintained distribution.\r\n\r\n/assign @yanniszark \r\nCan you take a look or suggest who else can take a look at this issue?", "@ReggieCarey I'm sorry for your bad experience with 1.2. From Arrikto's side, we supported this distribution until Kubeflow 1.1. In Kubeflow 1.2, we started transitioning out of `kfctl` and do not support this distribution in 1.2.\r\n\r\nOur current efforts are focused on releasing 1.3. In Kubeflow 1.3, we plan to support a similar distribution but without `kfctl` at all. To give you an idea of the current timeline, the release candidate for 1.3 is March 15th. Distributions will be tested and the release will be finalized soon after.", "@yanniszark thanks for looking into this. Some more info. I disabled sidecar injection in the kubeflow namespace. This allowed ml pipeline to connect with MySQL and populate 1 pipeline. I can't get to the pipeline dashboard from the KF dashboard but I can get to it via kubectl proxy. Experiments still don't work. Need to check the cache server status. \n\nQ: Why are you consuming MySQL instead of a SQL service?", "So I guess there were some miscommunication, I thought Arrikto was still supporting Kubeflow 1.2 with dex. If that's not the case, we should have deleted the dex distribution from kubeflow.org documentation during the release.", "@Bobgy Just dropping in here quickly as I came across the issue. I think what @yanniszark is said is just that deployment with `kfctl` is not being supported by Arrikto for 1.2. I believe the Dex part is still included in this though (as one of the OIDC provider options). Please correct me if I am wrong. ", "You are right, I was only referring to the distribution", "@Bobgy is there a solution for that?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "We are experiencing the same problem in 1.4.0 (on-prem installation with Dex). ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "For everyone watching, any time you see an error like `[mysql] XXXX/XX/XX XX:XX:XX packets.go:36: unexpected EOF` in the `cache-server` pods, you are almost certainly dealing with a network-level issue between your pods and the MySQL database.\r\n\r\nThe most likely case is that there is some __asymmetric routing__ going on. That is, your Pods might be able to create a connection to MySQL, but MySQL might not have a route back to your Pod. Note, [MySQL connections are complex](https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_connection_phase.html) and will sometimes initiate new TCP connections back to the client, which will fail in the previous case.\r\n\r\nI have a more detailed write up on this issue: https://github.com/kubeflow/pipelines/issues/3763#issuecomment-1547009188" ]
"2021-03-01T20:54:25"
"2023-05-14T21:58:36"
null
NONE
null
### What steps did you take: KFP in KF 1.2.0 with Dex on K8s 1.18.9 does not work. I receive an error in the KF dashboard when attempting to view pipelines: Error: failed to retrieve list of pipelines. Click Details for more information. -> An error occured, no healthy upstream ### What happened: Installed Kubeflow 1.2.0 on-prem as per installation instructions. Any attempt to see pipelines or use pipelines fails. ### What did you expect to happen: I expected to be able to use Pipelines ### Environment: Kubernetes version 1.18.9 Kubeflow version 1.2.0 Installed with Dex, configured after deploy to use LDAP. ml-pipelines pod fails to start completely. Logs indicate How did you deploy Kubeflow Pipelines (KFP)? Installed Kubeflow Pipelines as part of Kubeflow installation for on-prem with dex. KFP version: 1.0.4 KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> I HAVEN'T GOTTEN FAR ENOUGH TO USE THIS! ### Anything else you would like to add: ml-pipeline pod refuses to run: ``` $ kubectl get pods -n kubeflow NAME READY STATUS RESTARTS AGE admission-webhook-bootstrap-stateful-set-0 1/1 Running 0 4d20h admission-webhook-deployment-5d9ccb5696-f6zs6 1/1 Running 0 4d20h application-controller-stateful-set-0 1/1 Running 0 4d21h argo-ui-684bcb587f-z84nh 1/1 Running 0 4d16h cache-deployer-deployment-6667847478-7h2w8 2/2 Running 2 4d21h cache-server-bd9c859db-755zj 2/2 Running 527 4d21h centraldashboard-895c4c768-46xgc 1/1 Running 0 4d21h jupyter-web-app-deployment-6588c6f544-c5m45 1/1 Running 0 3d3h katib-controller-75c8d47f8c-5k2tr 1/1 Running 0 4d21h katib-db-manager-6c88c68d79-cgxdh 1/1 Running 0 4d16h katib-mysql-858f68f588-zvhnj 1/1 Running 0 4d21h katib-ui-68f59498d4-bkscp 1/1 Running 0 4d21h kfserving-controller-manager-0 2/2 Running 0 36h kubeflow-pipelines-profile-controller-69c94df75b-xtpfj 1/1 Running 0 4d21h metacontroller-0 1/1 Running 0 4d21h metadata-db-757dc9c7b5-pt75k 1/1 Running 0 4d21h metadata-envoy-deployment-6ff58757f6-57pjc 1/1 Running 0 4d21h metadata-grpc-deployment-76d69f69c8-xcmjk 1/1 Running 3 4d21h metadata-writer-6d94ffb7df-mhnxj 2/2 Running 1 4d21h minio-66c9cd74c9-jrss8 1/1 Running 0 4d21h ml-pipeline-54989c9946-s2f46 1/2 Running 926 4d21h ml-pipeline-persistenceagent-7f6bf7646-ldct6 2/2 Running 0 4d21h ml-pipeline-scheduledworkflow-66db7bcf5d-q244j 2/2 Running 0 4d16h ml-pipeline-ui-756b58fb-gpwn9 2/2 Running 0 4d21h ml-pipeline-viewer-crd-58f59f87db-dmj2l 2/2 Running 2 4d21h ml-pipeline-visualizationserver-6f9ff4974-k4cf9 2/2 Running 0 4d21h mpi-operator-77bb5d8f4b-w4dhj 1/1 Running 0 4d21h mxnet-operator-68b688bb69-b5985 1/1 Running 0 4d16h mysql-7694c6b8b7-jthp6 2/2 Running 0 4d17h notebook-controller-deployment-58447d4b4c-6ll57 1/1 Running 0 4d21h profiles-deployment-78d4549cbc-z9lld 2/2 Running 0 4d21h pytorch-operator-b79799447-f8nnl 1/1 Running 0 4d21h seldon-controller-manager-5fc5dfc86c-nh2qm 1/1 Running 0 4d21h spark-operatorsparkoperator-67c6bc65fb-8tgn5 1/1 Running 0 4d21h tf-job-operator-5c97f4bf7-g5vtw 1/1 Running 0 4d21h workflow-controller-5c7cc7976d-5n6tb 1/1 Running 0 4d16h ``` ``` $ kubectl logs -n kubeflow ml-pipeline-54989c9946-s2f46 ml-pipeline-api-server I0301 20:22:00.153656 6 client_manager.go:134] Initializing client manager I0301 20:22:00.153817 6 config.go:50] Config DBConfig.ExtraParams not specified, skipping [mysql] 2021/03/01 20:22:01 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:22:02 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:22:04 packets.go:36: unexpected EOF [mysql] 
2021/03/01 20:22:07 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:22:10 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:22:13 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:22:16 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:22:23 packets.go:36: unexpected EOF ``` ``` $ kubectl logs -n kubeflow mysql-7694c6b8b7-jthp6 mysql ... MySQL init process done. Ready for start up. 2021-02-25 03:04:17 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). 2021-02-25 03:04:17 0 [Note] mysqld (mysqld 5.6.44) starting as process 1 ... 2021-02-25 03:04:17 1 [Note] Plugin 'FEDERATED' is disabled. 2021-02-25 03:04:17 1 [Note] InnoDB: Using atomics to ref count buffer pool pages 2021-02-25 03:04:17 1 [Note] InnoDB: The InnoDB memory heap is disabled 2021-02-25 03:04:17 1 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 2021-02-25 03:04:17 1 [Note] InnoDB: Memory barrier is not used 2021-02-25 03:04:17 1 [Note] InnoDB: Compressed tables use zlib 1.2.11 2021-02-25 03:04:17 1 [Note] InnoDB: Using Linux native AIO 2021-02-25 03:04:17 1 [Note] InnoDB: Using CPU crc32 instructions 2021-02-25 03:04:17 1 [Note] InnoDB: Initializing buffer pool, size = 128.0M 2021-02-25 03:04:17 1 [Note] InnoDB: Completed initialization of buffer pool 2021-02-25 03:04:17 1 [Note] InnoDB: Highest supported file format is Barracuda. 2021-02-25 03:04:17 1 [Note] InnoDB: 128 rollback segment(s) are active. 2021-02-25 03:04:17 1 [Note] InnoDB: Waiting for purge to start 2021-02-25 03:04:17 1 [Note] InnoDB: 5.6.44 started; log sequence number 1625997 2021-02-25 03:04:17 1 [Note] Server hostname (bind-address): '*'; port: 3306 2021-02-25 03:04:17 1 [Note] IPv6 is available. 2021-02-25 03:04:17 1 [Note] - '::' resolves to '::'; 2021-02-25 03:04:17 1 [Note] Server socket created on IP: '::'. 2021-02-25 03:04:17 1 [Warning] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory. 2021-02-25 03:04:17 1 [Warning] 'proxies_priv' entry '@ root@mysql-7694c6b8b7-jthp6' ignored in --skip-name-resolve mode. 2021-02-25 03:04:17 1 [Note] Event Scheduler: Loaded 0 events 2021-02-25 03:04:17 1 [Note] mysqld: ready for connections. Version: '5.6.44' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL) ``` Cache Server also is unable to connect to MYSQL ``` $ kubectl logs -n kubeflow cache-server-bd9c859db-755zj server 2021/03/01 20:19:21 Initing client manager.... 
[mysql] 2021/03/01 20:19:22 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:19:24 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:19:25 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:19:27 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:19:30 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:19:33 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:19:39 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:19:46 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:19:55 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:20:07 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:20:26 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:21:02 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:21:40 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:22:35 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:23:58 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:25:09 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:25:50 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:25:51 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:25:52 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:25:54 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:25:56 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:25:59 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:26:02 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:26:06 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:26:15 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:26:20 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:26:34 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:27:03 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:27:45 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:28:11 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:29:39 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:30:12 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:31:32 packets.go:36: unexpected EOF [mysql] 2021/03/01 20:32:07 packets.go:36: unexpected EOF F0301 20:32:07.437107 1 error.go:305] invalid connection goroutine 1 [running]: github.com/golang/glog.stacks(0xc000786600, 0xc0004790a0, 0x3f, 0x40) /go/pkg/mod/github.com/golang/glog@v0.0.0-20160126235308-23def4e6c14b/glog.go:769 +0xd4 github.com/golang/glog.(*loggingT).output(0x237c4c0, 0xc000000003, 0xc000479080, 0x20d8f16, 0x8, 0x131, 0x0) /go/pkg/mod/github.com/golang/glog@v0.0.0-20160126235308-23def4e6c14b/glog.go:720 +0x329 github.com/golang/glog.(*loggingT).printf(0x237c4c0, 0x3, 0x14ca0b3, 0x2, 0xc0006c58f8, 0x1, 0x1) /go/pkg/mod/github.com/golang/glog@v0.0.0-20160126235308-23def4e6c14b/glog.go:655 +0x14b github.com/golang/glog.Fatalf(0x14ca0b3, 0x2, 0xc0006c58f8, 0x1, 0x1) /go/pkg/mod/github.com/golang/glog@v0.0.0-20160126235308-23def4e6c14b/glog.go:1148 +0x67 github.com/kubeflow/pipelines/backend/src/common/util.TerminateIfError(0x1649b00, 0xc0005eca40) /go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:305 +0x79 main.initMysql(0x7ffefc6905bd, 0x5, 0x7ffefc6905cd, 0x5, 0x7ffefc6905dd, 0x4, 0x7ffefc6905ec, 0x7, 0x7ffefc6905fe, 0x4, ...) /go/src/github.com/kubeflow/pipelines/backend/src/cache/client_manager.go:157 +0x466 main.initDBClient(0x7ffefc6905bd, 0x5, 0x7ffefc6905cd, 0x5, 0x7ffefc6905dd, 0x4, 0x7ffefc6905ec, 0x7, 0x7ffefc6905fe, 0x4, ...) /go/src/github.com/kubeflow/pipelines/backend/src/cache/client_manager.go:71 +0x599 main.(*ClientManager).init(0xc0006c5db8, 0x7ffefc6905bd, 0x5, 0x7ffefc6905cd, 0x5, 0x7ffefc6905dd, 0x4, 0x7ffefc6905ec, 0x7, 0x7ffefc6905fe, ...) 
/go/src/github.com/kubeflow/pipelines/backend/src/cache/client_manager.go:57 +0x80 main.NewClientManager(0x7ffefc6905bd, 0x5, 0x7ffefc6905cd, 0x5, 0x7ffefc6905dd, 0x4, 0x7ffefc6905ec, 0x7, 0x7ffefc6905fe, 0x4, ...) /go/src/github.com/kubeflow/pipelines/backend/src/cache/client_manager.go:169 +0xab main.main() /go/src/github.com/kubeflow/pipelines/backend/src/cache/main.go:71 +0x367 ``` Attempted suggestions for repair (ALL fail - please do not suggest) 1) ISTIO disable ISTIO_MUTUAL -> DISABLE : This allows the mysql db to be populated but the KFP UI will NOT startup. 2) ISTIO configure STRICT vs PERMISSIVE : Pipelines and Jupyter Notebooks will not come up. The product as advertised online does not work on a vanilla on-prem, K8s installation. It appears to work on GCP, Azure, AwS, and possibly IBM. Provided diagnostic tools are not compatible with an on-prem installation: ``` $ kfp diagnose_me Google Cloud SDK is not installed, gcloud, gsutil and kubectl are required for this app to run. Please follow instructions at https://cloud.google.com/sdk/install to install the SDK. ``` /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5223/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5221
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5221/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5221/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5221/events
https://github.com/kubeflow/pipelines/issues/5221
819,193,971
MDU6SXNzdWU4MTkxOTM5NzE=
5,221
Issue with current implementation of data passing using volume.
{ "login": "boarder7395", "id": 37314943, "node_id": "MDQ6VXNlcjM3MzE0OTQz", "avatar_url": "https://avatars.githubusercontent.com/u/37314943?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boarder7395", "html_url": "https://github.com/boarder7395", "followers_url": "https://api.github.com/users/boarder7395/followers", "following_url": "https://api.github.com/users/boarder7395/following{/other_user}", "gists_url": "https://api.github.com/users/boarder7395/gists{/gist_id}", "starred_url": "https://api.github.com/users/boarder7395/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boarder7395/subscriptions", "organizations_url": "https://api.github.com/users/boarder7395/orgs", "repos_url": "https://api.github.com/users/boarder7395/repos", "events_url": "https://api.github.com/users/boarder7395/events{/privacy}", "received_events_url": "https://api.github.com/users/boarder7395/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks @boarder7395 for filing this issue. \r\n\r\nI think the solution for treating `mlpipeline-metadata-ui` and `metrics-ui artifacts` as special in this rewriter is reasonable. If you submit a PR, we can review it. You mentioned you had a couple of other options too. Would you mind outlining them in this issue?\r\n\r\nAlso, question on your use-case - where does the data actually reside? E.g., if it's on s3, would it be easier for you to operate on the blob store or do you need local file access? I'm assuming the latter but would like to check if that's indeed the case. If not, we're working on placeholders called `InputUri` and `OutputUri` which may help with letting the component directly operate on the location of the data in Cloud storage. \r\n\r\n/cc @chensun \r\n/cc @Bobgy ", "We have a few different use cases. But data is sourced from potentially two locations; s3, and internal databases. \r\n\r\nTo walk through an example pipeline:\r\n1) We extract the data in a single component and write that data to an fsx volume using the data_passing_using_volume implementation. \r\n2) The next component will do some preprocessing to the dataset, for this we are using spark kubernetes implementation with the components pod acting as the driver. This then outputs a preprocessed dataset.\r\n3) The preprocessed dataset is uploaded to s3 using a component that only does the upload operation.\r\n3) The next component trains a tensorflow model using the preprocessed dataset on the fsx volume.\r\n4) The model and tensorboard logs are uploaded in a separate component to s3.\r\n\r\nAs for potential other solutions:\r\n\r\nCurrently the workflow rewriter replaces all component artifacts with parameters that point to the location of a directory/file on the volume.\r\n\r\nInstead the new implementation would use the component input/output annotations to specify whether the input/output should be passed using artifacts or parameters or potentially both.\r\n\r\nOur use case would benefit from passing outputs using both parameters and artifacts while inputs would be handled only with parameters. This way data artifacts are uploaded to artifact storage for tracking and subsequent components can use the parameter reference location to avoid downloading large amounts of data from artifact storage.\r\n\r\nWhen a step in the pipeline outputs an artifact and a parameter, the subsequent step should use the parameter output from the previous component instead of the artifact to prevent downloading the data from artifact storage.\r\n\r\nBenefits of this implementation:\r\nData can be uploaded to artifact storage for tracking.\r\nThis approach removes the need to download data from artifact storage to run the next step since the data is already available on the volume.", "Okay so the more I do testing on this it seems like I cannot use both parameters and artifacts since both would try to be at the same mount point. \r\n\r\nI will continue to explore options in this space. ", "@neuromage I am curious about the inputUri, and outputURI. If it does what I think it does that could be the solution to my use case. \r\n\r\nIdeal workflow:\r\nSpark job reads data from a database and outputs to an outputURI in minio (aws:s3). The next component takes in the outputURI but maps it to an inputPath so the argo wait container handles the download from artifact storage to container storage. Would this be possible? 
Also if you have a branch for this work I'd love to take a look at the implementation.", "Most of the UX features currently do not support volumes. Most Kubernetes volumes can only be mounted to a single Pod at a time, so if the frontend or backend pod mounts the volume, it cannot be mounted to other pods for reading or writing.\r\nThe volume is not mounted to frontend or backend, so the UX cannot visualize the data in volumes, be it the volume data-passing method or the VolumeOp or manually mounted volumes.\r\n\r\nThe volume-based data passing method is currently suitable for the following scenario: The user develops a pipeline using the normal data-passing method to debug the pipeline on <100GB datasets. One the pipeline is debugged and ready for production, if more than 100GB data passing is needed, then the pipeline can be compiled in a special way to use volumes for data passing.\r\nThe main feature of `rewrite_data_passing_to_use_volumes` is that the pipeline code does not need to be changed in any way and remains portable - you can decide whether to use volumes at compilation time.\r\n\r\n>I believe instead this function should delete all artifacts except mlpipeline-ui-metadata, and mlpipeline-metrics artifacts.\r\n\r\nI'm not sure this will fully solve the issue. The `mlpipeline-ui-metadata` is usually a small JSON structure that just points to some other artifact data. But since the artifact data is in storage, the frontend and backend cannot reach it.\r\n\r\nA better solution would be to support volume-based data passing on the backend and frontend side: There should be a single persistent volume intended for data storage. That volume should be exposed as an NFS volume, so that multiple pods can read it and write to it. (This is similar to how Minio exposes a persistent disk as S3 storage.) The NFS volume will be mounted to the backend and frontend pods. This way the frontend and backend can access the data stored in the volume and the artifact preview and visualizations will work.\r\n\r\n@Bobgy What do you think about this proposal? (Expose the storage volume as NFS and mount to backend Pod).", "Agree with that. It aligns with the existing feature in https://github.com/kubeflow/pipelines/blob/master/docs/config/volume-support.md", "The proposed solution makes sense to me at first glance but one issue I’m having trouble understanding is if we’re using NFS volumes for artifact storage wouldn’t that mean we need to provision enough storage to hold all artifacts? For an example scenario imagine an average pipeline uses 1.5TB of data, and our users are running 15 experiments daily. In this case the daily required FSx storage would be 22.5 TB. If we cleared unused artifacts after 3 months then we would potentially require 2025 TB of storage allocated for artifacts. The cost of running volume only artifact storage then explodes. \r\n\r\nAlternatively if using FSx with lustre; a daemonset could be created that watches active pipelines and pushes artifacts to s3 by utilizing hsm_archive when the pipeline is done with those components. If the front end needs access to those artifacts at a later date lustre will pull the data from the cheap s3 object storage. I believe this will work for GCP as well since it seems like you can create a lustre file system there as well. With this setup storage could be reduced to 22.5TB (or less) since only while the pipeline is running will storage be on the FSx volume. 
\r\n\r\n\r\n<img width=\"823\" alt=\"Screen Shot 2021-03-22 at 4 07 59 PM\" src=\"https://user-images.githubusercontent.com/37314943/112051838-c8474a00-8b28-11eb-9b2b-28ad983fdf45.png\"> \r\n_Figure 1: Architecture using Lustre_\r\n\r\n&nbsp;\r\n&nbsp;\r\n\r\nAlternatively I have been testing an approach using outputUri. For this approach the spark component in the pipeline will write directly to minIO artifact storage and subsequent steps can use inputPath to download the data to local using argo wait container, or they can use inputUri and handle access to minIO directly. The limitation with this approach is idempotence is not guaranteed. Also currently kfp does not support passing between an outputUri and InputPath. I have done only basic testing to this point and have not tested whether step caching works with this configuration, any insights here would be greatly appreciated.\r\n\r\n<img width=\"989\" alt=\"Screen Shot 2021-03-22 at 3 47 09 PM\" src=\"https://user-images.githubusercontent.com/37314943/112051525-6a1a6700-8b28-11eb-93e7-48ddc1c2d665.png\"> \r\n_Figure 2: Artifact Passing using Only Uri’s_\r\n\r\n&nbsp;\r\n&nbsp;\r\n\r\n<img width=\"711\" alt=\"Screen Shot 2021-03-22 at 4 07 27 PM\" src=\"https://user-images.githubusercontent.com/37314943/112051766-b796d400-8b28-11eb-8cfa-bd5df6185009.png\"> \r\n_Figure 3: Uri Passing for Spark Only_\r\n\r\n&nbsp;\r\n&nbsp;", "@neuromage I did some local testing with inputUri and outputUri. In my simple test it seems like the UI does not recognize those as artifacts and therefore they're not tracked by the metadata service. Is that correct?", "Hi @boarder7395 yes I think InputUri and OutputIUri aren't supported yet in the UI. \r\n\r\nIt sounds like you want the first step of your pipeline (the Spark job) to read/write directly to S3, and the second step to download from S3 into local container, perform training, and then re-upload back to S3. The Evaluate step is the same as the Train step. Am I reading this right?\r\n\r\nIf so, then I think we can try to support this in our v2-compatible mode we've been experimenting with. It sounds like you'd want to be able to specify when the inputURI should be downloaded and when it shouldn't. Is this right or am I totally misunderstanding the problem?\r\n\r\nAlso, do you need minio here, or is this just drawn because that's how Argo works? In the v2 compatible pipeline, we have a custom launcher that does the data download/upload, which means you may not need minio. Let me know what you think. On my part, let me send a quick doc/readme to explain the v2-compatible mode. \r\n\r\n/cc @Bobgy \r\n", "@neuromage Yes you are reading that correctly. Being able to specify when to download and when not is exactly the functionality I was trying to accomplish. And your right I have no need for minIO in the diagrams above only included it because that’s how Argo works.\r\n\r\nAs for the quick doc/readme that would be greatly appreciated!!\r\n\r\nYou mentioned inputUri and outputUri aren’t supported yet in the UI. Curious if the planning with volume data passing would treat workflow parameters similarly to artifacts and work with step cacheing? Or is the concept of parameters vs artifacts different in the V2 version. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. 
Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-01T19:48:58"
"2022-04-18T17:27:44"
"2022-04-18T17:27:44"
CONTRIBUTOR
null
### What steps did you take: When using the experimental feature of data passing with a volume, mlpipeline-metadata-ui and metrics-ui artifacts do not work. I ran some local testing and have identified the issue to be in kfp/compiler/_data_passing_using_volume.py::rewrite_data_passing_to_use_volumes. Currently this function deletes all artifacts and replaces them with parameters. I believe instead this function should delete all artifacts except the mlpipeline-metadata-ui and metrics-ui artifacts. ### What happened: Artifacts are not displayed in the UI when using volume data passing. ### What did you expect to happen: Artifacts to be displayed. ### Environment: Running kfp on AWS. How did you deploy Kubeflow Pipelines (KFP)? Full kubeflow deployment. KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> 743746b KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> 1.4.0 ### Anything else you would like to add: I have implemented the fix and tested it on my end. Happy to make the PR to fix this; I'm just having some issues pushing my code to the kfp GitHub repo. I have signed the developer agreement. Are there additional steps I need to take to be able to contribute? From the documentation, I don't believe I am missing anything. /kind bug /area sdk
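For context, a rough sketch of how this experimental rewriter is typically applied at compile time. The import path matches the module named above, but the exact argument list and the `_create_workflow` helper are assumptions that may differ between kfp versions; the pipeline function and PVC name are placeholders.

```python
import json
import kfp
from kfp import dsl
from kfp.compiler._data_passing_using_volume import rewrite_data_passing_to_use_volumes
from kubernetes import client as k8s_client

@dsl.pipeline(name='volume-passing-example')
def my_pipeline():
    # A trivial step that produces one output artifact.
    dsl.ContainerOp(
        name='produce',
        image='alpine:3.12',
        command=['sh', '-c', 'echo hello > /tmp/out.txt'],
        file_outputs={'out': '/tmp/out.txt'})

# Build the Argo workflow dict, then rewrite artifact passing to go through a
# shared persistent volume instead of the object store.
workflow = kfp.compiler.Compiler()._create_workflow(my_pipeline)  # private helper, may change
volume = k8s_client.V1Volume(
    name='data-volume',
    persistent_volume_claim=k8s_client.V1PersistentVolumeClaimVolumeSource(
        claim_name='my-shared-pvc'))
workflow = rewrite_data_passing_to_use_volumes(workflow, volume)

with open('pipeline_with_volume_passing.json', 'w') as f:
    json.dump(workflow, f, indent=2)
```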
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5221/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5219
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5219/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5219/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5219/events
https://github.com/kubeflow/pipelines/issues/5219
819,058,826
MDU6SXNzdWU4MTkwNTg4MjY=
5,219
KFP DSL Changes requested for Condition Spec
{ "login": "animeshsingh", "id": 3631320, "node_id": "MDQ6VXNlcjM2MzEzMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/3631320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/animeshsingh", "html_url": "https://github.com/animeshsingh", "followers_url": "https://api.github.com/users/animeshsingh/followers", "following_url": "https://api.github.com/users/animeshsingh/following{/other_user}", "gists_url": "https://api.github.com/users/animeshsingh/gists{/gist_id}", "starred_url": "https://api.github.com/users/animeshsingh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/animeshsingh/subscriptions", "organizations_url": "https://api.github.com/users/animeshsingh/orgs", "repos_url": "https://api.github.com/users/animeshsingh/repos", "events_url": "https://api.github.com/users/animeshsingh/events{/privacy}", "received_events_url": "https://api.github.com/users/animeshsingh/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "cc @Udiknedormin @Tomcli @Bobgy ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-03-01T16:53:57"
"2022-04-29T03:59:45"
"2022-04-29T03:59:45"
CONTRIBUTOR
null
https://docs.google.com/document/d/1B2CQgyouqd6oSnqXtGk9uXaUV7JKEvQxFERbyQcZjZw/edit?usp=sharing
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5219/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5209
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5209/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5209/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5209/events
https://github.com/kubeflow/pipelines/issues/5209
818,429,501
MDU6SXNzdWU4MTg0Mjk1MDE=
5,209
[Testing] python presubmit tests failing 2021.3.1
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619513, "node_id": "MDU6TGFiZWw5MzA2MTk1MTM=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p1", "name": "priority/p1", "color": "cb03cc", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Found in https://github.com/kubeflow/pipelines/pull/5196\r\n\r\nWhy did it require docker?", "> Why did it require docker?\r\n\r\nThe test log seems to be messed up. The docker requirement is from another test which somehow didn't shown as \"FAILED\". That test explicitly asks for testing with `docker`: \r\nhttps://github.com/kubeflow/pipelines/blob/8af15a3ad02ff6727790594c93186da38420b83c/sdk/python/tests/local_runner_test.py#L195\r\n\r\nNot sure if it was this error that messed up the subsequent tests.\r\n\r\nDid we recently remove docker from the CI test environment? \r\n", "I don't think we ever had docker there, in the CI test env, tests are run inside a Pod, so there shouldn't be docker there.\r\n\r\nThe test was introduced in https://github.com/kubeflow/pipelines/pull/4983, curious why it didn't fail originally", "> I don't think we ever had docker there, in the CI test env, tests are run inside a Pod, so there shouldn't be docker there.\r\n> \r\n> The test was introduced in #4983, curious why it didn't fail originally\r\n\r\nThen I was probably mistaken -- the missing docker error likely didn't fail the test. Although I think we should eliminate such error anyway. Besides, a UT shouldn't have dependencies on docker, best practice should be using mock here.\r\n\r\nThat left one possibility: [the other test](https://github.com/kubeflow/pipelines/blob/ec7201db5a6661a1d703d0e240a219d307f190f6/sdk/python/kfp/v2/google/aiplatform_e2e_test.py#L44) was flaky.\r\n\r\nReopen this issue as P1, I'll spend some time debugging the flaky test some time this week.", "On first glance the `test_execution_mode_exclude_op` test seems to have been working as intended.\r\nIt tried to explicitly use docker and failed. Then tried excluding the image (so that it does not run on docker) and that succeeds.\r\nThe tests were passing here: https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/4983/kubeflow-pipelines-sdk-python39/1364431670366703616", "> On first glance the `test_execution_mode_exclude_op` test seems to have been working as intended.\r\n> It tried to explicitly use docker and failed. Then tried excluding the image (so that it does not run on docker) and that succeeds.\r\n> The tests were passing here: https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/4983/kubeflow-pipelines-sdk-python39/1364431670366703616\r\n\r\nRight, I realized that. Missing docker didn't fail the test, it yields the same expected behavior though through a different route: \"docker not found\" vs \"image not found\".\r\nThat said, I think we should use mock to avoid actually calling docker binary." ]
"2021-03-01T03:46:08"
"2021-03-01T19:04:17"
"2021-03-01T19:04:17"
CONTRIBUTOR
null
https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/5196/kubeflow-pipelines-sdk-python36/1365228134722441216 /assign @chensun
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5209/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5199
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5199/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5199/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5199/events
https://github.com/kubeflow/pipelines/issues/5199
817,435,214
MDU6SXNzdWU4MTc0MzUyMTQ=
5,199
[FR] Hide sidebar when in Kubeflow mode
{ "login": "StefanoFioravanzo", "id": 3354305, "node_id": "MDQ6VXNlcjMzNTQzMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/3354305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StefanoFioravanzo", "html_url": "https://github.com/StefanoFioravanzo", "followers_url": "https://api.github.com/users/StefanoFioravanzo/followers", "following_url": "https://api.github.com/users/StefanoFioravanzo/following{/other_user}", "gists_url": "https://api.github.com/users/StefanoFioravanzo/gists{/gist_id}", "starred_url": "https://api.github.com/users/StefanoFioravanzo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StefanoFioravanzo/subscriptions", "organizations_url": "https://api.github.com/users/StefanoFioravanzo/orgs", "repos_url": "https://api.github.com/users/StefanoFioravanzo/repos", "events_url": "https://api.github.com/users/StefanoFioravanzo/events{/privacy}", "received_events_url": "https://api.github.com/users/StefanoFioravanzo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "KFP comes already with two deployment modes: `KUBEFLOW` and `MARKETPLACE`. See https://github.com/kubeflow/pipelines/blob/0795597562e076437a21745e524b5c960b1edb68/frontend/src/lib/Flags.ts#L1-L4 and https://github.com/kubeflow/pipelines/blob/0795597562e076437a21745e524b5c960b1edb68/frontend/server/handlers/index-html.ts#L60-L72\r\n\r\nWhen in `KUBEFLOW` mode, KFP loads the Centraldashboard shared library to inherit the selected Namespace.\r\n\r\nGiven that this mechanism is already in place, we can simply hide the sidebar component whenever the deployment is `KUBEFLOW`. With the Kubeflow 1.3 release, we will update the Centraldashboard manifests to include all KFP pages as well. \r\n\r\nNow:\r\n\r\n![Screenshot 2021-02-15 at 10 44 45](https://user-images.githubusercontent.com/3354305/109318181-d1a50580-784d-11eb-91e3-921540f1c61d.png)\r\n\r\nThen:\r\n\r\n![Screenshot 2021-02-15 at 10 02 50](https://user-images.githubusercontent.com/3354305/109318293-f0a39780-784d-11eb-8aed-806afb6d4dc9.png)\r\n\r\n/cc @Bobgy \r\n", "@StefanoFioravanzo some of KFP users may deploy it in their own env, shall we add a separate config to hide sidebar?", "@Bobgy Do you have examples of people deploying KFP in `KUBEFLOW` mode, but outside of Kubeflow? I think in those cases they wouldn't set the `DEPLOYMENT` var, since it's not needed. So:\r\n\r\n1. Having `KUBEFLOW` deployment mode both load the Centraldashboard lib and hide the sidenav makes it crystal clear what this mode is for: being part of Kubeflow\r\n2. Having a `HIDE_SIDENAV` var makes it more flexible, allowing people to activate Centraldashboard lib without hiding the sidebar\r\n\r\nI do prefer (1) because it makes more sense semantically, but if you feel strongly about (2) then let's go with it", "@StefanoFioravanzo we do have customers who pick Kubeflow components and deploy in a very customized way (in multi-user mode). So I worry if we always hide sidenav for Kubeflow deployment, KFP UI will be very coupled to central dashboard UI. It'll be harder to upgrade KFP UI (including sidenav upgrade) by itself.\r\n\r\nTherefore, I'd suggest to go with 2.\r\nor maybe, `HIDE_SIDENAV` can default to true in `KUBEFLOW` mode, but it can be overridden by explicitly setting `HIDE_SIDENAV=false`.", "@Bobgy I like this proposal\r\n\r\n> `HIDE_SIDENAV` can default to true in `KUBEFLOW` mode, but it can be overridden by explicitly setting `HIDE_SIDENAV=false`.\r\n\r\nOk then, I will update the related PR to introduce this new env var" ]
"2021-02-26T15:11:16"
"2021-03-09T15:02:26"
"2021-03-09T15:02:26"
MEMBER
null
There is an ongoing discussion about how web apps should behave when deployed as part of Kubeflow. See https://github.com/kubeflow/kubeflow/issues/5566. The main discussion point I would like to focus on for KFP UI is (quoting from the issue): > There is a CentralDashboard which: > - decides what Namespaces to show to the user and feeds to the app the selected Namespace > - has a left hand sidebar, which can have subsections https://github.com/kubeflow/kubeflow/pull/5474. This sidebar can be used for navigating the user between different pages of the underlying deployed app. Centraldashboard should act as the single place where all applications expose their pages. Centraldashboard's sidebar is very easily customizable with a ConfigMap and allows creating collapsible subsections. Consider also the discussion here https://github.com/kubeflow/katib/issues/1437#issuecomment-782039859 around the sidebar of the Katib UI. The new Katib UI (https://github.com/kubeflow/katib/pull/1427) will not have such a sidebar and will instead expose Katib pages using Centraldashboard (when deployed in Kubeflow mode).
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5199/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5198
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5198/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5198/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5198/events
https://github.com/kubeflow/pipelines/issues/5198
817,152,750
MDU6SXNzdWU4MTcxNTI3NTA=
5,198
bug in compilier.Compiler._validate_exit_handler
{ "login": "HaozhengAN", "id": 34512369, "node_id": "MDQ6VXNlcjM0NTEyMzY5", "avatar_url": "https://avatars.githubusercontent.com/u/34512369?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HaozhengAN", "html_url": "https://github.com/HaozhengAN", "followers_url": "https://api.github.com/users/HaozhengAN/followers", "following_url": "https://api.github.com/users/HaozhengAN/following{/other_user}", "gists_url": "https://api.github.com/users/HaozhengAN/gists{/gist_id}", "starred_url": "https://api.github.com/users/HaozhengAN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HaozhengAN/subscriptions", "organizations_url": "https://api.github.com/users/HaozhengAN/orgs", "repos_url": "https://api.github.com/users/HaozhengAN/repos", "events_url": "https://api.github.com/users/HaozhengAN/events{/privacy}", "received_events_url": "https://api.github.com/users/HaozhengAN/received_events", "type": "User", "site_admin": false }
[ { "id": 930619513, "node_id": "MDU6TGFiZWw5MzA2MTk1MTM=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/priority/p1", "name": "priority/p1", "color": "cb03cc", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @chensun ", "@HaozhengAN , I cannot reproduce this. \r\n\r\nCan you maybe try again in a clean environment?\r\n\r\nHere's what I did for your reference:\r\n1. Create a new virtual environment\r\n`python3 -m venv ~/venv/kfp1.4.0`\r\n`source ~/venv/kfp1.4.0/bin/activate`\r\n2. Install kfp 1.4.0\r\n`pip3 install kfp==1.4.0`\r\n3. Compile the pipeline file shown below\r\n`python3 test.py`\r\n\r\n```test.py\r\nfrom kfp import components\r\nfrom kfp import dsl\r\nimport kfp\r\n\r\n@components.create_component_from_func\r\ndef get_random_int_op(minimum: int, maximum: int) -> int:\r\n \"\"\"Generate a random number between minimum and maximum (inclusive).\"\"\"\r\n import random\r\n result = random.randint(minimum, maximum)\r\n print(result)\r\n return result\r\n\r\n@components.create_component_from_func\r\ndef flip_coin_op() -> str:\r\n \"\"\"Flip a coin and output heads or tails randomly.\"\"\"\r\n import random\r\n result = 'heads' if random.randint(0, 1) == 0 else 'tails'\r\n return result\r\n\r\n@components.create_component_from_func\r\ndef print_op(message: str):\r\n \"\"\"Print a message.\"\"\"\r\n print(message)\r\n\r\n@components.create_component_from_func\r\ndef fail_op(message):\r\n \"\"\"Fails.\"\"\"\r\n import sys\r\n print(message)\r\n sys.exit(1)\r\n\r\n\r\n@dsl.pipeline(\r\n name='Conditional execution pipeline with exit handler',\r\n description='Shows how to use dsl.Condition() and dsl.ExitHandler().'\r\n)\r\ndef flipcoin_exit_pipeline():\r\n exit_task = print_op('Exit handler has worked!')\r\n exit_task2 = print_op('Exit handler has worked!')\r\n with dsl.ExitHandler(exit_task):\r\n flip = flip_coin_op()\r\n with dsl.Condition(flip.output == 'heads'):\r\n random_num_head = get_random_int_op(0, 9)\r\n with dsl.Condition(random_num_head.output > 5):\r\n print_op('heads and %s > 5!')\r\n with dsl.Condition(random_num_head.output <= 5):\r\n print_op('heads and %s <= 5!' % random_num_head.output)\r\n\r\n with dsl.Condition(flip.output == 'tails'):\r\n random_num_tail = get_random_int_op(10, 19)\r\n with dsl.Condition(random_num_tail.output > 15):\r\n print_op('tails and %s > 15!' % random_num_tail.output)\r\n with dsl.Condition(random_num_tail.output <= 15):\r\n print_op('tails and %s <= 15!' % random_num_tail.output)\r\n\r\n with dsl.Condition(flip.output == 'tails'):\r\n fail_op(message=\"Failing the run to demonstrate that exit handler still gets executed.\")\r\n\r\n\r\nif __name__ == '__main__':\r\n # Compiling the pipeline\r\n kfp.compiler.Compiler().compile(flipcoin_exit_pipeline, __file__ + '.yaml')\r\n```", "I've just followed your steps. It can be reproduced stably when I execute Python 3 test.py No exception is thrown, but in the test.py Inside, exit_task2&nbsp; does not belong to Exithandler, so an error should be reported, but&nbsp; it's not. 
", "> I've just followed your steps. It can be reproduced stably when I execute Python 3 test.py No exception is thrown, but in the test.py Inside, exit_task2&nbsp; does not belong to Exithandler, so an error should be reported, but&nbsp; it's not.\r\n\r\nI see. sorry I misread your initial report -- I thought you unintentionally misplaced/swapped \"what happened\" and \"what expect to happen\". \r\nI agree the error message you pointed to might be a bit misleading.
But in fact what you see is the expected behavior.\r\nWe do allow extra ops outside exithandler scope just that you can have only one op used as exithandler task.\r\n\r\nFor example, you can see in our xgboost sample that `_diagnose_me_op` is also outside exithandler but not the exithandler task.\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/5ffa2be1b9243773d5459b22406a3f2cd5e7881c/samples/core/xgboost_training_cm/xgboost_training_cm.py#L233-L244\r\n\r\n", "```python\r\n@dsl.pipeline(\r\n name='Conditional execution pipeline with exit handler',\r\n description='Shows how to use dsl.Condition() and dsl.ExitHandler().'\r\n)\r\ndef flipcoin_exit_pipeline():\r\n exit_task = print_op('Exit handler has worked!')\r\n exit_task2 = print_op('Exit handler has worked!')\r\n exit_task3 = print_op('Exit handler has worked!')\r\n with dsl.ExitHandler(exit_task):\r\n flip = flip_coin_op()\r\n with dsl.Condition(flip.output == 'heads'):\r\n random_num_head = get_random_int_op(0, 9)\r\n with dsl.Condition(random_num_head.output > 5):\r\n print_op('heads and %s > 5!')\r\n with dsl.Condition(random_num_head.output <= 5):\r\n print_op('heads and %s <= 5!' % random_num_head.output)\r\n\r\n with dsl.Condition(flip.output == 'tails'):\r\n random_num_tail = get_random_int_op(10, 19)\r\n with dsl.Condition(random_num_tail.output > 15):\r\n print_op('tails and %s > 15!' % random_num_tail.output)\r\n with dsl.Condition(random_num_tail.output <= 15):\r\n print_op('tails and %s <= 15!' % random_num_tail.output)\r\n\r\n with dsl.Condition(flip.output == 'tails'):\r\n fail_op(message=\"Failing the run to demonstrate that exit handler still gets executed.\")\r\n\r\n\r\nif __name__ == '__main__':\r\n # Compiling the pipeline\r\n kfp.compiler.Compiler().compile(flipcoin_exit_pipeline, __file__ + '.yaml')\r\n\r\n```\r\nbut this will raise an error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"control_structures.py\", line 125, in <module>\r\n kfp.compiler.Compiler().compile(flipcoin_exit_pipeline, __file__ + '.yaml')\r\n File \"/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py\", line 984, in compile\r\n self._create_and_write_workflow(\r\n File \"/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py\", line 1037, in _create_and_write_workflow\r\n workflow = self._create_workflow(\r\n File \"/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py\", line 874, in _create_workflow\r\n self._validate_exit_handler(dsl_pipeline)\r\n File \"/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py\", line 779, in _validate_exit_handler\r\n return _validate_exit_handler_helper(pipeline.groups[0], [], False)\r\n File \"/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py\", line 777, in _validate_exit_handler_helper\r\n _validate_exit_handler_helper(g, exiting_op_names, handler_exists)\r\n File \"/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py\", line 770, in _validate_exit_handler_helper\r\n raise ValueError('Only one global exit_handler is allowed and all ops need to be included.')\r\nValueError: Only one global exit_handler is allowed and all ops need to be included.\r\n```\r\n\r\nand the msg of valueError is: \r\n> Only one global exit_handler is allowed and all ops need to be included. ", "Thanks! The last example shows inconsistence in the check is indeed a bug then. \r\n\r\nNot sure if we want to enforce the check or remove the check though. 
", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-02-26T08:32:15"
"2022-04-19T08:27:52"
"2022-04-19T08:27:52"
NONE
null
### What steps did you take: ```python @dsl.pipeline( name='Conditional execution pipeline with exit handler', description='Shows how to use dsl.Condition() and dsl.ExitHandler().' ) def flipcoin_exit_pipeline(): exit_task = print_op('Exit handler has worked!') exit_task2 = print_op('Exit handler has worked!') with dsl.ExitHandler(exit_task): flip = flip_coin_op() with dsl.Condition(flip.output == 'heads'): random_num_head = get_random_int_op(0, 9) with dsl.Condition(random_num_head.output > 5): print_op('heads and %s > 5!') with dsl.Condition(random_num_head.output <= 5): print_op('heads and %s <= 5!' % random_num_head.output) with dsl.Condition(flip.output == 'tails'): random_num_tail = get_random_int_op(10, 19) with dsl.Condition(random_num_tail.output > 15): print_op('tails and %s > 15!' % random_num_tail.output) with dsl.Condition(random_num_tail.output <= 15): print_op('tails and %s <= 15!' % random_num_tail.output) with dsl.Condition(flip.output == 'tails'): fail_op(message="Failing the run to demonstrate that exit handler still gets executed.") if __name__ == '__main__': # Compiling the pipeline kfp.compiler.Compiler().compile(flipcoin_exit_pipeline, __file__ + '.yaml') ``` ### What happened: it compiled success, ### What did you expect to happen: raise Error such as follow: ``` Traceback (most recent call last): File "control_structures.py", line 125, in <module> kfp.compiler.Compiler().compile(flipcoin_exit_pipeline, __file__ + '.yaml') File "/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py", line 984, in compile self._create_and_write_workflow( File "/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py", line 1037, in _create_and_write_workflow workflow = self._create_workflow( File "/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py", line 874, in _create_workflow self._validate_exit_handler(dsl_pipeline) File "/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py", line 779, in _validate_exit_handler return _validate_exit_handler_helper(pipeline.groups[0], [], False) File "/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py", line 777, in _validate_exit_handler_helper _validate_exit_handler_helper(g, exiting_op_names, handler_exists) File "/Users/anhaozheng/pipelines-master/sdk/python/kfp/compiler/compiler.py", line 770, in _validate_exit_handler_helper raise ValueError('Only one global exit_handler is allowed and all ops need to be included.') ValueError: Only one global exit_handler is allowed and all ops need to be included. ``` ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> KFP SDK version: <!--1.4.0--> 1.4.0 ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5198/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5194
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5194/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5194/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5194/events
https://github.com/kubeflow/pipelines/issues/5194
816,340,097
MDU6SXNzdWU4MTYzNDAwOTc=
5,194
[TFX] TFMA 0.27.0 broken in KFP UI
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }, { "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false } ]
null
[ "Worked around by https://github.com/kubeflow/pipelines/pull/5191.\r\n\r\nWe need to report the issue upstream.", "> I see that both old and new fail to load the tensorflow_model_analysis.js script.\r\nBut then the old version succeeds in loading https://unpkg.com/tensorflow_model_analysis@0.21.5/dist/vulcanized_tfma.js while the new version fails trying to load from https://24643bfacd9ed2e-dot-us-central1.pipelines.googleusercontent.com/nbextensions/tensorflow_model_analysis/vulcanized_tfma.js\r\n\r\n> In the index.js:\r\n> \r\n> Before:\r\n> \tfunction loadVulcanizedTemplate() {\t\r\n> const templateLocation = __webpack_require__.p + 'vulcanized_tfma.js';\r\n> After:\r\n> function loadVulcanizedTemplate() {\r\n> const templateLocation =\r\n> (document.querySelector('body').getAttribute('data-base-url') || '/') +\r\n> 'nbextensions/tensorflow_model_analysis/vulcanized_tfma.js';\r\n> \r\n\r\n@Ark-kun 's investigation", "/assign @Bobgy @Ark-kun ", "With kubeflow 1.4.0 and TFX 0.27.0 the visualisation works for me on the first run when there is no baseline model.\r\n\r\nHowever at on the second run I get:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-81-95e408225f11> in <module>\r\n 10 columns=[] if len(featureKeys) == 0 else featureKeys[0]['featureKeys']\r\n 11 slicing_spec = tfma.slicer.SingleSliceSpec(columns=columns)\r\n---> 12 eval_result = tfma.load_eval_result('s3://kubeflow/tfx/eb_lstm/Evaluator/evaluation/147')\r\n 13 slicing_metrics_view = tfma.view.render_slicing_metrics(eval_result, slicing_spec=slicing_spec)\r\n 14 view = io.StringIO()\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_model_analysis/api/model_eval_lib.py in load_eval_result(output_path, output_file_format, model_name)\r\n 273 output_path, output_file_format):\r\n 274 plots_list.append(\r\n--> 275 util.convert_plots_proto_to_dict(p, model_name=model_name))\r\n 276 if not model_locations:\r\n 277 model_location = ''\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow_model_analysis/view/util.py in convert_plots_proto_to_dict(plots_for_slice, model_name)\r\n 584 raise ValueError('Fail to find plots for model name: %s . '\r\n 585 'Available model names are [%s]' %\r\n--> 586 (model_name, ', '.join(keys)))\r\n 587 \r\n 588 return (slicer.deserialize_slice_key(plots_for_slice.slice_key), plots_map)\r\n\r\nValueError: Fail to find plots for model name: None . Available model names are [candidate, baseline]\r\n```\r\n\r\nAs the model_name isn't specified.", "It works on the first run since the util.py accepts None if there is only a single model with evaluation results:\r\n\r\nhttps://github.com/tensorflow/model-analysis/blob/master/tensorflow_model_analysis/view/util.py#L585-L587", "Thank you for raising the issue\n\nWhat would be suggested fix on the caller line?", "This works for me both on first and second runs:\r\n\r\nhttps://github.com/kubeflow/pipelines/pull/5260\r\n\r\nThere is no `name` key if only one model evaluation is available (on the first run), hence the `.get` method and extra check.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-02-25T11:15:09"
"2022-04-28T18:00:03"
"2022-04-28T18:00:03"
CONTRIBUTOR
null
> I tried to upgrade some other TFX related deps, however, I'm getting the following errors in various places in parameterized_tfx_sample: > > * when clicking visualizations on `evaluator`, the visualization js simply crashes without any clear error message. I tried to take a look at browser console, but it only shows. EDIT: workarounded in #5191 > > ``` > Uncaught Error: Script error for: tensorflow_model_analysis > http://requirejs.org/docs/errors.html#scripterror > at C (require.min.js:8) > at HTMLScriptElement.onScriptError (require.min.js:29) > ``` Originally posted in https://github.com/kubeflow/pipelines/issues/5137#issuecomment-785012674
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5194/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5193
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5193/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5193/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5193/events
https://github.com/kubeflow/pipelines/issues/5193
816,338,502
MDU6SXNzdWU4MTYzMzg1MDI=
5,193
[TFX] penguins example
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "NikeNano", "id": 22057410, "node_id": "MDQ6VXNlcjIyMDU3NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikeNano", "html_url": "https://github.com/NikeNano", "followers_url": "https://api.github.com/users/NikeNano/followers", "following_url": "https://api.github.com/users/NikeNano/following{/other_user}", "gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions", "organizations_url": "https://api.github.com/users/NikeNano/orgs", "repos_url": "https://api.github.com/users/NikeNano/repos", "events_url": "https://api.github.com/users/NikeNano/events{/privacy}", "received_events_url": "https://api.github.com/users/NikeNano/received_events", "type": "User", "site_admin": false }
[ { "login": "NikeNano", "id": 22057410, "node_id": "MDQ6VXNlcjIyMDU3NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikeNano", "html_url": "https://github.com/NikeNano", "followers_url": "https://api.github.com/users/NikeNano/followers", "following_url": "https://api.github.com/users/NikeNano/following{/other_user}", "gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions", "organizations_url": "https://api.github.com/users/NikeNano/orgs", "repos_url": "https://api.github.com/users/NikeNano/repos", "events_url": "https://api.github.com/users/NikeNano/events{/privacy}", "received_events_url": "https://api.github.com/users/NikeNano/received_events", "type": "User", "site_admin": false } ]
null
[ "/cc @chensun \r\nDo you have bandwidth to take this?\r\n\r\nNot urgent, but I believe we should better have it in next release.", "I would be happy to help out with this as well. ", "/assign", "blocked by: https://github.com/tensorflow/tfx/pull/3484", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-02-25T11:13:01"
"2022-04-18T17:27:42"
"2022-04-18T17:27:42"
CONTRIBUTOR
null
The TFX iris example was removed in new releases; it should be replaced by the penguins example. Context: https://github.com/kubeflow/pipelines/pull/5189
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5193/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5181
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5181/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5181/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5181/events
https://github.com/kubeflow/pipelines/issues/5181
815,735,520
MDU6SXNzdWU4MTU3MzU1MjA=
5,181
KFServing v0.5 component for Kubeflow Pipelines
{ "login": "animeshsingh", "id": 3631320, "node_id": "MDQ6VXNlcjM2MzEzMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/3631320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/animeshsingh", "html_url": "https://github.com/animeshsingh", "followers_url": "https://api.github.com/users/animeshsingh/followers", "following_url": "https://api.github.com/users/animeshsingh/following{/other_user}", "gists_url": "https://api.github.com/users/animeshsingh/gists{/gist_id}", "starred_url": "https://api.github.com/users/animeshsingh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/animeshsingh/subscriptions", "organizations_url": "https://api.github.com/users/animeshsingh/orgs", "repos_url": "https://api.github.com/users/animeshsingh/repos", "events_url": "https://api.github.com/users/animeshsingh/events{/privacy}", "received_events_url": "https://api.github.com/users/animeshsingh/received_events", "type": "User", "site_admin": false }
[ { "id": 1493369148, "node_id": "MDU6TGFiZWwxNDkzMzY5MTQ4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/status/triaged", "name": "status/triaged", "color": "18f440", "default": false, "description": "Whether the issue has been explicitly triaged" } ]
closed
false
{ "login": "pvaneck", "id": 1868861, "node_id": "MDQ6VXNlcjE4Njg4NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1868861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvaneck", "html_url": "https://github.com/pvaneck", "followers_url": "https://api.github.com/users/pvaneck/followers", "following_url": "https://api.github.com/users/pvaneck/following{/other_user}", "gists_url": "https://api.github.com/users/pvaneck/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvaneck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvaneck/subscriptions", "organizations_url": "https://api.github.com/users/pvaneck/orgs", "repos_url": "https://api.github.com/users/pvaneck/repos", "events_url": "https://api.github.com/users/pvaneck/events{/privacy}", "received_events_url": "https://api.github.com/users/pvaneck/received_events", "type": "User", "site_admin": false }
[ { "login": "pvaneck", "id": 1868861, "node_id": "MDQ6VXNlcjE4Njg4NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1868861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvaneck", "html_url": "https://github.com/pvaneck", "followers_url": "https://api.github.com/users/pvaneck/followers", "following_url": "https://api.github.com/users/pvaneck/following{/other_user}", "gists_url": "https://api.github.com/users/pvaneck/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvaneck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvaneck/subscriptions", "organizations_url": "https://api.github.com/users/pvaneck/orgs", "repos_url": "https://api.github.com/users/pvaneck/repos", "events_url": "https://api.github.com/users/pvaneck/events{/privacy}", "received_events_url": "https://api.github.com/users/pvaneck/received_events", "type": "User", "site_admin": false } ]
null
[ "There are quite a few changes with the v1beta1 API that comes with KFServing v0.5.0. Here are some initial findings as I go over the component use case:\r\n\r\nKFServing Component Use Cases\r\n\r\n\r\n- Create InferenceService for available frameworks. General structure would look like:\r\n```\r\nV1beta1InferenceService(\r\n spec=V1beta1InferenceServiceSpec(\r\n      predictor=V1beta1PredictorSpec(\r\n         tensorflow=V1beta1TFServingSpec(\r\n          storage_uri=’gs://…’\r\n         pytorch=V1beta1TorchServeSpec\r\n         sklearn=V1beta1SKLearnSpec\r\n         xgboost=V1beta1XGBoostSpec\r\n         onnx=V1beta1ONNXRuntimeSpec\r\n         pmml=V1beta1PMMLSpec\r\n         triton=V1beta1TritonSpec\r\n      explainer=V1beta1ExplainerSpec\r\n      transformer=V1beta1TransformerSpec\r\n```\r\n- Allow custom serving.\r\n - Example v1beta1 custom YAML: https://github.com/kubeflow/kfserving/blob/master/docs/samples/v1beta1/custom/simple.yaml \r\n - General structure:\r\n ```\r\n V1beta1PredictorSpec(\r\n containers=client.V1Container(\r\n ```\r\n- Deploy and promote canaries\r\n - Canaries work a bit differently in v1beta1 (https://github.com/kubeflow/kfserving/tree/master/docs/samples/v1beta1/rollout)\r\n - You now specify `canaryTrafficPercent` in V1beta1PredictorSpec spec\r\n - Promoting a canary model is done by removing the canaryTrafficPercent field.\r\n - Sample canary: https://github.com/kubeflow/kfserving/blob/master/test/e2e/predictor/test_canary.py\r\n\r\n", "For the first iteration, can aim to mirror the current v1alpha2 functionality of the current component (the use cases listed above). Will target this for next week.\r\n\r\nNext iteration, can focus on adding support for explainers and transformers.\r\n\r\nThe v2 predict API and multi-model serving, I believe, are still experimental, so support for those can be added in the future when those become stable features.", "/assign @pvaneck \r\n/cc @moficodes ", "Draft PR is up, but still needs a bit more testing with actual pipelines as I only tested using the CLI. A README will also probably need to be created for documenting various ways of using the component." ]
"2021-02-24T18:28:56"
"2021-03-05T22:15:49"
"2021-03-05T22:15:49"
CONTRIBUTOR
null
Needs to be updated/upgraded/tested
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5181/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5178
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5178/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5178/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5178/events
https://github.com/kubeflow/pipelines/issues/5178
815,419,160
MDU6SXNzdWU4MTU0MTkxNjA=
5,178
[Testing] tfx sample pipeline broken in release not caught by postsubmit
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "![image](https://user-images.githubusercontent.com/4957653/108999173-e8781a80-76dc-11eb-9e58-3158bcacc613.png)\r\n\r\nNotice that, postsubmit test before the release commit passed, however the released version had a problematic tfx sample pipeline (ignore the failure, it was expected for a release commit).\r\n\r\nIntegration test of the passing commit: https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-integration-test/1360157617023881216\r\n\r\nWhat's even weirder, after the fix https://github.com/kubeflow/pipelines/pull/5165, postsubmit tests start to fail on parameterized_tfx_sample pipeline.", "One problem I found was that, the integration test in postsubmit is incorrectly using presubmit test script: https://github.com/GoogleCloudPlatform/oss-test-infra/blob/f29fc29cd617497ea44164ff6a1734c7dee3c0f4/prow/prowjobs/kubeflow/pipelines/kubeflow-pipelines-postsubmits.yaml#L50\r\n\r\nEDIT: this is not root cause of this problem", "I think I found the root cause, it's caused by incomplete upgrade of tfx dependencies.\r\n\r\nLet me explain the order of events happening:\r\n1. pip started to fail for python 3.5 images, so we fixed image build by upgrading to tfx==0.26.0 in https://github.com/kubeflow/pipelines/pull/5052.\r\n\r\n At this point, builtin sample is already failing, because tfx sdk version is bumped to 0.26.0, but tfx image is still at 0.22.0.\r\n However, postsubmit tests do not fail, because [sample-test/requirements.in](https://github.com/kubeflow/pipelines/blob/82e9731c3278d1006aca2dfc78a43298397ab092/test/sample-test/requirements.in#L6-L6) was still at tfx==0.22.0.\r\n2. we released KFP 1.4.0\r\n3. we got user report that the builtin tfx sample fails -- as expected\r\n4. we fixed builtin sample by updating tfx image: #5165\r\n \r\n However, postsubmit tests start to fail for tfx sample, because tfx image is now at 0.27.0 while [sample-test/requirements.in](https://github.com/kubeflow/pipelines/blob/82e9731c3278d1006aca2dfc78a43298397ab092/test/sample-test/requirements.in#L6-L6) was still at tfx==0.22.0.\r\n\r\n## Conclusion\r\n\r\n* We should clearly document how to upgrade TFX dependency in KFP, so that people do not mistakenly only upgrade a subset of them.\r\n* It'll be more ideal if tfx version for the multiple places have a single source of truth, or they can be updated programmatically using a script. 
A quick action we can do is probably letting [sample-test/requirements.in](https://github.com/kubeflow/pipelines/blob/82e9731c3278d1006aca2dfc78a43298397ab092/test/sample-test/requirements.in#L6-L6) derive from [backend compiler requirements](https://github.com/kubeflow/pipelines/blob/master/backend/requirements.in), so that they are always in-sync -- sample-test env is the same as prebuilt sample compilation env.", "High priority problems fixed and I made samples-test to imports the same requirements.in from backend requirements.in.\r\n\r\nWhat's missing is that, people may update requirements.in, but forget to update all requirements.txt.\r\nEDIT: this still seems like a high priority and easy mistake.\r\n\r\n/assign @chensun \r\nDo you think you can take this?", "and https://github.com/kubeflow/pipelines/pull/5187 is pending review", "> High priority problems fixed and I made samples-test to imports the same requirements.in from backend requirements.in.\r\n> \r\n> What's missing is that, people may update requirements.in, but forget to update all requirements.txt.\r\n> EDIT: this still seems like a high priority and easy mistake.\r\n> \r\n> /assign @chensun\r\n> Do you think you can take this?\r\n\r\nAren't we deprecating `requirements.in` given `sdk/python/requirements.txt` covered by https://github.com/kubeflow/pipelines/pull/5056? ", "@chensun for clarification, #5056 explicitly disabled renovate for python packages.\r\n\r\nYou can make the decision to enable it.", "> @chensun for clarification, #5056 explicitly disabled renovate for python packages.\r\n> \r\n> You can make the decision to enable it.\r\n\r\nI see, thanks for the clarification. Checked again, and it did disabled python. (I was under the wrong impression that sdk/python/requirements.txt was covered as I saw an ignore list with some components path yet sdk is not in that list).", "> I want to echo again what I said [here](https://github.com/kubeflow/pipelines/issues/5137#issuecomment-785467128). I think `pip install -r sdk/python/requirements.txt` doesn't represent the most common user journey -- think about our notebook samples, it only has `pip install kfp` or `pip install kfp==<pinned version>`, but I've rarely seen `pip install -r sdk/python/requirements.txt`. \r\n> \r\n> I would suggest we move away from installing requirements.txt in tests. So the tests creates an environment closer to a fresh installation of `pip install kfp`. If there's a newly emerged dependency issue, we would probably be able to see it in tests.\r\n> \r\n> P.S.: Taking tfx for example, their [`requirements.txt`](https://github.com/tensorflow/tfx/blob/463586187bf0cdc1e7290f27bf5096d2f13f1593/requirements.txt) contains nothing but [`-e .`](https://pip.pypa.io/en/stable/reference/pip_install/?highlight=requirements.txt#cmdoption-e) (read from setup.py)\r\n\r\n-- @chensun \r\n\r\nMoving some of the discussion from the PR thread back to the main issue.\r\n\r\nI agree with Chen, testing as what users would get is an important test. However, we used to do that before introducing requirements.{in/txt}, the result was that from time to time presubmit broke without any clues and we needed to dig through the dependencies to find out why.\r\n\r\nI just want to make sure we are not going in cycles, we should admit approaches have their pros and cons.\r\n\r\nI think the discussion laid out in https://github.com/kubeflow/pipelines/issues/4682 is not significantly different from what we have here. 
Maybe the best approach is also to have a requirements.txt, but set up a bot to update it periodically as PRs. In this way, if that update PR fails, we know users might hit that problem too, but it won't be blocking presubmit tests (and other people not working on this problem).", "> However, we used to do that before introducing requirements.{in/txt}, the result was that from time to time presubmit broke without any clues and we needed to dig through the dependencies to find out why.\r\n\r\nIf I'm not mistaken, this is usually due to new versions of dependencies not compatible with other existing dependencies. And that's a sign that we need to fix `setup.py` by adding upper limit restrictions to the dependencies. Using `requirements.txt` in test is not solving the underlying issue but hiding it. \r\n\r\nThe fact that many of the dependencies listed in kfp `setup.py` don't have the upper version limit is problematic IMHO. \r\nhttps://github.com/kubeflow/pipelines/blob/9bc63f59b27886a84bfcf4ece1062d489d06e0f5/sdk/python/setup.py#L24-L47\r\nSo I think one action item is to add upper limits regardless whether we use `requirements.txt` in tests. WDYT?\r\n\r\nEDIT: created https://github.com/kubeflow/pipelines/pull/5258", "TODO: update TFX upgrade documentation", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-02-24T12:10:41"
"2022-03-03T06:05:29"
"2022-03-03T06:05:29"
CONTRIBUTOR
null
In #5137, the TFX sample pipeline was broken, but this was not caught by the postsubmit test. > IIRC our integration test should cover this sample, do we know why this is not captured? Originally posted in https://github.com/kubeflow/pipelines/issues/5137#issuecomment-783788084 I think this is a high-priority issue, because it caused a lot of extra effort after 1.4.0 was released. /cc @numerology @chensun
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5178/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5175
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5175/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5175/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5175/events
https://github.com/kubeflow/pipelines/issues/5175
815,225,518
MDU6SXNzdWU4MTUyMjU1MTg=
5,175
Metadata writer is not looking at the correct namespaces for argo pods
{ "login": "deepk2u", "id": 1802638, "node_id": "MDQ6VXNlcjE4MDI2Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1802638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deepk2u", "html_url": "https://github.com/deepk2u", "followers_url": "https://api.github.com/users/deepk2u/followers", "following_url": "https://api.github.com/users/deepk2u/following{/other_user}", "gists_url": "https://api.github.com/users/deepk2u/gists{/gist_id}", "starred_url": "https://api.github.com/users/deepk2u/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deepk2u/subscriptions", "organizations_url": "https://api.github.com/users/deepk2u/orgs", "repos_url": "https://api.github.com/users/deepk2u/repos", "events_url": "https://api.github.com/users/deepk2u/events{/privacy}", "received_events_url": "https://api.github.com/users/deepk2u/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2021-02-24T08:00:51"
"2021-03-01T16:25:40"
"2021-03-01T16:25:40"
CONTRIBUTOR
null
### What steps did you take: Deployed metadata-writer Deployment ### What happened: https://github.com/kubeflow/pipelines/blob/master/backend/metadata_writer/src/metadata_writer.py#L27 results in setting the namespace to watch as `default` as we are setting empty string or a particular namespace only. ### What did you expect to happen: This results in the writer watching one particular namespace only, while it should watch for all namespaces for argo pods so that Output Artifact can be written for those cases ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> https://www.kubeflow.org/docs/started/k8s/kfctl-istio-dex/ Using Kubeflow 1.2 KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> KFP SDK version: <!-- Please attach the output of this shell command: $pip list | grep kfp --> ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend /area backend // /area sdk // /area testing // /area engprod -->
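For reference, the Kubernetes Python client makes the single-namespace vs. cluster-wide distinction explicit. The sketch below is a minimal illustration, not the metadata-writer's actual code; the `NAMESPACE_TO_WATCH` handling and the Argo label selector are assumptions based on the report above.

```python
# Minimal sketch of watching Argo pods in one namespace vs. cluster-wide using
# the official kubernetes Python client. The label selector and the
# NAMESPACE_TO_WATCH fallback are illustrative assumptions, not the exact
# metadata_writer implementation.
import os
from kubernetes import client, config, watch

config.load_incluster_config()          # or config.load_kube_config() locally
v1 = client.CoreV1Api()
w = watch.Watch()

namespace = os.environ.get("NAMESPACE_TO_WATCH", "")
label_selector = "workflows.argoproj.io/workflow"  # pods created by Argo

if namespace:
    # Only pods in a single namespace are seen.
    stream = w.stream(v1.list_namespaced_pod, namespace=namespace,
                      label_selector=label_selector)
else:
    # Cluster-wide watch: pods from every namespace are seen.
    stream = w.stream(v1.list_pod_for_all_namespaces,
                      label_selector=label_selector)

for event in stream:
    pod = event["object"]
    print(event["type"], pod.metadata.namespace, pod.metadata.name)
```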
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5175/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5173
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5173/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5173/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5173/events
https://github.com/kubeflow/pipelines/issues/5173
814,785,054
MDU6SXNzdWU4MTQ3ODUwNTQ=
5,173
pipeline compilation from subgraph components doesn't allow relative path URIs in the subgraph component yaml
{ "login": "amygdala", "id": 115093, "node_id": "MDQ6VXNlcjExNTA5Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/115093?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amygdala", "html_url": "https://github.com/amygdala", "followers_url": "https://api.github.com/users/amygdala/followers", "following_url": "https://api.github.com/users/amygdala/following{/other_user}", "gists_url": "https://api.github.com/users/amygdala/gists{/gist_id}", "starred_url": "https://api.github.com/users/amygdala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amygdala/subscriptions", "organizations_url": "https://api.github.com/users/amygdala/orgs", "repos_url": "https://api.github.com/users/amygdala/repos", "events_url": "https://api.github.com/users/amygdala/events{/privacy}", "received_events_url": "https://api.github.com/users/amygdala/received_events", "type": "User", "site_admin": false }
[ { "id": 1122445895, "node_id": "MDU6TGFiZWwxMTIyNDQ1ODk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/components", "name": "area/sdk/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "/cc @Ark-kun ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-02-23T19:57:31"
"2022-04-28T18:00:20"
"2022-04-28T18:00:20"
CONTRIBUTOR
null
(With KFP 1.4) My subgraph component was constructed from other components that I loaded via `load_component_from_file()`. So, the subgraph yaml has entries that look like this: ``` tasks: Create dataset tabular bigquery sample: componentRef: {digest: e185acfe42a7dd076c54a55ba368c772f72a602a2c5182754ac5ad33b0f2e106, url: ./tables_create_dataset_component.yaml} arguments: ... ``` Then, I get this error when compiling a pipeline based on that component: `MissingSchema: Invalid URL './tables_create_dataset_component.yaml'` (The file is available locally at that path). After discussion with Alexey, we should support this. Comment from Alexey: I think the issue is pretty simple - ComponentStore always tries to download component references with URIs using the requests library without checking the URL scheme.
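A minimal sketch of the kind of scheme check Alexey describes: only fetch `http(s)` component references with `requests`, and read schemeless references such as `./tables_create_dataset_component.yaml` from disk. The function name and structure are hypothetical, not the actual ComponentStore code.

```python
# Illustrative scheme check for component references. Hypothetical helper,
# not the real ComponentStore implementation.
from urllib.parse import urlparse
import requests

def load_component_ref_text(url: str) -> str:
    scheme = urlparse(url).scheme
    if scheme in ("http", "https"):
        response = requests.get(url)
        response.raise_for_status()
        return response.text
    # No scheme (relative path) or file:// -> read from the local filesystem.
    path = url[len("file://"):] if scheme == "file" else url
    with open(path, "r") as f:
        return f.read()
```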
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5173/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5172
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5172/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5172/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5172/events
https://github.com/kubeflow/pipelines/issues/5172
814,727,369
MDU6SXNzdWU4MTQ3MjczNjk=
5,172
client.create_experiment and client.get_experiment treat differences in case differently
{ "login": "ptitzler", "id": 13068832, "node_id": "MDQ6VXNlcjEzMDY4ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/13068832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ptitzler", "html_url": "https://github.com/ptitzler", "followers_url": "https://api.github.com/users/ptitzler/followers", "following_url": "https://api.github.com/users/ptitzler/following{/other_user}", "gists_url": "https://api.github.com/users/ptitzler/gists{/gist_id}", "starred_url": "https://api.github.com/users/ptitzler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ptitzler/subscriptions", "organizations_url": "https://api.github.com/users/ptitzler/orgs", "repos_url": "https://api.github.com/users/ptitzler/repos", "events_url": "https://api.github.com/users/ptitzler/events{/privacy}", "received_events_url": "https://api.github.com/users/ptitzler/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "NikeNano", "id": 22057410, "node_id": "MDQ6VXNlcjIyMDU3NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikeNano", "html_url": "https://github.com/NikeNano", "followers_url": "https://api.github.com/users/NikeNano/followers", "following_url": "https://api.github.com/users/NikeNano/following{/other_user}", "gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions", "organizations_url": "https://api.github.com/users/NikeNano/orgs", "repos_url": "https://api.github.com/users/NikeNano/repos", "events_url": "https://api.github.com/users/NikeNano/events{/privacy}", "received_events_url": "https://api.github.com/users/NikeNano/received_events", "type": "User", "site_admin": false }
[ { "login": "NikeNano", "id": 22057410, "node_id": "MDQ6VXNlcjIyMDU3NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikeNano", "html_url": "https://github.com/NikeNano", "followers_url": "https://api.github.com/users/NikeNano/followers", "following_url": "https://api.github.com/users/NikeNano/following{/other_user}", "gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions", "organizations_url": "https://api.github.com/users/NikeNano/orgs", "repos_url": "https://api.github.com/users/NikeNano/repos", "events_url": "https://api.github.com/users/NikeNano/events{/privacy}", "received_events_url": "https://api.github.com/users/NikeNano/received_events", "type": "User", "site_admin": false } ]
null
[ "Makes sense, I think we should distinguish case when creating an experiment too.\r\nWelcome contribution on this change.", "/assign", "The error originates from that `mysql` is [not case sensitve](https://dev.mysql.com/doc/refman/8.0/en/case-sensitivity.html) in the comparisons as I understand it for CHAR, VARCHAR, TEXT. \r\n```sql\r\nERROR 1062 (23000): Duplicate entry 'new-' for key 'idx_name_namespace'\r\n```\r\nwhere I use `new` as the experiment name. \r\n\r\nWill use lowercase for the comparisons as well. " ]
"2021-02-23T18:34:10"
"2021-03-28T11:14:51"
"2021-03-28T11:14:51"
NONE
null
### What steps did you take: The following code snippet tries to create two experiments. The experiment names only differ in case, e.g. `Untitled` vs `untitled` ``` import kfp client = kfp.Client(host='http://.../pipeline') # create experiment succeeds if there is no experiment named `untitled` (ignoring case) experiment = client.create_experiment('Untitled', namespace='anonymous') # create experiment fails experiment = client.create_experiment('untitled', namespace='anonymous') # output > Create experiment failed.: Already exist error: Failed to create a new experiment. The name untitled already exists. Please specify a new name. experiment = client.get_experiment('untitled', namespace='anonymous') # output > ValueError: No experiment is found with name untitled. ``` ### What happened: Create experiment fails because an experiment with the name already exists. ### What did you expect to happen: `get_experiment` and `create_experiment` behave the same (ignore case or honor it). ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> KFP version: 1.0.4 KFP SDK version: 1.3.0 ### Anything else you would like to add: Found https://github.com/kubeflow/pipelines/issues/2240 but the issue is stale. /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend /area sdk // /area testing // /area engprod -->
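Until create and get behave consistently, one client-side workaround is to normalize experiment names before calling the SDK. A minimal sketch, assuming a lower-casing policy is acceptable for the caller; this is not KFP behaviour, just an illustration.

```python
# Client-side workaround sketch: normalize experiment names so that
# create_experiment and get_experiment agree regardless of case.
# The lower-casing policy is an illustrative choice, not KFP behaviour.
import kfp

def get_or_create_experiment(client: kfp.Client, name: str, namespace: str):
    normalized = name.strip().lower()
    try:
        return client.get_experiment(experiment_name=normalized,
                                      namespace=namespace)
    except ValueError:
        # get_experiment raises ValueError when no experiment matches.
        return client.create_experiment(normalized, namespace=namespace)

client = kfp.Client(host="http://.../pipeline")
experiment = get_or_create_experiment(client, "Untitled", namespace="anonymous")
```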
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5172/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5166
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5166/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5166/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5166/events
https://github.com/kubeflow/pipelines/issues/5166
813,795,265
MDU6SXNzdWU4MTM3OTUyNjU=
5,166
UI metadata sometimes fails to load
{ "login": "muyajil", "id": 19391143, "node_id": "MDQ6VXNlcjE5MzkxMTQz", "avatar_url": "https://avatars.githubusercontent.com/u/19391143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muyajil", "html_url": "https://github.com/muyajil", "followers_url": "https://api.github.com/users/muyajil/followers", "following_url": "https://api.github.com/users/muyajil/following{/other_user}", "gists_url": "https://api.github.com/users/muyajil/gists{/gist_id}", "starred_url": "https://api.github.com/users/muyajil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muyajil/subscriptions", "organizations_url": "https://api.github.com/users/muyajil/orgs", "repos_url": "https://api.github.com/users/muyajil/repos", "events_url": "https://api.github.com/users/muyajil/events{/privacy}", "received_events_url": "https://api.github.com/users/muyajil/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "Hi @muyajil, the code you mentioned is correct, because the _fetch function also returns a Promise, this function can just return it, and let its callsites to await for the promise.\r\n\r\nI'm not sure what the problem you hit, welcome more investigation to the root cause. You may follow https://github.com/kubeflow/pipelines/tree/master/frontend to develop the UI server locally to verify your ideas.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-02-22T19:28:45"
"2022-04-18T17:27:38"
"2022-04-18T17:27:38"
NONE
null
### What steps did you take? Create UI metadata artifacts as inline web apps. ### What happened? When trying to view the artifacts I receive the error: `Could not parse metadata file at: artifacts/xxx/xxx/mlpipeline-ui-metadata.tgz. Error: SyntaxError: Unexpected end of JSON input` Sometimes it works and sometimes it does not. ### What did you expect to happen That I can see the generated web apps always. ### Environment: Azure with OIDC KFP version: 1.2.0 ### Anything else you would like to add: The error is triggered in this line: https://github.com/kubeflow/pipelines/blob/61f9c2c328d245d89c9d9b8c923f24dbbd08cdc9/frontend/src/lib/OutputArtifactLoader.ts#L75 I think the error can be solved by awaiting the promise in this function: https://github.com/kubeflow/pipelines/blob/61f9c2c328d245d89c9d9b8c923f24dbbd08cdc9/frontend/src/lib/Apis.ts#L212 However I am not sure about the implications and would appreciate some feedback. If this would fix the issue, I would happily submit a PR.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5166/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5166/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5164
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5164/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5164/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5164/events
https://github.com/kubeflow/pipelines/issues/5164
813,317,973
MDU6SXNzdWU4MTMzMTc5NzM=
5,164
TFX InfraValidator component fails in KubeFlow
{ "login": "ConverJens", "id": 61828156, "node_id": "MDQ6VXNlcjYxODI4MTU2", "avatar_url": "https://avatars.githubusercontent.com/u/61828156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ConverJens", "html_url": "https://github.com/ConverJens", "followers_url": "https://api.github.com/users/ConverJens/followers", "following_url": "https://api.github.com/users/ConverJens/following{/other_user}", "gists_url": "https://api.github.com/users/ConverJens/gists{/gist_id}", "starred_url": "https://api.github.com/users/ConverJens/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ConverJens/subscriptions", "organizations_url": "https://api.github.com/users/ConverJens/orgs", "repos_url": "https://api.github.com/users/ConverJens/repos", "events_url": "https://api.github.com/users/ConverJens/events{/privacy}", "received_events_url": "https://api.github.com/users/ConverJens/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "neuromage", "id": 206520, "node_id": "MDQ6VXNlcjIwNjUyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/206520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neuromage", "html_url": "https://github.com/neuromage", "followers_url": "https://api.github.com/users/neuromage/followers", "following_url": "https://api.github.com/users/neuromage/following{/other_user}", "gists_url": "https://api.github.com/users/neuromage/gists{/gist_id}", "starred_url": "https://api.github.com/users/neuromage/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neuromage/subscriptions", "organizations_url": "https://api.github.com/users/neuromage/orgs", "repos_url": "https://api.github.com/users/neuromage/repos", "events_url": "https://api.github.com/users/neuromage/events{/privacy}", "received_events_url": "https://api.github.com/users/neuromage/received_events", "type": "User", "site_admin": false }, { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/cc @chensun ", "I think this can be hardly solved without TFX IV providing more flexible configuration options to expose authentication capability when running on a more complex deployment.\r\n\r\nMaybe we shall port this to https://github.com/tensorflow/tfx ?", "@numerology @Bobgy @chensun This has already been raised and evaluated on the TFX side: https://github.com/tensorflow/tfx/issues/3257. \r\n\r\nDo you think authentication is the issue here? If you believe the issue lies with the TFX IV component, please drop a line on the issue mentioned above and we can continue the discussion there.", "@numerology Anything new on this?", "/assign @chensun \r\n/assign @neuromage \r\nto see what's the best direction on this one.\r\n\r\nI still believe TFX IV needs to expose certain k8s specific configuration in order to make this happen but perhaps KFP can also have a KFP-native 'InfraValidator' to dry-run an output model?", "@numerology But is this really an authentication issue, though? IV tries to spawn a pod using the python k8s client but that fails with above message. It seems as when IV tries to create the new pod, the istio sidecar tries to forward to KFs webhook but that fails. I'm not certain but this doesn't seem to be because lack of authentication. Perhaps it's simply that the SA default-editor lacks sufficient privileges to perform this?", "@numerology I tried to deploy IV and specified a service account that has full privileges but that still failed with the same error so SA does not seems to be the issue. I also tried to run the entire pipeline using this SA by specifying it in the KFP UI but with the exact same result.", "@numerology @chensun @neuromage I finally managed to solve this. The issue was that the TF serving pod spawned by IV did not have the `sidecar.istio.io/inject=false` which caused it to fail.\r\n\r\nThe solution is to allow a user to specify annotations for the serving pod that IV spawnes. No changes to KFP are needed so I'm closing this one." ]
"2021-02-22T09:31:23"
"2021-04-07T08:00:39"
"2021-04-07T08:00:39"
CONTRIBUTOR
null
### What steps did you take: I'm running a TFX pipeline with an InfraValidator component (https://www.tensorflow.org/tfx/guide/infra_validator). This works by spinning up a TFServing pod using the k8s python client, and optionally querying it. ### What happened: When InfraValidator tries to spin up TFServing using the CreateNamespacedPod it get a 500 error which seems to originate from istio. See logs below. ### What did you expect to happen: Pod to be successfully created. ### Environment: On-prem KubeFlow installation. How did you deploy Kubeflow Pipelines (KFP)? With KF 1.1.0 istio dex (e.g. multi-user). KFP version: 1.0.0 KFP SDK version: 1.1.2 k8s version: 1.19 TFX Version: 0.27.0 Python version: 3.7 ### Anything else you would like to add: The issue seems to istio rejecting the attempt to create the pod. Logs from failure: ``` INFO:absl:Starting infra validation (attempt 1/5). INFO:absl:Starting KubernetesRunner(image: docker.vby.svenskaspel.se:8181/tensorflow/serving:2.3.0, pod_name: None). INFO:absl:Stopping KubernetesRunner(image: docker.vby.svenskaspel.se:8181/tensorflow/serving:2.3.0, pod_name: None). INFO:absl:Deleting Pod (name=None) WARNING:absl:Error occurred while deleting the Pod. Please run the following command to manually clean it up: kubectl delete pod --namespace admin None Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/tfx/components/infra_validator/executor.py", line 356, in _ValidateOnce runner.Start() File "/usr/local/lib/python3.7/dist-packages/tfx/components/infra_validator/model_server_runners/kubernetes_runner.py", line 140, in Start body=self._BuildPodManifest()) File "/usr/local/lib/python3.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 6115, in create_namespaced_pod (data) = self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) File "/usr/local/lib/python3.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 6206, in create_namespaced_pod_with_http_info collection_formats=collection_formats) File "/usr/local/lib/python3.7/dist-packages/kubernetes/client/api_client.py", line 344, in call_api _return_http_data_only, collection_formats, _preload_content, _request_timeout) File "/usr/local/lib/python3.7/dist-packages/kubernetes/client/api_client.py", line 178, in __call_api _request_timeout=_request_timeout) File "/usr/local/lib/python3.7/dist-packages/kubernetes/client/api_client.py", line 387, in request body=body) File "/usr/local/lib/python3.7/dist-packages/kubernetes/client/rest.py", line 266, in POST body=body) File "/usr/local/lib/python3.7/dist-packages/kubernetes/client/rest.py", line 222, in request raise ApiException(http_resp=r) kubernetes.client.rest.ApiException: (500) Reason: Internal Server Error HTTP response headers: HTTPHeaderDict({'Audit-Id': 'f16c63f3-113e-4eba-b50b-5f56f81c7599', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Wed, 17 Feb 2021 13:30:17 GMT', 'Content-Length': '457'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"sidecar-injector.istio.io\": Post \" https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s \": EOF","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"sidecar-injector.istio.io\": Post \"https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s\": EOF"}]},"code":500} https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s \": 
EOF"}]},"code":500} ``` Any help would be appreciated! /kind bug /area backend
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5164/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5161
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5161/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5161/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5161/events
https://github.com/kubeflow/pipelines/issues/5161
813,195,387
MDU6SXNzdWU4MTMxOTUzODc=
5,161
Metadata writer cannot handle IPv6 metadata service host
{ "login": "vsk2015", "id": 13765217, "node_id": "MDQ6VXNlcjEzNzY1MjE3", "avatar_url": "https://avatars.githubusercontent.com/u/13765217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vsk2015", "html_url": "https://github.com/vsk2015", "followers_url": "https://api.github.com/users/vsk2015/followers", "following_url": "https://api.github.com/users/vsk2015/following{/other_user}", "gists_url": "https://api.github.com/users/vsk2015/gists{/gist_id}", "starred_url": "https://api.github.com/users/vsk2015/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vsk2015/subscriptions", "organizations_url": "https://api.github.com/users/vsk2015/orgs", "repos_url": "https://api.github.com/users/vsk2015/repos", "events_url": "https://api.github.com/users/vsk2015/events{/privacy}", "received_events_url": "https://api.github.com/users/vsk2015/received_events", "type": "User", "site_admin": false }
[ { "id": 1863015205, "node_id": "MDU6TGFiZWwxODYzMDE1MjA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/metadata-writer", "name": "area/metadata-writer", "color": "60fc35", "default": false, "description": "" } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "@Ark-kun Sorry I messed up with my push. Here is the PR https://github.com/kubeflow/pipelines/pull/5246 with correct changes." ]
"2021-02-22T06:58:49"
"2021-03-12T00:12:24"
"2021-03-12T00:12:24"
CONTRIBUTOR
null
The original issue is reported here https://github.com/kubeflow/kubeflow/issues/5605
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5161/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5159
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5159/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5159/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5159/events
https://github.com/kubeflow/pipelines/issues/5159
813,162,428
MDU6SXNzdWU4MTMxNjI0Mjg=
5,159
[FR] Configurable Image in Component.yaml
{ "login": "munagekar", "id": 10258799, "node_id": "MDQ6VXNlcjEwMjU4Nzk5", "avatar_url": "https://avatars.githubusercontent.com/u/10258799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/munagekar", "html_url": "https://github.com/munagekar", "followers_url": "https://api.github.com/users/munagekar/followers", "following_url": "https://api.github.com/users/munagekar/following{/other_user}", "gists_url": "https://api.github.com/users/munagekar/gists{/gist_id}", "starred_url": "https://api.github.com/users/munagekar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/munagekar/subscriptions", "organizations_url": "https://api.github.com/users/munagekar/orgs", "repos_url": "https://api.github.com/users/munagekar/repos", "events_url": "https://api.github.com/users/munagekar/events{/privacy}", "received_events_url": "https://api.github.com/users/munagekar/received_events", "type": "User", "site_admin": false }
[ { "id": 1122445895, "node_id": "MDU6TGFiZWwxMTIyNDQ1ODk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk/components", "name": "area/sdk/components", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false }
[ { "login": "Ark-kun", "id": 1829149, "node_id": "MDQ6VXNlcjE4MjkxNDk=", "avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ark-kun", "html_url": "https://github.com/Ark-kun", "followers_url": "https://api.github.com/users/Ark-kun/followers", "following_url": "https://api.github.com/users/Ark-kun/following{/other_user}", "gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions", "organizations_url": "https://api.github.com/users/Ark-kun/orgs", "repos_url": "https://api.github.com/users/Ark-kun/repos", "events_url": "https://api.github.com/users/Ark-kun/events{/privacy}", "received_events_url": "https://api.github.com/users/Ark-kun/received_events", "type": "User", "site_admin": false } ]
null
[ "/cc @chensun @neuromage ", "What is you main usage scenario for this feature request?\r\n\r\n\r\nUsually we advice out users to either use image tags (which are mutable) or use image replacement (in test scenarios). In KFP it's also possible to replace the container image after creating an instance of the component.\r\nThe `component.yaml` spec describes a command-line program inside container. So it is only compatible with specific family of images, making the `Docker Image` not a good candidate for an input.", "@Ark-kun Thank you for your response.\r\n\r\n# Usage Scenarios\r\n\r\n- Changes in Image, without changes in code. Example changes to python packages in the image, Changing the python docker version, etc. End users of the components could be able to customize the docker image. \r\n- For machine learning it is often necessary to ensure that the packages/versions used in training and serving are the exact same version. There can be subtle changes which might make training and serving different. A user will be able to use the same image with the same package versions for all steps of the training pipeline and use the same in the serving container.\r\n- Some clusters may have strong security requirements such as the images must be pulled in from authorized docker repository. Users on such cluster might want to simply change the image to an authorized docker registry.\r\n\r\n\r\n> Usually we advice out users to either use image tags (which are mutable)\r\n\r\nA mutable image tag does not give reproducibility. \r\n\r\n> it's also possible to replace the container image after creating an instance of the component.\r\n\r\nI was not aware that this is possible. Could you point me to the relevant function or variable ?\r\n\r\nFollowing is something I have resorted to.\r\n\r\n```python\r\nimport yaml\r\ndef override_component_image(yaml_str: str, new_image: str) -> str:\r\n \"\"\"\r\n Overrides the component image in yaml\r\n\r\n Args:\r\n yaml_str: Yaml file as a string\r\n new_image: The new image which it to be written.\r\n\r\n Returns:\r\n Modified yaml file with container image replaced\r\n \"\"\"\r\n yaml_data = yaml.load(yaml_str, Loader=yaml.FullLoader)\r\n yaml_data[\"implementation\"][\"container\"][\"image\"] = new_image\r\n return yaml.dump(yaml_data, sort_keys=False)\r\n\r\n``` \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "\r\n>A mutable image tag does not give reproducibility.\r\n\r\nFair point. But the same is also true about components. For reproducibility you need a strict component version linking to a strict container image version.\r\n\r\n>>it's also possible to replace the container image after creating an instance of the component.\r\n>I was not aware that this is possible. Could you point me to the relevant function or variable ?\r\n\r\nWhat instantiate a component, you get a task object. The task object is ContainerOp, which has `.container` property which allows you to configure any Kubernetes container properties (`image`, `resources`, etc).\r\n\r\nSo, you can tweak the container image after instantiating the component:\r\n\r\n```python\r\ntask1 = op1(...)\r\ntask1.container.image = ...\r\n```\r\nDoes this work for you?\r\n\r\n>Following is something I have resorted to.\r\n\r\nFor a CI/CD scenario the best way would be to create a new version of component.yaml every time you create a new container version. This way, the pipeline can link to a strict component version and have reproducibility.\r\n\r\nP.S. 
I had a thought about building a compiler feature which allows substituting container images. I wonder whether this would be useful.", "```python\r\ntask1 = op1(...)\r\ntask1.container.image = ...\r\n```\r\n> Does this work for you?\r\n\r\nThis seems like a reasonable and elegant solution for component users who have specific requirements. The scenarios I mentioned are rare, and maybe it is not worthwhile to build a compiler feature for this, especially when it is possible to override the image trivially. " ]
"2021-02-22T06:12:24"
"2021-03-13T16:18:29"
"2021-03-13T09:44:58"
CONTRIBUTOR
null
Would it be possible to make the image name configurable in component.yaml specification, so that the end user could override the image. It is possible to write a function to modify the yaml before it is read by ` kfp.components.load_component_from_text`, however I think this would be a useful addition to component.yaml specification, this is something that was supported by ContainerOp. For example the following component.yaml adds another additional inputValue to specify the docker image. However this does not work when loading the component.yaml. ```yaml name: echo description: Academic Component. Echoes input to output. metadata: annotations: version: "0.0.1" inputs: - { name: Text, type: String, description: Text to be echoed } - { name: Docker Image, type: String, description: Docker Image to be used, default: alpine } outputs: - { name: Echoed Text, type: String, description: Echoed Text } implementation: container: image: {inputValue: Docker Image} command: [sh, -c] args: [ echo $0 && mkdir -p "$(dirname "$1")" &&echo $0 > $1, { inputValue: Text }, { outputPath: Echoed Text} ] ``` However this does not work. Attempting to load this component gives an error. ```python kfp.components.load_component_from_text(manifest.read_text()) ``` # Traceback. ``` kfp.components.load_component_from_text(manifest.read_text()) File "/usr/local/lib/python3.8/site-packages/kfp/components/_components.py", line 115, in load_component_from_text component_spec = _load_component_spec_from_component_text(text) File "/usr/local/lib/python3.8/site-packages/kfp/components/_components.py", line 164, in _load_component_spec_from_component_text component_spec = ComponentSpec.from_dict(component_dict) File "/usr/local/lib/python3.8/site-packages/kfp/components/modelbase.py", line 285, in from_dict return parse_object_from_struct_based_on_class_init(cls, struct, serialized_names=cls._serialized_names) File "/usr/local/lib/python3.8/site-packages/kfp/components/modelbase.py", line 238, in parse_object_from_struct_based_on_class_init args[python_name] = parse_object_from_struct_based_on_type(value, param_type) File "/usr/local/lib/python3.8/site-packages/kfp/components/modelbase.py", line 158, in parse_object_from_struct_based_on_type raise TypeError('\n'.join(exception_lines)) TypeError: Error: ContainerImplementation.from_dict(struct=OrderedDict([('container', OrderedDict([('image', OrderedDict([('inputValue', 'Docker Image')])), ('command', ['sh', '-c']), ('args', ['echo $0 && mkdir -p "$(dirname "$1")" &&echo $0 > $1', OrderedDict([('inputValue', 'Text')]), OrderedDict([('outputPath', 'Echoed Text')])])]))])) failed with exception: Error: ContainerSpec.from_dict(struct=OrderedDict([('image', OrderedDict([('inputValue', 'Docker Image')])), ('command', ['sh', '-c']), ('args', ['echo $0 && mkdir -p "$(dirname "$1")" &&echo $0 > $1', OrderedDict([('inputValue', 'Text')]), OrderedDict([('outputPath', 'Echoed Text')])])])) failed with exception: Error: Structure "OrderedDict([('inputValue', 'Docker Image')])" is incompatible with type "<class 'str'>". Structure is not the instance of the type, the type does not have .from_dict method and is not generic. 
Error: GraphImplementation.from_dict(struct=OrderedDict([('container', OrderedDict([('image', OrderedDict([('inputValue', 'Docker Image')])), ('command', ['sh', '-c']), ('args', ['echo $0 && mkdir -p "$(dirname "$1")" &&echo $0 > $1', OrderedDict([('inputValue', 'Text')]), OrderedDict([('outputPath', 'Echoed Text')])])]))])) failed with exception: __init__() got an unexpected keyword argument 'container' Error: Structure "OrderedDict([('container', OrderedDict([('image', OrderedDict([('inputValue', 'Docker Image')])), ('command', ['sh', '-c']), ('args', ['echo $0 && mkdir -p "$(dirname "$1")" &&echo $0 > $1', OrderedDict([('inputValue', 'Text')]), OrderedDict([('outputPath', 'Echoed Text')])])]))])" is not None. Error: Structure "OrderedDict([('container', OrderedDict([('image', OrderedDict([('inputValue', 'Docker Image')])), ('command', ['sh', '-c']), ('args', ['echo $0 && mkdir -p "$(dirname "$1")" &&echo $0 > $1', OrderedDict([('inputValue', 'Text')]), OrderedDict([('outputPath', 'Echoed Text')])])]))])" is incompatible with type "typing.Union[kfp.components._structures.ContainerImplementation, kfp.components._structures.GraphImplementation, NoneType]" - none of the types in Union are compatible. ```
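The workarounds discussed in the comments above can be combined: rewrite the image in the component YAML text before loading it, or patch `task.container.image` on the instantiated task. A hedged sketch follows; it assumes the plain single-image variant of the echo component (without the `inputValue` image placeholder), and the file, registry, and image names are placeholders.

```python
# Usage sketch combining the two workarounds discussed above. It assumes a
# component.yaml with a fixed image (no inputValue placeholder for the image);
# file, registry, and image names are placeholders.
import kfp
import yaml
from kfp import dsl

def override_component_image(yaml_str: str, new_image: str) -> str:
    data = yaml.safe_load(yaml_str)
    data["implementation"]["container"]["image"] = new_image
    return yaml.dump(data, sort_keys=False)

with open("component.yaml") as f:
    component_text = f.read()

# Workaround 1: rewrite the image in the component text before loading it.
echo_op = kfp.components.load_component_from_text(
    override_component_image(component_text, "registry.example.com/alpine:3.13"))

# Workaround 2: patch the container image on the instantiated task.
@dsl.pipeline(name="echo-pipeline")
def pipeline(text: str = "hello"):
    task = echo_op(text=text)
    task.container.image = "registry.example.com/alpine:3.13"
```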
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5159/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5159/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5155
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5155/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5155/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5155/events
https://github.com/kubeflow/pipelines/issues/5155
812,568,150
MDU6SXNzdWU4MTI1NjgxNTA=
5,155
Recommended Daemon Container Support Alternative?
{ "login": "jl-massey", "id": 6101125, "node_id": "MDQ6VXNlcjYxMDExMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6101125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jl-massey", "html_url": "https://github.com/jl-massey", "followers_url": "https://api.github.com/users/jl-massey/followers", "following_url": "https://api.github.com/users/jl-massey/following{/other_user}", "gists_url": "https://api.github.com/users/jl-massey/gists{/gist_id}", "starred_url": "https://api.github.com/users/jl-massey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jl-massey/subscriptions", "organizations_url": "https://api.github.com/users/jl-massey/orgs", "repos_url": "https://api.github.com/users/jl-massey/repos", "events_url": "https://api.github.com/users/jl-massey/events{/privacy}", "received_events_url": "https://api.github.com/users/jl-massey/received_events", "type": "User", "site_admin": false }
[ { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "/cc @Ark-kun @chensun ", "I was wondering about this too\r\ndoes kfp have a Op which can start a container and just leave it there\r\n\r\nThanks", "I've gotten around this by compiling the pipeline yaml, then adding the \"daemon: true\" line to the argo pipeline yaml before uploading to KFP server. This feels like I've taken the long/wrong way around.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically closed because it has not had recent activity. Please comment \"/reopen\" to reopen it.\n" ]
"2021-02-20T09:54:36"
"2022-04-18T17:27:45"
"2022-04-18T17:27:45"
NONE
null
# Argo Daemon Containers I've looked through as many docs as I could find and determined that Argo pipelines support daemon containers, which suit my purposes pretty well (spinning up a database server to use as a cache for a pipeline). # Question: 1. Are there any examples of using daemon containers with the KFP SDK in Python? 2. If not, does someone have a 'solid' alternative to hand-editing the pipeline YAML? Thanks a lot; also, I'm loving the work the team has done!
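For completeness, the post-compile workaround mentioned in the comments (adding `daemon: true` to an Argo template before uploading) can be scripted. A sketch under stated assumptions — the demo pipeline, template name, host, and images are placeholders, and this still relies on Argo's daemon-container feature rather than first-class KFP SDK support.

```python
# Sketch of the post-compile patch described in the comments: compile a KFP
# pipeline, mark one Argo template as a daemon, then upload the patched file.
# The pipeline, template name, images, and host below are placeholders.
import kfp
import yaml
from kfp import dsl

@dsl.pipeline(name="daemon-demo")
def my_pipeline():
    # Hypothetical cache server we want to keep running for the whole pipeline.
    dsl.ContainerOp(name="redis-cache", image="redis:6", command=["redis-server"])
    dsl.ContainerOp(name="trainer", image="alpine",
                    command=["sh", "-c", "echo training against the cache"])

kfp.compiler.Compiler().compile(my_pipeline, "pipeline.yaml")

with open("pipeline.yaml") as f:
    workflow = yaml.safe_load(f)

# Argo supports `daemon: true` on a template; the KFP DSL does not expose it.
for template in workflow["spec"]["templates"]:
    if template["name"] == "redis-cache":
        template["daemon"] = True

with open("pipeline_daemon.yaml", "w") as f:
    yaml.dump(workflow, f)

kfp.Client(host="http://.../pipeline").upload_pipeline(
    "pipeline_daemon.yaml", pipeline_name="pipeline-with-daemon-cache")
```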
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5155/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5154
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5154/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5154/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5154/events
https://github.com/kubeflow/pipelines/issues/5154
812,544,721
MDU6SXNzdWU4MTI1NDQ3MjE=
5,154
[Testing] periodic functional test failing frequently 2.20
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "capri-xiyue", "id": 52932582, "node_id": "MDQ6VXNlcjUyOTMyNTgy", "avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/capri-xiyue", "html_url": "https://github.com/capri-xiyue", "followers_url": "https://api.github.com/users/capri-xiyue/followers", "following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}", "gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}", "starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions", "organizations_url": "https://api.github.com/users/capri-xiyue/orgs", "repos_url": "https://api.github.com/users/capri-xiyue/repos", "events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}", "received_events_url": "https://api.github.com/users/capri-xiyue/received_events", "type": "User", "site_admin": false }
[ { "login": "capri-xiyue", "id": 52932582, "node_id": "MDQ6VXNlcjUyOTMyNTgy", "avatar_url": "https://avatars.githubusercontent.com/u/52932582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/capri-xiyue", "html_url": "https://github.com/capri-xiyue", "followers_url": "https://api.github.com/users/capri-xiyue/followers", "following_url": "https://api.github.com/users/capri-xiyue/following{/other_user}", "gists_url": "https://api.github.com/users/capri-xiyue/gists{/gist_id}", "starred_url": "https://api.github.com/users/capri-xiyue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/capri-xiyue/subscriptions", "organizations_url": "https://api.github.com/users/capri-xiyue/orgs", "repos_url": "https://api.github.com/users/capri-xiyue/repos", "events_url": "https://api.github.com/users/capri-xiyue/events{/privacy}", "received_events_url": "https://api.github.com/users/capri-xiyue/received_events", "type": "User", "site_admin": false } ]
null
[ "This wasn't notified, because after migration to gcp oss prow, I haven't set up testgrid and notification, that will be the last step during migration in https://github.com/kubernetes/test-infra/issues/14343", "The logs showed that the periodic functional test failed because it encountered 403 error when creating experiment.\r\n**For staging cluster(staging proxy):**\r\nI was able to reproduce the same error in local.\r\nIt worked when I used kfp ui to create experiment but it failed with 403 when I used kfp sdk to create experiment.\r\nIt also failed with 403 even after I granted the service account which is used to run periodic functional test with Editor or Owner role. \r\n\r\n**For my own cluste(production proxy):**\r\nEverything works\r\n\r\nIt looks like something went wrong with kfp sdk or something becomes incompatiable with kfp sdk in staging cluster or the token got by kfp sdk is no longer valid. We used https://github.com/kubeflow/pipelines/blob/61f9c2c328d245d89c9d9b8c923f24dbbd08cdc9/test/kfp-functional-test/run_kfp_functional_test.py#L59 to initialize the sdk.\r\n\r\nChecked the kfp sdk code,\r\nthe kfp sdk should use https://github.com/kubeflow/pipelines/blob/61f9c2c328d245d89c9d9b8c923f24dbbd08cdc9/sdk/python/kfp/_client.py#L201 to get the token.\r\nI downloaded the key of the default service account and set \"GOOGLE_APPLICATION_CREDENTIALS\" with the key file of the default service account but still got 403 error.\r\nFYI @chensun ", "I tried to switch from production proxy to staging proxy for sample test via https://github.com/kubeflow/pipelines/pull/5167. It failed with 403 error.\r\nLooks like there is something wrong with staging inverse proxy.", "Will try to switch from staging proxy to production proxy for periodic functional test https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipelines-periodic-functional-test/1363984968635650048 to further verify whether the root cause is staging inverse proxy.", "https://github.com/kubeflow/testing/pull/907 is just a possible temporary fix.", "> [kubeflow/testing#907](https://github.com/kubeflow/testing/pull/907) is just a possible temporary fix.\r\n\r\nVerified that production proxy works with https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipelines-periodic-functional-test/1364066509248270336. This confirms that the root cause of failure is probably staging inverse proxy.", "Found out the root cause. Somehow the staging inverse proxy only passes request with scope \"https://www.googleapis.com/auth/cloud-platform.read-only\". And we use https://github.com/kubeflow/pipelines/blob/1fee4054a77d39877d997ec94374201188fa87d0/sdk/python/kfp/_auth.py#L40 which is \"https://www.googleapis.com/auth/cloud-platform\".\r\nI was able to create experiment when I changed the scope to \"https://www.googleapis.com/auth/cloud-platform.read-only\"\r\nBut staging proxy should support both scope \"https://www.googleapis.com/auth/cloud-platform\" and \"https://www.googleapis.com/auth/cloud-platform.read-only\". The team which is responsible for staging inverse proxy is working on the investigation and fix.", "Awesome, does that mean your efforts prevented an outage in prod?", "> Awesome, does that mean your efforts prevented an outage in prod?\r\n\r\nI think so.", "https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipelines-periodic-functional-test/1364764611215101952 passed." ]
"2021-02-20T07:50:15"
"2021-02-25T02:35:29"
"2021-02-25T02:35:28"
CONTRIBUTOR
null
https://oss-prow.knative.dev/?repo=kubeflow%2Fpipelines /assign @capri-xiyue
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5154/timeline
null
completed
null
null
false
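The thread in issue 5154 above traces the functional test's 403s to the staging inverse proxy accepting only tokens scoped to `https://www.googleapis.com/auth/cloud-platform.read-only`, while the KFP SDK's auth helper requests the broader `cloud-platform` scope. The snippet below is a minimal sketch of that scope distinction, not the fix that was actually applied (the test was switched back to the production proxy): it mints a token with an explicit scope via google-auth and hands it to `kfp.Client` through `existing_token`, the KFP 1.x client parameter for supplying a pre-fetched token. The host value is a placeholder.

```python
import google.auth
import google.auth.transport.requests
import kfp

# Placeholder host for an AI Platform Pipelines inverse-proxy endpoint.
HOST = "https://<proxy-id>-dot-<region>.pipelines.googleusercontent.com"

# Request application-default credentials restricted to the read-only scope
# the staging proxy was observed to accept (scopes are honored for
# service-account credentials; user ADC may ignore them).
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform.read-only"]
)
credentials.refresh(google.auth.transport.requests.Request())

# `existing_token` lets the client skip its own token flow, which defaults
# to the broader cloud-platform scope.
client = kfp.Client(host=HOST, existing_token=credentials.token)
print(client.list_experiments(page_size=1))
```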
https://api.github.com/repos/kubeflow/pipelines/issues/5152
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5152/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5152/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5152/events
https://github.com/kubeflow/pipelines/issues/5152
811,948,463
MDU6SXNzdWU4MTE5NDg0NjM=
5,152
[Testing] Postsubmit failing 2021 Feb 19
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Figured out some previous failures were because of test workflow timeout, it timeouts at 1 hour, but now it's exceeded:\r\n> [test-timing] It took 60m35.255s.\r\n\r\nWe need to increase timeout to make it stabler.\r\n\r\nSome recent failures was because of GKE instability.", "Increased integration test timeout.\r\nand recent postsubmit has succeeded, we can close the issue" ]
"2021-02-19T11:33:58"
"2021-02-20T01:11:02"
"2021-02-20T01:11:02"
CONTRIBUTOR
null
example: https://oss-prow.knative.dev/view/gs/oss-prow/logs/kubeflow-pipeline-postsubmit-integration-test/1362660888636559360 it's been failing for a while since 1.4.0 release.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5152/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5151
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5151/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5151/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5151/events
https://github.com/kubeflow/pipelines/issues/5151
811,765,211
MDU6SXNzdWU4MTE3NjUyMTE=
5,151
Default Notebook images cannot install KFP due to enum34
{ "login": "aabbccddeeffgghhii1438", "id": 35978194, "node_id": "MDQ6VXNlcjM1OTc4MTk0", "avatar_url": "https://avatars.githubusercontent.com/u/35978194?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aabbccddeeffgghhii1438", "html_url": "https://github.com/aabbccddeeffgghhii1438", "followers_url": "https://api.github.com/users/aabbccddeeffgghhii1438/followers", "following_url": "https://api.github.com/users/aabbccddeeffgghhii1438/following{/other_user}", "gists_url": "https://api.github.com/users/aabbccddeeffgghhii1438/gists{/gist_id}", "starred_url": "https://api.github.com/users/aabbccddeeffgghhii1438/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aabbccddeeffgghhii1438/subscriptions", "organizations_url": "https://api.github.com/users/aabbccddeeffgghhii1438/orgs", "repos_url": "https://api.github.com/users/aabbccddeeffgghhii1438/repos", "events_url": "https://api.github.com/users/aabbccddeeffgghhii1438/events{/privacy}", "received_events_url": "https://api.github.com/users/aabbccddeeffgghhii1438/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Additional info:\r\nenum34 is installed by jupyter-http-over-ws==0.0.7. The latest version, 0.0.8, has added restrictions in setup.py to prevent installing enum34 for python3.4+. A brute force fix would be changing the pip installs in the image to the following:\r\n\r\n``RUN pip3 uninstall -y enum34 jupyter-http-over-ws && pip3 --no-cache-dir install jupyter-console==6.0.0 jupyterlab xgboost kubeflow-fairing==1.0.1 kfp jupyter-http-over-ws``", "/cc @chensun ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Same here.", "/cc @DavidSpek @kimwnasptd \nLooks like the issue should be fixed on notebook side", "The default notebook images contain KFP and I haven't seen this error come up when I built them. ", "I believe this is no longer relevant. The default behavior changed with commit https://github.com/kubeflow/manifests/commit/d36fc9c0555c936c7b71fd273b8e4604985ebba8#diff-dcdcdee366160f43e5771e4d4ccbfcb180ea8fbcfa23b07be243a326f49a2a39, as spawner_ui_config was updated to use new docker images, which presumably have KFP preinstalled.", "Thanks for the update!" ]
"2021-02-19T07:10:44"
"2021-07-04T07:24:22"
"2021-07-04T07:24:22"
NONE
null
### What steps did you take: [A clear and concise description of what the bug is.] I ran "pip install kfp" in the default notebook server image gcr.io/kubeflow-images-public/tensorflow-2.1.0-notebook-cpu:1.0.0. ### What happened: ... ERROR: Command errored out with exit status 1: command: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-e1cn2jdf/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' wheel cwd: None Complete output (14 lines): Traceback (most recent call last): File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.6/dist-packages/pip/__main__.py", line 16, in <module> from pip._internal.main import main as _main # isort:skip # noqa File "/usr/local/lib/python3.6/dist-packages/pip/_internal/main.py", line 8, in <module> import locale File "/usr/lib/python3.6/locale.py", line 16, in <module> import re File "/usr/lib/python3.6/re.py", line 142, in <module> class RegexFlag(enum.IntFlag): AttributeError: module 'enum' has no attribute 'IntFlag' ---------------------------------------- ERROR: Command errored out with exit status 1: /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-e1cn2jdf/overlay --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- 'setuptools>=40.8.0' wheel Check the logs for full command output. ### What did you expect to happen: It should install successfully. The above error happens because enum34 is installed. The notebook image uses python3.6, which does not need an enum backport. Also, I cannot uninstall inside the image without root privileges, which means the HEAD images need to be rebuilt. ### Environment: <!-- Please fill in those that seem relevant. --> How did you deploy Kubeflow Pipelines (KFP)? <!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). --> Azure with OIDC KFP version: 1.2.0 KFP SDK version: Can't install.. ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind bug <!-- Please include labels by uncommenting them to help us better triage issues, choose from the following --> <!-- // /area frontend // /area backend // /area sdk // /area testing // /area engprod -->
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5151/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5151/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5214
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5214/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5214/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5214/events
https://github.com/kubeflow/pipelines/issues/5214
818,615,254
MDU6SXNzdWU4MTg2MTUyNTQ=
5,214
'kubeflow-pipelines-profile-controller' fails to deploy pods on profile creation when ResourceQuota is set in the profile.
{ "login": "henrysecond1", "id": 16417183, "node_id": "MDQ6VXNlcjE2NDE3MTgz", "avatar_url": "https://avatars.githubusercontent.com/u/16417183?v=4", "gravatar_id": "", "url": "https://api.github.com/users/henrysecond1", "html_url": "https://github.com/henrysecond1", "followers_url": "https://api.github.com/users/henrysecond1/followers", "following_url": "https://api.github.com/users/henrysecond1/following{/other_user}", "gists_url": "https://api.github.com/users/henrysecond1/gists{/gist_id}", "starred_url": "https://api.github.com/users/henrysecond1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/henrysecond1/subscriptions", "organizations_url": "https://api.github.com/users/henrysecond1/orgs", "repos_url": "https://api.github.com/users/henrysecond1/repos", "events_url": "https://api.github.com/users/henrysecond1/events{/privacy}", "received_events_url": "https://api.github.com/users/henrysecond1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This would be more like a KFP-specific issue\r\n\r\n/cc @Bobgy \r\n\r\nCan you help transfer the issue to kubeflow/pipelines?", "Thanks @henrysecond1! That makes sense adding some default value there.\r\nManifests is currently migrating to kubeflow/pipelines repo. Would you want to open a PR after that?", "Sure, thanks for the explanation. I'll open PR after the migration. " ]
"2021-02-18T16:27:26"
"2021-03-13T03:26:47"
"2021-03-13T03:26:47"
CONTRIBUTOR
null
In multi-user mode, it seems like `kubeflow-pipelines-profile-controller` deploy below pods on Kubeflow profile creation. - `ml-pipeline-ui-artifact` - `ml-pipeline-visualizationserver` When `ResourceQuota` is set in the profile, `kubeflow-pipelines-profile-controller` fails to deploy `ml-pipeline-ui-artifact` and `ml-pipeline-visualizationserver` with below error. ``` Warning FailedCreate 17m replicaset-controller Error creating: pods "ml-pipeline-ui-artifact-684c5db68-s74w8" is forbidden: failed quota: kf-resource-quota: must specify cpu,memory ``` - Related code: [https://github.com/kubeflow/manifests/blob/master/apps/pipeline/upstream/installs/multi-user/pipelines-profile-controller/sync.py](https://github.com/kubeflow/manifests/blob/master/apps/pipeline/upstream/installs/multi-user/pipelines-profile-controller/sync.py) - Cause: The container resource limit & request is not set on the pod specs, so the pods can not be deployed in the namespace (which has `ResourceQuota` ). Since Kubeflow profile supports setting `ResourceQuota`, `kubeflow-pipelines-profile-controller` should set container resource requests & limits in pod specs to avoid above errors. I confirmed that with below patch, ml-pipeline pods are successfully deployed. ```python diff --git a/apps/pipeline/upstream/installs/multi-user/pipelines-profile-controller/sync.py b/apps/pipeline/upstream/installs/multi-user/pipelines-profile-controller/sync.py index 75c6e5db..a0e71fbf 100644 --- a/apps/pipeline/upstream/installs/multi-user/pipelines-profile-controller/sync.py +++ b/apps/pipeline/upstream/installs/multi-user/pipelines-profile-controller/sync.py @@ -104,6 +104,16 @@ class Controller(BaseHTTPRequestHandler): "ports": [{ "containerPort": 8888 }], + "resources": { + "requests": { + "cpu": "50m", + "memory": "200Mi" + }, + "limits": { + "cpu": "500m", + "memory": "2Gi" + }, + } }], "serviceAccountName": "default-editor", @@ -204,7 +214,17 @@ class Controller(BaseHTTPRequestHandler): "IfNotPresent", "ports": [{ "containerPort": 3000 - }] + }], + "resources": { + "requests": { + "cpu": "50m", + "memory": "200Mi" + }, + "limits": { + "cpu": "500m", + "memory": "2Gi" + }, + } }], "serviceAccountName": "default-editor" ``` Please take a look, thanks.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5214/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5214/timeline
null
completed
null
null
false
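The patch in issue 5214 above adds resource requests and limits to the pod specs that the profile controller's `sync.py` emits, so the pods can be admitted into namespaces that enforce a `ResourceQuota`. Below is a small standalone sketch of that idea, not the controller's actual code; the helper name and the example container dict are illustrative, and the request/limit values mirror the ones proposed in the patch.

```python
# Default resources to inject into containers that do not declare any,
# so they satisfy a namespace ResourceQuota requiring cpu/memory.
DEFAULT_RESOURCES = {
    "requests": {"cpu": "50m", "memory": "200Mi"},
    "limits": {"cpu": "500m", "memory": "2Gi"},
}

def with_default_resources(container: dict) -> dict:
    """Return a copy of a container spec with a resources stanza filled in if absent."""
    patched = dict(container)
    patched.setdefault("resources", DEFAULT_RESOURCES)
    return patched

# Example: patching a visualization-server-like container spec.
container = {
    "name": "ml-pipeline-visualizationserver",
    "image": "gcr.io/ml-pipeline/visualization-server",
    "imagePullPolicy": "IfNotPresent",
    "ports": [{"containerPort": 8888}],
}
print(with_default_resources(container)["resources"])
```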
https://api.github.com/repos/kubeflow/pipelines/issues/5148
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5148/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5148/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5148/events
https://github.com/kubeflow/pipelines/issues/5148
811,093,738
MDU6SXNzdWU4MTEwOTM3Mzg=
5,148
[FR] Default resource requirements/limits for the KFP UI and system services
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." } ]
closed
false
{ "login": "NikeNano", "id": 22057410, "node_id": "MDQ6VXNlcjIyMDU3NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikeNano", "html_url": "https://github.com/NikeNano", "followers_url": "https://api.github.com/users/NikeNano/followers", "following_url": "https://api.github.com/users/NikeNano/following{/other_user}", "gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions", "organizations_url": "https://api.github.com/users/NikeNano/orgs", "repos_url": "https://api.github.com/users/NikeNano/repos", "events_url": "https://api.github.com/users/NikeNano/events{/privacy}", "received_events_url": "https://api.github.com/users/NikeNano/received_events", "type": "User", "site_admin": false }
[ { "login": "NikeNano", "id": 22057410, "node_id": "MDQ6VXNlcjIyMDU3NDEw", "avatar_url": "https://avatars.githubusercontent.com/u/22057410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NikeNano", "html_url": "https://github.com/NikeNano", "followers_url": "https://api.github.com/users/NikeNano/followers", "following_url": "https://api.github.com/users/NikeNano/following{/other_user}", "gists_url": "https://api.github.com/users/NikeNano/gists{/gist_id}", "starred_url": "https://api.github.com/users/NikeNano/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikeNano/subscriptions", "organizations_url": "https://api.github.com/users/NikeNano/orgs", "repos_url": "https://api.github.com/users/NikeNano/repos", "events_url": "https://api.github.com/users/NikeNano/events{/privacy}", "received_events_url": "https://api.github.com/users/NikeNano/received_events", "type": "User", "site_admin": false } ]
null
[ "Got some help from Sid Palas:\r\n\r\n```\r\nA couple of example request settings:\r\nml-pipeline (api server)\r\n requests:\r\n cpu: '2'\r\n memory: 4Gi\r\nml-pipeline-ui\r\n requests:\r\n cpu: 10m\r\n memory: 70Mi\r\nworkflow-controller (argo)\r\n requests:\r\n cpu: 200m\r\n memory: 3Gi\r\nminio\r\n requests:\r\n cpu: 20m\r\n memory: 25Mi\r\npersistent-agent\r\n requests:\r\n cpu: 120m\r\n memory: 2Gi\r\n```\r\n\r\nsee thread https://kubeflow.slack.com/archives/CE10KS9M4/p1613655024114300", "According to the argo documentation the memory and cpu usage for argo scales linearly with the nbr of workflows, [see](https://github.com/argoproj/argo-workflows/blob/master/docs/cost-optimisation.md#limit-the-total-number-of-workflows-and-pods). So users will probably have to adjust this according if they are running heavier workloads or like to reduce costs. \r\n\r\nI would be happy to update this!\r\n\r\n/assign\r\n\r\n", "thank you @NikeNano ", "/reopen\r\nafter https://github.com/kubeflow/pipelines/pull/5273, we need to apply default resource requirement to argo pods again", "@Bobgy: Reopened this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/5148#issuecomment-811585951):\n\n>/reopen\r\n>after https://github.com/kubeflow/pipelines/pull/5273, we need to apply default resource requirement to argo pods again\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>" ]
"2021-02-18T13:21:06"
"2021-04-01T03:52:15"
"2021-04-01T03:52:15"
CONTRIBUTOR
null
UPDATE: in the end, we decided to only add resource requirements, see discussion in https://github.com/kubeflow/pipelines/issues/5236#issuecomment-790301148 It's desirable to provide a set of default resource requirements & limits for the KFP UI & system services, to make sure their QoS is `Guaranteed` by default. https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/ I'm not exactly sure what values will be reasonable, because if they are set too low, the services may stop operating when workloads reach a limit. But setting them to make QoS Guaranteed is also important, because otherwise, when there are many other workloads, the KFP UI & API services may be evicted: the default QoS is BestEffort, and BestEffort Pods are the first to be evicted by Kubernetes when it runs out of resources.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5148/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5147
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5147/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5147/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5147/events
https://github.com/kubeflow/pipelines/issues/5147
811,003,688
MDU6SXNzdWU4MTEwMDM2ODg=
5,147
build_image_from_working_dir Kubernetes job creation failed.
{ "login": "Adedolapo-Akin", "id": 47377835, "node_id": "MDQ6VXNlcjQ3Mzc3ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/47377835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Adedolapo-Akin", "html_url": "https://github.com/Adedolapo-Akin", "followers_url": "https://api.github.com/users/Adedolapo-Akin/followers", "following_url": "https://api.github.com/users/Adedolapo-Akin/following{/other_user}", "gists_url": "https://api.github.com/users/Adedolapo-Akin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Adedolapo-Akin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Adedolapo-Akin/subscriptions", "organizations_url": "https://api.github.com/users/Adedolapo-Akin/orgs", "repos_url": "https://api.github.com/users/Adedolapo-Akin/repos", "events_url": "https://api.github.com/users/Adedolapo-Akin/events{/privacy}", "received_events_url": "https://api.github.com/users/Adedolapo-Akin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @Adedolapo-Akin , looks like your KFP is not deployed in kubeflow namespace.\r\nYou can configure the namespace when using build_image_from_working_dir: https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.containers.html#kfp.containers.build_image_from_working_dir" ]
"2021-02-18T11:15:37"
"2021-03-12T01:11:19"
"2021-03-12T01:11:19"
NONE
null
### What steps did you take: kfp v. 1.4.0 I'm trying build_image_from_working_dir() as per https://github.com/kubeflow/pipelines/blob/master/samples/core/container_build/container_build.ipynb, from an AI Platform notebook. ### What happened: code # Building and pushing new container image image_with_packages = build_image_from_working_dir( #working_dir='.', # Optional. Default is the current directory #base_image='google/cloud-sdk:latest', # Optional #image_name='gcr.io/my-org/my-image:latest', # Optional. Default is gcr.io/<project_id>/<notebook_id>/kfp_container ) # Creating component while explicitly specifying the newly-built base image read_data_op = func_to_container_op(read_data, base_image=image_with_packages) Error ERROR:root:Exception when calling CoreV1Api->create_namespaced_pod: (404) Reason: Not Found HTTP response headers: HTTPHeaderDict({'Audit-Id': '49f84f01-752c-4780-adfa-afd0fbdc184a', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Thu, 18 Feb 2021 10:41:35 GMT', 'Content-Length': '196'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces \"kubeflow\" not found","reason":"NotFound","details":{"name":"kubeflow","kind":"namespaces"},"code":404} Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/kfp/containers/_k8s_job_helper.py", line 65, in _create_k8s_job api_response = self._corev1.create_namespaced_pod(yaml_spec['metadata']['namespace'], pod) File "/opt/conda/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 6174, in create_namespaced_pod (data) = self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) # noqa: E501 File "/opt/conda/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 6265, in create_namespaced_pod_with_http_info collection_formats=collection_formats) File "/opt/conda/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 345, in call_api _preload_content, _request_timeout) File "/opt/conda/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 176, in __call_api _request_timeout=_request_timeout) File "/opt/conda/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 388, in request body=body) File "/opt/conda/lib/python3.7/site-packages/kubernetes/client/rest.py", line 278, in POST body=body) File "/opt/conda/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in request raise ApiException(http_resp=r) kubernetes.client.rest.ApiException: (404) Reason: Not Found HTTP response headers: HTTPHeaderDict({'Audit-Id': '49f84f01-752c-4780-adfa-afd0fbdc184a', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Thu, 18 Feb 2021 10:41:35 GMT', 'Content-Length': '196'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces \"kubeflow\" not found","reason":"NotFound","details":{"name":"kubeflow","kind":"namespaces"},"code":404} ### What did you expect to happen: To build image ### Environment: <!-- Please fill in those that seem relevant. --> GCP AI Platform Notebook
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5147/timeline
null
completed
null
null
false
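As the reply in issue 5147 above notes, the namespace used by `build_image_from_working_dir` is configurable, so the builder does not have to assume a `kubeflow` namespace exists. A rough sketch follows, assuming a KFP SDK around 1.4 where `kfp.containers.ContainerBuilder` accepts `namespace` and `gcs_staging` arguments and can be passed to `build_image_from_working_dir` via `builder`; the namespace and bucket names are placeholders, and import path and parameter names should be checked against the linked reference docs for the installed SDK version.

```python
from kfp.containers import build_image_from_working_dir, ContainerBuilder

# Placeholders: point the builder at the namespace where KFP actually runs
# (the 404 above means this cluster has no "kubeflow" namespace) and at a
# GCS location it can use to stage the build context.
builder = ContainerBuilder(
    namespace="my-kfp-namespace",
    gcs_staging="gs://my-bucket/kfp-container-build",
)

image = build_image_from_working_dir(
    working_dir=".",
    base_image="google/cloud-sdk:latest",
    builder=builder,
)
print(image)
```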
https://api.github.com/repos/kubeflow/pipelines/issues/5145
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5145/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5145/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5145/events
https://github.com/kubeflow/pipelines/issues/5145
810,806,090
MDU6SXNzdWU4MTA4MDYwOTA=
5,145
HTTP response body: OpenID Connect token expired: JWT has expired
{ "login": "wanglong001", "id": 14817376, "node_id": "MDQ6VXNlcjE0ODE3Mzc2", "avatar_url": "https://avatars.githubusercontent.com/u/14817376?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wanglong001", "html_url": "https://github.com/wanglong001", "followers_url": "https://api.github.com/users/wanglong001/followers", "following_url": "https://api.github.com/users/wanglong001/following{/other_user}", "gists_url": "https://api.github.com/users/wanglong001/gists{/gist_id}", "starred_url": "https://api.github.com/users/wanglong001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wanglong001/subscriptions", "organizations_url": "https://api.github.com/users/wanglong001/orgs", "repos_url": "https://api.github.com/users/wanglong001/repos", "events_url": "https://api.github.com/users/wanglong001/events{/privacy}", "received_events_url": "https://api.github.com/users/wanglong001/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Duplicate of https://github.com/kubeflow/pipelines/issues/4321" ]
"2021-02-18T06:34:00"
"2021-03-12T01:07:00"
"2021-03-12T01:07:00"
NONE
null
### What steps did you take: 1. gcloud auth activate-service-account account --key-file=... 2. copy credentials.json ~/.config/kfp/ 3. Client is a singleton: kfp.Client(host=KUBEFLOW_AUTHORIZATION.HOST, client_id=KUBEFLOW_AUTHORIZATION.CLIENT_ID, other_client_id=KUBEFLOW_AUTHORIZATION.OTHER_CLIENT_ID, other_client_secret=KUBEFLOW_AUTHORIZATION.OTHER_CLIENT_SECRET) 4. after a few days, this problem occurred ### What happened: ![image](https://user-images.githubusercontent.com/14817376/108314396-ca01b300-71f4-11eb-9161-5aa71dcbd1aa.png) ### What did you expect to happen: ![image](https://user-images.githubusercontent.com/14817376/108314716-514f2680-71f5-11eb-8f3a-4d647ad72a3b.png) _is_refresh_token = True, The token should be refreshed automatically, but JWT has expired. ### Environment: _**client:**_ ubuntu18.06 kfp 1.4.0 kfp-pipeline-spec 0.1.5 kfp-server-api 1.3.0 Python 3.7.7 _**service:**_ google cloud
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5145/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/5139
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5139/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5139/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5139/events
https://github.com/kubeflow/pipelines/issues/5139
809,598,565
MDU6SXNzdWU4MDk1OTg1NjU=
5,139
Building component from Python function producing PendingDeprecationWarning
{ "login": "shaikmanu797", "id": 18584590, "node_id": "MDQ6VXNlcjE4NTg0NTkw", "avatar_url": "https://avatars.githubusercontent.com/u/18584590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shaikmanu797", "html_url": "https://github.com/shaikmanu797", "followers_url": "https://api.github.com/users/shaikmanu797/followers", "following_url": "https://api.github.com/users/shaikmanu797/following{/other_user}", "gists_url": "https://api.github.com/users/shaikmanu797/gists{/gist_id}", "starred_url": "https://api.github.com/users/shaikmanu797/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shaikmanu797/subscriptions", "organizations_url": "https://api.github.com/users/shaikmanu797/orgs", "repos_url": "https://api.github.com/users/shaikmanu797/repos", "events_url": "https://api.github.com/users/shaikmanu797/events{/privacy}", "received_events_url": "https://api.github.com/users/shaikmanu797/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1682717392, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzky", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question", "name": "kind/question", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "In the documentation [here](https://www.kubeflow.org/docs/pipelines/sdk/python-function-components/) I think you can find the information you are looking for. It also points to some examples that should help guide you in the right direction. Let me know if you still have some issues. ", "I've written similar code, I think it's already fixed in later versions. Please correct me if I'm wrong" ]
"2021-02-16T19:47:14"
"2021-04-02T01:00:24"
"2021-04-02T01:00:24"
CONTRIBUTOR
null
### What happened: I have been using the below ContainerOp in a pipeline for unit tests and with `kfp>1.1.0` I started getting `PendingDeprecationWarning` while Compiling the pipeline using ContainerOp. ```python3 @pytest.fixture def add_op() -> ContainerOp: def add_ints(a: int, b: int) -> int: return a + b add_comp = func_to_container_op( func=add_ints, base_image="python:3.7-slim-buster" ) return add_comp @dsl.pipeline("test", "test pipeline") def test_pipeline(a: int, b: int): add = add_op(a, b) add.set_display_name("Add Two Integers") add.add_pod_label("component_name", "add_ints") def test_pipeline_image_settings(self): with tempfile.NamedTemporaryFile(suffix=".yaml") as f: Compiler().compile(pipeline_func=test_pipeline, package_path=f.name) ``` ```console PendingDeprecationWarning: dsl.ContainerOp.image will be removed in future releases. Use dsl.ContainerOp.container.image instead ``` ### What did you expect to happen: Could someone let me know what is an alternative method to use to build component from a python function? ### Environment: Python 3.8 How did you deploy Kubeflow Pipelines (KFP)? Kubeflow on Kubernetes KFP version: <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. --> KFP SDK version: 1.4.0 ### Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] /kind question /area sdk
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5139/timeline
null
completed
null
null
false
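The `PendingDeprecationWarning` in issue 5139 above refers to the `dsl.ContainerOp.image` attribute and, per the maintainers' comments, is addressed in later SDK releases; building components from Python functions is itself not deprecated. For reference, here is a small sketch using `kfp.components.create_component_from_func`, the newer spelling of `func_to_container_op` in KFP SDK 1.x, applied to the example from the issue; whether the warning disappears still depends on the installed SDK version.

```python
import kfp
from kfp import dsl
from kfp.components import create_component_from_func

def add_ints(a: int, b: int) -> int:
    return a + b

# Build a component from the function without touching the deprecated
# dsl.ContainerOp.image attribute directly.
add_op = create_component_from_func(add_ints, base_image="python:3.7-slim-buster")

@dsl.pipeline(name="test", description="test pipeline")
def test_pipeline(a: int, b: int):
    add = add_op(a, b)
    add.set_display_name("Add Two Integers")
    add.add_pod_label("component_name", "add_ints")

# Compile to a local package; the path is a placeholder.
kfp.compiler.Compiler().compile(pipeline_func=test_pipeline,
                                package_path="test_pipeline.yaml")
```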
https://api.github.com/repos/kubeflow/pipelines/issues/5138
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/5138/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/5138/comments
https://api.github.com/repos/kubeflow/pipelines/issues/5138/events
https://github.com/kubeflow/pipelines/issues/5138
808,439,548
MDU6SXNzdWU4MDg0Mzk1NDg=
5,138
Add authentication with ServiceAccountToken
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }
[ { "login": "yanniszark", "id": 6123106, "node_id": "MDQ6VXNlcjYxMjMxMDY=", "avatar_url": "https://avatars.githubusercontent.com/u/6123106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanniszark", "html_url": "https://github.com/yanniszark", "followers_url": "https://api.github.com/users/yanniszark/followers", "following_url": "https://api.github.com/users/yanniszark/following{/other_user}", "gists_url": "https://api.github.com/users/yanniszark/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanniszark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanniszark/subscriptions", "organizations_url": "https://api.github.com/users/yanniszark/orgs", "repos_url": "https://api.github.com/users/yanniszark/repos", "events_url": "https://api.github.com/users/yanniszark/events{/privacy}", "received_events_url": "https://api.github.com/users/yanniszark/received_events", "type": "User", "site_admin": false }, { "login": "elikatsis", "id": 14970053, "node_id": "MDQ6VXNlcjE0OTcwMDUz", "avatar_url": "https://avatars.githubusercontent.com/u/14970053?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elikatsis", "html_url": "https://github.com/elikatsis", "followers_url": "https://api.github.com/users/elikatsis/followers", "following_url": "https://api.github.com/users/elikatsis/following{/other_user}", "gists_url": "https://api.github.com/users/elikatsis/gists{/gist_id}", "starred_url": "https://api.github.com/users/elikatsis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elikatsis/subscriptions", "organizations_url": "https://api.github.com/users/elikatsis/orgs", "repos_url": "https://api.github.com/users/elikatsis/repos", "events_url": "https://api.github.com/users/elikatsis/events{/privacy}", "received_events_url": "https://api.github.com/users/elikatsis/received_events", "type": "User", "site_admin": false } ]
null
[ "Thank you for the proposal! I'd love to see it getting it upstreamed too. It's a common request in #4440.", "Hello!\r\n\r\nI'll provide an update here as I'll be pushing a PR covering the backend part very soon.\r\n\r\nAs mentioned in the first comment, we are adding a new authentication method: authentication using ServiceAccountTokens.\r\nFor this, we need the clients to put ServiceAccountTokens in requests and the backend (KFP API server) to retrieve them and authenticate the requests.\r\n\r\n#### How will this ServiceAccountToken find its way in the requests?\r\n1. The client finds a proper ServiceAccountToken (more on this later on)\r\n2. It adds an `Authorization: Bearer <token>` header in all requests\r\n\r\n#### What does the authentication cycle of the backend look like?\r\n1. We will **extend** the authentication mechanisms of the KFP API server with one more authenticator [and we will make the available authenticators extendable]\r\n2. Every request will pass through all available authenticators (currently, `Kubeflow-UserID` header and ServiceAccountToken) until one succeeds.\r\n Then, that is, if one succeeds, authentication succeeds.\r\n Otherwise, that is, if all authenticators have failed, the request is considered unauthenticated.\r\n\r\n#### How does the ServiceAccountToken authenticator work?\r\n\r\n1. The KFP API server creates a `TokenReview` using the ServiceAccountToken retrieved from the requests bearer token header and some expected audience (for the KFP case, this can be `ml-pipeline`)\r\n2. Kubernetes responds (with the `TokenReviewStatus`) whether the token is associated with a known user and with what audience\r\n3. The KFP API server verifies that `ml-pipeline` is in the audience specified in the Kubernetes response\r\n4. The KFP API server considers the request authenticated and assumes the user specified by Kubernetes in its response\r\n\r\nUseful links:\r\n* https://kubernetes.io/docs/reference/access-authn-authz/authentication/\r\n* https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#tokenreview-v1-authentication-k8s-io\r\n\r\n#### How does the client find a ServiceAccountToken to use?\r\n\r\nKubernetes has built-in ways to project tokens with specific audience for the ServiceAccount of a pod.\r\nEach container of a pod mounts the token similarly to how it would mount some volume.\r\nThe kubelet generates a token and stores it in a file. Then, to retrieve the token, it's just a matter of reading this file.\r\n\r\nThe KFP client should have a seamless way to\r\n1. retrieve the path where the token is mounted,\r\n2. read it, and\r\n3. use it in request headers.\r\n\r\nThe token has an expiration time, however the kubelet makes sure to refresh this token before it expires.\r\nSo, finally, the client should re-read the token every now and then.\r\n\r\nThis last part is also relevant to the discussion of https://github.com/kubeflow/pipelines/issues/4683\r\n\r\nUseful links:\r\n* https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection", "@elikatsis @yanniszark If any help is needed so this can be added for 1.3 please let me know. I think a large portion of the community has been waiting for this a while now and it would be great to have it included in 1.3. 
", "/assign @elikatsis ", "@DavidSpek thanks for volunteering!\r\nWe actually have the code ready for a PR, and we've extensively tested it in our deployments.\r\nI believe it would be really helpful if had the time to test the PRs (backend & client)!\r\n\r\nBefore I open the client PR I'll present some implementation details (we've described an overview in [the comment above](https://github.com/kubeflow/pipelines/issues/5138#issuecomment-795391197))\r\n\r\nAs mentioned in https://github.com/kubeflow/pipelines/issues/4683#issuecomment-719652792, we want to have a generic way to provide credentials to the client. We will be using a `TokenCredentials` abstract class for this and we will be making use of a very interesting built-in Kubernetes `Configuration` functionality: `auth_settings`. [Obviously, we use a `Configuration` in our client ([source](https://github.com/kubeflow/pipelines/blob/1577bdb41913613f6268366b6e6e20fdfddde693/sdk/python/kfp/_client.py#L131)).]\r\n\r\n### Requirements\r\n\r\n1. We want some credentials to find their way in request headers and, more specifically, in the `Authorization: Bearer <token>` header.\r\n2. Also, we need a way to refresh the `token` before making an API call (as mentioned in the comment above, when projecting service account tokens for pods, the kubelet refreshes them every now and then, so a client needs to read the token often)\r\n\r\n### Information about the Kubernetes Configuration object\r\n\r\n1. A Kubernetes `Configuration`, based on its attributes, it may hold some `BearerToken` authentication settings ([source](https://github.com/kubernetes-client/python/blob/f17076f0e12bd894877825906864b42f25756f0c/kubernetes/client/configuration.py#L326))\r\n2. Before making an API call it updates the request using these settings ([source](https://github.com/kubernetes-client/python/blob/b79ad6837b2f5326c7dad488a64eed7c3987e856/kubernetes/client/api_client.py#L166)). Based on these settings, it may populate the request with:\r\n * cookies,\r\n * headers, or\r\n * queries.\r\n\r\n[Expanding (1)] As shown in [this source](https://github.com/kubernetes-client/python/blob/f17076f0e12bd894877825906864b42f25756f0c/kubernetes/client/configuration.py#L332), by providing a `Configuration.api_key[\"authorization\"]` we can add a `BearerToken` auth setting which:\r\n1. adds a header to the request ([source](https://github.com/kubernetes-client/python/blob/f17076f0e12bd894877825906864b42f25756f0c/kubernetes/client/configuration.py#L335))\r\n2. the header name is `authorization` ([source](https://github.com/kubernetes-client/python/blob/f17076f0e12bd894877825906864b42f25756f0c/kubernetes/client/configuration.py#L336))\r\n3. the header value is retrieved using the `get_api_key_with_prefix()` method ([source](https://github.com/kubernetes-client/python/blob/f17076f0e12bd894877825906864b42f25756f0c/kubernetes/client/configuration.py#L337))\r\n\r\n[Expanding (3)] The `get_api_key_with_prefix()` method ([source](https://github.com/kubernetes-client/python/blob/f17076f0e12bd894877825906864b42f25756f0c/kubernetes/client/configuration.py#L295))\r\n1. Eventually returns `self.api_key[\"some-key\"]` with a desired prefix if `self.api_key_prefix[\"some-key\"]` is set\r\n2. Note that before running any of this, it executes the `refresh_api_key_hook()` method if it is defined :exclamation: \r\n\r\n[Expanding (2)] The `refresh_api_key_hook()` method runs **before every request**. 
And, as its name suggests, it's a neat way to refresh the api keys!\r\n\r\n### Conclusions\r\n\r\nTo sum up, what we need to do is:\r\n1. populate our `config.api_key[\"authorization\"] = token`,\r\n2. populate our `config.api_key_prefix[\"authorization\"] = \"Bearer\"`, and\r\n3. provide our `config.refresh_api_key_hook` with a function that updated `config.api_key[\"authorization\"]`.\r\n\r\nSo, for **this case** (authentication with ServiceAccountTokens), we need to\r\n1. Read the contents of a specific file in the container's file system (projected service account tokens are essentially volumes mounted on pods). This is the `token`\r\n2. Use a method that reads and returns the contents of this file as the `refresh_api_key_hook`\r\n\r\n### Design decisions\r\n\r\n1. We will create a subclass of `TokenReview` named `ServiceAccountTokenVolumeCredentials`\r\n2. The class constructor will be expecting a path pointing to the file where the token is stored\r\n3. If the user doesn't provide a path, the constructor will look for an environment setting: the value of the environment variable `ML_PIPELINE_SA_TOKEN_PATH`\r\n4. If the user doesn't provide a path and the environment variable is not set, the constructor will fall back to reading the path `/var/run/secrets/ml-pipeline/token`\r\n5. The `Client` constructor will be expecting a `credentials` argument and manipulate it accordingly\r\n6. If no `credentials` are provided and the client detects it is running inside a pod, it will attempt to use a `ServiceAccountTokenVolumeCredentials`.\r\n\r\n#### How to set up the pod to authenticate against KFP\r\n\r\nWe (Arrikto) have been using a `PodDefault` that configures the pod to authenticate against KFP based on the aforementioned design.\r\nHere follows the `PodDefault`, it essentially describes all that we need to supplement the pod definition with:\r\n```yaml\r\napiVersion: kubeflow.org/v1alpha1\r\nkind: PodDefault\r\nmetadata:\r\n name: access-ml-pipeline\r\nspec:\r\n desc: Allow access to Kubeflow Pipelines\r\n selector:\r\n matchLabels:\r\n access-ml-pipeline: \"true\"\r\n volumeMounts:\r\n - mountPath: /var/run/secrets/ml-pipeline\r\n name: volume-ml-pipeline-token\r\n readOnly: true\r\n volumes:\r\n - name: volume-ml-pipeline-token\r\n projected:\r\n sources:\r\n - serviceAccountToken:\r\n path: token\r\n expirationSeconds: 7200\r\n audience: ml-pipeline\r\n env:\r\n - name: ML_PIPELINE_SA_TOKEN_PATH\r\n value: /var/run/secrets/ml-pipeline/token # this is dependent on the volume mount path and SAT path\r\n```", "@elikatsis Thanks for the detailed post. I will look at it more closely tomorrow and do my best to help test the PRs. ", "@Bobgy, @DavidSpek I've opened two PRs :tada: \r\n\r\n1. Backend: #5286 \r\n2. Client: #5287", "Hi @elikatsis! Thank you for the detailed design and PRs!\r\nI think these are absolutely great work and I'll start looking at them right now.\r\n\r\nHowever, despite that, I'm a little concerned that the design was only made public 5 days before Kubeflow 1.3 feature cut date -- March 15th. I think we agreed early on the rough direction, that was a good heads up, but it's not possible to discuss this fairly complex feature design thoroughly within 5 days. 
If we commit to shipping this in KF 1.3, we can only rush to a decision.\r\n\r\nBesides that an important dependency (important in the terms of making zero-config default better experience) on `PodDefault` was only revealed 3 days before the feature cut date, which I especially worry about.", "@Bobgy thanks for putting time on this!\r\n\r\n>I'm a little concerned that the design was only made public 5 days before Kubeflow 1.3\r\n\r\nYour concerns are totally valid and understandable. We agree it is very close to the first RC and this may be a bit pressing.\r\n\r\n> I think we agreed early on the rough direction, that was a good heads up, but it's not possible to discuss this fairly complex feature design thoroughly within 5 days\r\n\r\n\r\nIndeed, this is an advanced feature. However, most of the changes we had already discussed due to the joint talk you had with @yanniszark.\r\n\r\nThat's why we expect the backend changes to be unsurprising.\r\nAs far as the client is concerned, the change is relatively small and fully backwards compatible. In fact, it doesn't affect existing users at all.\r\n\r\nNote that all of the changes are extensions to existing functionality and are not removing or changing any old behavior.\r\n\r\n> Besides that an important dependency (important in the terms of making zero-config default better experience) on `PodDefault` was only revealed 3 days before the feature cut date, which I especially worry about.\r\n\r\nWe agree, but it's not necessary to have a zero-config issue before the RC. We can still use the alternative of _some_ config, if we want.\r\n\r\nTo sum up: yes, we are very close to the RC (also take into consideration that cutting a release was pushed one week), but let's do our best and see if we can make it! Many users rely on it. If we don't make it, it's ok!", "@elikatsis thanks I just realized the RC cut delay, I'm glad we get some more breath on this feature.\r\n\r\n> We agree, but it's not necessary to have a zero-config issue before the RC. We can still use the alternative of some config, if we want.\r\n\r\nMakes sense, so I'd frame the discussion around common things we agree on, would you mind splitting your PR as smaller ones , so that we can approve the ones we fully agree on right now first for the RC? (For clarification, I don't mean to ask you to split right now, but rather during review if we see parts that everyone agrees on, we can split them out for a quick merge.)\r\n\r\nand I've got very good context on the backend part based on previous discussion with Yannis, I think we can get them merged.\r\n\r\nThe only part I have concerns is the user facing interface to add service account tokens. What do you think about letting KFP api server inject projected service account token to every KFP pod? I don't think that raises more security risk (because service account tokens are already available there), nor is there chance to break existing components. Pros -- we do not need PodDefault there, so one less dependency.\r\n\r\ne.g. I guess we can configure https://argoproj.github.io/argo-workflows/default-workflow-specs/ with a global `podSpecPatch` like https://github.com/argoproj/argo-workflows/blob/master/examples/pod-spec-patch-wf-tmpl.yaml to get this behavior easily.", "For clarification, I'm prioritizing reviewing the backend PR, because it's a blocker of release. 
The SDK PR can be released after Kubeflow release, because users can easily upgrade the SDK at any time, and there's very little coupling to the server.", "I had totally missed these comments :scream: \r\n\r\n>would you mind splitting your PR as smaller ones , so that we can approve the ones we fully agree on right now first for the RC?\r\n\r\nWe've merged the backend now, I hope you are good with this and did not hesitate asking me to split some commits. Next time feel free to explicitly ask for things like that during the review!\r\n\r\n>What do you think about letting KFP api server inject projected service account token to every KFP pod?\r\n>e.g. I guess we can configure https://argoproj.github.io/argo-workflows/default-workflow-specs/ with a global podSpecPatch like https://github.com/argoproj/argo-workflows/blob/master/examples/pod-spec-patch-wf-tmpl.yaml to get this behavior easily.\r\n\r\nThese sound like very good ideas. However, maybe we want an explicit way to declare something like \"allow _this_ pod to have access to KFP, but not _this_ one\".\r\n\r\nWe will iterate on these ideas internally and come back to it!", "@elikatsis Does it make sense to integrate the PodDefault you shared above with the notebook controller to make the user experience more seamless? I believe this would be the best way to solve [this](https://github.com/kubeflow/pipelines/issues/4440) long standing issue. ", "> I had totally missed these comments :scream: \n> \n> >would you mind splitting your PR as smaller ones , so that we can approve the ones we fully agree on right now first for the RC?\n> \n> We've merged the backend now, I hope you are good with this and did not hesitate asking me to split some commits. Next time feel free to explicitly ask for things like that during the review!\n\nNo worries, the backend PR LGTM. I was mostly talking about concerns for the sdk PR.\n\n> >What do you think about letting KFP api server inject projected service account token to every KFP pod?\n> >e.g. I guess we can configure https://argoproj.github.io/argo-workflows/default-workflow-specs/ with a global podSpecPatch like https://github.com/argoproj/argo-workflows/blob/master/examples/pod-spec-patch-wf-tmpl.yaml to get this behavior easily.\n> \n> These sound like very good ideas. However, maybe we want an explicit way to declare something like \"allow _this_ pod to have access to KFP, but not _this_ one\".\n> \n> We will iterate on these ideas internally and come back to it!\n\nI'd prefer adhering to the standard RBAC model. Each Pod has access to a service account, while we add RBAC rules to control what one service account can do. I worry the addition of choosing which pods can have access to KFP api is introducing an unnecessary abstract layer.", "@elikatsis @yanniszark I just recall a separate concern with current implementations.\r\n\r\nWhen we support using service account token to authenticate in KFP api server, we need to configure Istio authorization rules to allow all types of access. However, that seems to break the security assumption when using HTTP user id header to authenticate -- only requests from istio ingress gateway are allowed to access KFP api server.\r\n\r\nHow do you overcome this problem?\r\n\r\nPer previous discussion in https://github.com/kubeflow/pipelines/issues/4440#issuecomment-707579674, we should implement ability to parse the special Istio header X-Forwarded-Client-Cert, so that KFP API server can know which requests come from Istio ingress gateway. 
and this part is currently missing in the implementations", "Above concern has been answered by @yanniszark through slack, and implemented in https://github.com/kubeflow/pipelines/pull/5420. We can add an authorization policy rule to only allow requests from other pods without userid header.", "What is the status on this? Can we authenticate using service accounts now?", "@Bobgy I been thinking about this the past week or so while I'm looking into the Istio and user profile implementation a bit more. I think the current implementation, and relying on service accounts in general, have some serious shortcomings which break the implementation of sharing namespaces. Currently, there is an option to add a user to your namespace as a \"viewer\". Once the \"viewer\" is it the namespace, he/she can enter a notebook. As the service account that is mounted to the notebook has the permissions of the namespace owner, the \"viewer\" can create whatever resources they want in a namespace where they shouldn't have these permissions.\r\n\r\nThis situation gets more problematic when taking into account that many enterprise users will use IAM roles that are bound to the service account in the given namespace for access to cloud resources, such as S3 buckets, and others resources outside the scope of Kubeflow. In this scenario, when a user shares their namespace with another users (be it as \"editor\" or \"viewer\"), they are granting that user IAM permissions which they should not have the authority to do.\r\n\r\nI'm not quite sure what the solution for this is, as I currently do not have the time to work on it (and it would be rather large to implement). But what is clear, is that the solution to this cannot be implemented with service accounts. Taking a notebook server as an example, what I have in mind now would be some way to forward the authorization headers from a browser session to requests made from that notebook to other endpoints. I'm not sure if and how this would be possible, but this would also allow the use of the standard authorization policy for access the Pipelines API within the cluster. For off-cluster resources and the IAM roles, possibly (though I haven't actually looked at this at all yet), the Istio egress gateway can be used along with Istio authorization policies.", "@DavidSpek Thank you for raising the concerns, did you check that KFP supports granular permissions -- e.g. viewer, writer.\n\nThe default for adding a contributor grants editor permissions, but it's only the central dashboard UI that do not support viewers.\n\nSimilarly, notebooks should implement such a control.", "@DavidSpek \r\nAs a temporary workaround, what if instead inviting contributors to user's \"private\" profile we invite them to team/project profile? Specifically, separate profile e.g. \"Project X\" gets created with access to specific cloud resources (e.g. with dedicated IAM role). Then we can manage it's [members](https://www.kubeflow.org/docs/components/multi-tenancy/getting-started/#managing-contributors-manually).\r\n\r\nThis at least solves the issue of unauthorized access you mentioned.\r\n\r\nIn my opinion, in general it would be better to think of collaboration in Kubeflow as of shared resources to which users can be granted/revoked access.", "@Bobgy Can you maybe link to where the granular permissions for KFP are defined or handled? All I can find at the moment is role bindings for service account.\r\n\r\n@bartgras A project \"profile\" would be an option. 
However, you will run into the problem that there is no distinction between users and projects. Thus, you will need to create a user for each project and then share that project namespace with the contributors. I believe things break when you try to have 1 user be the owner of multiple namespaces. This is because a profile/user and a namespace are the same thing, and there is no concept of a project manager of some sort. ", "@DavidSpek\r\nThis issue is about notebooks (pods) talking to pods. What you describe happening is correct, but it's not a problem per se. It's closely tied to Kubernetes concepts and its model of isolation.\r\n\r\nWhen we're speaking of pods talking to pods, using ServiceAccounts **is** the way to go since we are running on Kubernetes. We shouldn't go back to something similar to KFAM, which proved to cause trouble. We just need to be aware of the concepts.\r\n\r\nThe discussion of your concern about a user with read permissions connecting to a notebook with edit permissions is valid, but belongs in a different issue.\r\nIn short, we believe that the way to go is having a new profile for which the admin assigns permissions to users.\r\n\r\n>[...] However, you will run into the problem that there is no distinction between users and projects. [...] a profile/user and a namespace are the same thing, and there is no concept of a project manager of some sort.\r\n\r\nThere **can** be a profile with no actual user owner; **user and profile are distinct entities**. We consider profiles and namespaces equivalent, because a profile is a namespace with specific default configuration.", "The problems I have raised are indeed mostly related to notebook pods, as these are also the largest security issue that will be the most difficult to solve. The issue with using any type of service account in a notebook pod is that no distinction can be made between a \"viewer\" and an \"editor\", which makes this problem relevant for the current implementation for accessing the KFP endpoint. Anybody with access to a notebook pod has the same permissions, because they are all using the same service account. So the concept of a user having \"writer\"/\"editor\" or \"viewer\" permissions breaks completely, and therefore also breaks the KFP security implementation.\r\n\r\nAnother example of this was actually featured in a [Microsoft Security blog post](https://www.microsoft.com/security/blog/2020/06/10/misconfigured-kubeflow-workloads-are-a-security-risk/), where an admin might think that disabling custom images in the jupyter spawner UI would actually stop people from running custom images. However, because the service account mounted to the notebook pod has the permissions to spawn a notebook (I believe it is even the same service account), a user can very easily spawn a notebook with a custom image from their notebook pod.\r\n\r\nA solution for this **must** have some way of taking the user session headers into account, as there is no other way to distinguish the origin of a request. This is a non-trivial problem which **cannot** be solved with a new profile where the admin assigns permissions to users, as this does not tackle the problem of service account permissions **in any way**.", "@DavidSpek Can you create a separate issue for the notebooks WG to track the discussion? It doesn't seem related to this issue specifically.", "@Bobgy Sure. 
I actually thought of a possible solution to this problem last night, so I will work that out a bit more and create an issue.\r\n\r\nShort overview of my idea:\r\nI think the solution to safely sharing namespaces is to bind the service account of the user creating the notebook to the notebook pod. Then, in the web-app we can check if there is a difference in the permissions between the service account bound to the pod and the user trying to access the notebook. If the user trying to access the notebook doesn’t have all the same permissions (or more), the connect button in the Jupyter Web App is greyed out for that user. To actually block the user from accessing the notebook, some kind of Istio authorization policy also needs to be created.", "UPDATE: after some feasibility validation, I think my idea below doesn't work as I thought.\r\n\r\n## Conclusion\r\n\r\nI prefer @elikatsis 's current implementation as the canonical authentication mechanism.\r\n\r\nHere are some investigation logs:\r\n\r\n## How to get the token programmatically\r\n\r\nWe can use the Python k8s client's [create_namespaced_service_account_token](https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#create_namespaced_service_account_token) method to programmatically get a projected token with the pipelines.kubeflow.org audience.\r\n\r\n> ## **create_namespaced_service_account_token**\r\n> V1TokenRequest create_namespaced_service_account_token(name, namespace, body, dry_run=dry_run, field_manager=field_manager, pretty=pretty)\r\n> \r\n> ### Parameters\r\n> \r\n> Name | Type | Description | Notes\r\n> ------------- | ------------- | ------------- | -------------\r\n> **name** | **str**| name of the TokenRequest | \r\n> **namespace** | **str**| object name and auth scope, such as for teams and projects | \r\n> **body** | [**V1TokenRequest**](V1TokenRequest.md)| | \r\n\r\n## Why it's not good\r\n\r\nThe major problem is that we must know the service account **name** of the current Pod to call this create_namespaced_service_account_token API. However, the service account **name** is not exposed by default; we must set up a volume to project metadata info into the Pod.\r\n\r\nHowever, this is against my initial motivation --- I wanted to have an authentication mechanism that does not depend on Pod spec changes. Now it seems this is impossible. If we have two different ways to authenticate, but both need different changes to the Pod spec, that's even worse than only having one way to do this. I'd say that using the originally proposed service account token projection volume is a clearer abstraction to achieve the goal we want.\r\n\r\nTherefore, I came to the conclusion that we should stick to only one solution -- the existing PR: https://github.com/kubeflow/pipelines/pull/5676\r\n\r\n\r\n\r\n======\r\n\r\nMy idea to make this authentication zero-config:\r\n\r\nhttps://github.com/kubeflow/pipelines/pull/5287#pullrequestreview-619441730\r\n\r\n> I don't like the idea of depending on serviceaccount projection very much, because the SDK would depend on external configuration for each pod it runs in.\r\n> I wonder if you ever considered using [TokenRequest API](https://www.pulumi.com/docs/reference/pkg/kubernetes/authentication/v1/tokenrequest/) directly, that will make the SDK usable anywhere. As long as the service account has RBAC permission of TokenRequest API, which seems to be a reasonable default for both KFP pipelines and notebooks.", "Any update on this and possible options? 
This is still a blocker for in cluster kfp clients in Notebook servers. ", "I am facing this issue in Kubeflow 1.3. I assumed the backend piece is integrated as part of the 1.3 release, so I tried using the python sdk from the branch here: https://github.com/arrikto/kubeflow-pipelines/tree/feature-client-creds-sa-token-volume\r\nas this is part of the MR that would be merged to fix this problem. I still ran into the issue of authentication failure. Here's the error log: \r\n```\r\nApiException: (500)\r\nReason: Internal Server Error\r\nHTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'trailer': 'Grpc-Trailer-Content-Type', 'date': 'Tue, 25 May 2021 23:31:31 GMT', 'x-envoy-upstream-service-time': '182', 'server': 'istio-envoy', 'x-envoy-decorator-operation': 'ml-pipeline.kubeflow.svc.cluster.local:8888/*', 'transfer-encoding': 'chunked'})\r\nHTTP response body: {\"error\":\"Internal error: [Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header., Authentication failure: Unauthenticated: Review.Status.Authenticated is false: Failed to authenticate token review]\\nFailed to authorize with API resource references\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).canAccessExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:249\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:148\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:748\\nmain.apiServerInterceptor\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:750\\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\\ngoogle.golang.org/grpc.(*Server).handleStream\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nFailed to authorize with API resource 
references\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:150\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:748\\nmain.apiServerInterceptor\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:750\\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\\ngoogle.golang.org/grpc.(*Server).handleStream\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\",\"code\":13,\"message\":\"Internal error: [Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header., Authentication failure: Unauthenticated: Review.Status.Authenticated is false: Failed to authenticate token review]\\nFailed to authorize with API resource references\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).canAccessExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:249\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:148\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:748\\nmain.apiServerInterceptor\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:750\\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\\ngoogle.golang.org/grpc.(*Server).handleStream\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nFailed to authorize with API resource 
references\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:150\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:748\\nmain.apiServerInterceptor\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:750\\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\\ngoogle.golang.org/grpc.(*Server).handleStream\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\",\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"Internal error: [Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header., Authentication failure: Unauthenticated: Review.Status.Authenticated is false: Failed to authenticate token review]\\nFailed to authorize with API resource references\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).canAccessExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:249\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:148\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:748\\nmain.apiServerInterceptor\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:750\\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\\ngoogle.golang.org/grpc.(*Server).handleStream\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nFailed to authorize with API resource 
references\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:150\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:748\\nmain.apiServerInterceptor\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:750\\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\\ngoogle.golang.org/grpc.(*Server).handleStream\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\",\"error_details\":\"Internal error: [Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header., Authentication failure: Unauthenticated: Review.Status.Authenticated is false: Failed to authenticate token review]\\nFailed to authorize with API resource references\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).canAccessExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:249\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:148\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:748\\nmain.apiServerInterceptor\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:750\\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\\ngoogle.golang.org/grpc.(*Server).handleStream\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\\nFailed to authorize with API resource 
references\\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:150\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:748\\nmain.apiServerInterceptor\\n\\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\\n\\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:750\\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\\ngoogle.golang.org/grpc.(*Server).handleStream\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\\n\\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\\nruntime.goexit\\n\\t/usr/local/go/src/runtime/asm_amd64.s:1357\"}]}\r\n```\r\n\r\nI was not able to get PodDefault working as mentioned in the comment here: https://github.com/kubeflow/pipelines/issues/5138#issuecomment-797003710, but I manually added the volume and volume mount to the jupyter notebook pod by following the steps here: https://www.gitmemory.com/issue/kubeflow/kubeflow/3571/520660278\r\n\r\n@elikatsis am I missing something here while trying to get in cluster authentication working?", "FYI, I investigated the other idea of letting KFP SDK use k8s client to get service account token with KFP audience.\r\nI figured out that it also needs some pod spec change, so it's better we stick to @elikatsis's current design and impl.\r\n\r\nDetails logged in https://github.com/kubeflow/pipelines/issues/5138#issuecomment-845846555.", "Hi @jaiganeshp, I just tried getting a token with kfp audience and used it on KFP API (deployed via Kubeflow 1.3 on GCP). It worked for me.\r\n\r\nTherefore, I think KFP backend auth is correctly implemented. I suspect there's sth not working correctly when you prepare your volume or how your SDK reads the volume." ]
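The client-side flow debated in these comments can be summarized in a short sketch. This is not the canonical SDK implementation; it only illustrates how a notebook pod could read a projected, audience-scoped ServiceAccount token and hand it to the KFP client. The environment variable name, mount path, and in-cluster host below are assumptions that depend on how the PodDefault (or manual pod spec patch) is configured in a given deployment.

```python
import os

import kfp

# Assumed projection settings: a PodDefault (or pod spec patch) mounts a token
# with audience "pipelines.kubeflow.org" at this path and exports its location
# via KF_PIPELINES_SA_TOKEN_PATH. Adjust both to match your cluster.
TOKEN_PATH = os.environ.get(
    "KF_PIPELINES_SA_TOKEN_PATH",
    "/var/run/secrets/kubeflow/pipelines/token",
)

with open(TOKEN_PATH) as f:
    token = f.read().strip()

# In-cluster KFP API endpoint; the service name and port may differ per deployment.
client = kfp.Client(
    host="http://ml-pipeline.kubeflow.svc.cluster.local:8888",
    existing_token=token,
)

print(client.list_experiments(namespace="my-profile-namespace"))
```

Projected tokens expire, so a long-running client should re-read the file before each call; the SDK changes referenced in this thread are intended to handle that refresh automatically, so prefer that interface where it is available.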
"2021-02-15T11:09:18"
"2023-03-06T19:27:10"
null
CONTRIBUTOR
null
### Problem Statement Clients in various namespaces (e.g., Notebooks) need to access the Pipelines API. However, there is currently no way for these clients to authenticate to the Pipelines API: https://github.com/kubeflow/pipelines/issues/4440 https://github.com/kubeflow/pipelines/issues/4733 In-cluster clients need a way to authenticate to the KFP API Server. ### Proposed Solution The correct way to do this is by using audience-scoped ServiceAccountTokens. In Arrikto's Kubeflow distribution, we have been successfully using this method for a long time, in numerous customer environments. We want to upstream this solution so the whole community can benefit as well, since we see this is an issue many users bump into. Changes need to happen in 2 places: - API Server, which needs to support authentication with ServiceAccountToken. - KFP Client, to better support this authentication method. /assign @yanniszark cc @Bobgy
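On the API server side, the proposal amounts to validating the presented token with a Kubernetes TokenReview scoped to the KFP audience. The real check lives in the Go API server; the Python sketch below is only an illustration of that validation step, with the audience string taken from the discussion above and all other names treated as placeholders.

```python
from kubernetes import client, config

def authenticate_kfp_token(token: str) -> str:
    """Return the authenticated username, or raise if the token is rejected."""
    config.load_incluster_config()  # assumes this runs inside the cluster
    review = client.V1TokenReview(
        spec=client.V1TokenReviewSpec(
            token=token,
            audiences=["pipelines.kubeflow.org"],  # audience from this issue's discussion
        )
    )
    result = client.AuthenticationV1Api().create_token_review(body=review)
    status = result.status
    if not status.authenticated:
        raise PermissionError(f"token review failed: {status.error}")
    # Typically "system:serviceaccount:<namespace>:<serviceaccount-name>",
    # which the server can then authorize against the target namespace.
    return status.user.username
```

This is the "Failed to authenticate token review" path visible in the error log above: the review comes back with `authenticated: false` when the token is missing, expired, or scoped to a different audience.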
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/5138/reactions", "total_count": 22, "+1": 22, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/5138/timeline
null
null
null
null
false