Dataset columns (name: dtype, value range / classes):
url: stringlengths 59–59
repository_url: stringclasses, 1 value
labels_url: stringlengths 73–73
comments_url: stringlengths 68–68
events_url: stringlengths 66–66
html_url: stringlengths 49–49
id: int64, 782M–1.89B
node_id: stringlengths 18–24
number: int64, 4.97k–9.98k
title: stringlengths 2–306
user: dict
labels: list
state: stringclasses, 2 values
locked: bool, 1 class
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: stringclasses, 4 values
active_lock_reason: null
body: stringlengths 0–63.6k, ⌀ (nullable)
reactions: dict
timeline_url: stringlengths 68–68
performed_via_github_app: null
state_reason: stringclasses, 3 values
draft: bool, 0 classes
pull_request: dict
is_pull_request: bool, 1 class
https://api.github.com/repos/kubeflow/pipelines/issues/7455
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7455/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7455/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7455/events
https://github.com/kubeflow/pipelines/issues/7455
1,178,514,300
I_kwDOB-71UM5GPrN8
7,455
[feature] Support IR YAML format in API
{ "login": "Linchin", "id": 12806577, "node_id": "MDQ6VXNlcjEyODA2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/12806577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Linchin", "html_url": "https://github.com/Linchin", "followers_url": "https://api.github.com/users/Linchin/followers", "following_url": "https://api.github.com/users/Linchin/following{/other_user}", "gists_url": "https://api.github.com/users/Linchin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Linchin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Linchin/subscriptions", "organizations_url": "https://api.github.com/users/Linchin/orgs", "repos_url": "https://api.github.com/users/Linchin/repos", "events_url": "https://api.github.com/users/Linchin/events{/privacy}", "received_events_url": "https://api.github.com/users/Linchin/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "Linchin", "id": 12806577, "node_id": "MDQ6VXNlcjEyODA2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/12806577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Linchin", "html_url": "https://github.com/Linchin", "followers_url": "https://api.github.com/users/Linchin/followers", "following_url": "https://api.github.com/users/Linchin/following{/other_user}", "gists_url": "https://api.github.com/users/Linchin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Linchin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Linchin/subscriptions", "organizations_url": "https://api.github.com/users/Linchin/orgs", "repos_url": "https://api.github.com/users/Linchin/repos", "events_url": "https://api.github.com/users/Linchin/events{/privacy}", "received_events_url": "https://api.github.com/users/Linchin/received_events", "type": "User", "site_admin": false }
[ { "login": "Linchin", "id": 12806577, "node_id": "MDQ6VXNlcjEyODA2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/12806577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Linchin", "html_url": "https://github.com/Linchin", "followers_url": "https://api.github.com/users/Linchin/followers", "following_url": "https://api.github.com/users/Linchin/following{/other_user}", "gists_url": "https://api.github.com/users/Linchin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Linchin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Linchin/subscriptions", "organizations_url": "https://api.github.com/users/Linchin/orgs", "repos_url": "https://api.github.com/users/Linchin/repos", "events_url": "https://api.github.com/users/Linchin/events{/privacy}", "received_events_url": "https://api.github.com/users/Linchin/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-03-23T18:37:11"
"2022-04-06T17:02:13"
"2022-04-06T17:02:13"
COLLABORATOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> Use YAML instead of JSON for intermediate representation (IR). ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> YAML is a superset of JSON and is easier to understand for a reader. It also paves the way to enhancing readability in the future. ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> NA --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7455/timeline
null
completed
null
null
false
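The request in issue 7455 above (emit the KFP intermediate representation as YAML rather than JSON) can be illustrated with the SDK's compiler. A minimal sketch, assuming a recent kfp SDK with the 2.x-style API, where the `package_path` extension selects the output format; at the time of the issue the 1.8.x SDK still wrote JSON, and the pipeline below is only an illustrative example:

```python
from kfp import compiler, dsl


@dsl.component
def say_hello(name: str) -> str:
    return f"Hello, {name}!"


@dsl.pipeline(name="hello-pipeline")
def hello_pipeline(name: str = "world"):
    say_hello(name=name)


# Writes the PipelineSpec IR as YAML, the format requested in the issue.
compiler.Compiler().compile(
    pipeline_func=hello_pipeline,
    package_path="hello_pipeline.yaml",
)
```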
https://api.github.com/repos/kubeflow/pipelines/issues/7454
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7454/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7454/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7454/events
https://github.com/kubeflow/pipelines/issues/7454
1,178,485,678
I_kwDOB-71UM5GPkOu
7,454
[feature] Access to ParallelFor values and set_paralellism per Op
{ "login": "mikwieczorek", "id": 40968185, "node_id": "MDQ6VXNlcjQwOTY4MTg1", "avatar_url": "https://avatars.githubusercontent.com/u/40968185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mikwieczorek", "html_url": "https://github.com/mikwieczorek", "followers_url": "https://api.github.com/users/mikwieczorek/followers", "following_url": "https://api.github.com/users/mikwieczorek/following{/other_user}", "gists_url": "https://api.github.com/users/mikwieczorek/gists{/gist_id}", "starred_url": "https://api.github.com/users/mikwieczorek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mikwieczorek/subscriptions", "organizations_url": "https://api.github.com/users/mikwieczorek/orgs", "repos_url": "https://api.github.com/users/mikwieczorek/repos", "events_url": "https://api.github.com/users/mikwieczorek/events{/privacy}", "received_events_url": "https://api.github.com/users/mikwieczorek/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-03-23T18:07:28"
"2022-03-24T22:53:58"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? I stumbled upon a case where two features could be useful when dynamically building a pipeline that is controlled by some outside config. ### What is the use case or pain point? Let's say we have a number of datasets and a number of models we want to train. Not all models should be run on all datasets, so we use a config to specify pipeline content. (Examples in the code). Moreover, we would like each run-per-dataset to be parallel to the others, and some tasks may be CPU/RAM heavy, so we would like to limit the parallelism per Op-type. This is a case where we want to run a pipeline from GitHub CI/CD to test the newly pushed code on a set of datasets and models to ensure its validity and performance. ### Is there a workaround currently? A semi-workaround is presented in the example: using the config in a function that returns a pipeline function and iterating over the config without using `ParallelFor`, but this seems problematic for limiting parallelism per Op. For the problem of parallelism I saw a related issue https://github.com/kubeflow/pipelines/issues/4089, but I also know that Argo allows setting parallelism per Task/Step, so having that in Kubeflow would be nice. Also, using separate pipelines per dataset is somewhat of a working option. ```import kfp from kfp.components import func_to_container_op import kfp.dsl as dsl import json ### Functions def print_fun(calculation: str) -> str: print("Calculation Type: ",calculation) return calculation def add(a: float, b: float) -> float: return a + b def multiply(a: float, b: float) -> float: return a * b def divide(a: float, b: float) -> float: return a / b def subtract(a: float, b: float) -> float: return a - b ### Container ops print_op = func_to_container_op(print_fun) add_op = func_to_container_op(add) multiply_op = func_to_container_op(multiply) divide_op = func_to_container_op(divide) subtract_op = func_to_container_op(subtract) ### Dict to easily fetch op according to config name2operator = { "add": add_op, "multiply": multiply_op, "divide": divide_op, "subtract": subtract_op } ### Example config master_config = [ { "name": "name1", "a": 1, "b": 1, "models": ["add", "subtract", "multiply", "divide"] }, { "name": "name2", "a": 0, "b": 1, "models": ["multiply", "divide"] }, { "name": "name3", "a": 100, "b": 2, "models": ["subtract", "multiply", "divide"] }, ] ``` Semi-working solution ``` ### Workaround to use for-loop and create model-task in line with config. ### Using ParallelFor won't allow to get the config as def get_pipeline(config): @dsl.pipeline( name='Parallel pipeline mock test', description='Pipeline with for-loop and config-based operators.' ) def multi_pipeline(): for config_item in config: root_op = print_op(config_item['name']) root_op. for model_name in config_item['models']: new_op = name2operator[model_name](config_item['a'], config_item['b']) new_op.after(root_op) return multi_pipeline client = kfp.Client() client.create_run_from_pipeline_func(get_pipeline(master_config), arguments={}) ``` Example of my thought process in solving the problem – changing `ParallelFor` ``` @dsl.pipeline( name='Parallel pipeline mock test', description='Pipeline with for-loop and config-based operators.' ) def multi_pipeline( config ): root_ops = [] # I only want parallelism limit on the print_op for idx, item in enumerate(dsl.ParallelFor(config, parallelism=2)): root_op = print_op(item['name']) root_ops.append(root_op) # Now it is impossible, as config is PipelineParam and it is not iterable, but would be nice if it is for root_idx, item in enumerate(config): # item['models'] is also not iterable for model_name in item['models']: new_op = name2operator[model_name](item.a, item.b) new_op.after(root_ops[root_idx]) client = kfp.Client() client.create_run_from_pipeline_func(multi_pipeline, arguments={ "config": json.dumps(master_config) }) ``` --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7454/reactions", "total_count": 10, "+1": 10, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7454/timeline
null
null
null
null
false
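A partial workaround for the parallelism half of issue 7454 above that the v1 SDK already offers is a pipeline-wide cap via the pipeline conf; it does not give the requested per-Op limit. A minimal sketch, assuming the kfp v1 SDK (`func_to_container_op`, `get_pipeline_conf`); the per-loop `parallelism=` argument shown in the issue body is a later addition and may not be available in every SDK version:

```python
from kfp import dsl
from kfp.components import func_to_container_op


def work(item: str) -> str:
    print("processing", item)
    return item


work_op = func_to_container_op(work)


@dsl.pipeline(name="parallel-demo", description="Fan-out with a pipeline-wide parallelism cap.")
def parallel_demo():
    # Caps the number of concurrently running pods for the whole pipeline run,
    # not per Op type (which is what the issue asks for).
    dsl.get_pipeline_conf().set_parallelism(2)
    with dsl.ParallelFor(["a", "b", "c", "d"]) as item:
        work_op(item)
```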
https://api.github.com/repos/kubeflow/pipelines/issues/7450
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7450/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7450/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7450/events
https://github.com/kubeflow/pipelines/issues/7450
1,177,365,961
I_kwDOB-71UM5GLS3J
7,450
[sdk] unarchive_run sets run.storage_state to None instead of STORAGESTATE_ACTIVE
{ "login": "dandawg", "id": 12484302, "node_id": "MDQ6VXNlcjEyNDg0MzAy", "avatar_url": "https://avatars.githubusercontent.com/u/12484302?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dandawg", "html_url": "https://github.com/dandawg", "followers_url": "https://api.github.com/users/dandawg/followers", "following_url": "https://api.github.com/users/dandawg/following{/other_user}", "gists_url": "https://api.github.com/users/dandawg/gists{/gist_id}", "starred_url": "https://api.github.com/users/dandawg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dandawg/subscriptions", "organizations_url": "https://api.github.com/users/dandawg/orgs", "repos_url": "https://api.github.com/users/dandawg/repos", "events_url": "https://api.github.com/users/dandawg/events{/privacy}", "received_events_url": "https://api.github.com/users/dandawg/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Could you please archive and then unarchive the run on UI and then check the storage_state from sdk, and see if it behaves the same? Thank you." ]
"2022-03-22T22:20:48"
"2022-03-24T22:48:00"
null
NONE
null
### Environment * KFP version: 1.4 * KFP SDK version: 1.8.11 * All dependencies version: NA ### Steps to reproduce 1. Do a pipeline run: ``` run = client.create_run_from_pipeline_package( pipeline.yaml, run_name='my_run', experiment_name='my_exp', arguments=args ) result = run.wait_for_run_to_complete() ``` 2. Archive the run ``` client.runs.archive_run(result.run.id) ``` 3. Check that the storage state has transitioned to STORAGESTATE_ARCHIVED ``` run_after_archive = client.runs.get_run(result.run.id) print(run_after_archive.run.storage_state) ``` 4. Now, unarchive the run, and verify the storage state has transitioned to None ``` client.runs.unarchive_run(result.run.id) run_after_unarchive = client.runs.get_run(result.run.id) print(run_after_unarchive.run.storage_state) ``` ### Expected result When a run is unarchived, the storage state should transition back to STORAGESTATE_ACTIVE. This would be consistent with what I get when I archive and then subsequently unarchive an experiment using similar commands (client.experiment.archive_experiment, and client.experiment.unarchive_experiment). In the experiment unarchive case, the storage state transitions back to STORAGESTATE_ACTIVE. ### Materials and Reference Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7450/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7449
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7449/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7449/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7449/events
https://github.com/kubeflow/pipelines/issues/7449
1,177,338,195
I_kwDOB-71UM5GLMFT
7,449
kubeflow pipeline run in "kubeflow" namespace instead of user namespace
{ "login": "pwzhong", "id": 15694079, "node_id": "MDQ6VXNlcjE1Njk0MDc5", "avatar_url": "https://avatars.githubusercontent.com/u/15694079?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pwzhong", "html_url": "https://github.com/pwzhong", "followers_url": "https://api.github.com/users/pwzhong/followers", "following_url": "https://api.github.com/users/pwzhong/following{/other_user}", "gists_url": "https://api.github.com/users/pwzhong/gists{/gist_id}", "starred_url": "https://api.github.com/users/pwzhong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pwzhong/subscriptions", "organizations_url": "https://api.github.com/users/pwzhong/orgs", "repos_url": "https://api.github.com/users/pwzhong/repos", "events_url": "https://api.github.com/users/pwzhong/events{/privacy}", "received_events_url": "https://api.github.com/users/pwzhong/received_events", "type": "User", "site_admin": false }
[ { "id": 1682717392, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzky", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question", "name": "kind/question", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Hello @pwzhong , how do you deploy Kubeflow? You are probably installing KFP single-user mode in a full Kubeflow deployment." ]
"2022-03-22T21:41:17"
"2022-03-24T22:54:26"
null
NONE
null
/kind question **Question:** My team has installed kubeflow 1.4 on AKS 1.21 with multi-user mode enabled. When I created a pipeline run in a user namespace, it was run in the “kubeflow” namespace instead of the user namespace. There is no namespace specified in the pipeline yaml, and when a pipeline run is created, it automatically adds "namespace: kubeflow" to the pod metadata and runs it there, as shown in the screenshot below. ![image](https://user-images.githubusercontent.com/15694079/159579556-157f1d7c-9580-4a58-b8cd-cfc971a03608.png) I tried to specify the namespace in the pipeline yaml file, but it gives me an error. Probably because kubeflow does not allow overwriting the namespace. ![image](https://user-images.githubusercontent.com/15694079/159580132-4660b8ec-4cda-46ee-829c-fd2c10e09aab.png) If I created a notebook, it was inside the user namespace as expected though. Any idea how to fix this?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7449/timeline
null
null
null
null
false
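For a multi-user deployment like the one described in issue 7449 above, runs are expected to land in the profile namespace that the client targets. A minimal sketch of setting that explicitly from the v1 SDK; the host URL and namespace name are placeholders, and the `namespace` handling shown here is an assumption based on the kfp 1.x client API rather than a confirmed fix for this report:

```python
import kfp

# Hypothetical in-cluster host; adjust to your deployment.
client = kfp.Client(host="http://ml-pipeline.kubeflow.svc.cluster.local:8888")

# Persist a default user namespace for subsequent calls...
client.set_user_namespace("my-profile")

# ...or pass it per call so the run is created in the profile namespace
# instead of the shared "kubeflow" namespace.
run = client.create_run_from_pipeline_package(
    "pipeline.yaml",
    arguments={},
    run_name="namespaced-run",
    namespace="my-profile",
)
```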
https://api.github.com/repos/kubeflow/pipelines/issues/7445
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7445/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7445/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7445/events
https://github.com/kubeflow/pipelines/issues/7445
1,175,154,847
I_kwDOB-71UM5GC3Cf
7,445
[feature] Option to disable downloading of artifacts through UI
{ "login": "bodak", "id": 6807878, "node_id": "MDQ6VXNlcjY4MDc4Nzg=", "avatar_url": "https://avatars.githubusercontent.com/u/6807878?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bodak", "html_url": "https://github.com/bodak", "followers_url": "https://api.github.com/users/bodak/followers", "following_url": "https://api.github.com/users/bodak/following{/other_user}", "gists_url": "https://api.github.com/users/bodak/gists{/gist_id}", "starred_url": "https://api.github.com/users/bodak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bodak/subscriptions", "organizations_url": "https://api.github.com/users/bodak/orgs", "repos_url": "https://api.github.com/users/bodak/repos", "events_url": "https://api.github.com/users/bodak/events{/privacy}", "received_events_url": "https://api.github.com/users/bodak/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Hello @bodak , are you using full fledged Kubeflow deployment? That is the only way we can use multi-user mode, where artifact access control is possible.\r\n\r\nAlso, data access restriction should be enforced on the storage side, by allowing only the selected personnel to download them. If such access control is in-place, UI doesn't need to do anything to disable download. It is because UI only open a URL link for download, which is coming from storage solution (based on which storage you are using). So if you are managing sensitive data, that restriction should happen on the storage IAM side. \r\n\r\nIf using minio, I am not sure whether whether minio has such access control, if not, it is the limitation on minio side.", "Thanks for the explanation! This was for Kubeflow Pipelines only.\r\nYour suggestion makes sense and restriction on the storage side were already implemented.\r\nI'll close this issue." ]
"2022-03-21T10:08:25"
"2022-03-31T09:05:31"
"2022-03-31T09:05:31"
NONE
null
### Feature Area /area frontend /area backend ### What feature would you like to see? - Option to disable the ability to download artifacts from the front-end UI. Some form of this would limit accidentally clicking the download link. - Option to restrict access to the back-end artifact storage. Some form of this would limit "guessing" the download link (unsure if that is even possible). ### What is the use case or pain point? We work with sensitive data. It is very simple to download artifacts produced from the front-end UI and save locally. ### Is there a workaround currently? 1. I have not found any option in https://github.com/kubeflow/pipelines/blob/master/frontend/server/app.ts#L119-L150, but I might have been looking in the wrong places. 2. We are looking into blocking access between the minio server and the UI server through kubernetes.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7445/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7444
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7444/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7444/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7444/events
https://github.com/kubeflow/pipelines/issues/7444
1,174,334,504
I_kwDOB-71UM5F_uwo
7,444
Convert PipelineSpec format from json to yaml
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false }
[ { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[ "Related item: https://github.com/kubeflow/pipelines/pull/6524", "cc @jlyaoyuli ", "This PR can help validating the change: https://github.com/kubeflow/pipelines/pull/7570", "Unit test: Update tests accordingly for new yaml format: https://github.com/kubeflow/pipelines/tree/master/frontend/src/data/test" ]
"2022-03-19T19:09:08"
"2022-05-11T19:50:21"
"2022-05-11T19:50:21"
COLLABORATOR
null
## Problem There are two types of Pipeline Template definition: ArgoWorkflow (for KFPv1) and PipelineSpec (for KFPv2). Currently we are using `isPipelineSpec()` to determine whether it is v1 or v2. Reference: https://github.com/kubeflow/pipelines/blob/939f81088b39ba703cc821d18e7a99523662bf5f/frontend/src/lib/v2/WorkflowUtils.ts#L48-L67 We are assuming that if the PipelineTemplate is not ArgoWorkflow, then it is PipelineSpec: see `isArgoWorkflowTemplate()`: https://github.com/kubeflow/pipelines/blob/939f81088b39ba703cc821d18e7a99523662bf5f/frontend/src/lib/v2/WorkflowUtils.ts#L22-L31 We are assuming that PipelineSpec is in JSON format. https://github.com/kubeflow/pipelines/blob/939f81088b39ba703cc821d18e7a99523662bf5f/frontend/src/lib/v2/WorkflowUtils.ts#L34-L45 Now the direction is to use YAML format instead of JSON format on PipelineSpec. We need to switch our logic to adopt YAML format and abandon JSON format. See SDK reference: https://github.com/kubeflow/pipelines/pull/7431. ## Possible solution Possible tool: https://github.com/nodeca/js-yaml ## Note We also need to update test data https://github.com/kubeflow/pipelines/tree/master/frontend/mock-backend/data/v2/pipeline
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7444/timeline
null
completed
null
null
false
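Issue 7444 above targets the frontend (TypeScript, js-yaml), but the observation it relies on also holds in Python and is easy to demonstrate: because YAML is, for practical purposes, a superset of JSON, a single YAML parse handles a template in either serialization, so detection logic no longer needs to assume JSON. A minimal sketch, assuming PyYAML is available; the sample keys and the helper are illustrative only, not the frontend's actual check:

```python
import yaml

json_spec = '{"pipelineSpec": {"pipelineInfo": {"name": "demo"}}}'
yaml_spec = "pipelineSpec:\n  pipelineInfo:\n    name: demo\n"

# One parser covers both serializations of the same document.
assert yaml.safe_load(json_spec) == yaml.safe_load(yaml_spec)


def looks_like_pipeline_spec(template_text: str) -> bool:
    # Illustrative check only: a real implementation would validate against
    # the PipelineSpec schema rather than look for a single key.
    try:
        doc = yaml.safe_load(template_text)
    except yaml.YAMLError:
        return False
    return isinstance(doc, dict) and "pipelineSpec" in doc
```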
https://api.github.com/repos/kubeflow/pipelines/issues/7441
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7441/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7441/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7441/events
https://github.com/kubeflow/pipelines/issues/7441
1,173,988,703
I_kwDOB-71UM5F-aVf
7,441
[feature] Clone recurring run capability in sdk
{ "login": "droctothorpe", "id": 24783969, "node_id": "MDQ6VXNlcjI0NzgzOTY5", "avatar_url": "https://avatars.githubusercontent.com/u/24783969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/droctothorpe", "html_url": "https://github.com/droctothorpe", "followers_url": "https://api.github.com/users/droctothorpe/followers", "following_url": "https://api.github.com/users/droctothorpe/following{/other_user}", "gists_url": "https://api.github.com/users/droctothorpe/gists{/gist_id}", "starred_url": "https://api.github.com/users/droctothorpe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/droctothorpe/subscriptions", "organizations_url": "https://api.github.com/users/droctothorpe/orgs", "repos_url": "https://api.github.com/users/droctothorpe/repos", "events_url": "https://api.github.com/users/droctothorpe/events{/privacy}", "received_events_url": "https://api.github.com/users/droctothorpe/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-03-18T20:18:54"
"2022-03-24T23:04:04"
null
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? There's no way to patch an existing recurring run via the SDK (or GUI). The GUI gets around this with the clone recurring run button, but there's no equivalent in the SDK. A very common scenario, for example, is updating the cron string of an existing recurring run. This is more or less impossible via the SDK. One way to address this would be by adding a `clone_recurring_run` method to the SDK that emulates what the frontend does. <!-- Provide a description of this feature and the user experience. --> ### What is the use case or pain point? Making minor modifications to an existing recurring run via the SDK is not possible. <!-- It helps us understand the benefit of this feature for your use case. --> ### Is there a workaround currently? Only through the GUI via the clone button. <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7441/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7441/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7437
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7437/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7437/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7437/events
https://github.com/kubeflow/pipelines/issues/7437
1,173,349,821
I_kwDOB-71UM5F7-W9
7,437
Why is certificates.k8s.io/v1 used in Cache Deployer instead of OpenSSL?
{ "login": "konsloiz", "id": 22999070, "node_id": "MDQ6VXNlcjIyOTk5MDcw", "avatar_url": "https://avatars.githubusercontent.com/u/22999070?v=4", "gravatar_id": "", "url": "https://api.github.com/users/konsloiz", "html_url": "https://github.com/konsloiz", "followers_url": "https://api.github.com/users/konsloiz/followers", "following_url": "https://api.github.com/users/konsloiz/following{/other_user}", "gists_url": "https://api.github.com/users/konsloiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/konsloiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/konsloiz/subscriptions", "organizations_url": "https://api.github.com/users/konsloiz/orgs", "repos_url": "https://api.github.com/users/konsloiz/repos", "events_url": "https://api.github.com/users/konsloiz/events{/privacy}", "received_events_url": "https://api.github.com/users/konsloiz/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Could you please discuss the problem in this issue?\r\nhttps://github.com/kubeflow/manifests/issues/2165" ]
"2022-03-18T08:56:01"
"2022-03-24T23:03:08"
null
NONE
null
Caching is one of the most crucial features of KFP. Each time a pipeline step is the same as one already executed, the results are loaded from the cache server. Caching is accomplished in KFP via two interdependent modules: the cache deployer and the cache server. While trying to set up the modules in an enterprise cluster ([Mercedes-Benz AG](https://github.com/mercedes-benz/DnA)), it was noted that the installation couldn't be completed. The reason was that the cache deployer is built to generate a signed certificate for the cache server by referring to the Kubernetes CertificateSigningRequest API. ``` yaml ... # create server cert/key CSR and send to k8s API cat <<EOF | kubectl create -f - apiVersion: certificates.k8s.io/v1 kind: CertificateSigningRequest metadata: name: ${csrName} spec: groups: - system:authenticated request: $(cat ${tmpdir}/server.csr | base64 | tr -d '\n') signerName: kubernetes.io/kubelet-serving usages: - digital signature - key encipherment - server auth EOF .. ``` The usage of API server certificates in our enterprise environment is restricted because those allow permission escalation. The security risk is critical, as by using this API, users can order certificates that let them impersonate both the Kubernetes control plane and cluster team access. To adjust the cache deployer's certificate generation process without affecting the actual functionality, and to avoid loosening the security restrictions, we [used](https://github.com/mercedes-benz/DnA/blob/9c2487e111490285ee57dc241aa27778a1acc774/deployment/dockerfiles/kubeflow/kfp/backend/src/cache/deployer/webhook-create-signed-cert.sh#L98) the widely known OpenSSL. Is there any specific reason for using the K8s API? If not, would the community be interested in an upstream contribution?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7437/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7435
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7435/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7435/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7435/events
https://github.com/kubeflow/pipelines/issues/7435
1,172,209,396
I_kwDOB-71UM5F3n70
7,435
[feature] Add a param in create_recurring_run to support override the exists job with the same name in the same experiment
{ "login": "haoxins", "id": 2569835, "node_id": "MDQ6VXNlcjI1Njk4MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/2569835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haoxins", "html_url": "https://github.com/haoxins", "followers_url": "https://api.github.com/users/haoxins/followers", "following_url": "https://api.github.com/users/haoxins/following{/other_user}", "gists_url": "https://api.github.com/users/haoxins/gists{/gist_id}", "starred_url": "https://api.github.com/users/haoxins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haoxins/subscriptions", "organizations_url": "https://api.github.com/users/haoxins/orgs", "repos_url": "https://api.github.com/users/haoxins/repos", "events_url": "https://api.github.com/users/haoxins/events{/privacy}", "received_events_url": "https://api.github.com/users/haoxins/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "Currently we don't have a plan for implementing this feature in `kfp`. If you are interested in contributing this feature, could you please compose a design doc for discussion in the [Kubeflow Pipelines Community](https://www.kubeflow.org/docs/about/community/) meeting?", "Yeah, I can contribute to this.\r\n\r\nAnd this should be a small change so that is it really need a design doc?\r\n\r\nI think there are only two things need to be agreed?\r\n\r\n1. What is the param name should be?\r\n2. Is the flow I provided fine to be accepted?\r\n2.1 `list_recurring_runs()`: List the runs by `experiment_id` and `job name`\r\n2.2 `Delete the jobs`\r\n\r\nIf so, I can submit a PR to continue the feature implementation.", "Thanks for your response, @haoxins, and for your interest in contributing! And thanks also for contributing a comment fix the other day -- that's very helpful.\r\n\r\n**Why a design doc**\r\nI do think a design doc is warranted to explore two aspects of this contribution:\r\n\r\n1) **Costs and benefits of including this feature.** User-facing features in particular require maintenance and possibly slow down future feature development in an effort to maintain complete functionality and backward-compatibility. Those are some of the costs, so it's helpful to explicitly lay out the benefits.\r\n\r\n2) **Different approaches for implementation and their associated costs and benefits.** I think there are a few complexities that may be introduced at implementation time. The current suggested implementation probably would get us most of the way there, but may have some unintended side-effects that are worth exploring in a design doc.\r\n\r\n**Some considerations**\r\nFor number 2 of \"Why a design doc\", I think it may make more sense to implement the CRUD logic in the backend instead of doing so in the SDK. Put differently: we probably would want an SDK PUT rather than an SDK DELETE + POST. This is both a) consistent with the current paradigm of the SDK as a client of the backed and b) allows the backend to do additional cleanup and reference updating associated with deleting a job. @zijianjoy can speak more to the backend responsibilities for an operation like this. This is just one example of some of the considerations that should be explored in a design doc.\r\n\r\n**Design doc structure**\r\nA basic design doc structure is:\r\n1) What is the user story? In other words: what is the objective of the feature contribution? What are users **current** options for achieving this objective?\r\n2) What are some reasonable ways of implementing this objective as a feature? Include pros/cons. (This addresses number 1 of \"Why a design doc\".)\r\n3) Pick a preferred solution from \"Design doc structure\" part 2. What options are there for implementing this solution? Include an explanation, API/code snippets (if relevant), and pros/cons for each. (This addresses number 2 of \"Why a design doc\".)\r\n4) Are there any additional considerations not already discussed?\r\n\r\nPlease don't be discouraged by the enumeration of parts 1-4 here. My intention by being explicit is actually to make the design doc _easier_ to put together. Each section can be somewhat brief so long as it captures the key points.\r\n\r\nThanks again and please follow up with any questions.\r\n\r\ncc/ @ji-yaqi @chensun", "I'm closing this because I don't want to waste too much time for this. " ]
"2022-03-17T10:40:08"
"2022-03-18T16:58:25"
"2022-03-18T16:58:24"
CONTRIBUTOR
null
### Feature Area /area sdk ### What feature would you like to see? Add a param to the `create_recurring_run` func to support overriding an existing recurring job with the same name. ### What is the use case or pain point? In my use case, when I change the logic of my job, I remove the jobs with the same name first and create a new one with the same name again. I do this because I don't want to create duplicated jobs. The code looks like ```python recurring_run_name = "..." recurring_run_description = "..." jobs = kfp_client.list_recurring_runs( experiment_id=experiment_id, filter=filter_by_job_name, ... ).jobs if jobs != None: # Delete the jobs kfp_client.create_recurring_run( experiment_id=experiment_id, job_name=recurring_run_name, description=recurring_run_description, ... ) ``` So, would it be accepted for you guys to add a param such as `replace_exists` to implement this logic? --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7435/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7435/timeline
null
completed
null
null
false
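A sketch of the delete-then-recreate flow described in issue 7435 above, filled in with the corresponding kfp 1.x client calls. The helper name is hypothetical, the method names (`list_recurring_runs`, `delete_job`, `create_recurring_run`) are assumed from the v1 SDK, and, as the maintainers note in the thread, a backend-side PUT would be the cleaner long-term design:

```python
import kfp


def create_or_replace_recurring_run(client: kfp.Client, experiment_id: str,
                                    job_name: str, pipeline_package_path: str,
                                    cron_expression: str, params: dict):
    # Remove any existing recurring runs (jobs) with the same name in the experiment.
    existing = client.list_recurring_runs(experiment_id=experiment_id).jobs or []
    for job in existing:
        if job.name == job_name:
            client.delete_job(job.id)  # assumed kfp 1.x client method

    # Recreate the recurring run with the (possibly updated) settings.
    return client.create_recurring_run(
        experiment_id=experiment_id,
        job_name=job_name,
        cron_expression=cron_expression,
        pipeline_package_path=pipeline_package_path,
        params=params,
    )


# Example usage (placeholder values):
# client = kfp.Client()
# create_or_replace_recurring_run(client, experiment_id="...", job_name="nightly",
#                                 pipeline_package_path="pipeline.yaml",
#                                 cron_expression="0 0 2 * * *", params={})
```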
https://api.github.com/repos/kubeflow/pipelines/issues/7432
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7432/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7432/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7432/events
https://github.com/kubeflow/pipelines/issues/7432
1,171,726,909
I_kwDOB-71UM5F1yI9
7,432
[feature] Support for features in v2
{ "login": "casassg", "id": 6912589, "node_id": "MDQ6VXNlcjY5MTI1ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/6912589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/casassg", "html_url": "https://github.com/casassg", "followers_url": "https://api.github.com/users/casassg/followers", "following_url": "https://api.github.com/users/casassg/following{/other_user}", "gists_url": "https://api.github.com/users/casassg/gists{/gist_id}", "starred_url": "https://api.github.com/users/casassg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/casassg/subscriptions", "organizations_url": "https://api.github.com/users/casassg/orgs", "repos_url": "https://api.github.com/users/casassg/repos", "events_url": "https://api.github.com/users/casassg/events{/privacy}", "received_events_url": "https://api.github.com/users/casassg/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "reference from community meeting: https://docs.google.com/document/d/1cHAdK1FoGEbuQ-Rl6adBDL5W2YpDiUbnMLIwmoXBoAU/edit#bookmark=id.pn3sq4nva5w0", "Another tentative question is: What are the current ways to follow along the state of v1 to v2 transition for backend and all?" ]
"2022-03-17T00:27:03"
"2022-03-17T22:45:01"
null
CONTRIBUTOR
null
### Feature Area /area sdk ### What feature would you like to see? Would like to decipher whether KFP v2 will continue to support: - Recursive pipelines (https://www.kubeflow.org/docs/components/pipelines/sdk/dsl-recursion/) - ContainerOp components using python SDK directly (not python functions) - LocalClient for local execution ### What is the use case or pain point? Recursive pipelines are useful for research HPT and similar use cases where it's not clear when a pipeline will need to stop executing. ### Is there a workaround currently? This is just a request for more info since I haven't seen it in the docs themselves. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7432/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7421
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7421/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7421/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7421/events
https://github.com/kubeflow/pipelines/issues/7421
1,170,384,578
I_kwDOB-71UM5FwqbC
7,421
[backend] "Updating" a recurring run, whilst a recurring run is in progress causes it to terminate
{ "login": "alexlatchford", "id": 628146, "node_id": "MDQ6VXNlcjYyODE0Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/628146?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexlatchford", "html_url": "https://github.com/alexlatchford", "followers_url": "https://api.github.com/users/alexlatchford/followers", "following_url": "https://api.github.com/users/alexlatchford/following{/other_user}", "gists_url": "https://api.github.com/users/alexlatchford/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexlatchford/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexlatchford/subscriptions", "organizations_url": "https://api.github.com/users/alexlatchford/orgs", "repos_url": "https://api.github.com/users/alexlatchford/repos", "events_url": "https://api.github.com/users/alexlatchford/events{/privacy}", "received_events_url": "https://api.github.com/users/alexlatchford/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[]
"2022-03-16T00:06:23"
"2022-03-17T22:40:21"
null
CONTRIBUTOR
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> Atop AWS using the `kubeflow/manifests` official distribution with some overlays specific to Zillow. * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> We use KF v1.3.1 currently (plan to update to v1.5 next month), looks like that includes KFP v1.5.1 ([link](https://github.com/kubeflow/manifests/blob/v1.3.1/apps/pipeline/upstream/base/pipeline/kustomization.yaml#L43)). * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> We run a forked version ([see here](https://github.com/zillow/pipelines)), I think we last pulled in at ~v1.8. ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> The problem is that via CICD we want our scientists to be able to define a `recurring-run` and have it updated on subsequent CICD executions if they change their settings. Given the limitations of the "Job" API in KFP we delete and re-add to "update" a recurring run, (see the [API reference](https://www.kubeflow.org/docs/components/pipelines/reference/api/kubeflow-pipeline-api-spec/#tag-JobService) for more info). What happens in this case if the first recurring run has triggered a workflow then the scheduledworkflow controller deletes the `ScheduledWorkflow` and then via Kubernetes garbage collection it cascades and deletes any in-progress `Workflow` resources. Then when the "new" updated recurring run is created it needs to wait for the new trigger point to schedule the next run, it won't try to resurrect the now terminated run (and it definitely shouldn't thinking about it). ### Expected result <!-- What should the correct behavior be? --> Basically some way of keeping the run that was in progress before the "update" occurred. Potentially when deleting a recurring run if there was a flag available in the API/SDK/CLI to say to [orphan on deletion](https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/#set-orphan-deletion-policy) that'd solve this issue and you'd not need a complicated update API. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> Apologies, maybe this is more of a "feature" request thinking about it now. Apologies for the mischaracterization! --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
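For reference, the orphan-on-deletion behaviour proposed above already exists at the Kubernetes level, just not through the KFP API. A rough sketch of what it looks like when applied directly to the underlying resource, assuming the `kubeflow.org/v1beta1` ScheduledWorkflow CRD and the `kubernetes` Python client; the namespace and resource name below are placeholders:

```python
# Sketch: delete a ScheduledWorkflow without cascading to its in-flight Workflows,
# by asking the API server to orphan dependents instead of garbage-collecting them.
# Assumes the kubeflow.org/v1beta1 ScheduledWorkflow CRD; names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

api.delete_namespaced_custom_object(
    group="kubeflow.org",
    version="v1beta1",
    namespace="my-profile",            # placeholder namespace
    plural="scheduledworkflows",
    name="my-recurring-run",           # placeholder ScheduledWorkflow name
    body=client.V1DeleteOptions(propagation_policy="Orphan"),
)
```

Deleting this way keeps any in-flight Workflow objects alive, which is the behaviour the proposed API/SDK/CLI flag would expose.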
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7421/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7421/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7420
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7420/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7420/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7420/events
https://github.com/kubeflow/pipelines/issues/7420
1,170,348,318
I_kwDOB-71UM5Fwhke
7,420
[SDK] Pipeline runs web UI shows the wrong namespace
{ "login": "emenendez", "id": 3814114, "node_id": "MDQ6VXNlcjM4MTQxMTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3814114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emenendez", "html_url": "https://github.com/emenendez", "followers_url": "https://api.github.com/users/emenendez/followers", "following_url": "https://api.github.com/users/emenendez/following{/other_user}", "gists_url": "https://api.github.com/users/emenendez/gists{/gist_id}", "starred_url": "https://api.github.com/users/emenendez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emenendez/subscriptions", "organizations_url": "https://api.github.com/users/emenendez/orgs", "repos_url": "https://api.github.com/users/emenendez/repos", "events_url": "https://api.github.com/users/emenendez/events{/privacy}", "received_events_url": "https://api.github.com/users/emenendez/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Hello @emenendez , this is a known limitation we have right now, because SDK doesn't have the information about whether it is an KFP standalone or a full-fledged Kubeflow deployment. We don't have a clear idea yet about how to overcome this.", "Thanks so much for following up @zijianjoy!" ]
"2022-03-15T23:01:38"
"2022-03-21T22:56:09"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? As part of a full Kubeflow 1.4 install on GKE. * KFP version: 1.7.0, packaged with Kubeflow 1.4 ### Steps to reproduce 1. Use the KFP Python SDK (for example, `create_run_from_pipeline_func()`) to create a pipeline run from a Jupyter Notebook. Use the `namespace` argument to create this pipeline run in a namespace that's not your own, but in which you are a collaborator. 2. The notebook will display a link to the "run details" page in the web UI of the following form: https://<cluster-domain>/pipeline/#/runs/details/<uuid> 3. Open the link. 4. The pipeline run details page is shown, but the namespace selector in the top-left corner shows your own default namespace, not the namespace in which the pipeline was actually run. ### Expected result The pipeline run details should be shown, and the namespace selector in the top-left corner should show the namespace in which the pipeline was run (not your own namespace). Thank you for your help debugging this issue! --- Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
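For completeness, a minimal sketch of the reproduction in step 1 above, assuming KFP SDK 1.7/1.8 against a multi-user deployment; the host URL, namespace, and the trivial pipeline body are placeholders:

```python
# Sketch of the reproduction: submit a run into a namespace that is not the
# caller's own default namespace. Host and namespace values are placeholders.
import kfp
from kfp import dsl

@dsl.pipeline(name="namespace-demo")
def demo_pipeline():
    # Any step works; a trivial v1 ContainerOp keeps the example short.
    dsl.ContainerOp(name="echo", image="alpine", command=["echo", "hello"])

client = kfp.Client(host="https://<cluster-domain>/pipeline")
client.create_run_from_pipeline_func(
    demo_pipeline,
    arguments={},
    namespace="shared-team-namespace",  # a namespace you collaborate on
)
# The printed "Run details" link then opens with the namespace selector on the
# caller's own default namespace instead of "shared-team-namespace".
```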
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7420/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7420/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7416
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7416/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7416/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7416/events
https://github.com/kubeflow/pipelines/issues/7416
1,169,117,766
I_kwDOB-71UM5Fr1JG
7,416
[bug] google-cloud-pipeline-components.readthedocs.io now points to 1.0.1 version instead of 1.0.0
{ "login": "dianeo-mit", "id": 46697321, "node_id": "MDQ6VXNlcjQ2Njk3MzIx", "avatar_url": "https://avatars.githubusercontent.com/u/46697321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dianeo-mit", "html_url": "https://github.com/dianeo-mit", "followers_url": "https://api.github.com/users/dianeo-mit/followers", "following_url": "https://api.github.com/users/dianeo-mit/following{/other_user}", "gists_url": "https://api.github.com/users/dianeo-mit/gists{/gist_id}", "starred_url": "https://api.github.com/users/dianeo-mit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dianeo-mit/subscriptions", "organizations_url": "https://api.github.com/users/dianeo-mit/orgs", "repos_url": "https://api.github.com/users/dianeo-mit/repos", "events_url": "https://api.github.com/users/dianeo-mit/events{/privacy}", "received_events_url": "https://api.github.com/users/dianeo-mit/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[]
"2022-03-15T02:47:00"
"2022-03-17T22:43:12"
"2022-03-17T22:43:12"
NONE
null
### What steps did you take I went to the main documentation URL: https://google-cloud-pipeline-components.readthedocs.io/ ### What happened: I was directed to this URL, which doesn't exist: https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.1/ ### What did you expect to happen: I expected to be taken to this URL, which does exist - and which worked perfectly last week: https://google-cloud-pipeline-components.readthedocs.io/en/google-cloud-pipeline-components-1.0.0/ ### Environment: Google Chrome Version 99.0.4844.57 (Official Build) (64-bit) ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7416/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7416/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7413
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7413/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7413/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7413/events
https://github.com/kubeflow/pipelines/issues/7413
1,169,007,766
I_kwDOB-71UM5FraSW
7,413
[sdk] Code style/formatting standardization
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "Format with `yapf`: https://github.com/kubeflow/pipelines/pull/7414\r\n\r\nUpdate contributing guidelines: https://github.com/kubeflow/pipelines/pull/7436", "Closing in favor of a larger developer experience improvement effort." ]
"2022-03-14T23:13:33"
"2022-04-27T00:07:18"
"2022-04-27T00:07:17"
MEMBER
null
This issue describes steps for improving the linting and formatting of our codebase. Action items: - [ ] Change source code files to be `pylint` compliant and align `.pylintrc` with the desired state of our source code files - [x] Format all files with `yapf` - [ ] Create (a) CI/CD workflow(s) to hold linting, formatting, and other code checks - [x] Provide developers with tooling to help make their PRs compliant (scripts, tox, etc.) /area sdk /cc @chensun /cc @ji-yaqi
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7413/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7389
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7389/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7389/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7389/events
https://github.com/kubeflow/pipelines/issues/7389
1,163,345,756
I_kwDOB-71UM5FVz9c
7,389
[bug] google_cloud_pipeline_components ModelBatchPredictOp()
{ "login": "Erin-Servian", "id": 77812083, "node_id": "MDQ6VXNlcjc3ODEyMDgz", "avatar_url": "https://avatars.githubusercontent.com/u/77812083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Erin-Servian", "html_url": "https://github.com/Erin-Servian", "followers_url": "https://api.github.com/users/Erin-Servian/followers", "following_url": "https://api.github.com/users/Erin-Servian/following{/other_user}", "gists_url": "https://api.github.com/users/Erin-Servian/gists{/gist_id}", "starred_url": "https://api.github.com/users/Erin-Servian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Erin-Servian/subscriptions", "organizations_url": "https://api.github.com/users/Erin-Servian/orgs", "repos_url": "https://api.github.com/users/Erin-Servian/repos", "events_url": "https://api.github.com/users/Erin-Servian/events{/privacy}", "received_events_url": "https://api.github.com/users/Erin-Servian/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false }
[ { "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false } ]
null
[ "Any updates? I'm having the same issue when creating a batch prediction job that consumes TFRecord files.", "Ah, I believe I found the issue. I missed that the gcs_source_uris argument must be a list of strings. I had passed a string and the quotes around the URI were removed in the payload JSON string. It might be helpful to clarify the documentation here or add some type checking of the inputs.", "thanks mdruby. your comment was helpful in resolving my issue. " ]
"2022-03-09T00:14:22"
"2023-07-27T16:02:11"
null
NONE
null
### What steps did you take: I was deploying a pipeline to Vertex AI that was doing Model Upload and Batch Prediction jobs. But I got an error when the pipeline was executing the Batch Prediction Job. The code is below: ``` from kfp.v2.google.client import AIPlatformClient from google_cloud_pipeline_components import aiplatform as gcc_aip from kfp.v2 import compiler, dsl @dsl.pipeline(name="sample-pipeline", pipeline_root=pipeline_root) def pipeline(): model_upload_op = gcc_aip.ModelUploadOp( project=GCP_PROJECT_ID, display_name=model_display_name, artifact_uri=model_artifact_uri, serving_container_image_uri=PREDICTION_IMAGE, serving_container_ports = [{"container_port": 8080}], serving_container_predict_route = "/predict", serving_container_health_route = "/health", labels=model_metadata ) gcc_aip.ModelBatchPredictOp( project=GCP_PROJECT_ID, job_display_name=pipeline_id, model=model_upload_op.outputs["model"], gcs_source_uris=batch_prediction_data_uri, gcs_destination_output_uri_prefix=batch_prediction_results_uri, instances_format="csv", predictions_format="jsonl", machine_type = "n1-standard-2", starting_replica_count=1, max_replica_count=1, ) compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json") api_client = AIPlatformClient(project_id=GCP_PROJECT_ID, region=GCP_REGION) response = api_client.create_run_from_job_spec( 'pipeline.json', pipeline_root=pipeline_root ) ``` <!-- A clear and concise description of what the bug is.--> ### What happened: The error messages are below: ``` Info 2022-03-07T21:29:52.083653660ZINFO:root:Job started for type: BatchPredictionJob Error 2022-03-07T21:29:52.115350265ZTraceback (most recent call last): Error 2022-03-07T21:29:52.115389275Z File "/opt/python3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main Error 2022-03-07T21:29:52.115585620Z "__main__", mod_spec) Error 2022-03-07T21:29:52.115634257Z File "/opt/python3.7/lib/python3.7/runpy.py", line 85, in _run_code Error 2022-03-07T21:29:52.115704257Z exec(code, run_globals) Error 2022-03-07T21:29:52.115712908Z File "/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/launcher.py", line 229, in <module> Error 2022-03-07T21:29:52.115924803Z main(sys.argv[1:]) Error 2022-03-07T21:29:52.115938316Z File "/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/launcher.py", line 225, in main Error 2022-03-07T21:29:52.116124076Z _JOB_TYPE_TO_ACTION_MAP[job_type](**parsed_args) Error 2022-03-07T21:29:52.116137398Z File "/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/batch_prediction_job_remote_runner.py", line 95, in create_batch_prediction_job Error 2022-03-07T21:29:52.116316276Z insert_artifact_into_payload(executor_input, payload)) Error 2022-03-07T21:29:52.116362741Z File "/opt/python3.7/lib/python3.7/site-packages/google_cloud_pipeline_components/container/v1/gcp_launcher/batch_prediction_job_remote_runner.py", line 47, in insert_artifact_into_payload Error 2022-03-07T21:29:52.116380612Z job_spec = json.loads(payload) Error 2022-03-07T21:29:52.116401022Z File "/opt/python3.7/lib/python3.7/json/__init__.py", line 348, in loads Error 2022-03-07T21:29:52.116745671Z return _default_decoder.decode(s) Error 2022-03-07T21:29:52.116771230Z File "/opt/python3.7/lib/python3.7/json/decoder.py", line 337, in decode Error 2022-03-07T21:29:52.116975231Z obj, end = self.raw_decode(s, idx=_w(s, 0).end()) Error 
2022-03-07T21:29:52.117005384Z File "/opt/python3.7/lib/python3.7/json/decoder.py", line 355, in raw_decode Error 2022-03-07T21:29:52.117186534Z raise JSONDecodeError("Expecting value", s, err.value) from None Error 2022-03-07T21:29:52.117214480Zjson.decoder.JSONDecodeError: Expecting value: line 1 column 210 (char 209) Info 2022-03-07T21:30:05.776338368ZJob is running. Error 2022-03-07T21:30:05.834539659ZThe replica workerpool0-0 exited with a non-zero status of 1. ``` ### What did you expect to happen: The Vertex AI Batch Prediction Job can be executed correctly through the pre-built component ModelBatchPredictOp(). ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? I was using Vertex AI Notebook Instance and deploy the pipeline to Vertex AI Pipelines <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: v2 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: 1.8.11 <!-- Specify the output of the following shell command: $pip list | grep kfp --> * google-cloud-pipeline-components version: 1.0.0 ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
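As the comment thread above points out, the launcher error disappears once `gcs_source_uris` is passed as a list of strings instead of a bare string. A minimal corrected excerpt of the pipeline body above, reusing the same placeholder variables from that snippet (so it is an excerpt, not a standalone script):

```python
# Corrected excerpt: gcs_source_uris expects a list of GCS URIs.
# Per the comment above, passing a plain string ends up with the quotes stripped
# in the serialized payload, which is what triggers the JSONDecodeError in the launcher.
gcc_aip.ModelBatchPredictOp(
    project=GCP_PROJECT_ID,
    job_display_name=pipeline_id,
    model=model_upload_op.outputs["model"],
    gcs_source_uris=[batch_prediction_data_uri],  # list, not a bare string
    gcs_destination_output_uri_prefix=batch_prediction_results_uri,
    instances_format="csv",
    predictions_format="jsonl",
    machine_type="n1-standard-2",
    starting_replica_count=1,
    max_replica_count=1,
)
```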
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7389/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7389/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7382
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7382/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7382/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7382/events
https://github.com/kubeflow/pipelines/issues/7382
1,161,993,663
I_kwDOB-71UM5FQp2_
7,382
[PROPOSAL] A Generic Protocol for Kubeflow Pipeline Template Registry
{ "login": "hilcj", "id": 17188784, "node_id": "MDQ6VXNlcjE3MTg4Nzg0", "avatar_url": "https://avatars.githubusercontent.com/u/17188784?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hilcj", "html_url": "https://github.com/hilcj", "followers_url": "https://api.github.com/users/hilcj/followers", "following_url": "https://api.github.com/users/hilcj/following{/other_user}", "gists_url": "https://api.github.com/users/hilcj/gists{/gist_id}", "starred_url": "https://api.github.com/users/hilcj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hilcj/subscriptions", "organizations_url": "https://api.github.com/users/hilcj/orgs", "repos_url": "https://api.github.com/users/hilcj/repos", "events_url": "https://api.github.com/users/hilcj/events{/privacy}", "received_events_url": "https://api.github.com/users/hilcj/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Thanks for all feedbacks! Closing this issue and review responses.\r\n" ]
"2022-03-07T22:27:00"
"2022-03-29T15:24:36"
"2022-03-29T15:24:36"
CONTRIBUTOR
null
## Introduction We are working on a generic protocol design of Kubeflow Pipeline Template Registry. At high level, the protocol will define the generic APIs between KFP SDK client and a template registry server to - upload and download template (as YAML file or ZIP), and - organize templates as versioned objects. Then, KFP SDK will implement a registry client class that can work with any registry server that implements the APIs defined in the generic protocol. Feedbacks are welcomed :-) ## Resource Hierarchy ![Generic Protocol for Kubeflow Pipelines Template Registry - Resource Hierarchy](https://user-images.githubusercontent.com/17188784/157094711-75a7f5b8-a6c7-4778-8f21-651a0f5cd2a2.jpg) The protocol will enable organizing templates as versioned objects in the registry, following the hierarchy of - Host (`http://my-registry.server/kubeflow-pipelines/public/`) - Package (`mnist-demo-workflow`) - Version (`sha256:e54fdaโ€ฆ`) - Tag (`latest`, `v2`) Note - Version name will use sha256 digest of the uploaded content. - Tags are nicknames for versions, uniquely identifying a version under a package. - Tags are unique among versions - the same tag can only apply to a single version under a package. - Multiple tags can apply to the same version. - Tags are not fixed - an existing tag can be removed from the existing version, or be updated to point to another version. ## API Design Please visit [Generic Protocol for Kubeflow Pipelines Template Registry](https://docs.google.com/document/d/1Lc4NqN0VYIO3CHUPb_6LRIDF4VAWYy23MfpHYM8SncA/edit#heading=h.x9snb54sjlu9) for the detailed design. If you don't have permission to access the doc, please join https://groups.google.com/g/kubeflow-discuss for access.
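A small illustration of the versioning convention described above (version name = sha256 digest of the uploaded content), assuming the template is uploaded as a single YAML file; the file name is a placeholder:

```python
# Sketch: derive the registry version name from the uploaded template content.
import hashlib

def version_name_for(template_path: str) -> str:
    with open(template_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return f"sha256:{digest}"

# e.g. version_name_for("mnist-demo-workflow.yaml") -> "sha256:e54fda..."
```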
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7382/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7381
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7381/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7381/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7381/events
https://github.com/kubeflow/pipelines/issues/7381
1,161,887,261
I_kwDOB-71UM5FQP4d
7,381
[feature] Add the ability to decide if we want an Artifact to be uploaded to the remote or not
{ "login": "AlexandreBrown", "id": 26939775, "node_id": "MDQ6VXNlcjI2OTM5Nzc1", "avatar_url": "https://avatars.githubusercontent.com/u/26939775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexandreBrown", "html_url": "https://github.com/AlexandreBrown", "followers_url": "https://api.github.com/users/AlexandreBrown/followers", "following_url": "https://api.github.com/users/AlexandreBrown/following{/other_user}", "gists_url": "https://api.github.com/users/AlexandreBrown/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexandreBrown/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexandreBrown/subscriptions", "organizations_url": "https://api.github.com/users/AlexandreBrown/orgs", "repos_url": "https://api.github.com/users/AlexandreBrown/repos", "events_url": "https://api.github.com/users/AlexandreBrown/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexandreBrown/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-03-07T20:27:24"
"2022-07-12T12:55:54"
"2022-07-12T12:55:54"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend /area backend /area sdk ### What feature would you like to see? I would like to be able to decide if a component Output/Input should be uploaded to the remote or not. The `Artifact` would always be registered by MLMD but the upload to the remote could be optional. <!-- Provide a description of this feature and the user experience. --> We could add a flag to the kfp sdk `shouldUploadArtifactsToRemote` that controls this for a specific artifact input/output. ### What is the use case or pain point? The use case is that we have huge datasets we don't necessarily want to duplicate on the remote. We perform multiple processing on the dataset that can easily be re-done to get the exact same dataset since KFP records all the pipeline parameters that were used for the run (`random_seed`, `dataset_name`, `dataset_version` etc). We use https://github.com/iterative/ldb-resources to version our datasets. <!-- It helps us understand the benefit of this feature for your use case. --> This feature would prevent overloading the remote with duplicates of datasets, if we want to get back a dataset that was used, we can simply refer to the pipeline params and in our case we'll be good to go. It would also speed up our pipelines considerabely since we wouldn't have to wait for the re-upload between every components. This might not be the case for everyone, but we should have the choice. ### Is there a workaround currently? The only workaround I see so far would be to manually create folders for our datasets, instead of using `Output/Input[Dataset]`, to pass the `my_dataset_path: str` between components and use PVCs to share storage. This is far from ideal and we won't even use this workaround since PVCs would make the pipeline not portable. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a ๐Ÿ‘. We prioritize fulfilling features with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7381/timeline
null
not_planned
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7379
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7379/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7379/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7379/events
https://github.com/kubeflow/pipelines/issues/7379
1,160,869,953
I_kwDOB-71UM5FMXhB
7,379
Update the feature flags in localStorage if the new version has more feature flags than the stored one
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 2975820904, "node_id": "MDU6TGFiZWwyOTc1ODIwOTA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/v2", "name": "area/v2", "color": "A27925", "default": false, "description": "" } ]
closed
false
{ "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false }
[ { "login": "jlyaoyuli", "id": 56132941, "node_id": "MDQ6VXNlcjU2MTMyOTQx", "avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlyaoyuli", "html_url": "https://github.com/jlyaoyuli", "followers_url": "https://api.github.com/users/jlyaoyuli/followers", "following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}", "gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions", "organizations_url": "https://api.github.com/users/jlyaoyuli/orgs", "repos_url": "https://api.github.com/users/jlyaoyuli/repos", "events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}", "received_events_url": "https://api.github.com/users/jlyaoyuli/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-03-07T03:59:59"
"2022-04-23T21:38:36"
"2022-04-23T21:38:36"
COLLABORATOR
null
## Description Currently, we have a hidden page which allows users to enable/disable features which are under development. The URL path is `<KFP_HOST_ADDRESS>/frontend_features`. We call it the Feature Flag page; refer to the [Large features development](https://github.com/kubeflow/pipelines/tree/master/frontend#large-features-development) documentation. ## Problem Currently, if the feature flag map called `flags` already exists in localStorage, it is automatically loaded into the window object: refer to https://github.com/kubeflow/pipelines/blob/7ad949106f9310a1bf3c5ff960e2c6274cd042d5/frontend/src/features.ts#L31-L34. However, we will likely be adding new flags when we release KFP v2 for users to try. That means the new KFP frontend code needs to overwrite the existing flag map with the default key list, while keeping the current values of existing keys from localStorage. ## Approach Add logic to merge the existing flag map in localStorage with the default flag map hardcoded in the frontend codebase: - If a key doesn't exist in localStorage: add this key with the default value. - If a key already exists in localStorage: keep this key and keep the existing value. - If a key exists in localStorage, but not in the default flag map: delete this key. After this merge, save the merged flag map back to localStorage and the window object. *Note:* We should never change the default value of an existing key, and never add back a key which has been deprecated, because that would cause an inconsistent user experience. We should update the documentation to keep a record of past deprecated keys, and warn developers about the risk of changing default values. ## Reference - [Large features development](https://github.com/kubeflow/pipelines/tree/master/frontend#large-features-development) documentation. - [window.localStorage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage)
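The merge rule described above is a plain dictionary merge: the default flag map decides which keys exist, and localStorage decides the value of any key that survives. A minimal language-agnostic sketch in Python (the real implementation would live in the TypeScript frontend around `features.ts`); the flag names are made up for illustration:

```python
# Sketch of the merge rule: defaults decide which keys exist,
# localStorage decides the value of any key that survives.
from typing import Dict

def merge_feature_flags(defaults: Dict[str, bool],
                        stored: Dict[str, bool]) -> Dict[str, bool]:
    merged = {}
    for key, default_value in defaults.items():
        # Keep the user's stored value when present, otherwise use the default;
        # keys that exist only in `stored` (deprecated flags) are dropped.
        merged[key] = stored.get(key, default_value)
    return merged

# Example with hypothetical flag names:
defaults = {"v2_alpha": False, "functional_component": False}
stored = {"v2_alpha": True, "old_removed_flag": True}
assert merge_feature_flags(defaults, stored) == {
    "v2_alpha": True,
    "functional_component": False,
}
```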
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7379/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7378
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7378/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7378/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7378/events
https://github.com/kubeflow/pipelines/issues/7378
1,160,349,776
I_kwDOB-71UM5FKYhQ
7,378
ValueError: Invalid value for `image_pull_policy` (None), must be one of ['Always', 'IfNotPresent', 'Never']
{ "login": "rahuja23", "id": 51020974, "node_id": "MDQ6VXNlcjUxMDIwOTc0", "avatar_url": "https://avatars.githubusercontent.com/u/51020974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rahuja23", "html_url": "https://github.com/rahuja23", "followers_url": "https://api.github.com/users/rahuja23/followers", "following_url": "https://api.github.com/users/rahuja23/following{/other_user}", "gists_url": "https://api.github.com/users/rahuja23/gists{/gist_id}", "starred_url": "https://api.github.com/users/rahuja23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rahuja23/subscriptions", "organizations_url": "https://api.github.com/users/rahuja23/orgs", "repos_url": "https://api.github.com/users/rahuja23/repos", "events_url": "https://api.github.com/users/rahuja23/events{/privacy}", "received_events_url": "https://api.github.com/users/rahuja23/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "`image_pull_policy` is actually not supported in v2 yet. The reason that this method is still \"available\" is because we have the v1 and v2 code mixed together in 1.8 branch. This has changed in the master branch, as we completely removed `ContainerOp` and its methods not supported in v2 yet.\r\n" ]
"2022-03-05T13:20:25"
"2022-03-11T01:02:04"
null
NONE
null
* KFP version: kfp 1.8.11 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 I am trying to create a Kubeflow pipeline using python SDK v2 and when I am trying to run the code on the pipeline I am getting "Image_Pull_Policy" error. ``` import kfp from caffe2.perfkernels.hp_emblookup_codegen import o from kfp import dsl from kfp.v2.dsl import component from kfp.v2.dsl import ( Input, Output, Artifact, Dataset, ) @component def dataloader_op(datasets:Output[Artifact]): op = kfp.components.load_component_from_text(''' name: dataloader implementation: container: image: racahu23/ml-blueprint_dataloader:8 imagePullPolicy: Always command: - python - /home/user/dataloader.py - "(--platform aws)" - "(--bucketname new-classification)" - "(--remoteDirectoryName datasets) - {outputPath: /home/user/datasets} ''') op.container.set_image_pull_policy("Always") op.container.set_termination_message_policy('FallbackToLogsOnError') return op @component() def datapreprocessor(datasets:Dataset, Data: Output[Artifact]): op = kfp.components.load_component_from_text(''' name: datapreprocessor implementation: container: image: racahu23/ml-blueprint_preprocessor:7 imagePullPolicy: Always command: - python - "$(--input_dir datasets)" - "$(--output_dir Data)" - {outputPath: /home/user/Data} ''') op.container.set_image_pull_policy("Always") op.container.set_termination_message_policy('FallbackToLogsOnError') return op @component( base_image="docker.io/kubeflowkatib/kubeflow-pipelines-launcher", output_component_file="katib_training_component.yaml" ) def training(Data) -> bool: import os import json experiment_spec = { "algorithm": { "algorithmName": "random" }, "maxFailedTrialCount": 1, "maxTrialCount": 2, "objective": { "goal": 0.99, "objectiveMetricName": "Accuracy", "type": "maximize" }, "parallelTrialCount": 1, "metricsCollectorSpec": { "source": { "fileSystemPath": { "path": "/katib/training.log", "kind": "File" } }, "collector": { "kind": "File" } }, "parameters": [ { "feasibleSpace": { "max": "32", "min": "32" }, "name": "batch_size", "parameterType": "int" } ], "trialTemplate": { "primaryContainerName": "training-container", "retain": "true", "trialParameters": [ { "description": "Number of estimators for the training model", "name": "batch_size", "reference": "batch_size" } ], "trialSpec": { "apiVersion": "batch/v1", "kind": "Job", "spec": { "template": { "metadata": { "annotations": { "sidecar.istio.io/inject": "false" } }, "spec": { "containers": [ { "command": [ "python3", "/home/user/primary_trainer.py", "--input_dir" +" " + str(Data), "--num_labels 9", "--logging_dir logs/", '--num_train_epochs 1', '--evaluation_strategy epoch', '--per_device_train_batch_size ${trialParameters.batchSize}', '--per_device_eval_batch_size 64', '--save_strategy epoch', '--logging_strategy epoch', '--eval_steps 100' ], "image": "racahu23/ml-blueprint_trainer:12", "imagePullPolicy": "Always", "name": "training-container" } ], "restartPolicy": "Never" } } } } } } experiment_spec = json.dumps(experiment_spec) exec_str = f""" python src/launch_experiment.py \ --experiment-name katib-pipeline \ --experiment-namespace <namespace> \ --experiment-spec '{experiment_spec}' \ --experiment-timeout-minutes '60' \ --delete-after-done "False" \ --output-file /tmp/outputs/Best_Parameter_Set/data \ """ os.system(exec_str) return True @dsl.pipeline(name="katib_experiment_pipeline") def training_pipeline( ): _dataloader_op = dataloader_op() _preprocessor_op = datapreprocessor((_dataloader_op.outputs["datasets"])) training_task = 
(training(_preprocessor_op.outputs["Data"])) arguments = { } client = kfp.Client(namespace="kubeflow", host="http://localhost:8080") client.create_run_from_pipeline_func(training_pipeline,arguments=arguments,namespace= "kubeflow",mode = kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE,enable_caching = False) ``` This is the error that I am getting: <img width="1573" alt="Screen Shot 2022-03-05 at 14 15 36" src="https://user-images.githubusercontent.com/51020974/156884731-01e036b3-4cdd-48f7-b49a-74d67f887af7.png"> I have been struggling quite a bit in fixing this issue. Any help would be really appreciated.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7378/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7370
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7370/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7370/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7370/events
https://github.com/kubeflow/pipelines/issues/7370
1,157,116,251
I_kwDOB-71UM5E-DFb
7,370
[feature] Add an endpoint that restores an experiment and its runs at once
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hello @difince , can you write a design proposal and discuss it in a KFP community meeting? So we can have a chance to discuss how it fits user's need and how to implement the new APIs." ]
"2022-03-02T12:16:33"
"2022-03-03T23:49:05"
null
MEMBER
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Add an endpoint that restores an Experiment and all of its Runs at once: ```/apis/v1beta1/experiments/{id}/runs:unarchive``` <!-- Provide a description of this feature and the user experience. --> ### What is the use case or pain point? Provide an option for the user to restore an Experiment and all of its Runs in a single click. For more context, take a look at this comment - https://github.com/kubeflow/pipelines/issues/5114#issuecomment-777932647 This endpoint will be leveraged by [this](https://github.com/kubeflow/pipelines/issues/7335) frontend issue. <!-- It helps us understand the benefit of this feature for your use case. --> ### Is there a workaround currently? Yes, by doing it in multiple steps. <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
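Until such a combined endpoint exists, the multi-step workaround can be scripted against the existing v1beta1 REST API. A rough sketch with `requests`, assuming the standard `:unarchive` routes for experiments and runs and a directly reachable KFP API host; the host URL and experiment id are placeholders, and a multi-user deployment would additionally need the appropriate auth cookie or token:

```python
# Sketch of the current multi-step workaround: unarchive the experiment,
# then unarchive each of its runs one by one.
# Host, experiment id and auth handling are placeholders.
import requests

HOST = "http://localhost:8080"          # placeholder KFP API host
EXPERIMENT_ID = "<experiment-uuid>"     # placeholder

# 1. Restore the experiment itself.
requests.post(f"{HOST}/apis/v1beta1/experiments/{EXPERIMENT_ID}:unarchive").raise_for_status()

# 2. List runs referencing this experiment and restore them individually.
#    Depending on API defaults, a storage_state filter may be needed to include
#    archived runs; pagination beyond one page is also omitted in this sketch.
params = {
    "resource_reference_key.type": "EXPERIMENT",
    "resource_reference_key.id": EXPERIMENT_ID,
    "page_size": 100,
}
resp = requests.get(f"{HOST}/apis/v1beta1/runs", params=params)
resp.raise_for_status()
for run in resp.json().get("runs", []):
    requests.post(f"{HOST}/apis/v1beta1/runs/{run['id']}:unarchive").raise_for_status()
```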
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7370/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7370/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7369
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7369/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7369/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7369/events
https://github.com/kubeflow/pipelines/issues/7369
1,157,084,504
I_kwDOB-71UM5E97VY
7,369
[feature] Create parent directories of `output_component_file`
{ "login": "gabrielmbmb", "id": 29572918, "node_id": "MDQ6VXNlcjI5NTcyOTE4", "avatar_url": "https://avatars.githubusercontent.com/u/29572918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabrielmbmb", "html_url": "https://github.com/gabrielmbmb", "followers_url": "https://api.github.com/users/gabrielmbmb/followers", "following_url": "https://api.github.com/users/gabrielmbmb/following{/other_user}", "gists_url": "https://api.github.com/users/gabrielmbmb/gists{/gist_id}", "starred_url": "https://api.github.com/users/gabrielmbmb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gabrielmbmb/subscriptions", "organizations_url": "https://api.github.com/users/gabrielmbmb/orgs", "repos_url": "https://api.github.com/users/gabrielmbmb/repos", "events_url": "https://api.github.com/users/gabrielmbmb/events{/privacy}", "received_events_url": "https://api.github.com/users/gabrielmbmb/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "/cc @ji-yaqi ", "Hi @gabrielmbmb, thank you and you can contribute this branch https://github.com/kubeflow/pipelines/tree/sdk/release-1.8 for SDK v1 changes. We might have some changes in v2 for this output file. " ]
"2022-03-02T11:47:02"
"2022-03-03T23:58:45"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area sdk ### What feature would you like to see? I would like the `output_component_file` file hierarchy to be created, i.e. parent directories created if they do not exist. The example below raises `FileNotFoundError` because the parent directories do not exist. **Code example:** ```python from kfp.v2.dsl import pipeline, component @component( output_component_file="build/components/test_component.yaml", ) def test_component() -> None: ... @pipeline(name="test-pipe") def test_pipeline() -> None: task = test_component() if __name__ == "__main__": from kfp.v2 import compiler compiler.Compiler().compile( pipeline_func=test_pipeline, package_path="build/pipeline.json" ) ``` **Traceback:** ```plain Traceback (most recent call last): File "test.py", line 7, in <module> def test_component() -> None: File "/home/gmdev/.cache/pypoetry/virtualenvs/whatever-Ez6qCvWa-py3.8/lib/python3.8/site-packages/kfp/v2/components/component_decorator.py", line 104, in component return component_factory.create_component_from_func( File "/home/gmdev/.cache/pypoetry/virtualenvs/whatever-Ez6qCvWa-py3.8/lib/python3.8/site-packages/kfp/v2/components/component_factory.py", line 440, in create_component_from_func component_spec.save(output_component_file) File "/home/gmdev/.cache/pypoetry/virtualenvs/whatever-Ez6qCvWa-py3.8/lib/python3.8/site-packages/kfp/components/_structures.py", line 494, in save with open(file_path, 'w') as f: FileNotFoundError: [Errno 2] No such file or directory: 'build/components/test_component.yaml' ``` ### What is the use case or pain point? The Kubeflow SDK will try to generate the component file definition automatically, so if the value of `output_component_file` contains directories that do not exist, it will raise `FileNotFoundError`. ### Is there a workaround currently? Yes, you can create the directory hierarchy just before `kfp` is imported to avoid the issue, but this solution does not seem very clean to me. ```python from pathlib import Path Path("build/components").mkdir(exist_ok=True, parents=True) from kfp.v2.dsl import pipeline, component ... ``` Maybe it would be a good idea to add a check in the method below for whether the parent directories of `file_path` exist and, if they don't, create them. https://github.com/kubeflow/pipelines/blob/6965dbac2faee0411fc0ca9565fd9a9d7ef8e2bf/sdk/python/kfp/components/_structures.py#L486-L495 I would love to start contributing to this repo, and if this feature proposal is finally accepted I would love to create a PR. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
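A minimal sketch of the behaviour proposed above: create the missing parent directories before writing the component file. This is not the actual KFP `save()` implementation; the helper name and the YAML content are illustrative only.

```python
# Sketch of the proposed behaviour: create parent directories before saving.
from pathlib import Path


def save_component_yaml(yaml_text: str, file_path: str) -> None:
    # Create e.g. "build/components/" if it is missing, instead of letting
    # open() fail with FileNotFoundError.
    Path(file_path).parent.mkdir(parents=True, exist_ok=True)
    with open(file_path, "w") as f:
        f.write(yaml_text)


# Works even though neither "build" nor "build/components" exists yet.
save_component_yaml("name: test_component\n", "build/components/test_component.yaml")
```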
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7369/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7368
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7368/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7368/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7368/events
https://github.com/kubeflow/pipelines/issues/7368
1,157,054,224
I_kwDOB-71UM5E9z8Q
7,368
[backend] DeletePipeline does not clean PipelineVersions data from Minio
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @difince \r\n", "/cc @chensun " ]
"2022-03-02T11:19:56"
"2022-03-31T13:11:16"
null
MEMBER
null
When a Pipeline has Pipeline Versions, there is a record in the Object Store (Minio) for each version. When the Pipeline is deleted, all the records (the Pipeline and its Versions) are deleted from the database (MySQL), but not all are deleted from Minio: the Minio records for the Pipeline Versions are not deleted. ### Environment Linux Ubuntu - 20.04.3 LTS kind v0.11.1 go1.16.4 linux/amd64 kustomize version: {KustomizeVersion:3.2.0 GitCommit:a3103f1e62ddb5b696daa3fd359bb6f2e8333b49 BuildDate:2019-09-18T16:26:36Z GoOs:linux GoArch:amd64} Docker Engine 20.10.12 * How did you deploy Kubeflow Pipelines (KFP)? Full Kubeflow deployment ```while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done``` with manifest [commit](https://github.com/kubeflow/manifests/tree/bcc0e4f8bd54977aa62f9dae1c41ad47d62524c5) <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: Kubeflow Pipelines apps/pipeline/upstream [1.7.0](https://github.com/kubeflow/pipelines/tree/1.7.0/manifests/kustomize) **But the problem exists with the latest Pipeline version as well!** * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Steps to reproduce 1. Create a Pipeline with Pipeline versions 2. Delete the Pipeline 3. Check Minio - Minio records associated with the Pipeline Versions are not deleted. <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ### Expected result Minio records associated with the Pipeline Versions should be deleted as well. <!-- What should the correct behavior be? --> ### Materials and Reference In the [delete Pipeline function](https://github.com/kubeflow/pipelines/blob/6965dbac2faee0411fc0ca9565fd9a9d7ef8e2bf/backend/src/apiserver/resource/resource_manager.go#L224) there is a comment that clearly says that the current implementation of this func supports only Pipelines with a **single** Version. Today, **multiple** Pipeline Versions are supported, and it is time for this to be handled appropriately when deleting a Pipeline. ``` // Delete pipeline file and DB entry. // Not fail the request if this step failed. A background run will do the cleanup. // https://github.com/kubeflow/pipelines/issues/388 // TODO(jingzhang36): For now (before exposing version API), we have only 1 // file with both pipeline and version pointing to it; so it is ok to do // the deletion as follows. After exposing version API, we can have multiple // versions and hence multiple files, and we shall improve performance by // either using async deletion in order for this method to be non-blocking // or or exploring other performance optimization tools provided by gcs. err = r.objectStore.DeleteFile(r.objectStore.GetPipelineKey(fmt.Sprint(pipelineId))) if err != nil { glog.Errorf("%v", errors.Wrapf(err, "Failed to delete pipeline file for pipeline %v", pipelineId)) return nil } err = r.pipelineStore.DeletePipeline(pipelineId) ``` <!-- Help us debug this issue by providing resources such as sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7368/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7368/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7363
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7363/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7363/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7363/events
https://github.com/kubeflow/pipelines/issues/7363
1,155,676,871
I_kwDOB-71UM5E4jrH
7,363
[sample] Error running xgboost_training_cm.py
{ "login": "Linchin", "id": 12806577, "node_id": "MDQ6VXNlcjEyODA2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/12806577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Linchin", "html_url": "https://github.com/Linchin", "followers_url": "https://api.github.com/users/Linchin/followers", "following_url": "https://api.github.com/users/Linchin/following{/other_user}", "gists_url": "https://api.github.com/users/Linchin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Linchin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Linchin/subscriptions", "organizations_url": "https://api.github.com/users/Linchin/orgs", "repos_url": "https://api.github.com/users/Linchin/repos", "events_url": "https://api.github.com/users/Linchin/events{/privacy}", "received_events_url": "https://api.github.com/users/Linchin/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1260031624, "node_id": "MDU6TGFiZWwxMjYwMDMxNjI0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/samples", "name": "area/samples", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Thank you! Should we see an update to the GCR image for [ml-pipeline-gcp](https://console.cloud.google.com/gcr/images/ml-pipeline/GLOBAL/ml-pipeline-gcp)? \r\n\r\nAre we sure the dataproc client is fixed? I don't see anything that calls [_create_client](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataproc/_client.py#L26). It seems like this [should be called _build_client](https://github.com/kubeflow/pipelines/blob/6965dbac2faee0411fc0ca9565fd9a9d7ef8e2bf/components/gcp/container/component_sdk/python/kfp_component/google/common/_utils.py#L170) for it to work.", "@greggdonovan You're spot on. Please feel free to send a PR for the fix since you discovered it. Or I can do it as well if you wish.\r\n\r\nRegarding the release of the ml-pipeline-gcp images. This set of components is actually being deprecated: https://github.com/kubeflow/pipelines/tree/master/components/gcp/dataproc#readme\r\n\r\nWe might still release an updated image with our next KFP release. But our goal is to drop the support and decouple it from KFP release.", "Ran into the same issue as the author. Since, those components files referencing that ml-pipeline-gcp images are being deprecated, whats the recommented way to manage dataproc (create/delete)? \r\n\r\nThat means that code sample in the doc is not right anymore then:\r\nhttps://www.kubeflow.org/docs/components/pipelines/introduction/#the-python-code-that-represents-the-pipeline" ]
"2022-03-01T18:35:14"
"2022-08-08T15:55:09"
"2022-03-01T20:51:03"
COLLABORATOR
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? GCP marketplace <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.7.1 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip3 list | grep kfp --> ``` kfp 1.8.11 kfp-pipeline-spec 0.1.13 kfp-server-api 1.8.1 ``` ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> Follow the steps given in the following link to deploy `xgboost_training_cm.py` https://github.com/kubeflow/pipelines/tree/master/samples/core/xgboost_training_cm ### Expected result <!-- What should the correct behavior be? --> An error occurs in the `dataproc-create-cluster` step (the second block): ``` Traceback (most recent call last): File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/ml/kfp_component/launcher/__main__.py", line 45, in <module> main() File "/ml/kfp_component/launcher/__main__.py", line 42, in main launch(args.file_or_module, args.args) File "/ml/kfp_component/launcher/launcher.py", line 45, in launch return fire.Fire(module, command=args, name=module.__name__) File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 127, in Fire component_trace = _Fire(component, args, context, name) File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 366, in _Fire component, remaining_args) File "/usr/local/lib/python3.7/site-packages/fire/core.py", line 542, in _CallCallable result = fn(*varargs, **kwargs) File "/ml/kfp_component/google/dataproc/_create_cluster.py", line 76, in create_cluster client = DataprocClient() File "/ml/kfp_component/google/common/_utils.py", line 170, in __init__ self._build_client() TypeError: _build_client() takes 0 positional arguments but 1 was given ``` ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> N/A --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
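The comments above trace this to a `_create_client`/`_build_client` naming mix-up in the deprecated GCP components. Independently of the real cause, the toy snippet below (not the actual kfp_component code) reproduces the exact `TypeError` from the traceback: a method defined without `self` is called through an instance, so Python passes the instance as an unexpected positional argument.

```python
# Toy reproduction of the error class seen in the traceback (illustrative only).
class DataprocClientLike:
    def _build_client():  # note: missing `self`
        pass

    def __init__(self):
        # Python passes the instance as the first positional argument, but the
        # method accepts none, hence:
        # TypeError: _build_client() takes 0 positional arguments but 1 was given
        self._build_client()


if __name__ == "__main__":
    try:
        DataprocClientLike()
    except TypeError as exc:
        print(exc)
```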
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7363/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7363/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7361
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7361/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7361/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7361/events
https://github.com/kubeflow/pipelines/issues/7361
1,154,622,478
I_kwDOB-71UM5E0iQO
7,361
[backend] "Signal command failed: command terminated with exit code 1" when terminating a pipeline
{ "login": "emenendez", "id": 3814114, "node_id": "MDQ6VXNlcjM4MTQxMTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3814114?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emenendez", "html_url": "https://github.com/emenendez", "followers_url": "https://api.github.com/users/emenendez/followers", "following_url": "https://api.github.com/users/emenendez/following{/other_user}", "gists_url": "https://api.github.com/users/emenendez/gists{/gist_id}", "starred_url": "https://api.github.com/users/emenendez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emenendez/subscriptions", "organizations_url": "https://api.github.com/users/emenendez/orgs", "repos_url": "https://api.github.com/users/emenendez/repos", "events_url": "https://api.github.com/users/emenendez/events{/privacy}", "received_events_url": "https://api.github.com/users/emenendez/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @chensun ", "@chensun @emenendez \r\n\r\nHas this issue been fixed already ? I am seeing a similar issue on Kubeflow 1.5 though the error logs do not show a `Signal command failed`. Instead, the wait container seems to send the right signal to the main container but the main container does not catch it. From the argo documentation this is possibly because the process is not running as PID1 - is there a workaround/fix for this ?\r\n\r\n" ]
"2022-02-28T22:34:26"
"2022-06-14T20:52:10"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)?: As part of a full Kubeflow 1.4 deployment on GKE. * KFP version: 1.7.0, packaged with Kubeflow 1.4 * KFP SDK version: N/A ### Steps to reproduce 1. Create a new pipeline run. 2. Click the "Terminate" link in the KFP web UI. 3. Observe that the currently-running pod runs to completion before termination. ### Expected result In step 3 above, the currently-running pod should immediately terminate. ### Materials and Reference I have been able to determine the following so far: 1. When the "Terminate" button is clicked, KFP adds `activeDeadlineSeconds: 0` to the spec of the workflow being terminated. This is happening as expected. 2. The Argo Workflows controller notices this and attempts to kill the currently-running pod by executing `sh -c kill -s USR2 $(pidof argoexec)` in the `main` container of the running pod. This causes the Argo Workflows controller to log the following error: ``` time="2022-02-25T21:54:54.186Z" level=info msg="Applying sooner Workflow Deadline for pod emenendez-taxi-x5q66-345673846 at: 2022-02-25 21:53:08 +0000 UTC" namespace=emenendez workflow=emenendez-taxi-x5q66 time="2022-02-25T21:54:54.186Z" level=info msg="Updating execution control of emenendez-taxi-x5q66-345673846: {\"deadline\":\"2022-02-25T21:53:08Z\"}" namespace=emenendez workflow=emenendez-taxi-x5q66 time="2022-02-25T21:54:54.244Z" level=info msg="Patch pods 200" time="2022-02-25T21:54:54.246Z" level=info msg="Signalling emenendez-taxi-x5q66-345673846 of updates" namespace=emenendez workflow=emenendez-taxi-x5q66 time="2022-02-25T21:54:54.247Z" level=info msg="https://7.255.204.1:443/api/v1/namespaces/emenendez/pods/emenendez-taxi-x5q66-345673846/exec?command=sh&command=-c&command=kill+-s+USR2+%24%28pidof+argoexec%29&container=main&stderr=true&stdout=true&tty=false" time="2022-02-25T21:54:54.292Z" level=info msg="Create pods/exec 101" time="2022-02-25T21:54:54.390Z" level=warning msg="Signal command failed: command terminated with exit code 1" namespace=emenendez workflow=emenendez-taxi-x5q66 ``` This appears to be a bug with the Argo Workflows controller -- instead of executing `sh -c kill -s USR2 $(pidof argoexec)` in the `main` container, it should execute that command in the *`wait`* container. It appears this bug was introduced in https://github.com/argoproj/argo-workflows/pull/5099, and fixed as part of an unrelated refactor in https://github.com/argoproj/argo-workflows/pull/6022, which is included in Argo Workflows 3.2.0. The specific buggy code is [this block](https://github.com/argoproj/argo-workflows/blob/release-3.1/workflow/controller/exec_control.go#L80-L96), which iterates over the main containers in a pod and sends the USR2 signal there. Interestingly enough, I could not find an issue related to this bug in the Argo Workflows project. Questions: 1. Are there any other workarounds or fixes for the inability to terminate a running pipeline pod other than upgrading Argo Workflows? 2. If not, could Argo Workflows be updated to a version with the fix? Thanks so much! --- Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7361/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7361/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7358
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7358/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7358/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7358/events
https://github.com/kubeflow/pipelines/issues/7358
1,153,842,767
I_kwDOB-71UM5Exj5P
7,358
[bug] Component input "project_id" with type "GCPProjectID"
{ "login": "vgmartinez", "id": 10614247, "node_id": "MDQ6VXNlcjEwNjE0MjQ3", "avatar_url": "https://avatars.githubusercontent.com/u/10614247?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vgmartinez", "html_url": "https://github.com/vgmartinez", "followers_url": "https://api.github.com/users/vgmartinez/followers", "following_url": "https://api.github.com/users/vgmartinez/following{/other_user}", "gists_url": "https://api.github.com/users/vgmartinez/gists{/gist_id}", "starred_url": "https://api.github.com/users/vgmartinez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vgmartinez/subscriptions", "organizations_url": "https://api.github.com/users/vgmartinez/orgs", "repos_url": "https://api.github.com/users/vgmartinez/repos", "events_url": "https://api.github.com/users/vgmartinez/events{/privacy}", "received_events_url": "https://api.github.com/users/vgmartinez/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[ "/cc @chensun " ]
"2022-02-28T09:16:41"
"2022-03-03T23:56:17"
null
NONE
null
### What happened: I am getting this error when using the Dataproc component. Is there a way to pass the project_id as a string? ``` TypeError: Passing value "saasbcn-98ce0" with type "String" (as "Parameter") to component input "project_id" with type "GCPProjectID" (as "Artifact") is incompatible. Please fix the type of the component input. ``` <!-- /area sdk --> <!-- /area components --> Best regards, Victor
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7358/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7356
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7356/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7356/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7356/events
https://github.com/kubeflow/pipelines/issues/7356
1,153,220,339
I_kwDOB-71UM5EvL7z
7,356
Best data versioning tool/framework with kfp for seamless integration.
{ "login": "vamshi-rvk", "id": 45108015, "node_id": "MDQ6VXNlcjQ1MTA4MDE1", "avatar_url": "https://avatars.githubusercontent.com/u/45108015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vamshi-rvk", "html_url": "https://github.com/vamshi-rvk", "followers_url": "https://api.github.com/users/vamshi-rvk/followers", "following_url": "https://api.github.com/users/vamshi-rvk/following{/other_user}", "gists_url": "https://api.github.com/users/vamshi-rvk/gists{/gist_id}", "starred_url": "https://api.github.com/users/vamshi-rvk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vamshi-rvk/subscriptions", "organizations_url": "https://api.github.com/users/vamshi-rvk/orgs", "repos_url": "https://api.github.com/users/vamshi-rvk/repos", "events_url": "https://api.github.com/users/vamshi-rvk/events{/privacy}", "received_events_url": "https://api.github.com/users/vamshi-rvk/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false }
[ { "login": "connor-mccarthy", "id": 55268212, "node_id": "MDQ6VXNlcjU1MjY4MjEy", "avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-mccarthy", "html_url": "https://github.com/connor-mccarthy", "followers_url": "https://api.github.com/users/connor-mccarthy/followers", "following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}", "gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions", "organizations_url": "https://api.github.com/users/connor-mccarthy/orgs", "repos_url": "https://api.github.com/users/connor-mccarthy/repos", "events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-mccarthy/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @connor-mccarthy ", "@vamshi-rvk, thanks for your question.\r\n\r\n`kfp` does not natively integrate with any data versioning tools/frameworks, but [`dvc`](https://dvc.org/) is a popular open-source data version control system used in many MLOps workflows, including those that work with image data. I recommend you see if this service meets your needs.", "> @vamshi-rvk, thanks for your question.\r\n> \r\n> `kfp` does not natively integrate with any data versioning tools/frameworks, but [`dvc`](https://dvc.org/) is a popular open-source data version control system used in many MLOps workflows, including those that work with image data. I recommend you see if this service meets your needs.\r\n\r\nthanks for the respons." ]
"2022-02-27T11:48:54"
"2022-03-10T14:18:25"
"2022-03-09T18:13:56"
NONE
null
Hi, our team has started implementing kfp for MLOps, and we are researching the best methods for data versioning. We primarily work with **images**, but are not limited to them. Which data versioning framework works best alongside kfp, with seamless integration and a good feature set?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7356/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7347
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7347/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7347/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7347/events
https://github.com/kubeflow/pipelines/issues/7347
1,148,458,821
I_kwDOB-71UM5EdBdF
7,347
GCP Vertex AI, deploying model to existing endpoint
{ "login": "clausagerskov", "id": 13769591, "node_id": "MDQ6VXNlcjEzNzY5NTkx", "avatar_url": "https://avatars.githubusercontent.com/u/13769591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clausagerskov", "html_url": "https://github.com/clausagerskov", "followers_url": "https://api.github.com/users/clausagerskov/followers", "following_url": "https://api.github.com/users/clausagerskov/following{/other_user}", "gists_url": "https://api.github.com/users/clausagerskov/gists{/gist_id}", "starred_url": "https://api.github.com/users/clausagerskov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clausagerskov/subscriptions", "organizations_url": "https://api.github.com/users/clausagerskov/orgs", "repos_url": "https://api.github.com/users/clausagerskov/repos", "events_url": "https://api.github.com/users/clausagerskov/events{/privacy}", "received_events_url": "https://api.github.com/users/clausagerskov/received_events", "type": "User", "site_admin": false }
[ { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false }
[ { "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm doing this by using a custom component that checks for existing endpoints based on display_name and then deploys the model to it if found. \r\n\r\n```\r\naiplatform.init(project=project)\r\ntarget_endpoint = None\r\nfor endpoint in aiplatform.Endpoint.list(order_by=\"update_time desc\"):\r\n if endpoint.display_name == endpoint_display_name:\r\n target_endpoint = endpoint\r\n\r\nif target_endpoint is None:\r\n target_endpoint = aiplatform.Endpoint.create(\r\n project=project,\r\n display_name=endpoint_display_name,\r\n )\r\n\r\n target_model = aiplatform.Model(model_id)\r\n target_endpoint.deploy(\r\n model=target_model,\r\n min_replica_count=1,\r\n max_replica_count=1,\r\n traffic_percentage=100,\r\n machine_type='n1-standard-4',\r\n )\r\n```", "okay so in other words its not supported using the google_cloud_pipeline_components library?", "@lc-billyfung how do i access that endpoint then as an output of the custom component, in case i want to deploy a newly uploaded model that i therefore do not have the id of?", "nope, does not solve the problem. to be able to do this i need to either\r\na) Pass a google.cloud.aiplatform.Endpoint to the google_cloud_pipeline_components.aiplatform.ModelDeployOp which is not supported ( Input argument supports only the following types: PipelineParam, str, int, float, bool, dict, and list). Need to somehow pass the endpoint as a string or convert to pipelineparam.\r\nb) Get the model id from the pipelineparam that is output from the ModelUpload op, which is not known until deployment.", "Yeah it's not supported using the library currently, only way I've found to do it is with a custom component, something like: \r\n\r\n```\r\nfrom kfp.v2.dsl import (\r\n component,\r\n Input,\r\n Model,\r\n)\r\n\r\n\r\n@component(base_image=\"python:3.9\", packages_to_install=['google-cloud-aiplatform])\r\ndef custom_deploy_to_endpoint(\r\n model: Input[Model],\r\n endpoint_display_name: str,\r\n project: str,\r\n):\r\n import sys\r\n from google.cloud import aiplatform\r\n\r\n aiplatform.init(project=project)\r\n target_endpoint = None\r\n for endpoint in aiplatform.Endpoint.list(order_by=\"update_time desc\"):\r\n if endpoint.display_name == endpoint_display_name:\r\n target_endpoint = endpoint\r\n \r\n if target_endpoint is None:\r\n target_endpoint = aiplatform.Endpoint.create(\r\n project=project,\r\n display_name=endpoint_display_name,\r\n )\r\n model_id = model.uri.split('aiplatform://v1/')[1]\r\n target_model = aiplatform.Model(model_id)\r\n target_endpoint.deploy(\r\n model=target_model,\r\n min_replica_count=1,\r\n max_replica_count=1,\r\n traffic_percentage=50,\r\n machine_type='n1-standard-4',\r\n )\r\n\r\n\r\ndef pipeline(\r\n project: str = PROJECT_ID,\r\n model_display_name: str = MODEL_DISPLAY_NAME,\r\n serving_container_image_uri: str = IMAGE_URI,\r\n):\r\n train_task = print_op(\"No training to be done here!\")\r\n \r\n model_upload_op = gcc_aip.ModelUploadOp(\r\n project=project, \r\n location=REGION,\r\n display_name=model_display_name,\r\n # artifact_uri=WORKING_DIR,\r\n serving_container_image_uri=serving_container_image_uri,\r\n serving_container_ports=[{\"containerPort\": 8000}],\r\n serving_container_predict_route=\"/hello_world\",\r\n serving_container_health_route=\"/health\", \r\n )\r\n\r\n custom_deploy_to_endpoint(\r\n model=model_upload_op.outputs['model'],\r\n endpoint_display_name=ENDPOINT_NAME,\r\n project=PROJECT_ID\r\n )\r\n```", "@lc-billyfung this doesnt work either. the problem is that we are mixing google.cloud.aiplatform with google_cloud_pipeline_components.aiplatform. \r\nYou have defined your custom op to take a kfp.v2.dsl.Model as input but then you pass it a google.VertexModel which is incompatible.\r\n\r\n", "I see from this issue https://github.com/kubeflow/pipelines/issues/6981 that you are aware that this is a problem so do you actually have a working example?", "@lc-billyfung it seems your code was close enough for me to get it working with some minor modifications. scrolling through the other issues in the repo i found a mention of this command \".ignore_type()\". using that, with a small correction to your str split and the pipeline deploys:\r\n\r\n```\r\nfrom google_cloud_pipeline_components import aiplatform as gcc_aip\r\nimport kfp\r\n@kfp.dsl.pipeline(name=\"multimodel-test\")\r\ndef pipeline(\r\n project: str = PROJECT_ID,\r\n model_display_name: str = MODEL_DISPLAY_NAME,\r\n serving_container_image_uri: str = IMAGE_URI,\r\n):\r\n train_task = print_op(\"No training to be done here!\")\r\n \r\n model_upload_op = gcc_aip.ModelUploadOp(\r\n project=project, \r\n location=REGION,\r\n display_name=model_display_name,\r\n # artifact_uri=WORKING_DIR,\r\n serving_container_image_uri=serving_container_image_uri,\r\n serving_container_ports=[{\"containerPort\": 8000}],\r\n serving_container_predict_route=\"/hello_world\",\r\n serving_container_health_route=\"/health\", \r\n )\r\n \r\n custom_deploy_op = custom_deploy_to_endpoint(\r\n model=model_upload_op.outputs['model'].ignore_type(),\r\n endpoint_display_name=ENDPOINT_NAME,\r\n project=PROJECT_ID\r\n )\r\n custom_deploy_op.after(model_upload_op)\r\n\r\n@component(base_image=\"python:3.9\", packages_to_install=['google-cloud-aiplatform'])\r\ndef custom_deploy_to_endpoint(\r\n model: Input[Model],\r\n endpoint_display_name: str,\r\n project: str,\r\n):\r\n import sys\r\n from google.cloud import aiplatform\r\n\r\n aiplatform.init(project=project)\r\n target_endpoint = None\r\n for endpoint in aiplatform.Endpoint.list(order_by=\"update_time desc\"):\r\n if endpoint.display_name == endpoint_display_name:\r\n target_endpoint = endpoint\r\n \r\n if target_endpoint is None:\r\n target_endpoint = aiplatform.Endpoint.create(\r\n project=project,\r\n display_name=endpoint_display_name,\r\n )\r\n \r\n model_id = model.uri.split('models/')[1]\r\n target_model = aiplatform.Model(model_id)\r\n target_endpoint.deploy(\r\n model=target_model,\r\n min_replica_count=1,\r\n max_replica_count=1,\r\n traffic_percentage=50,\r\n machine_type='n1-standard-4',\r\n )\r\n```", "Hello,\r\n\r\nyou can wrap the existing Endpoint into an Artifact and pass that to `ModelDeployOp`:\r\n\r\n```python\r\nimport google.cloud.aiplatform as aip\r\nimport kfp\r\nfrom kfp.v2.dsl import component\r\n\r\n@kfp.dsl.pipeline(name=\"example\")\r\ndef pipeline(\r\n ...\r\n):\r\n from google_cloud_pipeline_components.types import artifact_types\r\n from google_cloud_pipeline_components.v1.custom_job import CustomTrainingJobOp\r\n from google_cloud_pipeline_components.v1.endpoint import ModelDeployOp\r\n from google_cloud_pipeline_components.v1.model import ModelUploadOp\r\n from kfp.v2.components import importer_node\r\n\r\n model_upload_op = ModelUploadOp(...)\r\n\r\n endpoint_uri = \"https://us-central1-aiplatform.googleapis.com/v1/projects/xxxx/locations/us-central1/endpoints/yyyy\"\r\n endpoint = kfp.v2.dsl.importer(\r\n artifact_uri=endpoint_uri,\r\n artifact_class=artifact_types.VertexEndpoint,\r\n metadata={\r\n \"resourceName\": \"projects/xxxx/locations/us-central1/endpoints/yyyy\"\r\n }\r\n ).output\r\n \r\n\r\n ModelDeployOp(\r\n endpoint=endpoint,\r\n model=model_upload_op.outputs[\"model\"],\r\n deployed_model_display_name=model_display_name,\r\n dedicated_resources_machine_type=\"n1-standard-16\",\r\n dedicated_resources_min_replica_count=1,\r\n dedicated_resources_max_replica_count=1,\r\n )\r\n```" ]
"2022-02-23T18:52:32"
"2022-05-25T11:40:54"
null
NONE
null
### Feature Area /area sdk /area components I currently have a simple pipeline deploying to Vertex AI that takes an existing model container image from Artifact Registry, uploads it to the Vertex AI model store, creates an endpoint, and deploys the model to that endpoint. So far so good. Now say I have a new revision of that model and I want to deploy that to the same endpoint with a 50/50 traffic split between the old and new version. This is achievable relatively simply in the Vertex AI GUI, but how do I do this in a Kubeflow pipeline? How am I able to get either an existing model (already uploaded) or an already created endpoint to pass to the ModelDeployOp? The following code snippet does not work, as the returned endpoint type doesn't plug into the operation. ``` def pipeline( project: str = PROJECT_ID, model_display_name: str = MODEL_DISPLAY_NAME, serving_container_image_uri: str = IMAGE_URI, ): train_task = print_op("No training to be done here!") model_upload_op = gcc_aip.ModelUploadOp( project=project, location=REGION, display_name=model_display_name, # artifact_uri=WORKING_DIR, serving_container_image_uri=serving_container_image_uri, serving_container_ports=[{"containerPort": 8000}], serving_container_predict_route="/hello_world", serving_container_health_route="/health", ) endpoints = aip.Endpoint.list( filter=f"display_name={ENDPOINT_NAME}", order_by="create_time" ) existing_endpoint = endpoints[0] existing_endpoint = existing_endpoint model_deploy_op = gcc_aip.ModelDeployOp( endpoint=existing_endpoint, model=model_upload_op.outputs["model"], deployed_model_display_name=model_display_name, dedicated_resources_machine_type="n1-standard-4", dedicated_resources_min_replica_count=1, dedicated_resources_max_replica_count=1, traffic_split=traffic_split, ) ``` <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7347/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7347/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7346
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7346/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7346/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7346/events
https://github.com/kubeflow/pipelines/issues/7346
1,148,069,282
I_kwDOB-71UM5EbiWi
7,346
[components] Google Cloud Custom Job missing option in create_custom_training_job_op_from_component
{ "login": "k-gupta", "id": 19842405, "node_id": "MDQ6VXNlcjE5ODQyNDA1", "avatar_url": "https://avatars.githubusercontent.com/u/19842405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/k-gupta", "html_url": "https://github.com/k-gupta", "followers_url": "https://api.github.com/users/k-gupta/followers", "following_url": "https://api.github.com/users/k-gupta/following{/other_user}", "gists_url": "https://api.github.com/users/k-gupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/k-gupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/k-gupta/subscriptions", "organizations_url": "https://api.github.com/users/k-gupta/orgs", "repos_url": "https://api.github.com/users/k-gupta/repos", "events_url": "https://api.github.com/users/k-gupta/events{/privacy}", "received_events_url": "https://api.github.com/users/k-gupta/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false }
[ { "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false } ]
null
[ "This is now supported https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/v1/custom_job/utils.py#L51\r\n\r\nplease upgrade to gcpc v1.0.0 to use this.", "/close", "@IronPan: Closing this issue.\n\n<details>\n\nIn response to [this](https://github.com/kubeflow/pipelines/issues/7346#issuecomment-1051678793):\n\n>/close\n\n\nInstructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.\n</details>", "Hey @IronPan , I think there is a still a bug on this. The network variable is exposed as a component parameter but reserved name is not. It doesn't make sense here because we can specify network name as a parameter at runtime but won't be able set the reserved range to use in that network at runtime. Would it be possible to make that change? " ]
"2022-02-23T13:04:39"
"2022-03-03T23:11:46"
"2022-02-26T06:27:33"
NONE
null
Looks like the option `reserved_ip_ranges` is missing from `create_custom_training_job_op_from_component()` in utils. It was only added to components.yaml.
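Per the maintainer comment above, the option is exposed through the Python utility in google-cloud-pipeline-components v1.0.0. A rough usage sketch follows; only `reserved_ip_ranges` is the point here, and the component body, display name, network, and other parameter values are illustrative assumptions.

```python
# Rough usage sketch (assumes google-cloud-pipeline-components >= 1.0.0, where the
# utils wrapper exposes reserved_ip_ranges; parameter values are illustrative).
from kfp.v2 import dsl
from google_cloud_pipeline_components.v1.custom_job import utils


@dsl.component
def train_op() -> None:
    ...


# Wrap the component as a Vertex AI CustomJob pinned to a reserved IP range on a VPC.
custom_train_op = utils.create_custom_training_job_op_from_component(
    train_op,
    display_name="train-on-private-network",
    machine_type="n1-standard-4",
    network="projects/12345/global/networks/my-vpc",
    reserved_ip_ranges=["my-reserved-range"],
)
# custom_train_op can then be used inside an @dsl.pipeline like any other component.
```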
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7346/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7345
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7345/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7345/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7345/events
https://github.com/kubeflow/pipelines/issues/7345
1,148,044,736
I_kwDOB-71UM5EbcXA
7,345
[sdk] Can't use create_component_from_func with pip packages when running as non-root
{ "login": "skogsbrus", "id": 17073827, "node_id": "MDQ6VXNlcjE3MDczODI3", "avatar_url": "https://avatars.githubusercontent.com/u/17073827?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skogsbrus", "html_url": "https://github.com/skogsbrus", "followers_url": "https://api.github.com/users/skogsbrus/followers", "following_url": "https://api.github.com/users/skogsbrus/following{/other_user}", "gists_url": "https://api.github.com/users/skogsbrus/gists{/gist_id}", "starred_url": "https://api.github.com/users/skogsbrus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skogsbrus/subscriptions", "organizations_url": "https://api.github.com/users/skogsbrus/orgs", "repos_url": "https://api.github.com/users/skogsbrus/repos", "events_url": "https://api.github.com/users/skogsbrus/events{/privacy}", "received_events_url": "https://api.github.com/users/skogsbrus/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false }
[ { "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false } ]
null
[ "With KFP 1.8.11 (compared with 1.6.6) the `_get_packages_to_install_command` has unfortunately been refactored to be performed inline in _func_to_component_spec, making this much harder to monkeypatch.", "cc @connor-mccarthy, seems like it gets to do with pip_install info. ", "@[skogsbrus](https://github.com/skogsbrus), as a short term fix, can you try adding `\"--no-cache-dir\"` as the first element in your `packages_to_install` array and see if this resolves the issue for you? I have not tested, but I suspect this might work.", "@connor-mccarthy I'll be setting up a fork shortly to circumvent this issue, I can confirm then.\r\n\r\nBut AFAIK that won't work either because a non-root user still does not have permission to write to the default install directory unless the image has explicitly been built with this in mind. The reason why my suggested fix works is because it installs to /tmp", "Also note that the warning and error are separate:\r\n\r\nCan't write to cache (but not fatal)\r\n```\r\nWARNING: The directory '/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.\r\n```\r\n\r\nCan't write to install dir:\r\n```\r\nERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/.local'\r\nCheck the permissions.\r\n```", "Thanks, @skogsbrus, for clarifying the warning v error. If you manage to remedy this issue, please feel to submit a PR! And perhaps include some notes on pros/cons and alternative solutions considered in this issue to help a reviewer.", "I've got a fix for this now, but I'll need to go through an approval process at work due to the CLA before I can contribute.", "FYI the contribution process has been stuck for a while and I haven't been able to prioritize this. Will update once I am able to contribute.", "Contribution process has been completed. I'll try to pick this up when I find the time (note: might take a while). In the meantime, feel free to ping me if there's anything I can clarify" ]
"2022-02-23T12:43:05"
"2022-09-23T11:29:44"
null
NONE
null
### Environment * KFP version: 1.7 (KF 1.4) * KFP SDK version: 1.6.6 * All dependencies version: ``` kfp 1.6.6 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ``` ### Steps to reproduce Background: * Due to security concerns it's a bad idea to run containers as root. * For composability and maintenance, it's a good idea to define small & modular KFP components. With this in mind, I wish to report that `create_components_from_func` does not work as expected when the container is run as a non-root user and when the `packages_to_install` parameter is used to add some runtime dependencies. To reproduce, see attached pipeline definition at the bottom. When this pipeline is run, the following output is seen in Kubeflow: ``` WARNING: The directory '/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag. ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/.local' Check the permissions. WARNING: The directory '/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag. ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/.local' Check the permissions. Error: exit status 1 ``` ### Expected result The correct behaviour here would be for packages to be installed in a location that's writable by non-root users. As a direct consequence, that location would also to have to be added to PYTHONPATH. With the attached pipeline definition, `kfp.components._python_op._get_packages_to_install_command` today produces the following yaml: ``` - sh - -c - (PIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install --quiet --no-warn-script-location 'tqdm' || PIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install --quiet --no-warn-script-location 'tqdm' --user) && "$0" "$@" - sh - -ec - | program_path=$(mktemp) printf "%s" "$0" > "$program_path" python3 -u "$program_path" "$@" - | def hello_world(): import tqdm print("Hello world!") import argparse _parser = argparse.ArgumentParser(prog='Hello world', description='') _parsed_args = vars(_parser.parse_args()) _outputs = hello_world(**_parsed_args) ``` I propose that `kfp.components._python_op._get_packages_to_install_command` is changed to instead output: ``` - sh - -c - (PIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install --quiet --no-warn-script-location 'tqdm' || PIP_DISABLE_PIP_VERSION_CHECK=1 PYTHONUSERBASE=/tmp/pip python3 -m pip install --quiet --no-warn-script-location --cache-dir /tmp/pip-cache 'tqdm' --user) && "$0" "$@" - sh - -ec - | PIP_CUSTOM_FOLDER=$(realpath /tmp/pip/lib/*/site-packages) program_path=$(mktemp) printf "%s" "$0" > "$program_path" PYTHONPATH=$PYTHONPATH:$PIP_CUSTOM_FOLDER python3 -u "$program_path" "$@" - | def hello_world(): import tqdm print("Hello world") import argparse _parser = argparse.ArgumentParser(prog='Hello world', description='') _parsed_args = vars(_parser.parse_args()) _outputs = hello_world(**_parsed_args) ``` This change would accomplish two things: * Allow non-root users to install pip packages on the fly * Allow non-root users to install packages from cache A side note: I don't have the historical context of why KFP first tries to install packages as root and on failure as the current user with `--user`, IMO doing it with 
`--user` from the beginning would make more sense. But might be missing something :) If you agree with the structure of my proposal, I can work on the change - seems like a pretty small fix. Thanks! ### Materials and Reference #### Pipeline definition ``` import argparse import kfp import kubernetes def hello_world(): import tqdm print("Hello world!") def hello_world_op(): return kfp.components.create_component_from_func(func=hello_world, packages_to_install=['tqdm'])() def pipeline(): component = hello_world_op() user_sc = kubernetes.client.models.V1SecurityContext(run_as_user=1234) component.set_security_context(user_sc) def get_args(): parser = argparse.ArgumentParser() parser.add_argument('--yaml', type=str, required=True) return parser.parse_args() def main(): args = get_args() kfp.compiler.Compiler().compile( pipeline_func=pipeline, package_path=args.yaml) if __name__ == "__main__": main() print(kfp.__version__) ``` --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7345/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7345/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7343
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7343/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7343/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7343/events
https://github.com/kubeflow/pipelines/issues/7343
1,146,594,609
I_kwDOB-71UM5EV6Ux
7,343
[feature][question]When output artifacts, is there a way to disable compression?
{ "login": "yangyang919", "id": 51769527, "node_id": "MDQ6VXNlcjUxNzY5NTI3", "avatar_url": "https://avatars.githubusercontent.com/u/51769527?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangyang919", "html_url": "https://github.com/yangyang919", "followers_url": "https://api.github.com/users/yangyang919/followers", "following_url": "https://api.github.com/users/yangyang919/following{/other_user}", "gists_url": "https://api.github.com/users/yangyang919/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangyang919/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangyang919/subscriptions", "organizations_url": "https://api.github.com/users/yangyang919/orgs", "repos_url": "https://api.github.com/users/yangyang919/repos", "events_url": "https://api.github.com/users/yangyang919/events{/privacy}", "received_events_url": "https://api.github.com/users/yangyang919/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "@yangyang919 hi, any help you can provide on solving the same? Thanks in advance." ]
"2022-02-22T08:23:54"
"2022-06-23T11:44:11"
null
NONE
null
We have one kubeflow pipeline to train a model and then output a model to Minio storage. Then we are using KServe to fetch the model from Minio and deploy it. But by default, the model is compressed as a tgz file in Minio, which means KServe couldn't load the model. Any guidance on how to solve this? Thanks!!!
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7343/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7343/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7341
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7341/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7341/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7341/events
https://github.com/kubeflow/pipelines/issues/7341
1,145,751,289
I_kwDOB-71UM5ESsb5
7,341
[feature] System utilization for the pipeline and its steps
{ "login": "mikwieczorek", "id": 40968185, "node_id": "MDQ6VXNlcjQwOTY4MTg1", "avatar_url": "https://avatars.githubusercontent.com/u/40968185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mikwieczorek", "html_url": "https://github.com/mikwieczorek", "followers_url": "https://api.github.com/users/mikwieczorek/followers", "following_url": "https://api.github.com/users/mikwieczorek/following{/other_user}", "gists_url": "https://api.github.com/users/mikwieczorek/gists{/gist_id}", "starred_url": "https://api.github.com/users/mikwieczorek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mikwieczorek/subscriptions", "organizations_url": "https://api.github.com/users/mikwieczorek/orgs", "repos_url": "https://api.github.com/users/mikwieczorek/repos", "events_url": "https://api.github.com/users/mikwieczorek/events{/privacy}", "received_events_url": "https://api.github.com/users/mikwieczorek/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "This is similar to https://github.com/kubeflow/pipelines/issues/6336. You can try deploying Prometheus and expose run metrics to there." ]
"2022-02-21T12:43:54"
"2022-02-24T23:47:09"
null
NONE
null
### Feature Area Not sure ### What feature would you like to see? Logging of system utilization to KFP metrics. Pipeline statistics of CPU/RAM/GPU usage over the course of a pipeline run could be helpful in experiment comparison from a performance point of view. An example of how the final results could look when plotted nicely: [wandb-example](https://wandb.ai/borisd13/char-RNN/runs/cw9gnx9z/system ) ### What is the use case or pain point? Monitoring resource utilization by the pipeline and by each step. This would allow comparing the runs in terms of performance/resource requirements. ### Is there a workaround currently? Manual logging or Prometheus? I encountered some Prometheus traces in the repo, but they look deprecated or abandoned. If there is an existing solution to that problem, please let me know. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7341/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7341/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7340
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7340/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7340/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7340/events
https://github.com/kubeflow/pipelines/issues/7340
1,144,950,663
I_kwDOB-71UM5EPo-H
7,340
[feature] Parametrize pipeline using current date and time
{ "login": "irabanillo91", "id": 60658860, "node_id": "MDQ6VXNlcjYwNjU4ODYw", "avatar_url": "https://avatars.githubusercontent.com/u/60658860?v=4", "gravatar_id": "", "url": "https://api.github.com/users/irabanillo91", "html_url": "https://github.com/irabanillo91", "followers_url": "https://api.github.com/users/irabanillo91/followers", "following_url": "https://api.github.com/users/irabanillo91/following{/other_user}", "gists_url": "https://api.github.com/users/irabanillo91/gists{/gist_id}", "starred_url": "https://api.github.com/users/irabanillo91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/irabanillo91/subscriptions", "organizations_url": "https://api.github.com/users/irabanillo91/orgs", "repos_url": "https://api.github.com/users/irabanillo91/repos", "events_url": "https://api.github.com/users/irabanillo91/events{/privacy}", "received_events_url": "https://api.github.com/users/irabanillo91/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-02-20T08:10:17"
"2022-02-24T23:41:05"
null
NONE
null
Is there any way to parametrize a pipeline with current date and time? I'd like to do something like ```python @pipeline() def my_pipeline( date: str = datetime.now().strftime("%Y_%m_%d-%H:%M:%S.%f")[:-4], ): ... ``` By default I want to set that input argument to the datetime when the pipeline is run. However, for reusability purposes, I'd also like to be able to set it manually (pointing to a previous run). I can go into further details of why I want to do this if needed. Any way I can achieve this? Thanks a lot in advance!
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7340/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7340/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7338
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7338/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7338/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7338/events
https://github.com/kubeflow/pipelines/issues/7338
1,144,519,302
I_kwDOB-71UM5EN_qG
7,338
[release] Backend 1.8.1 tracker
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/cc @zijianjoy ", "Thank you Chen for handling this patch release! By learning from this experience, it is better to do a Kubeflow validation next time before cutting an official release. After this PR is merged, would you like to make a RC release for v1.8.1 first, which allows me to do a full validation on Kubeflow, then make an official release? Thank you!", "> Thank you Chen for handling this patch release! By learning from this experience, it is better to do a Kubeflow validation next time before cutting an official release. After this PR is merged, would you like to make a RC release for v1.8.1 first, which allows me to do a full validation on Kubeflow, then make an official release? Thank you!\r\n\r\n@zijianjoy 1.8.1-rc.0 is out: https://github.com/kubeflow/pipelines/releases/tag/1.8.1-rc.0\r\nYou can test it in full fledge Kubeflow now. Thanks!", "Confirming that this release candidate is working on Full Kubeflow, we can cut official version now. Thank you Chen!", "Released: https://github.com/kubeflow/pipelines/releases/tag/1.8.1", "Appreciate the release! @chensun " ]
"2022-02-19T06:50:34"
"2022-03-02T07:31:02"
"2022-03-02T07:31:01"
COLLABORATOR
null
Need to cherry-pick the following PRs to release-1.8 branch before making the official release: - [x] https://github.com/kubeflow/pipelines/pull/7337
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7338/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7336
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7336/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7336/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7336/events
https://github.com/kubeflow/pipelines/issues/7336
1,143,931,733
I_kwDOB-71UM5ELwNV
7,336
[backend] Use `default-editor` as serviceAccountName in multi-user mode
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "cc @chensun @Bobgy ", "I think the fix should be:\r\n\r\n```\r\nconst (\r\n\tDefaultPipelineRunnerServiceAccountFlag = \"DefaultPipelineRunnerServiceAccount\"\r\n)\r\n\r\n\tif len(workflowServiceAccount) == 0 || workflowServiceAccount == common.DefaultPipelineRunnerServiceAccount {\r\n\t\t// To reserve SDK backward compatibility, the backend only replaces\r\n\t\t// serviceaccount when it is empty or equal to default value set by SDK.\r\n\t\tworkflow.SetServiceAccount(common.GetStringConfigWithDefault(common.DefaultPipelineRunnerServiceAccountFlag, common.DefaultPipelineRunnerServiceAccount))\r\n\t}\r\n```" ]
"2022-02-18T23:28:04"
"2022-02-22T07:34:24"
"2022-02-22T07:34:24"
COLLABORATOR
null
### Environment This is an issue I found with KFP backend 1.8.0 on Full fledged Kubeflow. ### Steps to reproduce Run a KFP pipeline on full Kubeflow, the error is ``` This step is in Error state with this message: task 'conditional-execution-pipeline-with-exit-handler-6gfkq.exit-handler-1.flip-coin-op' errored: pods "conditional-execution-pipeline-with-exit-handler-6gfkq-1717566668" is forbidden: error looking up service account jamxl/pipeline-runner: serviceaccount "pipeline-runner" not found ``` ### Materials and Reference We are always using the `pipeline-runner` service account in this file: https://github.com/kubeflow/pipelines/blob/3e734ed19146f569e910f75627d12239ec2e86dc/backend/src/apiserver/template/template.go#L191-L202. Instead, it should use `default-editor` for multi-user mode, the util to check multi-user mode is https://github.com/kubeflow/pipelines/blob/2e945750cb1758eea6db8453b437e57e68152b4a/backend/src/apiserver/common/config.go#L94 <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7336/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7336/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7335
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7335/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7335/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7335/events
https://github.com/kubeflow/pipelines/issues/7335
1,142,896,692
I_kwDOB-71UM5EHzg0
7,335
[feature] When restoring an Experiment, add an option to restore all of its Runs
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Hello @difince , thank you for filing this request. I am wondering what is the use case for restoring all the runs inside an experiment?\r\n\r\nI am assuming the archived runs are something people don't usually visit. I might lack some knowledge on why people would want to massively restore all the runs.", "Hi @zijianjoy \r\nCould you take a look on these comments ( the first one is written by you) - [comment_1](https://github.com/kubeflow/pipelines/issues/5114#issuecomment-777735817) and [comment_2](https://github.com/kubeflow/pipelines/issues/5114#issuecomment-777932647)\r\n\r\nI have already created a [PR](https://github.com/kubeflow/pipelines/pull/7147) about number 3: \r\n\r\n> 3) backend: Check whether a run is under archived experiment when [Unarchive API](https://www.kubeflow.org/docs/pipelines/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-runs--id-:unarchive-post) is called. If so, fail with error 400 (FAILED_PRECONDITION).\r\n\r\nSo as a follow-up, I was thinking to implement a number 2: \r\n\r\n> frontend: When restoring an experiment, popup should shows 3 options for user to choose: Cancel, Restore Experiment and Restore Experiment and All Runs, and highlight Restore Experiment because that is the default behavior.\r\n\r\nIf I understand the use case correctly - when the user wants to restore an Experiment - it would be nice having an option to restore with a single click the experiment & unarchive all of its runs. \r\nAs a whole, this use case is marked as \"good to have\" - so if you think that it is not needed - I will close the issue", "I just took a look on the backend - it seems that there is no appropriate endpoint to be used for `bulk` restore of experiment & all runs \r\nIf we decide that this use-case is wanted - we could think of adding a new endpoint to [experiment.proto](https://github.com/kubeflow/pipelines/blob/master/backend/api/experiment.proto) \r\nSomething like this .. WDYT?\r\n```\r\n/apis/v1beta1/experiments/{id}/runs:unarchive\r\n```", "Hello @difince , thank you for the reference. I tend to agree that this feature is good-to-have after [comment 2](https://github.com/kubeflow/pipelines/issues/5114#issuecomment-777932647). But if you are interested in implementing this API, then I am happy to support it. The API proposal sounds good to me:\r\n\r\n```\r\n/apis/v1beta1/experiments/{id}/runs:unarchive\r\n```\r\n", "Hello @difince , I have got an update from the team yesterday. We start to suggest everyone making design proposal for review, for any API related changes. If you are going to work on this item, would you like to put together a doc to describe the new API? Then we can discuss it in the Kubeflow Pipelines Community meeting." ]
"2022-02-18T11:39:39"
"2022-03-03T23:24:35"
null
MEMBER
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? When restoring an experiment, a popup should show 3 options for the user to choose: `Cancel`, `Restore Experiment` and `Restore Experiment and All Runs`. `Restore Experiment` needs to be highlighted because that is the default behavior. <!-- Provide a description of this feature and the user experience. --> ### What is the use case or pain point? see this [comment](https://github.com/kubeflow/pipelines/issues/5114#issuecomment-777735817) <!-- It helps us understand the benefit of this feature for your use case. --> ### Is there a workaround currently? Yes, currently the user could Restore the Experiment and then restore its Runs. <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7335/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7334
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7334/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7334/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7334/events
https://github.com/kubeflow/pipelines/issues/7334
1,142,738,713
I_kwDOB-71UM5EHM8Z
7,334
[feature] Add deletePipelineByName endpoint
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false }
[ { "login": "difince", "id": 11557050, "node_id": "MDQ6VXNlcjExNTU3MDUw", "avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/difince", "html_url": "https://github.com/difince", "followers_url": "https://api.github.com/users/difince/followers", "following_url": "https://api.github.com/users/difince/following{/other_user}", "gists_url": "https://api.github.com/users/difince/gists{/gist_id}", "starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/difince/subscriptions", "organizations_url": "https://api.github.com/users/difince/orgs", "repos_url": "https://api.github.com/users/difince/repos", "events_url": "https://api.github.com/users/difince/events{/privacy}", "received_events_url": "https://api.github.com/users/difince/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @difince difince" ]
"2022-02-18T09:45:35"
"2022-02-24T23:39:58"
null
MEMBER
null
### Feature Area /area backend <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Add a new endpoint - deletePipelineByName In the case of Pipeline standalone: `/apis/v1beta1/namespaces/-/pipelines/{name}` In the case of full-fledged deployment: `/apis/v1beta1/namespaces/{namespace}/pipelines/{name}` <!-- Provide a description of this feature and the user experience. --> ### What is the use case or pain point? Simplify the user experience and keep consistent with other tools like Kubernetes, DAGs in Argo, and Airflow that use names as a reference instead of UUIDs. This issue is inspired by https://github.com/kubeflow/pipelines/issues/3360. <!-- It helps us understand the benefit of this feature for your use case. --> ### Is there a workaround currently? Yes, use UUIDs instead <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7334/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7330
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7330/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7330/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7330/events
https://github.com/kubeflow/pipelines/issues/7330
1,141,072,007
I_kwDOB-71UM5EA2CH
7,330
[feature] allow jobs to fail
{ "login": "yarnabrina", "id": 39331844, "node_id": "MDQ6VXNlcjM5MzMxODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/39331844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yarnabrina", "html_url": "https://github.com/yarnabrina", "followers_url": "https://api.github.com/users/yarnabrina/followers", "following_url": "https://api.github.com/users/yarnabrina/following{/other_user}", "gists_url": "https://api.github.com/users/yarnabrina/gists{/gist_id}", "starred_url": "https://api.github.com/users/yarnabrina/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yarnabrina/subscriptions", "organizations_url": "https://api.github.com/users/yarnabrina/orgs", "repos_url": "https://api.github.com/users/yarnabrina/repos", "events_url": "https://api.github.com/users/yarnabrina/events{/privacy}", "received_events_url": "https://api.github.com/users/yarnabrina/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Making it configurable sounds fine to me. \r\n\r\nA possible workaround for now, albeit hacky, you could put your training into an exit handler task. In that case, it would run regardless whether the upstream tasks succeed or not.", "Hi @chensun, thanks for taking this. Would be very nice to have this.\r\n\r\nRegarding the workaround you suggested, can you please give some more details? I did find about `kfp.dsl.ExitHandler` before, but there doesn't seem to be much documentation and/or examples online, and in my attempts I failed to apply an exit task for a few specific operations, as it complained that it has to be global for all.", "This is related to https://github.com/kubeflow/pipelines/issues/6749\r\n\r\nGeneral workaround is to always return status code 0 from the pipeline steps and e.g. return some output (e.g. a string `OK` or `FAIL`) instead, which can be chained together with `dsl.Condition` to verify whether to continue the pipeline or not in a specific branch of the `ParallelFor`. \r\n\r\n**This workaround does not cover the overall pipeline status**, so no warning โš ๏ธ signs / red statuses - everything is green โœ…\r\n\r\n", "@yarnabrina , @chensun I've created a pull request implementing this behaviour - I would really appreciate your feedback on that [https://github.com/kubeflow/pipelines/pull/7373](https://github.com/kubeflow/pipelines/pull/7373).", "I think most situations can be handled with an [`kfp.dsl.ExitHandler`](https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.dsl.html#kfp.dsl.ExitHandler), which runs a single ContainerOp regardless of if the ContainerOps it wraps succeed or fail. \r\n\r\nBut we might consider making functionality like ExitHandler more \"implicit\" by having an [airflow-style `trigger_rule`](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html#trigger-rules) flag on the ContainerOp. (Proposed in issue: https://github.com/kubeflow/pipelines/issues/2372)", "I don't fully understand where it's \"most situations\". Are multiple exit handlers now supported in KFP? As far as I can see, no https://github.com/kubeflow/pipelines/blob/e2687ce5c22455bbde0ababb3ad46588d5ed7939/sdk/python/kfp/compiler/compiler.py#L236 , so in common scenario of branch-out with ParallelFor (as in my PR) - it cannot be used.", "@marrrcin I agree that only having a single ExitHandler is problematic, would allowing multiple also address your issue?", "I don't think so - it's not a single job launched multiple times in parallel, it's a chain of consecutive jobs of which some might be allowed to fail - you can take a look at the screenshot in my PR (https://github.com/kubeflow/pipelines/pull/7373). Even having multiple exit handlers would not cover that imho.", "Exit handler ops cannot explicitly depend on any previous operations so they cannot be parameterized by outputs of previous operations or be guaranteed to run after previous steps.\r\n\r\nMy use case is having integration tests run that are themselves kubeflow pipelines and I would like to be able to verify that a task fails without the integration test failing. Configuring that in the dsl would be a lot cleaner than being included in application logic or directly in ci/cd.", "I also have a similar scenario, any work around this yet?" ]
"2022-02-17T09:21:49"
"2023-02-07T02:24:37"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> /area backend ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> _**Allow failure of jobs**_ - If an operation fails, do not fail the pipeline. Allow the pipeline to continue to the next stage, and there it may fail if that does not have the pre-requisites. ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> In machine learning pipelines, it is fairly common to run multiple models, or possibly different configurations of the same model, and this possibly runs on a subset of training data. After these are trained, usually they are compared using some metric, the best model is chosen, and that is run on the entire training data to have the final trained model. If someone uses [`kfp.dsl.ParallelFor`](https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.dsl.html#kfp.dsl.ParallelFor) to run the different models, failure in one of them causes the entire pipeline to fail and the possibly successful training of other models is lost. But if the next stage, the one to compare using the metric, supports comparison of the available (i.e. successful) models, the pipeline failure costs the time to train those models, as one has to restart. If we support the requested feature, the failed operations will display a warning (maybe ⚠️), and the pipeline will go on to the final training step. Then depending on whether that supports comparison of a subset of all models, it will proceed as if the failed models were not there. If not, it'll fail there. Very similar functionality is available in a few CI tools. For example, Gitlab CI has [`allow_failure`](https://docs.gitlab.com/ee/ci/yaml/#allow_failure), Travis CI has [`allow_failures`](https://docs.travis-ci.com/user/customizing-the-build/#jobs-that-are-allowed-to-fail), etc. ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> It is possible to do very broad top level exception handling to suppress failures. However, in this way the fact that it failed is hidden in the logs and not displayed in the pipeline dashboard. In scheduled pipelines where no one really goes through the logs of all "successful" pipelines, these failures will go unnoticed. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7330/reactions", "total_count": 14, "+1": 14, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7330/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7329
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7329/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7329/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7329/events
https://github.com/kubeflow/pipelines/issues/7329
1,140,778,166
I_kwDOB-71UM5D_uS2
7,329
[backend] Some executions on Kubeflow MLMD have no values
{ "login": "calvinleungyk", "id": 6678871, "node_id": "MDQ6VXNlcjY2Nzg4NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/6678871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calvinleungyk", "html_url": "https://github.com/calvinleungyk", "followers_url": "https://api.github.com/users/calvinleungyk/followers", "following_url": "https://api.github.com/users/calvinleungyk/following{/other_user}", "gists_url": "https://api.github.com/users/calvinleungyk/gists{/gist_id}", "starred_url": "https://api.github.com/users/calvinleungyk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calvinleungyk/subscriptions", "organizations_url": "https://api.github.com/users/calvinleungyk/orgs", "repos_url": "https://api.github.com/users/calvinleungyk/repos", "events_url": "https://api.github.com/users/calvinleungyk/events{/privacy}", "received_events_url": "https://api.github.com/users/calvinleungyk/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Can't tell if anything was wrong without seeing the pipeline/component code, it appears to me that the value here is a json serialized list of string." ]
"2022-02-17T01:58:19"
"2022-02-17T23:36:23"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Full Kubeflow deployment * KFP version: 1.5 * KFP SDK version: 1.6.0 ### Steps to reproduce Some output values returned by Kubeflow MLMD's `mlmd.get_executions_by_context(context.id)` ([reference](https://www.tensorflow.org/tfx/ml_metadata/api_docs/python/mlmd/MetadataStore#get_executions_by_context)) include only the field names and no values, e.g. ``` custom_properties { key: "input:output_artifacts" value { string_value: "[\"saved_model_uri\", \"model_name\", \"user_data\"]" } } ``` ### Expected result Output values of components should include the field names as well as the values. ### Materials and Reference --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
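To reproduce this kind of listing outside a pipeline step, a minimal MLMD query might look like the sketch below; the gRPC service address and the use of the first context are assumptions about a typical full-Kubeflow install, not details from the report.

```python
from ml_metadata import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Assumed in-cluster address of the MLMD gRPC service in a full Kubeflow deployment.
config = metadata_store_pb2.MetadataStoreClientConfig(
    host="metadata-grpc-service.kubeflow", port=8080)
store = metadata_store.MetadataStore(config)

# Dump the custom properties of every execution in one context, which is where
# the seemingly incomplete values above would show up.
context = store.get_contexts()[0]
for execution in store.get_executions_by_context(context.id):
    for key, value in execution.custom_properties.items():
        print(key, value)
```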
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7329/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7328
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7328/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7328/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7328/events
https://github.com/kubeflow/pipelines/issues/7328
1,140,777,684
I_kwDOB-71UM5D_uLU
7,328
[backend] Only 1 component gets logged into MLMD for some pipeline runs
{ "login": "calvinleungyk", "id": 6678871, "node_id": "MDQ6VXNlcjY2Nzg4NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/6678871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calvinleungyk", "html_url": "https://github.com/calvinleungyk", "followers_url": "https://api.github.com/users/calvinleungyk/followers", "following_url": "https://api.github.com/users/calvinleungyk/following{/other_user}", "gists_url": "https://api.github.com/users/calvinleungyk/gists{/gist_id}", "starred_url": "https://api.github.com/users/calvinleungyk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calvinleungyk/subscriptions", "organizations_url": "https://api.github.com/users/calvinleungyk/orgs", "repos_url": "https://api.github.com/users/calvinleungyk/repos", "events_url": "https://api.github.com/users/calvinleungyk/events{/privacy}", "received_events_url": "https://api.github.com/users/calvinleungyk/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "In v1, the metadata logging is done in an async way. There could be a number of reasons if you have missing MLMD.\r\n\r\nKFP 1.5 is relatively old, there's an chance there were bugs being fixed in the later version. Could you please try using the latest KFP release?", "@chensun we are actually on 1.7.0 (aka default version for KF 1.4 distribution). \r\n\r\nIs there any pointers as to what and how KFP does log information into MLMD?" ]
"2022-02-17T01:57:19"
"2022-02-18T17:26:50"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Full Kubeflow deployment * KFP version: 1.5 * KFP SDK version: 1.6.0 ### Steps to reproduce After running several pipelines multiple times, only 1 component gets logged into MLMD for some pipeline runs. This is problematic, as we can't reliably query MLMD for pipeline artifacts. ### Expected result For every run, we expect all components to log information into Kubeflow MLMD. ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7328/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7328/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7327
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7327/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7327/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7327/events
https://github.com/kubeflow/pipelines/issues/7327
1,140,775,951
I_kwDOB-71UM5D_twP
7,327
[feature] Remove custom hash suffix to end of KFP pipeline name in executions
{ "login": "calvinleungyk", "id": 6678871, "node_id": "MDQ6VXNlcjY2Nzg4NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/6678871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calvinleungyk", "html_url": "https://github.com/calvinleungyk", "followers_url": "https://api.github.com/users/calvinleungyk/followers", "following_url": "https://api.github.com/users/calvinleungyk/following{/other_user}", "gists_url": "https://api.github.com/users/calvinleungyk/gists{/gist_id}", "starred_url": "https://api.github.com/users/calvinleungyk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calvinleungyk/subscriptions", "organizations_url": "https://api.github.com/users/calvinleungyk/orgs", "repos_url": "https://api.github.com/users/calvinleungyk/repos", "events_url": "https://api.github.com/users/calvinleungyk/events{/privacy}", "received_events_url": "https://api.github.com/users/calvinleungyk/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "There needs to be some unique name, otherwise, if you run a pipeline multiple time, how would we differentiate them?\r\n\r\n> Users have to query for the correct execution or context and get the relevant id then get the desired pipeline which is counter intuitive and troublesome.\r\n\r\nThis feels a reasonable pattern to me. Otherwise, I guess you're looking for a filtering by partial match query? You would still need to have a way to pick one from the filtered list in case there are multiple matches." ]
"2022-02-17T01:53:41"
"2022-02-18T00:19:54"
null
NONE
null
### Feature Area /area backend ### What feature would you like to see? Remove the custom hash suffix appended to the KFP pipeline name in executions. ### What is the use case or pain point? KFP adds a custom hash to the end of the pipeline name, similar to how Kubernetes adds a unique suffix to pod names. This makes it hard to query runs and pipeline metadata after execution. ### Is there a workaround currently? Users have to query for the correct execution or context, get the relevant id, and then fetch the desired pipeline, which is counter-intuitive and troublesome. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7327/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7325
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7325/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7325/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7325/events
https://github.com/kubeflow/pipelines/issues/7325
1,140,723,909
I_kwDOB-71UM5D_hDF
7,325
[sdk] kfp.dsl.RUN_ID_PLACEHOLDER confusion: is this different than Argo's {{workflow.uid}}?
{ "login": "jli", "id": 133466, "node_id": "MDQ6VXNlcjEzMzQ2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/133466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jli", "html_url": "https://github.com/jli", "followers_url": "https://api.github.com/users/jli/followers", "following_url": "https://api.github.com/users/jli/following{/other_user}", "gists_url": "https://api.github.com/users/jli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jli/subscriptions", "organizations_url": "https://api.github.com/users/jli/orgs", "repos_url": "https://api.github.com/users/jli/repos", "events_url": "https://api.github.com/users/jli/events{/privacy}", "received_events_url": "https://api.github.com/users/jli/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false }
[ { "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @jli Argo's `{{workflow.uid}}` is an Argo-specific and `kfp.dsl.RUN_ID_PLACEHOLDER` is a platform neutral argument. Feel free to reopen if you need more clarification, thanks!" ]
"2022-02-17T00:23:48"
"2022-02-17T23:18:27"
"2022-02-17T23:18:27"
CONTRIBUTOR
null
### Environment * KFP version: 1.7.1 * KFP SDK version: 1.6.2 * All dependencies version: ``` kfp 1.6.2 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ``` ### Steps to reproduce My team uses `kfp.dsl.RUN_ID_PLACEHOLDER` as a way to reference a particular KFP run. We use this as part of the GCS path for storing artifacts, and we save it in BigQuery in a `kfp_run_id` column. I wanted to change our system so we use a short version of this ID in some places. For example, if the full ID is `ee7b084b-5051-4a08-b007-b075b2d8bd08`, I wanted to use the first segment `ee7b084b` (kind of like using short git commit hashes). I saw that Argo supports some expressions ([docs for Argo workflow variables](https://github.com/argoproj/argo-workflows/blob/master/docs/variables.md)), so I tried using `{{=sprig.substr(0, 8, workflow.uid)}}`. This appeared to work, but the 8 characters didn't match the full KFP run ID. ### Expected result I expected `{{=sprig.substr(0, 8, workflow.uid)}}` to be the first 8 characters of `{{workflow.uid}}`. ### Materials and Reference I was debugging by passing `plain:{{workflow.uid}} substr:{{=sprig.substr(0, 36, workflow.uid)}}` as an argument to a test component, and I noticed that the workflow object on k8s replaces the "plain" version but not the "substr" version (based on `kubectl get wf`). Based on the discussions here, it seems that KFP's backend is searching for the exact string `{{workflow.uid}}` and replacing it with its own UUID. Is that correct? - https://github.com/kubeflow/pipelines/issues/3681 - https://github.com/kubeflow/pipelines/pull/4995 - https://github.com/kubeflow/pipelines/pull/3709 - https://github.com/kubeflow/pipelines/issues/5474 Is there any way to work around this? Thanks. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
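For context, a minimal v1-SDK sketch of how `kfp.dsl.RUN_ID_PLACEHOLDER` is typically threaded into a component as a plain string is shown below; the component body and bucket path are illustrative assumptions, not taken from the report.

```python
import kfp
from kfp import dsl


def echo_run_id(run_id: str):
    # At runtime this receives the run UUID substituted by the KFP backend.
    print(f"artifacts for this run would go to gs://my-bucket/{run_id}/")


echo_op = kfp.components.create_component_from_func(
    echo_run_id, base_image="python:3.8-slim")


@dsl.pipeline(name="run-id-demo")
def run_id_pipeline():
    echo_op(run_id=dsl.RUN_ID_PLACEHOLDER)
```

Because the backend appears to substitute the exact placeholder string rather than evaluating Argo expressions around it (as the linked discussions suggest), any transformation such as truncation would have to happen inside the component itself.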
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7325/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7325/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7310
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7310/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7310/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7310/events
https://github.com/kubeflow/pipelines/issues/7310
1,138,450,378
I_kwDOB-71UM5D21_K
7,310
[sdk] PipelineParams are not serialized
{ "login": "max-gartz", "id": 32574331, "node_id": "MDQ6VXNlcjMyNTc0MzMx", "avatar_url": "https://avatars.githubusercontent.com/u/32574331?v=4", "gravatar_id": "", "url": "https://api.github.com/users/max-gartz", "html_url": "https://github.com/max-gartz", "followers_url": "https://api.github.com/users/max-gartz/followers", "following_url": "https://api.github.com/users/max-gartz/following{/other_user}", "gists_url": "https://api.github.com/users/max-gartz/gists{/gist_id}", "starred_url": "https://api.github.com/users/max-gartz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/max-gartz/subscriptions", "organizations_url": "https://api.github.com/users/max-gartz/orgs", "repos_url": "https://api.github.com/users/max-gartz/repos", "events_url": "https://api.github.com/users/max-gartz/events{/privacy}", "received_events_url": "https://api.github.com/users/max-gartz/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This is expected. The basic building blocks for a pipeline are components and dsl features like `dsl.Condition`, `dsl.ParallelFor`, etc. You should avoid having arbitrary Python code like `dict(foo=test)`, they are non-containerized code that won't be executed at pipeline runtime. \r\nConsider using a component instead. For example:\r\n\r\n```python\r\n@component\r\ndef build_dict(key: str, value: str) -> dict:\r\n return dict(key=value)\r\n```", "@chensun thanks for the feedback.\r\nThis seems to me to be quite a limitation though. \r\nOne would need to create such a component for every dict one wants to pass. \r\na similar case is using string valued parameters in fstrings or concatenation. \r\nMoreover this clutters the graph with uninteresting operations. \r\n\r\nI agree that functional stuff has to happen inside of components, but combining parameters should be possible in the pipeline definition itself.\r\nFrom how I understand things, kfp could serialize the parameters and replace them at runtime. At least from the issue I mentioned, this seems to be what was happening at some point." ]
"2022-02-15T09:58:37"
"2022-02-18T07:57:25"
"2022-02-18T00:04:12"
NONE
null
### Environment * KFP SDK version: 1.8.11 (v2) This is related to: #2206 I also commented there, but I am not sure if adding a new issue is the better way to go. I am trying to pass a dict containing pipeline parameters to a component (as shown by numerology in #2206 ), but the serialization of the pipeline params does not work. ```python import fire from utils.pipeline_cli import PipelineCLI from kfp.v2 import dsl @dsl.component(base_image="python:3.8-slim") def print_op(params: dict): print(params) @dsl.pipeline( name="test", description="test-pipeline.", ) def pipeline(test: str = "test"): print_op(params=dict(foo=test)) if __name__ == "__main__": cli = PipelineCLI(pipeline=pipeline) fire.Fire(cli) ``` where PipelineCLI implements the compile and run subcommands. Even the following minimal example does not work. ```bash python3 -c 'import kfp, json;print(json.dumps(kfp.dsl.PipelineParam("aaa")))' ``` Both yield: ```bash Traceback (most recent call last): File "<string>", line 1, in <module> File "/opt/conda/lib/python3.7/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "/opt/conda/lib/python3.7/json/encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "/opt/conda/lib/python3.7/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/opt/conda/lib/python3.7/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type PipelineParam is not JSON serializable ```
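The suggestion in the maintainer reply above (build the dict inside a component rather than in the pipeline body) would look roughly like the sketch below; the component names and base image are assumptions for illustration.

```python
from kfp.v2 import dsl
from kfp.v2.dsl import component


@component(base_image="python:3.8-slim")
def build_params(key: str, value: str) -> dict:
    # Runs in a container at pipeline runtime, so it can consume the resolved parameter.
    return {key: value}


@component(base_image="python:3.8-slim")
def print_op(params: dict):
    print(params)


@dsl.pipeline(name="test")
def pipeline(test: str = "test"):
    params_task = build_params(key="foo", value=test)
    print_op(params=params_task.output)
```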
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7310/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7302
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7302/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7302/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7302/events
https://github.com/kubeflow/pipelines/issues/7302
1,136,742,596
I_kwDOB-71UM5DwVDE
7,302
[feature] Multi-user support for local standalone deployment
{ "login": "terrykong", "id": 7576060, "node_id": "MDQ6VXNlcjc1NzYwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/7576060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/terrykong", "html_url": "https://github.com/terrykong", "followers_url": "https://api.github.com/users/terrykong/followers", "following_url": "https://api.github.com/users/terrykong/following{/other_user}", "gists_url": "https://api.github.com/users/terrykong/gists{/gist_id}", "starred_url": "https://api.github.com/users/terrykong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/terrykong/subscriptions", "organizations_url": "https://api.github.com/users/terrykong/orgs", "repos_url": "https://api.github.com/users/terrykong/repos", "events_url": "https://api.github.com/users/terrykong/events{/privacy}", "received_events_url": "https://api.github.com/users/terrykong/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Duplicate with https://github.com/kubeflow/pipelines/issues/6314.\r\n\r\nCurrently we don't have plan to support Multi-user feature in KFP standalone. Please use full Kubeflow if you want to benefit from Multi-user feature." ]
"2022-02-14T04:15:51"
"2022-02-17T23:26:08"
"2022-02-17T23:26:08"
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> I would like to be able to run pipelines in specific namespaces that I have set up as a cluster administrator without having to deploy the full Kubeflow deployment. ### What is the use case or pain point? <!-- It helps us understand the benefit of this feature for your use case. --> Right now, I have followed [these instructions](https://www.kubeflow.org/docs/components/pipelines/installation/localcluster-deployment/) and everything works, but when I try to create a run from a pipeline func and specify a namespace, I see ```text HTTP response body: {"error":"Validate experiment request failed.: Invalid input error: In single-user mode, CreateExperimentRequest shouldn't contain resource references.","code":3,"message":"Validate experiment request failed.: Invalid input error: In single-user mode, CreateExperimentRequest shouldn't contain resource references.","details":[{"@type":"type.googleapis.com/api.Error","error_message":"In single-user mode, CreateExperimentRequest shouldn't contain resource references.","error_details":"Validate experiment request failed.: Invalid input error: In single-user mode, CreateExperimentRequest shouldn't contain resource references."}]} ``` My use case is that I have an existing kubernetes cluster, and there are existing namespaces that everyone uses, so it would be convenient if I could just set the namespace and have that be accepted by the KFP backend. ### Is there a workaround currently? <!-- Without this feature, how do you accomplish your task today? --> Not that I am aware of. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
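For comparison, the call path that does work against a standalone (single-user) deployment omits the namespace and resource references entirely; the sketch below is a hedged illustration, with the port-forwarded host URL and the trivial pipeline being assumptions rather than part of the report.

```python
import kfp
from kfp import dsl


@dsl.pipeline(name="hello-standalone")
def hello_pipeline():
    # A single trivial step, just to have something to run.
    dsl.ContainerOp(name="hello", image="alpine", command=["echo", "hello"])


# Assumed address of the ml-pipeline UI/API after `kubectl port-forward`.
client = kfp.Client(host="http://localhost:8080")
client.create_run_from_pipeline_func(hello_pipeline, arguments={})
```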
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7302/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7301
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7301/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7301/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7301/events
https://github.com/kubeflow/pipelines/issues/7301
1,134,166,557
I_kwDOB-71UM5DmgId
7,301
Reuse artifact in different pipeline/run
{ "login": "irabanillo91", "id": 60658860, "node_id": "MDQ6VXNlcjYwNjU4ODYw", "avatar_url": "https://avatars.githubusercontent.com/u/60658860?v=4", "gravatar_id": "", "url": "https://api.github.com/users/irabanillo91", "html_url": "https://github.com/irabanillo91", "followers_url": "https://api.github.com/users/irabanillo91/followers", "following_url": "https://api.github.com/users/irabanillo91/following{/other_user}", "gists_url": "https://api.github.com/users/irabanillo91/gists{/gist_id}", "starred_url": "https://api.github.com/users/irabanillo91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/irabanillo91/subscriptions", "organizations_url": "https://api.github.com/users/irabanillo91/orgs", "repos_url": "https://api.github.com/users/irabanillo91/repos", "events_url": "https://api.github.com/users/irabanillo91/events{/privacy}", "received_events_url": "https://api.github.com/users/irabanillo91/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "To reuse an existing artifact, you could use an `importer` that imports an artifact with a given path. For example,\r\nhttps://github.com/kubeflow/pipelines/blob/a9f18fe985016ba9c1cbb24665d2b173797c91ec/sdk/python/kfp/v2/compiler_cli_tests/test_data/pipeline_with_importer.py#L50-L55\r\nThis currently only works on Vertex Pipelines, but we will support it in Kubeflow Pipelines backend in the near future.", "Nice! So in order to skip the execution of the first component based on the given uri, I was thinking about something like this\r\n\r\n```python\r\n@dsl.pipeline(name='pipeline-with-importer', pipeline_root='dummy_root')\r\ndef my_pipeline(dataset_uri: Optional[str]):\r\n with dsl.Condition(dataset_uri != ''):\r\n dataset_preparation = importer(artifact_uri=dataset_uri, artifact_class=Dataset)\r\n\r\n with dsl.Condition(dataset_uri == ''):\r\n dataset_preparation = dataset_preparation_op()\r\n\r\n train = train_op(dataset=dataset_preparation.outputs['artifact'])\r\n eval = eval_op(dataset=dataset_preparation.outputs['artifact'])\r\n```\r\n\r\nHowever, this won't work if the first condition it's met! The reason is that at compilation time, train&eval are defined to use the output from the component in the second condition. How can I overcome this?\r\n\r\n\r\nOn a different note, one thing that is slightly annoying is that the name of the artifact is hard-coded (see [here](https://github.com/kubeflow/pipelines/blob/a9f18fe985016ba9c1cbb24665d2b173797c91ec/sdk/python/kfp/v2/components/importer_node.py#L62)), so it's always set to ['artifact'](https://github.com/kubeflow/pipelines/blob/a9f18fe985016ba9c1cbb24665d2b173797c91ec/sdk/python/kfp/v2/components/importer_node.py#L26). **Couldn't we just pass the name as an argument to importer? And maybe have 'artifact' as default value.**\r\n\r\nFor my use case, that in turn forces me to set the output name for `dataset_preparation` to 'artifact as well. That component might generate multiple artifacts, so I'd rather use a descriptive name for each of them.", "@chensun Until it is supported in Kubeflow should we use simple `str` type for the dataset url and load it ourselves? ", "> @chensun Until it is supported in Kubeflow should we use simple `str` type for the dataset url and load it ourselves?\r\n\r\nI think it's not straight-forward, or at least I haven't been able to come up with an efficient way. I believe you can't convert the `str` to `Dataset` in the pipeline definition (that'd be executed at compilation time, not runtime). So the only way I can think of is inside the component. If the URI is passed, the component assigns it to the `Output[Dataset]` and skips the rest of the component logic. Something like:\r\n\r\n```python\r\n@component\r\ndef dataset_preparation_op(dataset_uri: str, dataset: Output[Dataset])\r\n\tif len(dataset_uri):\r\n\t\tdataset.uri = dataset_uri\r\n else:\r\n\t\t'Run dataset preparation logic'\r\n```\r\n\r\nHowever, I've set up Kubeflow to push the output artifacts to Minio after execution. So when this component ends, it will try to push `dataset.path` -> `dataset.uri`. 
Which will fail cause there's nothing at`dataset.path` within the container.\r\n\r\nTherefore, I'd have to do the following:\r\n\r\n```python\r\n@component\r\ndef dataset_preparation_op(dataset_uri: str, dataset: Output[Dataset])\r\n\tif len(dataset_uri):\r\n\t\tdataset.uri = dataset_uri\r\n\t\tdownload_object(key=dataset.uri, path=dataset.path)\r\n else:\r\n\t\t'Run dataset preparation logic'\r\n```\r\n\r\nThis would work, but it is extremely inefficient. Not only I'm launching a pod for just a simple cast, but now I have to download an artifact (which could be big), just to upload it immediately after and overwrite the original one with an exact copy of itself.", "@irabanillo91 I meant until we can specify that we don't want to upload a `Dataset` object I see 2 options : \r\n1. Use KFP SDK v1 and use `InputPath('Dataset')`\r\n2. Use KFP SDK v2 and ditch the `Dataset` object all together and only use a `my_dataset_path: str` or `my_dataset_path: InputPath(str)`, you'd have a `my_dataset_path: OutpuPath(str)` in a \"Download Component\"", "> @irabanillo91 I meant until we can specify that we don't want to upload a `Dataset` object I see 2 options :\r\n> \r\n> 1. Use KFP SDK v1 and use `InputPath('Dataset')`\r\n> 2. Ditch the `Dataset` object all together and only use a `my_dataset_path: str` to download it in a Download Component then you share the PVC with other components and you're good to go. Other components can receive the local path as input.\r\n\r\n@AlexandreBrown thanks for the reply! So it'd be something like this?\r\n\r\n```python\r\nDATASET_PATH = '/data/dataset.csv'\r\n\r\n@component\r\ndef prepare_dataset_component(dataset_uri: str):\r\n if len(dataset_uri):\r\n download_object(key=dataset_uri, path=DATASET_PATH)\r\n else:\r\n import time\r\n dataset = dataset_preparation_logic() \r\n with open(DATASET_PATH, \"w\") as f:\r\n f.write(dataset)\r\n upload_uri = f\"s3://data-bucket/datasets/{time.strftime('%Y%m%d-%H%M%S')}.csv\"\r\n upload_object(key=upload_uri, path=DATASET_PATH)\r\n\r\n@component\r\ndef training_component():\r\n with open(DATASET_PATH, \"w\") as f:\r\n dataset = f.read()\r\n model = training_logic(dataset)\r\n\r\n@pipeline(name=\"\", description=\"\", pipeline_root=\"\")\r\ndef training_pipeline(dataset_uri: Optional[str]):\r\n prepare_dataset_op = prepare_dataset_component(dataset_uri=dataset_uri)\r\n training_op = training_component()\r\n```\r\n\r\nIf `dataset_uri` is empty, it will run the component and store the dataset locally. If `dataset_uri` is provided instead, it downloads it and skips component execution. It just requires ensuring `DATASET_PATH` is visible across components.\r\n\r\nNot ideal since it doesn't leverage Kubeflow abstractions for artifacts handling. It requires manually uploading/downloading dataset and setting up the the shared volume across components. But I guess it would get us going for now.", "You can split dataset_preparation into two components, the first one is used to create a pvc, and the second one accepts the url parameter to update the data in the pvc, so that the first component will only run once, which is equivalent to mounting the same One dataset, the second one is used to update the data, and then the same Pipeline can use different urls to achieve the purpose of expanding the dataset" ]
"2022-02-12T15:11:51"
"2022-05-09T06:04:04"
null
NONE
null
My pipeline consists of three components: `dataset_preparation`, `training` and `evaluation`. The first one reads all files in my data lake and generates a dataset artifact (stored to S3) that is used in the following components. We keep adding new files to the datalake on a daily basis, so by default I want to train on all the data available. Therefore, I don't want to cache that component. However, in some cases I might want to re-run that pipeline (with different hyperparameters for instance). To be able to compare runs, I'd like to run it on an existing dataset artifact. Thus I'd like to provide the artifact as an input parameter to the pipeline and skip the execution of `dataset_preparation`. **Summary**: I'd like to be able to provide a dataset artifact as an optional input to the pipeline. If not provided, it will be created by the `dataset_preparation` component. If provided, `dataset_preparation` will be skipped. Is there any way to accomplish this? Thanks a lot in advance!
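The maintainer reply above points at `importer` for this; a compact sketch of that suggestion using the v2 SDK namespace is shown below. The pipeline root, default URI, and the trivial training component are placeholders, and, as noted in the thread, the importer path only worked on Vertex Pipelines at the time.

```python
from kfp.v2 import dsl
from kfp.v2.dsl import Dataset, Input, component, importer


@component(base_image="python:3.8-slim")
def train(dataset: Input[Dataset]):
    print(f"training on {dataset.uri}")


@dsl.pipeline(name="train-on-existing-dataset", pipeline_root="gs://my-bucket/root")
def my_pipeline(dataset_uri: str = "gs://my-bucket/datasets/existing.csv"):
    # Reuse a previously produced dataset artifact instead of regenerating it.
    dataset = importer(artifact_uri=dataset_uri, artifact_class=Dataset)
    train(dataset=dataset.outputs["artifact"])
```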
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7301/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7339
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7339/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7339/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7339/events
https://github.com/kubeflow/pipelines/issues/7339
1,144,880,139
I_kwDOB-71UM5EPXwL
7,339
Component Visualizations are not shown in Run Output
{ "login": "jayaswalayush", "id": 59866888, "node_id": "MDQ6VXNlcjU5ODY2ODg4", "avatar_url": "https://avatars.githubusercontent.com/u/59866888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jayaswalayush", "html_url": "https://github.com/jayaswalayush", "followers_url": "https://api.github.com/users/jayaswalayush/followers", "following_url": "https://api.github.com/users/jayaswalayush/following{/other_user}", "gists_url": "https://api.github.com/users/jayaswalayush/gists{/gist_id}", "starred_url": "https://api.github.com/users/jayaswalayush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jayaswalayush/subscriptions", "organizations_url": "https://api.github.com/users/jayaswalayush/orgs", "repos_url": "https://api.github.com/users/jayaswalayush/repos", "events_url": "https://api.github.com/users/jayaswalayush/events{/privacy}", "received_events_url": "https://api.github.com/users/jayaswalayush/received_events", "type": "User", "site_admin": false }
[ { "id": 2186355346, "node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue", "name": "good first issue", "color": "fef2c0", "default": true, "description": "" }, { "id": 2975820904, "node_id": "MDU6TGFiZWwyOTc1ODIwOTA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/v2", "name": "area/v2", "color": "A27925", "default": false, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Hello @jayaswalayush , I have moved this issue to kubeflow pipelines repository. \r\n\r\nIndeed the `Run Output` for showing all visualizations is not implemented yet for V2. This is on our radar and we are planning to start working on it once we have built KFP v2.0.0-alpha.\r\n\r\nCode location: \r\nhttps://github.com/kubeflow/pipelines/blob/master/frontend/src/pages/RunDetails.tsx#L582-L614\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/master/frontend/src/components/CompareTable.tsx", "Hello @zijianjoy , Any idea by when this issue will be resolved ? If we migrate to KFP v2.0.0-alpha then is this feature supported ?", "@jayaswalayush We are aiming to implement this feature in KFP v2.0.0-beta but we don't know exactly when yet. Please stay tuned!", "Hey folks, I stumbled across this issue after having the same problems. However mine is with the 1.8.1 version of kubeflow (standalone) and with the v1 compiler. Were you experiencing the same issues with both the v1 and v2 compilers @jayaswalayush??", "For anyone who wants to use the Run Output feature: currently you can use KFPv1 because the feature is still available as is. \r\n![9TZqJtNyEPPxYPp](https://user-images.githubusercontent.com/37026441/171988236-8d8f2327-5b11-4e5a-930b-c604bdcaf4e0.png)\r\n\r\nFor KFPv2, we are working on providing the critical features so it gets GA first. Because the visualization is already available in side panel right now with KFP 2.0.0-alpha.2, the `Run Output` feature is nice to have but not necessarily blocking you from viewing the metrics. So we might have to attend to critical features first before attending to Run Output.\r\n", "@zijianjoy : Sure thanks for the update but we would like to wait for this feature in KFP 2.0. Hope this feature is added soon", "Hi, I have the same issue as @nkosteski - after upgrading from Kubeflow Pipelines 1.4 to 1.8.5 I cannot see the Visualizations from the separate steps in \"Run Output\" view - even though they are properly rendered in step outputs Visualization tabs. Pipeline is V1. Interesting phenomenon is, that \"Visualization\" generated by the last pipeline step is rendered properly - any previous visualizations are ommited. In 1.4 this feature was working properly so I suspect this is some kind of regression." ]
"2022-02-11T11:45:33"
"2023-01-18T20:13:51"
null
NONE
null
I have uploaded the metrics_visualization pipeline in the Kubeflow Console, and when I invoke a run I am able to see the visualizations, yet they are not shown in the Run output. ![image](https://user-images.githubusercontent.com/59866888/153585818-78748c5d-bfe7-4228-95d5-c79afdd5c13e.png) ![image](https://user-images.githubusercontent.com/59866888/153585857-cda2921e-48b9-4528-b568-e8866fffb993.png) Do we need to make any configuration changes to show the visualizations in the Run Output? I used the code below to compile the pipeline: compiler.Compiler(mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE).compile( pipeline_func=pipeline, package_path=__file__.replace('.py', '.yaml')) As per the documentation: https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/ The Run output tab shows the visualizations for all pipeline steps in the selected run. To open the tab in the Kubeflow Pipelines UI: Click Experiments to see your current pipeline experiments. Click the experiment name of the experiment that you want to view. Click the run name of the run that you want to view. Click the Run output tab.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7339/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7339/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7296
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7296/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7296/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7296/events
https://github.com/kubeflow/pipelines/issues/7296
1,131,931,250
I_kwDOB-71UM5Dd-Zy
7,296
[feature].need support for file list
{ "login": "robscc", "id": 1586561, "node_id": "MDQ6VXNlcjE1ODY1NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1586561?v=4", "gravatar_id": "", "url": "https://api.github.com/users/robscc", "html_url": "https://github.com/robscc", "followers_url": "https://api.github.com/users/robscc/followers", "following_url": "https://api.github.com/users/robscc/following{/other_user}", "gists_url": "https://api.github.com/users/robscc/gists{/gist_id}", "starred_url": "https://api.github.com/users/robscc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/robscc/subscriptions", "organizations_url": "https://api.github.com/users/robscc/orgs", "repos_url": "https://api.github.com/users/robscc/repos", "events_url": "https://api.github.com/users/robscc/events{/privacy}", "received_events_url": "https://api.github.com/users/robscc/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "> i want to merge multi files in one pipeline task,so i need support list[InputPath] such like this\r\n\r\nCan't you use `list[str]`?\r\n", "> > i want to merge multi files in one pipeline task,so i need support list[InputPath] such like this\r\n> \r\n> Can't you use `list[str]`?\r\n\r\nso i put the content to str? that would make pod yaml too large to apply into k8s cluster" ]
"2022-02-11T07:09:09"
"2022-02-21T06:10:00"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? I want to merge multiple files in one pipeline task, so I need support for something like `List[InputPath]`. ### What is the use case or pain point? If I have 50 tasks that output 50 files, I need to write a function with 50 params, which is painful. ### Is there a workaround currently? Currently I actually write a function with 50 params. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
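One reading of the `list[str]` suggestion in the replies above is to pass object-store URIs rather than the file contents themselves; a hedged sketch of that pattern is below. The download step is left as a placeholder comment because the issue does not say which object store holds the files.

```python
from typing import List

from kfp.v2.dsl import Artifact, Output, component


@component(base_image="python:3.8-slim")
def merge_files(file_uris: List[str], merged: Output[Artifact]):
    """Merge many upstream files referenced by URI into a single output artifact."""
    with open(merged.path, "w") as out:
        for uri in file_uris:
            # Placeholder: in practice, download `uri` from the object store here
            # and append its contents instead of just recording the name.
            out.write(f"(contents of {uri})\n")
```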
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7296/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7294
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7294/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7294/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7294/events
https://github.com/kubeflow/pipelines/issues/7294
1,130,231,053
I_kwDOB-71UM5DXfUN
7,294
[frontend] args sort in ui doesn't follow py-func pipeline
{ "login": "iuiu34", "id": 30587996, "node_id": "MDQ6VXNlcjMwNTg3OTk2", "avatar_url": "https://avatars.githubusercontent.com/u/30587996?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iuiu34", "html_url": "https://github.com/iuiu34", "followers_url": "https://api.github.com/users/iuiu34/followers", "following_url": "https://api.github.com/users/iuiu34/following{/other_user}", "gists_url": "https://api.github.com/users/iuiu34/gists{/gist_id}", "starred_url": "https://api.github.com/users/iuiu34/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iuiu34/subscriptions", "organizations_url": "https://api.github.com/users/iuiu34/orgs", "repos_url": "https://api.github.com/users/iuiu34/repos", "events_url": "https://api.github.com/users/iuiu34/events{/privacy}", "received_events_url": "https://api.github.com/users/iuiu34/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @iuiu34 , since it is Vertex Pipelines, you can file Google Support ticket to get more help. Closing it for now as it is not related to kubeflow pipelines." ]
"2022-02-10T14:56:29"
"2022-02-10T23:57:49"
"2022-02-10T23:57:48"
NONE
null
### Environment Google vertex ### Deployment If we define ```py @component def test(a: str, b: str, c: str, d:str) -> str: kwargs = locals() return str(kwargs) @pipeline( name=name ) def pipeline( a: str = 'a', b: str = 'b', c: str = 'c', d: str = 'd'): test_task = test(a, b, c,d) ``` then the UI doesn't display the args in order (a,b,c,d). In fact, the order seems pretty random (d,a,c,b). Is there any reason behind this?
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7294/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7285
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7285/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7285/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7285/events
https://github.com/kubeflow/pipelines/issues/7285
1,128,765,820
I_kwDOB-71UM5DR5l8
7,285
[frontend] Make JSON structured logs more readable
{ "login": "davidxia", "id": 480621, "node_id": "MDQ6VXNlcjQ4MDYyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/480621?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidxia", "html_url": "https://github.com/davidxia", "followers_url": "https://api.github.com/users/davidxia/followers", "following_url": "https://api.github.com/users/davidxia/following{/other_user}", "gists_url": "https://api.github.com/users/davidxia/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidxia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidxia/subscriptions", "organizations_url": "https://api.github.com/users/davidxia/orgs", "repos_url": "https://api.github.com/users/davidxia/repos", "events_url": "https://api.github.com/users/davidxia/events{/privacy}", "received_events_url": "https://api.github.com/users/davidxia/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hello @davidxia , I agree this is a nice feature to add. If you are interested, feel free to contribute. https://github.com/kubeflow/pipelines/blob/master/frontend/src/components/LogViewer.tsx" ]
"2022-02-09T16:31:12"
"2022-02-10T23:43:14"
null
CONTRIBUTOR
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> /area frontend ### What feature would you like to see? Something that makes JSON logs more readable. Perhaps JSON blocks that can be expanded and collapsed. ### What is the use case or pain point? I use Google Cloud Logging with [JSON structured logs](https://cloud.google.com/logging/docs/structured-logging). Large JSON logs in the Kubeflow log viewer UI aren't very human readable. Lines are long. Users have to [scroll horizontally](https://github.com/kubeflow/pipelines/issues/5366) to view them. ![image](https://user-images.githubusercontent.com/480621/153245257-aead21a5-220e-4980-9434-1cfd78b2b83f.png) ### Is there a workaround currently? Not really * Scrolling horizontally and squinting * Make my brain parse JSON better and increase my short term memory * Copy-paste JSON string into JSON prettifier --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a ๐Ÿ‘. We prioritize fulfilling features with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7285/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7285/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7283
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7283/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7283/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7283/events
https://github.com/kubeflow/pipelines/issues/7283
1,128,004,353
I_kwDOB-71UM5DO_sB
7,283
An exception is raised when non-ASCII characters are passed as an argument to a pipeline component.
{ "login": "ak-tanak", "id": 99300842, "node_id": "U_kgDOBes16g", "avatar_url": "https://avatars.githubusercontent.com/u/99300842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ak-tanak", "html_url": "https://github.com/ak-tanak", "followers_url": "https://api.github.com/users/ak-tanak/followers", "following_url": "https://api.github.com/users/ak-tanak/following{/other_user}", "gists_url": "https://api.github.com/users/ak-tanak/gists{/gist_id}", "starred_url": "https://api.github.com/users/ak-tanak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ak-tanak/subscriptions", "organizations_url": "https://api.github.com/users/ak-tanak/orgs", "repos_url": "https://api.github.com/users/ak-tanak/repos", "events_url": "https://api.github.com/users/ak-tanak/events{/privacy}", "received_events_url": "https://api.github.com/users/ak-tanak/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[ "@ak-tanak Hello,\r\n\r\nWould you like to use the V1 mode first by removing `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` and see whether you are unblocked? V2 compatible is not supported any more. Thank you!", "@zijianjoy I try to run the code by removing `kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE` , but same exception is raised.\r\n\r\n```\r\nApiException: (500)\r\nReason: Internal Server Error\r\nHTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Mon, 14 Feb 2022 00:25:15 GMT', 'content-length': '772', 'x-envoy-upstream-service-time': '119', 'server': 'envoy'})\r\nHTTP response body: {\"error\":\"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-8vkjz to table: Error 1366: Incorrect string value: '\\\\xE3\\\\x81\\\\x82\\\"],...' for column 'WorkflowRuntimeManifest' at row 1\",\"code\":13,\"message\":\"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-8vkjz to table: Error 1366: Incorrect string value: '\\\\xE3\\\\x81\\\\x82\\\"],...' for column 'WorkflowRuntimeManifest' at row 1\",\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"Internal Server Error\",\"error_details\":\"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-8vkjz to table: Error 1366: Incorrect string value: '\\\\xE3\\\\x81\\\\x82\\\"],...' for column 'WorkflowRuntimeManifest' at row 1\"}]}\r\n```", "Can you parse the yaml file that is generated by the sample code? Looks like the generated argo workflow format is invalid. It sounds to me a argo workflow issue: https://github.com/argoproj/argo-workflows/\r\n\r\ncc @chensun ", "Yes.\r\nI try to run the another sample code on notebook to generate the yaml file and run the pipeline.\r\n\r\n```python\r\nimport kfp\r\nimport kfp.dsl as dsl\r\nimport kfp.components as comp\r\n\r\ndef display_str(var: str):\r\n print(var)\r\n\r\ndisplay_str_op = comp.func_to_container_op(display_str, base_image=\"python:3.7\")\r\n\r\n@dsl.pipeline(\r\n name=\"simple-pipeline\"\r\n)\r\ndef simple_pipeline():\r\n out = display_str_op('ใ‚')\r\n\r\nkfp.compiler.Compiler().compile(\r\n pipeline_func=simple_pipeline,\r\n package_path='simple-pipeline.yaml')\r\n\r\nclient = kfp.Client(\"http://ml-pipeline.kubeflow:8888\")\r\nrun_result = client.create_run_from_pipeline_package(\r\n pipeline_file = 'simple-pipeline.yaml', \r\n arguments = {}, \r\n namespace = 'kubeflow-user-example-com'\r\n)\r\n```\r\n\r\nBut I got same issue. Following exception is raised.\r\n\r\n```\r\nApiException: (500)\r\nReason: Internal Server Error\r\nHTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Tue, 15 Feb 2022 01:34:39 GMT', 'content-length': '772', 'x-envoy-upstream-service-time': '73', 'server': 'envoy'})\r\nHTTP response body: {\"error\":\"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-l6wlg to table: Error 1366: Incorrect string value: '\\\\xE3\\\\x81\\\\x82\\\"],...' for column 'WorkflowRuntimeManifest' at row 1\",\"code\":13,\"message\":\"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-l6wlg to table: Error 1366: Incorrect string value: '\\\\xE3\\\\x81\\\\x82\\\"],...' 
for column 'WorkflowRuntimeManifest' at row 1\",\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"Internal Server Error\",\"error_details\":\"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-l6wlg to table: Error 1366: Incorrect string value: '\\\\xE3\\\\x81\\\\x82\\\"],...' for column 'WorkflowRuntimeManifest' at row 1\"}]}\r\n```\r\n\r\nSo I try to run the yaml file on Argo Workflow by Argo CLI.\r\nThe pipeline yaml is completed successfully.\r\n\r\n```\r\n$ sed -i -e 's/serviceAccountName: pipeline-runner//' simple-pipeline.yaml\r\n$ argo submit -n argo --watch ./simple-pipeline.yaml\r\n\r\n$ argo logs -n argo @latest\r\nsimple-pipeline-cgsw8-2339847167: ใ‚\r\n```", "Having the same issue, just want to try some mock data, getting the same error. still no solutions?", "In my case, just replace your string with its utf-16. For example, use '\\u6700' instead of 'ๆœ€'. can anyone help fix this issue based on my research?" ]
"2022-02-09T03:24:18"
"2023-04-23T04:46:40"
null
NONE
null
### What steps did you take Run the following pipeline example code on notebook to log multi-byte string "ใ‚". ```python import kfp from kfp import dsl from kfp.v2.dsl import component @component(base_image="python:3.7") def display_str_op(var: str): print(var) @dsl.pipeline( name="simple-pipeline" ) def simple_pipeline(): display_str_op("ใ‚") client = kfp.Client("http://ml-pipeline.kubeflow:8888") run_result = client.create_run_from_pipeline_func( simple_pipeline, experiment_name = 'test', run_name = 'simple pipeline', arguments = {}, namespace = 'kubeflow-user-example-com', mode = kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE ) ``` ### What happened: Following Exception is raised in the pipeline running. ``` ApiException: (500) Reason: Internal Server Error HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Wed, 05 Jan 2022 02:42:42 GMT', 'content-length': '775', 'x-envoy-upstream-service-time': '83', 'server': 'envoy'}) HTTP response body: {"error":"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-vfzfz to table: Error 1366: Incorrect string value: '\\xEF\\xBC\\xA1\",\"...' for column 'WorkflowRuntimeManifest' at row 1","code":13,"message":"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-vfzfz to table: Error 1366: Incorrect string value: '\\xEF\\xBC\\xA1\",\"...' for column 'WorkflowRuntimeManifest' at row 1","details":[{"@type":"type.googleapis.com/api.Error","error_message":"Internal Server Error","error_details":"Failed to create a new run.: InternalServerError: Failed to store run simple-pipeline-vfzfz to table: Error 1366: Incorrect string value: '\\xE3\\x81\\x82\",\"...' for column 'WorkflowRuntimeManifest' at row 1"}]} ``` ### What did you expect to happen: Result should have been completed successfully. ### Environment: * How do you deploy Kubeflow Pipelines (KFP)? - Install Kubeflow v1.4.0 with https://github.com/kubeflow/manifests#install-with-a-single-command * KFP version: - 1.7.0 * KFP SDK version: - 1.6.3 ### Anything else you would like to add: The version of kubernetes is v1.21.1 ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7283/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7283/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7274
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7274/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7274/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7274/events
https://github.com/kubeflow/pipelines/issues/7274
1,126,899,789
I_kwDOB-71UM5DKyBN
7,274
[backend] Argo Workflows created for ScheduledWorkflows are missing common metadata
{ "login": "jmendesky", "id": 56542576, "node_id": "MDQ6VXNlcjU2NTQyNTc2", "avatar_url": "https://avatars.githubusercontent.com/u/56542576?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmendesky", "html_url": "https://github.com/jmendesky", "followers_url": "https://api.github.com/users/jmendesky/followers", "following_url": "https://api.github.com/users/jmendesky/following{/other_user}", "gists_url": "https://api.github.com/users/jmendesky/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmendesky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmendesky/subscriptions", "organizations_url": "https://api.github.com/users/jmendesky/orgs", "repos_url": "https://api.github.com/users/jmendesky/repos", "events_url": "https://api.github.com/users/jmendesky/events{/privacy}", "received_events_url": "https://api.github.com/users/jmendesky/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "/assign @ji-yaqi ", "Hi @jmendesky, is there any impact of this missing metadata that we should be aware of?", "> Hi @jmendesky, is there any impact of this missing metadata that we should be aware of?\r\n\r\n@ji-yaqi we use this metadata in our automation which reacts to finished pipeline runs. This automation is currently broken.\r\nIn general, I think this inconsistency can lead to more potential problems to more users if they rely on this metadata being present.", "Is there any update on this?" ]
"2022-02-08T07:44:17"
"2022-06-20T07:04:40"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Kustomize * KFP version: 1.3.0 - but the same behaviour is present in current versions * KFP SDK version: 1.8.10 ### Steps to reproduce - Create a Pipeline run - Create a Recurring Run for the same pipeline - Compare `.metadata` of the resulting Argo Workflows Example created as a single Pipeline run: ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: annotations: > pipelines.kubeflow.org/kfp_sdk_version: 1.8.10 > pipelines.kubeflow.org/pipeline_compilation_time: '2022-02-01T15:41:19.456019' > pipelines.kubeflow.org/pipeline_spec: '{"description": "Constructs a Kubeflow pipeline.", "inputs": [{"name": "pipeline-root"}], "name": "training-pipeline"}'} labels: > pipelines.kubeflow.org/kfp_sdk_version: 1.8.10 workflows.argoproj.io/completed: "true" workflows.argoproj.io/phase: Succeeded pipeline/persistedFinalState: "true" pipeline/runid: 96455954-6f96-4933-9ee1-a85cd676b8c6 ... ``` Example created by a recurring run: ```yaml apiVersion: argoproj.io/v1alpha1 kind: Workflow metadata: annotations: {} labels: pipeline/persistedFinalState: "true" pipeline/runid: bd44f525-7576-4ca8-ae82-b0eb050d1ae9 scheduledworkflows.kubeflow.org/isOwnedByScheduledWorkflow: "true" scheduledworkflows.kubeflow.org/scheduledWorkflowName: training-run-configurr8fjn scheduledworkflows.kubeflow.org/workflowEpoch: "1644166800" scheduledworkflows.kubeflow.org/workflowIndex: "2" workflows.argoproj.io/completed: "true" workflows.argoproj.io/phase: Succeeded ... ``` You can see that the workflow for the recurring run has an entirely new set of labels and no annotations. Specifically, the compiled argo workflow contains `pipelines.kubeflow.org/*` fields which get removed for the scheduled run. ### Expected result Both workflows should have the same common metadata - most importantly labels and annotations. We use these fields for automation after a pipeline has finshed. ### Materials and Reference After some investigation I found out that the `ScheduledWorkflow` CRD doesn't contain a metadata field: https://github.com/kubeflow/pipelines/blob/master/backend/src/apiserver/template/argo_template.go#L92 and https://github.com/kubeflow/pipelines/blob/master/backend/src/crd/pkg/apis/scheduledworkflow/v1beta1/types.go#L48 and that the original Workflow's Spec gets copied without its metadata: https://github.com/kubeflow/pipelines/blob/master/backend/src/crd/controller/scheduledworkflow/util/scheduled_workflow.go#L164 --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7274/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7274/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7271
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7271/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7271/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7271/events
https://github.com/kubeflow/pipelines/issues/7271
1,126,398,832
I_kwDOB-71UM5DI3tw
7,271
[backend] ml-pipeline deployment readiness probe failed
{ "login": "yuhuishi-convect", "id": 74702693, "node_id": "MDQ6VXNlcjc0NzAyNjkz", "avatar_url": "https://avatars.githubusercontent.com/u/74702693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuhuishi-convect", "html_url": "https://github.com/yuhuishi-convect", "followers_url": "https://api.github.com/users/yuhuishi-convect/followers", "following_url": "https://api.github.com/users/yuhuishi-convect/following{/other_user}", "gists_url": "https://api.github.com/users/yuhuishi-convect/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuhuishi-convect/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuhuishi-convect/subscriptions", "organizations_url": "https://api.github.com/users/yuhuishi-convect/orgs", "repos_url": "https://api.github.com/users/yuhuishi-convect/repos", "events_url": "https://api.github.com/users/yuhuishi-convect/events{/privacy}", "received_events_url": "https://api.github.com/users/yuhuishi-convect/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "Hello @yuhuishi-convect , can you provide more information about other deployments in your cluster?\r\n\r\nml-pipeline is the last Deployment that can be ready only when other Deployments are running. Possible reason is that your storage client is failing (SQL database, etc.), which caused the ml-pipeline also failing. Can you share more information about the healthiness of your other Deployments in the cluster?", "May I ask which Kubernetes version you are deploying to? Similar post: https://github.com/kubernetes/kubernetes/issues/106111", "> Hello @yuhuishi-convect , can you provide more information about other deployments in your cluster?\r\n> \r\n> ml-pipeline is the last Deployment that can be ready only when other Deployments are running. Possible reason is that your storage client is failing (SQL database, etc.), which caused the ml-pipeline also failing. Can you share more information about the healthiness of your other Deployments in the cluster?\r\n\r\n\r\n\r\n```bash\r\n\r\n$ k get pods -n kubeflow-helm \r\nNAME READY STATUS RESTARTS AGE\r\ncache-deployer-deployment-bb8d6cb65-9hqfb 1/1 Running 0 10m\r\ncache-server-7fffdd889d-zgnc9 1/1 Running 0 10m\r\nmetadata-envoy-7cd8b6db48-nw6w8 1/1 Running 0 10m\r\nmetadata-grpc-deployment-69995cb9dc-lq9c8 1/1 Running 1 10m\r\nmetadata-writer-5986bfb78-v7dwr 1/1 Running 0 10m\r\nminio-5cd667bc76-2965c 1/1 Running 0 10m\r\nml-pipeline-5ffbcfcd95-wjhvn 0/1 Running 5 4m12s\r\nml-pipeline-persistenceagent-84fdcf9cbc-pq2nv 1/1 Running 4 10m\r\nml-pipeline-scheduledworkflow-59d66b54c6-qc957 1/1 Running 0 10m\r\nml-pipeline-ui-58d56bd7cc-mvzcl 1/1 Running 0 10m\r\nml-pipeline-viewer-crd-856f5454d8-hkk65 1/1 Running 0 10m\r\nml-pipeline-visualizationserver-5486886667-c62pr 1/1 Running 0 10m\r\nmysql-85445f56b7-b7fp5 1/1 Running 0 11m\r\nworkflow-controller-7f469d8fcd-c6fzn 1/1 Running 0 10m\r\n\r\n```", "> May I ask which Kubernetes version you are deploying to? Similar post: [kubernetes/kubernetes#106111](https://github.com/kubernetes/kubernetes/issues/106111)\r\n\r\n```bash\r\n$ k version \r\nClient Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.3\", GitCommit:\"c92036820499fedefec0f847e2054d824aea6cd1\", GitTreeState:\"clean\", BuildDate:\"2021-10-27T18:41:28Z\", GoVersion:\"go1.16.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\r\nServer Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.5-eks-bc4871b\", GitCommit:\"5236faf39f1b7a7dabea8df12726f25608131aa9\", GitTreeState:\"clean\", BuildDate:\"2021-10-29T23:32:16Z\", GoVersion:\"go1.16.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\r\n```", "@yuhuishi-convect \r\n\r\nThe KFP backend 1.2 is very old version, it might not work in Kubernetes 1.21. Can you try to deploy KFP backend v1.8.1 instead? https://github.com/kubeflow/pipelines/releases/tag/1.8.1" ]
"2022-02-07T19:21:47"
"2022-05-19T03:12:27"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? AWS EKS <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.2 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: 1.2 <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ### Expected result <!-- What should the correct behavior be? --> ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> <details> <summary> The liveness probe of the `ml-pipeline` deployment failed. </summary> ``` $ k describe -n kubeflow pod ml-pipeline-5f465d4c56-7xcs8 Name: ml-pipeline-5f465d4c56-7xcs8 Namespace: kubeflow Priority: 0 Node: ip-10-0-3-78.us-west-2.compute.internal/10.0.3.78 Start Time: Mon, 07 Feb 2022 11:05:22 -0800 Labels: app=ml-pipeline application-crd-id=kubeflow-pipelines pod-template-hash=5f465d4c56 Annotations: kubectl.kubernetes.io/restartedAt: 2022-02-06T17:31:44-08:00 kubernetes.io/psp: eks.privileged sidecar.istio.io/inject: false Status: Running IP: 10.0.3.52 IPs: IP: 10.0.3.52 Controlled By: ReplicaSet/ml-pipeline-5f465d4c56 Containers: ml-pipeline-api-server: Container ID: docker://6659ead43604634288ebe7987ba5f41e892e06c568645b2883547b3c26cdb167 Image: gcr.io/ml-pipeline/api-server:1.2.0 Image ID: docker-pullable://gcr.io/ml-pipeline/api-server@sha256:6553e9855e6d38eb5a70beeea39a2c37ac85b60f26a5c061b5e5e2adfffd960b Ports: 8888/TCP, 8887/TCP Host Ports: 0/TCP, 0/TCP State: Running Started: Mon, 07 Feb 2022 11:05:23 -0800 Ready: False Restart Count: 0 Liveness: exec [wget -q -S -O - http://localhost:8888/apis/v1beta1/healthz] delay=3s timeout=2s period=5s #success=1 #failure=3 Readiness: exec [wget -q -S -O - http://localhost:8888/apis/v1beta1/healthz] delay=3s timeout=2s period=5s #success=1 #failure=3 Environment: AUTO_UPDATE_PIPELINE_DEFAULT_VERSION: <set to the key 'autoUpdatePipelineDefaultVersion' of config map 'pipeline-install-config-d42hc87dh2'> Optional: false POD_NAMESPACE: kubeflow (v1:metadata.namespace) OBJECTSTORECONFIG_SECURE: false OBJECTSTORECONFIG_BUCKETNAME: <set to the key 'bucketName' of config map 'pipeline-install-config-d42hc87dh2'> Optional: false DBCONFIG_USER: <set to the key 'username' in secret 'mysql-secret-fd5gktm75t'> Optional: false DBCONFIG_PASSWORD: <set to the key 'password' in secret 'mysql-secret-fd5gktm75t'> Optional: false DBCONFIG_DBNAME: <set to the key 'pipelineDb' of config map 'pipeline-install-config-d42hc87dh2'> Optional: false DBCONFIG_HOST: <set to the key 'dbHost' of config map 'pipeline-install-config-d42hc87dh2'> Optional: false DBCONFIG_PORT: <set to the key 'dbPort' of config map 'pipeline-install-config-d42hc87dh2'> Optional: false OBJECTSTORECONFIG_ACCESSKEY: <set to the key 'accesskey' in secret 'mlpipeline-minio-artifact'> Optional: false OBJECTSTORECONFIG_SECRETACCESSKEY: <set to the key 'secretkey' in secret 'mlpipeline-minio-artifact'> Optional: false Mounts: /var/run/secrets/kubernetes.io/serviceaccount from ml-pipeline-token-zvqgd (ro) Conditions: Type Status 
Initialized True Ready False ContainersReady False PodScheduled True Volumes: ml-pipeline-token-zvqgd: Type: Secret (a volume populated by a Secret) SecretName: ml-pipeline-token-zvqgd Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12m default-scheduler Successfully assigned kubeflow/ml-pipeline-5f465d4c56-7xcs8 to ip-10-0-3-78.us-west-2.compute.internal Normal Pulled 12m kubelet Container image "gcr.io/ml-pipeline/api-server:1.2.0" already present on machine Normal Created 12m kubelet Created container ml-pipeline-api-server Normal Started 12m kubelet Started container ml-pipeline-api-server Warning Unhealthy 6s (x2 over 6m12s) kubelet Readiness probe errored: rpc error: code = DeadlineExceeded desc = context deadline exceeded Warning Unhealthy 4s (x2 over 6m10s) kubelet Liveness probe errored: rpc error: code = DeadlineExceeded desc = context deadline exceeded ``` </details> Logs of the pod ``` $ k logs -n kubeflow ml-pipeline-5f465d4c56-7xcs8 I0207 19:05:23.824447 9 client_manager.go:140] Initializing client manager I0207 19:05:23.824841 9 config.go:56] Config DBConfig.ExtraParams not specified, skipping ``` Executing the health check from the pod receives no response ``` k exec -n kubeflow ml-pipeline-5f465d4c56-7xcs8 -- wget -q -S -O - http://localhost:8888/apis/v1beta1/healthz ``` <details> <summary> Deployment yaml of the `ml-pipeline` </summary> ``` $ k get deploy -n kubeflow ml-pipeline -o yaml apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "5" kubectl.kubernetes.io/last-applied-configuration: | 
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"ml-pipeline","application-crd-id":"kubeflow-pipelines"},"name":"ml-pipeline","namespace":"kubeflow"},"spec":{"selector":{"matchLabels":{"app":"ml-pipeline","application-crd-id":"kubeflow-pipelines"}},"template":{"metadata":{"labels":{"app":"ml-pipeline","application-crd-id":"kubeflow-pipelines"}},"spec":{"containers":[{"env":[{"name":"AUTO_UPDATE_PIPELINE_DEFAULT_VERSION","valueFrom":{"configMapKeyRef":{"key":"autoUpdatePipelineDefaultVersion","name":"pipeline-install-config-d42hc87dh2"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"OBJECTSTORECONFIG_SECURE","value":"false"},{"name":"OBJECTSTORECONFIG_BUCKETNAME","valueFrom":{"configMapKeyRef":{"key":"bucketName","name":"pipeline-install-config-d42hc87dh2"}}},{"name":"DBCONFIG_USER","valueFrom":{"secretKeyRef":{"key":"username","name":"mysql-secret-fd5gktm75t"}}},{"name":"DBCONFIG_PASSWORD","valueFrom":{"secretKeyRef":{"key":"password","name":"mysql-secret-fd5gktm75t"}}},{"name":"DBCONFIG_DBNAME","valueFrom":{"configMapKeyRef":{"key":"pipelineDb","name":"pipeline-install-config-d42hc87dh2"}}},{"name":"DBCONFIG_HOST","valueFrom":{"configMapKeyRef":{"key":"dbHost","name":"pipeline-install-config-d42hc87dh2"}}},{"name":"DBCONFIG_PORT","valueFrom":{"configMapKeyRef":{"key":"dbPort","name":"pipeline-install-config-d42hc87dh2"}}},{"name":"OBJECTSTORECONFIG_ACCESSKEY","valueFrom":{"secretKeyRef":{"key":"accesskey","name":"mlpipeline-minio-artifact"}}},{"name":"OBJECTSTORECONFIG_SECRETACCESSKEY","valueFrom":{"secretKeyRef":{"key":"secretkey","name":"mlpipeline-minio-artifact"}}}],"image":"gcr.io/ml-pipeline/api-server:1.2.0","imagePullPolicy":"IfNotPresent","livenessProbe":{"exec":{"command":["wget","-q","-S","-O","-","http://localhost:8888/apis/v1beta1/healthz"]},"initialDelaySeconds":3,"periodSeconds":5,"timeoutSeconds":2},"name":"ml-pipeline-api-server","ports":[{"containerPort":8888,"name":"http"},{"containerPort":8887,"name":"grpc"}],"readinessProbe":{"exec":{"command":["wget","-q","-S","-O","-","http://localhost:8888/apis/v1beta1/healthz"]},"initialDelaySeconds":3,"periodSeconds":5,"timeoutSeconds":2}}],"serviceAccountName":"ml-pipeline"}}}} creationTimestamp: "2021-01-15T22:01:56Z" generation: 15 labels: app: ml-pipeline application-crd-id: kubeflow-pipelines name: ml-pipeline namespace: kubeflow ownerReferences: - apiVersion: app.k8s.io/v1beta1 blockOwnerDeletion: true controller: false kind: Application name: pipeline uid: ea8a9b37-0c16-439e-bc49-3399051aca6e resourceVersion: "532602378" selfLink: /apis/apps/v1/namespaces/kubeflow/deployments/ml-pipeline uid: 908e252d-c7c6-49f2-88e0-dcf568097b14 spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app: ml-pipeline application-crd-id: kubeflow-pipelines strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: kubectl.kubernetes.io/restartedAt: "2022-02-06T17:31:44-08:00" sidecar.istio.io/inject: "false" creationTimestamp: null labels: app: ml-pipeline application-crd-id: kubeflow-pipelines spec: containers: - env: - name: AUTO_UPDATE_PIPELINE_DEFAULT_VERSION valueFrom: configMapKeyRef: key: autoUpdatePipelineDefaultVersion name: pipeline-install-config-d42hc87dh2 - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: OBJECTSTORECONFIG_SECURE value: "false" - name: OBJECTSTORECONFIG_BUCKETNAME valueFrom: 
configMapKeyRef: key: bucketName name: pipeline-install-config-d42hc87dh2 - name: DBCONFIG_USER valueFrom: secretKeyRef: key: username name: mysql-secret-fd5gktm75t - name: DBCONFIG_PASSWORD valueFrom: secretKeyRef: key: password name: mysql-secret-fd5gktm75t - name: DBCONFIG_DBNAME valueFrom: configMapKeyRef: key: pipelineDb name: pipeline-install-config-d42hc87dh2 - name: DBCONFIG_HOST valueFrom: configMapKeyRef: key: dbHost name: pipeline-install-config-d42hc87dh2 - name: DBCONFIG_PORT valueFrom: configMapKeyRef: key: dbPort name: pipeline-install-config-d42hc87dh2 - name: OBJECTSTORECONFIG_ACCESSKEY valueFrom: secretKeyRef: key: accesskey name: mlpipeline-minio-artifact - name: OBJECTSTORECONFIG_SECRETACCESSKEY valueFrom: secretKeyRef: key: secretkey name: mlpipeline-minio-artifact image: gcr.io/ml-pipeline/api-server:1.2.0 imagePullPolicy: IfNotPresent livenessProbe: exec: command: - wget - -q - -S - -O - '-' - http://localhost:8888/apis/v1beta1/healthz failureThreshold: 3 initialDelaySeconds: 3 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 2 name: ml-pipeline-api-server ports: - containerPort: 8888 name: http protocol: TCP - containerPort: 8887 name: grpc protocol: TCP readinessProbe: exec: command: - wget - -q - -S - -O - '-' - http://localhost:8888/apis/v1beta1/healthz failureThreshold: 3 initialDelaySeconds: 3 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 2 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: ml-pipeline serviceAccountName: ml-pipeline terminationGracePeriodSeconds: 30 status: conditions: - lastTransitionTime: "2022-02-07T18:58:32Z" lastUpdateTime: "2022-02-07T18:58:32Z" message: Deployment does not have minimum availability. reason: MinimumReplicasUnavailable status: "False" type: Available - lastTransitionTime: "2022-02-07T19:16:46Z" lastUpdateTime: "2022-02-07T19:16:46Z" message: ReplicaSet "ml-pipeline-5f465d4c56" has timed out progressing. reason: ProgressDeadlineExceeded status: "False" type: Progressing observedGeneration: 15 replicas: 2 unavailableReplicas: 2 updatedReplicas: 1 ``` </details> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7271/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7271/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7270
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7270/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7270/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7270/events
https://github.com/kubeflow/pipelines/issues/7270
1,126,244,456
I_kwDOB-71UM5DISBo
7,270
[bug] No _back_quoted_if_needed(model_name) in bigquery_predict_model_job
{ "login": "andrewmorrison-sky", "id": 84837583, "node_id": "MDQ6VXNlcjg0ODM3NTgz", "avatar_url": "https://avatars.githubusercontent.com/u/84837583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andrewmorrison-sky", "html_url": "https://github.com/andrewmorrison-sky", "followers_url": "https://api.github.com/users/andrewmorrison-sky/followers", "following_url": "https://api.github.com/users/andrewmorrison-sky/following{/other_user}", "gists_url": "https://api.github.com/users/andrewmorrison-sky/gists{/gist_id}", "starred_url": "https://api.github.com/users/andrewmorrison-sky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andrewmorrison-sky/subscriptions", "organizations_url": "https://api.github.com/users/andrewmorrison-sky/orgs", "repos_url": "https://api.github.com/users/andrewmorrison-sky/repos", "events_url": "https://api.github.com/users/andrewmorrison-sky/events{/privacy}", "received_events_url": "https://api.github.com/users/andrewmorrison-sky/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
null
[]
null
[ "cc @IronPan " ]
"2022-02-07T16:50:51"
"2022-02-10T23:56:29"
null
NONE
null
### What steps did you take Run the google_cloud_pipeline_components.experimental.bigquery.BigqueryPredictModelJobOp job in a pipeline. Pass in the parameter model_name = "\`project.dataset.model\`" WITH BACKTICKS, as I did for google_cloud_pipeline_components.experimental.bigquery.BigqueryEvaluateModelJobOp. The pipeline run fails with the error Syntax error: Invalid empty identifier at [1:32]. I can see that this is due to https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/container/experimental/gcp_launcher/bigquery_job_remote_runner.py#L393 where it is written as 'SELECT * FROM ML.PREDICT(MODEL \`%s\`, %s%s)'. Note that there is '\`%s\`', which means that if I pass in model_name with backticks already included there will now be two sets of backticks. Whereas in https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/container/experimental/gcp_launcher/bigquery_job_remote_runner.py#L599 it has 'SELECT * FROM ML.EVALUATE(MODEL %s%s%s)' % ( _back_quoted_if_needed(model_name), input_data_sql, threshold_sql) ### What happened: The pipeline job failed because I included backticks in the model_name parameter. ### What did you expect to happen: That the pipeline would run successfully. ### Environment: kfp 1.8.11 google_cloud_pipeline_components 0.2.2 There just needs to be consistency between the components: either backticks should not be included in both, or the _back_quoted_if_needed(model_name) call should be done in both. <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7270/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7266
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7266/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7266/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7266/events
https://github.com/kubeflow/pipelines/issues/7266
1,124,869,929
I_kwDOB-71UM5DDCcp
7,266
[backend] v1 caching broken in KFP 1.8 release branch
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-02-05T08:24:34"
"2022-02-07T08:19:18"
"2022-02-07T08:19:18"
COLLABORATOR
null
The root cause is that https://github.com/argoproj/argo-workflows/pull/6022 introduced a behavior change: the content of the Argo workflow template is no longer logged under `pod.metadata.annotations`; instead, the template is put in the container env. The change was released in Argo workflow [v3.2.0-rc1](https://github.com/argoproj/argo-workflows/blob/master/CHANGELOG.md#v320-rc1-2021-08-19), and https://github.com/kubeflow/pipelines/pull/6920 upgraded the Argo workflow version used in KFP from `3.1.14` to `3.2.3`. The change broke the v1 caching implementation because it relies on the Argo workflow template being present in `pod.metadata.annotations`.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7266/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7263
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7263/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7263/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7263/events
https://github.com/kubeflow/pipelines/issues/7263
1,124,694,310
I_kwDOB-71UM5DCXkm
7,263
[release] Backend 1.8.0 tracker
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "One more item to cherry pick: https://github.com/kubeflow/pipelines/pull/7227", "Hello Chen, would you mind also picking up this PR? https://github.com/kubeflow/pipelines/pull/7155", "Released: https://github.com/kubeflow/pipelines/releases/tag/1.8.0" ]
"2022-02-04T23:06:34"
"2022-02-19T06:47:53"
"2022-02-19T06:47:53"
COLLABORATOR
null
## Cherry pick PRs Make sure to cherry pick the following PRs to `release-1.8` branch before making the official release: - [x] https://github.com/kubeflow/pipelines/pull/7252 - [x] https://github.com/kubeflow/pipelines/pull/7267 - [x] https://github.com/kubeflow/pipelines/pull/7227 - [x] https://github.com/kubeflow/pipelines/pull/7273 - [x] https://github.com/kubeflow/pipelines/pull/7155 - [x] https://github.com/kubeflow/pipelines/pull/7311 Validate the new release candidate before official release. ## Instruction https://github.com/kubeflow/pipelines/blob/master/RELEASE.md#release-manager-instructions (Note, we need to skip some steps)
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7263/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7261
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7261/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7261/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7261/events
https://github.com/kubeflow/pipelines/issues/7261
1,124,600,900
I_kwDOB-71UM5DCAxE
7,261
Possible to set max concurrent runs on an experiment?
{ "login": "calvinleungyk", "id": 6678871, "node_id": "MDQ6VXNlcjY2Nzg4NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/6678871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calvinleungyk", "html_url": "https://github.com/calvinleungyk", "followers_url": "https://api.github.com/users/calvinleungyk/followers", "following_url": "https://api.github.com/users/calvinleungyk/following{/other_user}", "gists_url": "https://api.github.com/users/calvinleungyk/gists{/gist_id}", "starred_url": "https://api.github.com/users/calvinleungyk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calvinleungyk/subscriptions", "organizations_url": "https://api.github.com/users/calvinleungyk/orgs", "repos_url": "https://api.github.com/users/calvinleungyk/repos", "events_url": "https://api.github.com/users/calvinleungyk/events{/privacy}", "received_events_url": "https://api.github.com/users/calvinleungyk/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false }
[ { "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false } ]
null
[ "/assign @ji-yaqi " ]
"2022-02-04T20:43:39"
"2022-02-10T23:45:16"
null
NONE
null
Hi folks, we have a use case where we have multiple training model pipelines and one evaluation pipeline. We'd like to block on the evaluation pipeline so that there is no race condition when comparing model performances. e.g. consider the case where model A has much better performance than model B, and model B is slightly better than production model. If we run evaluation pipeline for both model A <-> production and then model B <-> production at the same time, it is possible that model B will end up in production in the end. It doesn't seem possible to set max concurrent runs = 1 if it's not a recurring run. Is there a way to set this for experiments and non-recurring runs, or what's the recommendation here? Having the user involve some sort of distributed lock seems way too complicated.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7261/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7257
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7257/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7257/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7257/events
https://github.com/kubeflow/pipelines/issues/7257
1,124,201,238
I_kwDOB-71UM5DAfMW
7,257
[bug] Artifacts and Lineage explorer don't work with KFP SDK V2
{ "login": "AlexandreBrown", "id": 26939775, "node_id": "MDQ6VXNlcjI2OTM5Nzc1", "avatar_url": "https://avatars.githubusercontent.com/u/26939775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AlexandreBrown", "html_url": "https://github.com/AlexandreBrown", "followers_url": "https://api.github.com/users/AlexandreBrown/followers", "following_url": "https://api.github.com/users/AlexandreBrown/following{/other_user}", "gists_url": "https://api.github.com/users/AlexandreBrown/gists{/gist_id}", "starred_url": "https://api.github.com/users/AlexandreBrown/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexandreBrown/subscriptions", "organizations_url": "https://api.github.com/users/AlexandreBrown/orgs", "repos_url": "https://api.github.com/users/AlexandreBrown/repos", "events_url": "https://api.github.com/users/AlexandreBrown/events{/privacy}", "received_events_url": "https://api.github.com/users/AlexandreBrown/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @AlexandreBrown , this is fixed in https://github.com/kubeflow/pipelines/pull/6989, you can try any 1.8.0 release candidate, for example: v1.8.0-rc.2 for the KFP backend.", "Thanks @zijianjoy , could you explain how I can test the 1.8.0-rc.2 KFP backend with Kubeflow 1.4.1 manifest? Thanks", "It depends, for example, if you use standalone KFP, this is the upgrade guide: https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/#upgrading-kubeflow-pipelines.\n\nOr, if you are using Kubeflow on Google Cloud, you can follow the corresponding guidance on Kubeflow.org -> Distribution." ]
"2022-02-04T13:23:54"
"2022-02-11T03:49:54"
"2022-02-10T23:40:05"
NONE
null
### What steps did you take 1. Install Kubeflow 1.4 2. Create a pipeline with multiple artifact types to illustrate the issue better (use Input[T] and Output[T]) 3. Run the pipeline 4. Navigate to the Artifacts page ### What happened: - No pipeline workspace or pipeline name is visible. - All pipeline runs are grouped under one unnamed group - Lineage explorer is not accessible/visible ![image-1.png](https://user-images.githubusercontent.com/26939775/152535918-a1fbb61f-94da-427c-a5b2-5b518e995dfc.png) ### What did you expect to happen: I expect the same result as when using KFP SDK V1: all pipelines appear as groups, all artifacts are listed under their respective pipeline group, and the lineage explorer is accessible. ### Environment: * How do you deploy Kubeflow Pipelines (KFP)? As part of the Kubeflow manifests * KFP version: The one from Kubeflow 1.4 (the backend is 1.7.1, I think) * KFP SDK version: 1.8.6 ### Labels /area frontend /area backend /area sdk <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7257/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7250
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7250/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7250/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7250/events
https://github.com/kubeflow/pipelines/issues/7250
1,122,658,581
I_kwDOB-71UM5C6mkV
7,250
[feature] Use Artifact metadata in a pipeline condition
{ "login": "burtenshaw", "id": 19620375, "node_id": "MDQ6VXNlcjE5NjIwMzc1", "avatar_url": "https://avatars.githubusercontent.com/u/19620375?v=4", "gravatar_id": "", "url": "https://api.github.com/users/burtenshaw", "html_url": "https://github.com/burtenshaw", "followers_url": "https://api.github.com/users/burtenshaw/followers", "following_url": "https://api.github.com/users/burtenshaw/following{/other_user}", "gists_url": "https://api.github.com/users/burtenshaw/gists{/gist_id}", "starred_url": "https://api.github.com/users/burtenshaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/burtenshaw/subscriptions", "organizations_url": "https://api.github.com/users/burtenshaw/orgs", "repos_url": "https://api.github.com/users/burtenshaw/repos", "events_url": "https://api.github.com/users/burtenshaw/events{/privacy}", "received_events_url": "https://api.github.com/users/burtenshaw/received_events", "type": "User", "site_admin": false }
[ { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "The feature request sounds reasonable." ]
"2022-02-03T05:04:12"
"2022-02-18T00:58:36"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> /area components ### What feature would you like to see? <!-- Provide a description of this feature and the user experience. --> Use the value of a metadata field from an Artifact in a `dsl.Condition`. For example: ```python with dsl.Condition( component_op.metadata["n_rows"] < 20, name="check-missing-vectors", ): ``` Or, allow for a component to return both a basic output and an Artifact output. Which would facilitate: ```python with dsl.Condition( component_op.output < 20, name="check-missing-vectors", ): ``` ### What is the use case or pain point? I have been using the `dsl.Condition` to skip components based on the number of entries in a dataset. To do this, I used the Dataset's metadata attribute (`dataset.metadata["n_rows"]`). I then had to use another component to get the value and pass it to `dsl.Condition` and compare integers. This extra component seems unnecessary, and acting on metadata could make the condition logic really powerful. <!-- It helps us understand the benefit of this feature for your use case. --> ### Is there a workaround currently? As mentioned, I currently use a separate lightweight component that retrieves the metadata value. ```python @component def check_df_length_op(null_vectors: Input[Dataset]) -> int: return null_vectors.metadata["n_rows"] ``` <!-- Without this feature, how do you accomplish your task today? --> --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a ๐Ÿ‘. We prioritize fulfilling features with the most ๐Ÿ‘.
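A minimal, self-contained sketch of how the workaround described above is wired into a pipeline today, assuming hypothetical component names (`make_null_vectors_op`, `handle_missing_vectors_op`) and the `n_rows` metadata key from the example; the extra lightweight component exists only to surface the metadata value as a plain output that `dsl.Condition` can compare:

```python
from kfp.v2 import dsl
from kfp.v2.dsl import Dataset, Input, Output, component


@component
def make_null_vectors_op(null_vectors: Output[Dataset]):
    # Hypothetical upstream step: writes the dataset and records its size in metadata.
    rows = ["vec_1", "vec_2", "vec_3"]
    with open(null_vectors.path, "w") as f:
        f.write("\n".join(rows))
    null_vectors.metadata["n_rows"] = len(rows)


@component
def check_df_length_op(null_vectors: Input[Dataset]) -> int:
    # The extra lightweight step whose only purpose is to turn the metadata
    # value into a plain output usable in dsl.Condition.
    return null_vectors.metadata["n_rows"]


@component
def handle_missing_vectors_op():
    # Hypothetical step that should only run for small datasets.
    print("dataset is small enough, running the guarded step")


@dsl.pipeline(name="metadata-condition-workaround")
def pipeline():
    vectors_task = make_null_vectors_op()
    check_task = check_df_length_op(null_vectors=vectors_task.outputs["null_vectors"])
    with dsl.Condition(check_task.output < 20, name="check-missing-vectors"):
        handle_missing_vectors_op()
```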
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7250/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7250/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7242
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7242/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7242/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7242/events
https://github.com/kubeflow/pipelines/issues/7242
1,121,016,761
I_kwDOB-71UM5C0Vu5
7,242
[feature] Deduplicate component templates
{ "login": "jli", "id": 133466, "node_id": "MDQ6VXNlcjEzMzQ2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/133466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jli", "html_url": "https://github.com/jli", "followers_url": "https://api.github.com/users/jli/followers", "following_url": "https://api.github.com/users/jli/following{/other_user}", "gists_url": "https://api.github.com/users/jli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jli/subscriptions", "organizations_url": "https://api.github.com/users/jli/orgs", "repos_url": "https://api.github.com/users/jli/repos", "events_url": "https://api.github.com/users/jli/events{/privacy}", "received_events_url": "https://api.github.com/users/jli/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false }
[ { "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false } ]
null
[ "hm, actually, I think this is a dupe of this older issue which got auto-closed https://github.com/kubeflow/pipelines/issues/4272" ]
"2022-02-01T17:47:06"
"2022-02-05T01:51:42"
null
CONTRIBUTOR
null
### Feature Area /area sdk ### What feature would you like to see? I would like the KFP compiler to reuse templates when the same component is called multiple times. In other words, each component would appear only once as a template in the `spec.templates` section of the pipeline YAML spec and be called with different input parameters each time it is used. Currently, each time a component is used in a pipeline, the compiler emits a new copy of the component definition in the pipeline YAML spec. I tried decorating my component functions with `@kfp.dsl.graph_component`, and refactored things so that each component function only takes `PipelineParam` inputs, but it didn't work: I still get multiple copies of each component. Perhaps I'm just using `graph_component` incorrectly? ### What is the use case or pain point? An important use case for my team is to run a single pipeline that trains/scores/QCs multiple models and then runs a reporting step comparing the results from each model. The size of the pipeline YAML spec scales linearly with the number of models we include in the pipeline. This is preventing us from comparing all the models we would like to. (Related: #4170) ### Is there a workaround currently? Not that I know of. This is blocking us from running pipelines as big as we'd like. We are already using the workaround suggested here to shrink the YAML output: https://github.com/kubeflow/pipelines/issues/4170#issuecomment-655764762 (I suppose we could try to deduplicate the generated pipeline YAML ourselves... but that seems quite complex.) --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
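To make the pain point concrete, here is a small sketch of the fan-out pattern described above; the component, model names, and output path are hypothetical stand-ins. Each call in the loop currently adds another full copy of the component template to `spec.templates`, so the compiled YAML grows with the number of models:

```python
from kfp import compiler, dsl
from kfp.components import create_component_from_func


def train_and_score(model_name: str) -> str:
    # Hypothetical stand-in for a large training/scoring/QC component.
    return f"results for {model_name}"


train_and_score_op = create_component_from_func(train_and_score)


@dsl.pipeline(name="compare-models")
def compare_models_pipeline():
    # Every call below currently emits another copy of the train_and_score
    # template in the compiled spec, instead of reusing a single template.
    for model_name in ["model_a", "model_b", "model_c"]:
        train_and_score_op(model_name=model_name)


compiler.Compiler().compile(compare_models_pipeline, "compare_models.yaml")
```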
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7242/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7242/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7241
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7241/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7241/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7241/events
https://github.com/kubeflow/pipelines/issues/7241
1,120,442,203
I_kwDOB-71UM5CyJdb
7,241
[feature] Allow specifying inputs that are ignored by cache
{ "login": "meowcakes", "id": 3435150, "node_id": "MDQ6VXNlcjM0MzUxNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3435150?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meowcakes", "html_url": "https://github.com/meowcakes", "followers_url": "https://api.github.com/users/meowcakes/followers", "following_url": "https://api.github.com/users/meowcakes/following{/other_user}", "gists_url": "https://api.github.com/users/meowcakes/gists{/gist_id}", "starred_url": "https://api.github.com/users/meowcakes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meowcakes/subscriptions", "organizations_url": "https://api.github.com/users/meowcakes/orgs", "repos_url": "https://api.github.com/users/meowcakes/repos", "events_url": "https://api.github.com/users/meowcakes/events{/privacy}", "received_events_url": "https://api.github.com/users/meowcakes/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Duplicate issue" ]
"2022-02-01T09:48:19"
"2022-02-01T09:50:55"
"2022-02-01T09:50:55"
CONTRIBUTOR
null
It would be nice if we could control which inputs are taken into account when deciding whether to use cached results, e.g. by allowing a boolean flag in an input's definition that tells Kubeflow to ignore that input when checking the cache, so that the cached output is still used even if the input's value changes between runs. The use case is that components may have inputs that do not semantically change the resulting outputs, so it is better to take the output from the cache rather than run the component again. For example, if your component runs a Beam job, you may not want to rerun the component when only the Beam pipeline options change (this is actually how TFX's caching works). As another example, if your component runs an EMR job and tags it with the workflow ID for traceability, it will rerun every time because the workflow ID is unique on every run, even though the ID does not change the output at all. Even more granular control over caching would also be nice, such as still using the cache when only the image changes.
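For reference, a minimal sketch of the only cache control available per task today, which is all-or-nothing rather than per-input; the component is hypothetical and the Argo `{{workflow.uid}}` placeholder stands in for the always-unique workflow ID mentioned above:

```python
from kfp import dsl
from kfp.components import create_component_from_func


def run_tagged_job(workflow_id: str) -> str:
    # Hypothetical step that tags an external job with the workflow ID for traceability.
    return f"job tagged with {workflow_id}"


run_tagged_job_op = create_component_from_func(run_tagged_job)


@dsl.pipeline(name="cache-granularity-example")
def pipeline():
    task = run_tagged_job_op(workflow_id="{{workflow.uid}}")
    # The existing knob is per-task cache staleness: "P0D" forces the step to
    # re-run every time. There is no way today to say "reuse the cache but
    # ignore only the workflow_id input", which is what this issue asks for.
    task.execution_options.caching_strategy.max_cache_staleness = "P0D"
```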
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7241/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7238
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7238/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7238/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7238/events
https://github.com/kubeflow/pipelines/issues/7238
1,120,080,692
I_kwDOB-71UM5CwxM0
7,238
[SDK] KFP SDK V2 Namespace Plan
{ "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Fixed by https://github.com/kubeflow/pipelines/pull/7376 and https://github.com/kubeflow/pipelines/pull/7291" ]
"2022-02-01T00:21:54"
"2022-03-05T00:38:00"
"2022-03-05T00:38:00"
CONTRIBUTOR
null
## Background Previously on **KFP SDK v1.x**, we had two modes of import, signaling two different compilers with different compilation results. * <code>import kfp</code> imports KFP v1 and v2-compatible mode compiler * <code>import kfp.v2</code> imports KFP v2 mode compiler Two modes of import are confusing to the users. Although users are in SDK v1.x, they have <code>kfp</code> and <code>kfp.v2</code> to choose from, where normally the package name (kfp) indicates the default versioning. As KFP SDK evolves to v2.0 version, we want to set a standard way of importing KFP namespace, for easier usage and also cleaner code for maintenance. ## Proposed Namespace for KFP SDK v2 Since the package name indicates the default versioning, we propose in **KFP v2** and possible future iterations, we will always use `kfp` as the official namespace. * <code>import kfp</code> imports KFP v2 mode compiler For v1-related features, users need to go back to SDK v1.x and use <code>import kfp</code>. In this way, users don't need to change imports when they are experiencing major version updates. For existing GA customers, we will provide an alias of `kfp.v2` to `kfp`, and throw a warning if user uses kfp.v2. Full set of warnings will be documented in the migration guide. ### Note: 1. v2-compatible mode was launched as an experimental feature, so it won't appear in v2 SDK and follows cloud deprecation policy. 2. For KFP SDK v1.x features, we will maintain it for 1 year after GA of SDK v2.0. After that, we will be providing security patches and major bug fixes for any bug that results in user code breaking until usage on GCP (Vertex and KFP) is 0. ## User Impact on Namespace 1. KFP V1 users migration (NO changes) <table> <tr> <td> Current </td> </tr> <tr> <td style="background-color: #f0f0f0"> ``` from kfp import dsl from kfp import compiler @dsl.pipeline(name='my-pipeline') def pipeline(): ... compiler.Compiler().compile(pipeline, 'path') ``` </td> </tr> </table> 2. Vertex Pipeline users (primarily removing v2 from the imports). Note: there are other changes suggested for Vertex Pipeline users which will be documented in the migration doc. <table> <tr> <td>Current </td> <td>Anticipated change </td> </tr> <tr> <td style="background-color: #f0f0f0"> ``` from kfp.v2 import dsl from kfp.v2 import compiler @dsl.pipeline(name='my-pipeline') def pipeline(): ... compiler.Compiler().compile(pipeline, 'path') ``` </td> <td style="background-color: #f0f0f0"> ``` from kfp import dsl from kfp import compiler @dsl.pipeline(name='my-pipeline') def pipeline(): ... compiler.Compiler().compile(pipeline, 'path') ``` </td> </tr> </table> 3. Component authors: libraries with dependencies on KFP (e.g. GCPC) <table> <tr> <td>Current </td> <td>Anticipated change </td> </tr> <tr> <td style="background-color: #f0f0f0"> ``` from kfp.v2 import dsl @dsl.component( packages_to_install=[ 'google-cloud-aiplatform', 'google-cloud-pipeline-components', 'protobuf' ], base_image='python:3.7') def GetTrialsOp(gcp_resources: str) -> list: โ€ฆ ``` </td> <td style="background-color: #f0f0f0"> ``` from kfp import dsl @dsl.component( packages_to_install=[ 'google-cloud-aiplatform', 'google-cloud-pipeline-components', 'protobuf' ], base_image='python:3.7') def GetTrialsOp(gcp_resources: str) -> list: โ€ฆ ``` </td> </tr> </table>
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7238/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7238/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7234
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7234/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7234/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7234/events
https://github.com/kubeflow/pipelines/issues/7234
1,119,369,627
I_kwDOB-71UM5CuDmb
7,234
[components] custom training job from component bug
{ "login": "wardVD", "id": 2136274, "node_id": "MDQ6VXNlcjIxMzYyNzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2136274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wardVD", "html_url": "https://github.com/wardVD", "followers_url": "https://api.github.com/users/wardVD/followers", "following_url": "https://api.github.com/users/wardVD/following{/other_user}", "gists_url": "https://api.github.com/users/wardVD/gists{/gist_id}", "starred_url": "https://api.github.com/users/wardVD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wardVD/subscriptions", "organizations_url": "https://api.github.com/users/wardVD/orgs", "repos_url": "https://api.github.com/users/wardVD/repos", "events_url": "https://api.github.com/users/wardVD/events{/privacy}", "received_events_url": "https://api.github.com/users/wardVD/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false }
[ { "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false } ]
null
[ "Having same issue it is not able to print the log_metric also", "Reproduced the issue and looks like the `executor_output.json` is not generated which is causing the problem. ", "This affects also the output parameters", "This is now fixed, and the output is emitted as expected. Please reopen if this still happens.", "@IronPan still not able to use log_metric(), any suggestion?\r\n" ]
"2022-01-31T12:30:45"
"2023-01-05T10:42:17"
"2022-04-18T17:25:03"
NONE
null
### Environment * KFP version: kfp==1.8.10 * google_cloud_pipeline_components SDK version: google-cloud-pipeline-components==0.2.2 ### Steps to reproduce 1) create a python function based component, e.g.: ``` @component def process( output_dataset_one: Output[Dataset], ): output_dataset_one.metadata["hello"] = "there" ``` 2) import custom job utils in pipeline code: ``` from google_cloud_pipeline_components.experimental.custom_job import utils as custom_job_utils ``` Transform component into training job: ``` process_op = custom_job_utils.create_custom_training_job_op_from_component( process, display_name = 'test-component', machine_type = 'n1-standard-4', boot_disk_type = 'pd-ssd', boot_disk_size_gb = "150" ) ``` 3) Run pipeline: ``` @dsl.pipeline( name='test', description="Pipeline to test custom training job component" ) def test_pipeline(): _ = ( process_op( project=<PROJECT_ID>, location=<REGION>, ) .set_display_name('test') ) ``` ### Expected result The artifact`output_dataset_one` will have a property/metadata `hello` if component is run normally, but when the component is transformed into a custom training job, the resulting artifact will not contain any properties/metadata. This seems like a bug. ### Materials and Reference https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/experimental/custom_job/utils.py --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7234/reactions", "total_count": 15, "+1": 15, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7234/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7230
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7230/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7230/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7230/events
https://github.com/kubeflow/pipelines/issues/7230
1,117,674,955
I_kwDOB-71UM5Cnl3L
7,230
No graph to show in ML Pipeline UI kubeflow 1.3 on Openshift 4.8
{ "login": "rganeshsharma", "id": 44496498, "node_id": "MDQ6VXNlcjQ0NDk2NDk4", "avatar_url": "https://avatars.githubusercontent.com/u/44496498?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rganeshsharma", "html_url": "https://github.com/rganeshsharma", "followers_url": "https://api.github.com/users/rganeshsharma/followers", "following_url": "https://api.github.com/users/rganeshsharma/following{/other_user}", "gists_url": "https://api.github.com/users/rganeshsharma/gists{/gist_id}", "starred_url": "https://api.github.com/users/rganeshsharma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rganeshsharma/subscriptions", "organizations_url": "https://api.github.com/users/rganeshsharma/orgs", "repos_url": "https://api.github.com/users/rganeshsharma/repos", "events_url": "https://api.github.com/users/rganeshsharma/events{/privacy}", "received_events_url": "https://api.github.com/users/rganeshsharma/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "animeshsingh", "id": 3631320, "node_id": "MDQ6VXNlcjM2MzEzMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/3631320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/animeshsingh", "html_url": "https://github.com/animeshsingh", "followers_url": "https://api.github.com/users/animeshsingh/followers", "following_url": "https://api.github.com/users/animeshsingh/following{/other_user}", "gists_url": "https://api.github.com/users/animeshsingh/gists{/gist_id}", "starred_url": "https://api.github.com/users/animeshsingh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/animeshsingh/subscriptions", "organizations_url": "https://api.github.com/users/animeshsingh/orgs", "repos_url": "https://api.github.com/users/animeshsingh/repos", "events_url": "https://api.github.com/users/animeshsingh/events{/privacy}", "received_events_url": "https://api.github.com/users/animeshsingh/received_events", "type": "User", "site_admin": false }
[ { "login": "animeshsingh", "id": 3631320, "node_id": "MDQ6VXNlcjM2MzEzMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/3631320?v=4", "gravatar_id": "", "url": "https://api.github.com/users/animeshsingh", "html_url": "https://github.com/animeshsingh", "followers_url": "https://api.github.com/users/animeshsingh/followers", "following_url": "https://api.github.com/users/animeshsingh/following{/other_user}", "gists_url": "https://api.github.com/users/animeshsingh/gists{/gist_id}", "starred_url": "https://api.github.com/users/animeshsingh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/animeshsingh/subscriptions", "organizations_url": "https://api.github.com/users/animeshsingh/orgs", "repos_url": "https://api.github.com/users/animeshsingh/repos", "events_url": "https://api.github.com/users/animeshsingh/events{/privacy}", "received_events_url": "https://api.github.com/users/animeshsingh/received_events", "type": "User", "site_admin": false }, { "login": "Tomcli", "id": 10889249, "node_id": "MDQ6VXNlcjEwODg5MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/10889249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tomcli", "html_url": "https://github.com/Tomcli", "followers_url": "https://api.github.com/users/Tomcli/followers", "following_url": "https://api.github.com/users/Tomcli/following{/other_user}", "gists_url": "https://api.github.com/users/Tomcli/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tomcli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tomcli/subscriptions", "organizations_url": "https://api.github.com/users/Tomcli/orgs", "repos_url": "https://api.github.com/users/Tomcli/repos", "events_url": "https://api.github.com/users/Tomcli/events{/privacy}", "received_events_url": "https://api.github.com/users/Tomcli/received_events", "type": "User", "site_admin": false } ]
null
[ "cc @animeshsingh @Tomcli \r\n", "For initial debugging: Can you open Chrome developer tool on your browser to see if there is any error message?", "Hi @rganeshsharma, which Openshift kfdef did you use? I remember Kubeflow 1.3 for opendatahub should be single user because the Istio sidecar for multi-user required pod privilege on openshift. So there shouldn't be any `ISTIO_MUTUAL` in the istio virtual services. ", "It looks like the opendatahub website didn't point to any kubeflow kfdef. @rganeshsharma Did you follow the Openshift instructions on Kubeflow to get the opendatahub kfdef? https://www.kubeflow.org/docs/distributions/openshift/install-kubeflow/#installing-kubeflow\r\n", "cc @nakfour " ]
"2022-01-28T17:47:44"
"2022-02-04T00:35:17"
null
NONE
null
### What steps did you take <!-- A clear and concise description of what the bug is.--> No graph to show in ML Pipeline UI, Kubeflow 1.3 on Openshift 4.8. We used https://opendatahub.io/docs/kubeflow/installation.html to deploy Kubeflow 1.3 ### What happened: We are unable to get the graph view of a pipeline; we only see "No graph to show". I have edited the destination rules for ml-pipeline, ml-pipeline-ui, and ml-pipeline-visualizationserver from ISTIO_MUTUAL to DISABLE ### What did you expect to happen: We expect to see a graph for the pipelines we create ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? https://opendatahub.io/docs/kubeflow/installation.html <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: Kubeflow Pipelines Tekton | v0.8.0 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> I have restarted all the pods; the controller-manager pod also keeps restarting, and its logs are too verbose to examine. Please advise on what can be done. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7230/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7230/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7225
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7225/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7225/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7225/events
https://github.com/kubeflow/pipelines/issues/7225
1,115,867,871
I_kwDOB-71UM5Cgsrf
7,225
[feature] Should be able to set _DEFAULT_LAUNCHER_IMAGE using environment variable in notebook pod
{ "login": "typhoonzero", "id": 13348433, "node_id": "MDQ6VXNlcjEzMzQ4NDMz", "avatar_url": "https://avatars.githubusercontent.com/u/13348433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/typhoonzero", "html_url": "https://github.com/typhoonzero", "followers_url": "https://api.github.com/users/typhoonzero/followers", "following_url": "https://api.github.com/users/typhoonzero/following{/other_user}", "gists_url": "https://api.github.com/users/typhoonzero/gists{/gist_id}", "starred_url": "https://api.github.com/users/typhoonzero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/typhoonzero/subscriptions", "organizations_url": "https://api.github.com/users/typhoonzero/orgs", "repos_url": "https://api.github.com/users/typhoonzero/repos", "events_url": "https://api.github.com/users/typhoonzero/events{/privacy}", "received_events_url": "https://api.github.com/users/typhoonzero/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @typhoonzero, v2 compatible mode is being deprecated in the next SDK release: https://github.com/kubeflow/pipelines/issues/6829\r\n\r\nWe would have a similar launcher/driver image for v2. And we plan to version it the same way as we version the rest KFP backend images.\r\nThat being said, I'm still interested in why would you want to use use private registry for the launcher image. Are you making custom images? Can you share some details to help us understand your use case? Thanks!", "Thanks for the reply @chensun , my case is simple, I can't access `gcr.io` in China, so I'm hoping to host the images in a private registry.", "any update on this?" ]
"2022-01-27T07:30:08"
"2022-10-05T11:31:30"
null
NONE
null
### Feature Area <!-- /area sdk --> ### What feature would you like to see? Currently the default launcher image is set by a constant (`sdk/python/kfp/compiler/v2_compat.py`: `_DEFAULT_LAUNCHER_IMAGE = "gcr.io/ml-pipeline/kfp-launcher:1.8.7"`). However, if we want to use a private registry when launching a pipeline from a Kubeflow notebook, we need to be able to set this with an environment variable, so that it can be configured once in the Kubeflow notebook YAML file. ### Is there a workaround currently? I can set it directly in the function call here: https://kubeflow-pipelines.readthedocs.io/en/latest/source/kfp.client.html#kfp.Client.create_run_from_pipeline_func, but I'd like to configure a default for all users. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
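A sketch of the per-call workaround mentioned above, combined with the environment-variable default being requested. The variable name `KFP_LAUNCHER_IMAGE`, the private-registry URL, and the demo component are hypothetical; the `launcher_image` argument is the per-call setting exposed by the `create_run_from_pipeline_func` API linked above for v2-compatible mode:

```python
import os

import kfp
from kfp import dsl
from kfp.components import create_component_from_func


def say_hello() -> str:
    return "hello"


say_hello_op = create_component_from_func(say_hello)


@dsl.pipeline(name="launcher-image-default-demo")
def demo_pipeline():
    say_hello_op()


# Hypothetical convention: the notebook pod spec sets KFP_LAUNCHER_IMAGE, so every
# user picks up the private-registry mirror without touching their pipeline code.
launcher_image = os.environ.get(
    "KFP_LAUNCHER_IMAGE",
    "registry.example.com/ml-pipeline/kfp-launcher:1.8.7",
)

client = kfp.Client()
client.create_run_from_pipeline_func(
    demo_pipeline,
    arguments={},
    mode=dsl.PipelineExecutionMode.V2_COMPATIBLE,
    # Per-call override; a default for all users would need the requested env var support.
    launcher_image=launcher_image,
)
```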
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7225/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7225/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7224
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7224/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7224/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7224/events
https://github.com/kubeflow/pipelines/issues/7224
1,115,788,949
I_kwDOB-71UM5CgZaV
7,224
[bug] Unable to obtain the output of a graph component for downstream components
{ "login": "akshayc11", "id": 702424, "node_id": "MDQ6VXNlcjcwMjQyNA==", "avatar_url": "https://avatars.githubusercontent.com/u/702424?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akshayc11", "html_url": "https://github.com/akshayc11", "followers_url": "https://api.github.com/users/akshayc11/followers", "following_url": "https://api.github.com/users/akshayc11/following{/other_user}", "gists_url": "https://api.github.com/users/akshayc11/gists{/gist_id}", "starred_url": "https://api.github.com/users/akshayc11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akshayc11/subscriptions", "organizations_url": "https://api.github.com/users/akshayc11/orgs", "repos_url": "https://api.github.com/users/akshayc11/repos", "events_url": "https://api.github.com/users/akshayc11/events{/privacy}", "received_events_url": "https://api.github.com/users/akshayc11/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1126834402, "node_id": "MDU6TGFiZWwxMTI2ODM0NDAy", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components", "name": "area/components", "color": "d2b48c", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false }
[ { "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false } ]
null
[ "It looks like maybe https://github.com/kubeflow/pipelines/blob/6965dbac2faee0411fc0ca9565fd9a9d7ef8e2bf/sdk/python/kfp/dsl/_component.py#L148 is missing assigning graph_ops_group.outputs when it assigns inputs and arguments. I would love to contribute a fix but it's not clear to me where to add a unit test for this.", "What's the work around here?", "One could write to an external system (e.g. GCS, S3) and have downstream components read from an agreed-upon path. Feels pretty side-effecty though." ]
"2022-01-27T05:24:27"
"2022-04-21T18:29:55"
null
NONE
null
### What steps did you take In a folder, created the following: c1.yaml ```yaml name: Dummy Component A description: Manually Created Component Spec inputs: - name: Arg Input type: String outputs: - name: Arg Output type: String implementation: container: image: dummy-image:1.0 command: [ "dummy", "command", "A" ] args: [ "--arg-input", {inputValue: Arg Input}, "--arg-output", {outputPath: Arg Output} ] ``` c2.yaml ```yaml name: Dummy Component B description: Manually Created Component Spec inputs: - name: Arg Input type: String outputs: - name: Arg Output type: String implementation: container: image: dummy-image:1.0 command: [ "dummy", "command", "B" ] args: [ "--arg-input", {inputValue: Arg Input}, "--arg-output", {outputPath: Arg Output} ] ``` build_dummy_pipeline.py ```python import kfp.components import kfp import kfp.compiler c1_func = kfp.components.load_component("c1.yaml") c2_func = kfp.components.load_component("c2.yaml") def generate_graph_component_func(): @kfp.dsl.graph_component def c1_graph_component(input_arg): c1_op_a = c1_func(arg_input=input_arg) c1_op_b = c1_func(arg_input=c1_op_a.output) return {"Graph Output": c1_op_b.output} return c1_graph_component c1_graph_comp_func = generate_graph_component_func() @kfp.dsl.pipeline( name="Dummy Pipeline", description="Dummy Pipeline" ) def dummy_pipeline(input_arg: str): graph_comp_op = c1_graph_comp_func(input_arg) c2_op_a = c2_func(arg_input=graph_comp_op.outputs["Graph Output"]) return c2_op_a.output kfp.compiler.Compiler().compile(dummy_pipeline, "dummy_pipeline.yaml") ``` ```Command run: python3 build_dummy_pipeline.py ``` ### What happened: ```bash Traceback (most recent call last): File "build_dummy_pipeline.py", line 28, in <module> kfp.compiler.Compiler().compile(dummy_pipeline, "dummy_pipeline.yaml") File "/Users/achandrashekaran/PycharmProjects/kf-model-trainer/.venv/lib/python3.7/site-packages/kfp/compiler/compiler.py", line 1064, in compile package_path=package_path) File "/Users/achandrashekaran/PycharmProjects/kf-model-trainer/.venv/lib/python3.7/site-packages/kfp/compiler/compiler.py", line 1119, in _create_and_write_workflow pipeline_conf) File "/Users/achandrashekaran/PycharmProjects/kf-model-trainer/.venv/lib/python3.7/site-packages/kfp/compiler/compiler.py", line 900, in _create_workflow pipeline_func(*args_list, **kwargs_dict) File "build_dummy_pipeline.py", line 25, in dummy_pipeline c2_op_a = c2_func(arg_input=graph_comp_op.outputs["Graph Output"]) KeyError: 'Graph Output' ``` ### What did you expect to happen: On running the above python script, I expected the output of the graph component to be passed to c2_op_a and the pipeline to compile successfully. ### Environment: <!-- Please fill in those that seem relevant. --> * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.7.1 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: -NA- <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area testing --> <!-- /area samples --> /area components --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7224/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7224/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7220
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7220/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7220/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7220/events
https://github.com/kubeflow/pipelines/issues/7220
1,115,060,649
I_kwDOB-71UM5Cdnmp
7,220
[SDK] AttributeError: 'ComponentStore' object has no attribute 'uri_search_template'
{ "login": "DarioBernardo", "id": 9448151, "node_id": "MDQ6VXNlcjk0NDgxNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/9448151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DarioBernardo", "html_url": "https://github.com/DarioBernardo", "followers_url": "https://api.github.com/users/DarioBernardo/followers", "following_url": "https://api.github.com/users/DarioBernardo/following{/other_user}", "gists_url": "https://api.github.com/users/DarioBernardo/gists{/gist_id}", "starred_url": "https://api.github.com/users/DarioBernardo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DarioBernardo/subscriptions", "organizations_url": "https://api.github.com/users/DarioBernardo/orgs", "repos_url": "https://api.github.com/users/DarioBernardo/repos", "events_url": "https://api.github.com/users/DarioBernardo/events{/privacy}", "received_events_url": "https://api.github.com/users/DarioBernardo/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false }
[ { "login": "ji-yaqi", "id": 17338099, "node_id": "MDQ6VXNlcjE3MzM4MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/17338099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ji-yaqi", "html_url": "https://github.com/ji-yaqi", "followers_url": "https://api.github.com/users/ji-yaqi/followers", "following_url": "https://api.github.com/users/ji-yaqi/following{/other_user}", "gists_url": "https://api.github.com/users/ji-yaqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/ji-yaqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ji-yaqi/subscriptions", "organizations_url": "https://api.github.com/users/ji-yaqi/orgs", "repos_url": "https://api.github.com/users/ji-yaqi/repos", "events_url": "https://api.github.com/users/ji-yaqi/events{/privacy}", "received_events_url": "https://api.github.com/users/ji-yaqi/received_events", "type": "User", "site_admin": false } ]
null
[ "I solved the issue. It looks like there is a bug if you don't set up `uri_search_template` when creating the ComponentStore. \r\nI solved it by providing it as parameter, it doesn't really matter what you pass to the constructor really, but it is required because in the code there is the following\r\n```\r\nif self.url_search_prefixes: \r\n```\r\nbut if you don't pass something that attribute doesn't get created in the constructor.\r\nI fixed it like this:\r\n```\r\ncomponent_store = kfp.components.ComponentStore(\r\n local_search_paths=[\"local_search_path\"], url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX]), uri_search_template=\"{name}/\"\r\n```\r\n\r\nI hope it helps.\r\nI would suggest addressing this by either make the error more clear or a better check on `uri_search_template` attribute.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2022-01-26T14:09:30"
"2022-04-28T04:59:38"
null
NONE
null
Hi, I am trying to load some prebuilt GCP Kubeflow components using `kfp.components.ComponentStore`. However, I am getting this error: ``` line 180, in _load_component_spec_in_component_ref if self.uri_search_template: AttributeError: 'ComponentStore' object has no attribute 'uri_search_template' ``` The error is raised at this line of code: `mlengine_train_op = component_store.load_component('ml_engine/train')` ### Environment * How did you deploy Kubeflow Pipelines (KFP)? I am running this locally. * KFP version: ``` kfp 1.8.10 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ``` ### Steps to reproduce ``` import kfp from kfp.components import func_to_container_op COMPONENT_URL_SEARCH_PREFIX = "https://raw.githubusercontent.com/kubeflow/pipelines/1.7.1/components/gcp/" component_store = kfp.components.ComponentStore( local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX]) mlengine_train_op = component_store.load_component('ml_engine/train') mlengine_deploy_op = component_store.load_component('ml_engine/deploy') ``` --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7220/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7215
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7215/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7215/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7215/events
https://github.com/kubeflow/pipelines/issues/7215
1,114,502,450
I_kwDOB-71UM5CbfUy
7,215
[sdk] [question] How can I write test code for code that uses the SDK?
{ "login": "zamonia500", "id": 25953706, "node_id": "MDQ6VXNlcjI1OTUzNzA2", "avatar_url": "https://avatars.githubusercontent.com/u/25953706?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zamonia500", "html_url": "https://github.com/zamonia500", "followers_url": "https://api.github.com/users/zamonia500/followers", "following_url": "https://api.github.com/users/zamonia500/following{/other_user}", "gists_url": "https://api.github.com/users/zamonia500/gists{/gist_id}", "starred_url": "https://api.github.com/users/zamonia500/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zamonia500/subscriptions", "organizations_url": "https://api.github.com/users/zamonia500/orgs", "repos_url": "https://api.github.com/users/zamonia500/repos", "events_url": "https://api.github.com/users/zamonia500/events{/privacy}", "received_events_url": "https://api.github.com/users/zamonia500/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1682717392, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzky", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question", "name": "kind/question", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "/remove-kind bug", "@zamonia500 , can you provide some details on what are you trying to test for? Are you testing a pipeline end to end? or it's more like local unit test for a component's implementation code?\r\n\r\n> This code implemented in a way that uses the methods of kfp.Client.\r\n> \r\n> And now I'm trying to test this code, but this code holds bunch of kfp_server_api & kfp.Client methods.\r\n\r\nIf you're not looking for pipeline e2e test, your code probably shouldn't be bundled with kfp.Client.\r\n", "> @zamonia500 , can you provide some details on what are you trying to test for? Are you testing a pipeline end to end? or it's more like local unit test for a component's implementation code?\r\n> \r\n> > This code implemented in a way that uses the methods of kfp.Client.\r\n> > And now I'm trying to test this code, but this code holds bunch of kfp_server_api & kfp.Client methods.\r\n> \r\n> If you're not looking for pipeline e2e test, your code probably shouldn't be bundled with kfp.Client.\r\n\r\n@chensun , sorry for late reply :(\r\nIt's unit test for a client code to get list of pipelines and execute some of it using kfp.Client code.\r\n\r\nlooks like this\r\n\r\n```python\r\npipeline_id = kfp_client.get_pipeline_id(pipeline_name)\r\npipeline_versions = kfp_client.list_pipeline_versions(\r\n pipeline_id,\r\n page_size=1,\r\n sort_by=\"created_at desc\",\r\n)\r\nexperiment = kfp_client.create_experiment(\"zamonia500-exp\")\r\ngenerated_run = kfp_client.run_pipeline(\r\n experiment_id=experiment.id,\r\n job_name=\"zamonia500-job\",\r\n params=pipeline_params, # retrieved from somewhere\r\n version_id=pipeline_version.id,\r\n)\r\n```" ]
"2022-01-26T00:39:40"
"2022-02-08T23:56:49"
null
NONE
null
/kind question ### Environment * KFP version: <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP SDK version: ``` kfp==1.8.9 ``` <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * All dependencies version: ``` kfp 1.8.9 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ``` <!-- Specify the output of the following shell command: $pip list | grep kfp --> I implemented Python code that gets the list of updated pipeline versions after a certain time offset using kfp.Client. This code is implemented in a way that uses the methods of kfp.Client. Now I'm trying to test this code, but it calls a bunch of kfp_server_api & kfp.Client methods. It's very inconvenient to mock this SDK code in order to write tests. Has anyone tested code that uses kfp.Client? Please share some good ideas, thanks!
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7215/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7215/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7214
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7214/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7214/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7214/events
https://github.com/kubeflow/pipelines/issues/7214
1,114,420,787
I_kwDOB-71UM5CbLYz
7,214
[bug] Bucket param missing when getting pipeline artifacts causing 403 in UI
{ "login": "mttcnnff", "id": 17532157, "node_id": "MDQ6VXNlcjE3NTMyMTU3", "avatar_url": "https://avatars.githubusercontent.com/u/17532157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mttcnnff", "html_url": "https://github.com/mttcnnff", "followers_url": "https://api.github.com/users/mttcnnff/followers", "following_url": "https://api.github.com/users/mttcnnff/following{/other_user}", "gists_url": "https://api.github.com/users/mttcnnff/gists{/gist_id}", "starred_url": "https://api.github.com/users/mttcnnff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mttcnnff/subscriptions", "organizations_url": "https://api.github.com/users/mttcnnff/orgs", "repos_url": "https://api.github.com/users/mttcnnff/repos", "events_url": "https://api.github.com/users/mttcnnff/events{/privacy}", "received_events_url": "https://api.github.com/users/mttcnnff/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "Can you try using Kubeflow 1.4 (KFP 1.7)?\r\n\r\nNote that the bucket information is specified in https://github.com/kubeflow/gcp-blueprints/blob/master/kubeflow/env.sh#L44, I am able to use this feature without problem (bucket is specified in request). So there might be some bugs that have been fixed in Kubeflow 1.4.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2022-01-25T22:21:51"
"2022-04-28T04:59:37"
null
NONE
null
### What steps did you take 1. Navigated to view a pipeline in the UI 2. Clicked a component to view details 3. Clicked the "Visualizations" tab 4. Saw 403 responses in the network tab in browser: ![image](https://user-images.githubusercontent.com/17532157/151059268-868b1edc-68d7-4b26-88b1-8d4f38cff8da.png) 5. Port-forwarded the `ml-pipeline-ui-artifact` service to `localhost:3000`: ``` kubectl port-forward svc/ml-pipeline-ui-artifact 3000:80 ``` 6. Copied the request URL from the screenshot which 403'd: ``` https://test.kubeflow-platform.spotify.net/pipeline/artifacts/get?source=minio&namespace=hyperkube&bucket=&key=artifacts%2Ftaxi-example-pipeline-52s9w%2Ftaxi-example-pipeline-52s9w-1020558034%2Fmlpipeline-ui-metadata.tgz ``` 7. Curled the URL with the domain replaced by `localhost:3000` to target the port-forwarded service, which resulted in an error message: ``` curl localhost:3000/pipeline/artifacts/get\?source\=minio\&namespace\=hyperkube\&bucket\=\&key\=artifacts%2Ftaxi-example-pipeline-52s9w%2Ftaxi-example-pipeline-52s9w-1020558034%2Fmlpipeline-ui-metadata.tgz Storage bucket is missing from artifact request% ``` 8. Repeated Step 7, but filled in the bucket with the expected bucket name `kf-test-artifact-store`, which resulted in a successful response: ``` curl localhost:3000/pipeline/artifacts/get\?source\=minio\&namespace\=hyperkube\&bucket\=kf-test-artifact-store\&key\=artifacts%2Ftaxi-example-pipeline-52s9w%2Ftaxi-example-pipeline-52s9w-1020558034%2Fmlpipeline-ui-metadata.tgz -i HTTP/1.1 200 OK ``` ### What happened: It seems that the `bucket=` query param is not being populated, and I'm not sure why. We use Minio GCS Gateway for artifact storage and a Multi User install of KFP. Visualizations don't load as they seem to make the request described above on mount to the React DOM. In addition, component output strings are not loaded (unsure if that's related). Visualizations not loading: ![image](https://user-images.githubusercontent.com/17532157/151062194-7c04632c-45b3-42b1-8c23-390ba8486b3c.png) Component output strings not loading: ![image](https://user-images.githubusercontent.com/17532157/151062244-469ec00a-44d4-4aad-954a-6d8a94ded5ec.png) ### What did you expect to happen: Visualizations to load: ![image](https://user-images.githubusercontent.com/17532157/151062410-ada5ebb4-c28c-4116-ac35-ec7a280eca44.png) ### Environment: * How do you deploy Kubeflow Pipelines (KFP)? <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> Via argo app of apps based off of GCP Blueprints * KFP version: 1.3.0 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> * KFP SDK version: 1.4.0 <!-- Specify the output of the following shell command: $pip list | grep kfp --> ### Anything else you would like to add: <!-- Miscellaneous information that will assist in solving the issue.--> ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> <!-- /area frontend --> /area backend <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7214/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7213
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7213/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7213/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7213/events
https://github.com/kubeflow/pipelines/issues/7213
1,114,128,049
I_kwDOB-71UM5CaD6x
7,213
[backend] Executor v2: incorrect use of 0644 file mask when creating local dirs while retrieving artifacts from object storage?
{ "login": "francoisserra", "id": 6873823, "node_id": "MDQ6VXNlcjY4NzM4MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/6873823?v=4", "gravatar_id": "", "url": "https://api.github.com/users/francoisserra", "html_url": "https://github.com/francoisserra", "followers_url": "https://api.github.com/users/francoisserra/followers", "following_url": "https://api.github.com/users/francoisserra/following{/other_user}", "gists_url": "https://api.github.com/users/francoisserra/gists{/gist_id}", "starred_url": "https://api.github.com/users/francoisserra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francoisserra/subscriptions", "organizations_url": "https://api.github.com/users/francoisserra/orgs", "repos_url": "https://api.github.com/users/francoisserra/repos", "events_url": "https://api.github.com/users/francoisserra/events{/privacy}", "received_events_url": "https://api.github.com/users/francoisserra/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Fixed by the above PR." ]
"2022-01-25T16:48:15"
"2022-01-28T00:46:38"
"2022-01-28T00:46:38"
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Deployed Kubeflow on a GKE cluster <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> kubeflow v1.4 branch * KFP SDK version: <!-- Specify the output of the following shell command: $pip list | grep kfp --> n/a ### Steps to reproduce We discovered that when copying Minio artifacts locally, the Kubeflow Executor V2 creates local dirs with a `0644` mask, preventing non-root users from accessing artifact dirs - and that's the case when using Kaniko to build a Docker image generated by the BentoML framework (see https://github.com/bentoml/BentoML/issues/2199) **Is this a required behaviour, or is it possible to use the `0744` mask for dirs?** https://github.com/kubeflow/pipelines/blob/627b37c3edf23ee5efc1c31edcdcdd4ff5944f64/v2/objectstore/object_store.go#L266 More details: When launching a pipeline step in k8s that uses [Kaniko](https://github.com/GoogleContainerTools/kaniko), the `/kaniko/executor` command is wrapped by the `/kfp-launcher/launch` command (i.e. the executor V2). This command processes the args to resolve pipeline artifacts (namely those beginning with `minio://`), i.e. it copies them from the minio bucket to the local FS of the container (under the `/minio/` directory; note it is *not* a mount point/k8s volume). It turns out that the dir is in `drw-r--r--` mode (`0644`), preventing `chown` operations. Modifying it to `drwxr--r--` before launching Kaniko fixes the bug without tweaking the Dockerfile. <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> ### Expected result Use a `0744` file mask when creating local dirs while downloading artifacts from `minio` storage. <!-- What should the correct behavior be? --> ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7213/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7213/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7204
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7204/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7204/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7204/events
https://github.com/kubeflow/pipelines/issues/7204
1,113,331,890
I_kwDOB-71UM5CXBiy
7,204
[sdk] KFP Compiler doesn't propagate ParallelFor inputs
{ "login": "sabiroid", "id": 1318733, "node_id": "MDQ6VXNlcjEzMTg3MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1318733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sabiroid", "html_url": "https://github.com/sabiroid", "followers_url": "https://api.github.com/users/sabiroid/followers", "following_url": "https://api.github.com/users/sabiroid/following{/other_user}", "gists_url": "https://api.github.com/users/sabiroid/gists{/gist_id}", "starred_url": "https://api.github.com/users/sabiroid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sabiroid/subscriptions", "organizations_url": "https://api.github.com/users/sabiroid/orgs", "repos_url": "https://api.github.com/users/sabiroid/repos", "events_url": "https://api.github.com/users/sabiroid/events{/privacy}", "received_events_url": "https://api.github.com/users/sabiroid/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This is KFP v1 code, which is currently moved to deprecated folder in master branch HEAD. We're currenlty focusing on the new v2 code. And based on a similar code in [this test file](https://github.com/kubeflow/pipelines/blob/5dcc6d9910372eee0b37cf31723910ed5f6fe98e/sdk/python/kfp/compiler_cli_tests/test_data/pipeline_with_loops_and_conditions.py#L92-L96), I'd assume this may not reproduce in v2. Please let me know if i'm wrong.", "Yes, it looks like that" ]
"2022-01-25T01:52:26"
"2022-03-15T20:03:09"
null
CONTRIBUTOR
null
### Environment * KFP version: 1.8.2 * KFP SDK version: 1.8.2 * All dependencies version: ``` kfp 1.8.2 kfp-pipeline-spec 0.1.13 kfp-server-api 1.3.0 ``` ### Steps to reproduce Try to compile this code: ``` import kfp from kfp.components import func_to_container_op @func_to_container_op def produce_items() -> str: return '["first item", "second item"]' @func_to_container_op def do_nothing(): pass @kfp.dsl.pipeline() def parallelfor_nested_pipeline_param_resolving(fname1: str, fname2: str): items_op = produce_items() items = items_op.output with kfp.dsl.ParallelFor(items) as outer_loop_item: with kfp.dsl.ParallelFor(items) as inner_loop_item: do_nothing() if __name__ == '__main__': import kfp.compiler as compiler compiler.Compiler().compile(parallelfor_nested_pipeline_param_resolving, __file__ + '.yaml') ``` It fails with a `KeyError: 'produce-items'` exception from within the `fix_big_data_passing` function (not the actual bug location). ### Expected result It should compile. ### Materials and Reference The minimal code to reproduce is provided above. Based on my preliminary research, it seems that `compiler/compiler.py` simply doesn't have logic to recursively propagate inputs/outputs of `OpsGroup`s as it does for regular `Op`s except for several very specific cases, and `ParallelFor` is not among them. The exception the compilation fails with is not at the actual bug location. It just surfaces the fact that a certain parameter was not propagated through nested `OpsGroups`. The code above can also be used as a unit test once the issue is fixed. --- Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7204/reactions", "total_count": 12, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7204/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7201
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7201/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7201/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7201/events
https://github.com/kubeflow/pipelines/issues/7201
1,113,183,408
I_kwDOB-71UM5CWdSw
7,201
[feature] Allow user to specify which input parameters to use in cache key
{ "login": "amitchnick", "id": 22899956, "node_id": "MDQ6VXNlcjIyODk5OTU2", "avatar_url": "https://avatars.githubusercontent.com/u/22899956?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amitchnick", "html_url": "https://github.com/amitchnick", "followers_url": "https://api.github.com/users/amitchnick/followers", "following_url": "https://api.github.com/users/amitchnick/following{/other_user}", "gists_url": "https://api.github.com/users/amitchnick/gists{/gist_id}", "starred_url": "https://api.github.com/users/amitchnick/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amitchnick/subscriptions", "organizations_url": "https://api.github.com/users/amitchnick/orgs", "repos_url": "https://api.github.com/users/amitchnick/repos", "events_url": "https://api.github.com/users/amitchnick/events{/privacy}", "received_events_url": "https://api.github.com/users/amitchnick/received_events", "type": "User", "site_admin": false }
[ { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "> However, each run has a unique s3_path which we sync training checkpoints to during the job, which is the only way the jobs can finish successfully. \r\n\r\nI'm not quite following this part, how is `s3_path` used in the actual training step? Maybe some minimum pseudo code would help. Can you restructure your pipeline in a way that this is not an input to the training step?\r\n", "Hi @chensun, thank you for your reply! Good question. The `s3_path` is used in a `sidecar` container on the training job, which syncs local training checkpoints to s3 intermittently during training. The container operator looks something like this:\r\n\r\n```\r\ndef create_training_op(model_class_path: str,\r\n image: str,\r\n data_input_dir: str,\r\n model_output_dir: str,\r\n model_class_args: Dict[str, Any],\r\n s3_checkpoint_path: str) -> ContainerOp:\r\n arguments = ['train', dsl.InputArgumentPath(data_input_dir), model_output_dir]\r\n ....\r\n chkpt_download_init = init_containers.create_s3_download_init_container(\r\n model_output_dir, s3_checkpoint_path) # downloads an initial checkpoint from S3 if one exists (to resume training)\r\n sidecar_containers = [\r\n sidecars.UploadSidecar.create(s3_checkpoint_path, model_output_dir), # uploads checkpoints to s3\r\n sidecars.DownloadSidecar.create(s3_checkpoint_path, model_output_dir), # downloads checkpoints from s3\r\n ]\r\n training_op = ContainerOp(name='train',\r\n image=image,\r\n command=['python', model_class_path],\r\n arguments=arguments,\r\n file_outputs={'output': model_output_dir},\r\n sidecars=sidecar_containers,\r\n init_containers=[kube2iam_init, create_dir_init, chkpt_download_init])\r\n ...\r\n return training_op\r\n```\r\n\r\nI have tried to restructure it in a way to avoid using the `s3_path` as an argument (ie. come up with some canonical way to get the unique run path from inside the job, and not have to pass it through, or set it in an environment variable or something, but unfortunately I have yet to figure out a work-around. \r\n\r\nI hope this provides some more clarity!" ]
"2022-01-24T21:58:31"
"2022-03-17T14:40:50"
null
NONE
null
### Feature Area <!-- Uncomment the labels below which are relevant to this feature: --> <!-- /area frontend --> <!-- /area backend --> /area sdk <!-- /area samples --> <!-- /area components --> ### What feature would you like to see? Right now, the only caching customization that exists on an operator level in v1 KFP (please correct me if I am wrong) is: ``` task = some_op() task.execution_options.caching_strategy.max_cache_staleness ``` Something that would be greatly helpful for my use case is a way to specify which input parameters should be used to generate the cache key, such as the following: ``` # assuming my input params are as follows: data_input_dir: str, model_output_dir: str, s3_path: str, task = some_op() task.execution_options.caching_strategy.include_input_parameters([data_input_dir, model_output_dir]) ``` The expected behavior here would be that if there was a previous run of the given task with identical `data_input_dir` and `model_output_dir` inputs, but a different `s3_path`, the previously-cached task would be valid and be used. ### What is the use case or pain point? We are trying to build full model training pipelines using KFP. The training jobs take a long time, and we would like them to use the cache whenever possible. However, each run has a _unique_ `s3_path` which we sync training checkpoints to during the job, which is the only way the jobs can finish successfully. Thus, we must provide `s3_path` as an input parameter, but it should not be used to determine whether the model has been trained with the same inputs before (i.e. whether to use the cache). Rather, in our case, this should only depend on the data used (artifacts from the upstream task) and the config parameters (other input parameters to the task). Therefore, we would like to be able to have the caching mechanism ignore this unique `s3_path` parameter. ### Is there a workaround currently? I have tried experimenting with injecting the S3 path as an env var, but it is still considered an input parameter when I do this. Unfortunately, there does not seem to be a real workaround at this time; we are just unable to take advantage of the cache on these jobs. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7201/reactions", "total_count": 17, "+1": 17, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7201/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7196
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7196/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7196/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7196/events
https://github.com/kubeflow/pipelines/issues/7196
1,112,901,649
I_kwDOB-71UM5CVYgR
7,196
[frontend] Unable to upload pipeline by URL in KF 1.4
{ "login": "akangst", "id": 87344220, "node_id": "MDQ6VXNlcjg3MzQ0MjIw", "avatar_url": "https://avatars.githubusercontent.com/u/87344220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akangst", "html_url": "https://github.com/akangst", "followers_url": "https://api.github.com/users/akangst/followers", "following_url": "https://api.github.com/users/akangst/following{/other_user}", "gists_url": "https://api.github.com/users/akangst/gists{/gist_id}", "starred_url": "https://api.github.com/users/akangst/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akangst/subscriptions", "organizations_url": "https://api.github.com/users/akangst/orgs", "repos_url": "https://api.github.com/users/akangst/repos", "events_url": "https://api.github.com/users/akangst/events{/privacy}", "received_events_url": "https://api.github.com/users/akangst/received_events", "type": "User", "site_admin": false }
[ { "id": 930476737, "node_id": "MDU6TGFiZWw5MzA0NzY3Mzc=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/help%20wanted", "name": "help wanted", "color": "db1203", "default": true, "description": "The community is welcome to contribute." }, { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "annajung", "id": 24755674, "node_id": "MDQ6VXNlcjI0NzU1Njc0", "avatar_url": "https://avatars.githubusercontent.com/u/24755674?v=4", "gravatar_id": "", "url": "https://api.github.com/users/annajung", "html_url": "https://github.com/annajung", "followers_url": "https://api.github.com/users/annajung/followers", "following_url": "https://api.github.com/users/annajung/following{/other_user}", "gists_url": "https://api.github.com/users/annajung/gists{/gist_id}", "starred_url": "https://api.github.com/users/annajung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/annajung/subscriptions", "organizations_url": "https://api.github.com/users/annajung/orgs", "repos_url": "https://api.github.com/users/annajung/repos", "events_url": "https://api.github.com/users/annajung/events{/privacy}", "received_events_url": "https://api.github.com/users/annajung/received_events", "type": "User", "site_admin": false }, { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "cc @annajung , looks like this is related to https://github.com/kubeflow/pipelines/pull/6291. Do you have any insight for this issue? Thank you!", "It's been a while since I worked on the issue, so I'm getting caught up. Taking a look at this issue this week and hope to have an update PR. \r\n\r\n/assign @annajung ", "Hi @annajung @zijianjoy, this error seems to still be present in 1.7.0-rc.1.", "Hello,\r\ntested with 1.7.0-rc.1, Issue persists with the same error.\r\n**Findings:**\r\nImport by url only works when choosing to `Create a new pipeline version under an existing pipeline`, but fails on `Create a new pipeline`." ]
"2022-01-24T16:55:12"
"2023-03-13T13:07:11"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? We deployed a full Kubeflow installation in the platform-agnostic-multi-user configuration. * KFP version: 1.7 ### Steps to reproduce 1. Open the KF Pipelines UI 2. Click "Upload Pipeline" in the upper right. 3. Fill out the Name and Description fields 4. Select the "Import by url" radio button and paste a GitHub link into the Package Url box. 5. Click "Create" ### Expected result It should accept the pipeline and work like it does when the same template is uploaded as a local file. I get good results using that method. ### Materials and Reference Error message that is output: {"error":"Failed to authorize with API resource references: PermissionDenied: User 'dhoover@emailaddress.com' is not authorized with reason: (request: \u0026ResourceAttributes{Namespace:,Verb:create,Group:,Version:,Resource:pipelines,Subresource:,Name:,}): Unauthorized access","code":7,"message":"Failed to authorize with API resource references: PermissionDenied: User 'dhoover@emailaddress.com' is not authorized with reason: (request: \u0026ResourceAttributes{Namespace:,Verb:create,Group:,Version:,Resource:pipelines,Subresource:,Name:,}): Unauthorized access","details":[{"@type":"type.googleapis.com/api.Error","error_message":"User 'dhoover@emailaddress.com' is not authorized with reason: (request: \u0026ResourceAttributes{Namespace:,Verb:create,Group:,Version:,Resource:pipelines,Subresource:,Name:,})","error_details":"Failed to authorize with API resource references: PermissionDenied: User 'dhoover@emailaddress.com' is not authorized with reason: (request: \u0026ResourceAttributes{Namespace:,Verb:create,Group:,Version:,Resource:pipelines,Subresource:,Name:,}): Unauthorized access"}]} To me this looks a lot like https://github.com/kubeflow/pipelines/issues/6102, where it also was not getting the proper attributes to make a clean auth call back to the cluster. It looks like this was also referenced in https://github.com/kubeflow/pipelines/pull/6291, but perhaps it was not fixed all the way in the multi-user configuration. --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7196/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7196/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7194
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7194/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7194/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7194/events
https://github.com/kubeflow/pipelines/issues/7194
1,112,531,057
I_kwDOB-71UM5CT-Bx
7,194
[feature] integrate with Lightning ecosystem CI
{ "login": "Borda", "id": 6035284, "node_id": "MDQ6VXNlcjYwMzUyODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6035284?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Borda", "html_url": "https://github.com/Borda", "followers_url": "https://api.github.com/users/Borda/followers", "following_url": "https://api.github.com/users/Borda/following{/other_user}", "gists_url": "https://api.github.com/users/Borda/gists{/gist_id}", "starred_url": "https://api.github.com/users/Borda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Borda/subscriptions", "organizations_url": "https://api.github.com/users/Borda/orgs", "repos_url": "https://api.github.com/users/Borda/repos", "events_url": "https://api.github.com/users/Borda/events{/privacy}", "received_events_url": "https://api.github.com/users/Borda/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2022-01-24T11:36:29"
"2022-04-28T04:59:39"
null
NONE
null
Hello, and so happy to see you use PyTorch Lightning! :tada: Just wondering if you have already heard about the new **PyTorch Lightning (PL) ecosystem CI**, to which we would like to invite you... You can check out our blog post about it: [Stay Ahead of Breaking Changes with the New Lightning Ecosystem CI](https://devblog.pytorchlightning.ai/stay-ahead-of-breaking-changes-with-the-new-lightning-ecosystem-ci-b7e1cf78a6c7) :zap: As you use the PL framework for your cool project, we would like to enhance your experience and offer you safe updates to our future releases. At the moment, you run tests with a particular PL version, but it may happen that the next version is incompatible with your project... :confused: We do not intend to change anything on our project side, but we have a solution - the ecosystem CI, which tests your project against our latest development head, so we can find such problems very early and prevent releasing a bad version... :+1: **What is needed to do?** - have some tests, including PL integration - add a config to the ecosystem CI - https://github.com/PyTorchLightning/ecosystem-ci **What will you get?** - scheduled nightly testing configured for development/stable versions - a Slack notification if something goes wrong, so you can investigate - testing also on a multi-GPU machine, as our gift to you :rabbit:
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7194/reactions", "total_count": 18, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 6, "rocket": 5, "eyes": 1 }
https://api.github.com/repos/kubeflow/pipelines/issues/7194/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7193
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7193/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7193/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7193/events
https://github.com/kubeflow/pipelines/issues/7193
1,111,861,620
I_kwDOB-71UM5CRal0
7,193
Does Kubeflow Pipelines support a Sensor component?
{ "login": "Amitg1", "id": 24718581, "node_id": "MDQ6VXNlcjI0NzE4NTgx", "avatar_url": "https://avatars.githubusercontent.com/u/24718581?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Amitg1", "html_url": "https://github.com/Amitg1", "followers_url": "https://api.github.com/users/Amitg1/followers", "following_url": "https://api.github.com/users/Amitg1/following{/other_user}", "gists_url": "https://api.github.com/users/Amitg1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Amitg1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Amitg1/subscriptions", "organizations_url": "https://api.github.com/users/Amitg1/orgs", "repos_url": "https://api.github.com/users/Amitg1/repos", "events_url": "https://api.github.com/users/Amitg1/events{/privacy}", "received_events_url": "https://api.github.com/users/Amitg1/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! @IronPan, What's the status? can I help somehow in planning/impl?", "It would be great to have this feature in Kubeflow. @IronPan Any insights on it." ]
"2022-01-23T12:47:35"
"2023-01-27T23:38:53"
null
NONE
null
Sensors are a special type of Operator that are designed to do exactly one thing - wait for something to occur. It can be time-based, or waiting for a file, or an external event, but all they do is wait until something happens, and then succeed so their downstream tasks can run. From: https://airflow.apache.org/docs/apache-airflow/stable/concepts/sensors.html Do I have to implement the logic myself? Thanks!
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7193/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7193/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7184
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7184/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7184/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7184/events
https://github.com/kubeflow/pipelines/issues/7184
1,110,597,434
I_kwDOB-71UM5CMl86
7,184
Add security context privilege in the pipeline workflow pod
{ "login": "mike0355", "id": 51089749, "node_id": "MDQ6VXNlcjUxMDg5NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/51089749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mike0355", "html_url": "https://github.com/mike0355", "followers_url": "https://api.github.com/users/mike0355/followers", "following_url": "https://api.github.com/users/mike0355/following{/other_user}", "gists_url": "https://api.github.com/users/mike0355/gists{/gist_id}", "starred_url": "https://api.github.com/users/mike0355/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mike0355/subscriptions", "organizations_url": "https://api.github.com/users/mike0355/orgs", "repos_url": "https://api.github.com/users/mike0355/repos", "events_url": "https://api.github.com/users/mike0355/events{/privacy}", "received_events_url": "https://api.github.com/users/mike0355/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm not sure whether it's even possible. The yaml file you're tinkering with is an Argo workflow template. Maybe reach out to them to see if it's a supported scenario?" ]
"2022-01-21T15:31:12"
"2022-02-17T23:54:36"
null
NONE
null
/securityContext on Notebooks Hello everyone, because of the needs of the project, I want each pod in the workflow to access the files under the /dev path on the host. I have tried adding security parameters to the yaml file of the Notebook, as shown in the figure, but I still can't access the files under /dev when I actually run the pipeline. Is my configuration wrong? ![image](https://user-images.githubusercontent.com/51089749/150479896-6f7d14a4-2101-4f67-be11-afffed0ef243.png) This is my pipeline. I can't see video0 under /dev. ![image](https://user-images.githubusercontent.com/51089749/150479947-6140f234-7424-4903-9600-f43064bbf05c.png) Inside the container it looks like this: ![image](https://user-images.githubusercontent.com/51089749/150480141-9033efe7-c4ab-4557-aaaa-c91bbf716c65.png) I have successfully implemented this project on plain k8s. At that time, I added the security context parameter directly to the pod yaml file and successfully accessed /dev after deployment, as shown in the figures: ![image](https://user-images.githubusercontent.com/51089749/150480399-6fbbc2f6-c2ac-4e59-9f23-a14c7184228b.png) ![image](https://user-images.githubusercontent.com/51089749/150480657-827586ca-3dd5-4afd-b203-58555bf589c9.png) Because this access worked on k8s before, I think it should also work on Kubeflow. How can I modify the yaml file of the Notebook?
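For illustration, below is a minimal sketch of how a privileged security context could be attached to a pipeline task with the kfp v1 DSL. The image and command are placeholders, it is not confirmed that this alone exposes host devices (mounting /dev, for example via a hostPath volume, would be a separate step), and as the comments note this may ultimately be an Argo/Kubernetes question rather than a KFP feature.

```python
# Illustrative sketch only (kfp v1 DSL); not a confirmed or supported KFP feature.
from kfp import dsl
from kubernetes.client import V1SecurityContext


@dsl.pipeline(name="privileged-task-example")
def pipeline():
    task = dsl.ContainerOp(
        name="train",
        image="my-image:latest",         # placeholder image
        command=["python", "train.py"],  # placeholder command
    )
    # ContainerOp.container behaves like a Kubernetes V1Container, so the
    # security context can be assigned the same way it appears in a pod spec.
    task.container.security_context = V1SecurityContext(privileged=True)
```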
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7184/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7171
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7171/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7171/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7171/events
https://github.com/kubeflow/pipelines/issues/7171
1,105,471,719
I_kwDOB-71UM5B5Cjn
7,171
[sdk] Update absl-py requirement to accommodate future TF versions
{ "login": "jiyongjung0", "id": 869152, "node_id": "MDQ6VXNlcjg2OTE1Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/869152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiyongjung0", "html_url": "https://github.com/jiyongjung0", "followers_url": "https://api.github.com/users/jiyongjung0/followers", "following_url": "https://api.github.com/users/jiyongjung0/following{/other_user}", "gists_url": "https://api.github.com/users/jiyongjung0/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiyongjung0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiyongjung0/subscriptions", "organizations_url": "https://api.github.com/users/jiyongjung0/orgs", "repos_url": "https://api.github.com/users/jiyongjung0/repos", "events_url": "https://api.github.com/users/jiyongjung0/events{/privacy}", "received_events_url": "https://api.github.com/users/jiyongjung0/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1136110037, "node_id": "MDU6TGFiZWwxMTM2MTEwMDM3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk", "name": "area/sdk", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "@chensun Thank you for resolving the issue!\r\n\r\nBy the way, is there a plan to release a new version of kfp in the near future? TFX integration tests are still failing and I'm thinking about some kind of hacks to workaround the current breakage if the release will not happen shortly.", "Yes, we plan to release 1.8.11 shortly, and I'll cherry pick your change into the release branch. Will let you know when the release is out.", "@jiyongjung0 FYI, we released 1.8.11 today that contains the change you made: https://pypi.org/project/kfp/1.8.11/", "I confirmed that our tests pass with the new version. Thank you for the release!" ]
"2022-01-17T07:23:10"
"2022-01-25T01:53:41"
"2022-01-19T22:14:57"
CONTRIBUTOR
null
### Environment * KFP version: N/A * KFP SDK version: 1.8.10 (latest) * All dependencies version: N/A ### Steps to reproduce I believe that TensorFlow is frequently used with KFP, and the two should be installable together. TF updated their absl-py requirement to `>=1.0.0` recently. [setup.py in TF](https://github.com/tensorflow/tensorflow/blob/c93e4757491a4abb7aa990e18f2d321e4a935a03/tensorflow/tools/pip_package/setup.py#L77). The next TF version (probably 2.8.0) cannot be installed alongside recent kfp versions, because kfp pins absl-py to `<=0.11`. TF-nightly was already hit by this issue. ``` $ pip install tf-nightly==2.9.0.dev20220116 kfp==1.8.10 ... ERROR: Cannot install kfp==1.8.10 and tf-nightly==2.9.0.dev20220116 because these package versions have conflicting dependencies. The conflict is caused by: tf-nightly 2.9.0.dev20220116 depends on absl-py>=1.0.0 kfp 1.8.10 depends on absl-py<=0.11 and >=0.9 To fix this you could try to: 1. loosen the range of package versions you've specified 2. remove package versions to allow pip attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies ``` ### Expected result I think that the version range of absl-py should become `absl-py>=0.9,<2.0.0` if possible. Absl-py has had no recent breaking changes ([Change log](https://github.com/abseil/abseil-py/blob/main/CHANGELOG.md)) and it should be safe to upgrade. We are doing [a similar upgrade in TFX](https://github.com/tensorflow/tfx/pull/4573). It would be great if you could do a patch release with the fix, because we are already seeing failures in integration tests that use TF-nightly and kfp. ### Materials and Reference N/A --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
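For illustration, the proposed relaxation would look roughly like this in a setup.py-style dependency list; the exact file and the surrounding entries in kfp may differ.

```python
# Sketch of the proposed version-range change, not the actual kfp setup.py.
from setuptools import setup

setup(
    name="kfp",
    version="1.8.11",  # hypothetical next patch release
    install_requires=[
        # before: "absl-py>=0.9,<=0.11"
        "absl-py>=0.9,<2.0.0",  # proposed range, compatible with TF's absl-py>=1.0.0
        # ... other dependencies unchanged ...
    ],
)
```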
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7171/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7169
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7169/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7169/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7169/events
https://github.com/kubeflow/pipelines/issues/7169
1,105,303,745
I_kwDOB-71UM5B4ZjB
7,169
[backend] discussion - resource reference in API interface considered harmful
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1682717377, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzc3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/discussion", "name": "kind/discussion", "color": "ecfc15", "default": false, "description": "" } ]
open
false
null
[]
null
[ "/cc @zijianjoy @chensun @difince \r\n\r\nSharing some thoughts about current API design, what do you think?", "@Bobgy I totally agree with all of your arguments and I could confirm that using the resource_reference was not that easy and intuitive for me as well. \r\nI have a question for case of [ getPipelineByName ](https://github.com/kubeflow/pipelines/pull/7004)(and namespace)\r\nHow the URL path `/apis/v1beta1/namespaces/{namespace}/pipelines/{name}` is going to look like taking into consideration that the namespace is not a mandatory field? (in cases of Kubeflow standalone deployment and in multi-user mode - \"shared\" pipelines?) \r\n\r\n", "The question above is more about PR #7004 I will move it there. I agree with this proposal", "Agreeing to this proposal ๐Ÿ‘. By looking at the KFP API list: https://www.kubeflow.org/docs/components/pipelines/reference/api/kubeflow-pipeline-api-spec/, I feel that the amount of work will require a major refactoring of existing APIs. And more importantly, it takes a longer time to migrate to new APIs and deprecate old APIs.\r\n\r\nCode structure: For a better migration process, I am thinking about creating another [package](https://developers.google.com/protocol-buffers/docs/proto3#packages) for new APIs: https://github.com/kubeflow/pipelines/tree/master/backend/api. So when we deprecate, we just need to deprecate the old package. \r\n\r\nKFP standalone use case: When namespace is not required, we can use `-` for `unique resource lookup` in https://google.aip.dev/159.", "> Code structure: For a better migration process, I am thinking about creating another [package](https://developers.google.com/protocol-buffers/docs/proto3#packages) for new APIs: https://github.com/kubeflow/pipelines/tree/master/backend/api. So when we deprecate, we just need to deprecate the old package.\r\n\r\nJust my opinion, I am not sure whether it's worth the effort creating a new API package, because [it's non-trivial supporting two APIs using the same code](https://github.com/kubeflow/pipelines/blob/577902ca46eee101cf1d7d197908797f4ab16fce/backend/src/apiserver/main.go#L96-L109) and if we do, they might need to be two different endpoints, so that users will need to migrate.\r\n\r\nAlso, it's natural for an API to evolve: add new fields, deprecate old fields, support both fields in the same time to allow gradual migration to the new field etc. You might feel that keeping the deprecated fields make the API untidy -- which I feel like we only need a mindset shift, APIs will always evolve, and there will always be deprecated fields. We cannot afford the cost (both for us and for our users) of making a new set of APIs every time we have some deprecated fields. Maybe we can learn to accept that's natural.\r\n\r\nexample: https://github.com/kubeflow/pipelines/blob/a03b41e129712b1416ee68cdea1e5ad4da1341dc/api/v2alpha1/pipeline_spec.proto#L29-L31", "I fully agree with Yuan's argument.\r\n\r\nThe way the Experiments <-> Pipeline relationship and Pipeline <-> Version relationship were introduced and implemented was less than ideal. Actually, initially it was significantly worse: The user could not submit a pipeline run without creating an experiment first and specifying the relationship. Fortunately, this was changed after some time. 
The way the pipeline versions were introduced also went against the typical API design guidelines, which lead to many issues reported by the users.\r\nTo this day, we do not see any significant usage of Experiments or Versions.\r\n\r\nThe API could have been much simpler to use:\r\n\r\n```\r\nRun/create\r\nExperiment/create\r\nExperiment/<ID>/runs/add(runId=...)\r\nExperiment/<ID>/runs/remove(runId=...)\r\nExperiment/<ID>/runs/list\r\n\r\nPipelineGroup/create\r\nPipelineGroup/<ID>/pipelines/list\r\nPipelineGroup/<ID>/pipelines/latest\r\nPipeline/create(spec=..., pipelineGroupId=...)\r\n```", "Based on @Bobgy proposal in this issue, Anna @annajung and I created a proposal [document](https://docs.google.com/document/d/19OfU-hIsY4xBKA6b_F3dYIbSgrLMrVshucUSsyqotQ4/edit). It contains specific APIs changes we are proposing in order to remove `resource_references` from the APIs. Please comment or ask questions in the document or here in the issue. Hope we will bring the proposal up for discussion during the next Pipeline meeting (04/27/2022). \r\n\r\ncc: @chensun, @zijianjoy, @james-jwu, @Ark-kun" ]
"2022-01-17T02:42:58"
"2022-04-14T14:47:36"
null
CONTRIBUTOR
null
There were some previous discussions around this topic, but let me summarize my arguments against using resource references in the public API. ## Proposal Replace resource reference in API interface with explicit fields. For example, instead of passing resource reference for a pipeline version, we can just have a version ID / version Name field in the pipeline spec to specify which version we want to run. After explicit fields are added in the API, generated API client will be good enough to use directly, so we may no longer need the manually maintained helpers. ## Rationale for introducing resource references Each resource (pipeline, pipeline version, experiment, recurring run, etc) may have external relationship with other resources. Therefore, we introduced resource reference as a flexible and generic way to express and store this relationship, especially in the DB layer. API proto: https://github.com/kubeflow/pipelines/blob/0d64c3490a2da50c667e0dfd7a09b6e6ab7f5e55/backend/api/resource_reference.proto#L20-L52 DB Storage model: https://github.com/kubeflow/pipelines/blob/0d64c3490a2da50c667e0dfd7a09b6e6ab7f5e55/backend/src/apiserver/model/resource_reference.go#L23-L45 ## Arguments against resource references However, there are some cognitive overhead for understanding resource references. For a given type of resource, what are the possible resource references? There are combinations for resource type, relationship type. A request is only valid when you get both right. For example, what are valid combinations of resource references for a create run request? The api definition says almost nothing: https://github.com/kubeflow/pipelines/blob/0d64c3490a2da50c667e0dfd7a09b6e6ab7f5e55/backend/api/run.proto#L233-L236. You can only learn that by reading other docs or reading code: https://github.com/kubeflow/pipelines/blob/0d64c3490a2da50c667e0dfd7a09b6e6ab7f5e55/backend/src/apiserver/server/util.go#L337 that the only possible combination is a pipeline version resource type + creator relationship. For REST API users, they also need to worry about what's the exact spelling for the enums, they cannot get a helpful error when their spellings are incorrect. For API generated client users, they need to learn where to import the resource reference model object and how to construct them: https://github.com/kubeflow/pipelines/blob/0d64c3490a2da50c667e0dfd7a09b6e6ab7f5e55/sdk/python/kfp/_client.py#L913-L919 It's hard to figure out without an example, and it's easy to forget that I'll need to look up this code snippet every once a while. Therefore, a seemingly simple and flexible resource reference field expands to so many complexities in the downstream. At the end, we manually maintain a SDK client method to expose the version ID arg as a simple field to hide resource reference from API: https://github.com/kubeflow/pipelines/blob/0d64c3490a2da50c667e0dfd7a09b6e6ab7f5e55/sdk/python/kfp/_client.py#L697. Therefore, I propose that we consider adding resource references alternatives in public API interface and gradually let people use those explicit fields directly. It's still reasonable to use resource reference as the underlying implementation, because it's a flexible model. ## Common arguments for resource references and why I think they are not valid 1. Resource reference makes public interface flexible, e.g. when we add new relationships between resources or change sth, we don't need to change the public interface. 
[counter-argument]: I think the flexibility is only an illusion, an interface includes both the type definition and constraints user need to understand, as I mentioned above, users have to understand the exact valid combinations for a resource reference, so that's in fact part of the interface contract itself. Having a flexible type definition does not help make the actual interface flexible. 2. We can keep the API implementation simple. [counter-argument]: the cost of making API implementation simple is that additional complexities are introduced in the downstream -- SDK manually maintained methods, more documentation to explain these fields (that we cannot easily keep up-to-date) and still there are user cognitive load when understanding this. If we look at KFP ecosystem as a whole, I think more complexities are introduced overall than the simpler API implementation. <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
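To make the contrast concrete, here is a small sketch based on the kfp client code linked above of what a caller has to write today versus what an explicit field could look like. The `pipeline_version_id` field in the commented-out part is hypothetical and only illustrates the proposal.

```python
import kfp_server_api

# Today: the caller must know the only valid (resource type, relationship) pair
# and build a resource reference by hand (mirrors kfp/_client.py linked above).
key = kfp_server_api.models.ApiResourceKey(
    id="some-version-id",
    type=kfp_server_api.models.ApiResourceType.PIPELINE_VERSION,
)
reference = kfp_server_api.models.ApiResourceReference(
    key=key,
    relationship=kfp_server_api.models.ApiRelationship.CREATOR,
)
run_body = kfp_server_api.models.ApiRun(
    name="my-run",
    resource_references=[reference],
)

# Proposal (hypothetical API shape, for illustration only): say it directly.
# run_body = kfp_server_api.models.ApiRun(
#     name="my-run",
#     pipeline_version_id="some-version-id",  # explicit field instead of a reference
# )
```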
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7169/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7169/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7163
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7163/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7163/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7163/events
https://github.com/kubeflow/pipelines/issues/7163
1,100,850,362
I_kwDOB-71UM5BnaS6
7,163
Recommended way to use Tensorboard in a pipelines run
{ "login": "mgiessing", "id": 40735330, "node_id": "MDQ6VXNlcjQwNzM1MzMw", "avatar_url": "https://avatars.githubusercontent.com/u/40735330?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mgiessing", "html_url": "https://github.com/mgiessing", "followers_url": "https://api.github.com/users/mgiessing/followers", "following_url": "https://api.github.com/users/mgiessing/following{/other_user}", "gists_url": "https://api.github.com/users/mgiessing/gists{/gist_id}", "starred_url": "https://api.github.com/users/mgiessing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mgiessing/subscriptions", "organizations_url": "https://api.github.com/users/mgiessing/orgs", "repos_url": "https://api.github.com/users/mgiessing/repos", "events_url": "https://api.github.com/users/mgiessing/events{/privacy}", "received_events_url": "https://api.github.com/users/mgiessing/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Per our doc, your option 1 seems to be our recommendation: https://www.kubeflow.org/docs/components/pipelines/sdk/output-viewer/#tensorboard", "Have you been able to get the viewer linked to the component? I'm able to generate the artifact named \"mlpipeline_ui_metadata\" as explained in the docs\r\n![image](https://user-images.githubusercontent.com/60658860/150397983-9e227cf7-f79d-4cac-b58e-970c5cac9599.png)\r\n\r\nwhich looks like:\r\n`{\"outputs\": [{\"type\": \"tensorboard\", \"source\": \"s3://my_bucket/ai_kubeflow_example/pipeline/CatsDogs_inaki/9c735a49-f1b4-44ac-99b3-18d768e46600/training-component-v2/logs\"}]}`\r\n\r\nHowever, nothing shows in the \"Visualizations\" tab\r\n![image](https://user-images.githubusercontent.com/60658860/150398479-cd476ddd-3d1a-41d6-911f-73c9581fbc41.png)\r\n\r\nAlternatively, if I start a Tensorboard server pointing to that same S3 URI, all looks good. ", "Yes it does work for me. How does your function look like which you'd expect to link the viewer?", "I've tried to simplify it as much as possible. It looks like this:\r\n\r\n```\r\nfrom kfp.aws import use_aws_secret\r\nfrom kfp.compiler import Compiler\r\nfrom kfp.components import func_to_container_op\r\nfrom kfp.dsl import PipelineExecutionMode, pipeline\r\nfrom ai_kubeflow_example.kubeflow.images import DOCKER_IMAGE\r\n\r\nfrom typing import NamedTuple\r\ndef simple_component()-> NamedTuple(\"EvaluationOutput\", [('mlpipeline_ui_metadata', 'UI_metadata')]):\r\n import json\r\n # Exports a sample tensorboard:\r\n metadata = {\r\n 'outputs': [{\r\n 'type': 'tensorboard',\r\n 'source': 'gs://ml-pipeline-dataset/tensorboard-train',\r\n }]\r\n }\r\n from collections import namedtuple\r\n out_tuple = namedtuple(\"EvaluationOutput\", [\"mlpipeline_ui_metadata\"])\r\n\r\n return out_tuple(json.dumps(metadata))\r\n\r\n\r\n@pipeline(name=\"Simple component\", description=\"Simple component\",)\r\ndef example_pipeline():\r\n simple_component_op = func_to_container_op(func=simple_component, base_image=DOCKER_IMAGE)\r\n simple_component_op().apply(use_aws_secret(\"mlpipeline-minio-artifact\", \"accesskey\", \"secretkey\", \"eu-west-2\"))\r\n\r\n\r\nif __name__ == \"__main__\":\r\n Compiler(mode=PipelineExecutionMode.V2_COMPATIBLE).compile(training_pipeline_v2, f\"{__file__[:-len('.py')]}.yaml\")\r\n```\r\n\r\nHowever, nothing shows up in the Viz tab\r\n![image](https://user-images.githubusercontent.com/60658860/150535783-a1fa3ab3-05e9-44ac-a410-a3a503dd503d.png)\r\n\r\n![image](https://user-images.githubusercontent.com/60658860/150535748-41c4e3e4-e976-4d23-9238-cf19f1e36e93.png)\r\n", "I don't use AWS, nor do I use x86 but your code seems to link the Tensorboard for me if I adjust it minimally according to my environment:\r\n\r\n```python\r\nimport kfp\r\nfrom kfp.compiler import Compiler\r\nfrom kfp.components import func_to_container_op\r\nfrom kfp.dsl import PipelineExecutionMode, pipeline\r\n\r\nfrom typing import NamedTuple\r\ndef simple_component()-> NamedTuple(\"EvaluationOutput\", [('mlpipeline_ui_metadata', 'UI_metadata')]):\r\n import json\r\n # Exports a sample tensorboard:\r\n metadata = {\r\n 'outputs': [{\r\n 'type': 'tensorboard',\r\n 'source': 'gs://ml-pipeline-dataset/tensorboard-train',\r\n }]\r\n }\r\n from collections import namedtuple\r\n out_tuple = namedtuple(\"EvaluationOutput\", [\"mlpipeline_ui_metadata\"])\r\n\r\n return out_tuple(json.dumps(metadata))\r\n\r\n\r\n@pipeline(name=\"Simple component\", description=\"Simple component\")\r\ndef example_pipeline():\r\n 
simple_component_op = func_to_container_op(func=simple_component, base_image=\"python:3.7\")\r\n simple_component_op() #.apply(use_aws_secret(\"mlpipeline-minio-artifact\", \"accesskey\", \"secretkey\", \"eu-west-2\"))\r\n\r\n\r\nif __name__ == \"__main__\":\r\n kfp.Client().create_run_from_pipeline_func(example_pipeline, arguments={}, namespace='user-example-com')\r\n #Compiler(mode=PipelineExecutionMode.V2_COMPATIBLE).compile(training_pipeline_v2, f\"{__file__[:-len('.py')]}.yaml\")\r\n```\r\n\r\n![image](https://user-images.githubusercontent.com/40735330/150554748-26973841-b8ea-4794-aff6-a40e2271b08e.png)\r\n\r\nI saw you first compile the function into a workflow.yaml file which you then use to create a run I guess? Shouldn't you use the correct pipeline func `example_pipeline` instead of `training_pipeline_v2` then...? However you seem to be able to get a correct run out.", "Oh, my bad, that was a copy&paste issue. In my code I generate the pipeline for `example_pipeline`, not `training_pipeline_v2`. Then I manually upload the yaml file and create the run from the dashboard.\r\n\r\nCan I ask you what version of kubeflow you're using (we're on [1.4](https://www.kubeflow.org/docs/releases/kubeflow-1.4/))? And also what version of the python package kfp you're using (we're on `kfp=1.8.10`)?\r\n\r\nI'm not sure if the fact that I'm using the V2_COMPATIBLE compilation mode might make the difference...", "Okay! I'm using Kubeflow v1.3.0 & kfp 1.8.5 - unfortunately we don't have v1.4.0 for our architecture, so I cannot compare that.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@mgiessing [mgiessing](https://github.com/mgiessing)\r\nSo do you solve you problem? How do you use tensorboard in pipeline now.\r\nI get the same issue with your first option, I just can not open tensorboard until the component is completed.", "> 2. Mount a volume (maybe the NB server volume?) into the pipeline component and start the Tensorboard via the Kubeflow Dashboard Tab (the way which is also shown in the yt video). Probably sth. similar to this example I guess: https://github.com/kubeflow/pipelines/blob/master/samples/core/volume_ops/volume_ops.py\r\nThe advantage I see with the latter is that I can also start using/inspecting TensorBoard while the model is still training, however I read, that those will make the pipeline less flexible/reusable and should be avoided if possible...?\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/1.8.18/docs/config/volume-support.md\r\n`volume-support` feature implemented from https://github.com/kubeflow/pipelines/pull/4236 \r\nIt can solve the tensorboard with training issues.\r\nAttach PVC to Training task, then tensorboard can be opened with a PVC volume while training.\r\n\r\nThis feature is available from [kubeflow 1.1.2](https://github.com/kubeflow/pipelines/releases/tag/1.1.2)\r\nBut, [according to the document](https://github.com/kubeflow/pipelines/blob/1.8.18/docs/config/volume-support.md) it's still alpha.\r\n\r\n@chensun \r\n`volume-support` feature is still in alpha. [Feature PR](https://github.com/kubeflow/pipelines/pull/4236) was merged 2 years ago. \r\nAny plans? To be removed feature? or to be released?\r\n" ]
"2022-01-12T22:29:06"
"2023-01-25T04:49:27"
null
NONE
null
Hi, I'm currently wondering what is the recommended way to include Tensorboard **inside** a pipeline run. For non-pipeline tb I followed a minimal example that trains mnist on a notebook server --> https://www.youtube.com/watch?v=eMDF2Bk8YRY which works good! For pipelines I currently feel that there are two options (probably even more) 1. Use **tensorboard viewer**, therefore I need to upload the logs to MinIO (I'm using local Kubeflow deployment) similar to the following Notebook snippet. The "disadvantage" I see there is that I have to wait until the component is completed until I can start the TensorBoard. ```python from typing import NamedTuple def mnist_train(model_file: str) -> NamedTuple("EvaluationOutput", [('mlpipeline_ui_metadata', 'UI_metadata')]): from datetime import datetime import tensorflow as tf model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) print(model.summary()) mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 callbacks = [ tf.keras.callbacks.TensorBoard(log_dir='logs/' + datetime.now().date().__str__()), tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'), ] model.fit(x_train, y_train, batch_size=32, epochs=3, callbacks=callbacks, validation_data=(x_test, y_test)) ##To Do: Upload the logs to MinIO and point the source path to MinIO instead of gs://... import json from collections import namedtuple # Exports a sample tensorboard: metadata = { 'outputs' : [{ 'type': 'tensorboard', 'source': 'gs://ml-pipeline-dataset/tensorboard-train', }] } out_tuple = namedtuple("EvaluationOutput", ["mlpipeline_ui_metadata"]) return out_tuple(json.dumps(metadata)) model_train_op = comp.func_to_container_op(func=mnist_train, base_image="quay.io/ibm/kubeflow-component-tensorflow-cpu:latest") @dsl.pipeline( name='Mnist pipeline', description='A toy pipeline that performs mnist model training.' ) def mnist_pipeline(model_file: str = 'mnist_model.h5'): model_train_op(model_file=model_file) arguments = {"model_file":"mnist_model.h5"} # Submit pipeline directly from pipeline function run_result = client.create_run_from_pipeline_func(mnist_pipeline, arguments=arguments, namespace='kubeflow-user-example-com') ``` 2. Mount a volume (maybe the NB server volume?) into the pipeline component and start the Tensorboard via the Kubeflow Dashboard Tab (the way which is also shown in the yt video). Probably sth. similar to this example I guess: https://github.com/kubeflow/pipelines/blob/master/samples/core/volume_ops/volume_ops.py The advantage I see with the latter is that I can also start using/inspecting TensorBoard while the model is still training, however I read, that those will make the pipeline less flexible/reusable and should be avoided if possible...? Maybe someone can shed some light on the recommended way to achieve this. Thanks!
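For option 2, a rough sketch (kfp v1 DSL; image, command, and volume size are placeholders) of sharing a PVC between the training step and a Tensorboard server started from the Kubeflow UI:

```python
# Sketch only: the PVC created here can also be attached to a Tensorboard server
# started from the Kubeflow UI while training is still running.
from kfp import dsl


@dsl.pipeline(name="tensorboard-volume-example")
def pipeline():
    vop = dsl.VolumeOp(
        name="create-tb-logs-volume",
        resource_name="tb-logs",
        size="5Gi",
        modes=dsl.VOLUME_MODE_RWO,
    )

    dsl.ContainerOp(
        name="train",
        image="tensorflow/tensorflow:2.7.0",  # placeholder image
        command=["python", "train.py"],       # placeholder; should write TB logs under /mnt/logs
    ).add_pvolumes({"/mnt/logs": vop.volume})
```

Note that a ReadWriteOnce volume can only be mounted on one node at a time, so a ReadWriteMany storage class may be needed if the Tensorboard pod is scheduled on a different node than the training pod.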
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7163/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7162
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7162/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7162/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7162/events
https://github.com/kubeflow/pipelines/issues/7162
1,100,188,167
I_kwDOB-71UM5Bk4oH
7,162
Minor security flaws detected by OWASP scan
{ "login": "bix709", "id": 18500334, "node_id": "MDQ6VXNlcjE4NTAwMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/18500334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bix709", "html_url": "https://github.com/bix709", "followers_url": "https://api.github.com/users/bix709/followers", "following_url": "https://api.github.com/users/bix709/following{/other_user}", "gists_url": "https://api.github.com/users/bix709/gists{/gist_id}", "starred_url": "https://api.github.com/users/bix709/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bix709/subscriptions", "organizations_url": "https://api.github.com/users/bix709/orgs", "repos_url": "https://api.github.com/users/bix709/repos", "events_url": "https://api.github.com/users/bix709/events{/privacy}", "received_events_url": "https://api.github.com/users/bix709/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "@bix709 Kubeflow 1.2.0, KFP 1.0.4 is kinda old, can you try on the latest version and see if the issue still exists?\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2022-01-12T10:51:42"
"2022-04-17T06:27:21"
null
NONE
null
What steps did you take: Hey, I ran OWASP scan on kubeflow central dashboard and KFP to see if there are any security flaws I should take to the consideration before going to production with our kubeflow instance. What happened: OWASP findings: - Missing X-Frame-Options header ``` The X-Frame-Options header is not set in the HTTP response, meaning the page can potentially be loaded into an attacker-controlled frame. This could lead to clickjacking, where an attacker adds an invisible layer on top of the legitimate page to trick users into clicking on a malicious link or taking a harmful action ``` - Missing X-XSS-Protection header ``` The X-XSS-Protection response header provides a layer of protection against reflected cross-site scripting (XSS) attacks by instructing browsers to abort rendering a page in which a reflected XSS attack has been detected. This is a best-effort second line of defense measure which helps prevent an attacker from using evasion techniques to avoid the neutralization mechanisms that the filters use by default. When configured appropriately, browser-level XSS filters can provide additional layers of defense against web application attacks. ``` - HTTP response headers like 'Server', 'X-Powered-By', 'X-AspNetVersion', 'X-AspNetMvcVersion' could disclose information about the platform and technologies used by the website. The HTTP response include one or more such headers. - Unencoded characters accepted by endpoint **/api/workgroup/get -contributors/** ``` The web application reflects potentially dangerous characters such as single quotes, double quotes, and angle brackets. These characters are commonly used for HTML injection attacks such as cross-site scripting (XSS). ``` - No Referrer Policy is specified for all static and dynamic pages Environment: Full Kubeflow deployment on AWS EKS, Kubeflow 1.2.0, KFP 1.0.4 As these are just a minor findings, I think it would be worth to assess which one are on purpose, and what is needed to be improved. Also I'd appreciate if you could give some guidelines about the headers, if they can be tuned on ingress/istio level, or should remain unchanged - as I couldn't find such information in the docs.
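If the headers are to be tuned at the Istio level, one mechanism is response-header manipulation on a VirtualService. The sketch below is illustrative only; every name, host, gateway, and destination in it is a placeholder that would need to match the actual deployment.

```yaml
# Sketch: add security headers to responses routed through an Istio VirtualService.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-dashboard        # placeholder; adjust to your existing VirtualService
  namespace: kubeflow
spec:
  hosts: ["*"]                                # placeholder
  gateways: ["kubeflow/kubeflow-gateway"]     # placeholder
  http:
  - headers:
      response:
        set:
          X-Frame-Options: "SAMEORIGIN"
          X-XSS-Protection: "1; mode=block"
          Referrer-Policy: "no-referrer"
    route:
    - destination:
        host: centraldashboard.kubeflow.svc.cluster.local   # placeholder
        port:
          number: 80
```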
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7162/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7160
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7160/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7160/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7160/events
https://github.com/kubeflow/pipelines/issues/7160
1,098,534,456
I_kwDOB-71UM5Bek44
7,160
[feature] to_csv in google_cloud_pipeline_components' bigquery operators
{ "login": "ckchow", "id": 3922740, "node_id": "MDQ6VXNlcjM5MjI3NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/3922740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ckchow", "html_url": "https://github.com/ckchow", "followers_url": "https://api.github.com/users/ckchow/followers", "following_url": "https://api.github.com/users/ckchow/following{/other_user}", "gists_url": "https://api.github.com/users/ckchow/gists{/gist_id}", "starred_url": "https://api.github.com/users/ckchow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ckchow/subscriptions", "organizations_url": "https://api.github.com/users/ckchow/orgs", "repos_url": "https://api.github.com/users/ckchow/repos", "events_url": "https://api.github.com/users/ckchow/events{/privacy}", "received_events_url": "https://api.github.com/users/ckchow/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
open
false
{ "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false }
[ { "login": "IronPan", "id": 2348602, "node_id": "MDQ6VXNlcjIzNDg2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2348602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IronPan", "html_url": "https://github.com/IronPan", "followers_url": "https://api.github.com/users/IronPan/followers", "following_url": "https://api.github.com/users/IronPan/following{/other_user}", "gists_url": "https://api.github.com/users/IronPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/IronPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IronPan/subscriptions", "organizations_url": "https://api.github.com/users/IronPan/orgs", "repos_url": "https://api.github.com/users/IronPan/repos", "events_url": "https://api.github.com/users/IronPan/events{/privacy}", "received_events_url": "https://api.github.com/users/IronPan/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I think a good way to solve the problem is modifying the component to receive `JobConfigurationExtract`. Current component can receive only `JobConfigurationQuery` here:\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/c8a18bde299f2fdf5f72144f15887915b8d11520/components/google-cloud/google_cloud_pipeline_components/experimental/bigquery/query_job/component.yaml#L84-L91\r\n\r\nWe are happy if we can pass the other configurations such as `JobConfigurationTableCopy`, `JobConfigurationExtract` and `JobConfigurationLoad`:\r\nhttps://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfiguration\r\n", "I need the feature, but this issue was pending for months. If the maintainers are busy, I want to contribute. So I'm curious about\r\n- How do you think about the above idea?\r\n- The maintainers have a plan to add some features to solve this issue?\r\n" ]
"2022-01-11T00:49:54"
"2022-05-05T04:30:25"
null
NONE
null
### Feature Area <!-- /area sdk --> <!-- /area components --> ### What feature would you like to see? Google cloud pipeline components (https://github.com/kubeflow/pipelines/tree/master/components/google-cloud/google_cloud_pipeline_components) is missing an operator to export a bigquery query result to CSV. This is currently provided by https://github.com/kubeflow/pipelines/tree/master/components/gcp/bigquery/query strictly speaking, but that component has issues with the GCPProjectID type not being well supported. ### What is the use case or pain point? We export some data to CSV for XGBoost training. ### Is there a workaround currently? We use a version of the component linked in part 2 above, but with some of the types lowered to String from GCPProjectId. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a ๐Ÿ‘. We prioritize fulfilling features with the most ๐Ÿ‘.
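Until the google_cloud_pipeline_components operators support extract jobs, one possible workaround is a small Python-based component that calls the BigQuery client library directly. The snippet below is only a sketch of that idea (table and URI values are placeholders), not part of the official component set; it could be wrapped with `kfp.components.create_component_from_func` like any other lightweight component.

```python
# Sketch of a lightweight workaround component: run a BigQuery extract job to CSV.
from google.cloud import bigquery


def export_table_to_csv(project: str, table: str, destination_uri: str) -> str:
    """Exports a BigQuery table (e.g. "proj.dataset.table") to a CSV file on GCS."""
    client = bigquery.Client(project=project)
    job_config = bigquery.ExtractJobConfig(
        destination_format=bigquery.DestinationFormat.CSV
    )
    extract_job = client.extract_table(table, destination_uri, job_config=job_config)
    extract_job.result()  # wait for the extract job to finish
    return destination_uri


if __name__ == "__main__":
    # Placeholder values for illustration only.
    export_table_to_csv("my-project", "my-project.my_dataset.my_table",
                        "gs://my-bucket/exports/my_table-*.csv")
```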
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7160/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7160/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7158
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7158/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7158/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7158/events
https://github.com/kubeflow/pipelines/issues/7158
1,098,450,211
I_kwDOB-71UM5BeQUj
7,158
minio Express webserver doesn't specify character encoding
{ "login": "jli", "id": 133466, "node_id": "MDQ6VXNlcjEzMzQ2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/133466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jli", "html_url": "https://github.com/jli", "followers_url": "https://api.github.com/users/jli/followers", "following_url": "https://api.github.com/users/jli/following{/other_user}", "gists_url": "https://api.github.com/users/jli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jli/subscriptions", "organizations_url": "https://api.github.com/users/jli/orgs", "repos_url": "https://api.github.com/users/jli/repos", "events_url": "https://api.github.com/users/jli/events{/privacy}", "received_events_url": "https://api.github.com/users/jli/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2186355346, "node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue", "name": "good first issue", "color": "fef2c0", "default": true, "description": "" } ]
open
false
null
[]
null
[ "/cc @zijianjoy ", "I don't seem to reproduce the issue. Maybe it is because I don't understand it fully yet. Do you have an example pipeline I can run and inspect the logs for difference?", "@zijianjoy The issue is specifically with non-ascii characters. In our case, we have some logs that emit the emoji: โณ\r\n\r\nHere's some example logs from the \"Logs\" tab in the KFP web UI:\r\n<img width=\"450\" alt=\"Screen Shot 2022-01-18 at 20 16 41\" src=\"https://user-images.githubusercontent.com/133466/150045291-c03f4cdd-6f86-475c-a313-527bf727210d.png\">\r\n\r\nAnd here's the same logs when served via the minio server:\r\n<img width=\"408\" alt=\"Screen Shot 2022-01-18 at 20 18 02\" src=\"https://user-images.githubusercontent.com/133466/150045347-c4aec2a7-3d78-4534-8e7f-31cf177c4441.png\">\r\n\r\nSo, the text \"โณ Entering timer Training Keras model\" is show as \"รขยยณ Entering timer Training Keras model\" when served via minio.\r\n\r\nA pipeline that is just 1 task with code like `print(\"unicode stuff: ๐Ÿ‘ ๐Ÿคž ๐Ÿ˜Ž\")` should trigger the issue?", "Thank you @jli for the information. I have tested KFP backend and found that the header has already been `\r\ncontent-type: text/plain; charset=utf-8` for the logs. I am wondering if your suggestion is to convert response to unicode instead of ascii. \r\n\r\nReference: https://github.com/kubeflow/pipelines/blob/6fac61751b690b09846d0e7f2b657c95be884501/frontend/server/k8s-helper.ts#L243\r\n\r\nFeel free to contribute if this helps your use case.\r\n", "Also see Nodejs redirection: https://github.com/kubeflow/pipelines/blob/6fac61751b690b09846d0e7f2b657c95be884501/frontend/server/app.ts#L145", "Is this issue still open" ]
"2022-01-10T22:23:17"
"2022-02-17T04:17:54"
null
CONTRIBUTOR
null
### What steps did you take - Click on a task on the KFP run page. - On the Input/Output tab, click the minio:// link for `main-logs` to view raw logs. ### What happened: The logs file is served from the minio Express webserver without any character encoding header. This causes any non-ascii UTF-8 characters to be rendered as garbled text. ### What did you expect to happen: I think changing the Express server to always return UTF-8 is a reasonable default (i.e., `content-type: text/plain; charset=UTF-8`). ### Environment: * How do you deploy Kubeflow Pipelines (KFP)? standalone * KFP version: 1.7.1 * KFP SDK version: 1.6.2 ### Anything else you would like to add: There is no workaround for this with Google Chrome, as the option to change the character encoding was removed a while ago. ### Labels <!-- Please include labels below by uncommenting them to help us better triage issues --> /area frontend /area backend <!-- /area sdk --> <!-- /area testing --> <!-- /area samples --> <!-- /area components --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
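As a sketch of the proposed default (not the actual kfp frontend code), an Express handler can declare the charset explicitly when serving plain-text artifacts; the route path and helper below are placeholders.

```typescript
// Sketch only: the real artifact handler lives in frontend/server and differs.
import express from 'express';

const app = express();

// Placeholder for whatever actually reads the object from MinIO in the real server.
async function fetchArtifactSomehow(key: string): Promise<string> {
  return `logs for ${key}`;
}

app.get('/artifacts/get', async (req, res) => {
  const content = await fetchArtifactSomehow(String(req.query.key ?? ''));
  // Proposed default: always declare UTF-8 for plain-text artifact responses.
  res.set('Content-Type', 'text/plain; charset=utf-8');
  res.send(content);
});

app.listen(3000);
```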
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7158/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7157
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7157/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7157/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7157/events
https://github.com/kubeflow/pipelines/issues/7157
1,098,432,660
I_kwDOB-71UM5BeMCU
7,157
[feature] Improve frontend CSS for task display name overflow
{ "login": "jli", "id": 133466, "node_id": "MDQ6VXNlcjEzMzQ2Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/133466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jli", "html_url": "https://github.com/jli", "followers_url": "https://api.github.com/users/jli/followers", "following_url": "https://api.github.com/users/jli/following{/other_user}", "gists_url": "https://api.github.com/users/jli/gists{/gist_id}", "starred_url": "https://api.github.com/users/jli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jli/subscriptions", "organizations_url": "https://api.github.com/users/jli/orgs", "repos_url": "https://api.github.com/users/jli/repos", "events_url": "https://api.github.com/users/jli/events{/privacy}", "received_events_url": "https://api.github.com/users/jli/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." }, { "id": 2186355346, "node_id": "MDU6TGFiZWwyMTg2MzU1MzQ2", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/good%20first%20issue", "name": "good first issue", "color": "fef2c0", "default": true, "description": "" } ]
open
false
null
[]
null
[ "/cc @zijianjoy ", "Hello @jli , thank you so much for the feature request! I agree that this can improve user experience, feel free to contribute since you have already tried the CSS style. Also, if you hover over to the node, you can see the full name from your mouse.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2022-01-10T22:02:57"
"2022-04-19T07:26:59"
null
CONTRIBUTOR
null
### Feature Area /area frontend ### What feature would you like to see? Change the task display name CSS overflow properties so that long task names don't get truncated. ### What is the use case or pain point? We have somewhat longer task display names, which map to functions in our codebase (so that it's easy to find the relevant code in our codebase). This causes the names to be truncated in the KFP graph view, and this makes it hard to read and use: ![Screen Shot 2022-01-10 at 16 59 27](https://user-images.githubusercontent.com/133466/148846062-79a28a31-9b99-4cfd-b0f5-a2c00066a0ae.png) --- I experimented a bit, and just adding [`overflow-wrap: break-word`](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Text/Wrapping_Text#breaking_long_words) to the div with the display name helps a lot! (specifically, adding the CSS to the `div.label_<hash>` element which already has the `overflow: hidden` and `text-overflow: ellipsis` properties) (Very long names still disappear because the task boxes have fixed height. I think ideally the height could expand to show the entire name, though I can imagine that this is hard to change because the graph layout code assumes fixed box size.) ![Screen Shot 2022-01-10 at 16 59 57](https://user-images.githubusercontent.com/133466/148846116-eafc9d26-18ea-42fc-91af-b18853b1210b.png) ### Is there a workaround currently? No. We can shorten our task names to a point, but this isn't feasible for all our pipelines. For example, some pipelines involve training multiple models, and we use prefixes to disambiguate each sub-dag, and being forced to use short abbreviations hurts clarity/understandability. --- <!-- Don't delete message below to encourage users to support your feature request! --> Love this idea? Give it a ๐Ÿ‘. We prioritize fulfilling features with the most ๐Ÿ‘.
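For reference, the experiment described above amounts to something like the following CSS; the real class name is generated (`label_<hash>`), so `.label_example` is a stand-in.

```css
/* Sketch of the suggested tweak for the task display-name element. */
.label_example {
  overflow: hidden;
  text-overflow: ellipsis;
  overflow-wrap: break-word; /* proposed addition: wrap long task names */
}
```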
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7157/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7157/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7154
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7154/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7154/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7154/events
https://github.com/kubeflow/pipelines/issues/7154
1,096,931,488
I_kwDOB-71UM5BYdig
7,154
How does kubeflow pipeline retry_run know where to start?
{ "login": "Shuai-Xie", "id": 18352713, "node_id": "MDQ6VXNlcjE4MzUyNzEz", "avatar_url": "https://avatars.githubusercontent.com/u/18352713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shuai-Xie", "html_url": "https://github.com/Shuai-Xie", "followers_url": "https://api.github.com/users/Shuai-Xie/followers", "following_url": "https://api.github.com/users/Shuai-Xie/following{/other_user}", "gists_url": "https://api.github.com/users/Shuai-Xie/gists{/gist_id}", "starred_url": "https://api.github.com/users/Shuai-Xie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shuai-Xie/subscriptions", "organizations_url": "https://api.github.com/users/Shuai-Xie/orgs", "repos_url": "https://api.github.com/users/Shuai-Xie/repos", "events_url": "https://api.github.com/users/Shuai-Xie/events{/privacy}", "received_events_url": "https://api.github.com/users/Shuai-Xie/received_events", "type": "User", "site_admin": false }
[ { "id": 1682717392, "node_id": "MDU6TGFiZWwxNjgyNzE3Mzky", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/question", "name": "kind/question", "color": "2515fc", "default": false, "description": "" } ]
open
false
null
[]
null
[ "I'm not familiar with this, not sure if it's an implementation detail in k8s or Argo workflow.\r\n\r\nIf anyone knows, please feel free to chime in. " ]
"2022-01-08T13:43:22"
"2022-02-17T23:43:39"
null
NONE
null
At first, I thought Kubeflow runs know where to resume from the status of all the existing pipeline pods. However, I tried `terminate_run`, deleted some `Completed` pipeline pods, and then used `retry_run` to restart the run. Surprisingly, the retried run still started at the right checkpoint. I guess some state is stored in the database, but I am not sure. Thanks very much.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7154/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7148
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7148/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7148/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7148/events
https://github.com/kubeflow/pipelines/issues/7148
1,096,599,334
I_kwDOB-71UM5BXMcm
7,148
[frontend] Unable to build frontend image because of `postcss` module not found
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-01-07T19:18:41"
"2022-01-07T23:07:18"
"2022-01-07T23:07:18"
COLLABORATOR
null
During the presubmit build, the frontend image build cannot find the `postcss` module, as shown in the error message below: ``` Step #2 - "frontend": Cannot find module 'postcss' Step #2 - "frontend": Require stack: Step #2 - "frontend": - /root/.npm/_npx/31/lib/node_modules/tailwindcss/lib/lib/generateRules.js Step #2 - "frontend": - /root/.npm/_npx/31/lib/node_modules/tailwindcss/lib/lib/expandTailwindAtRules.js Step #2 - "frontend": - /root/.npm/_npx/31/lib/node_modules/tailwindcss/lib/processTailwindFeatures.js Step #2 - "frontend": - /root/.npm/_npx/31/lib/node_modules/tailwindcss/lib/cli.js ``` --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7148/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7142
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7142/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7142/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7142/events
https://github.com/kubeflow/pipelines/issues/7142
1,094,780,059
I_kwDOB-71UM5BQQSb
7,142
01/05/2022 Presubmit kubeflow-pipelines-tfx-python37 failure
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Pending the following change in tensorflow_transform being picked up by TFX:\r\n\r\nhttps://github.com/tensorflow/transform/commit/e5e0560e092b3ba66138f2250cc31716c5405081#diff-be77950e10f2a3497862328c336509a3dc956d4b90bb81ae8c17f8e050e1a2b7" ]
"2022-01-05T22:02:54"
"2022-01-06T00:11:26"
"2022-01-06T00:11:26"
COLLABORATOR
null
https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/7112/kubeflow-pipelines-tfx-python37/1478839808884215808 ``` 2022-01-05 21:32:19.957501: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "/home/prow/go/src/github.com/kubeflow/pipelines/tfx/tfx/orchestration/kubeflow/kubeflow_dag_runner_test.py", line 23, in <module> from tfx.components.statistics_gen import component as statistics_gen_component File "/usr/local/lib/python3.7/site-packages/tfx/components/__init__.py", line 29, in <module> from tfx.components.transform.component import Transform File "/usr/local/lib/python3.7/site-packages/tfx/components/transform/component.py", line 19, in <module> from tfx.components.transform import executor File "/usr/local/lib/python3.7/site-packages/tfx/components/transform/executor.py", line 26, in <module> import tensorflow_transform as tft File "/usr/local/lib/python3.7/site-packages/tensorflow_transform/__init__.py", line 18, in <module> from tensorflow_transform import experimental File "/usr/local/lib/python3.7/site-packages/tensorflow_transform/experimental/__init__.py", line 16, in <module> from tensorflow_transform.experimental.analyzers import * File "/usr/local/lib/python3.7/site-packages/tensorflow_transform/experimental/analyzers.py", line 30, in <module> from tensorflow_transform import analyzer_nodes File "/usr/local/lib/python3.7/site-packages/tensorflow_transform/analyzer_nodes.py", line 31, in <module> from future.utils import with_metaclass ModuleNotFoundError: No module named 'future' ```
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7142/timeline
null
completed
null
null
false
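The TFX presubmit failure above boils down to `tensorflow_transform` importing `future.utils` while the `future` package is absent from the image. Below is a minimal sketch of a guard that installs the missing dependency before importing TFX; the `ensure_package` helper and the decision to install at runtime are assumptions for illustration, not the actual fix (which, per the comment above, landed in tensorflow_transform itself).

```python
# Minimal sketch, assuming the test environment allows runtime pip installs.
# 'ensure_package' is a hypothetical helper, not part of KFP or TFX.
import importlib.util
import subprocess
import sys


def ensure_package(name: str) -> None:
    """Install a package with pip if it cannot already be imported."""
    if importlib.util.find_spec(name) is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])


# tensorflow_transform's analyzer_nodes does 'from future.utils import with_metaclass',
# so make sure 'future' is importable before pulling in TFX components.
ensure_package("future")

import tensorflow_transform as tft  # noqa: E402  (deliberately imported after the guard)

print(tft.__version__)
```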
https://api.github.com/repos/kubeflow/pipelines/issues/7140
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7140/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7140/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7140/events
https://github.com/kubeflow/pipelines/issues/7140
1,094,058,636
I_kwDOB-71UM5BNgKM
7,140
[backend] Importing workflow yaml file is not creating container in Azure
{ "login": "rajendra-tamboli", "id": 42729698, "node_id": "MDQ6VXNlcjQyNzI5Njk4", "avatar_url": "https://avatars.githubusercontent.com/u/42729698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajendra-tamboli", "html_url": "https://github.com/rajendra-tamboli", "followers_url": "https://api.github.com/users/rajendra-tamboli/followers", "following_url": "https://api.github.com/users/rajendra-tamboli/following{/other_user}", "gists_url": "https://api.github.com/users/rajendra-tamboli/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajendra-tamboli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajendra-tamboli/subscriptions", "organizations_url": "https://api.github.com/users/rajendra-tamboli/orgs", "repos_url": "https://api.github.com/users/rajendra-tamboli/repos", "events_url": "https://api.github.com/users/rajendra-tamboli/events{/privacy}", "received_events_url": "https://api.github.com/users/rajendra-tamboli/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "cc @aronchick ", "Possible related: https://github.com/kubeflow/pipelines/issues/5714. Please switch to emissary executor if you are on K8s 1.19 or above.", "Thanks for the feedback. We've run the following command to convert to emissary executor and got confirmation on the changes as well - \r\nkubectl patch configmap workflow-controller-configmap --patch '{\"data\":{\"containerRuntimeExecutor\":\"emissary\"}}'\r\n\r\nHowever when we tried to create a sample pipeline again, it is still taking docker executor only --- \r\nEvents:\r\n Type Reason Age From Message\r\n ---- ------ ---- ---- -------\r\n Normal Scheduled 15m default-scheduler Successfully assigned data/conditional-execution-pipeline-with-exit-handler-wvsm9-3171801703 to aks-kubeflow-11663677-vmss000000\r\n Warning FailedMount 13m kubelet Unable to attach or mount volumes: unmounted volumes=[docker-sock], unattached volumes=[docker-sock mlpipeline-minio-artifact default-editor-token-kpgwz podmetadata]: timed out waiting for the condition ", "What is your Kubernetes version?\r\nWhat is your KFP backend version?", "Kubernetes version - 1.20.9\r\nKFP version - 1.8.10", "You will have to configure `containerd` as the container runtime in your AKS cluster: https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads#nodes-and-node-pools.\r\n\r\nSee this section:\r\n`Container Runtime: Allows containerized applications to run and interact with additional resources, such as the virtual network and storage. AKS clusters using Kubernetes version 1.19+ for Linux node pools use containerd as their container runtime. Beginning in Kubernetes version 1.20 for Windows node pools, containerd can be used in preview for the container runtime, but Docker is still the default container runtime. AKS clusters using prior versions of Kubernetes for node pools use Docker as their container runtime.\r\n`", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2022-01-05T07:26:51"
"2022-04-18T23:26:50"
null
NONE
null
### Environment - Kubeflow is deployed in Azure Kubernetes Service * KFP version: build version v1beta1 ### Steps to reproduce 1. installed kubeflow SDK 2. Created a workflow using a sample repo (https://github.com/FernandoLpz/Kubeflow_Pipelines.git) 3. Created docker image and pushed the same to ACI (Azure Container Registry) 4. created Firstpipeline.yaml file and imported the same via Kubeflow pipeline UI 5. Getting attached error that "This step is in Pending state with this message: ContainerCreating" and the container is not getting created. [](url) Below is the Error using Command kubectl describe pods first-pipeline-k9qs5-2780402892 -n data Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12m default-scheduler Successfully assigned data/first-pipeline-k9qs5-2780402892 to aks-kubeflow-11663677-vmss000001 Warning FailedMount 6m37s (x3 over 10m) kubelet Unable to attach or mount volumes: unmounted volumes=[docker-sock], unattached volumes=[default-editor-token-kpgwz podmetadata docker-sock mlpipeline-minio-artifact]: timed out waiting for the condition Warning FailedMount 2m30s (x2 over 4m34s) kubelet Unable to attach or mount volumes: unmounted volumes=[docker-sock], unattached volumes=[docker-sock mlpipeline-minio-artifact default-editor-token-kpgwz podmetadata]: timed out waiting for the condition Warning FailedMount 28s (x14 over 12m) kubelet MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file Warning FailedMount 27s kubelet Unable to attach or mount volumes: unmounted volumes=[docker-sock], unattached volumes=[podmetadata docker-sock mlpipeline-minio-artifact default-editor-token-kpgwz]: timed out waiting for the condition ### Expected result It should deploy the containers in AKS and should be able to execute the pipeline workflow ### Materials and Reference https://towardsdatascience.com/kubeflow-pipelines-how-to-build-your-first-kubeflow-pipeline-from-scratch-2424227f7e5 ![image](https://user-images.githubusercontent.com/42729698/148177147-e2f41cb8-8919-4a67-ae59-cfc5b362de01.png) --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7140/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7140/timeline
null
null
null
null
false
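The comments above resolve the docker-sock mount failure by switching the Argo workflow controller to the emissary executor. Below is a hedged sketch of the same patch done with the official Kubernetes Python client instead of the quoted kubectl command; the `kubeflow` namespace and kubeconfig-based access are assumptions that may need adjusting for a given deployment.

```python
# Minimal sketch, assuming kubeconfig access and that KFP lives in the "kubeflow" namespace.
# Equivalent to:
#   kubectl patch configmap workflow-controller-configmap -n kubeflow \
#     --patch '{"data":{"containerRuntimeExecutor":"emissary"}}'
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

core_v1.patch_namespaced_config_map(
    name="workflow-controller-configmap",
    namespace="kubeflow",
    body={"data": {"containerRuntimeExecutor": "emissary"}},
)
print("Patched workflow-controller-configmap to use the emissary executor.")
```

After the patch, newly created pipeline pods should stop requesting the `/var/run/docker.sock` hostPath mount that fails on containerd-based AKS nodes.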
https://api.github.com/repos/kubeflow/pipelines/issues/7135
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7135/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7135/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7135/events
https://github.com/kubeflow/pipelines/issues/7135
1,092,048,123
I_kwDOB-71UM5BF1T7
7,135
[bug] pipeline_manifest not returned by the api
{ "login": "jynx10", "id": 24508041, "node_id": "MDQ6VXNlcjI0NTA4MDQx", "avatar_url": "https://avatars.githubusercontent.com/u/24508041?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jynx10", "html_url": "https://github.com/jynx10", "followers_url": "https://api.github.com/users/jynx10/followers", "following_url": "https://api.github.com/users/jynx10/following{/other_user}", "gists_url": "https://api.github.com/users/jynx10/gists{/gist_id}", "starred_url": "https://api.github.com/users/jynx10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jynx10/subscriptions", "organizations_url": "https://api.github.com/users/jynx10/orgs", "repos_url": "https://api.github.com/users/jynx10/repos", "events_url": "https://api.github.com/users/jynx10/events{/privacy}", "received_events_url": "https://api.github.com/users/jynx10/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "> I expected both the pipeline_manifest and workflow_manifest properties to be returned as part of the apiPipelineRuntime Object.\r\n\r\n@jynx10 why would you expect this, is this the behavior in some old version?\r\n\r\nJust by looking at the code, it seems to be by design that only one of them is returned, and `workflow_manifest` is looked first: https://github.com/kubeflow/pipelines/blob/2e945750cb1758eea6db8453b437e57e68152b4a/backend/src/apiserver/resource/resource_manager.go#L995-L999\r\n\r\n", "Hi,\r\nSo this is a little summary of what code we used, what output we got and the workaround we use now.\r\nI used Postman to test an api get request to localhost:8080/apis/v1beta1/runs/{run_id}.\r\n\r\nThe output i got:\r\nThe API request only returned the workflow manifest and not the pipeline_manifest, we miss understood that only one of the two should be returned and that the API is working as expected.\r\n\r\nThe code we used to workaround not getting the pipeline_manifest:\r\n`\r\npipeline_run_detail = client.get_run(workflow_uid)\r\nexperiment.log_asset_data(json.dumps(pipeline_run_detail.to_dict(), default=str), overwrite=True)\r\n`\r\n\r\n`\r\npipeline_state = json.loads(pipeline_run_detail.pipeline_runtime.workflow_manifest)['status']\r\n`", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2022-01-02T15:59:21"
"2022-04-18T17:26:53"
null
NONE
null
### What steps did you take Sent a request to find a specific run by id (to the /apis/v1beta1/runs/{run_id} endpoint). ### What happened: In the response I received only the workflow_manifest property. ### What did you expect to happen: I expected both the pipeline_manifest and workflow_manifest properties to be returned as part of the apiPipelineRuntime Object. ### Environment: * How do you deploy Kubeflow Pipelines (KFP)? * I started a minikube cluster and applied the kubeflow pipelines to it. * KFP version: Version 1.7.0 * KFP SDK version: kfp 1.8.10 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ### Labels /area backend /area sdk --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7135/timeline
null
null
null
null
false
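The workaround quoted in the comments above (read the run status out of `workflow_manifest`, since `pipeline_manifest` is not populated) can be written as a small self-contained script. The host URL and run id below are placeholders, and the single-manifest behavior is taken from the discussion above rather than from separate verification.

```python
# Minimal sketch, assuming a reachable single-user KFP endpoint; host and run_id are placeholders.
import json

import kfp

client = kfp.Client(host="http://localhost:8080")
run_detail = client.get_run(run_id="<your-run-id>")

# Per the discussion above, pipeline_manifest is typically empty and only
# workflow_manifest (the Argo workflow) is returned, so read the status from it.
workflow = json.loads(run_detail.pipeline_runtime.workflow_manifest)
print(workflow["status"]["phase"])  # e.g. "Running" or "Succeeded"
```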
https://api.github.com/repos/kubeflow/pipelines/issues/7133
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7133/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7133/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7133/events
https://github.com/kubeflow/pipelines/issues/7133
1,091,936,408
I_kwDOB-71UM5BFaCY
7,133
[backend] Use SDK Client get 500 Internal error: Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header.\nFailed to authorize with API resource references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github
{ "login": "631068264", "id": 8144089, "node_id": "MDQ6VXNlcjgxNDQwODk=", "avatar_url": "https://avatars.githubusercontent.com/u/8144089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/631068264", "html_url": "https://github.com/631068264", "followers_url": "https://api.github.com/users/631068264/followers", "following_url": "https://api.github.com/users/631068264/following{/other_user}", "gists_url": "https://api.github.com/users/631068264/gists{/gist_id}", "starred_url": "https://api.github.com/users/631068264/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/631068264/subscriptions", "organizations_url": "https://api.github.com/users/631068264/orgs", "repos_url": "https://api.github.com/users/631068264/repos", "events_url": "https://api.github.com/users/631068264/events{/privacy}", "received_events_url": "https://api.github.com/users/631068264/received_events", "type": "User", "site_admin": false }
[ { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "You're on multi-user mode, and as called out in the last paragraph in https://www.kubeflow.org/docs/components/pipelines/sdk/connect-api/#connect-to-kubeflow-pipelines-from-outside-your-cluster:\r\n\r\n> for Kubeflow Pipelines in multi-user mode, you cannot access the API using kubectl port-forward because it requires authentication\r\n\r\nSo you may consider connect from the same cluster following this instruction: https://www.kubeflow.org/docs/components/pipelines/sdk/connect-api/#multi-user-mode\r\n", "So how to change into Non-multi-user mode ? If I use this mode , can I access by without token?", "A correction on my previous rely: you can still connect to KFP with multi-user mode from a local client, but there's additional setup/configuration needed. Here're some doc on how to do it if you deployed KFP on GCP: https://www.kubeflow.org/docs/distributions/gke/pipelines/authentication-sdk/#connecting-to-kubeflow-pipelines-in-a-full-kubeflow-deployment\r\n\r\n> So how to change into Non-multi-user mode ? If I use this mode , can I access by without token?\r\n\r\nI checked with the team. You might not be able to switch the mode after deployment. But if you start over, you can change the manifest before deploying: \r\nhttps://github.com/kubeflow/manifests/blob/d36fc9c0555c936c7b71fd273b8e4604985ebba8/apps/pipeline/upstream/base/installs/multi-user/api-service/params.env#L1\r\nAlternatively, if you only care about Kubeflow Pipelines but not the rest components in Kubeflow, you can deploy Kubeflow Pipelines via standalone deployment:\r\nhttps://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/", "Thank you for your help @chensun . After starting over with `MULTIUSER` set to `false` as suggested in https://github.com/kubeflow/pipelines/issues/7133#issuecomment-1009688565 I had no more authorization errors when using kfp.Compiler, but now I can no longer create, view or select experiments in GUI:\r\n\r\n```\r\nError retrieving resources\r\nList request failed with:\r\n{\"error\":\"Invalid input error: In single-user mode, ListExperiment cannot filter by namespace.\",\"code\":3,\"message\":\"Invalid input error: In single-user mode, ListExperiment cannot filter by namespace.\",\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"In single-user mode, ListExperiment cannot filter by namespace.\",\"error_details\":\"Invalid input error: In single-user mode, ListExperiment cannot filter by namespace.\"}]}\r\n```\r\n\r\nPlease let me know if single-user mode should be still supported in GUI and I will open an issue for this.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. 
Thank you for your contributions.\n", "\r\n\r\n\r\n\r\n> ๆ›ดๆญฃๆˆ‘ไน‹ๅ‰็š„ไพ่ต–๏ผšๆ‚จไป็„ถๅฏไปฅไปŽๆœฌๅœฐๅฎขๆˆท็ซฏไปฅๅคš็”จๆˆทๆจกๅผ่ฟžๆŽฅๅˆฐ KFP๏ผŒไฝ†้œ€่ฆ้ขๅค–็š„่ฎพ็ฝฎ/้…็ฝฎใ€‚ๅฆ‚ๆžœๆ‚จๅœจ GCP ไธŠ้ƒจ็ฝฒไบ† KFP๏ผŒ่ฟ™้‡Œๆœ‰ไธ€ไบ›ๅ…ณไบŽๅฆ‚ไฝ•ๆ‰ง่กŒๆญคๆ“ไฝœ็š„ๆ–‡ๆกฃ๏ผš https: [//www.kubeflow.org/docs/distributions/gke/pipelines/authentication-sdk/#connecting-to-kubeflow-pipelines-in-ๅฎŒๆ•ด็š„ kubeflow ้ƒจ็ฝฒ](https://www.kubeflow.org/docs/distributions/gke/pipelines/authentication-sdk/#connecting-to-kubeflow-pipelines-in-a-full-kubeflow-deployment)\r\n> \r\n> > ้‚ฃไนˆๅฆ‚ไฝ•ๅ˜ๆˆ้žๅคš็”จๆˆทๆจกๅผๅ‘ข๏ผŸๅฆ‚ๆžœๆˆ‘ไฝฟ็”จ่ฟ™็งๆจกๅผ๏ผŒๆˆ‘ๅฏไปฅไธ็”จไปค็‰Œ่ฎฟ้—ฎๅ—๏ผŸ\r\n> \r\n> ๆˆ‘ๅ’Œๅ›ข้˜Ÿๆ ธๅฎž่ฟ‡ใ€‚้ƒจ็ฝฒๅŽๆ‚จๅฏ่ƒฝๆ— ๆณ•ๅˆ‡ๆขๆจกๅผใ€‚ไฝ†ๆ˜ฏๅฆ‚ๆžœไฝ ้‡ๆ–ฐๅผ€ๅง‹๏ผŒไฝ ๅฏไปฅๅœจ้ƒจ็ฝฒไน‹ๅ‰ๆ›ดๆ”นๆธ…ๅ•๏ผš [https://github.com/kubeflow/manifests/blob/d36fc9c0555c936c7b71fd273b8e4604985ebba8/apps/pipeline/upstream/base/installs/multi-user/api-service/paramsใ€‚ env#L1](https://github.com/kubeflow/manifests/blob/d36fc9c0555c936c7b71fd273b8e4604985ebba8/apps/pipeline/upstream/base/installs/multi-user/api-service/params.env#L1) ๆˆ–่€…๏ผŒๅฆ‚ๆžœๆ‚จๅชๅ…ณๅฟƒ Kubeflow ็ฎก้“่€Œไธๅ…ณๅฟƒ Kubeflow ไธญ็š„ๅ…ถไป–็ป„ไปถ๏ผŒๅˆ™ๅฏไปฅ้€š่ฟ‡็‹ฌ็ซ‹้ƒจ็ฝฒๆฅ้ƒจ็ฝฒ Kubeflow ็ฎก้“๏ผš [https ://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment /](https://www.kubeflow.org/docs/components/pipelines/installation/standalone-deployment/)\r\n\r\n่ฟ™ๆ ทๅšไน‹ๅŽๅ‡บ็Žฐไบ†ไธ‹้ข่ฟ™ไธช้—ฎ้ข˜๏ผŒ่ฏท้—ฎๆœ‰ไป€ไนˆ่งฃๅ†ณ็š„ๆ–นๆณ•ๅ—\r\n\r\nError retrieving resources\r\nList request failed with:\r\n{\"error\":\"Invalid input error: In single-user mode, ListExperiment cannot filter by namespace.\",\"code\":3,\"message\":\"Invalid input error: In single-user mode, ListExperiment cannot filter by namespace.\",\"details\":[{\"@type\":\"type.googleapis.com/api.Error\",\"error_message\":\"In single-user mode, ListExperiment cannot filter by namespace.\",\"error_details\":\"Invalid input error: In single-user mode, ListExperiment cannot filter by namespace.\"}]}" ]
"2022-01-02T02:27:59"
"2023-04-29T15:26:35"
null
NONE
null
### Environment Install follow https://github.com/kubeflow/manifests in v1.4.1 KFP version: 1.7.0 KFP SDK version: build version dev_local k3s Kubernetes 1.19 ### Steps to reproduce use demo example to add pipline kfp 1.8.10 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 https://www.kubeflow.org/docs/components/pipelines/sdk-v2/v2-compatibility/ https://www.kubeflow.org/docs/components/pipelines/sdk/connect-api/ use ``` kubectl port-forward --address 0.0.0.0 svc/ml-pipeline-ui 3000:80 --namespace kubeflow ``` ```python import kfp import kfp.dsl as dsl from kfp.v2.dsl import component @component( base_image="library/python:3.7" ) def add(a: float, b: float) -> float: '''Calculates sum of two arguments''' return a + b @dsl.pipeline( name='v2add', description='An example pipeline that performs addition calculations.', # pipeline_root='gs://my-pipeline-root/example-pipeline' ) def add_pipeline(a: float = 1, b: float = 7): add_task = add(a, b) client = kfp.Client('http://xxxx:3000') client.create_run_from_pipeline_func( add_pipeline, launcher_image='library/gcr.io/ml-pipeline/kfp-launcher:1.8.7', namespace='kubeflow-user-example-com', experiment_name='test', arguments={'a': 7, 'b': 8}, mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE, service_account='default' ) ``` Get error ``` kfp_server_api.exceptions.ApiException: (500) Reason: Internal Server Error HTTP response headers: HTTPHeaderDict({'X-Powered-By': 'Express', 'content-type': 'application/json', 'date': 'Sun, 02 Jan 2022 02:13:12 GMT', 'x-envoy-upstream-service-time': '2', 'server': 'envoy', 'connection': 'close', 'transfer-encoding': 'chunked'}) HTTP response body: {"error":"Internal error: Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header.\nFailed to authorize with API resource references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).canAccessExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:249\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:148\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1089\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1091\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357\nFailed to authorize with API resource 
references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:150\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1089\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1091\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357","code":13,"message":"Internal error: Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header.\nFailed to authorize with API resource references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).canAccessExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:249\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:148\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1089\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1091\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357\nFailed to authorize with API resource 
references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:150\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1089\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1091\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357","details":[{"@type":"type.googleapis.com/api.Error","error_message":"Internal error: Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header.\nFailed to authorize with API resource references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).canAccessExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:249\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:148\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1089\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1091\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357\nFailed to authorize with API resource 
references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:150\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1089\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1091\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357","error_details":"Internal error: Unauthenticated: Request header error: there is no user identity header.: Request header error: there is no user identity header.\nFailed to authorize with API resource references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).canAccessExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:249\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:148\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1089\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1091\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357\nFailed to authorize with API resource 
references\ngithub.com/kubeflow/pipelines/backend/src/common/util.Wrap\n\t/go/src/github.com/kubeflow/pipelines/backend/src/common/util/error.go:275\ngithub.com/kubeflow/pipelines/backend/src/apiserver/server.(*ExperimentServer).ListExperiment\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/server/experiment_server.go:150\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler.func1\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1089\nmain.apiServerInterceptor\n\t/go/src/github.com/kubeflow/pipelines/backend/src/apiserver/interceptor.go:30\ngithub.com/kubeflow/pipelines/backend/api/go_client._ExperimentService_ListExperiment_Handler\n\t/go/src/github.com/kubeflow/pipelines/backend/api/go_client/experiment.pb.go:1091\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1210\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:1533\ngoogle.golang.org/grpc.(*Server).serveStreams.func1.2\n\t/go/pkg/mod/google.golang.org/grpc@v1.34.0/server.go:871\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}]} ``` ### Expected result <!-- What should the correct behavior be? --> ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7133/timeline
null
null
null
null
false
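The "no user identity header" error above comes from calling a multi-user KFP API server without any authenticated identity attached to the request. A hedged sketch of one way to attach it from outside the cluster is shown below; it assumes a Dex-protected deployment that issues an `authservice_session` cookie, and every host, cookie, and namespace value is a placeholder.

```python
# Minimal sketch, assuming a Dex-protected multi-user deployment reached through the
# Istio ingress gateway; all values below are placeholders for your own environment.
import kfp

session_cookie = "authservice_session=<cookie-copied-from-your-browser>"  # hypothetical value

client = kfp.Client(
    host="http://<istio-ingress-host>/pipeline",
    cookies=session_cookie,
    namespace="kubeflow-user-example-com",
)

# With an identity attached, ListExperiment no longer fails with
# "there is no user identity header".
print(client.list_experiments(namespace="kubeflow-user-example-com"))
```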
https://api.github.com/repos/kubeflow/pipelines/issues/7132
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7132/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7132/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7132/events
https://github.com/kubeflow/pipelines/issues/7132
1,091,837,023
I_kwDOB-71UM5BFBxf
7,132
[backend] Pipeline first step stuck in running state even after completing
{ "login": "RobinKa", "id": 2614101, "node_id": "MDQ6VXNlcjI2MTQxMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2614101?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RobinKa", "html_url": "https://github.com/RobinKa", "followers_url": "https://api.github.com/users/RobinKa/followers", "following_url": "https://api.github.com/users/RobinKa/following{/other_user}", "gists_url": "https://api.github.com/users/RobinKa/gists{/gist_id}", "starred_url": "https://api.github.com/users/RobinKa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RobinKa/subscriptions", "organizations_url": "https://api.github.com/users/RobinKa/orgs", "repos_url": "https://api.github.com/users/RobinKa/repos", "events_url": "https://api.github.com/users/RobinKa/events{/privacy}", "received_events_url": "https://api.github.com/users/RobinKa/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Using the pns executor instead makes everything work as described [here](https://github.com/kubeflow/manifests#kubeflow-pipelines)\r\n\r\n`kustomize build apps/pipeline/upstream/env/platform-agnostic-multi-user-pns | kubectl apply -f -`\r\n\r\nSo I assume I made a mistake in my Docker setup? Although not much about docker is mentioned in the manifests readme.", "Hello @RobinKa , you can switch over to emissary executor since that is going to be the default executor going forward. https://github.com/kubeflow/pipelines/issues/5714", "Even with proper emissary executor , sometimes component will stuck " ]
"2022-01-01T15:25:11"
"2023-04-12T13:15:19"
"2022-04-13T17:17:43"
NONE
null
Hey, hope this is the right place to post this issue at. I'm new to Kubeflow and Kubernetes so please let me know what else would be useful to know. ### Environment * How did you deploy Kubeflow Pipelines (KFP): Installed Kubeflow on Kubernetes 1.19 with manifests, see below * KFP version: 1.7.0 * KFP SDK version: build version dev_local * Server specs: 8 CPUs, 16GB RAM, 240GB SSD ([Hetzner Cloud CPX41](https://www.hetzner.com/cloud)), Ubuntu 20.04 ### Steps to reproduce 1. Install KubeFlow on Kubernetes 1.19 (used K3S) with manifests, full setup script in materials below 2. Go to Kubeflow dashboard 3. Start a Pipeline run for `[Tutorial] DSL - Control structures` 4. First step completes successfully (eg. logs "tails"), but stays stuck in running state Terminating the run does nothing. I also tried running other pipelines and the result is the same. ![image](https://user-images.githubusercontent.com/2614101/147853757-3a38fa83-e310-4361-bc76-2f9e34a9d613.png) ### Expected result The pipeline step should complete and run the rest of the pipeline. ### Materials and Reference #### Setup on Ubuntu 20.04 server from scratch ``` sudo apt update -y && sudo apt upgrade -y # Install docker sudo apt install ca-certificates curl gnupg lsb-release curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null sudo apt update sudo apt install -y docker-ce docker-ce-cli containerd.io # Install k3s 1.19 (I tried 1.20 too which had the same issue, but 1.21 is too new for manifests) export INSTALL_K3S_VERSION="v1.19.16%2Bk3s1" curl -sfL https://get.k3s.io | sh - # Get Kustomize 3.2.0 cd /opt/ wget https://github.com/kubernetes-sigs/kustomize/releases/download/v3.2.0/kustomize_3.2.0_linux_amd64 chmod +x kustomize_3.2.0_linux_amd64 ln -s /opt/kustomize_3.2.0_linux_amd64 /usr/bin/kustomize # Install Kubeflow using manifests git clone https://github.com/kubeflow/manifests.git cd manifests while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done # Portforward Kubeflow dashboard in new tmux session tmux new -d -s kubeflow-dashboard-portforward "kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80" ``` #### kubectl get pods output ![image](https://user-images.githubusercontent.com/2614101/147853826-028369ab-aaf6-4191-a4cb-d353cbccc1a0.png) #### kubectl logs conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 -c wait -n kubeflow-user-example-com ``` ... 
time="2022-01-01T15:31:50.462Z" level=info msg="listed containers" containers="map[]" time="2022-01-01T15:31:51.462Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=kubeflow-user-example-com --filter=label=io.kubernetes.pod.name=conditional-execution-pipeline-with-exit-handler-scjtr-3243716801" time="2022-01-01T15:31:51.498Z" level=info msg="listed containers" containers="map[]" time="2022-01-01T15:31:52.498Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=kubeflow-user-example-com --filter=label=io.kubernetes.pod.name=conditional-execution-pipeline-with-exit-handler-scjtr-3243716801" time="2022-01-01T15:31:52.525Z" level=info msg="listed containers" containers="map[]" time="2022-01-01T15:31:53.525Z" level=info msg="docker ps --all --no-trunc --format={{.Status}}|{{.Label \"io.kubernetes.container.name\"}}|{{.ID}}|{{.CreatedAt}} --filter=label=io.kubernetes.pod.namespace=kubeflow-user-example-com --filter=label=io.kubernetes.pod.name=conditional-execution-pipeline-with-exit-handler-scjtr-3243716801" ... (keeps going) ``` #### Step Events tab ``` kind: EventList apiVersion: v1 metadata: selfLink: /api/v1/namespaces/kubeflow-user-example-com/events resourceVersion: '27545' items: - metadata: name: >- conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd37b36b69a namespace: kubeflow-user-example-com selfLink: >- /api/v1/namespaces/kubeflow-user-example-com/events/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd37b36b69a uid: 76728849-0449-4142-a4a1-bf839192d0a2 resourceVersion: '6986' creationTimestamp: '2022-01-01T15:05:00Z' managedFields: - manager: k3s operation: Update apiVersion: events.k8s.io/v1 time: '2022-01-01T15:05:00Z' fieldsType: FieldsV1 fieldsV1: 'f:action': {} 'f:eventTime': {} 'f:note': {} 'f:reason': {} 'f:regarding': 'f:apiVersion': {} 'f:kind': {} 'f:name': {} 'f:namespace': {} 'f:resourceVersion': {} 'f:uid': {} 'f:reportingController': {} 'f:reportingInstance': {} 'f:type': {} involvedObject: kind: Pod namespace: kubeflow-user-example-com name: conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 uid: 542e569f-178b-42eb-a7e7-d07ea643178d apiVersion: v1 resourceVersion: '6983' reason: Scheduled message: >- Successfully assigned kubeflow-user-example-com/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 to ubuntu-2gb-fsn1-2 source: {} firstTimestamp: null lastTimestamp: null type: Normal eventTime: '2022-01-01T15:05:00.551652Z' action: Binding reportingComponent: default-scheduler reportingInstance: default-scheduler-ubuntu-2gb-fsn1-2 - metadata: name: >- conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd39bb346f0 namespace: kubeflow-user-example-com selfLink: >- /api/v1/namespaces/kubeflow-user-example-com/events/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd39bb346f0 uid: 72582b0d-a333-4284-8f04-d90e11547f28 resourceVersion: '6997' creationTimestamp: '2022-01-01T15:05:01Z' managedFields: - manager: k3s operation: Update apiVersion: v1 time: '2022-01-01T15:05:01Z' fieldsType: FieldsV1 fieldsV1: 'f:count': {} 'f:firstTimestamp': {} 'f:involvedObject': 'f:apiVersion': {} 'f:fieldPath': {} 'f:kind': {} 'f:name': {} 'f:namespace': {} 'f:resourceVersion': {} 'f:uid': {} 'f:lastTimestamp': {} 'f:message': {} 
'f:reason': {} 'f:source': 'f:component': {} 'f:host': {} 'f:type': {} involvedObject: kind: Pod namespace: kubeflow-user-example-com name: conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 uid: 542e569f-178b-42eb-a7e7-d07ea643178d apiVersion: v1 resourceVersion: '6984' fieldPath: 'spec.containers{wait}' reason: Pulling message: >- Pulling image "gcr.io/ml-pipeline/argoexec:v3.1.6-patch-license-compliance" source: component: kubelet host: ubuntu-2gb-fsn1-2 firstTimestamp: '2022-01-01T15:05:01Z' lastTimestamp: '2022-01-01T15:05:01Z' count: 1 type: Normal eventTime: null reportingComponent: '' reportingInstance: '' - metadata: name: >- conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4e0da1c08 namespace: kubeflow-user-example-com selfLink: >- /api/v1/namespaces/kubeflow-user-example-com/events/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4e0da1c08 uid: 7ea0cdf6-001a-4432-8e74-bd5a1e80bfcb resourceVersion: '7122' creationTimestamp: '2022-01-01T15:05:06Z' managedFields: - manager: k3s operation: Update apiVersion: v1 time: '2022-01-01T15:05:06Z' fieldsType: FieldsV1 fieldsV1: 'f:count': {} 'f:firstTimestamp': {} 'f:involvedObject': 'f:apiVersion': {} 'f:fieldPath': {} 'f:kind': {} 'f:name': {} 'f:namespace': {} 'f:resourceVersion': {} 'f:uid': {} 'f:lastTimestamp': {} 'f:message': {} 'f:reason': {} 'f:source': 'f:component': {} 'f:host': {} 'f:type': {} involvedObject: kind: Pod namespace: kubeflow-user-example-com name: conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 uid: 542e569f-178b-42eb-a7e7-d07ea643178d apiVersion: v1 resourceVersion: '6984' fieldPath: 'spec.containers{wait}' reason: Pulled message: >- Successfully pulled image "gcr.io/ml-pipeline/argoexec:v3.1.6-patch-license-compliance" in 5.455115342s source: component: kubelet host: ubuntu-2gb-fsn1-2 firstTimestamp: '2022-01-01T15:05:06Z' lastTimestamp: '2022-01-01T15:05:06Z' count: 1 type: Normal eventTime: null reportingComponent: '' reportingInstance: '' - metadata: name: >- conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4f2b7ba3b namespace: kubeflow-user-example-com selfLink: >- /api/v1/namespaces/kubeflow-user-example-com/events/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4f2b7ba3b uid: b52af29f-96d3-4da0-900b-73fce909d9fe resourceVersion: '7123' creationTimestamp: '2022-01-01T15:05:06Z' managedFields: - manager: k3s operation: Update apiVersion: v1 time: '2022-01-01T15:05:06Z' fieldsType: FieldsV1 fieldsV1: 'f:count': {} 'f:firstTimestamp': {} 'f:involvedObject': 'f:apiVersion': {} 'f:fieldPath': {} 'f:kind': {} 'f:name': {} 'f:namespace': {} 'f:resourceVersion': {} 'f:uid': {} 'f:lastTimestamp': {} 'f:message': {} 'f:reason': {} 'f:source': 'f:component': {} 'f:host': {} 'f:type': {} involvedObject: kind: Pod namespace: kubeflow-user-example-com name: conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 uid: 542e569f-178b-42eb-a7e7-d07ea643178d apiVersion: v1 resourceVersion: '6984' fieldPath: 'spec.containers{wait}' reason: Created message: Created container wait source: component: kubelet host: ubuntu-2gb-fsn1-2 firstTimestamp: '2022-01-01T15:05:06Z' lastTimestamp: '2022-01-01T15:05:06Z' count: 1 type: Normal eventTime: null reportingComponent: '' reportingInstance: '' - metadata: name: >- conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4f7efebb2 namespace: kubeflow-user-example-com selfLink: >- 
/api/v1/namespaces/kubeflow-user-example-com/events/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4f7efebb2 uid: 260352e6-ab20-4ae7-b522-0f00614d0e6b resourceVersion: '7128' creationTimestamp: '2022-01-01T15:05:06Z' managedFields: - manager: k3s operation: Update apiVersion: v1 time: '2022-01-01T15:05:06Z' fieldsType: FieldsV1 fieldsV1: 'f:count': {} 'f:firstTimestamp': {} 'f:involvedObject': 'f:apiVersion': {} 'f:fieldPath': {} 'f:kind': {} 'f:name': {} 'f:namespace': {} 'f:resourceVersion': {} 'f:uid': {} 'f:lastTimestamp': {} 'f:message': {} 'f:reason': {} 'f:source': 'f:component': {} 'f:host': {} 'f:type': {} involvedObject: kind: Pod namespace: kubeflow-user-example-com name: conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 uid: 542e569f-178b-42eb-a7e7-d07ea643178d apiVersion: v1 resourceVersion: '6984' fieldPath: 'spec.containers{wait}' reason: Started message: Started container wait source: component: kubelet host: ubuntu-2gb-fsn1-2 firstTimestamp: '2022-01-01T15:05:06Z' lastTimestamp: '2022-01-01T15:05:06Z' count: 1 type: Normal eventTime: null reportingComponent: '' reportingInstance: '' - metadata: name: >- conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4f83995c4 namespace: kubeflow-user-example-com selfLink: >- /api/v1/namespaces/kubeflow-user-example-com/events/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4f83995c4 uid: eb424731-c7df-43c5-a08c-5da8ace8a81f resourceVersion: '7129' creationTimestamp: '2022-01-01T15:05:06Z' managedFields: - manager: k3s operation: Update apiVersion: v1 time: '2022-01-01T15:05:06Z' fieldsType: FieldsV1 fieldsV1: 'f:count': {} 'f:firstTimestamp': {} 'f:involvedObject': 'f:apiVersion': {} 'f:fieldPath': {} 'f:kind': {} 'f:name': {} 'f:namespace': {} 'f:resourceVersion': {} 'f:uid': {} 'f:lastTimestamp': {} 'f:message': {} 'f:reason': {} 'f:source': 'f:component': {} 'f:host': {} 'f:type': {} involvedObject: kind: Pod namespace: kubeflow-user-example-com name: conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 uid: 542e569f-178b-42eb-a7e7-d07ea643178d apiVersion: v1 resourceVersion: '6984' fieldPath: 'spec.containers{main}' reason: Pulled message: 'Container image "python:3.7" already present on machine' source: component: kubelet host: ubuntu-2gb-fsn1-2 firstTimestamp: '2022-01-01T15:05:06Z' lastTimestamp: '2022-01-01T15:05:06Z' count: 1 type: Normal eventTime: null reportingComponent: '' reportingInstance: '' - metadata: name: >- conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4fad8f088 namespace: kubeflow-user-example-com selfLink: >- /api/v1/namespaces/kubeflow-user-example-com/events/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4fad8f088 uid: f22d88e1-0d43-4a99-b44e-f3a1d44cf524 resourceVersion: '7130' creationTimestamp: '2022-01-01T15:05:06Z' managedFields: - manager: k3s operation: Update apiVersion: v1 time: '2022-01-01T15:05:06Z' fieldsType: FieldsV1 fieldsV1: 'f:count': {} 'f:firstTimestamp': {} 'f:involvedObject': 'f:apiVersion': {} 'f:fieldPath': {} 'f:kind': {} 'f:name': {} 'f:namespace': {} 'f:resourceVersion': {} 'f:uid': {} 'f:lastTimestamp': {} 'f:message': {} 'f:reason': {} 'f:source': 'f:component': {} 'f:host': {} 'f:type': {} involvedObject: kind: Pod namespace: kubeflow-user-example-com name: conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 uid: 542e569f-178b-42eb-a7e7-d07ea643178d apiVersion: v1 resourceVersion: '6984' fieldPath: 
'spec.containers{main}' reason: Created message: Created container main source: component: kubelet host: ubuntu-2gb-fsn1-2 firstTimestamp: '2022-01-01T15:05:06Z' lastTimestamp: '2022-01-01T15:05:06Z' count: 1 type: Normal eventTime: null reportingComponent: '' reportingInstance: '' - metadata: name: >- conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4ff8dd0c4 namespace: kubeflow-user-example-com selfLink: >- /api/v1/namespaces/kubeflow-user-example-com/events/conditional-execution-pipeline-with-exit-handler-scjtr-3243716801.16c62dd4ff8dd0c4 uid: 798f6d2a-d3eb-4d2d-b449-7eeb6d7c285c resourceVersion: '7133' creationTimestamp: '2022-01-01T15:05:07Z' managedFields: - manager: k3s operation: Update apiVersion: v1 time: '2022-01-01T15:05:07Z' fieldsType: FieldsV1 fieldsV1: 'f:count': {} 'f:firstTimestamp': {} 'f:involvedObject': 'f:apiVersion': {} 'f:fieldPath': {} 'f:kind': {} 'f:name': {} 'f:namespace': {} 'f:resourceVersion': {} 'f:uid': {} 'f:lastTimestamp': {} 'f:message': {} 'f:reason': {} 'f:source': 'f:component': {} 'f:host': {} 'f:type': {} involvedObject: kind: Pod namespace: kubeflow-user-example-com name: conditional-execution-pipeline-with-exit-handler-scjtr-3243716801 uid: 542e569f-178b-42eb-a7e7-d07ea643178d apiVersion: v1 resourceVersion: '6984' fieldPath: 'spec.containers{main}' reason: Started message: Started container main source: component: kubelet host: ubuntu-2gb-fsn1-2 firstTimestamp: '2022-01-01T15:05:07Z' lastTimestamp: '2022-01-01T15:05:07Z' count: 1 type: Normal eventTime: null reportingComponent: '' reportingInstance: '' ``` --- Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7132/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7131
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7131/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7131/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7131/events
https://github.com/kubeflow/pipelines/issues/7131
1,091,732,276
I_kwDOB-71UM5BEoM0
7,131
[backend] Failed to execute component: unable to get pipeline with PipelineName
{ "login": "631068264", "id": 8144089, "node_id": "MDQ6VXNlcjgxNDQwODk=", "avatar_url": "https://avatars.githubusercontent.com/u/8144089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/631068264", "html_url": "https://github.com/631068264", "followers_url": "https://api.github.com/users/631068264/followers", "following_url": "https://api.github.com/users/631068264/following{/other_user}", "gists_url": "https://api.github.com/users/631068264/gists{/gist_id}", "starred_url": "https://api.github.com/users/631068264/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/631068264/subscriptions", "organizations_url": "https://api.github.com/users/631068264/orgs", "repos_url": "https://api.github.com/users/631068264/repos", "events_url": "https://api.github.com/users/631068264/events{/privacy}", "received_events_url": "https://api.github.com/users/631068264/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
open
false
null
[]
null
[ "> KFP SDK version: build version dev_local\r\n\r\nWe have dropped v2 compatible mode support from master branch. \r\n\r\nI'd suggest you try non v2 compatible mode or try v2 on Vertex Pipelines.", "I have also the same issue with v2 on KubeFlow Pipelines 1.7.0 with KubeFlow 1.4.0 Manifest On-prem installation.\r\n\r\n@chensun:\r\nCan you elaborate, what do you mean with \"non v2 compatible mode\"? From `kfp.dsl.PipelineExecutionMode`, there is only `V1_LEGACY`, `V2_COMPATIBLE`. The `V2_ENGINE` just through an error. V1_LEGACY doesn't run kfp.v2.dsl.component. And `V2_COMPATIBLE` mode gives the error in this thread.\r\n\r\nSo basicly, V2 KFP SDK is not ready and can be run nowhere? Please correct if I am wrong.", "I am running into the same issue - is there any update on this front? \r\n\r\n@chensun is your suggestion that, at this time, we can only run pipelines defined in the v2 DSL in Vertex AI PIpelines?\r\n\r\nFor extra context, I have a basic pipeline defined in the v2 DSL which does run on Vertex AI Pipelines successfully but fails with the error raised in this issue on Kubeflow. ", "@yingding Sorry, I missed your questions earlier.\r\n\r\n> Can you elaborate, what do you mean with \"non v2 compatible mode\"? From kfp.dsl.PipelineExecutionMode, there is only V1_LEGACY, V2_COMPATIBLE. The V2_ENGINE just through an error. V1_LEGACY doesn't run kfp.v2.dsl.component\r\n\r\nBy \"non v2 compatible mode\" I meant `V1_LEGACY` mode, and yes it doesn't support anything from `kfp.v2` namespace.\r\n\r\n> So basicly, V2 KFP SDK is not ready and can be run nowhere? Please correct if I am wrong.\r\n\r\nV2 KFP SDK (at the time you were asking, this means importing `kfp.v2` using KFP SDK 1.* version) was not ready for Kubeflow Pipelines, but it was ready and could be run on [Vertex Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/build-pipeline). \r\n", "> @chensun is your suggestion that, at this time, we can only run pipelines defined in the v2 DSL in Vertex AI PIpelines?\r\n\r\n@ccurro, short answer is yes. You can only run such pipelines on Vertex Pipelines.\r\nThat being said, we did recently released both [KFP 2.0.0 alpha](https://github.com/kubeflow/pipelines/releases/tag/2.0.0-alpha.1) and [KFP SDK 2.0.0 alpha](https://pypi.org/project/kfp/2.0.0a2/), which supports running pipelines defined in v2 DSL in the open source KFP backend. Since it's in the alpha state, expect that it's not feature complete and could have many bugs. ", "@chensun Thanks for clarifying!", "I am also having the same problem.\r\nIf I change my kfp to 2.0.0, it would even fail to load into my kubeflow when creating a pipeline\r\nAny available method to solve it? I almost get struggled in the whole day." ]
"2022-01-01T03:54:37"
"2023-07-11T06:53:14"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Install follow https://github.com/kubeflow/manifests in v1.4.1 KFP version: 1.7.0 KFP SDK version: build version dev_local k3s Kubernetes 1.19 ### Steps to reproduce use demo example to add pipline kfp 1.8.10 kfp-pipeline-spec 0.1.13 kfp-server-api 1.7.1 ```python #!/usr/bin/env python # -*- coding: utf-8 -*- import kfp import kfp.dsl as dsl from kfp.v2.dsl import component from kfp import compiler @component( base_image="library/python:3.7" ) def add(a: float, b: float) -> float: '''Calculates sum of two arguments''' return a + b @dsl.pipeline( name='v2add', description='An example pipeline that performs addition calculations.', # pipeline_root='gs://my-pipeline-root/example-pipeline' ) def add_pipeline(a: float = 1, b: float = 7): add_task = add(a, b) compiler.Compiler( mode=kfp.dsl.PipelineExecutionMode.V2_COMPATIBLE, launcher_image='library/gcr.io/ml-pipeline/kfp-launcher:1.8.7' ).compile(pipeline_func=add_pipeline, package_path='pipeline.yaml') ``` I upload the pipeline.yaml and start a run get error logs ``` I1231 10:12:23.830486 1 launcher.go:144] PipelineRoot defaults to "minio://mlpipeline/v2/artifacts". I1231 10:12:23.830866 1 cache.go:143] Cannot detect ml-pipeline in the same namespace, default to ml-pipeline.kubeflow:8887 as KFP endpoint. I1231 10:12:23.830880 1 cache.go:120] Connecting to cache endpoint ml-pipeline.kubeflow:8887 F1231 10:12:23.832000 1 main.go:50] Failed to execute component: unable to get pipeline with PipelineName "pipeline/v2add" PipelineRunID "7e2bdeeb-aa6f-4109-a508-63a1be22267c": Failed GetContextByTypeAndName(type="system.Pipeline", name="pipeline/v2add") ``` pod ``` kind: Pod apiVersion: v1 metadata: name: v2add-rzrht-37236994 namespace: kubeflow-user-example-com selfLink: /api/v1/namespaces/kubeflow-user-example-com/pods/v2add-rzrht-37236994 uid: 3ceb73e5-80b5-4844-8cc8-8f2bf61319d2 resourceVersion: '28824661' creationTimestamp: '2021-12-31T10:12:21Z' labels: pipeline/runid: 7e2bdeeb-aa6f-4109-a508-63a1be22267c pipelines.kubeflow.org/cache_enabled: 'true' pipelines.kubeflow.org/enable_caching: 'true' pipelines.kubeflow.org/kfp_sdk_version: 1.8.10 pipelines.kubeflow.org/pipeline-sdk-type: kfp pipelines.kubeflow.org/v2_component: 'true' workflows.argoproj.io/completed: 'true' workflows.argoproj.io/workflow: v2add-rzrht annotations: pipelines.kubeflow.org/arguments.parameters: '{"a": "1", "b": "7"}' pipelines.kubeflow.org/component_ref: '{}' pipelines.kubeflow.org/v2_component: 'true' sidecar.istio.io/inject: 'false' workflows.argoproj.io/node-name: v2add-rzrht.add workflows.argoproj.io/outputs: >- {"artifacts":[{"name":"add-Output","path":"/tmp/outputs/Output/data"},{"name":"main-logs","s3":{"key":"artifacts/v2add-rzrht/2021/12/31/v2add-rzrht-37236994/main.log"}}]} workflows.argoproj.io/template: >- {"name":"add","inputs":{"parameters":[{"name":"a","value":"1"},{"name":"b","value":"7"},{"name":"pipeline-name","value":"pipeline/v2add"},{"name":"pipeline-root","value":""}]},"outputs":{"artifacts":[{"name":"add-Output","path":"/tmp/outputs/Output/data"}]},"metadata":{"annotations":{"pipelines.kubeflow.org/arguments.parameters":"{\"a\": \"1\", \"b\": 
\"7\"}","pipelines.kubeflow.org/component_ref":"{}","pipelines.kubeflow.org/v2_component":"true","sidecar.istio.io/inject":"false"},"labels":{"pipelines.kubeflow.org/cache_enabled":"true","pipelines.kubeflow.org/enable_caching":"true","pipelines.kubeflow.org/kfp_sdk_version":"1.8.10","pipelines.kubeflow.org/pipeline-sdk-type":"kfp","pipelines.kubeflow.org/v2_component":"true"}},"container":{"name":"","image":"library/python:3.7","command":["/kfp-launcher/launch","--mlmd_server_address","$(METADATA_GRPC_SERVICE_HOST)","--mlmd_server_port","$(METADATA_GRPC_SERVICE_PORT)","--runtime_info_json","$(KFP_V2_RUNTIME_INFO)","--container_image","$(KFP_V2_IMAGE)","--task_name","add","--pipeline_name","pipeline/v2add","--run_id","$(KFP_RUN_ID)","--run_resource","workflows.argoproj.io/$(WORKFLOW_ID)","--namespace","$(KFP_NAMESPACE)","--pod_name","$(KFP_POD_NAME)","--pod_uid","$(KFP_POD_UID)","--pipeline_root","","--enable_caching","$(ENABLE_CACHING)","--","a=1","b=7","--"],"args":["sh","-c","\nif ! [ -x \"$(command -v pip)\" ]; then\n python3 -m ensurepip || python3 -m ensurepip --user || apt-get install python3-pip\nfi\n\nPIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install --quiet --no-warn-script-location 'kfp==1.8.10' \u0026\u0026 \"$0\" \"$@\"\n","sh","-ec","program_path=$(mktemp -d)\nprintf \"%s\" \"$0\" \u003e \"$program_path/ephemeral_component.py\"\npython3 -m kfp.v2.components.executor_main --component_module_path \"$program_path/ephemeral_component.py\" \"$@\"\n","\nimport kfp\nfrom kfp.v2 import dsl\nfrom kfp.v2.dsl import *\nfrom typing import *\n\ndef add(a: float, b: float) -\u003e float:\n '''Calculates sum of two arguments'''\n return a + b\n\n","--executor_input","{{$}}","--function_to_execute","add"],"envFrom":[{"configMapRef":{"name":"metadata-grpc-configmap","optional":true}}],"env":[{"name":"KFP_POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"KFP_POD_UID","valueFrom":{"fieldRef":{"fieldPath":"metadata.uid"}}},{"name":"KFP_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"WORKFLOW_ID","valueFrom":{"fieldRef":{"fieldPath":"metadata.labels['workflows.argoproj.io/workflow']"}}},{"name":"KFP_RUN_ID","valueFrom":{"fieldRef":{"fieldPath":"metadata.labels['pipeline/runid']"}}},{"name":"ENABLE_CACHING","valueFrom":{"fieldRef":{"fieldPath":"metadata.labels['pipelines.kubeflow.org/enable_caching']"}}},{"name":"KFP_V2_IMAGE","value":"library/python:3.7"},{"name":"KFP_V2_RUNTIME_INFO","value":"{\"inputParameters\": {\"a\": {\"type\": \"DOUBLE\"}, \"b\": {\"type\": \"DOUBLE\"}}, \"inputArtifacts\": {}, \"outputParameters\": {\"Output\": {\"type\": \"DOUBLE\", \"path\": \"/tmp/outputs/Output/data\"}}, \"outputArtifacts\": {}}"}],"resources":{},"volumeMounts":[{"name":"kfp-launcher","mountPath":"/kfp-launcher"}]},"volumes":[{"name":"kfp-launcher"}],"initContainers":[{"name":"kfp-launcher","image":"library/gcr.io/ml-pipeline/kfp-launcher:1.8.7","command":["launcher","--copy","/kfp-launcher/launch"],"resources":{},"mirrorVolumeMounts":true}],"archiveLocation":{"archiveLogs":true,"s3":{"endpoint":"minio-service.kubeflow:9000","bucket":"mlpipeline","insecure":true,"accessKeySecret":{"name":"mlpipeline-minio-artifact","key":"accesskey"},"secretKeySecret":{"name":"mlpipeline-minio-artifact","key":"secretkey"},"key":"artifacts/v2add-rzrht/2021/12/31/v2add-rzrht-37236994"}}} ownerReferences: - apiVersion: argoproj.io/v1alpha1 kind: Workflow name: v2add-rzrht uid: 9a806b04-d5fa-49eb-9e46-7502bc3e7ac5 controller: true blockOwnerDeletion: true 
managedFields: - manager: workflow-controller operation: Update apiVersion: v1 time: '2021-12-31T10:12:21Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': .: {} 'f:pipelines.kubeflow.org/arguments.parameters': {} 'f:pipelines.kubeflow.org/component_ref': {} 'f:pipelines.kubeflow.org/v2_component': {} 'f:sidecar.istio.io/inject': {} 'f:workflows.argoproj.io/node-name': {} 'f:workflows.argoproj.io/template': {} 'f:labels': .: {} 'f:pipeline/runid': {} 'f:pipelines.kubeflow.org/cache_enabled': {} 'f:pipelines.kubeflow.org/enable_caching': {} 'f:pipelines.kubeflow.org/kfp_sdk_version': {} 'f:pipelines.kubeflow.org/pipeline-sdk-type': {} 'f:pipelines.kubeflow.org/v2_component': {} 'f:workflows.argoproj.io/completed': {} 'f:workflows.argoproj.io/workflow': {} 'f:ownerReferences': .: {} 'k:{"uid":"9a806b04-d5fa-49eb-9e46-7502bc3e7ac5"}': .: {} 'f:apiVersion': {} 'f:blockOwnerDeletion': {} 'f:controller': {} 'f:kind': {} 'f:name': {} 'f:uid': {} 'f:spec': 'f:containers': 'k:{"name":"main"}': .: {} 'f:args': {} 'f:command': {} 'f:env': .: {} 'k:{"name":"ARGO_CONTAINER_NAME"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"ARGO_INCLUDE_SCRIPT_OUTPUT"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"ENABLE_CACHING"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': .: {} 'f:apiVersion': {} 'f:fieldPath': {} 'k:{"name":"KFP_NAMESPACE"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': .: {} 'f:apiVersion': {} 'f:fieldPath': {} 'k:{"name":"KFP_POD_NAME"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': .: {} 'f:apiVersion': {} 'f:fieldPath': {} 'k:{"name":"KFP_POD_UID"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': .: {} 'f:apiVersion': {} 'f:fieldPath': {} 'k:{"name":"KFP_RUN_ID"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': .: {} 'f:apiVersion': {} 'f:fieldPath': {} 'k:{"name":"KFP_V2_IMAGE"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"KFP_V2_RUNTIME_INFO"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"WORKFLOW_ID"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': .: {} 'f:apiVersion': {} 'f:fieldPath': {} 'f:envFrom': {} 'f:image': {} 'f:imagePullPolicy': {} 'f:name': {} 'f:resources': {} 'f:terminationMessagePath': {} 'f:terminationMessagePolicy': {} 'f:volumeMounts': .: {} 'k:{"mountPath":"/kfp-launcher"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"name":"wait"}': .: {} 'f:command': {} 'f:env': .: {} 'k:{"name":"ARGO_CONTAINER_NAME"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"ARGO_CONTAINER_RUNTIME_EXECUTOR"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"ARGO_INCLUDE_SCRIPT_OUTPUT"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"ARGO_POD_NAME"}': .: {} 'f:name': {} 'f:valueFrom': .: {} 'f:fieldRef': .: {} 'f:apiVersion': {} 'f:fieldPath': {} 'k:{"name":"GODEBUG"}': .: {} 'f:name': {} 'f:value': {} 'f:image': {} 'f:imagePullPolicy': {} 'f:name': {} 'f:resources': .: {} 'f:limits': .: {} 'f:cpu': {} 'f:memory': {} 'f:requests': .: {} 'f:cpu': {} 'f:memory': {} 'f:terminationMessagePath': {} 'f:terminationMessagePolicy': {} 'f:volumeMounts': .: {} 'k:{"mountPath":"/argo/podmetadata"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/argo/secret/mlpipeline-minio-artifact"}': .: {} 'f:mountPath': {} 'f:name': {} 'f:readOnly': {} 'k:{"mountPath":"/mainctrfs/kfp-launcher"}': .: {} 'f:mountPath': {} 'f:name': {} 'k:{"mountPath":"/var/run/docker.sock"}': .: {} 'f:mountPath': {} 'f:name': {} 'f:readOnly': {} 'f:dnsPolicy': {} 'f:enableServiceLinks': {} 'f:initContainers': .: {} 'k:{"name":"kfp-launcher"}': .: {} 'f:command': {} 
'f:env': .: {} 'k:{"name":"ARGO_CONTAINER_NAME"}': .: {} 'f:name': {} 'f:value': {} 'k:{"name":"ARGO_INCLUDE_SCRIPT_OUTPUT"}': .: {} 'f:name': {} 'f:value': {} 'f:image': {} 'f:imagePullPolicy': {} 'f:name': {} 'f:resources': {} 'f:terminationMessagePath': {} 'f:terminationMessagePolicy': {} 'f:volumeMounts': .: {} 'k:{"mountPath":"/kfp-launcher"}': .: {} 'f:mountPath': {} 'f:name': {} 'f:restartPolicy': {} 'f:schedulerName': {} 'f:securityContext': {} 'f:serviceAccount': {} 'f:serviceAccountName': {} 'f:terminationGracePeriodSeconds': {} 'f:volumes': .: {} 'k:{"name":"docker-sock"}': .: {} 'f:hostPath': .: {} 'f:path': {} 'f:type': {} 'f:name': {} 'k:{"name":"kfp-launcher"}': .: {} 'f:emptyDir': {} 'f:name': {} 'k:{"name":"mlpipeline-minio-artifact"}': .: {} 'f:name': {} 'f:secret': .: {} 'f:defaultMode': {} 'f:items': {} 'f:secretName': {} 'k:{"name":"podmetadata"}': .: {} 'f:downwardAPI': .: {} 'f:defaultMode': {} 'f:items': {} 'f:name': {} - manager: k3s operation: Update apiVersion: v1 time: '2021-12-31T10:12:24Z' fieldsType: FieldsV1 fieldsV1: 'f:status': 'f:conditions': 'k:{"type":"ContainersReady"}': .: {} 'f:lastProbeTime': {} 'f:lastTransitionTime': {} 'f:message': {} 'f:reason': {} 'f:status': {} 'f:type': {} 'k:{"type":"Initialized"}': .: {} 'f:lastProbeTime': {} 'f:lastTransitionTime': {} 'f:status': {} 'f:type': {} 'k:{"type":"Ready"}': .: {} 'f:lastProbeTime': {} 'f:lastTransitionTime': {} 'f:message': {} 'f:reason': {} 'f:status': {} 'f:type': {} 'f:containerStatuses': {} 'f:hostIP': {} 'f:initContainerStatuses': {} 'f:phase': {} 'f:podIP': {} 'f:podIPs': .: {} 'k:{"ip":"10.42.0.101"}': .: {} 'f:ip': {} 'f:startTime': {} - manager: argoexec operation: Update apiVersion: v1 time: '2021-12-31T10:12:25Z' fieldsType: FieldsV1 fieldsV1: 'f:metadata': 'f:annotations': 'f:workflows.argoproj.io/outputs': {} status: phase: Failed conditions: - type: Initialized status: 'True' lastProbeTime: null lastTransitionTime: '2021-12-31T10:12:23Z' - type: Ready status: 'False' lastProbeTime: null lastTransitionTime: '2021-12-31T10:12:21Z' reason: ContainersNotReady message: 'containers with unready status: [wait main]' - type: ContainersReady status: 'False' lastProbeTime: null lastTransitionTime: '2021-12-31T10:12:21Z' reason: ContainersNotReady message: 'containers with unready status: [wait main]' - type: PodScheduled status: 'True' lastProbeTime: null lastTransitionTime: '2021-12-31T10:12:21Z' hostIP: 10.19.64.214 podIP: 10.42.0.101 podIPs: - ip: 10.42.0.101 startTime: '2021-12-31T10:12:21Z' initContainerStatuses: - name: kfp-launcher state: terminated: exitCode: 0 reason: Completed startedAt: '2021-12-31T10:12:22Z' finishedAt: '2021-12-31T10:12:22Z' containerID: >- docker://fbf8b39a3bab8065b54e9a3b25a678e07e0880ef61f9e78abe92f9fa205a73c4 lastState: {} ready: true restartCount: 0 image: 'library/gcr.io/ml-pipeline/kfp-launcher:1.8.7' imageID: >- docker-pullable://library/gcr.io/ml-pipeline/kfp-launcher@sha256:8b3f14d468a41c319e95ef4047b7823c64480fd1980c3d5b369c8412afbc684f containerID: >- docker://fbf8b39a3bab8065b54e9a3b25a678e07e0880ef61f9e78abe92f9fa205a73c4 containerStatuses: - name: main state: terminated: exitCode: 1 reason: Error startedAt: '2021-12-31T10:12:23Z' finishedAt: '2021-12-31T10:12:23Z' containerID: >- docker://26faae59907e5a4207960ee9d15d9d350587c5be7db31c3e8f0ec97e72c6d2cf lastState: {} ready: false restartCount: 0 image: 'python:3.7' imageID: >- docker-pullable://python@sha256:3908249ce6b2d28284e3610b07bf406c3035bc2e3ce328711a2b42e1c5a75fc1 containerID: >- 
docker://26faae59907e5a4207960ee9d15d9d350587c5be7db31c3e8f0ec97e72c6d2cf started: false - name: wait state: terminated: exitCode: 1 reason: Error message: >- path /tmp/outputs/Output/data does not exist in archive /tmp/argo/outputs/artifacts/add-Output.tgz startedAt: '2021-12-31T10:12:23Z' finishedAt: '2021-12-31T10:12:25Z' containerID: >- docker://66b6306eb81ac2abb1fbf2609d7375a00f92891f1c827680a45962cbb1ec3c0a lastState: {} ready: false restartCount: 0 image: 'library/gcr.io/ml-pipeline/argoexec:v3.1.6-patch-license-compliance' imageID: >- docker-pullable://library/gcr.io/ml-pipeline/argoexec@sha256:44cf8455a51aa5b961d1a86f65e39adf5ffca9bdcd33a745c3b79f430b7439e0 containerID: >- docker://66b6306eb81ac2abb1fbf2609d7375a00f92891f1c827680a45962cbb1ec3c0a started: false qosClass: Burstable spec: volumes: - name: podmetadata downwardAPI: items: - path: annotations fieldRef: apiVersion: v1 fieldPath: metadata.annotations defaultMode: 420 - name: docker-sock hostPath: path: /var/run/docker.sock type: Socket - name: kfp-launcher emptyDir: {} - name: mlpipeline-minio-artifact secret: secretName: mlpipeline-minio-artifact items: - key: accesskey path: accesskey - key: secretkey path: secretkey defaultMode: 420 - name: default-editor-token-8lmfr secret: secretName: default-editor-token-8lmfr defaultMode: 420 initContainers: - name: kfp-launcher image: 'library/gcr.io/ml-pipeline/kfp-launcher:1.8.7' command: - launcher - '--copy' - /kfp-launcher/launch env: - name: ARGO_CONTAINER_NAME value: kfp-launcher - name: ARGO_INCLUDE_SCRIPT_OUTPUT value: 'false' resources: {} volumeMounts: - name: kfp-launcher mountPath: /kfp-launcher - name: default-editor-token-8lmfr readOnly: true mountPath: /var/run/secrets/kubernetes.io/serviceaccount terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent containers: - name: wait image: 'library/gcr.io/ml-pipeline/argoexec:v3.1.6-patch-license-compliance' command: - argoexec - wait - '--loglevel' - info env: - name: ARGO_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: ARGO_CONTAINER_RUNTIME_EXECUTOR value: docker - name: GODEBUG value: x509ignoreCN=0 - name: ARGO_CONTAINER_NAME value: wait - name: ARGO_INCLUDE_SCRIPT_OUTPUT value: 'false' resources: limits: cpu: 500m memory: 512Mi requests: cpu: 10m memory: 32Mi volumeMounts: - name: podmetadata mountPath: /argo/podmetadata - name: docker-sock readOnly: true mountPath: /var/run/docker.sock - name: mlpipeline-minio-artifact readOnly: true mountPath: /argo/secret/mlpipeline-minio-artifact - name: kfp-launcher mountPath: /mainctrfs/kfp-launcher - name: default-editor-token-8lmfr readOnly: true mountPath: /var/run/secrets/kubernetes.io/serviceaccount terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent - name: main image: 'library/python:3.7' command: - /kfp-launcher/launch - '--mlmd_server_address' - $(METADATA_GRPC_SERVICE_HOST) - '--mlmd_server_port' - $(METADATA_GRPC_SERVICE_PORT) - '--runtime_info_json' - $(KFP_V2_RUNTIME_INFO) - '--container_image' - $(KFP_V2_IMAGE) - '--task_name' - add - '--pipeline_name' - pipeline/v2add - '--run_id' - $(KFP_RUN_ID) - '--run_resource' - workflows.argoproj.io/$(WORKFLOW_ID) - '--namespace' - $(KFP_NAMESPACE) - '--pod_name' - $(KFP_POD_NAME) - '--pod_uid' - $(KFP_POD_UID) - '--pipeline_root' - '' - '--enable_caching' - $(ENABLE_CACHING) - '--' - a=1 - b=7 - '--' args: - sh - '-c' - > if ! 
[ -x "$(command -v pip)" ]; then python3 -m ensurepip || python3 -m ensurepip --user || apt-get install python3-pip fi PIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install --quiet --no-warn-script-location 'kfp==1.8.10' && "$0" "$@" - sh - '-ec' - > program_path=$(mktemp -d) printf "%s" "$0" > "$program_path/ephemeral_component.py" python3 -m kfp.v2.components.executor_main --component_module_path "$program_path/ephemeral_component.py" "$@" - |+ import kfp from kfp.v2 import dsl from kfp.v2.dsl import * from typing import * def add(a: float, b: float) -> float: '''Calculates sum of two arguments''' return a + b - '--executor_input' - '{{$}}' - '--function_to_execute' - add envFrom: - configMapRef: name: metadata-grpc-configmap optional: true env: - name: KFP_POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: KFP_POD_UID valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.uid - name: KFP_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: WORKFLOW_ID valueFrom: fieldRef: apiVersion: v1 fieldPath: 'metadata.labels[''workflows.argoproj.io/workflow'']' - name: KFP_RUN_ID valueFrom: fieldRef: apiVersion: v1 fieldPath: 'metadata.labels[''pipeline/runid'']' - name: ENABLE_CACHING valueFrom: fieldRef: apiVersion: v1 fieldPath: 'metadata.labels[''pipelines.kubeflow.org/enable_caching'']' - name: KFP_V2_IMAGE value: 'library/python:3.7' - name: KFP_V2_RUNTIME_INFO value: >- {"inputParameters": {"a": {"type": "DOUBLE"}, "b": {"type": "DOUBLE"}}, "inputArtifacts": {}, "outputParameters": {"Output": {"type": "DOUBLE", "path": "/tmp/outputs/Output/data"}}, "outputArtifacts": {}} - name: ARGO_CONTAINER_NAME value: main - name: ARGO_INCLUDE_SCRIPT_OUTPUT value: 'false' resources: {} volumeMounts: - name: kfp-launcher mountPath: /kfp-launcher - name: default-editor-token-8lmfr readOnly: true mountPath: /var/run/secrets/kubernetes.io/serviceaccount terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent restartPolicy: Never terminationGracePeriodSeconds: 30 dnsPolicy: ClusterFirst serviceAccountName: default-editor serviceAccount: default-editor nodeName: iz1bb01rvtheuakv3h25ntz securityContext: {} schedulerName: default-scheduler tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 priority: 0 enableServiceLinks: true preemptionPolicy: PreemptLowerPriority ``` ![image](https://user-images.githubusercontent.com/8144089/147843537-a839d587-b8fd-45b3-9b29-c3b7ea69fff5.png) I don't know why it can't find the PipelineName? ![image](https://user-images.githubusercontent.com/8144089/147843586-b91bc121-7cce-4d2a-84db-322d5fe7c5aa.png) ![image](https://user-images.githubusercontent.com/8144089/147843598-050d43f6-f8e0-4710-997e-09fa778e4efb.png) --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7131/reactions", "total_count": 14, "+1": 14, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7131/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7129
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7129/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7129/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7129/events
https://github.com/kubeflow/pipelines/issues/7129
1,090,966,892
I_kwDOB-71UM5BBtVs
7,129
Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[podmetadata docker-sock mlpipeline-minio-artifact default-editor-token-8lmfr]: timed out waiting for the condition
{ "login": "631068264", "id": 8144089, "node_id": "MDQ6VXNlcjgxNDQwODk=", "avatar_url": "https://avatars.githubusercontent.com/u/8144089?v=4", "gravatar_id": "", "url": "https://api.github.com/users/631068264", "html_url": "https://github.com/631068264", "followers_url": "https://api.github.com/users/631068264/followers", "following_url": "https://api.github.com/users/631068264/following{/other_user}", "gists_url": "https://api.github.com/users/631068264/gists{/gist_id}", "starred_url": "https://api.github.com/users/631068264/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/631068264/subscriptions", "organizations_url": "https://api.github.com/users/631068264/orgs", "repos_url": "https://api.github.com/users/631068264/repos", "events_url": "https://api.github.com/users/631068264/events{/privacy}", "received_events_url": "https://api.github.com/users/631068264/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
open
false
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false } ]
null
[ "finally I find that the pipeline job is in the A namespace and the **mlpipeline-minio-artifact** is in kubeflow namespace.I don't know why.", "Possibility related https://github.com/kubeflow/pipelines/issues/4649. \r\n\r\nWhat is the Kubernetes version you are running on, and what is the executor you are using? https://github.com/kubeflow/pipelines/issues/5714", "Kubernetes version v1.19.5+k3s2 \r\nI don't know what is the executor. Do you mean this **gcr.io/ml-pipeline/kfp-launcher:1.7.1** . I find it in offical **Tutorial V2 lightweight Python components**\r\n\r\nFinally I use pipeline sdk v1 and gcr.io/ml-pipeline/kfp-launcher:1.8.7 to run my pipeline task and it work well", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I am experiencing the same issue. It looks random. I can actually write to the volume but then I do a minor code change (e.g. adding one meaningless line) and suddenly the pipeline gets stuck in pending state and the kubectl describe pod reports the same error as OP noted." ]
"2021-12-30T09:00:22"
"2023-01-02T22:11:20"
null
NONE
null
kubectl version ``` [root@iZ1bb01rvtheuakv3h25ntZ ~]# kubectl version Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.5+k3s2", GitCommit:"746cf4031370f443bf1230272bc79f2f72de2869", GitTreeState:"clean", BuildDate:"2020-12-18T01:41:55Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.5+k3s2", GitCommit:"746cf4031370f443bf1230272bc79f2f72de2869", GitTreeState:"clean", BuildDate:"2020-12-18T01:41:55Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} ``` Pipelines 1.7.0 install kubeflow 1.4.1 ![image](https://user-images.githubusercontent.com/8144089/147736852-523a7ac5-6887-4898-bdc8-bcf8689e7e40.png) pipeline keep pending execution ``` Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 28m default-scheduler Successfully assigned kubeflow-user-example-com/my-test-pipeline-beta-bzrlr-3669637779 to iz1bb01rvtheuakv3h25ntz Warning FailedMount 14m (x2 over 21m) kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[kfp-launcher default-editor-token-8lmfr podmetadata docker-sock mlpipeline-minio-artifact]: timed out waiting for the condition Warning FailedMount 12m (x2 over 23m) kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[docker-sock mlpipeline-minio-artifact kfp-launcher default-editor-token-8lmfr podmetadata]: timed out waiting for the condition Warning FailedMount 10m (x2 over 19m) kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[mlpipeline-minio-artifact kfp-launcher default-editor-token-8lmfr podmetadata docker-sock]: timed out waiting for the condition Warning FailedMount 8m1s kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[default-editor-token-8lmfr podmetadata docker-sock mlpipeline-minio-artifact kfp-launcher]: timed out waiting for the condition Warning FailedMount 5m43s (x3 over 26m) kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[podmetadata docker-sock mlpipeline-minio-artifact kfp-launcher default-editor-token-8lmfr]: timed out waiting for the condition Warning FailedMount 96s (x21 over 28m) kubelet MountVolume.SetUp failed for volume "mlpipeline-minio-artifact" : secret "mlpipeline-minio-artifact" not found ``` ``` Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 53m default-scheduler Successfully assigned kubeflow-user-example-com/parameterized-tfx-oss-t48jj-3672522880 to 10-19-64-204 Warning FailedMount 40m (x4 over 49m) kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[docker-sock mlpipeline-minio-artifact default-editor-token-8lmfr podmetadata]: timed out waiting for the condition Warning FailedMount 36m (x2 over 38m) kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[default-editor-token-8lmfr podmetadata docker-sock mlpipeline-minio-artifact]: timed out waiting for the condition Warning FailedMount 33m (x3 over 51m) kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached 
volumes=[mlpipeline-minio-artifact default-editor-token-8lmfr podmetadata docker-sock]: timed out waiting for the condition Warning FailedMount 8m46s (x5 over 31m) kubelet Unable to attach or mount volumes: unmounted volumes=[mlpipeline-minio-artifact], unattached volumes=[podmetadata docker-sock mlpipeline-minio-artifact default-editor-token-8lmfr]: timed out waiting for the condition Warning FailedMount 2m59s (x33 over 53m) kubelet MountVolume.SetUp failed for volume "mlpipeline-minio-artifact" : secret "mlpipeline-minio-artifact" not found ```
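As noted in the comments, the run's pods are scheduled in the profile namespace (`kubeflow-user-example-com`) while the `mlpipeline-minio-artifact` secret only exists in `kubeflow`. One possible stop-gap — sketched here with the Kubernetes Python client, and not a substitute for whatever mechanism the distribution should use to sync this secret — is to copy it into the profile namespace:

```python
# Sketch of a manual workaround: copy the MinIO credentials secret into the
# profile namespace so the Argo wait container can mount it.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run inside the cluster
v1 = client.CoreV1Api()

src = v1.read_namespaced_secret("mlpipeline-minio-artifact", "kubeflow")
v1.create_namespaced_secret(
    namespace="kubeflow-user-example-com",
    body=client.V1Secret(
        metadata=client.V1ObjectMeta(
            name="mlpipeline-minio-artifact",
            namespace="kubeflow-user-example-com",
        ),
        type=src.type,
        data=src.data,  # accesskey / secretkey, already base64-encoded
    ),
)
```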
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7129/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7128
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7128/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7128/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7128/events
https://github.com/kubeflow/pipelines/issues/7128
1,090,689,953
I_kwDOB-71UM5BApuh
7,128
[feature] `Input` and `Output` artifacts should allow local path
{ "login": "aaaaahaaaaa", "id": 2808155, "node_id": "MDQ6VXNlcjI4MDgxNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2808155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aaaaahaaaaa", "html_url": "https://github.com/aaaaahaaaaa", "followers_url": "https://api.github.com/users/aaaaahaaaaa/followers", "following_url": "https://api.github.com/users/aaaaahaaaaa/following{/other_user}", "gists_url": "https://api.github.com/users/aaaaahaaaaa/gists{/gist_id}", "starred_url": "https://api.github.com/users/aaaaahaaaaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aaaaahaaaaa/subscriptions", "organizations_url": "https://api.github.com/users/aaaaahaaaaa/orgs", "repos_url": "https://api.github.com/users/aaaaahaaaaa/repos", "events_url": "https://api.github.com/users/aaaaahaaaaa/events{/privacy}", "received_events_url": "https://api.github.com/users/aaaaahaaaaa/received_events", "type": "User", "site_admin": false }
[ { "id": 1289588140, "node_id": "MDU6TGFiZWwxMjg5NTg4MTQw", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature", "name": "kind/feature", "color": "2515fc", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Nevermind, using `dataset.uri` directly instead of `dataset.path` doesn't cause any issue." ]
"2021-12-29T18:41:38"
"2021-12-30T15:26:43"
"2021-12-30T15:26:43"
NONE
null
### Feature Area

In our team, we like to implement components in such a way that they can be executed both locally, directly as a function (e.g. in a notebook), and as part of a KFP pipeline (e.g. on Vertex). Ideally we would love to use the `Input` and `Output` artifacts and call the function behind our component, for example:

```
def my_function_op(
    dataset: Output[Dataset],
):
    dataset.metadata["what"] = "ever"
    do_something(dataset.path)
```

_Pipeline execution_

```
FunctionOp = create_component_from_func(my_function_op)
my_function_task = FunctionOp()
```

_Local execution_

```
my_function_op(
    dataset=Dataset(uri="/local/path")
)
```

However, the local execution shown above doesn't work because local paths [are not supported by the `Artifact` getter](https://github.com/kubeflow/pipelines/blob/74c7773ca40decfd0d4ed40dc93a6af591bbc190/sdk/python/kfp/v2/components/types/artifact_types.py#L65). It would be great to have some kind of minimal support for local Artifacts.
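As the follow-up comment notes, reading `dataset.uri` instead of `dataset.path` already works for this dual-use pattern. A small sketch of the component under that workaround (`do_something` is a placeholder from the example above, not a real helper):

```python
# Sketch: rely on artifact.uri, which passes a local path through unchanged,
# instead of artifact.path, whose getter rejects non-remote URIs.
from kfp.v2.dsl import Dataset, Output

def my_function_op(dataset: Output[Dataset]):
    dataset.metadata["what"] = "ever"
    do_something(dataset.uri)  # placeholder for the real work

# Local execution, outside any pipeline:
my_function_op(dataset=Dataset(uri="/local/path"))
```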
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7128/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7127
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7127/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7127/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7127/events
https://github.com/kubeflow/pipelines/issues/7127
1,090,483,923
I_kwDOB-71UM5A_3bT
7,127
[ray cluster] Problem using RayCluster within Kubeflow Pipelines
{ "login": "mikwieczorek", "id": 40968185, "node_id": "MDQ6VXNlcjQwOTY4MTg1", "avatar_url": "https://avatars.githubusercontent.com/u/40968185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mikwieczorek", "html_url": "https://github.com/mikwieczorek", "followers_url": "https://api.github.com/users/mikwieczorek/followers", "following_url": "https://api.github.com/users/mikwieczorek/following{/other_user}", "gists_url": "https://api.github.com/users/mikwieczorek/gists{/gist_id}", "starred_url": "https://api.github.com/users/mikwieczorek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mikwieczorek/subscriptions", "organizations_url": "https://api.github.com/users/mikwieczorek/orgs", "repos_url": "https://api.github.com/users/mikwieczorek/repos", "events_url": "https://api.github.com/users/mikwieczorek/events{/privacy}", "received_events_url": "https://api.github.com/users/mikwieczorek/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "In the pipeline, you may want to call `run_dask_on_ray_op ` instead of `ray_run_func` which will immediately evaluate the code at compilation time", "@yuhuishi-convect Thanks for finding my mistake. Fixing it worked like a charm and now it both compiles and runs successfully on the Cluster. Thank you!" ]
"2021-12-29T12:21:03"
"2021-12-30T12:09:06"
"2021-12-30T12:09:06"
NONE
null
Hi, I am struggling a bit with my settings to make my kf-pipeline run. Settings are as follows: I have a k8s cluster deployed on remote VM where the full kubeflow and Ray Cluster are deployed. I ran successfully a simple example with kfp, but now I am trying to create a pipeline using Ray inside one of the components. My code is as follows (run from the local laptop) ```import kfp import kfp.components as comp import numpy as np import pandas as pd def create_df(num_unique_ids: int, max_num_samples: int, min_id: int, max_id: int, output_csv: comp.OutputPath('CSV')): import numpy as np import pandas as pd repetition_array = np.random.randint(1, max_num_samples, num_unique_ids) ids_array = np.repeat(np.random.randint(min_id, max_id, num_unique_ids), repetition_array).flatten() df = pd.DataFrame( { 'id': ids_array, 'value': np.random.randint(0, max_num_samples, ids_array.size), } ) df['id'] = df['id'].astype(np.int64) df['value'] = df['value'].astype(object) return df.to_csv(output_csv, index=False, header=True) def ray_run_func(ray_address: str, input: comp.InputPath('CSV'), output_csv: comp.OutputPath('CSV'), grouping_column: str = 'id', allowed_column_ids: list = ['id', 'value']): import ray from ray.util.dask import ray_dask_get import dask import dask.dataframe as dd def custom_func(partition, column_names): for col in partition.columns: if col in column_names: values = partition[col].to_numpy() row_sequence = set(['_'.join(str(v).split()) for v in values]) return row_sequence ray.init(address=ray_address) dask.config.set(scheduler=ray_dask_get) df = pd.read(input) ddf = dd.from_pandas(df, npartitions=4) ddf = ddf.set_index(grouping_column, sorted=False, shuffle='tasks') d = ddf.groupby(grouping_column).apply(custom_func, column_names = allowed_column_ids, meta=object ).compute(scheduler=ray_dask_get) return d.to_csv(output_csv, index=False) ### Creating OPS create_data_op = comp.func_to_container_op(create_df, base_image='rayproject/ray:latest') run_dask_on_ray_op = comp.func_to_container_op(ray_run_func, base_image='rayproject/ray:latest') ### Creating Pipeline import kfp.dsl as dsl @dsl.pipeline( name='ray-cluster-pipeline', description='A toy pipeline with using Ray cluster on k8s.' ) def simple_ray_cluster_pipeline(): data = create_data_op(num_unique_ids=1000, max_num_samples=1000, min_id=0, max_id=10000) ray_output = ray_run_func( # ray_address='ray://10.99.4.106:10001', ## Ray service ray_address="172.17.0.60:10001", ## Ray head ip # ray_address="127.0.0.1:10001", ## When used port-forwarding # ray_address="auto", ## Not useful, as the cluster would need to be discoverable but is in different namesapce input=data.outputs['output_csv'], output_csv='ray_output.csv' ) ### Compiling kfp.compiler.Compiler().compile( pipeline_func=simple_ray_cluster_pipeline, package_path='ray-cluster-pipeline.yaml') ``` The IP to RayCluster is so that the RayCluster is accessible from the k8s so it could run in kfp. What surprises me is that when I try to compile the code is seems like the Compiler tries to run everything and connect to RayCLuster, which is inaccessible from the local laptop. ![image](https://user-images.githubusercontent.com/40968185/147661485-77a8ca15-8cb5-4c79-814a-7c4dac8bdeb2.png) I couldn't find any similar solution on the Internet or in docs. Is is possible to somehow delay the connection to the RayCluster so the connection is established only during running the pipeline itself? I appreciate and hints or links to more reading
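As the first comment explains, the pipeline body calls `ray_run_func` (the plain Python function) rather than `run_dask_on_ray_op` (the containerized component built from it), so `ray.init` runs on the laptop at compile time. A minimal sketch of the corrected wiring, reusing the ops defined above, might look like:

```python
# Sketch: call the component factory inside the pipeline so the Ray connection
# is only attempted when the step executes on the cluster, not at compile time.
import kfp
import kfp.dsl as dsl

@dsl.pipeline(
    name="ray-cluster-pipeline",
    description="A toy pipeline using a Ray cluster on k8s.",
)
def simple_ray_cluster_pipeline():
    data = create_data_op(
        num_unique_ids=1000, max_num_samples=1000, min_id=0, max_id=10000
    )
    run_dask_on_ray_op(
        ray_address="172.17.0.60:10001",  # Ray head reachable from inside the cluster
        input=data.outputs["output_csv"],
        # output_csv is an OutputPath, so it is generated by the backend, not passed in
    )

kfp.compiler.Compiler().compile(
    pipeline_func=simple_ray_cluster_pipeline,
    package_path="ray-cluster-pipeline.yaml",
)
```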
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7127/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7124
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7124/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7124/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7124/events
https://github.com/kubeflow/pipelines/issues/7124
1,090,036,368
I_kwDOB-71UM5A-KKQ
7,124
[backend] Envoy metadata pod is exposing admin interface
{ "login": "sebastien-prudhomme", "id": 641962, "node_id": "MDQ6VXNlcjY0MTk2Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/641962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sebastien-prudhomme", "html_url": "https://github.com/sebastien-prudhomme", "followers_url": "https://api.github.com/users/sebastien-prudhomme/followers", "following_url": "https://api.github.com/users/sebastien-prudhomme/following{/other_user}", "gists_url": "https://api.github.com/users/sebastien-prudhomme/gists{/gist_id}", "starred_url": "https://api.github.com/users/sebastien-prudhomme/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebastien-prudhomme/subscriptions", "organizations_url": "https://api.github.com/users/sebastien-prudhomme/orgs", "repos_url": "https://api.github.com/users/sebastien-prudhomme/repos", "events_url": "https://api.github.com/users/sebastien-prudhomme/events{/privacy}", "received_events_url": "https://api.github.com/users/sebastien-prudhomme/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
open
false
null
[]
null
[ "cc @chensun .\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-28T17:33:11"
"2022-04-17T06:27:33"
null
NONE
null
### Environment * How did you deploy Kubeflow Pipelines (KFP)? Install Kubeflow Pipelines 1.7.0 (standalone mode) with kustomize. ### Steps to reproduce Get metadata-envoy-deployment pod IP address: ``` kubectl -n kubeflow get pod metadata-envoy-deployment-5b4856dd5-bd88x -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES metadata-envoy-deployment-5b4856dd5-bd88x 1/1 Running 0 84s 10.42.0.54 lenovo <none> <none> ``` Get some data from admin interface (port 9901): ``` curl 10.42.0.54:9901/clusters metadata-cluster::default_priority::max_connections::1024 metadata-cluster::default_priority::max_pending_requests::1024 metadata-cluster::default_priority::max_requests::1024 metadata-cluster::default_priority::max_retries::3 metadata-cluster::high_priority::max_connections::1024 metadata-cluster::high_priority::max_pending_requests::1024 metadata-cluster::high_priority::max_requests::1024 metadata-cluster::high_priority::max_retries::3 metadata-cluster::added_via_api::false metadata-cluster::10.43.201.59:8080::cx_active::0 metadata-cluster::10.43.201.59:8080::cx_connect_fail::0 metadata-cluster::10.43.201.59:8080::cx_total::0 metadata-cluster::10.43.201.59:8080::rq_active::0 metadata-cluster::10.43.201.59:8080::rq_error::0 metadata-cluster::10.43.201.59:8080::rq_success::0 metadata-cluster::10.43.201.59:8080::rq_timeout::0 metadata-cluster::10.43.201.59:8080::rq_total::0 metadata-cluster::10.43.201.59:8080::hostname::metadata-grpc-service metadata-cluster::10.43.201.59:8080::health_flags::healthy metadata-cluster::10.43.201.59:8080::weight::1 metadata-cluster::10.43.201.59:8080::region:: metadata-cluster::10.43.201.59:8080::zone:: metadata-cluster::10.43.201.59:8080::sub_zone:: metadata-cluster::10.43.201.59:8080::canary::false metadata-cluster::10.43.201.59:8080::priority::0 metadata-cluster::10.43.201.59:8080::success_rate::-1 metadata-cluster::10.43.201.59:8080::local_origin_success_rate::-1 ``` ### Expected result Pod metadata-envoy-deployment should not expose envoy admin interface as you can shutdown the server with it. ### Materials and Reference See there: https://www.envoyproxy.io/docs/envoy/latest/operations/admin In the Envoy doc, admin socket is listening on 127.0.0.1, not on 0.0.0.0 as in https://github.com/kubeflow/pipelines/blob/f4a37b27ec5950310a76943e1ff68289b9c40d7d/third_party/metadata_envoy/envoy.yaml#L4 --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a ๐Ÿ‘. We prioritise the issues with the most ๐Ÿ‘.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7124/timeline
null
null
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7123
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7123/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7123/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7123/events
https://github.com/kubeflow/pipelines/issues/7123
1,090,020,102
I_kwDOB-71UM5A-GMG
7,123
[bug] `custom_job.job_spec.tensorboard` incompatible with execution on Vertex Pipelines
{ "login": "aaaaahaaaaa", "id": 2808155, "node_id": "MDQ6VXNlcjI4MDgxNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2808155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aaaaahaaaaa", "html_url": "https://github.com/aaaaahaaaaa", "followers_url": "https://api.github.com/users/aaaaahaaaaa/followers", "following_url": "https://api.github.com/users/aaaaahaaaaa/following{/other_user}", "gists_url": "https://api.github.com/users/aaaaahaaaaa/gists{/gist_id}", "starred_url": "https://api.github.com/users/aaaaahaaaaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aaaaahaaaaa/subscriptions", "organizations_url": "https://api.github.com/users/aaaaahaaaaa/orgs", "repos_url": "https://api.github.com/users/aaaaahaaaaa/repos", "events_url": "https://api.github.com/users/aaaaahaaaaa/events{/privacy}", "received_events_url": "https://api.github.com/users/aaaaahaaaaa/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "```python\r\ntrainer_task.custom_job_spec = {\r\n ...\r\n}\r\n```\r\nThis was an experimental implementation, and it has been dropped post KFP SDK 1.8 release (deleted from master branch). \r\nCan you please try the replacement version, which is a component from [a separate Python package](https://pypi.org/project/google-cloud-pipeline-components/): \r\n\r\nhttps://cloud.google.com/vertex-ai/docs/pipelines/customjob-component", "Thanks for the clarification. I confirm the `create_custom_training_job_op_from_component` function works for that purpose." ]
"2021-12-28T17:00:29"
"2022-01-11T19:46:40"
"2022-01-11T19:46:29"
NONE
null
According to the documentation [here](https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-training#create_a_custom_training_job), `Vertex Tensorboard` can be configured for a custom job on Vertex by setting `custom_job.job_spec.tensorboard`. However, this seems incompatible with executing the task on `Vertex Pipelines`.

When setting the `tensorboard` field, e.g.:

```
trainer_task = MyTrainerOp(...)
trainer_task.custom_job_spec = {
    "displayName": "My Trainer",
    "jobSpec": {
        "tensorboard": "projects/XXXX/locations/XXX/tensorboards/XXX",
        "workerPoolSpecs": [
            ...
        ],
    },
}
```

the following error is raised by `google.cloud.aiplatform.internal.JobService.CreateCustomJob`:

> custom_job.job_spec.service_account must be specified when uploading to TensorBoard

And if `serviceAccount` is set:

```
trainer_task = MyTrainerOp(...)
trainer_task.custom_job_spec = {
    "displayName": "My Trainer",
    "jobSpec": {
        "serviceAccount": SERVICE_ACCOUNT,
        "tensorboard": "projects/XXXX/locations/XXX/tensorboards/XXX",
        "workerPoolSpecs": [
            ...
        ],
    },
}
```

the following error is raised at the `google.cloud.aiplatform.v1.PipelineService.CreatePipelineJob` level:

> Specifying custom service account is not supported at the task level

So the two errors are basically blocking each other.

### Environment:

* KFP version: Vertex Pipelines
* KFP SDK version: 1.8.10
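As resolved in the comments, the experimental `custom_job_spec` attribute was dropped after KFP SDK 1.8, and the supported route is the custom-job wrapper from the `google-cloud-pipeline-components` package. A hedged sketch of that approach — parameter names should be checked against the GCPC version in use, `my_trainer_op` stands for the component behind `MyTrainerOp`, and the output bucket is an assumed placeholder:

```python
# Sketch: wrap the trainer component in a Vertex CustomJob that carries the
# Tensorboard instance and service account, instead of patching custom_job_spec.
from google_cloud_pipeline_components.experimental.custom_job import utils

trainer_custom_job_op = utils.create_custom_training_job_op_from_component(
    my_trainer_op,  # component previously instantiated as MyTrainerOp(...)
    service_account=SERVICE_ACCOUNT,
    tensorboard="projects/XXXX/locations/XXX/tensorboards/XXX",
    base_output_directory="gs://my-bucket/trainer-output",  # assumed bucket for Tensorboard logs
)

# Inside the pipeline, call trainer_custom_job_op(...) in place of MyTrainerOp(...);
# depending on the GCPC version it also takes project/location arguments at call time.
```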
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7123/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7120
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7120/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7120/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7120/events
https://github.com/kubeflow/pipelines/issues/7120
1,089,619,135
I_kwDOB-71UM5A8kS_
7,120
[pH] migrate e2e test to v2 sample test-like infra
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[ { "id": 2152751095, "node_id": "MDU6TGFiZWwyMTUyNzUxMDk1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/frozen", "name": "lifecycle/frozen", "color": "ededed", "default": false, "description": null } ]
closed
false
null
[]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "/lifecycle frozen" ]
"2021-12-28T05:17:13"
"2023-04-04T17:35:43"
"2023-04-04T17:35:43"
CONTRIBUTOR
null
Migrate kubeflow-pipeline-e2e-test to use v2 sample test-like infra. Potentially related issues: * https://github.com/kubeflow/pipelines/issues/6168 should be added to e2e test ## Why? Improves test maintainability, because * Removes [the last dependency](https://github.com/kubeflow/pipelines/blob/ca6e05591d922f6958a7681827aea25c41b94573/test/e2e_test_gke_v2.yaml#L179) on the docker container runtime (deprecated), so we can use containerd instead. * A KFP pipeline is easier to author and debug than bash scripts/argo workflows * No need to spend time fixing issues when the v1 sample test breaks (it breaks all the time for various reasons) * Shares common test utils with the v2 sample test * Other reasons in https://github.com/kubeflow/pipelines/issues/6377 Further enhances the idea of testing KFP using KFP #3505. This unblocks removing v1 legacy sample test code https://github.com/kubeflow/pipelines/pull/7116, because * [kubeflow-pipeline-e2e-test](https://github.com/GoogleCloudPlatform/oss-test-infra/blob/8e2b1e0b57d0bf7adf8e9f3cef6a98af25012412/prow/prowjobs/kubeflow/pipelines/kubeflow-pipelines-presubmits.yaml#L24-L37) still depends on v1 legacy sample test code: * https://github.com/kubeflow/pipelines/blob/f4a37b27ec5950310a76943e1ff68289b9c40d7d/test/e2e_test_gke_v2.yaml#L80-L89 * https://github.com/kubeflow/pipelines/blob/f4a37b27ec5950310a76943e1ff68289b9c40d7d/test/e2e_test_gke_v2.yaml#L117-L134 ## How? We can refactor the e2e test argo workflow into a Kubeflow Pipeline like the [v2 sample test](https://github.com/kubeflow/pipelines/blob/master/v2/test/sample_test.py). Note that the v2 sample test runs in the kfp-ci project, while the e2e test runs in the ml-pipeline-test project. We might want to simply move the test to the kfp-ci project, because the long running orchestration test cluster is already there. ### Current state For e2e tests, we * build images via Cloud Build * create a new GKE cluster * deploy the freshly built KFP on the GKE cluster * run an argo workflow in the same cluster with all the tests There are a ton of complexities in the bash scripts like https://github.com/kubeflow/pipelines/blob/master/test/postsubmit-tests-with-pipeline-deployment.sh, because we wait for image building and GKE cluster creation in parallel. ### Proposal Use KFP to orchestrate the entire e2e test, separated into two parts: * Preparation * Build all images * Create a GKE test cluster * Deploy KFP on the test cluster * Verify KFP deployments are ready * Testing * initialization test, api integration test, frontend integration test etc (check the [current e2e test argo workflow](https://github.com/kubeflow/pipelines/blob/f4a37b27ec5950310a76943e1ff68289b9c40d7d/test/e2e_test_gke_v2.yaml)) The preparation steps can obviously run in the long running test orchestration cluster (the kfp-standalone-1 cluster in the kfp-ci project). For the testing steps, there are two options: * (Suggested) run them also in the test orchestration cluster, but have each test connect to KFP in the test cluster * run them as a pipeline in the test cluster Running the testing steps as a KFP pipeline in the test cluster creates a challenge: some e2e tests delete all the experiments, runs, and pipelines in the cluster to test the KFP API (api-integration-test), so the test pipeline would interfere with these tests. Therefore, also running the testing steps in the test orchestration cluster seems like the best option (an additional benefit: the test cluster is GCed after the test finishes, so we cannot use its KFP UI to debug the test directly, while the test orchestration cluster is long running, so we can still check test results in its KFP UI after the test finishes).
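For illustration, a minimal sketch of what the proposed orchestration pipeline could look like, assuming the KFP v1 SDK (`dsl.ContainerOp`); the image name, script paths, and suite names are hypothetical placeholders, not the actual test infrastructure.

```python
# Hypothetical sketch only: the image, script paths, and test suite names are
# placeholders for whatever the real test infrastructure provides.
import kfp
from kfp import dsl

TEST_IMAGE = "gcr.io/kfp-ci/e2e-test-runner:placeholder"  # assumed image name


@dsl.pipeline(
    name="kfp-e2e-test",
    description="Orchestrate the KFP e2e test with KFP itself.",
)
def e2e_test_pipeline(commit_sha: str = "HEAD"):
    # Preparation: build images and create a GKE test cluster in parallel,
    # then deploy the freshly built KFP onto the test cluster.
    build = dsl.ContainerOp(
        name="build-images",
        image=TEST_IMAGE,
        command=["./scripts/build_images.sh"],
        arguments=["--commit", commit_sha],
    )
    cluster = dsl.ContainerOp(
        name="create-test-cluster",
        image=TEST_IMAGE,
        command=["./scripts/create_cluster.sh"],
    )
    deploy = dsl.ContainerOp(
        name="deploy-kfp",
        image=TEST_IMAGE,
        command=["./scripts/deploy_kfp.sh"],
    ).after(build, cluster)

    # Testing: run each suite from the orchestration cluster, connecting to the
    # KFP instance deployed in the test cluster.
    for suite in ["initialization", "api-integration", "frontend-integration"]:
        dsl.ContainerOp(
            name=suite + "-test",
            image=TEST_IMAGE,
            command=["./scripts/run_suite.sh"],
            arguments=["--suite", suite],
        ).after(deploy)


if __name__ == "__main__":
    kfp.compiler.Compiler().compile(e2e_test_pipeline, "e2e_test_pipeline.yaml")
```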
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7120/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7119
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7119/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7119/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7119/events
https://github.com/kubeflow/pipelines/issues/7119
1,089,581,536
I_kwDOB-71UM5A8bHg
7,119
[backend] Support Protobuf.Value in KFP backend API for `Create Run`
{ "login": "zijianjoy", "id": 37026441, "node_id": "MDQ6VXNlcjM3MDI2NDQx", "avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijianjoy", "html_url": "https://github.com/zijianjoy", "followers_url": "https://api.github.com/users/zijianjoy/followers", "following_url": "https://api.github.com/users/zijianjoy/following{/other_user}", "gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions", "organizations_url": "https://api.github.com/users/zijianjoy/orgs", "repos_url": "https://api.github.com/users/zijianjoy/repos", "events_url": "https://api.github.com/users/zijianjoy/events{/privacy}", "received_events_url": "https://api.github.com/users/zijianjoy/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 1118896905, "node_id": "MDU6TGFiZWwxMTE4ODk2OTA1", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend", "name": "area/backend", "color": "d2b48c", "default": false, "description": "" } ]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-12-28T03:15:14"
"2022-02-10T10:22:15"
"2022-02-10T10:22:15"
COLLABORATOR
null
Currently the KFP API takes int/double/string as parameter types: https://github.com/capri-xiyue/pipelines/blob/d68b8ac792a23211316b89339d049f5d0946c59e/backend/api/pipeline_spec.proto#L61-L70. However, Pipeline Spec has adopted Protobuf.Value, as in https://github.com/capri-xiyue/pipelines/blob/d68b8ac792a23211316b89339d049f5d0946c59e/api/v2alpha1/pipeline_spec.proto#L187-L190. Therefore, the KFP API and backend need to be upgraded to accept Protobuf.Value for input parameters. cc @chensun @Bobgy @capri-xiyue /assign @chensun
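For illustration, a minimal sketch of how heterogeneous parameter types all map onto `google.protobuf.Value`, which is the representation the proposed API change would accept; it uses only the standard `protobuf` Python library (not the KFP backend API), and the parameter names are made up.

```python
# Illustration only: shows how int/double/string/bool/list/struct parameters all
# fit into a single google.protobuf.Value message, the type PipelineSpec uses
# for runtime parameter values.
from google.protobuf import json_format, struct_pb2

# Hypothetical parameters; each Python value becomes one protobuf Value.
parameters = {
    "num_epochs": 10,                              # number_value (Value has no int kind)
    "learning_rate": 1e-3,                         # number_value
    "model_name": "my-model",                      # string_value
    "use_gpu": True,                               # bool_value
    "layer_sizes": [128, 64, 32],                  # list_value
    "optimizer": {"name": "adam", "beta1": 0.9},   # struct_value
}

parameter_values = {
    name: json_format.ParseDict(value, struct_pb2.Value())
    for name, value in parameters.items()
}

for name, value in parameter_values.items():
    print(name, "->", json_format.MessageToJson(value))
```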
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7119/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7110
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7110/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7110/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7110/events
https://github.com/kubeflow/pipelines/issues/7110
1,087,856,380
I_kwDOB-71UM5A1178
7,110
[frontend] Missing the left-side menu bar in KFP UI v1.7.1
{ "login": "daikeshi", "id": 475945, "node_id": "MDQ6VXNlcjQ3NTk0NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/475945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daikeshi", "html_url": "https://github.com/daikeshi", "followers_url": "https://api.github.com/users/daikeshi/followers", "following_url": "https://api.github.com/users/daikeshi/following{/other_user}", "gists_url": "https://api.github.com/users/daikeshi/gists{/gist_id}", "starred_url": "https://api.github.com/users/daikeshi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daikeshi/subscriptions", "organizations_url": "https://api.github.com/users/daikeshi/orgs", "repos_url": "https://api.github.com/users/daikeshi/repos", "events_url": "https://api.github.com/users/daikeshi/events{/privacy}", "received_events_url": "https://api.github.com/users/daikeshi/received_events", "type": "User", "site_admin": false }
[ { "id": 930619516, "node_id": "MDU6TGFiZWw5MzA2MTk1MTY=", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend", "name": "area/frontend", "color": "d2b48c", "default": false, "description": "" }, { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2710158147, "node_id": "MDU6TGFiZWwyNzEwMTU4MTQ3", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/needs%20more%20info", "name": "needs more info", "color": "DBEF12", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "Hello @daikeshi , I am wondering how do you install the Kubeflow OSS? The KFP UI is hidden because it detects the existence of Central dashboard, if Central dashboard exists, Pipelines related left nav items are merged to Central dashboard. But I don't see the central dashboard from your screenshot, maybe you can click the `three lines` icon on top left?\r\n\r\nReference: https://github.com/kubeflow/pipelines/blob/master/frontend/src/components/SideNav.tsx#L264-L266\r\n\r\nhttps://github.com/kubeflow/pipelines/issues/5199\r\n\r\n", "@zijianjoy thanks! I see. We deployed KFP along with OSS central dashboard component with our customized Kubeflow setup. \r\n\r\nYeah, clicking the three `lines icon` on the top will bring the central dashboard menu selection back, but our users typically don't use it and we are on an older version of the central dashboard (there's a bug the link there requires a page refresh). Would it be possible to override [`HIDE_SIDENAV` flag](https://github.com/kubeflow/pipelines/blob/master/frontend/server/configs.ts#L102) on my end via the deployment?\r\n", "You can configure in here https://github.com/kubeflow/pipelines/blob/master/manifests/kustomize/base/installs/multi-user/pipelines-ui/deployment-patch.yaml" ]
"2021-12-23T16:36:31"
"2022-01-07T00:41:20"
"2022-01-07T00:41:20"
CONTRIBUTOR
null
After we upgraded KFP and its UI image version from 1.3 to 1.7.1, we noticed that the left-side menu bar is missing in KFP UI v1.7.1. Is there any way we can bring it back? Thanks ### Environment * How did you deploy Kubeflow Pipelines (KFP)? It's deployed via the OSS Kubeflow stack <!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. --> * KFP version: 1.7.1 <!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface. To find the version number, See version number shows on bottom of KFP UI left sidenav. --> ### Steps to reproduce <!-- Specify how to reproduce the problem. This may include information such as: a description of the process, code snippets, log output, or screenshots. --> **KFP UI v1.3** ![image](https://user-images.githubusercontent.com/475945/147268334-6083caf8-d11b-4faa-8acd-5dfa77d540a8.png) **KFP UI v1.7.1** ![image](https://user-images.githubusercontent.com/475945/147268459-c59058fe-5167-4af0-9101-cbb12de821f9.png) ### Expected result The UI should be consistent and offer a way for users to navigate different features easily ### Materials and Reference <!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. --> --- <!-- Don't delete message below to encourage users to support your issue! --> Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7110/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7110/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7106
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7106/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7106/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7106/events
https://github.com/kubeflow/pipelines/issues/7106
1,086,811,122
I_kwDOB-71UM5Ax2vy
7,106
k8sapi executor does not support outputs from base image layer
{ "login": "wybaron", "id": 19605510, "node_id": "MDQ6VXNlcjE5NjA1NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/19605510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wybaron", "html_url": "https://github.com/wybaron", "followers_url": "https://api.github.com/users/wybaron/followers", "following_url": "https://api.github.com/users/wybaron/following{/other_user}", "gists_url": "https://api.github.com/users/wybaron/gists{/gist_id}", "starred_url": "https://api.github.com/users/wybaron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wybaron/subscriptions", "organizations_url": "https://api.github.com/users/wybaron/orgs", "repos_url": "https://api.github.com/users/wybaron/repos", "events_url": "https://api.github.com/users/wybaron/events{/privacy}", "received_events_url": "https://api.github.com/users/wybaron/received_events", "type": "User", "site_admin": false }
[ { "id": 1073153908, "node_id": "MDU6TGFiZWwxMDczMTUzOTA4", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug", "name": "kind/bug", "color": "fc2515", "default": false, "description": "" }, { "id": 2157634204, "node_id": "MDU6TGFiZWwyMTU3NjM0MjA0", "url": "https://api.github.com/repos/kubeflow/pipelines/labels/lifecycle/stale", "name": "lifecycle/stale", "color": "bbbbbb", "default": false, "description": "The issue / pull request is stale, any activities remove this label." } ]
closed
false
null
[]
null
[ "Hello @wybaron , we don't support k8sapi executor, please switch to emissary executor: https://github.com/kubeflow/pipelines/issues/5714", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-22T13:14:09"
"2022-04-21T00:29:38"
"2022-04-21T00:29:38"
NONE
null
Run Experiments ![image](https://user-images.githubusercontent.com/19605510/147098266-16f704cf-9266-4b77-84dc-ebfd985db48c.png)
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7106/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7104
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7104/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7104/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7104/events
https://github.com/kubeflow/pipelines/issues/7104
1,086,497,426
I_kwDOB-71UM5AwqKS
7,104
v2 backend - full API support
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false }
[ { "login": "chensun", "id": 2043310, "node_id": "MDQ6VXNlcjIwNDMzMTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chensun", "html_url": "https://github.com/chensun", "followers_url": "https://api.github.com/users/chensun/followers", "following_url": "https://api.github.com/users/chensun/following{/other_user}", "gists_url": "https://api.github.com/users/chensun/gists{/gist_id}", "starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chensun/subscriptions", "organizations_url": "https://api.github.com/users/chensun/orgs", "repos_url": "https://api.github.com/users/chensun/repos", "events_url": "https://api.github.com/users/chensun/events{/privacy}", "received_events_url": "https://api.github.com/users/chensun/received_events", "type": "User", "site_admin": false } ]
null
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
"2021-12-22T06:29:43"
"2023-01-18T08:20:17"
"2023-01-18T08:20:17"
CONTRIBUTOR
null
Follow up of https://github.com/kubeflow/pipelines/issues/6199 * [x] https://github.com/kubeflow/pipelines/issues/6171
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7104/timeline
null
completed
null
null
false
https://api.github.com/repos/kubeflow/pipelines/issues/7096
https://api.github.com/repos/kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines/issues/7096/labels{/name}
https://api.github.com/repos/kubeflow/pipelines/issues/7096/comments
https://api.github.com/repos/kubeflow/pipelines/issues/7096/events
https://github.com/kubeflow/pipelines/issues/7096
1,085,617,118
I_kwDOB-71UM5AtTPe
7,096
test: error: deployment "ml-pipeline" exceeded its progress deadline
{ "login": "Bobgy", "id": 4957653, "node_id": "MDQ6VXNlcjQ5NTc2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4957653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bobgy", "html_url": "https://github.com/Bobgy", "followers_url": "https://api.github.com/users/Bobgy/followers", "following_url": "https://api.github.com/users/Bobgy/following{/other_user}", "gists_url": "https://api.github.com/users/Bobgy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bobgy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bobgy/subscriptions", "organizations_url": "https://api.github.com/users/Bobgy/orgs", "repos_url": "https://api.github.com/users/Bobgy/repos", "events_url": "https://api.github.com/users/Bobgy/events{/privacy}", "received_events_url": "https://api.github.com/users/Bobgy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Here's what I tried to debug the issue, but cannot figure out why:\r\n* connecting into a test cluster and confirmed that connectivity from any in-cluster pod to the mysql service is broken by `kubectl run -it -n kubeflow --rm --image=mysql:8.0.12 --restart=Never mysql-client -- mysql -h mysql`\r\n* create a cluster with exact the same config/location in my own project, cannot reproduce the connectivity issue\r\n* switch to use containerd container runtime: https://github.com/kubeflow/pipelines/pull/7092, the problem still persists\r\n* check ml-pipeline-test project quotas, nothing exceeds quota limits", "Error logs captured from ml-pipeline deployment (10.28.12.35:3306 is address of the in-cluster mysql service):\r\n\r\n```\r\nInitializing client manager\r\nConfig DBConfig.ExtraParams not specified, skipping\r\ndial tcp 10.28.12.35:3306: connect: connection timed out\r\nInitializing client manager\r\nConfig DBConfig.ExtraParams not specified, skipping\r\n```", "Sharing another issue I encountered:\r\n\r\n```\r\n\r\nCreating Google service accounts...\r\nService account test-kfp-system already exists\r\nService account test-kfp-user already exists\r\nBinding each kfp system KSA to test-kfp-system\r\nUpdated IAM policy for serviceAccount [test-kfp-system@ml-pipeline-test.iam.gserviceaccount.com].\r\nKSA ml-pipeline-ui already exists\r\nserviceaccount/ml-pipeline-ui annotated\r\n* Bound KSA ml-pipeline-ui to GSA test-kfp-system\r\nERROR: (gcloud.iam.service-accounts.add-iam-policy-binding) ABORTED: There were concurrent policy changes. Please retry the whole read-modify-write with exponential backoff.\r\n++ [[ 1 -lt 3 ]]\r\n++ (( n++ ))\r\n++ echo 'Command failed. Attempt 2/3:'\r\nCommand failed. Attempt 2/3:\r\n```", "The error in the title has been resolved" ]
"2021-12-21T09:26:35"
"2021-12-22T00:37:58"
"2021-12-22T00:37:57"
CONTRIBUTOR
null
All sample/e2e tests are failing with either * metadata-grpc deployment cannot roll out * ml-pipeline deployment cannot roll out because of a connection timeout to the in-cluster mysql DB. Test log: https://oss-prow.knative.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/7092/kubeflow-pipeline-e2e-test/1473214435895021568
{ "url": "https://api.github.com/repos/kubeflow/pipelines/issues/7096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/kubeflow/pipelines/issues/7096/timeline
null
completed
null
null
false