Datasets:

Column schema (one row per `kubeflow/pipelines` GitHub issue):

- url: string (length 59)
- repository_url: string (1 class)
- labels_url: string (length 73)
- comments_url: string (length 68)
- events_url: string (length 66)
- html_url: string (length 49)
- id: int64 (782M to 1.89B)
- node_id: string (length 18 to 24)
- number: int64 (4.97k to 9.98k)
- title: string (length 2 to 306)
- user: dict
- labels: list
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list
- milestone: dict
- comments: sequence
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: timestamp[s]
- author_association: string (4 classes)
- active_lock_reason: null
- body: string (length 0 to 63.6k, nullable)
- reactions: dict
- timeline_url: string (length 68)
- performed_via_github_app: null
- state_reason: string (3 classes)
- draft: bool (0 classes)
- pull_request: dict
- is_pull_request: bool (1 class)
https://api.github.com/repos/kubeflow/pipelines/issues/9980 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9980/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9980/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9980/events | https://github.com/kubeflow/pipelines/issues/9980 | 1,894,144,464 | I_kwDOB-71UM5w5lnQ | 9,980 | [feature] Enable Setting Image Pull Policy in V2 SDK | {
"login": "PhilippeMoussalli",
"id": 47530815,
"node_id": "MDQ6VXNlcjQ3NTMwODE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47530815?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilippeMoussalli",
"html_url": "https://github.com/PhilippeMoussalli",
"followers_url": "https://api.github.com/users/PhilippeMoussalli/followers",
"following_url": "https://api.github.com/users/PhilippeMoussalli/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilippeMoussalli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilippeMoussalli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilippeMoussalli/subscriptions",
"organizations_url": "https://api.github.com/users/PhilippeMoussalli/orgs",
"repos_url": "https://api.github.com/users/PhilippeMoussalli/repos",
"events_url": "https://api.github.com/users/PhilippeMoussalli/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilippeMoussalli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-09-13T09:40:05 | 2023-09-13T09:40:08 | null | NONE | null | ### Feature Area
/area sdk
### What feature would you like to see?
KFP V1 supported setting the image pull policy ([link](https://github.com/kubeflow/pipelines/blob/4bee3d8dc2ee9c33d87e1058bac2a94d899dd4a5/sdk/python/kfp/deprecated/dsl/_container_op.py#L518)).
This feature is currently not available in KFP V2.
### What is the use case or pain point?
We used to set the image pull policy to `Always` in V1 to avoid cases where outdated images are pulled.
### Is there a workaround currently?
No current workaround.
---
Love this idea? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9980/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9976 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9976/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9976/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9976/events | https://github.com/kubeflow/pipelines/issues/9976 | 1,892,166,555 | I_kwDOB-71UM5wyCub | 9,976 | [bug] (Big) Integer overflow in python components | {
"login": "vigram93",
"id": 88139433,
"node_id": "MDQ6VXNlcjg4MTM5NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/88139433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vigram93",
"html_url": "https://github.com/vigram93",
"followers_url": "https://api.github.com/users/vigram93/followers",
"following_url": "https://api.github.com/users/vigram93/following{/other_user}",
"gists_url": "https://api.github.com/users/vigram93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vigram93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigram93/subscriptions",
"organizations_url": "https://api.github.com/users/vigram93/orgs",
"repos_url": "https://api.github.com/users/vigram93/repos",
"events_url": "https://api.github.com/users/vigram93/events{/privacy}",
"received_events_url": "https://api.github.com/users/vigram93/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-09-12T10:15:29 | 2023-09-12T10:15:29 | null | NONE | null | /kind bug
### Environment
* How do you deploy Kubeflow Pipelines (KFP)? <br>
Using the Kubeflow SDK
* KFP version: <br>
1.7
* KFP SDK version: <br>
1.8.17
* Python base image used: <br>
Image tag: python:3.8-slim <br>
Image SHA: sha256:7672e07ca8cd61d8520be407d6a83d45e6d37faf26bb68b91a2fa8ab89a7798f <br>
Image ID: 1b0f3f18921c
### Steps to reproduce
#### Issue:
For certain integer parameter values, printing the parameter from a Python function component shows that KFP has mutated the input: a different value is printed than was passed in.

Example:
#### Scenario 1:
- Input parameter value: 20230807093130621
- Printed value: 20230807093130620

#### Scenario 2:
- Input parameter value: 20230807093130621
- Added number: 10
- Printed value: 20230807093130630

### Expected result
#### Scenario 1:
- Input parameter value: 20230807093130621
- Expected value: 20230807093130621

#### Scenario 2:
- Input parameter value: 20230807093130621
- Added number: 10
- Expected value: 20230807093130631
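A plausible explanation (an assumption based on the reported values, not confirmed in this issue) is that the parameter is coerced through a 64-bit float somewhere during serialization: integers above 2**53 cannot be represented exactly as float64, and round-tripping the reported input through a float reproduces exactly the mutated value:

```python
# float64 has a 53-bit mantissa, so integers above 2**53 lose precision.
x = 20230807093130621
assert x > 2**53

# Round-tripping through a 64-bit float reproduces the mutated value
# reported in Scenario 1 (the nearest representable double is 4 apart
# at this magnitude, and x rounds down to a multiple of 4).
assert int(float(x)) == 20230807093130620
```

Scenario 2 is consistent with the same coercion happening before the addition: the component receives ...620 and adds 10, printing ...630.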
### Pipeline yaml definition
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: integer-overflow-test-pipeline-
annotations:
pipelines.kubeflow.org/kfp_sdk_version: 1.8.17
pipelines.kubeflow.org/pipeline_compilation_time: '2023-09-11T20:57:57.297958'
pipelines.kubeflow.org/pipeline_spec: >-
{"inputs": [{"default": "20230807093130621", "name": "x", "optional":
true, "type": "Integer"}, {"default": "10", "name": "num_to_add",
"optional": true, "type": "Integer"}], "name":
"integer_overflow_test_pipeline"}
labels:
pipelines.kubeflow.org/kfp_sdk_version: 1.8.17
spec:
entrypoint: integer-overflow-test-pipeline
templates:
- name: add-num-to-integer
container:
args:
- '--x'
- '{{inputs.parameters.x}}'
- '--num'
- '{{inputs.parameters.num_to_add}}'
command:
- sh
- '-ec'
- |
program_path=$(mktemp)
printf "%s" "$0" > "$program_path"
python3 -u "$program_path" "$@"
- >
def add_num_to_integer(x, num = 10):
added_num = x + num
print(added_num)
import argparse
_parser = argparse.ArgumentParser(prog='Add num to integer',
description='')
_parser.add_argument("--x", dest="x", type=int, required=True,
default=argparse.SUPPRESS)
_parser.add_argument("--num", dest="num", type=int, required=False,
default=argparse.SUPPRESS)
_parsed_args = vars(_parser.parse_args())
_outputs = add_num_to_integer(**_parsed_args)
image: 'python:3.8-slim'
inputs:
parameters:
- name: num_to_add
- name: x
metadata:
labels:
pipelines.kubeflow.org/kfp_sdk_version: 1.8.17
pipelines.kubeflow.org/pipeline-sdk-type: kfp
pipelines.kubeflow.org/enable_caching: 'true'
annotations:
pipelines.kubeflow.org/component_spec: >-
{"implementation": {"container": {"args": ["--x", {"inputValue":
"x"}, {"if": {"cond": {"isPresent": "num"}, "then": ["--num",
{"inputValue": "num"}]}}], "command": ["sh", "-ec",
"program_path=$(mktemp)\nprintf \"%s\" \"$0\" >
\"$program_path\"\npython3 -u \"$program_path\" \"$@\"\n", "def
add_num_to_integer(x, num = 10):\n added_num = x + num\n
print(added_num)\n\nimport argparse\n_parser =
argparse.ArgumentParser(prog='Add num to integer',
description='')\n_parser.add_argument(\"--x\", dest=\"x\", type=int,
required=True,
default=argparse.SUPPRESS)\n_parser.add_argument(\"--num\",
dest=\"num\", type=int, required=False,
default=argparse.SUPPRESS)\n_parsed_args =
vars(_parser.parse_args())\n\n_outputs =
add_num_to_integer(**_parsed_args)\n"], "image":
"python:3.8-slim"}}, "inputs":
[{"name": "x", "type": "Integer"}, {"default": "10", "name": "num",
"optional": true, "type": "Integer"}], "name": "Add num to integer"}
pipelines.kubeflow.org/component_ref: '{}'
pipelines.kubeflow.org/arguments.parameters: >-
{"num": "{{inputs.parameters.num_to_add}}", "x":
"{{inputs.parameters.x}}"}
pipelines.kubeflow.org/max_cache_staleness: P0D
- name: integer-overflow-test-pipeline
inputs:
parameters:
- name: num_to_add
- name: x
dag:
tasks:
- name: add-num-to-integer
template: add-num-to-integer
arguments:
parameters:
- name: num_to_add
value: '{{inputs.parameters.num_to_add}}'
- name: x
value: '{{inputs.parameters.x}}'
- name: print-integer
template: print-integer
arguments:
parameters:
- name: x
value: '{{inputs.parameters.x}}'
- name: print-integer
container:
args:
- '--x'
- '{{inputs.parameters.x}}'
command:
- sh
- '-ec'
- |
program_path=$(mktemp)
printf "%s" "$0" > "$program_path"
python3 -u "$program_path" "$@"
- >
def print_integer(x):
print(x)
import argparse
_parser = argparse.ArgumentParser(prog='Print integer',
description='')
_parser.add_argument("--x", dest="x", type=int, required=True,
default=argparse.SUPPRESS)
_parsed_args = vars(_parser.parse_args())
_outputs = print_integer(**_parsed_args)
image: 'python:3.8-slim'
inputs:
parameters:
- name: x
metadata:
labels:
pipelines.kubeflow.org/kfp_sdk_version: 1.8.17
pipelines.kubeflow.org/pipeline-sdk-type: kfp
pipelines.kubeflow.org/enable_caching: 'true'
annotations:
pipelines.kubeflow.org/component_spec: >-
{"implementation": {"container": {"args": ["--x", {"inputValue":
"x"}], "command": ["sh", "-ec", "program_path=$(mktemp)\nprintf
\"%s\" \"$0\" > \"$program_path\"\npython3 -u \"$program_path\"
\"$@\"\n", "def print_integer(x):\n print(x)\n\nimport
argparse\n_parser = argparse.ArgumentParser(prog='Print integer',
description='')\n_parser.add_argument(\"--x\", dest=\"x\", type=int,
required=True, default=argparse.SUPPRESS)\n_parsed_args =
vars(_parser.parse_args())\n\n_outputs =
print_integer(**_parsed_args)\n"], "image":
"python:3.8-slim"}}, "inputs":
[{"name": "x", "type": "Integer"}], "name": "Print integer"}
pipelines.kubeflow.org/component_ref: '{}'
pipelines.kubeflow.org/arguments.parameters: '{"x": "{{inputs.parameters.x}}"}'
pipelines.kubeflow.org/max_cache_staleness: P0D
arguments:
parameters:
- name: x
value: '20230807093130621'
- name: num_to_add
value: '10'
serviceAccountName: <Intentionally redacted>
```
---
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9976/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9976/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9975 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9975/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9975/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9975/events | https://github.com/kubeflow/pipelines/issues/9975 | 1,892,107,655 | I_kwDOB-71UM5wx0WH | 9,975 | [backend] Pipeline run's input artifacts in S3 are accessible but output artifacts are not accessible (403 Error) | {
"login": "guntiseiduks",
"id": 97613741,
"node_id": "U_kgDOBdF3rQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97613741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guntiseiduks",
"html_url": "https://github.com/guntiseiduks",
"followers_url": "https://api.github.com/users/guntiseiduks/followers",
"following_url": "https://api.github.com/users/guntiseiduks/following{/other_user}",
"gists_url": "https://api.github.com/users/guntiseiduks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guntiseiduks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guntiseiduks/subscriptions",
"organizations_url": "https://api.github.com/users/guntiseiduks/orgs",
"repos_url": "https://api.github.com/users/guntiseiduks/repos",
"events_url": "https://api.github.com/users/guntiseiduks/events{/privacy}",
"received_events_url": "https://api.github.com/users/guntiseiduks/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-09-12T09:43:48 | 2023-09-12T13:01:04 | null | NONE | null | ### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Kubeflow Pipelines as part of a full Kubeflow deployment (provides all Kubeflow components and deeper integration with each platform).
Kubeflow is deployed on top of an AWS EKS cluster.
* KFP version: v2
* KFP SDK version: issue identified in the UI; the possible source is in the backend.
### Steps to reproduce
1. In the Kubeflow UI Pipeline Runs section (using the "[Tutorial] Data passing in python components" pipeline run), click on any completed pipeline step.
2. Click on the input artifact's S3 URL (s3://kf-artifacts-store-..../) to open the input artifact; this works fine, i.e. the artifact's contents are visible.
3. But when clicking on the output artifact's ("main-logs") S3 URL, an HTTP 403 Forbidden error is returned.
### Expected result
Both input and output artifact contents are visible in the preview and when clicking on the S3 URL.
### Materials and Reference
Additional observations:
- So far the issue is observed only for S3 objects with the `.log` extension.
- S3 objects with the `.tgz` extension open without a problem.
---
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9975/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9974 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9974/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9974/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9974/events | https://github.com/kubeflow/pipelines/issues/9974 | 1,891,920,375 | I_kwDOB-71UM5wxGn3 | 9,974 | [sdk] packages_to_install not working correctly after upgrading to V2 | {
"login": "Pringled",
"id": 12988240,
"node_id": "MDQ6VXNlcjEyOTg4MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/12988240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pringled",
"html_url": "https://github.com/Pringled",
"followers_url": "https://api.github.com/users/Pringled/followers",
"following_url": "https://api.github.com/users/Pringled/following{/other_user}",
"gists_url": "https://api.github.com/users/Pringled/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pringled/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pringled/subscriptions",
"organizations_url": "https://api.github.com/users/Pringled/orgs",
"repos_url": "https://api.github.com/users/Pringled/repos",
"events_url": "https://api.github.com/users/Pringled/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pringled/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"I've been impacted by this change as well and I think it's a behavior change in the latest version of KFP 2.1.3 only. I was able to get my pipeline running by using KFP 2.0.1.\r\n\r\nI think the \"issue\" might be coming from the following commit https://github.com/kubeflow/pipelines/pull/9886/commits/80881c12ee2a23688cd7bd5e6eeb330228949cf8 that added `--no-deps` to the pip install command.",
"@Lothiraldan Thank you, that was indeed the problem! Everything works as expected when I downgrade to KFP 2.0.1. I do wonder why this change was implemented since it makes it seemingly impossible to properly install packages via 'packages_to_install'. I'll keep this open for now since I'm curious if it's intended behavior or a bug.",
"Having the same issue, feels like a crazy \"feature\" to require manually listing every single transitive dependency before your component can run 😅\r\n\r\nI think the intention behind the PR was to remove the compile time dependencies of `kfp` for the runtime environment, which makes sense. However, because `kfp=={version} --no-deps` is appended to the `packages_to_install` list, all user-specified dependencies get the same `--no-deps` treatment... definitely feels like a regression!"
] | 2023-09-12T08:07:41 | 2023-09-13T11:09:14 | null | NONE | null | ### Environment
#### Relevant package versions
kfp==2.1.3
google-cloud-pipeline-components==2.3.1
google-cloud-aiplatform==1.32.0
kfp-pipeline-spec==0.2.2
kfp-server-api==2.0.1
### Steps to reproduce
After upgrading to KFP V2, initializing a component as follows doesn't work:
```python
from kfp.dsl import component
@component(
base_image="python:3.10",
packages_to_install=["google-cloud-aiplatform"],
)
def my_function():
from google.cloud import aiplatform
```
This gives the error `ModuleNotFoundError: No module named 'google.api_core'`, while in KFP V1 (< 2.0) the same code worked without any issues. The problem appears to lie with `packages_to_install`: adding `google-api-core` to the list then throws `ModuleNotFoundError: No module named 'grpc'`; adding that gives `RuntimeError: Please install the official package with: pip install grpcio`; adding that gives `ModuleNotFoundError: No module named 'google.rpc'`; and finally, adding that gives `ERROR: No matching distribution found for google.rpc`. It seems that the packages in `packages_to_install` do not get their dependencies installed correctly, or there is an incompatibility between `google-cloud-aiplatform` and KFP V2.
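As a stopgap (an assumption based on the comment thread in this dump, which reports that the `--no-deps` behavior was introduced in kfp 2.1.3 and that 2.0.1 is unaffected), pinning the SDK restores transitive dependency resolution for `packages_to_install`:

```shell
# Workaround sketch, not an official fix: pin the SDK to the last version
# that resolved packages_to_install dependencies transitively.
pip install "kfp==2.0.1"
```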
### Expected result
The packages in `packages_to_install` are installed correctly and can be imported within the component. With the following versions and the exact same code, this works:
kfp==1.8.14
google-cloud-pipeline-components==1.0.26
google-cloud-aiplatform==1.18.3
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9974/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9970 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9970/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9970/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9970/events | https://github.com/kubeflow/pipelines/issues/9970 | 1,889,838,578 | I_kwDOB-71UM5wpKXy | 9,970 | Usage of Kubeflow Pipelines v2 | {
"login": "petrpechman",
"id": 41995595,
"node_id": "MDQ6VXNlcjQxOTk1NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/41995595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petrpechman",
"html_url": "https://github.com/petrpechman",
"followers_url": "https://api.github.com/users/petrpechman/followers",
"following_url": "https://api.github.com/users/petrpechman/following{/other_user}",
"gists_url": "https://api.github.com/users/petrpechman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petrpechman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petrpechman/subscriptions",
"organizations_url": "https://api.github.com/users/petrpechman/orgs",
"repos_url": "https://api.github.com/users/petrpechman/repos",
"events_url": "https://api.github.com/users/petrpechman/events{/privacy}",
"received_events_url": "https://api.github.com/users/petrpechman/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"> set gpu resources (.set_gpu_limit())\r\n\r\n`set_gpu_limit()` has been renamed to [`set_accelerator_limit`](https://kubeflow-pipelines.readthedocs.io/en/sdk-2.0.1/source/dsl.html?h=set_acc#kfp.dsl.PipelineTask.set_accelerator_limit) so that is covers both GPU and TPU (on Vertex). Meanwhile, `set_gpu_limit` still exists but gives you a deprecation warning and calls into `set_accelerator_limit` under the hood.\r\n\r\n> mount hostPath (.add_volume(), add_volume_mount())\r\n\r\nVolume mount has been moved into an extension package: \r\nhttps://www.kubeflow.org/docs/components/pipelines/v2/platform-specific-features/\r\n\r\n> .set_security_context()\r\n\r\nKFP v2 doesn't have this feature parity yet. My advice would be submitting a separate feature request with the description of your use case.",
"@chensun Is there a way to use data stored on Nas??\r\n"
] | 2023-09-11T07:20:36 | 2023-09-12T01:35:55 | null | NONE | null | Hello,
we have upgraded our Kubeflow Pipelines to version 2.0.1. We are now having a lot of trouble rewriting our code from version 1.8. We are missing the following features (maybe we just didn't find them in v2):
- setting GPU resources (`.set_gpu_limit()`)
- mounting a hostPath volume (`.add_volume()`, `.add_volume_mount()`)
- `.set_security_context()`
We want to train on multiple GPUs, so we need to set GPU and CPU resources, connect network data storage, and so on.
Is Kubeflow Pipelines v2 good for us? Are these features planned in version 2? Or should we still use Kubeflow Pipelines 1.8? | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9970/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9970/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9962 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9962/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9962/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9962/events | https://github.com/kubeflow/pipelines/issues/9962 | 1,887,425,701 | I_kwDOB-71UM5wf9Sl | 9,962 | [backend] Persistence Agent failing kubeflow-pipeline-mkp-test (2) | {
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-09-08T10:52:36 | 2023-09-08T10:57:54 | null | MEMBER | null | ### Steps to reproduce
Given issue [Persistence agent failing kubeflow-pipeline-mkp-test](https://github.com/kubeflow/pipelines/issues/9904), which appeared after changes to the Persistence Agent manifest files, the mkp-test may fail again after the merge of [this](https://github.com/kubeflow/pipelines/pull/9957) PR, since it also changes the manifest files.
At the moment there is no automatic way to sync changes made in the main manifest files to the manifests in the marketplace Helm chart, so I suggest someone validate the state of the `mkp-test` and apply the changes from the above PR to the Helm chart if needed.
Please close this issue if that is not the case.
cc: @zijianjoy, @chensun
---
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9962/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9960 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9960/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9960/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9960/events | https://github.com/kubeflow/pipelines/issues/9960 | 1,884,738,276 | I_kwDOB-71UM5wVtLk | 9,960 | What's the current behaviour with the run_pipeline? | {
"login": "fclesio",
"id": 10605378,
"node_id": "MDQ6VXNlcjEwNjA1Mzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/10605378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fclesio",
"html_url": "https://github.com/fclesio",
"followers_url": "https://api.github.com/users/fclesio/followers",
"following_url": "https://api.github.com/users/fclesio/following{/other_user}",
"gists_url": "https://api.github.com/users/fclesio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fclesio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fclesio/subscriptions",
"organizations_url": "https://api.github.com/users/fclesio/orgs",
"repos_url": "https://api.github.com/users/fclesio/repos",
"events_url": "https://api.github.com/users/fclesio/events{/privacy}",
"received_events_url": "https://api.github.com/users/fclesio/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@fclesio, I think the best solution for what you are trying to achieve is to compose the tasks together into a sequential DAG using a pipeline. Please take a look at the docs on [data passing and task dependencies](https://www.kubeflow.org/docs/components/pipelines/v2/pipelines/pipeline-basics/#data-passing-and-task-dependencies) and feel free to re-open if there are outstanding issues.",
"Thanks for the answer @connor-mccarthy, but I am afraid that I express myself with a lack of detail. \r\n\r\nFor a single pipeline it's clear that we can set those dependencies; however, if we have multiple and _independent_ pipelines that needs to have some sequencing, that solution does not work. \r\n\r\nOn top of that, let's say that we have a Pipeline (DAG) with almost 1000 lines, if we have 5 similar ones with the same size we're talking about to have a single pipeline with 5000 lines. \r\n\r\nBeing more specific: there is some component that can triggers another pipeline/DAG like [Airflow has the triggerDAGrun](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/operators/trigger_dagrun/index.html)? ",
"I see, @fclesio. Thanks for the detail. KFP doesn't provide this sort of functionality directly, unfortunately."
] | 2023-09-06T20:53:45 | 2023-09-08T14:47:16 | 2023-09-07T22:50:17 | NONE | null | Thanks once again for the great library.
I have an ML pipeline that consists of 3 tasks: ETL > Pre-Processing > Modelling and Deploy.
Since those tasks are sequential, I created 3 pipelines and used [kfp.client.run_pipeline](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/client/client.py#L688C9-L688C21) to run each one sequentially.
One thing I noticed is that the client only _fires_ the pipeline and, if the API returns 200, marks it as a success, regardless of how long the pipeline runs.
In my case `run_pipeline` did fire all the pipelines, but they all ran at the same time (not the desired behaviour).
Is there some way to have a kind of "wait status" that only returns success once the called pipeline has run to completion?
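For illustration, the "wait status" I have in mind is a plain polling loop (generic Python, not KFP-specific; the terminal state names and the `get_status` callable are assumptions for the sketch, standing in for whatever the client's run-status call returns):

```python
import time

# Assumed terminal state names; substitute whatever the backend reports.
TERMINAL_STATES = {"Succeeded", "Failed", "Error", "Skipped"}

def wait_for_completion(get_status, timeout_s=3600.0, poll_s=10.0):
    """Poll a status callable until it reports a terminal state.

    get_status: zero-argument callable returning the run's current state.
    Returns the terminal state, or raises TimeoutError on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"run still {status!r} after {timeout_s}s")
        time.sleep(poll_s)
```

With something like this, each of the 3 pipelines would only be fired after the previous one returns `"Succeeded"`, giving the sequential behaviour described above.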
Thanks once again.
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9960/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9958 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9958/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9958/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9958/events | https://github.com/kubeflow/pipelines/issues/9958 | 1,882,400,753 | I_kwDOB-71UM5wMyfx | 9,958 | chore(frontend): Refactor the class component to functional component | {
"login": "jlyaoyuli",
"id": 56132941,
"node_id": "MDQ6VXNlcjU2MTMyOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlyaoyuli",
"html_url": "https://github.com/jlyaoyuli",
"followers_url": "https://api.github.com/users/jlyaoyuli/followers",
"following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}",
"gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions",
"organizations_url": "https://api.github.com/users/jlyaoyuli/orgs",
"repos_url": "https://api.github.com/users/jlyaoyuli/repos",
"events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlyaoyuli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] | open | false | {
"login": "jlyaoyuli",
"id": 56132941,
"node_id": "MDQ6VXNlcjU2MTMyOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlyaoyuli",
"html_url": "https://github.com/jlyaoyuli",
"followers_url": "https://api.github.com/users/jlyaoyuli/followers",
"following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}",
"gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions",
"organizations_url": "https://api.github.com/users/jlyaoyuli/orgs",
"repos_url": "https://api.github.com/users/jlyaoyuli/repos",
"events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlyaoyuli/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jlyaoyuli",
"id": 56132941,
"node_id": "MDQ6VXNlcjU2MTMyOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlyaoyuli",
"html_url": "https://github.com/jlyaoyuli",
"followers_url": "https://api.github.com/users/jlyaoyuli/followers",
"following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}",
"gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions",
"organizations_url": "https://api.github.com/users/jlyaoyuli/orgs",
"repos_url": "https://api.github.com/users/jlyaoyuli/repos",
"events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlyaoyuli/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-09-05T17:00:17 | 2023-09-05T17:00:17 | null | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9958/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9956 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9956/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9956/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9956/events | https://github.com/kubeflow/pipelines/issues/9956 | 1,879,683,257 | I_kwDOB-71UM5wCbC5 | 9,956 | [bug] Tensorboard gives "No dashboards are active for the current data set." when passing minio path. | {
"login": "pandapool",
"id": 28887240,
"node_id": "MDQ6VXNlcjI4ODg3MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/28887240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pandapool",
"html_url": "https://github.com/pandapool",
"followers_url": "https://api.github.com/users/pandapool/followers",
"following_url": "https://api.github.com/users/pandapool/following{/other_user}",
"gists_url": "https://api.github.com/users/pandapool/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pandapool/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pandapool/subscriptions",
"organizations_url": "https://api.github.com/users/pandapool/orgs",
"repos_url": "https://api.github.com/users/pandapool/repos",
"events_url": "https://api.github.com/users/pandapool/events{/privacy}",
"received_events_url": "https://api.github.com/users/pandapool/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | open | false | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-09-04T07:30:02 | 2023-09-07T22:48:31 | null | NONE | null | ### Environment
<!-- Please fill in those that seem relevant. -->
* How do you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version:
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
1.8.1
* KFP SDK version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
1.8.19
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
Create a function to check logging of scalars:
```python
from collections import namedtuple
from typing import NamedTuple
import json

def foo(runid) -> NamedTuple("EvaluationOutput", [('mlpipeline_ui_metadata', 'UI_metadata')]):
    import subprocess
    subprocess.run(["pip", "install", "tensorboardX==2.5"])
    from tensorboardX import SummaryWriter
    from minio import Minio

    log_dir = "/tensorboard_logs/"
    writer = SummaryWriter(log_dir)
    for i in range(20):
        # Log accuracy to TensorBoard
        acc = (i + 2)**2 / (i + 1)**2 + i - 1
        writer.add_scalar("Accuracy", acc, i + 1)
    writer.flush()

    # `client` and `bucket_name` are defined elsewhere in the original snippet
    from pipelines.utils.minio_utils import upload_local_directory_to_minio
    upload_local_directory_to_minio(client, bucket_name, "/tensorboard_logs/", "foo_logs")

    metadata = {
        'outputs': [{
            'type': 'tensorboard',
            'source': f'http://<Minio_IP>:<Minio_Port>/minio/datasets/foo_logs',
        }]
    }
    out_tuple = namedtuple("EvaluationOutput", ["mlpipeline_ui_metadata"])
    return out_tuple(json.dumps(metadata))
```
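One thing worth trying (an assumption on my part, not verified against this KFP version): the tensorboard viewer's `source` usually takes an object-store path such as `minio://<bucket>/<prefix>` rather than the Minio web-console HTTP URL. A sketch of the metadata in that form:

```python
import json

# Hypothetical variant: point `source` at the object-store path of the
# event files instead of the Minio console URL.
metadata = {
    "outputs": [{
        "type": "tensorboard",
        "source": "minio://datasets/foo_logs",
    }]
}
serialized = json.dumps(metadata)
```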
![image](https://github.com/kubeflow/pipelines/assets/28887240/ec51bf38-fb50-48bc-97c0-7b50e647e71d)
### Expected result
<!-- What should the correct behavior be? -->
Tensorboard should show a graph for the scalars written to events file. The issue is not with the events file as it works for a locally deployed tensorboard server.
![image](https://github.com/kubeflow/pipelines/assets/28887240/9a74e4fe-01a9-4d57-a8c4-9225bed57007)
### Materials and reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9956/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9950 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9950/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9950/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9950/events | https://github.com/kubeflow/pipelines/issues/9950 | 1,875,434,182 | I_kwDOB-71UM5vyNrG | 9,950 | [bug] set_retry not working | {
"login": "dugarsumitcheck24",
"id": 141853826,
"node_id": "U_kgDOCHSEgg",
"avatar_url": "https://avatars.githubusercontent.com/u/141853826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dugarsumitcheck24",
"html_url": "https://github.com/dugarsumitcheck24",
"followers_url": "https://api.github.com/users/dugarsumitcheck24/followers",
"following_url": "https://api.github.com/users/dugarsumitcheck24/following{/other_user}",
"gists_url": "https://api.github.com/users/dugarsumitcheck24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dugarsumitcheck24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dugarsumitcheck24/subscriptions",
"organizations_url": "https://api.github.com/users/dugarsumitcheck24/orgs",
"repos_url": "https://api.github.com/users/dugarsumitcheck24/repos",
"events_url": "https://api.github.com/users/dugarsumitcheck24/events{/privacy}",
"received_events_url": "https://api.github.com/users/dugarsumitcheck24/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"**In Kfp 2.0.1 retry on policy is not present.**\r\n\r\n**Old Code**\r\n```\r\n def set_retry(self,\r\n num_retries: int,\r\n policy: Optional[str] = None,\r\n backoff_duration: Optional[str] = None,\r\n backoff_factor: Optional[float] = None,\r\n backoff_max_duration: Optional[str] = None):\r\n \"\"\"Sets the number of times the task is retried until it's declared\r\n failed.\r\n\r\n Args:\r\n num_retries: Number of times to retry on failures.\r\n policy: Retry policy name.\r\n backoff_duration: The time interval between retries. Defaults to an\r\n immediate retry. In case you specify a simple number, the unit\r\n defaults to seconds. You can also specify a different unit, for\r\n instance, 2m (2 minutes), 1h (1 hour).\r\n backoff_factor: The exponential backoff factor applied to\r\n backoff_duration. For example, if backoff_duration=\"60\"\r\n (60 seconds) and backoff_factor=2, the first retry will happen\r\n after 60 seconds, then after 120, 240, and so on.\r\n backoff_max_duration: The maximum interval that can be reached with\r\n the backoff strategy.\r\n \"\"\"\r\n if policy is not None and policy not in ALLOWED_RETRY_POLICIES:\r\n raise ValueError('policy must be one of: %r' %\r\n (ALLOWED_RETRY_POLICIES,))\r\n\r\n self.num_retries = num_retries\r\n self.retry_policy = policy\r\n self.backoff_factor = backoff_factor\r\n self.backoff_duration = backoff_duration\r\n self.backoff_max_duration = backoff_max_duration\r\n return self\r\n```\r\n\r\n**New Code**\r\n```\r\ndef set_retry(self,\r\n num_retries: int,\r\n backoff_duration: Optional[str] = None,\r\n backoff_factor: Optional[float] = None,\r\n backoff_max_duration: Optional[str] = None) -> 'PipelineTask':\r\n\r\n Args:\r\n num_retries : Number of times to retry on failure.\r\n backoff_duration: Number of seconds to wait before triggering a retry. Defaults to ``'0s'`` (immediate retry).\r\n backoff_factor: Exponential backoff factor applied to ``backoff_duration``. 
For example, if ``backoff_duration=\"60\"`` (60 seconds) and ``backoff_factor=2``, the first retry will happen after 60 seconds, then again after 120, 240, and so on. Defaults to ``2.0``.\r\n backoff_max_duration: Maximum duration during which the task will be retried. Maximum duration is 1 hour (3600s). Defaults to ``'3600s'``.\r\n\r\n Returns:\r\n Self return to allow chained setting calls.\r\n \"\"\"\r\n self._task_spec.retry_policy = structures.RetryPolicy(\r\n max_retry_count=num_retries,\r\n backoff_duration=backoff_duration,\r\n backoff_factor=backoff_factor,\r\n backoff_max_duration=backoff_max_duration,\r\n )\r\n return self\r\n```\r\n\r\nFYI @chensun ",
"retry-random-failures-xgqdv-1470953391 0/2 Completed 0 69s\r\nretry-random-failures-xgqdv-2197860257 0/2 Error 0 59s\r\nretry-random-failures-xgqdv-2342167526 0/2 Completed 0 69s\r\nretry-random-failures-xgqdv-3239894960 0/2 Error 0 58s\r\nretry-random-failures-xgqdv-611022813 0/2 Completed 0 79s"
] | 2023-08-31T12:31:13 | 2023-09-02T13:17:07 | null | NONE | null | ### Environment
<!-- Please fill in those that seem relevant. -->
* How do you deploy Kubeflow Pipelines (KFP)?
* KFP version: 2.0.0
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* KFP SDK version: 2.0.1
### Steps to reproduce
You can run the following code to reproduce the issue.
```
from kfp import dsl

@dsl.component
def random_failure_op(exit_codes: str):
    """A component that fails randomly."""
    import random
    import sys
    exit_code = int(random.choice(exit_codes.split(",")))
    print(exit_code)
    sys.exit(exit_code)

@dsl.pipeline(
    name="retry-random-failures",
    description="The pipeline includes two steps which fail randomly. It shows how to use ContainerOp(...).set_retry(...).",
)
def retry_random_failures():
    op1 = random_failure_op(exit_codes="0,1,2,3").set_retry(10)
    op2 = random_failure_op(exit_codes="0,1").set_retry(5)
### Expected result
The component should retry on failure, but it never does, not even once. In the pipeline spec I also see the following policy:
```
retryPolicy:
backoffDuration: 0s
backoffFactor: 2
backoffMaxDuration: 3600s
maxRetryCount: 10
```
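For reference, my reading of how those fields combine into a delay schedule (a sketch of the documented semantics, not the backend's actual code):

```python
def backoff_schedule(max_retry_count, backoff_duration_s, backoff_factor):
    """Delay (seconds) before retry n under exponential backoff: d * f**n."""
    return [backoff_duration_s * backoff_factor ** n for n in range(max_retry_count)]

# With backoffDuration "0s", every computed delay is 0, i.e. immediate retries.
```

So with the spec above (duration `0s`, factor `2`) every retry would simply be immediate; the bug reported here is that no retry happens at all.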
### Materials and reference
This is the documentation I referred - https://kubeflow-pipelines.readthedocs.io/en/latest/source/dsl.html#kfp.dsl.PipelineTask.set_retry
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
/area backend
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9950/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9949 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9949/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9949/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9949/events | https://github.com/kubeflow/pipelines/issues/9949 | 1,875,251,131 | I_kwDOB-71UM5vxg-7 | 9,949 | Deprecated setup.cfg used in pyyaml@5.4.1 version not compatible with Python versions 3.10 and above | {
"login": "pdebskiIBM",
"id": 143707234,
"node_id": "U_kgDOCJDMYg",
"avatar_url": "https://avatars.githubusercontent.com/u/143707234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdebskiIBM",
"html_url": "https://github.com/pdebskiIBM",
"followers_url": "https://api.github.com/users/pdebskiIBM/followers",
"following_url": "https://api.github.com/users/pdebskiIBM/following{/other_user}",
"gists_url": "https://api.github.com/users/pdebskiIBM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdebskiIBM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdebskiIBM/subscriptions",
"organizations_url": "https://api.github.com/users/pdebskiIBM/orgs",
"repos_url": "https://api.github.com/users/pdebskiIBM/repos",
"events_url": "https://api.github.com/users/pdebskiIBM/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdebskiIBM/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"This is fixed. See [requirements](https://github.com/kubeflow/pipelines/blob/3b8cea060fc3088520666fea26e6452bda2fdb15/sdk/python/requirements.in#L22)."
] | 2023-08-31T10:34:16 | 2023-09-07T22:36:55 | 2023-09-07T22:36:54 | NONE | null | ### Description
`pip install -r requirements.txt` fails on Python 3.10 and 3.11 due to the deprecated `setup.cfg` in pyyaml 5.4.1, which uses `license_file` instead of `license_files`. According to PyYAML, this problem is fixed, and the recommended version is 6.0.1.
Fixing this issue may require updating the requirements to accept `pyyaml>=5.4.1`.
### Materials and reference
https://github.com/yaml/pyyaml/issues/724
### Environment
<!-- Please fill in those that seem relevant. -->
* How do you deploy Kubeflow Pipelines (KFP)? Private fork
* KFP version: 1.8.19
* KFP SDK version: 1.8.22
### Steps to reproduce
```
$ pip install -r requirements.txt
Collecting PyYAML==5.4.1 (from -r requirements.txt (line 43))
Using cached .../PyYAML-5.4.1.tar.gz (175 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [62 lines of output]
.../python3.9/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
!!
********************************************************************************
The license_file parameter is deprecated, use license_files instead.
By 2023-Oct-30, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
********************************************************************************
!!
...
AttributeError: cython_sources
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
```
### Expected result
`pip install -r requirements.txt` installs all dependencies successfully.
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9949/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9949/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9942 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9942/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9942/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9942/events | https://github.com/kubeflow/pipelines/issues/9942 | 1,871,704,455 | I_kwDOB-71UM5vj_GH | 9,942 | [feature] Add support for Habana device cards | {
"login": "ehudyonasi",
"id": 25056870,
"node_id": "MDQ6VXNlcjI1MDU2ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25056870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehudyonasi",
"html_url": "https://github.com/ehudyonasi",
"followers_url": "https://api.github.com/users/ehudyonasi/followers",
"following_url": "https://api.github.com/users/ehudyonasi/following{/other_user}",
"gists_url": "https://api.github.com/users/ehudyonasi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehudyonasi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehudyonasi/subscriptions",
"organizations_url": "https://api.github.com/users/ehudyonasi/orgs",
"repos_url": "https://api.github.com/users/ehudyonasi/repos",
"events_url": "https://api.github.com/users/ehudyonasi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehudyonasi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1260031624,
"node_id": "MDU6TGFiZWwxMjYwMDMxNjI0",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/samples",
"name": "area/samples",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-08-29T13:40:47 | 2023-08-29T13:40:50 | null | NONE | null | ### Feature Area
/area sdk
/area samples
### What feature would you like to see?
I would like the option to choose whether to use Nvidia GPU cards or Habana PCI cards.
The only difference is in the resource requests: changing `nvidia.com/gpu` to `habana.ai/gaudi`.
On the backend we are running our device plugin and other tooling to manage the driver.
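At the pod level the change amounts to swapping the extended-resource key in the container's resource limits; a minimal sketch of the dict shape (the v1 SDK's `op.container.add_resource_limit('habana.ai/gaudi', '1')` may already produce this, but I have not verified that):

```python
def accelerator_limits(resource_key: str, count: int) -> dict:
    """Container resource limits requesting `count` devices of an extended resource."""
    return {"limits": {resource_key: str(count)}}
```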
### What is the use case or pain point?
Currently I cannot work on Habana device cards. With this feature I would be able to train models with their solution.
### Is there a workaround currently?
We have custom YAMLs to run on the training-operator (TF jobs, PyTorch, etc.) with specific mounts, but we would like to add this also to the UI, notebooks, etc.
---
If you can help me with how to do it, that would be great :)
I am happy to submit that feature request of course.
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9942/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9941 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9941/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9941/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9941/events | https://github.com/kubeflow/pipelines/issues/9941 | 1,871,084,656 | I_kwDOB-71UM5vhnxw | 9,941 | Kubeflow error:Invalid input error: Job has no experiment. code:3 message | {
"login": "srikarjavvaji",
"id": 59558002,
"node_id": "MDQ6VXNlcjU5NTU4MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/59558002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/srikarjavvaji",
"html_url": "https://github.com/srikarjavvaji",
"followers_url": "https://api.github.com/users/srikarjavvaji/followers",
"following_url": "https://api.github.com/users/srikarjavvaji/following{/other_user}",
"gists_url": "https://api.github.com/users/srikarjavvaji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/srikarjavvaji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srikarjavvaji/subscriptions",
"organizations_url": "https://api.github.com/users/srikarjavvaji/orgs",
"repos_url": "https://api.github.com/users/srikarjavvaji/repos",
"events_url": "https://api.github.com/users/srikarjavvaji/events{/privacy}",
"received_events_url": "https://api.github.com/users/srikarjavvaji/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"You first need to create an experiment and then use its id in the run, passing it as a parameter.",
"> \r\n\r\nPlease provide advice on how to create an [experiment](https://www.kubeflow.org/docs/components/pipelines/v2/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v2beta1-experiments-post) using [Kubeflow API](https://www.kubeflow.org/docs/components/pipelines/v1/reference/api/kubeflow-pipeline-api-spec/) as the error is **Job has no experiment** .I'm literally stuck and need assistance to move on. Also how to create [Job](https://www.kubeflow.org/docs/components/pipelines/v1/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-jobs-post).",
"Just check the API specification for the case:\r\nhttps://www.kubeflow.org/docs/components/pipelines/v1/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-experiments-post",
"> Just check the API specification for the case: https://www.kubeflow.org/docs/components/pipelines/v1/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-experiments-post\r\n\r\ncan you please give me one example on creation of [Experiment ](https://www.kubeflow.org/docs/components/pipelines/v1/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-experiments-post)using kubeflow API"
] | 2023-08-29T07:41:43 | 2023-09-07T22:47:11 | null | NONE | null | I am following tutorial of Kubeflow Pipelines API: [[Kubeflow](https://www.kubeflow.org/docs/components/pipelines/v1/tutorials/api-pipelines/)] & [Kubeflow Pipelines API](https://www.kubeflow.org/docs/components/pipelines/v1/reference/api/kubeflow-pipeline-api-spec/)
I've installed Kubeflow on a MicroK8s and started experimenting with the Kubeflow Pipelines API, I was able to upload a pipeline to the central dashboard by using [kubeflow upload API](https://www.kubeflow.org/docs/components/pipelines/v1/reference/api/kubeflow-pipeline-api-spec/), but when I tried utilizing the [Kubeflow RUN API](https://www.kubeflow.org/docs/components/pipelines/v1/reference/api/kubeflow-pipeline-api-spec/#operation--apis-v1beta1-runs-post), I was unable to establish a RUN. As RUN_ID is showing as null but in the [document](https://www.kubeflow.org/docs/components/pipelines/v1/tutorials/api-pipelines/), by running this command curl ${SVC}/apis/v1beta1/runs/${RUN_ID} | jq it is showing the value of RUN_ID.Can anyone offer assistance.
I tried several various approaches, but they were unsuccessful, I was literally struck at my work regarding this issue can anyone please help on this issue..
```
RUN_ID=$((
curl -H "Content-Type: application/json" -X POST ${SVC}/apis/v1beta1/runs \
-d @- << EOF
{
  "name": "${PIPELINE_NAME}_run",
  "pipeline_spec": {
    "pipeline_id": "${PIPELINE_ID}"
  }
}
EOF
) | jq -r .run.id)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   388  100   277  100   111  23083   9250 --:--:-- --:--:-- --:--:-- 32333

$ echo $RUN_ID
null

$ curl -H "Content-Type: application/json" -X POST ${SVC}/apis/v1beta1/runs \
-d @- << EOF
{
  "name": "${PIPELINE_NAME}_run",
  "pipeline_spec": {
    "pipeline_id": "${PIPELINE_ID}"
  }
}
EOF
{"error":"Invalid input error: Job has no experiment.","code":3,"message":"Invalid input error: Job has no experiment.","details":[{"@type":"type.googleapis.com/api.Error","error_message":"Job has no experiment.","error_details":"Invalid input error: Job has no experiment."}]}
```
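The error says the job has no experiment, so the run request likely needs an experiment attached. A sketch of the two v1beta1 payloads as I read the API spec (field names are my reading of the spec, not verified against this deployment): first POST an experiment to `${SVC}/apis/v1beta1/experiments`, read `.id` from the response, then reference it in the run body.

```python
import json

def experiment_payload(name: str) -> dict:
    # POST body for ${SVC}/apis/v1beta1/experiments
    return {"name": name}

def run_payload(name: str, pipeline_id: str, experiment_id: str) -> dict:
    # POST body for ${SVC}/apis/v1beta1/runs, owned by an experiment
    return {
        "name": name,
        "pipeline_spec": {"pipeline_id": pipeline_id},
        "resource_references": [{
            "key": {"type": "EXPERIMENT", "id": experiment_id},
            "relationship": "OWNER",
        }],
    }

print(json.dumps(run_payload("demo_run", "<pipeline-id>", "<experiment-id>"), indent=2))
```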
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9941/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9940 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9940/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9940/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9940/events | https://github.com/kubeflow/pipelines/issues/9940 | 1,870,687,285 | I_kwDOB-71UM5vgGw1 | 9,940 | [bug] Always 404 on kfp sdk | {
"login": "d0m3n1cc",
"id": 143463250,
"node_id": "U_kgDOCI0TUg",
"avatar_url": "https://avatars.githubusercontent.com/u/143463250?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d0m3n1cc",
"html_url": "https://github.com/d0m3n1cc",
"followers_url": "https://api.github.com/users/d0m3n1cc/followers",
"following_url": "https://api.github.com/users/d0m3n1cc/following{/other_user}",
"gists_url": "https://api.github.com/users/d0m3n1cc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d0m3n1cc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d0m3n1cc/subscriptions",
"organizations_url": "https://api.github.com/users/d0m3n1cc/orgs",
"repos_url": "https://api.github.com/users/d0m3n1cc/repos",
"events_url": "https://api.github.com/users/d0m3n1cc/events{/privacy}",
"received_events_url": "https://api.github.com/users/d0m3n1cc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"Which version of the KFP BE do you have deployed?\r\n\r\ncc @chensun"
] | 2023-08-29T00:25:14 | 2023-09-07T22:44:39 | null | NONE | null | I was using kfp==2.0.0b12 and everything was working great. But as soon as i got into 2.0.1 I am always getting 404 when trying to use methods from client, like this:
```
kfp_server_api.exceptions.ApiException: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'x-powered-by': 'Express', 'content-security-policy': "default-src 'none'", 'x-content-type-options': 'nosniff', 'content-type': 'text/html; charset=utf-8', 'content-length': '170', 'date': 'Mon, 28 Aug 2023 23:52:33 GMT', 'x-envoy-upstream-service-time': '2', 'server': 'istio-envoy'})
HTTP response body: <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /pipeline/apis/v2beta1/pipelines</pre>
</body>
</html>
```
I am now using kfp==2.0.1, kfp-pipeline-spec==0.2.2, and kfp-server-api==2.0.1, and I am creating the client this way:
```python
import requests
from kfp import Client

# HOST, USERNAME, PASSWORD, and NAMESPACE are defined elsewhere.


def get_session_cookie() -> str:
    session = requests.Session()
    response = session.get(HOST)
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    data = {"login": USERNAME, "password": PASSWORD}
    session.post(response.url, headers=headers, data=data)
    return session.cookies.get_dict()["authservice_session"]


def get_client() -> Client:
    return Client(
        host=f"{HOST}/pipeline",
        cookies=f"authservice_session={get_session_cookie()}",
        namespace=NAMESPACE,
    )
```
It used to work great, but now, with KFP SDK 2.0.1, calling something like `list_pipelines()` always raises the error above.
What am I doing wrong here?
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9940/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9937 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9937/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9937/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9937/events | https://github.com/kubeflow/pipelines/issues/9937 | 1,869,601,448 | I_kwDOB-71UM5vb9qo | 9,937 | [feature] Make Persistence Agent use SA Token when calling KF APIServer endpoints | {
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] | closed | false | {
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"/assign @difince "
] | 2023-08-28T11:48:07 | 2023-09-07T23:49:34 | 2023-09-07T23:49:34 | MEMBER | null | ### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
<!-- /area frontend -->
/area backend
<!-- /area sdk -->
<!-- /area samples -->
<!-- /area components -->
### What feature would you like to see?
Currently, when the Persistence Agent (PA) calls the KF Pipelines APIs **readArtifacts** and **ReportMetrics**, it authenticates as a user.
See:
https://github.com/kubeflow/pipelines/blob/110e0824812883b74c73b26603a78d8cc00548d5/backend/src/agent/persistence/client/pipeline_client.go#L196-L204
The proper way to handle service-to-service authentication/authorization is to use a Service Account token. The PA SA [token](https://github.com/kubeflow/pipelines/blob/110e0824812883b74c73b26603a78d8cc00548d5/manifests/kustomize/base/pipeline/ml-pipeline-persistenceagent-deployment.yaml#L39-L41C19) has already been introduced by this [PR](https://github.com/kubeflow/pipelines/pull/9699).
My suggestion is to use the PA SA token for **readArtifacts** and **ReportMetrics** as well.
That way, communication between the PA and the KF Pipelines API service is done correctly, the code becomes clearer and shorter, and the unnecessary [requests](https://github.com/kubeflow/pipelines/blob/110e0824812883b74c73b26603a78d8cc00548d5/backend/src/agent/persistence/client/kubernetes_core.go#L33-L46) to the kube-apiserver can be removed.
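For illustration, consuming a projected service account token amounts to reading the mounted file and sending it as a bearer credential. The sketch below is stdlib-only Python (the agent itself is written in Go); the mount path mirrors the token projection in the persistence agent manifest linked above:

```python
from pathlib import Path

# Mount path mirrors the projected SA token volume in the PA deployment manifest.
DEFAULT_TOKEN_PATH = Path(
    "/var/run/secrets/kubeflow/tokens/persistenceagent-sa-token"
)


def authorization_header(token_path: Path = DEFAULT_TOKEN_PATH) -> dict:
    """Read the projected service account token and wrap it as a Bearer header.

    Projected tokens are rotated by the kubelet, so callers should re-read the
    file rather than caching the token for the lifetime of the process.
    """
    token = token_path.read_text().strip()
    return {"Authorization": f"Bearer {token}"}
```

The API server can then validate the received token with a Kubernetes `TokenReview`, checking that the audience matches the one configured in the projection (`pipelines.kubeflow.org`).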
### What is the use case or pain point?
### Is there a workaround currently?
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9937/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9933 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9933/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9933/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9933/events | https://github.com/kubeflow/pipelines/issues/9933 | 1,867,215,948 | I_kwDOB-71UM5vS3RM | 9,933 | module 'kfp.components' has no attribute 'create_component_from_func' | {
"login": "Aman123lug",
"id": 94223645,
"node_id": "U_kgDOBZ29HQ",
"avatar_url": "https://avatars.githubusercontent.com/u/94223645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aman123lug",
"html_url": "https://github.com/Aman123lug",
"followers_url": "https://api.github.com/users/Aman123lug/followers",
"following_url": "https://api.github.com/users/Aman123lug/following{/other_user}",
"gists_url": "https://api.github.com/users/Aman123lug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aman123lug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aman123lug/subscriptions",
"organizations_url": "https://api.github.com/users/Aman123lug/orgs",
"repos_url": "https://api.github.com/users/Aman123lug/repos",
"events_url": "https://api.github.com/users/Aman123lug/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aman123lug/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"For KFP SDK v2, please use the `@dsl.component` decorator. See [documentation on Lightweight Python Components](https://www.kubeflow.org/docs/components/pipelines/v2/components/lightweight-python-components/) for more information."
] | 2023-08-25T15:02:38 | 2023-09-07T22:42:16 | 2023-09-07T22:42:15 | NONE | null | ### Environment
I am trying to build a Kubeflow pipeline. I am using version 2.0.1.
I am getting this error:
`module 'kfp.components' has no attribute 'create_component_from_func'`
I also followed the KFP v2 documentation, which already contains sample functions and pipelines for testing. I copied them as-is, compiled them into a YAML file, and uploaded that file through the Kubeflow UI installed in my cluster.
But then I keep getting this error, again and again:
`Cannot get MLMD object from meta store`
A pop-up is also shown that says:
`Unknown content type received`
Is there any other way to import this, or is this a version error?
![263000793-0a4ecc0f-d9d1-409d-988a-fbd5d75ad4cd](https://github.com/kubeflow/pipelines/assets/94223645/a4a871c2-1ae5-4600-8086-5f870a6b0d93)
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9933/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9924 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9924/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9924/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9924/events | https://github.com/kubeflow/pipelines/issues/9924 | 1,863,477,869 | I_kwDOB-71UM5vEmpt | 9,924 | [backend] Cannot inject environment variables and volumes into components with the KFP CLI | {
"login": "b-feldmann",
"id": 7108970,
"node_id": "MDQ6VXNlcjcxMDg5NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7108970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/b-feldmann",
"html_url": "https://github.com/b-feldmann",
"followers_url": "https://api.github.com/users/b-feldmann/followers",
"following_url": "https://api.github.com/users/b-feldmann/following{/other_user}",
"gists_url": "https://api.github.com/users/b-feldmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/b-feldmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/b-feldmann/subscriptions",
"organizations_url": "https://api.github.com/users/b-feldmann/orgs",
"repos_url": "https://api.github.com/users/b-feldmann/repos",
"events_url": "https://api.github.com/users/b-feldmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/b-feldmann/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-08-23T14:40:25 | 2023-08-23T14:40:25 | null | NONE | null | ### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Full Kubeflow Deployment
* KFP version: 1.7.0 (we are using Kubeflow 1.7.0)
* KFP SDK version: 2.0.1
### Steps to reproduce
The injection of environment variables and secrets into components does not work correctly with KFP v2. Please see the Python code and the generated YAML below.
We could use volumes and environment variables in KFP v1, but after updating to KFP v2 and Kubeflow 1.7 the (updated) code no longer works. The pipeline YAML looks fine from our perspective, but in Kubernetes the pipeline component pod does not get the required configuration (volume mount, env).
### Expected result
The Kubernetes Pod has access to the volumes and environment variables and we can use them inside a Kubeflow pipeline component.
### Materials and Reference
We are using the following helper to add the secret via the KFP SDK (`kfp-kubernetes`):
```python
from kfp import kubernetes


def add_gsc_access(task):
    task.set_env_variable(
        name="GOOGLE_APPLICATION_CREDENTIALS", value="/var/secrets/google/key.json"
    )
    kubernetes.use_secret_as_volume(
        task,
        secret_name="pubsub-key",
        mount_path="/var/secrets/google",
    )
```
Generated Pipeline YAML:
```yaml
components:
  comp-download-from-gcs:
    executorLabel: exec-download-from-gcs
    inputDefinitions:
      parameters:
        gcs_path:
          parameterType: STRING
    outputDefinitions:
      artifacts:
        data:
          artifactType:
            schemaTitle: system.Artifact
            schemaVersion: 0.0.1
  ...
deploymentSpec:
  executors:
    exec-download-from-gcs:
      container:
        command:
        - ...
```
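For debugging, the pod Kubernetes actually starts (`kubectl get pod <pod-name> -o yaml`) can be compared against what the helper above should have produced. The fragments below are an illustrative expectation built from the `add_gsc_access` snippet, not real cluster output, and the volume name the backend generates may differ from the secret name used here:

```python
# Illustrative pod-spec fragments expected if the env var and secret volume
# had been applied to the component container. Purely for comparison against
# `kubectl get pod <pod-name> -o yaml`.
expected_env = {
    "name": "GOOGLE_APPLICATION_CREDENTIALS",
    "value": "/var/secrets/google/key.json",
}
expected_volume_mount = {"name": "pubsub-key", "mountPath": "/var/secrets/google"}
expected_volume = {"name": "pubsub-key", "secret": {"secretName": "pubsub-key"}}
```

If none of these appear in the running pod, the platform-specific (`kubernetes`) part of the compiled spec is being ignored by the backend, which points at a KFP backend too old to understand the kfp-kubernetes extension.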
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9924/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9924/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9906 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9906/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9906/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9906/events | https://github.com/kubeflow/pipelines/issues/9906 | 1,857,198,876 | I_kwDOB-71UM5uspsc | 9,906 | [frontend] No Logs tab in UI | {
"login": "Saurav-D",
"id": 26328148,
"node_id": "MDQ6VXNlcjI2MzI4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/26328148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saurav-D",
"html_url": "https://github.com/Saurav-D",
"followers_url": "https://api.github.com/users/Saurav-D/followers",
"following_url": "https://api.github.com/users/Saurav-D/following{/other_user}",
"gists_url": "https://api.github.com/users/Saurav-D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saurav-D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saurav-D/subscriptions",
"organizations_url": "https://api.github.com/users/Saurav-D/orgs",
"repos_url": "https://api.github.com/users/Saurav-D/repos",
"events_url": "https://api.github.com/users/Saurav-D/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saurav-D/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Hi @Saurav-D,\r\n\r\nLogs are available in KFP 2.0.0. It is expected to be included in Kubeflow 1.8 (releasing in 2023 Q4)"
] | 2023-08-18T19:33:17 | 2023-08-24T22:48:53 | 2023-08-24T22:48:52 | NONE | null | ### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
AWS, Terraform, Vanilla
* KFP version:
KFP 1.7
pipeline: 2.0.0-alpha.7
### Steps to reproduce
Run any pipeline
### Expected result
Logs tab with logs. Right now, I just see Input/Output and Task Details. Also, I don't see pod ID.
### Materials and Reference
![Screenshot 2023-08-18 at 12 29 09 PM](https://github.com/kubeflow/pipelines/assets/26328148/d48cc35f-72c3-4d1b-a6b3-749af0b67638)
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9906/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9904 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9904/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9904/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9904/events | https://github.com/kubeflow/pipelines/issues/9904 | 1,856,240,751 | I_kwDOB-71UM5uo_xv | 9,904 | [backend] Persistence agent failing `kubeflow-pipeline-mkp-test` | {
"login": "zijianjoy",
"id": 37026441,
"node_id": "MDQ6VXNlcjM3MDI2NDQx",
"avatar_url": "https://avatars.githubusercontent.com/u/37026441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijianjoy",
"html_url": "https://github.com/zijianjoy",
"followers_url": "https://api.github.com/users/zijianjoy/followers",
"following_url": "https://api.github.com/users/zijianjoy/following{/other_user}",
"gists_url": "https://api.github.com/users/zijianjoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijianjoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijianjoy/subscriptions",
"organizations_url": "https://api.github.com/users/zijianjoy/orgs",
"repos_url": "https://api.github.com/users/zijianjoy/repos",
"events_url": "https://api.github.com/users/zijianjoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijianjoy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"### Exploration\r\n\r\nI granted `roles/iam.serviceAccountTokenCreator` to the GCE default service account: https://cloud.google.com/iam/docs/service-account-permissions#token-creator-role, and retried the mkt-test. If this permission unblocks the test, then we will need to do one of the followings:\r\n\r\n1. Make SA token requirement configurable (which means it can become optional), it allows us to exclude volume token projection for KFP standalone mode. We still need to clearly document the permission needed in the full kubeflow mode.\r\n2. Develop a solution for full Kubeflow to make workload identity association for the persistence-agent-sa, or the default-editor which is configured by default in full Kubeflow: https://googlecloudplatform.github.io/kubeflow-gke-docs/docs/pipelines/authentication-pipelines/#cluster-setup-to-use-workload-identity-for-full-kubeflow. Update all existing clusters to grant this additional permission. Put up a warning to all users about this breaking change and how should they unblock the KFP deployment. Update documentations for KFP standalone and full Kubeflow.",
"Hi @zijianjoy Thanks for reporting this issue. \r\nWhat is the result - does your change unblock the test? If not the root cause could be some kustomize file I missed ?! \r\nI would like to clearly state that in the current (the new) implementation, PA( Persistence Agent) authentication is not related to the multiuser mode at all. No matter the multiuser mode, PA is the only one that has the permissions and the right identity to call these two endpoints (no matter the namespace the reported workflow belongs to). \r\nFrom the log message, it looks like in GCE the SA token hasn't been created. As I do not have any knowledge of GCE .. I will totally count on your decision and if really needed I can work to make PA auth configurable .. \r\n",
"Yes, in a normal kubeflow there is a serviceaccount\r\n\r\n```\r\n volumeMounts:\r\n - mountPath: /var/run/secrets/kubeflow/tokens\r\n name: persistenceagent-sa-token\r\n serviceAccountName: ml-pipeline-persistenceagent\r\n volumes:\r\n - name: persistenceagent-sa-token\r\n projected:\r\n sources:\r\n - serviceAccountToken:\r\n path: persistenceagent-sa-token\r\n expirationSeconds: 3600\r\n audience: pipelines.kubeflow.org\r\n```\r\n\r\nno such file looks like a GKE specific issue.\r\n\r\nCan you check manually whether `/var/run/secrets/kubeflow/tokens` exists in the pod? `ls -lah` recursively for that directory tree might help\r\n\r\n\"volume token projection\" is also used in other parts of Kubeflow and should be there by default on a kubernetes cluster.",
"Thank you both @difince and @juliusvonkohout for the help. \r\n\r\nThe IAM role grant doesn't make a difference in terms of the error message. Persistence agent is in crashloop as well.\r\n\r\nI added the volume mount sections mentioned by @juliusvonkohout to a released KFP persistence agent's yaml file. After manually adding the volume mount, the SA token exists.\r\n\r\nNext step is to try using a persistence agent image that is built by `kubeflow-pipeline-mkp-test` in a self-deployed cluster to see if it works.",
"Current suspect:\r\n\r\nWe haven't upgrade the mkp manifest for persistence-agent in https://github.com/kubeflow/pipelines/blob/master/manifests/gcp_marketplace/chart/kubeflow-pipelines/templates/pipeline.yaml#L532C1-L563. It might be the reason causing this failure.",
"Update: PR https://github.com/kubeflow/pipelines/pull/9908 is passing `mkp-test`."
] | 2023-08-18T07:49:37 | 2023-08-21T20:08:43 | 2023-08-21T20:08:43 | COLLABORATOR | null | ### Environment
Currently `/test kubeflow-pipeline-mkp-test` is failing on HEAD.
### Steps to reproduce
In https://github.com/kubeflow/pipelines/commit/cb18d00bbbaed9cd77fc50dce739ed62c72b2356#diff-7df17678529866306ef6665ffbeeda377a1e869aca24856f7cf8a99891877fe8, service account token projection was introduced to enable auth. However, it is failing the `kubeflow-pipeline-mkp-test` because the persistence agent is unable to find the service account token. Error message below:
![image](https://github.com/kubeflow/pipelines/assets/37026441/fe516db0-f9eb-4613-86ad-51bd81bf10e4)
cc @difince @chensun
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9904/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9893 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9893/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9893/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9893/events | https://github.com/kubeflow/pipelines/issues/9893 | 1,855,183,517 | I_kwDOB-71UM5uk9qd | 9,893 | [bug] Can't increase/attach shared memory to pipeline task in kfp sdk v2, e.g. PyTorch training fails. | {
"login": "hsteude",
"id": 26025898,
"node_id": "MDQ6VXNlcjI2MDI1ODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26025898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsteude",
"html_url": "https://github.com/hsteude",
"followers_url": "https://api.github.com/users/hsteude/followers",
"following_url": "https://api.github.com/users/hsteude/following{/other_user}",
"gists_url": "https://api.github.com/users/hsteude/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hsteude/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsteude/subscriptions",
"organizations_url": "https://api.github.com/users/hsteude/orgs",
"repos_url": "https://api.github.com/users/hsteude/repos",
"events_url": "https://api.github.com/users/hsteude/events{/privacy}",
"received_events_url": "https://api.github.com/users/hsteude/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Hi @hsteude!\r\n\r\nYou can specify memory requests using SDK: `.set_memory_request()` and `.set_memory_limit()`: https://kubeflow-pipelines.readthedocs.io/en/sdk-2.0.1/source/dsl.html?h=set_memory#kfp.dsl.PipelineTask.set_memory_request",
"Hi @gkcalat and thanks for your answer!\r\n\r\nUnfortunately, using `.set_memory_request()` and `.set_memory_limit()` doesn't address the actual issue I am facing. These methods set the general memory allocation for the container, not the *shared memory*, which is the limiting factor here. PyTorch's DataLoader uses shared memory when `num_workers > 0`, and it's this shared memory that gets exhausted.\r\n\r\nThe example above still fails when using the methods to increase memory requests and limits.\r\n\r\nCould this issue be reopened since the problem still persists even when applying the suggested memory settings?\r\n\r\nThanks a lot in advance!\r\n\r\nCheers,\r\nHenrik\r\n"
] | 2023-08-17T15:01:45 | 2023-09-06T11:48:12 | 2023-08-24T22:54:22 | CONTRIBUTOR | null | ### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Manifests
* KFP version:
2.0.0
* KFP SDK version:
2.0.1
### Steps to reproduce
Create and run a simple PyTorch (lightning) training pipeline:
```python
from kfp import dsl
from kfp.client import Client


# define pytorch (lightning) training component
@dsl.component(
    packages_to_install=["torch==2.0.0", "pytorch_lightning"],
    base_image="python:3.9",
)
def pytorch_training(num_workers: int, input_size: int, batch_size: int):
    import torch
    from torch import nn
    import torch.nn.functional as F
    from torch.utils.data import DataLoader, Dataset
    import pytorch_lightning as pl

    class ExampleDataset(Dataset):
        def __init__(self, size: int = 1000, number_samples: int = 100):
            self.x = torch.randn(number_samples, size, size)
            self.size = 1000

        def __len__(self):
            return self.x.shape[0]

        def __getitem__(self, idx):
            return self.x[idx, :, :]

    class ExampleModel(pl.LightningModule):
        def __init__(self, inputsize: int):
            super(ExampleModel, self).__init__()
            self.linear = nn.Linear(inputsize, inputsize)

        def forward(self, x):
            return self.linear(x)

        def training_step(self, batch, batch_idx):
            x = batch
            loss = F.mse_loss(self(x), x)
            return loss

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)

    dataset = ExampleDataset(size=input_size, number_samples=100)
    train_loader = DataLoader(dataset, num_workers=num_workers, batch_size=batch_size)
    model = ExampleModel(inputsize=input_size)
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(model, train_loader)


# define pipeline
@dsl.pipeline
def shared_memory_pipeline(
    num_workers: int = 0, input_size: int = 1000, batch_size: int = 100
):
    pytorch_training_task = pytorch_training(
        num_workers=num_workers, input_size=input_size, batch_size=batch_size
    )


# compile and run pipeline
client = Client()
client.create_run_from_pipeline_func(
    shared_memory_pipeline,
    arguments=dict(num_workers=5, input_size=1_000, batch_size=100),
    experiment_name="shared-memory",
    enable_caching=True,
)
```
### Expected result
Pipelne gets compiled and executed without errors.
### Materials and reference
* This is the error I get:
```
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/usr/local/lib/python3.9/multiprocessing/queues.py", line 113, in get
    if not self._poll(timeout):
  File "/usr/local/lib/python3.9/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
  File "/usr/local/lib/python3.9/multiprocessing/connection.py", line 424, in _poll
    r = wait([self], timeout)
  File "/usr/local/lib/python3.9/multiprocessing/connection.py", line 931, in wait
    ready = selector.select(timeout)
  File "/usr/local/lib/python3.9/selectors.py", line 416, in select
    fd_event_list = self._selector.poll(timeout)
  File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 503) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
```
* There is a workaround in skd v1: https://github.com/kubeflow/pipelines/issues/6880
* The default shared memory size for containerd and Docker is 64MB. Thus, the pipeline above works fine if a batch is small enough (e.g. for batch_size=1). It will also run just fine if the number of workers for the DataLoader is set to 0, so that no shared memory is required.
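For reference, the Kubernetes-level fix that the v1 workaround in the linked issue achieves is mounting a Memory-backed `emptyDir` over `/dev/shm`, which lifts the 64MB container default. Sketched below as plain pod-spec dicts for illustration, since the v2 SDK offers no direct equivalent here; the volume name and `sizeLimit` are arbitrary choices:

```python
# Pod-spec fragments for the usual shared-memory fix: a Memory-medium emptyDir
# mounted at /dev/shm replaces the 64MB default shm of the container runtime.
# Illustrative only; "dshm" and the 1Gi size limit are arbitrary.
shm_volume = {
    "name": "dshm",
    "emptyDir": {"medium": "Memory", "sizeLimit": "1Gi"},
}
shm_mount = {"name": "dshm", "mountPath": "/dev/shm"}
```

In KFP SDK v1 these fragments could be attached to a `ContainerOp` via its volume APIs (as described in the linked issue #6880); the ask here is an equivalent hook in the v2 SDK.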
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9893/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9893/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9891 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9891/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9891/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9891/events | https://github.com/kubeflow/pipelines/issues/9891 | 1,855,078,221 | I_kwDOB-71UM5ukj9N | 9,891 | [backend] Persistent agent SA token Refresher does wrong time conversion | {
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | {
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"/assign @difince ",
"Fixed by https://github.com/kubeflow/pipelines/pull/9892"
] | 2023-08-17T14:04:00 | 2023-08-17T17:09:24 | 2023-08-17T17:09:24 | MEMBER | null | ### Environment
The issue is introduced by [PR](https://github.com/kubeflow/pipelines/pull/9699)
1. `DefaultTokenRefresherInterval` of 1 hour is initialized in nanoseconds (3600000000000); see here:
https://github.com/kubeflow/pipelines/blob/cb18d00bbbaed9cd77fc50dce739ed62c72b2356/backend/src/agent/persistence/main.go#L68
2. But the following line of code converts it to plain seconds (3600) (done by `DefaultTokenRefresherInterval.Seconds()`):
https://github.com/kubeflow/pipelines/blob/cb18d00bbbaed9cd77fc50dce739ed62c72b2356/backend/src/agent/persistence/main.go#L154
3. The problem comes when we create the TokenRefresher, which uses `time.Duration()`.
`time.Duration` expects nanoseconds -> because of **2.**, the passed interval is a much smaller number, which makes the Ticker fire much more often than it should.
https://github.com/kubeflow/pipelines/blob/cb18d00bbbaed9cd77fc50dce739ed62c72b2356/backend/src/agent/persistence/main.go#L103
* How do you deploy Kubeflow Pipelines (KFP)?
locally
* KFP version: latest https://github.com/kubeflow/pipelines/commit/cb18d00bbbaed9cd77fc50dce739ed62c72b2356
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
### Expected result
The ticker should execute every hour (the default).
### Materials and reference
**Side note:** I would like the input parameter to be in seconds, keeping it consistent with the persistence agent deployment YAML file, where this value is provided in seconds.
https://github.com/kubeflow/pipelines/blob/cb18d00bbbaed9cd77fc50dce739ed62c72b2356/manifests/kustomize/base/pipeline/ml-pipeline-persistenceagent-deployment.yaml#L49
Future enhancement could be to keep these two values in sync by an env variable.
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
/area backend
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9891/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9890 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9890/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9890/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9890/events | https://github.com/kubeflow/pipelines/issues/9890 | 1,854,669,403 | I_kwDOB-71UM5ujAJb | 9,890 | [backend] severe performance problem in listruns API | {
"login": "juliusvonkohout",
"id": 45896133,
"node_id": "MDQ6VXNlcjQ1ODk2MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/45896133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juliusvonkohout",
"html_url": "https://github.com/juliusvonkohout",
"followers_url": "https://api.github.com/users/juliusvonkohout/followers",
"following_url": "https://api.github.com/users/juliusvonkohout/following{/other_user}",
"gists_url": "https://api.github.com/users/juliusvonkohout/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juliusvonkohout/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliusvonkohout/subscriptions",
"organizations_url": "https://api.github.com/users/juliusvonkohout/orgs",
"repos_url": "https://api.github.com/users/juliusvonkohout/repos",
"events_url": "https://api.github.com/users/juliusvonkohout/events{/privacy}",
"received_events_url": "https://api.github.com/users/juliusvonkohout/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | open | false | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-08-17T09:54:35 | 2023-08-17T22:45:02 | null | MEMBER | null | This is a followup of https://github.com/kubeflow/pipelines/issues/6845 from @difince @zijianjoy
The ListRuns query is also very slow (taking seconds for large `run_details` tables).
It must likewise be migrated to joins or some other more performant approach.
Even this small database leads to 15 seconds per ListRuns API call.
Performance regressed from 5 to 15 seconds going from Kubeflow 1.5 to 1.7.
```
mysql> SELECT
-> namespace,
-> COUNT(*) AS entry_count,
-> SUM(LENGTH(CAST(namespace AS CHAR))) AS column_size
-> FROM
-> run_details
-> GROUP BY
-> namespace
-> ORDER BY
-> column_size DESC;
+------------------------------------------------+-------------+-------------+
| namespace | entry_count | column_size |
+------------------------------------------------+-------------+-------------+
| namespace-xxx | 93931 | 1033241 |
| namespace-yyy | 22645 | 362320 |
| namespace-zzz | 16700 | 317300 |
| namespace-www | 9894 | 187986
```
### Steps to reproduce
Just let users run a lot of pipelines per year (e.g. 200,000) and see how slow the ListRuns query becomes.
### Expected result
ListRuns should complete in less than one second for 200,000 runs, which is a very small database task.
### Materials and Reference
![Capture](https://github.com/kubeflow/pipelines/assets/45896133/380f316b-e616-4aef-9716-21e5125b80ee)
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9890/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9890/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9889 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9889/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9889/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9889/events | https://github.com/kubeflow/pipelines/issues/9889 | 1,854,615,495 | I_kwDOB-71UM5uiy_H | 9,889 | [backend] Security exploit in mlpipeline-UI | {
"login": "juliusvonkohout",
"id": 45896133,
"node_id": "MDQ6VXNlcjQ1ODk2MTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/45896133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juliusvonkohout",
"html_url": "https://github.com/juliusvonkohout",
"followers_url": "https://api.github.com/users/juliusvonkohout/followers",
"following_url": "https://api.github.com/users/juliusvonkohout/following{/other_user}",
"gists_url": "https://api.github.com/users/juliusvonkohout/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juliusvonkohout/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliusvonkohout/subscriptions",
"organizations_url": "https://api.github.com/users/juliusvonkohout/orgs",
"repos_url": "https://api.github.com/users/juliusvonkohout/repos",
"events_url": "https://api.github.com/users/juliusvonkohout/events{/privacy}",
"received_events_url": "https://api.github.com/users/juliusvonkohout/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-08-17T09:28:10 | 2023-08-17T09:28:10 | null | MEMBER | null | ### Environment
* How did you deploy Kubeflow Pipelines (KFP)? Kubeflow 1.7/1.8
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version: 2.0.0alpha7, but its also in the master branch
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
* KFP SDK version: irrelevant, since it is a mlpipeline-ui issue
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
### Steps to reproduce
The image below shows Bob's S3 artifacts produced by a pipeline run (e.g. a TensorFlow model).
Sadly, the KFP UI allows Alice to read the content of Bob's artifacts.
But how?
If Alice spies Bob's S3 artifact location, as seen at the bottom of the image,
she can simply remove the yellow namespace parameter, and the UI server will skip permission checks.
So you still need to know the S3/GCS path, which you can, for example, obtain from ML Metadata (insecure by default in KFP, unless disabled).
![grafik](https://github.com/kubeflow/pipelines/assets/45896133/f9baad53-cb97-481b-bf2b-9bc62b86339b)
### Expected result
The namespace parameter of the ReadArtifact API call should not be user-configurable.
That has already been addressed for the API server in another PR from me and @difince: https://github.com/difince/pipelines/blob/d0424bf86de5217d41dd03b50f91ac5ec3489df3/backend/src/apiserver/server/run_server.go#L313
But with mlpipeline-ui you can circumvent the permission check in the API server, as seen in the screenshot.
You can just remove the namespace parameter, and the artifact-proxy mode is disabled.
So mlpipeline-ui must set the namespace parameter based on the namespace dropdown selection for an authenticated user, and not expose the namespace parameter to the user in the URL.
https://github.com/kubeflow/pipelines/issues/8074 is also important for securing the API server in the future, and
in the long term I would really like to get rid of the artifact proxy altogether, as stated in https://github.com/kubeflow/pipelines/issues/8406#issuecomment-1643579605 and https://github.com/kubeflow/pipelines/issues/4790#issuecomment-1643574994, but that is for another day.
### Materials and Reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
This has been discussed in the last KFP meeting (16th August 2023) and acknowledged by @zijianjoy on a technical level.
From the Kubecon presentation https://static.sched.com/hosted_files/kccnceu2023/f8/Hardening%20Kubeflow%20Security%20for%20Enterprise%20Environments%20-%20Julius%20von%20Kohout%2C%20DPDHL%20%26%20Diana%20Dimitrova%20Atanasova%2C%20VMware.pdf
![grafik](https://github.com/kubeflow/pipelines/assets/45896133/6af8e05c-20e3-41bf-ba7c-ade5dd4f5ea9)
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9889/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9889/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9883 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9883/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9883/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9883/events | https://github.com/kubeflow/pipelines/issues/9883 | 1,853,599,683 | I_kwDOB-71UM5ue6_D | 9,883 | 8/16/23 Presubmit e2e test failure | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 930619516,
"node_id": "MDU6TGFiZWw5MzA2MTk1MTY=",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/frontend",
"name": "area/frontend",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | closed | false | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
},
{
"login": "jlyaoyuli",
"id": 56132941,
"node_id": "MDQ6VXNlcjU2MTMyOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlyaoyuli",
"html_url": "https://github.com/jlyaoyuli",
"followers_url": "https://api.github.com/users/jlyaoyuli/followers",
"following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}",
"gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions",
"organizations_url": "https://api.github.com/users/jlyaoyuli/orgs",
"repos_url": "https://api.github.com/users/jlyaoyuli/repos",
"events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlyaoyuli/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The frontend test failure seems to be flaky. \r\n\r\nIn another run, it failed on basic sample tests:\r\n```\r\nintegration-test-q5mqw-2112930252: Traceback (most recent call last):\r\nintegration-test-q5mqw-2112930252: File \"exit_handler.py\", line 26, in <module>\r\nintegration-test-q5mqw-2112930252: @component(kfp_package_path=_KFP_PACKAGE_PATH)\r\nintegration-test-q5mqw-2112930252: TypeError: component() got an unexpected keyword argument 'kfp_package_path'\r\nintegration-test-q5mqw-2392976712: /usr/local/lib/python3.7/dist-packages/kfp/dsl/_container_op.py:1268: FutureWarning: Please create reusable components instead of constructing ContainerOp instances directly. Reusable components are shareable, portable and have compatibility and support guarantees. Please see the documentation: https://www.kubeflow.org/docs/pipelines/sdk/component-development/#writing-your-component-definition-file The components can be created manually (or, in case of python, using kfp.components.create_component_from_func or func_to_container_op) and then loaded using kfp.components.load_component_from_file, load_component_from_uri or load_component_from_text: https://kubeflow-pipelines.readthedocs.io/en/stable/source/kfp.components.html#kfp.components.load_component_from_file\r\nintegration-test-q5mqw-2392976712: category=FutureWarning,\r\n```\r\nhttps://oss.gprow.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/9699/kubeflow-pipeline-e2e-test/1691812763929677824"
] | 2023-08-16T16:54:10 | 2023-08-17T09:33:39 | 2023-08-17T09:33:39 | COLLABORATOR | null | ERROR: type should be string, got "\r\nhttps://oss.gprow.dev/view/gs/oss-prow/pr-logs/pull/kubeflow_pipelines/9873/kubeflow-pipeline-e2e-test/1691525546791407616\r\n```\r\nintegration-test-g8t6t-369894966: [0-0] AssertionError in \"deploy helloworld sample run.shows a 4-node static graph\"\r\nintegration-test-g8t6t-369894966: AssertionError [ERR_ASSERTION]: should have a 4-node graph, instead has: 0\r\nintegration-test-g8t6t-369894966: at Context.<anonymous> (/src/helloworld.spec.js:70:5)\r\nintegration-test-g8t6t-369894966: at processTicksAndRejections (node:internal/process/task_queues:96:5)\r\nintegration-test-g8t6t-369894966: 19:49:32.603 WARN [SeleniumSpanExporter$1.lambda$export$3] - {\"traceId\": \"2ae031a384cadb8eb7f70e9a187d622b\",\"eventTime\": 1692128972602956682,\"eventName\": \"HTTP request execution complete\",\"attributes\": {\"http.flavor\": 1,\"http.handler_class\": \"org.openqa.selenium.remote.http.Route$PredicatedRoute\",\"http.host\": \"127.0.0.1:4444\",\"http.method\": \"POST\",\"http.request_content_length\": \"8346\",\"http.scheme\": \"HTTP\",\"http.status_code\": 404,\"http.target\": \"\\u002fsession\\u002fd0c75f008d8950640953b1914f2f233a\\u002fexecute\\u002fsync\",\"http.user_agent\": \"webdriver\\u002f8.3.2\"}}\r\nintegration-test-g8t6t-369894966: \r\nintegration-test-g8t6t-369894966: 19:49:33.711 WARN [SeleniumSpanExporter$1.lambda$export$3] - {\"traceId\": \"72fe6381777527734f56dd38817f7a8e\",\"eventTime\": 1692128973710578070,\"eventName\": \"HTTP request execution complete\",\"attributes\": {\"http.flavor\": 1,\"http.handler_class\": \"org.openqa.selenium.remote.http.Route$PredicatedRoute\",\"http.host\": \"127.0.0.1:4444\",\"http.method\": \"POST\",\"http.request_content_length\": \"8346\",\"http.scheme\": \"HTTP\",\"http.status_code\": 404,\"http.target\": 
\"\\u002fsession\\u002fd0c75f008d8950640953b1914f2f233a\\u002fexecute\\u002fsync\",\"http.user_agent\": \"webdriver\\u002f8.3.2\"}}\r\nintegration-test-g8t6t-369894966: \r\nintegration-test-g8t6t-369894966: Handling connection for 3000\r\nintegration-test-g8t6t-369894966: Handling connection for 3000\r\nintegration-test-g8t6t-369894966: 19:50:20.670 INFO [LocalSessionMap.lambda$new$0] - Deleted session from local Session Map, Id: d0c75f008d8950640953b1914f2f233a\r\nintegration-test-g8t6t-369894966: 19:50:20.672 INFO [GridModel.release] - Releasing slot for session id d0c75f008d8950640953b1914f2f233a\r\nintegration-test-g8t6t-369894966: 19:50:20.675 INFO [SessionSlot.stop] - Stopping session d0c75f008d8950640953b1914f2f233a\r\nintegration-test-g8t6t-369894966: [0-0] FAILED in chrome - file:///helloworld.spec.js\r\n```" | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9883/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9882 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9882/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9882/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9882/events | https://github.com/kubeflow/pipelines/issues/9882 | 1,853,577,056 | I_kwDOB-71UM5ue1dg | 9,882 | [feature] Allow DataflowPythonJobOp to launch dataflow with another python and beam sdk | {
"login": "win845",
"id": 86051433,
"node_id": "MDQ6VXNlcjg2MDUxNDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/86051433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/win845",
"html_url": "https://github.com/win845",
"followers_url": "https://api.github.com/users/win845/followers",
"following_url": "https://api.github.com/users/win845/following{/other_user}",
"gists_url": "https://api.github.com/users/win845/gists{/gist_id}",
"starred_url": "https://api.github.com/users/win845/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/win845/subscriptions",
"organizations_url": "https://api.github.com/users/win845/orgs",
"repos_url": "https://api.github.com/users/win845/repos",
"events_url": "https://api.github.com/users/win845/events{/privacy}",
"received_events_url": "https://api.github.com/users/win845/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1126834402,
"node_id": "MDU6TGFiZWwxMTI2ODM0NDAy",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/components",
"name": "area/components",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] | open | false | {
"login": "connor-mccarthy",
"id": 55268212,
"node_id": "MDQ6VXNlcjU1MjY4MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-mccarthy",
"html_url": "https://github.com/connor-mccarthy",
"followers_url": "https://api.github.com/users/connor-mccarthy/followers",
"following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions",
"organizations_url": "https://api.github.com/users/connor-mccarthy/orgs",
"repos_url": "https://api.github.com/users/connor-mccarthy/repos",
"events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-mccarthy/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "connor-mccarthy",
"id": 55268212,
"node_id": "MDQ6VXNlcjU1MjY4MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-mccarthy",
"html_url": "https://github.com/connor-mccarthy",
"followers_url": "https://api.github.com/users/connor-mccarthy/followers",
"following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions",
"organizations_url": "https://api.github.com/users/connor-mccarthy/orgs",
"repos_url": "https://api.github.com/users/connor-mccarthy/repos",
"events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-mccarthy/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-08-16T16:36:29 | 2023-08-17T22:41:03 | null | NONE | null | ### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
<!-- /area frontend -->
<!-- /area backend -->
/area sdk
<!-- /area samples -->
/area components
### What feature would you like to see?
Allow Dataflow to be launched with a pinned Python and Beam SDK version.
The Apache Beam SDK was locked within `google_cloud_pipeline_components.container.v1.dataflow.dataflow_launcher`
to `apache_beam[gcp]<2.34.0` via [this commit](https://github.com/kubeflow/pipelines/commit/63724207bceb189a7e1a78a9b8b374fefc14135e#diff-0feaf0e5009fa36aa80e31cd4e896cced929633a334e425add6e75663ef27983R28).
Beam is apparently launched with whatever Python and Beam SDK versions are currently in this container.
Can we launch Dataflow jobs with a fixed Python and Beam SDK?
At the very least, upgrade to a more recent SDK.
### What is the use case or pain point?
<!-- It helps us understand the benefit of this feature for your use case. -->
The locked SDK within `DataflowPythonJobOp` is quite outdated and missing features. It starts Dataflow
with a fixed, old Apache Beam runtime.
### Is there a workaround currently?
- Create a completely custom pipeline component with its own Docker image which pins the exact Python and apache_beam
versions.
or
- use the FlexTemplate component, which allows specifying custom Dockerfiles
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9882/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9882/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9980 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9980/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9980/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9980/events | https://github.com/kubeflow/pipelines/issues/9980 | 1,894,144,464 | I_kwDOB-71UM5w5lnQ | 9,980 | [feature] Enable Setting Image Pull Policy in V2 SDK | {
"login": "PhilippeMoussalli",
"id": 47530815,
"node_id": "MDQ6VXNlcjQ3NTMwODE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47530815?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilippeMoussalli",
"html_url": "https://github.com/PhilippeMoussalli",
"followers_url": "https://api.github.com/users/PhilippeMoussalli/followers",
"following_url": "https://api.github.com/users/PhilippeMoussalli/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilippeMoussalli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilippeMoussalli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilippeMoussalli/subscriptions",
"organizations_url": "https://api.github.com/users/PhilippeMoussalli/orgs",
"repos_url": "https://api.github.com/users/PhilippeMoussalli/repos",
"events_url": "https://api.github.com/users/PhilippeMoussalli/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilippeMoussalli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
},
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-09-13T09:40:05 | 2023-09-13T09:40:08 | null | NONE | null | ### Feature Area
/area sdk
### What feature would you like to see?
KFP V1 supported setting the image pull policy [link](https://github.com/kubeflow/pipelines/blob/4bee3d8dc2ee9c33d87e1058bac2a94d899dd4a5/sdk/python/kfp/deprecated/dsl/_container_op.py#L518)
This feature is currently not available in KFP V2
<!-- Provide a description of this feature and the user experience. -->
### What is the use case or pain point?
We used to set image pull policy to always in V1 to avoid cases when outdated images are pulled.
<!-- It helps us understand the benefit of this feature for your use case. -->
### Is there a workaround currently?
No current workaround.
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9980/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9976 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9976/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9976/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9976/events | https://github.com/kubeflow/pipelines/issues/9976 | 1,892,166,555 | I_kwDOB-71UM5wyCub | 9,976 | [bug] (Big) Integer overflow in python components | {
"login": "vigram93",
"id": 88139433,
"node_id": "MDQ6VXNlcjg4MTM5NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/88139433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vigram93",
"html_url": "https://github.com/vigram93",
"followers_url": "https://api.github.com/users/vigram93/followers",
"following_url": "https://api.github.com/users/vigram93/following{/other_user}",
"gists_url": "https://api.github.com/users/vigram93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vigram93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigram93/subscriptions",
"organizations_url": "https://api.github.com/users/vigram93/orgs",
"repos_url": "https://api.github.com/users/vigram93/repos",
"events_url": "https://api.github.com/users/vigram93/events{/privacy}",
"received_events_url": "https://api.github.com/users/vigram93/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-09-12T10:15:29 | 2023-09-12T10:15:29 | null | NONE | null | /kind bug
### Environment
* How do you deploy Kubeflow Pipelines (KFP)? </br>
Using kubeflow sdk
* KFP version: </br>
1.7
* KFP SDK version: </br>
1.8.17
* Python base image used: </br>
Image tag: python:3.8-slim </br>
Image sha: sha256:7672e07ca8cd61d8520be407d6a83d45e6d37faf26bb68b91a2fa8ab89a7798f </br>
Image ID: 1b0f3f18921c
### Steps to reproduce
#### Issue:
For certain values, when an integer parameter is printed using a Python function component, KFP mutates the input and prints a different output.
Eg:
#### Scenario1:
Input parameter value: 20230807093130621
</br>
Printed value: 20230807093130620
#### Scenario2:
Input parameter value: 20230807093130621
</br>
Added number: 10
</br>
Printed value: 20230807093130630
### Expected result
#### Scenario1:
Input parameter value: 20230807093130621
</br>
Expected value: 20230807093130621
#### Scenario2:
Input parameter value: 20230807093130621
</br>
Added number: 10
</br>
Expected value: 20230807093130631
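The reported numbers are consistent with the value being round-tripped through an IEEE-754 double somewhere upstream (for example by JSON/float64-based tooling in the stack; this is an assumption, not confirmed in the KFP source). A minimal sketch of that hypothesis:

```python
# Hypothesis sketch (assumption, not confirmed): integers above 2**53 cannot
# all be represented exactly as IEEE-754 doubles, and the reported values
# match that rounding exactly.
x = 20230807093130621
print(x > 2**53)           # True: outside the exact-integer range of a double
print(int(float(x)))       # 20230807093130620 (matches Scenario 1)
print(int(float(x)) + 10)  # 20230807093130630 (matches Scenario 2)
```

Note that Scenario 2's result is explained by the input already being rounded to ...620 before the Python component adds 10 exactly.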
### Pipeline yaml definition
```
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: integer-overflow-test-pipeline-
annotations:
pipelines.kubeflow.org/kfp_sdk_version: 1.8.17
pipelines.kubeflow.org/pipeline_compilation_time: '2023-09-11T20:57:57.297958'
pipelines.kubeflow.org/pipeline_spec: >-
{"inputs": [{"default": "20230807093130621", "name": "x", "optional":
true, "type": "Integer"}, {"default": "10", "name": "num_to_add",
"optional": true, "type": "Integer"}], "name":
"integer_overflow_test_pipeline"}
labels:
pipelines.kubeflow.org/kfp_sdk_version: 1.8.17
spec:
entrypoint: integer-overflow-test-pipeline
templates:
- name: add-num-to-integer
container:
args:
- '--x'
- '{{inputs.parameters.x}}'
- '--num'
- '{{inputs.parameters.num_to_add}}'
command:
- sh
- '-ec'
- |
program_path=$(mktemp)
printf "%s" "$0" > "$program_path"
python3 -u "$program_path" "$@"
- >
def add_num_to_integer(x, num = 10):
added_num = x + num
print(added_num)
import argparse
_parser = argparse.ArgumentParser(prog='Add num to integer',
description='')
_parser.add_argument("--x", dest="x", type=int, required=True,
default=argparse.SUPPRESS)
_parser.add_argument("--num", dest="num", type=int, required=False,
default=argparse.SUPPRESS)
_parsed_args = vars(_parser.parse_args())
_outputs = add_num_to_integer(**_parsed_args)
image: 'python:3.8-slim'
inputs:
parameters:
- name: num_to_add
- name: x
metadata:
labels:
pipelines.kubeflow.org/kfp_sdk_version: 1.8.17
pipelines.kubeflow.org/pipeline-sdk-type: kfp
pipelines.kubeflow.org/enable_caching: 'true'
annotations:
pipelines.kubeflow.org/component_spec: >-
{"implementation": {"container": {"args": ["--x", {"inputValue":
"x"}, {"if": {"cond": {"isPresent": "num"}, "then": ["--num",
{"inputValue": "num"}]}}], "command": ["sh", "-ec",
"program_path=$(mktemp)\nprintf \"%s\" \"$0\" >
\"$program_path\"\npython3 -u \"$program_path\" \"$@\"\n", "def
add_num_to_integer(x, num = 10):\n added_num = x + num\n
print(added_num)\n\nimport argparse\n_parser =
argparse.ArgumentParser(prog='Add num to integer',
description='')\n_parser.add_argument(\"--x\", dest=\"x\", type=int,
required=True,
default=argparse.SUPPRESS)\n_parser.add_argument(\"--num\",
dest=\"num\", type=int, required=False,
default=argparse.SUPPRESS)\n_parsed_args =
vars(_parser.parse_args())\n\n_outputs =
add_num_to_integer(**_parsed_args)\n"], "image":
"python:3.8-slim"}}, "inputs":
[{"name": "x", "type": "Integer"}, {"default": "10", "name": "num",
"optional": true, "type": "Integer"}], "name": "Add num to integer"}
pipelines.kubeflow.org/component_ref: '{}'
pipelines.kubeflow.org/arguments.parameters: >-
{"num": "{{inputs.parameters.num_to_add}}", "x":
"{{inputs.parameters.x}}"}
pipelines.kubeflow.org/max_cache_staleness: P0D
- name: integer-overflow-test-pipeline
inputs:
parameters:
- name: num_to_add
- name: x
dag:
tasks:
- name: add-num-to-integer
template: add-num-to-integer
arguments:
parameters:
- name: num_to_add
value: '{{inputs.parameters.num_to_add}}'
- name: x
value: '{{inputs.parameters.x}}'
- name: print-integer
template: print-integer
arguments:
parameters:
- name: x
value: '{{inputs.parameters.x}}'
- name: print-integer
container:
args:
- '--x'
- '{{inputs.parameters.x}}'
command:
- sh
- '-ec'
- |
program_path=$(mktemp)
printf "%s" "$0" > "$program_path"
python3 -u "$program_path" "$@"
- >
def print_integer(x):
print(x)
import argparse
_parser = argparse.ArgumentParser(prog='Print integer',
description='')
_parser.add_argument("--x", dest="x", type=int, required=True,
default=argparse.SUPPRESS)
_parsed_args = vars(_parser.parse_args())
_outputs = print_integer(**_parsed_args)
image: 'python:3.8-slim'
inputs:
parameters:
- name: x
metadata:
labels:
pipelines.kubeflow.org/kfp_sdk_version: 1.8.17
pipelines.kubeflow.org/pipeline-sdk-type: kfp
pipelines.kubeflow.org/enable_caching: 'true'
annotations:
pipelines.kubeflow.org/component_spec: >-
{"implementation": {"container": {"args": ["--x", {"inputValue":
"x"}], "command": ["sh", "-ec", "program_path=$(mktemp)\nprintf
\"%s\" \"$0\" > \"$program_path\"\npython3 -u \"$program_path\"
\"$@\"\n", "def print_integer(x):\n print(x)\n\nimport
argparse\n_parser = argparse.ArgumentParser(prog='Print integer',
description='')\n_parser.add_argument(\"--x\", dest=\"x\", type=int,
required=True, default=argparse.SUPPRESS)\n_parsed_args =
vars(_parser.parse_args())\n\n_outputs =
print_integer(**_parsed_args)\n"], "image":
"python:3.8-slim"}}, "inputs":
[{"name": "x", "type": "Integer"}], "name": "Print integer"}
pipelines.kubeflow.org/component_ref: '{}'
pipelines.kubeflow.org/arguments.parameters: '{"x": "{{inputs.parameters.x}}"}'
pipelines.kubeflow.org/max_cache_staleness: P0D
arguments:
parameters:
- name: x
value: '20230807093130621'
- name: num_to_add
value: '10'
serviceAccountName: <Intentionally redacted>
```
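For reference, Python's `argparse` with `type=int` preserves the full value (Python ints are arbitrary-precision), which suggests the truncation happens before the argument reaches the container rather than inside the generated component code. A minimal check, mirroring the parser shape in the YAML above:

```python
import argparse

# Same parser shape as the generated "Print integer" component code above.
parser = argparse.ArgumentParser(prog="Print integer")
parser.add_argument("--x", dest="x", type=int, required=True)
args = parser.parse_args(["--x", "20230807093130621"])
print(args.x)  # 20230807093130621: no precision loss inside Python itself
```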
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9976/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9976/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9975 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9975/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9975/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9975/events | https://github.com/kubeflow/pipelines/issues/9975 | 1,892,107,655 | I_kwDOB-71UM5wx0WH | 9,975 | [backend] Pipeline run's input artifacts in S3 are accessible but output artifacts are not accessible (403 Error) | {
"login": "guntiseiduks",
"id": 97613741,
"node_id": "U_kgDOBdF3rQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97613741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guntiseiduks",
"html_url": "https://github.com/guntiseiduks",
"followers_url": "https://api.github.com/users/guntiseiduks/followers",
"following_url": "https://api.github.com/users/guntiseiduks/following{/other_user}",
"gists_url": "https://api.github.com/users/guntiseiduks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guntiseiduks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guntiseiduks/subscriptions",
"organizations_url": "https://api.github.com/users/guntiseiduks/orgs",
"repos_url": "https://api.github.com/users/guntiseiduks/repos",
"events_url": "https://api.github.com/users/guntiseiduks/events{/privacy}",
"received_events_url": "https://api.github.com/users/guntiseiduks/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1118896905,
"node_id": "MDU6TGFiZWwxMTE4ODk2OTA1",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/backend",
"name": "area/backend",
"color": "d2b48c",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-09-12T09:43:48 | 2023-09-12T13:01:04 | null | NONE | null | ### Environment
* How did you deploy Kubeflow Pipelines (KFP)?
Kubeflow Pipelines as part of a full Kubeflow deployment provides all Kubeflow components and more integration with each platform
Kubeflow is deployed on top of AWS EKS cluster.
* KFP version: v2
* KFP SDK version: Issue identified in UI, possible source in backend.
### Steps to reproduce
1. In the Kubeflow UI Pipeline Runs section (using the "[Tutorial] Data passing in python components" pipeline run), click on any of the pipeline's completed steps.
2. Click on the input artifact's s3 url (s3://kf-artifacts-store-..../) to open the input artifact; that works fine, i.e. one can see the contents of the artifact.
3. BUT when clicking on the output artifact's "main-logs" s3 url, one gets an HTTP 403 Forbidden error.
### Expected result
Both input and output artifact contents are visible in preview and when clicking on S3 url.
### Materials and Reference
Additional observations:
- Issue so far is observed for s3 objects with extension `.log`.
- s3 objects with extension `.tgz` are opened without a problem.
---
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9975/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9974 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9974/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9974/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9974/events | https://github.com/kubeflow/pipelines/issues/9974 | 1,891,920,375 | I_kwDOB-71UM5wxGn3 | 9,974 | [sdk] packages_to_install not working correctly after upgrading to V2 | {
"login": "Pringled",
"id": 12988240,
"node_id": "MDQ6VXNlcjEyOTg4MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/12988240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pringled",
"html_url": "https://github.com/Pringled",
"followers_url": "https://api.github.com/users/Pringled/followers",
"following_url": "https://api.github.com/users/Pringled/following{/other_user}",
"gists_url": "https://api.github.com/users/Pringled/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pringled/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pringled/subscriptions",
"organizations_url": "https://api.github.com/users/Pringled/orgs",
"repos_url": "https://api.github.com/users/Pringled/repos",
"events_url": "https://api.github.com/users/Pringled/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pringled/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
},
{
"id": 1136110037,
"node_id": "MDU6TGFiZWwxMTM2MTEwMDM3",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/area/sdk",
"name": "area/sdk",
"color": "d2b48c",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"I've been impacted by this change as well and I think it's a behavior change in the latest version of KFP 2.1.3 only. I was able to get my pipeline running by using KFP 2.0.1.\r\n\r\nI think the \"issue\" might be coming from the following commit https://github.com/kubeflow/pipelines/pull/9886/commits/80881c12ee2a23688cd7bd5e6eeb330228949cf8 that added `--no-deps` to the pip install command.",
"@Lothiraldan Thank you, that was indeed the problem! Everything works as expected when I downgrade to KFP 2.0.1. I do wonder why this change was implemented since it makes it seemingly impossible to properly install packages via 'packages_to_install'. I'll keep this open for now since I'm curious if it's intended behavior or a bug.",
"Having the same issue, feels like a crazy \"feature\" to require manually listing every single transitive dependency before your component can run 😅\r\n\r\nI think the intention behind the PR was to remove the compile time dependencies of `kfp` for the runtime environment, which makes sense. However, because `kfp=={version} --no-deps` is appended to the `packages_to_install` list, all user-specified dependencies get the same `--no-deps` treatment... definitely feels like a regression!"
] | 2023-09-12T08:07:41 | 2023-09-13T11:09:14 | null | NONE | null | ### Environment
#### Relevant package versions
kfp==2.1.3
google-cloud-pipeline-components==2.3.1
google-cloud-aiplatform==1.32.0
kfp-pipeline-spec== 0.2.2
kfp-server-api == 2.0.1
### Steps to reproduce
After upgrading to KFP V2, initializing a component as follows doesn't work:
```python
from kfp.dsl import component
@component(
base_image="python:3.10",
packages_to_install=["google-cloud-aiplatform"],
)
def my_function():
from google.cloud import aiplatform
```
This gives the error `ModuleNotFoundError: No module named 'google.api_core'`, while in KFP V1 (< 2.0), this worked without any issues. It seems to be an issue with 'packages_to_install'. Adding 'google_api_core' to the 'packages_to_install' then throws the error `ModuleNotFoundError: No module named 'grpc'`, adding that gives `RuntimeError: Please install the official package with: pip install grpcio`, adding that gives `ModuleNotFoundError: No module named 'google.rpc'`, and finally, adding that gives `ERROR: No matching distribution found for google.rpc`. It seems that the 'packages_to_install' do not install their dependencies correctly, or there is an issue with 'google-cloud-aiplatform' and KFP V2.
### Expected result
The 'packages_to_install' are installed correctly and can be imported within the component. With the following versions, and the exact same code, this behavior does work:
kfp==1.8.14
google-cloud-pipeline-components==1.0.26
google-cloud-aiplatform==1.18.3
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9974/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9970 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9970/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9970/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9970/events | https://github.com/kubeflow/pipelines/issues/9970 | 1,889,838,578 | I_kwDOB-71UM5wpKXy | 9,970 | Usage of Kubeflow Pipelines v2 | {
"login": "petrpechman",
"id": 41995595,
"node_id": "MDQ6VXNlcjQxOTk1NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/41995595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petrpechman",
"html_url": "https://github.com/petrpechman",
"followers_url": "https://api.github.com/users/petrpechman/followers",
"following_url": "https://api.github.com/users/petrpechman/following{/other_user}",
"gists_url": "https://api.github.com/users/petrpechman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petrpechman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petrpechman/subscriptions",
"organizations_url": "https://api.github.com/users/petrpechman/orgs",
"repos_url": "https://api.github.com/users/petrpechman/repos",
"events_url": "https://api.github.com/users/petrpechman/events{/privacy}",
"received_events_url": "https://api.github.com/users/petrpechman/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"> set gpu resources (.set_gpu_limit())\r\n\r\n`set_gpu_limit()` has been renamed to [`set_accelerator_limit`](https://kubeflow-pipelines.readthedocs.io/en/sdk-2.0.1/source/dsl.html?h=set_acc#kfp.dsl.PipelineTask.set_accelerator_limit) so that is covers both GPU and TPU (on Vertex). Meanwhile, `set_gpu_limit` still exists but gives you a deprecation warning and calls into `set_accelerator_limit` under the hood.\r\n\r\n> mount hostPath (.add_volume(), add_volume_mount())\r\n\r\nVolume mount has been moved into an extension package: \r\nhttps://www.kubeflow.org/docs/components/pipelines/v2/platform-specific-features/\r\n\r\n> .set_security_context()\r\n\r\nKFP v2 doesn't have this feature parity yet. My advice would be submitting a separate feature request with the description of your use case.",
"@chensun Is there a way to use data stored on Nas??\r\n"
] | 2023-09-11T07:20:36 | 2023-09-12T01:35:55 | null | NONE | null | Hello,
we have upgraded our Kubeflow Pipelines to version 2.0.1. We are now having a lot of trouble rewriting our code from version 1.8. We are missing the following features (maybe we just didn't find them in v2):
- set gpu resources (.set_gpu_limit())
- mount hostPath (.add_volume(), .add_volume_mount())
- .set_security_context()
We want to train on multiple gpus, so we need to set gpu and cpu resources, connect network data storage, and so on.
Is Kubeflow Pipelines v2 good for us? Are these features planned in version 2? Or should we still use Kubeflow Pipelines 1.8? | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9970/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9970/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9962 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9962/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9962/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9962/events | https://github.com/kubeflow/pipelines/issues/9962 | 1,887,425,701 | I_kwDOB-71UM5wf9Sl | 9,962 | [backend] Persistence Agent failing kubeflow-pipeline-mkp-test (2) | {
"login": "difince",
"id": 11557050,
"node_id": "MDQ6VXNlcjExNTU3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11557050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/difince",
"html_url": "https://github.com/difince",
"followers_url": "https://api.github.com/users/difince/followers",
"following_url": "https://api.github.com/users/difince/following{/other_user}",
"gists_url": "https://api.github.com/users/difince/gists{/gist_id}",
"starred_url": "https://api.github.com/users/difince/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/difince/subscriptions",
"organizations_url": "https://api.github.com/users/difince/orgs",
"repos_url": "https://api.github.com/users/difince/repos",
"events_url": "https://api.github.com/users/difince/events{/privacy}",
"received_events_url": "https://api.github.com/users/difince/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 2023-09-08T10:52:36 | 2023-09-08T10:57:54 | null | MEMBER | null | ### Steps to reproduce
Bearing in mind issue [Persistence agent failing kubeflow-pipeline-mkp-test](https://github.com/kubeflow/pipelines/issues/9904), which appeared after changes to the Persistence agent manifest files, I suspect that the mkp-test may fail again after the merge of [this](https://github.com/kubeflow/pipelines/pull/9957) PR, as it also changed the manifest files.
At the moment there is no automatic way to move/sync changes done in the main manifest files to the manifests in the marketplace helm chart, which is why I suggest someone validate the state of the `mkp-test` and apply the changes done in the above PR to the helm chart if needed.
Please close this issue if that is not the case.
cc: @zijianjoy, @chensun
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
### Expected result
<!-- What should the correct behavior be? -->
### Materials and reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9962/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9960 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9960/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9960/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9960/events | https://github.com/kubeflow/pipelines/issues/9960 | 1,884,738,276 | I_kwDOB-71UM5wVtLk | 9,960 | What's the current behaviour with the run_pipeline? | {
"login": "fclesio",
"id": 10605378,
"node_id": "MDQ6VXNlcjEwNjA1Mzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/10605378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fclesio",
"html_url": "https://github.com/fclesio",
"followers_url": "https://api.github.com/users/fclesio/followers",
"following_url": "https://api.github.com/users/fclesio/following{/other_user}",
"gists_url": "https://api.github.com/users/fclesio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fclesio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fclesio/subscriptions",
"organizations_url": "https://api.github.com/users/fclesio/orgs",
"repos_url": "https://api.github.com/users/fclesio/repos",
"events_url": "https://api.github.com/users/fclesio/events{/privacy}",
"received_events_url": "https://api.github.com/users/fclesio/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@fclesio, I think the best solution for what you are trying to achieve is to compose the tasks together into a sequential DAG using a pipeline. Please take a look at the docs on [data passing and task dependencies](https://www.kubeflow.org/docs/components/pipelines/v2/pipelines/pipeline-basics/#data-passing-and-task-dependencies) and feel free to re-open if there are outstanding issues.",
"Thanks for the answer @connor-mccarthy, but I am afraid that I express myself with a lack of detail. \r\n\r\nFor a single pipeline it's clear that we can set those dependencies; however, if we have multiple and _independent_ pipelines that needs to have some sequencing, that solution does not work. \r\n\r\nOn top of that, let's say that we have a Pipeline (DAG) with almost 1000 lines, if we have 5 similar ones with the same size we're talking about to have a single pipeline with 5000 lines. \r\n\r\nBeing more specific: there is some component that can triggers another pipeline/DAG like [Airflow has the triggerDAGrun](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/operators/trigger_dagrun/index.html)? ",
"I see, @fclesio. Thanks for the detail. KFP doesn't provide this sort of functionality directly, unfortunately."
] | 2023-09-06T20:53:45 | 2023-09-08T14:47:16 | 2023-09-07T22:50:17 | NONE | null | Thanks once again for the great library.
I have an ML pipeline that consists of 3 tasks: ETL > Pre-Processing > Modelling and Deploy.
Since those tasks are sequential, I created 3 pipelines and I used the [kfp.client.run_pipeline ](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/client/client.py#L688C9-L688C21) to run each one sequentially.
One thing that I noticed is that the client only _fires_ the pipeline and, if the API returns 200, marks it as a success, regardless of the pipeline's runtime length.
In my case run_pipeline did actually fire up all the pipelines, but all of them ran at the same time (not the desired behaviour).
Is there some way to have a kind of "wait status" that only returns success once the called pipeline has run to the end?
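For what it's worth, the KFP client exposes a blocking helper, `kfp.Client.wait_for_run_completion(run_id, timeout)` (check your SDK version's docs for the exact signature). The generic pattern such a helper implements can be sketched with a stand-in `get_state` callable, a hypothetical substitute for a real API call such as `get_run`:

```python
import time

TERMINAL = {"SUCCEEDED", "FAILED", "ERROR", "SKIPPED"}

def wait_for_terminal(get_state, timeout=60.0, poll_interval=1.0):
    """Poll get_state() until it returns a terminal state or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state in TERMINAL:
            return state
        time.sleep(poll_interval)
    raise TimeoutError("pipeline run did not reach a terminal state in time")

# Usage with a stand-in state source (a real caller would query the KFP API):
states = iter(["PENDING", "RUNNING", "SUCCEEDED"])
print(wait_for_terminal(lambda: next(states), poll_interval=0))  # SUCCEEDED
```

Calling such a wait after each `run_pipeline` call would make the three pipelines run sequentially instead of all at once.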
Thanks once again.
| {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9960/timeline | null | completed | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9958 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9958/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9958/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9958/events | https://github.com/kubeflow/pipelines/issues/9958 | 1,882,400,753 | I_kwDOB-71UM5wMyfx | 9,958 | chore(frontend): Refactor the class component to functional component | {
"login": "jlyaoyuli",
"id": 56132941,
"node_id": "MDQ6VXNlcjU2MTMyOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlyaoyuli",
"html_url": "https://github.com/jlyaoyuli",
"followers_url": "https://api.github.com/users/jlyaoyuli/followers",
"following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}",
"gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions",
"organizations_url": "https://api.github.com/users/jlyaoyuli/orgs",
"repos_url": "https://api.github.com/users/jlyaoyuli/repos",
"events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlyaoyuli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1289588140,
"node_id": "MDU6TGFiZWwxMjg5NTg4MTQw",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/feature",
"name": "kind/feature",
"color": "2515fc",
"default": false,
"description": ""
}
] | open | false | {
"login": "jlyaoyuli",
"id": 56132941,
"node_id": "MDQ6VXNlcjU2MTMyOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlyaoyuli",
"html_url": "https://github.com/jlyaoyuli",
"followers_url": "https://api.github.com/users/jlyaoyuli/followers",
"following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}",
"gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions",
"organizations_url": "https://api.github.com/users/jlyaoyuli/orgs",
"repos_url": "https://api.github.com/users/jlyaoyuli/repos",
"events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlyaoyuli/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jlyaoyuli",
"id": 56132941,
"node_id": "MDQ6VXNlcjU2MTMyOTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/56132941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlyaoyuli",
"html_url": "https://github.com/jlyaoyuli",
"followers_url": "https://api.github.com/users/jlyaoyuli/followers",
"following_url": "https://api.github.com/users/jlyaoyuli/following{/other_user}",
"gists_url": "https://api.github.com/users/jlyaoyuli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlyaoyuli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlyaoyuli/subscriptions",
"organizations_url": "https://api.github.com/users/jlyaoyuli/orgs",
"repos_url": "https://api.github.com/users/jlyaoyuli/repos",
"events_url": "https://api.github.com/users/jlyaoyuli/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlyaoyuli/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-09-05T17:00:17 | 2023-09-05T17:00:17 | null | COLLABORATOR | null | null | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9958/timeline | null | null | null | null | false |
https://api.github.com/repos/kubeflow/pipelines/issues/9956 | https://api.github.com/repos/kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines/issues/9956/labels{/name} | https://api.github.com/repos/kubeflow/pipelines/issues/9956/comments | https://api.github.com/repos/kubeflow/pipelines/issues/9956/events | https://github.com/kubeflow/pipelines/issues/9956 | 1,879,683,257 | I_kwDOB-71UM5wCbC5 | 9,956 | [bug] Tensorboard gives "No dashboards are active for the current data set." when passing minio path. | {
"login": "pandapool",
"id": 28887240,
"node_id": "MDQ6VXNlcjI4ODg3MjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/28887240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pandapool",
"html_url": "https://github.com/pandapool",
"followers_url": "https://api.github.com/users/pandapool/followers",
"following_url": "https://api.github.com/users/pandapool/following{/other_user}",
"gists_url": "https://api.github.com/users/pandapool/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pandapool/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pandapool/subscriptions",
"organizations_url": "https://api.github.com/users/pandapool/orgs",
"repos_url": "https://api.github.com/users/pandapool/repos",
"events_url": "https://api.github.com/users/pandapool/events{/privacy}",
"received_events_url": "https://api.github.com/users/pandapool/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1073153908,
"node_id": "MDU6TGFiZWwxMDczMTUzOTA4",
"url": "https://api.github.com/repos/kubeflow/pipelines/labels/kind/bug",
"name": "kind/bug",
"color": "fc2515",
"default": false,
"description": ""
}
] | open | false | {
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "chensun",
"id": 2043310,
"node_id": "MDQ6VXNlcjIwNDMzMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2043310?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chensun",
"html_url": "https://github.com/chensun",
"followers_url": "https://api.github.com/users/chensun/followers",
"following_url": "https://api.github.com/users/chensun/following{/other_user}",
"gists_url": "https://api.github.com/users/chensun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chensun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chensun/subscriptions",
"organizations_url": "https://api.github.com/users/chensun/orgs",
"repos_url": "https://api.github.com/users/chensun/repos",
"events_url": "https://api.github.com/users/chensun/events{/privacy}",
"received_events_url": "https://api.github.com/users/chensun/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-09-04T07:30:02 | 2023-09-07T22:48:31 | null | NONE | null | ### Environment
<!-- Please fill in those that seem relevant. -->
* How do you deploy Kubeflow Pipelines (KFP)?
<!-- For more information, see an overview of KFP installation options: https://www.kubeflow.org/docs/pipelines/installation/overview/. -->
* KFP version:
<!-- Specify the version of Kubeflow Pipelines that you are using. The version number appears in the left side navigation of user interface.
To find the version number, See version number shows on bottom of KFP UI left sidenav. -->
1.8.1
* KFP SDK version:
<!-- Specify the output of the following shell command: $pip list | grep kfp -->
1.8.19
### Steps to reproduce
<!--
Specify how to reproduce the problem.
This may include information such as: a description of the process, code snippets, log output, or screenshots.
-->
Create a function to check logging of scalars:

```python
import json
from collections import namedtuple
from typing import NamedTuple

def foo(runid) -> NamedTuple("EvaluationOutput", [("mlpipeline_ui_metadata", "UI_metadata")]):
    import subprocess
    subprocess.run(["pip", "install", "tensorboardX==2.5"])
    from tensorboardX import SummaryWriter
    from minio import Minio

    # Write a few scalar points to a local log directory.
    log_dir = "/tensorboard_logs/"
    writer = SummaryWriter(log_dir)
    for i in range(20):
        # Log accuracy to TensorBoard
        acc = (i + 2) ** 2 / (i + 1) ** 2 + i - 1
        writer.add_scalar("Accuracy", acc, i + 1)
        writer.flush()

    # Upload the event files to MinIO (client and bucket_name come from the
    # surrounding pipeline code).
    from pipelines.utils.minio_utils import upload_local_directory_to_minio
    upload_local_directory_to_minio(client, bucket_name, "/tensorboard_logs/", "foo_logs")

    metadata = {
        "outputs": [{
            "type": "tensorboard",
            "source": "http://<Minio_IP>:<Minio_Port>/minio/datasets/foo_logs",
        }]
    }
    out_tuple = namedtuple("EvaluationOutput", ["mlpipeline_ui_metadata"])
    return out_tuple(json.dumps(metadata))
```
![image](https://github.com/kubeflow/pipelines/assets/28887240/ec51bf38-fb50-48bc-97c0-7b50e647e71d)
### Expected result
<!-- What should the correct behavior be? -->
Tensorboard should show a graph for the scalars written to events file. The issue is not with the events file as it works for a locally deployed tensorboard server.
![image](https://github.com/kubeflow/pipelines/assets/28887240/9a74e4fe-01a9-4d57-a8c4-9225bed57007)
### Materials and reference
<!-- Help us debug this issue by providing resources such as: sample code, background context, or links to references. -->
### Labels
<!-- Please include labels below by uncommenting them to help us better triage issues -->
<!-- /area frontend -->
<!-- /area backend -->
<!-- /area sdk -->
<!-- /area testing -->
<!-- /area samples -->
<!-- /area components -->
---
<!-- Don't delete message below to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍. | {
"url": "https://api.github.com/repos/kubeflow/pipelines/issues/9956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/kubeflow/pipelines/issues/9956/timeline | null | null | null | null | false |
# Dataset Card for Dataset Name

## Dataset Summary
GitHub Issues is a dataset consisting of the top 5,000 GitHub issues, as of 2023-09-02, from the Kubeflow Pipelines repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each issue are in English and concern Kubeflow Pipelines and the surrounding machine-learning workflow (MLOps) tooling.
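A typical first step with issue data like this is to separate true issues from pull requests using the `is_pull_request` column. The following is a minimal plain-Python sketch with toy rows (the numbers and titles are illustrative stand-ins, not real records); with the real dataset you would apply the same predicate via the 🤗 `datasets` library's `Dataset.filter`:

```python
# Toy rows shaped like this dataset's columns.
rows = [
    {"number": 9956, "title": "[bug] Tensorboard shows no dashboards", "is_pull_request": False},
    {"number": 9958, "title": "chore(frontend): refactor class components", "is_pull_request": False},
    {"number": 9901, "title": "feat(sdk): add image pull policy", "is_pull_request": True},
]

# Keep only genuine issues; pull requests share the GitHub issues API but are
# usually excluded for issue-classification tasks.
issues_only = [r for r in rows if not r["is_pull_request"]]
print([r["number"] for r in issues_only])  # → [9956, 9958]
```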
## Languages
The issues contain English of the kind commonly found in software-development discussions.
## Contributions
Thanks to @hjerpe for adding this dataset.