Dear Flyte Community,
I wanted to share some exciting news with all of you. While it may not be directly related to Flyte, I started Union, a venture aimed at supporting Flyte development and collaborating with the incredible community of individuals like yourselves. This is why the recent announcement of Union AI raising significant funding is of utmost importance for our Flyte community.
We have witnessed Flyte's widespread adoption and the increasing contributions from passionate individuals. Among these contributors, Union AI stands out as one of the top supporters. We are committed to nurturing this community and ensuring its everlasting success. With this goal in mind, I am pleased to share that we have raised $19.1 million for Union, a substantial portion of which will be dedicated to enhancing Flyte and making it the world's best orchestration platform.
You can find more details about this exciting development <https://techcrunch.com/2023/05/17/union-ai-raises-19-1m-series-a-to-simplify-ai-and-data-workflows-with-flyte/|here>.
I am immensely grateful for your unwavering support and love. Together, let us continue working hand in hand to empower our end users, relieving them of infrastructure worries, and enabling companies worldwide to harness the power of Flyte.
With heartfelt appreciation,
Ketan
| Congratulations!!! So much deserved :heart:
| Great news <@UNZB4NW3S>! Wish only the best for Union and Flyte ahead! :heart:
| Congratulations!!! You guys deserve it!
| This is great, congratulations!
|
This message was deleted.
| Is there a passcode? Zoom bridge seems to require passcode now?
I think the slackbot just needs to be updated!
| right, sorry. Won't happen again
| where are we supposed to find the link? is it just the zoom bridge link?
| <https://www.addevent.com/event/EA7823958>
|
:chart_with_upwards_trend: Congrats to <@U057SH9UML5> for being the 2,000th member to join this community!
:sparkler: We're happy to have you here Greg! :sparkler:
| cool name
|
hi all, be sure to use the Zoom link in the <https://www.addevent.com/event/EA7823958?utm_source=slack&utm_medium=social&utm_campaign=fcsync|Add the event> page
| <@U01DYLVUNJE>: what's the meeting passcode?
oh had to prune the `Meeting` at the end
| oh weird, slack messed up the link there
|
:loudspeaker: Hello Flyte community!
Join us tomorrow (April 18) for the bi-weekly community sync!
Agenda:
• Community updates (events + :sunglasses:surprise!)
• <@U04SG3LPDRS>, who recently authored the fantastic <https://medium.com/@timleonardDS/who-lets-the-dags-out-from-dbt-to-flyte-exporting-and-running-a-dag-5845a9a5151c|deep-dive series on dbt+Flyte>, will show how his team is making life easy with imperative Flyte workflows. Don't miss it!
:calendar: Tue Apr 18, 9:00a.m. PT (<https://dateful.com/time-zone-converter?t=9am&tz2=Seattle-Washington|convert to your timezone>)
:arrow_right: <https://go.union.ai/tqTMssgA|Add the event> to your calendar
:arrow_right: <https://lists.lfaidata.foundation/g/flyte-announce/join|Subscribe> to the mailing list
:star: the <https://github.com/flyteorg/flyte|repo>
:purple_heart: Everyone is welcome. Bring your questions/comments or just join us to listen and learn; that's totally fine.
We hope to see you there!
| hey is there a recording?
| we'll let you all know when the recording is posted :slightly_smiling_face:
aaand the recording is out <@UU43XA551>:
<https://youtu.be/y_DOiY8dI1k>
|
:mega: *Monthly Contributors Spotlight* :mega:
Hey Flyte fam! :wave:
We just wanted to give a big shoutout to some awesome folks who have been killing it with their contributions to Flyte. Big thanks to <@U02FZ6AT4KB>, <@U019PBV483E>, and <@U042Z2S8268> for going above and beyond to help make Flyte the best it can be.
• <@U02FZ6AT4KB> has been providing valuable assistance to the Flyte community on Slack.
• <@U019PBV483E> worked on some serious Pydantic magic :magic_wand: and we can't wait to see it in action on April 18th at the community sync.
• <@U042Z2S8268> has made notable contributions to Flyte, including his work on the <https://docs.google.com/document/d/1NS93TghOzwKamihQMDATd_jdYJtrtWQ1sViSTSDduAA/edit?usp=sharing|Flyte config overrides RFC> and crucial backend changes.
Once again, thank you Maarten, Greg, Byron, and the entire Flyte community for all your hard work and support.
| Thanks to the community for all the support, especially <@USU6W5ATA> <@UNR3C6Y4T> <@U029U35LRDJ>
| is there any email communication we can forward? <@U01J90KBSU9> :slightly_smiling_face:
| Oh, there isn't. <@U04H6UUE78B>, is that a possibility?
| Absolutely, it will be part of this month's newsletter.
Join the mailing list to receive it in your inbox
<https://lists.lfaidata.foundation/g/flyte-announce/join|https://lists.lfaidata.foundation/g/flyte-announce/join>
|
Hey all β <@U04SG3LPDRS> has published a series of articles demonstrating the synergy between dbt :dbt: and Flyte :flyte-2: for creating robust data transformation workflows. If you find this topic intriguing, I recommend checking out the series.
<https://link.medium.com/wgWk2LAuXxb>
Thank you, <@U04SG3LPDRS>, for dedicating your time and effort towards creating this series!
| Nice one Tim!
|
:mega: *Contributors of the Month*
We would like to express our gratitude and extend a special thanks to <@U03C1MJQ892>, <@U03A7NKMWGM> and <@U04664Z7H37> for their invaluable contributions to Flyte.
Here's what <@U029U35LRDJ> has to say about *Bernhard* and *Nick,* in his own words: Want to nominate Bernhard from Pachama and Nick from Blackshark for our next contributors of the month. Bernhard wrote a <https://blog.dask.org/2023/02/13/dask-on-flyte|Dask plugin> which involved submitting PRs to the k8s Dask operator to add a task status at the top-level of the CRD in addition to engineering everything end to end in Flyte (propeller, plugins, idl, flytekit, etc). Also, this is Bernhard's first work in Go - so he was learning the language as he went. Nick has had ongoing work into cache overwriting and cache deleting. I believe it was our first community <https://github.com/flyteorg/flyte/blob/master/rfc/system/2633-eviction-of-cached-task-outputs.md|RFC>. Cache overwriting allows users to start a workflow execution where each task overwrites the existing cached values, and the cache delete work will allow users to delete all of the cached values for tasks within a workflow execution. This has involved multiple iterations over many PRs in all of our repos. Nick has been a real pleasure to collaborate with on these.
In addition, our deepest gratitude to both of them for helping the community on Slack!
*Fabio* wrote an excellent <https://medium.com/@fabiograetz/attach-a-visual-debugger-to-ml-training-jobs-on-kubernetes-eb9678389f1f|article> on how to attach a visual debugger to ML training jobs on Kubernetes, which can be immensely helpful in debugging. Please give it a try!
Once again, thank you Bernhard, Nick, Fabio, and the entire Flyte community for your continued support and contributions.
| <@U02J8QS6AN7> and <@U03C1MJQ892> you folks are rockstars
| Thank you :bow:
|
Workflow execution seems to start but fails with the error below, and I don't see any other stack trace.
```Pod failed. No message received from kubernetes.
[f1943114e818d4b7eaf3-n0-0] terminated with exit code (1). Reason [Error]. Message:
exec /opt/venv/bin/pyflyte-execute: exec format error
```
<@U01J90KBSU9> <@UNZB4NW3S> <@U04H6UUE78B> what could be the reason?
| How did you build the container
| I am building the container using Docker on my local MacBook and pushing to a sandbox environment that I am running on EC2.
Could the Docker build env be causing the issue, as my Mac is on ARM and the EC2 VM is probably on a different arch?
| you should use buildkit to build an amd64 image
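e.g. something like this (registry/image names are placeholders):
```# cross-build an amd64 image from an ARM Mac and push it in one step
docker buildx build --platform linux/amd64 -t myregistry/myimage:tag --push .```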
| Ok let me try that
|
:wave: I'm running into an auth problem with flytectl using the ClientSecret (client credentials) method. I've assigned the proper scopes to the oauth2 client and the flyteadmin service isn't complaining about not being able to find the token in the metadata anymore which is nice. However now I'm getting a response to flytectl that I'm missing a scope:
```Error: Connection Info: [Endpoint: dns:///flyte.localhost:80, InsecureConnection?: true, AuthMode: ClientSecret]: rpc error: code = Unauthenticated desc = authenticated user doesn't have required scope```
LinkerD confirms that it's the flyteadmin container that responds with Unauthenticated.
This is quite perplexing and I've verified that the token is sent and it contains all the scopes required: `"scope": "all email offline profile"`
I hope people can give me some hints as I've been bashing my head against this for way too long. I'm just trying to have flytectl register stuff in CI.
Just for completeness, here's an example of the token that gets sent in the `flyte-authorization` header:
```{
  "exp": 1684787333,
  "iat": 1684787033,
  "jti": "984df155-c8fc-41f4-9e33-65e38eb51e06",
  "iss": "http://keycloak.localhost/realms/flyte",
  "aud": "account",
  "sub": "02bf7fe8-1b4a-4b5f-a08c-378eb26be405",
  "typ": "Bearer",
  "azp": "flytectl",
  "session_state": "43678964-d918-4557-b9ae-ce3639e8cfe3",
  "acr": "1",
  "allowed-origins": [
    "/*"
  ],
  "realm_access": {
    "roles": [
      "default-roles-flyte",
      "offline_access",
      "uma_authorization"
    ]
  },
  "resource_access": {
    "account": {
      "roles": [
        "manage-account",
        "manage-account-links",
        "view-profile"
      ]
    }
  },
  "scope": "openid profile email all offline",
  "sid": "43678964-d918-4557-b9ae-ce3639e8cfe3",
  "email_verified": false,
  "clientHost": "10.42.0.50",
  "preferred_username": "service-account-flytectl",
  "clientAddress": "10.42.0.50",
  "client_id": "flytectl"
}```
This matches the access_token that's returned from keycloak to flytectl. However, if I enable openid as a scope in the auth request, it also returns an id_token which looks like the following. Note the lack of scopes here.
```{
  "exp": 1684787117,
  "iat": 1684786817,
  "auth_time": 0,
  "jti": "07555916-cb38-485f-ae29-e4af0a183dfb",
  "iss": "http://keycloak.localhost/realms/flyte",
  "aud": "flytectl",
  "sub": "02bf7fe8-1b4a-4b5f-a08c-378eb26be405",
  "typ": "ID",
  "azp": "flytectl",
  "session_state": "f6c36f22-0614-4e58-811f-0ceb02916037",
  "at_hash": "BqKQDMlf3921-RZSmzaHtA",
  "acr": "1",
  "sid": "f6c36f22-0614-4e58-811f-0ceb02916037",
  "email_verified": false,
  "clientHost": "10.42.0.50",
  "preferred_username": "service-account-flytectl",
  "clientAddress": "10.42.0.50",
  "client_id": "flytectl"
}```
Seeing as flyteadmin also does a request towards keycloak as shown by this linkerd screenshot, is it possible it uses the access_token to get more information about the service account and then fails when that information does not actually return scopes?
| Have you set `insecure` to true in your flytectl config?
| Yes i have. Along with the ClientSecret authtype and clientId that matches my oauth2 app.
| Can you set it to false?
| I can but it won't work as this POC setup has no tls enabled. What were you expecting it to do?
As expected I'm getting transport connection errors now.
is there some hidden extra auth logic that gets triggered when we change transport layer settings?
| No there isn't, I thought you had tls enabled. Please check this thread: <https://discuss.flyte.org/t/2448647/u038ft3lcre-uptrgr537-let-s-discuss-auth-here#60a03d43-2aff-4f07-a23c-34924c66f70d>
| Ah no biggie
I do have the scopes all and offline enabled and assigned in keycloak along with adding it to the local flytectl config.
I've read the thread twice now and it's not helping me. As I've mentioned before I've verified that the `flyte-authentication` header contains `Bearer <token>` which, when decoded, contains the proper scopes.
I can't believe it's just me... Isn't anyone using a CI system to talk to Flyte? Or has everyone simply disabled auth altogether? Maybe using an OAuth proxy in front of it then?
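For anyone landing on this thread, a minimal flytectl `config.yaml` for the client-credentials flow might look roughly like this (endpoint, client id, and secret path are placeholders; verify against your flyteadmin auth setup):
```admin:
  endpoint: dns:///flyte.localhost:80
  insecure: true
  authType: ClientSecret
  clientId: flytectl  # must match the OAuth2 client registered in your IdP
  clientSecretLocation: /etc/secrets/client_secret  # file containing the client secret
  scopes:  # typically needed when flyteadmin points at an external auth server
    - all
    - offline```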
|
```[1/1] currentAttempt done. Last Error: USER::containers with unready status: [primary]|context deadline exceeded```
I've seen such errors a few times. What would be the root cause?
| The container could not come up. This is interesting
Is this a pod template?
Are you starting init containers?
| I don't think I used ~pod template nor~ init containers, just normal startup. The pod template is used though:
```apiVersion: v1
kind: PodTemplate
metadata:
  name: flyte-template
  namespace: app-development
template:
  spec:
    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: fsx-app-development
          readOnly: true
    containers:
      - name: default
        image: docker.io/rwgrim/docker-noop
        volumeMounts:
          - name: data
            mountPath: /data
            readOnly: true```
|
Hi! I was wondering how to allow a Flyte user to log a training run to their Weights & Biases account. Sending the logs via a service account could of course be solved with e.g. a Kubernetes secret, but we want the logs to end up in the console of the user who started the run. On the W&B side you can control this by e.g. setting env vars. But how would I do this with Flyte? How can I set env variables depending on the authenticated user?
| This issue was opened recently and we are working on it; please comment on the issue:
<https://github.com/flyteorg/flyte/issues/3696|https://github.com/flyteorg/flyte/issues/3696>
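In the meantime, a static value can be set per task via flytekit's `environment` parameter (a sketch with placeholder values; the per-user injection asked about above is what the issue tracks):
```from flytekit import task


# static env var applied to every execution of this task;
# per-user values (e.g. a W&B key per authenticated user) are not supported yet
@task(environment={"WANDB_ENTITY": "my-team"})
def train() -> None:
    ...```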
|
Hi I was wondering if anyone has run into `ValueError: Empty module name` when running a `map_task()` with flytekit and trying to load the actual `task` object. More details below :arrow_down:
cc: <@U04JWL4HR5F>
When calling `map_task` with flytekit we are seeing this `ValueError: Empty module name`
```❱ 546 _execute_map_task(
/root/micromamba/envs/xxx/lib/python3.10/site-packages/flytekit/exceptions/scopes.py:160 in system_entry_point
❱ 160 return wrapped(*args, **kwargs)
/root/micromamba/envs/xxx/lib/python3.10/site-packages/flytekit/bin/entrypoint.py:394 in _execute_map_task
❱ 394 map_task = mtr.load_task(loader_args=resolver_args, max_concur
/root/micromamba/envs/xxx/lib/python3.10/site-packages/flytekit/core/utils.py:295 in wrapper
❱ 295 return func(*args, **kwargs)
/root/micromamba/envs/xxx/lib/python3.10/site-packages/flytekit/core/map_task.py:368 in load_task
❱ 368 resolver_obj = load_object_from_module(resolver)
/root/micromamba/envs/xxx/lib/python3.10/site-packages/flytekit/tools/module_loader.py:43 in load_object_from_module
❱ 43 class_obj_mod = importlib.import_module(".".join(class_obj_mod))
/root/micromamba/envs/xxx/lib/python3.10/importlib/__init__.py:126 in import_module
❱ 126 return _bootstrap._gcd_import(name[level:], package, level)
in _gcd_import:1047
in _sanity_check:981
ValueError: Empty module name```
| that feels like the underlying task is not accessible.
where do you have that defined?
can you copy paste more code? is this fast-register?
| The workflows are registered by running `flytectl register files`
`map_task` is invoked as such:
``` result = map_task(foo_task, concurrency=5)(
    a=x
)```
in a workflow defined in directory `flyte/a/b/tasks`
`foo_task` is defined in a file in the same directory `flyte/a/b/tasks`
The failing command `pyflyte-map-execute` in the logs has `--input` `'task-module'` : `flyte.a.b.tasks`
Here is the structure of the error message:
```CalledProcessError: Command '['pyflyte-map-execute', '--inputs',
'xxx', '--output-prefix',
'xxx', '--raw-output-data-prefix',
'xxx',
'--checkpoint-path',
'xxx',
'--prev-checkpoint', '""', '--dynamic-addl-distro',
'xxx', '--dynamic-dest-dir', '.',
'--resolver', 'MapTaskResolver', '--', 'vars', 'resolver',
'flytekit.core.python_auto_container.default_task_resolver', 'task-module',
'flyte.a.b.tasks', 'task-name', 'foo_task']'```
Thanks Yee, let me know if there's anything else I can provide that would be helpful to troubleshoot!
| I think the task module should be `flyte.a.b.tasks.<module>`. Is it possible for you to share the directory structure and the serialization & registration commands you're running?
| Thanks Samhita, here's the dir structure:
workflow defined in `flyte/a/b/tasks/workflows.py` and the workflow calls a map_task():
```@workflow
def foo_workflow():
    result = map_task(foo_task, concurrency=5)(
        a=x
    )```
task is defined in `flyte/a/b/tasks/tasks.py`
Serialization/registration commands:
``` pyflyte -k a -k flyte.a serialize --image xxx workflows -f /tmp/workflows \
&& flytectl --admin.endpoint dns:///xxx register files /tmp/workflows/* -p foo -d bar --version='xxx' \```
| Can you try `pyflyte package` instead of `pyflyte serialize`?
<https://docs.flyte.org/projects/flytekit/en/latest/pyflyte.html#pyflyte-package>
<https://docs.flyte.org/projects/cookbook/en/latest/getting_started/package_register.html#package-your-project-with-pyflyte-package>
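Roughly, something like this (module path, image, project/domain/version are placeholders):
```# serialize into a portable archive (flyte-package.tgz by default)...
pyflyte --pkgs flyte.a.b.tasks package --image xxx -f
# ...then register the archive
flytectl register files flyte-package.tgz --archive -p foo -d bar --version xxx```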
| FYI dropping back down to Flytekit 1.4.2 resolved this issue, so I suspect this is something that's either broken or changed with the recent releases (the above error was encountered with flytekit 1.6.1).
| cc <@U0265RTUJ5B>
|
I am new to Flyte. It looks like I cannot install flytectl (<https://docs.flyte.org/projects/flytectl/en/latest/|https://docs.flyte.org/projects/flytectl/en/latest/>) on a Windows machine. Is that correct? Thanks!
| <@U058V6UT01J>, it should work. Have you tried running `curl -sL <https://ctl.flyte.org/install> | bash` command?
| Thank you for your reply! Do I need to install WSL first?
| I think so.
<https://github.com/flyteorg/flyte/issues/1561#issuecomment-939034681>
| Many thanks. Let me look into it.
|
Hi, I am very new to Flyte and trying to learn how Flyte compiles its workflows. There's a CLI command, pyflyte serialize, but I could not get it to work with an example project created via pyflyte init. It would be great if anyone can share some insights. Here is my question on SO: <https://stackoverflow.com/questions/76298549/how-to-run-pyflyte-serialize-workflows|https://stackoverflow.com/questions/76298549/how-to-run-pyflyte-serialize-workflows>
| Hi Ping! If you want to get workflows running on Flyte, I would recommend using the following commands instead of serialize.
This works for a single file:
```# run locally
pyflyte --config ~/.flyte/config.yaml run ./workflows/example.py wf
# run remote
pyflyte --config ~/.flyte/config.yaml run --remote -i [image] ./workflows/example.py wf```
This works for an entire python module. Flyte will serialize tasks, package your entire module, and send the code to Flyte.
```# register
pyflyte --config ~/.flyte/config.yaml register -i [image] ./workflows/example.py```
^ Kick off from the UI or use <https://docs.flyte.org/projects/flytekit/en/latest/remote.html|flyte remote>
If you are using the demo cluster: it sets up a container registry at `localhost:30000`. You can use new images with the following commands:
```docker build -t localhost:30000/imagename:tag .
docker push localhost:30000/imagename:tag```
| Thanks a lot <@U0530S525L0>
I am wondering, if I use pyflyte run, where can I see the serialized task/workflow?
| I am not 100% sure, but I am pretty sure pyflyte automatically saves them to a local directory. You might be able to inspect the local dir?
Pyflyte manages sending the serialized tasks / workflows to a flyte cluster, which I imagine is the demo cluster in your case?
```flytectl demo start```
I usually skip looking at the serialized workflows, since running workflows locally doesn't require serialization and pyflyte manages pushing workflows/code to the Flyte cluster for you.
If `pyflyte run --remote` doesn't print out the local dir where the workflows are serialized, then I think someone from the open-source team who knows the internals better can help you tomorrow + this week.
| Thank you so much. Let me also play around with the --remote flag.
| Hi Evan, does this mean that to run it remotely, people need to push the code to a container registry first? Thanks.
|
Does flyte support returning a `FlyteFile | None` ? Or `Optional[FlyteFile | None]`
| we support optional
| yes, we support both optional and union
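A minimal sketch of an optional file output (paths/names are placeholders):
```from typing import Optional

from flytekit import task
from flytekit.types.file import FlyteFile


@task
def maybe_file(produce: bool) -> Optional[FlyteFile]:
    if not produce:
        return None  # None is a valid value for an Optional output
    path = "/tmp/out.txt"
    with open(path, "w") as f:
        f.write("hello")
    return FlyteFile(path)```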
| <@U0539B9794L>, are you seeing otherwise?
|
We are regularly hitting the issue mentioned in this resolved ticket: <https://github.com/flyteorg/flyte/issues/1234>
In our case we have a `@workflow` that uses map `@task`s and spins up 10-100s of pods. Our EKS setup is very elastic and sometimes will bin-pack these pods onto the same node. When it does that there's a high chance that one of the pods in the map `@task` will fail with an error similar to the issue linked above.
Our kubelet is currently setup to do serial image pulls. So, in theory, once one of the `@task`s pulls the image it _should_ then be available for all of the other pods. But it seems that's not the case. Initially i thought the fact that flyte is setting `imagePullPolicy: Always` was a problem, but <https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy|reading the docs more closely> it seems that's not the case.
> *`Always`*
> every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image <https://docs.docker.com/engine/reference/commandline/pull/#pull-an-image-by-digest-immutable-identifier|digest>. *If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image*; otherwise, the kubelet pulls the image with the resolved digest, and uses that image to launch the container.
(emphasis mine)
Has anyone observed this issue? Any recommendations?
example error message:
```currentAttempt done. Last Error: UNKNOWN::[34]: code:"ContainersNotReady|ImagePullBackOff" message:"containers with unready status: [f2208032e4b24406b9f9-n1-0-dn0-0-dn10-0-34]|Back-off pulling image \"<redacted>/flyte-plaster:23.5.16\""```
if I press the recover button any time we hit a failure like above, it does eventually succeed.
So might this be a case of the <https://github.com/flyteorg/flyteplugins/blob/acc831a734195aba489c9b05a607a175cdb8d2c7/go/tasks/pluginmachinery/flytek8s/config/config.go#L50-L52|CreateContainerErrorGracePeriod> being too tight (3m) for our workloads?
OK, I've bumped the grace period to 10m. Since this is a relatively random occurrence, I'll observe it over time.
| you have to ensure that requests == limit
otherwise kube will kick the container
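e.g. in flytekit (values are placeholders):
```from flytekit import Resources, task


# requests == limits gives the pod the Guaranteed QoS class,
# making it less likely to be evicted under node pressure
@task(requests=Resources(cpu="1", mem="2Gi"), limits=Resources(cpu="1", mem="2Gi"))
def heavy_task() -> None:
    ...```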
| Interesting; in terms of these tasks, they are `==`.
Is your suggestion that if `limit > requests` the workloads compete for underlying resources and take longer to start, thus more chance of hitting the grace period?
I ask, because we have other tasks that are not `==`.
Another thought: I observed that some of our nodes were taking approx 2m to get into a ReadyState (various daemonsets needed to start up). So if the counter starts from the moment the pod is requested we were eating up most of the time in elastically provisioning nodes.
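For reference, a rough sketch of where that grace period lives in a flyte-core `values.yaml`; the exact key path may differ by chart version, so treat this as an assumption to verify:
```configmap:
  k8s:
    plugins:
      k8s:
        # assumed key name, mirroring CreateContainerErrorGracePeriod; verify against your chart
        create-container-error-grace-period: 10m```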
|
Hello all
I'm trying to return machine learning models from a task so that another task can search for hyperparameters. The first task builds the models, which should then be passed to another task that finds the hyperparameters. The issue is that the models are passed as a Tuple, so the hyperparameter task has to expect a tuple of models, yet it errors with: Transformer for type <class 'tuple'> is restricted currently. The piece of code follows:
```@task
def build_models() -> Tuple[LinearRegression, RandomForestRegressor, Any]:
    lr = LinearRegression()
    rf = RandomForestRegressor()
    ann = HyperModel1()
    return lr, rf, ann

@task
def pipline_(models: Tuple[LinearRegression, RandomForestRegressor, Any], (.....etc)) -> List[Dict]:
    def itirate(models):
        for i in models:
            # Do STUFF```
```
def wf() -> pd.DataFrame:
    ....
    ....
    models = build_models()
    s_summary = HyperSearch(models=models, ....)
    return s_summary```
| Hi Khalil!
Tuple is restricted to be an output only, so you can't pass a tuple from build_models into HyperSearch.
You can pass a List to a map task and it will run in parallel! <https://docs.flyte.org/projects/cookbook/en/latest/auto/core/control_flow/map_task.html|https://docs.flyte.org/projects/cookbook/en/latest/auto/core/control_flow/map_task.html>
Note that outputs from Flyte tasks are serialized and saved in Flyte's metadata store. Initializing and returning untrained models will pickle them, save them, and reload them.
You could instead use a `List[str]` and importlib to import e.g. `sklearn.linear_model.LinearRegression`.
Or you could have a dataclass with more fields so the training task has all the info it needs to run; it would then only pickle the trained model!
```Inputs(model=str, hyperparams=dict, …)```
| Really appreciated, but can you provide simple code implementing what you have said? I'm not sure I got it right.
| Something like this?
```import importlib
from dataclasses import dataclass
from functools import partial
from typing import List, Optional, Tuple

import pandas as pd
import sklearn.datasets
from dataclasses_json import dataclass_json
from flytekit import map_task, task, workflow


@dataclass_json
@dataclass
class ModelConfig:
    model: str  # "sklearn.linear_model.LinearRegression"
    hyperparams: dict

    def load_model(self):
        module_name, class_name = self.model.rsplit(".", 1)  # split the module and class
        module = importlib.import_module(module_name)  # import the module
        model_class = getattr(module, class_name)  # get the class from the module
        return model_class(**self.hyperparams)


@task
def build_configs() -> List[ModelConfig]:
    lr_config = ModelConfig(
        model="sklearn.linear_model.LinearRegression", hyperparams={}
    )
    rf_config = ModelConfig(
        model="sklearn.ensemble.RandomForestRegressor",
        hyperparams={"max_features": "sqrt"},
    )
    return [lr_config, rf_config]


@task
def find_hyperparms(config: ModelConfig, X: pd.DataFrame, y: pd.DataFrame):
    model = config.load_model()
    model.fit(X, y.target)
    print(model.predict(X).mean())


@task
def load_data() -> Tuple[pd.DataFrame, pd.DataFrame]:
    X, y = sklearn.datasets.load_diabetes(return_X_y=True, as_frame=True)
    return X, pd.DataFrame(y)


@workflow
def wf():
    X, y = load_data()
    configs = build_configs()
    func = partial(find_hyperparms, X=X, y=y)
    # runs in parallel
    map_task(func)(config=configs)


wf()```
It seems like `dict` type transformer is broken for int types. It gets recast as a float...I will ping the OSS team and create an issue.
So it isn't broken, but there is a limitation in google's protobuf library that doesn't distinguish between int and float. This is kind of hacky...but works.
```import importlib
import json
from dataclasses import dataclass
from functools import partial
from typing import List, Optional, Tuple

import numpy as np
import pandas as pd
import sklearn.datasets
from dataclasses_json import dataclass_json
from flytekit import map_task, task, workflow


@dataclass_json
@dataclass
class ModelConfig:
    model: str  # "sklearn.linear_model.LinearRegression"
    hyperparams: str  # JSON-encoded dict, to keep ints from being coerced to floats

    def load_model(self):
        module_name, class_name = self.model.rsplit(".", 1)  # split the module and class
        module = importlib.import_module(module_name)  # import the module
        model_class = getattr(module, class_name)  # get the class from the module
        return model_class(**json.loads(self.hyperparams))


@task
def build_configs() -> List[ModelConfig]:
    lr_config = ModelConfig(
        model="sklearn.linear_model.LinearRegression",
        hyperparams=json.dumps({}),
    )
    rf_config = ModelConfig(
        model="sklearn.ensemble.RandomForestRegressor",
        hyperparams=json.dumps({"max_features": "sqrt", "n_estimators": 10}),
    )
    return [lr_config, rf_config]


@task
def find_hyperparms(config: ModelConfig, X: pd.DataFrame, y: pd.DataFrame):
    model = config.load_model()
    model.fit(X, y.target)
    print(model.predict(X).mean())


@task
def load_data() -> Tuple[pd.DataFrame, pd.DataFrame]:
    X, y = sklearn.datasets.load_diabetes(return_X_y=True, as_frame=True)
    return X, pd.DataFrame(y)


@workflow
def wf():
    X, y = load_data()
    configs = build_configs()
    func = partial(find_hyperparms, X=X, y=y)
    # runs in parallel
    map_task(func)(config=configs)


wf()```
| thank you, but I have a question: did you use the partial function to provide more than one parameter to the find_hyperparms function, which at the end will be mapped?
| Yeah you need to do that. It's a limitation of map tasks!
| Really appreciate your help man
| Anytime!!!
|
What's the best way to attach a URL to a Grafana dashboard from the Flyte console? Currently, we are able to generate links for Stackdriver logs using <https://docs.flyte.org/projects/cookbook/en/latest/auto/deployment/configure_logging_links.html|this>. Wondering if there is something similar for a monitoring dashboard.
| You can add arbitrary URL links there. Just add a template that will expand out to your monitoring system URL given the template variables, and you should be good.
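For example, a Grafana entry in the task-logs config could look roughly like this (the dashboard URL and its `var-*` parameters are placeholders; `{{ .namespace }}` and `{{ .podName }}` are standard template variables, and the exact nesting depends on your deployment config):
```plugins:
  logs:
    templates:
      - displayName: "Grafana"
        templateUris:
          - "https://grafana.example.com/d/k8s-pod?var-namespace={{ .namespace }}&var-pod={{ .podName }}"```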
| Got it :+1:
|
Does anybody else run into errors when trying to use map task workflows with Papermill/Jupyter NotebookTasks?
Running the example from the docs (<https://docs.flyte.org/projects/cookbook/en/latest/auto/core/control_flow/map_task.html#map-a-task-with-multiple-inputs>), and a plain @task decorated function works fine - but the moment I attempt to run a map task with NotebookTasks, I get this error:
```ValueError: Error encountered while executing 'run_map_task_workflow':
Map tasks can only compose of Python Functon Tasks currently```
Is `flytekitplugins.papermill.NotebookTask` currently incompatible with map_tasks?
| Great point. This is not supported, but is actually very easy to support. Can you share how you want to run the notebook task? Are you using `functools.partial`?
We should be able to add support for this pretty rapidly
cc <@USU6W5ATA> / <@U0530S525L0> can either of you add support?
| looking
| this is the problem - <@USU6W5ATA> - <https://github.com/flyteorg/flytekit/blob/ba70f4685e5ab4e00f23c5adedb281110b20f96e/flytekit/core/map_task.py#L57>
we should support python_instance_task too
<@USU6W5ATA> maybe you cannot use functools.partial with instance tasks. I do not know; if that is the case, it's just a matter of handling instance tasks separately and we are good
| <@UNZB4NW3S> I'm not too sure about functools.partial, but we are using the flytekit plugin for Jupyter.
`from flytekitplugins.papermill import NotebookTask`
And then we are passing in this NotebookTask into something like this:
```# Example
@workflow
def map_workflow():
    map_output = map_task(notebook_task_here)(inputs=input_list)
    return map_output
```
| Cool, ya this is a miss on my end when I wrote map task
We will fix it soon
| Cool, thank you for the quick response!
| Fixed it: <https://github.com/flyteorg/flytekit/pull/1650>
|
hello, I was wondering if it's possible to run a <https://docs.flyte.org/projects/cookbook/en/latest/auto/core/flyte_basics/shell_task.html|shell task> using a specific image? Is there any example of how to do this?
| shell task runs in the pod
| wait sorry i mean how do i specify an image
| `ShellTask(container_image=…)`
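e.g. (image name is a placeholder):
```from flytekit.extras.tasks.shell import ShellTask

# extra kwargs such as container_image are forwarded to the underlying task
hello = ShellTask(
    name="hello",
    script="echo hello world",
    container_image="ghcr.io/example/my-shell-image:latest",
)```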
| ah thanks!
|
Can anyone in the community provide some rough metrics on blob storage usage by Flyte? For example:
• total # of workflows, executions, and tasks
• whether raw user data is stored in the same blob storage
• the existing amount of storage used
I would also appreciate it a lot if someone can share experiences where the blob storage became the bottleneck or where failures happened.
| <@U045124RRFX> blob storage usage can grow quite a bit, but caching and passing references reduce usage significantly.
Most cloud blob stores have expiration policies, which work great with Flyte: data for completed workflows is never reused, so you can simply delete it after a period.
|
How can I get the status of a list of tasks running in a dynamic task using `flytekit.remote`? I have the execution information and can sync it for the workflow; however, I want to be able to give a user a CLI command they can run to check the status of a large job (number of tasks in RUNNING, SUCCEEDED, FAILED, etc.).
I can do this for the dynamic task, however I am finding it hard to do for the 1k tasks that it is running underneath it.
Or even better, get the status from python of the tasks in a `map_task` execution
| does the dynamic task spawn a subworkflow?
if it's spawning just a task, you could infer the execution ids, right?
| Yeah, we have a @dynamic task calling `map_task` on `n` inputs. We _could_ guess the exec id from that, yeah.
| so syncing the dynamic node did not return the child map tasks?
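For reference, a sketch of pulling node-level statuses with FlyteRemote (config path, project/domain, and execution name are placeholders):
```from flytekit.configuration import Config
from flytekit.remote import FlyteRemote

remote = FlyteRemote(Config.auto(config_file="flyte.config.yaml"))
ex = remote.fetch_execution(project="myproject", domain="development", name="myexecid")
ex = remote.sync_execution(ex, sync_nodes=True)  # sync_nodes pulls child node details
for node_id, node in ex.node_executions.items():
    print(node_id, node.closure.phase)  # phase is a NodeExecutionPhase enum value```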
|
Dear all,
I ran into an inconsistency between flytectl and FlyteRemote which has been reported <https://flyte-org.slack.com/archives/CP2HDHKE1/p1680811197406639|before>. When trying to create a FlyteRemote with this config
`admin:`
`endpoint: dns:///A.B.C.D:PPPPP`
`insecure: false`
`authType: Pkce`
`insecureSkipVerify: true`
flytectl works just fine. But when I try to fetch an execution on a FlyteRemote with this code
`project = "myproject"`
`domain = "development"`
`execution = "a5cphsfgc57nt6nxbknt"`
`flyte_config_file = "flyte.config.yaml"`
`remote = FlyteRemote(config=Config.auto(config_file=flyte_config_file))`
`flyte_workflow_execution = remote.fetch_execution(project=project, domain=domain, name=execution)`
I get the following error
```Traceback (most recent call last):
File "debug_remote.py", line 12, in <module>
flyte_workflow_execution = remote.fetch_execution(project=project, domain=domain, name=execution)
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/flytekit/remote/remote.py", line 353, in fetch_execution
self.client.get_execution(
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/flytekit/clients/friendly.py", line 582, in get_execution
super(SynchronousFlyteClient, self).get_execution(
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/flytekit/clients/raw.py", line 43, in handler
return fn(*args, **kwargs)
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/flytekit/clients/raw.py", line 651, in get_execution
return self._stub.GetExecution(get_object_request, metadata=self._metadata)
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_channel.py", line 1030, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_channel.py", line 910, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:A.B.C.D:PPPPP: Peer name A.B.C.D is not in peer certificate"
debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv4:A.B.C.D:PPPPP: Peer name A.B.C.D is not in peer certificate {created_time:"2023-05-18T14:40:11.018710903+01:00", grpc_status:14}"
>```
The cluster is running the flyte-binary Helm chart in version 1.3.0 and I tried flytekit 1.2.11, 1.3.0, and 1.6.1, all resulting in the same error message.
| is `A.B.C.D` added as a Subject Alt Name in your cert?
| Thanks for the hint, David. I haven't edited the cert. I thought that is not needed since I skip the certificate verification with `insecureSkipVerify: true`. But I will try adding the SAN anyway and report back whether it works or not.
So, I configured TLS for my Flyte endpoint, which works all fine in the browser and with flytectl. However, `flytekit.FlyteRemote` (the debug script from my first post) still fails, now with this error:
```{"asctime": "2023-05-23 09:38:41,298", "name": "flytekit", "levelname": "WARNING", "message": "FlyteSchema is deprecated, use Structured Dataset instead."}
WARNING:root:KeyRing not available, tokens will not be cached. Error: No recommended backend was available. Install a recommended 3rd party backend package; or, install the keyrings.alt package if you want to use the non-recommended backends. See <https://pypi.org/project/keyring> for details.
E0523 09:38:42.232413831 4238 ssl_transport_security.cc:1495] Handshake failed with fatal error SSL_ERROR_SSL: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED.
Traceback (most recent call last):
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_interceptor.py", line 241, in continuation
response, call = self._thunk(new_method).with_call(
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_interceptor.py", line 266, in with_call
return self._with_call(request,
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_interceptor.py", line 257, in _with_call
return call.result(), call
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_channel.py", line 343, in result
raise self
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_interceptor.py", line 241, in continuation
response, call = self._thunk(new_method).with_call(
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_channel.py", line 957, in with_call
return _end_unary_response_blocking(state, call, True, None)
File "/opt/micromamba/envs/OHLI/lib/python3.8/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses; last error: UNKNOWN: Ssl handshake failed"
debug_error_string = "UNKNOWN:Failed to pick subchannel {created_time:"2023-05-23T09:38:42.234530152+01:00", children:[UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: Ssl handshake failed {created_time:"2023-05-23T09:38:42.234525804+01:00", grpc_status:14}]}"
>```
I installed the CA inside the same container in which I execute the Flyte script that throws this error. And as mentioned, flytectl works without problems from within the same container.
The flyte config looks now like this
`admin:`
`endpoint: dns:///my.domain.lan:30657`
`insecure: false`
`authType: Pkce`
Do you have any ideas? Would be greatly appreciated. :slightly_smiling_face:
I fixed the problem by providing my root CA file as admin.caCertFilePath in the flyte config.
However, there is also a bug in flytekit.clients.auth_helper (<https://github.com/flyteorg/flytekit/blob/master/flytekit/clients/auth_helper.py#L179>). Currently, it is
`credentials = grpc.ssl_channel_credentials(load_cert(cfg.ca_cert_file_path))`
However, `load_cert` returns an *OpenSSL.crypto.X509* object, but `grpc.ssl_channel_credentials` expects a bytes string. So, I had to modify the call as follows (to encode the X509 object as bytes):
`credentials = grpc.ssl_channel_credentials(crypto.dump_certificate(crypto.FILETYPE_PEM, load_cert(cfg.ca_cert_file_path)))`
Can we fix this in main?
|
Hello all
Which variable should I set on my `values.yaml` to change the default service account? I have tried `k8sServiceAccount` but it is not working...
<@U029U35LRDJ> it's only this that's keeping me from testing the things that we talked about. When I'm running the workflow, instead of using the service account that I've set in `values.yaml`, it keeps using the default service account...
| that is the correct key, but probably set at the wrong location. can you show how you're setting it in the `values.yaml` file?
| Sure!
Can I send you the .yaml files in DM?
Or we can schedule a call, if you prefer
| Should be able to set it here - <https://github.com/flyteorg/flyte/blob/7a8f2f5607dbe4707ff2b514b0665bf547dae921/charts/flyte-core/values.yaml#L506-L515>
cc <@UNR3C6Y4T> correct?
| Yes, I'm setting here, using `k8sServiceAccount: service_account_name`. Is that correct?
```flyteadmin:
  roleNameKey: "iam.amazonaws.com/role"
  profilerPort: 10254
  metricsScope: "flyte:"
  metadataStoragePrefix:
    - "metadata"
    - "admin"
  eventVersion: 2
  testing:
    host: http://flyteadmin
  k8sServiceAccount: service_account_name```
| That should be the correct place. Let me do some testing here quick.
| what do you mean by default service account?
the one that users use or the one that flyte uses?
| The one that flyte uses
| and you're on the flyte/flyte-core helm chart?
| When I launch a workflow, instead of `default`, I wanted to use `service_account_name` (just a name)
Yup, using flyte-core helm chart
| so you want to change the one that users use
| Ow, right!
Sorry, yes
| try changing these? <https://github.com/flyteorg/flyte/blob/7a8f2f5607dbe4707ff2b514b0665bf547dae921/charts/flyte-core/values-eks.yaml#L344-L359>
those values get sent to the <https://github.com/flyteorg/flyte/blob/7a8f2f5607dbe4707ff2b514b0665bf547dae921/charts/flyte-core/values-eks.yaml#LL382C44-L382C58|template here>
| Yeah, I've tried it, but with no success
I've tried this too:
```- key: aab_default_service_account
  value: |
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flytepropeller
      namespace: {{ namespace }}
      annotations:
        # Needed for gcp workload identity to function
        # https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
        iam.gke.io/gcp-service-account: {{ gsa }}```
| what happens?
| It is still using the default service account
| why is the name "flytepropeller"?
user pods should use the default one
| It was the name set previously for the service account by a previous engineer, and I cannot change it...
I cannot change it because things can get messy if I do
It is this service account that I need to change
In the previous version, it was automatically set by `k8sServiceAccount`
| you can always check to see if the cluster resource controller is doing the right thing by creating a new project in flyte admin
when the cluster resource controller runs next it'll create new namespaces, along with the service account
yeah that's the user service account.
| I've just created a new project, uploaded the workflow, and got the same results
| can you check the definition of the service account? just to make sure the cluster creation side is working.
<https://github.com/flyteorg/flyteadmin/pull/566>; thanks to <@U029U35LRDJ>, who found the issue
we'll be cutting a patch release on Monday
can you set the service account using matchable resources for now?
on the project level?
|