Oh, cool! It makes sense. Thank you!
If you'd like to have secrets on a per-task basis, we can discuss that too. We definitely do that here at Lyft. You should be able to use "vault" or any other secret manager to get those secrets into your container.
Yeah, currently we use "vault" with serviceaccount token authentication. Thank you for the advice! We just looked for the most "native" approach to use it in Flyte
So Flyte doesn't really prescribe a way to handle secrets (it's up to you), but here is one way you might do it with vault (we're not using vault, so I could be overlooking some details): Vault has a secret injector: <https://www.vaultproject.io/docs/platform/k8s/injector/index.html> If you annotate your pods correctly, the secrets should get injected into the pod. Flyte allows you to add annotations via launch plans: ```annotations=Annotations({"vault.hashicorp.com/agent-inject-secret-foo": 'bar/baz'}),``` I'm sure there are other ways of doing this as well. Just one example.
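To make that concrete, here is a minimal sketch of attaching the Vault injector annotations when creating a launch plan with the legacy flytekit SDK. Assumptions: `my_workflow` is an existing `@workflow_class` workflow, the import path matches your flytekit version, and `bar/baz` is a placeholder Vault secret path.
```
from flytekit.models.common import Annotations

# A launch plan that asks the Vault agent injector to mount the secret
# at /vault/secrets/foo inside every task pod it launches.
# (my_workflow is assumed to be a @workflow_class workflow defined elsewhere.)
vault_lp = my_workflow.create_launch_plan(
    annotations=Annotations({
        "vault.hashicorp.com/agent-inject": "true",
        "vault.hashicorp.com/agent-inject-secret-foo": "bar/baz",
    }),
)
```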
Great! It looks more convenient! Thanks! and now I see it here <https://lyft.github.io/flyte/user/features/labels_annotations.html> :+1:
hello! what would be the best way with the SDK to “fetch” an existing workflow, create a new launch plan with new inputs and execute it? i tried something like this:
```
import flytekit.configuration
import flytekit.common.workflow

flytekit.configuration.set_flyte_config_file("flytekit.conf")
wf = flytekit.common.workflow.SdkWorkflow.fetch(
    project="yolotrain",
    domain="development",
    name="train.single.workflow_yolo_single",
    version="71a60ca9fa75497968bb09fe8c4ba8d3aee042cb",
)
```
but I’m getting
```
FlyteAssertion: An SDK node must have one underlying entity specified at once. Received the following entities: []
```
fetching of an SdkWorkflow object is being worked on in a PR. it’s not quite ready yet. hopefully in a week it’ll be done, along with some other features we’ve been meaning to push out. <https://github.com/lyft/flytekit/pull/75/files>
Ok thanks, looking forward to the pr :+1:
i think there’s a way to create the launchplan without the class code present as well but will wait for Matt Smith to answer that (who should be back today)
Yee, there is a way Giordano. you can use flyte-cli; that is the simplest way to create a launchplan. or am i still jet lagged :slightly_smiling_face:
Hey, Giordano quick question: are you trying to execute the workflow here? Or trying to ‘specialize’ the workflow interface by creating defaults, schedules, different service accounts, etc.? for more context: a launch plan can be thought of as a specialized ‘executable’ for a workflow. Then a launch plan can be executed as many times as one wants. For example, if i had a workflow that takes inputs `country` and `time`, I could use the same workflow to create two launch plans. One launch plan could freeze `country='USA'` and be scheduled to run daily with an IAM role given write access to `s3://USA-data`. The other could freeze `country='Canada'` and be scheduled to run weekly with an IAM role that only accesses `s3://Canada-data`. This way a pipeline (workflow) can be generalized, but then specialized at execution to provide data protections, etc. An execution -> launch plan is a many-to-one relationship. So generally, when you are creating a new execution, you don’t need to create a new launch plan. You only need to retrieve an existing one and call execute on it. So if you are trying to launch an execution, the simplest way is:
```
lp = SdkLaunchPlan.fetch('project', 'domain', 'name', 'version')
ex = lp.execute('project', 'domain', inputs={'a': 1, 'b': 'hello'}, name='optional idempotency string')
```
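Putting Matt’s snippet together with the config bootstrapping from Giordano’s earlier message, a runnable version might look like this (a sketch against the legacy flytekit SDK; the identifier values are the ones from the question above, and the execution name is just an illustration):
```
import flytekit.configuration
from flytekit.common.launch_plan import SdkLaunchPlan

flytekit.configuration.set_flyte_config_file("flytekit.conf")

# Fetch an already-registered launch plan by its full identifier.
lp = SdkLaunchPlan.fetch(
    "yolotrain",
    "development",
    "train.single.workflow_yolo_single",
    "71a60ca9fa75497968bb09fe8c4ba8d3aee042cb",
)

# Kick off a new execution with fresh inputs. The optional name makes the
# call idempotent: re-running with the same name won't start a duplicate.
ex = lp.execute(
    "yolotrain",
    "development",
    inputs={"a": 1, "b": "hello"},
    name="optional-idempotency-string",
)
```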
Hi Matt, right now I’m not looking to customize the workflow, just be able to launch it with different inputs so your suggestion might actually do the trick
cool then it’s an easier answer :slightly_smiling_face: if you use `pyflyte` to register your workflows, there should be a default launch plan created for each workflow with the same name as the workflow
One thing that is not clear to me is the difference between the active and non-active launchplan. for example, when I look for “active” launchplans in my environment i don’t see any
ah yes, so an active launch plan can be thought of as a representation of the ‘tip of your deployment’
kind of like the latest version is that correct?
yes exactly, except it makes it a bit easier to roll back if necessary. it’s important for two major cases:
ok so how do I make a launch plan active then? i couldn’t find it in the docs… i know that there is a “fetch_latest” for tasks…
yeah, that’s a topic of ongoing debate…perhaps the active tag should be applied to all tasks and workflows. or we should go with a more generalized solution where workflows and tasks can be custom tagged and labeled and then have methods like `fetch_tag`. The reason why active is specifically important for launch plans is because of schedules. The admin service needs to know to change the schedule configuration for a launch plan based on which one is active. anywho, to make your launch plan active: `pyflyte -p project -d domain -c flyte.config lp activate-all [--ignore-schedules]`
awesome thanks!
also, if you have an account, would you mind asking on stackoverflow and linking the question so I can answer there? We’d like to start making questions/concepts that seem common more easily searchable--and ideally have stackoverflow do the heavy lifting for us :p
sure! I’ll make an account tomorrow
does Flyte have a roadmap of features and releases?
hi Alex Pryiomka good question. We do have a roadmap that we will publish soon. It's just that we need to look at resourcing and what the community wants. i can share a rough draft in a couple weeks, once i am back in office :slightly_smiling_face: i was on parental leave for all of january and the last part of december. i am slated to rejoin in 2 weeks
Congrats Ketan Umare on the baby arrival, i just came back early January from my parental leave :smile:
:slightly_smiling_face:
Ketan Umare, it has been almost two weeks now, any update on the roadmap? We would like to see what is coming in flyte :smile:
this is the last week of paternity, i am coming to Zillow, we can discuss the roadmap. let me share my aspirations list <https://docs.google.com/document/d/1yq8pIlhlG3gci3GJQNjdAd9bzZ-KYyLfm6I5NVms9-4/edit> I know this is a lot to digest but i will be trying to clean this up as one of the first things. also, some of these are already done
great, let me compose a list of questions for you guys' on-site meet up :slightly_smiling_face:
how does version upgrade / migration happen in flyte? Does it have version upgrade documentation? If we deploy v0.1.0 and v0.1.1 is released, how do we upgrade?
As you probably know, Flyte is composed of several components (each with their own semantic versions). Each component is being developed in parallel and releasing new versions. The `lyft/flyte` github repo contains our aggregated "complete flyte" deployment configuration. In other words, we specify a semantic version for each component and combine them into a single "flyte version". That file can be seen here: <https://github.com/lyft/flyte/blob/master/deployment/sandbox/flyte_generated.yaml> We haven't set a cadence for updating the "complete Flyte deploy". In other words, we do it periodically, without much formal reasoning right now. We should probably formalize that. For minor versions, the idea is that you should be able to `kubectl apply -f theCompleteDeployFile.yaml` and get the updates without issue. You may have custom-tailored deployments for your own use that are built on top of this deployment (using something like `kustomize`). Our goal is that those, too, should update without issue. LMK if that answers your question?
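For readers who haven't built such a deployment: the custom-tailored setup Johnny mentions is usually just a kustomize overlay on top of the upstream base. A hypothetical sketch (the base path, tag, and patch file names are illustrative, not the exact upstream layout):
```
# kustomization.yaml -- an overlay on top of the upstream flyte deployment.
# Pinning to a release tag makes upgrades an explicit diff-and-apply step,
# rather than whatever master happens to contain when you next sync.
bases:
  - github.com/lyft/flyte/kustomize/overlays/sandbox?ref=v0.1.1
patchesStrategicMerge:
  - propeller-config.yaml   # your custom flytepropeller config
  - admin-config.yaml       # your custom flyteadmin config
```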
Those are kubernetes resources that are more or less stateless. I am more concerned with postgreSQL schema migration. If we migrate to the next version, how can we migrate the existing workflows and projects? Johnny Burns ^
Ah, I understand. We have a container in the deployment which ensures the schema is up-to-date with the latest deployment (doing any necessary migrations). <https://github.com/lyft/flyte/blob/master/deployment/sandbox/flyte_generated.yaml#L1019-L1038> This is designed largely for migrations like adding a column, where missing data on the existing records isn't a big deal. If a schema update is not backward compatible, I imagine that would require a major version update. _we're still pretty new to this, so we can probably re-think, improve, and formalize this process_
Alex Pryiomka we use GORM as the ORM layer and as Johnny said we have a schema migrator that migrates any changes; we have done some work to allow writing more custom logical migrations. Alex Pryiomka as for the entire platform, we are using a total semantic version along with a semver for each component. Interesting thing is we use protobuf and grpc, which help in maintaining backwards compatibility as long as we are not stupid. Since the platform has been in use for a while we have done some breaking changes internally and figured out how to do them painlessly. that being said, someday we will break, but hopefully our versioning scheme will indicate that
That is a comprehensive answer, thanks Ketan Umare
Ya, I have been typing on my phone. From next week on, I will be on my laptop more
Does the flyte scheduler have backfill functionality similar to Airflow?
cc Ally Gale short answer, no, not right now, but it shouldn’t be that hard to build workflows that do this based on existing constructs in Flyte (i.e. launchplans and dynamic tasks).
Alex Pryiomka backfill like airflow only works if we understand what time is. From a Flyte point of view, time is just another input. We do not actually have a built-in cron scheduler; we use the cloud schedulers. Backfill thus just implies running a pipeline with an older input. An interesting thing we want to work on is to use some open source cron scheduler and keep state so that you can indicate re-executions in the UI or through the cli. And also how to manage resources for backfills (this is where we will innovate)
backfill usually assumes missed / failed runs and a start date / beginning of your DAG. Say i deploy a new version of a workflow with a bug fix. I would like to rerun previous runs to make sure the artifacts of the task outputs were corrected. Obviously the last thing i would want to do is to do it manually. If you have a start date on the workflow and the history of the executions based on the cron schedule, you should have no problems figuring out what to backfill. Ketan Umare ^ Backfills are actually one of the nicest things we like about airflow and we use them all the time :slightly_smiling_face:
I'll have to agree with Alex Pryiomka, and scheduled flyte launch plans do understand time as a first class citizen, Ketan Umare. We have discussed this internally a few times; it's not surprising to know this is one of the frequent asks within Lyft as well. Is this something you would be willing to help spec/write up in the context of Flyte, Alex Pryiomka? we can definitely use help scoping the project and we will be happy to provide guidance on how to move forward with it...
Haytham Abuelfutuh I am not denying that backfill is a good idea, I am just saying at the moment it can be achieved using external means. But scheduled launch plans are the perfect entity to have backfills on, not the workflows themselves. This also implies we need a scheduler to be built :blush: or integrated with. Alex Pryiomka as Haytham Abuelfutuh said we would love to collaborate on this and would love it if you guys can help. Hongxin Liang from Spotify: they have a component called Styx that could be leveraged as well
Ketan Umare, i like the way airflow does it. As far as the scheduled runs, i think it does it pretty well except maybe the execution date piece - the date of the previous run - it should be just the scheduled datetime without any previous assumptions. Two things come to mind:
• the scheduled workflow should have an optional start date that can be both in the past and in the future
• for every missed run since the start date based on the current cron template, the scheduler should run the workflows with the execution date provided.
Example: today is 02/07/2020 23:00:00 UTC; say i deploy a new workflow that runs every 8 hours `0 0/8 * * *` with a start date of 02/06/2020 8:00:00 UTC. That means the missing runs would be 02/06/2020 8:00:00, 02/06/2020 16:00:00, 02/07/2020 00:00:00, 02/07/2020 08:00:00 and 02/07/2020 16:00:00. i would expect the scheduler to fill those up.
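The missed-run computation in Alex's example is easy to sketch. Here is an illustration using the third-party `croniter` package (not part of Flyte; `*/8` is the equivalent of Alex's `0/8` step syntax):
```
from datetime import datetime, timedelta
from croniter import croniter  # third-party: pip install croniter

def missed_runs(cron, start, now):
    """Yield every scheduled datetime in [start, now] for a cron expression."""
    # Start iterating just before `start` so a run at exactly `start` counts.
    itr = croniter(cron, start - timedelta(seconds=1))
    t = itr.get_next(datetime)
    while t <= now:
        yield t
        t = itr.get_next(datetime)

# Alex's example: it is 2020-02-07 23:00 UTC and the schedule runs every
# 8 hours starting 2020-02-06 08:00 UTC -> prints the five missed runs.
for run in missed_runs("0 */8 * * *", datetime(2020, 2, 6, 8), datetime(2020, 2, 7, 23)):
    print(run)
```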
Awesome, let’s write it down as a proposal. problem is how are failures handled. For example, we have a start date in the past. So when we deploy, executions kick off. Let's say there is a bug that fails some executions (not all). Now a new deployment is made; what should be the behavior?
it is a good question. It is easier in airflow since each DAG is unique by name and when you redeploy you effectively overwrite the existing DAG, whereas in flyte each DAG is versioned. How does flyte currently manage schedules for multiple versions of the workflows? If i, say, have a workflow `abc:1.23` running once a day like so: `0 0 * * *`, then i deploy `abc:1.24` with the same schedule, do i end up running two workflows now? or does the new version effectively cancel the previous one? I would say the later deployed workflow should implicitly cancel the currently running one. For the failed executions, you do not backfill unless you go and manually delete them. Once deleted, the scheduler should refill them automatically. It gets tricky when you need to have multiple versions of the same workflow / DAG
Flyte handles it as you say: the newest version of the launch plan with the same name takes over and cancels the previous one. If the schedule cadence changes, it is changed going forward.
Alex Pryiomka yes as Matt says, important to note - same named LaunchPlan. As previously noted, the launchplan is the scheduled entity, and for backfill, as haytham suggested, it would be a great place to house it
Hi Everyone! Thank you Johnny Burns for your advice about secret management in Flyte workflows. It was very helpful :pray: Today I’d like to ask about Flyte deployment “best practice”. Basically, we configured our own overlays with some patches based on this article <https://lyft.github.io/flyte/administrator/install/production.html> and we refer to the remote flyte base repo. Some changes in the remote base repo required changes in our overlay too. E.g. <https://github.com/lyft/flyte/pull/164/commits/387228bb124b48a513b1b959b24c3057c0980926> required removing quboleLimit in our overlay config, else propeller would not run. Yes, additionally reviewing diffs and tests, referring to release tags in kustomize, or having our own repo with the flyte base solves some possible issues. Considering regular “syncs” with the Flyte base and stable releases, what is your recommended approach? Thank you in advance!
Sorry for the trouble Ruslan Stanevich. My understanding is that:
• You have a custom flytepropeller config
• You ran kustomize, which bumped the version of flytepropeller.
• That version of flytepropeller was not compatible with your custom flytepropeller config.
Let me know if I'm misunderstanding what happened. If that is the case, I would consider our FlytePropeller change an oversight. We strive not to make changes that are not backward compatible (this includes being backward compatible with respect to configs). The new propeller version should have been compatible with your config. It seems we might need some more process to make sure that type of change doesn't happen (I'll look into that).
Oh, looks like I thought that such changes in code are possible and asked the wrong question. Anyway, I got your vision and how it should work. Thank you!
Hello :raised_hand_with_fingers_splayed: Each time a Flyte `spark_task` workflow runs in k8s it creates a `sparkapplications.sparkoperator.k8s.io` resource with a name like `{{executorID}}-{{taskName}}-{{workerNo}}`. According to <https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#deleting-a-sparkapplication> we should delete these custom resources with `kubectl delete ...` Just interesting if there is a recommended way to `automatically delete` these `resources` from k8s, maybe based on completion status or smth else? if it makes sense of course. Thanks!
they should be getting deleted, are they not? can you do a kubectl get of the crd in the namespace and paste the output here?
sure! sorry cannot paste everything here. total it is 434 rows
```
kubectl get sparkapplications.sparkoperator.k8s.io --all-namespaces
NAMESPACE                              NAME                                  AGE
dwh-s3-sync-staging                    s3-sync-segment-1581614830274559189   4h7m
dwh-s3-sync-staging                    s3-sync-segment-1581618448285298411   3h6m
...
place-search-workflows-development     zuxdkx4b46-job-result-0               15d
place-search-workflows-development     zy8z1p1w6q-run-apply-changes-0        3d7h
...
pyspark-example-development            wr0vxws9rf-w2c-result-0               29d
pyspark-example-development            yos1nmr86f-w2c-result-0               41d
pyspark-word2vec-example-development   ad28i5oh50-w2c-result-0               3d7h
pyspark-word2vec-example-development   at27f5ccvl-w2c-result-0               3d6h
```
Anmol Khurana can you think of any reason why these wouldn’t get removed? also, can you track down the corresponding flyte workflow crd instance for one of them? when the parent flyte workflow crd is finished, the child resources should get reaped
Ruslan Stanevich it will hang around as long as the workflow hangs around. once the workflow is deleted it should get auto-deleted. we keep all resources around while the workflow is around, and the workflow gets GC’ed every few hours (or it may be disabled locally)
do you mean deleting completed workflows using <https://github.com/lyft/flytepropeller#deleting-workflows> `kubectl-flyte delete --namespace {{ namespace }} --all-completed`? thank you!
You should not need to do that, there is a garbage collection system that deletes completed workflows based on configuration
Is this GC configured in Propeller? just having the “default sandbox” configuration we’ve got 450+ completed workflows (the oldest finished 41 days ago). Example for one namespace:
```
kubectl-flyte get --namespace mapmaking-workflows-development
Listing workflows in [mapmaking-workflows-development]
..............................................................................................................................
Found 127 workflows
Success: 24, Failed: 103, Running: 0, Waiting: 0
|                       Namespace| Total|Success| Failed|Running|Waiting| QuotasUsage|
| mapmaking-workflows-development|   127|     24|    103|      0|      0|           -|
```
sorry if I misunderstood smth :slightly_smiling_face: And (127 rows)
```
kubectl get sparkapplications.sparkoperator.k8s.io --namespace mapmaking-workflows-development
NAME                                AGE
a06a9vmgwa-run-aggregate-errors-0   18d
a06a9vmgwa-run-detect-errors-0      18d
...
a5o1wvtf9m-run-aggregate-errors-0   29h
a5o1wvtf9m-run-detect-errors-0      32h
```
No worries, I am not explaining well; I should point you to the gc configuration. Also it is ok to delete workflows that are completed. Once I am near a computer I will send a link to the gc config
Oh, thanks a lot :pray:
<https://github.com/lyft/flytepropeller/blob/master/config.yaml#L12>
Oh, yes, thanks! Added both `gc-interval` and `max-ttl-hours` to our propeller config. Works as expected!
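For anyone landing here later, a sketch of what that fragment of the flytepropeller config looks like (values are illustrative; see the config.yaml linked above for the authoritative defaults):
```
propeller:
  gc-interval: 12h    # how often completed workflow CRDs are garbage-collected
  max-ttl-hours: 23   # completed workflows older than this are eligible for deletion
```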
hi guys, quick question: are the kubernetes logs urls supposed to work with minikube/baremetal or do they only work with cloud providers? I can’t figure out what to put in the “kubernetes-url:” in my deployment yaml…
they should work with minikube provided you have an ingress to the k8s web console. generally, this is set up via port forwarding so it is just something like: <https://github.com/lyft/flyte/blob/5df4e997306f6829836845851ae4fcb82dab151b/kustomize/overlays/test/propeller/plugins/config.yaml#L4>
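Roughly, the plugin config in question looks like this (a sketch; the port assumes a port-forward/NodePort to the dashboard, and the key names should be checked against the linked config file):
```
plugins:
  logs:
    kubernetes-enabled: true
    # Base URL of the k8s dashboard; per-task log links are built against it.
    kubernetes-url: "http://localhost:30082"
```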
Hi Matt, when you say “web console” you are talking about the kubernetes dashboard correct?
yes that’s correct, the k8s dashboard. it didn’t make its way into the public repo however. there’s nothing lyft-specific in there, we just didn’t feel like it was clean enough to include, esp since it’s mostly copied from the public version of the same thing
ya, so Giordano, we do not collect and store logs at all, we just point you to logs somewhere else, like on cloud providers to their log services and for bare k8s to the K8s dashboard. It's easy to add new services like splunk, datadog etc
Thank you guys for the explanation and the example :+1:
Hi Everyone! Short question related to SSL termination in Flyteadmin for gRPC traffic (registering workflows, for example): if I configure SSL here <https://github.com/lyft/flyteadmin/blob/6a64f00315f8ffeb0472ae96cbc2031b338c5840/flyteadmin_config.yaml#L9-L13> will an AWS Network LB with a TLS listener handle it correctly? or what are your recommendations? Thanks!
I haven't tried NLB personally, but I imagine the answer is no. I'll save you some time and tell you that for sure neither ALB nor ELB can handle grpc SSL termination because: ALB downgrades all connections to http1 at the load balancer. This won't work as gRPC needs http2. ELB does not understand http2. The reason I _think_ NLB won't work is because NLBs are an L4 device. http2 TLS requires something called ALPN (Application Layer Protocol Negotiation). As the name suggests, this happens at L7 (application layer), so the NLB (L4) device is incapable of speaking that language. For my personal Flyte installation, I put the ELB in "passthrough" (L3) mode, and handle TLS certs beyond the ELB (via envoy / nginx). It's not ideal but it works. I think NLB supports passthrough mode as well, fwiw.
thank you Johnny for saving my time! :slightly_smiling_face: yes, it works well with both classic elb and nlb in “passthrough” mode. And your advice will help with handling TLS for gRPC :slightly_smiling_face:
yeah just to chime in as well. At lyft, the production installation of flyte admin also does not run over ssl. All ssl is terminated by envoy. the code there was built in as an alternative, but I imagine most people will want to handle ssl at the nginx layer
Yee, thank you! yes, it makes sense. Will look at this option with Envoy
Hi guys quick question: let’s say I have a workflow with 2 tasks, would it be possible to pick one of them based on one of the inputs provided to the workflow? I know that the input objects can be passed to the task but I can’t figure out a way to grab the actual value that gets sent during execution to compare it against an if statement… something similar to this:
```
@workflow_class
class WF_train_hyperopt_yolo_experiment(object):
    gpu = Input(Types.Boolean, required=True, help="use gpu")
    if gpu == True:
        task = run_task_1()
    else:
        task = run_task_2()
```
my use case would be to have a single workflow that could spin up a deep learning task either using GPUs or just CPUs, instead of doing 2 different workflows. update: I was able to do what I wanted with a dynamic task, is that the preferred method or is there a better way?
Currently dynamic_task is the only supported way to achieve this. We do however have branch nodes defined in the Flyte Spec Language, but they are not yet exposed in the Python SDK <https://github.com/lyft/flyteidl/blob/master/protos/flyteidl/core/workflow.proto#L40-L45>
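For reference, the dynamic_task workaround looks roughly like this in the legacy Python SDK. This is a sketch: the two `@python_task` bodies are placeholders standing in for Giordano's real training tasks.
```
from flytekit.sdk.tasks import python_task, dynamic_task, inputs
from flytekit.sdk.types import Types
from flytekit.sdk.workflow import workflow_class, Input

@python_task
def run_task_cpu(wf_params):
    wf_params.logging.info("training on CPU")  # placeholder body

@python_task
def run_task_gpu(wf_params):
    wf_params.logging.info("training on GPU")  # placeholder body

@inputs(gpu=Types.Boolean)
@dynamic_task
def train_dispatch(wf_params, gpu):
    # Inside a dynamic task the input is a concrete Python value, so
    # ordinary branching works; the chosen sub-task is yielded into
    # the workflow graph at run time.
    if gpu:
        yield run_task_gpu()
    else:
        yield run_task_cpu()

@workflow_class
class WFTrainYolo(object):
    gpu = Input(Types.Boolean, required=True, help="use gpu")
    dispatch = train_dispatch(gpu=gpu)
```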
Hello Everyone :hand: <https://github.com/lyft/flyte/issues/36> The question is about this feature request: is there any way (maybe an api call) to remove a registered workflow from Flyte? As I see it, `pyflyte` has no such command and `kubectl-flyte` deletes the workflow as a k8s resource. Sorry if I am incorrect in smth :slightly_smiling_face: thanks!
I think you are correct, AFAIK. This is a feature that would be really good to have though. If you have any interest in contributing this feature I'm happy to help.
hmm :thinking_face:, sure, that’s interesting!
Hi Igor
Hi! :hand:
Hello everyone, one question: where can i get more info about how to run a workflow as a test on my local machine? Every time i try to run this command:
```
docker run --network host -e FLYTE_PLATFORM_URL='127.0.0.1:30081' {{ your docker image }} pyflyte -p myflyteproject -d development -c sandbox.config register workflows
```
It says: `Exception: Could not parse image version from configuration. Did you set it in theDockerfile?`
Hey Eduardo Giraldo. When Flyte "registers" a workflow, it stores a textual representation of the workflow: Task A => Task B => Task C. Each of those tasks represents a container to be run, so flyte needs to know which container "task A" represents. We typically do that with environment vars: <https://github.com/lyft/flytesnacks/blob/master/python/Dockerfile#L33>
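In other words, the Dockerfile of your task image carries its own name, along the lines of this sketch (the `flyte_test:latest` tag is just the example image from this thread; the flytesnacks build script passes it in as a build arg rather than hard-coding it):
```
# Must match the exact repo:tag you build and push; flytekit reads this
# at registration time to record which image each task runs in.
ENV FLYTE_INTERNAL_IMAGE flyte_test:latest
```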
Hello Johnny Burns, thanks for the answer. actually i'm trying to run my test but it does not allow the image i have; first it said it was not latest, then this error appears. So i'm confused how it works. This is my actual workspace, and as you can see i'm running my image, the one i built with docker with that name
Eduardo Giraldo is your image literally called "latest", or is that your image _version_ ? I think you need "imagename:latest"? We build ours here: <https://github.com/lyft/flytesnacks/blob/master/python/scripts/docker_build.sh#L28>
Johnny Burns i checked and it is the latest one; also i built it again after pruning the system but it shows the same error T_T As you can see it is the latest one
Eduardo Giraldo Sorry I probably didn't explain well. I think you need to change your Dockerfile. Change `FLYTE_INTERNAL_IMAGE` from: "latest" to "flyte_test:latest" (unless you already did that)
I already tested that but it has the same result :disappointed:
Is the error `Exception: Could not parse image version from configuration. Did you set it in theDockerfile?` ?
Yes sir :smiley:
If so, can you `docker run -it flyte_test:latest` and call `echo $FLYTE_INTERNAL_IMAGE`
It had another value; i built it again and now it is running. Thank you so much, you rock :smiley:
Nice! Glad I could help
does anyone other than lyft use flyte in production?
None that I know for sure (since it's open-source, one could do so and not say so). You could almost count "L5" which is owned by Lyft, but runs like a separate entity (they manage their own Flyte clusters). Spotify is using it, but I'm unsure in what capacity (Hongxin Liang could tell you more)
We are still in experimenting phase and not in production.
Hongxin Liang is it Flyte vs the status-quo, or are you considering multiple alternatives to the status-quo, like Prefect or Metaflow?
The former case as you described. Jonathon Belotti
Alex Pryiomka almost every company that we know of is experimenting with Flyte. I guess that’s what happens with infancy
Hongxin Liang from blog posts I gather that status-quo is Kubeflow?
Jonathon Belotti actually it seems Spotify has a legacy infra for data, which is the team Hongxin Liang works on. There is a team in nyc that is experimenting with kubeflow for ML. Please correct me if I am wrong @honnix. Jonathon Belotti are you looking into Flyte for a specific reason, company or personal interest?
Ahh thanks for clarification. I was looking at <https://labs.spotify.com/2019/12/13/the-winding-road-to-better-machine-learning-infrastructure-through-tensorflow-extended-and-kubeflow/>
My name is Ketan and I would love to understand your usecases. Ya, I do see an eventual convergence
Ketan Umare I work at Canva and we run Argo workflows. I’m the owner of that system and I’m not that thrilled with it. I’ve had a short, interesting chat with Haytham here about how he sees the Argo vs. Flyte match-up, and it was convincing enough to keep me looking at Flyte. Right now I’m studying Flyte’s design for learning. We can’t justify migration from Argo but hoping to take some lessons across.
Absolutely. Also we would love to get your thoughts when you finalize them
:+1:
With Spotify we are trying to do something interesting: compile their Luigi pipelines directly to Flyte. we would like to know if you are open to such an exploration for Argo
Wow, sounds ambitious. I think I want to spend more time understanding the key tradeoffs between Argo and Flyte, while also iterating in other areas of our workflow system (eg. DAG SDK, security model) to see if improvements there offer higher ROI.
Absolutely
Jonathon Belotti would love to hear your experience with Argo, particularly the rough edges. It's something we might evaluate.
Oliver Mannion
• We’re pretty disillusioned with Argo’s YAML templating approach. We got along OK using Jsonnet to spit out the YAML, but we don’t think it’s better than a Python SDK for building workflows, and our Data Scientists really have not warmed to writing Jsonnet. There’s a Python DSL for Argo now, but we’re not on it (yet) and haven’t assessed its quality.
• Bugs. In the 9 months I’ve been working with Argo, there’s been at least 3 or 4 bugs that have made it into a release that have created downtime or broken a helpful feature. This recent regression meant we shipped some broken dags to our clusters that we’d normally catch in CI -> <https://github.com/argoproj/argo/issues/2313>
• Lack of types. Argo’s basically stringly-typed and that’s sucked. Its `Parameter` object in Golang is `key String, value String`. Not infrequently we find it’d be great to have `Parameter` values have types.
• Task caching is not well supported in Argo, I think because the dataflow graph wasn’t a big focus for them. Argo’s DAGs describe the execution of containers and not the flow of data artifacts. Flyte has made task caching a first-class feature.
• Storing history of DAGs wasn’t possible until a recent release.
There’s more I could say and I haven’t covered the positives but I’ve got to get into a call.
Oliver Mannion Jonathon Belotti I would love to get on a call with you guys and show how we are going to move forward, and also hear your feedback. As said, we are still a small community, but we are focused on making this work at scale, as we actually deploy this every day at Lyft (just like we did with Envoy). All the work we are doing is to ensure that data on kubernetes is a reality, and I am going to start doing biweekly calls, a more open roadmap, and release trains
I’d be happy to, but I’d want to take more time to test-drive Flyte at work so I can give better feedback. Could do a meeting beyond, say… Monday week.
ohh that's fine, even before you start test driving, we could just share our usecases and future roadmap. great, thank you. also do keep a look out at our issues
Thanks so much Jonathon, that's super helpful feedback. I'd be happy to jump on a call. Although I'm also very early on in evaluating Flyte and haven't given it a good test drive yet.