Data Factory
Planned
Enable to pass variables from pipelines as parameters into a Dataflow Gen2
Miguel Escobar on 01 Jun 2023 17:44:52
Enable modifying the mashup document of a Dataflow Gen2 by passing a variable from a pipeline before execution of the Dataflow
Administrator on 25 Sep 2023 16:35:40
Planned for the first half of Calendar year 2024
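Since a Dataflow Gen2 definition is a Power Query mashup document, the request above amounts to rewriting a parameter's value in that M script before a run. As a minimal sketch of the idea, here is a Python helper that substitutes a new default into a (simplified) `shared` parameter declaration; the function name and the pared-down M syntax are illustrative assumptions, not a real Fabric API:

```python
import re

def set_m_parameter(mashup: str, name: str, value: str) -> str:
    """Replace the string default of a named parameter in a simplified
    M mashup document (real M parameters also carry meta records such as
    [IsParameterQuery=true], omitted here for brevity)."""
    pattern = rf'(shared\s+{re.escape(name)}\s*=\s*)"[^"]*"'
    return re.sub(pattern, rf'\g<1>"{value}"', mashup)

# A pipeline run would supply the new value before triggering the refresh
doc = 'shared SourceFolder = "/landing/2023/";'
print(set_m_parameter(doc, "SourceFolder", "/landing/2024/"))
```

In the requested feature, this substitution would happen inside the service when the pipeline's Dataflow activity passes its variables along, rather than via manual edits to each dataflow.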
- Comments (15)
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
Would really need this to streamline ingestion patterns.
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
Must have! Please let us know what the most efficient workaround is until this is made available.
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
Passing variables from the Data Factory pipeline to Dataflow Gen2 parameters would help in testing situations or when changing data sources, since current dataflows are not supported by deployment pipelines. Many customers present this scenario and have already asked me to recommend this improvement.
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
Must Have!
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
I don't see it mentioned, but getting the parameter into the dataflow is not much use unless you can use it in a data source. Some will want it for working through a list of files. Others will want to inject it into a SQL query (whether DirectQuery or Import). I'd like both, please. Thx
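The two data-source uses named in this comment can be sketched in Python; the function names, file layout, and `ModifiedDate` watermark column are illustrative assumptions, not part of any Fabric API:

```python
def files_for_slice(all_files: list[str], prefix: str) -> list[str]:
    """File-list case: keep only the files matching a pipeline-supplied prefix,
    so each dataflow run works through its own subset of files."""
    return [f for f in all_files if f.startswith(prefix)]

def source_query(table: str, watermark: str) -> str:
    """SQL case: inject a pipeline-supplied watermark into the source query.
    (A real implementation should escape or bind the value, not interpolate it.)"""
    return f"SELECT * FROM {table} WHERE ModifiedDate > '{watermark}'"

print(files_for_slice(["2024/a.csv", "2023/b.csv"], "2024/"))
print(source_query("dbo.Orders", "2024-01-01"))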
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
Need this very much!!!!!
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
I would think this is a basic function. How can you not pass parameters into a Gen2 Dataflow, i.e. to dynamically build the SQL? I'd expect this in January.
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
I'm trying to move large tables from on-prem SQL Server to Fabric. Until the data gateway or mirroring improvements go live, splitting the work into many dataflows, each touching a different "slice" of data and appending into a Lakehouse, is the only viable solution. But this is currently extremely painful, because:
- You can't simply "save as" a dataflow as a copy, which means you have to set up many copies of the same dataflow manually, manually map them to the destination tables, etc.
- You can't change the parameters in the dataflows via a pipeline, which means you can't have a single job that you can reuse for multiple slices in parallel.
I also agree with the comments about allowing the data sources and data destinations to be controllable via parameters. In my case, if I'm running 10 copies of a dataflow, each dealing with a slice of data, I might want to spread them across different gateway clusters so I don't bog any of them down.
Thanks,
Scott
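The slicing pattern described in this comment can be sketched as follows: compute contiguous ID ranges up front, where each range would supply the parameters of one dataflow run. The helper name and the idea of `(FromId, ToId)` parameters are assumptions for illustration:

```python
def id_slices(min_id: int, max_id: int, n_slices: int) -> list[tuple[int, int]]:
    """Split an inclusive ID range into n_slices contiguous (lo, hi) ranges."""
    step = -(-(max_id - min_id + 1) // n_slices)  # ceiling division
    return [(lo, min(lo + step - 1, max_id))
            for lo in range(min_id, max_id + 1, step)]

# Each tuple would become the (FromId, ToId) parameters of one dataflow run;
# today each range must instead be hard-coded into a separate dataflow copy.
print(id_slices(1, 1_000_000, 10))
```

With pipeline-to-dataflow parameters, a single parameterized dataflow could be invoked once per slice in parallel instead of maintaining ten near-identical copies.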
RE: Enable to pass variables from pipelines as parameters into a Dataflow Gen2
Looking forward to this feature