Roman Rietmann on 07 Nov 2023 10:19:24
We use data pipelines to orchestrate several notebooks, dataflows, sub-pipelines and other activities. Overall, error handling within pipelines has never been very straightforward. Currently it is possible to add a logging task such as an email/Teams activity at the end, wired up via the on-success and on-failure/on-skip paths, to send the overall status. What would be great is a try/catch activity, a catch-all activity, or at least a general way to receive the names and statuses of the failed tasks with as much detail as possible, including a link to put in the email for reviewing the issue.
With Synapse it was also possible to set up alerts on failed pipelines, but since Fabric is a SaaS service exposing only capacity resources, this is not possible.
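As a partial workaround today, some failure details can be pulled into the notification branch using the pipeline expression language shared with Azure Data Factory. A minimal sketch for the body of an email/Teams activity on the failure path (the activity name 'Run Notebook' is a placeholder for whichever activity you want to inspect):

```
@concat(
  'Pipeline: ', pipeline().Pipeline,
  ' | Run ID: ', pipeline().RunId,
  ' | Error: ', activity('Run Notebook').error.message
)
```

This still has to be wired per activity rather than catching everything centrally, which is exactly why a try/catch or catch-all construct would help.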
Administrator on 09 Jan 2024 21:01:07
We are planning this as a feature for a future release
- Comments (2)
RE: Error handling/tracking in Data Pipelines
Please consider bringing this feature to Synapse as well.
RE: Error handling/tracking in Data Pipelines
It would be cool to have something like an exception-handling area that can be expanded in the pipeline canvas and catches all exceptions within the pipeline. In this area we could process raised failures and uncaught exceptions, using the error details for further processing steps, or raise new errors to pass back to parent pipelines.