Andre Armstrong on 28 Sep 2023 15:34:52
This is a great option to support data analytics from Samsara.
Mary Aho on 28 Sep 2023 15:07:08
There is definitely a need for more customization for responding to access requests, both in the workspace and the app workspace, ideally on that level, not the tenant or admin level. Many orgs allow users to manage their own workspaces, so controlling this at the org level will be cumbersome for large organizations.
Garry Baker on 28 Sep 2023 14:33:35
This should be considered more of a fix than a new feature. It has been a common need across several customers.
Dagmara Mirowska on 28 Sep 2023 13:25:15
A shockingly missing option :)
Dagmara Mirowska on 28 Sep 2023 13:23:58
A much needed change. I am surprised it hasn't been implemented yet.
Andy Clapham on 28 Sep 2023 12:34:24
Know it's a bit of an old suggestion, but it's a good one - I came here to suggest the same thing.
This is a great summary of the improvements to bookmarks suggested elsewhere.
I might do a bit of canvassing to garner support :)
Igor Shekhounov on 28 Sep 2023 11:43:30
Hey, MS guys, how many votes do you need?
How much more time must pass?
Nancy Chiu on 28 Sep 2023 11:30:33
One of the key challenges in running an analytics platform is building a robust pipeline without letting it turn into a nightmare. The best way to do this is with a metadata-driven pipeline: in our SQL warehouse we can run a lot of stored procedures driven by metadata/control tables.
However, Fabric has a Warehouse/Lakehouse distinction, and maintaining a single pipeline over both sets of artefacts becomes difficult. At the moment they do not "talk" to each other, which makes it quite challenging to build a unified metadata-driven pipeline across a Fabric workspace that has both warehouses and lakehouses. By allowing the Warehouse to do T-SQL DML updates to a Lakehouse table, and vice versa, we could exchange metadata about where the pipeline is at, and thus have a single pipeline across both.
As an example, I would like the Warehouse to run a batch of ETL to process the latest day's worth of data, then use Spark jobs to consume the Warehouse data and return ML predictions in the Lakehouse, then use the Warehouse ETL to consume the Lakehouse predictions, and finally use the Lakehouse to write out the "gold tables" again for Direct Lake. How can I coordinate this back-and-forth? If the Warehouse can write to the Lakehouse, or the Lakehouse can write to a Warehouse table, then at least I can communicate all the job statuses by writing to a single metadata table.
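If such a shared metadata table existed, the back-and-forth described above could be coordinated with a single control table that each engine updates as its step completes. A minimal sketch of the idea, using Python with an in-memory sqlite3 database as a stand-in for the shared table (the step names, columns, and ordering are all hypothetical, not a Fabric API):

```python
import sqlite3

# Hypothetical control table standing in for a shared metadata table that
# both the Warehouse (T-SQL) and the Lakehouse (Spark) could write to if
# cross-engine DML were supported. All names here are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pipeline_control (
        step       TEXT PRIMARY KEY,  -- e.g. 'warehouse_etl', 'lakehouse_ml'
        status     TEXT NOT NULL,     -- 'pending' or 'done'
        batch_date TEXT
    )
""")

# The four back-and-forth steps from the comment, in their fixed order.
STEPS = ["warehouse_etl", "lakehouse_ml", "warehouse_merge", "lakehouse_gold"]

def init_batch(batch_date):
    # Register every step of today's batch as pending.
    conn.executemany(
        "INSERT INTO pipeline_control (step, status, batch_date) "
        "VALUES (?, 'pending', ?)",
        [(s, batch_date) for s in STEPS],
    )

def mark_done(step):
    # Each engine would run this as plain DML when its step finishes.
    conn.execute(
        "UPDATE pipeline_control SET status = 'done' WHERE step = ?", (step,)
    )

def next_pending():
    # First step (in pipeline order) that is not yet done, or None.
    for s in STEPS:
        row = conn.execute(
            "SELECT status FROM pipeline_control WHERE step = ?", (s,)
        ).fetchone()
        if row and row[0] != "done":
            return s
    return None

init_batch("2023-09-28")
while (step := next_pending()) is not None:
    # A real orchestrator would dispatch here: Warehouse steps to T-SQL,
    # Lakehouse steps to a Spark job. We just mark them complete.
    mark_done(step)

print(next_pending())  # None: all four steps completed
```

Because every participant only needs INSERT/UPDATE access to one table, the same pattern would work regardless of which engine hosts the table, which is exactly why cross-engine DML matters here.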
Benjy Smith on 28 Sep 2023 11:18:32
Samsara has become the premier telematics provider. I am so happy to have switched from a competitor 2 years ago. This Power BI connector will enhance an already great product and expand visualizations of its data.
Administratie Van Aalderen Banden-Autostyling B.V on 28 Sep 2023 11:10:45
Maybe by clicking on the logo GRIPS