RE:
3 years later and this rudimentary functionality is still not implemented
RE:
That menu item does not clean up the WHSLicensePlateLabel records. It cleans up the InventDim records related to the license plate numbers (not the labels). It executes the below code, so I think the idea from Suf still needs some consideration.

    WHSLicensePlate whsLicensePlate;

    delete_from inventDimLPCleanupTask
        where inventDimLPCleanupTask.SessionId == _sessionId
        exists join whsLicensePlate
            where whsLicensePlate.LicensePlateParent == inventDimLPCleanupTask.LicensePlateId;
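For reference, a minimal sketch of what a cleanup of the label records themselves might look like, assuming WHSLicensePlateLabel carries a LicensePlateId reference and that labels whose license plate no longer exists are safe to delete (both assumptions would need to be verified against the actual table):

    // Hypothetical sketch, not the shipped cleanup routine: delete label
    // records whose license plate record no longer exists.
    // The LicensePlateId field on WHSLicensePlateLabel is an assumption.
    WHSLicensePlateLabel whsLicensePlateLabel;
    WHSLicensePlate      whsLicensePlate;

    delete_from whsLicensePlateLabel
        notexists join whsLicensePlate
            where whsLicensePlate.LicensePlateId == whsLicensePlateLabel.LicensePlateId;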
RE:
For Spark Streaming jobs running on Fabric capacity, there are currently no built-in metrics, graphs, or dashboards that display executor memory details such as free, consumed, and total memory. Providing these insights would help end users make informed decisions about scaling their environment type or workload based on resource utilization.
RE:
This is a much-needed feature we have been waiting for. DAX queries can often perform 10x better than MDX. We have resorted to having users run queries in Power BI reports and manually export the results, because the MDX that Excel generates to retrieve the same data can take so long in some cases. Yes, there are things we can do with the model and performance tuning, but when the result is the exact same data yet takes 10x or more the time via MDX, something needs to be done.
RE:
Completely agree. This also applies to Power Apps Gen2 dataflows. I can move dataflows to downstream environments via solutions, and I generally use Power Automate to orchestrate the dataflows for the initial data migration into my downstream environments.

The issue? I have to publish each dataflow individually in my target environment before I can run my orchestration Power Automate flow (which essentially just calls each dataflow in the order I want them to run, sometimes with other Power Automate flows running in between for data transformation). If I don't publish (and consequently run) each dataflow, the orchestration flow fails because it can't run an unpublished dataflow. Publishing also runs each dataflow, so I have to go back through and bulk delete all the data created when I published the flows. Only then can I run my orchestration flow.

Allowing us to publish Gen2 dataflows without running them would be a massive improvement, in my opinion. Also, a fix for having to re-establish dataflow connections every time a solution is imported would be huge as well!
RE:
Any updates? A default purchasing unit per procurement category would also be interesting, analogous to the item sales tax group on the procurement category. Users forget to select a UOM.
RE:
On top of the ability to order/display jobs by start date/time (at least as a sort option added under the left panel options), I think MS needs to improve the display of the left section of the Gantt chart so that columns are aligned properly within it. Right now the information is all over the place and the user is forced to adjust the width of each column manually. Currently there is no function to save a "view" of the left panel, so when the Gantt chart is closed and relaunched, the information should be displayed according to the saved view, including the columns and their widths.
RE:
Our customer requires this as a mandatory/critical feature to be able to use the Gantt chart to schedule production jobs, due to the high number of jobs they need to manage every day.
RE:
Specifically need this on Job Planning Lines etc.
RE:
Hello, any news about this? Allocation rules avoid having to manage ledger allocations manually. Having the option to define an allocation journal that can be managed with different layers would help a lot.