RE:
It is there, but you have to select the individual series to format. There is currently no way to format it with "all" as the cards may have different data types.
RE:
This would be super helpful, otherwise I need to create two charts to give the necessary granularity choices.
RE:
Dataverse has a limit on the number of requests it will accept and returns a 429 Too Many Requests error when too many requests arrive within a period of time, as documented by Microsoft: Service protection API limits (Microsoft Dataverse) - Power Apps | Microsoft Learn. Currently, we have found that there are no configuration options available to users/administrators for limiting the frequency of API calls made to Dataverse, as this is set at the service level.
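Since the limit itself cannot be changed, the usual mitigation is client-side: when a 429 arrives, wait for the duration given in the Retry-After header and resend. A minimal sketch of that pattern is below; `send_request` and the simulated responses are placeholders for illustration, not a real Dataverse SDK API.

```python
import time

def retry_after_seconds(status_code, headers, default=5.0):
    """Return seconds to wait before retrying, or None if no retry is needed.

    Dataverse signals a service-protection throttle with HTTP 429 and a
    Retry-After header giving the wait time in seconds.
    """
    if status_code != 429:
        return None
    try:
        return float(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default

def call_with_retry(send_request, max_attempts=3):
    """Invoke send_request(), honoring 429 Retry-After up to max_attempts."""
    for _ in range(max_attempts):
        status, headers, body = send_request()
        wait = retry_after_seconds(status, headers)
        if wait is None:
            return body
        time.sleep(wait)
    raise RuntimeError("Dataverse request still throttled after retries")

# Simulated responses: first call throttled, second succeeds.
responses = iter([
    (429, {"Retry-After": "0"}, None),
    (200, {}, {"value": []}),
])
result = call_with_retry(lambda: next(responses))
print(result)
```

In a real integration the same logic wraps the actual HTTP call to the Dataverse Web API; spreading requests across users also helps, since the limits are evaluated per user.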
RE:
Along with Loan management there should be Fixed deposit management as well; this has been a frequent ask from a lot of the customers/prospects I've worked with.
RE:
We are aware that there are service limits for the Dataverse API based on this documentation. Our understanding is that these limits exist to throttle traffic when too many API requests hit the Dataverse service. Can we check whether these limits are configurable in any way, or whether there are similar settings we could use to limit API calls to Dataverse in our environment?
RE:
"...finally inner padding controls the gap between the bars in proportion to the bar size." When looking at the Line and Clustered column chart, I cannot see this control in the X-axis format settings.
RE:
3 years later and this rudimentary functionality is still not implemented
RE:
That menu item does not clean up the WHSLicensePlateLabel records. It cleans up the InventDim records related to the license plate numbers (not the labels). It executes the code below. I think the idea from Suf still needs some consideration.

    WHSLicensePlate whsLicensePlate;

    delete_from inventDimLPCleanupTask
        where inventDimLPCleanupTask.SessionId == _sessionId
        exists join whsLicensePlate
            where whsLicensePlate.LicensePlateParent == inventDimLPCleanupTask.LicensePlateId;
RE:
In Spark Streaming jobs running on Fabric capacity, there are currently no built-in metrics, graphs, or dashboards that display executor memory details such as free, consumed, and total memory. Providing these insights would help end users make informed decisions about scaling their environment type or workload based on resource utilization.
RE:
This is a much-needed feature we have been waiting for. DAX queries can often perform 10x better than MDX in some cases. We have resorted to having users run queries in Power BI reports and manually export the results, because the MDX generated from Excel can take so long to return the same data. Yes, there are things we can do with model and performance tuning, but when the result is the exact same data yet takes 10x or more time using MDX, something needs to be done.