Share your ideas and vote for future features
Submitted by thiagobc123 · 2 hours ago
I currently trigger my dataset refreshes using the Power BI REST API. However, to avoid overwhelming the system, I can't refresh all reports at the same time. This means I have to constantly poll the API to check the refresh status of each dataset before triggering more. This approach isn't efficient and puts unnecessary load on both the client and the Power BI service. It would be much better if we had the option to configure a webhook that notifies us when a dataset refresh is completed. That way, we could trigger subsequent refreshes or downstream processes only when needed—without continuous API calls. Please consider adding webhook support for dataset refresh completion events!
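The polling pattern described above can be sketched as follows. This is a minimal illustration, not the author's actual code: the `get_status` and `start_refresh` callables stand in for the real Power BI REST calls (`POST .../datasets/{datasetId}/refreshes` and `GET .../datasets/{datasetId}/refreshes?$top=1`, where the refresh history reports `Unknown` while a refresh is in progress), so the loop's logic can be shown and tested without credentials.

```python
import time

def refresh_when_idle(dataset_ids, get_status, start_refresh,
                      poll_seconds=30, max_polls=10):
    """Refresh datasets one at a time, polling a status callback between each.

    get_status(dataset_id) -> 'Unknown' (in progress), 'Completed', or 'Failed'.
    start_refresh(dataset_id) kicks off a refresh.
    Returns the list of dataset IDs whose refresh completed.
    """
    refreshed = []
    for ds in dataset_ids:
        start_refresh(ds)
        status = 'Unknown'
        for _ in range(max_polls):
            status = get_status(ds)
            if status in ('Completed', 'Failed'):
                break
            # This sleep/poll cycle is exactly the load the webhook
            # proposal would eliminate.
            time.sleep(poll_seconds)
        if status == 'Completed':
            refreshed.append(ds)
    return refreshed
```

With a webhook for refresh-completion events, the inner polling loop would disappear: the next refresh would be triggered by the notification instead of by repeated `GET` calls.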
Submitted by pespinoza1 · an hour ago
Category: Developer APIs / Embed APIs

Description: Currently, when using slicers in the Power BI Angular Component, the Embed API tracks only the deselected items explicitly after users choose "Select All." This behavior causes significant confusion and complexity, especially when integrating slicer selections into downstream applications like paginated reports. Specifically, the current implementation:
- Does NOT explicitly indicate when slicer selections are inverted.
- Requires developers to manually maintain and subtract deselections from a complete set of potential values, which is cumbersome, error-prone, and difficult to maintain, particularly with large datasets.

Proposed improvement: Introduce an explicit property (e.g., isInverted: true) within the filter response object provided by the Embed API. This would clearly indicate when a slicer is in "inverted mode" (i.e., "Select All" except certain items), significantly simplifying data handling and downstream report processing.

Benefits of this feature:
- Reduces complexity and improves the reliability of integration with downstream systems, such as paginated reports or middleware.
- Eliminates the manual, error-prone logic currently required to interpret slicer states.
- Enhances developer productivity by providing clear, semantic slicer state information.
- Improves scalability and maintainability, particularly important when handling large datasets or complex reports.

Real-world scenario driving this request: Working closely with a customer (N-able) who has identified this limitation as a significant roadblock, currently requiring substantial development workarounds or middleware logic. Clearer API semantics would greatly streamline their processes and reduce ongoing maintenance overhead.

Additional context and community feedback: Community discussions indicate similar frustrations (Example Discussion). A straightforward API improvement here would benefit the broader Power BI Embed developer community.
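The proposed `isInverted` semantics can be illustrated with a small resolver function. This is a sketch of the consuming side under the assumption the flag existed; the property name and shape are the idea's proposal, not an existing API.

```python
def resolve_selection(all_values, filter_values, is_inverted):
    """Resolve a slicer filter to the explicit set of selected items.

    With the proposed `isInverted` flag, `filter_values` lists the
    *deselected* items when is_inverted is True (the "Select All minus
    some" case); otherwise it lists the selected items directly.
    """
    if is_inverted:
        # Invert: selected = full domain minus the deselections.
        return sorted(set(all_values) - set(filter_values))
    return sorted(set(filter_values))
```

Without the flag, the consumer must guess which interpretation applies; with it, the branch above is unambiguous.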
Submitted by DeanTCC · 7 minutes ago
I wanted to share some feedback and a suggestion regarding the Purview glossary implementation, based on our experience at our organization. While we appreciate the capabilities that Purview offers for data governance, we've encountered significant challenges with setting up and maintaining the business glossary. Despite the ability to import terms rapidly, there is no automated way to link these terms to the items they represent. Each definition must be manually linked to a data item one at a time, across three screens with multiple user interactions per screen. Given that we have over 28,000 data items in our data warehouse alone, not to mention the additional measures and tables in our Power BI data sets, this manual approach is not feasible for us. The current process significantly hinders our ability to efficiently manage and utilize the business glossary. We believe that an automated feature to link Power BI descriptions to the Purview glossary would greatly enhance our ability to manage data governance and provide a more seamless experience for our users. If this feature is included in the new Purview integrated within Fabric, it would be a significant factor in hastening our move to an F64 capacity. We hope that Microsoft considers this suggestion, as it would be invaluable in addressing the challenges we face and improving the overall user experience.
Submitted by jovanpop-msft · 8 hours ago
The OPENROWSET function can read files from Azure Data Lake Storage or Azure Blob Storage. It would be useful to enable OPENROWSET to also read files from Fabric OneLake and access the files in the Lakehouse /Files area.
Submitted by ram5 · 10 hours ago
In our setup, we work with multiple environments (Dev, Test, and Prod), each represented by its own workspace. To streamline deployment via pipelines, we keep Lakehouses and Notebooks named identically across workspaces. Previously, when opening a Notebook, we could clearly see which Lakehouse it was connected to, including the workspace it belonged to. This made it easy to spot when a Notebook was referencing, for example, the Dev Lakehouse while working in the Prod workspace; we could then simply disconnect and re-add the correct reference. However, with recent UI changes, only the Lakehouse name is visible, not the workspace that contains it. This creates confusion when multiple Lakehouses share the same name across different workspaces. Even when adding a new source, the explorer does not clearly show which reference belongs to which workspace. This makes it far too easy to mistakenly remove or connect the wrong Lakehouse, especially in production environments. This seemingly small UI regression significantly affects productivity and introduces real risk during deployment and debugging. Please consider restoring the visibility of workspace names on Lakehouse references in Notebooks. Before (each reference showed its workspace), e.g.: Workspace Dev: LakehouseA, LakehouseB, ...; Workspace Prod: LakehouseA, LakehouseB, ... After: only the Lakehouse name is shown, even though we use the same Lakehouse name in each workspace.
Submitted by Thomas_Pouliot · 6 hours ago
Right now, in the deployment pipeline (new version), to deploy from Dev to Test we have to click Test, then select the item from Dev we want deployed, and then deploy (essentially pulling up from the lower stage rather than pushing to the next). If we want to set rules, we have to deploy first and THEN set the rule, which is counterintuitive and a waste of time and resources, since we then have to deploy again or change the parameters manually. The icon for deployment is a rocket ship, not a UFO with a tractor beam, so I think push is the intention, not pull; that imagery conveys what I mean by push (rocket propulsion) versus pull (UFO tractor beam).

Suggested changes:
- Change from a pull deployment to a push deployment: change "Deploy From" to "Deploy To". To deploy from Dev to Test, the user should click on DEV and select the report to deploy to Test.
- Before deploying, the user should be able to set any parameter and data source change rules. Remove rules from the highest (Prod) stage and add them to the lowest (Dev) stage. Then the user can click Deploy to have the item moved/pushed/deployed up to Test.
- All gateway settings and parameters should be settable through the deployment pipeline, and the pipeline should indicate when no gateway is configured.
- Add a checkbox option to deployment: "Refresh on deployment". This option should be grayed out, or fail without consuming resources, if no gateway is set up. When checked, after a successful deployment, attempt to refresh the report without the user having to go into the workspace to refresh it.
- Add global parameter rules, as noted in the separate idea "Global Deployment Pipeline Rules" on the Microsoft Fabric Community.
Submitted by crookie · 5 hours ago
Hi. Where a user has no data based on their RLS permissions, instead of showing them a dashboard with all the empty visuals, I'd like to be able to automatically hide all the normal visuals and instead show something like a card with a friendly message explaining that they have no access to any data. There is currently no automated way to accomplish this, or to redirect to another page when a row count is 0. I realise you can make lots of changes to make things look invisible based on a measure, but it takes a lot of fiddling to get things into the desired state. A neater way would be to group visuals into containers and then be able to disable/hide/resize the containers. With, say, two containers, I could enable container 1 when a row-count measure is > 0, and enable container 2 when the measure = 0. Many thanks for reading this.
Submitted by Muhiddin · 9 hours ago
Currently, when a user exports to Excel from a Power BI visual using "Current data layout", the "Applied filters" section is placed after the exported results, so the user has to scroll all the way down to check the applied filters. This is inconvenient when the export contains many rows. I would like to be able to control the position of "Applied filters": for example, if the user wants it at the top of the sheet, it should be displayed there.
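Until the export itself is configurable, the desired layout change amounts to moving the trailing filters block to the top of the sheet. A minimal post-processing sketch (the marker text and row representation are assumptions for illustration; a real workaround would apply this to the worksheet rows, e.g. via openpyxl):

```python
def move_applied_filters_to_top(rows, marker="Applied filters"):
    """Return rows with the trailing 'Applied filters' block moved to the top.

    `rows` is a list of first-cell values as exported; everything from the
    first row starting with `marker` onward is treated as the filters block.
    """
    for i, row in enumerate(rows):
        if row.startswith(marker):
            # Filters block comes first, then the data rows.
            return rows[i:] + rows[:i]
    return rows  # no filters block found; leave the export untouched
```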
Submitted by kerski · Monday
With the debut of DAX Query View and TMDL Script View, there should be an API to retrieve these files. This capability would enable organizations to automate testing and deployments by programmatically accessing DAX Queries and TMDL scripts. For example, a Notebook could iterate over semantic models in a workspace, retrieve DAX Queries for testing, and execute those tests using semantic-link. If tests fail, teams could take corrective actions based on the results.
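The testing loop described above might look like the following sketch. The callables are injected so the logic is testable outside Fabric: `list_models` and `evaluate_dax` stand in for semantic-link calls (sempy's `fabric.list_datasets()` / `fabric.evaluate_dax()`), while `get_test_queries` represents the retrieval API this idea requests, which does not exist today.

```python
def run_model_tests(list_models, get_test_queries, evaluate_dax):
    """Run stored DAX test queries against every semantic model.

    list_models() -> iterable of model names.
    get_test_queries(model) -> iterable of (test_name, dax_text) pairs
        (the hypothetical retrieval API for DAX Query View files).
    evaluate_dax(model, dax_text) -> truthy result means the test passed.
    Returns {model: [names of failed tests]} so teams can act on failures.
    """
    failures = {}
    for model in list_models():
        failed = [name for name, dax in get_test_queries(model)
                  if not evaluate_dax(model, dax)]
        if failed:
            failures[model] = failed
    return failures
```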
Submitted by Koen_Verbeeck · yesterday
It would be great if we could specify a folder to create a new item in (such as a lakehouse). Right now it seems you can only create new items in the root folder of the workspace when using the REST APIs.
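The request could be satisfied with one extra field on the existing create-item call. A sketch of what the request might look like: the endpoint (`POST https://api.fabric.microsoft.com/v1/workspaces/{workspaceId}/items` with `displayName` and `type`) is the documented one, but the `folderId` field is the proposed addition and does not exist in the API today.

```python
import json

def build_create_item_request(workspace_id, display_name, item_type,
                              folder_id=None):
    """Build the URL and JSON body for the Fabric create-item REST call."""
    url = f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items"
    body = {"displayName": display_name, "type": item_type}
    if folder_id is not None:
        # Hypothetical field: the workspace folder to create the item in.
        body["folderId"] = folder_id
    return url, json.dumps(body)
```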
Submitted by jasonhorner · yesterday
Problem statement: Currently, the Azure Data Factory REST API connector only supports REST endpoints that return responses in application/json format. However, many modern and legacy REST APIs legitimately return responses as text/plain (e.g., health checks, keys, tokens, or plain status messages). When using such APIs, the REST connector fails to process the response, making it incompatible with otherwise valid REST interfaces. Although it's possible to work around this by invoking a function activity or using custom logic, this introduces unnecessary complexity and overhead.

Proposed solution: Enable the REST connector to support text/plain responses by:
- Wrapping the plain text in a valid JSON document (e.g., { "response": "raw text here" }), or
- Exposing the raw response as a string under a default or user-defined property.

Business value:
- Simplifies pipeline development: reduces the need for workarounds like function or web activities.
- Expands connector compatibility: natively supports more REST APIs, including those used in DevOps, monitoring, security, and identity systems.
- Improves performance: avoids additional activities or dependencies that increase execution time and cost.
- Enhances usability: makes ADF more flexible and developer-friendly.
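The proposed wrapping behavior is simple to state precisely. A minimal sketch of what the connector could do internally (the `response` property name follows the example above; it would be user-configurable under the second option):

```python
import json

def normalize_rest_response(content_type, body_text, prop="response"):
    """Return a JSON-compatible document for a REST response body.

    application/json bodies are parsed as-is; text/plain bodies are wrapped
    as {prop: "<raw text>"}, per the proposal above.
    """
    if content_type.startswith("application/json"):
        return json.loads(body_text)
    if content_type.startswith("text/plain"):
        return {prop: body_text}
    raise ValueError(f"unsupported content type: {content_type}")
```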
Submitted by Juriaan-007 · 10 hours ago
For workspaces there are four roles (Admin, Contributor, Member, Viewer). These do not exist for deployment pipelines: every user is an admin. I suggest adding different roles; for example, to disallow certain users from deleting a deployment pipeline.
Submitted by v-velagalasr1 · Monday
Need an API to get the list of all data sources for the reports available in a capacity.
Submitted by v-nandagm · Monday
Need an API that allows the automated monitoring of VNet Data Gateways.
Submitted by Miguel_Myers · 10-07-2024, 10:00 PM
The current card visual forces users to overlap elements or waste copious amounts of time creating custom visuals. The new card feature should give users the ability to create multiple cards in a single container and provide a greater level of customization.
Submitted by v-nandagm · Monday
Need an API to update the VNet data gateway description automatically.
Submitted by Miguel_Myers · 10-07-2024, 10:00 PM
It would be beneficial to incorporate features from Pivot tables that allow for the expansion and collapse of columns and hierarchical column groups within tabular visuals. This would not only solve the current limitations of matrices but also provide report creators with the flexibility to hide and show rows and columns, saving these settings for future use, thus eliminating the need to scroll through irrelevant data.
Submitted by Miguel_Myers · 10-07-2024, 10:00 PM
Enabling customized calculations at the query level for subtotals and grand totals would offer greater flexibility in reporting and preserve performance. Efficient organization of control settings to modify the style of these totals separately will empower report creators to achieve their desired appearance, while addressing their need for more control and customization in reporting.
Submitted by Miguel_Myers · 10-07-2024, 10:00 PM
Imagine a world where report creators can automatically apply slicer and filter selections based on specific logic, revolutionizing data analysis and user experience. This innovative approach eliminates any need for complex workarounds, optimizes slicer functionality, and paves the way for more efficient and effective data reporting.
Submitted by Cookistador · Monday
When building a report, undo and redo functionality is available for chart modifications. However, this feature is not currently available for table-related actions. Would it be possible to implement it for the following actions:
- Deletion of a table, measure, or column
- Modification and deletion of relationships
- Modifications to data type, format, or category
Many thanks in advance for your help.