Share your ideas and vote for future features
Suggest an idea
Submitted by Thomas_Pouliot, 3 hours ago
Right now: In the deployment pipeline (new version), to deploy from dev to test we have to click Test, select the item from Dev we want deployed, and then deploy (essentially pulling up from the lower stage versus pushing to the next). If we want to set rules, we have to deploy first and THEN set the rule, which is counterintuitive and a time sink and waste of resources, because we then have to deploy again or manually change the parameters. The icon for deployment is a rocket ship, not a UFO with a tractor beam, so I think push is the intention, not pull. That imagery should convey what I mean by push (rocket propulsion) versus pull (UFO tractor beam).
Suggested change: change from a pull deployment to a push deployment.
- Change "Deploy From" to "Deploy To". To deploy from dev to test, the user should click on DEV and select the report to deploy to test.
- Before deploying, the user should be able to set any parameter and data source change rules. Remove rules from the highest (prod) level and add rules to the lowest (dev) level. The user can then click Deploy to have the item moved/pushed/deployed up to test.
- All gateway settings and parameters should be settable through the deployment pipeline, and the pipeline should indicate when there is no gateway.
- Add a checkbox option to deployment: "Refresh on deployment". This option should be grayed out, or fail without using resources, if a gateway is not set up. When checked, after a successful deployment, attempt to refresh the report without the user having to go into the workspace to refresh it.
- Add global parameter rules as noted in the separate idea "Global Deployment Pipeline Rules" on the Microsoft Fabric Community.
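A rough sketch of what a push-style, rule-first deployment could look like if driven through the existing deployment pipelines REST API. The deploy endpoint, sourceStageOrder, item lists, and options fields come from the public Power BI "Selective Deploy" API; the refreshOnDeployment flag and the parameter-rule block are hypothetical and only illustrate the options this idea asks for.

```python
import requests

# Assumptions: ACCESS_TOKEN is a valid Power BI access token and PIPELINE_ID is an existing
# deployment pipeline. The endpoint and the sourceStageOrder/datasets/reports/options fields
# exist in the public "Pipelines - Selective Deploy" API; "refreshOnDeployment" and "rules"
# are HYPOTHETICAL and illustrate the behavior requested in this idea.
ACCESS_TOKEN = "<access-token>"
PIPELINE_ID = "<pipeline-id>"

deploy_request = {
    "sourceStageOrder": 0,          # 0 = Dev: push FROM dev TO the next stage (test)
    "datasets": [{"sourceId": "<dataset-id-in-dev>"}],
    "reports": [{"sourceId": "<report-id-in-dev>"}],
    "options": {
        "allowCreateArtifact": True,
        "allowOverwriteArtifact": True,
    },
    # --- Hypothetical additions proposed by this idea ---
    "refreshOnDeployment": True,    # refresh the report after a successful deploy
    "rules": [                      # set parameter / data source rules BEFORE deploying
        {"itemId": "<dataset-id-in-dev>", "parameter": "Environment", "value": "test"}
    ],
}

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deploy",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=deploy_request,
    timeout=30,
)
resp.raise_for_status()
print("Deployment operation started:", resp.status_code)
```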
Submitted by ram5, 7 hours ago

In our setup, we work with multiple environments (Dev, Test, and Prod), each represented by its own workspace. To streamline deployment via pipelines, we keep Lakehouses and Notebooks named identically across workspaces. Previously, when opening a Notebook, we could clearly see which Lakehouse it was connected to, including the workspace name it belonged to. This made it easy to identify if a Notebook was referencing, for example, the Dev Lakehouse while working in the Prod workspace. We could then simply disconnect and re-add the correct reference. However, with recent UI changes, only the Lakehouse name is visible, not the workspace it is contained in. This creates confusion when multiple Lakehouses share the same name across different workspaces. Even when adding a new source, the explorer does not clearly show which reference is from which workspace. This makes it far too easy to mistakenly remove or connect the wrong Lakehouse, especially in production environments. This seemingly small UI regression significantly affects productivity and introduces potential risks during deployment or debugging. Please consider restoring the visibility of workspace names in Lakehouse references in Notebooks.
Before (screenshot from Google), e.g.:
Workspace Dev
- LakehouseA
- LakehouseB
...
Workspace Prod
- LakehouseA
- LakehouseB
...
After: only the Lakehouse name is shown, and we use the same Lakehouse name in each workspace.
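While the Notebook UI no longer shows the owning workspace, a small workaround sketch like the one below can at least cross-check which workspace a given Lakehouse lives in, by listing Lakehouses per workspace through the existing Fabric REST API. The token handling and workspace IDs are placeholders.

```python
import requests

# Minimal sketch, assuming a valid Fabric access token. It lists the Lakehouses in the Dev
# and Prod workspaces so that identically named Lakehouses can be told apart by workspace,
# since the Notebook explorer no longer shows the owning workspace.
ACCESS_TOKEN = "<fabric-access-token>"
WORKSPACES = {"Dev": "<dev-workspace-id>", "Prod": "<prod-workspace-id>"}

for env_name, workspace_id in WORKSPACES.items():
    resp = requests.get(
        f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/lakehouses",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    for lakehouse in resp.json().get("value", []):
        print(f"{env_name}: {lakehouse['displayName']} ({lakehouse['id']})")
```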
Submitted by jovanpop-msft, 5 hours ago

The OPENROWSET function can read files from Azure Data Lake storage or Azure Blob storage. It would be useful to enable OPENROWSET to also read files from Fabric OneLake and access files in the Lakehouse /Files area.
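For illustration, a minimal sketch (assuming a pyodbc connection to a SQL endpoint that supports OPENROWSET over Azure storage today, in the Synapse serverless style) with the OneLake path shown only as the hypothetical target this idea proposes; the OneLake URL form is an assumption, not a supported feature.

```python
import pyodbc

# Minimal sketch: OPENROWSET over Azure Data Lake storage works today; the commented
# OneLake / Lakehouse Files path is HYPOTHETICAL and illustrates what this idea requests.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-sql-endpoint>;Database=<your-database>;"
    "Authentication=ActiveDirectoryInteractive;"
)

query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage-account>.dfs.core.windows.net/<container>/data/*.parquet',
    FORMAT = 'PARQUET'
) AS r;
-- Proposed (not supported today): read directly from OneLake / Lakehouse Files, e.g.
-- BULK 'https://onelake.dfs.fabric.microsoft.com/<workspace>/<lakehouse>.Lakehouse/Files/data/*.parquet'
"""

for row in conn.cursor().execute(query):
    print(row)
```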
Submitted by Muhiddin, 5 hours ago

Currently, when the user exports to Excel from a Power BI visual using "Current data layout", the "Applied filters" section is displayed after the export results, so the user has to scroll all the way down to check the applied filters. This is inconvenient when the export contains many rows. I would like to be able to control the position of "Applied filters" according to the user's preference; for example, if the user wants it at the top of the sheet, it should be displayed at the top.
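Until the export layout is configurable, a post-processing sketch along these lines can move the applied-filters block to the top of the exported sheet. It uses openpyxl, and it assumes the exported workbook has a single sheet and that the block starts at a row whose first cell reads "Applied filters"; both are assumptions about the export format.

```python
from openpyxl import Workbook, load_workbook

# Minimal post-processing sketch, assuming a single-sheet export where the applied-filters
# block starts at a row whose first cell reads "Applied filters" and runs to the end.
wb = load_workbook("export.xlsx")
rows = [[cell.value for cell in row] for row in wb.active.iter_rows()]

split = next((i for i, r in enumerate(rows) if r and r[0] == "Applied filters"), None)
if split is None:
    raise SystemExit("No 'Applied filters' block found in the export")

# Put the applied-filters block first, a blank spacer row, then the exported data rows.
reordered = rows[split:] + [[None]] + rows[:split]

out = Workbook()
for r in reordered:
    out.active.append(r)
out.save("export_filters_on_top.xlsx")
```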
Submitted by Juriaan-007, 7 hours ago
For workspaces there are four roles (Admin, Contributor, Member, Viewer). These do not exist for deployment pipelines; every user is an admin. I suggest adding different roles. An example is to disallow certain users from deleting a deployment pipeline.
Submitted by Koen_Verbeeck, yesterday

It would be great if we could specify a folder to create a new item in (such as a lakehouse). Right now it seems you can only create new items in the root folder of the workspace when using the REST APIs.
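As a sketch of what this could look like, the call below uses the existing Fabric REST API for creating an item in a workspace; the folderId field is the hypothetical addition this idea asks for, and the IDs are placeholders.

```python
import requests

# Minimal sketch, assuming a valid Fabric access token. The endpoint and the
# displayName/type fields exist in the Fabric REST API today; "folderId" is the
# HYPOTHETICAL parameter this idea proposes for creating the item inside a folder.
ACCESS_TOKEN = "<fabric-access-token>"
WORKSPACE_ID = "<workspace-id>"

payload = {
    "displayName": "SalesLakehouse",
    "type": "Lakehouse",
    "folderId": "<target-folder-id>",  # proposed: create the Lakehouse in this folder
}

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/items",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.status_code, resp.json() if resp.content else "")
```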
Submitted by jasonhorner, yesterday
Problem Statement: Currently, the Azure Data Factory REST API connector only supports REST endpoints that return responses in application/json format. However, many modern and legacy REST APIs legitimately return responses as text/plain (e.g., health checks, keys, tokens, or plain status messages). When using such APIs, the REST connector fails to process the response, making it incompatible with otherwise valid REST interfaces. Although it's possible to work around this by invoking a function activity or using custom logic, this introduces unnecessary complexity and overhead.
Proposed Solution: Enable the REST connector to support text/plain responses by:
- Wrapping the plain text in a valid JSON document (e.g., { "response": "raw text here" }), or
- Exposing the raw response as a string under a default or user-defined property.
Business Value:
- Simplifies pipeline development: reduces the need for workarounds like function or web activities.
- Expands connector compatibility: supports more REST APIs natively, including those used in DevOps, monitoring, security, and identity systems.
- Improves performance: avoids additional activities or dependencies that increase execution time and cost.
- Enhances usability: makes ADF more flexible and developer-friendly.
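To make the proposed wrapping behavior concrete, here is a small sketch of the workaround described above: a helper (for example, something an Azure Function could run) that fetches a text/plain endpoint and wraps the raw body in the JSON shape the connector could return natively. The URL and the "response" property name are illustrative.

```python
import json
import requests

def fetch_as_json(url: str, prop: str = "response") -> str:
    """Fetch an endpoint and wrap a text/plain body in a JSON document,
    mimicking the behavior this idea proposes for the REST connector itself."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()

    content_type = resp.headers.get("Content-Type", "")
    if "application/json" in content_type:
        return resp.text                      # already JSON: pass through unchanged
    return json.dumps({prop: resp.text})      # text/plain: wrap as {"response": "..."}

# Illustrative usage against a plain-text health-check endpoint (URL is a placeholder).
print(fetch_as_json("https://example.com/health"))
```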
Submitted by v-velagalasr1, Monday

Need an API to get the list of all data sources of the reports available in the capacity.
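Until a single endpoint exists, a workaround sketch can approximate this by stitching together existing Power BI admin APIs: list workspaces, filter to the capacity, list each workspace's reports, and then query each report's dataset datasources. The endpoints below exist in the public admin API, but the capacity filtering and error handling are simplified for illustration and the IDs are placeholders.

```python
import requests

# Workaround sketch, assuming an admin-scoped Power BI access token. It combines existing
# admin APIs (workspaces, reports per workspace, dataset datasources) to approximate the
# requested "all data sources of all reports in a capacity" endpoint.
ACCESS_TOKEN = "<powerbi-admin-token>"
CAPACITY_ID = "<capacity-id>"
BASE = "https://api.powerbi.com/v1.0/myorg/admin"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

groups = requests.get(f"{BASE}/groups?$top=5000", headers=HEADERS, timeout=30).json()["value"]
for group in (g for g in groups if g.get("capacityId") == CAPACITY_ID):
    reports = requests.get(f"{BASE}/groups/{group['id']}/reports",
                           headers=HEADERS, timeout=30).json()["value"]
    for report in reports:
        if not report.get("datasetId"):
            continue
        sources = requests.get(f"{BASE}/datasets/{report['datasetId']}/datasources",
                               headers=HEADERS, timeout=30).json()["value"]
        for ds in sources:
            print(group["name"], report["name"],
                  ds.get("datasourceType"), ds.get("connectionDetails"))
```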
Submitted by v-nandagm, Monday

Need an API that allows the automated monitoring of VNet Data Gateways.
Submitted by v-nandagm, Monday

API to update the VNET description automatically.
Submitted by Miguel_Myers on 10-07-2024, 10:00 PM

The current card visual forces users to overlap elements or waste copious amounts of time creating custom visuals. The new card feature should give users the ability to create multiple cards in a single container and provide a greater level of customization.
Submitted by Miguel_Myers on 10-07-2024, 10:00 PM

It would be beneficial to incorporate features from Pivot tables that allow for the expansion and collapse of columns and hierarchical column groups within tabular visuals. This would not only solve the current limitations of matrices but also provide report creators with the flexibility to hide and show rows and columns, saving these settings for future use, thus eliminating the need to scroll through irrelevant data.
Submitted by Miguel_Myers on 10-07-2024, 10:00 PM

Enabling customized calculations at the query level for subtotals and grand totals would offer greater flexibility in reporting and preserve performance. Efficient organization of control settings to modify the style of these totals separately will empower report creators to achieve their desired appearance, while addressing their need for more control and customization in reporting.
Submitted by Miguel_Myers on 10-07-2024, 10:00 PM

Imagine a world where report creators can automatically apply slicer and filter selections based on specific logic, revolutionizing data analysis and user experience. This innovative approach eliminates any need for complex workarounds, optimizes slicer functionality, and paves the way for more efficient and effective data reporting.
Submitted by Cookistador, Monday

When building a report, undo and redo functionality is available for chart modifications. However, this feature is not currently available for table-related actions. Would it be possible to implement this feature for the following actions:
- Deletion of a table, measure, or column
- Modification and deletion of relationships
- Modifications to data type, format, or category
Many thanks in advance for your help.
Submitted by Miguel_Myers on 10-07-2024, 10:00 PM

Interpreting visuals without a clear legend to indicate the logic behind specific styles can lead to confusion and decision-making errors. Ensuring that legends and tooltips accurately display colors, patterns, and other visual components influenced by conditional logic would enhance clarity and transparency, enabling report consumers to easily understand the applied logic and make more effective decisions.
Submitted by matthias-bi, Sunday
Microsoft SQL Server has a very powerful feature in the form of "synonyms." It allows users and DBAs to do all sorts of powerful magic, such as rewiring objects under the hood (e.g. running the code against another test database). Synonyms are lightweight and more resilient than views. This is more general than Lakehouse shortcuts, and it is completely different from Clone. At the moment it is not supported; please add support.
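For readers unfamiliar with the feature, here is a small sketch of what SQL Server synonyms look like (standard T-SQL, sent through pyodbc to keep the examples in one language). Fabric warehouses do not accept this syntax today, which is exactly what this idea asks to change; server, database, and object names are placeholders.

```python
import pyodbc

# Minimal sketch of the SQL Server "synonym" feature this idea asks Fabric to support.
# The T-SQL below is standard SQL Server syntax; it is NOT supported in a Fabric warehouse
# today. Connection details and object names are placeholders.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<sql-server>;Database=<database>;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Point dbo.Orders at a table in another (e.g., test) database without changing any code
# that references dbo.Orders; re-creating the synonym later "rewires" it under the hood.
cursor.execute("CREATE SYNONYM dbo.Orders FOR TestDB.dbo.Orders;")
cursor.execute("SELECT TOP 5 * FROM dbo.Orders;")  # queries resolve through the synonym
print(cursor.fetchall())
conn.commit()
```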
Submitted by mwielen, yesterday
It should be possible to assign a workspace identity to Azure SQL data sources used in a semantic model. Assigning the workspace identity should be possible by setting the credential on the data source of the semantic model via the portal (user interface) as well as programmatically via the Update Datasource API. Enabling the above will eliminate the hassle of setting credentials for managed semantic models, and it aligns with the preferred way of authentication when Azure services connect to each other.
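A sketch of what the programmatic path could look like, built on the existing Gateways - Update Datasource call used for cloud connections. The endpoint and the credentialDetails envelope exist in the public Power BI API today, but the "WorkspaceIdentity" credential type is hypothetical and stands in for what this idea requests.

```python
import requests

# Minimal sketch, assuming a valid Power BI access token. The Update Datasource endpoint
# and the credentialDetails envelope exist in the public API; "WorkspaceIdentity" is a
# HYPOTHETICAL credential type representing the behavior this idea proposes.
ACCESS_TOKEN = "<access-token>"
GATEWAY_ID = "<cloud-connection-gateway-id>"     # the cloud connection bound to the model
DATASOURCE_ID = "<azure-sql-datasource-id>"

payload = {
    "credentialDetails": {
        "credentialType": "WorkspaceIdentity",   # proposed: authenticate as the workspace identity
        "credentials": "",                       # no secret needed if the WI is used
        "encryptedConnection": "Encrypted",
        "encryptionAlgorithm": "None",
        "privacyLevel": "Organizational",
    }
}

resp = requests.patch(
    f"https://api.powerbi.com/v1.0/myorg/gateways/{GATEWAY_ID}/datasources/{DATASOURCE_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Datasource credential updated:", resp.status_code)
```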
Submitted by ABarzanti97, Monday
Hi, it'd be great to add a feature that lets you set a fixed width for columns in the table settings. Thank you, Andrea
Submitted by sajjadniazi on 03-21-2025, 12:46 AM

Problem: Currently, when a dataset is connected to a Lakehouse as a datasource in Power BI Fabric, it defaults to a cloud connection mapped to SSO. In embedded mode, reports built on these datasets fail due to a lack of identity, as they do not inherit authentication from the service principal. To resolve this, users must manually adjust the datasource settings via the Power BI service: https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-fixed-identity
Proposed Solution: Introduce a new REST API or extend the current API functionality to allow programmatic setting (or re-setting) of the connection to the service principal, including updating datasources.
Benefits:
- Automates the process, reducing manual intervention
- Minimizes downtime for embedded reports
- Enhances developer experience and deployment efficiency
- Ensures consistency in authentication settings across different environments
Impact: This feature will significantly improve the workflow for organizations embedding Power BI reports using service principals, ensuring seamless and automated datasource authentication.
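Since the idea asks for a new or extended API, the sketch below is entirely hypothetical: it only illustrates one possible shape of a call that binds a Direct Lake dataset's Lakehouse connection to a service principal (fixed identity) instead of SSO. Neither the endpoint name nor the payload exists today.

```python
import requests

# Entirely HYPOTHETICAL sketch of the requested API: programmatically bind a Direct Lake
# dataset's Lakehouse connection to a service principal (fixed identity) instead of SSO.
# Neither the endpoint nor the payload exists today; they illustrate the proposal only.
ACCESS_TOKEN = "<service-principal-token>"
WORKSPACE_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"

payload = {
    "bindingType": "ServicePrincipal",          # instead of the default SSO cloud connection
    "servicePrincipalObjectId": "<sp-object-id>",
    "tenantId": "<tenant-id>",
}

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/datasets/{DATASET_ID}/"
    "Default.BindToFixedIdentity",              # hypothetical endpoint name
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print("Connection rebound to service principal:", resp.status_code)
```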
Idea Statuses
- New 14,994
- Need Clarification 5
- Needs Votes 22,633
- Under Review 641
- Planned 267
- Completed 1,649
- Declined 221
Latest Comments
- giusepper11 on: Reintroduce Workspace Name visibility for Lakehous...
- tom_vanleijsen on: Hide "updating" spinners in real-time dashboards
- kleigh on: change button slicer selected item color
- SimonKAKI on: OneLake Cross-Region Mirroring
- jovanpop-msft on: Add native OPENROWSET(json) support in Fabric DW
- Jonathan_Garris on: Capacity level calendar of scheduled jobs
- miguel on: Need "deployment packages" for deployment pipeline...
- Jonathan_Garris on: Fabric Built-in Roles (RBAC)
- timothyeharris on: Improve Workspace Visibility in Microsoft Fabric
- Cristian_Angyal on: Org app and cross-item navigation
- Power BI 38,735
- Fabric platform 534
- Data Factory 445
- Data Factory | Data Pipeline 288
- Data Engineering 261
- Data Warehouse 186
- Data Factory | Dataflow 154
- Real-Time Intelligence 128
- Fabric platform | Workspaces 119
- Fabric platform | OneLake 116
- Fabric platform | Admin 112
- Fabric platform | CICD 89
- Fabric platform | Capacities 64
- Real-Time Intelligence | Eventhouse and KQL 61
- Real-Time Intelligence | Activator 53
- Fabric platform | Governance 49
- Fabric platform | Security 47
- Data Science 46
- Data Factory | Mirroring 37
- Fabric platform | Support 31
- Databases | SQL Database 30
- Real-Time Intelligence | Eventstream 29
- Fabric platform | Data hub 28
- Databases 21
- Data Factory | Apache Airflow Job 3
- Product 2
- Fabric platform | Real-Time hub 2
- Real-Time Hub 1