
Adding a Fabric data source to the context from a Python notebook #11482

@muhssamy

Description


Describe the bug
When I run:

data_source_name = dataset
data_source = context.data_sources.add_fabric_powerbi(
    data_source_name, dataset=dataset, workspace=workspace
)

this works in the PySpark notebook experience but not in the Python notebook experience. Only the add step fails; all the other actions complete.
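For context, the cell shown in the traceback below wraps the lookup in a bare `except:`, which also swallows errors unrelated to a missing datasource. Here is a minimal sketch of a narrower get-or-create pattern; it uses a hypothetical stub registry in place of `context.data_sources` so it runs standalone and is not a real Great Expectations API:

```python
class DatasourceRegistry:
    """Hypothetical stand-in for context.data_sources, for illustration only."""

    def __init__(self):
        self._store = {}

    def get(self, name):
        if name not in self._store:
            # Mirrors the KeyError message seen in the traceback.
            raise KeyError(f"Could not find a datasource named '{name}'")
        return self._store[name]

    def add_fabric_powerbi(self, name, **kwargs):
        self._store[name] = {"name": name, **kwargs}
        return self._store[name]


def get_or_create(registry, name, **kwargs):
    # Catch only KeyError: a bare `except:` would also hide
    # unrelated failures such as TestConnectionError.
    try:
        return registry.get(name)
    except KeyError:
        return registry.add_fabric_powerbi(name, **kwargs)


registry = DatasourceRegistry()
ds = get_or_create(registry, "Sales Semantic Model", dataset="d", workspace="w")
print(ds["name"])  # → Sales Semantic Model
```

Narrowing the `except` does not fix the underlying bug (the `TestConnectionError` below is raised inside `add_fabric_powerbi` itself), but it makes the failure surface cleanly instead of being chained to the lookup's `KeyError`.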

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/great_expectations/datasource/datasource_dict.py:197, in CacheableDatasourceDict.__getitem__(self, name)
    196 # Upon cache miss, retrieve from store and add to cache
--> 197 ds = super().__getitem__(name)
    198 self.data[name] = ds

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/great_expectations/datasource/datasource_dict.py:115, in DatasourceDict.__getitem__(self, name)
    113 @override
    114 def __getitem__(self, name: str) -> FluentDatasource:
--> 115     ds = self._get_ds_from_store(name)
    117     return self._init_fluent_datasource(name=name, ds=ds)

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/great_expectations/datasource/datasource_dict.py:106, in DatasourceDict._get_ds_from_store(self, name)
    105 except ValueError:
--> 106     raise KeyError(f"Could not find a datasource named '{name}'")

KeyError: "Could not find a datasource named 'Sales Semantic Model'"

During handling of the above exception, another exception occurred:

TestConnectionError                       Traceback (most recent call last)
Cell In[15], line 9
      6     print(f"✅ Using existing data source: {data_source_name}")
      7 except:
      8     # Create new if doesn't exist
----> 9     data_source = context.data_sources.add_fabric_powerbi(
     10         data_source_name, dataset=dataset, workspace=workspace
     11     )
     12     print(f"✅ Created new data source: {data_source_name}")

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/great_expectations/datasource/fluent/sources.py:476, in DataSourceManager.create_add_crud_method.<locals>.add_datasource(name_or_datasource, **kwargs)
    474 logger.debug(f"Adding {datasource_type} with {datasource.name}")
    475 datasource._data_context = self._data_context
--> 476 datasource.test_connection()
    477 datasource = self._data_context._add_fluent_datasource(datasource)
    479 return datasource

File ~/jupyter-env/python3.11/lib/python3.11/site-packages/great_expectations/datasource/fluent/fabric.py:301, in FabricPowerBIDatasource.test_connection(self, test_assets)
    292 """Test the connection for the FabricPowerBIDatasource.
    293 
    294 Args:
   (...)
    298     TestConnectionError: If the connection test fails.
    299 """
    300 if not self._running_on_fabric():
--> 301     raise TestConnectionError("Must be running Microsoft Fabric to use this datasource")  # noqa: TRY003 # FIXME CoP
    303 try:
    304     from sempy import fabric  # noqa: F401 # test if fabric is installed

TestConnectionError: Must be running Microsoft Fabric to use this datasource
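The check fails at `fabric.py:301`, before the `from sempy import fabric` probe on line 304 is ever reached, so the Python-experience kernel is being rejected by environment detection alone. As a rough diagnostic from the notebook (a sketch, not part of the Great Expectations API), you can at least confirm whether `sempy` is importable in the failing kernel:

```python
import importlib.util


def sempy_importable() -> bool:
    """Return True if the sempy package (semantic-link) can be found.

    This only checks importability; Great Expectations'
    _running_on_fabric() check is separate and may rely on other
    runtime signals (hypothetical; its internals are not shown here).
    """
    return importlib.util.find_spec("sempy") is not None


print(sempy_importable())
```

If `sempy` is importable but `add_fabric_powerbi` still raises `TestConnectionError`, the mismatch lies in whatever runtime signal `_running_on_fabric()` inspects, which would support the report that the Python experience is not being recognized as Fabric.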

To Reproduce
Try to add a Power BI data model from a Python notebook vs. a PySpark notebook.

Expected behavior
Both notebook experiences should behave the same.

Environment (please complete the following information):
Microsoft Fabric notebook
