Tungana-Bhavya/MICROSOFT_FABRIC_BOOTCAMP


MICROSOFT FABRIC BOOTCAMP – LEARNING JOURNEY

This repository documents hands-on exercises completed during the Microsoft Fabric Bootcamp. It covers key concepts such as Lakehouses, Warehouses, Power Query transformations, Data Pipelines, semantic models, dashboard and report building, and practical experience with Dataflow Gen2 within Microsoft Fabric.


Week 1

Class 1: Introduction to Microsoft Fabric

  • What is Microsoft Fabric?
  • Key Features of Microsoft Fabric
  • Overview of the OneLake architecture
  • Microsoft Fabric components and ecosystem
  • Use cases and business value

Class 2: Workspace and Lakehouse Concepts

  1. Free Mirroring Storage
  • Understanding the benefits of mirrored data storage for reliability.
  2. Who Should Use Microsoft Fabric
  • Identifying target users and organizations for Fabric adoption.
  3. OneLake Overview
  • Introduction to Microsoft Fabric’s unified data lake.
  4. What Is a Lakehouse
  • Exploring the hybrid data storage and analytics architecture.
  5. What Is a Warehouse
  • Explanation of the compute layer used for querying data.
  6. Why We Need Domains
  • Importance of domains for managing data access and governance.
  7. Admin Portal Overview
  • Managing settings and resources through the Fabric admin portal.
  8. How to Create a Workspace
  • Step-by-step process of setting up a workspace in Fabric.

Week 2

Class 3: Explore Lakehouse in Microsoft Fabric

  1. Worked with the SQL Query Editor Using T-SQL
  • Querying Lakehouse tables via the SQL analytics endpoint
  2. Used a Notebook in Microsoft Fabric
  • Running T-SQL queries interactively and viewing results
  3. Explored the Lakehouse Structure
  • Accessed schemas and tables such as fact_sale and dimension_stock_item

Notebook Link: click here


Class 4: Explore Lakehouse with Spark

Practiced Spark SQL queries in a notebook connected to a Lakehouse.

Notebook Link: click here


Week 3

Class 5: Explore Warehouse in Microsoft Fabric

Practiced T-SQL Queries in Notebook (Connected to Warehouse)

Notebook Link: click here


Class 6: Explore Data Factory – Dataflow Gen2

Exercise 01 – Data Landing and Loading with Dataflow Gen2

This exercise covers loading data from a single Web API source, applying transformations, and managing it across storage destinations in Microsoft Fabric.

| Task | Description | Link |
| --- | --- | --- |
| 1.1 Load Data via Web API | Load datasets from GitHub raw links using Web API | Exercise 1.1 - Data Loading |
| 1.2 Transform and Load Data into Warehouse | Transform and load the Sales dataset into the Warehouse | Exercise 1.2 - Warehouse Destination |
| 1.3 Landing Data into Lakehouse Storage | Store the Items, Customers, and Geography datasets in the Lakehouse for further use | Exercise 1.3 - Lakehouse Destination |
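Outside the Dataflow Gen2 UI, the "land a CSV from a raw GitHub URL" step above can be sketched in plain Python (the URL and column names are placeholders, not the bootcamp's actual datasets):

```python
import csv
import io
import urllib.request

def land_csv(url: str) -> list:
    """Fetch a CSV over HTTP(S) and return its rows as dictionaries,
    mimicking the landing step of a Dataflow Gen2 Web API source."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

# Inline data: URL so the sketch runs without network access; in practice
# this would be a raw.githubusercontent.com link to the dataset.
rows = land_csv("data:text/csv,Item%2CPrice%0AWidget%2C9.99%0AGadget%2C4.50")
print(rows)
```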

Exercise 02 – Power Query Transformations

This section documents various data transformation techniques using Power Query in Microsoft Fabric.

Transformation Tasks

| Task | Link |
| --- | --- |
| Pivot | View Details |
| Unpivot | View Details |
| Gap Filling | View Details |
| Combine & Split | View Details |
| Transpose | View Details |
| Replace Values | View Details |
| Joins | View Details |
| Append Queries | View Details |
| Date and Time | View Details |
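The Pivot and Unpivot transformations in the table above can be illustrated with a small plain-Python sketch (the record layout is a made-up example, not the bootcamp's data):

```python
# Unpivot: turn one row per entity with one column per attribute into one
# (entity, attribute, value) row per cell -- what Power Query's
# "Unpivot Columns" does to the selected columns.
wide = [
    {"Region": "East", "Jan": 10, "Feb": 20},
    {"Region": "West", "Jan": 5, "Feb": 15},
]

long = [
    {"Region": row["Region"], "Month": month, "Sales": row[month]}
    for row in wide
    for month in ("Jan", "Feb")
]

# Pivot: the inverse -- spread the Month attribute back out into columns.
by_region = {}
for rec in long:
    by_region.setdefault(rec["Region"], {"Region": rec["Region"]})[rec["Month"]] = rec["Sales"]
pivoted = list(by_region.values())
```

Round-tripping unpivot then pivot recovers the original wide table, which is a handy sanity check when choosing between the two shapes.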

Week 4

Class 7: Explore Data Pipeline in Microsoft Fabric

  • Creating a pipeline using Dataflow Gen2
  • Creating and naming a new pipeline
  • Adding pipeline activities and configuring Copy Data Assistant
  • Connecting to HTTP data source (GitHub raw Excel)
  • Configuring Excel file format and previewing data
  • Selecting OneLake Lakehouse as destination and defining load settings
  • Mapping data types at destination
  • Validating and running the pipeline
  • Adding Dataflow activity and setting execution order
  • Setting up failure notifications via Outlook
  • Scheduling pipeline runs for automation

README FILE: click here


Class 8: Working with Notebooks and PySpark

  • Introduction to notebooks in Microsoft Fabric
  • Creating and using notebooks within a Lakehouse
  • Loading data from Lakehouse tables into PySpark DataFrames
  • Performing transformations and analysis using PySpark
  • Demonstrated different types of joins using PySpark:
    • Inner Join
    • Left Join
    • Right Join
    • Full Outer Join
    • Left Anti Join
    • Cross Join
  • Filtering, selecting columns, dropping columns, and aggregating data using PySpark
  • Calculating Gross Sales, COGS, and Discount using withColumn
  • Saving a DataFrame as a Delta table in PySpark, including overwrite mode

Notebook Link: click here


Week 5

Class 9: Working with Direct Lake Semantic Model

  • Workspace creation in Fabric
  • Lakehouse setup with Master and Fact schemas
  • Creating table shortcuts from Lakehouse and Warehouse
  • Configuring a semantic model using Direct Lake
  • Creating and activating table relationships
  • Finalizing model for reporting
  • Creating a report from the model

Notebook Link: click here


Class 10: KQL Queries and Dashboard Making

  • Working with KQL Queries
  • Building a Real-Time Dashboard

Link: click here
