This course covers configuring, managing and deploying monitoring for Azure Storage and data store solutions. We start with an overview of monitoring concepts, then focus on monitoring Azure Storage, Azure Data Lake, Azure Data Warehouse, Azure SQL DB and other services.
In this lab, you will use the Azure PowerShell cmdlets to manage common tasks with Azure Blob Storage. You will learn how to create an Azure Storage Account, create and manage containers, upload and download blobs, and manage leases and snapshots.
This course covers configuring, managing and deploying Azure data processing solutions. We start with an overview of big data environments, including Hadoop clusters, then cover how to plan for and implement Azure Databricks, Azure Stream Analytics, Event Hubs, Azure Data Factory and how these fit with Azure Data Warehouse solutions.
In this hands-on lab, you will step through 10 exercises where you will use Azure Machine Learning to accomplish several tasks that are essential to the DP-100 Designing and Implementing a Data Science Solution on Azure exam. You will learn how to create and deploy a training pipeline, run experiments and manage models, work with data stores and data sets, work with environments and compute targets, create and configure a publishing pipeline, and automate machine learning, as well as learn how to monitor with Application Insights and detect data drift.
In this lab you will learn how to enable and disable distributed denial of service (DDoS) protection, and use telemetry to mitigate a DDoS attack with Azure DDoS Protection Standard. DDoS Protection Standard protects Azure resources such as virtual machines, load balancers, and application gateways that have an Azure public IP address assigned to them.
Data discovery and classification provides advanced capabilities built into Azure SQL Database for discovering, classifying, labeling, and reporting the sensitive data in your databases. In this lab, you will learn the basics of how to classify your Azure SQL Database.
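The Azure feature itself runs inside SQL Database, but the underlying idea of discovery — matching column names against patterns that suggest sensitive data and attaching a label — can be sketched in plain Python. This is a conceptual illustration only; the rules and labels below are made up for the example and are not the Azure SQL classification taxonomy.

```python
import re

# Illustrative discovery rules: a pattern that suggests sensitive data,
# paired with a label to attach. These are assumptions for the sketch,
# not the actual Azure SQL Database classification rules.
RULES = [
    (re.compile(r"email", re.I), "Contact Info"),
    (re.compile(r"ssn|social.?security", re.I), "National ID"),
    (re.compile(r"card.?number|ccn", re.I), "Credit Card"),
]

def classify_columns(columns):
    """Return a {column_name: label} dict for every column whose name
    matches one of the discovery rules; unmatched columns are omitted."""
    labels = {}
    for col in columns:
        for pattern, label in RULES:
            if pattern.search(col):
                labels[col] = label
                break
    return labels

print(classify_columns(["CustomerEmail", "SSN", "OrderTotal"]))
```

In the real feature, the recommendations produced by discovery are reviewed and persisted as metadata on the columns, which then feeds auditing and reporting.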
In this lab you will learn how to create, publish, and assign an Azure Blueprint. You will also learn how to update a Blueprint draft, publish additional Blueprint versions, and update a Blueprint assignment. In addition, this lab will explore Blueprint locks and how they are implemented by the Blueprints service. This lab also shows how to export your Blueprint as JSON files, and how to edit these files to configure deployment dependencies and to streamline Blueprint parameters. In this lab, you will assign your Blueprint using a user-assigned Managed Identity. This provides you with full control over the Azure RBAC permissions used for Blueprint assignment. It also avoids the need for the user to have subscription Owner permissions, which are required when using a system-assigned Managed Identity.
The students will be able to describe and demonstrate the capabilities that Azure Cosmos DB can bring to an organization. They will be able to create a Cosmos DB instance and show how to upload and query data through the portal and through a .NET application. They will then be able to demonstrate how to enable global scale for the Cosmos DB database.
The students will be able to provision an Azure SQL Database and Azure Synapse Analytics server and be able to issue queries against one of the instances that are created. They will also be able to integrate a data warehouse with a number of other data platform technologies and use PolyBase to load data from one data source into Azure Synapse Analytics.
The students will be able to describe what data streams are and how event processing works, and choose an appropriate data stream ingestion technology for the AdventureWorks case study. They will provision the chosen ingestion technology and integrate it with Stream Analytics to create a solution that works with streaming data.
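Stream Analytics jobs are authored as queries in the Azure portal rather than in application code, but the core pattern they implement — aggregating an unbounded event stream over fixed time windows — can be sketched in plain Python. Everything below is an illustrative stand-in (the event tuples and function name are assumptions for the sketch, not the Azure API):

```python
from collections import defaultdict

# Illustrative events: (timestamp_seconds, sensor_id, value) tuples standing
# in for messages arriving from an ingestion service such as Event Hubs.
events = [
    (1, "s1", 10), (3, "s2", 20), (7, "s1", 30),
    (11, "s1", 40), (14, "s2", 50),
]

def tumbling_window_avg(events, window_seconds):
    """Group events into fixed, non-overlapping time windows and average
    the values in each window -- the same idea as a tumbling-window
    aggregate in a streaming query, expressed in ordinary Python."""
    windows = defaultdict(list)
    for ts, _sensor, value in events:
        windows[ts // window_seconds].append(value)
    return {w: sum(vs) / len(vs) for w, vs in sorted(windows.items())}

print(tumbling_window_avg(events, 10))
# Window 0 holds the events at t=1, 3, 7; window 1 holds t=11, 14.
```

A real streaming engine does this continuously over data that never ends, emitting each window's result as its time boundary passes; the batch version above just makes the windowing arithmetic concrete.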
In this module, you will learn how Azure Data Factory can be used to orchestrate data movement across a wide range of data platform technologies. You will be able to explain the capabilities of the technology and set up an end-to-end data pipeline that ingests data from SQL Database and loads it into SQL Data Warehouse. You will also demonstrate how to call a compute resource.
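Data Factory pipelines are defined visually or as JSON rather than in application code, but the orchestration pattern — a chain of activities run in dependency order, each consuming the previous stage's output — can be sketched in plain Python. The functions and data below are invented for the illustration and do not call Azure:

```python
# A minimal orchestration sketch: each "activity" is a plain function, and
# the pipeline runs them in order, passing one stage's output to the next --
# loosely analogous to chained activities in a Data Factory pipeline.

def ingest():
    # Stand-in for a copy activity reading rows from a source database.
    return [{"id": 1, "amount": "12.50"}, {"id": 2, "amount": "7.25"}]

def transform(rows):
    # Stand-in for a transformation step (e.g. type conversion handed
    # off to a compute resource).
    return [{**r, "amount": float(r["amount"])} for r in rows]

def load(rows):
    # Stand-in for a sink activity writing to the warehouse.
    return {r["id"]: r["amount"] for r in rows}

def run_pipeline(activities):
    """Run activities in order, threading each result into the next."""
    data = None
    for activity in activities:
        data = activity() if data is None else activity(data)
    return data

result = run_pipeline([ingest, transform, load])
print(result)
```

The value of the real service is everything around this loop — scheduling, retries, monitoring, and connectors to dozens of stores — but the ingest-transform-load chain is the shape of the pipeline you build in the module.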
In this lab you will be able to explain why Azure Databricks can be used to help in Data Science projects. You will provision an Azure Databricks instance and then create a workspace that will be used to perform a simple data preparation task against a Data Lake Storage Gen2 store. Finally, you will walk through performing transformations using Azure Databricks.
In this lab, you will create Azure Storage accounts and a Data Lake Storage account, and explain the difference between Data Lake Storage version 1 and version 2. You will also demonstrate how to perform data loads into the data storage of choice.
In this course you will learn how to deploy containers to Azure using the Azure DevOps classic pipeline editor. With code in your Azure Repos, you will learn to build a Dockerfile for the application and build and push the image to a private Azure Container Registry. From there, follow along to learn release pipelines for Azure Container Instances and Web App for Containers. In your last release pipeline, take Azure Kubernetes Service to the next level by learning how to build a Helm chart and deploy to AKS with Helm in Azure DevOps Pipelines. Finally, set your build and release triggers to give the AKS pipeline Continuous Integration and Continuous Delivery functionality.
This course covers implementing Azure Data Storage services. We start off by reviewing Azure Portal and Storage concepts, then move on to implementing Azure SQL Data Warehouse, Azure SQL DB, Azure Data Lake, Azure Storage, and Azure Cosmos DB.