In this learning path, you will learn how to implement databases in Azure. Topics will include designing and deploying databases using SQL Database and SQL Data Warehouse, along with more advanced topics in SQL performance and troubleshooting.
In this learning path, you will learn how to build and architect big data solutions in Microsoft Azure. Topics will include architecting solutions using HDInsight, machine learning, visualizing data with Power BI, understanding lambda architecture patterns, and IoT data ingestion. This path will help you prepare for Exam 70-475: Designing and Implementing Big Data Platform Solutions and for your Microsoft certification.
In this learning path, you will learn how to build and architect SQL-focused solutions in Microsoft Azure. Topics will include SQL Server in Azure IaaS, SQL Database, and SQL Data Warehouse. This course will help you prepare for Exam 70-473: Designing and Implementing Cloud Data Platform Solutions and for your Microsoft certification.
This track has a collection of demonstrations, presentations, and interactive labs designed to prepare you for the Microsoft DP-100 exam.
In this course, you will explore Spark internals and architecture. The course will start with a brief introduction to Scala. Using the Scala programming language, you will be introduced to the core functionality and use cases of Apache Spark, including Spark SQL, Spark Streaming, MLlib, and GraphFrames.
In this module, attendees will learn how to design solutions using Azure Infrastructure as a Service Components. This module will focus on core capabilities, use cases, and general best practices as well as discuss peripheral services such as Azure Backup and Site Recovery.
This course is a deep dive into Azure SQL Database performance. We will look at designing an Azure SQL Database architecture for performance, examine performance-specific features of Azure SQL Database, and cover monitoring and troubleshooting.
This module will cover all aspects of big data storage and batch processing. We will start by making the case for big data in Azure. Then we will look at Azure service topics to include Blob Storage, Azure Data Lake Store, Azure Data Lake Analytics, and HDInsight clusters running Hadoop, Hive, Interactive Hive (LLAP) and Spark. Storage topics will focus on choosing the right storage, configuring storage and storage optimization. We will also cover Big Data scenarios including batch processing, interactive clusters, multi-cluster deployments and on-demand clusters.
Define and Prepare the Development Environment - Course One of DP-100 Exam Preparation
The student will learn how Azure services can support the data science process. They’ll explore common architectures, learn to assess business goals and constraints for determining the correct environment, and set up the relevant development environments to support data science deployments in Azure.
Developing Models - Course Four of DP-100 Exam Preparation
The student will learn how to develop robust models, starting from selecting the right metric to meet business goals, through building tuned models, to evaluating the models produced for fitness.
This course teaches the fundamentals of the relational database model and how to access data stored in relational databases. It will give students an understanding of relational database concepts and teach the practical application of these concepts through the T-SQL programming language for Microsoft SQL Server and Azure SQL Database.
This course explores the NoSQL storage options available within the Microsoft Azure Cosmos DB database service. Formerly DocumentDB, Azure Cosmos DB is no longer just a document-based NoSQL store; it includes support for all four primary NoSQL data models (Document, Graph, Key/Value, Column). In addition to learning about NoSQL with Cosmos DB, students will also learn about the cloud-native features that make Cosmos DB a great NoSQL database-as-a-service in the Microsoft Azure cloud.
In this hands-on course, students will learn about Azure SQL Data Warehouse. This course will review basic architecture of Azure SQL Data Warehouse. We will cover tools used with Azure SQL Data Warehouse, loading SQL Data Warehouse and basic workload management in SQL Data Warehouse.
In the course Introduction to Azure SQL Database, we will discuss the configuration, performance, security, availability, recovery, and automation of Azure SQL Database. We will also review hybrid solutions with SQL Server Stretch Database. This course will partially help prepare you for Exam 70-473: Designing and Implementing Cloud Data Platform Solutions, and will also help you prepare for Exam 70-533: Implementing Azure Infrastructure Solutions and Exam 70-532: Developing Azure Solutions.
This course introduces students to Azure Data Factory V2. Students will learn about the different phases of a Data Factory Pipeline. Students will then cover Data Factory Architecture, terminology, the copy activity, file formats, integration runtimes, scheduling and triggers, and data factory management.
This training provides an overview of Azure Databricks and Spark. In this course you will learn where Azure Databricks fits in the big data landscape in Azure. Key features of Azure Databricks such as Workspaces and Notebooks will be covered. Students will also learn the basic architecture of Spark and cover basic Spark internals including core APIs, job scheduling and execution. This class will prepare developers and administrators for more advanced work in Azure Databricks such as Python or Scala development.
This module will provide an overview of big data, IoT and machine learning solutions in Azure. We will define the meaning of big data and look at the reasons why you might need a big data solution. We will then move on to a discussion of the analytics maturity model to understand how machine learning extracts value from big data. Next, we will review the lambda architecture which is the dominant architecture for big data solutions. We will look at the Azure components used in big data solutions and how they fit together to build an end-to-end lambda architecture in Azure. Finally, we will wrap up with a discussion of the Cortana Intelligence Suite and the value that it brings to big data and analytics solutions in Azure.
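The lambda architecture the module describes combines a batch layer, a speed layer, and a serving layer that merges the two at query time. As a rough illustration of that merge (not the module's own code; the service mapping and all names and numbers below are invented), plain Python dictionaries can stand in for the batch and real-time views:

```python
# Minimal sketch of the lambda architecture's three layers, using plain
# Python dicts as stand-ins for real Azure services (e.g. Data Lake Store
# feeding the batch layer, Event Hubs/Stream Analytics feeding the speed
# layer). All sensor names and counts are hypothetical.

# Batch layer: a precomputed view over all historical events.
batch_view = {"sensor-1": 120, "sensor-2": 45}  # total readings per sensor

# Speed layer: an incremental view over events that arrived
# after the last batch recomputation.
realtime_view = {"sensor-1": 3, "sensor-3": 7}

def query(sensor_id):
    """Serving layer: merge batch and real-time views at query time."""
    return batch_view.get(sensor_id, 0) + realtime_view.get(sensor_id, 0)

print(query("sensor-1"))  # 123: batch total plus recent events
print(query("sensor-3"))  # 7: seen so far only by the speed layer
```

The point of the pattern is that the batch layer periodically recomputes an accurate view over everything, while the speed layer keeps results current in between recomputations.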
Performing Feature Engineering - Course Three of DP-100 Exam Preparation
The student will learn how to develop effective and reusable features ready for modeling. Using manual and then automated techniques, the data scientist will be able to handle core data types using scikit-learn and Microsoft Python libraries like MMLSpark and the Azure Machine Learning Data Prep SDK.
This course builds on your Power BI skills and walks you through the interfaces of both the Online and Desktop offerings before embarking on a journey that will show you how to ingest data, transform data, and create reports and dashboards, then publish and use your data sets, reports, and dashboards in the Power BI online tenant. The course will help prepare students to take the Microsoft 70-778, Analyzing and Visualizing Data with Power BI certification exam.
Querying Data with T-SQL
This course serves as an introduction to the T-SQL programming language. This course is designed to give students a strong foundation in the T-SQL language which is used by all variants of SQL Server both on-premises and in the cloud.
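T-SQL itself runs against a SQL Server instance, but the core SELECT, WHERE, and ORDER BY constructs the course covers can be previewed with Python's built-in sqlite3 module, whose SQL dialect shares that core syntax. The table and data below are invented for illustration:

```python
import sqlite3

# Hypothetical sample data. SQLite's dialect differs from T-SQL in places,
# but the basic SELECT / WHERE / ORDER BY constructs carry over.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (Name TEXT, ListPrice REAL)")
conn.executemany(
    "INSERT INTO Products VALUES (?, ?)",
    [("Road Bike", 1200.0), ("Helmet", 35.0), ("Gloves", 15.0)],
)

# Filter and sort, just as you would in T-SQL.
rows = conn.execute(
    "SELECT Name, ListPrice FROM Products "
    "WHERE ListPrice > 20 ORDER BY ListPrice DESC"
).fetchall()

print(rows)  # [('Road Bike', 1200.0), ('Helmet', 35.0)]
```

The same statement, typed into SQL Server Management Studio against a real table, would behave the same way; T-SQL then adds its own extensions (variables, procedures, TOP, and so on) on top of this foundation.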
The Real-Time Ingestion and Processing in Azure course covers information about implementing real-time event stream ingestion and processing within Microsoft Azure. The course starts with an overview of the Lambda Architecture and what a Message Broker is used for. The course continues to cover the Azure Event Hubs and Azure IoT Hub services used for event stream ingestion, and Azure Stream Analytics and HDInsight for integrating real-time event processing. Finally, the course finishes with an overview of a few example architectures to give a better perspective on architecting Real-Time Ingestion and Processing solutions within the Microsoft Azure cloud. This course should help in preparation for the 70-534 exam, Architecting Microsoft Azure Solutions.
In this hands-on course, students will learn about running SQL Server in Azure. This course will review basic Azure networking and storage using the Azure Resource Manager architecture to prepare students for building SQL Server solutions in Azure. The primary focus of this course is SQL Server cloud and hybrid-cloud solutions on Azure Infrastructure as a Service (IaaS). This course will cover best practices for deploying SQL Server on Azure Virtual Machines including standalone SQL Servers and hybrid Availability Groups. The course will look at SQL Server features that take advantage of Azure Storage such as SQL Server Managed Backup, Azure Snapshot Backups, and SQL Server data files hosted on Azure Storage.
In this lab, you will learn how to provision a Databricks workspace, an Azure storage account, and a Spark cluster. You will then use the Spark cluster to explore data using Spark Resilient Distributed Datasets (RDDs) and Spark DataFrames.
In this hands-on lab, you will learn how Trey Research can leverage Deep Learning technologies to scan through their vehicle specification documents to find compliance issues with new regulations. You will standardize the model format to ONNX and observe how this simplifies inference runtime code, enabling pluggability of different models, targeting a broad range of runtime environments, and, most importantly, improving inferencing speed over the native model. You will build a DevOps pipeline to coordinate retrieving the latest best model from the model registry, packaging the web application, and deploying the web application and inferencing web service. After a first successful deployment, you will make updates to both the model and the web application, and execute the pipeline again to achieve an updated deployment. You will also learn how to monitor the model’s performance after it is deployed so Trey Research can be proactive with performance issues. At the end of this hands-on lab, you will be better able to implement end-to-end solutions that fully operationalize deep learning models, inclusive of all application components that depend on the model.
In this lab, you will create an Azure Data Lake Store Gen2 account. You will learn to lock down and manage access to the Data Lake Store, taking advantage of both role-based access control and Data Lake Store Azure AD integration. Finally, you will perform a bulk ingest using the Hadoop DistCp utility.
In this lab, you will create multiple Azure Cosmos DB containers. Some of the containers will be unlimited and configured with a partition key, while others will be fixed-sized. You will then use the SQL API and .NET SDK to query specific containers using a single partition key or across multiple partition keys.
In this lab you will create an Azure SQL Database using the Azure Portal and connect to it using SQL Server Management Studio. You will then migrate a SQL Server database hosted on a virtual machine to an Azure SQL Database.
Today, data is being collected in ever-increasing amounts, at ever-increasing velocities, and in an ever-expanding variety of formats. This explosion of data is colloquially known as the Big Data phenomenon. In order to gain actionable insights into big-data sources, new tools need to be leveraged that allow the data to be cleaned, analyzed, and visualized quickly and efficiently. Azure HDInsight provides a solution to this problem by making it exceedingly simple to create high-performance computing clusters provisioned with Apache Spark and members of the Spark ecosystem. Rather than spend time deploying hardware and installing, configuring, and maintaining software, you can focus on your research and apply your expertise to the data rather than the resources required to analyze that data. Apache Spark is an open-source parallel-processing platform that excels at running large-scale data analytics jobs. Spark's combined use of in-memory and disk data storage delivers performance improvements that allow it to process some tasks up to 100 times faster than Hadoop. With Microsoft Azure, deploying Apache Spark clusters becomes significantly simpler and gets you working on your data analysis that much sooner. In this lab, you will experience HDInsight with Spark first-hand. After provisioning a Spark cluster, you will use the Microsoft Azure Storage Explorer to upload several Jupyter notebooks to the cluster. You will then use these notebooks to explore, visualize, and build a machine-learning model from food-inspection data (more than 100,000 rows of it) collected by the city of Chicago. The goal is to learn how to create and utilize your own Spark clusters, experience the ease with which they are provisioned in Azure, and, if you're new to Spark, get a working introduction to Spark data analytics.
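The lab itself runs PySpark on the cluster, but Spark's core programming model, chained functional transformations over a distributed collection, can be previewed in plain Python. The inspection records below are invented, and the list stands in for what would be a distributed RDD or DataFrame:

```python
from functools import reduce

# Made-up inspection records: (establishment name, inspection result).
inspections = [
    ("Cafe A", "Pass"), ("Diner B", "Fail"),
    ("Cafe C", "Pass"), ("Grill D", "Fail"), ("Bar E", "Pass"),
]

# The same filter -> map -> reduce pipeline you would express over a
# Spark RDD, here applied to an in-memory list.
passed = filter(lambda r: r[1] == "Pass", inspections)   # keep passing rows
names = map(lambda r: r[0], passed)                      # project the name
count = reduce(lambda acc, _: acc + 1, names, 0)         # count them

print(count)  # 3 establishments passed
```

In Spark the same chain would be lazy and executed in parallel across the cluster's worker nodes, which is what makes it viable at the 100,000-row (and far larger) scale the lab describes.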
In this lab, you will learn to build, monitor, manage, and troubleshoot data pipelines with Azure Data Factory V2. You will learn to use the Copy Data wizard to build a pipeline with no coding. You will build a custom pipeline to copy data from Blob storage to a table in Azure SQL Database. You will build a tumbling window pipeline to pick up data on a daily basis. Finally, you will learn to use the monitoring and management tools to troubleshoot pipeline failures.
In this lab, you will create a virtual network that will allow the virtual machines you create to securely connect with each other. You will then create two virtual machines and specify the virtual network configuration and the availability set configuration along with storage for the virtual machine.
In this lab, we will explore the use of columnstore indexes in Azure SQL Database. We will evaluate the performance improvements we get when we implement columnstore indexes on tables with analytical workloads.
In this lab, you will explore real-time operational analytics using Azure SQL Database. You will evaluate the performance improvements you will get when you add updateable nonclustered columnstore indexes on top of standard tables as well as memory-optimized tables.
In this lab, you will explore the new SQL Server 2016 real-time operational analytics feature. You will evaluate the performance improvements you will get when you add updateable nonclustered columnstore indexes on top of disk-based tables as well as memory-optimized tables.
In this lab, you will explore columnstore indexes in SQL Server 2016. You will evaluate the performance improvements you will get when you implement columnstore indexes on tables for your analytical workloads.
Spark structured streaming enables you to use the dataframe API to read and process an unbounded stream of data. This kind of processing is used in real-time scenarios to aggregate data over temporal intervals or windows. You can use Spark to process streaming data from a wide range of sources, including Azure Event Hubs, Kafka, and others. In this lab, you will run a Spark job to continually process a real-time stream of data.
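The temporal windowing described here, assigning each event to a fixed-width interval and aggregating within it, can be sketched in plain Python. This is only a simulation of one micro-batch (a real job would read continuously from Event Hubs or Kafka via Spark structured streaming), and the event data is invented:

```python
from collections import defaultdict

# Made-up stream events: (timestamp_seconds, value).
events = [(1, 10), (4, 20), (7, 5), (12, 30), (14, 2), (21, 8)]

WINDOW = 10  # tumbling window width in seconds

# Assign each event to the window containing its timestamp and sum values.
totals = defaultdict(int)
for ts, value in events:
    window_start = (ts // WINDOW) * WINDOW
    totals[window_start] += value

print(dict(totals))  # {0: 35, 10: 32, 20: 8}
```

In Spark structured streaming the same idea is expressed declaratively (a groupBy over a window of the event-time column), and the engine keeps the per-window state across micro-batches as new events arrive.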
In this lab, we will examine the use of In-Memory OLTP in Azure SQL Database. We will compare performance across standard and in-memory architectures including memory optimized tables and natively compiled stored procedures.
In this lab, you will explore the new SQL Server 2016 In-Memory OLTP feature. You will evaluate the performance improvements you will get when you migrate disk-based tables and interpreted T-SQL stored procedures into memory-optimized tables and natively-compiled stored procedures, respectively.
In this lab, you will use Visual Studio and ASP.NET to learn how to use Cosmos DB as a backend for an MVC application. You will learn how to programmatically read and write data, create and call user-defined functions, and understand management capabilities such as users and permissions, monitoring, and scalability options.
In this lab, you will deploy and configure an on-premises gateway to work with Azure Logic Apps. The on-premises data gateway acts as a bridge, providing quick and secure data transfer between on-premises data (data that is not in the cloud) and the Power BI, Microsoft Flow, Logic Apps, and PowerApps services.
Spark includes an API named Spark MLLib (often referred to as Spark ML), which you can use to create machine learning solutions. Machine learning is a technique in which you train a predictive model using a large volume of data so that when new data is submitted to the model it can predict unknown values. The most common types of machine learning are supervised learning and unsupervised learning. In a supervised learning scenario, you start with a large volume of data that includes both features (categorical and numeric values that describe characteristics of the entity you’re trying to predict something about) and labels (the values your model will predict). Training the model involves applying a statistical algorithm that fits the features to the labels. Because your initial data includes known values for the labels, you can train the model and test its accuracy with these known label values, giving you confidence that the model will work accurately with new data for which the label values aren’t known. Unsupervised learning is a technique in which there are no known label values, and the model is trained to group (or cluster) similar entities together based on their features. In this lab, we’ll focus on supervised learning, and specifically a type of machine learning called classification, in which you train a model to identify which category, or class, an entity belongs to. You will train a classifier to use features of flights that are en route to an airport and predict whether they will be late or on time.
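The lab builds its classifier with Spark ML on real flight data; the underlying idea of fitting features to labels can be shown with a toy nearest-centroid classifier in plain Python. The single feature, the labels, and every number below are invented for illustration:

```python
# Toy training set: feature = departure delay in minutes, label = outcome.
features = [2, 5, 8, 40, 55, 70]
labels = ["on-time", "on-time", "on-time", "late", "late", "late"]

# "Training": compute the mean feature value (centroid) for each class.
centroids = {}
for cls in set(labels):
    vals = [f for f, l in zip(features, labels) if l == cls]
    centroids[cls] = sum(vals) / len(vals)

def predict(delay):
    # Classify a new flight by the class whose centroid is closest.
    return min(centroids, key=lambda cls: abs(delay - centroids[cls]))

print(predict(4))   # on-time
print(predict(60))  # late
```

A real Spark ML classifier works on many features at once and uses a stronger statistical algorithm (logistic regression, decision trees, and so on), but the workflow is the same: fit parameters to labeled examples, then use the fitted model to predict labels for new data.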
In this lab, you will use PowerShell to manage Azure SQL Database. You will create a logical Azure SQL Server via PowerShell. You will then manage the firewall to allow remote connectivity to allow for client access. You will restore a database from an existing BACPAC file. Finally, you will use PowerShell to scale the database performance and pricing tier.
In this lab, you will learn how to configure and manage an Azure Cosmos DB account (formerly Azure DocumentDB), including how to query and manage JSON documents within a collection. Among the topics covered are using SQL language syntax to perform document queries that return JSON results, and implementing and testing global data replication and failover.
In this lab, we will walk through management and monitoring of an Elastic Pool. First, we will create an Elastic Pool and add our databases to the pool. Then we will monitor the performance of our pool using T-SQL scripts and the Azure Portal.
In this lab, you will configure and manage the Query Store in SQL Server 2016 to collect runtime statistics, queries, query plan history, and other workload history within the database to assist with troubleshooting query performance issues. You will then identify and resolve poorly performing queries in your database using the SQL Server 2016 Query Store. You will also identify query plan regressions and learn how to address them with information gathered from the Query Store.
In this lab, you will query an Azure Cosmos DB database instance using the SQL language. You will use features common in SQL such as projection using SELECT statements and filtering using WHERE clauses. You will also get to use features unique to Azure Cosmos DB’s SQL API such as projection into JSON, intra-document JOIN and filtering to a range of partition keys.
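The projection and filtering behavior of a Cosmos DB SQL query such as SELECT c.name FROM c WHERE c.age > 21 can be mimicked over a local list of documents in plain Python. The documents and field names below are invented, and the real lab runs its queries against the service rather than locally:

```python
# Invented documents, standing in for items in a Cosmos DB container.
docs = [
    {"id": "1", "name": "Ada", "age": 36},
    {"id": "2", "name": "Grace", "age": 19},
    {"id": "3", "name": "Alan", "age": 41},
]

# Local equivalent of: SELECT c.name FROM c WHERE c.age > 21
# - the WHERE clause filters documents,
# - the SELECT clause projects each match into a new JSON object.
results = [{"name": d["name"]} for d in docs if d["age"] > 21]

print(results)  # [{'name': 'Ada'}, {'name': 'Alan'}]
```

This is the key difference from relational SQL that the lab highlights: the result of a Cosmos DB query is not a flat rowset but a set of JSON values projected from the matching documents.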
In this lab, you learn about deploying SQL Server on Azure virtual machines. This lab will walk you through some common setup and configuration tasks for running SQL Server in Azure infrastructure as a service.
In this lab, you will learn the fundamentals of creating databases, tables, views and relationships using Microsoft SQL Server. You will gain experience using SQL Server Management Studio (SSMS) and learn introductory concepts of writing T-SQL queries.
In this lab, you will train a classification model using Python in an Azure Machine Learning Notebook VM. The model will predict what type of bicycle a customer is most likely to buy. Some exploratory data analysis and feature engineering will be required.
In this lab, you will train the model you developed in the last lab on Azure using the Azure Machine Learning service and its Python SDK. After it has been trained, you will register the model to the registry and perform the steps necessary to deploy your model to Azure Machine Learning service where it can be leveraged by your company’s applications.
In this lab, you will use the .NET SDK to tune an Azure Cosmos DB request to optimize performance of your application.