Skill Me Up provides comprehensive Microsoft training for data specialists, focusing on Microsoft Azure for data science, big data, data processing, and document-based or relational workloads.
In this learning path, you will learn key concepts about the cloud and various Microsoft Azure services. From there, you will learn core concepts of various PaaS and IaaS services, including management tools. This course will also cover several key concepts for security and compliance, as well as a brief look at Azure pricing and support. This course will help you prepare for exam AZ-900, Microsoft Azure Fundamentals.
In this learning path, you will learn the basics of data science, including what data science is, some of the common programming languages used in data science (R and Python), and an introduction to machine learning.
In this learning path, you will learn how to build and architect SQL-focused solutions in Microsoft Azure. Topics will include SQL Server in Azure IaaS, SQL Database, and SQL Data Warehouse. This course will help you prepare for exam 70-473, Designing and Implementing Cloud Data Platform Solutions, and for your Microsoft certification.
In this learning path, you will learn how to build and architect big data solutions in Microsoft Azure. Topics will include architecting solutions using HDInsight, machine learning, visualizing data with Power BI, understanding lambda architecture patterns, and IoT data ingestion. This path will help you prepare for exam 70-475, Designing and Implementing Big Data Platform Solutions, and for your Microsoft certification.
This path contains courses and labs designed to help you learn about performing data science using services in Microsoft Azure such as Azure ML.
In this learning path, you will learn how to implement Azure databases. Topics will include how to design and deploy databases using SQL DB and SQL Data Warehouse, along with more advanced topics of performance and troubleshooting for SQL.
In this course, you will explore the Spark Internals and Architecture. The course will start with a brief introduction to Scala. Using the Scala programming language, you will be introduced to the core functionalities and use cases of Apache Spark including Spark SQL, Spark Streaming, MLlib, and GraphFrames.
In this module, attendees will learn how to design solutions using Azure Infrastructure as a Service Components. This module will focus on core capabilities, use cases, and general best practices as well as discuss peripheral services such as Azure Backup and Site Recovery.
This course is an introduction to Microsoft Azure Machine Learning Services. In this course you will learn to navigate the AML Services interface, create notebook servers, create compute clusters, manage AML Services from a notebook, deploy models, and create an Automated Machine Learning experiment.
This course is a deep dive into Azure SQL Database performance. We will look at designing an Azure SQL Database architecture for performance, examine performance-specific features of Azure SQL Database, and cover monitoring and troubleshooting.
This module will cover all aspects of big data storage and batch processing. We will start by making the case for big data in Azure. Then we will look at Azure service topics to include Blob Storage, Azure Data Lake Store, Azure Data Lake Analytics, and HDInsight clusters running Hadoop, Hive, Interactive Hive (LLAP) and Spark. Storage topics will focus on choosing the right storage, configuring storage and storage optimization. We will also cover Big Data scenarios including batch processing, interactive clusters, multi-cluster deployments and on-demand clusters.
In this course, students will gain knowledge and skills needed to recommend and design Azure Data Solutions based on requirements. The solution technologies will include both relational and non-relational cloud data stores.
Design Data Processing Solutions - Course Two of DP-201 Exam Preparation
This course covers the design of batch processing solutions and real-time processing solutions.
Design for Data Security and Compliance - Course Three of DP-201 Exam Preparation
This course covers the design of security for source data access and for data policies and standards.
Implement Data Storage Solutions - Course One of DP-200 Exam Preparation
This course covers implementing Azure Data Storage services. We start off by reviewing Azure Portal and Storage concepts, then move on to implementing Azure SQL Data Warehouse, Azure SQL DB, Azure Data Lake, Azure Storage, and Azure Cosmos DB.
This course explores the NoSQL storage options available within the Microsoft Azure Cosmos DB database service. Formerly DocumentDB, Azure Cosmos DB is no longer just a document-based NoSQL store; it includes support for all four primary NoSQL data models (document, graph, key/value, and column). In addition to learning about NoSQL with Cosmos DB, students will also learn about the cloud-native features that make Cosmos DB a great NoSQL database-as-a-service in the Microsoft Azure cloud.
In this hands-on course, students will learn about Azure SQL Data Warehouse. This course will review basic architecture of Azure SQL Data Warehouse. We will cover tools used with Azure SQL Data Warehouse, loading SQL Data Warehouse and basic workload management in SQL Data Warehouse.
In the course Introduction to Azure SQL Database, we will discuss the configuration, performance, security, availability, recovery, and automation of Azure SQL Database. We will also review hybrid solutions with SQL Server Stretch Database. This course will partially prepare you for exam 70-473, Designing and Implementing Cloud Data Platform Solutions, as well as for exam 70-533, Implementing Azure Infrastructure Solutions, and exam 70-532, Developing Azure Solutions.
This course introduces students to Azure Data Factory V2. Students will learn about the different phases of a Data Factory Pipeline. Students will then cover Data Factory Architecture, terminology, the copy activity, file formats, integration runtimes, scheduling and triggers, and data factory management.
This training provides an overview of Azure Databricks and Spark. In this course you will learn where Azure Databricks fits in the big data landscape in Azure. Key features of Azure Databricks such as Workspaces and Notebooks will be covered. Students will also learn the basic architecture of Spark and cover basic Spark internals including core APIs, job scheduling and execution. This class will prepare developers and administrators for more advanced work in Azure Databricks such as Python or Scala development.
This course is an introduction to Python. In this course you will learn which IDE is right for you, print statements, data types, control flow, Python functions and anonymous functions, methods, file I/O, and an introduction to Python packages.
This course is an introduction to the R language. We start off with an introduction to R versions and R editions, then move to the language itself. From there we will dive into one of R's strongest features: graphics. Using base R graphics and ggplot2, you will learn how to get started and how to create your own visualizations.
This course looks at services and tools used for machine learning with Azure. This course will introduce students to Machine Learning Server, SQL Server Machine Learning Services, Cognitive Toolkit, the Data Science Virtual Machine, and the Azure AI Gallery. This course will assist you in preparing for the "Using Other Services for Machine Learning" section of the "Perform Cloud Data Science with Azure Machine Learning" Microsoft Exam 70-774.
Manage and Develop Data Processing - Course Two of DP-200 Exam Preparation
This course covers configuring, managing and deploying Azure data processing solutions. We start with an overview of big data environments, including Hadoop clusters, then cover how to plan for and implement Azure Databricks, Azure Stream Analytics, Event Hubs, Azure Data Factory and how these fit with Azure Data Warehouse solutions.
Monitor and Optimize Data Solutions - Course Three of DP-200 Exam Preparation
This course covers configuring, managing and deploying monitoring for Azure Storage and data store solutions. We start with an overview of monitoring concepts, then focus on monitoring Azure Storage, Azure Data Lake, Azure Data Warehouse, Azure SQL DB and other services.
This module will provide an overview of big data, IoT and machine learning solutions in Azure. We will define the meaning of big data and look at the reasons why you might need a big data solution. We will then move on to a discussion of the analytics maturity model to understand how machine learning extracts value from big data. Next, we will review the lambda architecture which is the dominant architecture for big data solutions. We will look at the Azure components used in big data solutions and how they fit together to build an end-to-end lambda architecture in Azure. Finally, we will wrap up with a discussion of the Cortana Intelligence Suite and the value that it brings to big data and analytics solutions in Azure.
This course builds on your Power BI skills and walks you through the interfaces of both the Online and Desktop offerings before embarking on a journey that will show you how to ingest data, transform data, and create reports and dashboards, then publish and use your data sets, reports, and dashboards in the Power BI online tenant. The course will help prepare students to take the Microsoft 70-778, Analyzing and Visualizing Data with Power BI certification exam.
The Real-Time Ingestion and Processing in Azure course covers information about implementing real-time event stream ingestion and processing within Microsoft Azure. The course starts with an overview of the Lambda Architecture and what a Message Broker is used for. The course continues to cover the Azure Event Hubs and Azure IoT Hub services used for event stream ingestion, and Azure Stream Analytics and HDInsight for integrating real-time event processing. Finally, the course finishes with an overview of a few example architectures to give a better perspective on architecting Real-Time Ingestion and Processing solutions within the Microsoft Azure cloud. This course should help in preparation for the 70-534 exam, Architecting Microsoft Azure Solutions.
In this hands-on course, students will learn about running SQL Server in Azure. This course will review basic Azure networking and storage using the Azure Resource Manager architecture to prepare students for building SQL Server solutions in Azure. The primary focus of this course is SQL Server cloud and hybrid-cloud solutions on Azure Infrastructure as a Service (IaaS). This course will cover best practices for deploying SQL Server on Azure Virtual Machines including standalone SQL Servers and hybrid Availability Groups. The course will look at SQL Server features that take advantage of Azure Storage such as SQL Server Managed Backup, Azure Snapshot Backups, and SQL Server data files hosted on Azure Storage.
In this module, you will focus on the pricing and support models available with Microsoft, including Azure subscriptions, planning and managing costs, support options available with Azure, and the service lifecycle in Azure.
In this module, you will learn basic cloud concepts, including why cloud services, Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and the public, private, and hybrid cloud models.
In this module, you will learn the basics of the core services available within Microsoft Azure, including core Azure architectural components, core Azure services and products, Azure solutions, and Azure management tools.
In this module, you will learn about security, privacy, compliance, and trust with Microsoft Azure. You will become familiar with the following topics: securing network connectivity in Azure, core Azure identity services, security tools and features, Azure governance methodologies, monitoring and reporting in Azure, and privacy, compliance and data protection standards in Azure.
In this lab, you will learn how to provision a Databricks workspace, an Azure storage account, and a Spark cluster. You will learn to use the Spark cluster to explore data using Spark Resilient Distributed Datasets (RDDs) and Spark Dataframes.
In this lab, you will create an Azure Data Lake Store Gen2 account. You will learn to lock down and manage access to the Data Lake Store, taking advantage of both role-based access control and Data Lake Store Azure AD integration. Finally, you will perform a bulk ingest using the Hadoop DistCp utility.
In this lab, you will create multiple Azure Cosmos DB containers. Some of the containers will be unlimited and configured with a partition key, while others will be fixed-size. You will then use the SQL API and .NET SDK to query specific containers using a single partition key or across multiple partition keys.
Today, data is being collected in ever-increasing amounts, at ever-increasing velocities, and in an ever-expanding variety of formats. This explosion of data is colloquially known as the Big Data phenomenon. In order to gain actionable insights into big-data sources, new tools need to be leveraged that allow the data to be cleaned, analyzed, and visualized quickly and efficiently. Azure HDInsight provides a solution to this problem by making it exceedingly simple to create high-performance computing clusters provisioned with Apache Spark and members of the Spark ecosystem. Rather than spend time deploying hardware and installing, configuring, and maintaining software, you can focus on your research and apply your expertise to the data rather than to the resources required to analyze that data.
Apache Spark is an open-source parallel-processing platform that excels at running large-scale data analytics jobs. Spark's combined use of in-memory and disk data storage delivers performance improvements that allow it to process some tasks up to 100 times faster than Hadoop. With Microsoft Azure, deploying Apache Spark clusters becomes significantly simpler and gets you working on your data analysis that much sooner.
In this lab, you will experience HDInsight with Spark first-hand. After provisioning a Spark cluster, you will use the Microsoft Azure Storage Explorer to upload several Jupyter notebooks to the cluster. You will then use these notebooks to explore, visualize, and build a machine-learning model from food-inspection data — more than 100,000 rows of it — collected by the city of Chicago. The goal is to learn how to create and utilize your own Spark clusters, experience the ease with which they are provisioned in Azure, and, if you're new to Spark, get a working introduction to Spark data analytics.
In this lab, you will learn to build, monitor, manage, and troubleshoot data pipelines with Azure Data Factory V2. You will learn to use the Copy Data wizard to build a pipeline with no coding. You will build a custom pipeline to copy data from Blob storage to a table in Azure SQL Database. You will build a tumbling-window pipeline to pick up data on a daily basis. Finally, you will learn to use the monitoring and management tools to troubleshoot pipeline failures.
In this lab, you will configure several virtual machines that have been pre-deployed into an Azure subscription: one domain controller and two SQL Servers. You will then create an Azure Storage account that will be used as a Cloud Witness for the cluster, and a Windows Failover Cluster across the two SQL Servers with the storage account acting as a witness. You will then create a SQL Server Availability Group with an Azure Internal Load Balancer and validate that failover works correctly. Note: This lab pre-provisions several virtual machines and may take more than 30 minutes to complete provisioning.
In this lab, you will create a virtual network that will allow the virtual machines you create to securely connect with each other. You will then create two virtual machines and specify the virtual network configuration and the availability set configuration along with storage for the virtual machine.
In this lab, you will create an Azure Web App and a SQL Database and configure the popular content management system (CMS) Orchard CMS. You will then configure the web app to automatically scale based on actual CPU usage.
In this lab, we will explore the use of columnstore indexes in Azure SQL Database. We will evaluate the performance improvements gained by implementing columnstore indexes on tables with analytical workloads.
Spark structured streaming enables you to use the dataframe API to read and process an unbounded stream of data. This kind of processing is used in real-time scenarios to aggregate data over temporal intervals or windows. You can use Spark to process streaming data from a wide range of sources, including Azure Event Hubs, Kafka, and others. In this lab, you will run a Spark job to continually process a real-time stream of data.
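As a rough illustration of the windowed aggregation described above, here is a minimal pure-Python sketch of a tumbling-window sum. The event data and window size are invented for illustration; a real Spark job would express this over an unbounded stream with the dataframe API rather than a finite list.

```python
from datetime import datetime, timedelta
from collections import defaultdict

# Hypothetical sample events: (timestamp, value) pairs such as sensor readings.
events = [
    (datetime(2019, 1, 1, 0, 0, 10), 5),
    (datetime(2019, 1, 1, 0, 0, 40), 3),
    (datetime(2019, 1, 1, 0, 1, 5), 7),
    (datetime(2019, 1, 1, 0, 2, 30), 2),
]

def tumbling_window_sum(events, window=timedelta(minutes=1)):
    """Aggregate values into fixed, non-overlapping time windows."""
    totals = defaultdict(int)
    for ts, value in events:
        # Align each event to the start of the window that contains it.
        offset = (ts - datetime.min) % window
        totals[ts - offset] += value
    return dict(sorted(totals.items()))

for start, total in tumbling_window_sum(events).items():
    print(start, total)
```

A streaming engine applies the same idea continually, emitting or updating a window's aggregate as new events arrive instead of waiting for all the data.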
In this lab, you will learn the basics of importing CSV files into a Power BI model and adjusting query and column properties to prepare them for your data model. Then you will set table and column properties in the data model and create relationships between tables.
Spark includes an API named Spark MLlib (often referred to as Spark ML), which you can use to create machine learning solutions. Machine learning is a technique in which you train a predictive model using a large volume of data so that when new data is submitted to the model it can predict unknown values. The most common types of machine learning are supervised learning and unsupervised learning. In a supervised learning scenario, you start with a large volume of data that includes both features (categorical and numeric values that describe characteristics of the entity you're trying to predict something about) and labels (the values your model will predict). Training the model involves applying a statistical algorithm that fits the features to the labels. Because your initial data includes known values for the labels, you can train the model and test its accuracy against these known label values, giving you confidence that the model will work accurately with new data for which the label values aren't known. Unsupervised learning is a technique in which there are no known label values, and the model is trained to group (or cluster) similar entities together based on their features.
In this lab, we'll focus on supervised learning, and specifically on a type of machine learning called classification, in which you train a model to identify which category, or class, an entity belongs to. You will train a classifier to use features of flights that are en route to an airport and predict whether they will be late or on time.
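To make the supervised-learning workflow concrete, here is a toy nearest-centroid classifier in plain Python, a stand-in sketch for what Spark MLlib does at scale. The flight features and training rows are invented for illustration and the algorithm is deliberately simple.

```python
# Each training row pairs a feature vector with a known label.
# Features here: (departure delay in minutes, distance in 100s of miles).
training_data = [
    ((40.0, 3.0), "late"), ((55.0, 6.0), "late"),
    ((2.0, 4.0), "on-time"), ((5.0, 7.0), "on-time"),
]

def train(rows):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in rows:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: distance(centroids[label]))

model = train(training_data)
print(predict(model, (50.0, 5.0)))  # a heavily delayed departure -> "late"
```

Because the labels in the training rows are known, you can hold some rows back and compare the model's predictions against their true labels to estimate accuracy, exactly the validation pattern described above.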
In this lab, you will use PowerShell to manage Azure SQL Database. You will create a logical Azure SQL Server via PowerShell. You will then manage the firewall to allow remote connectivity to allow for client access. You will restore a database from an existing BACPAC file. Finally, you will use PowerShell to scale the database performance and pricing tier.
In this lab, you will learn how to configure and manage an Azure Cosmos DB account (formerly Azure DocumentDB), including how to query and manage JSON documents within a collection. Among the topics covered are using SQL language syntax to perform document queries that return JSON results, and implementing and testing global data replication and failover.
In this lab, you will query an Azure Cosmos DB database instance using the SQL language. You will use features common in SQL such as projection using SELECT statements and filtering using WHERE clauses. You will also get to use features unique to Azure Cosmos DB’s SQL API such as projection into JSON, intra-document JOIN and filtering to a range of partition keys.
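The projection and filtering concepts above can be sketched in plain Python over JSON-like documents. The documents and the query shown in the comment are hypothetical, and the intra-document JOIN feature is omitted for brevity.

```python
# Hypothetical JSON documents like those stored in a Cosmos DB container.
docs = [
    {"id": "1", "city": "Seattle", "tier": "gold", "orders": 12},
    {"id": "2", "city": "Portland", "tier": "silver", "orders": 3},
    {"id": "3", "city": "Seattle", "tier": "silver", "orders": 7},
]

# A query like:  SELECT c.id, c.orders FROM c WHERE c.city = 'Seattle'
# combines projection (the SELECT list) with filtering (the WHERE clause):
result = [
    {"id": d["id"], "orders": d["orders"]}  # projection: keep only two fields
    for d in docs
    if d["city"] == "Seattle"               # filter: match on a property value
]
print(result)
```

In Cosmos DB the same query runs server-side and returns JSON; when the filtered property is the partition key, the engine can restrict the query to a single partition instead of fanning out across all of them.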
In this lab, you will use the .NET SDK to tune an Azure Cosmos DB request to optimize performance of your application.
In this lab, you will learn to leverage Machine Learning Server and SQL Server Machine Learning Services to execute R code. You will use pre-installed tools on the Data Science Virtual Machine to execute Jupyter Notebooks and execute remote R code against Machine Learning Server. You will then leverage SQL Server Machine Learning Services to execute R code in SQL Server.
In this lab, you will learn the basics of time intelligence measures to show year-to-date, quarter-to-date, and month-to-date totals. Then you will use DAX to enhance the data model and build a table that lets users choose which measures they want to see.
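A year-to-date total, one of the time-intelligence measures mentioned above, can be sketched in plain Python. The sales figures are invented for illustration; in Power BI you would express the same logic with DAX time-intelligence functions such as TOTALYTD.

```python
from datetime import date

# Hypothetical daily sales figures: (date, amount).
sales = [
    (date(2019, 1, 15), 100),
    (date(2019, 2, 10), 250),
    (date(2019, 3, 5), 400),
    (date(2020, 1, 20), 300),
]

def year_to_date(sales, as_of):
    """Sum amounts from January 1 of as_of's year through as_of, inclusive."""
    start = date(as_of.year, 1, 1)
    return sum(amount for d, amount in sales if start <= d <= as_of)

print(year_to_date(sales, date(2019, 2, 28)))  # 100 + 250 = 350
```

Quarter-to-date and month-to-date totals follow the same pattern, only the start of the accumulation window changes.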