Skill Me Up: expert on-demand training for security professionals. Modernize your skills with cloud computing from providers such as Microsoft Azure and Amazon Web Services, along with core foundational IT training.
In this learning path, you will learn how to implement Azure SQL Database. Topics will include understanding how to design and deploy databases using SQL Database and SQL Data Warehouse, along with more advanced topics in SQL performance and troubleshooting.
In this learning path, you will learn how to build and architect big data solutions in Microsoft Azure. Topics will include architecting solutions using HDInsight, machine learning, visualizing data with Power BI, understanding lambda architecture patterns, and IoT data ingestion. This path will help you prepare for Exam 70-475, Designing and Implementing Big Data Platform Solutions, and for your Microsoft certification.
In this learning path, you will learn how to build and architect SQL-focused solutions in Microsoft Azure. Topics will include SQL Server in Azure IaaS, SQL Database, and SQL Data Warehouse. This course will help you prepare for Exam 70-473, Designing and Implementing Cloud Data Platform Solutions, and for your Microsoft certification.
This path contains courses and labs designed to help you learn about performing data science using services in Microsoft Azure such as Azure ML.
In this learning path, you will learn the basics of data science including what data science is, some of the common programming languages used in data science (R and Python) as well as an introduction to machine learning.
This learning path is designed to teach you the fundamentals of relational databases using Microsoft SQL Server. You will learn core concepts such as tables, indexes, and building relationships with foreign keys. From there, you will learn how to write queries using T-SQL to query data as well as make changes to your database.
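The core concepts this path covers can be sketched in a few lines. The example below uses Python's built-in sqlite3 module as a lightweight stand-in for SQL Server: it creates two tables, relates them with a foreign key, and joins them with a query (the table and column names are illustrative, not taken from the course).

```python
# Minimal sketch of tables, a foreign-key relationship, and a join
# query. sqlite3 stands in for SQL Server here; names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per connection

conn.execute("""CREATE TABLE Customers (
    CustomerId INTEGER PRIMARY KEY,
    Name       TEXT NOT NULL)""")
conn.execute("""CREATE TABLE Orders (
    OrderId    INTEGER PRIMARY KEY,
    CustomerId INTEGER NOT NULL REFERENCES Customers(CustomerId),
    Amount     REAL NOT NULL)""")

conn.execute("INSERT INTO Customers VALUES (1, 'Contoso')")
conn.executemany("INSERT INTO Orders VALUES (?, ?, ?)",
                 [(10, 1, 250.0), (11, 1, 99.5)])

# Join the related tables through the foreign key and aggregate.
rows = conn.execute("""
    SELECT c.Name, COUNT(o.OrderId), SUM(o.Amount)
    FROM Customers AS c
    JOIN Orders AS o ON o.CustomerId = c.CustomerId
    GROUP BY c.Name""").fetchall()
print(rows)  # -> [('Contoso', 2, 349.5)]
```

The same CREATE TABLE / SELECT / JOIN pattern carries over to T-SQL in SQL Server Management Studio, with richer data types and constraints.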
This track has a collection of demonstrations, presentations, and interactive labs designed to prepare you for the Microsoft DP-100 exam.
In this course, you will explore Spark internals and architecture. The course will start with a brief introduction to Scala. Using the Scala programming language, you will be introduced to the core functionalities and use cases of Apache Spark, including Spark SQL, Spark Streaming, MLlib, and GraphFrames.
In this module, attendees will learn how to design solutions using Azure Infrastructure as a Service Components. This module will focus on core capabilities, use cases, and general best practices as well as discuss peripheral services such as Azure Backup and Site Recovery.
This course is an introduction to Microsoft Azure Machine Learning Services. In this course you will learn to navigate the AML Services interface, create notebook servers, create compute clusters, manage AML Services from a notebook, deploy models, and create an Automated Machine Learning experiment.
This course is a deep dive into Azure SQL Database performance. We will look at designing an Azure SQL Database architecture for performance, examine the performance-specific features of Azure SQL Database, and cover monitoring and troubleshooting.
This module will cover all aspects of big data storage and batch processing. We will start by making the case for big data in Azure. Then we will look at Azure service topics including Blob Storage, Azure Data Lake Store, Azure Data Lake Analytics, and HDInsight clusters running Hadoop, Hive, Interactive Hive (LLAP), and Spark. Storage topics will focus on choosing the right storage, configuring storage, and storage optimization. We will also cover big data scenarios including batch processing, interactive clusters, multi-cluster deployments, and on-demand clusters.
Define and Prepare the Development Environment - Course One of DP-100 Exam Preparation
The student will learn how Azure services can support the data science process. They’ll explore common architectures, learn to assess business goals and constraints for determining the correct environment, and set up the relevant development environments to support data science deployments in Azure.
Design Azure Data Storage Solutions - Course One of DP-201 Exam Preparation
In this course, students will gain knowledge and skills needed to recommend and design Azure Data Solutions based on requirements. The solution technologies will include both relational and non-relational cloud data stores.
Design data processing solutions - Course Two of DP-201 Exam Preparation
This course covers designing of batch processing solutions and designing of real-time processing solutions.
Design for data security and compliance - Course Three of DP-201 Exam Preparation
This course covers designing of security for source data access and security for data policies and standards.
Developing Models - Course Four of DP-100 Exam Preparation
The student will learn how to develop robust models, starting from selecting the right metric to meet business goals, through building tuned models, and then evaluating the models produced for fitness.
The course will teach you the fundamentals of the relational database model and how to access data stored in relational databases. The course will give students an understanding of relational database concepts and teach the practical application of these concepts through the T-SQL programming language for Microsoft SQL Server and Azure SQL Database.
Google Cloud Professional Data Engineer
The course has been designed to prepare students for the GCP Professional Data Engineer Certification Exam. The course will review the most important topics in preparing for the exam and provide key aspects and methods for studying, preparing, and testing for the certification exam.
Implement Data Storage Solutions - Course One of DP-200 Exam Preparation
This course covers implementing Azure Data Storage services. We start off by reviewing Azure Portal and Storage concepts, then move on to implementing Azure SQL Data Warehouse, Azure SQL DB, Azure Data Lake, Azure Storage, and Azure Cosmos DB.
This course explores the NoSQL storage options available within the Microsoft Azure Cosmos DB database service. Formerly DocumentDB, Azure Cosmos DB is no longer just a Document-based NoSQL store, and it includes support for all 4 primary NoSQL data models (Document, Graph, Key/Value, Column). In addition to learning about NoSQL with Cosmos DB, students will also learn about the cloud-native features that make Cosmos DB a great NoSQL database-as-a-service in the Microsoft Azure cloud.
In this hands-on course, students will learn about Azure SQL Data Warehouse. This course will review basic architecture of Azure SQL Data Warehouse. We will cover tools used with Azure SQL Data Warehouse, loading SQL Data Warehouse and basic workload management in SQL Data Warehouse.
In the course Introduction to Azure SQL Database, we will discuss the configuration, performance, security, availability, recovery, and automation of Azure SQL Database. We will also review hybrid solutions with SQL Server Stretch Database. This course will partially help prepare you for Exam 70-473, Designing and Implementing Cloud Data Platform Solutions, as well as for Microsoft Exam 70-533, Implementing Azure Infrastructure Solutions, and Exam 70-532, Developing Azure Solutions.
This course introduces students to Azure Data Factory V2. Students will learn about the different phases of a Data Factory Pipeline. Students will then cover Data Factory Architecture, terminology, the copy activity, file formats, integration runtimes, scheduling and triggers, and data factory management.
This training provides an overview of Azure Databricks and Spark. In this course you will learn where Azure Databricks fits in the big data landscape in Azure. Key features of Azure Databricks such as Workspaces and Notebooks will be covered. Students will also learn the basic architecture of Spark and cover basic Spark internals including core APIs, job scheduling and execution. This class will prepare developers and administrators for more advanced work in Azure Databricks such as Python or Scala development.
This course is an introduction to Python. In this course you will learn which IDE is right for you, print statements, data types, control flow, Python functions and anonymous functions, methods, file io, and an introduction to Python packages.
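A few of the topics listed above can be previewed in one short snippet: data types, control flow, a named function, and an anonymous (lambda) function. The variable names here are illustrative.

```python
# A tiny tour of Python basics: a list, an if/else branch inside a
# function, a list comprehension, and a lambda used with map().
numbers = [1, 2, 3, 4, 5]          # a list of ints (a core data type)

def describe(n):
    """Return 'even' or 'odd' using a control-flow branch."""
    return "even" if n % 2 == 0 else "odd"

labels = [describe(n) for n in numbers]
squares = list(map(lambda n: n * n, numbers))  # anonymous function

print(labels)   # -> ['odd', 'even', 'odd', 'even', 'odd']
print(squares)  # -> [1, 4, 9, 16, 25]
```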
This course is an introduction to the R language. We start with an introduction to R versions and R editions, then move on to the language itself. From there, we will dive into one of R’s strongest features: graphics. Using base R graphics and ggplot2, you will learn how to get started creating your own visualizations.
This course looks at services and tools used for machine learning with Azure. This course will introduce students to Machine Learning Server, SQL Server Machine Learning Services, Cognitive Toolkit, the Data Science Virtual Machine, and the Azure AI Gallery. This course will assist you in preparing for the "Using Other Services for Machine Learning" section of the "Perform Cloud Data Science with Azure Machine Learning" Microsoft Exam 70-774.
Manage and Develop Data Processing - Course Two of DP-200 Exam Preparation
This course covers configuring, managing and deploying Azure data processing solutions. We start with an overview of big data environments, including Hadoop clusters, then cover how to plan for and implement Azure Databricks, Azure Stream Analytics, Event Hubs, Azure Data Factory and how these fit with Azure Data Warehouse solutions.
Monitor and Optimize Data Solutions - Course Three of DP-200 Exam Preparation
This course covers configuring, managing and deploying monitoring for Azure Storage and data store solutions. We start with an overview of monitoring concepts, then focus on monitoring Azure Storage, Azure Data Lake, Azure Data Warehouse, Azure SQL DB and other services.
This module will provide an overview of big data, IoT and machine learning solutions in Azure. We will define the meaning of big data and look at the reasons why you might need a big data solution. We will then move on to a discussion of the analytics maturity model to understand how machine learning extracts value from big data. Next, we will review the lambda architecture which is the dominant architecture for big data solutions. We will look at the Azure components used in big data solutions and how they fit together to build an end-to-end lambda architecture in Azure. Finally, we will wrap up with a discussion of the Cortana Intelligence Suite and the value that it brings to big data and analytics solutions in Azure.
Performing Feature Engineering - Course Three of DP-100 Exam Preparation
The student will learn how to develop effective and reusable features ready for modeling. Using manual and then automated techniques, the data scientist will be able to handle core data types using scikit-learn and Microsoft Python libraries such as MMLSpark and the Azure Machine Learning Data Prep SDK.
This course builds on your Power BI skills and walks you through the interfaces of both the Online and Desktop offerings before embarking on a journey that will show you how to ingest data, transform data, and create reports and dashboards before publishing and using your data sets, reports, and dashboards in the Power BI online tenant. The course will help prepare students to take Microsoft Exam 70-778, Analyzing and Visualizing Data with Power BI.
Querying Data with T-SQL
This course serves as an introduction to the T-SQL programming language. It is designed to give students a strong foundation in T-SQL, which is used by all variants of SQL Server, both on-premises and in the cloud.
The Real-Time Ingestion and Processing in Azure course covers information about implementing real-time event stream ingestion and processing within Microsoft Azure. The course starts with an overview of the Lambda Architecture and what a Message Broker is used for. The course continues to cover the Azure Event Hubs and Azure IoT Hub services used for event stream ingestion, and Azure Stream Analytics and HDInsight for integrating real-time event processing. Finally, the course finishes with an overview of a few example architectures to give a better perspective on architecting Real-Time Ingestion and Processing solutions within the Microsoft Azure cloud. This course should help in preparation for the 70-534 exam, Architecting Microsoft Azure Solutions.
In this hands-on course, students will learn about running SQL Server in Azure. This course will review basic Azure networking and storage using the Azure Resource Manager architecture to prepare students for building SQL Server solutions in Azure. The primary focus of this course is SQL Server cloud and hybrid-cloud solutions on Azure Infrastructure as a Service (IaaS). This course will cover best practices for deploying SQL Server on Azure Virtual Machines including standalone SQL Servers and hybrid Availability Groups. The course will look at SQL Server features that take advantage of Azure Storage such as SQL Server Managed Backup, Azure Snapshot Backups, and SQL Server data files hosted on Azure Storage.
In this module, you will focus on the pricing and support models available with Microsoft Azure, including but not limited to Azure subscriptions, planning and managing costs, the support options available with Azure, and the service lifecycle in Azure.
In this module, you will learn basic cloud concepts, including but not limited to: the case for cloud services; Infrastructure-as-a-Service (IaaS); Platform-as-a-Service (PaaS); Software-as-a-Service (SaaS); and the public, private, and hybrid cloud models.
In this module, you will learn the basics of the core services available within Microsoft Azure, including but not limited to core Azure architectural components, core Azure services and products, Azure solutions, and Azure management tools.
In this module, you will learn about security, privacy, compliance, and trust with Microsoft Azure. You will become familiar with the following topics: securing network connectivity in Azure, core Azure identity services, security tools and features, Azure governance methodologies, monitoring and reporting in Azure, and privacy, compliance and data protection standards in Azure.
In this lab, you will learn how to provision a Databricks workspace, an Azure storage account, and a Spark cluster. You will learn to use the Spark cluster to explore data using Spark Resilient Distributed Datasets (RDDs) and Spark DataFrames.
In this lab, you will create an Azure Data Lake Store Gen2 account. You will learn to lock down and manage access to the Data Lake Store, taking advantage of both role-based access control and Data Lake Store Azure AD integration. Finally, you will perform a bulk ingest using the Hadoop DistCp utility.
In this lab, you will create multiple Azure Cosmos DB containers. Some of the containers will be unlimited and configured with a partition key, while others will be fixed-sized. You will then use the SQL API and .NET SDK to query specific containers using a single partition key or across multiple partition keys.
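The partitioning behavior behind this lab can be illustrated without the Cosmos DB SDK. Conceptually, Cosmos DB routes each document to a physical partition by hashing its partition-key value, so a query scoped to one key touches a single partition while an unscoped query must fan out to all of them. The sketch below is an analogy of that routing, not the actual service; the partition count, documents, and key are invented.

```python
# Conceptual sketch of partition-key routing (not the Cosmos DB SDK):
# hash the key to pick a bucket, then compare single-partition vs
# cross-partition query scope. All names and numbers are illustrative.
import hashlib
from collections import defaultdict

NUM_PARTITIONS = 4

def partition_for(key):
    """Deterministically map a partition-key value to a bucket."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

partitions = defaultdict(list)
docs = [{"city": c, "id": i} for i, c in
        enumerate(["Seattle", "London", "Seattle", "Paris", "London"])]
for doc in docs:
    partitions[partition_for(doc["city"])].append(doc)

# Single-partition query: only one bucket is scanned.
seattle = [d for d in partitions[partition_for("Seattle")]
           if d["city"] == "Seattle"]
# Cross-partition query: every bucket must be scanned.
all_docs = [d for bucket in partitions.values() for d in bucket]

print(len(seattle), len(all_docs))  # -> 2 5
```

This is why choosing a well-distributed partition key matters: it spreads documents evenly and keeps the most common queries scoped to a single partition.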
In this lab you will create an Azure SQL Database using the Azure Portal and connect to it using SQL Server Management Studio. You will then migrate a SQL Server database hosted on a virtual machine to an Azure SQL Database.
Today, data is being collected in ever-increasing amounts, at ever-increasing velocities, and in an ever-expanding variety of formats. This explosion of data is colloquially known as the Big Data phenomenon. In order to gain actionable insights into big-data sources, new tools need to be leveraged that allow the data to be cleaned, analyzed, and visualized quickly and efficiently. Azure HDInsight provides a solution to this problem by making it exceedingly simple to create high-performance computing clusters provisioned with Apache Spark and members of the Spark ecosystem. Rather than spend time deploying hardware and installing, configuring, and maintaining software, you can focus on your research and apply your expertise to the data rather than to the resources required to analyze that data. Apache Spark is an open-source parallel-processing platform that excels at running large-scale data analytics jobs. Spark’s combined use of in-memory and disk data storage delivers performance improvements that allow it to process some tasks up to 100 times faster than Hadoop. With Microsoft Azure, deploying Apache Spark clusters becomes significantly simpler and gets you working on your data analysis that much sooner. In this lab, you will experience HDInsight with Spark first-hand. After provisioning a Spark cluster, you will use the Microsoft Azure Storage Explorer to upload several Jupyter notebooks to the cluster. You will then use these notebooks to explore, visualize, and build a machine-learning model from food-inspection data — more than 100,000 rows of it — collected by the city of Chicago. The goal is to learn how to create and utilize your own Spark clusters, experience the ease with which they are provisioned in Azure, and, if you're new to Spark, get a working introduction to Spark data analytics.
In this lab, you will learn to build, monitor, manage, and troubleshoot data pipelines with Azure Data Factory V2. You will learn to use the Copy Data wizard to build a pipeline with no coding. You will build a custom pipeline to copy data from Blob storage to a table in Azure SQL Database. You will build a tumbling window pipeline to pick up data on a daily basis. Finally, you will learn to use the management and monitoring tools to troubleshoot pipeline failures.
In this lab, you will create a virtual network that will allow the virtual machines you create to securely connect with each other. You will then create two virtual machines and specify the virtual network configuration and the availability set configuration along with storage for the virtual machine.
In this lab, you will create an Azure Web App and a SQL Database and configure the popular content management system (CMS) Orchard CMS. You will then configure the web app to automatically scale based on actual CPU usage.
In this lab, we will explore the use of columnstore indexes in Azure SQL Database. We will evaluate the performance improvements we get when we implement columnstore indexes on tables with analytical workloads.
In this lab, you will explore real-time operational analytics using Azure SQL Database. You will evaluate the performance improvements you will get when you add updateable non-clustered columnstore indexes on top of standard tables as well as memory-optimized tables.
In this lab, you will explore the new SQL Server 2016 real-time operational analytics feature. You will evaluate the performance improvements you will get when you add updateable nonclustered columnstore indexes on top of disk-based tables as well as memory-optimized tables.
In this lab, you will explore columnstore indexes in SQL Server 2016. You will evaluate the performance improvements you will get when you implement columnstore indexes on tables for your analytical workloads.
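The intuition behind these columnstore labs can be shown with a small analogy. An analytical aggregate over row storage must touch every field of every row, while column storage keeps each column contiguous and compact, so the same aggregate scans only the one column it needs. The sketch below illustrates that idea in plain Python; it is an analogy, not SQL Server internals, and the data is invented.

```python
# Row store vs column store for an analytical SUM. In the row layout
# the scan drags every field along; in the column layout it reads one
# compact array. (Conceptual analogy only; data is invented.)
from array import array

# Row-oriented: a list of complete records.
rows = [("2016-01-01", "Widget", 100),
        ("2016-01-02", "Gadget", 250),
        ("2016-01-03", "Widget", 175)]
total_row_store = sum(r[2] for r in rows)  # touches whole rows

# Column-oriented: one contiguous array per column.
sale_date = ["2016-01-01", "2016-01-02", "2016-01-03"]
product   = ["Widget", "Gadget", "Widget"]
amount    = array("i", [100, 250, 175])

total_col_store = sum(amount)  # scans only the Amount column

print(total_row_store, total_col_store)  # -> 525 525
```

Both layouts return the same answer; the columnstore advantage is the far smaller amount of data read (and, in SQL Server, heavy per-column compression) for scan-and-aggregate workloads.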
Spark structured streaming enables you to use the dataframe API to read and process an unbounded stream of data. This kind of processing is used in real-time scenarios to aggregate data over temporal intervals or windows. You can use Spark to process streaming data from a wide range of sources, including Azure Event Hubs, Kafka, and others. In this lab, you will run a Spark job to continually process a real-time stream of data.
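The tumbling-window aggregation described above can be sketched in plain Python. This is an illustration of the concept rather than the Spark API: events carry a timestamp, each event is assigned to a fixed-size window by aligning its timestamp to the window boundary, and an aggregate is kept per window. The events and the 60-second window size are made up.

```python
# Pure-Python sketch of tumbling-window aggregation over a stream of
# (timestamp_seconds, reading) events. Conceptual only, not PySpark.
from collections import defaultdict

WINDOW_SECONDS = 60

def window_start(ts):
    """Align a timestamp to the start of its tumbling window."""
    return ts - (ts % WINDOW_SECONDS)

events = [(5, 10.0), (42, 14.0), (61, 9.0), (118, 7.0), (130, 20.0)]

sums = defaultdict(float)
counts = defaultdict(int)
for ts, value in events:        # in a real stream, this loop never ends
    w = window_start(ts)
    sums[w] += value
    counts[w] += 1

averages = {w: sums[w] / counts[w] for w in sorted(sums)}
print(averages)  # -> {0: 12.0, 60: 8.0, 120: 20.0}
```

In Spark Structured Streaming the same idea is expressed declaratively with a `groupBy` over a window column, and the engine handles late-arriving data and incremental state for you.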
In this lab, we will examine the use of In-Memory OLTP in Azure SQL Database. We will compare performance across standard and in-memory architectures including memory optimized tables and natively compiled stored procedures.
In this lab, you will explore the new SQL Server 2016 In-Memory OLTP feature. You will evaluate the performance improvements you will get when you migrate disk-based tables and interpreted T-SQL stored procedures into memory-optimized tables and natively-compiled stored procedures, respectively.
In this lab, you will use Visual Studio and ASP.NET to learn how to use Cosmos DB as a backend for an MVC application. You will learn how to programmatically read and write data, create and call user-defined functions, and understand management capabilities such as users and permissions, monitoring, and scalability options.
In this lab, you will deploy and configure an on-premises gateway to work with Azure Logic Apps. The on-premises data gateway acts as a bridge, providing quick and secure data transfer between on-premises data (data that is not in the cloud) and the Power BI, Microsoft Flow, Logic Apps, and PowerApps services.
Spark includes an API named Spark MLlib (often referred to as Spark ML), which you can use to create machine learning solutions. Machine learning is a technique in which you train a predictive model using a large volume of data so that when new data is submitted to the model it can predict unknown values. The most common types of machine learning are supervised learning and unsupervised learning. In a supervised learning scenario, you start with a large volume of data that includes both features (categorical and numeric values that describe characteristics of the entity you’re trying to predict something about) and labels (the values your model will predict). Training the model involves applying a statistical algorithm that fits the features to the labels. Because your initial data includes known values for the labels, you can train the model and test its accuracy with these known label values – giving you confidence that the model will work accurately with new data for which the label values aren’t known. Unsupervised learning is a technique in which there are no known label values, and the model is trained to group (or cluster) similar entities together based on their features. In this lab, we’ll focus on supervised learning, and specifically a type of machine learning called classification, in which you train a model to identify which category, or class, an entity belongs to. You will train a classifier to use features of flights that are en route to an airport, and predict whether they will be late or on time.
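The train-then-predict loop described above can be sketched without Spark at all. The pure-Python example below uses a nearest-centroid rule as the "statistical algorithm that fits the features to the labels": it computes one centroid per class from labeled flight features, then classifies new flights by the closest centroid. The features, labels, and data are all invented for illustration; a real MLlib classifier would use far more data and a stronger algorithm.

```python
# Minimal supervised classification sketch (a conceptual stand-in for
# Spark MLlib, not its API). Features and labels are invented.
# Each row: (departure_delay_minutes, distance_hundreds_of_miles), label
# where label 1 = late, 0 = on time.
train = [((30.0, 5.0), 1), ((25.0, 8.0), 1),
         ((2.0, 6.0), 0), ((0.0, 3.0), 0)]

def centroid(points):
    """Mean of a set of 2-D feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# "Training": one centroid per class, fitted from the labeled data.
centroids = {label: centroid([x for x, y in train if y == label])
             for label in (0, 1)}

def predict(x):
    """Assign the class whose centroid is nearest (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

print(predict((28.0, 7.0)))  # -> 1 (predicted late)
print(predict((1.0, 4.0)))   # -> 0 (predicted on time)
```

Because the training rows carry known labels, you could hold some of them out and compare predictions against the true labels to measure accuracy, which is exactly the evaluation step the paragraph above describes.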
In this lab, you will use PowerShell to manage Azure SQL Database. You will create a logical Azure SQL Server via PowerShell. You will then manage the firewall to allow remote connectivity to allow for client access. You will restore a database from an existing BACPAC file. Finally, you will use PowerShell to scale the database performance and pricing tier.
In this lab, you will learn how to configure and manage an Azure Cosmos DB Account (formerly Azure DocumentDB), including how to query and manage JSON documents within a Collection. Among the topics covered are using SQL language syntax to perform document queries that return JSON results, and implementing and testing global data replication and fail over.
In this lab, we will walk through management and monitoring of an Elastic Pool. First, we will create an Elastic Pool and add our databases to the pool. Then we will monitor the performance of our pool using T-SQL scripts and the Azure Portal.
In this lab, you will configure and manage the Query Store in SQL Server 2016 to collect runtime statistics, queries, query plan history, and other workload history within the database to assist with troubleshooting query performance issues. You will then identify and resolve poorly performing queries in your database using the SQL Server 2016 Query Store, and you will identify query plan regressions and learn how to address them with information gathered from the Query Store.
In this lab, you will query an Azure Cosmos DB database instance using the SQL language. You will use features common in SQL such as projection using SELECT statements and filtering using WHERE clauses. You will also get to use features unique to Azure Cosmos DB’s SQL API such as projection into JSON, intra-document JOIN and filtering to a range of partition keys.
In this lab, you learn about deploying SQL Server on Azure virtual machines. This lab will walk you through some common setup and configuration tasks for running SQL Server in Azure infrastructure as a service.
In this lab, you will learn the fundamentals of creating databases, tables, views and relationships using Microsoft SQL Server. You will gain experience using SQL Server Management Studio (SSMS) and learn introductory concepts of writing T-SQL queries.
In this lab, you will use the .NET SDK to tune an Azure Cosmos DB request to optimize performance of your application.
In this lab, you want to see if there are models that perform better than the one you might manually create. You decide to use Azure Machine Learning service’s AutoML and HyperDrive to simultaneously execute a number of different types of classification models, compare the results, and recommend the best performing model. This will save you a lot of time picking the best model so you can get the solution delivered sooner.
In this lab, you learn to leverage Machine Learning Server and SQL Server Machine Learning Services to execute R code. You will use pre-installed tools of the Data Science Virtual Machine to execute Jupyter Notebooks and execute remote R code against Machine Learning Server. You will then leverage SQL Server Machine Learning Services to execute R code in SQL Server.