# Lab 7B: Creating a Batch Inferencing Service

In many scenarios, inferencing is performed as a batch process that uses a predictive model to score a large number of cases. To implement this kind of inferencing solution in Azure Machine Learning, you can create a batch inferencing pipeline.
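The notebook you run in this lab contains the authoritative code, but as a rough orientation, the core of a batch inferencing pipeline in the Azure ML SDK (v1) is a `ParallelRunStep` that fans a dataset out across a compute cluster. The sketch below assumes illustrative names: a registered dataset `batch-data`, a compute cluster `aml-cluster`, an environment `batch-env`, and a hypothetical entry script `batch_score.py` that defines `init()` and `run(mini_batch)`.

```python
from azureml.core import Workspace, Environment, Dataset, Experiment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

# Connect to the workspace (assumes a config.json in the working directory).
ws = Workspace.from_config()

# Assumed, pre-existing resources (names are placeholders).
batch_data = Dataset.get_by_name(ws, 'batch-data')
compute_target = ws.compute_targets['aml-cluster']
env = Environment.get(ws, 'batch-env')

# Where the scoring results will be written.
output_dir = PipelineData(name='inferences',
                          datastore=ws.get_default_datastore())

# How to split the data into mini-batches and run the entry script on each.
parallel_run_config = ParallelRunConfig(
    source_directory='batch_scripts',   # folder containing the entry script
    entry_script='batch_score.py',      # hypothetical init()/run() script
    mini_batch_size='5',                # files per mini-batch (file dataset)
    error_threshold=10,
    output_action='append_row',         # collect all results into one file
    environment=env,
    compute_target=compute_target,
    node_count=2,
)

# The pipeline step that scores the dataset in parallel mini-batches.
batch_step = ParallelRunStep(
    name='batch-score',
    parallel_run_config=parallel_run_config,
    inputs=[batch_data.as_named_input('batch_data')],
    output=output_dir,
    allow_reuse=True,
)

pipeline = Pipeline(workspace=ws, steps=[batch_step])
run = Experiment(ws, 'batch-inference').submit(pipeline)
```

The `append_row` output action concatenates the results from every mini-batch into a single output file, which is the usual choice when each case produces one scored row.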

## Before You Start

Before you start this lab, ensure that you have completed Lab 1A and Lab 1B, which include tasks to create the Azure Machine Learning workspace and other resources used in this lab.

## Task 1: Create a Batch Inferencing Service

In this task, you'll create a batch inferencing pipeline and publish it as a service. (A sketch of the publish-and-invoke pattern follows the steps below.)

  1. In Azure Machine Learning studio, view the Compute page for your workspace, and on the Compute Instances tab, start your compute instance.
  2. When the compute instance is running, refresh the Azure Machine Learning studio web page in your browser to ensure your authenticated session has not expired. Then click the Jupyter link to open the Jupyter home page in a new browser tab.
  3. In the Jupyter home page, in the Users/DP100 folder, open the 07B - Creating a Batch Inferencing Service.ipynb notebook. Then read the notes in the notebook, running each code cell in turn.
  4. When you have finished running the code in the notebook, on the File menu, click Close and Halt to close it and shut down its Python kernel. Then close all Jupyter browser tabs.
  5. If you're finished working with Azure Machine Learning for the day, in Azure Machine Learning studio, on the Compute page, on the Compute Instances tab, select your compute instance and click Stop to shut it down. Otherwise, leave it running for the next lab.
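For reference, publishing the pipeline as a service amounts to calling `publish` on it, which exposes a REST endpoint you can trigger on demand. This is a minimal sketch assuming the `pipeline` object from the earlier snippet; the pipeline name, version, and experiment name are placeholders, and the notebook itself shows the exact code used in this lab.

```python
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

# Publish the pipeline built above as a REST-callable service.
published_pipeline = pipeline.publish(
    name='batch-inference-pipeline',     # placeholder name
    description='Batch scoring pipeline',
    version='1.0',
)
rest_endpoint = published_pipeline.endpoint

# Get an Azure AD bearer token for the call; other auth classes
# (e.g. ServicePrincipalAuthentication) expose the same method.
auth_header = InteractiveLoginAuthentication().get_authentication_header()

# Trigger a run of the published pipeline under a named experiment.
response = requests.post(rest_endpoint,
                         headers=auth_header,
                         json={'ExperimentName': 'batch-inference'})
run_id = response.json()['Id']
print('Started pipeline run:', run_id)
```

Because the published endpoint accepts an ordinary authenticated POST, client applications (or a scheduler) can start a batch scoring run without any Azure ML SDK dependency.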