
DP-100 Questions Bank

Page: 8 / 12
Total 460 questions

Designing and Implementing a Data Science Solution on Azure Questions and Answers

Question 29

You plan to implement an Azure Machine Learning solution. You have the following requirements:

• Run a Jupyter notebook to interactively train a machine learning model.

• Deploy assets and workflows for a machine learning proof of concept by using scripting rather than custom programming.

You need to select a development technique for each requirement.

Which development technique should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question 30

You are training machine learning models in Azure Machine Learning. You use Hyperdrive to tune the hyperparameters. In previous model training and tuning runs, many models showed similar performance. You need to select an early termination policy that meets the following requirements:

• accounts for the performance of all previous runs when evaluating the current run

• avoids comparing the current run with only the best performing run to date

Which two early termination policies should you use? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

Options:

A.

Bandit

B.

Median stopping

C.

Default

D.

Truncation selection
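For background: Bandit terminates runs by comparing each against the best run so far, which the second requirement rules out, while median stopping and truncation selection both judge a run against the whole population of previous runs. A minimal sketch of configuring the two population-based policies, assuming the SDK v1 Hyperdrive classes (interval and percentage values are illustrative):

from azureml.train.hyperdrive import (
    MedianStoppingPolicy,
    TruncationSelectionPolicy,
)

# Median stopping: cancels a run whose best primary metric falls below
# the median of running averages across all previous runs.
median_policy = MedianStoppingPolicy(evaluation_interval=1, delay_evaluation=5)

# Truncation selection: cancels the lowest-performing percentage of runs
# at each interval, again measured against the whole population.
truncation_policy = TruncationSelectionPolicy(
    truncation_percentage=20, evaluation_interval=1, delay_evaluation=5
)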

Question 31

You train a model by using Azure Machine Learning. You use Azure Blob Storage to store production data.

The model must be retrained when new data is uploaded to Azure Blob Storage.

You need to configure Azure services to develop the re-training solution while minimizing development and coding.

Which Azure services should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:
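For context, a common low-code pattern here is an event-driven trigger (for example, a Logic App reacting to Blob Storage events) that calls a published Azure ML pipeline endpoint, so the only code needed is publishing the training pipeline once. A hedged sketch, assuming SDK v1; the script, compute, and pipeline names are illustrative:

from azureml.core import Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()
train_step = PythonScriptStep(
    name="retrain-model",
    script_name="train.py",            # assumed training script
    source_directory="./src",
    compute_target="cpu-cluster",      # assumed compute cluster name
)
pipeline = Pipeline(workspace=ws, steps=[train_step])
published = pipeline.publish(
    name="retraining-pipeline",
    description="Retrain when new data lands in Blob Storage",
)
print(published.endpoint)              # REST URL a Logic App can call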

Question 32

You create an Azure Machine Learning workspace. The workspace contains a dataset named sample.dataset, a compute instance, and a compute cluster. You must create a two-stage pipeline that will prepare data in the dataset and then train and register a model based on the prepared data. The first stage of the pipeline contains the following code:

You need to identify the location containing the output of the first stage of the script that you can use as input for the second stage. Which storage location should you use?

Options:

A.

workspaceblobstore datastore

B.

workspacefilestore datastore

C.

compute instance
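For context: unless a step explicitly targets another datastore, intermediate pipeline outputs are written to the workspace default datastore, which is workspaceblobstore in a new workspace. A minimal sketch of wiring stage one's output into stage two, assuming SDK v1 and illustrative script and compute names:

from azureml.core import Workspace
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# With no destination given, OutputFileDatasetConfig writes to the
# workspace default datastore (workspaceblobstore by default).
prepped = OutputFileDatasetConfig(name="prepped_data")

prep_step = PythonScriptStep(
    name="prep", script_name="prep.py", source_directory="./src",
    compute_target="cpu-cluster", arguments=["--out", prepped],
)
train_step = PythonScriptStep(
    name="train", script_name="train.py", source_directory="./src",
    compute_target="cpu-cluster",
    arguments=["--in", prepped.as_input()],   # stage 2 consumes stage 1 output
)
pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])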
