
Microsoft DP-700: Implementing Data Engineering Solutions Using Microsoft Fabric Questions and Answers

Question 1

You need to ensure that the authors can see only their respective sales data.

How should you complete the statement? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Options:

Question 2

What should you do to optimize the query experience for the business users?

Options:

A. Enable V-Order.
B. Create and update statistics.
C. Run the VACUUM command.
D. Introduce primary keys.
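For context on option A, V-Order is a write-time optimization for Delta/Parquet files in Fabric. A minimal sketch of enabling it follows; it assumes a Fabric notebook where `spark` is predefined, the session config name matches your runtime version, and the table name is illustrative.

```python
# Hedged sketch: enable V-Order for Delta writes in this Spark session.
# Config name per Fabric documentation; verify against your runtime version.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

# Rewrite an existing (illustrative) table so its current files are V-Ordered too.
spark.sql("OPTIMIZE sales.fact_orders VORDER")
```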

Question 3

You need to resolve the sales data issue. The solution must minimize the amount of data transferred.

What should you do?

Options:

A. Split the dataflow into two dataflows.
B. Configure scheduled refresh for the dataflow.
C. Configure incremental refresh for the dataflow. Set Store rows from the past to 1 Month.
D. Configure incremental refresh for the dataflow. Set Refresh rows from the past to 1 Year.
E. Configure incremental refresh for the dataflow. Set Refresh rows from the past to 1 Month.

Question 4

HOTSPOT

You need to troubleshoot the ad-hoc query issue.

How should you complete the statement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question 5

You need to implement the solution for the book reviews.

What should you do?

Options:

A. Create a Dataflow Gen2 dataflow.
B. Create a shortcut.
C. Enable external data sharing.
D. Create a data pipeline.

Question 6

You have an Azure event hub. Each event contains the following fields:

BikepointID

Street

Neighbourhood

Latitude

Longitude

No_Bikes

No_Empty_Docks

You need to ingest the events. The solution must retain only the events that have a Neighbourhood value of Chelsea and store the retained events in a Fabric lakehouse.

What should you use?

Options:

A. a KQL queryset
B. an eventstream
C. a streaming dataset
D. Apache Spark Structured Streaming
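If the filter were implemented with Apache Spark Structured Streaming (option D), it could look roughly like the sketch below. This assumes a Fabric notebook with the open-source azure-event-hubs-spark connector installed; the connection string, checkpoint path, and table name are placeholders.

```python
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, IntegerType,
)

# Schema matching the event fields listed in the question.
schema = StructType([
    StructField("BikepointID", StringType()),
    StructField("Street", StringType()),
    StructField("Neighbourhood", StringType()),
    StructField("Latitude", DoubleType()),
    StructField("Longitude", DoubleType()),
    StructField("No_Bikes", IntegerType()),
    StructField("No_Empty_Docks", IntegerType()),
])

connection = "<event-hub-connection-string>"  # placeholder
ehConf = {
    # The azure-event-hubs-spark connector expects an encrypted connection string.
    "eventhubs.connectionString":
        spark.sparkContext._jvm.org.apache.spark.eventhubs
        .EventHubsUtils.encrypt(connection),
}

chelsea = (
    spark.readStream.format("eventhubs").options(**ehConf).load()
    .select(from_json(col("body").cast("string"), schema).alias("e"))
    .select("e.*")
    .where(col("Neighbourhood") == "Chelsea")  # retain only Chelsea events
)

# Persist the retained events to a lakehouse Delta table.
(chelsea.writeStream
    .format("delta")
    .option("checkpointLocation", "Files/checkpoints/chelsea_bikepoints")
    .toTable("bikepoint_events_chelsea"))
```

Note that an eventstream with a filter transformation and a lakehouse destination can achieve the same result without code, which is worth weighing against the Spark approach.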

Question 7

You have a Fabric warehouse named DW1 that contains a Type 2 slowly changing dimension (SCD) table named DimCustomer. DimCustomer contains 100 columns and 20 million rows. The columns are of various data types, including int, varchar, date, and varbinary.

You need to identify incoming changes to the table and update the records when there is a change. The solution must minimize resource consumption.

What should you use to identify changes to attributes?

Options:

A. a direct attribute comparison for the attributes in the source table.
B. a hash function to compare the attributes in the DimCustomer table.
C. a direct attribute comparison across the attributes in the DimCustomer table.
D. a hash function to compare the attributes in the source table.
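The hash-based approach reduces change detection to a single per-row comparison instead of checking 100 columns individually. A minimal PySpark sketch follows; the key, attribute, and table names are illustrative, and IsCurrent is a hypothetical SCD Type 2 current-row flag.

```python
from pyspark.sql.functions import col, concat_ws, sha2

tracked = ["FirstName", "LastName", "City"]  # stand-ins for the real attribute list

def with_hash(df):
    # One SHA-256 hash per row over all tracked attributes; "||" separates
    # values so adjacent columns cannot blur together when concatenated.
    return df.withColumn(
        "row_hash",
        sha2(concat_ws("||", *[col(c).cast("string") for c in tracked]), 256),
    )

src = with_hash(spark.read.table("stg_Customer"))          # illustrative staging table
dim = with_hash(spark.read.table("DimCustomer").where(col("IsCurrent")))

# Rows whose hash differs from the current dimension row are changed records.
changed = (
    src.alias("s")
    .join(dim.alias("d"), "CustomerKey")
    .where(col("s.row_hash") != col("d.row_hash"))
)
```

Comparing one hash column per side keeps the comparison cheap, which fits the requirement to minimize resource consumption. One caveat: concat_ws skips NULLs, so substituting an explicit placeholder per column is safer in production.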

Question 8

You have a Fabric workspace named Workspace1 that contains a warehouse named DW1 and a data pipeline named Pipeline1.

You plan to add a user named User3 to Workspace1.

You need to ensure that User3 can perform the following actions:

View all the items in Workspace1.

Update the tables in DW1.

The solution must follow the principle of least privilege.

You already assigned the appropriate object-level permissions to DW1.

Which workspace role should you assign to User3?

Options:

A. Admin
B. Member
C. Viewer
D. Contributor

Question 9

You have a Fabric workspace that contains a warehouse named DW1. DW1 is loaded by using a notebook named Notebook1.

You need to identify which version of Delta was used when Notebook1 was executed.

What should you use?

Options:

A. Real-Time hub
B. OneLake data hub
C. the Admin monitoring workspace
D. Fabric Monitor
E. the Microsoft Fabric Capacity Metrics app

Question 10

You have an Azure SQL database named DB1.

In a Fabric workspace, you deploy an eventstream named EventStreamDB1 to stream record changes from DB1 into a lakehouse.

You discover that events are NOT being propagated to EventStreamDB1.

You need to ensure that the events are propagated to EventStreamDB1.

What should you do?

Options:

A. Create a read-only replica of DB1.
B. Create an Azure Stream Analytics job.
C. Enable Extended Events for DB1.
D. Enable change data capture (CDC) for DB1.

Question 11

You have an Azure key vault named KeyVault1 that contains secrets.

You have a Fabric workspace named Workspace1. Workspace1 contains a notebook named Notebook1 that performs the following tasks:

• Loads staged data to the target tables in a lakehouse

• Triggers the refresh of a semantic model

You plan to add functionality to Notebook1 that will use the Fabric API to monitor the semantic model refreshes. You need to retrieve the registered application ID and secret from KeyVault1 to generate the authentication token.

Solution: You use notebookutils.credentials.getSecret and specify the key vault URL and the key vault secret name.

Does this meet the goal?

Options:

A. Yes
B. No
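As a rough illustration of the approach the solution describes, the sketch below retrieves the app registration's ID and secret and exchanges them for a Fabric API token. It assumes a Fabric notebook, where notebookutils is available without an import; the vault URL, secret names, and tenant ID are placeholders.

```python
import requests

key_vault_url = "https://keyvault1.vault.azure.net/"  # placeholder vault URL

# notebookutils is preloaded in Fabric notebooks; secret names are illustrative.
app_id = notebookutils.credentials.getSecret(key_vault_url, "fabric-app-id")
app_secret = notebookutils.credentials.getSecret(key_vault_url, "fabric-app-secret")

# Client-credentials flow against Microsoft Entra ID for a Fabric API token.
token_response = requests.post(
    "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token",
    data={
        "client_id": app_id,
        "client_secret": app_secret,
        "scope": "https://api.fabric.microsoft.com/.default",
        "grant_type": "client_credentials",
    },
)
access_token = token_response.json()["access_token"]
```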

Question 12

You need to recommend a solution for handling old files. The solution must meet the technical requirements. What should you include in the recommendation?

Options:

A. a data pipeline that includes a Copy data activity
B. a notebook that runs the VACUUM command
C. a notebook that runs the OPTIMIZE command
D. a data pipeline that includes a Delete data activity
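To make the contrast between options B and C concrete: VACUUM deletes data files that the Delta log no longer references, while OPTIMIZE compacts small files and deletes nothing. A minimal sketch, assuming a Fabric notebook with a predefined `spark` session and an illustrative table name:

```python
# Remove unreferenced files older than the retention window (168 hours = 7 days).
spark.sql("VACUUM bronze.sales RETAIN 168 HOURS")

# Compact small files into larger ones; historical files are kept, not deleted.
spark.sql("OPTIMIZE bronze.sales")
```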

Question 13

You need to populate the MAR1 data in the bronze layer.

Which two types of activities should you include in the pipeline? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

Options:

A. ForEach
B. Copy data
C. WebHook
D. Stored procedure

Question 14

You need to recommend a solution to resolve the MAR1 connectivity issues. The solution must minimize development effort. What should you recommend?

Options:

A. Add a ForEach activity to the data pipeline.
B. Configure retries for the Copy data activity.
C. Configure Fault tolerance for the Copy data activity.
D. Call a notebook from the data pipeline.

Question 15

You need to ensure that WorkspaceA can be configured for source control. Which two actions should you perform?

Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

Options:

A. Assign WorkspaceA to Cap1.
B. From Tenant settings, set Users can synchronize workspace items with their Git repositories to Enabled.
C. Configure WorkspaceA to use a Premium Per User (PPU) license.
D. From Tenant settings, set Users can sync workspace items with GitHub repositories to Enabled.

Question 16

You need to create the product dimension.

How should you complete the Apache Spark SQL code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question 17

You need to schedule the population of the medallion layers to meet the technical requirements.

What should you do?

Options:

A. Schedule a data pipeline that calls other data pipelines.
B. Schedule a notebook.
C. Schedule an Apache Spark job.
D. Schedule multiple data pipelines.

Question 18

You need to ensure that the data engineers are notified if any step in populating the lakehouses fails. The solution must meet the technical requirements and minimize development effort.

What should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question 19

You need to ensure that the data analysts can access the gold layer lakehouse.

What should you do?

Options:

A. Add the DataAnalysts group to the Viewer role for WorkspaceA.
B. Share the lakehouse with the DataAnalysts group and grant the Build reports on the default semantic model permission.
C. Share the lakehouse with the DataAnalysts group and grant the Read all SQL Endpoint data permission.
D. Share the lakehouse with the DataAnalysts group and grant the Read all Apache Spark permission.

Question 20

You need to recommend a method to populate the POS1 data to the lakehouse medallion layers.

What should you recommend for each layer? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Options:

Question 21

You need to ensure that usage of the data in the Amazon S3 bucket meets the technical requirements.

What should you do?

Options:

A. Create a workspace identity and enable high concurrency for the notebooks.
B. Create a shortcut and ensure that caching is disabled for the workspace.
C. Create a workspace identity and use the identity in a data pipeline.
D. Create a shortcut and ensure that caching is enabled for the workspace.
