
Google Cloud Associate Data Practitioner (ADP Exam) Questions and Answers

Question 17

You manage a BigQuery table that is used for critical end-of-month reports. The table is updated weekly with new sales data. You want to prevent data loss and reporting issues if the table is accidentally deleted. What should you do?

Options:

A.

Configure the time travel duration on the table to be exactly seven days. On deletion, re-create the deleted table solely from the time travel data.

B.

Schedule the creation of a new snapshot of the table once a week. On deletion, re-create the deleted table using the snapshot and time travel data.

C.

Create a clone of the table. On deletion, re-create the deleted table by copying the content of the clone.

D.

Create a view of the table. On deletion, re-create the deleted table from the view and time travel data.
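Note: for context on the snapshot approach described in option B, a weekly table snapshot can be created with BigQuery's `CREATE SNAPSHOT TABLE` DDL. The sketch below builds that statement in Python; the project, dataset, and table names are hypothetical. Unlike time travel, which expires after at most seven days, a snapshot is a read-only copy that survives deletion of the base table.

```python
from datetime import date

# Hypothetical identifiers -- not part of the question text.
PROJECT = "my-project"
DATASET = "sales"
TABLE = "monthly_sales"

def weekly_snapshot_ddl(snapshot_date: date) -> str:
    """Build DDL for a dated, read-only snapshot of the sales table.

    The snapshot persists independently of the base table, so it can be
    combined with time travel to restore recent changes after a deletion.
    """
    suffix = snapshot_date.strftime("%Y%m%d")
    return (
        f"CREATE SNAPSHOT TABLE `{PROJECT}.{DATASET}.{TABLE}_snap_{suffix}`\n"
        f"CLONE `{PROJECT}.{DATASET}.{TABLE}`"
    )

print(weekly_snapshot_ddl(date(2024, 6, 7)))
```

Running the generated statement once a week (for example, from a scheduled query) keeps a durable restore point alongside the short-lived time travel window.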

Question 18

You are a data analyst working with sensitive customer data in BigQuery. You need to ensure that only authorized personnel within your organization can query this data, while following the principle of least privilege. What should you do?

Options:

A.

Enable access control by using IAM roles.

B.

Update dataset privileges by using the SQL GRANT statement.

C.

Export the data to Cloud Storage, and use signed URLs to authorize access.

D.

Encrypt the data by using customer-managed encryption keys (CMEK).
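Note: options A and B both act on BigQuery's access controls; BigQuery's SQL `GRANT` statement maps onto the same IAM roles. As a sketch (dataset and user names are hypothetical), a least-privilege read-only grant at dataset scope looks like this:

```python
def grant_viewer_sql(dataset: str, user_email: str) -> str:
    """Build a BigQuery DCL statement granting read-only dataset access.

    roles/bigquery.dataViewer allows querying table data but not
    modifying it, matching the principle of least privilege.
    """
    return (
        "GRANT `roles/bigquery.dataViewer`\n"
        f"ON SCHEMA `{dataset}`\n"
        f'TO "user:{user_email}"'
    )

print(grant_viewer_sql("my-project.customer_data", "analyst@example.com"))
```

Note that CMEK (option D) controls encryption at rest, and signed URLs (option C) bypass IAM entirely; neither enforces least-privilege query access within BigQuery.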

Question 19

Your company’s customer support audio files are stored in a Cloud Storage bucket. You plan to analyze the audio files’ metadata and content within BigQuery and run inference by using BigQuery ML. You need to create a corresponding table in BigQuery that represents the bucket containing the audio files. What should you do?

Options:

A.

Create an external table.

B.

Create a temporary table.

C.

Create a native table.

D.

Create an object table.
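Note: an object table (option D) is the BigQuery table type that represents unstructured files in a Cloud Storage bucket, exposing both object metadata and content references for BigQuery ML inference. A sketch of the DDL, built in Python with hypothetical connection and bucket names:

```python
def object_table_ddl(table: str, connection: str, bucket_uri: str) -> str:
    """Build DDL for a BigQuery object table over a Cloud Storage bucket.

    object_metadata = 'SIMPLE' is what distinguishes an object table
    (unstructured files) from an ordinary external table (structured data).
    """
    return (
        f"CREATE EXTERNAL TABLE `{table}`\n"
        f"WITH CONNECTION `{connection}`\n"
        "OPTIONS (\n"
        "  object_metadata = 'SIMPLE',\n"
        f"  uris = ['{bucket_uri}/*']\n"
        ")"
    )

print(object_table_ddl(
    "my-project.support.audio_objects",   # hypothetical table name
    "us.my-connection",                   # hypothetical BQ connection
    "gs://support-audio-bucket",          # hypothetical bucket
))
```

A plain external table (option A) reads structured data such as CSV or Parquet, so it does not fit audio files; native and temporary tables do not reference the bucket at all.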

Question 20

Your organization has decided to move its on-premises Apache Spark-based workload to Google Cloud. You want to be able to manage the code without needing to provision and manage your own cluster. What should you do?

Options:

A.

Migrate the Spark jobs to Dataproc Serverless.

B.

Configure a Google Kubernetes Engine cluster with Spark operators, and deploy the Spark jobs.

C.

Migrate the Spark jobs to Dataproc on Google Kubernetes Engine.

D.

Migrate the Spark jobs to Dataproc on Compute Engine.
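Note: Dataproc Serverless (option A) runs Spark jobs as batches with no cluster to provision; options B through D all require you to create and manage a cluster (on GKE or Compute Engine). A sketch of the `gcloud dataproc batches submit` invocation, assembled in Python; the script path and region are hypothetical:

```python
def serverless_batch_cmd(main_py_uri: str, region: str) -> list[str]:
    """Build the gcloud command that submits a PySpark job to
    Dataproc Serverless as a batch workload -- no cluster is created
    or managed by the user."""
    return [
        "gcloud", "dataproc", "batches", "submit", "pyspark",
        main_py_uri,                # main PySpark script in Cloud Storage
        f"--region={region}",       # region where the batch runs
    ]

print(" ".join(serverless_batch_cmd("gs://my-bucket/job.py", "us-central1")))
```

The same Spark code that ran on-premises is submitted as-is; Google provisions and tears down the execution environment per batch.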