Google Professional Data Engineer Exam Questions and Answers

Question 9

You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee. How can you make that data available while minimizing cost?

Options:

A.

Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName.

B.

Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values.

C.

Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.

D.

Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName, and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.
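
For context, a BigQuery view stores only its defining query, not a copy of the data, which is why it is the cost-minimizing way to expose a derived column. Below is a minimal sketch of creating such a view with the official google-cloud-bigquery Python client; the project ID "my-project" and dataset "hr" are hypothetical placeholders, while the Users table and its columns come from the question.

```python
# A minimal sketch, assuming a hypothetical project "my-project" and
# dataset "hr". Requires the google-cloud-bigquery package.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

view = bigquery.Table("my-project.hr.UsersFullName")
view.view_query = """
    SELECT
      FirstName,
      LastName,
      CONCAT(FirstName, ' ', LastName) AS FullName
    FROM `my-project.hr.Users`
"""

# The view stores only the query text, not a copy of the 400,000 rows,
# so storage cost is unchanged; the concatenation runs at query time.
client.create_table(view)
```

The application can then select FullName from the view exactly as it would from a table, with no UPDATE job, duplicate table, or export pipeline.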

Question 10

You are building an application to share financial market data with consumers, who will receive data feeds. Data is collected from the markets in real time. Consumers will receive the data in the following ways:

    Real-time event stream

    ANSI SQL access to real-time stream and historical data

    Batch historical exports

Which solution should you use?

Options:

A.

Cloud Dataflow, Cloud SQL, Cloud Spanner

B.

Cloud Pub/Sub, Cloud Storage, BigQuery

C.

Cloud Dataproc, Cloud Dataflow, BigQuery

D.

Cloud Pub/Sub, Cloud Dataproc, Cloud SQL
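
For reference, the ingestion side of the Pub/Sub-based architecture can be sketched in a few lines: Pub/Sub carries the real-time event stream, BigQuery provides ANSI SQL over both streamed and historical data, and Cloud Storage holds batch exports. The topic and project names below are hypothetical.

```python
# A minimal sketch of the real-time ingestion layer, assuming a
# hypothetical topic "market-events" in project "my-project".
# Requires the google-cloud-pubsub package.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "market-events")

# Publish one market tick; Pub/Sub fans it out to all subscribers
# as the real-time event stream.
event = {"symbol": "GOOG", "price": 172.35, "ts": "2024-01-01T00:00:00Z"}
future = publisher.publish(topic_path, data=json.dumps(event).encode("utf-8"))
print(f"Published message ID: {future.result()}")
```

From there, a pipeline (or a Pub/Sub BigQuery subscription) can land the same events in BigQuery for SQL access, and BigQuery table exports to Cloud Storage cover the batch historical feed.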

Question 11

You are designing a pipeline that publishes application events to a Pub/Sub topic. You need to aggregate events across hourly intervals before loading the results to BigQuery for analysis. Your solution must be scalable so it can process and load large volumes of events to BigQuery. What should you do?

Options:

A.

Create a streaming Dataflow job to continually read from the Pub/Sub topic and perform the necessary aggregations using tumbling windows

B.

Schedule a batch Dataflow job to run hourly, pulling all available messages from the Pub/Sub topic and performing the necessary aggregations

C.

Schedule a Cloud Function to run hourly, pulling all available messages from the Pub/Sub topic and performing the necessary aggregations

D.

Create a Cloud Function that performs the necessary data processing, executing via a Pub/Sub trigger every time a new message is published to the topic.
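
A streaming Dataflow (Apache Beam) pipeline with hourly tumbling windows, which Beam calls fixed windows, looks roughly like the sketch below. The subscription path, output table, and schema are hypothetical, and the aggregation is simplified to a per-window event count.

```python
# A minimal sketch, assuming a hypothetical subscription and output
# table. Beam's FixedWindows are Dataflow's "tumbling" windows.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions
from apache_beam.transforms import window

options = PipelineOptions()  # pass --runner=DataflowRunner etc. for Dataflow
options.view_as(StandardOptions).streaming = True

with beam.Pipeline(options=options) as p:
    (
        p
        # Continually read events; publish time becomes the event timestamp.
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/app-events")
        # Assign each event to a one-hour tumbling window.
        | "HourlyWindows" >> beam.WindowInto(window.FixedWindows(60 * 60))
        # Aggregate per window; without_defaults() skips empty windows.
        | "CountPerWindow" >> beam.CombineGlobally(
            beam.combiners.CountCombineFn()).without_defaults()
        | "ToRow" >> beam.Map(lambda n: {"event_count": n})
        # Stream each hourly result into BigQuery as it becomes final.
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.hourly_events",
            schema="event_count:INTEGER",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )
```

Because Dataflow autoscales the streaming job, this pattern keeps up with large event volumes without the message-backlog and timeout limits of an hourly batch or Cloud Function approach.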

Question 12

You are architecting a data transformation solution for BigQuery. Your developers are proficient with SQL and want to use the ELT development technique. In addition, your developers need an intuitive coding environment and the ability to manage SQL as code. You need to identify a solution for your developers to build these pipelines. What should you do?

Options:

A.

Use Cloud Composer to load data and run SQL pipelines by using the BigQuery job operators.

B.

Use Dataflow jobs to read data from Pub/Sub, transform the data, and load the data to BigQuery.

C.

Use Dataform to build, manage, and schedule SQL pipelines.

D.

Use Data Fusion to build and execute ETL pipelines.