

Google Certified Professional - Cloud Architect (GCP) Questions and Answers

Question 1

You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?

Options:

A.

Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.

B.

Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups (NEGs) pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.

C.

Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.

D.

Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run Endpoints to its backend service.

Question 2

For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?

Options:

A.

Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.

B.

Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.

C.

Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.

D.

Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.

Question 3

For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost.

Which two actions should you take?

Options:

A.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Coldline”, and Action: “Delete”.

B.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Coldline”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Coldline”, and Action: “Set to Nearline”.

C.

Create a Cloud Storage lifecycle rule with Age: “90”, Storage Class: “Standard”, and Action: “Set to Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Nearline”, and Action: “Set to Coldline”.

D.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and Action: “Delete”.
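As an illustration of the mechanics behind these options (not an endorsement of any particular answer), lifecycle rules are expressed as a JSON policy applied with `gsutil lifecycle set policy.json gs://BUCKET`. This sketch builds the rule pair described in option A; the values are illustrative only:

```python
import json

# Cloud Storage lifecycle policy document, as consumed by `gsutil lifecycle set`.
policy = {
    "lifecycle": {
        "rule": [
            {   # After 30 days, transition objects from Standard to Coldline.
                "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
                "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]},
            },
            {   # After 365 days, delete objects.
                "action": {"type": "Delete"},
                "condition": {"age": 365},
            },
        ]
    }
}

print(json.dumps(policy, indent=2))
```

Rules are evaluated per object, so the age conditions apply from each object's creation time.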

Question 4

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

Options:

A.

Replace the existing data warehouse with BigQuery. Use table partitioning.

B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.

C.

Replace the existing data warehouse with BigQuery. Use federated data sources.

D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.

Question 5

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.

Considering the technical requirements, which components should you use for the ingestion of the data?

Options:

A.

Google Kubernetes Engine with an SSL Ingress

B.

Cloud IoT Core with public/private key pairs

C.

Compute Engine with project-wide SSH keys

D.

Compute Engine with specific SSH keys

Question 6

TerramEarth has a legacy web application that you cannot migrate to the cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost.

What should you do?

Options:

A.

Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

B.

Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

C.

Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.

D.

Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

Question 7

For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?

Options:

A.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

B.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

C.

Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

D.

Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
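Whichever option is chosen, both retention mechanisms in this question reduce to a single numeric setting. A sketch of the arithmetic, assuming 36 months is approximated as 1,095 days (the exact cutoff would be a compliance decision):

```python
# Approximate 36 months as 3 x 365 days.
DAYS = 36 * 365 // 12          # 1095 days

# BigQuery's timePartitioning.expirationMs (REST API) is in milliseconds.
expiration_ms = DAYS * 24 * 60 * 60 * 1000

# The Cloud Storage lifecycle Delete rule uses the same age, in days.
gcs_delete_rule = {"action": {"type": "Delete"}, "condition": {"age": DAYS}}

print(expiration_ms)   # 94608000000
```

With a time-partitioned table, expiration drops whole partitions as they age out, rather than relying on manual deletes.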

Question 8

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.

What should you do?

Options:

A.

Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.

B.

Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.

C.

Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.

D.

Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.

Question 9

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:

• It must provide low latency at minimal cost.

• It must be able to identify duplicate credit cards and must not store plaintext card numbers.

• It should support annual key rotation.

Which storage approach should you adopt for your tokenization service?

Options:

A.

Store the card data in Secret Manager after running a query to identify duplicates.

B.

Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.

C.

Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.

D.

Use column-level encryption to store the data in Cloud SQL.
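The duplicate-detection requirement is what pushes this question toward deterministic encryption: the same card number must always map to the same token. A minimal sketch of that property using a keyed HMAC as a stand-in for a production-grade deterministic scheme; the key and card numbers below are made up:

```python
import hashlib
import hmac

# Hypothetical key; in a real service this would live in Cloud KMS and be
# rotated annually per the stated requirement.
KEY = b"demo-key-2024"

def tokenize(card_number: str) -> str:
    """Deterministically map a card number to a token; plaintext is never stored."""
    return hmac.new(KEY, card_number.encode(), hashlib.sha256).hexdigest()

# Identical inputs produce identical tokens, so duplicates can be found by
# comparing tokens alone, without ever storing the plaintext number.
t1 = tokenize("4111111111111111")
t2 = tokenize("4111111111111111")
t3 = tokenize("5500000000000004")
print(t1 == t2, t1 == t3)   # True False
```

Rotating the key changes every token, so a real design would also need a re-tokenization step at each annual rotation.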

Question 10

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret the predictions. What should you do?

Options:

A.

Use Explainable AI.

B.

Use Vision AI.

C.

Use Google Cloud’s operations suite.

D.

Use Jupyter Notebooks.

Question 11

For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances are allocated to do video encoding and transcoding. You suspect that these Virtual Machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?

Options:

A.

Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.

B.

Use gcloud compute instances list to list the virtual machine instances that have the idle: true label set.

C.

Use the gcloud recommender command to list the idle virtual machine instances.

D.

From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.

Question 12

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?

Options:

A.

Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.

B.

Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.

C.

Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

D.

Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.

Question 13

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

Options:

A.

Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.

B.

Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.

C.

Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

D.

Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

Question 14

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

Options:

A.

Apply a Cloud Armor security policy to external load balancers using a named IP list for Fastly.

B.

Apply a Cloud Armor security policy to external load balancers using the IP addresses that Fastly has published.

C.

Apply a VPC firewall rule on port 443 for Fastly IP address ranges.

D.

Apply a VPC firewall rule on port 443 for network resources tagged with sourceiplist-fastly.

Question 15

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?

Options:

A.

Increase the Pub/Sub Total Timeout retry value.

B.

Move from a Pub/Sub subscriber pull model to a push model.

C.

Turn off Pub/Sub message batching.

D.

Create a backup Pub/Sub message queue.

Question 16

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

Options:

A.

Add a new Dedicated Interconnect connection.

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G.

C.

Add three new Cloud VPN connections.

D.

Add a new Carrier Peering connection.

Question 17

For this question, refer to the Dress4Win case study.

As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in their traffic load. They want to ensure that:

• The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day

• Their administrators are notified automatically when their application reports errors.

• They can filter their aggregated logs down in order to debug one piece of the application across many hosts

Which Google Stackdriver features should they use?

Options:

A.

Logging, Alerts, Insights, Debug

B.

Monitoring, Trace, Debug, Logging

C.

Monitoring, Logging, Alerts, Error Reporting

D.

Monitoring, Logging, Debug, Error Reporting

Question 18

For this question, refer to the Dress4Win case study.

The Dress4Win security team has disabled external SSH access into production virtual machines (VMs) on Google Cloud Platform (GCP). The operations team needs to remotely manage the VMs, build and push Docker containers, and manage Google Cloud Storage objects. What can they do?

Options:

A.

Grant the operations engineers access to use Google Cloud Shell.

B.

Configure a VPN connection to GCP to allow SSH access to the cloud VMs.

C.

Develop a new access request process that grants temporary SSH access to cloud VMs when an operations engineer needs to perform a task.

D.

Have the development team build an API service that allows the operations team to execute specific remote procedure calls to accomplish their tasks.

Question 19

For this question, refer to the Dress4Win case study.

Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services. The Stackdriver dashboard is not reporting the services as healthy. What should they do?

Options:

A.

Install the Stackdriver agent on all of the legacy web servers.

B.

In the Cloud Platform Console download the list of the uptime servers' IP addresses and create an inbound firewall rule

C.

Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)

D.

Configure their legacy web servers to allow requests that contain a User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)
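Two of the options above hinge on recognizing the uptime checker by its User-Agent header. A toy filter illustrating that match; the header values are assumed for illustration, not captured from real traffic:

```python
# Stackdriver/Cloud Monitoring uptime checks identify themselves with a
# User-Agent beginning with this prefix.
UPTIME_CHECK_PREFIX = "GoogleStackdriverMonitoring-UptimeChecks"

def allow_uptime_check(headers: dict) -> bool:
    """Return True if the request looks like an uptime-check probe."""
    return headers.get("User-Agent", "").startswith(UPTIME_CHECK_PREFIX)

probe = {"User-Agent": "GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring)"}
browser = {"User-Agent": "Mozilla/5.0"}
print(allow_uptime_check(probe), allow_uptime_check(browser))   # True False
```

Note that User-Agent strings are trivially spoofable, which is why firewall rules based on the published probe IP ranges are the stronger control.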

Question 20

For this question, refer to the Dress4Win case study.

Dress4Win has end-to-end tests covering 100% of their endpoints. They want to ensure that the move to the cloud does not introduce any new bugs. Which additional testing methods should the developers employ to prevent an outage?

Options:

A.

They should enable Google Stackdriver Debugger on the application code to show errors in the code.

B.

They should add additional unit tests and production scale load tests on their cloud staging environment.

C.

They should run the end-to-end tests in the cloud staging environment to determine if the code is working as intended.

D.

They should add canary tests so developers can measure how much of an impact the new release causes to latency.

Question 21

For this question, refer to the Dress4Win case study.

As part of their new application experience, Dress4Win allows customers to upload images of themselves. The customer has exclusive control over who may view these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in. Which configuration should Dress4Win use?

Options:

A.

Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.

B.

Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud Storage that contains the customer's unique ID.

C.

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring privacy of images.

D.

Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to their image files.

Question 22

For this question, refer to the Dress4Win case study.

Dress4Win would like to become familiar with deploying applications to the cloud by successfully deploying some applications quickly, as is. They have asked for your recommendation. What should you advise?

Options:

A.

Identify self-contained applications with external dependencies as a first move to the cloud.

B.

Identify enterprise applications with internal dependencies and recommend these as a first move to the cloud.

C.

Suggest moving their in-house databases to the cloud and continue serving requests to on-premise applications.

D.

Recommend moving their message queuing servers to the cloud and continue handling requests to on-premise applications.

Question 23

For this question, refer to the Dress4Win case study.

You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority. Which cloud services should you choose?

Options:

A.

Google Cloud Storage Coldline to store the data, and gsutil to access the data.

B.

Google Cloud Storage Nearline to store the data, and gsutil to access the data.

C.

Google Bigtable with US or EU as the location to store the data, and gcloud to access the data.

D.

BigQuery to store the data, and a web server cluster in a managed instance group to access the data.

E.

Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.

Question 24

For this question, refer to the Dress4Win case study.

Dress4Win has asked you for advice on how to migrate their on-premises MySQL deployment to the cloud. They want to minimize downtime and performance impact to their on-premises solution during the migration. Which approach should you recommend?

Options:

A.

Create a dump of the on-premises MySQL master server, and then shut it down, upload it to the cloud environment, and load into a new MySQL cluster.

B.

Setup a MySQL replica server/slave in the cloud environment, and configure it for asynchronous replication from the MySQL master server on-premises until cutover.

C.

Create a new MySQL cluster in the cloud, configure applications to begin writing to both on-premises and cloud MySQL masters, and destroy the original cluster at cutover.

D.

Create a dump of the MySQL replica server into the cloud environment, load it into Google Cloud Datastore, and configure applications to read/write to Cloud Datastore at cutover.

Question 25

Refer to the Altostrat Media case study for the following solution regarding API management and cost control.

Altostrat is using Apigee for API management and wants to ensure their APIs are protected from overuse and abuse. You need to implement an Apigee feature to control the total number of API calls for cost management. What should you do?

Options:

A.

Set up API key validation.

B.

Integrate OAuth 2.0 authorization.

C.

Configure Quota policies.

D.

Activate XML threat protection.
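For context on the Quota option: in Apigee, a call-volume cap is declared as a Quota policy attached to an API proxy. A sketch of such a policy; the limit, interval, and identifier below are invented for illustration:

```xml
<Quota name="Quota-MonthlyCallLimit">
  <!-- Illustrative values; the real ceiling would come from Altostrat's cost model. -->
  <Allow count="1000000"/>
  <Interval>1</Interval>
  <TimeUnit>month</TimeUnit>
  <!-- Count calls per API key rather than globally. -->
  <Identifier ref="request.queryparam.apikey"/>
  <Distributed>true</Distributed>
  <Synchronous>true</Synchronous>
</Quota>
```

Unlike API key validation or OAuth, which control *who* may call, a Quota policy controls *how much* they may call, which is what cost management requires here.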

Question 26

Altostrat stores a large library of media content, including sensitive interviews and documentaries, in Cloud Storage. They are concerned about the confidentiality of this content and want to protect it from unauthorized access. You need to implement a Google-recommended solution that is easy to integrate and provides Altostrat with control and auditability of the encryption keys. What should you do?

Options:

A.

Configure Cloud Storage to use server-side encryption with Google-managed encryption keys. Create a bucket policy to restrict access to only authorized Google groups and required service accounts.

B.

Use Cloud Storage default encryption at rest. Implement fine-grained access control using IAM roles and groups to restrict access to sensitive buckets.

C.

Implement client-side encryption before uploading it to Cloud Storage. Store the encryption keys in a HashiCorp Vault instance deployed on Google Kubernetes Engine (GKE). Implement fine-grained access control to sensitive Cloud Storage buckets using IAM roles.

D.

Use customer-managed encryption keys (CMEK) for all Cloud Storage buckets storing sensitive media content. Implement fine-grained access control using IAM roles and groups to restrict access to sensitive buckets.

Question 27

Refer to the Altostrat Media case study for the following solution regarding the performance analysis of their media processing pipeline.

Altostrat needs to analyze the performance of its media processing pipeline running on a Java-based Cloud Run function. You need to select the most effective tool for the task. What should you do?

Options:

A.

Query logs in Cloud Logging.

B.

Analyze the data via Cloud Profiler.

C.

Instrument the code to use Cloud Trace.

D.

Inspect data from Snapshot Debugger.

Question 28

Refer to the Altostrat Media case study for the following solutions regarding cost optimization for batch processing and microservices testing strategies.

Altostrat is experiencing fluctuating computational demands for its batch processing jobs. These jobs are not time-critical and can tolerate occasional interruptions. You want to optimize cloud costs and address batch processing needs. What should you do?

Options:

A.

Configure reserved VM instances.

B.

Deploy spot VM instances.

C.

Set up standard VM instances.

D.

Use Cloud Run functions.

Question 29

Refer to the Altostrat Media case study for the following solution.

Altostrat is concerned about sophisticated, multi-vector Distributed Denial of Service (DDoS) attacks targeting various layers of their infrastructure. DDoS attacks could potentially disrupt video streaming and cause financial losses. You need to mitigate this risk. What should you do?

Options:

A.

Set up VPC Service Controls to restrict access to sensitive resources and prevent data exfiltration.

B.

Configure Cloud Next Generation Firewall (NGFW) with custom rules to filter malicious traffic at the network level.

C.

Deploy Google Cloud Armor with pre-configured and custom rules for L3/L4 and L7 protection.

D.

Activate Security Command Center to monitor security posture and detect potential threats.

Question 30

Altostrat's development team is using a microservices architecture for their application. You need to select the most suitable testing approach to ensure that individual microservices function correctly in isolation. What should you do?

Options:

A.

Run unit testing.

B.

Use load testing.

C.

Perform end-to-end testing.

D.

Execute integration testing.
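For context on what "in isolation" means here: a unit test exercises one service's logic with its collaborators replaced by test doubles, so no other microservice needs to be running. A generic sketch; the service function and its catalog dependency are invented for illustration:

```python
from unittest import mock

# Hypothetical microservice logic under test: a price lookup plus a discount rule.
def discounted_price(item_id: str, catalog_client) -> float:
    """Apply a 10% discount to the catalog price."""
    base = catalog_client.get_price(item_id)
    return round(base * 0.9, 2)

# Unit test: the catalog dependency is replaced with a mock, so the test
# exercises only this service's logic -- no other microservice is involved.
catalog = mock.Mock()
catalog.get_price.return_value = 20.00

assert discounted_price("sku-1", catalog) == 18.00
catalog.get_price.assert_called_once_with("sku-1")
print("unit test passed")
```

Integration and end-to-end tests, by contrast, would require the real catalog service to be deployed and reachable.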

Question 31

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform’s resilience to changes in mobile network latency. What should you do?

Options:

A.

Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.

B.

Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.

C.

Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.

D.

Create an opt-in beta of the game that runs on players' mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.

Question 32

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?

Options:

A.

Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.

B.

Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.

C.

Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.

D.

Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.

Question 33

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:

• Services are deployed redundantly across multiple regions in the US and Europe.

• Only frontend services are exposed on the public internet.

• They can provide a single frontend IP for their fleet of services.

• Deployment artifacts are immutable.

Which set of products should they use?

Options:

A.

Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine

B.

Google Cloud Storage, Google App Engine, Google Network Load Balancer

C.

Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer

D.

Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager

Question 34

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

Options:

A.

Tests should scale well beyond the prior approaches.

B.

Unit tests are no longer required, only end-to-end tests.

C.

Tests should be applied after the release is in the production environment.

D.

Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Question 35

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

Options:

A.

Container Engine, Cloud Pub/Sub, and Cloud SQL

B.

Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery

C.

Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D.

Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow

E.

Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Question 36

For this question, refer to the Mountkirk Games case study.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?

Options:

A.

Verify that the database is online.

B.

Verify that the project quota hasn't been exceeded.

C.

Verify that the new feature code did not introduce any performance bugs.

D.

Verify that the load-testing team is not running their tool against production.

Question 37

For this question, refer to the Mountkirk Games case study.

Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

Options:

A.

Create a scalable environment in GCP for simulating production load.

B.

Use the existing infrastructure to test the GCP-based backend at scale.

C.

Build stress tests into each component of your application using resources internal to GCP to simulate load.

D.

Create a set of static environments in GCP to test different levels of load — for example, high, medium, and low.

Question 38

For this question, refer to the Mountkirk Games case study

Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments. Developers and testers can access each other's environments and resources, but they cannot access staging or production resources. The staging environment needs access to some services from production.

What should you do to isolate development environments from staging and production?

Options:

A.

Create a project for development and test and another for staging and production.

B.

Create a network for development and test and another for staging and production.

C.

Create one subnetwork for development and another for staging and production.

D.

Create one project for development, a second for staging and a third for production.

Question 39

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic access to a legacy game's Firestore database. Access should be as restricted as possible. What should you do?

Options:

A.

Create a service account (SA) in the legacy game's Google Cloud project, add this SA in the new game's IAM page, and then give it the Firebase Admin role in both projects

B.

Create a service account (SA) in the legacy game's Google Cloud project, add a second SA in the new game's IAM page, and then give the Organization Admin role to both SAs

C.

Create a service account (SA) in the legacy game's Google Cloud project, give it the Firebase Admin role, and then migrate the new game to the legacy game's project.

D.

Create a service account (SA) in the legacy game's Google Cloud project, give the SA the Organization Admin role, and then give it the Firebase Admin role in both projects.

Question 40

Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud. You want to streamline the process and follow Google-recommended practices. What should you do?

Options:

A.

Configure Workload Identity and service accounts to be used by the application platform.

B.

Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.

C.

Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

D.

Configure HashiCorp Vault on Compute Engine, and use customer-managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

Question 41

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

Options:

A.

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs

D.

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

Question 42

For this question, refer to the JencoMart case study.

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? Choose 3 answers

Options:

A.

Delete the virtual machine (VM) and disks and create a new one.

B.

Delete the instance, attach the disk to a new VM, and investigate.

C.

Take a snapshot of the disk and connect to a new machine to investigate.

D.

Check inbound firewall rules for the network the machine is connected to.

E.

Connect the machine to another network with very simple firewall rules and investigate.

F.

Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Question 43

For this question, refer to the JencoMart case study.

The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources. What Google domain and project structure should you recommend?

Options:

A.

Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.

B.

Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.

C.

Create a single G Suite account to manage users with each stage of each application in its own project.

D.

Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Question 44

For this question, refer to the JencoMart case study.

The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)

Options:

A.

A single VPN tunnel, which limits throughput

B.

A tier of Google Cloud Storage that is not suited for this task

C.

A copy command that is not suited to operate over long distances

D.

Fewer virtual machines (VMs) in GCP than on-premises machines

E.

A separate storage layer outside the VMs, which is not suited for this task

F.

Complicated internet connectivity between the on-premises infrastructure and GCP

Question 45

For this question, refer to the JencoMart case study.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

Options:

A.

Error rates for requests from Asia

B.

Latency difference between US and Asia

C.

Total visits, error rates, and latency from Asia

D.

Total visits and average latency for users in Asia

E.

The number of character sets present in the database

Question 46

For this question, refer to the JencoMart case study.

JencoMart wants to move their User Profiles database to Google Cloud Platform. Which Google Database should they use?

Options:

A.

Cloud Spanner

B.

Google BigQuery

C.

Google Cloud SQL

D.

Google Cloud Datastore

Question 47

For this question, refer to the TerramEarth case study.

You analyzed TerramEarth's business requirement to reduce downtime, and found that they can achieve a majority of the time savings by reducing customers' wait time for parts. You decided to focus on reducing the 3-week aggregate reporting time. Which modifications to the company's processes should you recommend?

Options:

A.

Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B.

Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C.

Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D.

Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

Question 48

For this question, refer to the TerramEarth case study.

TerramEarth has equipped unconnected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs. What should they do?

Options:

A.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket.

B.

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Google BigQuery.

C.

Push the telemetry data in real time to a streaming Dataflow job that compresses the data, and store it in Cloud Bigtable.

D.

Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket.

Question 49

You want to store critical business information in Cloud Storage buckets. The information is regularly changed, but previous versions need to be referenced on a regular basis. You want to ensure that there is a record of all changes to any information in these buckets. You want to ensure that accidental edits or deletions can be easily rolled back. Which feature should you enable?

Options:

A.

Bucket Lock

B.

Object Versioning

C.

Object change notification

D.

Object Lifecycle Management
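As background on Object Versioning: when it is enabled on a bucket, overwriting or deleting an object archives the previous generation rather than destroying it, so an accidental edit can be undone by restoring a noncurrent generation. A toy Python model of the idea (the class, bucket, and object names are invented for illustration, not a Cloud Storage API):

```python
class VersionedBucket:
    """Toy model of Cloud Storage Object Versioning: each overwrite
    archives the prior generation; rollback restores an old one."""

    def __init__(self):
        self.live = {}          # object name -> (generation, data)
        self.noncurrent = {}    # object name -> list of archived (generation, data)
        self._gen = 0

    def upload(self, name, data):
        self._gen += 1
        if name in self.live:
            # Overwrite: the old live version becomes noncurrent, not lost.
            self.noncurrent.setdefault(name, []).append(self.live[name])
        self.live[name] = (self._gen, data)

    def rollback(self, name):
        """Restore the most recent noncurrent generation as a new live version."""
        _, old_data = self.noncurrent[name].pop()
        self.upload(name, old_data)

b = VersionedBucket()
b.upload("policy.txt", "v1")
b.upload("policy.txt", "v2")    # v1 is archived, not destroyed
b.rollback("policy.txt")
print(b.live["policy.txt"][1])  # v1
```

Note that the rollback itself creates a new generation, mirroring how restoring an object in a versioned bucket is just another write.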

Question 50

Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application's performance. What should you do?

Options:

A.

Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from the instance template.

B.

Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the custom image.

C.

Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.

D.

Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled managed instance group from the custom image.

Question 51

The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud. Which three practices should you recommend? Choose 3 answers

Options:

A.

Port the application code to run on Google App Engine.

B.

Integrate Cloud Dataflow into the application to capture real-time metrics.

C.

Instrument the application with a monitoring tool like Stackdriver Debugger.

D.

Select an automation framework to reliably provision the cloud infrastructure.

E.

Deploy a continuous integration tool with automated testing in a staging environment.

F.

Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable.

Question 52

Your company and one of its partners each have a Google Cloud project in separate organizations. Your company's project (prj-a) runs in Virtual Private Cloud vpc-a. The partner's project (prj-b) runs in vpc-b. There are two instances running in vpc-a and one instance running in vpc-b. Subnets defined in both VPCs do not overlap. You need to ensure that all instances communicate with each other via internal IPs, minimizing latency and maximizing throughput. What should you do?

Options:

A.

Set up a network peering between vpc-a and vpc-b

B.

Set up a VPN between vpc-a and vpc-b using Cloud VPN

C.

Configure IAP TCP forwarding on the instance in vpc-b, and then launch the corresponding gcloud command from one of the instances in vpc-a.

D.

1. Create an additional instance in vpc-a.

2. Create an additional instance in vpc-b.

3. Install OpenVPN in the newly created instances.

4. Configure a VPN tunnel between vpc-a and vpc-b with the help of OpenVPN.

Question 53

Your organization uses Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS) to manage a complex Kubernetes environment across multiple cloud providers. You need to deploy a solution that streamlines configuration management, enforces security policies, and ensures consistent application deployment across all of the environments. You want to follow Google-recommended practices. What should you do?

Options:

A.

Leverage Argo CD for GitOps-based continuous delivery and Open Policy Agent (OPA) for policy enforcement, and develop a controller for multi-cluster configuration management.

B.

Deploy Kustomize for configuration customization, Config Sync with multiple Git repositories, and a script to enforce security policies.

C.

Utilize Config Sync as part of GKE to synchronize configurations from a centralized repository, and utilize Policy Controller to enforce policies using OPA Gatekeeper.

D.

Deploy Crossplane for managing cloud resources as Kubernetes objects. FluxCD for GitOps-based configuration synchronization, and Kyverno for policy enforcement.

Question 54

You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data for proper processing.

Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?

Options:

A.

Cloud Pub/Sub alone

B.

Cloud Pub/Sub to Cloud DataFlow

C.

Cloud Pub/Sub to Stackdriver

D.

Cloud Pub/Sub to Cloud SQL
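By way of background for this question: Pub/Sub on its own provides at-least-once delivery with no global ordering guarantee, which is why a processing stage such as Dataflow is needed to deduplicate redeliveries and reorder events before they reach the backend. A minimal stdlib Python sketch of that downstream logic (the message IDs, timestamps, and payloads are invented for illustration):

```python
import heapq

def dedupe_and_order(messages):
    """Drop repeated message IDs and emit events in timestamp order.

    `messages` is an iterable of (msg_id, timestamp, payload) tuples,
    a simplified stand-in for Pub/Sub deliveries, which may arrive out
    of order and more than once. Dataflow does this kind of work at scale.
    """
    seen = set()
    heap = []
    for msg_id, ts, payload in messages:
        if msg_id in seen:
            continue            # duplicate redelivery: skip it
        seen.add(msg_id)
        heapq.heappush(heap, (ts, msg_id, payload))
    while heap:
        ts, _, payload = heapq.heappop(heap)
        yield ts, payload

deliveries = [("a", 2, "second"), ("b", 1, "first"), ("a", 2, "second")]
print(list(dedupe_and_order(deliveries)))  # [(1, 'first'), (2, 'second')]
```

The sketch buffers everything before emitting; a real streaming pipeline instead uses windows and watermarks to decide when an ordered batch is safe to release.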

Question 55

Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed. You want to optimize storage and follow Google-recommended practices. What should you do?

Options:

A.

Configure the expiration time for your tables at 45 days

B.

Make the tables time-partitioned, and configure the partition expiration at 45 days

C.

Rely on BigQuery’s default behavior to prune application logs older than 45 days

D.

Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days
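To make the partition-expiration approach concrete: a time-partitioned table with a 45-day partition expiration lets BigQuery drop old partitions automatically, with no scan-and-delete jobs to run. A hedged sketch of what the configuration looks like (the dataset, table, and column names are invented; `partition_expiration_days` is the BigQuery DDL option, while the API expresses the same setting in milliseconds):

```python
# BigQuery DDL takes the expiration in days; the tables API takes
# expirationMs, so both forms are shown for comparison.
EXPIRATION_DAYS = 45
expiration_ms = EXPIRATION_DAYS * 24 * 60 * 60 * 1000

ddl = f"""
CREATE TABLE app_logs.service_a
(
  log_time TIMESTAMP,
  severity STRING,
  message  STRING
)
PARTITION BY DATE(log_time)
OPTIONS (partition_expiration_days = {EXPIRATION_DAYS});
"""

print(expiration_ms)  # 3888000000
```

Note the distinction the question turns on: this expires whole partitions (keeping recent logs), whereas a table-level expiration time deletes the entire table 45 days after creation.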

Question 56

You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20 Gbps. You want to follow Google-recommended practices. How should you set up the connection?

Options:

A.

Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.

B.

Create a VPC and connect it to your on-premises data center using a single Cloud VPN.

C.

Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.

D.

Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using a single Cloud VPN.

Question 57

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

Options:

A.

Web applications deployed using App Engine standard environment

B.

RabbitMQ deployed using an unmanaged instance group

C.

Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode

D.

Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

Question 58

For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.

Considering Dress4Win’s business and technical requirements, what should you do?

Options:

A.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.

B.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encryption before storing files in Cloud Storage.

C.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Utilize Google's default encryption at rest when storing files in Cloud Storage.

D.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.

Question 59

For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?

Options:

A.

Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.

B.

Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.

C.

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.

D.

Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Launcher.

Question 60

For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to give insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.

What should you do?

Options:

A.

Use Stackdriver Trace to create a trace list analysis.

B.

Use Stackdriver Monitoring to create a dashboard on the project’s activity.

C.

Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.

D.

Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.

Question 61

For this question, refer to the Dress4Win case study. You want to ensure that your on-premises architecture meets business requirements before you migrate your solution.

What change in the on-premises architecture should you make?

Options:

A.

Replace RabbitMQ with Google Pub/Sub.

B.

Downgrade MySQL to v5.7, which is supported by Cloud SQL for MySQL.

C.

Resize compute resources to match predefined Compute Engine machine types.

D.

Containerize the microservices and host them in Google Kubernetes Engine.

Question 62

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

Options:

A.

Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.

Implement managed instance groups for the Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Question 63

For this question, refer to the Cymbal Retail case study. Cymbal wants to migrate its diverse database environment to Google Cloud while ensuring high availability and performance for online customers. The company also wants to efficiently store and access large product images. These images typically stay in the catalog for more than 90 days and are accessed less and less frequently. You need to select the appropriate Google Cloud services for each database. You also need to design a storage solution for the product images that optimizes cost and performance. What should you do?

Options:

A.

Migrate all databases to Spanner for consistency, and use Cloud Storage Standard for image storage

B.

Migrate all databases to self-managed instances on Compute Engine, and use a persistent disk for image storage.

C.

Migrate MySQL and SQL Server to Spanner, Redis to Memorystore, and MongoDB to Firestore. Use Cloud Storage Standard for image storage, and move images to Cloud Storage Nearline storage when products become less popular.

D.

Migrate MySQL to Cloud SQL, SQL Server to Cloud SQL, Redis to Memorystore, and MongoDB to Firestore. Use Cloud Storage Standard for image storage, and move images to Cloud Storage Coldline storage when products become less popular.

Question 64

For this question, refer to the Cymbal Retail case study. Cymbal wants you to connect their on-premises systems to Google Cloud while maintaining secure communication between their on-premises and cloud environments. You want to follow Google's recommended approach to ensure the most secure and manageable solution. What should you do?

Options:

A.

Use a bastion host to provide secure access to Google Cloud resources from Cymbal's on-premises systems.

B.

Configure a static VPN connection using SSH tunnels to connect the on-premises systems to Google Cloud.

C.

Configure a Cloud VPN gateway and establish a VPN tunnel. Configure firewall rules to restrict access to specific resources and services based on IP addresses and ports.

D.

Use Google Cloud's VPC peering to connect Cymbal's on-premises network to Google Cloud.