

Google Cloud Certified - Associate Cloud Engineer Questions and Answers

Question 1

You have an application that is currently processing transactions by using a group of managed VM instances. You need to migrate the application so that it is serverless and scalable. You want to implement an asynchronous transaction processing system, while minimizing management overhead. What should you do?

Options:

A.

Install Kafka on VM instances to acknowledge incoming transactions. Use Cloud Run to process transactions.

B.

Install Kafka on VM Instances to acknowledge incoming transactions. Use VM Instances to process transactions.

C.

Use Pub/Sub to acknowledge incoming transactions. Use VM instances to process transactions.

D.

Use Pub/Sub to acknowledge incoming transactions. Use Cloud Run to process transactions.

Question 2

You are managing a fleet of Compute Engine Linux instances in a Google Cloud project. Your company's engineering team requires SSH access to all instances to perform routine maintenance tasks. You need to manage the SSH access for the engineering team and you want to minimize operational overhead when engineers join or leave the team. What should you do?

Options:

A.

Create a Google Group for all engineering team members and set up OS Login for this group on the project. Manage group membership when engineers join or leave the team.

B.

Create a Google Group for all engineering team members, and grant them the Compute Viewer IAM role. Manage group membership when engineers join or leave the team.

C.

Create a single SSH key pair to be shared by all engineering team members. Add the public SSH key to project metadata.

D.

Create an SSH key pair for each engineer on the team and add the public SSH key to the metadata of the relevant instances.

Question 3

You need to set up a policy so that videos stored in a specific Cloud Storage Regional bucket are moved to Coldline after 90 days, and then deleted after one year from their creation. How should you set up the policy?

Options:

A.

Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 275 days (365 – 90)

B.

Use Cloud Storage Object Lifecycle Management using Age conditions with SetStorageClass and Delete actions. Set the SetStorageClass action to 90 days and the Delete action to 365 days.

C.

Use gsutil rewrite and set the Delete action to 275 days (365-90).

D.

Use gsutil rewrite and set the Delete action to 365 days.
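
For reference, a lifecycle configuration of the kind described in options A and B can be written as JSON and applied with gsutil. This is a minimal sketch only; the file name and bucket name below are hypothetical.

    # Write the lifecycle configuration (file and bucket names are hypothetical).
    cat > lifecycle.json <<'EOF'
    {
      "rule": [
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 90}},
        {"action": {"type": "Delete"}, "condition": {"age": 365}}
      ]
    }
    EOF
    # Apply the configuration to the bucket.
    gsutil lifecycle set lifecycle.json gs://example-videos-bucket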

Question 4

You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few requests per day. You want to minimize costs. What should you do?

Options:

A.

Deploy the container on Cloud Run.

B.

Deploy the container on Cloud Run on GKE.

C.

Deploy the container on App Engine Flexible.

D.

Deploy the container on Google Kubernetes Engine, with cluster autoscaling and horizontal pod autoscaling enabled.

Question 5

Your company is using Google Workspace to manage employee accounts. Anticipated growth will increase the number of personnel from 100 employees to 1,000 employees within 2 years. Most employees will need access to your company's Google Cloud account. The systems and processes will need to support 10x growth without performance degradation, unnecessary complexity, or security issues. What should you do?

Options:

A.

Migrate the users to Active Directory. Connect the Human Resources system to Active Directory. Turn on Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from Cloud Identity to Active Directory.

B.

Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud Identity.

C.

Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor authentication for domain wide delegation.

D.

Use a third-party identity provider service through federation. Synchronize the users from Google Workspace to the third-party provider in real time.

Question 6

You need to provide a cost estimate for a Kubernetes cluster using the GCP pricing calculator for Kubernetes. Your workload requires high IOPS, and you will also be using disk snapshots. You start by entering the number of nodes, average hours, and average days. What should you do next?

Options:

A.

Fill in local SSD. Fill in persistent disk storage and snapshot storage.

B.

Fill in local SSD. Add estimated cost for cluster management.

C.

Select Add GPUs. Fill in persistent disk storage and snapshot storage.

D.

Select Add GPUs. Add estimated cost for cluster management.

Question 7

You are planning to migrate your on-premises data to Google Cloud. The data includes:

• 200 TB of video files in SAN storage

• Data warehouse data stored on Amazon Redshift

• 20 GB of PNG files stored on an S3 bucket

You need to load the video files into a Cloud Storage bucket, transfer the data warehouse data into BigQuery, and load the PNG files into a second Cloud Storage bucket. You want to follow Google-recommended practices and avoid writing any code for the migration. What should you do?

Options:

A.

Use gcloud storage for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.

B.

Use Transfer Appliance for the videos, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.

C.

Use Storage Transfer Service for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.

D.

Use Cloud Data Fusion for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.

Question 8

You have a Dockerfile that you need to deploy on Kubernetes Engine. What should you do?

Options:

A.

Use kubectl app deploy .

B.

Use gcloud app deploy .

C.

Create a docker image from the Dockerfile and upload it to Container Registry. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.

D.

Create a docker image from the Dockerfile and upload it to Cloud Storage. Create a Deployment YAML file to point to that image. Use kubectl to create the deployment with that file.
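
For context, the flow described in options C and D involves building the image, pushing it to a registry, and creating a Deployment. The sketch below assumes Docker is already authenticated to Container Registry (for example via gcloud auth configure-docker); the project, image, and deployment names are hypothetical.

    # Build the image from the Dockerfile and push it to Container Registry.
    docker build -t gcr.io/my-project/myapp:v1 .
    docker push gcr.io/my-project/myapp:v1
    # Create a Deployment that references the pushed image.
    kubectl create deployment myapp --image=gcr.io/my-project/myapp:v1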

Question 9

Your company uses BigQuery for data warehousing. Over time, many different business units in your company have created 1000+ datasets across hundreds of projects. Your CIO wants you to examine all datasets to find tables that contain an employee_ssn column. You want to minimize effort in performing this task. What should you do?

Options:

A.

Go to Data Catalog and search for employee_ssn in the search box.

B.

Write a shell script that uses the bq command line tool to loop through all the projects in your organization.

C.

Write a script that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find the employee_ssn column.

D.

Write a Cloud Dataflow job that loops through all the projects in your organization and runs a query on INFORMATION_SCHEMA.COLUMNS view to find employee_ssn column.

Question 10

You are managing an application deployed on Cloud Run. The development team has released a new version of the application. You want to deploy and redirect traffic to this new version of the application. To ensure traffic to the new version of the application is served with no startup time, you want to ensure that there are two idle instances available for incoming traffic before adjusting the traffic flow. You also want to minimize administrative overhead. What should you do?

Options:

A.

Ensure the checkbox "Serve this revision immediately" is unchecked when deploying the new revision. Before changing the traffic rules, use a traffic simulation tool to send load to the new revision.

B.

Configure service autoscaling and set the minimum number of instances to 2.

C.

Configure revision autoscaling for the new revision and set the minimum number of instances to 2.

D.

Configure revision autoscaling for the existing revision and set the minimum number of instances to 2.

Question 11

The storage costs for your application logs have far exceeded the project budget. The logs are currently being retained indefinitely in the Cloud Storage bucket myapp-gcp-ace-logs. You have been asked to remove logs older than 90 days from your Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend. What should you do?

Options:

A.

Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/ to find and remove items older than 90 days. Schedule the script with cron.

B.

Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-json-file.

C.

Write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-xml-file.

D.

Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/ to find and remove items older than 90 days. Repeat this process every morning.

Question 12

You built an application on Google Cloud Platform that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What should you do?

Options:

A.

Add the support team group to the roles/monitoring.viewer role

B.

Add the support team group to the roles/spanner.databaseUser role.

C.

Add the support team group to the roles/spanner.databaseReader role.

D.

Add the support team group to the roles/stackdriver.accounts.viewer role.

Question 13

You have designed a solution on Google Cloud Platform (GCP) that uses multiple GCP products. Your company has asked you to estimate the costs of the solution. You need to provide estimates for the monthly total cost. What should you do?

Options:

A.

For each GCP product in the solution, review the pricing details on the product's pricing page. Use the pricing calculator to total the monthly costs for each GCP product.

B.

For each GCP product in the solution, review the pricing details on the product's pricing page. Create a Google Sheet that summarizes the expected monthly costs for each product.

C.

Provision the solution on GCP. Leave the solution provisioned for 1 week. Navigate to the Billing Report page in the Google Cloud Platform Console. Multiply the 1 week cost to determine the monthly costs.

D.

Provision the solution on GCP. Leave the solution provisioned for 1 week. Use Stackdriver to determine the provisioned and used resource amounts. Multiply the 1 week cost to determine the monthly costs.

Question 14

You deployed an application on a managed instance group in Compute Engine. The application accepts Transmission Control Protocol (TCP) traffic on port 389 and requires you to preserve the IP address of the client who is making a request. You want to expose the application to the internet by using a load balancer. What should you do?

Options:

A.

Expose the application by using an internal passthrough Network Load Balancer.

B.

Expose the application by using an external passthrough Network Load Balancer.

C.

Expose the application by using a global external proxy Network Load Balancer.

D.

Expose the application by using a regional external proxy Network Load Balancer.

Question 15

Your company is moving from an on-premises environment to Google Cloud Platform (GCP). You have multiple development teams that use Cassandra environments as backend databases. They all need a development environment that is isolated from other Cassandra instances. You want to move to GCP quickly and with minimal support effort. What should you do?

Options:

A.

1. Build an instruction guide to install Cassandra on GCP. 2. Make the instruction guide accessible to your developers.

B.

1. Advise your developers to go to Cloud Marketplace. 2. Ask the developers to launch a Cassandra image for their development work.

C.

1. Build a Cassandra Compute Engine instance and take a snapshot of it. 2. Use the snapshot to create instances for your developers.

D.

1. Build a Cassandra Compute Engine instance and take a snapshot of it. 2. Upload the snapshot to Cloud Storage and make it accessible to your developers. 3. Build instructions to create a Compute Engine instance from the snapshot so that developers can do it themselves.

Question 16

You need to create a Compute Engine instance in a new project that doesn’t exist yet. What should you do?

Options:

A.

Using the Cloud SDK, create a new project, enable the Compute Engine API in that project, and then create the instance specifying your new project.

B.

Enable the Compute Engine API in the Cloud Console, use the Cloud SDK to create the instance, and then use the --project flag to specify a new project.

C.

Using the Cloud SDK, create the new instance, and use the --project flag to specify the new project. Answer yes when prompted by the Cloud SDK to enable the Compute Engine API.

D.

Enable the Compute Engine API in the Cloud Console. Go to the Compute Engine section of the Console to create a new instance, and look for the Create In A New Project option in the creation form.
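
As a rough illustration of the flow in option A, the commands below create the project, enable the API, and create the instance. It assumes a billing account is already linked to the new project; the project, zone, and instance names are hypothetical.

    # Create the project, enable the Compute Engine API, then create the instance.
    gcloud projects create ace-sandbox-project
    gcloud services enable compute.googleapis.com --project=ace-sandbox-project
    gcloud compute instances create instance-1 --zone=us-central1-a --project=ace-sandbox-project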

Question 17

You have developed a containerized web application that will serve internal colleagues during business hours. You want to ensure that no costs are incurred outside of the hours the application is used. You have just created a new Google Cloud project and want to deploy the application. What should you do?

Options:

A.

Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero.

B.

Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to zero.

C.

Deploy the container on App Engine flexible environment with autoscaling, and set the value min_instances to zero in the app.yaml.

D.

Deploy the container on App Engine flexible environment with manual scaling, and set the value instances to zero in the app.yaml.

Question 18

You manage three Google Cloud projects with the Cloud Monitoring API enabled. You want to follow Google-recommended practices to visualize CPU and network metrics for all three projects together. What should you do?

Options:

A.

1. Create a Cloud Monitoring Dashboard. 2. Collect metrics and publish them into the Pub/Sub topics. 3. Add CPU and network Charts for each of the three projects.

B.

1. Create a Cloud Monitoring Dashboard. 2. Select the CPU and Network metrics from the three projects. 3. Add CPU and network Charts for each of the three projects.

C.

1. Create a Service Account and apply roles/viewer on the three projects. 2. Collect metrics and publish them to the Cloud Monitoring API. 3. Add CPU and network Charts for each of the three projects.

D.

1. Create a fourth Google Cloud project. 2. Create a Cloud Workspace from the fourth project and add the other three projects.

Question 19

Your organization has strict requirements to control access to Google Cloud projects. You need to enable your Site Reliability Engineers (SREs) to approve requests from the Google Cloud support team when an SRE opens a support case. You want to follow Google-recommended practices. What should you do?

Options:

A.

Add your SREs to roles/iam.roleAdmin role.

B.

Add your SREs to roles/accessapproval.approver role.

C.

Add your SREs to a group and then add this group to roles/iam.roleAdmin role.

D.

Add your SREs to a group and then add this group to roles/accessapproval.approver role.

Question 20

The DevOps group in your organization needs full control of Compute Engine resources in your development project. However, they should not have permission to create or update any other resources in the project. You want to follow Google's recommendations for setting permissions for the DevOps group. What should you do?

Options:

A.

Grant the basic role roles/viewer and the predefined role roles/compute.admin to the DevOps group.

B.

Create an IAM policy and grant all compute.instanceAdmin.* permissions to the policy. Attach the policy to the DevOps group.

C.

Create a custom role at the folder level and grant all compute.instanceAdmin.* permissions to the role. Grant the custom role to the DevOps group.

D.

Grant the basic role roles/editor to the DevOps group.

Question 21

You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver Monitoring dashboard. What should you do?

Options:

A.

Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.

B.

For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.

C.

Configure a single Stackdriver account, and link all projects to the same account.

D.

Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.

Question 22

You want to select and configure a solution for storing and archiving data on Google Cloud Platform. You need to support compliance objectives for data from one geographic location. This data is archived after 30 days and needs to be accessed annually. What should you do?

Options:

A.

Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.

B.

Select Multi-Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.

C.

Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Nearline Storage.

D.

Select Regional Storage. Add a bucket lifecycle rule that archives data after 30 days to Coldline Storage.

Question 23

You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

Options:

A.

Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.

B.

Recreate all the nodes of the GKE cluster to enable GPUs on all of them.

C.

Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.

D.

Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
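
For reference, a GPU node pool can be added to an existing cluster roughly as sketched below; the cluster name, node pool name, and zone are hypothetical, and pods then target the pool through the gke-accelerator nodeSelector.

    # Add a GPU node pool to the existing cluster (cluster, pool, and zone names are hypothetical).
    gcloud container node-pools create gpu-pool \
      --cluster=ml-cluster --zone=us-central1-a \
      --accelerator=type=nvidia-tesla-p100,count=1 \
      --num-nodes=1 --enable-autoscaling --min-nodes=0 --max-nodes=3
    # Pods request these nodes with a selector such as:
    #   nodeSelector:
    #     cloud.google.com/gke-accelerator: nvidia-tesla-p100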

Question 24

You have 32 GB of data in a single file that you need to upload to a Nearline Storage bucket. The WAN connection you are using is rated at 1 Gbps, and you are the only one on the connection. You want to use as much of the rated 1 Gbps as possible to transfer the file rapidly. How should you upload the file?

Options:

A.

Use the GCP Console to transfer the file instead of gsutil.

B.

Enable parallel composite uploads using gsutil on the file transfer.

C.

Decrease the TCP window size on the machine initiating the transfer.

D.

Change the storage class of the bucket from Nearline to Multi-Regional.
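
As a sketch of option B, parallel composite uploads can be enabled for a single transfer by lowering the composite upload threshold; the file and bucket names below are hypothetical.

    # Split the upload into parallel composite parts (file and bucket names are hypothetical).
    gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" \
      cp large-file.bin gs://example-nearline-bucket/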

Question 25

You will have several applications running on different Compute Engine instances in the same project. You want to specify at a more granular level the service account each instance uses when calling Google Cloud APIs. What should you do?

Options:

A.

When creating the instances, specify a Service Account for each instance

B.

When creating the instances, assign the name of each Service Account as instance metadata

C.

After starting the instances, use gcloud compute instances update to specify a Service Account for each instance

D.

After starting the instances, use gcloud compute instances update to assign the name of the relevant Service Account as instance metadata

Question 26

Your web application is hosted on Cloud Run and needs to query a Cloud SQL database. Every morning during a traffic spike, you notice API quota errors in Cloud SQL logs. The project has already reached the maximum API quota. You want to make a configuration change to mitigate the issue. What should you do?

Options:

A.

Modify the minimum number of Cloud Run instances.

B.

Set a minimum concurrent requests environment variable for the application.

C.

Modify the maximum number of Cloud Run instances.

D.

Use traffic splitting.

Question 27

Your company's accounting department needs to run an overnight batch workload every day. You must implement a solution that minimizes the cost to run this workload and automatically retries the batch if an execution fails. What should you do?

Options:

A.

Develop an application that runs the batch workload, and deploy the application as a Google Kubernetes Engine (GKE) CronJob.

B.

Develop a web application that listens for incoming requests and deploy the application as a Google Kubernetes Engine (GKE) Deployment. Use Cloud Scheduler to call the HTTP endpoint.

C.

Develop an application that runs the batch workload, and deploy the application as a Cloud Run Job. Use Cloud Scheduler to trigger the Job.

D.

Develop a web application that listens for incoming requests and deploy the application as a Cloud Run service. Use Cloud Scheduler to call the HTTP endpoint.

Question 28

You want to configure a solution for archiving data in a Cloud Storage bucket. The solution must be cost-effective. Data with multiple versions should be archived after 30 days. Previous versions are accessed once a month for reporting. This archive data is also occasionally updated at month-end. What should you do?

Options:

A.

Add a bucket lifecycle rule that archives data with newer versions after 30 days to Coldline Storage.

B.

Add a bucket lifecycle rule that archives data with newer versions after 30 days to Nearline Storage.

C.

Add a bucket lifecycle rule that archives data from regional storage after 30 days to Coldline Storage.

D.

Add a bucket lifecycle rule that archives data from regional storage after 30 days to Nearline Storage.

Question 29

You are working in a team that has developed a new application that needs to be deployed on Kubernetes. The production application is business critical and should be optimized for reliability. You need to provision a Kubernetes cluster and want to follow Google-recommended practices. What should you do?

Options:

A.

Create a GKE Autopilot cluster. Enroll the cluster in the rapid release channel.

B.

Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.

C.

Create a zonal GKE standard cluster. Enroll the cluster in the stable release channel.

D.

Create a regional GKE standard cluster. Enroll the cluster in the rapid release channel.

Question 30

You want to configure an SSH connection to a single Compute Engine instance for users in the dev1 group. This instance is the only resource in this particular Google Cloud Platform project that the dev1 users should be able to connect to. What should you do?

Options:

A.

Set metadata to enable-oslogin=true for the instance. Grant the dev1 group the compute.osLogin role. Direct them to use the Cloud Shell to ssh to that instance.

B.

Set metadata to enable-oslogin=true for the instance. Set the service account to no service account for that instance. Direct them to use the Cloud Shell to ssh to that instance.

C.

Enable block project wide keys for the instance. Generate an SSH key for each user in the dev1 group. Distribute the keys to dev1 users and direct them to use their third-party tools to connect.

D.

Enable block project wide keys for the instance. Generate an SSH key and associate the key with that instance. Distribute the key to dev1 users and direct them to use their third-party tools to connect.
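
One way to wire up option A, sketched under hypothetical instance, zone, and group names, is to set the OS Login metadata on the instance and grant the role on that instance only.

    # Enable OS Login on the single instance.
    gcloud compute instances add-metadata dev-instance --zone=us-central1-a \
      --metadata=enable-oslogin=TRUE
    # Grant the dev1 group the OS Login role on this instance only.
    gcloud compute instances add-iam-policy-binding dev-instance --zone=us-central1-a \
      --member="group:dev1@example.com" --role="roles/compute.osLogin"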

Question 31

Your company runs its Linux workloads on Compute Engine instances. Your company will be working with a new operations partner that does not use Google Accounts. You need to grant access to the instances to your operations partner so they can maintain the installed tooling. What should you do?

Options:

A.

Enable Cloud IAP for the Compute Engine instances, and add the operations partner as a Cloud IAP Tunnel User.

B.

Tag all the instances with the same network tag. Create a firewall rule in the VPC to grant TCP access on port 22 for traffic from the operations partner to instances with the network tag.

C.

Set up Cloud VPN between your Google Cloud VPC and the internal network of the operations partner.

D.

Ask the operations partner to generate SSH key pairs, and add the public keys to the VM instances.

Question 32

You need to track and verify modifications to a set of Google Compute Engine instances in your Google Cloud project. In particular, you want to verify OS system patching events on your virtual machines (VMs). What should you do?

Options:

A.

Review the Compute Engine activity logs. Select and review the Admin Event logs.

B.

Review the Compute Engine activity logs. Select and review the System Event logs.

C.

Install the Cloud Logging Agent. In Cloud Logging, review the Compute Engine syslog logs.

D.

Install the Cloud Logging Agent. In Cloud Logging, review the Compute Engine operation logs.

Question 33

The sales team has a project named Sales Data Digest that has the ID acme-data-digest. You need to set up similar Google Cloud resources for the marketing team, but their resources must be organized independently of the sales team. What should you do?

Options:

A.

Grant the Project Editor role to the Marketing team for acme-data-digest.

B.

Create a Project Lien on acme-data-digest and then grant the Project Editor role to the Marketing team.

C.

Create another project with the ID acme-marketing-data-digest for the Marketing team and deploy the resources there.

D.

Create a new project named Marketing Data Digest and use the ID acme-data-digest. Grant the Project Editor role to the Marketing team.

Question 34

Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to maintain the number of running instances specified by the template to be able to process expected application traffic. What should you do?

Options:

A.

Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.

B.

Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.

C.

Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.

D.

Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.

Question 35

Your company has an existing GCP organization with hundreds of projects and a billing account. Your company recently acquired another company that also has hundreds of projects and its own billing account. You would like to consolidate all GCP costs of both GCP organizations onto a single invoice. You would like to consolidate all costs as of tomorrow. What should you do?

Options:

A.

Link the acquired company’s projects to your company's billing account.

B.

Configure the acquired company's billing account and your company's billing account to export the billing data into the same BigQuery dataset.

C.

Migrate the acquired company’s projects into your company’s GCP organization. Link the migrated projects to your company's billing account.

D.

Create a new GCP organization and a new billing account. Migrate the acquired company's projects and your company's projects into the new GCP organization and link the projects to the new billing account.

Question 36

You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?

Options:

A.

Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.

B.

Create an instance template for the instances. Set ‘Automatic Restart’ to off. Set ‘On-host maintenance’ to Terminate VM instances. Add the instance template to an instance group.

C.

Create an instance group for the instances. Set the ‘Autohealing’ health check to healthy (HTTP).

D.

Create an instance group for the instance. Verify that the ‘Advanced creation options’ setting for ‘do not retry machine creation’ is set to off.

Question 37

You are developing an internet of things (IoT) application that captures sensor data from multiple devices that have already been set up. You need to identify the global data storage product your company should use to store this data. You must ensure that the storage solution you choose meets your requirements of sub-millisecond latency. What should you do?

Options:

A.

Store the IoT data in Spanner. Use caches to speed up the process and avoid latencies.

B.

Store the IoT data in Bigtable.

C.

Capture IoT data in BigQuery datasets.

D.

Store the IoT data in Cloud Storage. Implement caching by using Cloud CDN.

Question 38

You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable for specific users. You want to complete this ad hoc request as efficiently as possible. What should you do?

Options:

A.

Create a dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.

B.

Create a dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.

C.

Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.

D.

Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.

Question 39

You have a Compute Engine instance hosting a production application. You want to receive an email if the instance consumes more than 90% of its CPU resources for more than 15 minutes. You want to use Google services. What should you do?

Options:

A.

1. Create a consumer Gmail account. 2. Write a script that monitors the CPU usage. 3. When the CPU usage exceeds the threshold, have that script send an email using the Gmail account and smtp.gmail.com on port 25 as SMTP server.

B.

1. Create a Stackdriver Workspace, and associate your Google Cloud Platform (GCP) project with it. 2. Create an Alerting Policy in Stackdriver that uses the threshold as a trigger condition. 3. Configure your email address in the notification channel.

C.

1. Create a Stackdriver Workspace, and associate your GCP project with it. 2. Write a script that monitors the CPU usage and sends it as a custom metric to Stackdriver. 3. Create an uptime check for the instance in Stackdriver.

D.

1. In Stackdriver Logging, create a logs-based metric to extract the CPU usage by using this regular expression: CPU Usage: ([0-9]{1,3})%. 2. In Stackdriver Monitoring, create an Alerting Policy based on this metric. 3. Configure your email address in the notification channel.

Question 40

Your company has many legacy third-party applications that rely on a shared NFS server for file sharing between these workloads. You want to modernize the NFS server by using a Google Cloud managed service. You need to select the solution that requires the least amount of change to the application. What should you do?

Options:

A.

Configure Firestore. Configure all applications to use Firestore instead of the NFS server.

B.

Deploy a Filestore instance. Replace all NFS mounts with a Filestore mount.

C.

Create a Cloud Storage bucket. Configure all applications to use Cloud Storage client libraries instead of the NFS server.

D.

Create a Compute Engine instance and configure an NFS server on the instance. Point all NFS mounts to the Compute Engine instance.

Question 41

Your company was recently impacted by a service disruption that caused multiple Dataflow jobs to get stuck, resulting in significant downtime in downstream applications and revenue loss. You were able to resolve the issue by identifying and fixing an error you found in the code. You need to design a solution with minimal management effort to identify when jobs are stuck in the future to ensure that this issue does not occur again. What should you do?

Options:

A.

Set up Error Reporting to identify stack traces that indicate slowdowns in Dataflow jobs. Set up alerts based on these log entries.

B.

Use the Personalized Service Health dashboard to identify issues with Dataflow jobs across regions.

C.

Update the Dataflow job configurations to send messages to a Pub/Sub topic when there are delays. Configure a backup Dataflow job to process jobs that are delayed. Use Cloud Tasks to trigger an alert when messages are pushed to the Pub/Sub topic.

D.

Set up Cloud Monitoring alerts on the data freshness metric for the Dataflow jobs to receive a notification when a certain threshold is reached.

Question 42

For analysis purposes, you need to send all the logs from all of your Compute Engine instances to a BigQuery dataset called platform-logs. You have already installed the Stackdriver Logging agent on all the instances. You want to minimize cost. What should you do?

Options:

A.

1. Give the BigQuery Data Editor role on the platform-logs dataset to the service accounts used by your instances. 2. Update your instances’ metadata to add the following value: logs-destination: bq://platform-logs.

B.

1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink. 2. Create a Cloud Function that is triggered by messages in the logs topic. 3. Configure that Cloud Function to drop logs that are not from Compute Engine and to insert Compute Engine logs in the platform-logs dataset.

C.

1. In Stackdriver Logging, create a filter to view only Compute Engine logs. 2. Click Create Export. 3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.

D.

1. Create a Cloud Function that has the BigQuery User role on the platform-logs dataset. 2. Configure this Cloud Function to create a BigQuery Job that executes this query: INSERT INTO dataset.platform-logs (timestamp, log) SELECT timestamp, log FROM compute.logs WHERE timestamp > DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY) 3. Use Cloud Scheduler to trigger this Cloud Function once a day.
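
For reference, the export in option C can also be created from the command line roughly as follows; the project ID and dataset name below are hypothetical (BigQuery dataset names cannot contain hyphens, so platform_logs is used for the sketch).

    # Create a sink that exports only Compute Engine logs to BigQuery.
    gcloud logging sinks create platform-logs-sink \
      bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
      --log-filter='resource.type="gce_instance"'
    # Then grant the sink's writer identity the BigQuery Data Editor role on the dataset.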

Question 43

You need to configure optimal data storage for files stored in Cloud Storage for minimal cost. The files are used in a mission-critical analytics pipeline that is used continually. The users are in Boston, MA (United States). What should you do?

Options:

A.

Configure regional storage for the region closest to the users. Configure a Nearline storage class.

B.

Configure regional storage for the region closest to the users. Configure a Standard storage class.

C.

Configure dual-regional storage for the dual region closest to the users. Configure a Nearline storage class.

D.

Configure dual-regional storage for the dual region closest to the users. Configure a Standard storage class.

Question 44

You have files in a Cloud Storage bucket that you need to share with your suppliers. You want to restrict the time that the files are available to your suppliers to 1 hour. You want to follow Google recommended practices. What should you do?

Options:

A.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -m 1h gs:///*.

B.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -d 1h gs:///.

C.

Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -p 60m gs:///.

D.

Create a JSON key for the Default Compute Engine Service Account. Execute the command gsutil signurl -t 60m gs:///*
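
As a sketch of option B, a signed URL limited to one hour can be generated with the service account's JSON key; the key file, bucket, and object names below are hypothetical.

    # Generate a signed URL valid for one hour (key file, bucket, and object are hypothetical).
    gsutil signurl -d 1h sa-key.json gs://example-bucket/report.pdf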

Question 45

Your company has a rapidly growing social media platform and a user base primarily located in North America. Due to increasing demand, your current on-premises PostgreSQL database, hosted in your United States headquarters data center, no longer meets your needs. You need to identify a cloud-based database solution that offers automatic scaling, multi-region support for future expansion, and maintains low latency. What should you do?

Options:

A.

Use Bigtable.

B.

Use BigQuery.

C.

Use Spanner.

D.

Use Cloud SQL for PostgreSQL.

Question 46

You have an application that uses Cloud Spanner as a backend database. The application has a very predictable traffic pattern. You want to automatically scale up or down the number of Spanner nodes depending on traffic. What should you do?

Options:

A.

Create a cron job that runs on a scheduled basis to review stackdriver monitoring metrics, and then resize the Spanner instance accordingly.

B.

Create a Stackdriver alerting policy to send an alert to oncall SRE emails when Cloud Spanner CPU exceeds the threshold. SREs would scale resources up or down accordingly.

C.

Create a Stackdriver alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold. Google support would scale resources up or down accordingly.

D.

Create a Stackdriver alerting policy to send an alert to webhook when Cloud Spanner CPU is over or under your threshold. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.

Question 47

Your company is active in the European Economic Area (EEA), and will adopt Google Cloud for its workloads. Projects are currently structured within different folders. You need to ensure any resources that will be deployed are using Google Cloud locations within the EEA by using the Organization Policy Service resource locations constraint. What should you do?

Options:

A.

Configure the policy at the folder level and add all allowed locations to the policy.

B.

Configure the policy at the organization level and add all allowed locations to the policy.

C.

Configure the policy at the folder level and add all disallowed locations to the policy.

D.

Configure the policy at the organization level and add all disallowed locations to the policy.
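
For reference, the resource locations constraint can be set roughly as shown below; the organization ID is hypothetical, and in:eu-locations is one of the predefined location value groups.

    # Allow only EEA/EU location values for new resources (organization ID is hypothetical).
    gcloud resource-manager org-policies allow constraints/gcp.resourceLocations \
      in:eu-locations --organization=123456789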

Question 48

You are designing an application that uses WebSockets and HTTP sessions that are not distributed across the web servers. You want to ensure the application runs properly on Google Cloud Platform. What should you do?

Options:

A.

Meet with the cloud enablement team to discuss load balancer options.

B.

Redesign the application to use a distributed user session service that does not rely on WebSockets and HTTP sessions.

C.

Review the encryption requirements for WebSocket connections with the security team.

D.

Convert the WebSocket code to use HTTP streaming.

Question 49

You have an application that receives SSL-encrypted TCP traffic on port 443. Clients for this application are located all over the world. You want to minimize latency for the clients. Which load balancing option should you use?

Options:

A.

HTTPS Load Balancer

B.

Network Load Balancer

C.

SSL Proxy Load Balancer

D.

Internal TCP/UDP Load Balancer. Add a firewall rule allowing ingress traffic from 0.0.0.0/0 on the target instances.

Question 50

During a recent audit of your existing Google Cloud resources, you discovered several users with email addresses outside of your Google Workspace domain.

You want to ensure that your resources are only shared with users whose email addresses match your domain. You need to remove any mismatched users, and you want to avoid having to audit your resources to identify mismatched users. What should you do?

Options:

A.

Create a Cloud Scheduler task to regularly scan your projects and delete mismatched users.

B.

Create a Cloud Scheduler task to regularly scan your resources and delete mismatched users.

C.

Set an organizational policy constraint to limit identities by domain to automatically remove mismatched users.

D.

Set an organizational policy constraint to limit identities by domain, and then retroactively remove the existing mismatched users.
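
As a sketch, the domain restricted sharing constraint can be applied with the command below; the organization ID and Google Workspace customer ID are hypothetical. The constraint blocks new out-of-domain bindings but does not remove existing ones, which is why the mismatched users still need to be removed.

    # Restrict IAM members to identities from the allowed Workspace customer (IDs are hypothetical).
    gcloud resource-manager org-policies allow constraints/iam.allowedPolicyMemberDomains \
      C0abc12de --organization=123456789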

Question 51

You create a new Google Kubernetes Engine (GKE) cluster and want to make sure that it always runs a supported and stable version of Kubernetes. What should you do?

Options:

A.

Enable the Node Auto-Repair feature for your GKE cluster.

B.

Enable the Node Auto-Upgrades feature for your GKE cluster.

C.

Select the latest available cluster version for your GKE cluster.

D.

Select “Container-Optimized OS (cos)” as a node image for your GKE cluster.

Question 52

You are managing several Google Cloud Platform (GCP) projects and need access to all logs for the past 60 days. You want to be able to explore and quickly analyze the log contents. You want to follow Google- recommended practices to obtain the combined logs for all projects. What should you do?

Options:

A.

Navigate to Stackdriver Logging and select resource.labels.project_id="*"

B.

Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the table expiration to 60 days.

C.

Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule to delete objects after 60 days.

D.

Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure the table expiration to 60 days.

Question 53

You want to deploy a new containerized application into Google Cloud by using a Kubernetes manifest. You want to have full control over the Kubernetes deployment, and at the same time, you want to minimize configuring infrastructure. What should you do?

Options:

A.

Deploy the application on GKE Autopilot.

B.

Deploy the application on GKE Standard.

C.

Deploy the application on Cloud Functions.

D.

Deploy the application on Cloud Run.

Question 54

You are in charge of provisioning access for all Google Cloud users in your organization. Your company recently acquired a startup company that has their own Google Cloud organization. You need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the startup company's organization as in your own organization. What should you do?

Options:

A.

In the Google Cloud console for your organization, select Create role from selection, and choose destination as the startup company's organization

B.

In the Google Cloud console for the startup company, select Create role from selection and choose source as the startup company's Google Cloud organization.

C.

Use the gcloud iam roles copy command, and provide the Organization ID of the startup company's Google Cloud organization as the destination.

D.

Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup company's organization as the destination.
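
For reference, a custom role can be copied between organizations roughly as follows; the role ID and both organization IDs below are hypothetical.

    # Copy a custom role into the acquired company's organization (IDs are hypothetical).
    gcloud iam roles copy --source="sreCustomRole" --source-organization=111111 \
      --destination="sreCustomRole" --dest-organization=222222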

Question 55

Your preview application, deployed on a single-zone Google Kubernetes Engine (GKE) cluster in us-central1, has gained popularity. You are now ready to make the application generally available. You need to deploy the application to production while ensuring high availability and resilience. You also want to follow Google-recommended practices. What should you do?

Options:

A.

Use the gcloud container clusters create command with the options --enable-multi-networking and --enable-autoscaling to create an autoscaling zonal cluster and deploy the application to it.

B.

Use the gcloud container clusters create-auto command to create an autopilot cluster and deploy the application to it.

C.

Use the gcloud container clusters update command with the option --region us-central1 to update the cluster and deploy the application to it.

D.

Use the gcloud container clusters update command with the option --node-locations us-central1-a,us-central1-b to update the cluster and deploy the application to the nodes.

Question 56

You have an on-premises data analytics set of binaries that processes data files in memory for about 45 minutes every midnight. The sizes of those data files range from 1 gigabyte to 16 gigabytes. You want to migrate this application to Google Cloud with minimal effort and cost. What should you do?

Options:

A.

Upload the code to Cloud Functions. Use Cloud Scheduler to start the application.

B.

Create a container for the set of binaries. Use Cloud Scheduler to start a Cloud Run job for the container.

C.

Create a container for the set of binaries. Deploy the container to Google Kubernetes Engine (GKE) and use the Kubernetes scheduler to start the application.

D.

Lift and shift to a VM on Compute Engine. Use an instance schedule to start and stop the instance.

Question 57

You have a Compute Engine instance hosting an application used between 9 AM and 6 PM on weekdays. You want to back up this instance daily for disaster recovery purposes. You want to keep the backups for 30 days. You want the Google-recommended solution with the least management overhead and the least number of services. What should you do?

Options:

A.

1. Update your instances’ metadata to add the following value: snapshot-schedule: 0 1 * * * 2. Update your instances’ metadata to add the following value: snapshot-retention: 30

B.

1. In the Cloud Console, go to the Compute Engine Disks page and select your instance’s disk. 2. In the Snapshot Schedule section, select Create Schedule and configure the following parameters: Schedule frequency: Daily; Start time: 1:00 AM – 2:00 AM; Autodelete snapshots after 30 days.

C.

1. Create a Cloud Function that creates a snapshot of your instance’s disk. 2. Create a Cloud Function that deletes snapshots that are older than 30 days. 3. Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.

D.

1. Create a bash script in the instance that copies the content of the disk to Cloud Storage. 2. Create a bash script in the instance that deletes data older than 30 days in the backup Cloud Storage bucket. 3. Configure the instance’s crontab to execute these scripts daily at 1:00 AM.
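
As a command-line sketch of option B, a snapshot schedule with 30-day retention can be created and attached to the instance's disk; the policy, disk, region, and zone names below are hypothetical.

    # Create a daily snapshot schedule with 30-day retention and attach it to the disk.
    gcloud compute resource-policies create snapshot-schedule daily-backup \
      --region=us-central1 --max-retention-days=30 \
      --start-time=01:00 --daily-schedule
    gcloud compute disks add-resource-policies app-disk \
      --resource-policies=daily-backup --zone=us-central1-a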

Question 58

You need to configure IAM access audit logging in BigQuery for external auditors. You want to follow Google-recommended practices. What should you do?

Options:

A.

Add the auditors group to the ‘logging.viewer’ and ‘bigQuery.dataViewer’ predefined IAM roles.

B.

Add the auditors group to two new custom IAM roles.

C.

Add the auditor user accounts to the ‘logging.viewer’ and ‘bigQuery.dataViewer’ predefined IAM roles.

D.

Add the auditor user accounts to two new custom IAM roles.

Question 59

You have created an application that is packaged into a Docker image. You want to deploy the Docker image as a workload on Google Kubernetes Engine. What should you do?

Options:

A.

Upload the image to Cloud Storage and create a Kubernetes Service referencing the image.

B.

Upload the image to Cloud Storage and create a Kubernetes Deployment referencing the image.

C.

Upload the image to Container Registry and create a Kubernetes Service referencing the image.

D.

Upload the image to Container Registry and create a Kubernetes Deployment referencing the image.

Question 60

Your company’s developers use an automation that you recently built to provision Linux VMs in Compute Engine within a Google Cloud project to perform various tasks. You need to manage the Linux account lifecycle and access for these users. You want to follow Google-recommended practices to simplify access management while minimizing operational costs. What should you do?

Options:

A.

Enable OS Login for all VMs. Use IAM roles to grant user permissions.

B.

Enable OS Login for all VMs. Write custom startup scripts to update user permissions.

C.

Require your developers to create public SSH keys. Make the owner of the public key the root user.

D.

Require your developers to create public SSH keys. Write custom startup scripts to update user permissions.

Question 61

You need to manage a third-party application that will run on a Compute Engine instance. Other Compute Engine instances are already running with default configuration. Application installation files are hosted on Cloud Storage. You need to access these files from the new instance without allowing other virtual machines (VMs) to access these files. What should you do?

Options:

A.

Create the instance with the default Compute Engine service account. Grant the service account permissions on Cloud Storage.

B.

Create the instance with the default Compute Engine service account. Add metadata to the objects on Cloud Storage that matches the metadata on the new instance.

C.

Create a new service account and assign this service account to the new instance. Grant the service account permissions on Cloud Storage.

D.

Create a new service account and assign this service account to the new instance. Add metadata to the objects on Cloud Storage that matches the metadata on the new instance.

Question 62

You are a Google Cloud organization administrator. You need to configure organization policies and log sinks on Google Cloud projects that cannot be removed by project users to comply with your company's security policies. The security policies are different for each company department. Each company department has a user with the Project Owner role assigned to their projects. What should you do?

Options:

A.

Organize projects under folders for each department. Configure both organization policies and log sinks on the folders

B.

Organize projects under folders for each department. Configure organization policies on the organization and log sinks on the folders.

C.

Use a standard naming convention for projects that includes the department name. Configure organization policies on the organization and log sinks on the projects.

D.

Use a standard naming convention for projects that includes the department name. Configure both organization policies and log sinks on the projects.

Question 63

Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?

Options:

A.

Upload the data to BigQuery using the bq command line tool.

B.

Upload the data to Cloud Storage using the gsutil command line tool.

C.

Upload the data into Cloud SQL using the import function in the console.

D.

Upload the data into Cloud Spanner using the import function in the console.

Question 64

Your company is seeking a scalable solution to retain and explore application logs hosted on Compute Engine. You must be able to analyze your logs with SQL queries, and you want to be able to create charts to identify patterns and trends in your logs over time. You want to follow Google-recommended practices and minimize your operational costs. What should you do?

Options:

A.

Use a custom script to push your application logs to Cloud SQL for exploration.

B.

Ingest your application logs to Cloud Logging by using Ops Agent, and explore your logs in Logs Explorer.

C.

Ingest your application logs to Cloud Logging by using Ops Agent, and explore your logs with Log Analytics.

D.

Use a custom script to push your application logs to BigQuery for exploration.

Question 65

You need to enable traffic between multiple groups of Compute Engine instances that are currently running in two different GCP projects. Each group of Compute Engine instances is running in its own VPC. What should you do?

Options:

A.

Verify that both projects are in a GCP Organization. Create a new VPC and add all instances.

B.

Verify that both projects are in a GCP Organization. Share the VPC from one project and request that the Compute Engine instances in the other project use this shared VPC.

C.

Verify that you are the Project Administrator of both projects. Create two new VPCs and add all instances.

D.

Verify that you are the Project Administrator of both projects. Create a new VPC and add all instances.

Question 66

Your company has a 3-tier solution running on Compute Engine. The configuration of the current infrastructure is shown below.

Each tier has a service account that is associated with all instances within it. You need to enable communication on TCP port 8080 between tiers as follows:

• Instances in tier #1 must communicate with tier #2.

• Instances in tier #2 must communicate with tier #3.

What should you do?

Options:

A.

1. Create an ingress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.2.0/24) • Protocols: allow all. 2. Create an ingress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.1.0/24) • Protocols: allow all.

B.

1. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #2 service account • Source filter: all instances with tier #1 service account • Protocols: allow TCP:8080. 2. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #3 service account • Source filter: all instances with tier #2 service account • Protocols: allow TCP:8080.

C.

1. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #2 service account • Source filter: all instances with tier #1 service account • Protocols: allow all. 2. Create an ingress firewall rule with the following settings: • Targets: all instances with tier #3 service account • Source filter: all instances with tier #2 service account • Protocols: allow all.

D.

1. Create an egress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.2.0/24) • Protocols: allow TCP:8080. 2. Create an egress firewall rule with the following settings: • Targets: all instances • Source filter: IP ranges (with the range set to 10.0.1.0/24) • Protocols: allow TCP:8080.
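
For reference, a firewall rule keyed on service accounts (as in option B) can be created roughly as follows for the tier #1 to tier #2 path, and repeated analogously for tier #2 to tier #3; the rule name, project, and service account emails are hypothetical.

    # Allow TCP 8080 from tier #1 instances to tier #2 instances using service accounts.
    gcloud compute firewall-rules create allow-tier1-to-tier2 \
      --direction=INGRESS --action=ALLOW --rules=tcp:8080 \
      --target-service-accounts=tier2-sa@my-project.iam.gserviceaccount.com \
      --source-service-accounts=tier1-sa@my-project.iam.gserviceaccount.com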

Question 67

You are the team lead of a group of 10 developers. You provided each developer with an individual Google Cloud Project that they can use as their personal sandbox to experiment with different Google Cloud solutions. You want to be notified if any of the developers are spending above $500 per month on their sandbox environment. What should you do?

Options:

A.

Create a single budget for all projects and configure budget alerts on this budget.

B.

Create a separate billing account per sandbox project and enable BigQuery billing exports. Create a Data Studio dashboard to plot the spending per billing account.

C.

Create a budget per project and configure budget alerts on all of these budgets.

D.

Create a single billing account for all sandbox projects and enable BigQuery billing exports. Create a Data Studio dashboard to plot the spending per project.

Question 68

You have a number of applications that have bursty workloads and are heavily dependent on topics to decouple publishing systems from consuming systems. Your company would like to go serverless to enable developers to focus on writing code without worrying about infrastructure. Your solution architect has already identified Cloud Pub/Sub as a suitable alternative for decoupling systems. You have been asked to identify a suitable GCP Serverless service that is easy to use with Cloud Pub/Sub. You want the ability to scale down to zero when there is no traffic in order to minimize costs. You want to follow Google recommended practices. What should you suggest?

Options:

A.

Cloud Run for Anthos

B.

Cloud Run

C.

App Engine Standard

D.

Cloud Functions.
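
As a sketch of how Cloud Run pairs with Pub/Sub, a push subscription can deliver messages to the service's HTTPS endpoint; the image, region, service URL, and invoker service account below are placeholders.

# Deploy the consumer to Cloud Run (scales to zero when idle)
gcloud run deploy worker \
    --image=gcr.io/my-project/worker:v1 \
    --region=us-central1 \
    --no-allow-unauthenticated

# Create a topic and a push subscription that targets the Cloud Run URL
gcloud pubsub topics create work-items
gcloud pubsub subscriptions create work-items-push \
    --topic=work-items \
    --push-endpoint=https://worker-abc123-uc.a.run.app/ \
    --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com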

Question 69

You’ve deployed a microservice called myapp1 to a Google Kubernetes Engine cluster using the YAML file specified below:

You need to refactor this configuration so that the database password is not stored in plain text. You want to follow Google-recommended practices. What should you do?

Options:

A.

Store the database password inside the Docker image of the container, not in the YAML file.

B.

Store the database password inside a Secret object. Modify the YAML file to populate the DB_PASSWORD environment variable from the Secret.

C.

Store the database password inside a ConfigMap object. Modify the YAML file to populate the DB_PASSWORD environment variable from the ConfigMap.

D.

Store the database password in a file inside a Kubernetes persistent volume, and use a persistent volume claim to mount the volume to the container.
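
A minimal sketch of option B, assuming the Secret is named db-credentials and the key is password (both placeholders): create the Secret, then reference it from the container spec instead of a plain-text value.

# Create the Secret (the literal value here is a placeholder)
kubectl create secret generic db-credentials --from-literal=password='REDACTED'

# In the Deployment manifest, populate DB_PASSWORD from the Secret:
#   env:
#   - name: DB_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: db-credentials
#         key: password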

Question 70

You host your website on Compute Engine. The number of global users visiting your website is rapidly expanding. You need to minimize latency and support user growth in multiple geographical regions. You also want to follow Google-recommended practices and minimize operational costs. Which two actions should you take?

Choose 2 answers

Options:

A.

Use a Network Load Balancer

B.

Deploy your VMs in multiple Google Cloud regions closest to your users' geographical locations.

C.

Use an external Application Load Balancer in Global mode.

D.

Use an external Application Load Balancer in Regional mode.

E.

Deploy all of your VMs in a single Google Cloud region with the largest available CIDR range.

Question 71

Your company is moving its entire workload to Compute Engine. Some servers should be accessible through the Internet, and other servers should only be accessible over the internal network. All servers need to be able to talk to each other over specific ports and protocols. The current on-premises network relies on a demilitarized zone (DMZ) for the public servers and a Local Area Network (LAN) for the private servers. You need to design the networking infrastructure on Google Cloud to match these requirements. What should you do?

Options:

A.

1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.

B.

1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.

C.

1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.

D.

1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
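
As an illustrative sketch of the single-VPC approach in option A, with placeholder names, ranges, region, and ports:

# Custom-mode VPC with separate DMZ and LAN subnets
gcloud compute networks create corp-vpc --subnet-mode=custom
gcloud compute networks subnets create dmz \
    --network=corp-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create lan \
    --network=corp-vpc --region=us-central1 --range=10.0.2.0/24

# Open the required ports between the DMZ and LAN subnets (example ports)
gcloud compute firewall-rules create allow-dmz-lan \
    --network=corp-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:443,tcp:8080 --source-ranges=10.0.1.0/24,10.0.2.0/24

# Allow public ingress to DMZ instances (tagged dmz)
gcloud compute firewall-rules create allow-public-dmz \
    --network=corp-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:443 --source-ranges=0.0.0.0/0 --target-tags=dmz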

Question 72

You are configuring service accounts for an application that spans multiple projects. Virtual machines (VMs) running in the web-applications project need access to BigQuery datasets in the crm-databases project. You want to follow Google-recommended practices to grant access to the service account in the web-applications project. What should you do?

Options:

A.

Grant "project owner" for web-applications appropriate roles to crm-databases.

B.

Grant "project owner" role to crm-databases and the web-applications project.

C.

Grant "project owner" role to crm-databases and roles/bigquery.dataViewer role to web-applications.

D.

Grant roles/bigquery.dataViewer role to crm-databases and appropriate roles to web-applications.
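
For reference, a cross-project role binding of this kind looks like the following; the service account email is a placeholder.

# Give the web-applications VM service account read access to BigQuery data in crm-databases
gcloud projects add-iam-policy-binding crm-databases \
    --member="serviceAccount:web-app-sa@web-applications.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"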

Question 73

Your company's security vulnerability management policy requires a member of the security team to have visibility into vulnerabilities and other OS metadata for a specific Compute Engine instance. This Compute Engine instance hosts a critical application in your Google Cloud project. You need to implement your company's security vulnerability management policy. What should you do?

Options:

A.

• Ensure that the Ops Agent is installed on the Compute Engine instance.
• Create a custom metric in the Cloud Monitoring dashboard.
• Provide the security team member with access to this dashboard.

B.

• Ensure that the Ops Agent is installed on the Compute Engine instance.
• Provide the security team member the roles/osconfig.inventoryViewer permission.

C.

• Ensure that the OS Config agent is installed on the Compute Engine instance.
• Provide the security team member the roles/osconfig.vulnerabilityReportViewer permission.

D.

• Ensure that the OS Config agent is installed on the Compute Engine instance.
• Create a log sink to a BigQuery dataset.
• Provide the security team member with access to this dataset.
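
As a hedged sketch of the OS Config (VM Manager) approach, with a placeholder project ID and user email:

# Enable the OS Config metadata key on the project and grant read-only vulnerability access
gcloud compute project-info add-metadata \
    --project=my-project --metadata=enable-osconfig=TRUE
gcloud projects add-iam-policy-binding my-project \
    --member="user:security-analyst@example.com" \
    --role="roles/osconfig.vulnerabilityReportViewer"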

Question 74

Your team has developed a stateless application that must run directly on virtual machines. The application is expected to receive a fluctuating amount of traffic and needs to scale automatically. You need to deploy the application. What should you do?

Options:

A.

Deploy the application on a managed instance group and configure autoscaling.

B.

Deploy the application on a Kubernetes Engine cluster and configure node pool autoscaling.

C.

Deploy the application on Cloud Functions and configure the maximum number of instances.

D.

Deploy the application on Cloud Run and configure autoscaling.
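
For illustration, a managed instance group with CPU-based autoscaling (option A) could be set up as follows; the template name, image, zone, and thresholds are placeholders.

# Instance template, managed instance group, and autoscaling policy
gcloud compute instance-templates create app-template \
    --machine-type=e2-standard-2 \
    --image-family=debian-12 --image-project=debian-cloud
gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=2 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling app-mig \
    --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.6 --cool-down-period=90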

Question 75

Your company would like to store invoices and other financial documents in Google Cloud. You need to identify a Google-managed solution to store this information for your company. You must ensure that the documents are kept for a duration of three years. Your company's analysts need frequent access to invoices from the past six months. After six months, invoices should be archived for audit purposes only. You want to minimize costs and follow Google-recommended practices. What should you do?

Options:

A.

Use Cloud Storage with Object Lifecycle Management to change the object storage class to Coldline after six months.

B.

Use Cloud Storage with Object Lifecycle Management to change the object storage class to Standard after six months.

C.

Store your documents on Filestore and move the documents to Cloud Storage with object storage class set to Coldline after six months.

D.

Store your documents on Filestore and move the documents to Cloud Storage with object storage class set to Standard after six months.
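
A lifecycle configuration matching this scenario (Coldline after 180 days, delete after three years) could be applied with gsutil; the bucket name is a placeholder.

# lifecycle.json: move to Coldline after 180 days, delete after 1095 days
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 180}},
    {"action": {"type": "Delete"}, "condition": {"age": 1095}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://invoices-archive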

Question 76

You created a cluster.yaml file containing:

resources:
- name: cluster
  type: container.v1.cluster
  properties:
    zone: europe-west1-b
    cluster:
      description: My GCP ACE cluster
      initialNodeCount: 2

You want to use Cloud Deployment Manager to create this cluster in GKE. What should you do?

Options:

A.

gcloud deployment-manager deployments create my-gcp-ace-cluster --config cluster.yaml

B.

gcloud deployment-manager deployments create my-gcp-ace-cluster --type container.v1.cluster --config cluster.yaml

C.

gcloud deployment-manager deployments apply my-gcp-ace-cluster --type container.v1.cluster --config cluster.yaml

D.

gcloud deployment-manager deployments apply my-gcp-ace-cluster --config cluster.yaml

Question 77

You have one project called proj-sa where you manage all your service accounts. You want to be able to use a service account from this project to take snapshots of VMs running in another project called proj-vm. What should you do?

Options:

A.

Download the private key from the service account, and add it to each VM's custom metadata.

B.

Download the private key from the service account, and add the private key to each VM’s SSH keys.

C.

Grant the service account the IAM Role of Compute Storage Admin in the project called proj-vm.

D.

When creating the VMs, set the service account’s API scope for Compute Engine to read/write.
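
For reference, granting the proj-sa service account the Compute Storage Admin role on proj-vm (option C) looks like this; the service account email is a placeholder.

gcloud projects add-iam-policy-binding proj-vm \
    --member="serviceAccount:snapshot-sa@proj-sa.iam.gserviceaccount.com" \
    --role="roles/compute.storageAdmin"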

Question 78

You are running multiple microservices in a Kubernetes Engine cluster. One microservice is rendering images. The microservice responsible for the image rendering requires a large amount of CPU time compared to the memory it requires. The other microservices are workloads that are optimized for n1-standard machine types. You need to optimize your cluster so that all workloads are using resources as efficiently as possible. What should you do?

Options:

A.

Assign the pods of the image rendering microservice a higher pod priority than the other microservices.

B.

Create a node pool with compute-optimized machine type nodes for the image rendering microservice. Use the node pool with general-purpose machine type nodes for the other microservices.

C.

Use the node pool with general-purpose machine type nodes for the image rendering microservice. Create a node pool with compute-optimized machine type nodes for the other microservices.

D.

Configure the required amount of CPU and memory in the resource requests specification of the image rendering microservice deployment. Keep the resource requests for the other microservices at the default.
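
As a sketch of the dedicated node pool approach, with placeholder cluster, zone, and machine type:

# Compute-optimized node pool for the image-rendering workload
gcloud container node-pools create render-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --machine-type=c2-standard-8 --num-nodes=2
# The rendering Deployment can then target this pool with a nodeSelector on the
# cloud.google.com/gke-nodepool: render-pool label.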

Question 79

You have been asked to migrate a Docker application from your data center to the cloud. Your solution architect has suggested uploading Docker images to Container Registry (GCR) in one project and running the application in a GKE cluster in a separate project. You want to store images in the project img-278322 and run the application in the project prod-278986. You want to tag the image as acme_track_n_trace:v1. You want to follow Google-recommended practices. What should you do?

Options:

A.

Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace

B.

Run gcloud builds submit --tag gcr.io/img-278322/acme_track_n_trace:v1

C.

Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace

D.

Run gcloud builds submit --tag gcr.io/prod-278986/acme_track_n_trace:v1

Question 80

You are managing the security configuration of your company's Google Cloud organization. The Operations team needs specific permissions on both a Google Kubernetes Engine (GKE) cluster and a Cloud SQL instance. Two predefined Identity and Access Management (IAM) roles exist that contain a subset of the permissions needed by the team. You need to configure the necessary IAM permissions for this team while following Google-recommended practices. What should you do?

Options:

A.

Grant the team the two predefined IAM roles.

B.

Create a custom IAM role that combines the permissions from the two relevant predefined roles.

C.

Create a custom IAM role that includes only the required permissions from the predefined roles.

D.

Grant the team the IAM roles of Kubernetes Engine Admin and Cloud SQL Admin.

Question 81

You are planning to move your company's website and a specific asynchronous background job to Google Cloud. Your website contains only static HTML content. The background job is started through an HTTP endpoint and generates monthly invoices for your customers. Your website needs to be available in multiple geographic locations and requires autoscaling. You want to have no costs when your workloads are not in use and follow recommended practices. What should you do?

Options:

A.

Move your website to Google Kubernetes Engine (GKE), and move your background job to Cloud Functions.

B.

Move both your website and background job to Compute Engine

C.

Move both your website and background job to Cloud Run.

D.

Move your website to Google Kubernetes Engine (GKE), and move your background job to Compute Engine.

Question 82

Your company’s infrastructure is on-premises, but all machines are running at maximum capacity. You want to burst to Google Cloud. The workloads on Google Cloud must be able to directly communicate to the workloads on-premises using a private IP range. What should you do?

Options:

A.

In Google Cloud, configure the VPC as a host for Shared VPC.

B.

In Google Cloud, configure the VPC for VPC Network Peering.

C.

Create bastion hosts both in your on-premises environment and on Google Cloud. Configure both as proxy servers using their public IP addresses.

D.

Set up Cloud VPN between the infrastructure on-premises and Google Cloud.

Question 83

You are designing an application that lets users upload and share photos. You expect your application to grow really fast and you are targeting a worldwide audience. You want to delete uploaded photos after 30 days. You want to minimize costs while ensuring your application is highly available. Which GCP storage solution should you choose?

Options:

A.

Persistent SSD on VM instances.

B.

Cloud Filestore.

C.

Multiregional Cloud Storage bucket.

D.

Cloud Datastore database.

Question 84

Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes. What should you do?

Options:

A.

Use gcloud to expand the IP range of the current subnet.

B.

Delete the subnet, and recreate it using a wider range of IP addresses.

C.

Create a new project. Use Shared VPC to share the current network with the new project.

D.

Create a new subnet with the same starting IP but a wider range to overwrite the current subnet.
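
A /28 mask (255.255.255.240) provides only 16 addresses, so at least a /27 is needed to add 10 more. The range can be widened in place without disrupting existing VMs; the subnet name and region below are placeholders.

gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-central1 --prefix-length=27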

Question 85

You have been asked to create robust Virtual Private Network (VPN) connectivity between a new Virtual Private Cloud (VPC) and a remote site. Key requirements include dynamic routing, a shared address space of 10.19.0.1/22, and no overprovisioning of tunnels during a failover event. You want to follow Google-recommended practices to set up a high availability Cloud VPN. What should you do?

Options:

A.

Use a custom mode VPC network, configure static routes, and use active/passive routing

B.

Use an automatic mode VPC network, configure static routes, and use active/active routing

C.

Use a custom mode VPC network, use Cloud Router Border Gateway Protocol (BGP) routes, and use active/passive routing.

D.

Use an automatic mode VPC network, use Cloud Router Border Gateway Protocol (BGP) routes, and configure policy-based routing.
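
A partial sketch of an HA VPN with dynamic (BGP) routing on a custom-mode VPC; names, region, and ASN are placeholders, and the peer gateway, tunnels, and BGP sessions are omitted for brevity.

gcloud compute networks create corp-vpc --subnet-mode=custom
gcloud compute vpn-gateways create ha-vpn-gw \
    --network=corp-vpc --region=us-central1
gcloud compute routers create corp-router \
    --network=corp-vpc --region=us-central1 --asn=65001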

Question 86

You want to host your video encoding software on Compute Engine. Your user base is growing rapidly, and users need to be able to encode their videos at any time without interruption or CPU limitations. You must ensure that your encoding solution is highly available, and you want to follow Google-recommended practices to automate operations. What should you do?

Options:

A.

Deploy your solution on multiple standalone Compute Engine instances, and increase the number of existing instances when CPU utilization on Cloud Monitoring reaches a certain threshold.

B.

Deploy your solution on multiple standalone Compute Engine instances, and replace existing instances with high-CPU instances when CPU utilization on Cloud Monitoring reaches a certain threshold.

C.

Deploy your solution to an instance group, and increase the number of available instances whenever you see high CPU utilization in Cloud Monitoring.

D.

Deploy your solution to an instance group, and set the autoscaling based on CPU utilization.

Question 87

You created a Kubernetes deployment by running kubectl run nginx --image=nginx --labels=app=prod. Your Kubernetes cluster is also used by a number of other deployments. How can you find the identifier of the pods for this nginx deployment?

Options:

A.

kubectl get deployments --output=pods

B.

gcloud get pods --selector="app=prod"

C.

kubectl get pods -l "app=prod"

D.

gcloud list gke-deployments -filter={pod }

Question 88

Your company wants to standardize the creation and management of multiple Google Cloud resources using Infrastructure as Code. You want to minimize the amount of repetitive code needed to manage the environment. What should you do?

Options:

A.

Create a Bash script that contains all required steps as gcloud commands.

B.

Develop templates for the environment using Cloud Deployment Manager

C.

Use curl in a terminal to send a REST request to the relevant Google API for each individual resource.

D.

Use the Cloud Console interface to provision and manage all related resources

Question 89

Your team is building a website that handles votes from a large user population. The incoming votes will arrive at various rates. You want to optimize the storage and processing of the votes. What should you do?

Options:

A.

Save the incoming votes to Firestore. Use Cloud Scheduler to trigger a Cloud Functions instance to periodically process the votes.

B.

Use a dedicated instance to process the incoming votes. Send the votes directly to this instance.

C.

Save the incoming votes to a JSON file on Cloud Storage. Process the votes in a batch at the end of the day.

D.

Save the incoming votes to Pub/Sub. Use the Pub/Sub topic to trigger a Cloud Functions instance to process the votes.
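
For illustration, a Pub/Sub-triggered Cloud Function (option D) could be deployed as follows; the topic name, runtime, entry point, and region are placeholders.

gcloud pubsub topics create votes
gcloud functions deploy process-votes \
    --runtime=python310 --trigger-topic=votes \
    --entry-point=process_vote --region=us-central1 --source=.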

Question 90

You have two Google Cloud projects: project-a with VPC vpc-a (10.0.0.0/16) and project-b with VPC vpc-b (10.8.0.0/16). Your frontend application resides in vpc-a and the backend API services are deployed in vpc-b. You need to efficiently and cost-effectively enable communication between these Google Cloud projects. You also want to follow Google-recommended practices. What should you do?

Options:

A.

Configure a Cloud Router in vpc-a and another Cloud Router in vpc-b.

B.

Configure a Cloud Interconnect connection between vpc-a and vpc-b.

C.

Create VPC Network Peering between vpc-a and vpc-b.

D.

Create an OpenVPN connection between vpc-a and vpc-b.
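
For reference, VPC Network Peering is configured from both sides; the project and network names below mirror the scenario, and the commands are a sketch.

# Run in project-a
gcloud compute networks peerings create a-to-b \
    --project=project-a --network=vpc-a \
    --peer-project=project-b --peer-network=vpc-b
# Run in project-b
gcloud compute networks peerings create b-to-a \
    --project=project-b --network=vpc-b \
    --peer-project=project-a --peer-network=vpc-a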

Question 91

You need to host an application on a Compute Engine instance in a project shared with other teams. You want to prevent the other teams from accidentally causing downtime on that application. Which feature should you use?

Options:

A.

Use a Shielded VM.

B.

Use a Preemptible VM.

C.

Use a sole-tenant node.

D.

Enable deletion protection on the instance.

Question 92

You have downloaded and installed the gcloud command line interface (CLI) and have authenticated with your Google Account. Most of your Compute Engine instances in your project run in the europe-west1-d zone. You want to avoid having to specify this zone with each CLI command when managing these instances. What should you do?

Options:

A.

Set the europe-west1-d zone as the default zone using the gcloud config subcommand.

B.

In the Settings page for Compute Engine under Default location, set the zone to europe-west1-d.

C.

In the CLI installation directory, create a file called default.conf containing zone=europe-west1-d.

D.

Create a Metadata entry on the Compute Engine page with key compute/zone and value europe-west1-d.
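
For reference, setting and checking the default zone with the config subcommand:

gcloud config set compute/zone europe-west1-d
gcloud config get-value compute/zone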

Question 93

You want to run a single caching HTTP reverse proxy on GCP for a latency-sensitive website. This specific reverse proxy consumes almost no CPU. You want to have a 30-GB in-memory cache, and need an additional 2 GB of memory for the rest of the processes. You want to minimize cost. How should you run this reverse proxy?

Options:

A.

Create a Cloud Memorystore for Redis instance with 32-GB capacity.

B.

Run it on Compute Engine, and choose a custom instance type with 6 vCPUs and 32 GB of memory.

C.

Package it in a container image, and run it on Kubernetes Engine, using n1-standard-32 instances as nodes.

D.

Run it on Compute Engine, choose the instance type n1-standard-1, and add an SSD persistent disk of 32 GB.
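
As a sketch of option B, a custom machine type keeps memory per vCPU within the allowed range (32 GB across 6 vCPUs is roughly 5.3 GB per vCPU); the instance name and zone are placeholders.

gcloud compute instances create caching-proxy \
    --zone=us-central1-a --custom-cpu=6 --custom-memory=32GB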

Question 94

You have experimented with Google Cloud using your own credit card and expensed the costs to your company. Your company wants to streamline the billing process and charge the costs of your projects to their monthly invoice. What should you do?

Options:

A.

Grant the financial team the IAM role of "Billing Account User" on the billing account linked to your credit card.

B.

Set up BigQuery billing export and grant your financial department IAM access to query the data.

C.

Create a ticket with Google Billing Support to ask them to send the invoice to your company.

D.

Change the billing account of your projects to the billing account of your company.
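
For reference, re-linking a project to a different billing account can be done from the CLI; the project ID and billing account ID are placeholders.

gcloud billing projects link my-project \
    --billing-account=0X0X0X-0X0X0X-0X0X0X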

Question 95

You created an instance of SQL Server 2017 on Compute Engine to test features in the new version. You want to connect to this instance using the fewest number of steps. What should you do?

Options:

A.

Install an RDP client on your desktop. Verify that a firewall rule for port 3389 exists.

B.

Install an RDP client on your desktop. Set a Windows username and password in the GCP Console. Use the credentials to log in to the instance.

C.

Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the RDP button in the GCP Console and supply the credentials to log in.

D.

Set a Windows username and password in the GCP Console. Verify that a firewall rule for port 3389 exists. Click the RDP button in the GCP Console, and supply the credentials to log in.

Question 96

You are using Data Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Data Studio are broken, and you want to analyze the problem. What should you do?

Options:

A.

Use the BigQuery interface to review the nightly job and look for any errors.

B.

Review the Error Reporting page in the Cloud Console to find any errors.

C.

In Cloud Logging create a filter for your Data Studio report

D.

Use the open source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.

Question 97

Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project. What should you do?

Options:

A.

Add the group for the finance team to the roles/billing.user role.

B.

Add the group for the finance team to the roles/billing.admin role.

C.

Add the group for the finance team to the roles/billing.viewer role.

D.

Add the group for the finance team to the roles/billing.projectManager role.
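
As a hedged sketch, the Billing Account Viewer role can be granted on the billing account itself rather than on the project; the billing account ID and group email are placeholders.

gcloud billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
    --member="group:finance-team@example.com" \
    --role="roles/billing.viewer"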

Question 98

You deployed an App Engine application using gcloud app deploy, but it did not deploy to the intended project. You want to find out why this happened and where the application deployed. What should you do?

Options:

A.

Check the app.yaml file for your application and check project settings.

B.

Check the web-application.xml file for your application and check project settings.

C.

Go to Deployment Manager and review settings for deployment of applications.

D.

Go to Cloud Shell and run gcloud config list to review the Google Cloud configuration used for deployment.

Question 99

You have one GCP account running in your default region and zone and another account running in a non-default region and zone. You want to start a new Compute Engine instance in these two Google Cloud Platform accounts using the command line interface. What should you do?

Options:

A.

Create two configurations using gcloud config configurations create [NAME]. Run gcloud config configurations activate [NAME] to switch between accounts when running the commands to start the Compute Engine instances.

B.

Create two configurations using gcloud config configurations create [NAME]. Run gcloud configurations list to start the Compute Engine instances.

C.

Activate two configurations using gcloud configurations activate [NAME]. Run gcloud config list to start the Compute Engine instances.

D.

Activate two configurations using gcloud configurations activate [NAME]. Run gcloud configurations list to start the Compute Engine instances.
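
A sketch of option A with placeholder accounts, projects, and zones; note that creating a configuration also activates it, so the config set commands that follow apply to the configuration just created.

gcloud config configurations create default-env
gcloud config set account user-a@example.com
gcloud config set project project-a
gcloud config set compute/zone us-central1-a

gcloud config configurations create other-env
gcloud config set account user-b@example.com
gcloud config set project project-b
gcloud config set compute/zone europe-west1-b

# Switch between environments and start an instance in each
gcloud config configurations activate default-env
gcloud compute instances create vm-a
gcloud config configurations activate other-env
gcloud compute instances create vm-b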

Question 100

You are working for a hospital that stores its medical images in an on-premises data room. The hospital wants to use Cloud Storage for archival storage of these images. The hospital wants an automated process to upload any new medical images to Cloud Storage. You need to design and implement a solution. What should you do?

Options:

A.

Deploy a Dataflow job from the batch template "Datastore to Cloud Storage". Schedule the batch job on the desired interval.

B.

In the Cloud Console, go to Cloud Storage. Upload the relevant images to the appropriate bucket.

C.

Create a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage. Schedule the script as a cron job.

D.

Create a Pub/Sub topic, and enable a Cloud Storage trigger for the Pub/Sub topic. Create an application that sends all medical images to the Pub/Sub topic.
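
For illustration, the gsutil-based synchronization in option C might look like this; the local path, bucket name, and schedule are placeholders.

# Mirror the on-premises image directory to Cloud Storage
gsutil -m rsync -r /data/medical-images gs://hospital-image-archive

# Example crontab entry running the sync script hourly
# 0 * * * * /usr/local/bin/sync-images.sh >> /var/log/sync-images.log 2>&1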

Question 101

You need to add a group of new users to Cloud Identity. Some of the users already have existing Google accounts. You want to follow one of Google's recommended practices and avoid conflicting accounts. What should you do?

Options:

A.

Invite the user to transfer their existing account

B.

Invite the user to use an email alias to resolve the conflict

C.

Tell the user that they must delete their existing account

D.

Tell the user to remove all personal email from the existing account

Question 102

You are managing a stateful application deployed on Google Kubernetes Engine (GKE) that can only have one replica. You recently discovered that the application becomes unstable at peak times. You have identified that the application needs more CPU than what has been configured in the manifest at these peak times. You want Kubernetes to allocate the application sufficient CPU resources during these peak times, while ensuring cost efficiency during off-peak periods. What should you do?

Options:

A.

Enable cluster autoscaling on the GKE cluster.

B.

Configure a Vertical Pod Autoscaler on the Deployment.

C.

Configure a Horizontal Pod Autoscaler on the Deployment.

D.

Enable node auto-provisioning on the GKE cluster.
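
A minimal sketch of the Vertical Pod Autoscaler approach, assuming the Deployment is named stateful-app; the cluster and zone names are placeholders.

# Enable VPA on the cluster, then create a VerticalPodAutoscaler for the Deployment
gcloud container clusters update my-cluster \
    --zone=us-central1-a --enable-vertical-pod-autoscaling

kubectl apply -f - <<'EOF'
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: stateful-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stateful-app
  updatePolicy:
    updateMode: "Auto"
EOF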
