Test title:
GCP Dev Exam Test 3

Description:
Even more real questions about the GCP Dev Exam

Author:
gcptester

Creation date:
13/03/2024

Category:
IT

Number of questions: 46
Questions:
Your team is developing a Cloud Function triggered by Cloud Storage events. You want to accelerate testing and development of your Cloud Function while following Google-recommended best practices. What should you do?
• Create a new Cloud Function that is triggered when Cloud Audit Logs detects the cloudfunctions.functions.sourceCodeSet operation in the original Cloud Function. Send mock requests to the new function to evaluate the functionality.
• Make a copy of the Cloud Function, and rewrite the code to be HTTP-triggered. Edit and test the new version by triggering the HTTP endpoint. Send mock requests to the new function to evaluate the functionality.
• Install the Functions Framework library, and configure the Cloud Function on localhost. Make a copy of the function, and make edits to the new version. Test the new version using curl.
• Make a copy of the Cloud Function in the Google Cloud console. Use the Cloud console's in-line editor to make source code changes to the new function. Modify your web application to call the new function, and test the new version in production.

Your team is setting up a build pipeline for an application that will run in Google Kubernetes Engine (GKE). For security reasons, you only want images produced by the pipeline to be deployed to your GKE cluster. Which combination of Google Cloud services should you use?
• Cloud Build, Cloud Storage, and Binary Authorization
• Google Cloud Deploy, Cloud Storage, and Google Cloud Armor
• Google Cloud Deploy, Artifact Registry, and Google Cloud Armor
• Cloud Build, Artifact Registry, and Binary Authorization

You are supporting a business-critical application in production deployed on Cloud Run. The application is reporting HTTP 500 errors that are affecting the usability of the application. You want to be alerted when the number of errors exceeds 15% of the requests within a specific time window. What should you do?
• Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Scheduler to trigger the Cloud Function daily and alert you if the number of errors is above the defined threshold.
• Navigate to the Cloud Run page in the Google Cloud console, and select the service from the services list. Use the Metrics tab to visualize the number of errors for that revision, and refresh the page daily.
• Create an alerting policy in Cloud Monitoring that alerts you if the number of errors is above the defined threshold.
• Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Composer to trigger the Cloud Function daily and alert you if the number of errors is above the defined threshold.

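For reference, an alerting policy like the one described can also be created programmatically. A minimal sketch, assuming the google-cloud-monitoring library and hypothetical project and threshold values; a true 15%-of-requests condition would need a ratio-based (for example, MQL) condition, so this sketch only thresholds the raw 5xx request count:

```python
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

PROJECT = "projects/my-project"  # hypothetical project ID

client = monitoring_v3.AlertPolicyServiceClient()

# Simplified: alert when Cloud Run 5xx responses exceed a count over a
# 5-minute window. A real 15%-of-requests policy would use a ratio condition.
policy = monitoring_v3.AlertPolicy(
    display_name="Cloud Run 5xx errors",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="5xx request count too high",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter=(
                    'metric.type="run.googleapis.com/request_count" '
                    'AND resource.type="cloud_run_revision" '
                    'AND metric.labels.response_code_class="5xx"'
                ),
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=100,  # hypothetical count threshold
                duration=duration_pb2.Duration(seconds=300),
            ),
        )
    ],
)

client.create_alert_policy(name=PROJECT, alert_policy=policy)
```
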
You need to build a public API that authenticates, enforces quotas, and reports metrics for API callers. Which tool should you use to complete this architecture?
• App Engine
• Cloud Endpoints
• Identity-Aware Proxy
• GKE Ingress for HTTP(S) Load Balancing

You noticed that your application was forcefully shut down during a Deployment update in Google Kubernetes Engine. Your application didn't close the database connection before it was terminated. You want to update your application to make sure that it completes a graceful shutdown. What should you do?
• Update your code to process a received SIGTERM signal to gracefully disconnect from the database.
• Configure a PodDisruptionBudget to prevent the Pod from being forcefully shut down.
• Increase the terminationGracePeriodSeconds for your application.
• Configure a PreStop hook to shut down your application.

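To illustrate the SIGTERM-handling option, a minimal Python sketch (sqlite3 stands in for the real database client): Kubernetes sends SIGTERM on Pod termination and waits terminationGracePeriodSeconds (30 seconds by default) before sending SIGKILL, so the handler has a window to disconnect cleanly.

```python
import signal
import sqlite3
import sys
import time

# Stand-in for the real database connection.
db = sqlite3.connect("app.db")

def handle_sigterm(signum, frame):
    # Close the connection inside the grace period, then exit cleanly.
    db.close()
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

while True:  # main serving loop
    time.sleep(1)
```
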
You are a lead developer working on a new retail system that runs on Cloud Run and Firestore in Datastore mode. A web UI requirement is for the system to display a list of available products when users access the system and for the user to be able to browse through all products. You have implemented this requirement in the minimum viable product (MVP) phase by returning a list of all available products stored in Firestore. A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances are exceeding memory limits errors during busy times. This error coincides with spikes in the number of Datastore entity reads. You need to prevent Cloud Run from crashing and decrease the number of Datastore entity reads. You want to use a solution that optimizes system performance. What should you do?
• Modify the query that returns the product list using integer offsets.
• Modify the query that returns the product list using limits.
• Modify the Cloud Run configuration to increase the memory limits.
• Modify the query that returns the product list using cursors.

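For background on the cursor option, a minimal sketch assuming the google-cloud-datastore library (Firestore in Datastore mode) and a hypothetical Product kind: cursors with limits bound each query, whereas integer offsets still bill for every skipped entity read.

```python
from google.cloud import datastore

client = datastore.Client()
PAGE_SIZE = 20  # hypothetical page size

# First page: bounded read instead of loading every product into memory.
query = client.query(kind="Product")
iterator = query.fetch(limit=PAGE_SIZE)
page = list(next(iterator.pages))
cursor = iterator.next_page_token

# Next page: resume from the cursor, skipping nothing and re-reading nothing.
iterator = query.fetch(limit=PAGE_SIZE, start_cursor=cursor)
next_page = list(next(iterator.pages))
```
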
You need to deploy an internet-facing microservices application to Google Kubernetes Engine (GKE). You want to validate new features using the A/B testing method. You have the following requirements for deploying new container image releases:
• There is no downtime when new container images are deployed.
• New production releases are tested and verified using a subset of production users.
What should you do?
• 1. Configure your CI/CD pipeline to update the Deployment manifest file by replacing the container version with the latest version. 2. Recreate the Pods in your cluster by applying the Deployment manifest file. 3. Validate the application's performance by comparing its functionality with the previous release version, and roll back if an issue arises.
• 1. Create a second namespace on GKE for the new release version. 2. Create a Deployment configuration for the second namespace with the desired number of Pods. 3. Deploy new container versions in the second namespace. 4. Update the Ingress configuration to route traffic to the namespace with the new container versions.
• 1. Install the Anthos Service Mesh on your GKE cluster. 2. Create two Deployments on the GKE cluster, and label them with different version names. 3. Implement an Istio routing rule to send a small percentage of traffic to the Deployment that references the new version of the application.
• 1. Implement a rolling update pattern by replacing the Pods gradually with the new release version. 2. Validate the application's performance for the new subset of users during the rollout, and roll back if an issue arises.

Your team manages a large Google Kubernetes Engine (GKE) cluster. Several application teams currently use the same namespace to develop microservices for the cluster. Your organization plans to onboard additional teams to create microservices. You need to configure multiple environments while ensuring the security and optimal performance of each team's work. You want to minimize cost and follow Google-recommended best practices. What should you do?
• Create new role-based access controls (RBAC) for each team in the existing cluster, and define resource quotas.
• Create a new namespace for each environment in the existing cluster, and define resource quotas.
• Create a new GKE cluster for each team.
• Create a new namespace for each team in the existing cluster, and define resource quotas.

You have deployed a Java application to Cloud Run. Your application requires access to a database hosted on Cloud SQL. Due to regulatory requirements, your connection to the Cloud SQL instance must use its internal IP address. How should you configure the connectivity while following Google-recommended best practices?
• Configure your Cloud Run service with a Cloud SQL connection.
• Configure your Cloud Run service to use a Serverless VPC Access connector.
• Configure your application to use the Cloud SQL Java connector.
• Configure your application to connect to an instance of the Cloud SQL Auth proxy.

Your application stores customers' content in a Cloud Storage bucket, with each object being encrypted with the customer's encryption key. The key for each object in Cloud Storage is entered into your application by the customer. You discover that your application is receiving an HTTP 4xx error when reading the object from Cloud Storage. What is a possible cause of this error?
• You attempted the read operation on the object with the customer's base64-encoded key.
• You attempted the read operation without the base64-encoded SHA256 hash of the encryption key.
• You entered the same encryption algorithm specified by the customer when attempting the read operation.
• You attempted the read operation on the object with the base64-encoded SHA256 hash of the customer's key.

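For context, a sketch of a customer-supplied-key (CSEK) read with the google-cloud-storage library and hypothetical names: the caller supplies the raw AES-256 key, and the client library derives the base64-encoded key and SHA256-hash request headers itself; sending the hash in place of the key, or the wrong key, is exactly the kind of mismatch that yields a 4xx response.

```python
import os
from google.cloud import storage

# Hypothetical: the raw 32-byte AES-256 key is provided by the customer.
customer_key = os.urandom(32)

client = storage.Client()
bucket = client.bucket("my-bucket")

# Pass the raw key; the library sends the base64-encoded key and its
# base64-encoded SHA256 hash in the x-goog-encryption-* headers.
blob = bucket.blob("customer-object.bin", encryption_key=customer_key)
data = blob.download_as_bytes()
```
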
You have two Google Cloud projects, named Project A and Project B. You need to create a Cloud Function in Project A that saves the output in a Cloud Storage bucket in Project B. You want to follow the principle of least privilege. What should you do?
• 1. Create a Google service account in Project B. 2. Deploy the Cloud Function with the service account in Project A. 3. Assign this service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
• 1. Create a Google service account in Project A. 2. Deploy the Cloud Function with the service account in Project A. 3. Assign this service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
• 1. Determine the default App Engine service account (PROJECT_ID@appspot.gserviceaccount.com) in Project A. 2. Deploy the Cloud Function with the default App Engine service account in Project A. 3. Assign the default App Engine service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
• 1. Determine the default App Engine service account (PROJECT_ID@appspot.gserviceaccount.com) in Project B. 2. Deploy the Cloud Function with the default App Engine service account in Project A. 3. Assign the default App Engine service account the roles/storage.objectCreator role on the storage bucket residing in Project B.

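The bucket-level grant that the options describe can also be expressed in code. A sketch, assuming the google-cloud-storage library and hypothetical project, bucket, and service account names:

```python
from google.cloud import storage

# Hypothetical names for the cross-project grant.
BUCKET = "bucket-in-project-b"
FUNCTION_SA = "serviceAccount:my-fn-sa@project-a.iam.gserviceaccount.com"

client = storage.Client(project="project-b")
bucket = client.bucket(BUCKET)

# Grant only objectCreator, and only on this bucket (least privilege).
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectCreator", "members": {FUNCTION_SA}}
)
bucket.set_iam_policy(policy)
```
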
A governmental regulation was recently passed that affects your application. For compliance purposes, you are now required to send a duplicate of specific application logs from your application's project to a project that is restricted to the security team. What should you do?
• Create user-defined log buckets in the security team's project. Configure a Cloud Logging sink to route your application's logs to log buckets in the security team's project.
• Create a job that copies the logs from the _Required log bucket into the security team's log bucket in their project.
• Modify the _Default log bucket sink rules to reroute the logs into the security team's log bucket.
• Create a job that copies the System Event logs from the _Required log bucket into the security team's log bucket in their project.

You plan to deploy a new Go application to Cloud Run. The source code is stored in Cloud Source Repositories. You need to configure a fully managed, automated, continuous deployment pipeline that runs when a source code commit is made. You want to use the simplest deployment solution. What should you do?
• Configure a cron job on your workstations to periodically run gcloud run deploy --source in the working directory.
• Configure a Jenkins trigger to run the container build and deploy process for each source code commit to Cloud Source Repositories.
• Configure continuous deployment of new revisions from a source repository for Cloud Run using buildpacks.
• Use Cloud Build with a trigger configured to run the container build and deploy process for each source code commit to Cloud Source Repositories.

Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the target service in a way that is resilient. You also want to be able to run health checks on the legacy service on a separate port. How should you set up the connection? (Choose two.)
• Use Traffic Director with a sidecar proxy to connect the application to the service.
• Use a proxyless Traffic Director configuration to connect the application to the service.
• Configure the legacy service's firewall to allow health checks originating from the proxy.
• Configure the legacy service's firewall to allow health checks originating from the application.
• Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.

You have an application running in a production Google Kubernetes Engine (GKE) cluster. You use Cloud Deploy to automatically deploy your application to your production GKE cluster. As part of your development process, you are planning to make frequent changes to the application's source code and need to select the tools to test the changes before pushing them to your remote source code repository. Your toolset must meet the following requirements:
• Test frequent local changes automatically.
• Local deployment emulates production deployment.
Which tools should you use to test building and running a container on your laptop using minimal resources?
• Docker Compose and dockerd
• Terraform and kubeadm
• Minikube and Skaffold
• kaniko and Tekton

You are deploying a Python application to Cloud Run using Cloud Source Repositories and Cloud Build. The Cloud Build pipeline is shown below. You want to optimize deployment times and avoid unnecessary steps. What should you do?
• Remove the step that pushes the container to Artifact Registry.
• Deploy a new Docker registry in a VPC, and use Cloud Build worker pools inside the VPC to run the build pipeline.
• Store image artifacts in a Cloud Storage bucket in the same region as the Cloud Run instance.
• Add the --cache-from argument to the Docker build step in your build config file.

You are developing an event-driven application. You have created a topic to receive messages sent to Pub/Sub. You want those messages to be processed in real time. You need the application to be independent from any other system and only incur costs when new messages arrive. How should you configure the architecture?
• Deploy the application on Compute Engine. Use a Pub/Sub push subscription to process new messages in the topic.
• Deploy your code on Cloud Functions. Use a Pub/Sub trigger to invoke the Cloud Function. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read messages from it.
• Deploy the application on Google Kubernetes Engine. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read messages from it.
• Deploy your code on Cloud Functions. Use a Pub/Sub trigger to handle new messages in the topic.

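A minimal sketch of the Pub/Sub-triggered function pattern, using the Functions Framework for Python with a hypothetical handler name: the function runs only when a message arrives, so there is no standing infrastructure to pay for.

```python
import base64

import functions_framework

@functions_framework.cloud_event
def process_message(cloud_event):
    # Pub/Sub delivers the payload base64-encoded inside the CloudEvent.
    payload = base64.b64decode(cloud_event.data["message"]["data"]).decode("utf-8")
    print(f"Processing message: {payload}")
```

Deployed with a Pub/Sub trigger (for example, gcloud functions deploy with --trigger-topic), this handler is invoked once per message.
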
You have an application running on Google Kubernetes Engine (GKE). The application is currently using a logging library and is outputting to standard output. You need to export the logs to Cloud Logging, and you need the logs to include metadata about each request. You want to use the simplest method to accomplish this. What should you do?
• Change your application's logging library to the Cloud Logging library, and configure your application to export logs to Cloud Logging.
• Update your application to output logs in JSON format, and add the necessary metadata to the JSON.
• Update your application to output logs in CSV format, and add the necessary metadata to the CSV.
• Install the Fluent Bit agent on each of your GKE nodes, and have the agent export all logs from /var/log.

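For the JSON option: on GKE, single-line JSON written to stdout is parsed by the built-in logging agent, and recognized fields such as severity become metadata on the Cloud Logging entry. A minimal sketch with hypothetical field names:

```python
import json
import sys

def log(message, severity="INFO", **metadata):
    # One JSON object per line on stdout; the GKE logging agent parses it
    # and maps known fields (e.g., severity) onto the log entry.
    entry = {"message": message, "severity": severity, **metadata}
    print(json.dumps(entry), file=sys.stdout, flush=True)

log("order received", severity="INFO",
    httpRequest={"requestMethod": "POST", "requestUrl": "/orders"})
```
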
You are working on a new application that is deployed on Cloud Run and uses Cloud Functions. Each time new features are added, new Cloud Functions and Cloud Run services are deployed. You use ENV variables to keep track of the services and enable interservice communication, but the maintenance of the ENV variables has become difficult. You want to implement dynamic discovery in a scalable way. What should you do?
• Configure your microservices to use the Cloud Run Admin and Cloud Functions APIs to query for deployed Cloud Run services and Cloud Functions in the Google Cloud project.
• Create a Service Directory namespace. Use API calls to register the services during deployment, and query during runtime.
• Rename the Cloud Functions and Cloud Run service endpoints using a well-documented naming convention.
• Deploy Hashicorp Consul on a single Compute Engine instance. Register the services with Consul during deployment, and query during runtime.

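A sketch of the Service Directory approach, assuming the google-cloud-service-directory library and hypothetical project, namespace, service, and URL values: services register at deploy time and callers resolve them at runtime instead of maintaining ENV variables.

```python
from google.cloud import servicedirectory_v1

# Hypothetical namespace path.
PARENT = "projects/my-project/locations/us-central1/namespaces/prod"

# Deploy time: register the new service and record where it lives.
reg = servicedirectory_v1.RegistrationServiceClient()
service = reg.create_service(
    parent=PARENT,
    service_id="order-service",
    service=servicedirectory_v1.Service(
        annotations={"url": "https://order-service-abc123-uc.a.run.app"}
    ),
)

# Runtime: other services resolve the endpoint dynamically.
lookup = servicedirectory_v1.LookupServiceClient()
resolved = lookup.resolve_service(request={"name": service.name})
print(resolved.service.annotations["url"])
```
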
You work for a financial services company that has a container-first approach. Your team develops microservices applications. A Cloud Build pipeline creates the container image, runs regression tests, and publishes the image to Artifact Registry. You need to ensure that only containers that have passed the regression tests are deployed to Google Kubernetes Engine (GKE) clusters. You have already enabled Binary Authorization on the GKE clusters. What should you do next?
• Create an attestor and a policy. After a container image has successfully passed the regression tests, use Cloud Build to run Kritis Signer to create an attestation for the container image.
• Deploy Voucher Server and Voucher Client components. After a container image has successfully passed the regression tests, run Voucher Client as a step in the Cloud Build pipeline.
• Set the Pod Security Standard level to Restricted for the relevant namespaces. Use Cloud Build to digitally sign the container images that have passed the regression tests.
• Create an attestor and a policy. Create an attestation for the container images that have passed the regression tests as a step in the Cloud Build pipeline.

You are reviewing and updating your Cloud Build steps to adhere to best practices. Currently, your build steps include:
1. Pull the source code from a source repository.
2. Build a container image.
3. Upload the built image to Artifact Registry.
You need to add a step to perform a vulnerability scan of the built container image, and you want the results of the scan to be available to your deployment pipeline running in Google Cloud. You want to minimize changes that could disrupt other teams' processes. What should you do?
• Enable Binary Authorization, and configure it to attest that no vulnerabilities exist in a container image.
• Upload the built container images to your Docker Hub instance, and scan them for vulnerabilities.
• Enable the Container Scanning API in Artifact Registry, and scan the built container images for vulnerabilities.
• Add Artifact Registry to your Aqua Security instance, and scan the built container images for vulnerabilities.

You are developing an online gaming platform as a microservices application on Google Kubernetes Engine (GKE). Users on social media are complaining about long loading times for certain URL requests to the application. You need to investigate performance bottlenecks in the application and identify which HTTP requests have a significantly high latency span in user requests. What should you do?
• Configure GKE workload metrics using kubectl. Select all Pods to send their metrics to Cloud Monitoring. Create a custom dashboard of application metrics in Cloud Monitoring to determine performance bottlenecks of your GKE cluster.
• Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths.
• Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for inspection and analysis. Create an analysis report on Trace to analyze user requests.
• Install tcpdump on your GKE nodes. Run tcpdump to capture network traffic over an extended period of time to collect data. Analyze the data files using Wireshark to determine the cause of high latency.

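A minimal sketch of the OpenTelemetry option, assuming the opentelemetry-sdk and opentelemetry-exporter-gcp-trace packages and a hypothetical span name; each instrumented request produces latency spans that can be inspected in Trace:

```python
from opentelemetry import trace
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Export spans to Cloud Trace in batches.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request():
    # Each request gets a span; downstream calls can open child spans,
    # which is what exposes the high-latency hops.
    with tracer.start_as_current_span("load-profile-page"):
        ...
```
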
You need to load-test a set of REST API endpoints that are deployed to Cloud Run. The API responds to HTTP POST requests. Your load tests must meet the following requirements:
• Load is initiated from multiple parallel threads.
• User traffic to the API originates from multiple source IP addresses.
• Load can be scaled up using additional test instances.
You want to follow Google-recommended best practices. How should you configure the load testing?
• Create an image that has cURL installed, and configure cURL to run a test plan. Deploy the image in a managed instance group, and run one instance of the image for each VM.
• Create an image that has cURL installed, and configure cURL to run a test plan. Deploy the image in an unmanaged instance group, and run one instance of the image for each VM.
• Deploy a distributed load testing framework on a private Google Kubernetes Engine cluster. Deploy additional Pods as needed to initiate more traffic and support the number of concurrent users.
• Download the container image of a distributed load testing framework on Cloud Shell. Sequentially start several instances of the container on Cloud Shell to increase the load on the API.

Your team is creating a serverless web application on Cloud Run. The application needs to access images stored in a private Cloud Storage bucket. You want to give the application Identity and Access Management (IAM) permission to access the images in the bucket, while also securing the services using Google-recommended best practices. What should you do?
• Enforce signed URLs for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service account.
• Enforce public access prevention for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service account.
• Enforce signed URLs for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the Storage Object Viewer IAM role on the bucket to the service account.
• Enforce public access prevention for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the Storage Object Viewer IAM role on the bucket to the service account.

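For the user-managed service account options: once the Cloud Run service runs as that identity with Storage Object Viewer on the bucket, the application code needs no embedded keys, because Application Default Credentials resolves to the attached service account. A sketch with hypothetical bucket and object names:

```python
from google.cloud import storage

# No credentials in code: on Cloud Run, Application Default Credentials
# resolves to the user-managed service account attached to the service.
client = storage.Client()
blob = client.bucket("private-images-bucket").blob("images/logo.png")
image_bytes = blob.download_as_bytes()
```
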
You are using Cloud Run to host a global ecommerce web application. Your company's design team is creating a new color scheme for the web app. You have been tasked with determining whether the new color scheme will increase sales. You want to conduct testing on live production traffic. How should you design the study?
• Use an external HTTP(S) load balancer to route a predetermined percentage of traffic to two different color schemes of your application. Analyze the results to determine whether there is a statistically significant difference in sales.
• Use an external HTTP(S) load balancer to route traffic to the original color scheme while the new deployment is created and tested. After testing is complete, reroute all traffic to the new color scheme. Analyze the results to determine whether there is a statistically significant difference in sales.
• Use an external HTTP(S) load balancer to mirror traffic to the new version of your application. Analyze the results to determine whether there is a statistically significant difference in sales.
• Enable a feature flag that displays the new color scheme to half of all users. Monitor sales to see whether they increase for this group of users.

You are a developer at a large corporation. You manage three Google Kubernetes Engine clusters on Google Cloud. Your team's developers need to switch from one cluster to another regularly without losing access to their preferred development tools. You want to configure access to these multiple clusters while following Google-recommended best practices. What should you do?
• Ask the developers to use Cloud Shell and run gcloud container clusters get-credentials to switch to another cluster.
• In a configuration file, define the clusters, users, and contexts. Share the file with the developers and ask them to use kubectl config to add cluster, user, and context details.
• Ask the developers to install the gcloud CLI on their workstation and run gcloud container clusters get-credentials to switch to another cluster.
• Ask the developers to open three terminals on their workstation and use kubectl config to configure access to each cluster.

You are a lead developer working on a new retail system that runs on Cloud Run and Firestore. A web UI requirement is for the user to be able to browse through all products. A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances are exceeding memory limits errors during busy times. This error coincides with spikes in the number of Firestore queries. You need to prevent Cloud Run from crashing and decrease the number of Firestore queries. You want to use a solution that optimizes system performance. What should you do?
• Modify the query that returns the product list using cursors with limits.
• Create a custom index over the products.
• Modify the query that returns the product list using integer offsets.
• Modify the Cloud Run configuration to increase the memory limits.

You are a developer at a large organization. Your team uses Git for source code management (SCM). You want to ensure that your team follows Google-recommended best practices to manage code to drive higher rates of software delivery. Which SCM process should your team use?
• Each developer commits their code to the main branch before each product release, conducts testing, and rolls back if integration issues are detected.
• Each group of developers copies the repository, commits their changes to their repository, and merges their code into the main repository before each product release.
• Each developer creates a branch for their own work, commits their changes to their branch, and merges their code into the main branch daily.
• Each group of developers creates a feature branch from the main branch for their work, commits their changes to their branch, and merges their code into the main branch after the change advisory board approves it.

You have a web application that publishes messages to Pub/Sub. You plan to build new versions of the application locally and want to quickly test Pub/Sub integration for each new build. How should you configure local testing?
• Install Cloud Code on the integrated development environment (IDE). Navigate to Cloud APIs, and enable Pub/Sub against a valid Google Project ID. When developing locally, configure your application to call pubsub.googleapis.com.
• Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your application to use the local emulator with $(gcloud beta emulators pubsub env-init).
• In the Google Cloud console, navigate to the API Library, and enable the Pub/Sub API. When developing locally, configure your application to call pubsub.googleapis.com.
• Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your application to use the local emulator by exporting the PUBSUB_EMULATOR_HOST variable.

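For the emulator options, a sketch of how the Python client is redirected to a local emulator purely through PUBSUB_EMULATOR_HOST; the localhost:8085 address is the emulator's usual default and is an assumption here:

```python
import os

# Must be set before the client is created; normally exported in the shell
# via $(gcloud beta emulators pubsub env-init) after starting the emulator
# with: gcloud beta emulators pubsub start --project=test-project
os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()  # now talks to the emulator
topic_path = publisher.topic_path("test-project", "local-build-topic")
publisher.create_topic(name=topic_path)
publisher.publish(topic_path, b"hello from the local build").result()
```
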
Your ecommerce application receives external requests and forwards them to third-party API services for credit card processing, shipping, and inventory management as shown in the diagram. Your customers are reporting that your application is running slowly at unpredictable times. The application doesn't report any metrics. You need to determine the cause of the inconsistent performance. What should you do?
• Install the OpenTelemetry library for your respective language, and instrument your application.
• Install the Ops Agent inside your container and configure it to gather application metrics.
• Modify your application to read and forward the X-Cloud-Trace-Context header when it calls the downstream services.
• Enable Managed Service for Prometheus on the Google Kubernetes Engine cluster to gather application metrics.

You are developing a new application. You want the application to be triggered only when a given file is updated in your Cloud Storage bucket. Your trigger might change, so your process must support different types of triggers. You want the configuration to be simple so that multiple team members can update the triggers in the future. What should you do?
• Configure Cloud Storage events to be sent to Pub/Sub, and use Pub/Sub events to trigger a Cloud Build job that executes your application.
• Create an Eventarc trigger that monitors your Cloud Storage bucket for a specific filename, and set the target as Cloud Run.
• Configure a Cloud Function that executes your application and is triggered when an object is updated in Cloud Storage.
• Configure a Firebase function that executes your application and is triggered when an object is updated in Cloud Storage.

You are defining your system tests for an application running in Cloud Run in a Google Cloud project. You need to create a testing environment that is isolated from the production environment. You want to fully automate the creation of the testing environment with the least amount of effort and execute automated tests. What should you do?
• Using Cloud Build, execute Terraform scripts to create a new Google Cloud project and a Cloud Run instance of your application in the Google Cloud project.
• Using Cloud Build, execute a Terraform script to deploy a new Cloud Run revision in the existing Google Cloud project. Use traffic splitting to send traffic to your test environment.
• Using Cloud Build, execute gcloud commands to create a new Google Cloud project and a Cloud Run instance of your application in the Google Cloud project.
• Using Cloud Build, execute gcloud commands to deploy a new Cloud Run revision in the existing Google Cloud project. Use traffic splitting to send traffic to your test environment.

You are a cluster administrator for Google Kubernetes Engine (GKE). Your organization's clusters are enrolled in a release channel. You need to be informed of relevant events that affect your GKE clusters, such as available upgrades and security bulletins. What should you do?
• Configure cluster notifications to be sent to a Pub/Sub topic.
• Execute a scheduled query against the google_cloud_release_notes BigQuery dataset.
• Query the GKE API for available versions.
• Create an RSS subscription to receive a daily summary of the GKE release notes.

You are tasked with using C++ to build and deploy a microservice for an application hosted on Google Cloud. The code needs to be containerized and use several custom software libraries that your team has built. You do not want to maintain the underlying infrastructure of the application. How should you deploy the microservice?
• Use Cloud Functions to deploy the microservice.
• Use Cloud Build to create the container, and deploy it on Cloud Run.
• Use Cloud Shell to containerize your microservice, and deploy it on a Container-Optimized OS Compute Engine instance.
• Use Cloud Shell to containerize your microservice, and deploy it on standard Google Kubernetes Engine.

You need to containerize a web application that will be hosted on Google Cloud behind a global load balancer with SSL certificates. You don't have the time to develop authentication at the application level, and you want to offload SSL encryption and management from your application. You want to configure the architecture using managed services where possible. What should you do?
• Host the application on Google Kubernetes Engine, and deploy an NGINX Ingress Controller to handle authentication.
• Host the application on Google Kubernetes Engine, and deploy cert-manager to manage SSL certificates.
• Host the application on Compute Engine, and configure Cloud Endpoints for your application.
• Host the application on Google Kubernetes Engine, and use Identity-Aware Proxy (IAP) with Cloud Load Balancing and Google-managed certificates.

You manage a system that runs on stateless Compute Engine VMs and Cloud Run instances. Cloud Run is connected to a VPC, and the ingress setting is set to Internal. You want to schedule tasks on Cloud Run. You create a service account and grant it the roles/run.invoker Identity and Access Management (IAM) role. When you create a schedule and test it, a 403 Permission Denied error is returned in Cloud Logging. What should you do?
• Grant the service account the roles/run.developer IAM role.
• Configure a cron job on the Compute Engine VMs to trigger Cloud Run on schedule.
• Change the Cloud Run ingress setting to 'Internal and Cloud Load Balancing.'
• Use Cloud Scheduler with Pub/Sub to invoke Cloud Run.

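A sketch of the Cloud Scheduler-plus-Pub/Sub option, assuming the google-cloud-scheduler library and hypothetical project, topic, and cron values; a Pub/Sub push subscription (not shown) would then invoke the Cloud Run service, which internal ingress still admits:

```python
from google.cloud import scheduler_v1

# Hypothetical location path for the Scheduler job.
PARENT = "projects/my-project/locations/us-central1"

client = scheduler_v1.CloudSchedulerClient()
job = scheduler_v1.Job(
    name=f"{PARENT}/jobs/nightly-task",
    schedule="0 2 * * *",  # 02:00 daily, unix-cron format
    pubsub_target=scheduler_v1.PubsubTarget(
        topic_name="projects/my-project/topics/scheduled-tasks",
        data=b'{"task": "cleanup"}',
    ),
)
client.create_job(parent=PARENT, job=job)
```
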
You work on an application that relies on Cloud Spanner as its main datastore. New application features have occasionally caused performance regressions. You want to prevent performance issues by running an automated performance test with Cloud Build for each commit made. If multiple commits are made at the same time, the tests might run concurrently. What should you do?
• Create a new project with a random name for every build. Load the required data. Delete the project after the test is run.
• Create a new Cloud Spanner instance for every build. Load the required data. Delete the Cloud Spanner instance after the test is run.
• Create a project with a Cloud Spanner instance and the required data. Adjust the Cloud Build build file to automatically restore the data to its previous state after the test is run.
• Start the Cloud Spanner emulator locally. Load the required data. Shut down the emulator after the test is run.

Your company's security team uses Identity and Access Management (IAM) to track which users have access to which resources. You need to create a version control system that can integrate with your security team's processes. You want your solution to support fast release cycles and frequent merges to your main branch to minimize merge conflicts. What should you do?
• Create a Cloud Source Repositories repository, and use trunk-based development.
• Create a Cloud Source Repositories repository, and use feature-based development.
• Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use trunk-based development.
• Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use feature-based development.

You recently developed an application that monitors a large number of stock prices. You need to configure Pub/Sub to receive messages and update the current stock price in an in-memory database. A downstream service needs the most up-to-date prices in the in-memory database to perform stock trading transactions. Each message contains three pieces of information:
• Stock symbol
• Stock price
• Timestamp for the update
How should you set up your Pub/Sub subscription?
• Create a push subscription with exactly-once delivery enabled.
• Create a pull subscription with both ordering and exactly-once delivery turned off.
• Create a pull subscription with ordering enabled, using the stock symbol as the ordering key.
• Create a push subscription with both ordering and exactly-once delivery turned off.

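To illustrate the ordering-key option, a sketch with the google-cloud-pubsub library and hypothetical topic and symbol values: ordering must be enabled on both the publisher and the subscription, and keying by stock symbol serializes updates per stock while different stocks still process in parallel.

```python
from google.cloud import pubsub_v1

# Message ordering must be enabled explicitly on the publisher; the
# subscription must also be created with ordering enabled.
publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(
        enable_message_ordering=True
    )
)
topic_path = publisher.topic_path("my-project", "stock-prices")

# All updates with ordering_key="GOOG" are delivered in publish order.
publisher.publish(
    topic_path,
    b'{"symbol": "GOOG", "price": 101.25, "ts": "2024-03-13T10:00:00Z"}',
    ordering_key="GOOG",
).result()
```
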
You are a developer at a social media company. The company runs their social media website on-premises and uses MySQL as a backend to store user profiles and user posts. Your company plans to migrate to Google Cloud, and your team will migrate user profile information to Firestore. You are tasked with designing the Firestore collections. What should you do?
• Create one root collection for user profiles, and create one root collection for user posts.
• Create one root collection for user profiles, and create one subcollection for each user's posts.
• Create one root collection for user profiles, and store each user's post as a nested list in the user profile document.
• Create one root collection for user posts, and create one subcollection for each user's profile.

Your team recently deployed an application on Google Kubernetes Engine (GKE). You are monitoring your application and want to be alerted when the average memory consumption of your containers is under 20% or above 80%. How should you configure the alerts?
• Create a Cloud Function that consumes the Monitoring API. Create a schedule to trigger the Cloud Function hourly and alert you if the average memory consumption is outside the defined range.
• In Cloud Monitoring, create an alerting policy to notify you if the average memory consumption is outside the defined range.
• Create a Cloud Function that runs on a schedule, executes kubectl top on all the workloads on the cluster, and sends an email alert if the average memory consumption is outside the defined range.
• Write a script that pulls the memory consumption of the instance at the OS level and sends an email alert if the average memory consumption is outside the defined range.

You manage a microservice-based ecommerce platform on Google Cloud that sends confirmation emails to a third-party email service provider using a Cloud Function. Your company just launched a marketing campaign, and some customers are reporting that they have not received order confirmation emails. You discover that the services triggering the Cloud Function are receiving HTTP 500 errors. You need to change the way emails are handled to minimize email loss. What should you do?
• Increase the Cloud Function's timeout to nine minutes.
• Configure the sender application to publish the outgoing emails in a message to a Pub/Sub topic. Update the Cloud Function configuration to consume the Pub/Sub queue.
• Configure the sender application to write emails to Memorystore and then trigger the Cloud Function. When the function is triggered, it reads the email details from Memorystore and sends them to the email service.
• Configure the sender application to retry the execution of the Cloud Function every one second if a request fails.

You have a web application that publishes messages to Pub/Sub. You plan to build new versions of the application locally and need to quickly test Pub/Sub integration for each new build. How should you configure local testing?
• In the Google Cloud console, navigate to the API Library, and enable the Pub/Sub API. When developing locally, configure your application to call pubsub.googleapis.com.
• Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your application to use the local emulator by exporting the PUBSUB_EMULATOR_HOST variable.
• Run the gcloud config set api_endpoint_overrides/pubsub https://pubsubemulator.googleapis.com.com/ command to change the Pub/Sub endpoint prior to starting the application.
• Install Cloud Code on the integrated development environment (IDE). Navigate to Cloud APIs, and enable Pub/Sub against a valid Google Project ID. When developing locally, configure your application to call pubsub.googleapis.com.

Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the legacy service in a way that is resilient and requires the fewest number of steps. You also want to be able to run probe-based health checks on the legacy service on a separate port. How should you set up the connection? (Choose two.)
• Use Traffic Director with a sidecar proxy to connect the application to the service.
• Set up a proxyless Traffic Director configuration for the application.
• Configure the legacy service's firewall to allow health checks originating from the sidecar proxy.
• Configure the legacy service's firewall to allow health checks originating from the application.
• Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.

You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which function is consuming the most CPU and memory resources. What should you do?
• Add print commands to the application source code to log when each function is called, and redeploy the application.
• Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.
• Import OpenTelemetry and Trace export packages into your application, and create the trace provider. Review the latency data for your application on the Trace overview page, and identify which functions cause the most latency.
• Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions.

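For context on the Profiler option: the application in the question is Go (which would use cloud.google.com/go/profiler), but the agent pattern is the same across languages. A sketch with the Python google-cloud-profiler package and hypothetical service names:

```python
import googlecloudprofiler

def main():
    try:
        # Starts the background profiling agent; CPU and heap profiles
        # appear as a flame graph in the Google Cloud console.
        googlecloudprofiler.start(
            service="web-frontend",   # hypothetical service name
            service_version="1.0.0",
        )
    except (ValueError, NotImplementedError) as exc:
        print(f"Profiler failed to start: {exc}")

if __name__ == "__main__":
    main()
```
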
You are developing a flower ordering application. Currently you have three microservices:
• Order Service (receives the orders)
• Order Fulfillment Service (processes the orders)
• Notification Service (notifies the customer when the order is filled)
You need to determine how the services will communicate with each other. You want incoming orders to be processed quickly and you need to collect order information for fulfillment. You also want to make sure orders are not lost between your services and are able to communicate asynchronously. How should the requests be processed?
• Order request --> Order Service --> Order Fulfillment Service --> Notification Service
• Order request --> Order Service --> Cloud Storage Bucket --> Order Fulfillment Service --> Cloud Storage Bucket --> Notification Service
• Order request --> Order Service --> Firestore database --> Order Fulfillment Service --> Firestore database --> Notification Service
• Order request --> Order Service --> Pub/Sub queue --> Order Fulfillment Service --> Firestore database --> Pub/Sub queue --> Notification Service
