
Developer - Part 1

Test title:
Developer - Part 1

Description:
latest questions

Creation date: 2026/04/17

Category: Other

Number of questions: 13

Rating: (0)
Questions:

Your application in production has recently been experiencing reliability issues, and you are unsure how the application will behave in the event of an unexpected failure. You want to assess the application's resilience. What should you do?

- Write end-to-end tests to determine how different microservices interact. Validate that all tests pass.
- Perform chaos engineering by intentionally introducing failures into the system. Observe how the application behaves, and ensure that it is able to recover from a failure.
- Test individual units of code for a critical portion of the application's code. Ensure that unit tests are part of the Cloud Build pipeline.
- Perform load testing of the application, and use JMeter for the critical endpoints of the application. Ensure that the application performs as expected under a heavy load.
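The chaos-engineering option can be sketched as a toy fault-injection wrapper: a caller that survives randomly injected failures demonstrates the recovery behavior the question is after. All names, the failure rate, and the retry policy here are illustrative, not part of any specific chaos-engineering tool.

```python
import random

def chaotic(func, failure_rate=0.2, rng=None):
    """Wrap a callable so it randomly raises, simulating an injected fault."""
    rng = rng or random.Random()
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise RuntimeError("injected fault")
        return func(*args, **kwargs)
    return wrapper

def call_with_retry(func, attempts=5):
    """A resilient caller: retries until the wrapped call succeeds."""
    for _ in range(attempts):
        try:
            return func()
        except RuntimeError:
            continue
    raise RuntimeError("service unavailable after retries")
```

In a real chaos experiment the faults are injected into the running system (killed pods, severed network links) rather than into a function call, but the assertion is the same: the application keeps serving, or recovers, despite the failure.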

You are a developer at an ecommerce company. You are tasked with developing a globally consistent shopping cart for logged-in users across both mobile and desktop clients. You need to configure how the items that are added to users’ carts are stored. How should you configure this cart service?

- Store the carts in a separate Memorystore for Redis instance, and configure each user's IP address as the key.
- Store the carts in a separate Firestore document, and configure each user ID as the document's key.
- Insert the carts in a separate Spanner table, and configure each user's encrypted password as the primary key.
- Create and store the carts in the shopping-cart HTTP cookie.
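The Firestore option keys one cart document per user, so the same cart is visible from any client the user signs in on. A minimal sketch of that addressing scheme, using an in-memory dict to stand in for the Firestore client (the collection name "carts" and the helper names are illustrative assumptions):

```python
def cart_doc_path(user_id: str) -> str:
    """One cart document per user, addressed by the stable user ID."""
    return f"carts/{user_id}"

def add_item(store: dict, user_id: str, item: dict) -> None:
    """Simulates upserting an item into the user's cart document."""
    store.setdefault(cart_doc_path(user_id), []).append(item)
```

The key design point is the key itself: a user ID is stable across devices, whereas an IP address changes between mobile and desktop sessions and a cookie lives on only one client.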

You are building an application that will store frequently accessed data in a Memorystore for Redis Cluster instance. You would like to make the application resilient to network issues. You need to ensure that the application handles client disconnections from a Redis instance gracefully to minimize disruption and ensure stability of the application. What should you do?

- Immediately terminate the application instance upon detecting a Redis disconnection to force a restart, and wait for clients to reconnect to the cache as soon as the application becomes available.
- Configure the Redis client to reconnect after a fixed delay of 60 seconds.
- Use Memorystore’s automatic failover mechanisms to make the Redis cache available in a secondary zone.
- Implement exponential backoff with a jitter when reconnecting after a disconnection. Configure client-side caching to serve data during brief outages.
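Exponential backoff with jitter can be sketched in a few lines: each retry waits a random amount between zero and an exponentially growing (but capped) ceiling, so reconnecting clients do not stampede the instance in lockstep. The base, cap, and attempt count below are illustrative defaults, not Redis-client settings.

```python
import random

def backoff_delays(base=0.5, cap=30.0, attempts=6, rng=None):
    """Exponential backoff with "full jitter": retry i waits a random
    duration in [0, min(cap, base * 2**i)] seconds."""
    rng = rng or random.Random()
    return [rng.uniform(0, min(cap, base * 2 ** i)) for i in range(attempts)]
```

The jitter is what distinguishes this from a fixed 60-second delay: randomized waits spread reconnection attempts over time instead of synchronizing them into periodic thundering herds.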

You are developing a new mobile game that will be deployed on GKE and Cloud Run as a set of microservices. Currently, there are no projections for the game’s user volume. You need to store the following data types:
- Data type 1: leaderboard data
- Data type 2: player profiles, chats, and news feed
- Data type 3: player clickstream data for BI
You need to identify a data storage solution that is easy to use, cost-effective, scalable, and supports offline caching on the user’s device. Which data storage option should you choose for the different data types?

- Data type 1: Memorystore; Data type 2: Firestore; Data type 3: BigQuery.
- Data type 1: Memorystore; Data type 2: Spanner; Data type 3: Bigtable.
- Data type 1: Firestore; Data type 2: Cloud SQL; Data type 3: BigQuery.
- Data type 1: Firestore; Data type 2: Firestore; Data type 3: BigQuery.

You have a GKE cluster that has three TPU nodes. You are running an ML training job on the cluster, and you observe log entries in Cloud Logging similar to the one below. To identify the root cause of a performance issue, you need to view all stdout logs from the containers running on the TPU nodes. What should you do?

{
  insertId: "gvqk7r5qc5hvogif"
  labels: {
    compute.googleapis.com/resource_name: "gke-tpu-9243ec28-wwf5"
    k8s-pod/batch_kubernetes_io/controller-uid: "443a3128-xxx090"
    k8s-pod/batch_kubernetes_io/job-name: "my-training-job"
    k8s-pod/controller-uid: "443a3128-xxx090"
    k8s-pod/job-name: "my-training-job"
  }
  logName: "projects/gke-tpu-demo-project/logs/stdout"
  receiveTimestamp: "2025-01-26T05:52:39.652122589Z"
  resource: {
    labels: {
      cluster_name: "tpu-test"
      container_name: "tensorflow"
      location: "us-central2-b"
      namespace_name: "default"
      pod_name: "my-training-job-17418"
      project_id: "gke-tpu-demo-project"
    }
    type: "k8s_container"
  }
  severity: "INFO"
  textPayload: "1/938 [..............................] - ETA: 13:36 - loss: 2.3238 - accuracy: 0.0469 ... 937/938 [==============================>.] - ETA: 0s - loss: 0.2184 - accuracy: 0.9349"
  timestamp: "2025-01-26T05:52:38.962950115Z"
}

- Run the following log query in Cloud Logging: resource.type="k8s_container" labels."k8s-pod/batch_kubernetes_io/job-name" = "my-training-job" log_id("stdout")
- Run the following log query in Cloud Logging: resource.type="k8s_container" labels."compute.googleapis.com/resource_name" =~ "gke-tpu-9243ec28.*" log_id("stdout")
- Run the following command in Cloud Shell: gcloud logging read 'resource.type="k8s_node" labels."compute.googleapis.com/resource_name" = "gke-tpu-9243ec28.*" log_id("stdout")'
- Run the following command in Cloud Shell: gcloud logging read 'resource.type="k8s_node" labels."k8s-pod/batch_kubernetes_io/job-name" = "my-training-job" log_id("stdout")'
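The filter in the second option can be assembled programmatically when the node-name prefix varies; a small sketch (the helper name is illustrative, but the filter syntax matches the Logging query language used in the options above):

```python
def tpu_stdout_filter(node_prefix: str) -> str:
    """Builds a Cloud Logging query for container stdout from every pod
    scheduled on nodes whose resource name starts with node_prefix,
    matched via a regex rather than a single job name."""
    return (
        'resource.type="k8s_container"\n'
        f'labels."compute.googleapis.com/resource_name"=~"{node_prefix}.*"\n'
        'log_id("stdout")'
    )
```

Matching on the node resource name captures stdout from all containers on the TPU nodes, whereas filtering on one job name would miss containers belonging to other workloads on those nodes.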

You are building a Workflow to process complex data analytics for your application. You plan to use the Workflow to execute a Cloud Run job while following Google-recommended practices. What should you do?

- Create a Pub/Sub topic, and subscribe the Cloud Run job to the topic.
- Configure an Eventarc trigger to invoke the Cloud Run job, and include the trigger in a step of the Workflow.
- Use the Cloud Run Admin API connector to execute the Cloud Run job within the Workflow.
- Determine the entry point of the Cloud Run job, and send an HTTP request from the Workflow.

You are customizing a VM instance for development and need to choose either SSD or Standard Persistent Disks for your VM. Your team’s cloud architect is not currently available for you to consult. You need to quickly determine the advantages and disadvantages of using SSD or Standard storage. What should you do?

- Use Gemini Cloud Assist, and prompt “What is the difference between SSD and Standard Persistent Disks?”
- Request help on internet forums such as Reddit or Stack Overflow.
- Review the Google Cloud documentation on Persistent Disks.
- Review reference architectures that match your application in the Cloud Architecture Center.

Your team is trying to reduce their cloud spend, and you want to evaluate your GKE Autopilot cluster costs. When reviewing the manifests, you see that resource requests are currently not specified. Your application is stateless and fault-tolerant, and there are no specific hardware or memory requirements on nodes. You want to modify the cluster to be scalable and cost-effective as quickly as possible while maintaining a cluster with sufficient computing resources. What should you do?

- Request that your Pods run as Spot Pods, and use the cloud.google.com/gke-spot=true label in your YAML manifest.
- Request the Balanced compute class in your YAML manifest.
- In the YAML deployment configuration manifest, request and set the maximum CPU to 5 vCPU.
- Set up Cloud Trace and Cloud Monitoring, identify the maximum memory used in the past 30 days, and set the YAML manifest to request that amount of memory.
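The Spot Pods option amounts to a small manifest change: in GKE Autopilot, Spot Pods are requested with a node selector on the Pod spec. A minimal Deployment fragment (the Deployment name is illustrative, and the fragment omits the selector and container spec a full manifest needs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-stateless-app   # illustrative name
spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-spot: "true"
```

Spot Pods suit this scenario because the workload is stateless and fault-tolerant, so it can absorb the preemptions that come with Spot pricing.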

You are developing a custom job scheduler that must have a persistent cache containing entries of all Compute Engine VMs that are in a running state (not deleted, stopped, or suspended). The job scheduler checks this cache and only sends jobs to the available Compute Engine VMs in the cache. You need to ensure that the available Compute Engine instance cache is not stale. What should you do?

- Set up an organization-level Cloud Storage log sink with a filter to capture the audit log events for Compute Engine. Configure an Eventarc trigger that executes when the Cloud Storage bucket is updated and sends these events to the application to update the cache.
- Set up a Cloud Asset Inventory real-time feed of insert and delete events with the asset types filter set to compute.googleapis.com/Instance. Configure an Eventarc trigger that sends these events to the application to update the cache.
- Set up an organization-level Pub/Sub log sink with a filter to capture the audit log events for Compute Engine. Configure an Eventarc trigger that sends these events to the application to update the cache.
- Set up an organization-level BigQuery log sink. Configure the application to query this BigQuery table every minute to retrieve the last minute’s events and update the cache.
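The consumer side of the Cloud Asset Inventory option is an event handler that mutates the cache as insert/delete events arrive. The event shape below is a simplified stand-in for the real feed payload (which delivers the full asset record and marks removals with a deleted flag); the handler logic is the part being illustrated:

```python
def apply_asset_event(cache: set, event: dict) -> None:
    """Applies one simplified asset-feed event to the running-VM cache:
    deletions evict the instance, everything else upserts it."""
    name = event["name"]
    if event.get("deleted"):
        cache.discard(name)
    else:
        cache.add(name)
```

Because the feed pushes changes as they happen, the cache converges on the true set of running instances without the polling lag of the BigQuery-sink option.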

Your company is planning a global event. You need to configure an event registration portal for the event. You have decided to deploy the registration service by using Cloud Run. Your company’s marketing team does not want to advertise the Cloud Run service URL. They want the registration portal to be accessed by using a personalized hostname or path in your custom domain URL pattern, for example, .example.com. How should you configure access to the service while following Google-recommended practices?

- Configure Cloud Armor to block traffic on the Cloud Run service URL and allow reroutes from only the custom domain URL pattern.
- Set up an HAProxy on Compute Engine, and add routing rules for a custom domain to the Cloud Run service URL.
- Add a global external Application Load Balancer in front of the service, and configure a DNS record that points to the load balancer’s IP address.
- Create a CNAME record that points to the Cloud Run service URL.

You are developing a new Python 3 API that needs to be deployed to Cloud Run. Your Cloud Run service sits behind an Apigee proxy. You need to ensure that the Cloud Run service is running with the already deployed Apigee proxy. You want to conduct this testing as quickly as possible. What should you do?

- Store the service code as a zip file in a Cloud Storage bucket. Deploy your application by using the gcloud run deploy --source command, and test the integration by pointing Apigee to Cloud Run.
- Use the Cloud Run emulator to test your application locally. Test the integration by pointing Apigee to your local Cloud Run emulator.
- Build a container image locally, and push the image to Artifact Registry. Deploy the image to Cloud Run, and test the integration by pointing Apigee to Cloud Run.
- Deploy your application directly from the current directory by using the gcloud run deploy --source command, and test the integration by pointing Apigee to Cloud Run.

You are developing an application that needs to connect to a Cloud SQL for PostgreSQL database by using the Cloud SQL Auth Proxy. The Cloud SQL Auth Proxy is hosted in a different Google Cloud VPC network. The Cloud SQL for PostgreSQL instance has public and private IP addresses. You are required to use the private IP for security reasons. When testing the connection to the Cloud SQL instance, you can connect by using the public IP address, but you are unable to connect by using the private IP address. How should you fix this issue?

- Run the Cloud SQL Auth Proxy as a background service.
- Add the --private-ip option when starting the Cloud SQL Auth Proxy.
- Set up VPC Network Peering between your VPC and the VPC where the Cloud SQL instance is deployed.
- Grant yourself the IAM role that provides access to the Cloud SQL instance.

You are tasked with using C++ to build and deploy a microservice for an application hosted on Google Cloud. The code needs to be containerized and use several custom software libraries that your team has built. You want to minimize maintenance of the application’s underlying infrastructure. How should you deploy the microservice?

- Use Cloud Run functions to deploy the microservice.
- Use Cloud Build to create the container, and deploy it on Cloud Run.
- Use Cloud Shell to containerize your microservice, and deploy it on GKE Standard.
- Use Cloud Shell to containerize your microservice, and deploy it on a Container-Optimized OS Compute Engine instance.
