Test title:
GCP Dev Exam Test 1

Description:
Real questions about the GCP Dev exam

Author:
gcptester

Creation date:
11/03/2024

Category:
Computing

Number of questions: 50
Questions:
You are porting an existing Apache/MySQL/PHP application stack from a single machine to Google Kubernetes Engine. You need to determine how to containerize the application. Your approach should follow Google-recommended best practices for availability. What should you do?

- Package each component in a separate container. Implement readiness and liveness probes.
- Package the application in a single container. Use a process management tool to manage each component.
- Package each component in a separate container. Use a script to orchestrate the launch of the components.
- Package the application in a single container. Use a bash script as an entrypoint to the container, and then spawn each component as a background job.
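To make the first option above concrete, here is a minimal sketch of a Kubernetes Deployment manifest for the web tier only, with readiness and liveness probes. The names, image path, and /healthz endpoint are illustrative assumptions, not part of the question; the MySQL component would get its own Deployment (or StatefulSet) and probes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-frontend                # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: php-frontend
  template:
    metadata:
      labels:
        app: php-frontend
    spec:
      containers:
      - name: apache-php
        image: us-docker.pkg.dev/my-project/web/php-frontend:1.0  # hypothetical image
        ports:
        - containerPort: 80
        readinessProbe:             # gates traffic until the app can serve
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
        livenessProbe:              # restarts the container if it hangs
          httpGet:
            path: /healthz
            port: 80
          periodSeconds: 10
```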
You are developing an application that will be launched on Compute Engine instances into multiple distinct projects, each corresponding to the environments in your software development process (development, QA, staging, and production). The instances in each project have the same application code but a different configuration. During deployment, each instance should receive the application's configuration based on the environment it serves. You want to minimize the number of steps to configure this flow. What should you do?

- When creating your instances, configure a startup script using the gcloud command to determine the project name that indicates the correct environment.
- In each project, configure a metadata key "environment" whose value is the environment it serves. Use your deployment tool to query the instance metadata and configure the application based on the "environment" value.
- Deploy your chosen deployment tool on an instance in each project. Use a deployment job to retrieve the appropriate configuration file from your version control system, and apply the configuration when deploying the application on each instance.
- During each instance launch, configure an instance custom-metadata key named "environment" whose value is the environment the instance serves. Use your deployment tool to query the instance metadata, and configure the application based on the "environment" value.
You are developing an ecommerce application that stores customer, order, and inventory data as relational tables inside Cloud Spanner. During a recent load test, you discover that Spanner performance is not scaling linearly as expected. Which of the following is the cause?

- The use of 64-bit numeric types for 32-bit numbers.
- The use of the STRING data type for arbitrary-precision values.
- The use of Version 1 UUIDs as primary keys that increase monotonically.
- The use of LIKE instead of STARTS_WITH keyword for parameterized SQL queries.
You are designing an application that will subscribe to and receive messages from a single Pub/Sub topic and insert corresponding rows into a database. Your application runs on Linux and leverages preemptible virtual machines to reduce costs. You need to create a shutdown script that will initiate a graceful shutdown. What should you do?

- Write a shutdown script that uses inter-process signals to notify the application process to disconnect from the database.
- Write a shutdown script that broadcasts a message to all signed-in users that the Compute Engine instance is going down and instructs them to save current work and sign out.
- Write a shutdown script that writes a file in a location that is being polled by the application once every five minutes. After the file is read, the application disconnects from the database.
- Write a shutdown script that publishes a message to the Pub/Sub topic announcing that a shutdown is in progress. After the application reads the message, it disconnects from the database.
You work for a web development team at a small startup. Your team is developing a Node.js application using Google Cloud services, including Cloud Storage and Cloud Build. The team uses a Git repository for version control. Your manager calls you over the weekend and instructs you to make an emergency update to one of the company's websites, and you're the only developer available. You need to access Google Cloud to make the update, but you don't have your work laptop. You are not allowed to store source code locally on a non-corporate computer. How should you set up your developer environment?

- Use a text editor and the Git command line to send your source code updates as pull requests from a public computer.
- Use a text editor and the Git command line to send your source code updates as pull requests from a virtual machine running on a public computer.
- Use Cloud Shell and the built-in code editor for development. Send your source code updates as pull requests.
- Use a Cloud Storage bucket to store the source code that you need to edit. Mount the bucket to a public computer as a drive, and use a code editor to update the code. Turn on versioning for the bucket, and point it to the team's Git repository.
Your team develops services that run on Google Kubernetes Engine. You need to standardize their log data using Google-recommended practices and make the data more useful in the fewest number of steps. What should you do? (Choose two.)

- Create aggregated exports on application logs to BigQuery to facilitate log analytics.
- Create aggregated exports on application logs to Cloud Storage to facilitate log analytics.
- Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs.
- Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging.
- Mandate the use of the Pub/Sub API to write structured data to Pub/Sub and create a Dataflow streaming pipeline to normalize logs and write them to BigQuery for analytics.
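The stdout option above can be illustrated with a short sketch: on GKE, single-line JSON written to standard output is picked up by the logging agent and ingested into Cloud Logging as a structured log entry, with "severity" and "message" treated as special fields. This helper is a hypothetical illustration; any field names beyond those two are arbitrary.

```python
import json
import sys


def log_structured(message, severity="INFO", **fields):
    """Emit one single-line JSON log record to stdout.

    On GKE, a single-line JSON object on stdout is ingested by
    Cloud Logging as a structured log; "severity" and "message"
    are recognized fields, everything else lands in the payload.
    """
    entry = {"severity": severity, "message": message, **fields}
    line = json.dumps(entry)          # json.dumps never emits newlines by default
    sys.stdout.write(line + "\n")
    return line                       # returned to make the helper easy to test


log_structured("order processed", severity="NOTICE", orderId="1234")
```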
You are designing a deployment technique for your new applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for both new and existing applications. You need to test against the full production load prior to launch. What should you do?

- Use canary deployment.
- Use blue/green deployment.
- Use rolling updates deployment.
- Use A/B testing with traffic mirroring during deployment.
You support an application that uses the Cloud Storage API. You review the logs and discover multiple HTTP 503 Service Unavailable error responses from the API. Your application logs the error and does not take any further action. You want to implement Google-recommended retry logic to improve success rates. Which approach should you take?

- Retry the failures in batch after a set number of failures is logged.
- Retry each failure at a set time interval up to a maximum number of times.
- Retry each failure at increasing time intervals up to a maximum number of tries.
- Retry each failure at decreasing time intervals up to a maximum number of tries.
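To make the "increasing time intervals" option concrete, here is a minimal sketch of truncated exponential backoff with jitter in Python. The function name and parameters are illustrative, not from any Google client library (the official Cloud Storage client libraries implement this strategy for you).

```python
import random
import time


def retry_with_backoff(op, max_tries=5, base=0.5, cap=32.0, sleep=time.sleep):
    """Retry `op` with truncated exponential backoff plus jitter.

    Each failed attempt waits a random duration up to
    min(cap, base * 2**attempt) seconds, then retries; after
    max_tries failures the last exception is re-raised.
    `sleep` is injectable so the waits can be observed in tests.
    """
    for attempt in range(max_tries):
        try:
            return op()
        except Exception:
            if attempt == max_tries - 1:
                raise                                  # give up after the last try
            delay = min(cap, base * (2 ** attempt))    # truncated exponential growth
            sleep(random.uniform(0, delay))            # full jitter
```

A caller would wrap the Cloud Storage request in a zero-argument function and pass it as `op`.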
You need to redesign the ingestion of audit events from your authentication service to allow it to handle a large increase in traffic. Currently, the audit service and the authentication system run in the same Compute Engine virtual machine. You plan to use the following Google Cloud tools in the new architecture:

- Multiple Compute Engine machines, each running an instance of the authentication service
- Multiple Compute Engine machines, each running an instance of the audit service
- Pub/Sub to send the events from the authentication services

How should you set up the topics and subscriptions to ensure that the system can handle a large volume of messages and can scale efficiently?

- Create one Pub/Sub topic. Create one pull subscription to allow the audit services to share the messages.
- Create one Pub/Sub topic. Create one pull subscription per audit service instance to allow the services to share the messages.
- Create one Pub/Sub topic. Create one push subscription with the endpoint pointing to a load balancer in front of the audit services.
- Create one Pub/Sub topic per authentication service. Create one pull subscription per topic to be used by one audit service.
- Create one Pub/Sub topic per authentication service. Create one push subscription per topic, with the endpoint pointing to one audit service.
You are a SaaS provider deploying dedicated blogging software to customers in your Google Kubernetes Engine (GKE) cluster. You want to configure a secure multi-tenant platform to ensure that each customer has access to only their own blog and can't affect the workloads of other customers. What should you do?

- Enable Application-layer Secrets on the GKE cluster to protect the cluster.
- Deploy a namespace per tenant and use Network Policies in each blog deployment.
- Use GKE Audit Logging to identify malicious containers and delete them on discovery.
- Build a custom image of the blogging software and use Binary Authorization to prevent untrusted image deployments.
You are developing an internal application that will allow employees to organize community events within your company. You deployed your application on a single Compute Engine instance. Your company uses Google Workspace (formerly G Suite), and you need to ensure that the company employees can authenticate to the application from anywhere. What should you do?

- Add a public IP address to your instance, and restrict access to the instance using firewall rules. Allow your company's proxy as the only source IP address.
- Add an HTTP(S) load balancer in front of the instance, and set up Identity-Aware Proxy (IAP). Configure the IAP settings to allow your company domain to access the website.
- Set up a VPN tunnel between your company network and your instance's VPC location on Google Cloud. Configure the required firewall rules and routing information to both the on-premises and Google Cloud networks.
- Add a public IP address to your instance, and allow traffic from the internet. Generate a random hash, and create a subdomain that includes this hash and points to your instance. Distribute this DNS address to your company's employees.
You want to create `fully baked` or `golden` Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do?

- Embed the appropriate database connection string in the image. Create a different image for each environment.
- When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string.
- When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, read the "DATABASE" environment variable, and use the value to connect to the appropriate database.
- When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, query the metadata server for the "DATABASE" value, and use the value to connect to the appropriate database.
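The metadata-server option can be sketched as follows. A Compute Engine guest reads custom metadata over plain HTTP from metadata.google.internal, and every request must carry the `Metadata-Flavor: Google` header or the server rejects it. The helper names below are illustrative; the URL and header are the documented metadata-server interface.

```python
import urllib.request

# Documented endpoint for instance-level custom metadata attributes.
METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/attributes/{key}")


def metadata_request(key):
    """Build the HTTP request for one custom metadata key."""
    return urllib.request.Request(
        METADATA_URL.format(key=key),
        # Required on every metadata-server request.
        headers={"Metadata-Flavor": "Google"},
    )


def read_metadata(key):
    """Return the value of a custom metadata key, e.g. 'DATABASE'.

    Only works from inside a Compute Engine instance, where
    metadata.google.internal resolves to the metadata server.
    """
    with urllib.request.urlopen(metadata_request(key), timeout=5) as resp:
        return resp.read().decode("utf-8")
```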
You recently migrated a monolithic application to Google Cloud by breaking it down into microservices. One of the microservices is deployed using Cloud Functions. As you modernize the application, you make a change to the API of the service that is backward-incompatible. You need to support both existing callers who use the original API and new callers who use the new API. What should you do?

- Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use a load balancer to distribute calls between the versions.
- Leave the original Cloud Function as-is and deploy a second Cloud Function that includes only the changed API. Calls are automatically routed to the correct function.
- Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use Cloud Endpoints to provide an API gateway that exposes a versioned API.
- Re-deploy the Cloud Function after making code changes to support the new API. Requests for both versions of the API are fulfilled based on a version identifier included in the call.
You are a developer working with the CI/CD team to troubleshoot a new feature that your team introduced. The CI/CD team used HashiCorp Packer to create a new Compute Engine image from your development branch. The image was successfully built, but is not booting up. You need to investigate the issue with the CI/CD team. What should you do?

- Create a new feature branch, and ask the build team to rebuild the image.
- Shut down the deployed virtual machine, export the disk, and then mount the disk locally to access the boot logs.
- Install Packer locally, build the Compute Engine image locally, and then run it in your personal Google Cloud project.
- Check Compute Engine OS logs using the serial port, and check the Cloud Logging logs to confirm access to the serial port.
Your development team has been asked to refactor an existing monolithic application into a set of composable microservices. Which design aspects should you implement for the new application? (Choose two.)

- Develop the microservice code in the same programming language used by the microservice caller.
- Create an API contract agreement between the microservice implementation and microservice caller.
- Require asynchronous communications between all microservice implementations and microservice callers.
- Ensure that sufficient instances of the microservice are running to accommodate the performance requirements.
- Implement a versioning scheme to permit future changes that could be incompatible with the current interface.
You deployed a new application to Google Kubernetes Engine and are experiencing some performance degradation. Your logs are being written to Cloud Logging, and you are using a Prometheus sidecar model for capturing metrics. You need to correlate the metrics and data from the logs to troubleshoot the performance issue and send real-time alerts while minimizing costs. What should you do?

- Create custom metrics from the Cloud Logging logs, and use Prometheus to import the results using the Cloud Monitoring REST API.
- Export the Cloud Logging logs and the Prometheus metrics to Cloud Bigtable. Run a query to join the results, and analyze in Google Data Studio.
- Export the Cloud Logging logs and stream the Prometheus metrics to BigQuery. Run a recurring query to join the results, and send notifications using Cloud Tasks.
- Export the Prometheus metrics and use Cloud Monitoring to view them as external metrics. Configure Cloud Monitoring to create log-based metrics from the logs, and correlate them with the Prometheus data.
You have been tasked with planning the migration of your company's application from on-premises to Google Cloud. Your company's monolithic application is an ecommerce website. The application will be migrated to microservices deployed on Google Cloud in stages. The majority of your company's revenue is generated through online sales, so it is important to minimize risk during the migration. You need to prioritize features and select the first functionality to migrate. What should you do?

- Migrate the Product catalog, which has integrations to the frontend and product database.
- Migrate Payment processing, which has integrations to the frontend, order database, and third-party payment vendor.
- Migrate Order fulfillment, which has integrations to the order database, inventory system, and third-party shipping vendor.
- Migrate the Shopping cart, which has integrations to the frontend, cart database, inventory system, and payment processing system.
Your team develops services that run on Google Kubernetes Engine. Your team's code is stored in Cloud Source Repositories. You need to quickly identify bugs in the code before it is deployed to production. You want to invest in automation to improve developer feedback and make the process as efficient as possible. What should you do?

- Use Spinnaker to automate building container images from code based on Git tags.
- Use Cloud Build to automate building container images from code based on Git tags.
- Use Spinnaker to automate deploying container images to the production environment.
- Use Cloud Build to automate building container images from code based on forked versions.
Your team is developing an application in Google Cloud that executes with user identities maintained by Cloud Identity. Each of your application's users will have an associated Pub/Sub topic to which messages are published, and a Pub/Sub subscription where the same user will retrieve published messages. You need to ensure that only authorized users can publish and subscribe to their own specific Pub/Sub topic and subscription. What should you do?

- Bind the user identity to the pubsub.publisher and pubsub.subscriber roles at the resource level.
- Grant the user identity the pubsub.publisher and pubsub.subscriber roles at the project level.
- Grant the user identity a custom role that contains the pubsub.topics.create and pubsub.subscriptions.create permissions.
- Configure the application to run as a service account that has the pubsub.publisher and pubsub.subscriber roles.
Your web application is deployed to the corporate intranet. You need to migrate the web application to Google Cloud. The web application must be available only to company employees and accessible to employees as they travel. You need to ensure the security and accessibility of the web application while minimizing application changes. What should you do?

- Configure the application to check authentication credentials for each HTTP(S) request to the application.
- Configure Identity-Aware Proxy to allow employees to access the application through its public IP address.
- Configure a Compute Engine instance that requests users to log in to their corporate account. Change the web application DNS to point to the proxy Compute Engine instance. After authenticating, the Compute Engine instance forwards requests to and from the web application.
- Configure a Compute Engine instance that requests users to log in to their corporate account. Change the web application DNS to point to the proxy Compute Engine instance. After authenticating, the Compute Engine instance issues an HTTP redirect to a public IP address hosting the web application.
You migrated some of your applications to Google Cloud. You are using a legacy monitoring platform deployed on-premises for both on-premises and cloud-deployed applications. You discover that your notification system is responding slowly to time-critical problems in the cloud applications. What should you do?

- Replace your monitoring platform with Cloud Monitoring.
- Install the Cloud Monitoring agent on your Compute Engine instances.
- Migrate some traffic back to your old platform. Perform A/B testing on the two platforms concurrently.
- Use Cloud Logging and Cloud Monitoring to capture logs, monitor, and send alerts. Send them to your existing platform.
You recently deployed your application in Google Kubernetes Engine, and now need to release a new version of your application. You need the ability to instantly roll back to the previous version in case there are issues with the new version. Which deployment model should you use?

- Perform a rolling deployment, and test your new application after the deployment is complete.
- Perform A/B testing, and test your application periodically after the new tests are implemented.
- Perform a blue/green deployment, and test your new application after the deployment is complete.
- Perform a canary deployment, and test your new application periodically after the new version is deployed.
You developed a JavaScript web application that needs to access Google Drive's API and obtain permission from users to store files in their Google Drives. You need to select an authorization approach for your application. What should you do?

- Create an API key.
- Create a SAML token.
- Create a service account.
- Create an OAuth Client ID.
You manage an ecommerce application that processes purchases from customers who can subsequently cancel or change those purchases. You discover that order volumes are highly variable and the backend order-processing system can only process one request at a time. You want to ensure seamless performance for customers regardless of usage volume. It is crucial that customers' order update requests are performed in the sequence in which they were generated. What should you do?

- Send the purchase and change requests over WebSockets to the backend.
- Send the purchase and change requests as REST requests to the backend.
- Use a Pub/Sub subscriber in pull mode and use a data store to manage ordering.
- Use a Pub/Sub subscriber in push mode and use a data store to manage ordering.
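The "pull subscriber plus a data store to manage ordering" idea can be sketched in plain Python. Here an in-memory dict stands in for the data store that tracks the next expected sequence number per order; a real implementation would persist that state and feed `handle` from a Pub/Sub pull subscription, where messages may arrive out of order.

```python
from collections import defaultdict


class OrderedProcessor:
    """Buffer out-of-order messages and apply them in sequence per order ID."""

    def __init__(self):
        # Stand-ins for data store state: next expected sequence number
        # and not-yet-applicable messages, both keyed by order ID.
        self.next_seq = defaultdict(int)
        self.pending = defaultdict(dict)
        self.applied = []

    def handle(self, order_id, seq, payload):
        """Called once per pulled message; `seq` is assigned by the producer."""
        self.pending[order_id][seq] = payload
        # Drain every message that is now in sequence for this order.
        while self.next_seq[order_id] in self.pending[order_id]:
            seq_no = self.next_seq[order_id]
            self.applied.append((order_id, self.pending[order_id].pop(seq_no)))
            self.next_seq[order_id] += 1
```

Even if the "change" message for an order is delivered before its "purchase" message, the change is held back until the purchase has been applied.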
Your company needs a database solution that stores customer purchase history and meets the following requirements:

- Customers can query their purchase immediately after submission.
- Purchases can be sorted on a variety of fields.
- Distinct record formats can be stored at the same time.

Which storage option satisfies these requirements?

- Firestore in Native mode
- Cloud Storage using an object read
- Cloud SQL using a SQL SELECT statement
- Firestore in Datastore mode using a global query
You recently developed a new service on Cloud Run. The new service authenticates using a custom service and then writes transactional information to a Cloud Spanner database. You need to verify that your application can support up to 5,000 read and 1,000 write transactions per second while identifying any bottlenecks that occur. Your test infrastructure must be able to autoscale. What should you do?

- Build a test harness to generate requests and deploy it to Cloud Run. Analyze the VPC Flow Logs using Cloud Logging.
- Create a Google Kubernetes Engine cluster running the Locust or JMeter images to dynamically generate load tests. Analyze the results using Cloud Trace.
- Create a Cloud Task to generate a test load. Use Cloud Scheduler to run 60,000 Cloud Task transactions per minute for 10 minutes. Analyze the results using Cloud Monitoring.
- Create a Compute Engine instance that uses a LAMP stack image from the Marketplace, and use Apache Bench to generate load tests against the service. Analyze the results using Cloud Trace.
You are using Cloud Build for your CI/CD pipeline to complete several tasks, including copying certain files to Compute Engine virtual machines. Your pipeline requires a flat file that is generated in one builder in the pipeline to be accessible by subsequent builders in the same pipeline. How should you store the file so that all the builders in the pipeline can access it?

- Store and retrieve the file contents using Compute Engine instance metadata.
- Output the file contents to a file in /workspace. Read from the same /workspace file in the subsequent build step.
- Use gsutil to output the file contents to a Cloud Storage object. Read from the same object in the subsequent build step.
- Add a build argument that runs an HTTP POST via curl to a separate web server to persist the value in one builder. Use an HTTP GET via curl from the subsequent build step to read the value.
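The /workspace option can be illustrated with a hypothetical cloudbuild.yaml: every build step mounts the same /workspace volume, so a file written by one step is visible to the next. The builder images and commands below are illustrative.

```yaml
steps:
# Step 1 writes the generated file into /workspace, a volume
# shared by every step in the same build.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'echo "generated-value" > /workspace/build.env']
# Step 2 runs in a different builder container but sees the same file.
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'cat /workspace/build.env']
```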
Your company’s development teams want to use various open source operating systems in their Docker builds. When images are created in published containers in your company’s environment, you need to scan them for Common Vulnerabilities and Exposures (CVEs). The scanning process must not impact software development agility. You want to use managed services where possible. What should you do?

- Enable the Vulnerability scanning setting in the Container Registry.
- Create a Cloud Function that is triggered on a code check-in and scan the code for CVEs.
- Disallow the use of non-commercially supported base images in your development environment.
- Use Cloud Monitoring to review the output of Cloud Build to determine whether a vulnerable version has been used.
Your operations team has asked you to create a script that lists the Cloud Bigtable, Memorystore, and Cloud SQL databases running within a project. The script should allow users to submit a filter expression to limit the results presented. How should you retrieve the data?

- Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Combine the results, and then apply the filter to display the results.
- Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Filter the results individually, and then combine them to display the results.
- Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use a filter within the application, and then display the results.
- Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use the --filter flag with each command, and then display the results.
You need to deploy a new European version of a website hosted on Google Kubernetes Engine. The current and new websites must be accessed via the same HTTP(S) load balancer's external IP address, but have different domain names. What should you do?

- Define a new Ingress resource with a host rule matching the new domain.
- Modify the existing Ingress resource with a host rule matching the new domain.
- Create a new Service of type LoadBalancer specifying the existing IP address as the loadBalancerIP.
- Generate a new Ingress resource and specify the existing IP address as the kubernetes.io/ingress.global-static-ip-name annotation value.
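Modifying an existing Ingress with an extra host rule might look like the following sketch; the domains and Service names are invented for illustration. Both hosts are served from the same load balancer and external IP because they live in one Ingress resource.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: example.com              # existing site (hypothetical domain)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc          # hypothetical Service name
            port:
              number: 80
  - host: eu.example.com           # new European site (hypothetical domain)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-eu-svc       # hypothetical Service name
            port:
              number: 80
```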
The development teams in your company want to manage resources from their local environments. You have been asked to enable developer access to each team’s Google Cloud projects. You want to maximize efficiency while following Google-recommended best practices. What should you do?

- Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project ID.
- Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project Number.
- Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project ID.
- Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project Number.
Your application is composed of a set of loosely coupled services orchestrated by code executed on Compute Engine. You want your application to easily bring up new Compute Engine instances that find and use a specific version of a service. How should this be configured?

- Define your service endpoint information as metadata that is retrieved at runtime and used to connect to the desired service.
- Define your service endpoint information as label data that is retrieved at runtime and used to connect to the desired service.
- Define your service endpoint information to be retrieved from an environment variable at runtime and used to connect to the desired service.
- Define your service to use a fixed hostname and port to connect to the desired service. Replace the service at the endpoint with your new version.
You are developing a microservice-based application that will run on Google Kubernetes Engine (GKE). Some of the services need to access different Google Cloud APIs. How should you set up authentication of these services in the cluster following Google-recommended best practices? (Choose two.)

- Use the service account attached to the GKE node.
- Enable Workload Identity in the cluster via the gcloud command-line tool.
- Access the Google service account keys from a secret management service.
- Store the Google service account keys in a central secret management service.
- Use gcloud to bind the Kubernetes service account and the Google service account using roles/iam.workloadIdentity.
The new version of your containerized application has been tested and is ready to deploy to production on Google Kubernetes Engine. You were not able to fully load-test the new version in pre-production environments, and you need to make sure that it does not have performance problems once deployed. Your deployment must be automated. What should you do?

- Use Cloud Load Balancing to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues.
- Deploy the application via a continuous delivery pipeline using canary deployments. Use Cloud Monitoring to look for performance issues, and ramp up traffic as the metrics support it.
- Deploy the application via a continuous delivery pipeline using blue/green deployments. Use Cloud Monitoring to look for performance issues, and launch fully when the metrics support it.
- Deploy the application using kubectl and set spec.updateStrategy.type to RollingUpdate. Use Cloud Monitoring to look for performance issues, and run the kubectl rollback command if there are any issues.
Users are complaining that your Cloud Run-hosted website responds too slowly during traffic spikes. You want to provide a better user experience during traffic peaks. What should you do?

- Read application configuration and static data from the database on application startup.
- Package application configuration and static data into the application image during build time.
- Perform as much work as possible in the background after the response has been returned to the user.
- Ensure that timeout exceptions and errors cause the Cloud Run instance to exit quickly so a replacement instance can be started.
You are a developer working on an internal application for payroll processing. You are building a component of the application that allows an employee to submit a timesheet, which then initiates several steps:

- An email is sent to the employee and manager, notifying them that the timesheet was submitted.
- A timesheet is sent to payroll processing via the vendor's API.
- A timesheet is sent to the data warehouse for headcount planning.

These steps are not dependent on each other and can be completed in any order. New steps are being considered and will be implemented by different development teams. Each development team will implement the error handling specific to their step. What should you do?

- Deploy a Cloud Function for each step that calls the corresponding downstream system to complete the required action.
- Create a Pub/Sub topic for each step. Create a subscription for each downstream development team to subscribe to their step's topic.
- Create a Pub/Sub topic for timesheet submissions. Create a subscription for each downstream development team to subscribe to the topic.
- Create a timesheet microservice deployed to Google Kubernetes Engine. The microservice calls each downstream step and waits for a successful response before calling the next step.
The architectural diagram (not reproduced in this export) depicts a system that streams data from thousands of devices. You want to ingest data into a pipeline, store the data, and analyze the data using SQL statements. Which Google Cloud services should you use for steps 1, 2, 3, and 4?

- 1. App Engine; 2. Pub/Sub; 3. BigQuery; 4. Firestore
- 1. Dataflow; 2. Pub/Sub; 3. Firestore; 4. BigQuery
- 1. Pub/Sub; 2. Dataflow; 3. BigQuery; 4. Firestore
- 1. Pub/Sub; 2. Dataflow; 3. Firestore; 4. BigQuery
Your company just experienced a Google Kubernetes Engine (GKE) API outage due to a zone failure. You want to deploy a highly available GKE architecture that minimizes service interruption to users in the event of a future zone failure. What should you do?

- Deploy Zonal clusters.
- Deploy Regional clusters.
- Deploy Multi-Zone clusters.
- Deploy GKE on-premises clusters.
Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once to avoid duplication of data and any data conflicts. You need to use the cheapest and simplest solution. What should you do?
- Process the messages with a Dataproc job, and write the output to storage.
- Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage.
- Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data.
- Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.
You are running a containerized application on Google Kubernetes Engine. Your container images are stored in Container Registry. Your team uses CI/CD practices. You need to prevent the deployment of containers with known critical vulnerabilities. What should you do?
- Use Web Security Scanner to automatically crawl your application. Review your application logs for scan results, and provide an attestation that the container is free of known critical vulnerabilities. Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.
- Use Web Security Scanner to automatically crawl your application. Review the scan results in the scan details page in the Cloud Console, and provide an attestation that the container is free of known critical vulnerabilities. Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.
- Enable the Container Scanning API to perform vulnerability scanning. Review vulnerability reporting in Container Registry in the Cloud Console, and provide an attestation that the container is free of known critical vulnerabilities. Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.
- Enable the Container Scanning API to perform vulnerability scanning. Programmatically review vulnerability reporting through the Container Scanning API, and provide an attestation that the container is free of known critical vulnerabilities. Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.
You have an on-premises application that authenticates to the Cloud Storage API using a user-managed service account with a user-managed key. The application connects to Cloud Storage using Private Google Access over a Dedicated Interconnect link. You discover that requests from the application to access objects in the Cloud Storage bucket are failing with a 403 Permission Denied error code. What is the likely cause of this issue?
- The folder structure inside the bucket and object paths have changed.
- The permissions of the service account's predefined role have changed.
- The service account key has been rotated but not updated on the application server.
- The Interconnect link from the on-premises data center to Google Cloud is experiencing a temporary outage.
You are using the Cloud Client Library to upload an image in your application to Cloud Storage. Users of the application report that occasionally the upload does not complete and the client library reports an HTTP 504 Gateway Timeout error. You want to make the application more resilient to errors. What changes to the application should you make?
- Write an exponential backoff process around the client library call.
- Write a one-second wait time backoff process around the client library call.
- Design a retry button in the application and ask users to click if the error occurs.
- Create a queue for the object and inform the users that the application will try again in 10 minutes.
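Exponential backoff can be wrapped around any client-library call. A minimal sketch, assuming the call raises an exception on a transient 5xx response; note the real Cloud Client Libraries also ship built-in retry helpers, so this is illustrative rather than the library's own API.

```python
import random
import time


def call_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Retry fn() with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # 1s, 2s, 4s, ... capped at max_delay, plus random jitter
            delay = min(base_delay * 2 ** attempt, max_delay)
            time.sleep(delay + random.uniform(0, delay))
```

In practice you would catch only retryable errors (such as 503/504) rather than every exception, and invoke it as, for example, `call_with_backoff(lambda: blob.upload_from_filename(path))`.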
You are building a mobile application that will store hierarchical data structures in a database. The application will enable users working offline to sync changes when they are back online. A backend service will enrich the data in the database using a service account. The application is expected to be very popular and needs to scale seamlessly and securely. Which database and IAM role should you use?
- Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account.
- Use Bigtable, and assign the roles/bigtable.viewer role to the service account.
- Use Firestore in Native mode and assign the roles/datastore.user role to the service account.
- Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the service account.
Your application is deployed on hundreds of Compute Engine instances in a managed instance group (MIG) in multiple zones. You need to deploy a new instance template to fix a critical vulnerability immediately but must avoid impact to your service. What setting should be made to the MIG after updating the instance template?
- Set the Max Surge to 100%.
- Set the Update mode to Opportunistic.
- Set the Maximum Unavailable to 100%.
- Set the Minimum Wait time to 0 seconds.
You made a typo in a low-level Linux configuration file that prevents your Compute Engine instance from booting to a normal run level. You just created the Compute Engine instance today and have done no other maintenance on it, other than tweaking files. How should you correct this error?
- Download the file using scp, change the file, and then upload the modified version.
- Configure and log in to the Compute Engine instance through SSH, and change the file.
- Configure and log in to the Compute Engine instance through the serial port, and change the file.
- Configure and log in to the Compute Engine instance using a remote desktop client, and change the file.
You are developing an application that needs to store files belonging to users in Cloud Storage. You want each user to have their own subdirectory in Cloud Storage. When a new user is created, the corresponding empty subdirectory should also be created. What should you do?
- Create an object with the name of the subdirectory ending with a trailing slash ('/') that is zero bytes in length.
- Create an object with the name of the subdirectory, and then immediately delete the object within that subdirectory.
- Create an object with the name of the subdirectory that is zero bytes in length and has WRITER access control list permission.
- Create an object with the name of the subdirectory that is zero bytes in length. Set the Content-Type metadata to CLOUDSTORAGE_FOLDER.
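Cloud Storage has no real directories; a zero-byte object whose name ends in '/' is what the console renders as an empty folder. A sketch of the trailing-slash convention — the name-building helper is the locally testable part, while the commented upload uses the google-cloud-storage library with a placeholder bucket name and requires credentials.

```python
def folder_placeholder_name(user_id: str) -> str:
    """Object name for an empty 'subdirectory': trailing slash, zero bytes."""
    return f"{user_id.strip('/')}/"


# Hypothetical upload (requires google-cloud-storage and credentials):
# from google.cloud import storage
# bucket = storage.Client().bucket("my-user-files")  # placeholder bucket
# bucket.blob(folder_placeholder_name("user-123")).upload_from_string(b"")
```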
Your company's corporate policy states that there must be a copyright comment at the very beginning of all source files. You want to write a custom step in Cloud Build that is triggered by each source commit. You need the trigger to validate that the source contains a copyright comment and, if it is missing, add one for subsequent steps. What should you do?
- Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files are explicitly committed back to the source repository.
- Build a new Docker container that examines the files in /workspace and then checks and adds a copyright for each source file. Changed files do not need to be committed back to the source repository.
- Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file. Changed files are written back to the Cloud Storage bucket.
- Build a new Docker container that examines the files in a Cloud Storage bucket and then checks and adds a copyright for each source file. Changed files are explicitly committed back to the source repository.
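Cloud Build checks out the commit into /workspace, a volume shared by every step's container, so a custom step can edit files in place and later steps see the changes without committing anything back to the repository. A minimal sketch of such a pipeline, assuming a hypothetical `copyright-check` builder image you have built and pushed yourself.

```yaml
steps:
# Hypothetical custom builder: scans /workspace and prepends a
# copyright comment to any source file that is missing one.
- name: gcr.io/$PROJECT_ID/copyright-check
  args: ['/workspace']
# Subsequent steps share /workspace, so they build the fixed files.
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']
```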
One of your deployed applications in Google Kubernetes Engine (GKE) is having intermittent performance issues. Your team uses a third-party logging solution. You want to install this solution on each node in your GKE cluster so you can view the logs. What should you do?
- Deploy the third-party solution as a DaemonSet.
- Modify your container image to include the monitoring software.
- Use SSH to connect to the GKE node, and install the software manually.
- Deploy the third-party solution using Terraform and deploy the logging Pod as a Kubernetes Deployment.
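A DaemonSet schedules exactly one Pod on every node (including nodes added later by autoscaling), which is why it fits a per-node log collector. A minimal sketch; the image name, namespace, and mount path are placeholders for whatever the third-party vendor documents.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent            # placeholder name
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: example.com/vendor/log-agent:1.0   # placeholder image
        volumeMounts:
        - name: varlog
          mountPath: /var/log        # read node and container logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```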
You are in the final stage of migrating an on-premises data center to Google Cloud. You are quickly approaching your deadline, and discover that a web API is running on a server slated for decommissioning. You need to recommend a solution to modernize this API while migrating to Google Cloud. The modernized web API must meet the following requirements:
• Autoscales during high traffic periods at the end of each month
• Written in Python 3.x
• Developers must be able to rapidly deploy new versions in response to frequent code changes
You want to minimize cost, effort, and operational overhead of this migration. What should you do?
- Modernize and deploy the code on App Engine flexible environment.
- Modernize and deploy the code on App Engine standard environment.
- Deploy the modernized application to an n1-standard-1 Compute Engine instance.
- Ask the development team to rewrite the application to run as a Docker container on Google Kubernetes Engine.
You are developing an application that consists of several microservices running in a Google Kubernetes Engine cluster. One microservice needs to connect to a third-party database running on-premises. You need to store credentials to the database and ensure that these credentials can be rotated while following security best practices. What should you do?
- Store the credentials in a sidecar container proxy, and use it to connect to the third-party database.
- Configure a service mesh to allow or restrict traffic from the Pods in your microservice to the database.
- Store the credentials in an encrypted volume mount, and associate a Persistent Volume Claim with the client Pod.
- Store the credentials as a Kubernetes Secret, and use the Cloud Key Management Service plugin to handle encryption and decryption.
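The Secret option keeps credentials out of container images and lets you rotate them by updating a single object. A minimal sketch of the Secret itself, with placeholder values; envelope encryption of Secrets with Cloud KMS is configured at the cluster level (application-layer secrets encryption), not in this manifest.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: thirdparty-db-credentials   # placeholder name
type: Opaque
stringData:                   # written as plain text, stored base64-encoded
  username: db-user           # placeholder value
  password: change-me         # placeholder value; rotate by updating the Secret
```

Pods consume the Secret as environment variables or a volume mount; volume-mounted Secrets are refreshed automatically after rotation, without rebuilding the image.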