GCP
Test Title: GCP. Description: GCP Test.




Q1. For this question, refer to the Dress4Win case study (https://cloud.google.com/certification/guides/cloud-architect/casestudy-dress4win). You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority. Which cloud services should you choose?

- Google Bigtable with US or EU as the location to store the data, and gcloud to access the data.
- BigQuery to store the data, and a web server cluster in a managed instance group to access the data.
- Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.
- Google Cloud Storage Nearline to store the data, and gsutil to access the data.
- Google Cloud Storage Coldline to store the data, and gsutil to access the data.

Q2. For this question, refer to the Mountkirk Games case study (https://cloud.google.com/certification/guides/cloud-architect/casestudy-mountkirkgames). Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?

- Create a scalable environment in GCP for simulating production load.
- Build stress tests into each component of your application, using resources internal to GCP to simulate load.
- Use the existing infrastructure to test the GCP-based backend at scale.
- Create a set of static environments in GCP to test different levels of load, for example high, medium, and low.

Q3. You have been asked to select the storage system for the click data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose?

- Google Cloud Datastore.
- Google Cloud SQL.
- Google Cloud Bigtable.
- Google Cloud Storage.

Q4. Over time, you have created 5 snapshots of a single instance. To save space, you delete snapshots 3 and 4. What has happened to the fifth snapshot?

- The data from snapshots 3 and 4 necessary for continuance is transferred to snapshot 5.
- It is no longer usable and cannot restore data.
- All later snapshots, including 5, are automatically deleted as well.
- The data from snapshot 4 necessary for continuance was transferred to snapshot 5, while snapshot 3's contents were transferred to snapshot 2.

Q5. A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services. You want to know which service takes the longest in those cases. What should you do?

- Set timeouts on your application so that you can fail requests faster.
- Instrument your application with Stackdriver Trace to break down the request latencies at each microservice.
- Send custom metrics for each of your requests to Stackdriver Monitoring.
- Use Stackdriver Monitoring to look for insights that show when your API latencies are high.

Q6. One of your clients is using customer-managed encryption. Which of the following statements are true when you apply a customer-managed encryption key to an object? (Select any 3.)

- The encryption key is used to encrypt the object's data.
- The encryption key is used to encrypt the object's CRC32C checksum.
- The encryption key is used to encrypt the object's name.
- The encryption key is used to encrypt the object's MD5 hash.
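As a practical illustration of Q6: with both customer-supplied and customer-managed keys, Cloud Storage uses the key to encrypt the object's data, CRC32C checksum, and MD5 hash, but not the object's name. A minimal sketch of setting a customer-managed default key on a bucket, assuming a hypothetical project, key ring, key, and bucket:

```bash
# Hypothetical names throughout; substitute your own project, key ring, key, and bucket.
# Allow the Cloud Storage service agent to use the Cloud KMS key.
gsutil kms authorize -p my-project \
    -k projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key

# Set the key as the bucket's default encryption key; new objects (data,
# CRC32C checksum, and MD5 hash, but not the object name) are encrypted with it.
gsutil kms encryption \
    -k projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key \
    gs://my-sensitive-bucket
```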
Q7. For this question, refer to the TerramEarth case study (https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth). TerramEarth's 20 million vehicles are scattered around the world. Based on a vehicle's location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run this job on all the data. What is the most cost-effective way to run this job?

- Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.
- Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
- Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.
- Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.

Q8. Your company just finished a rapid lift-and-shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more cloud-native solution. The business team is looking for services with less operational responsibility and easier manageability. Select the ordering of services from less responsibility to more responsibility.

- Google Container Engine with containers > Google App Engine Standard Environment > Cloud Functions > Compute Engine with containers > Compute Engine.
- Cloud Functions > Google App Engine Standard Environment > Google Container Engine with containers > Compute Engine with containers > Compute Engine.
- Cloud Functions > Google Container Engine with containers > Google App Engine Standard Environment > Compute Engine > Compute Engine with containers.
- Google App Engine Standard Environment > Cloud Functions > Compute Engine with containers > Google Container Engine with containers > Compute Engine.

Q9. Your company's test suite is a custom C++ application that runs tests throughout each day on Linux virtual machines. The full test suite takes several hours to complete, running on a limited number of on-premises servers reserved for testing. Your company wants to move the testing infrastructure to the cloud to reduce the amount of time it takes to fully test a change to the system, while changing the tests as little as possible. Which cloud infrastructure should you recommend?

- Google Cloud Dataproc to run Apache Hadoop jobs to process each test.
- Google App Engine with Google Stackdriver for logging.
- Google Compute Engine managed instance groups with autoscaling.
- Google Compute Engine unmanaged instance groups and Network Load Balancer.
Q10. For this question, refer to the TerramEarth case study (https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth). Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal? (Select one.)

- Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.
- Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.
- Capture all operating data, train machine learning models that identify ideal operations, and host the models on Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.
- Capture all operating data, train machine learning models that identify ideal operations, and run the models locally to make operational adjustments automatically.

Q11. You need to regularly create disk-level backups of the root disk of a critical instance. These backups need to be able to be converted into new instances that can be used in different projects. How should you do this? (Select 2 possible ways.)

- Create snapshots, turn a snapshot into a custom image, and share the image across projects.
- Use the VM migration tools in Compute Engine to copy a VM to a different project.
- Create snapshots and share them with other projects.
- Stream your VM's data into Cloud Storage and share the exported data in the storage bucket with another project.

Q12. Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4 TB, and large updates are frequent. Replication requires RFC 1918 private address space. Which networking approach would be the best choice?

- Create two VPN tunnels within the same Cloud VPN gateway to the same destination VPN gateway.
- Direct Peering.
- Google Cloud Dedicated Interconnect or Google Cloud Partner Interconnect.
- Google Cloud VPN connected to the data center network.

Q13. One of your clients is storing highly sensitive data on Google Cloud Storage. They strictly adhere to their compliance requirements and do not want their keys stored in the cloud. Suggest the right choice of encryption.

- You provide a raw customer-supplied encryption key (CSEK) as part of an API call.
- All objects on Google Cloud Storage are encrypted by default, so additional encryption is not required.
- Give your Cloud Storage service account access to an encryption key; that service account encrypts the data.
- Google recommends using Cloud KMS for storing CMEKs.

Q14. You are using Dataflow to ingest a large amount of data, and later you send the data to BigQuery for analysis, but you realize the data is dirty. What would be the best choice to clean the data in stream with a serverless approach?

- Fetch the data from BigQuery, clean it with Dataprep, and send it back to BigQuery.
- Fetch the data from BigQuery, create one more pipeline, clean the data with Dataflow, and send it back to BigQuery.
- Fetch the data from BigQuery, clean it with Dataproc, and send it back to BigQuery.
- Send the data to Cloud Storage and use Bigtable to clean the data.

Q15. You have a long-running job that one of your employees has permission to start. You don't want that job to be terminated when the employee who last started it leaves the company. What would be the best way to address this concern?

- Create many IAM users and give them the permission.
- Create a service account. Grant the Service Account User permission to the employees who need to start the job. Also, give the "Compute Instance Admin" permission to that service account.
- Give full permissions to the service account and give the employee permission to access this service account.
- Use Google-managed service accounts in this scenario.
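For the long-running-job question (Q15), the service-account option might look like the following sketch, with hypothetical project, account, and user names:

```bash
# Create a dedicated service account to own the job (hypothetical names).
gcloud iam service-accounts create job-runner \
    --display-name="Long-running job runner"

# Let the service account manage the instances that run the job.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:job-runner@my-project.iam.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1"

# Let an employee act as the service account to start the job; revoking this
# binding when they leave does not terminate the job itself.
gcloud iam service-accounts add-iam-policy-binding \
    job-runner@my-project.iam.gserviceaccount.com \
    --member="user:employee@example.com" \
    --role="roles/iam.serviceAccountUser"
```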
Q16. You have created several preemptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted. What should you do? (A combined sketch for this and Q21 appears after Q21.)

- Create a shutdown script registered as a xinetd service in Linux, and configure a Stackdriver endpoint check to call the service.
- Create a shutdown script registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url.
- Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory.
- Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance.

Q17. You need to allow traffic from specific virtual machines in subnet-a network access to machines in subnet-b without giving the entirety of subnet-a access. How can you accomplish this?

- Create a firewall rule to allow traffic from resources with specific network tags, then assign the machines in subnet-a the same tags.
- Relocate the subnet-a machines to a different subnet and give the new subnet the needed access.
- Create a rule to deny all traffic to the entire subnet, then create a second rule with higher priority giving access to tagged VMs in subnet-a.
- You can only grant firewall access to an entire subnet, not to individual VMs inside it.

Q18. A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed. What is the most likely cause of this problem?

- The session variable is local to just a single instance.
- The session variable is being overwritten in Cloud Datastore.
- The URL of the API needs to be modified to prevent caching.
- The HTTP Expires header needs to be set to -1 to stop caching.

Q19. One of the microservices in your application has an intermittent performance problem. You have not observed the problem when it occurs, but when it does, it triggers a particular burst of log lines. You want to debug a machine while the problem is occurring. What should you do?

- Log in to one of the machines running the microservice and wait for the log storm.
- In the Stackdriver Error Reporting dashboard, look for a pattern in the times the problem occurs.
- Configure your microservice to send traces to Stackdriver Trace so you can find what is taking so long.
- Set up a log metric in Stackdriver Logging, and then set up an alert to notify you when the number of log lines increases past a threshold.

Q20. To ensure that your application will handle the load even if an entire zone fails, what should you do? (Select all correct options.)

- Don't select the "Multizone" option when creating your managed instance group.
- Spread your managed instance group over two zones and overprovision by 100% (for two zones).
- Create a regional unmanaged instance group and spread your instances across multiple zones.
- Overprovision your regional managed instance group by at least 50% (for three zones).

Q21. You are creating a single preemptible VM instance named "preempt" to be used as scratch space for a single workload. If your VM is preempted, you need to ensure that the disk contents can be reused. Which gcloud command would you use to create this instance?

- gcloud compute instances create "preempt" --preemptible --no-boot-disk-auto-delete
- gcloud compute instances create "preempt" --preemptible --boot-disk-auto-delete=no
- gcloud compute instances create "preempt" --preemptible
- gcloud compute instances create "preempt" --no-auto-delete
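Combining Q16 and Q21, a preemptible instance that keeps its boot disk on deletion and runs a shutdown script on preemption could be created as in this sketch; shutdown.sh is a hypothetical local file:

```bash
# "shutdown.sh" is a hypothetical local script that gracefully stops the app;
# Compute Engine runs the shutdown-script metadata entry when preempting a VM.
gcloud compute instances create preempt \
    --zone=us-central1-a \
    --preemptible \
    --no-boot-disk-auto-delete \
    --metadata-from-file shutdown-script=shutdown.sh
```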
Q22. You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace the copy easily if there are changes in the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region. What steps must you take? (Select 2 options.)

- Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine instance in the US-East region.
- Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.
- Create an image file from the root disk with the Linux dd command, create a new disk from the image file, and use it to create a new virtual machine instance in the US-East region.
- Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file for the root disk.

Q23. A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform. What should you do?

- Help the engineer convert his websocket code to use HTTP streaming.
- Review the encryption requirements for websocket connections with the security team.
- Meet with the cloud operations team and the engineer to discuss load balancer options.
- Help the engineer redesign the application to use a distributed user-session service that does not rely on websockets and HTTP sessions.

Q24. Which of the following are best practices recommended by Google Cloud when dealing with service accounts? (Select 3 relevant options.)

- Grant the service account the full set of permissions.
- Do not delete service accounts that are in use by running instances on Google App Engine or Google Compute Engine.
- Grant the serviceAccountUser role to all the users in the organization.
- Use the display name of a service account to keep track of your service accounts; when you create a service account, populate its display name with the purpose of the service account.
- Create service accounts for each service, with only the permissions required for that service.

Q25. You have been delegated access to the XYZ organization and want to create a Shared VPC, but even with the delegated access you are not able to create it. What would resolve the issue?

- With delegated access, you don't need any other extra permission.
- Give yourself the Compute Shared VPC Admin role.
- Give yourself Compute Admin access.
- Add your member and give them a Shared Network Admin role.
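For Q25, creating a Shared VPC host project requires the Shared VPC Admin role (roles/compute.xpnAdmin), typically granted at the organization level. A sketch with a hypothetical organization ID, user, and project:

```bash
# Grant the Shared VPC Admin role at the organization level (hypothetical IDs).
gcloud organizations add-iam-policy-binding 123456789012 \
    --member="user:architect@example.com" \
    --role="roles/compute.xpnAdmin"

# The grantee can then designate a host project for the Shared VPC.
gcloud compute shared-vpc enable my-host-project
```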
Q26. You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service of an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly. What should you do?

- Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
- Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
- Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
- Assign a public IP to each instance, and configure a firewall rule to allow the load balancer to reach the instance's public IP.
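For Q26: Google Cloud health checks originate from the documented ranges 130.211.0.0/22 and 35.191.0.0/16, so the correct option describes an ingress rule like the following sketch (network name and port are illustrative assumptions):

```bash
# Allow Google health-check probes to reach the backends on port 80.
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=my-network \
    --direction=INGRESS \
    --allow=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16
```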
Q27. You are working on a project with two compliance requirements. The first requirement states that your developers should be able to see the Google Cloud Platform billing charges for only their own projects. The second requirement states that your finance team members can set budgets and view the current charges for all projects in the organization, but the finance team should not be able to view the project contents. You want to set permissions. What should you do?

- Add the finance team members to the default IAM Owner role. Add the developers to a custom role that allows them to see their own spend only.
- Add the finance team members to the Billing Administrator role for each of the billing accounts that they need to manage. Add the developers to the Viewer role for the project.
- Add the developers and finance managers to the Viewer role for the project.
- Add the finance team to the Viewer role for the project. Add the developers to the Security Reviewer role for each of the billing accounts.

Q28. For this question, refer to the Dress4Win case study (https://cloud.google.com/certification/guides/cloud-architect/casestudy-dress4win). Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services. The Stackdriver dashboard is not reporting the services as healthy. What should they do?

- In the Cloud Platform Console, download the list of the uptime servers' IP addresses and create an inbound firewall rule.
- Install the Stackdriver agent on all of the legacy web servers.
- Configure their legacy web servers to allow requests that contain a User-Agent HTTP header whose value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring).
- Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring).

Q29. Your company is using BigQuery for data analysis. Many users have access to this service and the data set, and you want to know which user has run which query. What would be the best way to get this information?

- Go to the job history; it has information about which user has run which query.
- Query the Stackdriver logs.
- Check the audit logs for the user ID.
- Go to the query history; it has information about which user has run which query.

Q30. For this question, refer to the Mountkirk Games case study (https://cloud.google.com/certification/guides/cloud-architect/casestudy-mountkirkgames). Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

- Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery.
- Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow.
- Container Engine, Cloud Pub/Sub, and Cloud SQL.
- Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc.
- Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow.

Q31. You are managing the GCP account of a client. The client raises a request to attach 9 local SSDs and launch a VM instance in the us-east1 region. As a Cloud Architect, what would be your response to this request?

- You can always attach a maximum of ten local SSD devices to a VM instance.
- If a resource is not available, you won't be able to create new resources of that type, even if you still have remaining quota in your region or project; and you can attach up to eight local SSD devices, for 3 TB of total local SSD storage space per instance.
- Launch the instance first and add the local SSD drives later for optimal performance.
- Request changes to the quota from the Quotas page in the GCP Console.

Q32. Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users. This behavior was not reported before the update. What strategy should you take?

- Work with your ISP to diagnose the problem.
- Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application.
- Roll back to an earlier known-good release initially, then use Stackdriver Trace and logging to diagnose the problem in a development/test/staging environment.
- Roll back to an earlier known-good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and logging to diagnose the problem.

Q33. A power generation company is looking to use Google Cloud Platform to monitor a power station. They have installed several IoT sensors in the power station, such as temperature sensors, smoke detectors, and motion detectors. Sensor data will be continuously streamed to the cloud, where it has to be handled by different components for real-time monitoring and alerts, analysis, and performance improvement. Which Google Cloud architecture would serve their purpose?

- Cloud IoT Core receives data from the IoT devices, transforms it, and redirects requests to a Cloud Pub/Sub topic. After the data lands in Cloud Pub/Sub, it is retrieved by a streaming job running in Cloud Dataflow that transforms the data and sends it to BigQuery for analysis.
- Send IoT device data to Cloud Storage, then load the data from Cloud Storage into BigQuery.
- Cloud IoT Core receives data from the IoT sensors, then sends the data to Cloud Storage; transform the data using Cloud Dataflow and send it to BigQuery for analysis.
- Cloud IoT Core receives the data from the IoT devices, transforms it, and redirects the requests to Pub/Sub; use Dataproc to transform the data and send it to BigQuery for analysis.

Q34. A digital media company recently moved its infrastructure from on-premises to Google Cloud. They have several instances behind a global HTTPS load balancer. A few days ago the application and infrastructure were subjected to DDoS attacks, and they are looking for a service that would provide a defense mechanism against such attacks. Select the relevant service.

- Cloud Armor.
- Cloud Identity-Aware Proxy.
- GCP firewalls.
- IAM policies.

Q35. Your office is connected to GCP via a VPN connection. How can you increase the speed of your VPN connection, assuming that your office Internet is not the bottleneck?

- Apply for a Dedicated Interconnect option.
- Enable high-speed routing in your VPN settings.
- Create an additional VPN tunnel.
- Submit a request to increase your bandwidth quota.

Q36. Using the principle of least privilege and allowing for maximum automation, what steps can you take to store audit logs for long-term access and to allow access for external auditors to view them? (Choose two.)

- Generate a signed URL to the Stackdriver export destination for auditors to access.
- Create an account for auditors to have view access to Stackdriver Logging.
- Export audit logs to Cloud Storage via an export sink.
- Export audit logs to BigQuery via an export sink.
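For Q36, an export sink to Cloud Storage might be created as in this sketch; the bucket name and filter are illustrative assumptions:

```bash
# Create a sink that exports Cloud Audit Logs entries to a bucket.
gcloud logging sinks create audit-archive \
    storage.googleapis.com/my-audit-log-archive \
    --log-filter='logName:"cloudaudit.googleapis.com"'

# The command prints the sink's writer service account; grant that account
# object-creation access on the bucket so exported entries can be delivered.
```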
Q37. Suppose you have a web server that is working properly, but you can't connect to its instance VM over SSH. Which of these troubleshooting methods can you use without disrupting production traffic? (Select 3 answers.)

- Create a snapshot of the disk and use it to create a new disk; then attach the new disk to a new instance.
- Use netcat to try to connect to port 22.
- Access the serial console output.
- Create a startup script to collect information.

Q38. You have a Kubernetes cluster with 1 node pool. The cluster receives a lot of traffic and needs to grow. You decide to add a node. What should you do?

- Use "gcloud container clusters resize" with the desired number of nodes.
- Use "kubectl container clusters resize" with the desired number of nodes.
- Edit the managed instance group of the cluster and increase the number of VMs by 1.
- Edit the managed instance group of the cluster and enable autoscaling.

Q39. One of your clients wants to store time-series data in Google Cloud Platform. They have found Bigtable to be a natural fit for time-series data. Choose the best practices suggested by Google Cloud for schema design patterns for storing time series in Cloud Bigtable. (Select four.)

- For the row key design pattern, use tall and narrow tables.
- If your row key for a time series includes a timestamp, all of your writes will target a single node, fill that node, and then move on to the next node in the cluster.
- Prefer reverse timestamps only when your most common query is for the latest values.
- For column design patterns, in general, keep row sizes below approximately 100 MB.
- Design your row key with your queries in mind.

Q40. To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take? (Choose 2 answers.)

- Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM.
- Use the --no-auto-delete flag on all persistent disks and stop the VM.
- Apply a VM CPU utilization label and include it in the BigQuery billing export.
- Use the Google BigQuery billing export and labels to associate costs with groups.
- Use the --auto-delete flag on all persistent disks and terminate the VM.
- Store all state on local SSD, snapshot the persistent disks, and terminate the VM.

Q41. For this question, refer to the MountKirk Games case study (https://cloud.google.com/certification/guides/cloud-architect/casestudy-mountkirkgames). MountKirk Games needs to build out their streaming data analytics pipeline to feed from their game backend application. Which GCP services, in which order, will achieve this? (See the sketch after Q42.)

- Cloud Storage - Cloud Dataflow - BigQuery.
- Cloud Dataproc - Cloud Storage - BigQuery.
- Cloud Pub/Sub - Cloud Dataflow - Cloud Bigtable.
- Cloud Pub/Sub - Cloud Dataflow - BigQuery.

Q42. A global media company is configuring a global load balancer for non-HTTP(S) traffic, and they are looking for a service with SSL offloading. As a Cloud Architect, what would be your load balancing choice?

- HTTPS load balancing.
- SSL Proxy load balancing.
- TCP Proxy load balancing, for all non-HTTP(S) traffic.
- Network TCP/UDP load balancing.
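For the streaming-pipeline question (Q41), the Pub/Sub - Dataflow - BigQuery option can be exercised with the Google-provided Pub/Sub-to-BigQuery Dataflow template. A sketch with hypothetical topic, dataset, and job names:

```bash
# Run the Google-provided streaming template that reads a Pub/Sub topic
# and writes rows to a BigQuery table (hypothetical resource names).
gcloud dataflow jobs run game-events-pipeline \
    --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region=us-central1 \
    --parameters=inputTopic=projects/my-project/topics/game-events,outputTableSpec=my-project:analytics.game_events
```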
Q43. You created an update for your application on App Engine. You want to deploy the update without impacting your users, and you want to be able to roll back as quickly as possible if it fails. What should you do?

- Delete the current version of your application, then deploy the update using the same version identifier as the deleted version.
- Notify your users of an upcoming maintenance window, and deploy the update in that maintenance window.
- Deploy the update as the same version that is currently running.
- Deploy the update as a new version, then migrate traffic from the current version to the new version.

Q44. You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?

- In a secret management system.
- In the source code.
- In an environment variable.
- In a config file that has restricted access through ACLs.

Q45. For this question, refer to the TerramEarth case study (https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth-rev2). Based on TerramEarth's current data flow environment (refer to the image in the case study), what are the direct GCP services needed to replicate the same structure for batch uploads?

- Cloud Spanner - Cloud SQL - BigQuery.
- Cloud Dataflow - Cloud Bigtable - Cloud Dataproc.
- Cloud Dataproc - Cloud Storage - BigQuery.
- Cloud Storage - Cloud Dataflow - BigQuery.

Q46. A large data analysis company uses the BigQuery, Bigtable, Dataproc, and Cloud Storage services. They use a hybrid architecture involving on-premises systems and Google Cloud, with Cloud VPN used to connect to Google Cloud Platform. One of the main challenges for the organization is mitigating data exfiltration risks stemming from stolen identities, IAM policy misconfigurations, malicious insiders, and compromised virtual machines. Which Google Cloud service can they use to address this challenge?

- Shared VPC.
- Cloud Armor.
- VPC Service Controls.
- Resource Manager.

Q47. You have a definition for an instance template that contains a web application. You are asked to deploy the application so that it can scale based on the HTTP traffic it receives. What should you do? (See the sketch after Q48.)

- Create a VM from the instance template. Create a custom image from the VM's disk. Export the image to Cloud Storage. Create an HTTP load balancer and add the Cloud Storage bucket as its backend service.
- Create a VM from the instance template. Create an App Engine application in automatic scaling mode that forwards all traffic to the VM.
- Create a managed instance group based on the instance template. Configure autoscaling based on HTTP traffic, and configure the instance group as the backend service of an HTTP load balancer.
- Create the number of instances required for peak user traffic based on the instance template. Create an unmanaged instance group and add the instances to it. Configure the instance group as the backend service of an HTTP load balancer.

Q48. One of your customers wants to redact sensitive data, such as credit card numbers and social security numbers, generated by application logs. Select the suitable service that fulfils this requirement.

- Cloud Data Loss Prevention.
- Cloud Secure.
- VPC Service Controls.
- Cloud Armor.
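For Q47, the managed-instance-group option scales on load-balancer serving capacity rather than CPU (contrast the CPU-based sketch after Q9). A sketch assuming a hypothetical template and an HTTP load balancer whose backend service the group is attached to:

```bash
# Managed instance group from a hypothetical instance template.
gcloud compute instance-groups managed create web-mig \
    --template=web-template \
    --size=2 \
    --zone=us-central1-a

# Autoscale on the HTTP load balancer's serving capacity (80% utilization).
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=10 \
    --scale-based-on-load-balancing \
    --target-load-balancing-utilization=0.8
```

The group would then be added as a backend of the load balancer's backend service; those steps are omitted here.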
Q49. For future phases, Dress4Win is looking at options to deploy data analytics to Google Cloud. Which option meets their business and technical requirements? (See the sketch after Q50.)

- Run current jobs from the current technical environment on Google Cloud Dataproc.
- Review all current data jobs. Identify the most critical jobs and create Google BigQuery tables to store and query data.
- Review all current data jobs. Identify the most critical jobs and develop Google Cloud Dataflow pipelines to process the data.
- Deploy a Hadoop/Spark cluster to Google Compute Engine virtual machines. Move current jobs from the current technical environment and run them on the Hadoop/Spark cluster.

Q50. What activity is carried out after data is transferred to a Transfer Appliance, to reverse the compression, deduplication, and encryption that were applied while capturing the data onto the appliance?

- Link Aggregation.
- Data Rehydration.
- Data Capture.
- Data Recapture.
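For Q49, the Dataproc option amounts to lifting existing Hadoop/Spark jobs onto a managed cluster. A minimal sketch with hypothetical names and sizing:

```bash
# Create a small managed Hadoop/Spark cluster (hypothetical name and sizing).
gcloud dataproc clusters create analytics-cluster \
    --region=us-central1 \
    --num-workers=2

# Submit an existing Spark job to the cluster unchanged (sample job shown).
gcloud dataproc jobs submit spark \
    --cluster=analytics-cluster \
    --region=us-central1 \
    --class=org.apache.spark.examples.SparkPi \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    -- 1000
```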