
Databricks Certified Data Engineer Associate - Preparation

Test Title:
Databricks Certified Data Engineer Associate - Preparation

Description:
Databricks Certified Data Engineer Associate - Preparation

Creation Date: 2024/05/31

Category: Computing

Number of Questions: 39

Rating: (0)
Test content:

A new data engineering team has been assigned to work on a project. The team will need access to database customers in order to see what tables already exist. The team has its own group, team. Which of the following commands can be used to grant the necessary permission on the entire database to the new team?. GRANT VIEW ON CATALOG customers TO team;. GRANT CREATE ON DATABASE customers TO team;. GRANT USAGE ON CATALOG team TO customers;. GRANT CREATE ON DATABASE team TO customers;. GRANT USAGE ON DATABASE customers TO team;.

A new data engineering team has been assigned to an ELT project. The new data engineering team will need full privileges on the database customers to fully manage the project. Which of the following commands can be used to grant full permissions on the database to the new data engineering team?. GRANT USAGE ON DATABASE customers TO team;. GRANT ALL PRIVILEGES ON DATABASE team TO customers;. GRANT SELECT PRIVILEGES ON DATABASE customers TO teams;. GRANT SELECT CREATE MODIFY USAGE PRIVILEGES ON DATABASE customers TO team;. GRANT ALL PRIVILEGES ON DATABASE customers TO team;.
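
As a hedged sketch of the difference between the two grant levels these first questions contrast (database customers and group team are taken from the questions; the statements assume a workspace where legacy table access control or an equivalent grants model is enabled):

    from pyspark.sql import SparkSession

    # On Databricks, `spark` already exists in every notebook; getOrCreate() just returns it.
    spark = SparkSession.builder.getOrCreate()

    # Database-level USAGE lets the group interact with (e.g. browse) objects in the database.
    spark.sql("GRANT USAGE ON DATABASE customers TO team")

    # ALL PRIVILEGES gives the group full control over the database for an ELT project it owns.
    spark.sql("GRANT ALL PRIVILEGES ON DATABASE customers TO team")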

Which of the following commands will return the location of database customer360?. DESCRIBE LOCATION customer360;. DROP DATABASE customer360;. DESCRIBE DATABASE customer360;. ALTER DATABASE customer360 SET DBPROPERTIES ('location' = '/user');. USE DATABASE customer360;.
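
A minimal sketch (database name taken from the question) of inspecting a database's metadata, which includes its location:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # provided as `spark` on Databricks

    # DESCRIBE DATABASE returns the database's metadata rows, one of which is its Location.
    spark.sql("DESCRIBE DATABASE customer360").show(truncate=False)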

Which of the following benefits is provided by the array functions from Spark SQL?. An ability to work with data in a variety of types at once. An ability to work with data within certain partitions and windows. An ability to work with time-related data in specified intervals. An ability to work with complex, nested data ingested from JSON files. An ability to work with an array of tables for procedural automation.

Which of the following commands can be used to write data into a Delta table while avoiding the writing of duplicate records?. DROP. IGNORE. MERGE. APPEND. INSERT.
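
As an illustrative sketch (the target, updates, and id names here are hypothetical), MERGE matches incoming rows against the target on a key and inserts only the rows that are not already present, which is how duplicates are avoided on re-runs:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Insert-only merge: rows whose id already exists in the target are skipped,
    # so replaying the same source data does not create duplicate records.
    spark.sql("""
        MERGE INTO target AS t
        USING updates AS u
        ON t.id = u.id
        WHEN NOT MATCHED THEN INSERT *
    """)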

A data analyst has a series of queries in a SQL program. The data analyst wants this program to run every day. They only want the final query in the program to run on Sundays. They ask for help from the data engineering team to complete this task. Which of the following approaches could be used by the data engineering team to complete this task?. They could submit a feature request with Databricks to add this functionality. They could wrap the queries using PySpark and use Python's control flow system to determine when to run the final query. They could only run the entire program on Sundays. They could automatically restrict access to the source table in the final query so that it is only accessible on Sundays. They could redesign the data model to separate the data used in the final query into a new table.

A data engineer runs a statement every day to copy the previous day's sales into the table transactions. Each day's sales are in their own file in the location "/transactions/raw". Today, the data engineer runs the following command to complete this task: COPY INTO transactions FROM "/transactions/raw" FILEFORMAT = PARQUET; After running the command today, the data engineer notices that the number of records in table transactions has not changed. Which of the following describes why the statement might not have copied any new records into the table?. The format of the files to be copied were not included with the FORMAT OPTIONS keyword. The names of the files to be copied were not included with the FILES keyword. The previous day's file has already been copied into the table. The PARQUET file format does not support COPY INTO. The COPY INTO statement requires the table to be refreshed to view the copied rows.
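
A minimal sketch of the daily load from the question (path and table name taken from the question). COPY INTO keeps track of the files it has already loaded from the source directory, so re-running it is idempotent and a run that finds no new files adds no new records:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Incremental, idempotent load: previously copied files are skipped automatically.
    spark.sql("""
        COPY INTO transactions
        FROM '/transactions/raw'
        FILEFORMAT = PARQUET
    """)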

A data engineer needs to create a table in Databricks using data from their organization's existing SQLite database. They run the following command: CREATE TABLE jdbc_customer360 USING _____________ OPTIONS (url "jdbc:sqlite:/customers.db", dbtable "customer360"); Which of the following lines of code fills in the above blank to successfully complete the task?. org.apache.spark.sql.jdbc. autoloader. DELTA. sqlite. org.apache.spark.sql.sqlite.
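
As a sketch of the completed statement under the assumption that the JDBC provider is the intended fill-in (the SQLite URL and table name come from the question; whether a SQLite JDBC driver is installed on the cluster is environment-dependent):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # A table backed by an external database is declared with the JDBC data source provider.
    spark.sql("""
        CREATE TABLE jdbc_customer360
        USING org.apache.spark.sql.jdbc
        OPTIONS (
          url 'jdbc:sqlite:/customers.db',
          dbtable 'customer360'
        )
    """)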

A data engineering team has two tables. The first table march_transactions is a collection of all retail transactions in the month of March. The second table april_transactions is a collection of all retail transactions in the month of April. There are no duplicate records between the tables. Which of the following commands should be run to create a new table all_transactions that contains all records from march_transactions and april_transactions without duplicate records?. CREATE TABLE all_transactions AS SELECT * FROM march_transactions INNER JOIN SELECT * FROM april_transactions;. CREATE TABLE all_transactions AS SELECT * FROM march_transactions UNION SELECT * FROM april_transactions;. CREATE TABLE all_transactions AS SELECT * FROM march_transactions OUTER JOIN SELECT * FROM april_transactions;. CREATE TABLE all_transactions AS SELECT * FROM march_transactions INTERSECT SELECT * FROM april_transactions;. CREATE TABLE all_transactions AS SELECT * FROM march_transactions MERGE SELECT * FROM april_transactions;.
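
A compact sketch of combining the two monthly tables (table names from the question); UNION stacks the rows of both inputs and also removes any rows that would otherwise appear twice:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # CTAS over a UNION of the two monthly tables produces one combined, de-duplicated table.
    spark.sql("""
        CREATE TABLE all_transactions AS
        SELECT * FROM march_transactions
        UNION
        SELECT * FROM april_transactions
    """)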

A data engineer only wants to execute the final block of a Python program if the Python variable day_of_week is equal to 1 and the Python variable review_period is True. Which of the following control flow statements should the data engineer use to begin this conditionally executed code block?. if day_of_week = 1 and review_period:. if day_of_week = 1 and review_period = "True":. if day_of_week == 1 and review_period == "True":. if day_of_week == 1 and review_period:. if day_of_week = 1 & review_period: = "True":.
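
A plain-Python sketch of the condition the question describes (the variable values here are hypothetical): == compares, a single = assigns, and a boolean flag is used directly rather than compared to the string "True":

    day_of_week = 1       # hypothetical value; 1 is the day on which the final block should run
    review_period = True  # boolean flag, not the string "True"

    # The final block runs only when both conditions hold.
    if day_of_week == 1 and review_period:
        print("running final block")  # stand-in for the real final block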

A data engineer is attempting to drop a Spark SQL table my_table. The data engineer wants to delete all table metadata and data. They run the following command: DROP TABLE IF EXISTS my_table; While the object no longer appears when they run SHOW TABLES, the data files still exist. Which of the following describes why the data files still exist and the metadata files were deleted?. The table's data was larger than 10 GB. The table's data was smaller than 10 GB. The table was external. The table did not have a location. The table was managed.
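
An illustrative sketch (table names and the external path are hypothetical) of why dropping a table can leave data files behind: a managed table's files are deleted together with the table, while an external table's files stay at the LOCATION the table pointed to:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Managed table: the metastore controls the storage, so DROP removes metadata and data files.
    spark.sql("CREATE TABLE managed_demo (id INT)")
    spark.sql("DROP TABLE IF EXISTS managed_demo")

    # External table: LOCATION points at storage the metastore does not own,
    # so DROP removes only the metadata and the data files remain in place.
    spark.sql("CREATE TABLE external_demo (id INT) LOCATION '/mnt/demo/external_demo'")
    spark.sql("DROP TABLE IF EXISTS external_demo")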

A data engineer wants to create a data entity from a couple of tables. The data entity must be used by other data engineers in other sessions. It also must be saved to a physical location. Which of the following data entities should the data engineer create?. Database. Function. View. Temporary View. Table.

A data engineer is maintaining a data pipeline. Upon data ingestion, the data engineer notices that the source data is starting to have a lower level of quality. The data engineer would like to automate the process of monitoring the quality level. Which of the following tools can the data engineer use to solve this problem?. Unity Catalog. Data Explorer. Delta Lake. Delta Live Tables. Auto Loader.

A Delta Live Tables pipeline includes two datasets defined using STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table sources using LIVE TABLE. The table is configured to run in Production mode using the Continuous Pipeline Mode. Assuming previously unprocessed data exists and all definitions are valid, what is the expected outcome after clicking Start to update the pipeline?. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist to allow for additional testing. All datasets will be updated once and the pipeline will persist without any processing. The compute resources will persist but go unused. All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will be deployed for the update and terminated when the pipeline is stopped. All datasets will be updated once and the pipeline will shut down. The compute resources will be terminated. All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.

In order for Structured Streaming to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing, which of the following two approaches is used by Spark to record the offset range of the data being processed in each trigger?. Checkpointing and Write-ahead Logs. Structured Streaming cannot record the offset range of the data being processed in each trigger. Replayable Sources and Idempotent Sinks. Write-ahead Logs and Idempotent Sinks. Checkpointing and Idempotent Sinks.

Which of the following describes the relationship between Gold tables and Silver tables?. Gold tables are more likely to contain aggregations than Silver tables. Gold tables are more likely to contain valuable data than Silver tables. Gold tables are more likely to contain a less refined view of data than Silver tables. Gold tables are more likely to contain more data than Silver tables. Gold tables are more likely to contain truthful data than Silver tables.

Which of the following describes the relationship between Bronze tables and raw data?. Bronze tables contain less data than raw data files. Bronze tables contain more truthful data than raw data. Bronze tables contain aggregates while raw data is unaggregated. Bronze tables contain a less refined view of data than raw data. Bronze tables contain raw data with a schema applied.

Which of the following tools is used by Auto Loader to process data incrementally?. Checkpointing. Spark Structured Streaming. Data Explorer. Unity Catalog. Databricks SQL.

A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table. The code block used by the data engineer is below: (spark.table("sales").withColumn("avg_price", col("sales") / col("units")).writeStream.option("checkpointLocation", checkpointPath).outputMode("complete").__________________.table("new_sales")) If the data engineer only wants the query to execute a micro-batch to process data every 5 seconds, which of the following lines of code should the data engineer use to fill in the blank?. trigger("5 seconds"). trigger(). trigger(once="5 seconds"). trigger(processingTime="5 seconds"). trigger(continuous="5 seconds").
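
The sketch below is adapted from the question's pipeline so that it is a valid, runnable streaming query (readStream for the source, append output mode, and toTable for the sink; the checkpoint path is hypothetical); the relevant piece is the trigger option, which starts a micro-batch on a fixed processing-time interval:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()
    checkpointPath = "/tmp/checkpoints/new_sales"  # hypothetical checkpoint location

    # trigger(processingTime="5 seconds") runs one micro-batch every 5 seconds.
    (spark.readStream.table("sales")
        .withColumn("avg_price", col("sales") / col("units"))
        .writeStream
        .option("checkpointLocation", checkpointPath)
        .outputMode("append")
        .trigger(processingTime="5 seconds")
        .toTable("new_sales")
    )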

A data engineer is designing a data pipeline. The source system generates files in a shared directory that is also used by other processes. As a result, the files should be kept as is and will accumulate in the directory. The data engineer needs to identify which files are new since the previous run in the pipeline, and set up the pipeline to only ingest those new files with each run. Which of the following tools can the data engineer use to solve this problem?. Unity Catalog. Delta Lake. Databricks SQL. Data Explorer. Auto Loader.
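
A minimal Auto Loader sketch (directory, schema location, checkpoint path, file format, and target table name are all hypothetical); the cloudFiles source records which files it has already processed, so each run ingests only files that arrived since the previous run while the originals are left in place:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Auto Loader: incremental discovery and ingestion of new files from a shared directory.
    (spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/tmp/schemas/shared_dir")
        .load("/mnt/shared_dir")
        .writeStream
        .option("checkpointLocation", "/tmp/checkpoints/shared_dir")
        .toTable("ingested_files")
    )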

A data engineer has three tables in a Delta Live Tables (DLT) pipeline. They have configured the pipeline to drop invalid records at each table. They notice that some data is being dropped due to quality concerns at some point in the DLT pipeline. They would like to determine at which table in their pipeline the data is being dropped. Which of the following approaches can the data engineer take to identify the table that is dropping the records?. They can set up separate expectations for each table when developing their DLT pipeline. They cannot determine which table is dropping the records. They can set up DLT to notify them via email when records are dropped. They can navigate to the DLT pipeline page, click on each table, and view the data quality statistics. They can navigate to the DLT pipeline page, click on the "Error" button, and review the present errors.

A data engineer has a single-task Job that runs each morning before they begin working. After identifying an upstream data issue, they need to set up another task to run a new notebook prior to the original task. Which of the following approaches can the data engineer use to set up the new task?. They can clone the existing task in the existing Job and update it to run the new notebook. They can create a new task in the existing Job and then add it as a dependency of the original task. They can create a new task in the existing Job and then add the original task as a dependency of the new task. They can create a new job from scratch and add both tasks to run concurrently. They can clone the existing task to a new Job and then edit it to run the new notebook.

An engineering manager wants to monitor the performance of a recent project using a Databricks SQL query. For the first week following the project's release, the manager wants the query results to be updated every minute. However, the manager is concerned that the compute resources used for the query will be left running and cost the organization a lot of money beyond the first week of the project's release. Which of the following approaches can the engineering team use to ensure the query does not cost the organization any money beyond the first week of the project's release?. They can set a limit to the number of DBUs that are consumed by the SQL Endpoint. They can set the query's refresh schedule to end after a certain number of refreshes. They cannot ensure the query does not cost the organization money beyond the first week of the project's release. They can set a limit to the number of individuals that are able to manage the query's refresh schedule. They can set the query's refresh schedule to end on a certain date in the query scheduler.

A data analysis team has noticed that their Databricks SQL queries are running too slowly when connected to their always-on SQL endpoint. They claim that this issue is present when many members of the team are running small queries simultaneously. They ask the data engineering team for help. The data engineering team notices that each of the team's queries uses the same SQL endpoint. Which of the following approaches can the data engineering team use to improve the latency of the team's queries?. They can increase the cluster size of the SQL endpoint. They can increase the maximum bound of the SQL endpoint's scaling range. They can turn on the Auto Stop feature for the SQL endpoint. They can turn on the Serverless feature for the SQL endpoint. They can turn on the Serverless feature for the SQL endpoint and change the Spot Instance Policy to "Reliability Optimized".

A data engineer wants to schedule their Databricks SQL dashboard to refresh once per day, but they only want the associated SQL endpoint to be running when it is necessary. Which of the following approaches can the data engineer use to minimize the total running time of the SQL endpoint used in the refresh schedule of their dashboard?. They can ensure the dashboard's SQL endpoint matches each of the queries' SQL endpoints. They can set up the dashboard's SQL endpoint to be serverless. They can turn on the Auto Stop feature for the SQL endpoint. They can reduce the cluster size of the SQL endpoint. They can ensure the dashboard's SQL endpoint is not one of the included queries' SQL endpoints.

A data engineer has been using a Databricks SQL dashboard to monitor the cleanliness of the input data to an ELT job. The ELT job has its Databricks SQL query that returns the number of input records containing unexpected NULL values. The data engineer wants their entire team to be notified via a messaging webhook whenever this value reaches 100. Which of the following approaches can the data engineer use to notify their entire team via a messaging webhook whenever the number of NULL values reaches 100?. They can set up an Alert with a custom template. They can set up an Alert with a new email alert destination. They can set up an Alert with a new webhook alert destination. They can set up an Alert with one-time notifications. They can set up an Alert without notifications.

A single Job runs two notebooks as two separate tasks. A data engineer has noticed that one of the notebooks is running slowly in the Job's current run. The data engineer asks a tech lead for help in identifying why this might be the case. Which of the following approaches can the tech lead use to identify why the notebook is running slowly as part of the Job?. They can navigate to the Tasks tab in the Jobs UI and click on the active run to review the processing notebook. There is no way to determine why a Job task is running slowly. They can navigate to the Tasks tab in the Jobs UI to immediately review the processing notebook.

A data engineer has a Job with multiple tasks that runs nightly. Each of the tasks runs slowly because the clusters take a long time to start. Which of the following actions can the data engineer perform to improve the start up time for the clusters used for the Job?. They can use endpoints available in Databricks SQL. They can use jobs clusters instead of all-purpose clusters. They can configure the clusters to be single-node. They can use clusters that are from a cluster pool. They can configure the clusters to autoscale for larger data sizes.

A new data engineering team has been assigned to an ELT project. The new data engineering team will need full privileges on the database customers to fully manage the project. Which of the following commands can be used to grant full permissions on the database to the new data engineering team?. GRANT USAGE ON DATABASE customers TO team;. GRANT ALL PRIVILEGES ON DATABASE team TO customers;. GRANT SELECT PRIVILEGES ON DATABASE customers TO teams;. GRANT SELECT CREATE MODIFY USAGE PRIVILEGES ON DATABASE customers TO team;. GRANT ALL PRIVILEGES ON DATABASE customers TO team;.

A data analyst has created a Delta table sales that is used by the entire data analysis team. They want help from the data engineering team to implement a series of tests to ensure the data is clean. However, the data engineering team uses Python for its tests rather than SQL. Which of the following commands could the data engineering team use to access sales in PySpark?. SELECT * FROM sales. There is no way to share data between PySpark and SQL. spark.sql("sales"). spark.delta.table("sales"). spark.table("sales").
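
A short sketch of reading the analysts' table from PySpark (table name from the question; the column used in the check is hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # spark.table returns the registered table as a DataFrame, so Python-based tests can
    # run directly against the same data the analysts query with SQL.
    sales_df = spark.table("sales")
    assert sales_df.filter("customer_id IS NULL").count() == 0  # example cleanliness check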

A data engineer has left the organization. The data team needs to transfer ownership of the data engineer's Delta tables to a new data engineer. The new data engineer is the lead engineer on the data team. Assuming the original data engineer no longer has access, which of the following individuals must be the one to transfer ownership of the Delta tables in Data Explorer?. Databricks account representative. This transfer is not possible. Workspace administrator. New lead data engineer. Original data engineer.

A data engineer needs to determine whether to use the built-in Databricks Notebooks versioning or version their project using Databricks Repos. Which of the following is an advantage of using Databricks Repos over the Databricks Notebooks versioning?. Databricks Repos automatically saves development progress. Databricks Repos supports the use of multiple branches. Databricks Repos allows users to revert to previous versions of a notebook. Databricks Repos provides the ability to comment on specific changes. Databricks Repos is wholly housed within the Databricks Lakehouse Platform.

Which of the following data lakehouse features results in improved data quality over a traditional data lake?. A data lakehouse provides storage solutions for structured and unstructured data. A data lakehouse supports ACID-compliant transactions. A data lakehouse allows the use of SQL queries to examine data. A data lakehouse stores data in open formats. A data lakehouse enables machine learning and artificial intelligence workloads.

Which of the following Git operations must be performed outside of Databricks Repos?. Commit. Pull. Push. Clone. Merge.

A data engineer has realized that they made a mistake when making a daily update to a table. They need to use Delta time travel to restore the table to a version that is 3 days old. However, when the data engineer attempts to time travel to the older version, they are unable to restore the data because the data files have been deleted. Which of the following explains why the data files are no longer present?. The VACUUM command was run on the table. The TIME TRAVEL command was run on the table. The DELETE HISTORY command was run on the table. The OPTIMIZE command was run on the table. The HISTORY command was run on the table.
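
An illustrative sketch (table name from the question, retention value hypothetical) of why time travel can stop working: VACUUM physically deletes data files that are no longer needed by versions inside the retention window, so older versions can no longer be restored because their files are gone:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Deletes unreferenced data files older than the 7-day (168-hour) retention window.
    spark.sql("VACUUM my_table RETAIN 168 HOURS")

    # After that, time travel / RESTORE to versions older than the window, e.g.
    #   RESTORE TABLE my_table TO TIMESTAMP AS OF '2024-05-28'
    # fails, because the files backing those versions have been removed.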

Which of the following code blocks will remove the rows where the value in column age is greater than 25 from the existing Delta table my_table and save the updated table?. SELECT * FROM my_table WHERE age > 25;. UPDATE my_table WHERE age > 25;. DELETE FROM my_table WHERE age > 25;. UPDATE my_table WHERE age <= 25;. DELETE FROM my_table WHERE age <= 25;.

Which of the following describes the storage organization of a Delta table?. Delta tables are stored in a single file that contains data, history, metadata, and other attributes. Delta tables store their data in a single file and all metadata in a collection of files in a separate location. Delta tables are stored in a collection of files that contain data, history, metadata, and other attributes. Delta tables are stored in a collection of files that contain only the data stored within the table. Delta tables are stored in a single file that contains only the data stored within the table.

Which of the following benefits of using the Databricks Lakehouse Platform is provided by Delta Lake?. The ability to manipulate the same data using a variety of languages. The ability to collaborate in real time on a single notebook. The ability to set up alerts for query failures. The ability to support batch and streaming workloads. The ability to distribute complex data operations.

Which of the following is hosted completely in the classic Databricks architecture?. Worker node. JDBC data source. Databricks web application. Databricks Filesystem. Driver node.
