Test Title:
SAP BODS

Description:
SAP BODS 4.2

Author:
Anonymous

Creation Date:
18/01/2023

Category: Other

Number of Questions: 80
Questions:
An SAP Data Services file format has a date column, but occasionally the file contains an invalid value in one row. This causes the dataflow to terminate with an error. What can you do to completely load such erroneous files? Note: There are 2 correct answers to this question. Specify a date format of '????-??-??' to indicate the value might NOT be a valid date in the file format editor. Define the column as varchar and use functions in a subsequent Query transform to perform the checks and conversions. Place the dataflow between a Try/Catch block to catch all erroneous rows. Use the error handling options for conversion errors in the file format definition.
What are standard components of SAP Data Services? There are 3 correct answers Design studio Access server Job server Secure local repository Real-time services.
You are asked to perform either the initial load or the delta load based on the value of a variable that is set at job execution. How do you design this requirement in SAP Data Services? Use a job containing a script with the ifthenelse() function to test the variable value. Connect this script to the initial and delta dataflows. Use a job containing a Conditional object that tests the value of the variable. In the IF part, call the initial dataflow; in the ELSE part, call the delta dataflow. Use a job containing a Case transform testing for the two possible conditions. Connect one case output to the initial dataflow and the other to the delta dataflow. Set the job to call the initial and delta dataflows in parallel. Each dataflow should have a filter testing for the variable value.
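For context, the ifthenelse() function in the Data Services scripting language only evaluates to a value; it cannot branch execution to one dataflow or another, which is what the Conditional object is for. A minimal script sketch, assuming a hypothetical global variable $G_LOAD_TYPE that is set at job execution (the variable names are illustrative only):

    # ifthenelse() returns the second or third argument depending on the condition
    $G_IS_INITIAL = ifthenelse($G_LOAD_TYPE = 'INITIAL', 1, 0);
    print('Load type for this run: ' || $G_LOAD_TYPE);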
The SAP Data Services Merge transform is used to combine two datasets; the first has 3000 rows and the second has 2000 rows. What are characteristics of the Merge transform? Note: There are 2 correct answers to this question. The Merge transform requires both datasets to have the same structure. The Merge transform joins the datasets using a full outer join. The Merge transform combines the datasets into 5000 output rows. The Merge transform combines the datasets into 5000 or fewer output rows.
You want to execute two dataflows in parallel in SAP Data Services. How can you achieve this? Create a workflow containing two dataflows and connect them with a line. Create a workflow containing two dataflows and set the degree of parallelism to 2. Create a workflow containing two dataflows without connecting them with a line. Create a workflow containing two dataflows and deselect the Execute only once property of the workflow.
In which of the following objects can you use built-in functions in SAP Data Services? Note: There are 3 correct answers to this question Scripts Map CDC transform Conditionals Merge transform Query transform.
How would you use the View Optimized SQL feature to optimize the performance of the dataflow? View and modify the SQL and adjust the dataflow to maximize push-down operations. View and modify the database execution plan within the Data Services Designer. View and modify the overall optimization plan of a Data Services engine. View and modify the SQL to improve performance.
What are advantages of using the Validation transform in SAP Data Services? Note: There are 3 correct answers to this question. You can see which rules were violated in one output. You can produce statistics. You can set different failed paths for each rule. You can call a recovery dataflow. You can have multiple rules on a single column.
What tasks can you perform in the SAP Data Services Management Console? Note: There are 3 correct answers to this question. Schedule a job for daily execution. View the rows and the values being loaded. Debug a dataflow to find data issues. Display the optional Validation transform statistics. Display trace, monitor, and error logs.
You execute an SAP Data Services job with enable recovery activated. One of the dataflows in the job raises an exception that interrupts the execution. You run the job again with Recover from last failed execution enabled. What happens to the dataflow that raised the exception during the first execution? It is rerun from the beginning and the partially loaded data is always handled automatically. It is rerun from the beginning and the design of the dataflow must deal with partially loaded data. It is rerun from the first failed row. It is rerun only if the dataflow is part of a recovery unit.
A dataflow contains a Pivot transform followed by a Query transform that performs an aggregation. The aggregation query should be pushed down to the database in SAP Data Services. Where would you place the Data_Transfer transform to do this? Before the Pivot transform. After the Query transform. Between the Pivot transform and the Query transform. Before the Pivot transform and after the Query transform.
Why would you specify Recover as a Unit in the properties of a workflow in SAP Data Services? To ensure that each dataflow is recovered as a separate unit during the recovery execution To ensure that all dataflows of the workflow are executed in one single transaction during recovery To ensure that all objects of the workflow are executed during recovery including the steps that were executed successfully in the prior run To ensure that the workflow is skipped during recovery if the workflow was executed successfully in the prior run.
Your SAP Data Services job design includes an initialization script that truncates rows in the target prior to loading. The job uses automatic recovery. How would you expect the system to behave when you run the job in recovery mode? Note: There are 2 correct answers to this question The job executes the script if it is part of a workflow marked as a recovery unit, but only if an error was raised within that workflow. The job starts with the flow that caused the error. If this flow is after the initialization script, the initialization script is skipped. The job reruns all workflows and scripts. When using automatic recovery, only dataflows that ran successfully in the previous execution are skipped. The job executes the script if it is part of a workflow marked as a recovery unit, irrespective of where the error occurred in the job flow.
The performance of a dataflow is slow in SAP Data Services. How can you see which part of the operations is pushed down to the source database? Note: There are 2 correct answers to this question. By enabling the corresponding trace options in the job execution dialog By opening the Auto Documentation page in the Data Services Management Console By starting the job in debug mode By opening the dataflow and using the View Optimized SQL feature.
An SAP Data Services job contains many dataflows and runs for several hours every night. If a job execution fails, you want to skip all successful dataflows and start with the failed dataflow. How do you accomplish this? Note: There are 2 correct answers to this question. Add a Try block before each dataflow and a Catch block after each dataflow. Run the nightly job with the enable recovery flag turned on. Design the dataflows to ensure a second run does not result in duplicates. Merge the dataflows from the job and rerun it.
In SAP Data Services, what do you use to implement a target-based delta that deals with inserts, updates, and deletes in the source? A Map_Operation transform A Table_Comparison transform The Auto correct load option A Map_CDC_Operation transform.
You decide to distribute the execution of a job across multiple job servers within a server group. What distribution levels are available? Note: There are 3 correct answers to this question: Dataflow Embedded dataflow Subdataflow Workflow Job.
You have a workflow containing two dataflows. The second dataflow should only run if the first one finished successfully. How would you achieve this in SAP Data Services? Embed the first dataflow in a Try/Catch block. Use a Conditional for the second dataflow. Add a script between the dataflows using the error_number() function. Connect the two dataflows with a line.
What does the expression SUBSTR(FIRST_NAME, 1, 3) return? M IRS U FIR.
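As a hedged illustration of the substr() semantics only (the sample value and variable below are hypothetical): positions are 1-based, so substr(value, 1, 3) returns the first three characters of the input string.

    # assuming FIRST_NAME holds the hypothetical value 'FIRSTNAME'
    $G_RESULT = substr('FIRSTNAME', 1, 3);   # $G_RESULT is now 'FIR'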
What are SAP Data Services scripts used for? Note: There are 2 correct answers to this question: To set the desired properties, for example, trace options, monitor sample rate, and the use statistics for optimization flag. To execute single SQL commands using the sql() function to select a value from a status table for the variable. To write complex transformation logic using the flexibility of the scripting language. To perform job initialization tasks, such as printing the job variable values into the trace log using the print() function.
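A minimal initialization-script sketch for the trace-log use case, assuming a hypothetical global variable $G_RUN_DATE that was set when the job was launched:

    # write the variable value to the trace log at job start
    print('Run date for this execution: ' || $G_RUN_DATE);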
An SAP Data Services job was executed in the past. Where can you see the order in which the dataflows were executed? There are 2 correct answers to this question: In the job server log In the operational dashboard In the Impact and Lineage Analysis report In the job trace log.
Which transforms are typically used to implement a slowly changing dimension of type 2 in SAP Data Services? There are 3 correct answers to this question: Table_Comparison Key_Generation Data_Transfer History_Preserving Map_CDC_Operation.
Where can you set up breakpoints for the SAP Data Services interactive debugger? In a script In a dataflow In a job In a workflow.
You modified an existing SAP Data Services job. You notice that the run time is now longer than expected. Where in SAP Data Services can you observe the progress of row counts to determine the location of a bottleneck? In the Impact and Lineage Analysis On the View Data tab In the trace log In the monitor log.
In SAP Data Services, why would you select the Produce default output checkbox in the Case transform? To output all rows that match exactly one case expression To output all rows that do not match any case expression to the default path To output all rows to the default path regardless of whether they match the case expressions To output all rows that match the case statement.
In which situation is it appropriate to use time-based CDC to capture changes in source data with SAP Data Services? When you need to capture intermediate changes. When almost all of the rows have changes. When there are large tables with few changes. When you need to capture physical deletes from the source.
How do you design a data load that has good performance and deals with interrupted loads in SAP Data Services? By setting the target table loader with Bulk Load and Auto Correct Load enabled By setting the target table loader with Bulk Load enabled By using the Table_Comparison transform By creating two dataflows and executing the Auto Correct Load version when required.
You are instructed to calculate the maximum value in the SALARY column of an EMPLOYEE table. How can you achieve this in SAP Data Services? Use max(SALARY) in a script Use max(SALARY) in a conditional Call max(SALARY) from a custom function Enter max(SALARY) in the Query transform.
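As a sketch only: max() is an aggregate function, so it needs the row set flowing through a Query transform rather than a single value in a script. With the EMPLOYEE table as the source, the mapping of the single output column would simply be the expression below (Group By tab left empty), and Data Services can typically push such an aggregation down to the database.

    # mapping expression of the output column in the Query transform
    max(SALARY)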
The value of DEPT_ID is NULL. decode((DEPT_ID = 'IT'), 'IS', (DEPT_ID = 'CS'), 'CA', '?') What is the output of this SAP Data Services function? IS CA null ?.
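For reference, decode() takes condition/result pairs followed by a default value. A sketch of how the expression from the question evaluates, under the assumption that a NULL DEPT_ID makes neither condition evaluate to TRUE:

    # decode(condition1, result1, condition2, result2, ..., default)
    decode((DEPT_ID = 'IT'), 'IS',    # not TRUE when DEPT_ID is NULL
           (DEPT_ID = 'CS'), 'CA',    # not TRUE when DEPT_ID is NULL
           '?')                       # the default value is returned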
You create a file format in SAP Data Services. What properties can you set for a column? Note: There are 3 correct answers to this question. Format information Comment Field size Data type Default value.
Which transforms can you use to change the operation code from UPDATE to INSERT in SAP Data Services? Note: There are 2 correct answers to this question: Map_Operation Query Key_Generation History_Preserving.
Which features are supported by the SAP Data Services interactive debugger? Note: There are 3 correct answers to this question: Define additional filters Show sample rows of each step Set breakpoints Show the optimized execution plan Show performance-related statistics.
Why would you use a memory datastore in your SAP Data Services dataflow design? To enhance the processing performance of dataflows used in real-time jobs To reduce the memory consumption in the target database To define a connection to SAP HANA To reduce the memory consumption in the source database.
What does the SAP Data Services repository contain? There are 2 correct answers to this question: User security Transformation rules Target metadata In-flight data.
An SAP Data Services dataflow adds the changed data (inserts and updates) into a target table every day. How do you design the dataflow to ensure that a partially executed dataflow recovers automatically the next time it is executed? Note: There are 2 correct answers to this question: Add a lookup function in the WHERE clause to filter out existing rows. Enable the Delete data before load target table loader option. Use the Table_Comparison transform before the table loader. Set the Auto correct load option in the target table loader options.
You want to use an SAP Data Services transform to split your source vendor data into three branches, based on the country code. Which transform do you use? Country ID transform Map_Operation transform Validation transform Case transform.
What requirement must you meet when mapping an output column on the SAP Data Services query transform mapping tab? Primary keys in the input schema must be mapped to only one column in the output schema. All columns of the input schema must be mapped to the output schema. Every column of the output schema must have a mapping. Each column in the output schema must be mapped to one or more columns in the input schema.
An SAP Data Services Validation transform outputs all invalid rows. If more than ten rows are invalid, the data is considered to be failed. How do you implement this? Raise an exception in a Conditional connected to the target table. Create an auditing rule that raises an exception. Set a breakpoint on the line connected to the target table. Use the raise_exception function in the validation transform.
Which repository types are used in SAP Data Services? Note: There are 2 correct answers to this question: Data repository Profiler repository Remote Repository Central Repository.
How do you allow a new team member to view the SAP Data Services repository in read-only mode? Export the repository's metadata to an XML file and open it in a browser. Use the Auto Documentation feature in the Management Console. Use the central repository in the Designer. Copy the repository and view the copy in the Repository Manager.
In SAP Data Services you have a Validation transform with the following two rules: Rule #1: Action on Failure is 'Send to Pass'; Rule #2: Action on Failure is 'Send to Fail'. Where are the records that fail both rule #1 and rule #2 sent? Only to the Fail output Only to the Pass output To both the Pass and Fail outputs Only to the Rule Violation output.
A new developer joined the project team. You already created a new SAP Data Services repository for this member. Where do you manage the security settings for this new repository? Repository Manager Data Services Designer Central Management Console Repository database.
How do you view the data between transforms in an SAP Data Services dataflow? By using the interactive debugger By setting the Audit Data On job execution trace option By adding audit points in the dataflow By setting the SQL Transforms on job execution trace option.
You define audit rules for a critical dataflow to confirm that your SAP Data Services batch job loaded only correct data. Which audit functions are available to define these rules for columns? Note: There are 3 correct answers to this question: Min Sum Average Checksum Count distinct.
You have to load a file that contains the following first three lines: YEAR;MONTH;PLAN_AMOUNT 2014;01;100.00 2014;02;110.00 What settings do you use when you create a file format for this? Type: Delimited, Column delimiter: <blank>, Skip row header: yes. Type: Delimited, Column delimiter: ';', Skip row header: yes. Type: Delimited, Column delimiter: ';', Skip row header: no. Type: Fixed, Column lengths: 4, 2 and 6, Skip row header: yes.
In SAP Data Services, what does the Date_Generation transform allow you to generate? The valid from date based on a dataset that contains valid to information only The valid to date based on a dataset that contains valid to information only The rows for a given date range The current date for a column to see when each row was loaded.
What application do you use to display the graphical representations of all SAP Data Services objects including their relationships and properties? Data Quality Reports Impact and Lineage Analysis Operational Dashboard Auto Documentation.
An SAP Data Services dataflow has a validation error. What is the cause? The source data does NOT comply with the rules entered in the Validation transform. The dataflow has a syntax error that has to be corrected before executing it. A conversion is missing. The source data is incorrect and the dataflow therefore requires a Validation transform.
You want to display the description of an object in the Designer workspace. Which tasks must you perform to accomplish this in SAP Data Services? There are 3 correct answers to this question: Right-click on the job in the project hierarchy to enable all descriptions. Disable the hide non-executable elements setting in the difference viewer. Enter a description in the properties of the object. Right-click the object, then choose Enable Description. Click the View enabled descriptions button on the toolbar.
You developed a batch job using SAP Data Services and want to start an execution. How can you execute the job? There are 2 correct answers to this question: Execute the job manually in the Data Services Designer. Use the scheduler in the Data Services Management Console. Use the scheduler in the Data Services Designer. Use the debug option in the Data Services Management Console.
From the ACCOUNT table you want to know how many accounts you have per account type. The ACCOUNT_TYPE is output along with an additional column COUNTER. The Group By tab of the Query transform is used with ACCOUNT_TYPE. Which mapping would you use for the COUNTER column in SAP Data Services? count_distinct(ACCOUNT_TYPE) gen_row_num() count(*) sum(ACCOUNT_TYPE).
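A sketch of the intended Query transform setup, using the column names from the question: ACCOUNT_TYPE is listed on the Group By tab and mapped through unchanged, while COUNTER is mapped to an aggregate.

    # mapping expression of the ACCOUNT_TYPE output column (also on the Group By tab)
    ACCOUNT_TYPE
    # mapping expression of the COUNTER output column: rows per ACCOUNT_TYPE group
    count(*)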
A target column named ZIP4 requires the input of the source columns POSTCODE and EXTENSION. For example: POSTCODE: 99999 EXTENSION: 9999 The desired result is ZIP4: 99999-9999. What mapping do you use to implement this in an SAP Data Services Query transform? POSTCODE || '-' || EXTENSION POSTCODE + '-' + EXTENSION rpad_ext(POSTCODE, EXTENSION) POSTCODE AND '-' AND EXTENSION.
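For reference, || is the string concatenation operator in Data Services expressions; a minimal sketch using the literal values from the question:

    # concatenate the postcode, a hyphen, and the extension
    '99999' || '-' || '9999'    # evaluates to '99999-9999'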
An SAP Data Services job contains logic to execute different dataflows depending on whether the job was successful or failed. Therefore the $NEEDS_RECOVERY variable should be set to either 'Yes' or 'No'. How do you assign the value to the $NEEDS_RECOVERY variable? Use a catch block and set the variable to 'Yes'. Use a script with an SQL function to read the value from a status table. Use a dataflow to set the value via a template table. Use a global variable to persist the value across job executions.
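A minimal script sketch for the status-table approach; the datastore name DS_ADMIN, the table JOB_STATUS, and the column RECOVERY_FLAG are hypothetical placeholders:

    # read a single value from a status table into the variable
    $NEEDS_RECOVERY = sql('DS_ADMIN', 'SELECT RECOVERY_FLAG FROM JOB_STATUS');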
In SAP Data Services, which basic operations can you perform with a Query transform? Note: There are 3 correct answers to this question. Apply functions to columns. Join data from several sources. Set a global variable to a value. Map columns from an input schema to an output schema. Flag rows for update.
You build a Data Warehouse with a date dimension in SAP Data Services. You decide to use the Date Generation transform to create this. What options are available to control the output from the transform? Note: There are 2 correct answers to this question. Effective date column Julian format End date Increment.
Your source table has a revenue column and a quantity column for each month. You want to transform this data to get a table containing twelve rows with two columns. What is the best way to achieve this in SAP Data Services? Use the Merge transform that is connected to the source. Use the Pivot transform with two pivot sets. Use the Query transform with multiple IFTHENELSE() functions. Use twelve Query transforms to create the desired output. Then combine these transforms.
You are joining tables using the query transform of SAP Data Services. What option is supported? Maximum of two tables Left outer joins and inner joins Only inner joins Only equal conditions.
Which type of SAP Data Services object can a project, job, dataflow or workflow contain? Note: There are 3 correct answers to this question. A workflow can contain a workflow. A job can contain a job. A job can contain a workflow. A Project can contain a job. A dataflow can contain a workflow.
You import a table from a database into a datastore. Which information is added into the SAP Data Services repository? The table name and all column names with their datatypes The complete metadata information of the table Only the table name The whole table with all its source data.
You built a delta load dataflow in SAP Data Services. This dataflow is executed every night. The source table contains a CHANGE_DATE column which is populated by the database when the row is saved. What can a timestamp-based CDC approach identify in the source based on this CHANGE_DATE column? Inserted and updated rows but not deleted rows Every single change made to a row Inserts, updates, and deletes for a specified time range Updated rows but NOT inserted or deleted records.
Which of the following administrative tasks can you perform using the SAP Data Services Management Console? Note: There are 2 correct answers to this question. Edit the system configuration. Configure an adapter. Edit the initialization script of a job. Schedule a batch job.
You are reading a Sales Order table from the source and need to add the customer region information from a customer table. The primary key of the customer table consists of the columns CUST_ID and VALID_FROM. How would you design the dataflow to get the region information that was valid at the ORDER_CREATE_DATE? Perform an outer join between both tables Join the two tables Use a regular lookup function Use a lookup_ext function.
What errors can you handle in SAP Data Services when you use a file format target? Note: There are 2 correct answers to this question: Semantic error Row-format error File type error Data type conversion error.
A Map_Operation transform in SAP Data Services was used to modify the operation code of data that is being processed. Why do you perform this action? Note: There are 2 correct answers to this question. To ensure compatibility with subsequent transforms To push the data down for better performance To increase the speed at which the database loads To control how the data is loaded.
You need to import metadata and extract data from an SAP ERP system using SAP Data Services. Which type of datastore must you use? Adapter datastore Database datastore Web Services datastore Application datastore.
An SAP Data Services dataflow must load the source table data into a target table, but the column names are different. Where do you assign each source column to the matching target column? In the Map transform In the table reader In a Query transform In a table loader.
You executed a job in the development environment and it raised a primary key violation error in SAP Data Services. Which feature do you enable to identify which primary key values caused the errors? Auto correct load Drop and re-create target table Use overflow file Delete data before loading.
You are loading a database table using SAP Data Services. Which loading options are valid? Note: There are 3 correct answers to this question. Rows per commit Include in transaction Number of loaders Data transfer method ABAP execution option.
You want to load data from an input table to an output table using the SAP Data Services Query transform. How do you define the mapping of the columns within a Query transform? Note: There are 2 correct answers to this question. Select an output column and enter the mapping manually. Drag one column from the input schema to the output schema. Drag one column from the output schema to the input schema. Select one input column and enter the mapping manually.
You created and saved a database datastore in SAP Data Services. Which properties can you change in the Edit Datastore dialog box? Note: There are 3 correct answers to this question. User name and password Database version Database server name Database name Datastore name.
What can you use a workflow for in SAP Data Services? To group dataflows that belong together To group jobs that you want to monitor To transform source data into target data To allow scheduling for dataflows.
Which feature in the SAP Data Services Management Console allows you to see the trend of the execution time for any given job? Operational dashboard Monitor log Trace log Data quality reports.
Which operations can be pushed down in SAP Data Services? Note: There are 2 correct answers to this question. Join operations between a file and a database table Join operations between sources that are on the same database server Aggregation operations used with a Group By statement Load operations that contain triggers.
You have a Map_Operation transform immediately before the target in a dataflow in SAP Data Services. What happens to the rows if all operation codes are mapped to Discard in the transform? They are deleted from the target. They are flagged for later deletion. They are filtered by the transform. They are added to the overflow file.
You want to back up an SAP Data Services repository. How does the system store repositories? As tables in a relational database management system As an XML file on the Data Services Job Server As an XML file on the Data Services Access Server As a binary file on the Data Services Job Server.
You want to set up a new SAP Data Services landscape. You must also create new repositories. Which repository types can you create? Note: There are 3 correct answers to this question. Profiler repository Central repository Standby repository Backup repository Local repository.
What is the relationship between local variables and parameters in SAP Data Services? Note: There are 2 correct answers to this question. A local variable in a workflow sets the value of a parameter in a dataflow. A parameter in a job sets the value of a local variable in a dataflow. A parameter in a workflow sets the value of a local variable in a dataflow. A local variable in a job sets the value of a parameter in a workflow.
What is the SAP Data Services dataflow auditing feature used for? Note: There are 2 correct answers to this question. To define rules based on the number of records processed overall once the dataflow is finished To count the number of rows processed at user-defined points to collect runtime statistics To view the data as it is processed by the dataflow in order to ensure its correctness To define rules that each record processed by the dataflow has to comply with.
What operations can you push down to the database using a Data_Transfer transform in SAP Data Services? Note: There are 3 correct answers to this question. Join Distinct XML function Custom function Ordering.
In SAP Data Services, which function delivers the same results as nested IFTHENELSE functions? Literal Match_pattern Match_regex Decode.
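As a small sketch of that equivalence, using a hypothetical variable $G_CODE, the two expressions below return the same result:

    # nested ifthenelse()
    ifthenelse($G_CODE = 1, 'one', ifthenelse($G_CODE = 2, 'two', 'other'))
    # equivalent decode()
    decode(($G_CODE = 1), 'one', ($G_CODE = 2), 'two', 'other')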