Oracle Data Integrator 12c Essentials
Test title: Oracle Data Integrator 12c Essentials
Description: Oracle Data Integrator 12c Essentials




You need to implement security mechanisms that allow only user "A" to view all the Mappings contained in a specific project named "P1". How must you proceed?
- Go to Security, assign the NG DESIGNER profile to user "A". Next, drag project "P1" to user "A" and finally set View privileges to Active for the Mapping objects.
- Go to Security, assign the DESIGNER profile to user "A". Next, drag project "P1" to user "A" and finally set View privileges to Active for the Mapping objects.
- Go to Security, assign the NG DESIGNER profile to user "A". Next, drag project "P1" to user "A" and finally set View privileges to Active for the Project, Folder, and Mapping objects.
- Go to Security, assign the DESIGNER profile to user "A". Next, drag project "P1" to user "A" and finally set View privileges to Active for the Project, Folder, and Mapping objects.

How can you define the order in which target data stores are loaded in a Mapping?
- ODI automatically computes the load order based on the order in which the data stores were added to the Mapping.
- You can use the Target Order field.
- You can use the Target Load Order field.
- You can use the Load Order field.

Identify the correct variable step type to use when assigning a value to a variable with an SQL query.
- Evaluate Variable.
- Set Variable.
- Declare Variable.
- Refresh Variable.

You need to create a Model that works with multiple underlying technologies. How must you proceed?
- This works only for Oracle and Informix.
- This works only for Oracle and MySQL.
- Create a new generic technology to support it.
- Use the out-of-the-box Generic SQL technology.

How would a Knowledge Module that is required to perform an aggregation in a Mapping generate the correct code?
- The Knowledge Module must be customized because business rules and the physical implementation are strictly intertwined in ODI.
- The Knowledge Module must be customized only for aggregation functions.
- The Knowledge Module need not be customized because aggregation code is generated automatically by ODI according to the Mapping logic.
- The Knowledge Module need not be customized, but a variable must be used in the aggregate expression to generate the correct sum.

Identify two implementation strategies of changed data capture provided through ODI Knowledge Modules. (Choose two.)
- extracting source data to flat files.
- triggers.
- Oracle GoldenGate.
- before-and-after database image comparison.

How does the data flow when moving records between two servers by using Database Links and an Agent installed on a middle-tier server?
- from the source database into flat files that are then loaded into the target database.
- directly from the source database to the target database.
- from the source database onto the server running the Agent and then into the target database.
- from the source database into the machine running ODI Studio and then into the target database.

You are a project developer using ODI and want to consolidate your own local metadata repositories. Identify the true statement.
- You must consolidate your own local metadata repositories. The local metadata must be transmitted via FTP and synchronized with a dedicated proprietary engine, creating a common metadata model for all the developers.
- You must consolidate your own local metadata repositories. You have to invoke a dedicated web service to synchronize the metadata by using Oracle Service Bus.
- You need not consolidate your own local metadata repositories, because the ODI proprietary metadata server allows all developers to share the common metadata of a specific project.
- You need not consolidate your own local metadata repositories, because ODI uses a centralized relational metadata repository that all the developers share.

How should you define the Work Schema of a Physical Schema?
- Use a dedicated schema such as ODI_STAGING.
- Use TEMP.
- Use the same schema as the Data Schema.
- Use SYSTEM.
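The Refresh Variable step type mentioned above runs a SQL query and assigns its first result to the variable. The concept can be illustrated with a minimal Python sketch (sqlite3 stands in for the source database; table and variable names are illustrative, not ODI internals):

```python
import sqlite3

def refresh_variable(conn, query):
    """Simulate an ODI Refresh Variable step: run the query and
    assign the first column of the first row to the variable."""
    row = conn.execute(query).fetchone()
    return row[0] if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 32.5)])

# The "refreshing" query defined on the variable's Refreshing tab
last_id = refresh_variable(conn, "SELECT MAX(id) FROM orders")
print(last_id)  # 2
```

In ODI itself the query is entered on the variable's Refreshing tab against a logical schema; this sketch only mirrors the assign-from-query behavior.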
You have been tasked with designing a Mapping that must perform an initial load and incremental updates using the same transformation logic. How should you proceed?
- Create a single Mapping with two Physical Designs: one for the initial load and one for the incremental updates.
- Create a single Mapping with a single Physical Design and modify it appropriately when an initial load is required or when an incremental update is required.
- Create two Mappings: one for the initial load and one for the incremental updates. Duplicate the transformation logic.
- Create a single Mapping and use variables in the Logical Design to do an initial load when required or an incremental update otherwise.

How are the ODI repositories upgraded?
- by using OPatch.
- by using ODI Studio.
- by using the import/export utilities.
- by using Upgrade Assistant.

You must run the same mapping concurrently while avoiding clashes of ODI temporary objects. How must you implement this?
- Create a custom KM to handle this by using ODI variables defined at the Topology level to create unique temporary object names.
- Use variables in the Logical Design of the Mapping to create unique temporary object names.
- Create a custom Knowledge Module (KM) to handle this by using ODI variables to create unique temporary object names.
- Select the Use Unique Temporary Object Names check box at the Physical Design level.

Which tables created by ODI contain error records?
- ERR$.
- ERROR$.
- ODI_ERR$.
- E$.

Your customer wants a project in ODI, which contains a user function, to translate commands with different syntax for different technologies but with the same functionality. How can you achieve this?
- The project must be explicitly mapped within an ODI mapping.
- A customized knowledge module is needed.
- An ODI procedure must be associated with it.
- It can be defined for every technology listed in the topology.
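The user-function question above turns on one idea: a single function name with a different implementation per technology, substituted at code-generation time. A minimal sketch of that substitution, assuming a hypothetical NOW() user function (the dictionary of implementations is illustrative, not ODI's internal representation):

```python
# Hypothetical per-technology implementations of a NOW() user function
NOW_IMPLS = {
    "oracle": "SYSDATE",
    "microsoft sql server": "GETDATE()",
    "mysql": "NOW()",
}

def expand_user_function(expression, technology):
    """Replace the NOW() user function with the syntax of the
    technology the code is generated for."""
    return expression.replace("NOW()", NOW_IMPLS[technology])

print(expand_user_function("SELECT NOW() FROM DUAL", "oracle"))
print(expand_user_function("SELECT NOW()", "microsoft sql server"))
```

ODI performs this expansion when generating code for the execution technology, which is why one user function can carry as many implementations as there are technologies declared in the topology.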
If multiple changes occur on the source system but have not yet been processed by ODI, the J$ table contains entries for each change for the records sharing the same PK. What happens at run time when a Mapping gets executed?
- All distinct entries are processed.
- Only the latest entry is processed, based on the JRN_DATE field.
- Only the first entry is processed, based on the JRN_DATE field.
- A PK violation occurs due to the duplicated entries and the entries are inserted in the E$ table.

You want to ensure that the Physical Mapping Design cannot be modified even if the Logical Design of the Mapping is changed. What sequence of steps must you follow to achieve this?
- Go to the Physical tab, select the Fixed Design check box of the Physical Mapping Design.
- Go to the Physical tab, select the Is Frozen check box of the Physical Mapping Design.
- Go to the Physical tab, select the Read-only check box of the Physical Mapping Design.
- Go to the Physical tab, deselect the Modify check box of the Physical Mapping Design.

Which two statements are true about the Oracle Data Integrator Software Development Kit (ODI SDK)? (Choose two.)
- It enables developers to execute most ODI operations through a Java program.
- It can be used to embed ODI processes into another product.
- It is used to load data between Teradata and Oracle.
- It must be used by ETL developers to develop ODI processes.

How do you reuse a configuration of OdiSendMail across multiple packages?
- Add a sub-model to a package, set the Sub-model step to Journalizing Sub-model, and select the Extend Window and Lock Subscriber check boxes.
- Add an OdiSendMail step to a knowledge module.
- Duplicate the OdiSendMail step into multiple packages.
- Create a procedure with a step that uses OdiSendMail and add this procedure into multiple packages.

When Oracle Data Integrator (ODI) and Oracle GoldenGate are used together, which option represents the phases of an ELT process that Oracle GoldenGate performs?
- transform only.
- load and transform.
- extract and load.
- extract and transform.

You create two mappings that both use the same changed data and run at different times. One runs every 15 minutes and the other runs once every day. What must you do to ensure that the Mapping that gets executed more often does not consume the changed data required by the other Mapping?
- Use Consistent CDC, do not perform a purge of the journal data when the first Mapping gets executed, and manually change the JRN_SUBSCRIBER column in the corresponding J$ table to keep the changed data present for the second Mapping.
- Create a third mapping to copy the changed data to a staging table, which is used as the source of the second Mapping.
- Duplicate the source data store for each Mapping.
- Create two distinct subscribers for each Mapping.

You need to create a package that automatically sends an alert to users in case the third step fails. Which option represents the steps to accomplish this?
- Add an OdiSendMail step to My Package and link the My Third Package step to it by using a green OK arrow.
- Add an OdiSendMail step to My Package and link the My Last Package step to it by using a red KO arrow.
- Add an OdiReadMail step to My Package and link the My Third Package step to it by using a red KO arrow.
- Add an OdiSendMail step to My Package and link the My Third Package step to it by using a red KO arrow.

Identify two correct exception behavior values for Run Scenario steps in load plans. (Choose two.)
- Run Exception and Restart.
- Run Exception and Continue.
- Run Exception and Ignore.
- Run Exception and Fail.
- Run Exception and Raise.

Which statement is true about defining more than one physical schema associated to the same data server?
- It is possible to define more than one physical schema associated to the same data server, but it is mandatory to specify a different user to log in.
- It is possible to define more than one physical schema associated to the same data server, but you must flag the default physical schema.
- It is possible, but it is better to avoid doing so because it is more difficult to define a logical schema this way.

You are working with delimited flat files and want to enforce a primary key on a flat file by using a Check Knowledge Module (CKM). However, you encounter an error. Why did this happen?
- It is not possible to enforce constraints on some technologies such as flat files and Java Messaging Service (JMS) queues.
- It is not possible to enforce constraints on any technology.
- It is only possible to forward-engineer it to the flat file definition.
- It is possible to enforce a primary key on a flat file by using a CKM; however, you have to save it as a fixed file.

Which statement is correct about the tasks that Standalone Agents perform?
- They update or modify code to be executed, check security, select database servers, and update log files after execution.
- They schedule scenarios to be executed, check security, constraints, and update log files after execution.
- They retrieve code from the execution repository and request database servers, operating systems, or scripting engines for execution.
- They schedule code from the execution repository and update log files after execution.

As part of your QA process you want to view code at the Step level in Operator. How must you proceed?
- It is only possible to see code at the Session level.
- It is only possible to view code at the Task level.
- Depending on the audit level declared when running the execution, some of the generated code at the step level can be viewed.
- All generated code can be viewed together at the step level, with a complete list of executed code.

You are using a customized reverse knowledge module. You want the execution to be performed in only the development data environment. Which statement is true?
- The execution should be done only on the development data environment, as long as the other environment is a mirrored copy.
- Only the production environment can be used.
- All environments linked to a logical schema can be used to reverse-engineer table structures.
- Only a Java engine intermediate environment can be processed.

Your package logic requires you to retrieve the status of the previous package step into a variable. How must you implement this?
- Create a variable, use odiRef.getPrevStepLog() in the SQL query in its Refreshing tab, and add the variable as a Refresh Variable step into the package.
- Create a variable, use odiRef.getStep() in the SQL query in its Refreshing tab, and add the variable as a Refresh Variable step into the package.
- Create a variable, use odiRef.getPrevStepStatus() in the SQL query in its Refreshing tab, and add the variable as a Refresh Variable step into the package.

Which product is included in ODI web-based components?
- Oracle BPEL Process Manager.
- Oracle GoldenGate.
- ODI Console.
- Oracle WebLogic Server.

Which two statements are true about using ODI and Oracle GoldenGate together? (Choose two.)
- Oracle GoldenGate primarily performs aggregations for ODI.
- Oracle GoldenGate and ODI are used together primarily for running weekly batch jobs.
- Oracle GoldenGate provides non-invasive changed data capture capabilities to ODI.
- ODI starts and stops Oracle GoldenGate processes automatically during a Mapping execution.
- ODI and Oracle GoldenGate enable real-time business intelligence.

You are working on notifications in a package and you must send an email containing an error message in case a Package step fails. Which odiRef method do you use to access the error message?
- odiRef.getSession().
- odiRef.getInfo().
- odiRef.getPrevStepLog().
- odiRef.getStep().
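The J$ question above hinges on how journalized changes are collapsed per primary key: when several unprocessed changes share a PK, only the most recent one (by JRN_DATE) flows into the Mapping. A minimal Python sketch of that selection, assuming simplified journal rows (the tuples are illustrative, not the real J$ layout):

```python
from datetime import datetime

# Hypothetical journal rows: (pk, jrn_date, operation)
journal = [
    (101, datetime(2024, 1, 1, 9, 0), "I"),
    (101, datetime(2024, 1, 1, 9, 5), "U"),  # later change, same PK
    (102, datetime(2024, 1, 1, 9, 2), "I"),
]

def latest_changes(rows):
    """Keep only the most recent journal entry per primary key,
    mirroring how CDC picks a single row by JRN_DATE."""
    latest = {}
    for pk, jrn_date, op in rows:
        if pk not in latest or jrn_date > latest[pk][0]:
            latest[pk] = (jrn_date, op)
    return {pk: op for pk, (_, op) in latest.items()}

print(latest_changes(journal))  # {101: 'U', 102: 'I'}
```

This is why no PK violation occurs at run time: the duplicated journal entries never reach the target together.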
Which is the correct statement about the contents of Master repositories?
- They contain data model and security information.
- They contain security and topology information.
- They contain project and topology information.
- They contain project and security information.

Which two statements are true about big data support in ODI? (Choose two.)
- ODI uses its own transformation engine to process data in a Hadoop cluster.
- ODI performs data transformations inside a Hadoop cluster.
- ODI must perform data transformations outside Hadoop in an Oracle database.
- ODI allows moving data in and out of a Hadoop cluster.

You want to add a new CDC subscriber in ODI after you have started the journal process. Which option represents the steps to use this new subscriber?
- Add a new subscriber and edit the default Journalizing filter in your Mappings.
- Drop the journal, add a new subscriber, start the journal, and edit the default Journalizing filter in your Mappings.
- Drop the journal, remove existing subscribers, add a new subscriber, start the journal, and edit the default Journalizing filter in your Mappings.
- Add a new subscriber, start the journal, and remove the default Journalizing filter in your Mappings.

Identify two correct statements about reusable Mappings. (Choose two.)
- They can have generic input and output signatures.
- They contain both a Logical and Physical Mapping design.
- They can be used within regular Mappings.
- They can be executed directly.
- They can be shared across projects.

In a Mapping, you want to load the data by using a specific partition that is declared for a target table. Which statement is correct in this situation?
- It is not possible to use a specific partition. Only ODI variables can be used to filter the target table loading.
- An option in the Mapping can be used to declare partitions, but an ODI variable must be used to evaluate the correct partition value.
- An option in the Mapping can be used to declare the partition that has to be used for the loading.
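Several CDC questions above rely on subscribers: each subscriber consumes its own copy of a captured change, so one mapping's consumption does not starve another's. A toy Python model of that semantics, assuming a much-simplified journal (subscriber names and the purge-on-consume behavior are illustrative, not ODI's actual J$ mechanics):

```python
class Journal:
    """Toy model of per-subscriber change consumption: each subscriber
    sees a change until it consumes (purges) its own copy."""
    def __init__(self):
        self.entries = []  # list of (subscriber, pk) pairs

    def capture(self, pk, subscribers):
        # A source change is journalized once per registered subscriber
        for s in subscribers:
            self.entries.append((s, pk))

    def consume(self, subscriber):
        # A mapping run consumes (and purges) only its own entries
        changed = [pk for s, pk in self.entries if s == subscriber]
        self.entries = [(s, pk) for s, pk in self.entries if s != subscriber]
        return changed

j = Journal()
j.capture(1, ["MAP_15MIN", "MAP_DAILY"])
print(j.consume("MAP_15MIN"))  # [1] — frequent mapping takes its copy
print(j.consume("MAP_DAILY"))  # [1] — daily mapping still sees the change
```

This is the intuition behind creating distinct subscribers for mappings that share the same changed data but run on different schedules.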
You want to override the code generated by ODI and provide a custom SQL statement as the source of a Mapping. How must you proceed?
- Duplicate the Integration Knowledge Module, add an option CUSTOM_TEMPLATE, and set it within the Mapping.
- Duplicate the Check Knowledge Module, add an option CUSTOM_TEMPLATE, and set it within the Mapping.
- In the Physical tab of a Mapping, click one of the source data stores, select the Extract Options, and enter the SQL statement in the CUSTOM_TEMPLATE field.
- Duplicate the Loading Knowledge Module, add an option CUSTOM_TEMPLATE, and set it within the Mapping.

You must modify the order in which data stores are being journalized in a model. What sequence of steps do you follow to achieve this?
- Open the model, go to the Journalizing tab, and modify the order there.
- Right-click the data store, select Changed Data Capture, and then select Order.
- Open the model, go to the Journalized Tables tab, and modify the order there.
- Open the data stores, go to the Journalizing tab, and modify the order there.

You need to troubleshoot the execution of a Mapping and visualize the data contained in the ODI Staging tables while the Mapping is being executed. How must you proceed?
- Start a Mapping in Debug mode and use breakpoints and the Get Data feature to query the data contained in the Staging tables.
- Modify a Knowledge Module to dump the Staging tables' data into log files for review.
- Use variables in a Package to query the Staging tables and evaluate the variable values.
- Reverse engineer the Staging tables in a Model and review the data contained in it.

You are defining a data store in ODI metadata and want to add a primary key even if it does not physically exist on the related database catalog. How can you accomplish this?
- You can add a primary key even if it does not physically exist on the related database catalog, by adding constraints on the data store.
- You can add a primary key even if it does not physically exist on the related database catalog, by adding constraints to the data store diagram.
- You cannot add a primary key if it does not physically exist on the related database catalog. You can flag only non-null conditions to be checked.
- You cannot add a primary key if it does not physically exist on the related database catalog. You can reverse engineer only the existing constraints.

Identify the name of the default WebLogic data source created for the Master Repository when setting up a JEE domain.
- odiMasterRepository.
- MasterRepository.
- odimasterrepository.
- ODIMasterRepository.

The source and target data stores are located on the same data server. Which statement is correct about the need for a Loading Knowledge Module to load the data on the target?
- Only a single-technology Integration Knowledge Module is required.
- Only a multi-technology Integration Knowledge Module is required.
- A multi-technology Loading Knowledge Module and a single-technology Integration Knowledge Module are required.
- Both a multi-technology Loading Knowledge Module and a multi-technology Integration Knowledge Module are required.

A Mapping that uses CDC does not load any source data and you want to check the SNP_CDC_SET table to find out the current window ID. In which database schema must you look to find this table?
- the schema hosting the Work repository.
- the schema hosting the Work schema of the default schema defined for your source data server.
- the schema hosting the Staging Area.
- SYS.

Identify two constraints that can be used to ensure uniqueness in ODI. (Choose two.)
- conditions.
- foreign keys.
- primary keys.
- alternate keys.
- not null.

Which statement is correct about choosing the join order in an ODI Mapping when defining a join?
- This option is inherited from reverse engineering.
- This option is always available.
- You can never make this choice.
- This option may be available only if the underlying technology supports it.
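The questions above about declaring a primary key in metadata and enforcing it with a CKM come down to a static check: rows violating the declared key are diverted into an E$ error table while valid rows flow on. A minimal Python sketch of that behavior (dictionaries stand in for rows; this is the concept, not the CKM's generated SQL):

```python
def check_primary_key(rows, key):
    """CKM-style static check: rows that duplicate the declared key
    go to an E$-like error list; valid rows flow on to the target."""
    seen, valid, errors = set(), [], []
    for row in rows:
        k = row[key]
        if k in seen:
            errors.append(row)   # would be inserted into E$
        else:
            seen.add(k)
            valid.append(row)
    return valid, errors

rows = [{"id": 1}, {"id": 2}, {"id": 1}]
valid, errors = check_primary_key(rows, "id")
print(len(valid), len(errors))  # 2 1
```

Declaring the key in ODI metadata, even when it does not exist in the database catalog, is what gives the CKM something to check against.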
You need to deploy ODI JEE Components on WebLogic Server. Where should you deploy them?
- on the Coherence Server.
- on the Administration Server.
- on the Managed Server.

You must monitor and manage a co-located stand-alone agent, OracleDIAgent1, by using the ODI plug-in for Enterprise Manager Cloud Control. Which is the correct command to start this agent on Linux?
- Create a Data Server by using the File technology and specify the various record formats while reverse engineering the file.
- Create a Data Server by using the XML technology, create an nXSD file by using the Native Format Builder, and then reverse engineer it in a Model.
- Create a Data Server by using the File technology, create an nXSD file by using the Native Format Builder, and then reverse engineer it in a Model.
- Create a Data Server by using the Complex File technology, create an nXSD file by using the Native Format Builder, and then reverse engineer it in a Model.

You are designing a load plan in which you must create multiple branches based on the value of a variable. How do you accomplish this by using Load Plan Editor?
- Add a case step and drag the variable into the case step.
- Create a scenario from the variable and add the scenario to the load plan to create a case step.
- Add a case step in the load plan and select a variable in the wizard.
- Drag the variable into the load plan and define a case step.

Which statement is correct about all expressions in a Mapping?
- It is possible to set the execution location on source, staging area, or target.
- All transformations are executed on the staging area.
- Only the source and target servers can be used to execute expressions.
- All expressions are executed on the source area.

Which are the two correct statements about Work repositories? (Choose two.)
- They contain project and security information.
- They contain data models and execution information.
- They contain data and security information.
- They contain data models and project information.

You are setting up the topology for an infrastructure with three different environments: Dev, QA, and Prod. How must you create Logical Schemas?
- Create one Logical Schema per Physical Schema.
- Create one Logical Schema per Physical Schema that stores the same type of data.
- Create one Logical Schema per Data Server.
- Create one Logical Schema per Context.

You want to modify the code generated by a failed Task and restart the session. How must you proceed?
- Open the Task in Operator, go to Code, click Edit, use Pre-execution Code to edit the code, and save it.
- It is not possible to modify the code once it has been generated.
- Open the Step in Operator, edit the code, and save it.
- Open the Task in Operator, go to Code, click Query/Execution Plan to edit the code, and save it.

Identify three capabilities of load plans. (Choose three.)
- restart from failed tasks.
- exception handling.
- native support for parallelism.
- support for Open Tools.
- restart of an agent.

You must split a model with many database tables into multiple sub-models based on their names. How must you proceed?
- Create new sub-models and drag data stores individually to each sub-model.
- Use the automatic distribution feature at the model level to automatically create the sub-models and move the data stores based on their names.
- Create new sub-models and leverage the automatic distribution feature at the sub-model level to automatically move the data stores based on their names.
- Create new models, drag the data stores individually to each model, and then drag the models to the parent model to create sub-models.

Which two objects can be dragged to a Mapping? (Choose two.)
- Variables.
- Datastores.
- Knowledge Modules.
- Reusable Mappings.

Identify two correct Restart values for parallel steps in load plans. (Choose two.)
- Restart from new session.
- Restart from failure.
- Restart from failed children.
- Restart all children.
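The Logical Schema question above is about keeping designs environment-independent: one logical schema is resolved to a different physical schema through the execution context. A minimal sketch of that lookup, assuming hypothetical context and schema names (the dictionary stands in for the ODI topology):

```python
# Hypothetical topology: (context, logical schema) -> physical schema
TOPOLOGY = {
    ("DEV",  "ORACLE_DWH"): "dev_server.DWH",
    ("QA",   "ORACLE_DWH"): "qa_server.DWH",
    ("PROD", "ORACLE_DWH"): "prod_server.DWH",
}

def resolve(context, logical_schema):
    """One logical schema, resolved to a different physical schema
    depending on the execution context."""
    return TOPOLOGY[(context, logical_schema)]

print(resolve("QA", "ORACLE_DWH"))  # qa_server.DWH
```

Because the mapping references only ORACLE_DWH, the same design runs unchanged against Dev, QA, or Prod by selecting a context at execution time.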
Which two statements are true about ODI web-based components? (Choose two.)
- ODI Console allows administrators to edit users' information.
- Enterprise Manager Cloud Control provides access to ODI data servers' settings.
- ODI sessions can be monitored in Enterprise Manager Cloud Control.
- ODI Console provides access to project and mapping details.

You are designing a load plan in which you must run Mappings A and B one after the other while running Mapping C at the same time. Which option represents the steps to accomplish this?
- Add a parallel step, add three serial steps underneath it, then add A to the first parallel step, B to the second one, and C to the last one.
- Create a scenario from the variable and add the scenario to the load plan to create a case step.
- Add a parallel step, add two parallel steps underneath it, then add A and B to one parallel step, and C to the other.
- Add a parallel step, add two serial steps underneath it, then add A and B to one serial step, and C to the other.

- Create a procedure and add two tasks. The first one must have a Command on Source that reads the email addresses from the database table and stores them in a bind variable, and the second must have a Command on Target that uses OdiSendMail to send the email by using the bind variable defined in the first task.
- Create a Knowledge Module. Add a task with a Command on Source that reads the email addresses from the database table and stores them in a bind variable, and a Command on Target that uses OdiSendMail to send the email using the bind variable defined in the Command on Source.
- Create a counter variable that gets the total number of email addresses stored in the database table. Next, create another variable that selects the email addresses from the database table. Then, create a loop in the packages by using the variables and an OdiSendMail step.
- Create a procedure. Add a task with a Command on Source that reads the email addresses from the database table and stores them in a bind variable, and a Command on Target that uses OdiSendMail to send the email using the bind variable defined in the Command on Source.

You are designing a Mapping. How are target and source tables defined?
- Their definition is imported with a reverse-engineering process directly from databases and other sources, but you must manually define all keys and constraints existing on the database.
- Their definition is imported with a reverse-engineering process directly from databases and other sources.
- For every load, the definition of the tables used, their columns, and constraints must be manually built.
- An external engine must be run to describe what metadata is needed for a Mapping.

You want to deploy the generated code manually in a source or target server, before executing a mapping in ODI. How can you accomplish this?
- You need not deploy the generated code manually in a source or target server. The ODI Agent coordinates the execution of commands prepared for the job and executes them on the correct server.
- You must deploy the generated code manually in a source or target server. You must then compile the generated code and then double-check whether the code prepared for the development server is the same as the code for the production server.
- You need not deploy the generated code manually in a source or target server. ODI prepares a package for the development environment.
- You must deploy the generated code manually in a source or target server. You must copy all procedures generated into the development, test, and production servers.

What must you set on the Definition tab of a variable to protect the variable value from being displayed in the Operator logs?
- Select the Secure Value check box.
- Select the Hide Value check box.
- Set the Keep History field to No History.
- Leave the Default Value field empty.
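The procedure options above describe ODI's Command on Source / Command on Target pattern: the source command fetches rows, and the target command runs once per source row using the fetched values as bind variables. A minimal Python sketch of that pattern (sqlite3 stands in for the source; `send_mail` is an illustrative stand-in for OdiSendMail, not the real tool):

```python
import sqlite3

sent = []

def send_mail(address):
    """Stand-in for OdiSendMail: record where the mail would go."""
    sent.append(address)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recipients (email TEXT)")
conn.executemany("INSERT INTO recipients VALUES (?)",
                 [("a@example.com",), ("b@example.com",)])

# Command on Source: select the addresses from the database table
for (email,) in conn.execute("SELECT email FROM recipients"):
    # Command on Target: executed once per source row, using the
    # source column as a bind value
    send_mail(email)

print(sent)  # ['a@example.com', 'b@example.com']
```

Packaging this pairing in a procedure is what makes it reusable across multiple packages.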
You must ensure that your Mappings do not run into connectivity issues when moving data from server A to server B by using an AGT agent that is running on server B. How must you test this by using ODI Studio running on machine C?. You must ensure that your Mappings do not run into connectivity issues when moving data from server A to server B by using an AGT agent that is running on server B. How must you test this by using ODI Studio running on machine C?. In Topology, test the connections to servers A and B by using the AGT agent. In Topology, test the connections to the AGT agent. In Topology, test the connections to servers A and B by using Local (No Agent). Identify two benefits that are unique to ODI JEE Agents. (Choose two.). high availability. minimal footprint. access to WebLogic connection pools. management in Enterprise Manager Cloud Control. Which statement is true about the IKM SQL Control Append that creates an intermediate integration table prefixed with I$?. The intermediate integration table prefixed with I$ is required to determine which records must be inserted. If Flow Control is not necessary, an I$ table is not. The intermediate integration table prefixed with I$ is created because the flow control is mandatory when doing an initial load. Neither the LKM nor the IKM create intermediate tables. Most of their work is performed in the ODI Agent memory. You want to draw directly in the data flow all the details about how the different servers involved are connected, in order to load a specific table by using ODI. What must you do to accomplish this?. In the ODI Interface palette, choose the more convenient graphic objects to link the involved servers. You need not draw directly in the data flow all the details about how the different servers involved are connected. ODI automatically designs the flow and how servers are connected. You need not draw directly in the data flow all the details about how the different servers involved are connected. 
The code will be the black box generated directly by ODI. You must draw directly in the data flow all the details about how the different servers involved are connected, and also specify with variables the passwords for connecting to the data contained in the servers. Which ODI Agent is deployed in Oracle WebLogic Server?. ODI Runtime Agent. ODI Standalone Agent. ODI JEE Agent. ODI Colocated Standalone Agent. You have to loop through a Mapping step three times in a package by using variables. Identify the correct variable data type to create the loop. numeric. alphanumeric. date. text. The workflow you are designing requires checking to see whether there are records available in a source table before doing anything. Which ODI tool must you use to implement this?. OdiWaitForData. OdiWaitForCDCData. OdiWaitForLogData. OdiWaitForTable. Which statement is true about the need to have a topology with an intermediate server, in order to use ODI correctly in a data warehouse project?. Data transformation must be distributed across several scalable nodes. It is mandatory to have an intermediate server. it is better to load data directly from sources into a data warehouse server. It is mandatory to collect data directly from sources to a data warehouse server by using real time replication processes. Identify the ODI tool used to write content into a file in a package. OdiOutFile. OdiFileCopy. OdiFileMove. OdiInFile. How do you provide a timeout value for an exception step?. by using a custom Groovy script. by using a variable. by using the Timeout filed. by using a Timeout knowledge module. Updates have been made to Mappings in a package. What must you do to ensure that the Production team runs a scenario that contains those updates while preserving the existing scenario’s schedule?. Regenerate the existing scenario. Nothing, the existing scenario will automatically be updated. Generate a new scenario and create a new schedule. 
Generate a new scenario and attach the previous scenario’s schedule to it. What is the main benefit of using consistent set journalizing compared to simple journalizing?. Consistent set journalizing runs faster than simple journalizing. Consistent set journalizing always uses Oracle GoldenGate. Consistent set journalizing treats each data store individually. Consistent set journalizing provides a guarantee of consistency of the captured changes. Your project requires a loop through the same package logic 24/7. You must be able to purge those executions when required. How do you accomplish this?. Link the last step of the package to its first step to create an infinite loop. Create a scenario from the package, add the scenario as the last step in the package, run it asynchronously, and then save and regenerate this scenario. Create a scenario from the package, add the scenario as the last step in the package, run it asynchronously, and then save. Drag the package to the last step in the package, run it asynchronously, and then save. How are the domains of ODI Agents configured?. by using Upgrade Assistant. by using ODI Studio. by using Domain Creation Assistant. by using Configuration Wizard. You are designing a package in which you want a certain step to not appear in the ODI logs. How do you accomplish this?. Periodically schedule a scenario that uses OdiPurgeLog to remove that step from the ODI logs. In Package Editor, click the step, go to Advanced tab, and set Log Steps in the journal to Never. Right-click the step in Package Editor and select the No Logging menu item. Modify Operator Navigator settings to disable that step from being displayed. In ODI, can a check constraint be added to metadata?. No, it is not possible to add conditions more that those that can be reverse-engineered. Yes, by adding a constraint to the datastore. No, you can declare them only customizing knowledge modules. 
Yes, but you need to execute additional scripts on the database in order to make it work. In an ODI Interface, can filters be added automatically?. Yes, if they were reverse-engineered. Yes, if they exist in the datastore definition. No, filters can never be added automatically. No, because each interface instance is unique and different from others. Select the correct statement. In an ODI interface, can each column of the target have, at most, one mapping in a given dataset?. No, a target column can have multiple defined mappings. No, a variable is used to evaluate if the target column can have more than one mapping. Yes, a target column can have a unique and well-defined mapping. Yes, a target column can have a unique definition, but an ODI procedure can define additional mappings for that column. You are loading a file into a database but the file name is unknown at design time and will have to be passed dynamically to a Package at run time. How do you achieve this?. Create a variable, use it in Topology at the File dataserver level, and add it to a package as a Declare Variable step. Create a variable, use it in Topology at the File dataserver level, and add it to a package as a Set Variable step. Create a variable, use it as the Resource Name of the File datastore, and add it to a package as a Declare Variable step. Create a variable, use it as the Resource Name of the File datastore, and add it to a package as a Set Variable step. In an ODI interface, can source tables be declared as not joined?. Yes, ODI generates the code calculating a sample of the data, profiling it, and deciding automatically if a join exists. Yes, it is possible to create an ODI procedure to declare joins for the source tables. No, in ODI, it is mandatory that all the source data stores are joined directly or indirectly. No, by default, every added data store in the source declaration of an interface is joined with an inner join.
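The "joined directly or indirectly" requirement in the last question amounts to a connectivity check over the join graph of the source data stores. The following is a minimal sketch of that idea (illustrative names and logic only, not ODI's actual validation code):

```python
# Check that every source datastore is reachable from every other one
# through the declared joins, i.e. the join graph is connected.
# Hypothetical datastore names; not ODI internals.

def sources_fully_joined(datastores, joins):
    """datastores: list of names; joins: list of (left, right) name pairs."""
    if not datastores:
        return True
    adjacency = {ds: set() for ds in datastores}
    for left, right in joins:
        adjacency[left].add(right)
        adjacency[right].add(left)
    # Traverse from an arbitrary datastore and see what is reachable.
    seen, stack = {datastores[0]}, [datastores[0]]
    while stack:
        current = stack.pop()
        for neighbor in adjacency[current]:
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return seen == set(datastores)

# PRODUCTS is joined only indirectly, via ORDER_LINES: still valid.
print(sources_fully_joined(
    ["ORDERS", "CUSTOMERS", "ORDER_LINES", "PRODUCTS"],
    [("ORDERS", "CUSTOMERS"), ("ORDERS", "ORDER_LINES"),
     ("ORDER_LINES", "PRODUCTS")]))  # True
# Two sources with no join at all: rejected.
print(sources_fully_joined(["ORDERS", "CUSTOMERS"], []))  # False
```

"Indirectly joined" simply means reachable through a chain of joins, which is why the traversal above accepts PRODUCTS.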
In ODI, when you add a column definition in a table, can you also use a technology-specific datatype for it?. Yes, it is possible to declare it, and if the desired datatype is not in the list available, you can also add it to the Topology datatypes definition. Yes, it is possible to declare it but only if it is a standard SQL datatype. Yes, it is possible but in any case, you need to forward-engineer the datatype via the JDBC driver. No, datatypes are assigned by default as string variables. In ODI, is it possible to reverse a COBOL flat file using a Copybook definition?. Yes, only using the file driver. Yes, using the file and complex file drivers. Yes, but only if the source file is an EBCDIC file. No, you need to manually add the columns definition in all the cases. In ODI Operator, is it possible to view code at the second level (the step)?. No, it is only possible to see code at the first, higher level (the session). Yes, all generated code can be viewed together at the step level, with a complete list of executed code. No, it is possible to view code at the third level (the task level). You have to load three tables A, B, and C in parallel, wait for them to be loaded, and then load a fourth table D. Which steps do you have to go through in order to do this?. Generate three scenarios to load A, B, and C, add them to a Package, set their Synchronization mode to Asynchronous, use an OdiWaitForChildSession step before adding an Interface to load D. Generate three scenarios to load A, B, and C, add them to a package, set their Synchronization mode to Synchronous, use an OdiWaitForChildSession step before adding an interface to load D. Add three Interfaces to load A, B, and C to a package, use an OdiWaitForChildSession step before adding another Interface to load D.
Generate three scenarios to load A, B, and C, add them to a package, use an OdiWaitForChildSession step before adding an Interface to load D. In Topology, can a query be defined to retrieve native database sequences?. Yes, in the technology details in the Topology submodule, there is a tab that defines queries for specific database elements. Yes, but you need to specify an option in the knowledge modules of the mapping using sequences. No, only the timestamp definition can be set in Topology. No, the query for native sequences is defined in ODI Data Models, under ODI Studio Designer. Identify two true statements regarding the ODI web-based components. Enterprise Manager Fusion Middleware Control Console provides access to ODI data servers settings. ODI Console allows administrators to edit users' information. ODI sessions can be monitored in Enterprise Manager Fusion Middleware Control Console. ODI Console provides access to Project and Interface details. In an ODI interface, is it mandatory to use a join to describe a lookup?. Yes, there are no other graphical objects to describe a lookup. Yes, it is not possible to set anything other than a join in an ODI Interface. No, there is a graphical object dedicated to setting a lookup in the source area of an interface. No, you can use variables in ODI to declare lookups. In an ODI interface, if a business rule is declared to calculate a sum aggregation, does a knowledge module need to be customized in order to create the correct code?. Yes, business rules and the physical implementation are strictly interlayed in ODI. Yes, a customization is needed only for aggregation functions. No, aggregation code is generated automatically by ODI according to the mapping logic rules. No, it is not needed, but a variable must be declared in the source mapping in order to generate the correct sum.
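The "aggregation code is generated automatically" behavior in the last question follows a simple rule: every mapped expression that does not contain an aggregate function is placed in the GROUP BY clause of the generated statement. A simplified sketch of that rule (hypothetical table and column names; not ODI's actual code generator):

```python
# Derive a SELECT with an automatic GROUP BY from target mappings,
# mimicking the rule ODI applies: non-aggregated expressions group the rows.

AGGREGATE_FUNCTIONS = ("SUM(", "AVG(", "MIN(", "MAX(", "COUNT(")

def is_aggregated(expression):
    """True if the expression contains a recognized aggregate function."""
    return any(func in expression.upper() for func in AGGREGATE_FUNCTIONS)

def generate_select(source_table, mappings):
    """mappings: dict of target column name -> source expression."""
    select_list = ", ".join(f"{expr} AS {col}" for col, expr in mappings.items())
    group_by = [expr for expr in mappings.values() if not is_aggregated(expr)]
    sql = f"SELECT {select_list} FROM {source_table}"
    # Emit GROUP BY only when at least one mapping is actually aggregated.
    if group_by and any(is_aggregated(expr) for expr in mappings.values()):
        sql += " GROUP BY " + ", ".join(group_by)
    return sql

print(generate_select("SRC_SALES", {
    "DEPT_ID": "SRC_SALES.DEPT_ID",
    "TOTAL_AMOUNT": "SUM(SRC_SALES.AMOUNT)",
}))
# SELECT SRC_SALES.DEPT_ID AS DEPT_ID, SUM(SRC_SALES.AMOUNT) AS TOTAL_AMOUNT
#   FROM SRC_SALES GROUP BY SRC_SALES.DEPT_ID
```

This is why no knowledge module customization is needed for a sum: the mapping expressions alone determine both the SELECT list and the GROUP BY.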
Is it possible to manage five metadata work repositories together with ODI and a single master repository in order to have a software lifecycle generation respecting the best practices that my company uses?. No, every work repository requires its own separate master repository. No, unless I also have live data environments, because it is mandatory that the production work repository is installed on the data production server, the test work repository on the test data server, and so on. Yes, it is possible to create separate metadata environments with work repositories that define, for instance, a current level of development work repository, next level development, maintenance, production, and so on. Yes, but you must also install an agent for every work repository and compile all its Java archives separately. You are running a Scenario in a Load Plan using a Run Scenario step. This Scenario contains Interfaces. In case of failure in this Scenario, you would like to restart the Interface that failed from the beginning. Which Restart Type option do you need to pick for the Run Scenario step?. Restart from new session. Restart from failed step. Restart from failed task. Restart from failure. Select the correct statement. When running an interface in ODI, is it possible to see a simulation of the code?. No, that is a current limitation in ODI. No, but it is possible to query the metadata repository in order to see the generated code. Yes, but it is only possible to see a code simulation on ODI Console. Yes, before executing the interface, it is possible to set a flag to request the code simulation. One of the methods used to retrieve column properties in a knowledge module is odiRef.getColList. What is the name of the parameter that retrieves the datatype of a mapped target column?. DEST_DT. SOURCE_DT. COL_DESC. COL_FORMAT. What is a relationship in a matching process?. It is how many matched records exist globally.
It is matching functions that determine how well two records match each other, for a given identifier. It is a link between two records, created by automatic match rules and manual decisions. A Relationship determines how comparison results are interpreted during the matching process. What is a Load Plan?. A frozen version of a package. An executable object that contains a hierarchy of steps. A substitute for packages or scenarios. A web service. To declare a mapping on a target column, does a developer have to manually write the required database functions?. Yes, ODI only allows manual insertion of information related to source-target mappings. Yes, either the work area can be used to write the functions or a configuration file can be prepared for ODI to read. No, a developer must prepare a configuration file and associate a user function to it. No, a graphical expression editor can be used to drag and drop columns and the required function definitions. The ODI application adapter for Hadoop______. connects to a Work Repository. is a Big Data Connector. connects to a Master Repository. is an Agent configuration. Select two infrastructure components that can be used to deploy both ODI and EDQ components. JDeveloper. Oracle WebLogic Server. IBM WebSphere Application Server. Oracle SOA Suite. Select two correct statements about Results Books. A Results Book can contain tabs containing the output from several different EDQ processor steps. Results Books always link to the latest version of a process. Results Books can be exported to a multi-sheet MS Excel Workbook. A Results Book page will always look the same as the output of the processor it is linked to. Results Books can combine pages from many projects. Select the two correct statements about the Date Profiler. It can profile string dates written in a variety of formats, such as DD/MM/YYYY or MM/DD/YYYY. It provides a distribution for the day in the year, such as February 21, regardless of the year.
It allows the EDQ user to define a valid range of dates. By clicking a date in blue, the user can drill down to the records that carried that value. It rejects February 29 as an invalid date. Select the two correct reasons that might lead you to use EDQ Transformation Processors rather than transform data in ODI. EDQ is faster than ODI at performing transformations. The data needs to be mapped to a coded value. Some data needs to be aggregated, producing a total value from multiple rows. When converting string dates, EDQ does not need all source fields to be valid dates. When a standardized or derived data value is needed for matching records. Select the three correct options for Token Checking in the EDQ parser. Checking against a list of values. Checking for duplicates. Checking against a list of patterns. Checking for typing errors. Checking against a list of regular expressions. Select the statement that best describes the difference between Server and Client Data Store access. There is no difference between Server and Client Data Store access, both can be used conveniently. Data is accessed from the Server only if it is in the data store. Data is accessed from the Client only if it is an Excel file. A local file on a laptop can be accessed using Client access, or from the Server if a copy of the file is on the EDQ server landing area. Select four correct scenarios where EDQ parsing is beneficial. There are separate attributes for Titles, Forenames, and Family names, but the values are not always in the right fields. Names are well-structured, but the Gender field is often missing. The data contains product descriptions in a string field and you need to extract the size and weight of each product. Addresses are contained in a single string field and are difficult to verify. Phone numbers are sometimes embedded in the customer name and address. Select two correct answers. ODI and EDQ can share the same infrastructure if __________.
They use an Oracle DB as a repository and WebLogic to host the web application. They use an Oracle DB as a repository and WebSphere to host the web application. They use any DB as a repository and any compatible web application server. ODI and EDQ cannot share the same infrastructure; they both have different architectures. Is it possible to monitor a JEE Agent using a plug-in directly in Enterprise Manager Fusion Middleware Control Console?. Yes, it is possible to check that an Agent is active or if it has any issues. No, you have to run some dedicated Java scripts in order to monitor the Agent behavior. Yes, it is possible to check that an Agent is active, but you must also install and configure Oracle Tuxedo. No, you can simply check the metadata available using ODI Console as a web application. Is it possible to invoke an ODI job as a public web service to execute it?. No, it is only possible to run jobs internally or via the command line. Yes, both Standalone and JEE Agents support web services. Yes, but you must use Oracle BPEL in order to process the job invocation. No, there are only internal specific application program interfaces that you can use to run a job. Is it possible to import a knowledge module into project (A) and then reference that knowledge module in another project (B)?. Yes, knowledge modules can be imported. Yes, but three specific variables have to be declared to indicate the data environment. No, to use a knowledge module, it must be imported into each project. No, because a variable needs to be set to define it as a global knowledge module. In an ODI package, is it possible to use a tool to purge logs based on the name of the agent that executed the session?. No, it is only possible to purge the logs of the sessions that ran in a specific period of time. No, it is only possible to purge the logs of the properly ended sessions. Yes, it is possible, but you also need to indicate the ODI user that ran the session.
Yes, it is one of the parameters available to purge the log. In an ODI interface, to declare a lookup, is there a dedicated graphical object?. No, in ODI you can declare a lookup only using a join definition. No, in ODI you can declare a lookup only using an ODI procedure. Yes, there is a specific object; you click it and a wizard helps the developer set the lookup. Yes, there is a specific object and you need to prepare a configuration file to use it. If an ODI user function has one syntax and one or more implementations, can it be used in an ODI interface mapping?. Yes, but to be mapped in an ODI interface it also needs an ODI variable. No, it can only have one syntax and one implementation for all technologies. Yes, but you need to write a custom knowledge module. Yes, during execution, it will substitute the specific technology implementation. Identify two true statements regarding the ODI SDK. The ODI SDK is used to load data between Teradata and Oracle. The ODI SDK can be used to embed ODI processes into another product. The ODI SDK is required to be used by ETL developers in order to develop ODI processes. The ODI SDK allows developers to execute most ODI operations through a Java program. Identify two parameters of the OGG JKMs that must be set in order to configure OGG through ODI. SRC_OGG_OBJECT_GROUP. SYNC_JRN_DELETE. STG_MANAGER_PORT. AUTO_CONFIGURATION. Identify two operations that can be achieved using the ODI SDK. Create users. Create interfaces. Create Master and Work repositories. Create security profiles. Identify three objects from which scenarios can be generated. Interface. Variable. Package. Knowledge modules. Trail. Extract. Identify the operation that ensures that referential integrity is maintained while loading changes detected by the ODI CDC framework. Unlock Subscriber. Extend Window. Lock Subscriber. Purge Journal. Identify the database view ODI uses when loading the journalized data in an Interface. JV$D view.
JV$ view. JV$I view. Identify one correct statement regarding Exception steps in Load Plans. Exception steps cannot be defined in a Load Plan. Exception steps can only be defined for Parallel Steps in a Load Plan. Exception steps can only be defined for Serial Steps in a Load Plan. Exception steps can be defined for most Step types in a Load Plan. When profiling data with EDQ, how can you ensure that the latest data is being used?. By creating a snapshot. By refreshing a snapshot by running it. New data is automatically loaded by EDQ, no action is required. By creating a data store. Which two ODI knowledge modules are included in the Application Adapter for Hadoop?. IKM Oracle Incremental Update. IKM Hive Transform. IKM SQL to File Append. IKM File to Hive. Which tables store ODI/OBIEE lineage information?. Master repository tables. Work repository tables. Staging tables. OBIEE repository tables. Which property file parameter stores the location of the OBIEE web catalog folder?. OBIEE_WEBCAT_FOLDER_TO_EXPORT. OBIEE_WEBCAT_EXPORT_FILE. OBIEE_WEBCAT_FILE. OBIEE_WEBCAT. Which one of the following attributes should always be provided to EDQ AV as a separate attribute if available?. Name. City. Country. State or Province. Zip or Postal code. After invoking an Enterprise Data Quality Job from ODI, where can you monitor the detailed progress of that Job?. In Operator. In Enterprise Data Quality. In ODI Console. In Enterprise Manager. Where can users edit scenarios in ODI?. Designer Navigator. Operator Navigator. Command line. Nowhere, scenarios cannot be edited. When the source and target data are persisted in the same data server, in an ODI interface, do you need an LKM and an IKM to load the data on the target?. Yes, both an LKM multi-technology and an IKM multi-technology are required. Yes, an LKM multi-technology and an IKM single-technology are required. No, only an IKM multi-technology is required. No, only an IKM single-technology is required.
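One of the questions above asks about odiRef.getColList and its DEST_DT parameter. In knowledge module templates, getColList expands a pattern once per column, replacing bracketed attribute names such as [COL_NAME] and [DEST_DT] with each column's properties. The expansion behavior can be emulated with a short sketch (a simplified Python emulation with hypothetical column values, not the real Java substitution API):

```python
# Emulate the pattern expansion performed by a KM template call such as
# <%=odiRef.getColList("", "[COL_NAME] [DEST_DT]", ", ", "", "")%>.
# Simplified illustration only; column metadata here is made up.

def get_col_list(columns, pattern, separator):
    """columns: list of dicts keyed by attribute names like COL_NAME, DEST_DT."""
    parts = []
    for col in columns:
        text = pattern
        for key, value in col.items():
            # Each [ATTRIBUTE] placeholder is replaced by that column's value.
            text = text.replace(f"[{key}]", value)
        parts.append(text)
    return separator.join(parts)

columns = [
    {"COL_NAME": "CUSTOMER_ID", "DEST_DT": "NUMBER"},
    {"COL_NAME": "FIRST_NAME", "DEST_DT": "VARCHAR2"},
]
print(get_col_list(columns, "[COL_NAME] [DEST_DT]", ", "))
# CUSTOMER_ID NUMBER, FIRST_NAME VARCHAR2
```

This is why DEST_DT is the attribute that yields the datatype of the mapped target column: it is one of the bracketed properties the pattern can reference for each column in the list.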