You need to process data on the nodes within a Hadoop cluster. To accomplish this task, you write mapper and reducer transformations and use the Pentaho MapReduce job entry to execute the MapReduce job on the cluster.
In this scenario, which two steps are required within the transformations? (Choose two.)
What are two ways to schedule a PDI job stored in the repository? (Choose two.)
You need to populate a fact table with the corresponding surrogate keys from each dimension table.
Which two steps accomplish this task? (Choose two.)
You have a PDI job that gets a list of variables, followed by three transformation entries. Since these three transformation entries are not dependent on each other, you want to execute them at the same time.
According to Hitachi Vantara best practices, how do you accomplish this task?