Universal Containers (UC) is concerned about the accuracy of its Customer information in Salesforce. It has recently created an enterprise-wide trusted-source MDM (master data management) system for Customer data, which it has certified as accurate. UC has over 20 million unique customer records in both the trusted source and Salesforce. What should an Architect recommend to ensure the data in Salesforce is identical to the MDM?
An Architect needs information about who has created, changed, or deleted certain fields within the past four months.
How can the Architect access this information?
Exporting the setup audit trail can provide information about who has created, changed, or deleted certain fields within the past four months. The setup audit trail tracks the recent setup changes that administrators and other users have made to the organization. The setup audit trail history shows the 20 most recent changes in the Setup area, but administrators can download a report (in CSV format) covering up to six months of setup history.
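The same history is also exposed through the standard, queryable SetupAuditTrail object, so it can be pulled via SOQL instead of downloaded from Setup. A minimal sketch that only builds the query string (how the query is actually executed against the org, and the client library used for it, are left out):

```python
def setup_audit_query(days_back: int = 120) -> str:
    """Build a SOQL query for SetupAuditTrail entries in the last N days.

    SetupAuditTrail is a standard object; Action, Section, CreatedDate,
    and CreatedBy.Name are standard fields on it. LAST_N_DAYS is a
    standard SOQL date literal. 120 days approximates four months.
    """
    return (
        "SELECT Action, Section, CreatedDate, CreatedBy.Name "
        "FROM SetupAuditTrail "
        f"WHERE CreatedDate = LAST_N_DAYS:{days_back} "
        "ORDER BY CreatedDate DESC"
    )

print(setup_audit_query())
```

Note that the API-accessible window is the same six-month limit that applies to the CSV download, so `days_back` beyond roughly 180 returns nothing extra.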
Universal Containers (UC) is in the process of selling half of its company. As part of this split, UC's main Salesforce org will be divided into two orgs: Org A and Org B. UC has given these requirements to its data architect:
1. The data model for Org B will change drastically, with different objects, fields, and picklist values.
2. Three million records will need to be migrated from Org A to Org B for compliance reasons.
3. The migration will need to occur within the next two months, prior to the split.
Which migration strategy should a data architect use to successfully migrate the data?
Due to security requirements, Universal Containers needs to capture specific user actions, such as login, logout, file attachment download, package install, etc. What is the recommended approach for defining a solution for this requirement?
Universal Containers (UC) is in the process of implementing an enterprise data warehouse (EDW). UC needs to extract 100 million records from Salesforce for migration to the EDW.
What data extraction strategy should a data architect use for maximum performance?
Installing a third-party AppExchange tool (option A) is not a good solution: it incurs additional cost and dependencies, and may not handle such a large volume of data efficiently. Calling the REST API in successive queries (option B) is also not a good solution, as it will run into API limits and performance issues at this volume. Using the Bulk API in parallel mode without chunking (option D) is likewise not a good solution, as it can still cause timeouts and errors on a 100-million-record extract. The recommended approach is therefore the Bulk API with PK chunking enabled, which splits the query into manageable primary-key (record ID) ranges.
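The eliminations above point at the Bulk API with PK chunking, which is enabled by sending the real `Sforce-Enable-PKChunking` request header when creating the Bulk API job. A minimal sketch of building those headers, under stated assumptions: the session ID below is a placeholder, no request is actually sent, and the rest of the job-creation payload is omitted:

```python
def bulk_query_headers(session_id: str, chunk_size: int = 250_000) -> dict:
    """Request headers for a Bulk API query job with PK chunking enabled.

    'Sforce-Enable-PKChunking' is a documented Bulk API header; chunkSize
    caps records per chunk (default 100,000, maximum 250,000). The classic
    Bulk API authenticates with the X-SFDC-Session header.
    """
    return {
        "X-SFDC-Session": session_id,  # placeholder; obtained via login in practice
        "Content-Type": "application/xml; charset=UTF-8",
        "Sforce-Enable-PKChunking": f"chunkSize={chunk_size}",
    }

print(bulk_query_headers("dummy-session-id"))
```

With chunking enabled, Salesforce splits the 100-million-record query into batches of at most `chunk_size` records by ID range, so each batch stays small enough to avoid the timeouts that plague a single unchunked extract.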