At ValidExamDumps, we consistently monitor updates to the Salesforce MuleSoft-Platform-Architect-I exam questions by Salesforce. Whenever our team identifies changes in the exam questions, exam objectives, exam focus areas, or exam requirements, we immediately update our exam questions for both the PDF and online practice exams. This commitment ensures our customers always have access to the most current and accurate questions. By preparing with these actual questions, our customers can successfully pass the Salesforce Certified MuleSoft Platform Architect I exam on their first attempt without needing additional materials or study guides.
Other certification materials providers often include outdated questions that Salesforce has removed from the MuleSoft-Platform-Architect-I exam. These outdated questions lead customers to fail their Salesforce Certified MuleSoft Platform Architect I exam. In contrast, we ensure our question bank includes only precise and up-to-date questions, guaranteeing their presence in your actual exam. Our main priority is your success in the Salesforce MuleSoft-Platform-Architect-I exam, not profiting from selling obsolete exam questions in PDF or Online Practice Test form.
Refer to the exhibit.
What is the best way to decompose one end-to-end business process into a collaboration of Experience, Process, and System APIs?
A) Handle customizations for the end-user application at the Process API level rather than the Experience API level
B) Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs
C) Always use a tiered approach by creating exactly one API for each of the 3 layers (Experience, Process and System APIs)
D) Use a Process API to orchestrate calls to multiple System APIs, but NOT to other Process APIs
Correct Answer: Allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs.
*****************************************
>> All customizations for the end-user application should be handled in the Experience API layer only, not in the Process API layer.
>> We should use a tiered approach, but NOT always by creating exactly one API for each of the 3 layers. There may well be a single Experience API, but there are often multiple Process APIs and System APIs. System APIs in particular will almost always number more than one, as they are the smallest modular APIs built in front of the end systems.
>> Process APIs can call System APIs as well as other Process APIs. There is no anti-pattern in API-led connectivity saying Process APIs should not call other Process APIs.
So, the right answer in the given set of options that makes sense per API-led connectivity principles is to allow System APIs to return data that is NOT currently required by the identified Process or Experience APIs. This way, future Process APIs can make use of that data from the System APIs, and we need NOT touch the System layer APIs again and again.
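The pattern above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not a real MuleSoft interface: the function names, the customer record, and its fields are all invented. The System API returns everything the backend offers, and the Process API selects only what its current consumers need.

```python
def system_api_get_customer(customer_id: int) -> dict:
    """System API sketch: a thin wrapper over the backend that returns ALL
    available fields, including ones no current consumer needs yet."""
    backend_record = {
        "id": customer_id,
        "name": "Ada Lovelace",
        "email": "ada@example.com",
        "loyalty_tier": "gold",    # not needed by any current Process/Experience API
        "marketing_opt_in": True,  # not needed yet either
    }
    return backend_record

def process_api_get_profile(customer_id: int) -> dict:
    """Process API sketch: orchestrates System APIs and keeps only the
    fields its consumers currently require."""
    record = system_api_get_customer(customer_id)
    return {"id": record["id"], "name": record["name"], "email": record["email"]}

profile = process_api_get_profile(42)
# A future Process API could start using loyalty_tier or marketing_opt_in
# without any change to the System API.
```

Because the System API already exposes the extra fields, adding a new consumer later is purely a Process-layer change.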
A developer for a transportation organization is implementing exactly one processing functionality in a Reservation Mule application to process and store passenger
records. This Reservation application will be deployed to multiple CloudHub workers/replicas. It is possible that several external systems could send duplicate passenger records
to the Reservation application.
An appropriate storage mechanism must be selected to help the Reservation application process each passenger record exactly once as much as possible. The selected storage
mechanism must be shared by all the CloudHub workers/replicas in order to synchronize the state information to assist attempting exactly once processing of each passenger
record by the deployed Reservation Mule application.
Which type of simple storage mechanism in Anypoint Platform allows the Reservation Mule application to update and share data between the CloudHub workers/replicas exactly
once, with minimal development effort?
Processing Requirements and Storage Mechanism:
The Reservation Mule application will be deployed to multiple CloudHub workers/replicas, meaning that each worker must share state information to handle records exactly once. This requires a shared storage mechanism where state can be stored and accessed by multiple instances to avoid duplicate processing of the same records.
A Persistent Object Store in Anypoint Platform can be used to store records in a way that is accessible across multiple workers, providing a reliable mechanism for 'exactly once' processing.
Evaluating the Options:
Option A (Correct Answer): A Persistent Object Store is designed to retain data across different application instances and can be shared by all workers on CloudHub. It helps achieve idempotency by ensuring that a record is processed exactly once.
Option B: Runtime Fabric Object Store is used for applications deployed in Anypoint Runtime Fabric, not CloudHub. This option would not be compatible with the CloudHub deployment.
Option C: A Non-persistent Object Store does not retain data across application restarts or different instances, making it unsuitable for the requirement of synchronized storage for exactly-once processing.
Option D: An In-memory Mule Object Store is local to each worker and is not shared across instances, so it does not meet the requirement for a shared storage mechanism accessible to all CloudHub workers.
Conclusion:
Option A is the correct answer, as a Persistent Object Store allows data sharing across multiple CloudHub workers, enabling them to synchronize and achieve 'exactly once' processing of passenger records with minimal development effort.
Refer to MuleSoft's documentation on Object Store configurations and usage for best practices on handling state across distributed instances.
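The idempotency pattern that a shared Persistent Object Store enables can be sketched as follows. This is a minimal illustration, assuming an invented record shape: the dict below merely stands in for Object Store v2, which all CloudHub workers of the same application share; in a real Mule flow the check and write would be `objectstore:contains` and `objectstore:store` operations.

```python
shared_object_store = {}   # stands in for the shared Persistent Object Store
processed_records = []     # stands in for the downstream processing/persistence step

def process_passenger_record(record: dict) -> bool:
    """Return True if the record was processed, False if it was a duplicate
    already handled by this or another worker."""
    key = record["passenger_id"]
    if key in shared_object_store:   # another worker (or a retry) got here first
        return False
    shared_object_store[key] = True  # mark as seen in the shared store
    processed_records.append(record) # actual processing happens once
    return True

assert process_passenger_record({"passenger_id": "P-100"}) is True
assert process_passenger_record({"passenger_id": "P-100"}) is False  # duplicate skipped
```

Note that a truly atomic check-and-set matters under concurrency; in Mule this is why the shared, persistent store (rather than per-worker memory) is the right primitive.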
A Mule application exposes an HTTPS endpoint and is deployed to the CloudHub Shared Worker Cloud. All traffic to that Mule application must stay inside the AWS VPC.
To what TCP port do API invocations to that Mule application need to be sent?
Correct Answer: 8082
*****************************************
>> Ports 8091 and 8092 are to be used when keeping your HTTP and HTTPS apps, respectively, private to the LOCAL VPC.
>> Those two ports do not apply to the Shared AWS VPC / Shared Worker Cloud.
>> 8081 is to be used when exposing your HTTP endpoint app to the internet through Shared LB
>> 8082 is to be used when exposing your HTTPS endpoint app to the internet through Shared LB
So, API invocations should be sent to port 8082 when calling this HTTPS based app.
https://docs.mulesoft.com/runtime-manager/cloudhub-networking-guide
https://help.mulesoft.com/s/article/Configure-Cloudhub-Application-to-Send-a-HTTPS-Request-Directly-to-Another-Cloudhub-Application
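The port rules quoted above can be captured in a small lookup table. A sketch, with the exposure labels (`"shared-lb"`, `"vpc-local"`) invented for illustration:

```python
# CloudHub port rules as described in the explanation above.
CLOUDHUB_PORTS = {
    ("http", "shared-lb"): 8081,   # HTTP exposed to the internet via the Shared LB
    ("https", "shared-lb"): 8082,  # HTTPS exposed via the Shared LB
    ("http", "vpc-local"): 8091,   # HTTP kept private to the local VPC
    ("https", "vpc-local"): 8092,  # HTTPS kept private to the local VPC
}

def port_for(scheme: str, exposure: str) -> int:
    """Return the CloudHub port for a given scheme and exposure mode."""
    return CLOUDHUB_PORTS[(scheme, exposure)]

# The HTTPS app in this question runs on the Shared Worker Cloud, so its
# invocations go to the Shared LB HTTPS port:
assert port_for("https", "shared-lb") == 8082
```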
An organization has created an API-led architecture that uses various API layers to integrate mobile clients with a backend system. The backend system consists of a number of specialized components and can be accessed via a REST API. The process and experience APIs share the same bounded-context model that is different from the backend data model. What additional canonical models, bounded-context models, or anti-corruption layers are best added to this architecture to help process data consumed from the backend system?
Correct Answer: Create a bounded-context model for the system layer to closely match the backend data model, and add an anti-corruption layer to let the different bounded contexts cooperate across the system and process layers
*****************************************
>> Canonical models are not an option here because the organization has already invested effort in creating bounded-context models for the Experience and Process APIs.
>> Anti-corruption layers for ALL APIs are unnecessary because, as stated, the Experience and Process APIs share the same bounded-context model. Only the System layer APIs need to choose their approach now.
>> So, having an anti-corruption layer just between the Process and System layers works well. Also, to speed up the approach, the System APIs can mimic the backend system's data model.
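What that anti-corruption layer does can be sketched as a translation function. All field names below are invented for illustration; the point is that legacy backend naming stays confined to the System layer, and the Process/Experience bounded-context model never sees it.

```python
def backend_to_process_model(system_record: dict) -> dict:
    """Anti-corruption layer sketch: translate a System API payload, which
    mimics the backend data model, into the bounded-context model shared
    by the Process and Experience APIs."""
    return {
        "accountId": system_record["ACCT_NO"],  # legacy backend field names
        "displayName": f'{system_record["FRST_NM"]} {system_record["LST_NM"]}',
        "status": "active" if system_record["STAT_CD"] == "A" else "inactive",
    }

system_payload = {"ACCT_NO": "0042", "FRST_NM": "Ada", "LST_NM": "Lovelace", "STAT_CD": "A"}
assert backend_to_process_model(system_payload) == {
    "accountId": "0042",
    "displayName": "Ada Lovelace",
    "status": "active",
}
```

If the backend model changes, only this one translation (and the System APIs that mimic the backend) need updating; the upper layers keep their stable model.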
A large lending company has developed an API to unlock data from a database server and web server. The API has been deployed to Anypoint Virtual Private Cloud
(VPC) on CloudHub 1.0.
The database server and web server are in the customer's secure network and are not accessible through the public internet. The database server is in the customer's AWS
VPC, whereas the web server is in the customer's on-premises corporate data center.
How can access be enabled for the API to connect with the database server and the web server?
Scenario Overview:
The API resides in Anypoint Virtual Private Cloud (VPC) on CloudHub 1.0, where it requires connectivity to both an AWS-hosted database server and an on-premises web server.
Both servers are isolated from the public internet: the database server is within the customer's AWS VPC, and the web server is within the customer's on-premises corporate data center.
Connectivity Requirements:
To connect to the AWS database server from the API in Anypoint VPC, VPC peering is ideal. This would allow a private network connection between the MuleSoft Anypoint VPC and the customer's AWS VPC, enabling secure, direct access to the database.
To connect to the on-premises web server, a VPN tunnel is suitable. This would establish a secure, encrypted connection from the Anypoint VPC to the customer's corporate data center, allowing secure data flow between the API and the on-premises web server.
Analysis of Options:
Option A (Correct Answer): Setting up VPC peering with AWS VPC enables private network connectivity with the database server, while a VPN tunnel to the on-premises data center allows secure access to the web server. This combination meets the requirements for secure, controlled access to both resources.
Option B: VPC peering alone would not suffice because it does not support a connection from the Anypoint VPC directly to an on-premises network. A VPN is necessary for on-premises access.
Option C: Setting up a transit gateway would provide connectivity within AWS but would not enable direct connectivity from CloudHub to the on-premises network.
Option D: VPC peering with the on-premises network is not possible because VPC peering is typically used to connect two VPCs, not a VPC with an on-premises network.
Conclusion:
Option A is the correct choice, as it provides a complete solution by using VPC peering for AWS VPC connectivity and a VPN tunnel for secure on-premises connectivity. This setup aligns with Anypoint Platform best practices for connecting Anypoint VPCs to both AWS-hosted and on-premises systems, ensuring secure, controlled access to both the database and web server.
For more detailed reference, MuleSoft documentation on Anypoint VPC peering and VPN connectivity provides additional context on best practices for setting up these connections within a hybrid network infrastructure.