Free Microsoft DP-420 Exam Actual Questions

The questions for DP-420 were last updated on Nov 19, 2024.

Question No. 1

You have an Azure subscription that contains an Azure Cosmos DB for NoSQL account named account1.

Backups for account1 have the following configurations:

* Interval: 2 hours

* Retention period: 4 days

You need to estimate the charges associated with the retention of the backups. How many copies of the backups will incur additional charges?

Correct Answer: C
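
As a rough sketch of the underlying arithmetic (assuming Azure Cosmos DB's periodic backup mode, where the two most recent backup copies are included at no extra charge), the number of retained copies is the retention period divided by the backup interval:

```python
# Back-of-the-envelope estimate, assuming periodic backup mode where
# the two most recent copies are included free of charge.
interval_hours = 2
retention_days = 4

total_copies = (retention_days * 24) // interval_hours  # 96 h / 2 h = 48 copies
free_copies = 2                                         # included with the account
charged_copies = total_copies - free_copies

print(charged_copies)  # 46 copies incur additional charges
```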

Question No. 2

You have an Azure Cosmos DB database that contains a container named container1. The container1 container is configured with a maximum of 20,000 RU/s and currently contains 240 GB of data.

You need to estimate the costs of container1 based on the current usage.

How many RU/s will be charged?

Correct Answer: B
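
As a hedged sketch of the likely math (assuming the 20,000 RU/s maximum refers to autoscale throughput): each hour is billed at the highest RU/s the container scaled to, with a floor of 10% of the configured maximum, while the 240 GB of storage is billed separately per GB and does not change the RU/s figure:

```python
# Illustrative calculation, assuming autoscale billing semantics:
# each hour is billed at the highest RU/s actually used, but never
# less than 10% of the configured maximum throughput.
max_throughput = 20_000            # configured autoscale maximum (RU/s)
billing_floor = max_throughput // 10

print(billing_floor)  # 2,000 RU/s is the minimum hourly charge
```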

Question No. 3

You need to create a database in an Azure Cosmos DB for NoSQL account. The database will contain three containers named coll1, coll2, and coll3. The coll1 container will have unpredictable read and write volumes. The coll2 and coll3 containers will have predictable read and write volumes. The expected maximum throughput for coll1 and coll2 is 50,000 request units per second (RU/s) each.

How should you provision the collection while minimizing costs?

Correct Answer: B

Azure Cosmos DB offers two capacity modes: provisioned throughput and serverless. Provisioned throughput mode allows you to configure a certain amount of throughput (expressed in Request Units per second, or RU/s) that is provisioned on your databases and containers. You get billed for the amount of throughput you have provisioned, regardless of how many RUs were consumed. Serverless mode allows you to run your database operations without having to configure any previously provisioned capacity. You get billed for the number of RUs that were consumed by your database operations and for the storage consumed by your data.
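
To make the billing difference concrete, here is an illustrative comparison. The rates are assumptions chosen for the example (actual prices vary by region and over time), not quoted prices:

```python
# Illustrative monthly cost comparison of the two capacity modes.
# Both rates below are assumed example figures, not quoted prices.
HOURS_PER_MONTH = 730

# Provisioned throughput: billed per 100 RU/s per hour, around the clock.
provisioned_rate = 0.008                  # assumed USD per 100 RU/s per hour
provisioned_rus = 400
provisioned_cost = provisioned_rus / 100 * provisioned_rate * HOURS_PER_MONTH

# Serverless: billed only for the RUs actually consumed.
serverless_rate = 0.25                    # assumed USD per million RUs
consumed_rus = 50_000_000                 # hypothetical monthly consumption
serverless_cost = consumed_rus / 1_000_000 * serverless_rate

print(f"Provisioned 400 RU/s: ${provisioned_cost:.2f}/month")  # ~$23.36
print(f"Serverless, 50M RUs:  ${serverless_cost:.2f}/month")   # $12.50
```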

To create a database that minimizes costs, you should consider the following factors:

The read and write volumes of your containers

The predictability and variability of your traffic

The latency and throughput requirements of your application

The geo-distribution and availability needs of your data

Based on these factors, one possible option that you could choose is B: Create a provisioned throughput account. Set the throughput for coll1 to Autoscale. Set the throughput for coll2 and coll3 to Manual. A provisioning sketch in code follows the list of advantages below.

This option has the following advantages:

It allows you to handle unpredictable read and write volumes for coll1 by using Autoscale, which automatically adjusts the provisioned throughput based on the current load.

It allows you to handle predictable read and write volumes for coll2 and coll3 by using Manual, which lets you specify a fixed amount of provisioned throughput that meets your performance needs.

It allows you to optimize your costs by paying only for the throughput you need for each container.

It allows you to enable geo-distribution for your account if you need to replicate your data across multiple regions.
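
A minimal sketch of how option B could be provisioned with the azure-cosmos Python SDK (the endpoint, key, database name appdb, partition key path /pk, and the 400 RU/s figure for coll3 are all placeholder assumptions):

```python
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

# Placeholder endpoint and key; a real deployment would supply its own
# (or authenticate with Microsoft Entra ID instead).
client = CosmosClient("https://account1.documents.azure.com:443/", "<account-key>")
database = client.create_database_if_not_exists("appdb")  # hypothetical name

# coll1: unpredictable volume -> Autoscale, scaling up to 50,000 RU/s.
database.create_container_if_not_exists(
    id="coll1",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=50_000),
)

# coll2: predictable volume with a known 50,000 RU/s peak -> Manual.
database.create_container_if_not_exists(
    id="coll2",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=50_000,
)

# coll3: predictable volume -> Manual; the question gives no expected
# peak, so the 400 RU/s minimum stands in as a placeholder.
database.create_container_if_not_exists(
    id="coll3",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=400,
)
```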

This option also has some limitations, such as:

It may not be suitable for scenarios where all containers have intermittent or bursty traffic that is hard to forecast or has a low average-to-peak ratio.

It may not be optimal for scenarios where all containers have low or sporadic traffic that does not justify provisioned capacity.

It may not support availability zones or multi-master replication for your account.

Depending on your specific use case and requirements, you may need to choose a different option. For example, you could use a serverless account if all containers have low or sporadic traffic that does not require predictable performance or geo-distribution. Alternatively, you could use a provisioned throughput account with Manual for all containers if all containers have stable and consistent traffic that requires predictable performance or geo-distribution.
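
For the serverless alternative, the capacity mode is fixed when the account is created. A hedged sketch with the azure-mgmt-cosmosdb management SDK, where the subscription ID, resource group rg1, account name account2, and region are placeholder assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient
from azure.mgmt.cosmosdb.models import (
    Capability,
    DatabaseAccountCreateUpdateParameters,
    Location,
)

# Hypothetical subscription, resource group, and account names.
client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.database_accounts.begin_create_or_update(
    resource_group_name="rg1",
    account_name="account2",
    create_update_parameters=DatabaseAccountCreateUpdateParameters(
        location="eastus",
        locations=[Location(location_name="eastus", failover_priority=0)],
        # EnableServerless switches the account to serverless billing;
        # it cannot be changed after the account exists.
        capabilities=[Capability(name="EnableServerless")],
    ),
)
account = poller.result()  # block until provisioning completes
```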


Question No. 4

You have a container in an Azure Cosmos DB for NoSQL account.

You need to create an alert based on a custom Log Analytics query.

Which signal type should you use?

Correct Answer: B
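
As background to log-based alerting, this is the kind of query a custom log search alert evaluates. A hedged sketch with the azure-monitor-query package, assuming the account's diagnostic settings route to the legacy AzureDiagnostics table (the workspace ID and the KQL filter are placeholders):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Placeholder KQL: count throttled (429) Cosmos DB data-plane requests
# in five-minute buckets.
query = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DOCUMENTDB"
| where Category == "DataPlaneRequests" and statusCode_s == "429"
| summarize throttled = count() by bin(TimeGenerated, 5m)
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # hypothetical placeholder
    query=query,
    timespan=timedelta(hours=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```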

Question No. 5

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.

Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.

Does this meet the goal?

Correct Answer: B

Instead, create an Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and an Azure event hub as the output.

The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are created or modified. Change feed support works by listening to the container for any changes. It then outputs the sorted list of documents that were changed in the order in which they were modified.
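
A minimal sketch of such a function using the Azure Functions Python v2 programming model, where the app settings CosmosDBConnection and EventHubConnection, the database name database1, and the Event Hub name container1-changes are assumptions (binding parameter names can differ between extension versions):

```python
import json

import azure.functions as func

app = func.FunctionApp()

# Change feed trigger on container1: each invocation receives the batch of
# documents created or modified since the last lease checkpoint.
@app.cosmos_db_trigger(
    arg_name="documents",
    database_name="database1",            # hypothetical database name
    container_name="container1",
    connection="CosmosDBConnection",      # assumed app setting
    lease_container_name="leases",
    create_lease_container_if_not_exists=True,
)
# Forward each batch of changed documents to an Event Hub.
@app.event_hub_output(
    arg_name="event",
    event_hub_name="container1-changes",  # hypothetical hub name
    connection="EventHubConnection",      # assumed app setting
)
def forward_changes(documents: func.DocumentList, event: func.Out[str]) -> None:
    event.set(json.dumps([json.loads(d.to_json()) for d in documents]))
```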

[Diagram: data flow and components involved in the solution]