You have an Azure subscription that contains an Azure Cosmos DB for NoSQL account named account1.
Backups for account1 have the following configurations:
* Interval: 2 hours
* Retention period: 4 days
You need to estimate the charges associated with the retention of the backups. How many copies of the backups will incur additional charges?
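As a rough, hedged check on the arithmetic, the sketch below assumes periodic backup mode, in which the two most recent backup copies are retained at no additional charge (verify the free allowance against current Azure Cosmos DB pricing):

```python
# Hedged estimate for periodic backups: 2-hour interval, 4-day retention.
# Assumption: the two most recent backup copies are included free of charge.
interval_hours = 2
retention_days = 4
free_copies = 2  # assumed free allowance for periodic backup mode

total_copies = (retention_days * 24) // interval_hours  # 96 / 2 = 48 copies retained
charged_copies = total_copies - free_copies             # copies that incur extra charges

print(f"Copies retained: {total_copies}")
print(f"Copies incurring additional charges: {charged_copies}")
```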
You have an Azure Cosmos DB database that contains a container named container1. The container1 container is configured with a maximum of 20,000 RU/s and currently contains 240 GB of data.
You need to estimate the costs of container1 based on the current usage.
How many RU/s will be charged?
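If container1 uses autoscale throughput, each hour is billed at the highest RU/s the container actually scaled to, with a floor of 10% of the configured maximum; note that current storage can also raise the minimum maximum throughput you are allowed to configure. The sketch below computes only the 10% floor for an otherwise idle container and is not a complete pricing model:

```python
# Hedged estimate for an idle autoscale container.
# Assumption: the hourly billing floor is 10% of the configured maximum RU/s.
max_autoscale_rus = 20_000
billing_floor_rus = int(max_autoscale_rus * 0.10)

print(f"Minimum billed throughput: {billing_floor_rus} RU/s per hour")
```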
You need to create a database in an Azure Cosmos DB for NoSQL account. The database will contain three containers named coll1, coll2, and coll3. The coll1 container will have unpredictable read and write volumes. The coll2 and coll3 containers will have predictable read and write volumes. The expected maximum throughput for coll1 and coll2 is 50,000 request units per second (RU/s) each.
How should you provision the collection while minimizing costs?
To create a database that minimizes costs, you should consider the following factors:
The read and write volumes of your containers
The predictability and variability of your traffic
The latency and throughput requirements of your application
The geo-distribution and availability needs of your data
Based on these factors, one option you could choose is B: Create a provisioned throughput account. Set the throughput for coll1 to Autoscale. Set the throughput for coll2 and coll3 to Manual.
This option has the following advantages:
It allows you to optimize costs by paying only for the throughput each container needs: autoscale absorbs coll1's unpredictable traffic, while manual throughput is sufficient for the predictable coll2 and coll3.
This option also has some limitations, such as:
It may not support availability zones or multi-master replication for your account.
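A minimal sketch of option B using the azure-cosmos Python SDK (v4) follows. The account URL, key, database name, partition key paths, and the 400 RU/s figure for coll3 are placeholders, and the ThroughputProperties arguments should be verified against your installed SDK version:

```python
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

# Placeholder connection details for a provisioned throughput account.
client = CosmosClient("https://account1.documents.azure.com:443/", credential="<primary-key>")
database = client.create_database_if_not_exists(id="db1")

# coll1: unpredictable traffic -> autoscale throughput, scaling up to 50,000 RU/s.
database.create_container_if_not_exists(
    id="coll1",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=50_000),
)

# coll2: predictable traffic -> manual (standard) throughput at 50,000 RU/s.
database.create_container_if_not_exists(
    id="coll2",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=50_000,
)

# coll3: predictable traffic -> manual throughput sized to its expected load
# (400 RU/s here is only a placeholder; the question does not state a figure).
database.create_container_if_not_exists(
    id="coll3",
    partition_key=PartitionKey(path="/pk"),
    offer_throughput=400,
)
```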
You have a container in an Azure Cosmos DB for NoSQL account.
You need to create an alert based on a custom Log Analytics query.
Which signal type should you use?
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure Blob Storage as the output.
Does this meet the goal?
Instead, create an Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and an Azure event hub as the output.
The Azure Cosmos DB change feed is a mechanism to get a continuous and incremental feed of records from an Azure Cosmos container as those records are being created or modified. Change feed support works by listening to the container for any changes. It then outputs the sorted list of documents that were changed, in the order in which they were modified.
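As a rough illustration of this flow, the sketch below reads container1's change feed with the azure-cosmos Python SDK and forwards each changed document to an event hub with the azure-eventhub SDK. It is a simplified single-pass loop rather than the Azure Functions trigger binding described above, and every connection string and name is a placeholder:

```python
import json

from azure.cosmos import CosmosClient
from azure.eventhub import EventData, EventHubProducerClient

# Placeholder connection details.
cosmos = CosmosClient("https://account1.documents.azure.com:443/", credential="<primary-key>")
container = cosmos.get_database_client("db1").get_container_client("container1")
producer = EventHubProducerClient.from_connection_string(
    "<event-hubs-namespace-connection-string>", eventhub_name="container1-changes"
)

# Single pass over the change feed; a production solution would persist
# continuation tokens (or use the Azure Functions Cosmos DB trigger) so that
# each change is processed exactly once.
batch = producer.create_batch()
has_events = False
for doc in container.query_items_change_feed(is_start_from_beginning=True):
    batch.add(EventData(json.dumps(doc)))
    has_events = True
if has_events:
    producer.send_batch(batch)
producer.close()
```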
The following diagram represents the data flow and components involved in the solution: