Free Microsoft DP-600 Exam Actual Questions

The questions for DP-600 were last updated on Dec 19, 2024.

Question No. 1

You have a Fabric tenant that contains a new semantic model in OneLake.

You use a Fabric notebook to read the data into a Spark DataFrame.

You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.

Solution: You use the following PySpark expression:

df.show()

Does this meet the goal?

Correct Answer: B

The df.show() method does not meet the goal. It is used to display the contents of the DataFrame, not to compute statistical summaries such as min, max, mean, or standard deviation. Reference: the PySpark API documentation describes the show() function.
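
For illustration, here is a minimal PySpark sketch (the sample data is hypothetical and stands in for the OneLake table) showing that show() only prints rows and computes nothing:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data standing in for the semantic model's table
df = spark.createDataFrame(
    [("A", 10.0), ("B", 20.0), ("C", 30.0)],
    ["category", "amount"],
)

# show() prints the first rows of the DataFrame; it returns None and
# produces no min/max/mean/stddev statistics
df.show()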


Question No. 2

You have a Fabric tenant named Tenant1 that contains a workspace named WS1. WS1 uses a capacity named C1 and contains a dataset named DS1.

You need to ensure that read-write access to DS1 is available by using the XMLA endpoint.

What should be modified first?

Correct Answer: C

To ensure read-write access to DS1 through the XMLA endpoint, the C1 settings (the capacity settings) should be modified first. The XMLA endpoint read-write setting is configured at the capacity level, not on individual datasets or workspaces. Reference: the Power BI documentation on dataset management describes configuring XMLA endpoints for Power BI capacities.
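
As a minimal, non-authoritative sketch: once the capacity's XMLA endpoint is set to read-write, client tools connect to the workspace through a connection string in the documented powerbi:// format. The workspace name below comes from the question; everything else is a placeholder:

# Build the XMLA endpoint address for the workspace that hosts DS1.
# The powerbi://api.powerbi.com/v1.0/myorg/<workspace> format is the
# documented endpoint scheme; "myorg" refers to the caller's own tenant.
workspace = "WS1"
xmla_endpoint = f"powerbi://api.powerbi.com/v1.0/myorg/{workspace}"

# Tools such as SQL Server Management Studio or Tabular Editor accept this
# value as the server name when connecting to DS1 for read-write operations.
print(xmla_endpoint)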


Question No. 3

You have a Microsoft Power BI Premium Per User (PPU) workspace that contains a semantic model.

You have an Azure App Service app named App1 that modifies row-level security (RLS) for the model by using the XMLA endpoint. App1 requires users to sign in by using their Microsoft Entra credentials to access the XMLA endpoint.

You need to configure App1 to use a service account to access the model.

What should you do first?

Correct Answer: B
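
As a minimal sketch of the underlying mechanism, and assuming the service account is implemented as a Microsoft Entra service principal (the tenant ID, client ID, and secret below are hypothetical placeholders), App1 could acquire an app-only token for the Power BI XMLA endpoint like this:

import msal

# Hypothetical service principal credentials registered for App1
app = msal.ConfidentialClientApplication(
    client_id="<app1-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<app1-client-secret>",
)

# Acquire an app-only (client credentials) token; the scope is the
# documented Power BI resource scope used for service principal access
result = app.acquire_token_for_client(
    scopes=["https://analysis.windows.net/powerbi/api/.default"]
)
access_token = result.get("access_token")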

Question No. 4

You are creating a semantic model in Microsoft Power BI Desktop.

You plan to make bulk changes to the model by using the Tabular Model Definition Language (TMDL) extension for Microsoft Visual Studio Code.

You need to save the semantic model to a file.

Which file format should you use?

Correct Answer: B

When saving a semantic model to a file that can be edited using the Tabular Model Definition Language (TMDL) extension for Visual Studio Code, the PBIX (Power BI Desktop) file format is the correct choice. The PBIX format contains the report, data model, and queries, and is the primary file format for editing in Power BI Desktop. Reference: Microsoft's documentation on Power BI file formats and Visual Studio Code provides further clarification on the usage of PBIX files.


Question No. 5

You have a Fabric tenant that contains a new semantic model in OneLake.

You use a Fabric notebook to read the data into a Spark DataFrame.

You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.

Solution: You use the following PySpark expression:

df.summary()

Does this meet the goal?

Correct Answer: A

Yes, the df.summary() method does meet the goal. This method is used to compute specified statistics for numeric and string columns. By default, it provides statistics such as count, mean, stddev, min, and max. Reference: the PySpark API documentation details the summary() function and the statistics it provides.
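
A minimal PySpark sketch (the sample data is hypothetical and stands in for the OneLake table) showing summary() in use:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data standing in for the semantic model's table
df = spark.createDataFrame(
    [("A", 10.0), ("B", 20.0), ("C", 30.0)],
    ["category", "amount"],
)

# summary() returns a new DataFrame of statistics for numeric and string
# columns; the default set includes count, mean, stddev, min, the
# 25%/50%/75% percentiles, and max
df.summary().show()

# Specific statistics can also be requested explicitly
df.summary("min", "max", "mean", "stddev").show()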