
AZ-204 Microsoft Azure Developer Associate Sample Exam Questions

Here are 10 AZ-204 Microsoft Azure Developer Associate practice exam questions to help you gauge your readiness for the actual exam.

Question 1

You are developing an Azure Web App that processes customer orders. The application requires an App Service background task to handle asynchronous operations, such as sending order confirmation emails and updating inventory in response to new customer orders.

You need a solution that can run continuously or on a schedule within the Azure App Service environment.

You plan to use the WebJobs SDK to integrate with an Azure Storage queue to process incoming order requests efficiently.

Which of the following options will meet this requirement?

  1. Microsoft Power Automate
  2. WebJobs
  3. Azure Functions
  4. Azure Batch

Correct Answer: 2

Azure WebJobs SDK is a framework designed to simplify the process of writing background processing code for Azure WebJobs. It features a declarative binding and trigger system that works seamlessly with Azure Storage Blobs, Queues, and Tables, as well as Service Bus. This binding system allows you to easily create code that reads from or writes to Azure Storage objects. Additionally, the trigger system automatically executes a function in your code whenever new data is received in a queue or blob. The SDK also offers an integrated dashboard experience within the Azure management portal, providing rich monitoring and diagnostic information for your WebJob runs.


WebJobs is a feature of Azure App Service that allows you to run a program or script in the same instance as your web app. All App Service plans support WebJobs. Additionally, you can use the Azure WebJobs SDK to simplify various programming tasks associated with WebJobs.
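The declarative QueueTrigger bindings described above belong to the .NET WebJobs SDK, but the underlying pattern is easy to picture. Below is a minimal Python sketch of what a continuous WebJob does when it processes an Azure Storage queue; the queue name (orders) and the use of the conventional AzureWebJobsStorage connection-string setting are assumptions for illustration only.

```python
import os
import time

from azure.storage.queue import QueueClient

# A continuous WebJob is just a long-running program deployed alongside the
# web app. This loop approximates what the WebJobs SDK's QueueTrigger
# binding does declaratively in .NET.
queue = QueueClient.from_connection_string(
    os.environ["AzureWebJobsStorage"],  # storage account used by the WebJob
    queue_name="orders",                # hypothetical queue of order requests
)

while True:
    for message in queue.receive_messages(messages_per_page=16):
        # Process the order: send a confirmation email, update inventory, etc.
        print(f"Processing order message: {message.content}")
        queue.delete_message(message)   # remove the message once handled
    time.sleep(5)  # back off briefly when the queue is empty
```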

Hence, the correct answer is: WebJobs. It is a fully managed background processing solution operating within the Azure App Service environment. It enables asynchronous execution of background tasks and supports both continuous and scheduled operations. By integrating effortlessly with the WebJobs SDK, it effectively handles messages from an Azure Storage Queue, making it a perfect fit for processing customer order requests.

Microsoft Power Automate is incorrect because Power Automate is primarily a workflow automation tool, not designed for running continuous background processes within Azure App Service. It also lacks native support for WebJobs SDK or queue-based processing needed for order handling.

Azure Functions is incorrect because Azure Functions is a serverless solution that runs outside of the Azure App Service environment. While it can process background tasks and integrate with Azure Storage Queues, the requirement specifies that the solution must run within App Service.

Azure Batch is incorrect because it is primarily used for high-performance computing (HPC) and parallel batch processing, which is not suitable for handling lightweight, event-driven background tasks like processing orders or sending emails.

References:
https://learn.microsoft.com/en-us/azure/app-service/webjobs-create?tabs=windowscode
https://learn.microsoft.com/en-us/azure/azure-functions/functions-compare-logic-apps-ms-flow-webjobs#compare-functions-and-webjobs

Check out this Azure App Service Cheat Sheet:
https://tutorialsdojo.com/azure-app-service/

Question 2

You manage an Azure App Service that serves users across multiple regions. The application uses Azure Traffic Manager to route traffic intelligently and has Application Insights enabled for monitoring. Additionally, Azure Front Door is configured to enhance global load balancing and content delivery.

Your team must generate monthly reports on uptime trends and analyze historical performance data to ensure high availability.

Which solutions will achieve this goal? (Select TWO.)

Note: Each correct selection is worth one point.

Correct Answer: 2,5

Azure Monitor Logs is a robust service designed to help you collect and analyze telemetry data from your Azure resources. It provides deep insights into your applications, infrastructure, and network by gathering log data from various sources, including Azure App Services, virtual machines, and more. With the power of Kusto Query Language (KQL), you can perform complex queries to analyze system performance, troubleshoot issues, and detect abnormal patterns. This functionality is especially useful for monitoring long-term trends, such as generating monthly uptime reports and diagnosing performance issues over time. By centralizing log data, Azure Monitor Logs helps ensure operational efficiency and the availability of your applications.


The service integrates with other Azure monitoring solutions like Application Insights and Azure Security Center, offering a comprehensive view of your environment’s health. This integration allows you to correlate log data with application performance metrics, security insights, and network traffic, providing a 360-degree perspective on your app’s health. Azure Monitor Logs also supports custom dashboards, alerting mechanisms, and automated actions based on log data, which makes it a powerful tool for both real-time monitoring and historical analysis. Whether you’re monitoring uptime, error rates, or system resource usage, this service provides actionable insights to maintain optimal performance.
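To make the reporting workflow concrete, here is a minimal sketch using the azure-monitor-query library for Python: it runs a KQL query against a Log Analytics workspace (the workspace ID is a placeholder) to compute daily availability from Application Insights request logs over the last 30 days.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL: daily success rate over the last 30 days, suitable for a monthly
# uptime report built from Application Insights request telemetry.
query = """
AppRequests
| summarize availability = 100.0 * countif(Success == true) / count()
    by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=30),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```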


Azure Monitor Metrics complements Azure Monitor Logs by offering real-time, high-frequency performance data for your Azure resources. It tracks important metrics like CPU usage, memory consumption, request rates, and response times, providing instant visibility into the performance of your application. Azure Monitor Metrics is essential for detecting performance degradation or availability issues before they affect end users. It allows you to create custom dashboards, configure alerts, and track metrics over time, which is valuable for generating monthly reports on system availability and trends. By leveraging both logs and metrics, you can ensure a proactive approach to maintaining application health and availability.
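For the metrics side, a similar sketch with MetricsQueryClient pulls 30 days of request counts for an App Service resource; the resource ID is a placeholder, and Requests is the built-in App Service request-count metric.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Pull 30 days of request counts at one-day granularity for trend reporting.
response = client.query_resource(
    "<app-service-resource-id>",          # placeholder resource URI
    metric_names=["Requests"],            # built-in App Service metric
    timespan=timedelta(days=30),
    granularity=timedelta(days=1),
    aggregations=[MetricAggregationType.TOTAL],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```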

The option that says: Azure Front Door Health Probes is incorrect because health probes are primarily used for real-time health monitoring of the application’s endpoints. They ensure that only healthy instances serve traffic, but they typically focus on detecting issues with individual endpoints or regions at a specific moment. While they help with immediate health status and routing decisions, they simply don’t provide the historical analysis or long-term uptime trends required for generating monthly reports or analyzing performance over time.

The option that says: Application Insights Availability Tests is incorrect because availability tests are primarily used to simulate user traffic from various locations to measure the availability of your application in real time. While they can provide insights into availability and alert you to downtime, they only test specific endpoints at set intervals and do not provide the deep historical analysis or detailed trend reporting over extended periods.

The option that says: Azure Traffic Manager Endpoint Monitoring is incorrect because it is only useful for monitoring the health of the endpoints being routed to, and it helps ensure that only healthy endpoints are receiving traffic. However, it focuses on real-time health checks and routing decisions rather than storing and analyzing historical data over time. It ensures that traffic is distributed to healthy regions or endpoints but does not collect the historical performance data needed to track uptime trends or generate monthly reports.

References:
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-platform-logs
https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics

Check out this Azure Monitor Cheat Sheet:

https://tutorialsdojo.com/azure-monitor/

Question 3

You manage multiple Azure API Management (APIM)-hosted APIs for your organization.

Minor, non-breaking changes are required for one of the APIs. These updates must adhere to the following conditions:

  •  Existing consumers must not experience disruptions.
  • A rollback mechanism should be available in case issues arise.
  • The modifications must be documented to inform developers about the updates.
  • The changes should be thoroughly tested before being made public.

Additionally, your organization is evaluating the use of the Azure API Center to catalog and manage APIs across teams.

Which of the following is the most effective approach to updating the API while ensuring compliance with these conditions?

Correct Answer: 2

Azure API Management (APIM) supports revisions, allowing developers to introduce non-breaking changes to APIs without affecting existing consumers. By creating a new revision, modifications can be applied and thoroughly tested in isolation. Once validated, the revision can be promoted to the current version, ensuring a seamless transition for users. If issues arise, reverting to a previous revision is straightforward, providing a reliable rollback mechanism.


Utilizing API revisions in APIM is a best practice for implementing non-breaking changes. This approach ensures that existing consumers experience no disruptions, as changes are isolated until explicitly published. The ability to test revisions before making them current guarantees that any potential issues are identified and resolved in a controlled environment. Moreover, the built-in rollback capability allows for quick reversion if unforeseen problems occur post-deployment. Documentation of changes can be maintained within the revision notes, keeping developers informed about updates.
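One documented behavior worth illustrating: until a revision is made current, it remains privately addressable by appending ;rev=N to the API path, which is how the testing requirement is met before promotion. A minimal sketch, with a hypothetical gateway hostname, API path, and subscription key:

```python
import requests

# A non-current revision is reachable at "<api-path>;rev=<n>", so it can be
# exercised in isolation without affecting existing consumers.
test_url = "https://contoso.azure-api.net/employees;rev=2/profiles"  # hypothetical
response = requests.get(
    test_url,
    headers={"Ocp-Apim-Subscription-Key": "<subscription-key>"},  # placeholder
)
print(response.status_code, response.text)
```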

The option that says: Utilize header-based versioning to direct requests to different API versions without altering existing client configurations is incorrect. While header-based versioning allows clients to specify which version of an API to use via HTTP headers, it is typically employed for managing breaking changes across different API versions. Implementing this method requires clients to include specific headers in their requests, which may necessitate modifications on the client side. Additionally, this approach is more suited for versioning rather than handling non-breaking changes within the same API version.

The option that says: Configure an Azure Traffic Manager profile to route requests between API versions for controlled testing and rollback is incorrect because Azure Traffic Manager is designed to distribute user traffic across multiple service endpoints, primarily for load balancing, failover, and performance optimization. It operates at the DNS level and is not tailored for routing traffic between different API versions within APIM. Using Traffic Manager for this purpose would be an unconventional approach and may introduce unnecessary complexity without addressing the specific requirements of API versioning and revisions.

The option that says: Use Azure Pipelines to automate API deployment and testing before rolling out changes is incorrect because, while this service facilitates continuous integration and deployment (CI/CD) processes, it does not inherently provide the capabilities required for non-breaking API modifications within Azure API Management (APIM). Therefore, utilizing Azure Pipelines alone does not satisfy the requirements of making minor, non-breaking API changes that need to be thoroughly tested, documented, and safely rolled out without disrupting existing consumers.

References:
https://learn.microsoft.com/en-us/azure/api-management/api-management-get-started-revise-api?tabs=azure-portal
https://learn.microsoft.com/en-us/azure/api-management/api-management-revisions

Check out this Azure Cheat Sheet:
https://tutorialsdojo.com/microsoft-azure-cheat-sheets/

Question 4

You are designing an Azure API that needs to securely call another internal API hosted in Azure API Management. The following security requirements must be met:

  •  The API must authenticate itself when making calls to the internal API.
  • No client credentials, API keys, or tokens should be sent manually.
  • Authentication should integrate with Microsoft Entra ID for seamless security.

Which authentication mechanism should be implemented?

Correct Answer: 3

Managed Identity is an Azure feature that allows Azure services to authenticate to other Azure resources without needing to manage credentials manually. When a service, like an Azure App Service or Virtual Machine, is assigned a managed identity, it can use Microsoft Entra ID to authenticate to Azure resources, such as Azure Key Vault, Azure API Management, and other Azure services, securely. This eliminates the need for developers to handle credentials, making the process more secure and less error-prone.


There are two types of Managed Identity: System-Assigned Managed Identity, which is tied to the lifecycle of a specific Azure resource, and User-Assigned Managed Identity, which is a standalone identity that can be used across multiple Azure resources. Managed Identity simplifies authentication by automatically handling the creation and management of the identity, as well as the issuance of access tokens for authentication to Azure services. By using Managed Identity, you can ensure that the authentication process is fully integrated into Microsoft Entra ID, providing a seamless and secure way to interact with other resources without exposing credentials or requiring manual token handling.
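A minimal sketch of the calling pattern in Python, assuming the API runs on an Azure service with a managed identity enabled and the internal API's Microsoft Entra registration exposes the hypothetical scope api://internal-api/.default:

```python
import requests
from azure.identity import ManagedIdentityCredential

# The platform issues the token; no secrets, keys, or manual token handling.
credential = ManagedIdentityCredential()
token = credential.get_token("api://internal-api/.default")  # hypothetical audience

response = requests.get(
    "https://contoso.azure-api.net/internal/orders",  # hypothetical internal API
    headers={"Authorization": f"Bearer {token.token}"},
)
print(response.status_code)
```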

API Key Authentication is incorrect because it primarily involves manually passing an API key in the request headers to authenticate the caller. This method requires managing and securely storing API keys, which does not align with the requirement to avoid manually sending credentials or tokens. While API key authentication is useful in many cases, it does not integrate seamlessly with Microsoft Entra ID and does not meet the goal of automating authentication without manually handling credentials.

Azure API Management OAuth 2.0 Policy is incorrect because it typically requires generating and managing access tokens manually, which contradicts the requirement of not manually handling tokens or credentials. OAuth 2.0 flows also require additional configuration, and just using this method alone doesn’t ensure seamless integration with Microsoft Entra ID for the authentication of the calling service without extra setup.

Basic Authentication is incorrect because it only requires a username and password to authenticate a request, sending these credentials in the HTTP header with each call. This method does not meet the security requirement, as it involves transmitting sensitive data in plain text unless encrypted via HTTPS. Furthermore, Basic Authentication doesn’t integrate with Microsoft Entra ID or support seamless, automatic authentication, making it less secure and less suitable for this scenario.

References:
https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview
https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/overview-for-developers?tabs=dotnet
https://learn.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=portal%2Chttp

Check out this Microsoft Entra ID Cheat Sheet:
https://tutorialsdojo.com/microsoft-entra-id/

Question 5

 

You are building a Java-based solution that leverages Cassandra for key-value data storage. The application is designed to utilize a new Azure Cosmos DB resource with the Cassandra API.

To facilitate the provisioning of Azure Cosmos DB accounts, databases, and containers, you have established a Microsoft Entra Group named Cosmos DB Creators. Additionally, you are considering implementing a caching mechanism to enhance read performance.

This group must not have access to the keys required for data access.

Which role-based access control should be assigned to the Microsoft Entra Group to meet these requirements?

Correct Answer: 1

Azure Cosmos DB integrates with Microsoft Entra ID to manage access through role-based access control (RBAC). This approach allows for fine-grained permissions, ensuring that users or groups have appropriate access levels without exposing sensitive information like access keys.

Azure Role-Based Access Control (RBAC) provides a collection of built-in roles that can be assigned to users, groups, service principals, and managed identities to access Azure resources. Access is controlled through role assignments, which determine the actions a user or entity can perform within a given resource. If the built-in roles do not align with an organization’s specific security and access requirements, custom roles can be created to tailor permissions accordingly.


The Cosmos DB Operator role is specifically designed to permit the provisioning and management of Azure Cosmos DB accounts, databases, and containers without granting access to the data within them or the associated access keys. This role includes actions such as creating and managing databases and containers but explicitly prevents data access and key retrieval.

Assigning the Cosmos DB Operator role to the Cosmos DB Creators group ensures they can perform necessary management tasks without compromising data security by accessing keys or data.
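For illustration, the assignment could be scripted with the azure-mgmt-authorization library for Python. Treat this as a sketch: the subscription, resource group, account, and group object ID are placeholders, and the model shapes vary slightly between SDK versions.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"  # placeholder
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope the assignment to the Cosmos DB account the group should manage.
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account>"
)

# Look up the built-in "Cosmos DB Operator" role definition by name rather
# than hard-coding its GUID.
role_def = next(
    client.role_definitions.list(scope, filter="roleName eq 'Cosmos DB Operator'")
)

client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_def.id,
        principal_id="<cosmos-db-creators-group-object-id>",  # placeholder
        principal_type="Group",
    ),
)
```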

CosmosRestoreOperator is incorrect because this role primarily focuses on restoring Cosmos DB accounts from backups. While it allows interaction with Cosmos DB instances, it does not fulfill the requirement to provision new Cosmos DB accounts, databases, and containers. Additionally, it does not address role-based access control (RBAC) for preventing key access.

Redis Cache Contributor is incorrect because this role is simply used for managing Azure Cache for Redis, a caching service that improves read performance. While the scenario mentions a caching mechanism, it’s not relevant to Cosmos DB access control.

Cosmos DB Account Reader Role is incorrect because it only grants read-only access to Azure Cosmos DB account configurations, allowing users to view account settings and properties. However, it still allows access to account keys, which contradicts the requirement that the Microsoft Entra Group must not have access to keys.

References:
https://learn.microsoft.com/en-us/azure/role-based-access-control/overview
https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles

Check out this Azure Role-Based Access Control (RBAC) Cheat Sheet:
https://tutorialsdojo.com/azure-role-based-access-control-rbac/

Question 6

Your company is developing an ASP.NET Core Web API web service to power an e-commerce platform. The service integrates with Azure Application Insights to collect telemetry and track dependencies.

The web service processes customer orders, stores data in Microsoft SQL Server, and communicates with a third-party payment gateway API to handle transactions. To ensure complete monitoring, you must configure dependency telemetry tracking for interactions with the external payment gateway.

Which telemetry properties should you use to track payment gateway interactions to correlate them with the overall transaction operation and ensure end-to-end tracing? (Choose TWO.)
NOTE: Each correct selection is worth one point.

Correct Answer: 2,4

When monitoring complex applications, especially those that interact with external services like payment gateways, Telemetry.id and Telemetry.Context.Operation.Id are crucial for ensuring comprehensive traceability and understanding the flow of requests. Telemetry.id uniquely identifies each telemetry item, providing a distinct reference for every event or dependency tracked within Application Insights. This allows you to pinpoint and analyze specific interactions with external services, such as the payment gateway. By associating each interaction with a unique ID, you can follow the lifecycle of a request through its various stages, helping you identify issues or delays in specific transactions. This granularity is significant in troubleshooting and debugging scenarios, as it allows you to track the exact flow of a transaction from start to finish.


Telemetry.Context.Operation.Id is equally important as it ties together multiple telemetry events under a single operation. In a distributed system, a single user request might involve multiple services. By using the Operation.Id, you can link these various components and track them as a unified operation. This property enables end-to-end tracing, allowing you to correlate telemetry across different services and gain insights into the overall performance of a user transaction. With this context, you can identify bottlenecks, measure service performance, and ensure that all system parts work harmoniously. In the case of a payment gateway interaction, Operation.Id allows you to tie the payment process back to the original user request, ensuring that you have a complete view of how the payment gateway fits into the broader application flow.

By combining Telemetry.id for unique event tracking with Telemetry.Context.Operation.Id for distributed tracing, Azure Application Insights offers a powerful mechanism to monitor, diagnose, and optimize applications that depend on external services. These properties provide the necessary context to understand the full scope of user interactions with external dependencies and help ensure that performance issues or failures are swiftly identified and addressed. This approach leads to better reliability, faster troubleshooting, and improved overall application health, particularly in systems where multiple services interact in complex workflows.
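As an illustration of how these fields get populated in practice, the sketch below uses the azure-monitor-opentelemetry distro for Python, where the W3C trace ID of the active span surfaces in Application Insights as operation_Id and each span's own ID becomes the telemetry id. The connection string and payment-gateway endpoint are placeholders.

```python
import requests
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Wires the OpenTelemetry SDK to Application Insights; outgoing HTTP calls
# made with requests are then recorded as dependency telemetry automatically.
configure_azure_monitor(connection_string="<app-insights-connection-string>")

tracer = trace.get_tracer(__name__)

# Everything inside this span shares one trace ID, which Application Insights
# surfaces as operation_Id, so the payment-gateway dependency below is
# correlated with the overall order-processing operation end to end.
with tracer.start_as_current_span("ProcessOrder"):
    response = requests.post(
        "https://payments.example.com/charge",  # hypothetical gateway endpoint
        json={"orderId": "12345", "amount": 99.99},
    )
    print(response.status_code)
```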

Telemetry.Context.Session.Id is incorrect because it is primarily used to track the user’s session within the web application, helping to monitor user behavior and activities over time. However, it is only relevant for tracking sessions and user interactions, not external dependencies like a payment gateway. Typically, this property would be useful for analyzing how users navigate through your application or for understanding user-specific trends, but it does not help in tracking the interaction between the web service and external services such as the payment gateway.

Telemetry.Context.Dependency.Type is incorrect because it is used to specify the type of dependency. While it can be helpful for categorizing dependencies in general, it is simply not sufficient by itself to fully track the payment gateway interaction. Just setting the Dependency.Type doesn’t provide enough context or tracing information to track dependencies across distributed services.

Telemetry.Context.Dependency.Target is incorrect because it is used to track the target of a dependency, such as the hostname or endpoint URL. While this property is useful for identifying where the dependency is directed, it is only a small part of the overall dependency tracking. Primarily, this property tells you where the request is going but doesn’t help track the entire transaction process and its relation to other services.

References:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/data-model-complete
https://learn.microsoft.com/en-us/azure/azure-monitor/app/transaction-search-and-diagnostics?tabs=transaction-search

Check out this Azure Monitor Cheat Sheet:
https://tutorialsdojo.com/azure-monitor/

Question 7

You are part of a development team for a technology firm that delivers multiple cloud-based web services.

All web services must observe the following security and access regulations.

  • API requests must be managed through Azure API Management.
  • Authentication must be handled using OpenID Connect.
  • Anonymous requests should be strictly blocked.
  • The API Gateway must log access attempts for auditing purposes.

A recent security assessment discovered that some API endpoints are accessible without authentication, which could lead to unauthorized data access.

Which Azure API Management policy should you configure to enforce authentication?


Correct Answer: 2

In Azure API Management, the validate-jwt policy ensures that each API request contains a valid JSON Web Token (JWT) before being processed. This policy validates the issuer (OpenID Connect provider), audience, expiration, and signature of the token. If the token is missing, expired, or invalid, the API request is denied, preventing unauthorized access.


The given scenario requires that all API endpoints be secured using OpenID Connect and that anonymous access is prevented. Since OpenID Connect relies on JWTs for authentication, implementing the validate-jwt policy ensures that only requests with a valid token can access the services.
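A representative inbound policy fragment is shown below; the attribute names follow the validate-jwt documentation, and the tenant ID and audience are placeholders for your own Microsoft Entra values.

```xml
<inbound>
    <validate-jwt header-name="Authorization"
                  failed-validation-httpcode="401"
                  failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
        <!-- Signing keys and issuer are discovered from the OpenID Connect metadata endpoint -->
        <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
        <audiences>
            <audience>api://your-api-app-id</audience> <!-- placeholder audience -->
        </audiences>
    </validate-jwt>
</inbound>
```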

Additionally, the requirement to log authentication attempts can be complemented by integrating Azure Monitor or Application Insights to track API requests and failures.

authentication-managed-identity is incorrect because this policy simply enables Azure API Management to authenticate itself when calling a backend service using Azure Managed Identity. While this enhances security for backend service calls, it does not validate authentication for incoming client API requests. The scenario specifically requires user authentication using OpenID Connect, which authentication-managed-identity does not enforce.

check-header is incorrect because this policy only inspects HTTP request headers to determine whether a specific header is present or has a specific value. While it can check for the existence of an Authorization header, it does not validate JWTs or ensure OpenID Connect authentication. This means it cannot prevent unauthorized access on its own, making it unsuitable for enforcing authentication.

authentication-basic is incorrect because this policy typically enables Basic Authentication, where a client provides a username and password in the Authorization header (Authorization: Basic <base64-encoded-credentials>). However, Basic Authentication does not support OpenID Connect or JWT validation, which are required in this scenario.

References:
https://learn.microsoft.com/en-us/azure/api-management/validate-jwt-policy
https://learn.microsoft.com/en-us/azure/api-management/authentication-authorization-overview

Check out these Azure Cheat Sheets:
https://tutorialsdojo.com/microsoft-azure-cheat-sheets/

Question 8

You are maintaining a mission-critical Azure App Service web application for thousands of users and have enabled Application Insights for monitoring. Your team has observed multiple exceptions in the production environment, causing intermittent failures.

To identify the root cause, you need to examine source code execution and variable values when exceptions occur without impacting live performance.

Additionally, your team uses Azure Monitor Alerts to be notified when the exception rate exceeds a threshold.

Which Application Insights feature is best suited for this scenario?

Correct Answer: 1

The Snapshot Debugger in Application Insights is a powerful tool designed to help developers diagnose and resolve exceptions in live production environments without affecting application performance. When an exception occurs, the Snapshot Debugger automatically captures a snapshot of the application’s state, including the call stack and variable values, providing detailed insights into the issue. This feature is particularly beneficial for mission-critical applications where maintaining user experience is important.


To utilize the Snapshot Debugger, developers can enable it through the Azure portal by navigating to their Application Insights resource and configuring the settings under the Application Insights section of their App Service. Once enabled, the Snapshot Debugger monitors exception telemetry and collects snapshots of top-throwing exceptions, allowing developers to view and analyze these snapshots directly within the Azure portal. This process aids in pinpointing the root cause of exceptions and facilitates efficient debugging without the need for intrusive code changes or impacting live application performance.

Additionally, the Snapshot Debugger supports integration with Visual Studio, enabling a seamless debugging experience. Developers can download snapshots and open them in Visual Studio Enterprise to step through code, inspect variables, and understand the sequence of events leading up to an exception. This integration enhances the debugging process by providing a familiar environment for developers to analyze and resolve issues effectively.

Application Insights Failures Panel is incorrect because it simply aggregates high-level failure data such as failed requests and exceptions, but it doesn’t provide in-depth debugging capabilities. It only helps you identify trends or spikes in exception rates and gives you a summary of failed requests, but it doesn’t allow you to inspect the actual source code execution or variable values when an exception occurs.

Profiler is incorrect because it primarily focuses on tracking performance issues, such as CPU usage, function calls, and the execution times of different application parts. It doesn’t focus on debugging exceptions or examining the application’s state when an error occurs. While it can help identify performance bottlenecks, it doesn’t allow you to capture and debug exceptions or inspect the variable values in the context of those exceptions.

Azure Monitor Logs with Kusto Query is incorrect because it is typically used for querying and analyzing log data to identify trends, patterns, or anomalies across the system. While it can help identify exceptions, it doesn’t give you direct access to the execution state of the application at the time of an exception. It only provides log data, which isn’t as detailed or interactive as the snapshot-based debugging provided by the Snapshot Debugger.

References:
https://learn.microsoft.com/en-us/azure/azure-monitor/snapshot-debugger/snapshot-debugger-data
https://docs.azure.cn/en-us/azure-monitor/snapshot-debugger/snapshot-debugger-troubleshoot

Check out this Azure Monitor Cheat Sheet:
https://tutorialsdojo.com/azure-monitor/

Question 9

Your team is developing an Azure App Service REST API that will be used by an Azure App Service web app to manage employee profiles. The API must authenticate users and retrieve their profile information from Microsoft Entra ID. Additionally, the API should allow authorized users to update their profile details securely.

Which tools should you use to implement this functionality effectively? (Select TWO.)
NOTE: Each correct selection earns one point.

Correct Answer: 3,5

The Microsoft Graph API is a powerful and unified endpoint that enables developers to interact with a wide range of Microsoft services, including Microsoft Entra ID, OneDrive, Teams, and Outlook. It allows applications to authenticate users, retrieve their profile information, and manage directory data such as user attributes, group memberships, and other resources. By using Microsoft Graph, developers can efficiently access user profiles, update details like names and emails, and integrate deeply with Microsoft Entra ID to manage user data. The API supports both REST and SDK approaches, making it versatile for various development environments and essential for managing user profiles and organizational data in Microsoft 365.


The Microsoft Authentication Library (MSAL) works alongside Microsoft Graph to simplify authentication and secure access to Microsoft services. MSAL enables applications to authenticate users via Microsoft Entra ID and obtain tokens needed to interact with services like Microsoft Graph. It handles various authentication scenarios, such as single sign-on (SSO), multi-factor authentication (MFA), and supports both personal and organizational accounts. By streamlining the token acquisition process, MSAL allows developers to focus on building application features while ensuring secure and seamless user sign-ins and API calls to Microsoft resources. Together, Microsoft Graph API and MSAL provide a robust solution for building secure applications that manage user profiles and organizational data.
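A minimal sketch of the two working together in Python, using the msal package to acquire a token and plain REST calls to Microsoft Graph. The client ID, tenant ID, secret, and user ID are placeholders, and a production web app would more likely use a delegated flow (on behalf of the signed-in user) than client credentials.

```python
import msal
import requests

# Acquire an app-only token for Microsoft Graph via MSAL.
app = msal.ConfidentialClientApplication(
    client_id="<app-client-id>",                                # placeholder
    authority="https://login.microsoftonline.com/<tenant-id>",  # placeholder
    client_credential="<client-secret>",                        # placeholder
)
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)

headers = {"Authorization": f"Bearer {result['access_token']}"}

# Read a user's profile from Microsoft Graph...
user = requests.get(
    "https://graph.microsoft.com/v1.0/users/<user-id>", headers=headers
).json()
print(user.get("displayName"), user.get("jobTitle"))

# ...and update profile details with a PATCH request.
requests.patch(
    "https://graph.microsoft.com/v1.0/users/<user-id>",
    headers={**headers, "Content-Type": "application/json"},
    json={"jobTitle": "Senior Engineer"},
)
```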


Microsoft Entra Privileged Identity Management (PIM) is incorrect because it is primarily used for managing and controlling access to critical Azure resources, particularly for privileged roles. It enables just-in-time privileged access and helps in securing and auditing administrative roles within Microsoft Entra ID. It is not designed for managing or retrieving user profiles and cannot be used for authentication or profile management in the context of general user profiles.

Microsoft Entra External ID is incorrect because it is typically used to manage access and collaboration with external users, such as partners, contractors, or other organizations. It facilitates identity management and collaboration with external identities but does not deal with retrieving or updating internal employee profiles or handling authentication for organizational users.

Microsoft Entra Connect is incorrect because it is primarily used to synchronize on-premises Active Directory with Microsoft Entra ID, enabling hybrid identity scenarios. While it is useful for directory synchronization, it does not directly interact with the API for retrieving or updating user profile details.

References:
https://learn.microsoft.com/en-us/graph/overview
https://learn.microsoft.com/en-us/entra/identity-platform/msal-overview

Check out this Microsoft Entra ID Cheat Sheet:
https://tutorialsdojo.com/microsoft-entra-id/

Question 10

You are an API developer for a financial technology company that provides real-time transaction processing via an Azure API Management (APIM) Standard tier instance named Agila. This APIM instance is configured with a managed gateway to securely expose APIs to external clients.

One of the APIs, TransactionAPI, interacts with a backend database that can only handle a limited volume of requests per minute due to licensing constraints. To prevent performance degradation, you need to enforce a policy that limits the number of API calls from an individual IP address to ensure fair usage while protecting the backend system from overload.

Which APIM policy should you apply to TransactionAPI to meet this requirement?

Correct Answer: 2

The rate-limit-by-key policy in Azure API Management (APIM) allows you to control the rate at which requests are processed by enforcing limits based on a specific key, such as an API key, subscription ID, or client IP address. This is particularly useful for preventing overuse or abuse of the system by a single client or a group of clients. For example, suppose the backend database can only handle a limited number of requests per minute. In that case, you can apply the rate-limit-by-key policy to restrict the number of API calls a specific IP address can make in a given time period. This helps maintain a stable and performant system while ensuring that no individual client overwhelms the resources.


This policy is highly customizable and allows you to specify the number of allowed requests within a defined time window. It also provides flexibility in defining different limits for different keys, allowing fine-grained control. By setting the policy on your API, you can protect backend services from high traffic volumes, ensure fair usage across clients, and avoid potential performance degradation or service outages. It also helps to balance the load across multiple users, making it ideal for scenarios where resources need to be carefully managed, such as in a transaction processing system with licensing constraints.

The rate-limit-by-key policy can be configured with options like request count, time window, and the specific key used to apply the limit. When a client exceeds the rate limit, APIM returns a 429 Too Many Requests status code, signaling that the client has exceeded the allowable request threshold. This ensures the API remains available for other clients while protecting the backend from potential overloads.
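A representative policy fragment keyed on the client IP address is shown below; the limit of 100 calls per 60 seconds is a placeholder to be tuned to the backend's licensing constraints.

```xml
<inbound>
    <!-- Allow at most 100 calls per 60 seconds from each client IP address. -->
    <rate-limit-by-key calls="100"
                       renewal-period="60"
                       counter-key="@(context.Request.IpAddress)" />
</inbound>
```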

set-backend-service is incorrect. This policy is primarily used to specify the backend service to which the API calls are forwarded. It is typically used when changing or configuring the backend service endpoint dynamically is needed. While this policy is useful for routing requests to different backend services, it does not address rate limiting or controlling the volume of requests.

rate-limit is incorrect. Although it can limit the rate of requests, it applies the limit per subscription rather than tying it to a caller-specific key such as an IP address or API key. This means it could limit the total number of requests for the entire API, not individual users or clients.

set-query-parameter is incorrect because it is only useful for modifying query parameters in requests, not for controlling the rate of API calls or limiting requests. This policy typically serves as a mechanism for manipulating or appending parameters to an incoming request, rather than enforcing rate limits. It does not address the core issue of protecting your backend from overload due to high traffic. Therefore, it is irrelevant to the scenario where rate limiting per IP address is required.

References:
https://learn.microsoft.com/en-us/azure/api-management/rate-limit-by-key-policy
https://learn.microsoft.com/en-us/microsoft-cloud/dev/dev-proxy/concepts/implement-rate-limiting-azure-api-management

Check out these Azure Cheat Sheets:
https://tutorialsdojo.com/microsoft-azure-cheat-sheets/

For more practice questions like these and to further prepare you for the actual AZ-204 Microsoft Azure Developer Associate exam, we recommend that you take our top-notch AZ-204 Microsoft Azure Developer Associate Practice Exams, which simulate the unique question types found in the real AZ-204 exam, such as drag-and-drop, dropdown, and hotspot.


Written by: Nikee Tomas

Nikee is a dedicated Web Developer at Tutorials Dojo. She has a strong passion for cloud computing and contributes to the tech community as an AWS Community Builder. She is continuously striving to enhance her knowledge and expertise in the field.
