
AZ-305 Designing Microsoft Azure Infrastructure Solutions Sample Exam Questions

Last updated on December 8, 2023

Here are 10 AZ-305 Designing Microsoft Azure Infrastructure Solutions practice exam questions to help you gauge your readiness for the actual exam.

Question 1

You have an application named Manila running on an Azure virtual machine scale set. The data used by the application is stored in a SQL Server on Azure Virtual Machines. The application is not used 24/7.

You need to recommend a disaster recovery solution for the SQL database server. The solution must meet the following requirements:

  • The recovery time objective (RTO) should be within 30 minutes.

  • The recovery point objective (RPO) should be within 8 hours.

  • Must ensure the capability to restore service in the case of a regional failure.

  • Minimize costs.

What should you include in the recommendation?

  1. Use Azure Backup to backup the SQL server daily.
  2. Implement availability sets.
  3. Implement SQL Server Always On Availability Groups.
  4. Use Azure Site Recovery for the SQL server.

Correct Answer: 4

Azure Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to a secondary location and access apps from there. After the primary location is running again, you can fail back to it.

Site Recovery can manage replication for:

– Azure VMs replicating between Azure regions
– Replication from Azure Public Multi-Access Edge Compute (MEC) to the region
– Replication between two Azure Public MECs
– On-premises VMs, Azure Stack VMs, and physical servers

Site Recovery orchestrates replication without intercepting application data. When you replicate to Azure, data is stored in Azure storage, with the resilience that it provides. Azure VMs are created from the replicated data only when failover occurs, which also helps keep resource costs low. Azure Site Recovery allows you to perform global disaster recovery: you can replicate and recover VMs between any two Azure regions in the world.

The minimum RTO of Azure Site Recovery is typically less than 15 minutes, and the minimum recovery point objective (RPO) is one hour for application consistency and five minutes for crash consistency, which fully satisfies the RTO and RPO requirements of the question.

Hence, the correct answer is: Use Azure Site Recovery for the SQL server.

The option that says: Use Azure Backup to backup the SQL server daily is incorrect because backing up only once a day cannot meet the recovery point objective (RPO) of 8 hours or the recovery time objective (RTO) of 30 minutes.

The option that says: Implement availability sets is incorrect because they only work within a single Azure data center. They are designed to protect your applications from downtime due to maintenance or hardware failures within a single data center, not in the event of a regional outage, which is one of the requirements of the question.

The option that says: Implement SQL Server Always On Availability Groups is incorrect. Although it could meet the RTO and RPO requirements, it would significantly increase costs. This solution requires additional SQL Server instances, which come with additional licensing and resource costs. While it technically fulfills the requirements, it is not the most cost-efficient solution, which is one of the requirements stated in the question.

References:
https://learn.microsoft.com/en-us/azure/site-recovery/site-recovery-overview
https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-how-to-enable-replication

Check out this Azure Virtual Machines Cheat Sheet:
https://tutorialsdojo.com/azure-virtual-machines/

Question 2

You have an on-premises MySQL database server that you are migrating to Azure Database for MySQL. The database administrator runs a script that shuts down the database nightly. You need to recommend a database solution that meets the following requirements:

  • The database must remain accessible if a data center fails.

  • Customized maintenance schedule.

  • Minimize costs.

What should you recommend?

  1. Flexible Server – Burstable
  2. Single Server – Basic
  3. Flexible Server – General purpose
  4. Single Server – General purpose

Correct Answer: 3

Azure Database for MySQL – Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones.

Flexible servers provide better cost optimization controls with the ability to stop and start the server, as well as a burstable compute tier that is ideal for workloads that don’t need full compute capacity continuously.

Flexible Server also supports reserved instances allowing you to save up to 63% cost, which is ideal for production workloads with predictable compute capacity requirements.

Take note of the following features and limitations of Azure Database for MySQL – Flexible Server:

– High availability isn’t supported in the burstable compute tier. You need to use the general purpose tier if you want your database to be highly available if the data center fails.

– Users can configure the patching schedule to be system managed or define their custom schedule.

– Zone-redundant high availability can be set only when the flexible server is created.
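
For illustration, here is a minimal sketch of provisioning a General Purpose flexible server with zone-redundant high availability and a custom maintenance window using the azure-mgmt-rdbms SDK. The subscription, resource group, server name, region, SKU, and credentials are illustrative assumptions, not values from the scenario.

```python
# A minimal sketch (assumed names, region, SKU, and credentials) of creating a
# General Purpose MySQL Flexible Server with zone-redundant HA and a custom
# maintenance window via the azure-mgmt-rdbms flexible-server client.
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.mysql_flexibleservers import MySQLManagementClient

subscription_id = "<subscription-id>"              # assumption
client = MySQLManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.servers.begin_create(
    resource_group_name="rg-demo",                 # assumed resource group
    server_name="mysql-flex-demo",                 # assumed server name
    parameters={
        "location": "southeastasia",               # assumed region
        "sku": {"name": "Standard_D2ds_v4", "tier": "GeneralPurpose"},
        "properties": {
            "administratorLogin": "dbadmin",
            "administratorLoginPassword": "<strong-password>",
            "version": "8.0.21",
            "storage": {"storageSizeGB": 128},
            # Zone-redundant HA isn't available in the Burstable tier and
            # must be chosen when the server is created.
            "highAvailability": {"mode": "ZoneRedundant"},
            # Custom maintenance window instead of system-managed patching.
            "maintenanceWindow": {
                "customWindow": "Enabled",
                "dayOfWeek": 1,                    # assumed day-of-week encoding
                "startHour": 2,
                "startMinute": 0,
            },
        },
    },
)
server = poller.result()
print(server.name, server.sku.tier)
```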

Hence, the correct answer is: Flexible Server – General purpose.

Flexible Server – Burstable is incorrect because the burstable tier does not support high availability. Use the general purpose tier instead.

Single Server – Basic and Single Server – General purpose are incorrect because, with Single Server, you can’t choose your own maintenance window due to automated patching. The requirement states that you need a customized maintenance schedule.

References:

https://learn.microsoft.com/en-us/azure/mysql/flexible-server/overview
https://learn.microsoft.com/en-us/azure/mysql/flexible-server/quickstart-create-server-portal

Tutorials Dojo’s AZ-305 Microsoft Azure Solutions Architect Expert Exam Study Guide:
https://tutorialsdojo.com/az-305-microsoft-azure-solutions-architect-expert-exam-study-guide/

Question 3

Your organization has an on-premises file server named Pampanga that hosts data files accessed by a critical application using the SMB protocol. You created an Azure subscription and were tasked to deploy a cost-effective disaster recovery solution for Pampanga.

You need to recommend a solution that ensures users can access the data files immediately if Pampanga experiences an outage.

What should you include in the recommendation?

  1. Use Azure Import/Export with Blob Storage
  2. Use Azure File Sync with Azure File Share
  3. Use AzCopy with Blob Storage
  4. Use Azure Backup with Recovery Services

Correct Answer: 2

Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol or Network File System (NFS) protocol. Azure Files SMB file shares are accessible from Windows, Linux, and macOS clients. Azure Files NFS file shares are accessible from Linux or macOS clients. Additionally, Azure Files SMB file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.

Use Azure File Sync to centralize your organization’s file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms Windows Server into a quick cache of your Azure file share. You can use any protocol that’s available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.

Azure File Sync provides the capability to synchronize your data continuously between your on-premises Windows Server and your Azure file share. This means that any changes made to the data on your on-premises server are promptly replicated on your Azure file share, thus ensuring that an up-to-date version of the data files is always available in Azure.

In the event of an outage, your users can simply mount the Azure file share and continue accessing the data files.
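
For illustration, here is a minimal sketch of reading a file directly from the Azure file share with the azure-storage-file-share SDK, which is how the synced data stays reachable from the cloud if the on-premises server is down. The connection string, share name, and file path are illustrative assumptions; this is not an Azure File Sync API.

```python
# A minimal sketch (assumed connection string, share, and file path) showing
# that data synced to the Azure file share remains directly accessible from
# the cloud even if the on-premises server goes down.
from azure.storage.fileshare import ShareFileClient

conn_str = "<storage-account-connection-string>"   # assumption
file_client = ShareFileClient.from_connection_string(
    conn_str=conn_str,
    share_name="pampanga-share",                   # assumed share name
    file_path="reports/2023/q4.xlsx",              # assumed file path
)

# Download the file straight from the Azure file share; no on-premises server needed.
with open("q4.xlsx", "wb") as handle:
    stream = file_client.download_file()
    handle.write(stream.readall())
```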

Hence, the correct answer is: Use Azure File Sync with Azure File Share.

Use Azure Import/Export with Blob Storage is incorrect because Azure Import/Export allows you to transfer large amounts of data to and from Azure Blob storage using physical storage devices. It is primarily used for one-time data transfers or periodic data migration scenarios.

Use AzCopy with Blob Storage is incorrect. Blob Storage is designed for object storage, primarily for unstructured data such as images, videos, documents, etc. It is not optimized for file-level access, and Blob Storage does not natively support SMB protocol-based file access.

Use Azure Backup with Recovery Services is incorrect. Azure Backup requires a restoration process, which can consume considerable time and cause significant downtime for your application during a disaster. Remember that you need your users to access the data files right away in the event of failure.

References:
https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction
https://learn.microsoft.com/en-us/azure/storage/file-sync/file-sync-deployment-guide

Check out this Azure Files Cheat Sheet:
https://tutorialsdojo.com/azure-file-storage/

Azure Blob vs Disk vs File Storage:
https://tutorialsdojo.com/azure-blob-vs-disk-vs-file-storage/

Question 4

An organization has 128 TB of data on-site that needs to be migrated to Azure. The organization’s current internet connection lacks the bandwidth to support this data migration.

Additionally, the company does not have the capability to provide the hard drives required to ship the data, and they do not want to handle the logistics of shipping the hard drives to Microsoft.

You need to recommend a solution to ensure a secure and efficient migration of data to Azure Blob Storage without negatively impacting day-to-day business operations.

What should you include in the recommendation?

  1. Azure Import/Export
  2. Azure Data Box
  3. AzCopy
  4. Storage Migration Service

Correct Answer: 2

The Microsoft Azure Data Box cloud solution lets you send terabytes of data into and out of Azure in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by shipping you a proprietary Data Box storage device. Each storage device has a maximum usable storage capacity of 80 TB and is transported to your datacenter through a regional carrier. The device has a rugged casing to protect and secure data during transit.

You can order the Data Box device via the Azure portal to import or export data from Azure. Once the device is received, you can quickly set it up using the local web UI. Depending on whether you will import or export data, copy the data from your servers to the device or from the device to your servers and ship the device back to Azure. If importing data to Azure, in the Azure datacenter, your data is automatically uploaded from the device to Azure. The entire process is tracked end-to-end by the Data Box service in the Azure portal.

Azure Data Box includes the following components:

– Data Box device: A physical device that provides primary storage, manages communication with cloud storage and helps to ensure the security and confidentiality of all data stored on the device. The Data Box device has a usable storage capacity of 80 TB.

– Data Box service: An extension of the Azure portal that lets you manage a Data Box device by using a web interface that you can access from different geographical locations. Use the Data Box service to perform daily administration of your Data Box device.

– Data Box local web-based user interface: A web-based UI that’s used to configure the device so it can connect to the local network and then register the device with the Data Box service.

Data Box is ideally suited for transferring data sizes larger than 40 TB. The service is especially useful in scenarios with limited internet connectivity.

With Data Box, Microsoft Azure provides the hardware to store your data and handles the shipping logistics.

Hence, the correct answer is: Azure Data Box.

Azure Import/Export is incorrect. With Azure Import/Export, you need to supply your own disk drives, and you also need to personally ship the drives to Microsoft. With Data Box, Microsoft provides the necessary hardware and takes care of the logistics.

AzCopy and Storage Migration Service are incorrect because the organization lacks the bandwidth to transfer the data to Azure. You need to use an offline migration route to migrate the 128 TB of data.

References:
https://learn.microsoft.com/en-us/azure/databox/data-box-overview
https://learn.microsoft.com/en-us/azure/databox/data-box-limits

Azure Blob Storage Cheat Sheet:
https://tutorialsdojo.com/azure-blob-storage/

Question 5

You are planning to migrate a SQL Server database to an Azure SQL Managed Instance.

Which tool should you use?

  1. Data Migration Assistant
  2. SQL Server Management Studio (SSMS)
  3. Azure Data Studio
  4. Azure Storage Explorer

Correct Answer: 3

Azure Database Migration Service is a core component of the Azure SQL Migration extension architecture. Database Migration Service provides a reliable migration orchestrator to support database migrations to Azure SQL. You can create an instance of Database Migration Service or use an existing instance by using the Azure SQL Migration extension in Azure Data Studio.

Azure SQL Migration extension for Azure Data Studio helps you to assess your database requirements, get the right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.

Use the Azure SQL Migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an Azure SQL Managed Instance with minimal downtime.

Hence, the correct answer is: Azure Data Studio.

SQL Server Management Studio (SSMS) is incorrect because it is simply a tool for managing SQL Server instances and databases. While you can use SSMS to connect and manage Azure SQL Managed Instances, it doesn’t have the functionality that the question requires.

Data Migration Assistant is incorrect. The Data Migration Assistant (DMA) assesses on-premises SQL Server instances for migration, identifies any migration-blocking compatibility issues, detects partially supported or deprecated features, and offers comprehensive recommendations to resolve these issues. However, it is primarily an assessment tool; for migrating databases to an Azure SQL Managed Instance, Microsoft recommends the Azure SQL Migration extension in Azure Data Studio, which is backed by Azure Database Migration Service.

Azure Storage Explorer is incorrect because this is only a tool that allows you to view and interact with your Azure Storage resources. It primarily provides visibility into your Blob, Queue, Table, and File Shares storage. 

References:
https://learn.microsoft.com/en-us/azure/dms/dms-overview
https://learn.microsoft.com/en-us/azure/dms/migration-using-azure-data-studio

Tutorials Dojo’s AZ-305 Microsoft Azure Solutions Architect Expert Exam Study Guide:
https://tutorialsdojo.com/az-305-microsoft-azure-solutions-architect-expert-exam-study-guide/

Question 6

You plan on creating a storage account named Baguio that will store files from an on-premises backup server that are accessed once a year. You must recommend a storage solution that meets the following requirements:

  • Files under 5 GB must be available within 10 hours after the request.

  • Minimize costs wherever you can.

What are the two possible Azure storage solutions that you can implement to satisfy the given requirements?

NOTE: Each correct selection is worth one point.

  1. Provision a standard general-purpose v2 storage account and set the default access tier to hot. Create a blob container and upload the files using AzCopy with the --block-blob-tier set to hot.
  2. Provision a standard general-purpose v2 storage account and set the default access tier to hot. Create a blob container and upload the files using AzCopy with the --block-blob-tier set to archive.
  3. Provision a standard general-purpose v2 storage account and set the default access tier to cool. Create a file share and upload the files using Microsoft Storage Explorer.
  4. Provision a standard general-purpose v2 storage account and set the default access tier to cool. Create a blob container, upload the files, and create a lifecycle management policy that will transition the files to the archive tier after one day.
  5. Provision a standard general-purpose v2 storage account and set the default access tier to archive. Create a blob container and upload the files.

Correct Answers: 2,4

Azure storage offers different access tiers, which allow you to store blob object data in the most cost-effective manner. The available access tiers include:

Hot – Optimized for storing data that is accessed frequently.

Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

Archive – Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements (on the order of hours). 

While a blob is in the archive access tier, it’s considered offline and can’t be read or modified. The blob metadata remains online and available, allowing you to list the blob and its properties. Reading and modifying blob data is only available with online tiers such as hot or cool.

To read data in archive storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and can take hours to complete.

A rehydration operation with Set Blob Tier is billed for data read transactions and data retrieval size. High-priority rehydration has higher operation and data retrieval costs compared to standard priority. High-priority rehydration shows up as a separate line item on your bill.
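
For illustration, here is a minimal sketch of rehydrating an archived blob with the azure-storage-blob SDK. The connection string, container, and blob names are illustrative assumptions.

```python
# A minimal sketch (assumed connection string, container, and blob names) of
# rehydrating an archived blob back to an online tier. Rehydration is
# asynchronous and can take hours; standard priority is the cheaper option.
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    conn_str="<storage-account-connection-string>",  # assumption
    container_name="backups",                        # assumed container
    blob_name="2023/yearly-dump.bak",                # assumed blob
)

# Move the blob from Archive back to the Cool tier at standard priority.
blob.set_standard_blob_tier("Cool", rehydrate_priority="Standard")

# The blob reports an archive status until the rehydration completes;
# poll its properties to check progress.
props = blob.get_blob_properties()
print(props.blob_tier, props.archive_status)
```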

The question states that the files are accessed once a year. From this information, you can eliminate the hot tier already, as storing files in this tier is much more expensive. This leaves you with the cool and archive tier.

Remember, whenever you hear rarely accessed files, immediately think of the cool tier and archive tier. If you can tolerate a few hours of waiting before a file is ready, use the archive tier. If not, go with the cool tier.

One of the requirements is that your files must be available within 10 hours of the request. This already hints that you should store your files in the archive tier. Remember that the archive tier is cheaper than the cool tier storage-wise.

A lifecycle management policy enables the automatic transitioning of data between different access tiers based on the age of the data. To transition files from the cool tier to the archive tier, the lifecycle management policy can be configured to move data that is infrequently accessed or reaches a certain age to the archive tier.

This is important because the default access tier of a storage account can only be set to hot or cool. In this way, you can upload the files to the cool tier and, one day after blob creation, automatically transition them to the archive tier.
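
For illustration, here is a minimal sketch of both approaches: uploading a blob directly into the Archive tier with the azure-storage-blob SDK (the SDK analogue of AzCopy's --block-blob-tier=archive), and the shape of a lifecycle management rule that tiers blobs to Archive one day after modification. The connection string, container, and file names are illustrative assumptions.

```python
# A minimal sketch (assumed names). First, upload a blob directly into the
# Archive tier, similar in effect to AzCopy's --block-blob-tier=archive.
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string(
    "<storage-account-connection-string>"            # assumption
)
container = service.get_container_client("baguio-backups")  # assumed container

with open("yearly-dump.bak", "rb") as data:
    container.upload_blob(
        name="2023/yearly-dump.bak",
        data=data,
        standard_blob_tier=StandardBlobTier.ARCHIVE,  # land straight in Archive
    )

# Second, the shape of a lifecycle management rule that moves block blobs to
# the Archive tier one day after modification. The policy itself is applied at
# the storage account level (portal, ARM template, management SDK, or CLI).
lifecycle_policy = {
    "rules": [
        {
            "enabled": True,
            "name": "archive-after-one-day",
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        "tierToArchive": {"daysAfterModificationGreaterThan": 1}
                    }
                },
            },
        }
    ]
}
```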

Hence, the correct answers are:

– Provision a standard general-purpose v2 storage account and set the default access tier to hot. Create a blob container and upload the files using AzCopy with the --block-blob-tier set to archive.

– Provision a standard general-purpose v2 storage account and set the default access tier to cool. Create a blob container, upload the files, and create a lifecycle management policy that will transition the files to the archive tier after one day.

The statement that says: Provision a standard general-purpose v2 storage account and set the default access tier to hot. Create a blob container and upload the files using AzCopy with the --block-blob-tier set to hot is incorrect. Storing your files in the hot tier will cost more compared to storing them in the archive tier. Remember that one of the requirements states that you need to minimize costs wherever you can.

The statement that says: Provision a standard general-purpose v2 storage account and set the default access tier to cool. Create a file share and upload the files using Microsoft Storage Explorer is incorrect. The files are rarely accessed, and storing them in a file share will cost more than keeping them in the archive tier. Azure Files also does not offer an archive tier.

The statement that says: Provision a standard general-purpose v2 storage account and set the default access tier to archive. Create a blob container and upload the files is incorrect. You can’t set the default access tier to archive. When creating a storage account, you can only set the default access tier to the hot or cool tier.

References:
https://azure.microsoft.com/en-us/services/storage/archive/
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-rehydration

Check out this Azure Archive Cheat Sheet:
https://tutorialsdojo.com/azure-archive-storage/

Question 7

A popular gaming company maintains its daily gameplay logs and user behavior data in Azure Table storage.

Every month, they plan to identify trends and user preferences from the data.

You need to recommend an automated solution to transfer this data to Azure Data Lake Storage for detailed analysis.

What should you recommend?

  1. Azure Import/Export
  2. Microsoft SQL Server Migration Assistant
  3. Azure Storage Explorer
  4. Azure Data Factory

Correct Answer: 4

Azure Data Factory is a cloud-based data integration service provided by Microsoft’s Azure platform. This service enables the creation of data-driven workflows to orchestrate and automate data movement and transformation. With Azure Data Factory, users can load data from various data sources, transform the data based on custom-defined business rules, and publish the output data to data stores such as SQL Server, Azure SQL Database, and Azure Data Lake Storage, among others.

The requirement states that the gaming company has its data in Azure Table storage and needs to move it to Azure Data Lake Storage every month for analysis.

Azure Data Factory can accomplish this by creating a scheduled data pipeline between Azure Table storage and Azure Data Lake Storage.

It ensures the process is automated, efficient, and scalable to handle large data sets. This way, the company can focus on deriving insights from their data instead of managing the data transfer process.
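
For a sense of what the monthly copy actually does, here is a small SDK-level sketch that reads entities from Azure Table storage and lands them as a CSV file in Data Lake Storage Gen2. This is not Azure Data Factory itself; in Data Factory this would be a Copy activity on a schedule trigger. The connection strings, table, filesystem, and path names are illustrative assumptions.

```python
# Not Azure Data Factory itself: a manual SDK illustration (assumed names) of
# copying Table storage entities into ADLS Gen2 as a CSV file.
import csv
import io

from azure.data.tables import TableServiceClient
from azure.storage.filedatalake import DataLakeServiceClient

table_conn_str = "<table-storage-connection-string>"   # assumption
lake_conn_str = "<data-lake-connection-string>"        # assumption

# Read gameplay log entities from Table storage.
tables = TableServiceClient.from_connection_string(table_conn_str)
table = tables.get_table_client("GameplayLogs")        # assumed table name
entities = list(table.list_entities())

# Serialize the entities to CSV in memory (assumes entities share one schema).
buffer = io.StringIO()
if entities:
    writer = csv.DictWriter(buffer, fieldnames=list(entities[0].keys()))
    writer.writeheader()
    writer.writerows(entities)

# Write the CSV into an ADLS Gen2 filesystem for analysis.
lake = DataLakeServiceClient.from_connection_string(lake_conn_str)
fs = lake.get_file_system_client("analytics")          # assumed filesystem
file = fs.get_file_client("gameplay/2023-12.csv")      # assumed output path
file.upload_data(buffer.getvalue(), overwrite=True)
```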

Hence, the correct answer is: Azure Data Factory.

Azure Import/Export is incorrect because this is simply used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure data center.

Microsoft SQL Server Migration Assistant is incorrect because this is mainly used for migrating databases from several types of database systems to SQL Server, not for automated, recurring data transfers.

Azure Storage Explorer is incorrect. This is a standalone app from Microsoft that allows you to easily work with Azure Storage data on both Windows and macOS. It’s more of a manual tool for exploring your data, uploading, and downloading files.

References:
https://learn.microsoft.com/en-gb/azure/data-factory/introduction
https://learn.microsoft.com/en-gb/azure/data-factory/quickstart-create-data-factory

Question 8

A rapidly evolving online news platform relies on Azure Storage to store various forms of digital media content.

They operate predominantly within one region and require that their data must be available if a single availability zone in the region fails.

While they want to avoid losing their media content, they have systems in place to regenerate this content if absolutely necessary.

You need to recommend an Azure Storage redundancy option that optimizes cost while ensuring data availability.

What should you recommend?

  1. Locally-redundant storage (LRS)
  2. Zone-redundant storage (ZRS)
  3. Geo-redundant storage (GRS)
  4. Geo-zone-redundant storage (GZRS)

Correct Answer: 2

Data in an Azure Storage account is always replicated three times in the primary region. Azure Storage offers four options for how your data is replicated:

  1. Locally redundant storage (LRS) copies your data synchronously three times within a single physical location in the primary region. LRS is the least expensive replication option but is not recommended for applications requiring high availability.
  2. Zone-redundant storage (ZRS) copies your data synchronously across three Azure availability zones in the primary region. It is recommended for applications requiring high availability.
  3. Geo-redundant storage (GRS) copies your data synchronously three times within a single physical location in the primary region using LRS. It then copies your data asynchronously to a single physical location in a secondary region that is hundreds of miles away from the primary region.
  4. Geo-zone-redundant storage (GZRS) copies your data synchronously across three Azure availability zones in the primary region using ZRS. It then copies your data asynchronously to a single physical location in the secondary region.

An Availability Zone is a high-availability offering that protects applications and data from data center failures. Each Azure region is split into multiple Availability Zones, with each zone made up of one or more data centers equipped with independent power, cooling, and networking.

Zone-Redundant Storage (ZRS) replicates your data synchronously across three different Availability Zones in the same region. This ensures that a copy of your data is always available even if one availability zone (which could be composed of one or more data centers) fails completely.
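
For illustration, here is a minimal sketch of provisioning a general-purpose v2 storage account with the Standard_ZRS SKU using the azure-mgmt-storage SDK. The subscription, resource group, account name, and region are illustrative assumptions.

```python
# A minimal sketch (assumed subscription, resource group, account name, and
# region) of creating a StorageV2 account with zone-redundant replication.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    resource_group_name="rg-media",        # assumed resource group
    account_name="newsmediazrs01",         # assumed, must be globally unique
    parameters={
        "location": "eastus",              # assumed region
        "kind": "StorageV2",
        "sku": {"name": "Standard_ZRS"},   # replicate across three zones
    },
)
account = poller.result()
print(account.name, account.sku.name)
```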

Take note that you also need to optimize costs.

Hence, the correct answer is: Zone-redundant storage (ZRS).

Locally-redundant storage (LRS) is incorrect. LRS only replicates your data three times within a single data center in the region. Although LRS costs less than other options, it doesn’t protect against one or more availability zone failures.

Geo-redundant storage (GRS) and Geo-zone-redundant storage (GZRS) are incorrect. Even though they meet the technical requirement of data availability in the event of a single availability zone failure, they do not satisfy the cost-efficiency requirement. You can meet both the technical and cost requirements with zone-redundant storage (ZRS).

References:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-overview
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy

Check out this Azure Storage Overview Cheat Sheet:
https://tutorialsdojo.com/azure-storage-overview/

Locally Redundant Storage (LRS) vs Zone-Redundant Storage (ZRS) vs Geo-Redundant Storage (GRS):
https://tutorialsdojo.com/locally-redundant-storage-lrs-vs-zone-redundant-storage-zrs/

Question 9

Your organization wants to implement customer-managed Transparent Data Encryption (TDE) for Azure SQL databases to enhance data security. It is crucial to use the strongest encryption strength available for maximum protection.

Which encryption algorithm with the strongest encryption strength should you recommend?

  1. RSA 4096
  2. AES 256
  3. RSA 3072
  4. AES 128

Correct Answer: 3

Azure SQL is a family of managed, secure, and intelligent products that use the SQL Server database engine in the Azure cloud.

– Azure SQL Database: Support modern cloud applications on an intelligent, managed database service that includes serverless compute.

– Azure SQL Managed Instance: Modernize your existing SQL Server applications at scale with an intelligent fully managed instance as a service, with almost 100% feature parity with the SQL Server database engine. Best for most migrations to the cloud.

– SQL Server on Azure VMs: Lift-and-shift your SQL Server workloads with ease and maintain 100% SQL Server compatibility and operating system-level access.

Azure SQL transparent data encryption (TDE) with customer-managed key (CMK) enables Bring Your Own Key (BYOK) scenario for data protection at rest, and allows organizations to implement separation of duties in the management of keys and data. With customer-managed TDE, the customer is responsible for and in full control of key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing of operations on keys.

Remember, it’s important to note that the key size directly affects the strength and security of the encryption. Larger key sizes generally offer stronger security but may require more computational resources for encryption and decryption operations. So it is crucial to find the balance between performance and security.

Take note that the TDE protector can only be an asymmetric RSA or RSA HSM key. The supported key lengths are 2048 bits and 3072 bits.
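
For illustration, here is a minimal sketch of creating an RSA 3072-bit key in Azure Key Vault with the azure-keyvault-keys SDK; this key could then be assigned as the server's TDE protector (that step is not shown here). The vault URL and key name are illustrative assumptions.

```python
# A minimal sketch (assumed vault URL and key name) of creating an RSA 3072-bit
# key that could serve as a customer-managed TDE protector.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://contoso-vault.vault.azure.net",  # assumed vault
    credential=DefaultAzureCredential(),
)

# TDE protectors must be asymmetric RSA (or RSA HSM) keys of 2048 or 3072 bits.
key = client.create_rsa_key("tde-protector", size=3072)
print(key.name, key.key_type)
```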

Hence, the correct answer is: RSA 3072.

RSA 4096, AES 256, and AES 128 are incorrect because these keys are not supported as a TDE protector by Azure SQL transparent data encryption (TDE). Only RSA and RSA HSM keys of 2048 or 3072 bits are supported.

References:
https://learn.microsoft.com/en-us/azure/azure-sql/azure-sql-iaas-vs-paas-what-is-overview
https://learn.microsoft.com/en-us/azure/azure-sql/database/transparent-data-encryption-byok-overview

Question 10

You need to recommend a migration plan for an on-premises database. The solution must meet the following specifications:

  • Database storage: 84 TB

  • Data retention for at least 10 years.

  • Data must be backed up to a secondary region

  • Minimize database administration effort

Select the correct answer from the drop-down list of options. Each correct selection is worth one point.

 

Correct Answer: 

Azure Service: Azure SQL Database

Service Tier: Hyperscale

Azure SQL is a family of managed, secure, and intelligent products that use the SQL Server database engine in the Azure cloud.

– Azure SQL Database: Support modern cloud applications on an intelligent, managed database service that includes serverless compute.

– Azure SQL Managed Instance: Modernize your existing SQL Server applications at scale with an intelligent fully managed instance as a service, with almost 100% feature parity with the SQL Server database engine. Best for most migrations to the cloud.

– SQL Server on Azure VMs: Lift-and-shift your SQL Server workloads with ease and maintain 100% SQL Server compatibility and operating system-level access.

The Hyperscale service tier in Azure SQL Database provides the following additional capabilities:

– Rapid Scale up – you can, in constant time, scale up your compute resources to accommodate heavy workloads when needed and then scale the compute resources back down when not needed.

– Rapid scale out – you can provision one or more read-only replicas for offloading your read workload and for use as hot standbys.

– Automatic scale-up, scale-down, and billing for compute based on usage with serverless compute (in preview).

– Optimized price/performance for a group of Hyperscale databases with varying resource demands with elastic pools (in preview).

– Auto-scaling storage with support for up to 100 TB of database or elastic pool size.

– Fast database backups (based on file snapshots) regardless of size with no IO impact on compute resources.

– Fast database restores or copies (based on file snapshots) in minutes rather than hours or days.
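
For illustration, here is a minimal sketch of creating an Azure SQL Database in the Hyperscale service tier with geo-redundant backup storage using the azure-mgmt-sql SDK. The subscription, resource group, server, database name, region, and SKU are illustrative assumptions.

```python
# A minimal sketch (assumed names, region, and SKU) of creating a Hyperscale
# database with backups replicated to a secondary region.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.databases.begin_create_or_update(
    resource_group_name="rg-data",          # assumed resource group
    server_name="sqlsrv-demo",              # assumed logical server
    database_name="warehouse-db",           # assumed database name
    parameters={
        "location": "eastus",               # assumed region
        "sku": {"name": "HS_Gen5", "tier": "Hyperscale", "capacity": 4},
        "properties": {
            # Back up to a paired secondary region.
            "requestedBackupStorageRedundancy": "Geo",
        },
    },
)
db = poller.result()
print(db.name, db.sku.tier)
```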

Let’s discuss the requirements of the scenario:

– Database storage: 84 TB. Only Azure SQL Database Hyperscale can support database storage of up to 100 TB.

– Data retention for at least 10 years. All three Azure SQL services satisfy this requirement.

– Data must be backed up to a secondary region. All three Azure SQL services satisfy this requirement.

– Minimize database administration effort. Only Azure SQL Database and Azure SQL Managed Instance offer fully managed database services, which means Microsoft handles infrastructure management, including patching, backups, high availability, and automated management tasks.

Only Azure SQL Database Hyperscale satisfies all the requirements. The differentiating feature of Azure SQL Database Hyperscale compared to the other two Azure services is that it can handle the 84 TB database storage requirement.

Therefore, you have to use Azure SQL Database as your service and Hyperscale as the service tier because it is the only service and service tier combination that satisfies all the requirements.

References:
https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale-frequently-asked-questions-faq
https://learn.microsoft.com/en-us/azure/azure-sql/database/service-tier-hyperscale

For more practice questions like these and to further prepare you for the actual AZ-305 Designing Microsoft Azure Infrastructure Solutions exam, we recommend that you take our top-notch AZ-305 Designing Microsoft Azure Infrastructure Solutions Practice Exams, which simulate the real unique question types in the AZ-305 exam such as drag and drop, dropdown, and hotspot.

Also, check out our AZ-305 Microsoft Azure Solutions Architect Expert exam study guide here.
