70-765 Exam Questions - Online Test



Exam Code: 70-765, Exam Name: Provisioning SQL Databases (beta), Certification Provider: Microsoft Certification.

Free demo questions for the Microsoft 70-765 exam below:

NEW QUESTION 1
You administer a single server that contains a Microsoft SQL Server 2014 default instance. You plan to install a new application that requires the deployment of a database on the server. The application login requires sysadmin permissions.
You need to ensure that the application login is unable to access other production databases. What should you do?

  • A. Use the SQL Server default instance and configure an affinity mask.
  • B. Install a new named SQL Server instance on the server.
  • C. Use the SQL Server default instance and enable Contained Databases.
  • D. Install a new default SQL Server instance on the server.

Answer: B

Explanation: References:
https://docs.microsoft.com/en-us/sql/sql-server/install/work-with-multiple-versions-and-instances-of-sql-server

NEW QUESTION 2
You are tuning the performance of a virtual machine that hosts a Microsoft SQL Server instance. The virtual machine originally had four CPU cores and now has 32 CPU cores.
The SQL Server instance uses the default settings and has an OLTP database named db1. The largest table in db1 is a key value store table named table1.
Several reports use the PIVOT statement and access more than 100 million rows in table1. You discover that when the reports run, there are PAGELATCH_IO waits on PFS pages 2:1:1, 2:2:1, 2:3:1, and 2:4:1 within the
tempdb database.
You need to prevent the PAGELATCH_IO waits from occurring.
Solution: You rewrite the queries to use aggregates instead of PIVOT statements. Does this meet the goal?

  • A. Yes
  • B. No

Answer: B

Explanation: Rewriting the queries does not address the PFS-page contention in tempdb. Instead, you can add more data files to tempdb so that page allocations are spread across multiple files.
References: https://www.mssqltips.com/sqlservertip/3088/explanation-of-sql-server-io-and-latches/
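Adding tempdb data files can be sketched in Transact-SQL as below; the file paths and sizes are illustrative assumptions, not values taken from the question.

```sql
-- Add data files to tempdb so PFS/GAM/SGAM allocation contention
-- is spread across multiple files (paths and sizes are illustrative).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\tempdb\tempdb2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'E:\tempdb\tempdb3.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```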

NEW QUESTION 3
You administer a Microsoft SQL Server 2014 database instance.
You plan to migrate the database to Windows Azure SQL Database. You verify that all objects contained in the database are compatible with Windows Azure SQL Database.
You need to ensure that database users and required server logins are migrated to Windows Azure SQL Database.
What should you do?

  • A. Use the copy database wizard
  • B. Use the Database Transfer wizard
  • C. Use SQL Server Management Studio to deploy the database to Windows Azure SQL Database
  • D. Backup the database from the local server and restore it to Windows Azure SQL Database

Answer: C

Explanation: You can use either SQL Server Management Studio or Transact-SQL to deploy the database to Windows Azure SQL Database.
References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-cloud-migrate

NEW QUESTION 4
You have Microsoft SQL Server on a Microsoft Azure virtual machine.
You have two Windows accounts named serviceAccount1 and ServiceAccount2. The SQL Server Agent runs as ServiceAccount1.
You need to run SQL Server Agent job steps by using ServiceAccount2. Which cmdlet should you run first?

  • A. Set-ADServiceAccount
  • B. Set-SqlCredential
  • C. New-ADServiceAccount
  • D. New-SqlCredential

Answer: D

Explanation: To run SQL Server Agent job steps under a different Windows account, you first create a credential that stores the identity and password of ServiceAccount2; the New-SqlCredential cmdlet creates that SQL Server credential object. The credential is then mapped to a SQL Server Agent proxy, and the job steps are configured to run under the proxy.
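Creating a credential and mapping it to a SQL Server Agent proxy is the standard way to run job steps under a different Windows account. A Transact-SQL sketch of the same setup; the account name, password, and proxy name are illustrative placeholders.

```sql
-- Create a credential for ServiceAccount2 (identity and secret are placeholders).
CREATE CREDENTIAL SvcAcct2Credential
    WITH IDENTITY = 'CONTOSO\ServiceAccount2', SECRET = '<password>';

-- Map the credential to a SQL Server Agent proxy and allow the proxy
-- to be used for CmdExec job steps.
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'SvcAcct2Proxy',
    @credential_name = N'SvcAcct2Credential';
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SvcAcct2Proxy',
    @subsystem_name = N'CmdExec';
```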

NEW QUESTION 5
You plan to deploy an AlwaysOn failover cluster in Microsoft Azure. The cluster has a Service Level Agreement (SLA) that requires an uptime of at least 99.95 percent.
You need to ensure that the cluster meets the SLA.
Which cmdlet should you run before you deploy the virtual machine?

  • A. New-AzureRmAvailabilitySet
  • B. New-AzureRmLoadBalancer
  • C. New-AzureRmSqlDatabaseSecondary
  • D. New-AzureRmSqlElasticPool
  • E. New-AzureRmVM
  • F. New-AzureRmSqlServer
  • G. New-AzureRmSqlDatabaseCopy
  • H. New-AzureRmSqlServerCommunicationLink

Answer: A

Explanation: To qualify for the 99.95 percent virtual machine connectivity SLA, all virtual machines in the cluster must be deployed in the same availability set. The availability set must exist before the virtual machines are created, so New-AzureRmAvailabilitySet is run first.

NEW QUESTION 6
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have Microsoft SQL Server on a Microsoft Azure virtual machine that has a database named DB1. You discover that DB1 experiences WRITE_LOG waits that are longer than 50 ms.
You need to reduce the WRITE_LOG wait time. Solution: Add additional data files to DB1. Does this meet the goal?

  • A. Yes
  • B. No

Answer: B

Explanation: In a transactional system, a high WRITELOG wait type indicates a performance bottleneck on the transaction log. Adding data files does not reduce log-write latency; instead, place the transaction log file on a separate, fast disk so that log writes do not compete with data file I/O.
References: https://atdhebuja.wordpress.com/2021/06/20/resolving-sql-server-transaction-log-waits/
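Moving the log file to a dedicated disk can be sketched as follows; the logical file name and drive path are illustrative assumptions.

```sql
-- Repoint DB1's log file at a dedicated disk (names/paths are illustrative).
ALTER DATABASE DB1 SET OFFLINE;
ALTER DATABASE DB1
MODIFY FILE (NAME = DB1_log, FILENAME = 'L:\SQLLog\DB1_log.ldf');
-- Physically copy the .ldf file to the new location, then bring it back:
ALTER DATABASE DB1 SET ONLINE;
```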

NEW QUESTION 7
Your company has several Microsoft Azure SQL Database instances used within an elastic pool. You need to obtain a list of databases in the pool.
How should you complete the commands? To answer, drag the appropriate segments to the correct targets. Each segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
(Exhibit)

    Answer:

    Explanation: References:
    https://docs.microsoft.com/en-us/cli/azure/sql/elastic-pool?view=azure-cli-latest#az-sql-elastic-pool-list-dbs

    NEW QUESTION 8
    You administer a Microsoft SQL Server 2014 instance.
You need to configure a new database to support FileTables. What should you do? Choose all that apply.

    • A. Disable FILESTREAM on the Database.
    • B. Enable FILESTREAM on the Server Instance.
    • C. Configure the Database for Partial Containment.
    • D. Create a non-empty FILESTREAM file group.
    • E. Enable Contained Databases on the Server Instance.
    • F. Set the FILESTREAM directory name on the Database.

    Answer: BDF

    Explanation: B: FileTables extend the capabilities of the FILESTREAM feature of SQL Server. Therefore you have to enable FILESTREAM for file I/O access at the Windows level and on the instance of SQL Server before you can create and use FileTables.
D: Before you can create FileTables in a database, the database must have a FILESTREAM filegroup.
F: Specifying a directory for FileTables at the database level:
    When you enable non-transactional access to files at the database level, you can optionally provide a directory name at the same time by using the DIRECTORY_NAME option. If you do not provide a directory name when you enable non-transactional access, then you have to provide it later before you can create FileTables in the database.
    References:
    https://docs.microsoft.com/en-us/sql/relational-databases/blob/enable-the-prerequisites-for-filetable
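The database-level prerequisites (B, D, F) can be sketched in one statement, assuming FILESTREAM is already enabled on the instance; the database name, paths, and directory name are illustrative.

```sql
-- Create a database with a non-empty FILESTREAM filegroup and a
-- FileTable directory name (all names and paths are illustrative).
CREATE DATABASE FileTableDB
ON PRIMARY (NAME = FTDB_data, FILENAME = 'E:\Data\FTDB.mdf'),
FILEGROUP FSGroup CONTAINS FILESTREAM
    (NAME = FTDB_fs, FILENAME = 'E:\Data\FTDB_fs')
LOG ON (NAME = FTDB_log, FILENAME = 'E:\Data\FTDB_log.ldf')
WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL,
                 DIRECTORY_NAME = N'FileTableDir');
```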

    NEW QUESTION 9
    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets stated goals.
    Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.
    You need to create the templates.
    Solution: You create the desired SQL Server configuration in an Azure Resource Group, then export the Resource Group template and save it to the Templates Library.
    Does the solution meet the goal?

    • A. Yes
    • B. No

    Answer: B

Explanation: An Azure Resource Manager template consists of JSON and expressions that you can use to construct values for your deployment.
A good JSON editor, not a Resource Group template export, can simplify the task of creating templates.
Note: In its simplest structure, an Azure Resource Manager template contains the following elements:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
References: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

    NEW QUESTION 10
    Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
    You have deployed several GS-series virtual machines (VMs) in Microsoft Azure. You plan to deploy Microsoft SQL Server in a development environment. Each VM has a dedicated disk for backups.
You need to back up a database to the local disk on a VM. The backup must be replicated to another region.
    Which storage option should you use?

    • A. Premium P10 disk storage
  • B. Premium P20 disk storage
    • C. Premium P30 disk storage
    • D. Standard locally redundant disk storage
    • E. Standard geo-redundant disk storage
    • F. Standard zone redundant blob storage
    • G. Standard locally redundant blob storage
    • H. Standard geo-redundant blob storage

    Answer: E

Explanation: Geo-redundant storage replicates your data to a secondary region that is hundreds of miles away from the primary region, which satisfies the requirement that the backup be replicated to another region.
Note: SQL Database automatically creates database backups and uses Azure read-access geo-redundant storage (RA-GRS) to provide geo-redundancy. These backups are created automatically and at no additional charge. Database backups are an essential part of any business continuity and disaster recovery strategy because they protect your data from accidental corruption or deletion.
References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-automated-backups

    NEW QUESTION 11
A company runs Microsoft SQL Server 2017 in an on-premises environment. The databases are memory-optimized.
    An integrity check of a database has failed.
    You need to ensure that the data is healthy and passes an integrity check. What should you do?

    • A. Run the checktable Transact-SQL statement.
    • B. Clear the buffer of the database.
    • C. Restore from a verified backup.
    • D. Run the cleantable Transact-SQL statement.

    Answer: C

Explanation: DBCC CHECKDB is not supported for memory-optimized tables. The integrity of the on-disk checkpoint files is verified by taking a backup of the MEMORY_OPTIMIZED_DATA filegroup; if an integrity check fails, the only way to return to a healthy state is to restore from a verified backup.
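A verification sketch for a database with memory-optimized tables; the database name and backup path are illustrative assumptions.

```sql
-- Taking a backup with CHECKSUM validates the checksums of the
-- memory-optimized checkpoint files as they are read.
BACKUP DATABASE InMemDB
TO DISK = 'E:\Backup\InMemDB.bak'
WITH CHECKSUM;

-- RESTORE VERIFYONLY then checks the backup media itself.
RESTORE VERIFYONLY
FROM DISK = 'E:\Backup\InMemDB.bak'
WITH CHECKSUM;
```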

    NEW QUESTION 12
    HOTSPOT
    You need to optimize SRV1.
    What configuration changes should you implement? To answer, select the appropriate option from each list in the answer area.
(Exhibit)

      Answer:

Explanation: From the scenario: SRV1 has 16 logical cores and hosts a SQL Server instance that supports a mission-critical application. The application has approximately 30,000 concurrent users and relies heavily on the use of temporary tables.
      Box 1: Change the size of the tempdb log file.
      The size and physical placement of the tempdb database can affect the performance of a system. For example, if the size that is defined for tempdb is too small, part of the system- processing load may be taken up with autogrowing tempdb to the size required to support the workload every time you restart the instance of SQL Server. You can avoid this overhead by increasing the sizes of the tempdb data and log file.
      Box 2: Add additional tempdb files.
      Create as many files as needed to maximize disk bandwidth. Using multiple files reduces tempdb storage contention and yields significantly better scalability. However, do not create too many files because this can reduce performance and increase management overhead. As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary.
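The two changes can be sketched in Transact-SQL; the sizes and the single added file are illustrative assumptions for a 16-core server, where the guideline above would suggest adding files up to the core count.

```sql
-- Box 1: pre-size the tempdb log file so it does not autogrow on
-- every restart (size is illustrative).
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 4096MB);

-- Box 2: add tempdb data files (one shown; repeat per logical core
-- as a starting guideline; path is illustrative).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdb2.ndf', SIZE = 1024MB);
```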

Topic 5, Contoso, Ltd Case Study 2

Background
      You are the database administrator for Contoso, Ltd. The company has 200 offices around the world. The company has corporate executives that are located in offices in London, New York, Toronto, Sydney, and Tokyo.
      Contoso, Ltd. has a Microsoft Azure SQL Database environment. You plan to deploy a new
      Azure SQL Database to support a variety of mobile applications and public websites.
      The company is deploying a multi-tenant environment. The environment will host Azure SQL Database instances. The company plans to make the instances available to internal departments and partner companies. Contoso is in the final stages of setting up networking and communications for the environment.
      Existing Contoso and Customer instances need to be migrated to Azure virtual machines (VM) according to the following requirements:
      The company plans to deploy a new order entry application and a new business intelligence and analysis application. Each application will be supported by a new database. Contoso creates a new Azure SQL database named Reporting. The database will be used to support the company's financial reporting requirements. You associate the database with the Contoso Azure Active Directory domain.
      Each location database for the data entry application may have an unpredictable amount of activity. Data must be replicated to secondary databases in Azure datacenters in different regions.
      To support the application, you need to create a database named contosodb1 in the existing environment.
      Objects
(Exhibit)
      Database
      The contosodb1 database must support the following requirements:
      Application
      For the business intelligence application, corporate executives must be able to view all data in near real-time with low network latency.
      Contoso has the following security, networking, and communications requirements:

      NEW QUESTION 13
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets stated goals.
      You have a mission-critical application that stores data in a Microsoft SQL Server instance. The application runs several financial reports. The reports use a SQL Server-authenticated login named Reporting_User. All queries that write data to the database use Windows authentication.
      Users report that the queries used to provide data for the financial reports take a long time to complete. The queries consume the majority of CPU and memory resources on the database server. As a result, read-write queries for the application also take a long time to complete.
      You need to improve performance of the application while still allowing the report queries to finish.
      Solution: You configure the Resource Governor to limit the amount of memory, CPU, and IOPS used for the pool of all queries that the Reporting_user login can run concurrently.
      Does the solution meet the goal?

      • A. Yes
      • B. No

      Answer: A

Explanation: SQL Server Resource Governor is a feature that you can use to manage SQL Server workload and system resource consumption. Resource Governor enables you to specify limits on the amount of CPU, physical IO, and memory that incoming application requests can use.
References: https://msdn.microsoft.com/en-us/library/bb933866.aspx
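A Resource Governor sketch for this scenario; the pool and group names, the limit values, and the classifier function name are illustrative assumptions, while the Reporting_User login comes from the question.

```sql
-- Create a resource pool that caps reporting queries
-- (limits and names are illustrative).
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30,
          MAX_IOPS_PER_VOLUME = 500);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO

-- Classifier function (must live in master) routes the
-- Reporting_User login into the reporting workload group.
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'Reporting_User'
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```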

      NEW QUESTION 14
Settings:
VM size: D3
Storage location: Drive E
Storage type: Standard
Tempdb location: Drive C

The workload on this instance makes heavy use of the tempdb database.
You need to maximize the performance of the tempdb database.
Solution: You use a D-series VM and store the tempdb database on drive D. Does this meet the goal?

      • A. Yes
      • B. No

      Answer: A

      Explanation: For D-series, Dv2-series, and G-series VMs, the temporary drive on these VMs is SSD-based. If your workload makes heavy use of TempDB (such as temporary objects or complex joins), storing TempDB on the D drive could result in higher TempDB throughput and lower TempDB latency.
      References:
      https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-sql-performan

      NEW QUESTION 15
      DRAG DROP
      You deploy a new Microsoft Azure SQL Database instance to support a variety of mobile applications and public websites. You plan to create a new security principal named User1.
      The principal must have access to select all current and future objects in a database named Reporting. The activity and authentication of the database user must be limited to the Reporting database.
      You need to create the new security principal.
      Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
(Exhibit)

        Answer:

        Explanation: Step 1, Step 2:
First you need to create a login for SQL Azure; its syntax is as follows: CREATE LOGIN username WITH PASSWORD = 'password';
This command needs to run in the master database. Only afterwards can you run commands to create a user in the database.
        Step 3:
        Users are created per database and are associated with logins. You must be connected to the database in where you want to create the user. In most cases, this is not the master database. Here is some sample Transact-SQL that creates a user:
CREATE USER readonlyuser FROM LOGIN readonlylogin;
References: https://azure.microsoft.com/en-us/blog/adding-users-to-your-sql-azure-database/
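Applied to the scenario, the full sequence might look like the sketch below. The login name and password are placeholders, and using the db_datareader role to cover all current and future objects is an assumption consistent with the stated requirement.

```sql
-- 1) In the master database: create the login (password is a placeholder).
CREATE LOGIN User1Login WITH PASSWORD = '<strong password>';

-- 2) Connected to the Reporting database: create the user for that login.
CREATE USER User1 FROM LOGIN User1Login;

-- 3) Grant read access to all current and future objects in Reporting.
ALTER ROLE db_datareader ADD MEMBER User1;
```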

        NEW QUESTION 16
        Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
        You have deployed a GS-series virtual machine (VM) in Microsoft Azure. You plan to deploy Microsoft SQL Server.
        You need to deploy a 30 megabyte (MB) database that requires 100 IOPS to be guaranteed while minimizing costs.
        Which storage option should you use?

        • A. Premium P10 disk storage
        • B. Premium P20 disk storage
        • C. Premium P30 disk storage
        • D. Standard locally redundant disk storage
        • E. Standard geo-redundant disk storage
        • F. Standard zone redundant blob storage
        • G. Standard locally redundant blob storage
        • H. Standard geo-redundant blob storage

        Answer: A

        Explanation: Premium Storage Disks Limits
        When you provision a disk against a Premium Storage account, how much input/output operations per second (IOPS) and throughput (bandwidth) it can get depends on the size of the disk. Currently, there are three types of Premium Storage disks: P10, P20, and P30. Each one has specific limits for IOPS and throughput as specified in the following table:
(Exhibit)
References: https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage

        NEW QUESTION 17
You plan to deploy two new Microsoft Azure SQL Database instances. One instance will support a data entry application. The other instance will support the company’s business intelligence efforts. The databases will be accessed by mobile applications from public IP addresses.
        You need to ensure that the database instances meet the following requirements:
        The database administration team must receive alerts for any suspicious activity in the data entry database, including potential SQL injection attacks.
        Executives around the world must have access to the business intelligence application.
        Sensitive data must never be transmitted. Sensitive data must not be stored in plain text in the database. In the table below, identify the feature that you must implement for each database.
NOTE: Make only one selection in each column. Each correct selection is worth one point.
(Exhibit)

          Answer:

          Explanation: Data entry: Threat Detection
          SQL Threat Detection provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access patterns.
          Business intelligence: Dynamic Data Masking
Dynamic data masking (DDM) limits sensitive data exposure by masking it to non-privileged users. It can be used to greatly simplify the design and coding of security in your application.
          References:
          https://docs.microsoft.com/en-us/azure/sql-database/sql-database-threat-detection https://docs.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking
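A dynamic data masking sketch; the table and column names are illustrative assumptions, not objects from the question.

```sql
-- Mask an email column for non-privileged users
-- (table and column names are illustrative).
ALTER TABLE dbo.Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Expose only the last four digits of a card number.
ALTER TABLE dbo.Customers
ALTER COLUMN CardNumber
ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');
```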

          NEW QUESTION 18
          Background
          You manage the Microsoft SQL Server environment for a company that manufactures and sells automobile parts.
          The environment includes the following servers: SRV1 and SRV2. SRV1 has 16 logical cores and hosts a SQL Server instance that supports a mission-critical application. The application has approximately 30,000 concurrent users and relies heavily on the use of temporary tables.
          The environment also includes the following databases: DB1, DB2, and Reporting. The Reporting database is protected with Transparent Data Encryption (TDE). You plan to migrate this database to a new server. You detach the database and copy it to the new server.
          You are performing tuning on a SQL Server database instance. The application which uses the database was written using an object relationship mapping (ORM) tool which maps tables as objects within the application code. There are 30 stored procedures that are regularly used by the application.
          After reviewing the plan cache you have identified that a large number of simple queries are using parallelism, and that execution plans are not being kept in the plan cache for very long.
          You review the properties of the instance (Click the Exhibit button). Exhibit:
(Exhibit)
          You need to restore the Reporting database to SRV2. What should you do? To answer, drag the appropriate options to the correct locations. Each option may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
          Select and Place:
(Exhibit)

            Answer:

            Explanation: Step 2: Create: server certificate
            Recreate the server certificate by using the original server certificate backup file.
            Note: The password must be the same as the password that was used when the backup was created. Step 3: Restore: Reporting database .mdf file.
            -- Attach the database that is being moved.
            -- The path of the database files must be the location where you have stored the database files. Example:
CREATE DATABASE [CustRecords] ON
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\CustRecords.mdf' ),
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\CustRecords_log.LDF' )
FOR ATTACH;
GO
            From scenario: The Reporting database is protected with Transparent Data Encryption (TDE). You plan to migrate this database to a new server. You detach the database and copy it to the new server.
            References:
            https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/move-a-tdeprotected-database-to-a
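Recreating the server certificate on SRV2 (the step before the TDE-protected database can be attached) might look like the sketch below; the certificate name, file paths, and password are illustrative placeholders for the original certificate backup.

```sql
-- On SRV2: recreate the certificate that protects the Reporting
-- database's encryption key (names, paths, password are illustrative).
USE master;
CREATE CERTIFICATE TDECert
FROM FILE = 'E:\Certs\TDECert.cer'
WITH PRIVATE KEY (
    FILE = 'E:\Certs\TDECert.pvk',
    DECRYPTION BY PASSWORD = '<password used when the backup was created>'
);
```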

            NEW QUESTION 19
            You administer a Microsoft SQL Server 2014 database that includes a table named Application.Events. Application.Events contains millions of records about user activity in an application.
            Records in Application.Events that are more than 90 days old are purged nightly. When records are purged, table locks are causing contention with inserts.
            You need to be able to modify Application.Events without requiring any changes to the applications that utilize Application.Events.
            Which type of solution should you use?

            • A. Partitioned tables
            • B. Online index rebuild
            • C. Change data capture
            • D. Change tracking

            Answer: A

Explanation: Partitioning large tables or indexes can have manageability and performance benefits, including:
You can perform maintenance operations on one or more partitions more quickly. The operations are more efficient because they target only these data subsets, instead of the whole table. In this scenario, rows older than 90 days can be switched out of the table as a metadata-only operation, which avoids the table locks caused by large deletes.
            References: https://docs.microsoft.com/en-us/sql/relational-databases/partitions/partitioned-tables-and-indexes
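A sketch of a partition-based purge via SWITCH; the partition function, scheme, boundary values, and staging table are illustrative assumptions (the staging table must match the source table's structure and live on the same filegroup).

```sql
-- Partition Application.Events by date so old rows can be purged
-- with a metadata-only SWITCH (names and boundaries are illustrative).
CREATE PARTITION FUNCTION pfEvents (datetime2)
    AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');
CREATE PARTITION SCHEME psEvents
    AS PARTITION pfEvents ALL TO ([PRIMARY]);

-- Nightly purge: switch the oldest partition into a staging table,
-- then truncate the staging table instead of deleting row by row.
ALTER TABLE Application.Events
    SWITCH PARTITION 1 TO dbo.EventsStaging;
TRUNCATE TABLE dbo.EventsStaging;
```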
