
Latest SQL DBA Interview Questions and Answers

This post, "Latest SQL DBA Interview Questions and Answers", takes you through the latest SQL DBA interview experiences shared by blog followers. You can also share your interview experience and we will provide the answers:

Q. Can we perform a log backup with the COPY_ONLY option?

Ans:

Yes, of course we can perform copy-only log backups. A copy-only log backup does not truncate the transaction log, so it leaves the existing log backup chain intact.
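A minimal sketch; the database name and backup path are hypothetical:

BACKUP LOG [SalesDB]
TO DISK = 'D:\Backup\SalesDB_Log_CopyOnly.trn'
WITH COPY_ONLY, STATS = 10;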

Q. What are marked transactions in SQL Server?

Ans:

When we are dealing with related databases, meaning there is a dependency between databases, we need to establish a recovery strategy that brings all of the databases back to the same consistent point. With a plain point-in-time approach we might lose the most recently committed transactions in some databases. Marking transactions in the log is helpful when you are dealing with multiple databases and want to restore all of them to the same point in time.

This is used in rare cases and is well suited for TFS, because when a disaster occurs we need our data back in all databases in the same consistent state. If you don't mark the T-logs and restore using normal point-in-time recovery, it might get our data back but with inconsistency between databases. For example, I create a new branch in TFS for new development work and somebody checks in and modifies code; this whole activity is recorded across multiple databases, so when we want to restore we should get back exactly the same consistent point. Take a look here:

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/use-marked-transactions-to-recover-related-databases-consistently
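For illustration, a minimal sketch of marking a transaction and restoring to that mark; the database, table, and mark names are hypothetical:

-- The mark is recorded in the log of every related database the transaction touches
BEGIN TRANSACTION DeploySync WITH MARK 'Consistent point before deployment';
UPDATE SalesDB.dbo.SyncFlag SET LastSync = GETDATE();
COMMIT TRANSACTION DeploySync;

-- Later, restore each database's log to the same mark
RESTORE LOG [SalesDB]
FROM DISK = 'D:\Backup\SalesDB_Log.trn'
WITH STOPATMARK = 'DeploySync', RECOVERY;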

Q. Why is a Primary Key mandatory for Transactional Replication?

Ans:

This must be a great question for sure; even I tried to find the answer in the SQL Server documentation and didn't get a proper explanation, but Brent answered this:

"When we perform an operation (UPDATE / DELETE) on the publisher, the same operation needs to be applied at all subscribers by matching / looking up a unique column between the source (Publisher) and destination (Subscriber). That is why a Primary Key is mandatory to configure Transactional Replication."

Q. Then how does Merge Replication work without a primary key?

Ans:

We can configure one-way Merge Replication and can include articles without a Primary Key. Merge replication uses a globally unique identifier (GUID) column to uniquely identify each row during the replication process. Merge replication automatically adds a GUID / uniqueidentifier column in case the published table doesn't have one. But this process might impact replication performance as a side effect.

Q. What are the best practices for performing a backup of a VLDB (2 TB) in SQL Server?

Ans:

We can't give you the one right answer unless we know the exact business requirement, but these are the generic guidelines:

  • Perform the full backup at a low-traffic time (e.g., late Sunday night)
  • Plan for a differential backup on every weekday and a full backup at the weekend
  • Choose a dedicated drive for backup files instead of writing to a disk that holds DATA, LOG, or TempDB files
  • Use backup compression; it takes CPU but saves a lot of space and time
  • Perform a split (striped) backup across multiple drives (see the sketch after this list). Let's say a full backup takes 4 hours; a backup striped across 4 files can finish in roughly 1 hour on average. But it adds pressure on I/O, which is why we should choose a low-traffic time.
  • When we perform a backup of a VLDB, make sure no other backup jobs are running in parallel
  • Don't back up to a network / remote drive; instead perform the backup to a local drive and then schedule a job to copy the file
  • Choose an appropriate backup disk type that supports the write speeds for bulk volumes
  • If you have a lot of data / tables that never change, you can consider hosting all of those tables in a separate database using the SIMPLE recovery model, or at least in a separate filegroup in the same database
  • Try third-party tools (LiteSpeed, Redgate, etc.) which can speed up the backup process
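A minimal sketch of a striped, compressed full backup; the database name and drive paths are hypothetical:

BACKUP DATABASE [VLDB_Sales]
TO DISK = 'E:\Backup\VLDB_Sales_1.bak',
   DISK = 'F:\Backup\VLDB_Sales_2.bak',
   DISK = 'G:\Backup\VLDB_Sales_3.bak',
   DISK = 'H:\Backup\VLDB_Sales_4.bak'
WITH COMPRESSION, CHECKSUM, STATS = 5;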

Q. How do you find long-running queries in SQL Server?

Ans:

There are several ways to find long-running queries:

Use Queries: Prepare a list of useful queries and keep the script ready with you; it can be a handy quick tool when you find something suspicious on your server / database

Activity Monitor: A quick way to identify problematic queries

Server Standard Reports: We can see the top costly queries using the server's native reports

Database Standard Reports: We can find database-level long-running queries

Profiler: Run a Profiler trace and find the queries with bad performance

Extended Events: A lightweight tool to capture a trace

Server-Side Trace: Using T-SQL commands we can run a server-side trace, capture activity for a certain time period, and analyse it to find the long-running queries

DMVs: We can use DMVs to scan the plan cache and identify long-running queries, e.g., sys.dm_exec_query_stats, sys.dm_exec_query_plan, etc.

DBCC: Sometimes DBCC commands can help us in identifying long-running transactions, e.g., DBCC OPENTRAN

Brent's sample query to find the top 50 long-running queries (execution time > 5 minutes):

SELECT st.text,
       qp.query_plan,
       qs.*
FROM (
       SELECT TOP 50 *
       FROM sys.dm_exec_query_stats
       ORDER BY total_worker_time DESC
     ) AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
-- times in sys.dm_exec_query_stats are in microseconds: 5 min = 300000000
WHERE qs.max_worker_time > 300000000
   OR qs.max_elapsed_time > 300000000;

Q. An application is running slow and the support team involved you to check the bottleneck. How do you handle that? Can you explain the standard procedure to resolve a slow-running application?

Ans:

Usually this question is asked to test whether you have experienced this situation. Here are the steps:

These are the generic steps we follow, because we have seen issues like this in enterprise environments where hundreds of applications are running. If some app is running slow we may not be able to immediately identify the exact object / query / procedure / bottleneck causing the issue. Therefore we should follow a standard procedure to identify the bottleneck:

  1. Before concluding that the problem is from SQL Server, we should first make sure the bottleneck is on the database side.
  2. That means we should make sure the problem is not from the other components:
  • Application
  • Web Service
  • Network / Firewall
  • Middleware – Data Layers
  We usually get this right: once the app team identifies that the application is running slow, they reach the technology desk, then we open an incident and involve all teams.
  3. The first step is usually a quick (<= 5 min) check that all services are up and running (IIS / Tomcat, network bandwidth, firewall, database servers, middleware services, etc.)
  4. The DBA team usually takes the lead and drives the incident call.
  5. From the DB perspective we have to make sure the below are running fine:
  • Database servers are up and running
  • SQL Service is responding and all DBs are online
  • CPU utilization
  • Memory utilization
  • Disk I/O utilization
  • Performance counters
  • Etc.
  6. As I told you, it should be a quick check which shouldn't take more than 5 minutes. We can use a predefined / pre-written SQL script: run a health-check report and check all parameters in one go.
  7. If everything is working fine, then we can quickly check through the database for long-running queries and transactions (see the sketch after this list):
  • Check for any deadlocks / blocking – use sysprocesses
  • Check long-running transactions – DBCC OPENTRAN
  • Check the top 10 long-running queries / procedures – use the standard database reports or take DMV help
  • If you find anything suspicious, then check with the app / dev team to confirm whether that query / proc could cause the page / module in the app to become slow
  8. If you still can't find anything suspicious, then we may have to run a trace through Extended Events or a Profiler server-side trace, filtering by the database name and application login.
  9. Once you find the block, it will be easy to solve the problem.
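For the quick checks in step 7, a minimal sketch using sysprocesses and DBCC OPENTRAN; the database name is hypothetical:

-- Who is blocked, and by which session?
SELECT spid, blocked, status, loginame, DB_NAME(dbid) AS database_name, cmd, waittime
FROM master.sys.sysprocesses
WHERE blocked <> 0;

-- Oldest active transaction in a given database
DBCC OPENTRAN ('SalesDB');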

Once you identify the query / stored procedure which is causing the problem, then you can fine-tune it using these tips:

  • Check if there is any blocking or deadlocks during the execution
  • Usually the below are the key areas we focus on in the execution plan:
  • Bookmark/RID/Key Lookup
  • Sort
  • Table Scan / Index Scan
  • Data / Row Density
  • Hash / Nested Loop / Merge joins
  • Table / index statistics
  • Warning signs
  • Logical reads
  • Number of executions
  • Query statistics – logical and physical reads
  • Cardinality estimates, histogram
  • Check if there are any temp objects in use that can be replaced by a better option. For example, a "table variable" can be replaced with a "temp table" when dealing with large data sets.
  • Check how the data processing is happening. Apply all required data filters before applying / joining with other data sets.
  • Check if there are any user-defined functions used in the SELECT or WHERE clause and evaluate alternatives if possible.
  • Check if there are any query hints in use and see if those are really required.
  • Check if any proper SET options can help.
  • Check if bad parameter sniffing is causing the issue and use local variables or the OPTIMIZE FOR UNKNOWN hint (see the sketch after this list).
  • Try to avoid dynamic SQL; instead split the stored procedure into multiple procedures.
  • Check if there is any implicit data conversion happening in the query and fix it.
  • Check if there are any cross-database queries fetching data from another database while joining tables from the current database. If so, we can split the processing logic to fetch only the filtered data from the other database into a temp object and then apply the join.
  • Quickly check the index usage:
  • See if the query is using the right indexes
  • Check if we can avoid a sort by using a clustered index column
  • Check the column order as it might impact index usage
  • Check if statistics are up to date
  • Based on the frequency of query usage we may consider creating a covering index
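For the parameter-sniffing item above, a minimal sketch of the OPTIMIZE FOR UNKNOWN hint; the procedure, table, and columns are hypothetical:

CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR UNKNOWN);  -- compile for average density rather than the sniffed value
END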

Q. What kind of algorithm can we specify while creating an endpoint when configuring Database Mirroring in SQL Server?

Ans:

RC4 is the default algorithm, but we should be careful as it is marked as a deprecated feature, and Microsoft suggests using the AES algorithm.

https://docs.microsoft.com/en-us/sql/database-engine/database-mirroring/transport-security-database-mirroring-always-on-availability
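A minimal sketch of a mirroring endpoint that requires AES; the endpoint name and port are hypothetical:

CREATE ENDPOINT Mirroring_Endpoint
STATE = STARTED
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
    AUTHENTICATION = WINDOWS NEGOTIATE,
    ENCRYPTION = REQUIRED ALGORITHM AES,
    ROLE = PARTNER
);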

Q. We are getting an Excel sheet from a customer on a daily basis, and we need to update data in a SQL Server database table based on that Excel sheet. This is just a simple requirement: we only need to update data from Excel into the table, and no further transformations are required. Can you suggest your thoughts on it?

Ans:

Use OPENDATASOURCE / OPENROWSET:

We can simply insert data from Excel into a SQL Server table and then update the data (see the sketch after the steps):

1. Create a staging table

2. First truncate the staging table

3. Insert the data from Excel into staging

4. Update the base / main table from the staging table using an UPDATE statement, comparing the key columns
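A minimal sketch under stated assumptions: ad hoc distributed queries are enabled, the ACE OLE DB provider is installed, and the file, sheet, table, and column names are all hypothetical:

-- Step 2: truncate the staging table
TRUNCATE TABLE dbo.Sales_Staging;

-- Step 3: load the Excel sheet into staging
INSERT INTO dbo.Sales_Staging (MemberID, Amount)
SELECT MemberID, Amount
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
     'Excel 12.0;Database=C:\Feeds\DailySales.xlsx;HDR=YES',
     'SELECT MemberID, Amount FROM [Sheet1$]');

-- Step 4: update the base table by comparing the key columns
UPDATE b
SET b.Amount = s.Amount
FROM dbo.Sales AS b
INNER JOIN dbo.Sales_Staging AS s ON s.MemberID = b.MemberID;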

Use SSIS:

Create a simple ETL package that accomplishes the same list of steps listed above:

1. Control Flow: Execute SQL Task – truncate / delete the staging table

2. Data Flow: Source: Excel file; Destination: SQL Server table

3. Control Flow: Execute SQL Task – update the main / base table by joining to the staging table and comparing the key columns

If the data volume is going to be large, the best approach is an SSIS ETL package. If it's a small amount of data and working fine, then simply put the T-SQL code in a procedure and schedule a job to execute the proc.

Q. I'm trying to understand how temp tables work in SQL Server. I have created simple stored procedures that create temp tables, then insert and retrieve data from local / global temp tables as below:

-- Local temp test
CREATE PROCEDURE LTemp_Test
AS
BEGIN
    CREATE TABLE #LTemp (Id INT);
    INSERT INTO #LTemp VALUES (10);
    SELECT * FROM #LTemp;
END
GO

-- Global temp test
CREATE PROCEDURE GTemp_Test
AS
BEGIN
    CREATE TABLE ##GTemp (Id INT);
    INSERT INTO ##GTemp VALUES (10);
    SELECT * FROM ##GTemp;
END
GO

Now my question is: when I execute these procedures, LTemp_Test executes successfully, but GTemp_Test executes fine the first time and fails on the 2nd execution with the error "Msg 2714, Level 16, State 6, Procedure GTemp_Test, Line 5: There is already an object named '##GTemp' in the database."

Can you explain why?

Ans:

There is nothing wrong there; this is the expected behaviour of global and local temporary tables. First let me explain the scope of a temporary table:

Local Temp Table:

  • A local temporary table is accessible in the current session only, which means a local temp table created in session 1 can't be used / referenced in any other session / connection.
  • Local temp tables are automatically dropped when the current session is closed. For example, a local temporary table created in a stored procedure is dropped automatically when the stored procedure completes.

Global Temp Table:

  • A global temporary table is visible to all sessions, which means a global temp table created in session 1 can be accessed in other sessions as well.
  • Global temp tables are automatically dropped when the session that created the table ends and the last active Transact-SQL statement referencing the table in other sessions ends. For example, a global temporary table ##GTT is created in session 1 and the same table is being used in sessions 5 and 8. The global temp table is automatically dropped when you close session 1 and no remaining transactions in sessions 5 and 8 are referencing the table.

Now coming to your question:

  • LTemp_Test executes successfully without any issue because the local temp table is automatically dropped when the stored procedure execution completes. The next time it executes, it creates a brand-new local temp table.
  • GTemp_Test executes successfully the first time but fails on the second execution because of its scope. As we learnt, a global temporary table is available to all sessions / connections; therefore when you try to execute the procedure a 2nd time it fails, as the global table created by the previous execution still exists and is visible to all sessions.
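If the procedure needs to be re-runnable, a common workaround (a sketch, not the only option) is to drop the global temp table when it already exists:

ALTER PROCEDURE GTemp_Test
AS
BEGIN
    IF OBJECT_ID('tempdb..##GTemp') IS NOT NULL
        DROP TABLE ##GTemp;   -- remove the leftover table from a previous execution

    CREATE TABLE ##GTemp (Id INT);
    INSERT INTO ##GTemp VALUES (10);
    SELECT * FROM ##GTemp;
END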

Q. I have a production instance where we host nearly 100 databases for multiple applications. Our requirement is to hide unnecessary databases from users, meaning users should not be able to see databases where they don't have access. Can we do that?

Ans:

Unfortunately we don't have any direct way / feature to hide databases from users in SQL Server. However, it's possible if the user can be the DB_OWNER of all required databases. Here is the way:

For example, we have a SQL Server instance with 65 databases. We have created a login called "SalesAD" and mapped it to two databases, "SalesDB" and "SalesDB_Archive", as DBO. The requirement is to hide the remaining 63 databases from this user.

  • Deny the VIEW ANY DATABASE permission for the login "SalesAD":

USE master
GO
DENY VIEW ANY DATABASE TO [SalesAD];
GO

  • Remove / drop the user "SalesAD" from both databases:

USE [SalesDB]
GO
DROP USER [SalesAD]
GO

USE [SalesDB_Archive]
GO
DROP USER [SalesAD]
GO

  • Assign ownership of the databases to the user "SalesAD":

USE master
GO
ALTER AUTHORIZATION ON DATABASE::[SalesDB] TO [SalesAD]
GO
ALTER AUTHORIZATION ON DATABASE::[SalesDB_Archive] TO [SalesAD]
GO

  • Now if you log in to SQL Server using the "SalesAD" login, you can see only the two databases "SalesDB" and "SalesDB_Archive".

Q. My database contains tables A, B, C, D, E, and F, and I have a user called USER1. How can I hide tables B, D, and E from USER1? The user USER1 should be able to see / access tables A, C, and F only.

Ans:

Okay, let's say in database DB1, USER1 needs to access tables A, C, and F and should not be able to access / view tables B, D, and E. Here is the approach:

  • Map USER1 to database DB1 – with the public role only
  • Grant the required access on table A to USER1 as below:
  • GRANT SELECT ON A TO USER1;
  • GRANT INSERT ON A TO USER1;
  • GRANT DELETE ON A TO USER1;
  • GRANT UPDATE ON A TO USER1;
  • The same applies for tables C and F.
  • Now USER1 can see / access only tables A, C, and F

Q. How can I move a database's data / log file to a new drive when the database is participating in AlwaysOn?

Ans:

There are two ways to do it:

First way: Remove the database from the AlwaysOn group and use the detach–attach method

  • Remove the database from the AlwaysOn Availability Group
  • Detach the database from the instance
  • Move the physical files to the required / new location
  • Attach the database back to the instance
  • Re-join the database to the Availability Group

Second way: Suspend data movement and move the data files – needs a SQL Server restart and a failover between replicas

  • Suspend the data movement for the database to all secondary databases. This stops log records being sent to the secondary databases; therefore we can expect log file growth on the primary database.
  • Change the readable secondary setting to No for all secondary replicas
  • Logically move the data / log files to the new location: modify the file locations on a secondary replica using the ALTER DATABASE … MODIFY FILE command (see the sketch after this list)
  • Stop the SQL Server instance on the secondary replica
  • Physically move the data / log files to the new location: cut and paste the data and log files
  • Start the SQL Server instance on the secondary replica and check the database file locations
  • Resume the database on the primary replica
  • Change the readable secondary setting back to Yes for the completed secondary replica
  • Fail over and repeat the above steps to complete the data / log file movement on all partner servers
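A minimal sketch of the logical file move; the database name, logical file name, and path are hypothetical:

ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = SalesDB_Data, FILENAME = 'E:\NewLocation\SalesDB_Data.mdf');

-- After restarting the instance, verify the new locations:
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('SalesDB');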

Q. What are the consequences when we lose the principal in mirroring?

Ans:

If the principal fails, the failover scenario depends on the transaction safety level and whether you have a witness.

Scenario 1: SAFETY FULL with a witness

This scenario provides high safety with automatic failover. In the event of the failure of the principal, the mirror forms a quorum with the witness. Automatic failover will take place, thereby minimizing the database downtime. Once the old principal becomes operational, it automatically assumes the role of the mirror.

Scenario 2: SAFETY FULL without a witness

This scenario provides high safety, but automatic failover is not allowed. In the event of failure of the principal, the database service becomes unavailable. We need to execute the following on the mirror server to make the database service available:

ALTER DATABASE <database_name> SET PARTNER OFF

RESTORE DATABASE <database_name> WITH RECOVERY

Once the principal becomes available, you need to re-establish the mirroring session.

Scenario 3: SAFETY OFF

If the safety level is OFF, there may have been transactions that didn't make it to the mirror at the time of the failure of the principal. These transactions will be lost. Therefore, manual failover with safety OFF involves acknowledging the possibility of data loss. We need to execute the following on the mirror to make the database service available:

ALTER DATABASE <database_name> SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS

Once the database on the principal becomes operational, it automatically assumes the role of the mirror. However, the mirroring session remains SUSPENDED, and you will have to manually RESUME the mirroring session.

Q. What are the consequences if we lose the mirror in database mirroring?

Ans:

If the mirror fails, the principal continues functioning, but the mirroring state is DISCONNECTED and the principal is running exposed. Once the mirror database becomes operational, it automatically assumes the role of the mirror and starts synchronizing with the principal. As long as the mirroring state remains DISCONNECTED, the transaction log space on the principal can't be reused, even if you back up the transaction log. If the log file grows and reaches its maximum size limit or runs out of disk space, the complete database comes to a halt. To prevent this we have two options:

  • Plan for enough disk space for the transaction log to grow, and bring back the mirror database before the space fills up
  • Break the database mirroring session

Q. What happens if we lose the witness in database mirroring?

Ans:

If the witness server fails, database mirroring continues functioning without interruption, except that automatic failover is not possible. Once the witness becomes operational, it automatically joins the database mirroring session.

Q. What happens if we lose both the mirror and the witness in database mirroring?

Ans:

Assume you have configured database mirroring with a witness. When the mirror is unavailable, the principal runs exposed. While the mirror is unavailable, if the witness is also lost, the principal becomes isolated and can't service the clients. Even though the principal database is running, it isn't available to the clients. If you attempt to connect to the database, you get the message "Database is enabled for database mirroring, but neither the partner nor witness server instances are available: the database cannot be opened." If the mirror or the witness cannot be brought back online quickly, then the only way to resume database service is to terminate the database mirroring session. To do this, we need to execute the following command after connecting to the master database of the principal server:

ALTER DATABASE <database_name> SET PARTNER OFF

Q. Can I perform a point-in-time restore if the recovery model is the bulk-logged recovery model?

Ans:

If a log backup taken under the bulk-logged recovery model contains bulk-logged changes, point-in-time recovery is not allowed. Attempting point-in-time recovery on a log backup that contains bulk changes will cause the restore operation to fail.
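For illustration, a minimal point-in-time restore sketch; the names and timestamp are hypothetical. The STOPAT restore fails if that log backup captured minimally logged (bulk) operations, while restoring the complete log backup still works:

RESTORE DATABASE [SalesDB]
FROM DISK = 'D:\Backup\SalesDB_Full.bak' WITH NORECOVERY;

RESTORE LOG [SalesDB]
FROM DISK = 'D:\Backup\SalesDB_Log.trn'
WITH STOPAT = '2020-01-15 14:30:00', RECOVERY;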

Q. I have a table called Member with the columns "MemberID" (unique), "DOB", etc. As per the application requirement, we mostly fetch data based on MemberID and DOB. What is your index strategy?

Ans:

We can create a clustered index on MemberID and a non-clustered index on the DOB column (see the sketch after this list). There are a few things we should keep in mind to make sure the indexes actually get used:

  • You should understand how the indexes on MemberID and DOB are organized; then you will be able to understand how they are used in search queries
  • When you search the Member table using MemberID, SQL Server applies a binary search to seek the clustered index on MemberID and fetches all required columns from the clustered index leaf node
  • When searching based on DOB, it traverses the non-clustered index on DOB and gets the values from the leaf node of the non-clustered index. If the SELECT clause contains more columns, it has to traverse back to the clustered index leaf node to fetch the other columns besides MemberID and DOB. This process is shown as a Key Lookup in the execution plan.
  • Make sure you are selecting only the required data from the table and never use * in SELECT
  • Use appropriate data types, e.g., use DATE instead of DATETIME for DOB if that suits the requirement
  • Avoid using conversion functions in the WHERE clause
  • Always execute a proper set of test cases and make sure the index is improving performance before applying it to production
  • If your application needs to use a set of columns for almost all queries, then it's better to create a covering index; make sure you follow the proper column order
  • Even after all these, if you still find performance issues, analyze the execution plan and see where we can improve further
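A minimal sketch of the suggested indexes; the table and key columns come from the question, while the INCLUDE columns are hypothetical:

CREATE UNIQUE CLUSTERED INDEX CIX_Member_MemberID ON dbo.Member (MemberID);
CREATE NONCLUSTERED INDEX IX_Member_DOB ON dbo.Member (DOB);

-- Optional covering variant to avoid key lookups for frequent queries:
-- CREATE NONCLUSTERED INDEX IX_Member_DOB_Covering ON dbo.Member (DOB) INCLUDE (FirstName, LastName);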

Q. With your backup strategy you have lost some data; how do you convince your client?

Ans:

First check the SLA (Service Level Agreement) and make sure the data loss is within acceptable limits as per the SLA. If not, you might have missed something when designing the backup strategy to match the client SLA. However, the very first thing should be to "identify the possible ways" to get our data back. There are certain places from which we might get our data back, and of course it always depends on the situation: check if we can take a tail-log backup; see whether the data is static or dynamic; if it's static / master data then we might get it from the other mirror / partner database; or see if we can match it with any pre-prod environment, etc.

Even after checking all possible ways, if you still cannot get the data back, then approach your manager with a detailed RCA (Root Cause Analysis) covering:

  • Why did this happen?
  • Where is the error?
  • Can we provide any quick / temporary solution to get our data back?
  • What is the prevention against future failures?
  • Provide all possible logs / event information that you collected during the RCA

In the post "Latest SQL DBA Interview Questions and Answers" we just listed a few random SQL Server DBA interview questions with answers. These questions were shared by blog followers and we would like to thank everyone who shared their experiences.

1000 Plus SQL Server Interview Questions with Answers