
[Video] Office Hours 2016/06/08 (With Transcriptions)


This week, Angie, Erik, Doug, Jessica, and Richie discuss DB migration, rebuilding large indexes, recommendations for SQL dev ops tools, best practices for disabling SA accounts, compression, and more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Office Hours Webcast – 2016-06-08

Jessica Connors: Question from Justin. He always asks us something. Justin says, “Is it advisable to remove the public role from being able to query sys.syslogins, sys.databases, and/or sys.configurations in master?”

Erik Darling: Advisable for what? I’ve never done it. I never cared that much. But I’m not like a big security guy. Any other big security guys want to talk about it…?

Doug Lane: Yeah, I’ve never done anything with public’s role and I’ve never seen it be a problem, but again, we’re not security experts.

Erik Darling: Again, we always recommend that when people ask sort of offhand security questions, Denny Cherry’s book Securing SQL Server is probably the go-to thing to read to figure out if what you’re doing is good or bad.

Jessica Connors: Yeah, Justin says that they got audited and [Inaudible 00:00:47].

Erik Darling: What kind of audit was it that brought those up? I’d be curious.

 

Two Servers, One Load Test

Jessica Connors: Let’s move on to a question from Claudio. He says, “I would like to load test a new SQL Server instance with real production data. Is there anything we could put between clients and two SQL Servers that will intercept the queries, send them to both SQL Servers, and return the response from only one SQL Server?”

Erik Darling: Yes, and I also have a magic spell that turns rats into kittens. No. That’s a bit much and a bit specific. You’re going to have to come up with something else. If you want to get really crazy, you’re going to have to look at Distributed Replay and come back in three years when you finish reading the documentation.

 

How do I configure multi-subnet AG listeners?

Jessica Connors: Okay. Let’s see here. This is a long one from Richard. Let’s tackle this one. “I will be adding a remote DR replica, non-readable, to an existing local availability group on a multi-subnet cluster to be able to use the listener at the DR site. I know a remote site IP address will be added to the listener. Is there anything else that has to be configured in the availability group or cluster besides DNS and firewall rules?”

Erik Darling: Brent?

Doug Lane: Yeah.

Jessica Connors: Where are you, Brent?

Erik Darling: I don’t know actually. I would be interested so I want you to try it out and email me if you hit any errors because I would be fascinated.

[Angie Rudduck enters webcast]

Jessica Connors: Oh, hi.

Doug Lane: Oh, we heard Angie before we saw her.

Angie Rudduck: Thought I had my mute on.

Doug Lane: As for the AG mystery, we’re going to leave that one unsolved.

Jessica Connors: Unsolved mysteries.

 

How should I configure database maintenance tasks?

Jessica Connors: Question from David. He says, “For routine tasks, index maintenance, backup, etcetera, is it preferred to use agent jobs or maintenance plans? It seems to be the DBA preference. Any reasons to lean one way or the other?”
Erik Darling: Ola Hallengren. Angie, tell us about Ola Hallengren.

Angie Rudduck: Ola Hallengren is amazing. I tell every single client about Ola Hallengren. I used it at my last place, in production, across every server. You can do all backups: fulls, diffs, logs. You can do it separated for your user databases versus your system databases. You get your CHECKDBs in there, user versus system databases. You even get index optimize and, even better, Brent, aka Erik, has a really good blog post about how you can use it to just do update stats, which is a great follow-up from his post about why you shouldn’t do index maintenance anyway, right? Just update stats. I love Ola. I’m working on a minideck to pitch all of his stuff in one instead of just the indexing.
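For anyone who wants to try the stats-only pattern Angie mentions, here’s a minimal sketch, assuming Ola’s MaintenanceSolution is installed and using his documented IndexOptimize parameters:

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,     -- NULL = take no action at this level
    @FragmentationMedium = NULL,  -- so no reorganizes
    @FragmentationHigh = NULL,    -- and no rebuilds
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';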

Erik Darling: Nice.

Angie Rudduck: But I’m too busy with clients.

Doug Lane: Plus, he’s a Sagittarius.

Angie Rudduck: Gemini.

Erik Darling: I’ve heard rumors that I’m a Scorpio but I’ve never had that confirmed.

Jessica Connors: Use your Google machine.

Doug Lane: [Imitating Sean Connery] Do you expect me to talk, Scorpio?

[Laughter]

 

How do I set the default port for the DAC?

Jessica Connors: Let’s take one from Ben. He says, “Oh, SQL stuff. Here’s one. In old SQL, we had to set a registry key to set a static remote DAC port. Is there a better way in SQL 2012, 2014, 2016? What’s the registry key?”

Erik Darling: A static remote dedicated administrator connection port?

Jessica Connors: Mm-hmm.

Erik Darling: Weird. No, I don’t know, I’ve never done that.

Doug Lane: Yeah, me neither.

Angie Rudduck: What is old SQL? Like what version is old SQL?

[Laughter]

Angie Rudduck: 2005?

Doug Lane: 2005 he says.

Erik Darling: Hmm, I don’t believe that’s changed much since then.

Richie Rump: Yeah, it sounds like a blog post you need to write, Erik.

Angie Rudduck: We’ve got something on the site about remote DAC because…

Doug Lane: That doesn’t say anything about the port though.

Angie Rudduck: No, but it’s pretty detailed, isn’t it? I don’t know maybe go check that out, Ben, and go from there. I think it’s just go/dac. I don’t know. I’m making up things now.

Erik Darling: Brentozar.com/go/dac, D-A-C.
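For anyone following along, the baseline step is enabling the remote DAC at all; a minimal sketch (this doesn’t answer Ben’s static-port registry question):

EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;

-- Then connect with the ADMIN: prefix, e.g. from sqlcmd:
-- sqlcmd -S ADMIN:YourServerName -E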

Jessica Connors: What’s the oldest version of SQL you guys have worked on?

Erik Darling: ’05.

Angie Rudduck: 2000.

Doug Lane: In Critical Care, ’05.

Angie Rudduck: Oh, yeah.

Richie Rump: No 6.5 people? No?

Angie Rudduck: Tara is not here.

Jessica Connors: Yeah, she’d probably have a story about the oldest version she’s used. She’s got the best stories.

Erik Darling: “It was on a floppy disk…”

[Laughter]

Doug Lane: I worked on 7 once upon a time. I didn’t actually like do real work on 7, it was just, believe it or not, writing stored procedures in the GUI window.

Angie Rudduck: Query explorer or whatever it is?

Doug Lane: No, it was like the properties of the—it was crazy when I think back on it. There was like no validation of any kind except the little parse button. This was back when Query Analyzer and Enterprise Manager were separate and I was doing it in Enterprise Manager.

Angie Rudduck: We had a 2000 box at my last place and I knew nothing about 2000. I tried logging in there and I was like, “Wait, where is Management Studio?” That was really hard to try to figure it out. The management administrative part is really scary in 2000 and I was on the server directly. It was like already a precarious server about to tip over. So, scary.

 

What’s the best way to rebuild a 2-billion-row table?

Jessica Connors: Question from Joe. He says, “What is the best way to rebuild a very large index without taking an outage or filling the log? Rebuilding after a two-billion-record delete.”

Doug Lane: Oh, are you sure you need to delete two billion rows from a table?

Erik Darling: Maybe he was archiving.

Doug Lane: Yeah, I don’t know if you want to flag them as deleted and then move them out some other time or what, but, wow, that’s a lot of log stuff. You can do minimal logging if it’s a table where you really don’t care about full logging, but there are disadvantages to that too.

Erik Darling: What I would probably do, I mean, if you’re not on Enterprise you’re kind of out of luck either way, right? There’s no online index operations there. You can help with the log backup stuff if you put it into bulk logged and continue taking log backups, but at that point, if anything else happens that you need to be recoverable after it starts bulk logging something, you’re going to lose all that information too. So bulk logged does have its downsides. It’s not a magic bullet. So depending on your situation, you might be in a little bit of a pickle. A better bet, if you’re deleting two billion records, depending on how many records are left over, is to dump the stuff that you’re not deleting into another table and then do some sp_rename and switch things around.

Doug Lane: You can actually just drop the index and recreate it. Sometimes that goes a lot faster.
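Erik’s swap-instead-of-delete idea, as a rough sketch with hypothetical table and column names:

-- Copy the survivors (SELECT INTO is minimally logged in simple/bulk-logged recovery):
SELECT *
INTO   dbo.BigTable_Keep
FROM   dbo.BigTable
WHERE  CreatedDate >= '20150101';   -- whatever defines "keep"

-- Recreate indexes and constraints on dbo.BigTable_Keep here, then swap names:
BEGIN TRAN;
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_Keep', 'BigTable';
COMMIT;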

 

Are there any problems with SQL role triggers?

Jessica Connors: Question from J.H. He says, “Anything to be aware of or downsides of setting up SQL role triggers, mainly sysadmin role changes?”

Erik Darling: All these security questions.

Doug Lane: Yeah.

Erik Darling: We bill ourselves as not security people.

Doug Lane: Like the one before, I think we’re going to punt on that.

Jessica Connors: Thomas Cline says, “No security questions.”

Angie Rudduck: Too bad the slides aren’t up.

Jessica Connors: Yeah.

Erik Darling: “For security questions…”

Angie Rudduck: “Please call…”

Erik Darling: Yeah, there we go.

Angie Rudduck: I’ll do them because it usually works for me.

Erik Darling: Attendees… staff… Angie. I’ll just mute you, just kidding. There we go. You are presenting.

 

What are the HA and DR options with Azure VMs?

Jessica Connors: All right, who wants to answer some Azure questions?

Erik Darling: Nope.

[Laughter]

Jessica Connors: Does anybody here know the HA and DR options with SQL 2012 Standard in Azure VMs?

Doug Lane: Oh, no. Not me.

Erik Darling: Using a VM? If you’re just using the VMs, I assume it’s the same as what’s available with anything else. It’s only if you use the managed databases that you get something else, but I think it’s mirroring either way. I know Amazon RDS uses mirroring.

Richie Rump: Yeah, I think they have like three copies and if one goes down it automatically fails over to the other two or something like that. Don’t quote me.

Jessica Connors: Okay, we’re all being quoted. We’re actually all being transcribed. We’re all being recorded. We’re all being watched.

Erik Darling: Really?

 

Is there a better solution for replication than linked servers?

Jessica Connors: Question from Cynthia. She says, “My developers have a product that uses linked servers for parameter table replication. I’ve read that linked servers aren’t the greatest. Is there another way to do this?”

Doug Lane: Okay, that’s actually kind of a two-part question because you’ve heard that linked servers aren’t the greatest. You’re right. So with SQL Server 2012 SP1 and later, you don’t have to blast a huge security hole in order to get statistics back from the remote side in linked servers. It used to be that you had to have outrageous permissions like ddl admin or sysadmin in order to reach across, get a good estimate, when it then builds the query plan on the local side. That’s not the case anymore. The problem that you can still run into though is that where clauses can be evaluated on the local side. Meaning, if you do a where on a remote table what can happen is SQL Server will bring the entire contents of that remote table over and then evaluate the where clause locally. So you’re talking about a huge amount of network traffic potentially. That’s what can go wrong with them. The other question, “Is there a better way?” That kind of depends on what flexibility the app gives you because you say that this is a product. So I don’t know if this is something that you have the ability to change or not but if you’re talking about replicating from one side to the other, there’s any number of ways to move data from A to B.
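If you’re stuck with the linked server, one hedge against the bring-everything-local problem Doug describes is OPENQUERY, which runs the whole query, filter included, on the remote side. Server and object names here are made up:

SELECT *
FROM OPENQUERY([RemoteServer],
    'SELECT OrderID, OrderDate
     FROM   Sales.Orders
     WHERE  OrderDate >= ''20160601''');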

Jessica Connors: And why do linked servers suck so bad?

Doug Lane: I just explained that.

Jessica Connors: Oh, did you? I didn’t hear you say why they suck so bad, sorry.

Doug Lane: Because you can end up with really bad plans either because permissions don’t allow good statistics or you end up pulling everything across the network just to filter it down once you’ve got it on the other side.

 

Are there any good devops tools for SQL Server?

Jessica Connors: Question from Joshua. This might be one for Richie. “Do you have any recommendations for Microsoft SQL dev ops tools?”

Richie Rump: There’s not a ton. I guess Opserver, from Stack Overflow, would be one of them, but I don’t know of any out-of-the-box ways to do that kind of stuff. I know when I was consulting with one firm, they had built their own dev ops tools. I think they had Splunk and then they just threw stuff out from SQL Server logs and then did a whole bunch of other querying to put dashboards up so they could do monitoring amongst the team and do all that other stuff. I think Opserver does a lot of that stuff for you but it’s a lot of configuration to get it up and running. I’d say test it out, try it out, and see if that works for you, but I’m not aware of anything you could buy for these kinds of ops-y things. I don’t know, what do you guys think?

Erik Darling: I agree with you.

Doug Lane: I don’t live in the dev ops world.

Jessica Connors: I agree with you, Richie.

Angie Rudduck: Yeah, whatever the developer says.

Jessica Connors: What he said.

 

Should we disable the SA account and set the DB owner to something else?

Jessica Connors: Question from Curtis. He says, “I’m looking for a clarification on SA usage. sp_Blitz [inaudible 00:12:03] to having DB owner set to SA, not a user account. But what about the best practice of disabling SA? Should DB owner be set to a surrogate SA account?”

Erik Darling: Nope. It’s not really catastrophic, but it’s something that you should be aware of, because usually what happens on a server is someone will come in and restore a database, usually from an older server to the new one. They’ll be logged in with their user account, so they’ll be the owner of that database. The owner of the database has elevated privileges in the database equal to SA, which you may not want always and forever. That’s why SA should be the owner, even if it’s disabled, and the user account shouldn’t be. Even if the user is a sysadmin, you kind of just don’t want them to also be the owner of a database.
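Checking and fixing this is quick; a sketch, with a hypothetical database name:

-- Find databases whose owner isn't sa:
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM   sys.databases
WHERE  owner_sid <> SUSER_SID('sa');

-- Set the owner back to sa:
ALTER AUTHORIZATION ON DATABASE::[YourDatabase] TO [sa];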

 

How do I migrate databases in simple recovery?

Jessica Connors: Question from Monica M. “We are migrating and upgrading from SQL 2008 R2 to 2014. We use simple recovery since we’re reporting/analysis rather than OLTP. Our IT department said after I copy/restore the databases to the new server it will take them two weeks to go live. By this time, our DBs will obviously be out of sync. What simple method would be best to perform this move?”

Angie Rudduck: Every time I moved, we did some server upgrades where we just created a new VM and ended up renaming it to the old server name eventually, but what we did was we took a full backup like the day before, hopefully, but if you have to do two weeks, we took the full backup when we knew, and then we took a differential right when we were ready to make the cutover. So let’s say at 6:00 p.m. the maintenance window is open and I’m allowed to take the database offline. I put it in single-user mode. I took a differential and then applied that to the new server. Then took it out of single-user mode on the new server. Then we did all of our extra work. So it’s not perfect for two weeks of data change, so if you could keep applying the fulls until like the night before, that would give you a little bit better cutover.
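The full-then-differential dance Angie describes, sketched with made-up names and paths (differentials work fine in simple recovery):

-- Days before cutover, on the old server:
BACKUP DATABASE [ReportDb] TO DISK = N'\\fileshare\ReportDb_full.bak' WITH CHECKSUM;

-- On the new server, leave it restoring so a differential can follow:
RESTORE DATABASE [ReportDb] FROM DISK = N'\\fileshare\ReportDb_full.bak' WITH NORECOVERY;

-- At cutover, once users are out:
BACKUP DATABASE [ReportDb] TO DISK = N'\\fileshare\ReportDb_diff.bak' WITH DIFFERENTIAL, CHECKSUM;
RESTORE DATABASE [ReportDb] FROM DISK = N'\\fileshare\ReportDb_diff.bak' WITH RECOVERY;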

 

Jessica Connors: Trying to find some questions here. You guys are real chatty today.

Erik Darling: Everyone is all blah, blah, blah, problems, blah, blah, blah.

Jessica Connors: “Here is my error…” They copy and paste it. I’m never reading those.

Erik Darling: “Here’s the memory dump I had.”

Angie Rudduck: Jessica likes to be able to read the questions and she doesn’t read SQL, so nobody reads computer. Nobody really reads computer, including us.

Erik Darling: “Yeah, I found this weird XML…”

Jessica Connors: Richie reads computer.

Angie Rudduck: That’s true, Richie reads computer.

Richie Rump: I was reading XML before I got on.

Angie Rudduck: That’s disturbing.

Erik Darling: Naughty boy.

 

How do I shrink a 1.2TB database?

Jessica Connors: Here’s a question from Ben. He says, “I have a large 1.2 terabyte [inaudible 00:14:51] queuing database. Added a new drive and a new file device. DBCC SHRINKFILE does not seem to be working on the original file. Seems that the queuing application reuses space before it can be reclaimed. Any suggestions?”

Angie Rudduck: Don’t shrink.

Erik Darling: I don’t know what you’re trying to do. Are you trying to move the file to the new drive or what are you up to? I don’t think you’re being totally honest with us here.

Angie Rudduck: Yeah.

Jessica Connors: But you shouldn’t shrink, huh?

Doug Lane: Spread usage across drives, okay.

Angie Rudduck: Maybe put it on one drive, I don’t know? I guess that’s hard to do with such a large file size.

Jessica Connors: 1.2 terabytes.

Erik Darling: So you have your database and you bought a new drive. Did you put like files or file groups on the new drive? Did you do any of that stuff yet?

Angie Rudduck: He says he has to shrink because the original drive is maxed and he needs workspace. I think it’s just not creating—maybe he has to do what you’re saying, Erik, about creating an additional file group to be on the other drive.

Erik Darling: Right, so what you have to do is actually move stuff over to that other file. So if you haven’t done it already, you have to pick some indexes or nonclustered or clustered indexes and start doing rebuild on the other file group.

Angie Rudduck: Then you’ll be able to clear out space to shrink your file.

Erik Darling: Hopefully.

Angie Rudduck: Maybe, yeah. Let us know next Wednesday.
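To sketch what Erik and Angie are describing: add a filegroup on the new drive, then rebuild indexes onto it with DROP_EXISTING. Database, file, and index names here are hypothetical, and it assumes an existing clustered index to move:

ALTER DATABASE [QueueDb] ADD FILEGROUP [FG2];
ALTER DATABASE [QueueDb]
    ADD FILE (NAME = N'QueueDb_FG2',
              FILENAME = N'E:\SQLData\QueueDb_FG2.ndf',
              SIZE = 100GB)
    TO FILEGROUP [FG2];

-- Moving a clustered index moves the table's data with it:
CREATE CLUSTERED INDEX [CX_QueueItems]
    ON dbo.QueueItems (QueueItemId)
    WITH (DROP_EXISTING = ON)
    ON [FG2];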

 

Has anybody played with SQL Server 2016 yet?

Jessica Connors: Have we played with SQL 2016 yet?

Erik Darling: Oh, yeah.

Doug Lane: Yep.

Jessica Connors: No? Some of you?

Erik Darling: Yes.

Jessica Connors: Have you played around with the 2016 cardinality estimator and do you know if it works better than SQL 2014?

Erik Darling: It’s the same one as 2014.

Jessica Connors: Is it?

Doug Lane: So there’s the new and the old. Old is 2012 and previous and the new is 2014 plus. There’s all kinds of other new stuff in 2016 but the cardinality estimator actually hasn’t been upgraded a second time.

Erik Darling: Yeah, Microsoft is actually approaching things a little bit differently where post 2014 with a new cardinality estimator, they’ll add optimizer fixes and improvements for a version but you won’t automatically be forced into using those. You’ll have to use trace flag 4199 to apply some of those. So even if you pop right into 2016, you may not see things immediately. You may have to trace flag your way into greatness and glory.
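A quick sketch of both scopes for that trace flag (the table name in the per-query version is made up):

-- Server-wide, until the next restart (or add -T4199 as a startup parameter):
DBCC TRACEON (4199, -1);

-- Per-query, for testing a specific plan (QUERYTRACEON requires sysadmin):
SELECT COUNT(*)
FROM   dbo.SomeTable
OPTION (QUERYTRACEON 4199);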

 

Are high IO waits on TempDB a problem?

Jessica Connors: Here’s a good question from Mandy. She says, “I’ve been on a SQL 2014 Standard cluster with tempdb stored on SSDs for several months. The last few days we’ve been seeing a lot of alerts in Spotlight saying that we have high IO waits on those tempdb files. The IO waits are as high as 500 to 800 milliseconds. Is this a high value? I’m new to using SSDs with SQL Server and I admit that I just don’t know what high is in this case. Any thoughts?”

Doug Lane: It’s high but how frequent is it? Because if you’re getting an alert that like once a day that you’re hitting that threshold, it may not be something you need to worry about too much depending on what it is that’s hitting it. So what you want to do is look at your wait stats and look at those as a ratio of exactly how much wait has been accumulated versus hours of up time. If you’re seeing a lot of accumulated wait versus hours of up time, not only will you know there’s a problem but you’ll also be able to see what that particular wait type is and get more information about what’s causing it. Then you can put that together with what might be happening in tempdb and possibly come up with an explanation for what’s going on.

Erik Darling: Yeah. I’d also be curious if something changed that started using tempdb a whole lot more or if maybe you might be seeing some hardware degradation just after some time of use.
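A bare-bones version of the waits-versus-uptime math Doug describes (their sp_BlitzFirst does a fancier version of this):

-- How long has the server been up? Wait stats accumulate from startup:
SELECT sqlserver_start_time,
       DATEDIFF(HOUR, sqlserver_start_time, SYSDATETIME()) AS hours_up
FROM   sys.dm_os_sys_info;

-- Top waits by accumulated hours:
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 / 3600.0 AS hours_waited,
       waiting_tasks_count
FROM   sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;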

 

What should I do when my audit stops working?

Jessica Connors: Question from James. He says, “I’ve installed a SQL Server audit and noticed it stopped working. Is there any way to be alerted when a SQL Server audit stops or fails?”

Angie Rudduck: Is that the Redgate tool? Because I feel like Redgate had some auditing tool or encrypting tool that went out of support when I was at my last place and we had to change over, so I’m not sure what that is.

Doug Lane: If it throws a certain severity error then you can have SQL Server notify you of those kinds of things. But as far as like audit as a product, I’m not sure.
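As a sketch of what Doug means, an Agent alert plus an operator, assuming Database Mail is already configured (names, addresses, and the severity level are placeholders):

EXEC msdb.dbo.sp_add_operator
     @name = N'DBA Team',
     @email_address = N'dba-team@yourcompany.com';

EXEC msdb.dbo.sp_add_alert
     @name = N'Severity 017 errors',
     @severity = 17,
     @include_event_description_in = 1;

EXEC msdb.dbo.sp_add_notification
     @alert_name = N'Severity 017 errors',
     @operator_name = N'DBA Team',
     @notification_method = 1;   -- 1 = email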

 

Will backup compression compress compressed indexes?

Jessica Connors: Then we’ll move on to J.H. Says, “When compressing all tables with the page option in a database, does compressing its backup gain more compression?”

Erik Darling: Yes.

Angie Rudduck: Compression squared.

Erik Darling: Compression times compression. Are you really compressing all your tables to get smaller backups?

Jessica Connors: Is that really bad?

Erik Darling: No. It’s just kind of a funny way to approach it.

Doug Lane: I don’t know if that’s the purpose but…

Angie Rudduck: I think he has no drive space, tiny, tiny, tiny SAN.

Erik Darling: Buy a new thumb drive.

Doug Lane: Talk to Ben because he apparently has the budget to have new large drives.

 

Are there performance issues with SSMS 2016?

Jessica Connors: We have somebody in here that’s playing with SQL 2016. He says, this is from Michael, “SQL Server Management Studio 2016 sometimes goes into not responding status when using the object explorer window, such as expanding the list of database tables. These freezes last around 20 seconds. Are there any known performance issues with SSMS 2016?”

Doug Lane: I found one. I was trying to do a demo on parameter sniffing where I return ten million rows of a single int-type column and maybe about half the time SSMS would stop working and it would crash and force the restart. So I think SSMS 2016, at least related to the RTM release, is a little bit flakey.

Jessica Connors: For now.

Erik Darling: Yeah, it might depend on just how many tables you’re trying to expand too. I’ve been using it for a bit and I haven’t run into that particular problem with just expanding Object Explorer stuff. So how many tables are you trying to bring back would be my question.

Angie Rudduck: I was just about to say, we had that question last week or the week before about SSMS crashing when they tried to…

Erik Darling: Oh, that’s right.

Angie Rudduck: Remember? They were trying to expand their two million objects.

Erik Darling: Yeah, that’s not going to work out well.

Angie Rudduck: So maybe this is the same person, different question.

Doug Lane: I was going to say I think it might just be a little…

Angie Rudduck: Yeah. It’s brand new, what do you expect? It’s a week old. It’s going to be flakey.

Richie Rump: Something to work when you release it?

Angie Rudduck: No, come on.

Richie Rump: I’m just saying, it’s a crazy idea, I know. I have all these crazy ideas but…

Angie Rudduck: Unrealistic expectations, Richie.

Erik Darling: That would require testing.

Jessica Connors: Richie has never released anything with bugs.

Angie Rudduck: Who needs to test things? I did have a client recently ask me what test meant when I was talking about test environment.

Jessica Connors: I know, what?

Richie Rump: What’s this test you speak of?

Erik Darling: Just for the record, Richie wipes cooties on everything he releases.

Angie Rudduck: Kiddie cooties.

Doug Lane: All right, looks like we’ve got two minutes left. Lightning round, huh?

[Group speaking at the same time]

 

What’s the best SQL Server hardware you’ve ever worked on? And the worst?

Jessica Connors: Question from Dennis. He wants to know, “Tell me the best SQL hardware environment that you have ever worked on.”

Doug Lane: I would say when we went down to Round Rock last year. I got to play with I think it was a 56-core server, that was pretty fun.

Erik Darling: Yeah, I think my best was 64 cores and 1.5 terabytes of RAM.

Richie Rump: Yeah, I had 32 cores and 2 terabytes of RAM.

Erik Darling: Nice.

Jessica Connors: What about the worst you’ve seen with clients?

Erik Darling: Ugh. Probably an availability group with 16 gigs of RAM across them. That was pretty bad. And it had like one dual core processor. It was pretty, yeah. It was Richie’s laptop.

Angie Rudduck: Worse than Richie’s laptop.

Doug Lane: That sounds about like the worst I’ve seen is like dual core, 10 or 12 gigs of RAM.

Angie Rudduck: 500 gigs of data.

Erik Darling: I’ve had faster diaries than that.

Jessica Connors: All right, well, we’re out of time.

All: Bye.

Wanna learn from us, but can't travel? Our in-person classes now have online dates, too.


[Video] Office Hours 2016/06/01 (With Transcriptions)


This week, Brent, Angie, Erik, Tara, Jessica, and Richie discuss SSMS issues, security auditing, snapshot replication, SSIS Cache Connection Manager, AlwaysON Availability Groups, deadlocks, and Jessica’s trip to Mexico.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Office Hours Webcast – 2016-06-01

Jessica Connors: All right, I guess we should be talking about SQL Server.

Erik Darling: Nah.

Brent Ozar: Oh no.

Erik Darling: Boring.

Jessica Connors: That one product.

Angie Rudduck: Meh.

Erik Darling: Snoozefest.

Brent Ozar: Which is out today. So ladies and gentlemen, if you’re watching this, SQL Server 2016 is out right now. You can go download it on MSDN or the partner site. There’s places where you can go get it. Developer Edition is free so you can go download the latest version right now. As we speak, Management Studio is not out yet but will be coming any moment.

Jessica Connors: That was our first question too: Is 2016 out yet?

Brent Ozar: Dun dun dun.

Jessica Connors: Are you hearing any rumblings on problems with it?

[Laughter]

Brent Ozar: We all start laughing. There were a lot of problems with the community previews. For example, SQL Server Management Studio would crash every time I would close it. So I’m really curious. Usually you don’t see stuff quite this buggy as you get close to release. But at the same time, I’m like, well, no one ever goes live with it in production the day it comes out anyway. People are just going to get widespread experience in development, in dev environments, and QA, then they’ll go find bugs hopefully and fix them. Hopefully.

Angie Rudduck: Wait. So I shouldn’t install that in our production servers running everything?

Brent Ozar: Yeah, no. I would take a pass for a week or two. Just let things bake out just a little bit.

Jessica Connors: Let it just wait.

 

Jessica Connors: Let’s take a question from Dennis, SSMS question. Is there a way to have SSMS format the numbers in the output messages? Not data in the query, like the row count at the bottom?

Tara Kizer: What are you trying to solve here? Because this is a presentation layer issue. Management Studio, it’s just a tool for us to query data, why does the formatting of the output matter? If you have an application you’re developing in .NET, format your data there. The row count at the bottom. No, Management Studio, there isn’t a way to format it. You can change the font and things like that in the tools options but I’m not sure that that’s what you’re asking.

Richie Rump: Is there a way, Brent? Could we format that?

Erik Darling: One thing you can do, if you’re interested in just having commas, is you can cast it as money or convert it to money with a different culture and you can get commas put in. But other than that, I’m not really sure what you’re after, so be a little more specific.

Brent Ozar: Well and return it as results. Whatever you’re looking for, return it as results instead of looking at what comes out of SSMS. Then you can format it there as well.
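A quick sketch of both formatting routes Erik and Brent mention (FORMAT needs SQL Server 2012 or later):

-- Style 1 on money adds thousands separators:
SELECT CONVERT(varchar(30), CAST(1234567 AS money), 1);   -- 1,234,567.00

-- FORMAT() is more flexible (but slower, so keep it away from big result sets):
SELECT FORMAT(1234567, N'N0');                            -- 1,234,567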

Jessica Connors: Dennis hasn’t replied to us.

 

Jessica Connors: Let’s go to Ben. He says, “[inaudible 00:02:24 old] to SQL. Hearing rumors about going to the cloud, MS, or Amazon, specifically in terms of security. What are the gotchas and pain points? Security is not our forte.”

Brent Ozar: This is so totally different from on-premises because on-premises you don’t have any security risks at all. No one could possibly access your data. I’m sure it’s locked down tighter than the pope’s poop chute. I mean it is completely secure as all get out. Just me, I’m usually like… Erik says, “Pull my finger.” I would say usually it’s more secure because you don’t go wild and crazy with giving everybody sysadmin. So I just turn it back to people on-premises and go, “So let’s talk about your security. Let’s go take a look at what you got. Everybody is SA. You haven’t changed your password in three years? Yeah, I think you should get out of on-premises. On-premises is probably the worst thing for you.” Nate says, “The pope’s poop chute? Really?” Yes. This is what happens when you work for a small independent company. You can say things like “tighter than the pope’s poop chute.” Probably can’t say that but we’ll find out later.

[Laughter]

Angie Rudduck: You’ve already said it at least three times, we’re going to find out. You’re going to get an official letter from the pope.

Brent Ozar: The Vatican, yep.

Angie Rudduck: Yeah.

Brent Ozar: “The pope does not have a poop chute.”

[Laughter]

Erik Darling: Going for a world record, most references to the pope’s butt in one webcast.

Angie Rudduck: Stop it.

Brent Ozar: Dad always said that to me, so yeah, there we go. Someone else should probably tackle the next question.

Richie Rump: Yeah, somebody else talk now, please.

Jessica Connors: Brent, I’ll just put him on mute.

Erik Darling: Looser than Brent’s…

[Laughter]

Erik Darling: Wallet, wallet, wallet.

Angie Rudduck: Wallet on the company retreat.

Brent Ozar: There we go.

Jessica Connors: I’m glad it’s a short week.

 

Jessica Connors: Question from J.H. “Would creating a server trigger and emailing our DBA team if someone makes changes to the server role be safe? We’ve heard triggers affect performance, but it’s rare in our case that we have server role changes. We want to catch it if a network admin puts himself in the sysadmin role without letting us know.”

Tara Kizer: We had security auditing at my last job. I’m not too sure what was used. Well, I think the other DBA who set this all up, he just set up a job and queried for the information. Then the job would run every few minutes I believe and would send the DBA team an alert if anything changed.
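The polling query at the heart of a job like the one Tara describes is short; a minimal sketch — schedule it in an Agent job, stash the results in a table, and mail the diff when it changes:

SELECT r.name AS role_name,
       m.name AS member_name
FROM   sys.server_role_members AS srm
JOIN   sys.server_principals  AS r ON r.principal_id = srm.role_principal_id
JOIN   sys.server_principals  AS m ON m.principal_id = srm.member_principal_id
WHERE  r.name = N'sysadmin';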

Brent Ozar: Yeah, I like that. My first reaction was Extended Events.

Tara Kizer: We had really strict auditing that we had to put in place due to the credit card information. It was encrypted but we had to be very careful with everything.

[Erik and Brent speaking at same time]

Brent Ozar: Would you say you had tight security? How tight was security? Go ahead, Erik, I dare you.

Erik Darling: Oh, sorry. I was going to say that you can set up the event data. You got me all flustered now. You can set up event data XML. It’s pretty good for modification triggers like that. It’s not like, you know, if you put triggers on tables and you’re doing massive shifts of data or you know before and after stuff. It’s a pretty lightweight way to just log changes as they happen.

 

Jessica Connors: Let’s see here. Question from Terry. “Is there a way to set up databases in an AG without doing a backup and restore?”

Erik Darling: Not a good one.

Tara Kizer: No.

Brent Ozar: 2016 there is. 2016 we get direct seeding where we can seed directly from the primary, so starting today you can. But unfortunately, not before today.

 

Jessica Connors: All right, a security question. This is from Nate regarding security auditing. “Any suggestions on getting some basic setup that tracks and alerts for security changes and schema changes?”

Tara Kizer: I don’t know.

Brent Ozar: I don’t know either. Is there like an Extended Event or something you could hook into?

Tara Kizer: Probably. What we had set up for security would have just been queries, just to query for the information. Look at the system tables and views. For schema changes, I don’t know.

Angie Rudduck: I think somebody set up a simple, “Hey, there’s somebody new in this group” for a security group. I think it was PowerShell at my last place just to like all of a sudden somebody is in the DBA sysadmin group. How did you get there? It would fire off of one server in the domain but I don’t know anything about schemas.

Brent Ozar: Yeah, schemas are tricky because you can log DDL changes. The problem is if your trigger fails, then the change to the table can fail and that can be kind of ugly. You can also set up event notifications and dump stuff into a queue with Service Broker, but it is kind of challenging and kind of risky. If you want to learn more about it, search—god, I’ve got to type this woman’s name out—Maria Zakourdaev. So if you search for “event notifications and SQLblog,” that’s what you do: “SQLblog Maria.” SQLblog is all one word. Maria Zakourdaev, and I’m sure I’m butchering her name, from Israel has a post on how you go about setting up event notifications and how they break because they do break under some circumstances.

Erik Darling: Everyone mark it, not only with SQL Server 2016 release today but Brent recommended Service Broker.

[Laughter]

Brent Ozar: It’s a great solution.

Richie Rump: You didn’t see the disdain on my face when he said that? You didn’t see that at all?

 

Jessica Connors: Let’s talk about snapshot replication from Trish L. “I have…

Tara Kizer: I’ve got to go get my coffee.

Brent Ozar: I know, we’re all like, “I’m out of here.”

Jessica Connors: Maybe we could tackle this. “I have snapshot replication which is scheduled to run one time per day but recently I’ve started to see blocking done by the snapshot replication. Do I need to [Inaudible 00:07:53] the distribution agent as well because it is running automatically now?”

Tara Kizer: I’m not sure about that but the blocking, you’re going to encounter that because it has to lock the schema. That’s one of the last steps it does. So anytime you have to do a snapshot, whether it be snapshot replication or transactional replication, I assume with merge replication too. Anytime you have to do that initial snapshot or reinitialize a snapshot, it does block changes—data changes, schema changes, you’ll see a lot of blocking as it’s going through the last bits of the snapshot creation.

Brent Ozar: What would make you choose snapshot replication? Like what would be a scenario where you’d go—or have there been any scenarios where you go, “Hey, snapshot replication is the right thing for something I encountered.”

Tara Kizer: I’ve never used it, but if users are willing to accept that their data is a day old, let’s say. Any time that I’ve used transactional replication, they’ve wanted near real time data. They wanted zero latency. We couldn’t deliver that in replication. But yeah, snapshot replication, it just depends on what your user wants as far as the data goes.

Richie Rump: I’ve used it for reporting solutions.

[Richie and Erik speaking at the same time]

Jessica Connors: What?

Erik Darling: I was asking Tara if a different isolation level would help with that blocking.

Brent Ozar: Oh.

Tara Kizer: We were actually using RCSI so, yeah, it was definitely a schema lock. We definitely still had blocking.

Brent Ozar: Makes sense. It was probably worse without the snapshot or RCSI, probably horrible.

Tara Kizer: It was very rare we had to do the snapshot but sometimes replication would be broken for whatever reason and we couldn’t figure it out and we’d just have to restart replication. Our database was large. It took like five to eight hours to do. Not the snapshot portion, the snapshot took like about 45 minutes I believe but there was a lot of blocking during that time.

Richie Rump: I like snapshot replication for reporting purposes, right? So again, you just dump the data over there and it’s okay that there’s a time delay for the reporting aspect and there’s your data.

Tara Kizer: I just wonder instead of snapshot replication if people should be, not backup and restore because that might take too long on larger databases, but a SAN snapshot, a daily SAN snapshot, because it’s just available right away. You don’t have to wait for anything.

Brent Ozar: No schema locks, it doesn’t matter what the volume of change data is, yeah.

 

Jessica Connors: While we’re on the hot topic of replication, there’s another one from Paul. “I am replicating a database using merge and had an issue where if the developers changed a procedure on the original database, the change would not be pushed to the replicated database. Replicate schema changes is set to true. Any guidance on the reason why the changes won’t replicate? I did a snapshot before initiating replication.”

Tara Kizer: So replicate schema changes has to do with the table changes, it does not have to do with stored procedure, views, functions, or anything like that. So if you do an alter table, add a column, that will get replicated if you have the replicate schema changes set to true but you would have to also have in a publication either your current publication or a different one to also replicate the stored procedures.

Brent Ozar: I wouldn’t do that in merge either. Like I would—if you’re going to change stored procedures, just keep them in source control and apply them to both servers.

Tara Kizer: Yeah.

 

Jessica Connors: Let’s move onto a question from Justin, SSIS Cache Connection Manager question. “I want to load several objects into cache, about one to five million records, but can’t figure out how to access that cache’s source of data. It’s quite a bit faster for us to load to a cache versus staging tables. Is this possible? If not, how would you store this?”

Brent Ozar: Have any of us used the caching stuff in SSIS? No, everybody is…

Tara Kizer: No, I’ve used SSIS a lot and have not used that.

Brent Ozar: The one guy I know who does is Andy Leonard. If you search for Andy Leonard SSIS, he’s written and talked about this. I know because it was in his book. I didn’t read the book, I just remember seeing the book. It was on my shelf at one time. It was a great paperweight. Smart guy, really friendly. Just go ask him the question, he’ll be able to give you that right away. Normally we’re all about, “Go put it on stack exchange.” Just go ask Andy. Just go “Andy Leonard SSIS” and he’s super friendly and will give you that answer right away.

Erik Darling: Tell him Brent sent you.

Brent Ozar: Tell him Brent sent you on this.

 

Jessica Connors: Question from Tim L. He says, “I’ve got an ancient Access expert here at my company. He has SA access. He has a lot of ODBC from multiple Access dbs into my 2008 R2 SQL Server. How do I find out what tables he updates? There’s nothing in terms of jobs or stored procedures that references his data pull and updates.”

Tara Kizer: You could do an Extended Event, run a trace, add a trigger.

Brent Ozar: It’s 2008 R2 though. I like the trigger.

Angie Rudduck: I like cutting his access.

Richie Rump: I love that, “ancient.”

Tara Kizer: Yeah, why does he need SA access?

Brent Ozar: Just go ask him. He’s ancient. He’ll be a nice guy. He’s mellow by now. If you run a trace, that’s going to be ugly, performance intensive. The trigger will be intensive.

Erik Darling: Well you can at least filter the trace down to table name.

Brent Ozar: Well but if he wants to know what tables he’s doing, it’s going to be every time…

Erik Darling: Oh, never mind.

Brent Ozar: Yeah.

Tara Kizer: He could filter by his login at least, if that’s what it’s going through at least to connect to SQL Server.

Brent Ozar: And don’t try to log his insert statements or updates deletes. Just put a record in a table the first time he does an update, delete, and then immediately turn off the trigger on that table, or the trace on that. But, yeah. That’s tough. Just go ask the guy. Go talk to the guy. It would be nice.
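A rough sketch of the fire-once trigger Brent describes. All names are made up, and you’d want to test the self-disable behavior carefully before trusting it in production:

CREATE TABLE dbo.AccessAudit
(
    EventTime datetime2 NOT NULL DEFAULT SYSDATETIME(),
    LoginName sysname   NOT NULL DEFAULT ORIGINAL_LOGIN(),
    TableName sysname   NOT NULL
);
GO

CREATE TRIGGER trg_WhoTouchedMe
ON dbo.SuspectTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.AccessAudit (TableName) VALUES (N'dbo.SuspectTable');
    -- Log the first write, then get out of the way:
    DISABLE TRIGGER trg_WhoTouchedMe ON dbo.SuspectTable;
END;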

Erik Darling: Shoot him email.

Brent Ozar: Yeah, shoot him an email. Buy him a bottle of Bourbon.

Erik Darling: Yeah.

Brent Ozar: It’s a human being.

Richie Rump: Yeah, just don’t give away the wine. Right, Brent?

Brent Ozar: If you were going to give somebody wine, you should give them like Robert Mondavi.

[Laughter]

Brent Ozar: He’s Access. He’s not, you know. That’s not true. Cliff Lede, ladies and gentlemen. This webcast is brought to you by Cliff Lede wines.
Jessica Connors: Do any of us participate in SQL Cruise?

Brent Ozar: I cofounded that with Tim Ford. Tim and I cofounded it and when we split up the consulting company versus the training and cruise-type business, I wanted to let him go do his own thing there and not be on it because I felt like I would kind of shadow in on it and make the thing murky. It is a wonderful experience. I strongly recommend it to anyone who thinks about going. It’s fantastic for your professional development. It’s limited to just say 20 attendees and like 5 to 10 presenters, so the mix, the ratio of presenters and attendees is fabulous. You get to hang out with them. You get to have dinners, from all of this you get to know them really well. So it can be a rocket ship for your career and it helps you really build networking bonds with not just the presenters but the other attendees who are there. The downside is you get to hang out with the presenters in hot tubs so that may be a pro or a con depending on what your idea of a good time is there. So it’s not for everybody but it is truly fantastic.

Erik Darling: Grant Fritchey in a speedo, ladies and gentlemen.

[Laughter]

Jessica Connors: Do you still go on the cruise then? Are you done?

Brent Ozar: I don’t. I totally stopped doing that. I go off and do my own cruises. My next one is in Alaska in August I think, going on that one with my parents. But I haven’t done a technical cruise since. Most of the time what I like to do now is just go out on a cruise and not talk to anyone. I like to go out and sit and read books.

Erik Darling: You did Alaska before, right?

Brent Ozar: This is my fifth time I think, yeah. Absolutely love it. It’s gorgeous. I never was a snow kind of a guy but you get out there in the majestic snow and mountains and bears and all that, it’s beautiful.

Jessica Connors: Nice.

Angie Rudduck: Minus the jacket.

Brent Ozar: Yes.

 

Jessica Connors: Let’s talk to Graham Logan, he’s got some problems. He says, “SSMS crashes when expanding database objects in Object Explorer. Database is about 1.2 terabytes and has about two million objects.”

Tara Kizer: Oh good lord.

Jessica Connors: But, he says, “[inaudible 00:15:43] design. It’s not mine. How to view all database objects without SSMS crashing.”

Tara Kizer: You just cannot use object explorer. You’re not going to be able to use object explorer. You can’t use the left pane in Management Studio. You’re going to have to write queries to see things. It’s very unfortunate but that’s a heck of a lot of objects in the database.

Brent Ozar: Before you expand the list, you have to right click on the tables thing and click filter. Then you can filter for specific strings, but without filtering, it’s useless… I’d go INFORMATION_SCHEMA.TABLES, INFORMATION_SCHEMA.COLUMNS, all kinds of stuff.
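In other words, something like this instead of the left pane (the filter string is hypothetical):

SELECT TABLE_SCHEMA, TABLE_NAME
FROM   INFORMATION_SCHEMA.TABLES
WHERE  TABLE_NAME LIKE N'Customer%'
ORDER BY TABLE_SCHEMA, TABLE_NAME;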

 

Jessica Connors: Kyle Johnson has a new one. “We have a 4.2 terabyte database with a single data file. I’m working on a plan to migrate to multiple ones. Shrinking the database to level the data between files isn’t really practical with a six-hour window of no users. Any other suggestions? Reindexing tables and specifying the file groups to move the tables to?” From Kyle Johnson.

Brent Ozar: Not a bunch of good options here.

Erik Darling: Brent is getting ready to tell you about Bob Pusateri.

Brent Ozar: I was. You are psychic. You are phenomenally psychic. Tell us more. I want to subscribe to your newsletter.

Erik Darling: Bob Pusateri, which I feel like this webcast has been obscene enough without me saying that, has a blog post about moving file groups, a lot of the gotchas, and you know, bad things that can happen to you. I will track down the link for it and send it to you but I would not do it justice just explaining what goes on it, because it’s scripts and everything, so.

Brent Ozar: Bob had a 25 terabyte data warehouse with thousands of files in it because the prior DBA thought it was a good idea to create a separate file group for every employee and then later came to regret that decision so he has a great set of scripts on how you go about moving stuff around and keeping them online wherever possible. So it’s really slick. So you do that prepping leading up to the six-hour window so that your six-hour window is only dealing with stuff that you can’t do offline, like moving the LOB data if I remember right.

 

Jessica Connors: Question from Claudio. “I’m trying to understand the differences between the new AlwaysOn Basic Availability Groups in synchronous commit mode and mirroring in high safety mode, but they look identical except AlwaysOn seems more complicated to set up and manage. Are there any benefits to either solution: features, performance, licensing, reliability? Which one would you recommend we adopt?”

Tara Kizer: Database mirroring is being deprecated so you’re going to want to move over to the AG basic availability groups. Get on it now. It’s the replacement for database mirroring.

Brent Ozar: The drawbacks, so you’ve managed both too. What would you say the strengths of AlwaysOn Availability Groups are over mirroring and vice versa? That’s not a trick question, I promise.

Tara Kizer: Mirroring you’re not failing over groups at a time. You’re failing over a database at a time. So availability groups let you failover in groups which is good when you have an application with multiple databases that it needs.

Brent Ozar: To be clear, the guy is saying Standard too, so you only do one database at a time. You could script those too, just like you would with mirroring. I’m trying to think if there’s anything else that would be different… you have to have a cluster, but you don’t have to have a domain with mirroring. But you don’t with 2016 either. You can do it between standalone boxes.

Tara Kizer: With mirroring, if you want the automatic failovers, you need a witness. With AGs you do need a quorum but it could be a file share on another server, you know, on a file server that you have or a disk on a SAN could be a quorum. Mirroring does require another box, a VM, it can be Express Edition.

Brent Ozar: Yeah, I used to be the biggest fan of mirroring. I’m having a tough time coming up with advantages now that 2016 is here.

Tara Kizer: I did a lot of failovers with mirroring, log shipping, and then later availability groups and by far I like availability groups best for DR failovers. It was just so much easier. You just run a failover command and you’re done. With mirroring, you’re doing it database by database. Log shipping is, you know, all sorts of restores going on. Mirroring is certainly easy, definitely easy, but I like the slickness of availability groups and readable secondaries and the choice of asynchronous and synchronous.

Brent Ozar: Yeah, that’s where I was going to go too. Because even in Standard, you get choice between synch and asynch now. And you can use one technology that works on your Standard stuff and your Enterprise stuff so you only have to learn one feature instead of learning two. That’s kind of slick too.

Tara Kizer: When we used mirroring, we would use asynchronous mirroring to the DR site then for high availability solution at the primary site we used failover clustering. So availability groups it just solves both solutions in one feature, plus reporting, we got rid of replication.
Jessica Connors: All right. Let’s move on to a question from Chris Woods, a regular attendee. He says, “Migrating MDF with LOB data, L-O-B data, I don’t know how you call that, from one drive to another with minimal/no downtime. Can you use log shipping or mirroring to mirror it to a new database on the same server, then shut down the original during a quick downtime?”

Brent Ozar: You can’t do mirroring to a different database on the same server, can you? You can do log shipping; can you do mirroring to the same instance?

Tara Kizer: No.

Brent Ozar: You can do log shipping to the same instance. That will work. Your downtime will be super fast. Because what your failover process would look like is, when it comes time for failover, you do a tail-of-the-log backup on the main database, then restore that tail-log on the other database. Rename the old primary as like “database old.” Then rename the new one to the original database name. So you could do that in like a 30-second outage. You don’t have to change connection strings because it’s all the same server still. So that’s kind of slick.

Tara Kizer: If this is a SAN drive, even moving from one SAN to the next, we did all this stuff live. I don’t know what the technologies are called but we would move arrays live. The SAN administrators did some magic and it just copied over the data and once the copy was complete, it did a switcheroo between the two pointers, or, I don’t know what the technology was but the SAN can handle this without any down time.
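Brent’s cutover, sketched end to end with hypothetical names. It assumes the log-shipped copy [YourDb_New] is already caught up and sitting in NORECOVERY/STANDBY; Brent keeps the old copy around under a new name, but it’s shown here with a drop for simplicity:

-- Tail-log backup leaves the original in RESTORING, so no more writes land there:
BACKUP LOG [YourDb] TO DISK = N'D:\Backup\YourDb_tail.trn' WITH NORECOVERY;

-- Bring the copy online with the tail applied:
RESTORE LOG [YourDb_New] FROM DISK = N'D:\Backup\YourDb_tail.trn' WITH RECOVERY;

-- Swap names so connection strings keep working:
DROP DATABASE [YourDb];
ALTER DATABASE [YourDb_New] MODIFY NAME = [YourDb];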

 

Jessica Connors: Rob is adding a new instance to an existing active active cluster. I think he’s talking me through his process so that we can say yea or nay. He says, “I would need to failover the existing instances to one node, install the new instance on the node with no instances, service pack it up, and failover the instance to the node I was just on. Then run the install on another node, apply SPs, then rebalance the instances.” Does that sound about right?

Tara Kizer: It does, but you know we don’t recommend active active clusters. I don’t, at least. What happens if you lose a node? I’ve had four-node clusters where all four nodes were active. It’s just a nightmare. If you lose a node, can your other nodes support all of the instances at the same time until you get that other node fixed?

Brent Ozar: Richie is showing something on his iPad. What I would say is…

Erik Darling: It’s too bright.

Brent Ozar: I still can’t see it. We do recommend active active with a passive node.

Tara Kizer: Yeah, okay. Right.

Brent Ozar: Yeah, multi-instance clusters, just have a passive in there somewhere. Your scenario is exactly why you want a passive node laying around.

Tara Kizer: At least what you wrote out here for the question, yeah, that is the process.

Brent Ozar: Also known as miserable.

Tara Kizer: Yeah. At least since SQL Server 2008 we’ve been able to install on just one node. Prior to that, all nodes in the cluster had to be online and have the exact right status in order for the installation, because the installation occurred across all nodes at the same time. Service packs, the engine, everything. On a four-node cluster, there’d always be one node that was misbehaving. It just says, “I need a reboot.” And you’d reboot it 20 times and it would still say, “I need a reboot.” Then finally that one would be okay and now another node would say, “I need a reboot.” It was just ridiculous. So I’m glad that Microsoft changed the installation process starting with 2008.

Brent Ozar: It’s like taking kids on a road trip. “Everybody ready…?” “No.”

Erik Darling: “I have to pee.”

Richie Rump: I got excited, I thought we had a Node.js question but I guess not.

Erik Darling: Never have, never will.

Richie Rump: Brent has.

Brent Ozar: I have.

 

Jessica Connors: Let’s take one more question. Let’s see here. “Good morning, Brent and Tara, Erik, Richie, and Angie.” Says, “Yesterday we had a problem with the process that normally moves data from a queue table and deletes it after it’s done. This is a standalone database. We stopped the inflow of data but it didn’t help. We got thousands of deadlock alerts. I notice that the disk queue length on the log drive is higher than usual. Here is a sample of the deadlock.” He provides it. “Is there anywhere I could look for this issue?”

Tara Kizer: If you’re getting deadlocks you should turn on the deadlock trace flag 1222, maybe run an Extended Event to capture the deadlock graph. Having just the deadlock victim isn’t enough to be able to resolve it.

Brent Ozar: It’s a separate technique, I think, that not a lot of database administrators get good at, because it’s one of those things where you’re kind of like, “Hey, you should fix your indexes and your queries.” Then people go off and do their own thing. It’s one of those where when you do want to do it, it takes a day or two to read up and go, “Here’s exactly how the [Inaudible 00:25:00].” There’s also not a lot of good resources on our site for it. We don’t go into details on deadlocks either. Have any of you guys seen resources on deadlocks that you liked?

Erik Darling: I like just hitting Extended Events for it. The system health session has quite a bevy of information on deadlocks and you can view the graphs and everything which is pretty swell.
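Both suggestions as a quick sketch: the trace flag writes deadlock detail to the error log, and the built-in system_health session already captures deadlock graphs on 2012 and up (the XML path below is the common 2012+ shape):

-- Tara's trace flag: deadlock details go to the SQL Server error log:
DBCC TRACEON (1222, -1);

-- Erik's route: pull deadlock graphs from the system_health session files:
SELECT CONVERT(xml, event_data).query('(event/data/value/deadlock)[1]') AS deadlock_graph
FROM   sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL)
WHERE  object_name = N'xml_deadlock_report';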

Tara Kizer: I attended a session at PASS in 2014, Jonathan Kehayias from SQLskills, it was all about deadlocks. It was invaluable information. He went over different scenarios and stuff. He said that he loves deadlocks. It was like, whoa, I don’t know that anyone has ever said that before. But it was really great information. I haven’t looked at—I do read his blogs—but I suspect he’s got a lot of deadlock information on the blog to help you out.

Richie Rump: He also loves XML.

Brent Ozar: He loves XML and Extended Events. If you have a Pluralsight subscription. So Pluralsight has online training. I want to say it’s like $39 a month or something like that. I think Kehayias has a course on deadlocks. I’m not 100 percent sure but if you search for SQL Server deadlocks if Kehayias has a course on there, it would be wonderful.

Erik Darling: Also, if you don’t have Pluralsight but you want to try it, Microsoft has a Dev Essentials site I believe where if you sign up for that, you get a 30-day free trial of Pluralsight and you also get Developer Edition and a copy of Visual Studio that’s free, Visual Studio Community or something for free. So it’s not just the Pluralsight subscription for 30-days but you do get a couple other goodies in there that last you a little bit longer.

Richie Rump: The course is called SQL Server Deadlock Analysis and Prevention.

Angie Rudduck: Somebody still has a Pluralsight account.

Jessica Connors: All right guys, that’s all we’ve got for today.

Brent Ozar: But thanks for hanging out with us. Man, time goes so fast now. Gee, holy smokes.

Erik Darling: And they’re sobering up.

Brent Ozar: Well, back to work. The Cliff Lede, ladies and gentlemen. Enjoy the High Fidelity. See you guys next week.

 

Wanna learn from us, but can't travel? Our in-person classes now have online dates, too.

SQL Interview Question: “How do you respond?”


Brent’s in class this week!

So you get me instead. You can just pretend I’m Brent, or that you’re Brent, or that we’re both Brent, or even that we’re all just infinite recursive Brents within Brents. I don’t care.

Here’s the setup

A new developer has been troubleshooting a sometimes-slow stored procedure, and wants you to review their progress so far. Tell me what could go wrong here.

You are now reading this in Pat Boone’s voice.

Remember, there are no right answers! Wait…

Wanna learn from us, but can't travel? Our in-person classes now have online dates, too.

SQL Server 2016: Availability Groups, Direct Seeding, and You.


One of my least favorite things about Availability Groups

T-SQL Tuesday

Well, really, this goes for Mirroring and Log Shipping, too. Don’t think you’re special just because you don’t have a half dozen patches and bug fixes per CU. Hah. Showed you!

Where was I? Oh yeah. I really didn’t like the backup and restore part.

You find yourself in an awkward position

When you’re dealing with large databases, you can either take an out-of-band COPY_ONLY backup, or wait for a weekly/daily full. But if you’re dealing with a lot of large databases, chances are that daily fulls are out of the question. By the time a full finishes, you’re looking at a Whole Mess O’ Log Restores, or trying to work a differential into the mix. You may also find yourself having to pause backups during this time, so your restores aren’t worthless when you go to initialize things.

You sorta-kinda got some relief from this with Availability Groups, but not much. You could either take your backups as part of the Wizarding process (like Log Shipping), figure it out yourself (like Mirroring), or defer it. That is, until SQL Server 2016.

Enter Direct Seeding

This isn’t in the GUI (yet?), so don’t open it up and expect magic mushrooms and smiley-face pills to pour out at you on a rainbow. If you want to use Direct Seeding, you’ll have to script things. But it’s pretty easy! If I can do it, anyone can.

I’m not going to go through setting up a Domain Controller or Clustering or installing SQL here. I assume you’re already lonely enough to know how to do all that.

The script itself is simple, though. I’m going to create my Availability Group for my three lovingly named test databases, and add a listener. The important part to notice is SEEDING_MODE = AUTOMATIC. This will create an Availability Group called SQLAG01 with one synchronous and one asynchronous secondary replica.

Critical sensitive data.

CREATE AVAILABILITY GROUP [SQLAG01]
FOR DATABASE [Crap1], [Crap2], [Crap3]
REPLICA ON
-- Primary and first secondary: synchronous commit, automatic failover
N'SQLVM01\AGNODE1' WITH (ENDPOINT_URL = N'TCP://SQLVM01.darling.com:5022',
    FAILOVER_MODE = AUTOMATIC,
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    BACKUP_PRIORITY = 50,
    SECONDARY_ROLE(ALLOW_CONNECTIONS = READ_ONLY),
    SEEDING_MODE = AUTOMATIC),  -- the magic part
N'SQLVM02\AGNODE2' WITH (ENDPOINT_URL = N'TCP://SQLVM02.darling.com:5022',
    FAILOVER_MODE = AUTOMATIC,
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    BACKUP_PRIORITY = 50,
    SECONDARY_ROLE(ALLOW_CONNECTIONS = READ_ONLY),
    SEEDING_MODE = AUTOMATIC),
-- DR secondary: asynchronous commit, manual failover
N'SQLVM03\AGNODE3' WITH (ENDPOINT_URL = N'TCP://SQLVM03.darling.com:5022',
    FAILOVER_MODE = MANUAL,
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    BACKUP_PRIORITY = 50,
    SECONDARY_ROLE(ALLOW_CONNECTIONS = READ_ONLY),
    SEEDING_MODE = AUTOMATIC);
GO

ALTER AVAILABILITY GROUP [SQLAG01]
ADD LISTENER N'SQLAGLISTEN01' (
WITH IP ((N'123.123.123.13', N'255.255.255.0')), PORT=6000);
GO

 

Empty inside.

The next thing we’ll have to do is join our Replicas to the AG with the GRANT CREATE ANY DATABASE permission. I prefer to do this in SQLCMD mode so I don’t have to change connections manually.

No more apple strudel!

-- Run in SQLCMD mode (Query > SQLCMD Mode in SSMS) so :CONNECT works.
:CONNECT SQLVM02\AGNODE2

ALTER AVAILABILITY GROUP [SQLAG01] JOIN
GO
ALTER AVAILABILITY GROUP [SQLAG01] GRANT CREATE ANY DATABASE
GO

:CONNECT SQLVM03\AGNODE3

ALTER AVAILABILITY GROUP [SQLAG01] JOIN
GO
ALTER AVAILABILITY GROUP [SQLAG01] GRANT CREATE ANY DATABASE
GO

DO MY BIDDING!

 

 

Shocked, SHOCKED

And uh, that was it. I had my AG, and all the databases showed up on my two Replicas. Apart from how cool it is, it’s sort of anti-climactic that it’s so simple. People who set their first AG up using this will take for granted how simple this is.

BRB waiting for something horrible to happen.

 

What’s really nice here is that when you add new databases, all you have to do is add them to the Availability Group, and they’ll start seeding over to the other Replica(s). I need to do some more playing with this feature. I have questions that I’ll get into in another post in the future.

CREATE DATABASE [Crap4]
GO

ALTER AVAILABILITY GROUP SQLAG01 
ADD DATABASE [Crap4];  
GO

 

These are empty test databases, so everything is immediate. If you want to find out how long it will take to Direct Seed really big databases, tune in to DBA Days Part 2. If anyone makes a SQL/Sequel joke in the comments, I will publicly shame you.
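
If you do try this with bigger databases and want to watch progress, 2016 also ships a couple of seeding DMVs. Here’s a minimal sketch (the DMV names are real; run it on the primary and adjust the columns to taste):

-- Seeding attempts and their current state.
SELECT ag.name AS ag_name, s.start_time, s.completion_time,
       s.current_state, s.failure_state_desc
FROM sys.dm_hadr_automatic_seeding AS s
JOIN sys.availability_groups AS ag ON ag.group_id = s.ag_id;

-- Transfer rate and estimated completion for in-flight seeds.
SELECT local_database_name, role_desc, transfer_rate_bytes_per_second,
       transferred_size_bytes, database_size_bytes, estimate_time_complete_utc
FROM sys.dm_hadr_physical_seeding_stats;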

 

Healthy green colors!

 

Thanks for reading!

Update! The Man With The PowerShell Plan himself, Mike Fal, also wrote about this feature for T-SQL Tuesday. Check it out.

Brent says: wanna see this capability get added to SSMS for easier replica setup? Upvote this Connect item.


Triage Quiz: Is Your SQL Server Safe?


Contrary to popular belief, we spend a lot of time with clients when we’re not blogging, answering questions in Office Hours, or working on new features for the download pack. Something we hear a lot is, “How do we compare to other clients?” or “Is this the worst/best setup you’ve seen?”. This got me thinking, so I’ve created this totally non-scientific “Triage Test” for anyone who wants to know how they’re doing or who has nothing better to do than take quizzes on the internet.

You are just answering questions; no changes to your systems. Here’s how it works:

  1. Pick ONE production SQL Server for your score
  2. Pick the answer that is closest to your setup
  3. If the answer is the 1st answer, you get 1 point. If it’s the 3rd, you get 3 points. (This would be worth 3 points, right? Right.) Get it?

Despite how honorable everyone who reads our blog is, since we can’t prevent cheating, you’ll have to settle for the glorious prize of having a comment on this post, and hopefully either knowing your server is in a pretty good place or knowing where to start to fix it.

 

DO YOU HAVE RPO/RTO ESTABLISHED FOR THIS SERVER IF IT GOES OFFLINE (We’ll stick to HA scenario only)?

  1. What’s RPO/RTO?
  2. No, but we have informal goals in the IT department
  3. Yes, we set this within (only) IT
  4. Yes, we have it in writing from the business

Bonus Point: Yes, we set it with business and tested (at least once) that we can meet it

 

ARE YOU BACKING UP ALL DATABASES ON YOUR SERVER?

  1. What’s a backup?
  2. No, only the ones we use the most
  3. Yes, system and user databases
  4. Yes, full backups for system and user databases, plus transaction log backups on user databases

 

ARE YOU RUNNING DBCC CHECKDB FOR ALL DATABASES?

  1. What’s DBCC CHECKDB?
  2. No, only the ones we use the most
  3. Yes, system and user databases
  4. Yes, and we log the entire output

 

DO YOU HAVE DATABASE MAIL ENABLED AND ALERTING ON THIS SQL SERVER?

  1. What’s Database Mail?  What Alerts?
  2. No, Database Mail is enabled but no alerts are configured
  3. Yes, Database Mail is configured and we receive job failure/completion and/or error alert emails
  4. Yes, we have 3rd party SQL Server-specific monitoring software

Bonus Point: What’s your software, and do you like it?

 

ARE YOU RUNNING SP_BLITZ ON YOUR SERVER?

  1. What’s sp_Blitz®?
  2. No, nothing is wrong with my server
  3. Yes, I ran it once
  4. Yes, I run it on a regular basis

Bonus Point: What shocked you the most in your results?

 

HOW WELL DID YOU DO?

There are 23 possible points.

Did you do as well as you thought?

Are you surprised by other results?

While there are several other factors that go into keeping your server safe, these are some of the things I use when I triage a client’s server. Hopefully you had a chuckle, and maybe even learned something new along the way.

CHEERS!

Tara says: I first heard about sp_Blitz at PASS 2011 when I attended Brent’s session on it. I was eager to get back to work and run it on my servers. Well, that’s until I actually did run it and saw so many issues: UNSUBSCRIBE. There were things in there that I had never heard of or thought about. Do your servers a favor and run it periodically.


Questions You Should Ask About the Databases You Manage


First, what data do we have?

  1. Do we store any personally identifiable data?
  2. Does any of that data include children?
  3. Do customers believe that this data will never be seen publicly?
  4. Do customers believe that this data will never be seen by your employees?

Next, what would happen if this data became public?

  1. What would happen if all of the data was suddenly available publicly?
  2. What would happen if the not-really-considered-private data was made public? (Customer lists, products, sales numbers, salaries)
  3. If someone got a copy of our backups, what data would they be able to read?
  4. If someone got the application’s username/password, what data would they be able to read?

1.5TB of flash drives. All your backups in my pocket.

What are we doing to ensure those scenarios don’t happen?

  1. If our backups aren’t encrypted, do we know everywhere that the backups are right now?
  2. How are we preventing people from taking out-of-band backups?
  3. How are we preventing systems administrators from taking snapshot backups or copying backups?
  4. How are we preventing people from running queries, saving the output, and taking them out of the building?
  5. For each of these scenarios, do we have a list of all of the people who could accomplish these tasks?
  6. For each of these scenarios, would we know if they happened?

And finally:

  1. Overall, what risks are out there?
  2. Have you documented the risks in writing?
  3. Has this risk list been given to management?
  4. Or, when any of these scenarios eventually happen, are you going to be the one who was assumed to be protecting the business from this kind of thing?

After all, notice the title of this blog post – you’re managing the databases, right?


Let’s Make a Match: Index Intersection


Most of the time, when you run a query, SQL Server prefers to use just one index on a table to do its dirty work.

Let’s query the Users table in the StackOverflow database (I’m using the March 2016 version today), looking for people with a certain reputation score OR a certain number of upvotes:
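
The query is shaped roughly like this. The exact reputation and upvote values are placeholders I picked for illustration, not necessarily the ones in the demo:

SELECT *
FROM dbo.Users
WHERE Reputation = 430
   OR UpVotes = 1250;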

Query doing a table scan

If I create an index on each field, will SQL Server use it?
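
Something like this, with index names of my own choosing:

CREATE INDEX IX_Reputation ON dbo.Users (Reputation);
CREATE INDEX IX_UpVotes ON dbo.Users (UpVotes);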

Create the indexes, but SQL Server ignores them

Diabolical. The cost on this query is 74 query bucks – not a small operation, and large enough to go parallel, but SQL Server still turns up its nose at the indexes.

But the indexes weren’t perfect – they weren’t covering. I was doing a SELECT *, getting all of the fields. What happens if I only get the fields that are on the index itself – the clustering key, the ID of the table?

Eureka! Index intersection.

Indexes gone mild!

Presto! Now SQL Server is doing an index seek on two different indexes on the same table in order to accomplish my where clause.

Now, that’s not really index intersection – it’s doing two index seeks to get two different populations of users – those that match the reputation filter, and those who match the upvotes filter. What happens if we change our query’s OR to an AND?

Query with a key lookup

Now we’re down to a query plan you know and tolerate: an index seek followed by a key lookup. The reason is that the filters on reputation are extremely selective – there just aren’t that many users with those exact reputation numbers.

In order to get real index intersection – finding the overlapping Venn diagram in two filters – we need to use ranges of data that are less selective. It’s an interesting challenge:

  • If either filter is too selective, we get an index seek on that one, followed by a key lookup
  • If neither filter is selective enough, we get a clustered index scan
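
A query shaped something like this threads the needle. The exact ranges are guesses you’d have to tune against your copy of the database:

SELECT Id
FROM dbo.Users
WHERE Reputation BETWEEN 100 AND 5000
  AND UpVotes BETWEEN 100 AND 5000;
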
The unicorn in the wild: index intersection.

Presto! SQL Server is doing index seeks on two different indexes, on the same table, and then finding the rows that match both filters. I can count on one hand the number of times I’ve seen this in the wild, but that probably has to do with the kinds of servers I usually see. I’ll leave that interpretation to you, dear reader.

 


[Video] Office Hours 2016/06/15 (With Transcriptions)


This week, Brent, Angie, and Tara talk through your questions about monitoring tools, transactional replication, configuration management, source control software suggestions and much more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Office Hours Webcast – 2016-06-15

Why did some of my drives disappear?

Angie Rudduck: Bruno says he has an instance with several DBs, and suddenly a couple of them became unavailable and the physical disks where the data and log files lived disappeared. No Windows events. How can he audit what happened at the SQL Server level?

Tara Kizer: It’s unlikely a SQL Server problem. I’ve encountered this many, many, many times. You’ve got to talk to your sysadmins or you’ve got to talk to your SAN admins, server admins, they’ve got to take a look. Something happened. It’s almost certainly not a SQL Server issue.

Angie Rudduck: Yeah, if your drives disappeared, it’s probably not SQL Server’s fault.

Brent Ozar: When the drives disappear, I don’t know that you would see a Windows event unless there’s some kind of error message that pops up from the SAN or whatever. I’m assuming it’s a SAN.

Tara Kizer: You would eventually see a SQL Server error once it finally has to write to disk. I mean, it’s going to be a little bit before that happens since SQL Server does everything in memory. So it’s not going to know about it for a while. But the checkpoint, any kind of writing to disk. It’s finally going to start throwing errors and those should be posted in the event log.

Brent Ozar: Backups.

Tara Kizer: Yeah. We’ve encountered weird bugs on like Cisco hardware that caused it and just various weird things. But it has happened numerous times, across many servers, many different hardware platforms, different SANs. It just happens.

Brent Ozar: I think it’s usually it’s just human error. I mean like Robert Davis, a fellow Microsoft Certified Master, just ran a blog post on how he’s like, “Even I screw up.” Drops the wrong one and all these database transaction logs disappear.
Angie Rudduck: Oh yeah, I’ve dropped two databases from prod before. Two separate occasions I have dropped a database from prod. Thankfully both were quick enough recovery. The second one turned out not really used, so that was okay.

Brent Ozar: It’s a matter of time. That’s how you become senior too, you have to have those experiences.

Angie Rudduck: I was just going to say, I’ve only been a DBA for three years, but I run into people who have been DBAs for ten years and I know things they don’t, only because they’re things I’ve experienced that they never did. Maybe they were in a smaller shop and I worked in bigger places. It’s all about what experience you had.

Brent Ozar: Yeah, everything that involves replication. Tara knows everything.

Angie Rudduck: Somebody already, “Since Tara is here, blah blah replication” question.

Brent Ozar: Here we go.

 

What’s the best SQL Server monitoring tool to use?

Angie Rudduck: Constantino—I butchered your name, I’m sorry—he has a long-winded, easy question. Basically they’re trying to look for a good monitoring tool for production servers. They’re looking specifically for health monitoring that can alert them when something happens or is going to happen. So don’t get Ignite, it’s not in your list, but don’t get Ignite. He’s looking for a full-rounded solution. They’ve tested a bunch: Spotlight, Foglight, Redgate, SQL Sentry, Idera. Do we have any favorites that we would point them to for health monitoring and for SQL alerting?

Tara Kizer: SQL Sentry provides both, with the Performance Advisor and then the Event Manager tools, I believe. Both of those together can give you everything you need. We used SQL Sentry at my last job, and at previous jobs we used Spotlight. I wasn’t a big fan of Spotlight. It was great for looking at certain things. I did set up some availability group alerts, but it wasn’t as comprehensive as I wanted. We also had Foglight, which I think is now called Performance Analysis. Then we had SCOM, so Microsoft’s System Center Operations Manager with the SQL Server management pack. But SQL Sentry, their two big tools did replace SCOM and the performance analysis tool for us at the time. But it’s pretty expensive. SCOM plus another tool is not as expensive. But SCOM requires almost a full-time monitoring person that knows how to use it. It’s very complicated.

Angie Rudduck: Yeah.

Brent Ozar: I’ve used all of those too. I’m fine with all of them. It comes down to personal preference.

Tara Kizer: Yeah.

Angie Rudduck: Did he mention Dell’s? That’s Spotlight, right? Dell is Spotlight.

Tara Kizer: Yeah, Spotlight and Foglight. Foglight is the name that we used to call them. I think it’s Performance Analysis, I think. People may still refer to it as Foglight.

Brent Ozar: They renamed it again.

Tara Kizer: Oh they did? They went back to Spotlight?

Brent Ozar: Yes.

Tara Kizer: Oh, I didn’t know that. They were probably sick of people calling it Foglight and they’re like well we should just call it that too.

Brent Ozar: A friend of mine calls them FogSpot. He’s like, “I don’t know which one it is. I’ll just call it FogSpot.”

Tara Kizer: Yeah, one of them.

 

What should I do about the replication error “undelivered commands”?

Angie Rudduck: All right. So we will move along. Let’s see—not that one—we will go to Nate, with the transactional replication. They have a setup where often they get stalled transactions from the “alert for undelivered commands job.” Any thoughts?

Tara Kizer: Stalled transactions. I’d probably need to see the full error. So undelivered, so it probably means that it’s sitting at the distributor and it hasn’t been sent to the subscriber. I would take a look at the throughput. Take a look at the distributor and the subscriber to see if there’s any kind of CPU issue, possibly it’s just a lot of data got pushed through. Yeah, I don’t know for undelivered commands. Usually it’s a bottleneck on the publisher with reading the transaction log. Maybe there’s just a lot of stuff in there, you’re not backing it up often enough so the amount of data that has to go through is bigger. Mirroring, availability groups, and—well those can add to replication latency because everything gets stored in the transaction log.
Angie Rudduck: All right. So I realized I missed this very small question from Greg, so I will give him some attention. He said he saw some tweets recently that stated you should have four cores per NUMA node. What do we think about that?

Brent Ozar: Somebody was pulling your leg. It’s not configurable. It just comes down to, for Intel processors, the number of cores per processor. If you turn on hyperthreading, it’s going to suddenly magically double. There are differences under virtualization; unfortunately, it’s such a huge topic that you can’t possibly say, “You should always have four cores.” It depends a lot on the host hardware that you’re using and whether or not that hardware is identical across all of the hosts in your cluster. But yeah, anybody who just says four is oversimplifying something. Or it might have been for just one particular guy’s setup, like if one guy had just one host design.

Angie Rudduck: Yay for answers where people are pulling your leg.

 

What’s the best way to create a SQL Server inventory?

Angie Rudduck: Okay. Samuel wants to know, “What is the best way to create a SQL Server CMDB/inventory without buying third party software?”

Tara Kizer: I don’t know what that is.

Brent Ozar: Configuration management. Idera just had a new tool. If you go to Idera.com and click on free tools, I want to say it’s Instance Check, or something with inventory in the name. So go to Idera and click on free tools. The other thing to search for is Dell Discovery Wizard. Dell Discovery Wizard will go through and survey your network, discover SQL Servers, and identify them for you. Put them into a database. Another tool that you can use is SQL Power Doc. SQL Power Doc is an open source PowerShell script from Kendal Van Dyke. If I had to pick one that I like, I used Dell Discovery Wizard a bunch of times. Idera’s looks pretty easy as well. SQL Power Doc, not very easy, but super powerful.

Angie Rudduck: Very good.

 

Should I use multiple Availability Groups or just one?

Angie Rudduck: Eli has a question about availability groups since Brent Ozar II isn’t here. They’re planning on upgrading from 2008 R2 to 2014 to take advantage of availability groups. They would like to know if there is a performance advantage to having databases spread across multiple AGs instead of one single AG. His example is having the primary of one AG on a different node than another AG’s primary, to take advantage of the hardware.

Tara Kizer: Yeah, I mean, definitely. The first part of your question, there is no advantage to spreading them across multiple AGs unless you are putting the primary on a separate replica. But you know, you do have licensing implications in that case.

Angie Rudduck: Curse licensing. Always out to get you.

Brent Ozar: That was a smart question. I’m glad he said move the split around different primaries because I was like, “No, there is no advantage—Oh yeah.”

Tara Kizer: There is an advantage there.

Angie Rudduck: Tricky wording there.

 

Why am I getting an external component error when installing SQL Server?

Angie Rudduck: Kimberly, welcome back, I haven’t seen you in a bit. She is installing SQL Server 2008 R2 on Windows Server 2012 R2. This is compatible based on MS docs she checked. However, she’s getting the “external component has thrown an exception error.” What is she missing?

Tara Kizer: I wonder if there is a prerequisite that you need to install first. At least on older versions of SQL Server and Windows it was supported on newer versions of Windows but you had to install something first. I don’t remember what it was and I don’t know that that’s why you’re encountering this error. This is the type of thing that I’d probably open up a support case with Microsoft.

Brent Ozar: The other thing, go download that, whatever the ISO file or the EXE that you got for the download, go download it again and just save it to another place and try it again. Because I’ve gotten so many funky setup errors just from a corrupted ISO file. Then when I go and get another ISO, like bloop, works perfectly. I’d also say anytime you get them, I’m kind of paranoid like this, but anytime that you get an error during setup, I’d rather wipe Windows and start again. I’m just paranoid. I want to build something that’s going to last forever. So if you’re having repeated errors on the same Windows box, hey, go download a new ISO and then try again on a fresh install of Windows.

Tara Kizer: You can go through the setup logs to see if there’s a better error because that’s probably a pretty generic error. The problem with the setup logs is it’s hard to find the errors. Scroll all the way to the bottom and then you might have to start scrolling back up to see the failure. Because even though it failed, it might have done a lot of extra work afterwards and all of that is going to be logged.

Brent Ozar: There’s like 50 million instances of the word error in the log.

Tara Kizer: Yeah, exactly, it’s awful.

Angie Rudduck: I do like the trick that I learned about filtering the Windows event log during triage. I had no idea about that, and then one day when I watched you do triage and you right-clicked on the left side bar, I was like, “What? I only knew…” Because half of the time during triage I have to ask the client to move my head because it’s always floating right over the filter option on the right panel in Windows events, so that happens a lot. I’ve been trying to work around not asking them to move my head because it sounds weird to me.

 

How should we do source control with SQL Server?

Angie Rudduck: Since we’re talking about a lot of software, let’s ask another question from Scott. Do we have any suggestions on source control software? When Richie is not here of course.

Tara Kizer: Yeah, I was going to say, Brent and Richie love Git.

Brent Ozar: So there are two ways you can do this. One is you can do source control before you deploy your code, meaning you go make a stored procedure change, you check the code into GitHub or Subversion or TFS, whatever tool you want to use. That’s proactive. Then you go deploy the code after you’ve checked it in. Man, almost nobody does that. Plus too, you end up having DBAs who need to change indexes in production or need to make an emergency change. So the other way you can do it is reactive source code control which means this tool goes out and looks at your SQL Server every x number of hours and then goes grabs any changes and checks them into source control. So this gives you a postmortem log of everything that changed but not who did it and not the exact time that it changed. I am personally a fan of reactive source control as a DBA. I don’t really care as much about who did it but I want what was changed. I want a breadcrumb list of everything that changed on objects. So Redgate’s source control for SQL Server has that ability that they’ll just go through and patrol your SQL Server periodically and check in any changes. It’s just not the source control that your developers are used to. That proactive change control is really, really hard.

Tara Kizer: We did both proactive and reactive at my last job. We used Visual Studio Team Foundation Server. Anytime we did a deployment of the application, that was always proactive. And of course, DBAs, you know, are having to make changes. The DBAs were supposed to go in and do a schema compare and then update TFS. That didn’t always happen. Other tasks were more important. So whoever that next person was that touched that database, when they did the schema compare to create the deployment scripts, they would see that there are these other things that shouldn’t be in the deployment, things that had already been deployed to production but weren’t in source control. Besides that though, because you could have databases you never touch again, twice a year they went through and did a schema compare against all databases and got them up to date.

Brent Ozar: Scott asks, “I didn’t know about reactive source control. Who makes it?” It’s a technique, not a product. It’s just part of Redgate’s source control as well. I think I even still have a blog post on our blog about how you do it with a batch file. Like I wrote a batch file in VBScript to do it with Visual SourceSafe. I need to burn that in a fire.

Angie Rudduck: That sounds complicated to somebody who’s going to try and totally mess up. That was cool. I was about to ask Tara if you could do them together. So that’s cool that you have seen them both together because I was like I feel like one place we didn’t consider indexes, we didn’t let developers change indexes. So if a DBA throws them in and then doesn’t check it in, that would be great to have the reactive right there.

Tara Kizer: Yeah, as long as you have a schema compare option in the tool that you use. Or you can get another schema compare. Then you can see what the changes are between source control and your database.

Angie Rudduck: Very cool.

 

What’s the fastest way to modify a big table?

Angie Rudduck: J.H. wants to know, “What is the fastest and/or safest way of exporting a large table and then reimporting it and maintaining its primary key auto identity seed ID … SELECT * into temp table from large table or bulk copy out or something else?”

Brent Ozar: Okay, so I’m going to tell you the terms to google for: modifying a table online Michael J. Swart S-W-A-R-T. So because you said fastest, Michael has an interesting set of blog posts, it’s like a five-part blog post on how you go set up the new table, how you build something to keep the old and new table in sync and then you move data across in batches. So this way end users notice very minimal downtime and yet you’re able to keep the two in sync as you get larger. The only time I would go that route is if, “You cannot take any down time. We’re willing to let you put a whole lot of development work into it” and it’s more than like say 50 gigs in one table. If it’s less than 50 gigs in one table, I would probably just do a select with a tablock and move the data across that way.

Tara Kizer: Then you can use the identity insert option to handle the identities. That way you keep the values the same between the two tables. So SET IDENTITY_INSERT ON. You can only have one table at a time do this so make sure you set it off when you’re done.
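
A minimal sketch of what Tara’s describing, with made-up table and column names. Note that you have to list the columns explicitly while IDENTITY_INSERT is on:

SET IDENTITY_INSERT dbo.BigTable_New ON;

-- An explicit column list is required when inserting identity values.
INSERT INTO dbo.BigTable_New (Id, CreateDate, Payload)
SELECT Id, CreateDate, Payload
FROM dbo.BigTable WITH (TABLOCK);

SET IDENTITY_INSERT dbo.BigTable_New OFF;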

 

How should I manage identity fields with replication?

Angie Rudduck: That’s a perfect lead-in to Paul’s question. He has existing replication where he wants to change the identity management of the primary keys, which are currently IDENTITY(1,1). He wants to change the primary keys to IDENTITY(1,2) on the publisher and IDENTITY(0,2) on the subscriber. Is there a way to do this without recreating the tables?

Tara Kizer: You do have the DBCC command where you can change the seed but I don’t think that you can change the increment. Usually, in a scenario like this though what people do is they have the publisher, it’s inserting positive numbers and then on the subscriber inserting negative numbers. So you would have, you know, if it’s an integer, you could have two billion rows for the subscriber and two billion rows in the publisher. That usually satisfies most tables. Otherwise, go to bigint.

Brent Ozar: So there’s a DBCC command to reseed the identity. I cannot remember for the life of me what the syntax is but if you search for that.

Tara Kizer: Yeah, I think DBCC CHECKIDENT is the command.

Brent Ozar: Yeah, you just run that on the one where you want to change them.
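
For reference, it looks like this (table name made up). It changes the seed, not the increment:

-- The next insert gets 1000000 plus the table's increment.
DBCC CHECKIDENT ('dbo.YourTable', RESEED, 1000000);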

Angie Rudduck: Good info.

 

Should I use checksum when taking backups?

Angie Rudduck: Samuel wants to know, “Is it best practice to always add checksum when taking backups?”

Brent Ozar: Did you do that when you were a DBA?

Angie Rudduck: I didn’t.

Brent Ozar: You too, both of you, yeah, yeah.

Angie Rudduck: I didn’t know it existed.

Brent Ozar: I don’t think most people do.

Tara Kizer: I knew it existed. Did we do it? Probably not. It does add overhead to the backups and we were—at least a lot of the critical systems we would always, not always, but we would have a backup restore system. So we were testing our backups regardless. So do you need checksum if you are going to be testing your backups?

Brent Ozar: Yeah, I learned about it after I got started consulting. I’m like, oh, that’s an interesting idea. I went on a little quest of “I’m going to get everybody to do checksum on their backups.” I put it in sp_Blitz as a warning, “Hey, you’re not doing checksums on your backups.” Universally, people were like, “What is that? Why would I want my backups to go slower?” So I took it out as a recommendation just because people don’t like their backups going slower.

Tara Kizer: Does Ola’s solution, does it do the checksum by default?

Brent Ozar: Not by default, yeah.

Angie Rudduck: I think it does.

Brent Ozar: Oh, does it?

Angie Rudduck: Because I’ve been playing around. Yesterday I was playing around, let me double check my settings here, but I ran the scripts default and then took a look. So I would have to double check, but it’s included as an option at the very least.

Brent Ozar: And doesn’t his do verify by default too out of the box?

Angie Rudduck: Yeah, maybe it does verify by default and not checksum by default. But the verify, I mean, the one thing I don’t think people think of is how it can impact things. You might be going, “Oh, my one gig backup is taking 20 minutes.” I don’t know. But it’s because it’s just doing the RESTORE VERIFYONLY command against the backup it just took. So it’s just saying, “Oh, is this still a valid backup?” And at the basic level, right? Correct me if I’m wrong, but it’s only saying, “Oh, yes, I can open this as a file. I don’t know its validity inside.” Just that it could reopen it as needed. So that’s just something to be considerate of. It’s not the saving grace, “Oh, I did verify only.”

Brent Ozar: Yeah, it could be all corrupt data in there. It could be 100 percent corrupt. The way you’re going to have to find out is to run CHECKDB.
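
If you do want to try checksums, it’s just an extra option on both the backup and the verify. A minimal sketch with placeholder names and paths:

BACKUP DATABASE YourDatabase
TO DISK = N'X:\Backups\YourDatabase.bak'
WITH CHECKSUM, COMPRESSION;

-- Re-validates the checksums recorded in the backup file.
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH CHECKSUM;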

 

Why don’t our SELECT queries show current data?

Angie Rudduck: All right.

Brent Ozar: We’ve got all kinds of questions coming in. It’s crazy.

Angie Rudduck: I know, they’re definitely falling in now. Okay, so Andrea says they have been having issues with data not showing up in reports for sometimes up to 15 minutes. They are an OLTP shop running 2012 Web. Is this possibly a thing with SQL or is it due to something else?

Tara Kizer: I think we would need more information as to how is the data getting into this database? Is it queueing? Is there a backlog in say a Tibco queue or something like that? Or, you talk about reporting, do you have a replicated system? Or in availability groups, readable secondary, maybe there’s a delay in getting the data to those. I don’t think we have enough information to answer it.

Angie Rudduck: Yeah, I agree.

Brent Ozar: It’s never normal to do an insert in SQL Server, commit your transaction, and then not have it be available for a select immediately.

 

Why am I getting tempdb-full errors when my tempdb is 4GB?

Angie Rudduck: Let’s see what David has to say. He’s getting this on a server with four 1GB tempdb data files and an 8GB tempdb log: “Insufficient space in tempdb to hold row versions. Need to shrink the version store to free up some space in tempdb.”

Tara Kizer: That’s a pretty small tempdb. I’ve supported tempdbs that were half a terabyte in size just because we had business users running ridiculous queries. So, first of all, why is your tempdb so small? Are you shrinking it down? You probably need some more space. Version store, are you running read committed snapshot isolation level? So you need more space for tempdb.

Brent Ozar: And then how much space do you need? Generally, if somebody puts a gun to my head and just says go pick a number, I’m going to go with 25 percent of the size of all the databases combined. So if you have say 100 gigs on the database server, you probably need at least 25 gigs for tempdb.

Tara Kizer: A few jobs ago, we set up hundreds and hundreds of servers. So we just made a policy and tempdb we set at 70GBs. These were shared servers with lots of databases and we didn’t know what was going to occur on them. We would have alerts to warn us if data files or the log file was creeping up on, if they were going to fill up, so we could react to those. But 70 GBs for all of the tempdb data files and I believe 30GBs for the tempdb log file. That was just our default.
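
Pre-sizing tempdb like that is just ALTER DATABASE against each file. A sketch using Tara’s 70/30 split; the logical file names here are guesses, so check sys.master_files for yours:

-- Four equally sized data files totaling roughly 70GB, plus a 30GB log.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 17920MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 17920MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp3, SIZE = 17920MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp4, SIZE = 17920MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 30720MB);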

Brent Ozar: I don’t get out of bed for less than 70 gigs.

Angie Rudduck: Silly, silly.

 

If I don’t know monitoring tools, will that hold me back in job interviews?

Angie Rudduck: Ronny supports about 25 prod and dev databases as a DBA. He’s not in the corp Windows DBA group and does not have access to all the tools monitoring performance, etc. “All monitoring I have in place is based on scripts that run and report issues. Will the lack of experience working with the tools like that hurt my chances with pursuing a new DBA job? I know it really depends on what the hiring manager is looking for but is knowing tools like that an important skill to have?”

Tara Kizer: I don’t think it’s an important skill necessarily, I think it’s obviously going to depend company to company but if you don’t have any experience with monitoring tools, I think that that’s fine as long as your other experience, your actual SQL Server experience, is what they’re looking for. You can get up to speed on these tools, I wouldn’t say fairly quickly, but you can at least click around and figure things out and with some guidance get some pretty in-depth knowledge of these tools. For the most part, I don’t think that companies are paying for tools like this. So I think that it’s pretty rare that companies have these tools in place.

Angie Rudduck: Yeah, unless you’re going to a large DBA shop, I don’t feel like you’re probably going to have very many of these tools.

Brent Ozar: And you’d have to know all of them. I mean, you know, if you only knew one and then somebody doesn’t use that one, you’re screwed.

Angie Rudduck: It’s not the same thing as not knowing SQL Server versus MySQL versus Oracle. They all run pretty similarly and nobody expects you to know all of them or they’re only going to hire you if you know this one. Like if you only know Redgate, great, because they’re a Redgate shop. That’s usually not the case.

Brent Ozar: Yeah, when we do interviewing for consultants for example, so when we go and hire people, we will often give them a virtual machine and say, “Now you’re on, take remote control of this thing. Show me why it’s slow.” Or, “Troubleshoot why this query isn’t working.” If someone comes to me and says, “Well, I’m sorry, all I can do is use a tool,” like I only ever troubleshoot this with Dell or Idera or Redgate and I’m lost without a third party tool, you’re not going to do well as a consultant because we can’t rely on those tools either. When we parachute in, man, I have no idea what’s going to be happening on the other end. So it pays better to know the native ways to doing things.

 

Idle chit-chat about smoking weed and your friend sp_WhoIsActive

Angie Rudduck: I think we have probably time for one more question. Did you guys see anyone while I scroll back and look?

Brent Ozar: Greg says he remembers that the tweets about tempdb stuff were four tempdb files per NUMA node. They were smoking weed too. I don’t know who that was.

Angie Rudduck: They must be in Portland.

Brent Ozar: Yeah, Denver, something like that.

Angie Rudduck: Someone chimes in, Eli says, “The sp_WhoIsActive is your friend about the monitoring” to you Ronny. That is a good point, we love…

Tara Kizer: WhoIsActive and the Blitz stuff.

Brent Ozar: Yeah, free tools. Pro tip: If you’re going to apply for work at our company, you may want to try using our tools. Know that they’re out there. If you come in and use someone else’s tools, it will not be a good time for you.

Angie Rudduck: Yeah, I agree.

Brent Ozar: Not that you ever need to know how to use our tools to work here. We teach you those too after you get here. But, yeah.

 

Is PostgreSQL better than SQL Server?

Angie Rudduck: Yeah. I feel like there was one that I… There’s a couple that are like…

Tara Kizer: Wes asks the same question, I think he wants—they’re 20 minutes apart.

Angie Rudduck: He really wants me to read his question. Wes, I’m going to tell you my answer is SQL Server pays our bills. Wes wants to know what our thoughts are on Postgres versus Microsoft SQL Server. SQL Server.

Tara Kizer: We’re SQL Server professionals, so our answer is going to be SQL Server. If you want me to support your system, I don’t do PostgreSQL so I can’t support it. I mean, I could probably learn it but I don’t really have any interest in learning it.

Brent Ozar: See, I don’t support it either. But I always try to learn about other stuff. There’s stuff that’s really cool about Postgres. Unlogged tables is a classic example. If you search for Postgres on our site, we’ve written a couple blog posts about different features in Postgres that we would want in SQL Server. But boy, at the same time, I kind of like parallelism. Man, Microsoft SQL Server has had parallelism for a long, long time. That’s kind of nice in today’s huge, multicore environments where 16 cores isn’t a big deal. 32 cores isn’t a big deal anymore. Parallelism is pretty freaking awesome. And they’re iterating super fast. So, yeah, I kind of like Microsoft SQL Server. If I was going to start a career from scratch: Microsoft is where it’s at in the enterprise environment, and Postgres is where it’s at in the startup environment. Well, thanks everybody for hanging out with us today and we will see you guys next week.



We’re open-sourcing the sp_Blitz* scripts.


We’re proud to announce that our First Responder Kit is now on Github, and it now uses the MIT open source license.

What This Means for Users

Good news: it’s still free, and now it’ll be updated even more often. If you’re subscribed to update notifications, we’ll still email you monthly when we release new versions.

Today, we’re not announcing a new release – because we’re in the midst of testing a whole bunch of breaking changes:

  • Parameter names are now all @PascalCase with no underscores. (They used to vary between procs.)
  • Parameter functions are more consistent – here’s the documentation. Right now, this documentation page is kinda long and unwieldy, and we’ll be splitting that up too over time.
  • sp_AskBrent is about to be renamed – although I have no idea what to call it, and I’ll ask for your help on that one in tomorrow’s blog post.

If you want a stable, high-quality set of code, get the latest release zip. Don’t work with the source code directly unless you’re in a testing environment, because it will break.

What This Means for Consultants and Software Companies

Our prior copyright license said you couldn’t install this on servers you don’t own. We’d had a ton of problems with consultants and software vendors handing out outdated or broken versions of our scripts, and then coming to us for support.

Now, it’s a free-for-all! If you find the scripts useful, go ahead and use ’em. Include sp_Blitz, sp_BlitzCache, sp_BlitzIndex, etc as part of your deployments for easier troubleshooting.

What This Means for Contributors

The contribution process is now way easier:

  • Search Github issues to see if anyone has requested the feature you’re considering (including closed issues, because sometimes we close stuff that isn’t a good fit for these scripts)
  • Create a new Github issue so other users can discuss your proposed changes
  • Fork the project to a local copy – this gives you your own working version that you can test locally
  • Test your work on case-sensitive instances – ideally, on at least the oldest and newest supported versions of SQL Server (today, 2008 and 2016)
  • Create a pull request to offer your code back up into the public repo, and moderators will test your code

Bonus: if you’re working towards Microsoft MVP status, you can include open source contributions in your list of community activities. Since these tools are now open source, you get more credit for your work.

Head on over to the Github SQL Server First Responder Kit project, and if you’re interested in watching what happens, click the Watch button at the top right. You’ll get emails as people add issues and create pull requests.

Wanna talk about it live? Join SQLServer.slack.com, and we’re in the #FirstResponderKit channel.


We’re Renaming sp_AskBrent. What Should the New Name Be?


Yesterday we announced that we’re open sourcing our free SQL Server scripts, and one of those is sp_AskBrent. I originally named it that because it had a funny magic-8-ball type feature: if you pass in a question as a parameter, it gives you an answer:

sp_AskBrent @Humor = 1

Cute, but now that it’s open source, it’s time to give it a name that matches the important stuff it does.

Here’s what sp_AskBrent does:

sp_AskBrent with no parameters gives you a prioritized list of reasons why your SQL Server is slow right now, like a backup running, rollback happening, a data or log file growing, a long-running query blocking others, extremely high CPU use, etc.

sp_AskBrent @SinceStartup = 1 shows your wait types, file stats, and Perfmon counter activity since startup.

sp_AskBrent @OutputDatabaseName = 'DBAtools', @OutputSchemaName = 'dbo', @OutputTableName = 'AskBrentResults' – plus a few other parameters – captures your wait types, file stats, and Perfmon counters into a table so you can do your own performance trending over time.

So what should we name it?

Most of our other tools start with sp_Blitz, so maybe sp_BlitzPerformanceCheck or sp_BlitzMetrics. I have no idea. But I bet you do, so put in your comments here before end of day on Friday, June 24th, 2016. We’ll pick a winner based on completely random subjective taste, and the first person who suggested that name will get a free Everything Bundle. Good luck!

Tara says: I like sp_BlitzNow. Vote for my pick, and I’ll send you some Brent Ozar Unlimited magnets. I’m kidding. Those things are heavy. Shipping will mean I can’t pay the mortgage. If my pick wins, I’ll give away the Everything Bundle to a random person that commented.


30,000 Comments


It feels kinda arbitrary, but it’s a champagne moment:

The 30,000th Comment

Thanks to everybody who’s ever stopped by, left a comment, and taken part in the discussions. (An extra-special thanks to folks who even addressed us by the right names, and didn’t call everybody else here Brent, hahaha.)

I started this thing over a decade ago, but you, dear reader, are the reason we keep blogging. It feels weirdly appropriate that the 30,000th non-spam comment is about what to name our open source tools. You make this place a party.


The Worst Way to Judge SQL Server’s HA/DR Features


We love to help people plan for disasters

We’re not pessimists, we’ve just seen one too many servers go belly up in the middle of the night to think that having only one is a good idea. When people ask us to help them, we have to take a lot into consideration. The first words out of the gate are almost always “we’ve been thinking about Availability Groups”, or some bastardized acronym thereof.

Don’t get me wrong, they’re a fine thing to think about, but the problem is that usually people’s only exposure to them, if they have any exposure to them outside of thinking about them, is just in setting them up.

Usually with VMs.

On their laptop.

With a few small test databases that they just created.

And they’re really easy to set up! Heck, even I can do it.

But this is the worst way to judge how well a solution fits your team’s abilities.

Everything’s easy when it’s working

When things get hard, and when most people figure out they’re in way over their heads, is when something goes wrong. Things always wait to go wrong until you’re in production. In other words, driving a car is a lot easier than fixing a car.

If you don’t have 2-3 people who are invested mainly in the health and well-being of your Availability Groups, practicing disaster scenarios, failing over, failing back, and everything in between, you’re going to really start hating the choice you made when something goes bump in the night.

And stuff goes wrong all the time. That’s why you wanted HA/DR in the first place, right?

  • Stuff can go wrong when patching
  • I mean, REALLY wrong
  • Sometimes index tuning can be a pain in the neck
  • You need to think before you fail over
  • Setting them up isn’t the end of the line
  • Fine, don’t believe me

Play to your strengths

If you’re a team of developers, a lone accidental DBA, or simply a few infrastructure folks who don’t spend their time reading KB articles on SQL and Windows patches, testing those patches in a staging environment, and then pushing your app workload on those patches, you’re going to have a tough time.

Things that are still great:

No, you don’t get all the readable replica glamour, and the databases failing over together glitz, but you also don’t get to find out you’re the first person to Google the error your Availability Group started throwing at 2am, shortly before you, your app, and all your users stopped being able to connect to it.

Try scaling up first

If part of your move to Availability Groups is Enterprise licensing, get some really fast CPUs, and enough RAM to cache all or most of your data. You may not need to offload the stuff that’s currently a headache.

Try some optimism

Optimistic isolation levels like RCSI and SI can help relieve some of the burden from large reporting queries running over your OLTP tables.
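
Flipping RCSI on is a one-liner, though it needs a quiet moment on the database and extra tempdb room for the version store. A sketch with a placeholder database name:

-- Boots current sessions so the setting can take effect immediately.
ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;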

Get your script on

No, I don’t mean getting your baby mama’s name tattooed on your neck. I mean scripting out parts of failing over Mirroring or Log Shipping so that it’s not such a bleary-eyed, manual process. Availability Groups don’t keep agent jobs, users, and other custom settings synced from server to server, so you’re going to have to figure that part out anyway.
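
With Mirroring, for instance, the failover itself boils down to a single statement on the principal. The database name is a placeholder:

-- Run on the current principal; the mirror must be synchronized.
ALTER DATABASE YourDatabase SET PARTNER FAILOVER;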

Still interested in coming up with a HA/DR solution that works for you? Drop us a line!

Thanks for reading!

Doug says: And remember folks, HA/DR solutions sometimes differ between on-premises and cloud. Make sure the cloud features you want to use are fully supported.


What’s the Greatest Compliment You Ever Received?


It takes as little as one word. One word from a co-worker, blog commenter, or Stack Overflow user can make you feel like a champ. (Or a chump, depending on the word.)

(Let’s focus on feeling like a champ.)

What’s the greatest compliment you ever received about your professional work? Did someone compliment your tenacity, your calm under pressure, your ability to make chaos orderly, or maybe how you rescued a doomed project? Do you remember how you felt when you heard it? Does it still influence you today? Share it in the comments!

Here’s mine, from 2010: “You’re a great researcher.”

When I think about all the different ways being a good researcher makes me better at my work, I can’t help but put that compliment at the top of my list. I’ll never forget it. What compliment will you never forget?

Erik says: The best compliment I ever got? “Your a idiot.”

Brent says: “You’re taller than you look on the webcasts.”


Announcing New Online Classes in EU & Australia-Friendly Time Zones


Time zones suck.

I really wish we could all rise and shine at the same time, but that’s just not how this flat planet works. So to make it easier, I’ll be getting up at some ridiculous times in order to lead a new round of online classes:

Gonna need a lot of these

Senior DBA Class of 2016:

SQL Server Performance Tuning:

Students have been raving about these courses:

“It was awesome. I learned why things work the way they work inside SQL Server and how to identify the main bottlenecks and proceed to fix accordingly.” – Jose R. Guay

“Great information, simple yet thorough examples that were easy to understand and apply.” – Chris Wiegert

“It was my favorite “work week” in a very long time. I learned a lot, reinforced a lot of good things I previously knew, and had a great time. It was an honor to learn from Brent and observe his love for what he does first hand. I have been back in the office for a little over a week now and can say we have already (after proper testing of course) implemented 3 changes I learned about from the training and all 3 of these changes represent a 10x boost in performance from our system. It has ALREADY been a good investment to take the class.” – Kevin Anderson

“Fan-Frikin-Tastic” – Andy Schwabe

“Awesome, and well worth the investment! Almost experienced a brain overload!” – Christopher Sprinkel

“Epiphany inducing course work illustrating the depth of knowledge required to be a senior DBA.” – Graham Logan

“The class flows very well, and you can tell that you have worked hard at keeping the content relevant and up-to-date. For our class you were able to modify the schedule and conversations based on our questions. That is a sign of someone who is a great presenter, and is comfortable with the content and themselves.” – Keith Harvego

“Please keep doing what you’re doing !!! This was by far my best instructor led class that I’ve taken in a long long time” – Harry Larsick

Check out the courses, and save $1,000 with coupon code Caffeine if you book before July 31.


[Video] Office Hours 2016/06/22 (With Transcriptions)


This week, Brent, Richie, Doug, and Tara discuss Growing databases, most useful SQL certifications, replication issues, group discounts, backup software and more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Office Hours Webcast – 2016-06-22

 

Brent Ozar: All right, we might as well get started here. Let’s see, we’ve got all kinds of questions pouring in today. We’ve got 75 folks in here, let’s go see what they’re asking questions about. Going through… oh, Scott asks an interesting question. Scott says, “How would you handle a new manager that insists his databases should not grow?”

Doug Lane: Insist his business should not grow.

Brent Ozar: I like it.

Richie Rump: Monster.com?

Brent Ozar: Yeah, you’re never allowed to sell anything. You’re never allowed to add new data. What I would do is make sure that he understands that update statements don’t make the database grow. So if he updates your salary, it doesn’t take additional space in the database, it just changes an existing number.

Tara Kizer: I wonder if there’s more to the question though. Is the manager SQL Server savvy and is saying preallocate it big enough that it never has to auto grow? He is probably not a good manager though, doesn’t understand SQL Server technology or databases in general.

Doug Lane: We’re quick to judge in these parts.

Brent Ozar: Scott says, “No, he is not technical.”

Tara Kizer: And he’s saying that SQL databases should not grow. That is just odd.

Richie Rump: If he’s not technical, why is he giving technical advice? That doesn’t make any sense.

Brent Ozar: It’s probably a budgeting thing. He’s like, “Look, I’ve got to keep my budgets the same number. It’s very important.” Scott says, “He is very used to monitoring the database.” What you do is, I had a manager—I shouldn’t say—okay, well, I’ve started down the road so I just might as well. So I had a manager once, not in my department but in another department, who was asking for anal things like that. Like, “Oh, I want to make sure CPU never goes above five percent.” So what we did was we hooked up his monitoring tool to point at a server no one used and we just called it SQL whatever and told him that’s where his stuff lived. He was totally happy. He was completely happy. Thought we were amazing. Yep, that’s my job.

 

Brent Ozar: All right, so Gordon asks a related question. Gordon says, “I’m not really sure what to set autogrowth to on my very large databases.” Usually when people say VLDB they mean like a terabyte or above. He says, “Yes, I should be growing manually during a suitable maintenance window, but I also have an autogrowth value set just in case. Is one gig a good number, or what should I do for a one terabyte database?”

Tara Kizer: I used one gigabyte on the larger databases, sometimes even maybe a little bit bigger. As long as you have instant file initialization for the data files, they don’t have to be zeroed out, so the growth isn’t slow. Log files may be a little bit smaller. I did a lot of times [inaudible 00:02:25]. Sometimes I did one gigabyte on larger databases where I knew the patterns and it was going to use a larger file for certain things. But I tried to preallocate those.
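
Setting that growth increment is quick. A sketch with placeholder names; check sys.master_files for your logical file names:

ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDataFile, FILEGROWTH = 1024MB);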

Doug Lane: And IFI is something that comes on by default if you want it to in SQL 2016.

Tara Kizer: Oh really? I didn’t know that.

Brent Ozar: There’s a checkbox in the install.

Doug Lane: There’s a little checkbox that says, “I want IFI to work with this installation.”

Brent Ozar: Microsoft calls these things “delighters.” They’re trying to add delighters into the product. I’m like, “I am delighted! That’s actually wonderful.”

Richie Rump: It’s just faster.

Brent Ozar: It is faster. It’s just faster. And they’re right. I like them.

Doug Lane: It works.
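If you want to make Tara’s one-gig growth increment explicit, here’s a minimal sketch (the database and logical file names are placeholders, so swap in your own):

ALTER DATABASE [MyBigDatabase]
MODIFY FILE (NAME = N'MyBigDatabase_data', FILEGROWTH = 1GB);

ALTER DATABASE [MyBigDatabase]
MODIFY FILE (NAME = N'MyBigDatabase_log', FILEGROWTH = 512MB);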

 

Brent Ozar: I have an interesting question from Wes. Wes asks, “What are the most useful SQL Server certifications?” So we’ll go through and ask these folks for their opinion. Richie, we’ll get started with you because you’re on the left on my monitor. What do you think the most useful SQL Server certifications are?

Richie Rump: The one you have. That’s it. That’s the only useful one.

Brent Ozar: The A+?

Richie Rump: Yeah. Certified Scrum Master. No, the MCM, right? I mean that’s by far the most useful one you have. I mean as soon as you get it, you’re recognized as an expert anywhere.

Brent Ozar: You say that but nobody still believes that I actually passed that test, for rightful reasons. I wrote a check, it was a really large check. Then I brought another bag of $20s and I gave that to the instructor and off we went. Tara, how about you?

Tara Kizer: I’m against SQL Server certifications. A while ago they had all these practice tests online and I am a terrible test taker. I felt like at the time I was really good at what I did, you know, SQL Server, DBA for a long time, and I could not pass the test. So I feel like it’s for people that don’t have experience that are just trying to get their foot in the door. I already had experience, I don’t know that certifications are required at any job when you have as many years of experience as I do but I could not pass the test. I also wasn’t willing to study for these tests. Some of the stuff is just useless information I didn’t need to know. So why add that stuff to my brain?

Brent Ozar: Doug, how about you?

Doug Lane: It depends on how you define useful: is it useful in the sense that it will get you a job, or in the sense that it will make you better at your job? Certifications will tell you what you don’t know as you test for them, but apart from the value of actually holding the certification, there’s very little to it. It’s the kind of thing where you decide if you want it on your resume or not. In most cases, it won’t matter. Again, apart from exposing blind spots in what Microsoft thinks you should know about SQL Server, it’s really not going to help you that much.

Brent Ozar: It does teach you a lot—go ahead.

Richie Rump: As a former hiring manager of both developers and data folks, I never looked at certifications at all. It didn’t help you; it didn’t hurt you. It just never came into play because it’s just a test. It’s not exactly how you work, like Tara said, it’s just a test.

Tara Kizer: I had one job where, if you did do the certifications, it was something to put on your review, something that you worked towards. So it was a review cycle thing, a possible extra bonus or promotion, but it was just a bullet point on the review. You had all the other stuff on your review as well.

Brent Ozar: For the record, we don’t bonus our employees that way. If you want to take a test, that’s cool. We’ll pay for it. We also pay for passes or fails, it doesn’t matter because I know from taking them too, I’m like, I walk in, I look at the test, I’m like, “They want you to know what? XML, what?”

Tara Kizer: And there will be more than one right answer. They want the most perfect answer and it’s like, well, there’s three of them here that could be the right answer.

Brent Ozar: Yeah.

Richie Rump: PMP was crazy like that. I mean it was, “Oh look, they’re all right. But what’s righter-er-er?”

Brent Ozar: PMP, Project Management Professional?

Richie Rump: Professional, yep.

Brent Ozar: There we go.

 

Brent Ozar: Nate Johnson says, “It may be a waste of 15 minutes of company time but I do enjoy these pregame antics.” For those of you who just listen to the podcast, you miss out on when you come in and join live we just shoot the poop—there goes a bunch of old jokes but I’m just going to keep right on, I’m not going down there.

 

Brent Ozar: Tishal asks, “Is it possible to see the size of the plan cache using T-SQL?” The answer is yes, and none of us knows it from memory. There is a DMV for it; if you Google for that, you’ll find it. In the show notes, we’ll go track it down for you.
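For the show notes: one way to get there is sys.dm_os_memory_clerks, summing up the plan cache stores. A hedged sketch (column names are for SQL Server 2012 and newer):

SELECT [type] AS clerk_type,
       SUM(pages_kb) / 1024. AS cache_size_mb
FROM sys.dm_os_memory_clerks
WHERE [type] IN (N'CACHESTORE_SQLCP',   /* ad hoc and prepared plans */
                 N'CACHESTORE_OBJCP',   /* stored procedure plans */
                 N'CACHESTORE_PHDR')    /* bound trees */
GROUP BY [type];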

 

Brent Ozar: David asks, “Replication question.” And then he never types it in. Oh, no, he does later. “We use replication extensively. In this scenario…” What do you mean, your scenario? Is this like a game show? Are you trying to test us? “We have a bunch of reference tables that hardly ever change replicated out to dozens of locations. Should we use transactional replication, snapshot replication, or an ETL process where just refreshing them once a day would be fine?”

Doug Lane: What are you the most comfortable managing?

Brent Ozar: Oh, look at that Doug. Go on, elaborate.

Richie Rump: Welcome back.

Doug Lane: If you feel really good about setting up some sort of SSIS package to do this, then by all means do and get away from replication. But this is the kind of thing where it really comes down to a comfort level. Replication will never be your best friend. It’s just too finicky and complicated and aggravating to work with. But it can get the job done.

Brent Ozar: When you say finicky and complicated and aggravating to work with, that describes most of my best friends so I’m not sure what you mean by… yeah, Richie is pointing at himself.

Tara Kizer: I had a scenario like this for reference tables. We actually did not replicate them. The only time that these tables changed was during deployment. So if we needed them on the subscriber database, we just deployed to both the publisher and the subscriber for those tables. That way we didn’t have to add them to the publication. There’s not really any overhead as far as transactional or snapshot except when it has changes coming through. But why have them in there if they hardly ever change and it’s part of a deployment process?

 

Brent Ozar: James asks, “What’s the best practice for setting minimum server memory? There’s a lot of guides out there on how you set max server memory but what should I set min server memory to?”

Tara Kizer: We took max server memory, set per the best practice of leaving four gigabytes or ten percent, whichever is greater, for the OS, and then we divided it by two. That was our min server memory. That was our standard across all servers.

Brent Ozar: I like that. I think our setup guide doesn’t even give advice on it, because if that’s your biggest problem, you’re in really good shape. I love that you’re asking this question because that’s a really good, detail-oriented question.

Tara Kizer: We had the standard because we were mostly a clustered environment. We had, I don’t even know how many clusters, maybe 100 clusters or so, and some of them were active/active. You want to make sure that when a failover occurs and you’re running on one node, the SQL instance that has to fail over can get memory. We would also set max down in the active/active environment.

Doug Lane: It also kind of depends on how much you’re piling on that server, because if it’s your Swiss Army knife server, you’re probably going to have trouble if you’re trying to run Exchange and other stuff on it, but you know [inaudible 00:09:37]. If you’ve got the whole BI stack running on it too, then you want to make sure that under no circumstances can other stuff steal away from SQL Server to the point where your database engine is actually beginning to starve a little bit. So keep in mind whatever else is on that box. If you really just have a dedicated SQL Server database engine box, then yeah, it’s not going to be as big of a deal because it will take whatever it needs and there really won’t be competition for that memory in terms of it getting stolen away.
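Applying Tara’s formula with sp_configure, here’s a sketch for a hypothetical dedicated 64GB box (the numbers are assumptions; work out your own):

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

/* 64GB of RAM: leave 10% (bigger than 4GB here) for the OS, then min = max / 2 */
EXEC sys.sp_configure 'max server memory (MB)', 58982;
EXEC sys.sp_configure 'min server memory (MB)', 29491;
RECONFIGURE;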

 

Brent Ozar: Mandy asks, “We’ve got SQL Server 2014 and our tempdb files are on local solid-state drives. Recently we’re starting to see higher and higher IO waits on those tempdb files, upwards of 800 milliseconds. I’m new to solid-state, is this normal or is this an indication of a problem?” That’s a good question. My guess is, depending on how old the solid-state drives are, their write speed actually degrades over time. It can get worse over time. The other thing that’s tricky is depending on how many files you have: if you spider out tempdb to say one file per core and you’ve got 24 cores, solid-state may not be able to handle that workload as well. So generally speaking, we aim for either four or eight tempdb files when we first configure a server. This is one of those instances where more can actually harm you rather than fewer, but I would just check to see. You can run CrystalDiskMark against those solid-state drives and see if write speed has degraded since they were new. It’s certainly not normal though.
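Before reaching for CrystalDiskMark, you can also check the latency SQL Server itself is seeing on each tempdb file. A hedged sketch:

SELECT mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON  mf.database_id = vfs.database_id
  AND mf.file_id     = vfs.file_id;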

 

Brent Ozar: Wes asks, “Are any of you speaking at the PASS Summit?” Well, all of us will be speaking, we’re all going to be standing around the hallway talking to all of our friends. Are we going to be presenting? That we don’t know yet. That announcement comes out today. So we’ll find out later today. I keep looking over at Twitter to see whether or not it’s come out and it hasn’t come out. So as soon as it comes out, we’ll say something.

 

Brent Ozar: Wes says—and I have no idea what this is in reference to—“Use Walmart as a precedent.”

Richie Rump: Enough said. I don’t think we need to say anything more about that.

Doug Lane: For the “adios pantalones” shirt.

Brent Ozar: That’s probably true.

 

Brent Ozar: Next up, Tim says, “I’m fighting for only using stored procs. I don’t want to use inline SQL even for simple queries. My developers are fighting against this and they want to use things like Entity Framework. Am I wrong for pushing hard for only using stored procs?”

Tara Kizer: I have a lot of experience on this topic. I was very, very pro stored procedures for the longest, longest time. Slowly, as developers changed, they wanted to use prepared statements, parameterized queries from the applications, and we didn’t want to stop them from the rapid development that they were doing, so we did allow that. Once we realized that the performance was the same between stored procedures and prepared statements, it became okay from a performance standpoint. However, from a security standpoint, you’re having to give access to the tables rather than just to the stored procedures. So that was just something that we had to think about. But as far as Entity Framework goes, I know Richie is very pro Entity Framework. Entity Framework, and what’s the other one? NHibernate. There are some bad things they do that can really, really harm performance, so it’s something that you have to watch out for. They use nvarchar as their datatype for string variables, and if your database is using varchar, you’re going to have a table scan when you do a comparison in the where clause, and you’ll be able to tell in the execution plan. It will say, “An implicit conversion occurred.” You’ll see that it said nvarchar and you’ll be like, “Whoa, why? My table is using varchar.” It’s because the application specifies nvarchar. It’s something that you can override, but if you’re not overriding it, this is what they’re going to do.

Richie Rump: So this just in: that is not a bug. That is a problem with the developer’s code. They didn’t specify that the column was a varchar, so because .NET uses Unicode as its string type, it automatically assumes everything is nvarchar. So there’s a way that we could go in and say, “Hey, this column is varchar.” If you don’t do that, that will cause the implicit conversions. That’s only if you’re using code first. If you’re using the designer, the designer does the right thing and doesn’t put the N in front of it, so it doesn’t send it as nvarchar and you don’t get that implicit conversion. So that’s only for code first, and only if the developers aren’t really doing the right things when they’re doing their mappings in code. And just because I have Julie Lerman’s phone number doesn’t mean that I’m pro Entity Framework.

Tara Kizer: You’re pro because you speak about it. You present on the topic.

Richie Rump: Oh, okay. So if you go to the pre-con, you’ll hear me talk more about it, but it’s a balanced view—we’re not going to be able to tell developers not to use it. Microsoft is saying to use it. So if you’re saying that, then you’re saying, “Don’t do what Microsoft says,” and that’s a much bigger uphill battle than you probably want to face as a DBA. So the general rule of thumb is: for most things, it’s okay. But for complex things, if it’s going to be complex in the SQL, it’s going to be complex in the LINQ, and now there are two hops it’s got to go through to figure out what the actual plan is. One, it’s got to change that LINQ into a SQL statement, and two, it’s got to change that SQL statement into a plan. That’s probably going to be 50 pages long, which nobody ever wants. So at that point, cut your losses, do a stored procedure, and everybody is okay. But there’s a big difference between when we have to write SQL as developers, when we’re typically not very good at it, as opposed to, “Oh here, let me just do context.tablename.get” and then it just does it for us. So there’s a speed issue here for development, and there are usually a lot more developers than there are of you. So unless you want to stay up all night writing SQL statements…

Brent Ozar: Is that a threat?

Richie Rump: Yeah. You guys get paid more than us so I don’t understand what that is either.
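To see the implicit conversion Tara and Richie are describing, here’s a minimal repro sketch (the table and column are made up):

CREATE TABLE dbo.Users (Id int IDENTITY PRIMARY KEY, Email varchar(100));
CREATE INDEX IX_Users_Email ON dbo.Users (Email);

/* What EF code first sends by default: an nvarchar parameter.
   The varchar column side gets converted, so the index can't be seeked: */
DECLARE @Email nvarchar(100) = N'someone@example.com';
SELECT Id FROM dbo.Users WHERE Email = @Email;   /* scan, CONVERT_IMPLICIT */

/* With matching types, you get an index seek: */
DECLARE @Email2 varchar(100) = 'someone@example.com';
SELECT Id FROM dbo.Users WHERE Email = @Email2;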

 

Brent Ozar: John says, “I just saw your announcement about pre-con training in Seattle. Do you guys offer group discounts?” We do actually. If you’re going to get five or more seats, shoot us an email at help@brentozar.com, or just go to brentozar.com and click on contact up at the top. Then you can contact us, tell us which class you want to go to, and for five or more seats we’ll give you a group discount.

 

Brent Ozar: Gordon asks, “If I’ve got an Azure VM that replicates data, I want to send it down to an on-premises database. It’s a VM, it’s not using Azure SQL database, what are my HA and DR options?”

Tara Kizer: That’s unusual to go in that direction. I don’t have an answer but I’ve never heard of anyone doing that.

Brent Ozar: You should be able to use replication, log shipping, AlwaysOn Availability Groups, anything that you can use on-premises you can use up in Azure VMs. I’ve got to be really careful when I say that. The hard part of course is getting the VPN connection between Azure and your on-premises stuff. That’s where things get to be a bit of a pain in the rear.

 

Brent Ozar: Jennifer asks, “Is the MCM still available?” No. They broke the mold after I got mine, thank goodness.

Tara Kizer: Yeah, 2008 R2 was the last version right? I mean it was the only version really. It’s been a while.

Brent Ozar: Whoo hoo.

 

Brent Ozar: Kyle Johnson asks—I try not to use last names but Kyle, that’s a generic enough last name and your question is so cool, it doesn’t matter. It’s totally okay. You shouldn’t be ashamed of this question. It’s a good question. I’m not just saying that to get you a gold star on the fridge. He says, “I was on a webinar yesterday where you covered sp_BlitzIndex. Are you aware of any scripts or [inaudible 00:17:06] columnstore indexes? Or is there anything I would look at in order to learn whether or not columnstore is a good fit for me?” Everyone is silent. So there’s your answer. The closest I would go to is Nikoport, N-I-K-O-P-O-R-T.com. Niko Neugebauer (he’s from Portugal, and he’s got a really hard to pronounce last name) has lots of information about columnstore indexes. He’s like Mr. Columnstore.

 

Brent Ozar: Tim says, “I’ve just inherited a data warehouse project.” Well, you have really crappy relatives, Tim. “With five weekly updated data marts. The largest table is close to 300 million rows and it’s approaching a terabyte. My loads are taking longer than usual. What’s the best way to diagnose performance tuning on stuff like this?” So: a data warehouse with a bunch of data marts, tables approaching a terabyte, and loads taking longer than usual. Where should he look for performance tuning?

Tara Kizer: What is it waiting on?

Brent Ozar: What is it waiting on? And how do you find that out?

Tara Kizer: You run a query, a saved script. I don’t have the DMVs memorized. I mean, you could run sp_WhoIsActive I guess. I assume that would work in this environment and while it’s running, see what it’s waiting on. You know, is it a disk issue? Something else?

Brent Ozar: My favorite of course, because I’m me, is sp_AskBrent. If you run it with the SinceStartup equals one parameter, it will tell you what your wait stats have been since startup.

Richie Rump: The script formerly known as sp_AskBrent.

Brent Ozar: Yeah, we’re running a contest now to figure out a new name for it because we just open sourced a whole bunch of our scripts, and I don’t want to have it called sp_AskBrent anymore because strangers are going to start checking in code and I don’t want their answers reflecting on me. “I asked Brent and it said I was a moron.”

Richie Rump: That’s already in there.

Brent Ozar: Yeah, it’s in right at the end.
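For reference, the call looks like this, and the parameter survives the rename to sp_BlitzFirst covered later in this feed:

EXEC dbo.sp_AskBrent @SinceStartup = 1;

/* after the open source rename: */
EXEC dbo.sp_BlitzFirst @SinceStartup = 1;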

 

Brent Ozar: Ankit asks, “How do I troubleshoot SQL recompilations increasing on SQL Server?” So there’s a Perfmon counter, recompilations a second, and recompilations are increasing. What should he go look at?

Doug Lane: Did anyone recently add option recompile?

Brent Ozar: Oh, okay. I like that.

Doug Lane: Like someone may have tried to solve a parameter sniffing problem on a frequently run query. Just one idea.

Brent Ozar: Some yo-yo could be running update stats continuously. You can capture a profiler trace—this is horrible—and capture compilation events, and that will tell you which SQL is doing it. How else would I troubleshoot that? Oh, no, you know I’m surprised Tara didn’t ask the question—it’s the question that we usually ask: “What’s the problem that you’re trying to solve? What led you to recompilations a second as being an issue that you want to track down?” That’s an interesting thing.

Tara Kizer: I wonder if it’s because they say that your recompilations should be 10 percent or less of your compilations. I wonder if that’s something that they’re monitoring and maybe that’s increased. Or maybe it’s a counter that they’re already tracking and the number has gone up.

Brent Ozar: Yeah, I don’t think any of us—have any of you guys run into a situation where that was the problem, recompilations a second?

Tara Kizer: That’s a Perfmon counter that I always pull up along with the compilations and I just take a quick peek at it and then I delete those two counters from the screen. A very, very quick peek.

Brent Ozar: Yes, yeah.

Tara Kizer: Of course, if recompilations are occurring more frequently, you probably have more CPU hits. So if you see that your CPU has risen, maybe it is something to look into.

Brent Ozar: If you put a gun to my head and said, “Make up a problem where recompilations a second is the big issue,” it would be something like a table that was continuously being truncated and then repopulated, truncated and repopulated, where the stats are changing fast enough to cause recompilations. Even then, I don’t think it’s going to be too many recompilations a second. So that’s a really good question.
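If you’d rather start with DMVs than a profiler trace, plan_generation_num in sys.dm_exec_query_stats is one hedged place to hunt for recompile-happy statements:

SELECT TOP (10)
       qs.plan_generation_num,   /* how many times this plan has been recompiled */
       qs.execution_count,
       SUBSTRING(st.[text],
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.[text])
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.plan_generation_num > 1
ORDER BY qs.plan_generation_num DESC;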

 

Brent Ozar: Tim asks a great question. This isn’t the other Tim, who also asked a great question; it’s a different Tim. “Is performance tuning approached differently for a transactional system versus an analytical system? When you approach an online system versus a reporting system, do you do troubleshooting for performance any differently?”

Tara Kizer: I don’t as far as reporting but I’ve never really supported a true OLAP environment.

Doug Lane: Yeah. There are so many options that don’t apply to a regular OLTP environment that do apply to OLAP, specifically talking about cubes, because there are all different kinds of models that you can do. You can do hybrid, you can do ROLAP or MOLAP. All different ways of choosing how much of that data you want to compile ahead of time. So the troubleshooting process would be very different if you’re talking about SSAS cubes, for example. If you’re talking about the source data, usually people don’t care about the underlying data because that ends up in some other final format, like a cube, so I mean I guess if I were to look at a database that was just straight—what would that be, it’s been a while—ROLAP I think, where you get it right out of the database. Then I suppose I would use some of the same troubleshooting steps, like looking at wait stats, looking at long-running queries, and so on and so forth. But if you’re talking about troubleshooting a cube, that’s a whole different bag from OLTP.

 

Brent Ozar: Adam asks—not that your question—I just said “Adam asks.” It’s not that your question is bad. I didn’t say it was a good question. It’s still a good question. I can’t say “good question” every time or else people won’t take me seriously. “How would you approach doing replication in AGs? If I have the publisher in an Availability Group, do I have to reconfigure replication again when I fail over to my DR replica?”

Tara Kizer: The distributor isn’t supported inside an AG. So if the DR environment has its own distributor, the answer is yes, you do. Hopefully you’ve scripted it out, and hopefully when you did a failover to DR it wasn’t an automatic event. Usually, because DR sites are so far apart, you can’t have automatic failovers occur. So if it was a manual DR failover, hopefully you were in a downtime window, all the applications were down, and you made sure that there was no data left behind, you know, that hadn’t been fully sent to the subscriber. If that’s the case, you just need to run your scripts to start up replication again right where you left off. You don’t have to reinitialize. This is something I’ve done quite a bit: failover to DR using replication, AGs, pretty much every technology.

 

Brent Ozar: And we have a bad question from Nate. I’m not going to lie, this question is bad, Nate. You shouldn’t feel bad, but it’s a bad question. He says, “Is a self-referencing linked server as slow as a real linked server? And is it generally a bad idea or not?” How’s that work guys?

Tara Kizer: What problem are you trying to solve? Why are you self-referencing itself?

Brent Ozar: I’ve seen people do this, and it wasn’t a good idea then either, but I’m just going to repeat what they did. So they had linked servers inside their code so that they could have the same code whenever they moved it from server to server. Then sometimes they would have reporting servers where they changed the linked server to point somewhere else. They thought that somehow doing linked server queries was going to abstract that away, like they could move some of the tables at some point to another server. So for those of you who are only listening to the audio version and not seeing our faces, none of our faces are happy at this point. We’re all sad. Sadly, it is as slow as a real linked server. SQL Server doesn’t realize it’s pointing at itself, so it still treats it like a remote server.

Brent Ozar: Let’s see here, what’s up next? All kinds of questions here. Nate gives us the context: he has a replicated view that points at two servers. Because this is kind of a multi-paragraph thing and he’s got a few things inside there, what you should do is post this on dba.stackexchange.com. Post as much detail as you can and talk about what the problem is you’re trying to solve. Generally when we talk about doing reporting servers, we’d rather scale up than scale out to multiple servers. You kind of see why here. Managing multiples is kind of a pain in the rear.

Tara Kizer: I think that the answer should be just don’t use linked servers though. If you need to be able to contact another server, do that through the application, not within SQL Server. Programming languages can handle this, joining two different datasets together.

Brent Ozar: Yeah. Conor Cunningham has a great talk from SQLBits where he talks about the difficulties of distributed queries. It’s pretty bad.

 

Brent Ozar: Nate also asks—Nate redeems himself by asking another question. Nate says, “Finally, a backup software question. What do you guys like/prefer in terms of backup software? There’s a bunch of different versions out there. Whose tools should I buy? Are they all compatible with Ola scripts?” I think Ola’s scripts work with everything at this point, like Idera, Redgate, and LiteSpeed. In terms of who we prefer, we’re kind of vendor agnostic since we don’t have to manage anybody’s backup software. But just in terms of experience, we’ll go through and ask. Richie, have you ever used third party backup software, and how was your experience, and which ones were they?

Richie Rump: I’ve never used backup software.

Brent Ozar: All right, Richie doesn’t use backup. He just puts things in GitHub and lets the rest of the world backup his source code.

Richie Rump: I let the DBAs handle that.

Brent Ozar: Tara, how about you?

Tara Kizer: I have a long answer. I’ve been using SQL Server a long time, and backup compression didn’t exist in older versions, so yes, we started off with Quest LiteSpeed. It worked really, really well, but it was fairly expensive. We wanted to get Redgate’s SQL Toolbelt, and they gave us a deal, so we were able to completely replace all of our LiteSpeed licenses (which we had already paid for; it’s not like we got a refund) and put Redgate out there instead. The reason we did that is because all new incoming servers were going to use the Redgate software, so it made sense to have one tool rather than multiples. But we did a ton of testing on both of them, and they pretty much produced the same compression ratio, the same file size, the same restore time. Absolutely everything was so close; the difference was so minor. One was just cheaper than the other.

Brent Ozar: Yeah, everything is negotiable. Back like ten years ago, there might have been differences, today, not so much. Doug, how about you? Have you used any third party backup tools?

Doug Lane: I yield my time to the consultant from California.

Brent Ozar: Nice. I’ve used all of them as well. They’re all good. Anything is better than trying to roll your own.

Tara Kizer: And, yes, I mean they definitely are compatible with Ola, especially the two that he’s listed. I know you said that they probably are, these two specifically are.

Brent Ozar: Yeah, absolutely. Well that wraps up our time for today. Thanks everybody for coming and hanging out with us. We will see you guys next week. Adios, everybody.

Tara Kizer: Bye.

Doug Lane: Bye-bye.



Interview Question Follow-up: How do you respond?


Normally I’d update the original post

But I wanted to add a bit more than was appropriate. For my interview question, I asked how you’d respond to a developer showing you progress they’d made on tuning a sometimes slow stored procedure.

While a lot of you gave technically correct answers about the recompile hint, and the filtered index, and the table variable, no one really addressed the fact that I was asking you to respond to a person that you work with about a problem on a system that you share.

To be honest, if I asked this question in an interview and someone started reading me the riot act about things that were wrong with the example, I’d be really concerned that they’re unable to work as part of a team, and that they’re not really a good fit for a lead or mentoring type role. I’m not saying you’re not technically proficient, just that I don’t want to hire the Don’t Bother Asking style of DBA. I’ve been guilty of this myself at times, and I really regret it.

This is true of, and a problem for, us as a technical community. Very few people have learned everything the hard way. Most SQL Server users are community and sharing oriented: blogging, presenting, writing free scripts, and so on. And that rules. If you’re interested in something but don’t have direct experience with it, you can usually find endless information about it, or ask for help on forums like dba.se, SQL Server Central, and so forth.

We’re really lucky to have way-smart people working on the same product and sharing their insights so that we don’t always have to struggle and find 10,000 ways to not make a light bulb. Or deal with XML. Whatever. Who else would have this much of an answer about making a function schemabound? Not many! Even fewer would ever find this out on their own. You would likely do what I do, and recoil in horror at the sight of a scalar valued function. Pavlov was right, and he never invented a light bulb.

Let’s look at this together

What I really wanted to get was some sense that you are able to talk to people, not just recite facts in an endless loop. When someone junior to you shows some promise, and excitement, but perhaps not the depth of knowledge you have, make some time for them. It doesn’t have to be the second an email comes through. Let’s not pretend that every second of being a DBA is a white-knuckled, F5 bashing emergency. You can spare 30 minutes to sit down and talk through that little bit of code instead of side-eyeing your monitoring dashboard.

That’s far more powerful than just telling them everything that’s wrong with what they’ve spent a chunk of their time working on.

Acknowledging effort is powerful

“Hey! You’ve really been cranking on this!” or “Cool, those are some interesting choices.” or at least leading with some positive words about their attempt to make things better is a far more appropriate way to start a conversation with a co-worker than pointing out issues like you had to parse, bind, optimize, and execute the thing yourself.

They may not be right about everything, or maybe anything, but if you just shut them down, they’ll start shutting you out. That does not make for good morale, and they won’t be the only people who notice.

Make an effort

When you spend most of your time in front of a computer, you start to forget that there are actual people on the other end. If they’re coming to you for help, guidance, or even just to show you something, it’s a sign of respect. Don’t waste it by being the Typical Tech Person.

Thanks for reading!

Angie says:  As the only team member to most recently be a Junior DBA, I’d like to point out how much I appreciated it when my mentors came to MY desk to watch me try and do something, or when they locked their computer when I was at their desk with questions so it was clear that I had their full attention.  It’s the little things that make the most impact sometimes!


First Responder Kit Updated, and sp_AskBrent is now sp_BlitzFirst


We’ve released the first fully open source version of our SQL Server First Responder Kit: sp_Blitz, sp_BlitzCache, sp_BlitzIndex, sp_BlitzRS, sp_BlitzTrace, and the newest member: the newly renamed sp_BlitzFirst.

I wanted to rename sp_AskBrent because, as an open source project, it’s going to have more than just my own answers as to why your SQL Server is slow. So I asked for your naming suggestions, and we got over 300! Here are my favorites, in no particular order:

  • sp_BlitzNow – Tara Kizer
  • sp_BlitzTriage – Michael J. Swart
  • sp_BlitzPerformance – Andy Mellon
  • sp_BlitzGauge – Daryl
  • sp_BlitzDiagnose – Nick Molyneux
  • sp_BlitzyMcBlitzFace – Eric
  • sp_BlitzStatus – Paul Goldstraw
  • sp_AskCommunity – Michel Zehnder
  • sp_Blitz911 – David Hirsch
  • sp_BlitzPulse – Hondo Henriques
  • sp_BlitzFirst – Joshua Birger

So many of these were just fantastic, and I’m not gonna lie: I was this close to picking sp_BlitzyMcBlitzFace. We were joking in the company chat room that it would be hilarious to say to a client, “Now we’re going to find out why the server is slow by running sp_BlitzyMcBlitzFace.” Eric, the first suggester, wins an Everything Bundle just because.

I went with Joshua Birger’s sp_BlitzFirst because as a trainer, it instantly helps me tell the story of how the scripts work. It’s easy for me to stand up and say, “Run sp_BlitzFirst…first.” I love it, and Joshua also wins an Everything Bundle.

Go download the latest version, check out the changes, and enjoy. For questions about how the scripts work, where to chat with us, or how you can contribute, check out the readme on the project’s home page.


Availability Group Direct Seeding: How to fix a database that won’t sync


This post covers two scenarios

You either created a database and the sync failed for some reason, or a database stopped syncing. Our setup focuses on one where sync breaks immediately because, whatever, it’s my blog post. In order to do that, I set up a script to create a bunch of databases, hoping that one of them would fail. Lucky me, two did! So let’s fix them.
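The creation script was roughly this shape (a sketch, not the exact script I ran; the names just match the Crap-numbered databases in the screenshots, and the backup to NUL is a lab-only trick):

DECLARE @i int = 900, @db sysname, @sql nvarchar(max);
WHILE @i < 950
BEGIN
    SET @db = N'Crap' + CAST(@i AS nvarchar(10));
    SET @sql = N'CREATE DATABASE ' + QUOTENAME(@db) + N';
        ALTER DATABASE ' + QUOTENAME(@db) + N' SET RECOVERY FULL;
        BACKUP DATABASE ' + QUOTENAME(@db) + N' TO DISK = ''NUL''; /* lab only! */
        ALTER AVAILABILITY GROUP SQLAG01 ADD DATABASE ' + QUOTENAME(@db) + N';';
    EXEC sys.sp_executesql @sql;
    SET @i += 1;
END;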

You wimp.

You have to be especially vigilant during initial seeding

Automatic failover can’t happen while databases sync up. The AG dashboard reports an unhealthy state, so failover is manual. The good news is that in the limited test scenarios I checked out, Direct Seeding to Replicas will pick back up when the Primary is back online, but if anything really bad happens to your Primary, that may not be the warmest or fuzziest news.

Here’s our database stuck in a restoring state.

Poor Crap903

Now let’s look in the error log. Maybe we’ll have something good there. On the Primary…

Unknown,The mirror database "Crap903" has insufficient transaction log data to preserve the log backup chain of the principal database.  This may happen if a log backup from the principal database has not been taken or has not been restored on the mirror database.

Okie dokie. Good to know. On the Replica, you’ll probably see something like this…

Automatic seeding of availability database 'Crap903' in availability group 'SQLAG01' failed with an unrecoverable error. Correct the problem then issue an ALTER AVAILABILITY GROUP command to set SEEDING_MODE = AUTOMATIC on the replica to restart seeding.

Oh, correct the problem. You hear that, guys? Correct the problem.

IF ONLY I’D THOUGHT OF CORRECTING THE PROBLEM.

Sheesh
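For the record, the command the error message is asking for looks something like this, run on the Primary (the replica name is from my lab, so adjust accordingly):

ALTER AVAILABILITY GROUP SQLAG01
MODIFY REPLICA ON N'SQLVM02\AGNODE2'
WITH (SEEDING_MODE = AUTOMATIC);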

So what do we do? We can check out the AG dashboard, see a bunch of errors, and then focus in on them.

Sit, DBA, sit. Good DBA.

Alright, let’s see what we can do! We can run a couple magical DBA commands and see what happens.

ALTER DATABASE [Crap903] SET HADR RESUME

ALTER DATABASE [Crap903] SET HADR AVAILABILITY GROUP = SQLAG01;

Oh come on.

THE SALES GUY SAID THIS WOULD BE SO EASY WTF SALES GUY

The two errors were:
Msg 35242, Level 16, State 16, Line 1
Cannot complete this ALTER DATABASE SET HADR operation on database ‘Crap903’.
The database is not joined to an availability group. After the database has joined the availability group, retry the command.

And then

Msg 1412, Level 16, State 211, Line 1
The remote copy of database “Crap903” has not been rolled forward to a point in time that is encompassed in the local copy of the database log.

Interesting! What the heck does that mean? If Brent would give me his number, I’d call and ask. I don’t understand why he won’t give me his number. Well, let’s just kick this back off. We kind of expected that not to work because of the errors we saw in the log before, but it’s worth a shot to avoid taking additional steps.

ALTER AVAILABILITY GROUP [SQLAG01] REMOVE DATABASE [Crap903]
GO

ALTER AVAILABILITY GROUP [SQLAG01] ADD DATABASE [Crap903]
GO

Right? Wrong. Digging into our DMVs and Extended Events, they’re telling us that a database with that name already exists. What’s really lousy here is that this error doesn’t appear ANYWHERE ELSE. It’s not in the dashboard, it’s not in regular, documented DMVs, nor in the XE health session. It’s only in the undocumented stuff. If you’re going to use this feature, be prepared to do a lot of detective work. Be prepared to cry.

Crud.

Double crud

What we have to do is go back, remove the database from the Availability Group again, then drop it from our other Replicas. We can’t just restore over what’s already there. That would break all sorts of laws of physics and whatever else makes front squats harder to do than back squats.

Since our database is in a restoring state, it’s a few steps to recover it, set it to single user so no one does anything dumber than our AG has done, and then drop it.
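Spelled out, those steps look like this:

/* bring the stuck database out of its restoring state */
RESTORE DATABASE [Crap903] WITH RECOVERY;

/* kick everyone out, then drop it */
ALTER DATABASE [Crap903] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [Crap903];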

Drop it like it’s crap.

When we re-add the database to our Availability Group, it should start syncing properly. Lucky for us, it did!

I’m not highly available and I’m so scared.

There’s no Tinder for databases.

I’m highly available. Call me.

New features are hard

With direct seeding, you have to be extra careful about named instances and default database creation paths. If you used named instances with default database paths to Program Files, or different drive letters and folder names, this isn’t going to work. You don’t have an option to change those things. SQL expects everything to be there in the same place across all of your Replicas. I learned that the annoying way. Several times. Troubleshooting this was weird because I still can’t track down a root cause as to why anything failed in the first place. For the record, I created 50 databases, and two of them didn’t work for some reason.

Correct the problem. Just correct the problem.

Thanks for reading!


Availability Group Direct Seeding: Extended Events and DMVs


As of this writing, this is all undocumented

I’m super interested in this feature, so that won’t deter me too much. There have been a number of questions since Availability Groups became a thing about how to automate adding new databases. All of the solutions were kind of awkward scripts to backup, restore, join, blah blah blah. This feature aims to make that a thing of the past.

There’s also not a ton of information about how this works, the option hasn’t made it to the GUI, and there may still be some kinks to work out. Some interesting information I’ve come across has been limited to this SAP on SQL blog post, and a Connect item by the Smartest Guy At SanDisk, Jimmy May.

The SAP on SQL Server blog post says that this feature uses the same method as Azure databases to create replicas (opening a direct data link), while Jimmy’s Connect item points to it being a backup and restore behind the scenes. The Extended Events sessions point to it being a backup and restore, so let’s look at those first.

Bring out your XML!

We’re going to need two sessions, because there are two sets of collectors, and it doesn’t make sense to lump them into one XE session. If you look in the GUI, there’s a new category called dbseed, and of course, everything is in the super cool kid debug channel.

New Extended Event Smell

Quick setup scripts are below.

CREATE EVENT SESSION [DirectSeed] ON SERVER 
ADD EVENT sqlserver.hadr_ar_controller_debug(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_automatic_seeding_failure(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_automatic_seeding_start(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_automatic_seeding_state_transition(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_automatic_seeding_success(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_automatic_seeding_timeout(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack))
ADD TARGET package0.event_file(SET filename=N'C:\XE\DirectSeed.xel',max_rollover_files=(10))
GO


CREATE EVENT SESSION [PhysicalSeed] ON SERVER 
ADD EVENT sqlserver.hadr_physical_seeding_backup_state_change(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_physical_seeding_failure(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_physical_seeding_forwarder_state_change(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_physical_seeding_forwarder_target_state_change(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_physical_seeding_progress(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_physical_seeding_restore_state_change(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_physical_seeding_schedule_long_task_failure(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack)),
ADD EVENT sqlserver.hadr_physical_seeding_submit_callback(
    ACTION(sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack))
ADD TARGET package0.event_file(SET filename=N'C:\XE\PhysicalSeed.xel',max_rollover_files=(10))
GO

ALTER EVENT SESSION [DirectSeed] ON SERVER STATE = START
ALTER EVENT SESSION [PhysicalSeed] ON SERVER STATE = START

Since this is so new

I haven’t quite narrowed down which are important and which yield pertinent information yet. Right now I’m grabbing everything. In a prelude to DBA days, I’m adding the StackOverflow database. With some session data flowing in, let’s figure out what we’re looking at. XML shredding fun is up next.

To get information out of the Automatic Seeding session…

IF OBJECT_ID('tempdb..#DirectSeed') IS NOT NULL
   DROP TABLE [#DirectSeed];

CREATE TABLE [#DirectSeed]
       (
         [ID] INT IDENTITY(1, 1)
                  NOT NULL ,
         [EventXML] XML ,
         CONSTRAINT [PK_DirectSeed] PRIMARY KEY CLUSTERED ( [ID] )
       );

INSERT  [#DirectSeed]
        ( [EventXML] )
SELECT  CONVERT(XML, [event_data]) AS [EventXML]
FROM    [sys].[fn_xe_file_target_read_file]('C:\XE\DirectSeed*.xel', NULL, NULL, NULL)

CREATE PRIMARY XML INDEX [DirectSeedXML] ON [#DirectSeed]([EventXML]);

CREATE XML INDEX [DirectSeedXMLPath] ON [#DirectSeed]([EventXML])
USING XML INDEX [DirectSeedXML] FOR VALUE;

SELECT
[ds].[EventXML].[value]('(/event/@name)[1]', 'VARCHAR(MAX)') AS [event_name],			
[ds].[EventXML].[value]('(/event/@timestamp)[1]', 'DATETIME2(7)') AS [event_time],
[ds].[EventXML].[value]('(/event/data[@name="debug_message"]/value)[1]', 'VARCHAR(8000)') AS [debug_message],
/*hadr_automatic_seeding_state_transition*/
[ds].[EventXML].[value]('(/event/data[@name="previous_state"]/value)[1]', 'VARCHAR(8000)') AS [previous_state],
[ds].[EventXML].[value]('(/event/data[@name="current_state"]/value)[1]', 'VARCHAR(8000)') AS [current_state],
/*hadr_automatic_seeding_start*/
[ds].[EventXML].[value]('(/event/data[@name="operation_attempt_number"]/value)[1]', 'BIGINT') as [operation_attempt_number],
[ds].[EventXML].[value]('(/event/data[@name="ag_id"]/value)[1]', 'VARCHAR(8000)') AS [ag_id],
[ds].[EventXML].[value]('(/event/data[@name="ag_db_id"]/value)[1]', 'VARCHAR(8000)') AS [ag_db_id],
[ds].[EventXML].[value]('(/event/data[@name="ag_remote_replica_id"]/value)[1]', 'VARCHAR(8000)') AS [ag_remote_replica_id],
/*hadr_automatic_seeding_success*/
[ds].[EventXML].[value]('(/event/data[@name="required_seeding"]/value)[1]', 'VARCHAR(8000)') AS [required_seeding],
/*hadr_automatic_seeding_timeout*/
[ds].[EventXML].[value]('(/event/data[@name="timeout_ms"]/value)[1]', 'BIGINT') as [timeout_ms],
/*hadr_automatic_seeding_failure*/
[ds].[EventXML].[value]('(/event/data[@name="failure_state"]/value)[1]', 'BIGINT') as [failure_state],
[ds].[EventXML].[value]('(/event/data[@name="failure_state_desc"]/value)[1]', 'VARCHAR(8000)') AS [failure_state_desc]
FROM [#DirectSeed] AS [ds]
ORDER BY [ds].[EventXML].[value]('(/event/@timestamp)[1]', 'DATETIME2(7)') DESC

Every time I have to work with XML I want to go to culinary school and become a tattooed cliche on Chopped. Upside? Brent might hire me to be his personal chef. Downside? I’d only be cooking for Ernie.

Here’s a sample of what we get back

I’ve moved the ‘less interesting’ columns off to the right.

Frenemy.

These are my first clues that Jimmy is right about it being a backup and restore. One of the columns says “limit concurrent backups,” and we’re also sending file lists around. Particularly interesting is the debug column from the hadr_ar_controller_debug event. Here’s text pasted from it.

[HADR] [Secondary] operation on replicas [58BCC44A-12A6-449B-BF33-FAAF9D1A46DD]->[F5302334-B620-4FE2-83A2-399F55AA40EF], database [StackOverflow], remote endpoint [TCP://SQLVM01.darling.com:5022], source operation [55782AB4-5307-47A2-A0D9-3BB29F130F3C]: Transitioning from [LIMIT_CONCURRENT_BACKUPS] to [SEEDING].

[HADR] [Secondary] operation on replicas [58BCC44A-12A6-449B-BF33-FAAF9D1A46DD]->[F5302334-B620-4FE2-83A2-399F55AA40EF], database [StackOverflow], remote endpoint [TCP://SQLVM01.darling.com:5022], source operation [55782AB4-5307-47A2-A0D9-3BB29F130F3C]: Starting streaming restore, DB size [-461504512] bytes, [2] logical files.

[HADR] [Secondary] operation on replicas [58BCC44A-12A6-449B-BF33-FAAF9D1A46DD]->[F5302334-B620-4FE2-83A2-399F55AA40EF], database [StackOverflow], remote endpoint [TCP://SQLVM01.darling.com:5022], source operation [55782AB4-5307-47A2-A0D9-3BB29F130F3C]: 
Database file #[0]: LogicalName: [StackOverflow] FileId: [1] FileTypeId: [0]
Database file #[1]: LogicalName: [StackOverflow_log] FileId: [2] FileTypeId: [1]

[HADR] [Secondary] operation on replicas [58BCC44A-12A6-449B-BF33-FAAF9D1A46DD]->[F5302334-B620-4FE2-83A2-399F55AA40EF], database [StackOverflow], remote endpoint [TCP://SQLVM01.darling.com:5022], source operation [55782AB4-5307-47A2-A0D9-3BB29F130F3C]: RESTORE T-SQL String for VDI Client: [RESTORE DATABASE [StackOverflow] FROM VIRTUAL_DEVICE='{AA4C5800-7192-4B77-863B-426246C0CC27}' WITH NORECOVERY, CHECKSUM, REPLACE, BUFFERCOUNT=16, MAXTRANSFERSIZE=2097152, MOVE 'StackOverflow' TO 'E:\SO\StackOverflow.mdf', MOVE 'StackOverflow_log' TO 'E:\SO\StackOverflow_log.ldf']

Hey look, a restore

While I didn’t see an explicit backup command to match, we did pick up data like this:

[HADR] [Primary] operation on replicas [58BCC44A-12A6-449B-BF33-FAAF9D1A46DD]->[571F3967-FB40-4187-BF1E-36A88458C13A], database [StackOverflow], remote endpoint [TCP://SQLVM03.darling.com:5022], source operation [AFB86269-8284-4DB1-95F9-0128EB710825]: Starting streaming backup, DB size [-461504512] bytes, [2] logical files.

A streaming backup! How cute. There’s more evidence in the Physical Seeding session, so let’s look there. Prerequisite XML horrors to follow.

IF OBJECT_ID('tempdb..#PhysicalSeed') IS NOT NULL
   DROP TABLE [#PhysicalSeed];

CREATE TABLE [#PhysicalSeed]
       (
         [ID] INT IDENTITY(1, 1)
                  NOT NULL ,
         [EventXML] XML ,
         CONSTRAINT [PK_PhysicalSeed] PRIMARY KEY CLUSTERED ( [ID] )
       );

INSERT  [#PhysicalSeed]
        ( [EventXML] )
SELECT  CONVERT(XML, [event_data]) AS [EventXML]
FROM    [sys].[fn_xe_file_target_read_file]('C:\XE\PhysicalSeed*.xel', NULL, NULL, NULL)

CREATE PRIMARY XML INDEX [PhysicalSeedXML] ON [#PhysicalSeed]([EventXML]);

CREATE XML INDEX [PhysicalSeedXMLPath] ON [#PhysicalSeed]([EventXML])
USING XML INDEX [PhysicalSeedXML] FOR VALUE;

SELECT
[ds].[EventXML].[value]('(/event/@name)[1]', 'VARCHAR(MAX)') AS [event_name],			
[ds].[EventXML].[value]('(/event/@timestamp)[1]', 'DATETIME2(7)') AS [event_time],
[ds].[EventXML].[value]('(/event/data[@name="old_state"]/text)[1]', 'VARCHAR(8000)') as [old_state],
[ds].[EventXML].[value]('(/event/data[@name="new_state"]/text)[1]', 'VARCHAR(8000)') as [new_state],
[ds].[EventXML].[value]('(/event/data[@name="seeding_start_time"]/value)[1]', 'DATETIME2(7)') as [seeding_start_time],
[ds].[EventXML].[value]('(/event/data[@name="seeding_end_time"]/value)[1]', 'DATETIME2(7)') as [seeding_end_time],
[ds].[EventXML].[value]('(/event/data[@name="estimated_completion_time"]/value)[1]', 'DATETIME2(7)') as [estimated_completion_time],
[ds].[EventXML].[value]('(/event/data[@name="transferred_size_bytes"]/value)[1]', 'BIGINT') / (1024. * 1024.) as [transferred_size_mb],
[ds].[EventXML].[value]('(/event/data[@name="transfer_rate_bytes_per_second"]/value)[1]', 'BIGINT') / (1024. * 1024.) as [transfer_rate_mb_per_second],
[ds].[EventXML].[value]('(/event/data[@name="database_size_bytes"]/value)[1]', 'BIGINT') / (1024. * 1024.) as [database_size_mb],
[ds].[EventXML].[value]('(/event/data[@name="total_disk_io_wait_time_ms"]/value)[1]', 'BIGINT') as [total_disk_io_wait_time_ms],
[ds].[EventXML].[value]('(/event/data[@name="total_network_wait_time_ms"]/value)[1]', 'BIGINT') as [total_network_wait_time_ms],
[ds].[EventXML].[value]('(/event/data[@name="is_compression_enabled"]/value)[1]', 'VARCHAR(8000)') as [is_compression_enabled],
[ds].[EventXML].[value]('(/event/data[@name="failure_code"]/value)[1]', 'BIGINT') as [failure_code]
FROM [#PhysicalSeed] AS [ds]
ORDER BY [ds].[EventXML].[value]('(/event/@timestamp)[1]', 'DATETIME2(7)') DESC

And a sampling of data…

What an odd estimated completion date.

The old state and new state columns also point to backup and restore operations. I assume the completion date points to 1600 BECAUSE THIS IS ABSOLUTE WITCHCRAFT.

 

Ooh! Metrics!

Ignore the smaller sizes at the bottom. I’ve clearly been doing this with a few different databases. The disk IO and network metrics are pretty awesome. Now I have to backtrack a little bit…

The SAP on SQL Server blog post talks about Trace Flag 9567 being used to enable compression. It says that it only has to be enabled on the Primary Replica to work, but even with it turned on on all three of my Replicas, the compression column says false. Perhaps, like parallel redo logs, it hasn’t been implemented yet. I tried both enabling it with DBCC TRACEON, and using it as a startup parameter.
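For the record, the DBCC route looks like this:

DBCC TRACEON (9567, -1);   /* enable the trace flag globally */
DBCC TRACESTATUS (9567);   /* confirm it took */

Which brings us to the next set of collectors…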

DMVs

These are also undocumented, and that kind of sucks. There are two that ‘match’ the XE sessions we have.

[sys].[dm_hadr_physical_seeding_stats]
[sys].[dm_hadr_automatic_seeding]

These can be joined around to other views to get back some alright information. I used these two queries. If you have anything better, feel free to let me know.

SELECT 
ag.name as ag_name,
adc.database_name,
r.replica_server_name,
start_time, 
completion_time, 
current_state, 
failure_state_desc, 
number_of_attempts, 
failure_condition_level
FROM sys.availability_groups ag
JOIN sys.availability_replicas r ON ag.group_id = r.group_id
JOIN sys.availability_databases_cluster adc on ag.group_id=adc.group_id
JOIN sys.dm_hadr_automatic_seeding AS dhas
ON dhas.ag_id = ag.group_id
LEFT JOIN sys.dm_hadr_physical_seeding_stats AS dhpss
ON adc.database_name = dhpss.local_database_name
WHERE database_name = 'StackOverflow'
ORDER BY completion_time DESC

SELECT
database_name,
transfer_rate_bytes_per_second,
transferred_size_bytes,
database_size_bytes,
start_time_utc,
end_time_utc,
estimate_time_complete_utc,
total_disk_io_wait_time_ms,
total_network_wait_time_ms,
is_compression_enabled
FROM sys.availability_groups ag
JOIN sys.availability_replicas r ON ag.group_id = r.group_id
JOIN sys.availability_databases_cluster adc on ag.group_id=adc.group_id
JOIN sys.dm_hadr_automatic_seeding AS dhas
ON dhas.ag_id = ag.group_id
LEFT JOIN sys.dm_hadr_physical_seeding_stats AS dhpss
ON adc.database_name = dhpss.local_database_name
WHERE database_name = 'StackOverflow'
ORDER BY completion_time DESC

But we get sort of different information back in a couple places. This is part of what makes me wonder how fully formed this feature baby is. The completion estimate is in this century, heck, even this YEAR. The compression column is now a 0. Just a heads up, when I DIDN’T have Trace Flag 9567 on, that column was NULL. Turning it on changed it to 0. Heh. So uh, glad that’s… there.

I smell like tequila.

Oh look, it’s the end

I know I said it before, but I love this new feature. There’s apparently still some stuff to work out, but it’s very promising so far. I’ll post updates as I get more information, but this is about the limit of what I can get without some official documentation.

Thanks for reading!


Availability Group Direct Seeding: TDE’s Frenemy


From the Mailbag

In another post I did on Direct Seeding, reader Bryan Aubuchon asked if it plays nicely with TDE. I’ll be honest with you, TDE is one of the last things I test interoperability with. It’s annoying that it breaks Instant File Initialization, and mucks up backup compression. But I totally get the need for it, so I do eventually get to it.

The TL;DR here

Is that if you encrypt a database that’s already taking part in a Direct Seeding relationship, everything is fine. If you already have an encrypted database that you want to add to your Availability Group, Direct Seeding has a tough time with it.

I don’t think this is an outright attempt to push people to Always Encrypted, because it has a lot of limitations.

Let’s walk through this

Because I love reader sanity checks, here we go. Microsoft tells you how to add a database encrypted with TDE to an existing Availability Group here.

wordswordswordsblahblahblah

That all sounds good! So let’s follow directions. We need a database! We also need a password, and a certificate. Alright, we can do this. We’re competent adults.

/*Create database on a path acceptable to all Replicas*/
CREATE DATABASE EncryptedCrap
 ON PRIMARY 
( NAME = 'EncryptedCrap', FILENAME = 'E:\Crap\EncryptedCrap.mdf')
 LOG ON 
( NAME = 'EncryptedCrap_log', FILENAME = 'E:\Crap\EncryptedCrap_log.ldf');
 
 /*Create key*/
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'GreatP0stBrent!' 
GO 

/*Create cert*/
CREATE CERTIFICATE EncryptedCrapCert 
WITH SUBJECT = 'If you can read this I probably got fired.'

Alright, cool. We did that. Now we have to get all up in our database and scramble its bits.

/*Get into database*/
USE EncryptedCrap  
GO  

/*Create database encryption key*/
CREATE DATABASE ENCRYPTION KEY  
WITH ALGORITHM = AES_128  
ENCRYPTION BY SERVER CERTIFICATE EncryptedCrapCert 
GO 

/*Turn encryption on*/
ALTER DATABASE EncryptedCrap SET ENCRYPTION ON

SQLCMD Appreciation Header

Few things in life will make you appreciate SQLCMD mode like working with Availability Groups. You can keep your PowerShell. $.hove-it; I’m with SQLCMD.

Stick with me through the next part. You may have to do this someday.

/*Back into master*/
USE master  
GO 

/*Backup cert to fileshare*/ 
BACKUP CERTIFICATE EncryptedCrapCert   
TO FILE = '\\Sqldc01\sqlcl1-fsw\NothingImportant\EncryptedCrap.cer'  
WITH PRIVATE KEY (FILE = '\\Sqldc01\sqlcl1-fsw\NothingImportant\EncryptedCrap.pvk' ,  
ENCRYPTION BY PASSWORD = 'GreatP0stBrent!' )  
GO

:CONNECT SQLVM02\AGNODE2

USE master  
GO  

/*Set up password*/
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'GreatP0stBrent!' 
GO 

/*Restore cert from share*/  
CREATE CERTIFICATE EncryptedCrapCert  
FROM FILE = '\\Sqldc01\sqlcl1-fsw\NothingImportant\EncryptedCrap.cer'   
WITH PRIVATE KEY (FILE = '\\Sqldc01\sqlcl1-fsw\NothingImportant\EncryptedCrap.pvk',   
DECRYPTION BY PASSWORD =  'GreatP0stBrent!');
GO

:CONNECT SQLVM03\AGNODE3

USE master  
GO  

/*Set up password*/
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'GreatP0stBrent!' 
GO 

/*Restore cert from share*/  
CREATE CERTIFICATE EncryptedCrapCert  
FROM FILE = '\\Sqldc01\sqlcl1-fsw\NothingImportant\EncryptedCrap.cer'   
WITH PRIVATE KEY (FILE = '\\Sqldc01\sqlcl1-fsw\NothingImportant\EncryptedCrap.pvk',   
DECRYPTION BY PASSWORD =  'GreatP0stBrent!');
GO

:CONNECT SQLVM01\AGNODE1

USE master
GO 

ALTER AVAILABILITY GROUP SQLAG01 ADD DATABASE EncryptedCrap
GO

What did we do?

Exactly what we did. We backed up our certificate and its private key to a network share, and then on two replicas we created database master keys and created certificates from the backup of the primary’s certificate. We did this in one SSMS window. Magical. Then we added our encrypted database to the Availability Group.

If this database weren’t encrypted, everything would probably go just fine. I say probably because, you know, computers are just the worst.

But because it is encrypted, we get some errors. On our Primary Replica, we get normal startup messages, and then messages about things failing with a transient error. Not sure what a transient error is. It forgot to tie its shoelaces before running to jump on that freight car.

Log du jour

On our Replicas, we get a different set of messages. Backup failures. Database doesn’t exist. More transient errors. This time you left an open can of pork beans by the barrel fire.

I failed college algebra, again.

Over in our Extended Events session that tracks automatic seeding, we get an error code! Searching for it doesn’t really turn up much. New features. Good luck with them.

Ungoogleable errors.

One bright, shiny star of error message-y goodness shows up in our Physical Seeding Extended Event session. Look at all those potentially helpful failure codes! An individual could get a lot of useful information from those.

Attempting Helpful.

If only you weren’t being laughed at by the Gods of HA/DR. Some of the physical_seeding Extended Events have values here, but none of the automatic seeding ones do.

Feature Complete.

As of now

I don’t have a workaround for this. The alternatives are to decrypt and then re-encrypt your database after you add it, or add it the old-fashioned way. Maybe something will change in the future, but as of now, these don’t appear to be compatible.
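If you go the decrypt-first route, here’s a minimal sketch; wait for decryption to actually finish before adding the database, and note you may also need to drop the database encryption key (I haven’t tested every permutation):

ALTER DATABASE EncryptedCrap SET ENCRYPTION OFF;

/* wait until encryption_state = 1 (unencrypted); 5 means decryption in progress */
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;

/* add the database to the Availability Group, then turn TDE back on */
ALTER DATABASE EncryptedCrap SET ENCRYPTION ON;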

I’ve opened a Connect Item about this. I’d appreciate votes of the upward variety, if you feel so inclined.

Thanks for reading!

