
First Responder Kit Release: Precon Precogs


This release comes to you from a hotel room in Chicago: The land of fiscal insolvency and one shooting per hour.

It’s pretty nice, otherwise.

This release is to get the important pre-precon stuff in. As much as I’d like to push all the recent contributions through, between travel, speaking, and uh… what do the rock and roll stars call it? Exhaustion? We just don’t have the bandwidth to test everything this time around. I promise they’ll make it into the next release, when I have sleep and dual monitors and brain cells again.

You can download the updated FirstResponderKit.zip here.

sp_Blitz Improvements

Nothing this time. It took me 30 minutes to verify this, because it’s so weird.

sp_BlitzCache Improvements

  • #1099 We try to make things easy for you. That’s why we make tools like Paste The Plan, and well, sp_BlitzCache. It’s also why we answer questions for free on dba.stackexchange.com, along with a whole bunch of other smart folks. To make sure you’re aware of this stuff, I added a line to the rolled up warnings on how to get more help with a plan you’re stuck on.
  • #1140 A DEBUG MODE UNLIKE NO OTHER! Okay, just like every other. This’ll print out dynamic SQL, and run selects on all the temp tables used in the proc. As part of this process, I moved (nearly) all the SELECT INTO code to INSERT SELECT, complete with drop/create statements on the temp tables. (There’s a usage sketch after this list.)
  • #1141 For the first time, I think ever, we’ve removed something. A while back when I was merging stuff from our old private GitHub repo to our new public GitHub repo, I thought these looked like a good idea. They never once fired, and on servers with weird plan cache stuff going on, they sometimes ran for quite a while. Out they go.
  • #1146 We asked, and we listened. The query plan column is now moved way closer to the left in the result set. Now you don’t have to scroll 17 screens over to get there.
  • #1159 Refined the implicit conversion analysis queries. They now work much better. V1 of everything stinks.
  • #1195 On the line where we give you percentages of plans created in different time spans, we now give you a count of plans in the cache.
  • #1143, #1166, #1167 All team up to add some new functionality to our scripts as a whole. These changes make it possible for us to add sp_BlitzCache output tables to sp_BlitzFirst analysis.
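
Not part of the official notes, but if you want to try the new debug mode from #1140 yourself, here’s a minimal sketch – assuming the switch shipped as the @Debug parameter, with @Top included only to keep the result set small:

/* Hypothetical usage sketch of sp_BlitzCache's debug mode */
EXEC dbo.sp_BlitzCache
    @Top = 10,    /* keep the result set small */
    @Debug = 1;   /* print the dynamic SQL and dump the temp tables used along the way */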

sp_BlitzFirst Improvements

  • #1106 Those dang time zones, man. Just all the time with the time zones. Zones. Time. Time. Zones. Who can keep track? WE CAN! Here’s proof.
  • #1154 Brent did this for Brenty reasons. He cares deeply about the Delta Views. When they’re more than four hours apart, data can look more like Southwestern Views: cheap, unenthusiastic, sober, domestic.
  • #1175 Okay, so two things got removed. I don’t know what world this is anymore. You can no longer ask a question. No, no, now you can log messages. It has something to do with PowerBI, which means I need to take a nap.
  • #1177 We really do try to make things understandable by human beings. Like, normal human beings. Normal human beings don’t understand Ring Buffers, but CPU percentages are easy. Hey, look, we can’t all be Jonathan Kehayias. If we could, we could keep talking about Ring Buffers.
  • #1200 AGAIN WITH THE TIME ZONES! And again, we prevail like mighty warriors… Okay, so more like a bunch of middle aged doughballs with God awful posture. But still. If you close your eyes, anything is possible. Especially naps. God I want a nap.
  • #1144, #1169 These are part of the sp_BlitzCache changes that make the Power BI stuff work.

sp_BlitzIndex Improvements

  • #1132 When you have a lot of partitions, sometimes things run dog slow. Sometimes you don’t know that. Sometimes you don’t care. If you have > 100 partitions in the database, we skip partition-level details. If you want to get them, you need to use the @BringThePain parameter (see the sketch after this list).
  • #1160 Remember those AG things? We do too. Especially when they make sp_BlitzIndex fail. We skip those databases that aren’t in a readable state.
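
A quick sketch of what #1132 looks like in practice – YourDatabase is just a placeholder name:

/* Override the new 100-partition shortcut and gather partition-level details anyway */
EXEC dbo.sp_BlitzIndex
    @DatabaseName = 'YourDatabase',
    @BringThePain = 1;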

sp_BlitzWho Improvements

Ain’t not nothin’. Next time around, we’re going to be pruning the default list of columns that it returns, and adding an @ExpertMode that returns all of them. If you have opinions, now’s the time to let us know.

sp_DatabaseRestore Improvements

  • #1135 @James-DBA-Anderson (seriously that’s his middle name) added a check for permission denied messages from directory listings. Hurrah.
    Next time around, the Most Famous Mullet On The Internet® is going to have a whole bunch of cool new tricks added. I’m more excited about these than I am about the stint in rehab I’m going to need after this trip.

sp_BlitzBackups Improvements

Nothing this time.

sp_BlitzQueryStore Improvements

  • #1178 The result sorting was stupid. I don’t know why I picked Query Cost. Probably that darned exhaustion, again. Now we order by the last execution time. We do this especially because when you’re troubleshooting parameter sniffing issues, it helps to know which version of a query executed most recently.
  • #1182 We’re now way more 2017 compatible. A couple of the new and interesting metrics added to Query Store (tempdb used, log bytes used) are now fully supported in the metrics gathering. Before they were only mildly supported. Like used hosiery.

sp_AllNightLog and sp_AllNightLog_Setup Improvements

Ain’t not nothin’!

sp_foreachdb Improvements

Ain’t not nothin’!

You can download the updated FirstResponderKit.zip here.


Let’s mix things up with a new way to learn.


Because learning SQL Server is painful, we’ve been experimenting with new topics, new guest instructors, new ways of talking to other students during class, and much more. On October 30, I’m going to unveil our new live class lineup covering PowerShell, Linux, Always On Availability Groups, SSIS, and more.

I’ve got a new Mastering series:
hands-on, real-world.

I built new 3-day classes where:

  • Hour 1: I teach you a concept first with slides and demos
  • Hour 2: You work in a cloud VM on a challenging problem
  • Hour 3: I show you how I’d solve that exact problem
  • I finish by showing you how to tell if this problem is affecting your own SQL Server, and we discuss data from different students who are willing to share theirs

We keep repeating that process through the course of the 3 days – lectures, labs, and then looking at your own SQL Server to see how it relates to your apps.

The problems are just like real life. I started with Stack Overflow’s >100GB database, then built a series of workloads to simulate real world issues. You’ve got a lot of queries running simultaneously, and you don’t get any explanations. You have to figure out which indexes you need to fix, what queries need to be tuned, and then you’ve gotta roll up your sleeves and do the work. It’s a race against the clock – just like real life.

The problems build on each other. Through the course of each class, you gradually get more familiar with a workload. You start to learn what parts of it you can fix, and what parts of it stymie you, and what parts you may need to revisit a few months later when you’ve upped your game.

The problems are as hard as you are good. If you’re struggling with the basics, then you’ll need the full amount of time to tackle the problems.

You can work on the problems at your pace. Everybody wants to pull you in a million directions, and you just can’t dedicate your whole day to a training class. We take extensive breaks through the day so you can pick when you want to work on the labs, when you need to catch up on work email, and when you wanna go do lunch. If you want to skip watching me work on the lab, you can – and just come back in when it’s time to get your next problem assignment. Heck, some of our early-access students even worked on their lab VMs overnight!

It’s like an arcade game for SQL Server.

Let’s have fun with your horrible queries!

When I went through the Microsoft Certified Master classes, the final lab exam was thrilling and challenging. I was exhausted, but the more that I talked to my fellow students, the more I wanted to play it again and again.

I built these labs with two things in mind:
pushing you, and letting you have fun.

This ain’t tiny-laptop-VMs, either: you get an 8-core, >60GB RAM VM with all solid state storage because I want you to be able to rapidly test different indexes and queries. We’ll be working with the Stack Overflow database, over 100GB of data so that slow queries really do go slowly.

You’ll be chatting in Slack with your fellow students as you go along, talking about your techniques, what you’re seeing in the lab, and helping each other try to set the best throughput scores. (Or maybe giving them false hints and hindering them. I never can tell with you people.)

You get my personal advice on your SQL Server, too.

Before class starts, you’ll run an app that collects data like sp_Blitz, sp_BlitzCache, sp_BlitzIndex, and more from a SQL Server that you’re worried about. It centralizes the data into an easy-to-share Excel spreadsheet.

I’ll personally review that data, and send you an email with advice on what parts of the training class you should focus on. I’ll talk about which queries or indexes need your attention, and as we walk through the exercises, I’ll tell you which parts are the most relevant to you.

Then, as you’ve got questions about how the class relates to your own server, I’ll be able to refer to your own server’s data. I’ll have all the information at my fingertips so that I can give you really, really, really thorough answers – not generic useless “it depends” fluff, but exact answers with real-world value.

It’s like a mix of training and consulting – you’ll learn exactly what I’d do in your shoes. (Because hey, when I look at your server, it’s like I get to play the arcade game too!)

This really is different.

There’s nothing like it in the industry today, and I bet you’re going to want to play the game – uh, I mean take the class – again and again.

To help your addiction, we’ll be offering a Live Class Season Pass option: take all of the classes, over and over through 2018. Each time, you’ll learn more – but also keep improving your own SQL Servers, getting new custom advice each time.

I’m so excited to share it with you next week!

Here’s what our early-access students said about our new courses.


While developing my new SQL Server training experiment, I ran an early-access version, and here’s what the students wrote:

Doug Gideon: “Amazing! It brought me to a whole new level of understanding of performance tuning. Don’t change a thing. The class was fun and the material was presented in an interesting way. I have taken many classes where by day 2 I was hoping for a SQL meltdown to get me out of the class, but not this one. It was great from beginning to end.”

Lori Halsey: “One of the best classes I’ve ever taken, esp. for advanced level learning.”

“Brent does not look as good as his illustrated avatar.”

Gaurang Patel: “That was incredible, and lots of real-world scenarios were included. All were superb.”

Kate Osenbach: “The class was great! I learned a lot and it was perfectly paced so I didn’t feel overwhelmed. Brent was so engaging that I felt like I was there! The most valuable thing I learned is how to think through/tackle finding and addressing a performance bottleneck.”

David Hicks: “I enjoyed it.  The way the material was presented was most helpful because there were lots of opportunities to try and put what we were learning into practice.  It wasn’t just lecture.”

Zaki Faheem: “It was awesome. I have learned a lot of new stuff which is 100% related to our current problems, seems like this course is specially designed to address the actual database problems which can probably cause issues.”

Stephen Mutzel: “It was very good.  Touched on topics I don’t work with on a regular basis.  Learned a lot and made me think about my current environment and how I could improve it.”

Tony Dunsworth: “It was very eye-opening and enlightening and I am glad I attended. You were tough on me when you needed to be and you were very supportive and encouraging.”

These weren’t free classes, either: these students paid to attend, and their feedback was glowing. Many of ’em even wrote about which classes they wanted to attend next in the lineup.

I know you’re gonna love this new class lineup. On Monday, I’ll unveil the full lineup, dates, prices, and registration. Stay tuned!

[Video] Office Hours 2017/10/25 (With Transcriptions)


This week, Brent, Erik, and Richie discuss database corruption, multi-instance clusters, career advice, whether you should transition from contract work to full time, VMware vMotion, reducing failover time with AGs, query tuning, and more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours – 10-25-17

 

Should we use 64K NTFS allocation units?

Brent Ozar: We might as well get started. We’ve got a few technical questions coming in here. “Should we still allocate disks with 64K NTFS allocation units for SQL Server data and log files when using VMware and EMC XtremIO?”

Erik Darling: Gosh, I just don’t care.

Brent Ozar: So VMware has their own best practices documentation, and EMC XtremIO has their own. I want to say, but don’t quote me, that it still says 64K, but it may also say things like 4K, because different SAN vendors do things at a different size internally; as opposed to your old school hard drives that used to do everything in certain sector sizes.

Erik Darling: Most SAN vendors want you to use the smallest block size possible so they get the higher IOPS ratings – they can cheat on the test. Like, “Look how many IOPS we do…”

Brent Ozar: Wes Crocket laughs out loud when he says, “Technical webcast.”

Erik Darling: Screw you, man. How’s that new job going because of all the questions we answered, Wes?

Brent Ozar: That’s right.

 

How do I change SSAS startup parameters?

Brent Ozar: Wes says, “Actual question – SSAS has a startup parameter pointing to G:, how can I change that startup parameter? G no longer exists.” Okay, a quick show of hands for everyone in the room, how many of us manage SQL Server Analysis Services? There’s a lot of you. Do any of you know the answer to that question?

Erik Darling: There’s an XML file, probably…

Brent Ozar: We got nothing? … In SSAS, you can go right click on the properties – I should make Victoria come around to this side. Victoria says, “You can right click on SSAS, go into properties and you can set it there.” Plus, it’s a Microsoft product, so isn’t that always the safe answer for anything? She’s getting ready to open up her laptop and she’s going to see…

Erik Darling: Just right click on everything.

Brent Ozar: Oh, Wes says, “Analysis Services won’t start at all unless I remap a G drive and move the config file to it.” Well, that’s a one-time thing. You can use the SUBST command, fake a G drive, like point it to another drive letter; that will at least get you started. Then you could – and I’m totally going off of what this nice lady said on the other side of the screen here – then you can right click…

Erik Darling: As soon as she nods, we just keep talking.

Brent Ozar: I see a menu here, so she right clicked and went into properties and it looks like that’s where it comes from. [inaudible]… This is the way she’s always accessed it.

Erik Darling: You know what I would do? I would check maybe under configuration manager; I would see if there’s a startup option in there. I assume SSAS and configuration manager has an entry, because I’ve seen it in there. So I would right click on that and see if you could change it in configuration manager. That way, you don’t have to start it up.

 

I have corruption in MSDB. What now?

Brent Ozar: Lee says, “I have corruption in my MSDB database and my last best backup was five days ago. What do you think about fixing it?”

Erik Darling: No. As soon as corruption starts showing up in system databases, I want to run screaming. What I would do first is see if you can rescue it. Because sometimes what happens – and I’ve seen this with temp tables, where sometimes people get screwed up because of those – so what I would do is check the corrupt pages DMV, and I would see how many errors you have and when the last one was. Because if it’s an older error and you don’t see the errors piling up, you don’t see them keeping happening, it might be just a temp table or something that disappeared, and you can not care about it anymore. But if it’s like a real deal system view or something that’s continuously giving problems, I’d be a little dicey about staying on that server. I’d probably want to start looking towards something new, because generally, corruption doesn’t just stick around in one database.
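
The “corrupt pages DMV” Erik mentions is msdb.dbo.suspect_pages. A rough sketch of the check he’s describing (not his exact query):

/* How many corruption errors have been logged, and when the last one happened */
SELECT database_id,
       file_id,
       page_id,
       event_type,   /* 1-3 = bad page/checksum/torn page, 4-5 = restored/repaired, 7 = deallocated */
       error_count,
       last_update_date
FROM msdb.dbo.suspect_pages
ORDER BY last_update_date DESC;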

Brent Ozar: Whatever drive that database is on, other databases can start becoming corrupt. It’s almost like – Eddie Murphy joked about this (NSFW) back in his comedy videos – whenever someone hears a voice in a horror movie saying, “Get out,” the smart people just go running right out of the house. It’s the dumb people that get the flashlight and go, “I better go look.” No, get out.

Erik Darling: Where did that come from? The basement? Alright…

Richie Rump: Brent, what have you done for me lately?

Brent Ozar: Ice cream…

 

 

Can I have an active/passive 3 node cluster…

Brent Ozar: Peter says, “Hi, I’m a first-time questioner, long time listener…” Welcome to the show. “Here’s my question about 2008 R2 on Windows Server 2008. Can I have an active passive three node cluster that uses shared storage for a database held on two instances?” Alright, so if I don’t say this, Allan Hirt is going to kill me; Technically that’s a multi-instance cluster. So technically it means you have a two instance three node cluster. I totally understand what you’re saying, it’s right, it’s just that people get freaky about that language.

Erik Darling: It’s like AOAG, everyone just throws stuff at you.

Brent Ozar: Or just Always On. My Always On is broken…

Erik Darling: Always On.

Brent Ozar: So yeah, you can totally do that. The gotcha is that it requires Enterprise Edition if you want any instance to be able to failover to any one of three nodes. If you kind of duct tape it together and any one instance is only on two of the nodes – so like one instance is on node A and B, the other instance is on node B and C – you can do that with SQL Server Standard Edition. Just you don’t usually want to do that; once you start getting fancy, you go Enterprise. Peter says, “I understand the active active passive only needs two SQL licenses.” Yes, that is true, as long as you’re covered under software assurance.

 

Should I learn data science?

Brent Ozar: Grahame says, “There seems to be an explosion… That’s true with pretty much anywhere where we’re at. “In data engineer and data science jobs, so I’ve begun focusing on learning SSAS and R, those are interesting to me. What are your thoughts on the evolving nature of the data profession?” So it sounds like you’re saying that your job is doomed, what are you going to do about that?

Erik Darling: Nothing, I’m going to hang out and wait until that actually comes to pass, because like every year that I’ve been a DBA or I’ve worked with SQL Server, I’ve heard that my job is over and shortly going to be extinct or a fossil and I’m going to be holed up in a data center somewhere freezing my butt off and waiting for a server to go down. [crosstalk] So far that hasn’t happened. So you know – you said about the weekly links…

Brent Ozar: Yes.

Erik Darling: Yes, Microsoft has been proclaiming the death of the DBA since, what, SQL Server 7? Before it even had a year; In fact, it was still just a sad number.

Brent Ozar: If you subscribe to our Monday links – so this week I had a kind of funny batch of links, including the performance tuning guide for SQL Server 7 and the manual for SQL Server 7, where it says, and I quote, “The database has become largely self-tuning.” It gives you all kinds of advice on how you don’t have to worry about performance tuning anymore. It also says that the index tuning wizard, the thing that’s dead now, “Does a better job of indexing than humans do.” Yeah, if we outlasted the index tuning wizard, we are going to outlast the next self-help thing.

The other thing I would say about that is, if someone’s telling you that the DBA role is dead and you should go do something else, maybe watch what they’re doing. In the case of the person that you mentioned, they’re actually talking about DBA topics at PASS, not about data science. So that’s kind of funny how that works out.

Erik Darling: But, you know, if that’s what interests you then go for it. I mean, don’t stick around being a DBA if you don’t want to do it anymore. If you’re into SS whatever S and R then go crazy.

Brent Ozar: And there’s money in it.

Erik Darling: Yeah, totally. Just, you know, get your Ph.D., and a few years from now you’ll be a massively successful data scientist.

Brent Ozar: You’re competing with everyone who comes fresh out of college who has this Ph.D. in math, Ph.D. in computer science, and they make $20 an hour because they’re desperate for ramen to pay the rent. That’s how we got Richie, for example.

Richie Rump: Yeah, well it wasn’t just that. Well, it was ramen without the flavor packets, so…

Erik Darling: I snorted all the flavor packets, so…

Richie Rump: Well the real question that I have is what constitutes an explosion? Obviously, there weren’t a lot of data scientist jobs, and now all of a sudden there’s an explosion – what does that mean? And how many data scientists do you really need for a company? And how do they integrate with one another? And frankly, the big problem with data isn’t the data scientists, it’s the data itself and morphing all that data so that it can be actually processed by a data scientist. So is there going to be a new job now where that’s going to actually transform all this data into a readable format for the data scientist because they don’t want to pay all these data scientists this huge amount of money so they can actually do their data science type stuff?

I mean, I don’t know, but it’s still so young and it’s so early right now. If that floats your boat then go off and do it and have fun at it, but if you’re just trying to chase a dollar sign, that typically doesn’t work out well for anyone. Just ask the Silverlight guys and see how that turned out.

Erik Darling: Ouch.

Brent Ozar: Grahame follows up with, “That was supposed to be a softball question, my bad.” Well no, it’s just that we love that particular softball because it comes up a lot. There’s a lot of people who are like, “The database is dead, there’s not going to be any careers left.” I’m like, “They said the same thing when XML came out.” “No one needs databases, we’ll put it all in flat files.” “Okay, get back to me on that.”

Erik Darling: We did a sold-out pre-con on the database administrator not being dead.

Brent Ozar: How many seats did we sell?

Erik Darling: 360… No, 361, it was me. I had to pay to get in there.

 

Heard anything about DDBoost?

Brent Ozar: Let’s see, Daniel says, “My vice president…” I assume he means Mister Pence… “Wants us to do another proof of concept with DD Boost…” I want to say that’s Data Domain’s. “Have you ever seen it and do you have any horror stories?” Have you ever seen anybody use it?

Erik Darling: I’ve heard SAN guys talk a lot about it, and SAN guys seem to love it. I’ve never actually heard a DBA talk about it and love it.

Brent Ozar: If you go to the bottom of our blog post about it, there was a comment from somebody who was like, “I’m going to go in and run reports against it to see how it goes.” I’m like, “God bless you; you’re wonderful.” And he actually came back and he’s like, “Restore speed tends to suck.” Like okay, I wish they would come out with some numbers showing whether it’s better or worse, and when they don’t come out with numbers, that usually means it’s worse.

 

Can you do an FCI in an AG?

Brent Ozar: Robert says, “Hi all, have you seen a hybrid environment…”  I think he means like Commodore 64 and Amigas. “Where there are failover clustered instances with shared storage and Always On availability groups with locally attached storage?”

Erik Darling: Hell yeah, yeah there’s been like 24 people managing those. There’s like numbered team jerseys, you’ve got to…

Brent Ozar: It’s complex.

Erik Darling: You have to be one heck of an engineer to intermingle and interoperate all of those technologies.

Brent Ozar: When I used to talk about AGs a few years ago in 2012 when this stuff first came out, allrecipes.com had a public example of that and Discovery Channel had a public example of that as well. So that was way back in 2012 when they first came out, but all those people had like three, four, five people in their DBA teams, and that was their only cluster.

 

The B-side of Brent Ozar Unlimited

Brent Ozar: Peter says, “Richie looks like a bee.” Yeah, he’s our B team…

Richie Rump: Watch out for my stinger, boiii…

Erik Darling: Our little B-side.

Brent Ozar: That’s human resources, paging human resources…

Richie Rump: Again?

Erik Darling: She’s out getting tacos with my wife right now.

Brent Ozar: Human Resources is Erika. She has the worst mouth of any of us. She swears, curses me under the table…

 

Is vMotion okay for database servers?

Brent Ozar: Amdal says, “Hello, what are your thoughts on VMware VMotion in database servers? What are your recommendations for this? I recommended the team to set resources and change them from shared to dedicated; do you like VMotion?” Do we have anything against VMotion?

Erik Darling: I have nothing against VMotion. I guess my only hang up about VMotion in general is that if you have a SQL Server – which I assume is why you’re here – on a virtual machine, VMotion actually won’t be aware if the SQL Server fails. So if the SQL Server goes down, VMotion won’t make a peep about it. VMotion won’t protect you from SQL Server going down. The other thing that sucks is, if you VMotion a VM with a downed SQL Server, it will come up on wherever you VMotion it to still down. It doesn’t actually restart anything, it transfers the VM over in the exact same state that it was in when VMotion started. So it doesn’t protect you from downed SQL Servers; other than that, pretty cool, when you pay for VMware double enterprise full set of teeth licenses.

Brent Ozar: And be aware too that if you VMotion like a running database mirror or an availability group, if it’s down for too long during the VMotion, you can cause a mirroring failover or an availability group failover.

Erik Darling: And if it’s less than graceful, you might even cause yourself some corruption.

Brent Ozar: So weird that I forget about that.

 

Followup on the Analysis Services config file

Brent Ozar: Nesta says, “For your analysis services question, the config file for SQL Server 2012 is located in C program files…” Well, it’s probably going to be wherever you installed it. “Microsoft SQL Server lack MSSS…” You know what, I’m just going to paste that into the questions window and say here you go, rather than looking like an idiot trying to read that out loud. So if you’re looking for that location, it’s in the answers window. It’s also in the config file…

Erik Darling: Well the configuration manager.

Brent Ozar: Oh configuration manager, no kidding.

Erik Darling: You can just right click your butt off.

Brent Ozar: Now I know how analysis services work.

Erik Darling: Now you’re a data scientist.

Brent Ozar: I’m a data scientist, I can dump this crappy job.

Brent Ozar: Oh my god, Daniel says, “The data scientists at my company have databases named C:\users\user\desktop\thelastfinalcopy.MDF.”

Richie Rump: Yep, and you guys think I’m lying about this but I’ve seen it. I’ve seen these data scientists work. They’re amazing, but how they get their data is crazy insane; it’s insane.

Brent Ozar: And there’s like no chain of custody throughout half the time. Where’s this field come from? “Yes.”

Erik Darling: It was in a flat file somewhere.

Brent Ozar: Daniel says, “I can provide screenshots.” If you’re willing to show them publicly, like if your company’s okay showing them publicly, it would be really hilarious to send them into us, but just make sure first that your company is okay with that because we wouldn’t want to get you fired. You seem like a nice guy, but not that nice; you’re here on the webcast.

 

How can I reduce failover time with AGs?

Brent Ozar: Leah says, “I would like to failover in production for patching using synchronous AGs. In my testing, I see five seconds on failing over and our application fails to connect. How do I reduce failover time with AGs?”

Erik Darling: That’s a good one. I can think of a few places off the top of my head. My first question is how many databases are you failing over? Because on failover, those databases do have to start up on the other end, so that would be a question there. [crosstalk]…

Brent Ozar: So if you have a large number of virtual log files, it takes a long time for crash recovery or whatever recovery to happen on startup.

Erik Darling: Or if you have really big VLFs, then SQL – so it’s like you don’t want to have too many and they can’t be too big; it’s very Goldilocks, there’s a zone where VLFs are happy. So I would check, you know, number of databases failing over and number of VLFs in there, which I do believe … So run sp_Blitz and see how your VLFs are doing.

Brent Ozar: There’s also network failover time you can work on tuning, like DNS and ipconfig release and renew.
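
A minimal sketch of the VLF check Erik describes above – sys.dm_db_log_stats requires SQL Server 2016 SP2 or newer, and on older builds DBCC LOGINFO per database works instead:

/* Count VLFs per database; databases with thousands of VLFs recover slowly on failover */
SELECT d.name,
       ls.total_vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_stats(d.database_id) AS ls
ORDER BY ls.total_vlf_count DESC;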

 

Should I be an employee or a contractor?

Brent Ozar: Peter says, “I am the single DBA in my company…” Alright, if you’re asking how to like find a date or whatever, Match.com, PlentyOfFish.com? I got a laugh from Richie. … I like that. And then if you’re willing to go overseas it’s GetUTCDate.

Richie Rump: Well, I mean if FarmersOnly.com is a thing, I guess DBA Get Dates could be a thing too.

Brent Ozar: Someone who understands that you’re going to be on call pretty much every night, you’ll be foul and uncompassionate and smell like a data center. He says, “I’m the single DBA in my company. When transactions in the app fail, everything comes to me. What I’ve got there is a contractor, they never had a DBA with their platform. I’ve tuned the crap out of it and boosted it so much, they’ve now asked me to go fulltime.” Okay, what’s your question? Oh, should you go fulltime? Do you want health insurance is the big one; health insurance is usually expensive and hard to get. Vacation time? Like to have someone else take care of your vacation. Richie, you’ve been both sides, you’ve done contracting and fulltime. What are the things that would make you decide one over the other?

Richie Rump: Stability is really the big thing from a family perspective. So at any given time, when your contract runs out, they can kind of say see you later, and if you don’t have enough money in the emergency fund or you’re actively looking for that next gig, you’re going to be on the bench for a little bit while you look for it. So if you’re cool with that and you’ve got money in the bank and you’ve got enough for three months or whatever, go for it. But I didn’t like going out looking for the next gig; that was the worst for me. So at that point, my wife said, “You know, you’re not even looking at this, it’s been two months, you’re not even looking. Go get a real job so that you can actually pay for this house, you fool.”

That was the actual conversation, I’m like, “Yeah you’re probably right, maybe I should be looking for a fulltime job seeing as I don’t like going out there looking for work.”

Brent Ozar: I would also say too, if you turn down – if they ask you to go fulltime and you turn it down, be aware that they may start looking for somebody else that they’re going to bring in as fulltime and terminate your contract because you might be more expensive on a contract basis. On the flip side, if you’re doing an amazing job and you’ve built up a really good rapport, the way I would spin it with them is I would say, “Look, you don’t really want to do full time with me because I’ll be bored half the time. How about we just start tailing off some of these contracting hours so that it’s not so expensive for you?” Then that way, you can go off and start getting your next client, if you’re insistent that you want to go contracting. Richie, you should talk a little bit more about the emergency fund thing because we’ve talked about that several times recently. So what’s the concept of an emergency fund and what’s realistic that a contractor/consultant should have for a parachute?

Richie Rump: Right, so the way I figured it out, our emergency fund is what can we live on bare minimum, just meeting all the bills and enough that we could live on, what is that number? So we came up with that monthly number, whatever that was, and then we just multiplied it by six because – I would have been fine with three months of a padding, but my wife said, “No, no, no, no, no, no I don’t feel good with three months, I want six months.” So we saved up and we lived essentially off that base number and then just pocketed everything else, put it into the savings account until we got to that six months number of where she started feeling really good about me being on my own and going out and doing all that fun stuff.

What happened with me, how I actually went independent was I got laid off but I got five months’ severance. So that actually helped out a lot, that actually helped me get the next gig; it gave me a buffer to find my first contract and actually to put off a bunch for the emergency fund so that she kind of felt good about me being on my own. I didn’t do it for six months, I did it for almost four years. And it’s one of those things, if you have a frank conversation with your spouse, they’ll get it. It’s like, “Well what number would make you feel comfortable? How much do we actually need to that if I’m without work for X amount of time, you know you’re going to be okay?” If you’re single, just have a talk to yourself, you know, sit in front of the mirror and say, “Hey, how you doing, you’re good-looking, you doing alright? Yeah, I’m doing all right. What number are you comfortable with? How long’s good for you?”

Brent Ozar: Yeah, Peter follows up and he says, “My statement was about the death of the DBA role.” Yeah, I don’t know anybody who’s getting laid off as a DBA from companies going, “You know what, those databases just take care of themselves now. We hardly need any help at all. You move on up the road.” Even when they go to the cloud, even when they go to Azure SQLDB or Amazon RDS – we had one client who went into Amazon RDS and hired a DBA because they’re like we have so many needs now that are outside, above and beyond what the cloud provider does for us; because you still need index tuning, you still need query tuning. And when you’re paying by the hour for a server and for performance, suddenly it makes a DBA even more required because you can drop your bill right away by doing performance tuning.

Erik Darling: Or when you’re paying by whatever the heck DTU is. I’m still not sure on that one.

 

Listeners chime in

Brent Ozar: Let’s see, Dee BA, I still love their name, Dee BA says, “For the contractor, make sure you ask enough questions ahead of time as well. On salary they can often overwork you without the overtime pay.” That is so totally true. They may be going, “You’re a contractor, you’re too expensive, let’s hire you fulltime so we can work you 80 hours a week and not get in trouble for it.”

Brent Ozar: Michael says, “I also did stupid things like pay my mortgage into the future for six months. That helped a lot when I ended up working for nothing.” Sure, I made my fantasy football bets for like a year in advance, that way I knew… That’s totally not true.

Erik Darling: I just got life insurance, so if things ever get rocky for a while, I’ll just slip and fall somewhere. Everyone wins.

Richie Rump: Well you know you have to die for the money to get paid, you know that, right?

Erik Darling: Oh that’s the plan, don’t worry, I’m cool.

Richie Rump: It really should be called death insurance. I don’t know why they call it life insurance.

Erik Darling: Well because everyone else gets to live a good life.

 

Can sp_WhoIsActive prove that the database isn’t the problem?

Brent Ozar: Nestor says, “We have a third party app blaming the SQL database for slow performance. I see queries in sp_WhoIsActive come and go in less than two seconds. Is this enough to prove that SQL Server is not the bottleneck?” No.

Erik Darling: I’d want to look at overall wait stats, I think, on that server.

Brent Ozar: And how would you see wait stats?

Erik Darling:  I would use sp_BlitzFirst, available in our first responder kit for free.

Brent Ozar: sp_BlitzFirst, which we will be teaching the students here how to use tomorrow. That’s one of the things we show in this class.
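
The kind of call Erik is describing looks roughly like this – the 30-second sample window is just an example:

/* Sample wait stats (and more) over a 30-second window, with the extra expert-mode result sets */
EXEC dbo.sp_BlitzFirst
    @Seconds = 30,
    @ExpertMode = 1;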

 

I’m using SQL Server 2000…

Brent Ozar: Gordon says, “Given that some of my company’s clients still use 2008, 2005 and 2000, I don’t think the DBA is going to become irrelevant any time soon.”

Erik Darling: So let me ask you a question. What’s the timeline on you getting on a version of SQL Server made, say, in this decade?

Brent Ozar: And you know you have no support. Like if anything goes wrong, you are screwed. I know it’s not yours, I know you didn’t choose to run those versions. This is where I’d start saying things to my management – and put this in writing: “Just so that everyone knows, if any of these servers die, I have no support capabilities with Microsoft.” And when they come to ask me to make a query go faster, I’ll flat out say, “Oh man, these are the tools I have on 2008 and forward, but I don’t have those tools on 2000, 2005.” “What can you do?” “Nothing…”

Erik Darling: Run DTA.

Brent Ozar: That’s right, I forgot; go back to 7 because it’s self-tuning.

Richie Rump: On a good note, some of those databases can legally drive now, so that’s good.

Brent Ozar: Unfortunately they’re intoxicated.

Erik Darling: Buy scratch tickets, get drunk, join the army, all sorts of fun things.

 

Is the Expert SSIS Training sold out?

Brent Ozar: Last question we’ll take, Wes says, “Is the expert SSIS training sold out?” All our training classes are on hold now until November the first. We’re going to announce everything back out on November the first, including our new lineup of classes for next year. So right now we’re building up anticipation via emails and talking about the kinds of stuff that we’ll release. So check back on November first at 9am Eastern time, we’ll open up all the sales again; and they’ll be open at Black Friday prices as well. So thanks everybody for hanging out with us this week at Office Hours and we will see you all next week. Adios everybody… Actually we won’t see you next week because we’ll be at PASS. We’re not running an Office Hours during PASS, so we’ll see you in two weeks, or we’ll see you at PASS…

Erik Darling: Unless Tara and Richie do it on their own?

Brent Ozar: No I gave them the week off. Adios, everybody.

Erik Darling: Slackers.

Announcing my new Mastering class series – and registration opens Wednesday.


We’ve talked about why traditional training sucks, how our all-new series is different, and what our early-access students said.

Now it’s time to unveil the new lineup:

The new Mastering Series with Brent:

Registrations open Wednesday at Black Friday prices.

At 9AM Eastern, a limited number of seats in each class/date will be 50% off. That’s as low as it goes: we won’t be offering anything lower on Black Friday on these live courses. (Black Friday itself will be all about the Everything Bundle.)

First come, first serve, no coupon required. When they’re gone, they’re gone. Save thousands of dollars – but you gotta move fast.

Talk to your manager. Get those credit cards ready (no checks or POs during Black Friday), and I can’t wait to share the learning and games with you!

What Is Estimated Subtree Cost? Query Bucks. No, Really.


When you look at a query plan, SQL Server shows a tooltip with an Estimated Subtree Cost:

Estimated Subtree Cost

Now I can run all the bad queries I want!

A long time ago in a galaxy far, far away, it meant the number of seconds it would take to run on one guy’s Dell desktop. These days, it’s just a set of hard-coded cost estimates around CPU & IO work requirements – it isn’t really tied to time at all.
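
If you want to see the query bucks for your own cached plans, here’s a rough sketch (not from the original post) that pulls the StatementSubTreeCost attribute out of the plan XML:

/* Top 10 cached plans by estimated subtree cost, a.k.a. query bucks */
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 10
       qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS estimated_subtree_cost,
       qs.execution_count,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY estimated_subtree_cost DESC;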

One day when @Kendra_Little needed to explain the unit of measurement, she coined the term Query Bucks. That’s a great example of how she really brings SQL Server concepts to life in fun ways. (You should check out her SQLworkbooks.com training. Her classes are completely free right now, and I absolutely guarantee you’ll learn something from them. She’s one of the smartest people I’ve ever met.)

So this year for our PASS Summit pre-con on performance tuning, we thought it’d be fun to make Query Bucks a real, physical thing. Eric Larsen brought them to life – he’s the amazing illustrator who does all of our portraits, the operators at PasteThePlan, our Christmas cards, you name it. He’s super talented and really delivered:

Kendra’s $5 Query Buck

We immortalized Kendra on her own query buck, plus one for each member of our team, then picked a couple of folks that have influenced our own query tuning careers: Paul White (@SQL_Kiwi) and Joe Sack (@JoeSackMSFT). I am totally going to make the phrase “a stack of Paul Whites” a thing.

Joe Sack’s Query Buck

I am totally going to make the phrase “a stack of Paul Whites” a thing

Tara Kizer

Erik Darling

Richie Rump

Me (because either a $2 or $3 bill makes sense for my goofiness)

For the back, the person on the front picked their favorite query plan operator:

In Codd We Trust

I’m tickled pink with how these turned out. This might be my favorite tangible thing that we’ve ever given away – and of course, attendees of our PASS Summit pre-con today all get a handful of Query Bucks. When they get back to the office, I fully expect them to be tipping their fellow DBAs and developers for jobs well done.

Print your own with the Query Bucks PDF. Enjoy, and I’d love to see photos of you with your #QueryBucks.

How to Get Live Query Plans with sp_BlitzWho


sp_BlitzWho is our open source replacement for sp_who and sp_who2. It has all kinds of really neat-o outputs like the query’s degree of parallelism, how much memory it’s been granted, how long it’s been waiting for memory grants, and much more.

If you’re on SQL Server 2016 SP1 or newer, it can show you a query’s live execution plan from sys.dm_exec_query_statistics_xml.

Actual plan properties

Live plans add all kinds of cool stuff:

  • Which query in the batch is currently executing
  • Actual properties for each operator – with details like how many reads have been done so far, how much time has elapsed on that operator, and how many rows have been returned
  • Actual properties for each arrow in the plan, very helpful for estimated vs actual row counts

Actual properties on an arrow

Now, these plans aren’t quite as cool as the ultra-cool animated plans showing continuous movement and completion percentages – they’re just a point-in-time snapshot of the live plan’s actual-vs-estimated rows (as of the moment you query that DMF). This means you may want to run sp_BlitzWho a few times, clicking on the query’s live_query_plan field each time, and comparing the differences between passes to get a rough idea of what kind of progress it’s making. (And yes, this sounds like a great opportunity for someone to build something to show query plans as they’re moving through the engine.)
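
If you want to peek at the raw data behind sp_BlitzWho’s live_query_plan column, here’s a minimal sketch against the DMF itself:

/* Snapshot the live plan for everything currently running (SQL Server 2016 SP1 or newer) */
SELECT r.session_id,
       r.status,
       r.total_elapsed_time,
       qsx.query_plan AS live_query_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_query_statistics_xml(r.session_id) AS qsx
WHERE r.session_id <> @@SPID;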

To enable live query plans, you need:

  • SQL Server 2016 SP1 or newer
  • The current version of sp_BlitzWho from the First Responder Kit

Plus either one of these two turned on:

  • Slow, painful, set at session level: SET STATISTICS XML ON or SET STATISTICS PROFILE ON, both of which have to be enabled before the query starts. That’s cool if you’re doing tuning on a particular query, but not-so-good if you’re in the middle of a troubleshooting emergency. Plus, this adds a pretty big overhead to that query.
  • Fast, easy, set globally: Trace flag 7412. This uses the new lightweight stats infrastructure, which Microsoft says only adds a 1-2% overhead to your queries overall. This doesn’t capture CPU metrics, but that’s usually okay for me – I’d rather just have the operator numbers to get me started. To learn more about this, watch Pedro Lopes’ GroupBy session on 2016 SP1’s enhancements.

Examples of how to use it:

/* Turn it on: */
DBCC TRACEON(7412, -1);

/* Turn it off (only affects new queries from here on out) */
DBCC TRACEOFF(7412, -1);
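
And if you’d rather use the slower session-level option on a single query you’re tuning, the sketch looks like this – YourSlowQuery is a placeholder:

/* Session-level alternative: must be turned on before the query starts */
SET STATISTICS XML ON;

EXEC dbo.YourSlowQuery;   /* placeholder for whatever you're tuning */

SET STATISTICS XML OFF;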

This improvement is such a great example of why Erik and I are teaching our Expert Performance Tuning for 2016 & 2017 class (and I’m demoing this very feature onstage this afternoon). So many things have improved lately, and if you haven’t been to a performance tuning class in the last year or two, you’re gonna be stunned at how many more tools you’ve got at your disposal these days.

Registration is Open Now – at Black Friday Prices.


Good news, everyone! Registration is open now, and a limited number of seats are available at half off. We’re running our Black Friday Sale all November long – first come, first serve:

Plus, we’ve got more classes! Here’s the rest of our upcoming lineup, all priced 50% off too:

Good luck on claiming your spot, and see you in class!

Registration is open now for our new 2018 class lineup.


#PASSsummit Day 1 Keynote Live Blog


Good morning, folks, and welcome to our annual live blog of the PASS Summit Day 1 keynote.

Open the free live video stream at PASSsummit.com in one browser tab, and then refresh this page every couple/few minutes. I’ll be adding my thoughts at the end of the page, in chronological order, for easier reading later.

Today’s keynote will be presented by Rohan Kumar (@RohanKSQL), GM of Database Systems Engineering.

What I’m Expecting

Microsoft’s marketing team will probably require Rohan’s team to spend some time shilling SQL Server 2017 even though it’s already shipped. (We’ve even got Cumulative Update 1, complete with some wild bugs.) This means a chunk of the time will be spent showing things that you, dear reader, already knew about because you’re the kind of person who stays very current on blogs and announcements. However, many PASS attendees don’t have that luxury, and they expect Microsoft to catch ’em up to speed in Day 1’s keynote. Plus, this is Microsoft’s chance to trot out customers who’ve already adopted SQL Server 2017 and seen benefits.

However, Microsoft’s showing a willingness to ship features in Cumulative Updates, not just new versions – especially DBA-friendly features that make troubleshooting and tuning easier. We’ve already discovered hidden features in SQL 2017 that aren’t enabled yet – so this would be a great time for them to surprise and delight their fan base. I bet we’ll have at least a couple of feature releases in this keynote that will involve shipping dates before the end of the year.

Let’s see what happens!

Live Keynote Blog

Setting up for the keynote

8:21AM – PASS President Adam Jorgensen welcoming everybody and preparing them for “the Rolling Stones of SQL Server” – that’s a great way of saying it. The Microsoft talent here is crazy.

PASS President Adam Jorgensen

8:26AM – Adam: “PASS is run by the community, for the community.” Talking about how local volunteers make everything possible. I have so much respect for these folks – they do an absolutely heroic amount of work, most of it unseen and unthanked. This week, when you see people speaking, guiding folks around, and answering questions in the Community Zone, take a few moments to say thank you.

8:29 – Adam’s recognizing Tom LaRock for 10 years of volunteer service in the Board of Directors, and Denise McInerney for 6 years of service. (Seriously, that’s a long time, and a lot of meetings. God bless those BoD members – they put up with a lot of flack.)

8:31 – Rohan Kumar from Microsoft takes the stage. He’s talking about how data, cloud, and AI are changing the work we do, and talking about how Microsoft has been investing in AI for a long time. I have one word for you: Clippy. Yes, he sucked, but I can see how Microsoft can say they’ve been investing for a while and trying to bring AI to consumers.

Rohan Kumar

8:36 – The modern data estate allows data to be accepted from any source – structured or unstructured – and it functions across both on-premises and the cloud. “It essentially hides all the differences from the application and the infrastructure management.” That’s a great vision, but we don’t have anything remotely resembling that today. Try a cross-database query or scheduling a job, for example.

Modern Data Estate

8:38 – Rohan: “Will a developer have to care whether we deploy to the cloud or on-prem? If the answer is yes, we’re not shipping that feature.” 

Wait – we need to be specific, dear reader.

He’s right, but there’s one very, very critical word there: “developer.” If you build a new app from the ground up today, in 2017, and if you’re disciplined about what features you use (and don’t use), then you can do what Rohan’s describing. But for existing applications, using tons of legacy features, you cannot do this awesome trick. Try looking at the unsupported features in SQL Server on Linux, or the features not supported in Azure SQL DB.

The Modern Data Platform is absolutely amazing for ground-up new builds, but for existing apps, it’s a shimmering oasis on the horizon, unreachable without a long trek through the desert of code rewrites. Applications built before 2017/2018 are legacy in an entirely new way that really is bad. (Serverless is a similar sea-change in development.)

8:40 Talking about how containers drove a lot of adoption. I think containers was only half of it: Developer Edition is now free. If you would have had to pay to license that easily-downloaded container, adoption rates would be a different story. I bet it took a lot of hard work behind the scenes to convince the bean counters to make Dev Edition free, and it’s starting to pay off here in terms of market share on new platforms. Microsoft’s done a great job here.

8:42 – Bob Ward and Conor Cunningham talking about persistent memory storage, doing a very, very fast series of demos. Not talking about the benchmark speed – these guys are just seriously caffeinated.

Bob Ward & Conor Cunningham demoing SQL Server on Linux

8:46 – Maybe a ten second demo of automatic plan correction. Those poor guys had to have been under threat of death if they took more than 5 minutes onstage or something. It’s kinda cool that a day 1 demo goes technical, but…holy cow that was fast. It was like ShamWow but for SQL Server. It’s hard for me to let those guys go offstage once they start talking. COME BACK!

8:49 – “Basically, SQL Server is the fastest database on the planet, period.” I have no idea if that’s true, and I don’t care. I yelled, “WOOHOO!”

At both Ignite and PASS, nobody cheered when he mentioned that it’s a tenth of the price of Oracle. I bet if you surveyed the attendees to ask them what their per-core licensing cost was, and then if you double-checked with their accounting teams, less than ten percent would be within, say, 50% of the number. To database people, either they think it’s expensive (SQL Server, Oracle, DB2), or it’s free (MySQL, PostgreSQL, MongoDB), and there’s not a lot of distinguishing room in there.

8:50 – New features in SQL 2017 include graph data, machine learning with R & Python, native T-SQL scoring, adaptive query processing, and automatic plan correction.

8:53 – Paraphrasing: as data grows, hardware changes, and database features come in, it’s going to be more important for SQL Server’s query optimizer to change its behavior as it learns about the performance of the queries it executes. Given the marketing fluff in some keynotes, you’d be forgiven for thinking that this might just be hype to get people excited, but I bet this is true. Since Microsoft now hosts databases in Azure SQL DB, it’s in their own best interest to fix query plans as quickly as possible in order to reduce their own hosting costs, maximize profit, and make their database look faster than anybody else’s. This reason alone makes me adore Azure SQL DB: it drives improvements in the boxed product, too.

8:55 – Tobias Ternstrom & Mihaela Blendea doing a containers demo. A customer story of this is in Microsoft’s e-book about Linux. dv01’s developers use Docker on their local workstations, then migrate to production with continuous integration. Tobias & Mihaela are showing the new way of doing fast dev environment deployments. In the old world of SQL Server on Windows, Microsoft wouldn’t have been able to get dv01’s business because it’d just have been way too hard to integrate into dv01’s CI/CD processes. Containers make this possible.

Tobias “Fancypants” Ternstrom demoing Carbon, new SSMS for Linux/Mac/Windows

9:01 – Tobias very briefly shows Carbon, the new free SSMS for Linux/Mac/Windows, rendering an execution plan.

9:01 – Rohan announcing SQL Operations Studio, the new “free lightweight modern data operations tool for SQL everywhere.” No release date mentioned, and it’s not on the SQL Server downloads page yet. (Saved you a click.)

Azure SQL DB Announcements

9:04 – “SQL Server and Azure SQL DB…share exactly the same codebase. So all the innovation that you’re seeing released in SQL Server 2017 has been available in Azure SQL DB for several months – in some cases, more than a year now.”

While yes, the code base is shared, the “all the innovation…has been available” line is nowhere near accurate. Go down Microsoft’s list of what’s new in SQL 2017, and lots of this stuff isn’t available in Azure SQL DB. Even if you restrict it to just engine stuff, you’ll notice that docs pages like Using R in Azure SQL Database show limitations up in Azure that you don’t get on premises.

I get twitchy about these claims that they’re exactly the same because it hints to customers that Azure SQL DB is fully testing out everything before it ships in the boxed product. There are whole areas of the engine that just aren’t in use up in the cloud. (And that’s totally okay! I just wish marketing didn’t imply they’re identically used.)

Migration improvements for Azure SQL DB

9:07 – Streamlining your journey to the cloud – specifically, PaaS. Managed Instances are coming, lift and shift migration without code changes, and Azure SQL DB cost cuts for Software Assurance owners. I LOVE MANAGED INSTANCES. These changes are so important because they bring Azure SQL DB not just to an equivalent of Amazon RDS for SQL Server, but in most ways, beyond. Amazon RDS has always let you do bigger databases than Azure SQL DB, easier lift-and-shift migrations, and let you reuse your on-premises licensing. However, now Azure SQL DB Managed Instances give you bigger database sizes and readable replicas. There’s just two things left to learn: the exact pricing and the release dates.

9:09 – “We collect 700TB of telemetry data per day.” Yes, and you don’t let developers opt out of that. That’s a time bomb for the Linux/open-source communities – I still think the pushback on that is going to hit hard at some point, and we’re gonna have to let users opt out of SQL Server phoning home.

9:11 – Danielle Dean doing an ML demo with predictions for healthcare, figuring out how long a patient is going to stay. Inserting ~1.4M rows per second, then switching over to a Jupyter notebook, taking that logic and putting the ML model into Azure SQL DB. “Demo” is the wrong word for this pace – it’s just clicking between screens. Nobody’s really learning anything here, and this audience is way beyond the point where they buy “all you do is click and the machine learned everything.” We’re data and developer people, and we know this stuff is hard work. Try cleansing data for 1.4M rows/sec.

9:15 – The demos really feel like somebody loaded up the buzzword shotgun, fired at the screen, and took whatever combo came out. There’s no storytelling here. They’re under too much pressure to sling too many buzzwords in too little time, and it’s going to just wash over the audience. At bare minimum, I would want each demo to end with, “To see the full story on this technology, go to session X at Y AM.”

9:17 – The new Azure Data Factory (preview) lets you migrate your SSIS packages as-is up to the cloud. If there’s ever been a bursty load that should be available on demand, priced by the second of consumption, it’s ETL loads. 30+ connectors. I like how they didn’t try to brand it as Flow Enterprise or something like that. Plus, SSIS in Azure, managed environment for SSIS execution. You pick the number of nodes and node size, hourly pricing, licensing costs are bundled in.

That’s pretty awesome for BI consultants who want to take their existing SSIS skills, jump into a new client that doesn’t own any hardware, and start delivering without waiting on infrastructure.

9:20 – Scott Currie (CEO of Varigence, the team behind Biml) onstage to do an Azure Data Factory with Biml, pushing from on-premises up to Azure Data Lake, do some data scrubbing, deploy with PowerShell. Announcing general availability today that Biml will have first-class support for creating Azure Data Factory objects. Just change your deployment target, and instead of deploying locally, you’re deploying to ADF.

9:22 – Paraphrasing Rohan: “Azure SQL Data Warehouse is our flagship data warehouse product.” Man, just like ETL work, data warehouses are so totally perfect for the cloud. Outside of compliance requirements (which are usually misinterpreted anyway), I don’t know why you’d wanna deploy a new on-premises data warehouse today if you could avoid it. (I don’t want you to think that OLTP is somehow different, either – if you’re building a ground-up new OLTP app today, you should try PaaS first.)

9:25 – Julie Strauss doing petabyte data warehouse demo.

Julie Strauss doing 100TB scan demo

Business Intelligence Hybrid Architecture

9:31 – Christian Wade onstage to announce scale-out Azure Analysis Services. It’s another demo of mentioning five keywords, clicking on three places, and pretending like entire projects are done. This makes TV chefs look like project managers. “I just click here and switch windows because I’ve already done a bunch of other stuff.” COME ON, this is not a demo.

Christian Wade doing Power BI and Visual Studio demos

9:36 – Announcing scale-out Azure Analysis Services to support hundreds or thousands of concurrent users. Up to 7 read-only replicas.

Okay, I just lost it.

9:37 – Riccardo Muti demoing Power BI Premium connecting to any back end that on-premises Power BI Desktop can connect to, and how mobile reports look. This is actually a good demo, showing reporting UI. That’s a good feature set to cover in a fast demo.

9:44 – Rohan back on stage to wrap it up and explain how you are the key to making all this happen. (Also gave a really nice shout-out to Denny Cherry, who had to miss the Summit due to a medical emergency.) Now, go learn how to do it at Summit! See you around this week.

Registration is open now for our new 2018 class lineup.

#PASSsummit Day 2 Keynote: Dr. Rimma Nehme on Azure Cosmos DB

Summit day 2 keynotes have become special.

Over the last few years, Microsoft has dedicated the day 2 keynote to a technical dive into an advanced, future-looking topic. Past examples have included future-looking guidance on Hekaton, columnstore indexes, and how Azure SQL DB protects data.

I love Microsoft for doing this. It costs them a lot of money to basically buy the stage for the morning, and they’re kinda donating that money to you, dear reader, by letting you learn about something you might not be putting into practice anytime soon. They teach you where they’re going, and you get the chance to think about whether it makes sense to focus your own training on these topics.

As a presenter, I can tell you that building a session like that is incredibly hard. The attendees have a wide range of job duties and experience levels. How can they teach something advanced when it’s so hard to get everyone onto the same starting page?

When Microsoft’s Dr. @RimmaNehme takes the stage this morning, after the fun music and community news, I bet you’re going to be impressed. She has a solid track record of delivering interesting, informative keynotes where everybody in the room learns something. Hell, a lot of things.

She’s going to be introducing you to Azure Cosmos DB, a cloud-first globally distributed database system. It’s the artist formerly known as DocumentDB, and it’s like Google Cloud Spanner. You, dear reader, probably aren’t going to use Cosmos DB this year, and probably not even next year. However, developers who are sick and tired of struggling with database problems are very interested in Cosmos DB’s simpler approach. It solves a lot of development problems. It would do you some good to learn about why and how Microsoft built it so that you can have better conversations with your developers.

Do I think you need to drop everything and learn how to manage it? No, because that’s Microsoft’s job. However, your role is to understand where Cosmos DB makes sense, and when your developers wanna build something that’s a good fit, point them toward it. (In the stuff we build here at the company for our own use, we default to cloud databases like this first, too.)

To follow along with my notes:

Registration is open now for our new 2018 class lineup.

Microsoft’s Query Tuning Announcements from #PASSsummit

Microsoft’s Joe Sack & Pedro Lopes held a forward-looking session for performance tuners at the PASS Summit and dropped some awesome bombshells.

Pedro’s Big Deal: there’s a new CXPACKET wait in town: CXCONSUMER. In the past, when queries went parallel, we couldn’t differentiate harmless waits incurred by the consumer thread (the coordinator, or teacher from my CXPACKET video) from painful waits incurred by the producers. Starting with SQL Server 2016 SP2 and 2017 CU3, we’ll have a new CXCONSUMER wait type to track the harmless ones. That means CXPACKET will finally really mean something.
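
Once you’re on one of the builds Pedro mentioned, a quick look at sys.dm_os_wait_stats shows how the two waits split out. Here’s a minimal sketch – it assumes you’re on a build where CXCONSUMER actually exists:

/* Compare the harmless consumer-side waits to the producer-side CXPACKET waits.
   CXCONSUMER only shows up on 2016 SP2 / 2017 CU3 and later builds. */
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type IN (N'CXPACKET', N'CXCONSUMER')
ORDER BY wait_time_ms DESC;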

Pedro Lopes explaining CXCONSUMER

Joe’s Big Deal: the vNext query processor gets even better. Joe, Kevin Farlee, and friends are working on the following improvements:

  • Table variable deferred compilation – so instead of getting crappy row estimates, they’ll get updated row estimates much like 2017’s interleaved execution of MSTVFs.
  • Batch mode for row store – in 2017, to get batch mode execution, you have to play tricks like joining an empty columnstore table to your query (a hedged sketch of that trick follows after this list). vNext will consider batch mode even if there are no columnstore indexes involved.
  • Scalar UDF inlining – so they’ll perform like inline table-valued functions, and won’t cause the calling queries to go single-threaded.
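
To make the second bullet concrete: here’s the kind of trick people play today in 2017 to coax batch mode onto a rowstore-only query – a rough sketch, where dbo.BatchModeTrick is a made-up helper table whose only job is to carry a columnstore index:

/* The empty table exists only so a columnstore index shows up in the plan,
   which lets the optimizer consider batch mode for the whole query. */
CREATE TABLE dbo.BatchModeTrick (ID INT NOT NULL, INDEX CCI_BatchModeTrick CLUSTERED COLUMNSTORE);

SELECT    v.VoteTypeId, COUNT_BIG(*) AS Records
FROM      dbo.Votes AS v
LEFT JOIN dbo.BatchModeTrick AS bmt ON 1 = 0  /* never matches any rows */
GROUP BY  v.VoteTypeId;

The promise in vNext is that you won’t need props like that anymore.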

Joe Sack peers into the hazy crystal ball

This is all fantastic news. If you’re in Seattle and you wanna learn more, Kevin Farlee will be doing a 20-minute demo at 1 PM in the Microsoft Theater in the exhibit hall. See you there!

Registration is open now for our new 2018 class lineup.

It’s the last week to save $2,997.50 on my Live Class Season Pass.

We launched our new class lineup last week, and response has been fantastic. Turns out you really liked the new Live Class Season Pass option.

If you’re on the fence about whether or not to pick one up, now’s your chance: prices go up by $1,000 on Friday. Instead of the class being $2,997.50, it’ll be $3,997.50 – and the price will keep going up until it’s back up to full price ($5,995).

If you need help convincing the boss:

Unlimited Jazz Hands

Here’s my new lineup:

And here’s our solid guest instructor lineup, all priced 50% off too:

See you in class!

Registration is open now for our new 2018 class lineup.

Partitioned Views, Aggregates, and Cool Query Plans

The Max for the Minimum

Paul White (obviously of course as always) has a Great Post, Brent® about Aggregates on partitioned tables

Well, I’m not that smart or good looking, so I’ll have to settle for a So-So Post about this.

There are actually quite a few similarities between the way a partitioned table and a partitioned view handle these things.

Building on previous examples with the Votes table converted into a partitioned view, let’s try a couple queries out.
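
If you haven’t read those posts, here’s a stripped-down sketch of what a partitioned view like dbo.AllVotes looks like. The yearly member table names are made up for illustration; the important parts are the CHECK constraints on CreationDate in each member table and the UNION ALL in the view:

/* Each member table has a CHECK constraint limiting CreationDate to one year,
   which is what lets SQL Server skip the tables it doesn't need. */
CREATE VIEW dbo.AllVotes
AS
SELECT Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate
FROM   dbo.Votes_2013
UNION ALL
SELECT Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate
FROM   dbo.Votes_2014
UNION ALL
SELECT Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate
FROM   dbo.Votes_2015;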

Selecting the global min and max gives me this query plan.

SELECT MIN(v.CreationDate), MAX(v.CreationDate)
FROM   dbo.AllVotes AS v;

And I know, it looks big and mean. Because it kind of is.

It keeps going, too.

BUT LOOK HOW LITTLE WORK IT DOES!

wtf I hate millennials now

Just like in Paul’s post that I linked to above, each one of the top operators is a TOP 1.

For the Min, you get a forward scan, and for the Max you get a backwards scan.

That happens once per table in the partitioned view.

Easier to visualize

If we focus on a single table, it’s easier to parse out.

SELECT MIN(v.CreationDate), MAX(v.CreationDate)
FROM   dbo.AllVotes AS v
WHERE  v.CreationDate >= '20140101'
       AND v.CreationDate < '20150101'
       AND 1 = ( SELECT 1 );

Simpleton

A lot of people may complain that there are two index accesses here, but since SQL Server doesn’t have anything quite like a skip scan – where it could hit one end of the index and then jump to the other end without reading everything in between – two tiny accesses like this are much more efficient than scanning the whole thing.

Thanks for reading!

Brent says: remember, a scan doesn’t mean SQL Server read the whole table, and a seek doesn’t mean it only read a few rows. It’s so hard to tell this stuff at a glance in execution plans, especially in estimated plans.

Registration is open now for our new 2018 class lineup.

Coming in SQL Server vNext: Approximate_Count_Distinct

Last week at the PASS Summit in Seattle, Kevin Farlee & Joe Sack ran a vNext demo in the Microsoft booth and dropped a little surprise. SQL Server vNext (2018?) will let you trade speed for accuracy.

I have had approximately all of the breakfast margaritas

They’re working on a new APPROXIMATE_COUNT_DISTINCT.

It would work like Oracle 12c’s feature, giving accuracy within about 4% by using HyperLogLog (PDF with serious math, didn’t read). They also showed it on a slide under “Approximate Query Processing,” and the way it was shown suggested that there might be other APPROXIMATE% features coming, too.
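
The exact syntax wasn’t shown, so treat this as a guess at what it might look like, modeled on Oracle’s version – the function name and behavior could easily change before it ships:

/* Hypothetical syntax: trade a little accuracy for a much cheaper distinct count. */
SELECT COUNT(DISTINCT UserId)             AS ExactDistinctUsers,
       APPROXIMATE_COUNT_DISTINCT(UserId) AS RoughDistinctUsers
FROM   dbo.Votes;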

If you have a use case for this and you’d be willing to run preproduction versions of SQL Server, contact us with info about your use case & database size, and we can put you in touch with the MS folks involved.

Registration is open now for our new 2018 class lineup.

Implied Predicate and Partition Elimination

>implying

Way back when, I posted about turning the Votes table in the Stack Overflow database into a Partitioned View.

While working on related demos recently, I came across something kind of cool. It works for both partitioned tables and views, assuming you’ve done some things right.

In this example, both versions of the table are partitioned in one year chunks on the CreationDate column.

That means when I run queries like this, neither one is eligible for partition elimination.

Why? Because the CreationDate column in the Posts table could have any range of dates at all in it, so we need to query every partition for matches.

/* Partitioned table: no date filter on Posts, so every partition of Votes gets checked */
SELECT COUNT(*) AS records
FROM   dbo.Votes AS v
JOIN   dbo.Posts AS p
ON p.Id = v.PostId
   AND p.CreationDate = v.CreationDate;


/* Partitioned view: same deal, every member table of AllVotes gets checked */
SELECT COUNT(*) AS records
FROM   dbo.AllVotes AS v
JOIN   dbo.Posts AS p
ON p.Id = v.PostId
   AND p.CreationDate = v.CreationDate;

How do we know that? Well, for the partitioned table, because all 12 partitions were scanned.

12! 12 years! Kinda.

For the partitioned view, well…

That’s not cool.

I think it’s obvious what’s gone on here.

Eliminationist

Are you ready for the cool part?

If I add a predicate to the JOIN (or WHERE clause) for the Posts table (remember that the Votes table is partitioned, and the Posts table isn’t), SQL Server is so smart, it can use that to trim the range of partitions that both queries need to access.

/* Partitioned table: the date filter on Posts gets implied onto Votes */
SELECT COUNT(*) AS records
FROM   dbo.Votes AS v
JOIN   dbo.Posts AS p
ON p.Id = v.PostId
   AND p.CreationDate = v.CreationDate
   AND p.CreationDate >= '20140101';


/* Partitioned view: the same implied predicate trims the member tables */
SELECT COUNT(*) AS records
FROM   dbo.AllVotes AS v
JOIN   dbo.Posts AS p
ON p.Id = v.PostId
   AND p.CreationDate = v.CreationDate
   AND p.CreationDate >= '20140101';

The partitioned table plan eliminates 8 partitions, and the date predicate gets pushed over to the Votes table as a seek predicate.

GR8

And the partitioned view is also smart enough to pick up on that and only scan the partitions I need.

Pants: On

Expected?

Logically, it makes total sense for this to happen. The optimizer is pretty smart, so this works out.

Thanks for reading!

Registration is open now for our new 2018 class lineup.


How to Test Your Corruption Alerts

You’ve been such a good database administrator.

You followed the setup checklist in our First Responder Kit. You ran sp_Blitz. You set up email alerts for common issues. You run CHECKDB as frequently as practical – weekly, or maybe even daily.

But you just assume it’s all working.

There’s an easy way to test: go to Steve Stedman’s Database Corruption Challenge and download one of the sample corrupt databases. Attach the corrupt database to your production SQL Server, and as they say on Bravo, Watch What Happens™.
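
If you want to be deliberate about the test, it’s roughly this – the database name and file paths here are made up, so adjust them to wherever you unzipped Steve’s files:

/* Attach one of the corrupt sample databases, then run CHECKDB and wait.
   Your Agent alerts for errors 823/824/825 and severity 19-25, plus your
   CHECKDB job failure notification, should light up shortly afterward. */
CREATE DATABASE CorruptionChallenge1
    ON (FILENAME = N'C:\Temp\CorruptionChallenge1.mdf'),
       (FILENAME = N'C:\Temp\CorruptionChallenge1_log.ldf')
    FOR ATTACH;

DBCC CHECKDB (CorruptionChallenge1) WITH NO_INFOMSGS;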

Andy Cohen’s CHECKDB alerts are working properly

Disclaimer: if there are multiple DBAs on your team, or if the discovery of a corrupt database triggers a mass panic in your company, then maybe this isn’t a good idea.

Disclaimer Part 2: This isn’t a good idea…it’s a GREAT idea. Test your fellow DBAs to see if they’re on their toes, or if they’re the kinds of DBAs who have all alert emails filed away to a folder automatically.

If you haven’t tested your corruption alerts recently, they’re probably not working. Hop to it.

Registration is open now for our new 2018 class lineup.

[Video] Office Hours 2017/11/08 (With Transcriptions)

This week, Richie and Tara discuss Richie’s current projects, Availability Groups, offloading reads and backups, applying service packs, whether production should ship Agent jobs to the disaster recovery server, security checks and scans in SQL Server, CPU performance issues when upgrading to SQL Server 2016 and 2017, database backups and restores, whether there’s currently a DBA shortage, and Richie’s favorite board game.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours – 11-8-17

 

Do we see more on-premises or cloud servers?

Tara Kizer: Michael has a question, “Currently the SQL Server related projects you’re working on – SQL Server on-premises or SQL Server cloud based implementations…” What are you doing, Richie, for all the projects that you’re working on? Because we don’t really work on projects, Erik and I, we’re just…

Richie Rump: Yeah, so I’m kind of purely cloud-based now, so doing a lot of work in Postgres in the cloud. Not SQL Server, oddly enough, doing a bunch with some of the other NoSQL databases in the cloud as well, but mainly dealing with the NoSQL side of things with Lambda and all that. So I haven’t played much with Azure over the past six months or so, it’s all been AWS and getting stuff up and running in there. So the stack that we’re using for Paste the Plan is mainly the stack that I’m messing with, plus C# with .NET locally. So a lot of fun stuff, but no, I haven’t been doing any SQL Server in the cloud. I probably should.

Tara Kizer: I’d say probably half of our clients, maybe, at least my clients are in the cloud, maybe a little bit less than half. And of those, most of them are using AWS EC2 specifically. I don’t think I’ve had too many Azure clients. I haven’t had any Google Cloud clients yet. Common theme for the cloud clients – all of them are experiencing I/O issues. Slow I/O is the theme on those. That’s not always the major culprit, but definitely slow I/O on those servers…

Richie Rump: I think that’s a big problem with the cloud, period, just slow I/O.

Tara Kizer: Yeah, I mean companies are looking to save money, they’re like, “Look at this cheap disk that we could use for our implementation here.” You know, you don’t get very many IOPs out of that.

Richie Rump: You should tell the managers, every time you see cheap, replace that with slow, and then reread the sentence and tell me what you think.

 

Is there a lot of age prejudice against DBAs?

Tara Kizer: Alright, Thomas asks, “Do you guys interview people for companies, and if so, do you see any of the prejudice towards older workers in database work?” We do offer that as a service. Brent is the one that has been handling that. I don’t know how you request it but look on our website and contact us if you’d like us to do that. I don’t know that I’m seeing any prejudice towards older workers, I don’t know. Brent’s not here to answer that question. He’s the one that tackles this. It’s definitely a service that we offer. A lot of our clients don’t have DBAs, they don’t have SQL Server knowledgeable people, or they don’t even have much of an IT staff. Some of these companies only have like two people for their entire IT department; so definitely if you guys are looking to hire a SQL Server developer or DBA and you don’t have the knowledge to be able to interview people, we do offer that as a service.

Richie Rump: Yeah, but I’ve read some articles about how that’s a problem, especially in Silicon Valley. Not only that, but diversity hires as well. So it’s interesting, and on the West Coast, it’s more of a problem. I haven’t run into it, specifically not in the database area, I haven’t really seen much of that at all. Typically I think your DBAs will tend to be a bit older because you don’t really go out of college knowing how to be a DBA. You know how to do other stuff and then you somehow fall into that DBA role.

(Brent says: nah, I’m not seeing any age discrimination at clients because DBA is one of those positions that really rewards (or requires) experience.)

 

 

Do you recommend Availability Groups for high availability?

Tara Kizer: Let’s see – Kush asks, “If there are no requirements for offload read, backups, et cetera, would you still recommend Availability Groups for high availability?” Well, since I’m the one on the call, I love Availability Groups. So yes, I implement it, as a production DBA, for everything. I know how to use it, I know how to configure it, I know what the issues are where it has caused production outages that I have experienced. Even if I don’t need to offload reads or backups – yeah, definitely.

It’s still a great HA and DR solution even without offloading reads that pertain to reporting or offloading backups. I don’t even like offloading backups. I don’t really see the need to offload backups. Just how much load are backups adding to your primary replica that you’re having to offload that task? There is latency on the secondary replicas – even on a synchronous commit secondary, there’s latency there. So I want my backups to be up to date to avoid as much data loss as possible.

Richie Rump: I wouldn’t recommend that at all, because I don’t know it at all. So I’d just say go to the cloud, SQL Server in the cloud, and guess what, that has all the backups for you, all the replication stuff – Aurora does all that – you could do all that stuff pretty easily in just a couple of clicks and, “Oh look, cluster, woo, fun…”

Tara Kizer: Yeah, and if you’re looking for just simpler HA, think about database mirroring. It still exists; yes, it’s been deprecated since SQL Server 2012, but it still exists in 2016, and I imagine 2017 hasn’t touched it yet. You know, use synchronous database mirroring and then have a witness, and that witness can be a SQL Server Express Edition, it could be this little tiny virtual machine with hardly any resources. You just need that third server out there – if you want to configure automatic failovers, that is – if you don’t care about that, then just the two servers are fine. But synchronous database mirroring is a great HA solution. It’s easier to implement and doesn’t have as many issues as availability groups do. It’s not complicated at all. Failover cluster instances are a bit complicated. Once you know it, it’s not so bad, but you have to have knowledge about clustering to implement it; it’s the same thing with availability groups.

Richie Rump: You know what I find funny is that our website has a failover backup. I’m like, “Oh how did it get so big?”

Tara Kizer: Yeah, it was crazy, for our website, reading Brent’s blog post – I think it’s on his Ozar.me website – talking about the strategy that he has for Black Friday, what he has to think about to make sure that it scales for Black Friday, and the issues he encountered last year, when it was down for a little while.

Richie Rump: I think that post was fascinating because he said, you know, we’re going to be doing this and I had to be notified, the website had to be down for a certain period of time for them to build up the new cluster, and then Brent told me the price and I’m like, “Okay, I guess, you know…”

Tara Kizer:  It’s shocking, the price…

Richie Rump: Yeah, you know, it’s like $6000 a month, and here I’m working on serverless products, that you know, “Hey, ten bucks man, there you go, there’s all the compute time that you need for the next month.” But it makes total sense that we want it to be rock solid so that people, when they all come rushing in, that we can handle that load.

 

What precautions should we take before patching?

Tara Kizer: Alright, a question from Shree: “Precautions to take before we apply service packs.” So I always recommend applying service packs in your test environments first. You get some burn-in time in the test environments, and then when you decide to do it to production – obviously you’re in a maintenance window – make sure you have solid backups. I don’t necessarily ever kick off the full backup job before applying a service pack, I just know that my full backup runs at night and I’ve got my transaction logs all day and we restore those in other environments.

I know that my backups are good – if you don’t know that your backups are good, maybe you should kick off a full backup. I’ve never had an issue with installing a service pack where it required me to do a complete restore or reinstall, things like that. But that really is the only precaution, besides making sure you run it in a test environment first.

 

What performance issues pop up in the cloud?

Tara Kizer: Alright, Michael asks, “I think Tara hinted at the answer, but what are the root cause issues regarding SQL Server performance in the cloud?” As far as I/O issues, the root cause is that, you know, you’re not buying the disks that have the I/O performance that your server needs. You’re seeing severe I/O slowness because you haven’t spent enough money on the disks that have been allocated to your box.

You have to be very careful, when you’re selecting your instance types, that you’re able to go up in IOPS if needed, because there are some instance types where it maxes out at a certain number, and that’s it. I know in AWS you can start striping disks, but it gets complicated. So I think that the ceiling on EC2 for a certain type – it’s either 1000 IOPS or 3000. I can’t remember what it is, and if you need more than that, you’ve got to start doing more complicated things: striping disks or picking a different instance type that allows you to select disks with higher IOPS.

Richie Rump: Yeah, I think the other part of that, especially talking about SQL Server in the cloud on the Azure side, is really index maintenance too. So they just announced this week that Azure SQL DB is going to have auto-tuning, and that’s going to be the default now for all new instances. And that makes sense, because people throw their stuff in the cloud and a lot of them probably don’t even know how to tune their own database. So it’s going to be auto-tuning there, and it’ll help out probably about 80% of folks. Now the other 20 will be like, “Well, why is this index there,” and blah, blah, blah, but you can turn that on and turn it off and there you go. But as usual, indexes are probably something you need to look at first when you’re tuning a query.

Tara Kizer: Yeah, and look at your wait stats if you’re able to. I don’t know if you can on some of these instance types. Maybe if you do Amazon RDS, maybe, I don’t know – but look at your wait stats and see what SQL Server is waiting on, and if it’s going to be disk, you’re going to see specific wait stats. What is your top wait stat? CXPACKET? That doesn’t really pertain to I/O, so maybe you are experiencing severe I/O slowness, but SQL Server is not having to go to the disk very often, is it really impacting you? You’ve got to know your wait stats as well.

 

I’m getting this error number…

Tara Kizer: Gordon asks, “Getting an error with availability groups…” Blah, blah, blah, I don’t know. I don’t have error numbers memorized. I would suggest posting your question over at Stack Exchange and asking over there, and provide some more detail so people can help you out there.

Richie Rump: Gordon, I’m sorry, I thought she would know that error right off the top of her head, so you and me are in the same boat, buddy.

Tara Kizer: Yeah, and I suspect there’s more to that error as well, that failed to join local availability group replica to AG – I need more info. Maybe it’s not at the right LSN yet.

 

Do I need AG knowledge before deploying AGs?

Tara Kizer: Kush follows up with his, or her, I’m not sure, availability group question from earlier, “Is the recommendation based on high availability group knowledge?” Yes, definitely, but that doesn’t mean that if you don’t have any knowledge, you can’t go ahead and deploy it. You’ve got to get the knowledge, though. And the way I learned was we installed it in a test environment, QA environment, load test environment. We thoroughly tested availability groups and then went to production many months later.

Kush prefers failover cluster instances over availability groups. Brent and Erik would agree with you. My experience is that I like availability groups better because they solve so many issues, and with failover cluster instances, I have to add another feature to get DR. FCIs provide HA, but they’re not providing DR for me.

Richie Rump: Yeah, do we have an availability groups training course coming soon?

Tara Kizer: Yes, I’m not sure – I know there’s one in December from Edwin, so yes, that was really popular the first time that he did it, and Brent was the person helping out on the call in case any issues happened. I watched a bit of it; it’s really good material.

Richie Rump: And our Black Friday sales are coming up soon for that, so if you want to jump in on that, stay up late.

Tara Kizer:  Yeah, and we’ve currently got great sales on the mastering series that Brent has, and I’m actually really excited about that because it’s all hands on and I learn by doing. Most of that is going to be in production, but get your own virtual machine, so hands on there.

Richie Rump: Yeah, I fail by doing, so that’s the way that works. Oh, that doesn’t work? Oh okay, move on.

 

I’ve got this Analysis Services problem…

Tara Kizer: Okay, Vlad asks something about analysis services, linked dimensions… I don’t know, I’m not a data warehouse person. I’ve actually never even touched analysis services, so I can’t even go there. Richie, have you done analysis services?

Richie Rump: I have, but very minimal, to the point where I could get something up and running and that’s it. I would go over to DBA.StackExchange.com and ask the question there. Yeah, I can’t answer that question with any sort of authority whatsoever.

Tara Kizer: Yeah, at my past companies we haven’t used analysis services. We’ve had data warehouses, but it’s always been like Cognos, Informatica, these other non-Microsoft products.

Richie Rump: I’ve used all of that…

Tara Kizer: Yep, and the thing to know about Cognos and Informatica, certain versions of that don’t really work well with availability groups where you need multi-subnet configuration and readable secondaries.

 

Do I need to pause replication when patching?

Tara Kizer: Alright, Shree asks, “Also, if you have replication enabled, do you just stop the replication agent jobs and apply patches?” I actually don’t even do that for patching. I just install as is – services are going to be stopped, all the patches will be applied, and all the restarting of services happens. I don’t stop replication, I don’t stop anything as far as patching goes. The only thing that we stop is the application access, you know, shutting down the application so that connections don’t keep coming in – that way you get graceful shutdowns, and this is a planned maintenance window. So I don’t stop agent jobs, replication, anything.

 

Where’s everybody at?

Tara Kizer: Michael says, “Thanks for doing Office Hours, very much appreciated.” Yeah, we’re out of questions here if anyone wants to get any last-minute questions in here, otherwise we’re going to end the call early. Sorry, it’s just Richie and I to field your questions today. Brent is enjoying Cabo. He’s in vacation mode and Erik is too. I think he’s in San Francisco, something like that.

Richie Rump: So, he’s in Cabo. I didn’t know that that’s where he went. I knew he went to Mexico, I didn’t know where in Mexico.  See, that’s the extent of the questions that I ask.

(Brent says: The Resort at Pedregal. Erika got a really good Black Friday deal last year. That’s basically how our money handling works: we run Black Friday sales, and we take that money, and we…blow it on vacations. You get training and pictures of Mexico. Everybody wins.)

Tara Kizer: I only know because we’re Facebook friends and I look at Facebook and I saw his pictures and stuff.

Richie Rump: I just skim – oh Brent, alright… Actually, my data warehouse platform of choice is actually Teradata. I got some training in that and got to understand how that thing works, and I was able to get like five billion rows into Teradata and actually do some pretty cool stuff up against it. So I actually probably know a little bit more Teradata stuff than I do the SSAS stuff; which is probably pretty embarrassing for a Microsoft guy, but I’m in AWS all day long, so I guess I’m not that much of a Microsoft guy in the first place.

Tara Kizer: Wasn’t there a link that you shared that someone was saying that Oracle and Teradata are Legacy products?

Richie Rump: Yeah, and I just wanted to keep scrolling and say, “What are you selling?” And that’s what it was. It was someone who was, you know, “We will help you get to the cloud and off your legacy stuff.” And it’s like, “Yeah, Teradata and Oracle aren’t legacy stuff.” You know, the people who need that kind of performance and power, they’re not going to the cloud; not yet, for the aforementioned speed and the size of the data and all that stuff. They’re not going up there yet.

 

What’s Kendra up to these days?

Tara Kizer: Alright, Michael says, “For those interested in an Ozar alumni, Kendra Little, one of the founders of the company, she has a website, SQLWorkBooks.com. Great hands-on info regarding query tuning and much more.” Definitely, there’s a lot of good stuff there, free stuff, and she’s building up all of her training material over at SQLWorkBooks.com. And if you’re a blog reader, which hopefully you are, LittleKendra.com is where she blogs.

Richie Rump: Yeah, and she’s on the $1 denomination [crosstalk]…

Tara Kizer on the query bucks

Tara Kizer: No, I’m the $1. Brent is too. Is he going to send that stuff to us? I hope so…

Richie Rump: He is, I’m sure, once he gets back from Mexico we’ll get all that stuff. But she is on a query buck, one of them, maybe five. I know I’m 20; I’m on the 20.

Tara Kizer: My client this week, they sent one of their people to Erik and Brent’s pre-con, so they had the query bucks, but they didn’t receive all the denominations. They had mine so they showed that to me on the video, but they didn’t have them all. Not sure how that worked at the pre-con…

Richie Rump: So just for people who are confused, we do have like actual printed out query bucks with all these denominations on it and different people are on the denomination. I think Paul White’s on the 100, as he probably should be.

 

Should DR servers have the same Agent jobs?

Tara Kizer: Alright, Greg asks, “Should production ship agent jobs to the disaster recovery server?” Yes. So if you’ve got HA/DR and you’re using an availability group, even your HA replica is another SQL Server instance. With failover cluster instances, you don’t have to worry about another server because it’s just one instance, but availability groups, log shipping, database mirroring – those are all another SQL Server instance that needs the jobs, it needs the logins, it needs all that stuff that is not being, quote unquote, mirrored to that other server.

Even transactional replication, if you’re using that as an HA or DR solution – which I do not agree with – but if you are, all those things require someone to set up those external objects. When I say external, I mean external to the user database: things that are stored in master and msdb, such as logins and jobs; those are the two most common things that you need to keep in sync, making sure that they always get updated on the other server. And I used to do this manually. There are probably tools out there that can help you script that stuff out, but you definitely need to keep those other servers in sync too if you ever have to do a failover to that other server.

Especially on an automatic failover synchronous commit availability group replica, because if it automatically fails over at two in the morning and you’re missing some critical jobs that you have in place, make sure that those are over there as well.
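
A quick way to spot drift is to compare job lists across the two servers – just a sketch here, where [DRServer] is a placeholder linked server name pointing at the DR instance:

/* Jobs that exist on the primary but are missing on the DR server. */
SELECT j.name
FROM   msdb.dbo.sysjobs AS j
EXCEPT
SELECT dr.name
FROM   [DRServer].msdb.dbo.sysjobs AS dr;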

 

Have you worked with Epic data warehouses?

Richie Rump: Yeah, so Wes Crocket has a question, “Have you guys ever worked with Epic Data Warehouse or reporting environments?” I can say no; I have successfully avoided Epic in my career, so there.

Tara Kizer: I haven’t either. I haven’t really touched anything data warehouse. I mean, the company has had data warehousing solutions, and so I had to get access to the systems. In Informatica, we had to set up a transactional replication publication with no subscriber, because Informatica connects directly to the distribution database; that was crazy and it certainly created some blocking issues for us. But yeah, I haven’t really touched much of data warehouse.

A lot of the client servers that I’ve looked at – and I shouldn’t say a lot, less than 10% of my clients – have been data warehouse servers, but not when it comes to Analysis Services or the actual data warehouse product. I’m looking at the SQL Server instance.

 

How do you do security scans in SQL Server?

Tara Kizer: Okay, Shree asks, “How do you do security checks and scans in SQL Server?” I don’t, that’s how I do it. Ask someone else. I’ve certainly been at companies where we’ve had security audits, and they require some pretty strict things and we would just say, “Not going to do that, not going to do that.” We don’t specialize in security here, so I’m not comfortable talking about that topic.

Richie Rump: Yeah, I think Denny Cherry has a book on SQL Server security that we recommend you check out.

Tara Kizer: Alright, Shree is asking, “sp_Blitz or any other tools?” No, that’s not going to help you out – none of the Blitz stuff is going to help you out with another – oh it’s asking about security. Sorry, these things aren’t linked together in the questions panel. No, not going to help you there.

Richie Rump: No, and I’m not going to write that either, so… You’re welcome to contribute, sp_BlitzSecurity.

Tara Kizer: I think some people have actually asked for that, or thought about adding that to the blitz stuff and Brent said no, let’s keep that out. I’m not positive on that though.

Richie Rump: Yeah, it seems like a completely different script, and a script that I want nothing to do with, because I don’t want to do anything with that. Not what I want to do, sorry folks.

(Brent says: exactly, we just don’t specialize in security. You don’t want amateurs doing your security, not in this day and age. You wouldn’t wanna hire a security team to fix a performance problem, either – it’s important to understand the strengths of who you’re hiring. If someone claims to be great at everything, then they’re probably not even good at anything. They don’t even know what they don’t know.)

 

Should we skip SQL Server 2016?

Tara Kizer: Adam asks, “My company started upgrading to 2016 but halted due to experiencing increased CPU performance. Now SQL 2017 is being touted as being even better performing. Should we consider going straight to 2017 and scrap the 2016 upgrade?” Well, you need to look into the new cardinality estimator. I’m assuming that you’re on 2012 or lower and I think maybe what you’re experiencing with the increased CPU utilization on 2016 is performance issues due to the new cardinality estimator that was introduced in 2014. So I would advise that you look into that.

There are things that you can do to get the old cardinality estimator. I don’t advise changing the compatibility level to be lower or adding the system startup trace flag. You need to figure out what queries are having issues, find the CPU culprits – and you can use sp_BlitzCache sorted by CPU and by average CPU. Use those to determine your CPU offenders, and then figure out what’s wrong with them. Do those need to use the old cardinality estimator? Because you can add the QUERYTRACEON trace flag to individual queries, so that’s what we recommend. We don’t recommend changing the cardinality estimator at the instance level, be it compatibility level or the trace flag at the instance level. Instead, find the culprit queries and see if downgrading the cardinality estimator on those specific ones is helpful.

I don’t think that upgrading to 2017 versus 2016 is going to fix this. You need to figure out what’s happening here, and I suspect just based on experience with clients and reading blog articles out there, a lot of people are having issues with the new cardinality estimator.
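
For reference, the per-query approach Tara describes looks roughly like this – a sketch only, using the Stack Overflow Posts table as a stand-in for your problem query:

/* 2016 SP1 and later: hint the legacy cardinality estimator for one query. */
SELECT   p.OwnerUserId, COUNT(*) AS Posts
FROM     dbo.Posts AS p
GROUP BY p.OwnerUserId
OPTION   (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

/* Older builds: trace flag 9481 per query (needs sysadmin, or a plan guide). */
SELECT   p.OwnerUserId, COUNT(*) AS Posts
FROM     dbo.Posts AS p
GROUP BY p.OwnerUserId
OPTION   (QUERYTRACEON 9481);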

 

How can I restore multi-terabyte databases faster?

Tara Kizer: Alright, Dee asks, “We have large databases, 1TB on one primary file. They are all simple recovery model. They take a long time to back up even with compression. How hard would they be to restore if broken into multiple files? Very limited staff and no DR…” For very large databases, I still think that you should do SQL Server backups, but you may want to look into snapshotting technologies that can back up a 20TB database in just – snapshots, it can do it in a second. So SAN snapshot technologies, and making sure that those snapshots are – I forget what it’s called, but they are using the VSS thing, and so they are valid SQL Server backups. And you can copy over massive databases to another server or test server in just a few minutes via snapshotting technologies.

As far as breaking the database up into multiple files, I don’t know that that’s going to help. Breaking your backups into multiple files might help. One of the tests that I did when we did the Dell DBA Days last year in Austin was to test backup files, and I think it was that four backup files was the best number to have for backup performance. Going higher wasn’t very helpful and going lower wasn’t very helpful, but four was like the sweet spot. So take a look at that. But for large databases, for the full backups, I would look at other technologies to help you out with that.
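
For anyone wondering what striping a backup across multiple files looks like, it’s just this – the paths are made up, and your sweet spot may not be four:

/* Stripe the full backup across four files on (ideally) fast storage. */
BACKUP DATABASE YourBigDatabase
TO  DISK = N'F:\Backups\YourBigDatabase_1.bak',
    DISK = N'F:\Backups\YourBigDatabase_2.bak',
    DISK = N'F:\Backups\YourBigDatabase_3.bak',
    DISK = N'F:\Backups\YourBigDatabase_4.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;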

 

What database podcasts do you listen to?

Tara Kizer: Alright, let’s see, Chris asks, “Do you have any good tech database podcasts that you regularly listen to?” Richie can probably answer that.

Richie Rump: Actually, I’m more of a content creator than I am a consumer, and now that I don’t travel anymore, my podcast listenage has just gone straight down the tubes. But yes, I do know a bunch of podcasts. One is mine, Away From The Keyboard, where we interview technologists but don’t talk about technology. We just kind of get behind the person and the technology and really talk more about who they are and what they’ve done in their career. I know that SQL Server Radio, I think that’s Matan Yungman’s podcast. They’re out in Israel, they do a really good job there. SQL Down Under – I don’t know how frequently that is being produced, but I remember listening to that a lot. SQL Compañeros, that’s another one that’s out there as well. Am I missing one? I’m pretty sure I’m missing more than one.

But Kendra has one that’s considered a podcast, but it’s really a videocast, she’s got hers as well. So yeah, there’s a bunch of them that are out there. Just do a Google search on SQL Server podcasts and there will be a bunch of them that come up, and probably some that I mentioned. Check them out, see which ones you like. Listen to them all, listen to one of them, don’t listen to any of them. Listen to mine though, that’s the one that really matters.

 

Is there a DBA shortage?

Tara Kizer: Dorian asks, “I saw an article about there being a DBA shortage. Are you seeing any of that?” I’m interested in seeing how the job market is doing, so I look at all the LinkedIn emails and all the recruiter emails and definitely in the market that I’m in, San Diego, California, SQL Server DBA jobs are remaining open for a very long time. We’re talking like weeks upon weeks upon weeks. Definitely a DBA shortage here.

Maybe companies need to start investing in the employees that they have and getting them trained to be able to do the senior-level work, because that’s what all the job postings I’m seeing are for: senior DBAs. I’m not seeing intermediate or junior postings popping up. So the shortage is on the senior side. Maybe get someone who might be interested from the developer side, because some developers are interested in switching over to being DBAs; not a lot, usually. It’s usually coming from the sysadmin side, the Windows administrators, that often want to jump over to the DBA side. Invest in them, invest internally, and get those people up to senior DBA level.

 

Does SQL Server have dynamic partitioning like Oracle?

Tara Kizer: Sreejith asks, “Anything close to interval partitioning from Oracle, or a dynamic partition feature, in any of the newer versions of SQL Server 2016/17?” I don’t even know what those features do, so I can’t tell you if there’s an equivalent in SQL Server or not. If Erik were here, he might be able to help.

Richie Rump: Erik would be able to know that from the Oracle side. But I haven’t used Oracle since my second job out of college, which was twen… years ago, so yeah. I’m behind on my features just a little bit.

 

Is San Diego a nice place to live?

Tara Kizer: Ben asks, “Is San Diego a nice place to live?” It sure is. That’s why our mortgages are so high and rent is so high. In my area, you can’t even get a two bedroom apartment for less than $2200 per month. And I live in just middle-class America here, this is nothing fancy whatsoever. Definitely not a bad area, it’s just regular old middle-class people. It did rain yesterday.

Richie Rump: Yeah, but your baseball team is terrible. I mean really, I mean…

Tara Kizer: We don’t have a football team…

Richie Rump: yeah, your football team went up and left for no reason. It’s like, I don’t know man, you can’t have baseball and football.

Tara Kizer: Now that the Chargers are with the trader Las… I’m not a sports person. I grew up on sports, heavily sports family, but sports is not my thing to watch. Anyway, everybody’s into the San Diego State Aztec football team now because they’re really good, so they’ve got a lot of attendance at their local football games.

Richie Rump: Yeah, but are they 8-0? No they’re not…

Tara Kizer: I don’t know, I don’t follow…

Richie Rump: My school is, that’s right, class of 97, University of Miami. I’m not saying we’re back, but if we win today, if we win this weekend then we’re back.

 

Why do you wear a winter coat in San Diego?

Tara Kizer: One last comment from Thomas, “You live in San Diego and wear a winter coat inside?” Yes, I am wearing a down puffy jacket, and this thing’s pretty significant. I’m actually getting kind of warm here. But it’s chilly here in the mornings and the sun doesn’t hit my house until the afternoon, so where my desk is, it stays cool all day long. I usually don’t have to run the air conditioning until the afternoon, even when it’s 100 degrees outside.

Richie Rump: Yeah, I totally would typically wear a hoodie, except I’ve got the lights on me right now, so I’ll probably put it on after I bring the lights down a bit. But yeah, us warm weather people are really crazy because it gets below like 78 and we’re like, “Jacket time, suckers. I’m not getting cool for this, no.”

Tara Kizer: Alright, Greg says, “22 above and I’m in Minnesota.”

Richie Rump: You enjoy that. Yeah, my refrigerator’s not even 22, you know.

Tara Kizer: Alright guys, that’s the end of this call, we’ll see you next week.

Registration is open now for our new 2018 class lineup.

#TSQL2sday: Brad McGehee Made a Difference in My Career.

For this month’s T-SQL Tuesday, Ewald asked who’s made a difference in our careers.

When I first got started out in SQL Server, all I had was books and Books Online. Back then, neither of them were particularly well-indexed, nor were they up to date.

Then I found a web site that turned things around.

Brad McGehee wrote web posts that were smart, easy to understand, and straight to the point. I was able to get in, get my problems solved, learn a little, and get back to work – all for free. (I’m not linking to the site because Brad sold it, and the new owners haven’t done a good job of keeping it up, and a lot of the advice is irrelevant – hey, it’s 15+ years old now!)

Brad then became an evangelist for Red Gate. When they wanted to send a DBA to space, Brad starred in a series of videos that challenged DBAs to answer trivia questions. (I had a lot of laughs out of that, especially Brad’s awesome behind-the-scenes videos.)

In 2012, Brad’s family situation changed, and he did the admirable thing: he gave up public life to be a “regular” DBA again and focus on his family. But as far as I’m concerned, Brad will never be a “regular” DBA – he’s an inspiration to me.

Every time I run into Brad at SQL Server conferences, I’m happy to see him. He’s just a really nice guy who helped set my database career in motion, and I can’t thank him enough. I’ve said it privately, but I’ve never said it publicly, so here you go: thank you, Brad. I thank you, DBAs from the 2000s all thank you, and your family thanks you. You do good work.

First Responder Kit Release: The Ides Of November

IB Rewop

This is a cleanup release to get some of the pull requests in that didn’t make it in before the precon.

There’s also a secret unlockable character that Brent is blogging about next week!

Please clap.

You can download the updated FirstResponderKit.zip here.

sp_Blitz Improvements

  • #1199 We’ve updated the unsupported builds list! Now you can numerically gauge how bad you are at patching SQL Server. Thanks to @tony1973 for letting us know about this. Apparently he’s really good at patching SQL Server!
  • #1205 We now guarantee a total lack of duplicate Check IDs. Can a primate get a primary key, here? Thanks to @david-potts for letting us know about this one.
  • #1207 If you have read only databases on your server, we’ll no longer gripe about a lack of DBCC CHECKDB. Thanks to @Gavin83 for coding this one!

sp_BlitzCache Improvements

  • #1230 Oh, that missing parameter in the dynamic SQL. Thanks to @GrzegorzOpara for letting us know!

sp_BlitzFirst Improvements

Big breaking change here: the new @OutputTableRetentionDays parameter defaults to 7 days. If you’re keeping more data than that in your BlitzFirst% output tables, set this parameter in your jobs right away, or else data older than 7 days is going to get deleted.

  • #1232 No one likes retaining things. Actually, people like retaining everything other than water. Which is weird, since HYDRATION IS IMPORTANT! I’m not a doctor, but this new @OutputTableRetentionDays parameter is a good way to purge old perf data. Put your data on a diet.
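
If your Agent job call looks something like the one below, just tack the new parameter on. The output database/schema/table names here are placeholders – keep whatever your job already uses:

/* Sketch of an sp_BlitzFirst logging call with the new retention parameter. */
EXEC dbo.sp_BlitzFirst
     @OutputDatabaseName = N'DBAtools',
     @OutputSchemaName = N'dbo',
     @OutputTableName = N'BlitzFirst',
     @OutputTableRetentionDays = 31;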

sp_BlitzIndex Improvements

Nothing this time around.

sp_BlitzWho Improvements

Nothing this time.

Next time around, we’re going to be pruning the default list of columns that it returns, and adding an @ExpertMode that returns all of them. If you have opinions, now’s the time to let us know.

sp_DatabaseRestore Improvements

@ShawnCrocker did a bang-up job adding and fixing a bunch of stuff, as only someone who actually needs to restore databases all the time can do!

  • #1198 The @StopAt parameter was being ignored for Fulls and Diffs — no more! (See the sketch after this list for where that parameter fits in.)
  • #1192 There was a dependency bug when the file-move parameters were left blank. Now we look for the default instance path.
  • #1180 You can now change the recovery model of a database after restoring it!
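
Here’s the kind of point-in-time restore call where the @StopAt fix matters. Treat it as a sketch from memory – the paths, database name, and timestamp are placeholders, and you should double-check the parameter names and @StopAt format against the documentation in the repo:

/* Restore the most recent full, then logs, stopping at a point in time. */
EXEC dbo.sp_DatabaseRestore
     @Database = N'StackOverflow',
     @BackupPathFull = N'\\FileShare\Backups\StackOverflow\FULL\',
     @BackupPathLog  = N'\\FileShare\Backups\StackOverflow\LOG\',
     @StopAt = N'20171115083000',
     @ContinueLogs = 0,
     @RunRecovery = 1;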

sp_BlitzBackups Improvements

Nothing this time.

sp_BlitzQueryStore Improvements

Nothing this time.

sp_AllNightLog and sp_AllNightLog_Setup Improvements

  • #1133 Skips over attempting to restore system databases, because what kind of maniac would want that to happen? Thanks to @dalehirt for this one!

sp_foreachdb Improvements

Nothing this time.

You can download the updated FirstResponderKit.zip here.

We’re Coming to London! Announcing Our SQL Bits Pre-Con.

Going to SQL Bits in London this year? Join me & Erik Darling on Wednesday at our all-day pre-con session, Expert Performance Tuning for SQL Server 2016 & 2017. Here’s the abstract:

Your job is making SQL Server go faster, but you haven’t been to a performance tuning class since 2016 came out. You’ve heard things have gotten way better with 2016 and 2017, but you haven’t had the chance to dig into the new plan cache tools, DMVs, adaptive joins, and wait stats updates.

In one fun-filled day, Brent Ozar and Erik Darling will future-proof your tuning skills. You’ll learn our most in-depth techniques to tune SQL Server leveraging DMVs, query plans, sp_BlitzCache, and sp_BlitzFirst. You’ll find your server’s bottleneck, identify the right queries to tune, and understand why they’re killing your server. If you bring a laptop with SQL Server 2016 or 2017, and 120GB free space, you can follow along with us in the Stack Overflow database, too.

You’ll go back to the office with free scripts, great ideas, and even a plan to convince the business to upgrade to SQL Server 2016 or 2017 ASAP.

Can’t upgrade to 2016? We’ll even show you memory grant and compilation tracking tricks that work in newer service packs for 2012 and 2014.

This is not an introductory class: you should have 2-3 years of experience with SQL Server, reading execution plans, and working on making your queries go faster.

Attendees will even get a one-year Recorded Class Season Pass – with that, the pre-con pays for itself!

Check out the list of pre-cons, then register for SQL Bits. This same pre-con sold out at Summit long before the conference even started, so don’t wait until the last minute.

See you in London!
