
Locks Taken During Indexed View Modifications


Frankenblog

This post has been nagging at me for a while, because I'd seen it hinted at in several other places, but never written about beyond passing comments.

A long while back, Conor Cunningham wrote:

This same condition applies to indexed view maintenance, but I’ll save that for another day :).

AFAIK he hasn’t written about it or typed an emoji since then.

There’s also a passing comment from Paul White in this Stack Exchange answer:

Range locks taken when maintaining an indexed view referencing more than one table.

Just what are Range locks? Great question!

Ranger Things

So what causes Range Locks? Just ask Sunil. He knows everything (this assumes the serializable isolation level):

Equality Predicate

If the key value exists, then the range lock is only taken if the index is non-unique. In the non-unique index case, the ‘range’ lock is taken on the requested key and on the ‘next’ key.

If the ‘next’ key does not exist, then a range lock is taken on the ‘infinity’ value. If the index is unique, then a regular S lock is taken on the key.

If the key does not exist, then the ‘range’ lock is taken on the ‘next’ key both for unique and non-unique index.

If the ‘next’ key does not exist, then a range lock is taken on the ‘infinity’ value.

Range Predicate (key between the two values)

‘Range’ lock on all the key values in the range when using ‘between’.

‘range’ lock on the ‘next’ key that is outside the range. This is true both for unique and non-unique indexes. This is to ensure that no row can be inserted between the requested key and the one after that. If the ‘next’ key does not exist, then a range lock is taken on the ‘infinity’ value.
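
If you want to see those rules in action before any indexed views get involved, here's a minimal sketch (assuming the same StackOverflow2010 database used below). Run the whole thing in one window; the second query peeks at its own session's locks before the rollback.

USE StackOverflow2010;
GO

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRAN;

    /* A range predicate under serializable... */
    SELECT COUNT(*)
    FROM dbo.Users AS u
    WHERE u.Id BETWEEN 22656 AND 22700;

    /* ...holds RangeS-S locks on the qualifying keys, plus the 'next' key past the range. */
    SELECT dtl.request_mode, dtl.resource_type
    FROM sys.dm_tran_locks AS dtl
    WHERE dtl.request_session_id = @@SPID
    AND   dtl.request_mode LIKE 'Range%';

ROLLBACK;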

First, Testing an Indexed View with One Table

I’m going to use the small version of Stack Overflow for expediency. Here’s a test setup and update for an indexed view with just one table.

USE StackOverflow2010;
GO 

CREATE OR ALTER VIEW dbo.UserPostScore
WITH SCHEMABINDING
AS
SELECT u.Id,
       u.DisplayName,
       SUM(CONVERT(BIGINT, u.Reputation)) AS TotalRep,
       COUNT_BIG(*) AS ForSomeReason
FROM dbo.Users AS u
WHERE u.Reputation > 1000
GROUP BY u.Id,
         u.DisplayName;
GO 

CREATE UNIQUE CLUSTERED INDEX cx_ups ON dbo.UserPostScore (Id);
GO 

BEGIN TRAN
UPDATE u
SET u.Reputation += 100
FROM dbo.Users AS u
WHERE u.Id BETWEEN 22656 AND 25656;
ROLLBACK
GO

While that transaction is still open (run everything up to, but not including, the ROLLBACK), I’m going to use this query from another session to see what locks are held. I know it’s ugly.

But most DMV queries are.

SELECT    dtl.request_mode,
          CASE dtl.resource_type
               WHEN 'OBJECT'
               THEN OBJECT_NAME(dtl.resource_associated_entity_id)
               ELSE OBJECT_NAME(p.object_id)
          END AS locked_object,
          dtl.resource_type,
          COUNT_BIG(*) AS total_locks
FROM      sys.dm_tran_locks AS dtl
LEFT JOIN sys.partitions AS p
    ON p.hobt_id = dtl.resource_associated_entity_id
WHERE     dtl.request_session_id = 54 /* the session running the update; swap in yours */
AND       dtl.resource_type <> 'DATABASE'
GROUP BY  CASE dtl.resource_type
               WHEN 'OBJECT'
               THEN OBJECT_NAME(dtl.resource_associated_entity_id)
               ELSE OBJECT_NAME(p.object_id)
          END,
          dtl.resource_type,
          dtl.request_mode;

Here’s what shows up for me:


Some rather expected locks, I think. Exclusive and Intent Exclusive for the indexed view and the Users table.

But what if we change the indexed view?

Double Down

I’m going to join Users to Posts here, to fulfill the prophecy, then run my update.

CREATE OR ALTER VIEW dbo.UserPostScore
WITH SCHEMABINDING
AS
SELECT u.Id,
       u.DisplayName,
       SUM(CONVERT(BIGINT, u.Reputation)) AS TotalRep,
       SUM(p.Score) AS TotalScore,
       COUNT_BIG(*) AS ForSomeReason
FROM dbo.Users AS u
    JOIN dbo.Posts AS p
        ON u.Id = p.OwnerUserId
WHERE u.Reputation > 1000
GROUP BY u.Id,
         u.DisplayName;
GO 

CREATE UNIQUE CLUSTERED INDEX cx_ups ON dbo.UserPostScore (Id);
GO

BEGIN TRAN
UPDATE u
SET u.Reputation += 100
FROM dbo.Users AS u
WHERE u.Id BETWEEN 22656 AND 25656;
ROLLBACK

Now when I look at the locks, I see something new!


My indexed view has exclusive range locks taken out on it.

There’s nothing you can do about this, either. This goes beyond normal lock escalation: to keep the view in sync, the isolation level has effectively been escalated to serializable.

If I try to query the view, I’ll get blocked unless I add a where clause to specifically avoid the locked range of keys, like this:

SELECT COUNT(*)
FROM dbo.UserPostScore AS ups
WHERE ups.Id < 22656;

What’s The Point?

There’s no such thing as a free index, and that applies to indexed views as well.

To learn more about how to interpret indexed view maintenance, check out this post by Paul White.

Thanks for reading!

Brent says: I’ve always been a little nervous doing indexed views across multiple tables, but now I’m even more hesitant. It’s the kind of trick that’s absolutely amazing for selects when it works, but completely terribad for concurrency.

Free online training next Friday - register now for GroupBy September.


It’s Okay If You Don’t Create Statistics.


Along with the ability to create indexes (which you most definitely should be doing), SQL Server gives you the ability to create statistics. This helps SQL Server guess how many rows will come back for your searches, which can help it make better decisions on seeks vs scans, which tables to process first, and how much memory a query will need.

And I never do that.

I mean sure, I do it in training classes to show as a demo – and then I turn right around and show why it doesn’t usually get you across the query tuning finish line.

I used to think I was missing some kind of Super Secret Query Tuning Technique®, like I was just somehow doing it wrong, but then I stumbled across this note in Books Online’s how-to-create-statistics page and suddenly everything made sense:

Before you begin, let the engine handle it

Let me rephrase: before you even start playing around with statistics, make sure you haven’t taken away SQL Server’s ability to do this for you.

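
Taking my own advice: here's a quick way to check that you haven't turned that off. Just a sketch; swap in your own database name.

SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = N'StackOverflow2010'; /* your database here */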

I like to make fun of a lot of SQL Server’s built-in “auto-tuning” capabilities that do a pretty terrible job. Cost Threshold for Parallelism of 5? MAXDOP 0? Missing index hints that include every column in the table? Oooookeydokey.

But there are some things that SQL Server has been taking care of for years, and automatically creating statistics is one of ’em. If you frequently query a column, you’re gonna get a statistic on it, end of story. You might have edge case scenarios where those statistics aren’t enough, but when you do, the fix isn’t usually to create a statistic.

The fix is usually to create the right indexes, which also happen to give you a free statistic anyway – but the index means that the data access itself will be faster.

Creating stats just tells SQL Server how bad the query is going to suck. Your users aren’t satisfied with that – they want the query to actually suck less. That’s where creating indexes comes in, and you should start your query tuning efforts there first.

Going to Summit? Here’s a Calendar Invite for My Session.


At Summit, on Wednesday at 1:30PM in room 6C, I’m presenting “Getting Better Query Plans by Improving SQL’s Estimates.”

Here’s the abstract:

Like Books Online, but with jazz hands

You’ve been writing T-SQL queries for a few years now, and when you have performance issues, you’ve been updating stats and using OPTION (RECOMPILE). It’s served you well, but every now and then, you hit a problem you can’t solve. Your data’s been growing larger, your queries are taking longer to run, and you’re starting to wonder: how can I start getting better query plans?

The secret is often comparing the query plan’s estimated number of rows to actual number of rows. If they’re different, it’s up to you – not the SQL Server engine – to figure out why the guesses are wrong. To improve ’em, you can change your T-SQL, the way the data’s structured and stored, or how SQL Server thinks about the data.

This session won’t fix every query – but it’ll give you a starting point to understand what you’re looking at, and where to go next as you learn about the Cardinality Estimator.

See you there, and if you’re not one of the 150+ folks who’ve signed up, there’s still space in our Tuesday pre-con, Performance Tuning in 21 Demos. Attendees get a free year of SQL ConstantCare®, too – that bonus pays for the price of the pre-con!

Forwarded Fetches and Bookmark Lookups


Base Table

When you choose to forgo putting a clustered index on your table, you may find your queries utilizing forwarded fetches — SQL Server’s little change-of-address form for rows that don’t fit on the page anymore.

This typically isn’t a good thing, though. All that jumping around means extra reads and CPU that can be really confusing to troubleshoot.

All sorts of things might get blamed, like parameter sniffing, out of date stats, eye tax frog man station, and more common bogeypersons that take the blame for all SQL Server performance issues.

Dummo

CREATE TABLE dbo.el_heapo
(
    id INT IDENTITY,
    date_fudge DATE,
    stuffing VARCHAR(3000)
);

INSERT dbo.el_heapo WITH (TABLOCKX)
    (date_fudge, stuffing)
SELECT DATEADD(HOUR, x.n, GETDATE()),
       REPLICATE('a', 1000)
FROM
(
    SELECT TOP (1000 * 1000)
           ROW_NUMBER() OVER (ORDER BY @@SPID)
    FROM sys.messages AS m
    CROSS JOIN sys.messages AS m2
) AS x (n);

CREATE NONCLUSTERED INDEX ix_heapo ON dbo.el_heapo (date_fudge);
Here’s a heap. It’s not very special. When we examine it with sp_BlitzIndex, there’s not much going on.

EXEC master.dbo.sp_BlitzIndex @DatabaseName = N'Crap',
                              @SchemaName = 'dbo',
                              @TableName = 'el_heapo';


When we run a simple query against it, we get a plan with a bookmark lookup.

SELECT *
FROM dbo.el_heapo AS eh
WHERE eh.date_fudge BETWEEN '2018-09-01' AND '2019-09-01'
AND 1 = (SELECT 1)
OPTION(MAXDOP 1);

The MAXDOP 1 hint is just to make reading the tea leaves easier.


When we update the table, the fun starts. By fun I mean terribleness.

UPDATE eh
SET eh.stuffing = REPLICATE('z', 3000)
FROM dbo.el_heapo AS eh
WHERE eh.date_fudge BETWEEN '2018-09-01' AND '2019-09-01'
OPTION(MAXDOP 1)

Now sp_BlitzIndex shows something new!


We racked up ~6300 forwarded fetches just during the update.

If we re-run our original select query, that number doubles.

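
If you don't have sp_BlitzIndex handy, the forwarded fetch counter comes from sys.dm_db_index_operational_stats (index_id 0 is the heap):

SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.forwarded_fetch_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID(N'dbo.el_heapo'), 0, NULL) AS ios;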

If we run Profiler to capture some query metrics, because it’s late and I’d rather enjoy this wine than use XE, the issues show themselves a bit more.


I think the change in metrics before and after the update speaks for itself.

This is all on a relatively small heap, with queries that touch a small number of rows.

My select only returns a little under 9000 records, but it takes ~6000 extra reads to get them with the fetches involved.

CPU doesn’t do much, but it does show up where it hadn’t before.

Fear Of Endings

Heaps have their uses. Some of my favorite people love heaps.

But you have to be really careful when choosing which tables you’re going to leave clustered indexes off of.

Typically, if any part of the workload involves updates or deletes, a heap is a bad idea.
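
And if you're already stuck with a heap full of forwarded records, there are two common ways out. A sketch, not a prescription:

/* Option 1: rebuild the heap in place to clean up forwarded records (SQL Server 2008+). */
ALTER TABLE dbo.el_heapo REBUILD;

/* Option 2: give the rows a permanent address with a clustered index. */
CREATE CLUSTERED INDEX cx_heapo ON dbo.el_heapo (id);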

Thanks for reading!

[Video] Office Hours 2018/9/5 (With Transcriptions)


This week, Brent, Tara, Erik, and Richie discuss whether you should keep autoshrink on, AG multisite failovers, next version of SQL Server, SQL Server 2017 Vulnerability Assessment, SELECT INTO vs INSERT INTO, using Node.js with SQL Server, Using PowerShell for DBA tasks, the future of SQL Server for Linux, memory gateway query compile, VB.NET, patching, and more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours Webcast – 2018-09-05

 

Is auto-shrink good when…

Brent Ozar: Rich is asking a question, but this also means Rich is not doing his homework from the training class, because Rich is in our mastering index tuning class right now. He’s doing such a good job on homework that I will let it pass. Rich says, “If you have servers that you can rarely get onto for maintenance, can you ever think of a time when you would keep auto-shrink on?”

Tara Kizer: Never; not even in a test environment.

Brent Ozar: And why not?

Tara Kizer: Why? I would wonder why you want to do this. There’s only bad things that happen. What issue are you trying to solve?

Brent Ozar: Yeah, especially if you can’t get to them for maintenance, that makes me think that probably they’re growing on their own. Just let them grow and go for it.

 

Why aren’t my multi-site AG failovers fast?

Brent Ozar: Sri asks, “Question on Availability Group multi-site failover – when we fail over to the synchronous node, the databases fail over without any issues, but the web app that uses JDBC and the listener isn’t reconnecting before it times out. What do we do to change the application’s max timeout?”

Tara Kizer: So it would be the connection timeout in the connection string. So there’s command timeout and then connect timeout, or connection timeout it might be. So those are two different things. You’d want connection timeout, but I suspect maybe your multi-subnet failover connection parameter, you either don’t have it turned on in the connection string, or it’s not working. I would wonder about your database driver and making sure that – you said JDBC JBoss. So JDBC 4.0 has multi-subnet failover. I would imagine you’re on that, but something’s not working there. You need to figure out – you know, check your connection string and see if multi-subnet failover equals true is in there, because that might be what you’re missing here.

Brent Ozar: And it’s worth to know too, just when your application sends a command, if they send a command or if their command is in-flight during the failover, it’s just going to time out. And it’s up to your application to go through and retry the same command. Like, JDBC is going to return a, hey, you know, your command timed out, and you have to go through and retry it. I say you, it’s your developers, not you.

 

Any word on the next version of SQL Server?

Brent Ozar: Mike says, “Any word on the next version of SQL Server?” The word is future… So we don’t know, of course, and anybody – I always say that anyone who knows isn’t allowed to say. Anyone who says, that means they don’t know. But I would say that Microsoft’s Ignite Conference is coming the third week of September, I think. It’s like the 20th through 24th. Last year at Ignite, they announced the next version of SQL Server and gave public previews. So since it seemed like they wanted to move to a yearly release cadence, that might make sense that they would announce the same thing at Ignite here in September.

Richie Rump: So is that going to be in Orlando, or?

Brent Ozar: You know, I don’t know. I don’t know where Ignite is at.

Richie Rump: Well let’s just hope they don’t put it in Chicago again because that didn’t go well last time.

Brent Ozar: I heard from several Microsoft people that they were, like, swearing on a bible they would never do Chicago again, ever, it was so bad.

Tara Kizer: What was the issue?

Brent Ozar: The food was unbelievably bad. It was legendarily bad because they didn’t have enough catering supplies to deal with like 30,000 people coming in at once, so they gave out cold sandwiches, cold fried chicken, then they would run out and it would be an hour before they would get more. There was a bus line going to and from the convention center because there were no hotels at the convention center that could handle that many people, of course. And the buses got trapped at rush hour, so people were on buses for an hour to get to and from their hotel, and you would have walked faster. Terrible…

Richie Rump: Yeah, I think the bathroom situation was not optimal either and just people waiting in lines forever.

Brent Ozar: I love Chicago, but I would not want to go to a conference there. So that probably answers it there – I would watch Ignite. The other thing I was going to say is, at the PASS Summit coming up in November, there’s a bunch of public sessions where people have said we’re previewing vNext here. Like, Microsoft has said we’re previewing vNext, so that means it’s at least going to be publicly demo-able in November.

 

Have you used the Vulnerability Assessment?

Brent Ozar: Heather says, “Do y’all have any experience with the SQL Server 2017 vulnerability assessment?”

Tara Kizer: I’ve never even heard of it. I don’t keep up to date on their extra tools.

Brent Ozar: I want to say it was an SSMS 17 feature and when they brought it out, they said things like xp_cmdshell is bad, you should turn this off. I’m like, really? Really? Anybody who’s SA can turn that on and turn that off anytime they want to and there’s, like, no tracing for it. So I’m not sure I take it really seriously.

Richie Rump: Does it scan your application for SQL injection? Does it do that for you?

Brent Ozar: No. Even the plans in the cache it doesn’t look at to see if they’re vulnerable. I’m like, it’s kind of cheesy. And they didn’t improve it at all…

 

What’s the fastest way to load a temp table?

Brent Ozar: Shaun asks, “If I’m inserting stuff into a temp table, should I do select into temp table or should I specifically say insert into temp table? Is there a performance difference?”

Tara Kizer: I mean, I think one of the issues with select into is that you might not get the right data types. There’s an issue there, so making sure that your temp table is created in the exact layout you want. So I usually do insert into if I’m in a stored procedure. If I’m just in Management Studio, I’m lazy and will do select into, just because it’s easier for me to write. I don’t have to do the [cray 0:05:21.0] stuff.

Brent Ozar: Yeah, Kendra wrote a blog post about this. There were changes in certain versions of SQL Server where you suddenly got parallelism where you didn’t before. And this matters a lot depending on how you define your code, and I never remember what the takeaway was. I always have to point people back to this, because I’m with Tara, I just go, I want to create my table explicitly, here’s the data types that I want, if I want a key, here’s what it looks like. I feel really bad saying here’s how I write queries these days because I am writing queries in Postgres and Richie’s right here and he sees my terrible queries and I know he’s just holding his mouth going, Brent, that’s not really what you do. You write toilet equivalent queries. They’re really bad.

Richie Rump: No, actually the one I reviewed yesterday, I was pleasantly surprised. I’m like, oh nice…

Brent Ozar: Which tells you that I usually write crappy queries, because he was pleasantly surprised by this one…

 

What are you wearing?

Brent Ozar: Darshan asks, “Tara, why the hat?”

Tara Kizer: I know, I was waiting for that. I’m surprised it didn’t come up before we started with the questions. I just didn’t have a chance to shower and that’s just what I had on when I took the kids to school. I don’t like my hair in a ponytail publicly and I needed it to be in a ponytail, so the hat’s covering that.

Richie Rump: So, Brent, why not a grey shirt?

Tara Kizer: Yeah, there you go.

Brent Ozar: Yeah, really, I need to get with the company uniform today.

 

Have you used Node.js with SQL Server?

Brent Ozar: Kevin asks, “Have y’all ever used Node.js with SQL Server?”

Tara Kizer: That’s certainly a Richie question.

Richie Rump: No, I haven’t. I have used it with Postgres with some pretty good success, but no I haven’t used it with SQL Server. That’d be a curious little test to do that. I don’t know if any – I’m assuming there’s a project out there to talk to SQL Server. I’ve never used it or any libraries around it. So yeah, maybe that’s something I’ll do in the future. It could be kind of fun.

 

How do I keep logins in sync?

Brent Ozar: John says, “I have two instances that should have the same logins and DB users and I’m planning on using the second server for reporting via log shipping of a couple of databases, but I think my logins are out of sync. How can I check and correct my logins if I need to sync them?”

Tara Kizer: You could just query sys.logins or whatever it is and look for the sid, but you may want to just drop them and move them over and make sure that you copy the sid. And Brent’s got the script out there that can do that for you. It will grab the password and the sid. But what you can do is you can transfer the databases over to that log shipped server and un-orphan them to avoid having to do it. But I like to get my sids in sync so I don’t have to take that extra step of un-orphaning them.

Brent Ozar: Do it once, get it right and be done with it. Yeah, so the – for those of you listening to the podcast, it is a link to Robert Davis’s script on transferring logins to a database mirror. The same technique works with log shipping, database mirroring, Always On Availability Groups, anything where you can create different logins on different servers.
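
For the “how can I check” part, here's a minimal sketch you could run in the log shipped database; any user whose SID doesn't match a server login is an orphan:

SELECT dp.name AS orphaned_user, dp.sid
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
    ON dp.sid = sp.sid
WHERE dp.type IN ('S', 'U')   /* SQL and Windows users */
AND   sp.sid IS NULL
AND   dp.principal_id > 4;    /* skip dbo, guest, sys, INFORMATION_SCHEMA */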

 

I have an index with 2 partitions

Brent Ozar: Joe asks, “I have an index with…” What the… Joe, can you rephrase your question? I’m not exactly sure what you mean by that, with an index with two partitions. Usually, if people do partitioning, they do a lot more than two partitions. So I’m thinking there’s something lost in translation there.

 

Should I use PowerShell and nothing else?

Brent Ozar: Anna asks, “Do y’all recommend using PowerShell only for all DBA tasks?”

Tara Kizer: We’re the wrong company to ask…

Richie Rump: No, no, whoa. If someone said you have one tool to do DBA tasks with, PowerShell would never be my answer.

Brent Ozar: Whoa, that’s inflammatory. Richie, why would you say that?

Richie Rump: You know, command lines are good for certain things, but other things, I want more of a fully functioned thing and SSMS is that tool, right. I mean, it’s really the best database tool out there, so why would I go down to PowerShell for everything when the best tool out there is SSMS?

Brent Ozar: And we know people are going to disagree with us. We don’t manage lots of servers. So we tend to manage one or two servers at a time. Clients have us parachute in and do a really deep dive. I know people out there who have the opposing viewpoint. They’re like, if I could only use one tool for the rest of my life, it would be PowerShell. And that’s cool. I think you should learn one tool really well, whatever that one tool is, and you should be amazing at that one tool. None of us on this call would choose PowerShell to be that one tool, but I know there are folks out there who do, it’s just not us.

Richie Rump: I mean, that being said, you should use multiple tools. You should understand multiple tools. I mean, I rarely use SSMS anymore. I’m in other stuff as it is. So it’d only benefit your career to have multiple tools under your belt.

Tara Kizer: I’ve worked at companies where I’ve had a lot of servers. One of the companies I worked at, we had 700 SQL Servers and back then – because it’s been five years now since I worked for that company – PowerShell wasn’t really a thing for us. We had our own thing to deploy changes to a lot of servers. So we were using Management Studio’s CMS, the central management server option, and we had some fancy scripts that would populate the CMS with all of our servers. It would call a database – a “database of databases,” as we called it – and populate that. So we had the CMS already. I imagine these days that company probably is using some PowerShell because of the amount of servers and PowerShell is fairly flexible. It can do more than just SQL Server stuff, whereas, with CMS, you’re writing T-SQL scripts. You can be calling out to xp_cmdshell to do other things. So I probably would be using it for some things if I’m managing a lot of servers. Anna follows up, she’s got lots of servers and five different versions. So it has its uses, but like Richie said, you don’t just only use that tool for all DBA tasks. You pick the right tool for each task.

Richie Rump: Yeah, I mean we get in these flame wars all the time as developers, c# is the best… no, it’s Python… and we write lots of words on lots of bulletin boards and hate other people, and the fact of the matter is c# really isn’t as good at some things, but it’s really good at others. And Node.js is good for this and maybe not good for that. And so we have to take the positives and negatives. And it’s better for me as a developer to understand those positives and negatives and maybe even know a few of those languages so I could actually go off and maybe do other projects that are interesting, but maybe not using the language that I prefer.

Brent Ozar: I’m going to ask you a follow-up question then; if you could only learn one language right now for starters, like the first language that you would focus someone to go learn on, what would be the one language you would pick?

Richie Rump: First is hard…

Brent Ozar: Assume that maybe they’ve already played around with programming before, but like one language to specialize in.

Richie Rump: Right now, languages are easier but they’re also harder at the same time. JavaScript you have all this async stuff inside of it. Python, maybe one of those ones because you could write a lot of stuff in very little code, but there’s also a pretty medium to large learning curve there. So I mean, I don’t know, c#, there’s a big learning curve there, good lord. Maybe, now that you have a gun to my head, I’d probably say Python, but there’s a lot of languages that are out there that are really worthwhile and they have really some good stuff. JavaScript is a good one because you could go anywhere with JavaScript now. You could do frontend, you could do backend, you could go cloud, not cloud, anywhere, JavaScript is running. But JavaScript is notoriously hard to learn and hard to master, so yeah, Python; why not?

 

So, about this SQL Server on Linux

Brent Ozar: Sri says, “What do y’all think of the future of SQL Server for Linux? Will it be widely used at the Enterprise level?”

Tara Kizer: I really don’t think it is. I think that the reason why Microsoft did this is because there’s a lot of anti-Microsoft people who don’t want to run Windows on their servers, so they’ve got Linux and maybe they need to use SQL Server because some third-party application that they’ve purchased requires SQL Server. But those third-party applications probably don’t even support SQL Server running on Linux anyway. I’ve got a recent client where they are very anti-Microsoft. Would an anti-Microsoft company run SQL Server on top of Linux? I don’t know. I worked for a company that was very anti-Microsoft. They were mostly Unix, Linux, and Oracle. We had a very large Oracle team, but we still had SQL Servers. We had, you know, five SQL Server DBAs supporting, like I said, 700 SQL Servers, and they were very anti-Microsoft. Would that company be running SQL Server on top of Linux? I bet you they’ve had discussions about it, but why? It’s such a limited set of features available. I don’t know. I don’t really see a future for it.

Richie Rump: I mean, is it going up against the free tools like Postgres? I mean, is that its competition, or is the reason it exists for some other reason?

Brent Ozar: Yeah it’s tough. Joseph says, “Regarding SQL Server on Linux, I just joined an Oracle on Linux shop that is being pushed into SQL Server by market forces.” I would love to see this. Market forces, is this like Jedis and they’re like, yeah this is the database platform you are looking for and they’re pushing it…

Richie Rump: Hadouken…

Brent Ozar: he says, “Microsoft SQL Server on Linux appeals to us just for porting purposes.” Like the skills that you already know in the operating system – I have passionate feelings about this; very passionate feelings about this. Is it hard for you to find Windows sysadmins? Probably not. I’m not saying that they’re cheap a dime a dozen. They’re still hard to find, good people. But is it going to be hard for you to find the documentation you want for SQL Server on Linux? Yes, that’s going to be much harder. There’s so much good documentation out there for SQL Server on Windows and it really falls apart for SQL Server on Linux.

 

Have you used Contained Databases in production?

Brent Ozar: Augusto asks, “Have y’all ever used contained databases in production?” That’s a no. That is a no, ladies and gentlemen. Yeah, no, I don’t think so.

 

How can I fix RESOURCE_SEMAPHORE QUERY_COMPILE waits?

Brent Ozar: Alex says, “I’m fighting the poison wait resource semaphore query compile. I don’t have memory pressure. There’s only 80GB in use out of around a couple hundred GB and my CPU isn’t bad. Is there an issue with compile memory?” Yes, it means that SQL Server can’t get enough memory in order to compile a query. The fact that your server has a lot of memory overall doesn’t necessarily mean that a lot of memory is available to do query compilations. The term that you could Google for is memory gateway; SQL Server memory gateway query compile. And there’s a couple of blog posts out there from Microsoft talking about the gotchas with it.
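
If you're on a build that exposes it (newer builds of SQL Server 2016 and later, if memory serves), there's also a DMV that shows the compile gates directly:

SELECT *
FROM sys.dm_exec_query_optimizer_memory_gateways;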

 

Is VB.NET the Python of .NET languages?

Brent Ozar: Radu says, “VB.NET is kind of like the Python of .NET languages in terms of learning curve; it just got wrongfully accused lately.” I don’t even know where to begin with this, Richie…

Richie Rump: Okay, so I did VB.NET for a really, really long time. VB way before that, started in VB3. I understand your love for VB, but at some point, when even Microsoft stopped writing documentation for it and examples for it, you needed to get off of that and onto something else. Because when they’re not giving you the proper support and you can’t find – you know, new features come out and all of a sudden they’re not in VB and it’s in c#, there’s a lagging there. I wouldn’t start anything in VB.NET right now. There’s really no reason for it. The problem isn’t with VB.NET the language, the learning curve, it’s .NET itself. It’s the framework. There’s just so much going on there. Understanding the .NET framework is harder than learning the VB language. I could go on forever…

Brent Ozar: VB, I felt like when I was learning .NET languages, VB was the one that was most approachable to me because I knew VBScript. That was also the point where I stopped learning languages because I’m like, I don’t think real programmers – this is horrible to say this stereotype – I don’t think real programmers are using VB.NET. They’re going to kind of phase away from that, which means that my skills are going to get outdated fast. I need to learn something else, c#, Java, JavaScript, whatever. And if you’re a career developer, you probably have to have that love of learning new languages and frameworks. And assuming that you do, I would want to get away from VB.NET.

Richie Rump: Yeah, I love VB. Listen, don’t get me wrong, I haven’t coded it in at least a decade, but me going from VB.NET to c# was so seamless because I understood the framework and everything else is just syntax. But understanding the framework is the hard part of VB.NET, not the language itself.

Brent Ozar: God, it doesn’t seem to be getting easier either with the .NET Standard, .NET Core, all the different stuff that’s going on there…

Richie Rump: Of which I haven’t even touched because if you can’t figure it out yet, Microsoft, I’m not going to figure it out for you and go on your journey while you figure it out. So hopefully, this next version they’re coming out with, version three or whatever, maybe it’s even out by now because I’m not paying much attention, hopefully, that will be the one where they’re like, that’s the one, we’re sticking to it. But you wouldn’t believe, Brent, all the different versions and whatnot you need to jump through hoops for with Core to make sure things work. It’s ridiculous.

Brent Ozar: I keep watching what Stack Overflow’s doing, with Nick Craver talking about their porting things over to .NET Core and he’ll be like, this is amazing, this is really good… This is terrible, oh, this is terrible, everything is broken… Okay, this is cool again. I’m like, if a guy as smart as you struggles this much figuring it out, I would be screwed.

Richie Rump: Nick is amazing. I mean, he’s going in and submitting patches for Core and stuff like that and I’m like, no I do not want to be fixing Microsoft’s code for them. That is not my job at all.

 

Which update should I apply?

Brent Ozar: Dave asks – he says, “The security update for SQL Server X Cumulative Update, which one should I apply and where?” Go to sqlserverupdates.com and at the homepage of sqlserverupdates.com we tell you what the most recent patches are and you can go from there.

 

Following up on VB.NET

Brent Ozar: Radu follows up with, “So the move away from VB.NET, that’s peer pressure. Otherwise, VB and c# still have language parity.” I’m going to leave that one alone…

Richie Rump: If you think so, that’s fine, but I mean, if I go to any sort of job site and I go VB.NET code versus c#, I’m going to see a lot better projects that I would want to get into as opposed to some legacy stuff with VB.NET. I mean, if you want to do VB.NET, that’s fine, go for it, man. Have at it. I’m not poo-pooing you. I’m not that developer. I’m not that guy, but for me, I want to be able to go on the projects that ring to me most because the work is what’s important for me, not the language.

Brent Ozar: There’s an interesting saying in business too; if you think about a two-by-two matrix, you can either use old tools or new tools, and you can either do old things or new things. The general advice is, you can do old stuff with old tools, you can do old stuff with new tools, you can do new stuff but using old tools, you never want to do new stuff with new tools. It’s just too much of a complex thing there. So if you love VB.NET, that’s okay, you can still do both old stuff and new stuff. You can go find new interesting projects, you can go work in cutting-edge industries, but just know that some of the really cool new stuff may not be open to you. The doors may not be open to you if you’re still using VB. If we were going to start a new project in the year 2018, I wouldn’t even think to hire someone with VB to do it.

Richie Rump: No, and it probably wouldn’t even be c#, frankly.

Brent Ozar: Yeah, yeah…

Richie Rump: And that being said, we may be doing some c# here soon, Brent, so…

Brent Ozar: For the execution plan analysis stuff? Yeah…

Richie Rump: In Core… Go figure, let me poo-poo it some more and we’ll end up doing it. That’s probably what’s going to end up happening.

 

We generate 900GB of log files every night…

Brent Ozar: Shaun says, “We have three databases on a SQL Server, each about 1TB in size. Every night, the data warehouse teams run processing on those that generates about 300GB worth of transaction log space for each of the three logs.” Okay, hold on a second here. Let’s think through that for just a second. You have a 1TB database and you’re changing 30% of it every night. The name for this is Groundhog Day. What it means is that your ETL teams are wiping and redoing the same tables from scratch every day with no change detection. They’re just continuously overwriting the whole reports table every single day and rebuilding those numbers. Long-term, just thinking way far out as a consultant, don’t do Groundhog Day type stuff; you’re going to have a bad time. It’s going to be tough around database mirroring, Always On Availability Groups, log shipping, your backups, just because of that rate of change, storage de-duplication, the list could go on and on.

His question continues, “I had them stop doing a shrink, arguing that it’s just going to grow again every night, so just leave it alone. Now I have all the storage guys calling me crazy for leaving those three files at roughly 300GB each. Am I crazy?” You’re both right. I mean, you shouldn’t be doing auto-shrink. You also shouldn’t be leaving 300GB log files around if they’re all empty all the time. But the answer isn’t to do shrinking; the answer is to go fix the Groundhog Day stuff.

Shaun follows up with, “Laughing out loud, yes that’s right, Groundhog Day is exactly what they’re doing.” Yeah, so you fix the Groundhog Day processes and then you’re fine.

Tara Kizer: It needs it every single day, so don’t shrink it. I mean, so yeah, let’s say you’re getting back 250GB, but you need it again tonight so you’re not really saving space at all.
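
For what it's worth, “change detection” on the ETL side doesn't have to be fancy. Here's a rough sketch with made-up table names that only touches rows that actually changed:

/* Hypothetical tables: staging.Reports is tonight's feed, dbo.Reports is the warehouse copy. */
INSERT dbo.Reports (Id, Amount)
SELECT s.Id, s.Amount
FROM staging.Reports AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Reports AS r WHERE r.Id = s.Id);

UPDATE r
SET    r.Amount = s.Amount
FROM   dbo.Reports AS r
JOIN   staging.Reports AS s
    ON s.Id = r.Id
WHERE  s.Amount <> r.Amount;  /* skip unchanged rows: far less log churn */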

 

We need help fixing Access blocking

Brent Ozar: John says, “We have a third-party app that has a Microsoft Access frontend…”

Tara Kizer: Ooh, boy. We were just saying, you know, VB.NET and c# are getting old…

Brent Ozar: I was going to use Access as an example too… “Connecting to a SQL Server table. Once the app is running and we try to update the table, there’s blocking. Is there any idea that would allow us to make updates throughout the day, like how do I have concurrency in Microsoft Access?”

Tara Kizer: Isn’t that the whole problem with Access, it’s like one user at a time. I mean, isn’t that like what it was designed for? Don’t use Access. Rewrite it…

Richie Rump: I think you can. There may be a setting or something where it’s locking the entire table. I mean, we’re talking now stuff that I did 25 years ago, I don’t recall. It was a very long time ago, but I would start messing with your connection settings, because it sounds like there’s some sort of weird locking going on there.

Brent Ozar: And I would always want to refer you to somebody who can tell you for sure, too, and it’s these guys, accessexperts.com. These guys specialize in just Microsoft Access. Really smart people, really friendly. I believe they have a couple dozen employees at this point, and they just specialize in Microsoft Access…

Tara Kizer: Wow…

Brent Ozar: Yeah, it’s absolutely huge. So I would check with them. Really friendly people, they can give you an idea of what it will be like to make it go faster.

 

How should I troubleshoot view definition mutex waits?

Brent Ozar: And then goodnight, the last question that we’re going to take, because we’re not going to have an answer – Alex asks, “What’s the best way to troubleshoot view definition mutex waits? Information regarding this wait is very scarce.”

Tara Kizer: Is it really a problem or is it just in your list of waits that are happening? What I would do is type it into Google, and if there’s not much there, there’s probably a page from sqlskills.com; Paul probably has some information on it. Check to see if he thinks it’s an ignorable wait. But I mean, there’s plenty that I see in the list, but is it your top wait or your second top wait? Or are you just talking about number 70 in the list of waits that you’re looking up?

Brent Ozar: My first guess is, do some kind of logging to a table. Like, log sp_WhoIsActive to a table to see which queries are waiting on view definition mutex. My first thought is, have you got something like an application that’s continuously checking the contents of views, like a monitoring tool, to go through and assess slow queries? Maybe it’s written poorly and it’s trying to go get definitions of views. Or, you have insane concurrency, like really, really bad concurrency or something’s got – I’ve seen once where somebody had an alter view embedded in their code and they were doing that by accident. But if you look and see what queries are being blocked by it, I bet that’s going to be a big eye-opener.

Tara Kizer: He follows up and says it’s his second top wait. That’s crazy. I’ve never seen it even in the top 10 I don’t think.

Brent Ozar: No. Alright, well thanks, everybody for hanging out with us this week at Office Hours and we will see y’all next week; adios.

Announcing SQL Server 2019



Who Let The Docs Out?

Ignite must be coming up.

If you head over to Microsoft’s GitHub repo, you can peruse around for stuff updated recently.

Maybe you’ll create an account.

Maybe you’ll start contributing to open source projects.

Maybe you’ll quietly slip into a world of solitude for days on end.

Happy Saturday!

 

Your SQL Server is Bored: What Low Wait Times Mean


Let’s say you have an assistant. (I know, unlikely, but bear with me.)

And say you give your assistant a task – hey, go fetch me a coffee. Your assistant would nod obediently, head out to the neighborhood coffee shop, get your preferred Americano, and bring it back to you. It might take them 10 minutes to achieve that task – and then when they return, they’ll sit patiently waiting for their next assignment.

If fetching coffee is the only task you have for your assistant, they’re mostly going to be sitting around bored. You just don’t drink coffee fast enough to keep them busy.

When you ask them for coffee,
you might not be satisfied with your average wait time.

But hiring more assistants isn’t going to get you coffee faster.

Say you give them more tasks.

You ask them to book you flights for Intersection and Summit, summarize our latest blog posts, and file your expense reports. It’s not a lot of work, and they can context-switch between tasks, and none of the individual tasks will be hard to perform.

If, while they’re working on your expense reports, you ask for another coffee, they can switch tasks and go fetch it for you. Fetching coffee might take a little longer, but it won’t be terrible.

The busier your assistant is,
the longer the average task will take to complete.

And if you give them a truly overwhelming amount of hard work to do – like take your car to the dealer for service, pick up your dry cleaning, design indexes for a query, tune a thousand-line stored procedure, and comprehend a Joe Obbish blog post – then they might not be able to context-switch as quickly or as effectively.

Your server measures clock speed a little differently

When your assistant is really busy,
then it may take them hours to get you coffee,
and it may make sense to hire another assistant.

SQL Server workloads are measured with wait times.

In my How to Measure Your SQL Server video, I explain how to use Wait Time Ratio to gauge how busy your SQL Server is. Track how many hours of wait time SQL Server piles up in a given hour, and you can get a pretty good idea of its workload intensity. To take an extreme example:

If you only run one query per hour,
and that query only waits on storage for 30 seconds,
your server is bored.

That doesn’t mean the query finishes instantly, nor does it mean users aren’t complaining. The CEO might be running that 30-second query, and she might want it to finish instantly. However, on a bored server, you stop looking at wait stats and overall performance, and start looking at query-level performance.
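
If you want a rough number without a monitoring tool, here's a back-of-the-napkin sketch against sys.dm_os_wait_stats. It counts every wait since startup, including plenty of benign ones you'd normally filter out, so treat it as a starting point; sp_BlitzFirst @SinceStartup = 1 does a smarter version of this.

DECLARE @uptime_hours DECIMAL(18, 2) =
    (SELECT DATEDIFF(SECOND, si.sqlserver_start_time, GETDATE()) / 3600.0
     FROM sys.dm_os_sys_info AS si);

SELECT SUM(ws.wait_time_ms) / 3600000.0                 AS total_wait_hours,
       SUM(ws.wait_time_ms) / 3600000.0 / @uptime_hours AS hours_of_wait_per_hour
FROM sys.dm_os_wait_stats AS ws;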

When your server is bored, tune queries.

If users are complaining about performance, stop looking at the server and start looking at queries. My favorite way is our open source sp_BlitzCache, which shows you your most resource-intensive queries lately.

To find which queries have taken the longest time overall:

EXEC sp_BlitzCache @SortOrder = 'duration';

That sorts by total duration, mind you – a query that ran 1,000 times for 1 second each will float higher in the results than a query that only ran once for 30 seconds. If you want to find the longest-running individual statements, run:

EXEC sp_BlitzCache @SortOrder = 'avg duration';

Then look at the Warnings columns to see where to start your tuning efforts. If you hit a wall, check out how to get help with a slow query.

Or, of course, if you’re a SQL ConstantCare® member, just email the query and the actual (not estimated) plan to us, and we’ll get you started.

Thoughts On Microsoft’s Azure Outage Post-Mortem


Last week, Azure suffered a day-long outage. One of the services involved was Visual Studio Team Services (aka Azure DevOps), and that team just published their outage postmortem.

The postmortem is FANTASTIC: open, honest (at least it reads that way), and goes into enough technical detail to satisfy a wide variety of readers from managers to technical implementers.

This section explains a lot about their HA/DR strategy:

Why didn’t VSTS services fail over to another region? We never want to lose any customer data. A key part of our data protection strategy is to store data in two regions using Azure SQL DB Point-in-time Restore (PITR) backups and Azure Geo-redundant Storage (GRS). This enables us to replicate data within the same geography while respecting data sovereignty. Only Azure Storage can decide to fail over GRS storage accounts. If Azure Storage had failed over during this outage and there was data loss, we would still have waited on recovery to avoid data loss.

To rephrase, in the event of losing a region, the plan was to restore from backups. That’s absolutely fair, and it’s probably the same disaster recovery plan your company has, dear reader. Don’t get all high-and-mighty on me now – I like that plan just fine for disasters, and it’s the same thing we designed for our Faux PaaS project.

But I want to draw your attention to what their plan didn’t include: synchronous Availability Groups across data centers.

Cross-data-center synchronous AGs are something that work great in theory, but usually fall down in practice. Your applications just don’t want to wait until a write is committed across two different data centers. I’ll let Microsoft explain why:

However, the reality of cross-region synchronous replication is messy. For example, the region paired with South Central US is US North Central. Even at the speed of light, it takes time for the data to reach the other data center and for the original data center to receive the response. The round-trip latency is added to every write. This adds approximately 70ms for each round trip between South Central US and US North Central. For some of our key services, that’s too long. Machines slow down and networks have problems for any number of reasons. Since every write only succeeds when two different sets of services in two different regions can successfully commit the data and respond, there is twice the opportunity for slowdowns and failures. As a result, either availability suffers (halted while waiting for the secondary write to commit) or the system must fall back to asynchronous replication.

That’s Microsoft talking.

Microsoft can’t get sync AGs to work for them in a way that makes them happy.

Before you design a DR plan aiming for zero data loss using synchronous AG replication, make sure you build a solid proof of concept, and load test it with production-quality workloads. Make sure your end users will accept the latency slowdowns – or if they won’t, make sure they sign off on the RPO and RTO involved with a single-data-center solution. The time to learn these numbers isn’t when the hurricane is approaching, or when you’re writing a postmortem about your own apps.


Can Forced Parameterization Go Wrong?


App Like That

If you’ve got the kind of application that sends bare-assed strings to SQL Server, you may end up with a weird choice.

Brent will be sitting on one shoulder telling you to use Forced Parameterization.

I’ll be on the other shoulder asking if you’ve really thought this whole thing through while you ignore me.

It’s cool. That’s what this post is for.

See, one potential downside of Forced Parameterization is, well, Forced Parameter Sniffing.

Live Nude Strings

Let’s take a closer look with some demos.

This’ll get us started by creating some indexes and looping over the same query, putting in different Vote Types to search on.

USE StackOverflow2010

CREATE INDEX ix_votes ON dbo.Votes(VoteTypeId, UserId, CreationDate);
CREATE INDEX ix_posts ON dbo.Posts(OwnerUserId);
CREATE INDEX ix_badges ON dbo.Badges(UserId);

DBCC FREEPROCCACHE;

DECLARE @VoteTypeId INT = 1;
DECLARE @sql NVARCHAR(MAX) = N'';
DECLARE @counter INT = 0;

WHILE @counter < 10
BEGIN
    WHILE @VoteTypeId <= 15
    BEGIN
        SET @sql = N'
        SELECT COUNT_BIG(DISTINCT v.PostId) AS records
        FROM dbo.Votes AS v
        WHERE VoteTypeId = ' + CONVERT(NVARCHAR(5), @VoteTypeId) + N'
        AND NOT EXISTS (SELECT * FROM dbo.Posts AS p JOIN dbo.Badges AS b ON b.UserId = p.OwnerUserId WHERE p.OwnerUserId = v.UserId)
        AND v.CreationDate >= DATEADD(YEAR, -1, ''2011-01-01'')
        ';
        PRINT @sql;
        EXEC (@sql);

        SET @VoteTypeId += 1;
    END;

    SET @counter += 1;
    SET @VoteTypeId = 1;
END;

That finishes pretty quickly, and we can dive right into the plan cache.

EXEC sp_BlitzCache @DatabaseName = 'StackOverflow2010', @Top = 15;

We end up with a cached plan per Vote Type. Costs are all over the place.


Some of the plans end up being the same, but here’s a comparison between the highest and lowest cost plans.


Clearly some different choices were made.

We even have different sets of warnings…


Rather amusingly, even the estimated impact of the missing indexes swings all over the place from plan to plan.

It’s amusing because it’s always the same exact index definition.


The metrics are all different…


Some get memory, and one spills…


The point is that each query got an “appropriate” plan. Not perfect. We have some tuning to do, I think.

Casuals

I know what you’re thinking at this point: This doesn’t seem so bad. Plan cache pollution? Eh…

It gets cleared every night when you rebuild every index 500 times anyway. Who cares?

And you’re sort of right. This is a much easier problem to have than parameter sniffing.

Let’s look at how Forced Parameterization changes things.

Take Two

I’m going to do everything exactly the same, except I’m going to run this command to turn on Forced Parameterization.

ALTER DATABASE StackOverflow2010 SET PARAMETERIZATION FORCED;
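
If you're playing along at home, you can confirm the setting took, and put it back when you're done (you'll want to):

SELECT name, is_parameterization_forced
FROM sys.databases
WHERE name = N'StackOverflow2010';

ALTER DATABASE StackOverflow2010 SET PARAMETERIZATION SIMPLE; /* back to the default */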

Then I’m going to run the loop, and see what BlitzCache tells me.

The first thing I notice is that this takes way longer to finish. The earlier loop finished in around 20 seconds; this one ended up taking over a minute.

Now there’s only one plan in the cache.


That’s the big plan from before. We can change things up a bit and start the loop with a different Vote Type — like one that has almost no usage.

I’ll clear the cache and start it with 15 — that’ll start us with the little plan from before.

DBCC FREEPROCCACHE
DECLARE @VoteTypeId INT = 15

This one runs even longer than the last loop — it takes about a minute and a half.

Most of that seems to be due to the fact that we spilled a lot more!


We also had some Nested Loops Joins run a pretty good chunk of times for the larger plans.

This doesn’t seem like a winning scenario.

There’s No Such Thing As A Free Feature

With databases in general, we often end up trading one problem for another.

Wanna fix plan cache pollution? Hope you like fixing parameter sniffing!

This is when people start doing all sorts of things that they think fix parameter sniffing, but that really just disable it.

Stuff like recompile hints, optimize for unknown, local variables…
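
For the record, here's what those look like. A sketch, not a recommendation, since each one just trades sniffing for a different flavor of mediocre plan:

DECLARE @VoteTypeId INT = 1;

/* Recompile: a fresh plan every run, paid for in compile CPU. */
SELECT COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE v.VoteTypeId = @VoteTypeId
OPTION (RECOMPILE);

/* Optimize for unknown: everyone gets the density-vector average guess. */
SELECT COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE v.VoteTypeId = @VoteTypeId
OPTION (OPTIMIZE FOR UNKNOWN);

/* The local variable shuffle: same average guess, just sneakier. */
DECLARE @shadow INT = @VoteTypeId;
SELECT COUNT_BIG(*)
FROM dbo.Votes AS v
WHERE v.VoteTypeId = @shadow;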

Could I tune queries and indexes to make our code less sensitive to parameter sniffing?

Of course I could, but if you don’t have that person, and no one wants to be that person, maybe spending a little more on RAM to hold your dirty, filthy plan cache ain’t such a bad trade.

Thanks for reading!

Should You Use the New Compatibility Modes and Cardinality Estimator?


For years, when you right-clicked on a database and clicked Properties, the “Compatibility Level” dropdown was like that light switch in the hallway:

Compatibility level

You would flip it back and forth, and you didn’t really understand what it was doing. Lights didn’t go on and off. So after flipping it back and forth a few times, you developed a personal philosophy on how to handle that light switch – either “always put it on the current version,” or maybe “leave it on whatever it is.”

Starting in SQL Server 2014, it matters.

When you flip the switch to “SQL Server 2014,” SQL Server uses a new Cardinality Estimator – a different way of estimating how many rows are going to come back from our query’s operations.

For example, when I run this query, I’ll get different estimates based on which compatibility level I choose:

SELECT COUNT(*)
  FROM dbo.Users
  WHERE DisplayName LIKE 'Jon Skeet'
    AND Reputation = 1;

When I set my compatibility level at 110, the SQL Server 2012 one, I get an estimated 239 rows:

Compatibility level 2012, the old CE

Whereas compatibility level 120, SQL Server 2014, guesses 1.5 rows:

Compatibility 2014, the newer CE

In this case, SQL Server 2014’s estimate is way better – and this can have huge implications on more complex queries. The more accurate its estimates can be, the better query plans it can build – choosing seeks vs scans, which indexes to use, which tables to process first, how much memory to allocate, how many cores to use, you name it.

You read that, and you make a bad plan.

You read that the new Cardinality Estimator does a better job of estimating, so you put it to the test. You take your worst 10-20 queries, and you test them against the new CE. They go faster, and you think, “Awesome, we’ll go with the new compatibility level as soon as we go live!”

So you switch the compat level…and your server falls over.

It goes to 100% CPU usage, and people scream in the hallways, cursing your name. See, the problem is that you only tested the bad queries: you didn’t test your good queries to see if they would get worse.

Instead, here’s how to tackle an upgrade.

  • Go live with the compat level you’re using today
  • Wait out the blame game (because anytime you change anything in the infrastructure, people will blame your changes for something that was already broken)
  • Wait for the complaints to stabilize, like a week or two or three
  • On a weekend, when no one is looking, flip the database into the newest compat level (the flip commands are sketched after this list)
  • If CPU goes straight to 100%, flip it back, and go about your business
  • Otherwise, wait an hour, and then run sp_BlitzCache. Capture the plans for your most resource-intensive queries.
  • Flip the compat level back to the previous one
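
The flips themselves are one-liners, shown here with the 2012 and 2014 levels from my demo and a stand-in database name:

ALTER DATABASE StackOverflow2010 SET COMPATIBILITY_LEVEL = 120; /* the new CE */

/* ...wait an hour, capture sp_BlitzCache output, then: */

ALTER DATABASE StackOverflow2010 SET COMPATIBILITY_LEVEL = 110; /* back where you started */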

On Monday morning, when you’re sober and ready, you compare those 10 resource-intensive plans to the plans they’re getting in production today, with the older compat level. You research the differences, understand whether they would kill you during peak loads, and start prepping for how you can make those queries go faster under the new CE.

You read Joe Sack’s white paper about the new CE, you watch Dave Ballantyne’s sessions about it, and you figure out what query or index changes will give you the most bang for the buck. Maybe you even resort to using hints in your queries to get the CE you want. You open support cases with Microsoft for instances where you believe the new CE is making a bad decision, and it’s worth the $500 to you to get a better query plan built into the optimizer itself.

Or maybe…

Just maybe…

You come to the realization that the old CE is working well enough for you as it is, and that your developers are overworked already, and you can just live with the old compatibility level today. After all, the old compatibility level is still in the SQL Server you’re using. Yes, at some point in the future, you’re going to have to move to a newer compatibility level, but here’s the great part: Microsoft is releasing fixes all the time, adding better query plans in each cumulative update.

For some shops, the new CE’s improvements to their worst queries are worth the effort of re-tuning the queries that get slower under it. It’s totally up to you how you want to handle the tradeoff – but sometimes, you have to pay for the new CE in the form of performance tuning queries that used to be fast.

Quirks When Working With Extended Events To Track Locks


Worse Than Mobile Browsing

I have a love/hate relationship with Extended Events. Yes, they’re powerful. Yes, you can track interesting things.

But they’re just not intuitive, like, 10 years on.

Part of the problem is that the narrative from Microsoft has been that it’s a full replacement for Profiler and Traces.

How many monitoring tool vendors do you see using Extended Events instead?

Trying To Do Something Simple

Tracking locks should be very easy, right? There’s locks everywhere.

But right away, there’s a problem.

SELECT   mv.name, mv.map_value, xo.description, xp.description
FROM     sys.dm_xe_map_values AS mv
JOIN     sys.dm_xe_objects AS xo
    ON  mv.object_package_guid = xo.package_guid
    AND mv.name = xo.name
JOIN     sys.dm_xe_packages AS xp
    ON xo.package_guid = xp.guid
WHERE    mv.name = 'lock_mode'
ORDER BY mv.map_key;

I have a User Thing issue about this open.

LAST MODE?!

The issue is that LAST_MODE should map to RX_X, but it doesn’t.

Leaving aside the juvenile word searches one could explore here, if you were trying to figure out a bad locking problem, what would you think of LAST_MODE?

Probably not much – but the lock mode it maps to usually shows up due to Serializable locking.

Not something you wanna ignore.

Speaking Of Locks…

If you were trying to figure out how to see which objects were locked, what of these would you look at?

object misery

There are five columns that purport to have an object ID in them. None of them ever seem to resolve when you use OBJECT_NAME.

In fact, most of them fail, because they’re populated with bigger ints than OBJECT_NAME can cope with.

If you were to create an XE session to look at locks, and then try to parse things out, you’d have to do something like this:

SELECT CONVERT(XML, fxftrf.event_data) AS lock_data
INTO #locks
FROM   sys.fn_xe_file_target_read_file('c:\temp\lock*.xel', NULL, NULL, NULL) AS fxftrf;

WITH thing
    AS
     (
         SELECT l.lock_data.value('(event/data[@name="mode"]/text)[1]', 'VARCHAR(256)') AS mode,
                l.lock_data.value('(event/data[@name="associated_object_id"]/value)[1]', 'BIGINT') AS associated_object_id
         FROM   #locks AS l
     )
SELECT t.*, OBJECT_NAME(p.object_id) AS table_name
FROM   thing AS t
JOIN   sys.partitions AS p
    ON t.associated_object_id = p.hobt_id;

The associated_object_id maps to the hobt_id column in sys.partitions.

Which is per-database. Context is everything.
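For reference, the session writing those files might look something like this – a minimal sketch, where the event choice, database filter, and file path are all assumptions:

CREATE EVENT SESSION track_locks ON SERVER
ADD EVENT sqlserver.lock_acquired
(
    ACTION (sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'StackOverflow2010') -- assumed scope
)
ADD TARGET package0.event_file (SET filename = N'c:\temp\lock.xel');
GO

ALTER EVENT SESSION track_locks ON SERVER STATE = START;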

I didn’t see a better way to do this when looking at the Global Fields you can collect.

Not exactly intuitive.

Holes

When you’re going to use Extended Events, there’s a heck of a lot of shruggery involved.

Which event(s) do I need? What do they actually capture? What really triggers them? What sort of target should I use? If I allow events to be lost, will I miss what I’m looking for?

None of this stuff is obvious, which puts anyone trying to use them to track down or solve a problem for the first time at a real disadvantage. I use them fairly frequently (compared to a lot of people I know), and I still get hung up on stuff.

At the beginning of the article, I said XE is worse than mobile browsing.

That sounds harsh, but hear me out: if you’re on your phone (server), and there’s an article (event) you’re interested in, what’s an easy way for you to lose interest quickly?

Clicking on a link, getting a full-screen pop-up ad, then a request to allow notifications, then a pop-up window, one of those scroll-past ads, maybe some auto-play video for good measure. If you’re really lucky, you’ll get a notification that you won a hot local single.

I’m reminded of all those annoyances every time I go to do something with Extended Events. There’s a lot standing in the way of me acquiring an interesting piece of information. They should have a lot more going for them by now, but even just figuring out microseconds or milliseconds is a pain.

My Friend From Slack, Steve Jones, said something along the lines of: Software should make people’s lives easier.

Right now, Extended Events often introduce a lot of complications just trying to answer relatively simple questions.

Thanks for reading!

Should Index Changes Require Change Control?


We got a phenomenal series of questions from a client, and I wanted to encapsulate the answers into a blog post to help more folks out:

Should all index changes require testing in a staging environment, no matter how big or small? What would be a reasonable timeline duration from index identification to deployment? What should the process of index review entail? What levels/kinds of justification/analysis should be required in order to move forward with an index change?

I’m going to zoom out a little and ask it as, “Should index changes require going through change control?” To answer that, I need to think about…

How risky are index changes?

At first glance, you might think index changes are pretty harmless. However, let’s think about a series of scenarios where an index change could result in slowdowns or an outright outage.

Index changes can fill up your drives. Adding indexes may require more space in the data files for the additional copy of your data (even temporarily, if you’re replacing an existing index.) Your log files are impacted too – index creation is a logged activity, which may cause transaction log growth. Heck, even TempDB can grow if you use the sort-in-TempDB option during index creation. If you’ve got database mirroring or Always On Availability Groups, you also have to look at the drive sizes on those remote replicas as well. If you’re using virtualization snapshots/replication, or storage snapshots/replication, you might even have to work with your sysadmins on the storage space availability.

Index changes can cause blocking. Even when you use Enterprise Edition’s ONLINE = ON while creating an index, you’re still going to need locks, and you can still block other queries. Plus, if your index change backfires and you need to drop an index, there is no online drop or disable. To learn more, check out how online index operations work, and the guidelines for online index operations.
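Even the online flavor needs brief schema locks at the start and end of the operation – a quick sketch, Enterprise Edition only, with a made-up index name:

CREATE INDEX ix_Users_Reputation
    ON dbo.Users (Reputation)
    WITH (ONLINE = ON); -- still takes short-lived schema locks at the start and end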

Index changes can slow down unrelated operations. As you’re creating large indexes, you’re going to need CPU power, workspace memory, and storage throughput. The more of these that you consume, the more you can impact other running queries – even if they’re working on completely different tables or databases.

Indexes can cause inserts to fail. If you create filtered indexes or indexed views, for example, and your application uses non-standard connection settings, your inserts can fail.
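Here’s a minimal sketch of that failure mode, using a hypothetical dbo.Widgets table:

CREATE TABLE dbo.Widgets (Id INT PRIMARY KEY, Status VARCHAR(10));
CREATE INDEX ix_active ON dbo.Widgets (Id) WHERE Status = 'active'; -- filtered index
GO

SET QUOTED_IDENTIFIER OFF; -- the "non-standard connection setting"
INSERT dbo.Widgets (Id, Status) VALUES (1, 'active');
-- Fails with error 1934: the SET options are incorrect for a table with a filtered index.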

Really fancy indexes can have really fancy side effects. If you start playing around with columnstore indexes or in-memory OLTP, you might need a spectacular amount of memory to apply and maintain your index changes.

How sensitive is your workload to performance risk?

Some companies stretch their dollars as much as possible, running at 50-60-80% CPU use at any given time. Users are wildly unhappy with performance as it is, and even the slightest hiccup causes screams of pain.

Some companies go in armed for bear: they build monster servers to make sure they can handle surprise peak workloads. I had one DBA say in a class, “I size all my servers so that I can run backups and checkdb in the middle of the day without anyone noticing.” (Love.)

Any change is risk.

Whether we’re talking adding a nonclustered index or truncating a table, database administration is about understanding how much risk is involved with a change, and your organization’s level of tolerance to that risk.

To reduce the risks involved with any change, test it.

The closer your tests can be to production, the more comfortable you can be with those risks. In a perfect world, that means using a test server and test workload that’s absolutely identical to production. In the real world, the closeness of your tests (and even whether you test at all) is up to your tolerance for risk and your budget.

If you want to learn more about using source control and automated deployments for database changes to reduce your risks, check out Alex Yates’s upcoming classes on Database DevOps with SSDT and with the Redgate tools.

[Video] Office Hours 2019/09/12 (With Transcriptions)


This week, Brent, Erik, Tara, and Richie discuss syncing logins and database data via log shipping, monitoring page life expectancy, cumulative update issues, when to change MAXDOP, doing SQL Server installs on AWS, connecting to a SQL Server instance running in VirtualBox from the desktop where VirtualBox is running, and what real DBAs drink.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours Webcast – 2019-09-12

 

How can I sync logins and Agent jobs?

Brent Ozar: Jason asks, “When it comes to log shipping, are there any solid solutions out there relating to syncing logins and stuff that’s in the system databases?”

Tara Kizer: I mean, we know about Robert Davis’s script for syncing logins, but everything else, there isn’t really one tool to sync everything. I think there have been tools over the years that people have released, but I don’t know that there’s much that people are recommending. So yeah, Robert Davis’s sync login script. For jobs, I’ve done it manually, making sure that my DR server or whatever server is in sync at all times – every time I make a change on the primary, I make it on the secondary. There are scripts out there that you can run. I don’t know if dbatools’ PowerShell stuff can help out with this, maybe.

Brent Ozar: That’s true too. I was going to say Kehayias has a job-syncing script.

Erik Darling: I just stuck that in chat too. That’s the only one I’ve read about. The only thing is that post is from 2013. I don’t know that it’s been updated in a while. Not that I expect him to keep it up to date either because it was a freebie for a blog post, you know. We all know the quality of those freebie blog posts.

Brent Ozar: We’re not saying anything about Jonathan’s; we’re saying this about our own as well. I look at this as one of those things where a third-party vendor should totally step in and do. But you’ve just got to be aware that if you make a change to a job and that job gets synced automatically to every other server – maybe your event paths are different, like the places where you store files. Maybe the schedules are different. Maybe you have some jobs that only are supposed to run on the primary and you didn’t code in any detection to see whether or not it’s the primary. Just be really careful when you sync them around from one place to another.

Erik Darling: And if you’re really changing jobs that frequently then you probably just want to have a separate server that holds all the jobs and points them at the stuff in your AG or log shipping or mirroring environment. That way, at least when you have those jobs run, you have to be really mindful about detecting if it’s the primary or the secondary because you can’t just have things running full sail everywhere. I mean, you can, but you…

Brent Ozar: Job failures everywhere.

Erik Darling: Yeah, good luck dealing with those.

Richie Rump: What would we do about job failures? I have no idea…

Brent Ozar: So I’m surprised Richie doesn’t Tweet more about – I haven’t looked over to see if he Tweeted, but I’m surprised he doesn’t Tweet more about some of our adventures in production.

Richie Rump: Good god, no. It’s one of those things. Maybe, if I have a good idea of why things happen, you know, but a lot of these – like what happened yesterday, we had 11,000 failures in production, which was just a fun thing to have. It happened to be a problem with one of our load tables, which wanted a vacuum. And then I scratched my head and I’m like, what the hell is a vacuum? So we figured it out in really short order. We just had a lot of errors happen really quickly at the busiest time of the day. But oddly enough, with the exception of 12 files, everything reprocessed and everything went back through, so there we go.

Erik Darling: We need to get one of those signs. It’s been like zero days since our last accident.

Brent Ozar: One day since last failure.

Erik Darling: You know, Brent’s little pet project is building like automated animated signs. We could get one of those or something.

Brent Ozar: True, we totally could. So the demo for this – let’s see, so the project that I’m working on right now is a little animated sign to show the next three flights coming into our airport. Because right over there is the San Diego Airport flight-path and I worked with a developer to build a little tool so that it will show just like an airport flip-board the next three flights coming in. And this is a screenshot, like a static list of flights – it’s not the current flights in San Diego – which shows the flight number, the equipment type, the departure airport, arrival airport, and the time it’s coming in at. So the goal is there that we’re going to throw it up on a projector so that it just shows on the wall about what’s coming in next. So that’s my life. There you go.

Erik Darling: San Diego Airport, which we recently learned has the least secure wifi in all major international airports.

Brent Ozar: Yeah, absolutely.

 

You should make flavored vodka.

Brent Ozar: Let’s see. Next up, Lee says, “Y’all need to meet some Russians and learn how to make flavored vodka. Pinon nut vodka is amazing.”

Richie Rump: Vodka – you got to say it right. It’s wodka.

Brent Ozar: Wodka…

Erik Darling: I mean, it’s still vodka. Every time I’ve seen someone drink flavored vodka, it’s never been good and the person has always been awful. So like, it’s always been like birthday cake flavored vodka or something. Like, here’s cupcake vodka and I’m like, die. Stop. Let the cirrhosis take you.

Brent Ozar: Well if it’s always in combination with, this is great with whatever… You know, this is great with Diet Coke – how about alcohol that tastes great by itself, you know. And you can’t drink birthday cake flavored vodka for any length of time or you will be dead. You will be dead.

 

Why is PLE different for different NUMA nodes?

Brent Ozar: Gordon asks, “What might be the reason for hugely differing values for page life expectancy on a server with two NUMA nodes? Shouldn’t the PLE be the same across both nodes?”

Tara Kizer: I’m really confused by the question, because what does PLE have to do with two NUMA node servers?

Erik Darling: Is it that PLE for one NUMA node is lower than on the other?

Tara Kizer: Oh, got you.

Brent Ozar: Paul has a blog post about this. To me, the takeaway is – it’s a good blog post, but to me, the takeaway is don’t monitor page life expectancy. It’s just a garbage metric that doesn’t tell you anything.

Erik Darling: It goes up, it goes down…

Tara Kizer: Mic drop.

Richie Rump: You would think that a bunch of big popular bloggers would have wildly different opinions on how you go and do performance tuning. I bet, if you put us in a room – especially in separate rooms – and you gave all of us written tests, like where do you start with performance tuning, we would all have a lot of different answers, but I bet the big thing would be wait stats. Like, we would start with wait statistics to find out what the server’s bottleneck is. You’ll find all kinds of blog posts about all kinds of other techniques that we also use after we find the wait stat that’s a problem. But I think everybody starts with wait stats instead of page life expectancy.

Erik Darling: I mean, wait stats drive me as far as, do I want to look at queries that are running now, do I want to look at the plan cache? What type of plans do I want to look for in the plan cache? Like, with BlitzCache, we can sort things by different metrics. So if wait stats are showing me high CPU, then I’m sorting by CPU, stuff like that. And it also drives me to, like, do I want to go right to the plan cache or do I want to go look at indexes? Because something has just clearly run amok over here. So if I look at wait stats and I see a lot of locking, I’m going right to the indexes. I’m not going to start with the plan cache right after that.

Tara Kizer: And then also, large memory grants can cause the query workspace area [inaudible 0:06:57.8], which can cause the buffer pool to shrink down. So I mean, sometimes, when you see page life expectancy drop, it’s because of jobs that were running, like rebuilding indexes. It could be that you don’t have enough RAM for the buffer pool. But what about large memory grants that can cause the buffer pool to shrink down?

Erik Darling: Even just CHECKDB that’s, you know, doing something crazy. That might be the one thing that got everyone in different rooms to pick up the sword and shield: index rebuilds.

Brent Ozar: That’s probably true. And it would just boil down to two rooms. There would be the one room on one side and the other room would be like Scotland in the freedom thing…

 

Is SSMS 17 slower than older versions?

Brent Ozar: Michael Tilley asks a question that’s about a paragraph long. I’m going to scan it just to see if I can get a short answer on it. Michael says, “I have various SQL Server instances. If a plane leaves Philadelphia at the same time…”

Erik Darling: Someone flushes a cheesecake…

Richie Rump: I think the best part of this question is he’s like, “Okay, SQL tech question now…” Like, calm down, people.

Tara Kizer: He’s asking about the speed of SSMS.

Erik Darling: So it’s slow in the SSMS?

Brent Ozar: He says that, “The standalone version of SSMS is slower. So, like, the 17 branch of SSMS is slower than the one that comes with installation media.”

Erik Darling: I thought it didn’t come with the installation media anymore.

Brent Ozar: Well for his older versions. He’s got 2008.

Erik Darling: Well yeah, they added more crap to it. Every time you add more crap, you slow something down. All those new versions where Microsoft gets to add things like quarterly to it and they just add new things. And when you add new things, you slow things down. I keep adding the fat and that keeps slowing me down.

Brent Ozar: Or the pineapple flavored vodka. And I think too, they changed the version of Visual Studio that it was built on top of, and it added all kinds of new bloat when they did that. If you want something that’s lean and mean and kind of fast, you could start with SQL Operations Studio, which is their new Electron-based one. The problem is, it’s also kind of faster because they took out a whole lot of features. It does way less stuff. I’m not saying it’s bad. It’s fine. If you want to use SSMS on a Mac or on Linux and you don’t want to install a VM, that’s one way to do it. I run a Mac as my main home machine. I’m getting ready to watch the Apple Keynote in about 30 minutes and I’m going to have my credit card ready going, shut up and take my money. I want a new iPad so bad and I’ve been holding back for so long. I don’t want another phone with 2TB of RAM. That doesn’t make any sense to me. But yeah, and still, I don’t use SQL Operations Studio. I think it’s fine. There’s nothing wrong with it; it’s just SSMS is amazing.

Erik Darling: Someday, SQL Operations Studio will be just as big and bloated as Management Studio, all the features, and we’ll complain about that then.

Brent Ozar: Cortana…

Erik Darling: Yeah, why not? Go crazy.

Richie Rump: Brent is so buying another phone.

Brent Ozar: I’m not. I’m not. I’m totally not.

Richie Rump: It’s totally going to happen. People, trust me, it’s going to happen. (Update: the Apple keynote happened, and no, Brent did not buy another phone.)

Brent Ozar: Anything that they could bring out, I’ll be like, no, no, thanks, I’m good, because I am holding back my credit card for the iPad. Because the instant that they throw – and I don’t want the 12 inch – I don’t want something that’s the size of a Trapper Keeper; that’s ridiculous. I just want an iPad Mini that’s like nine inches, 10 inches across. That’s all I need.

Richie Rump: But what if there’s a Lamborghini on the iPad Trapper Keeper? I mean, come on, Brent.

Brent Ozar: Ooh yeah, okay, yeah. Okay, so now I have to show this. So one of my favorite websites, Autotrader’s Oversteer – this is where a lot of their commentary goes for Autotrader. Tyler is this guy who has an incredible car collection problem and he continuously buys the cheapest expensive car in the United States. So his latest purchase, after his Ferrari F355 caught fire, was the cheapest Ferrari Testarossa in the United States. He goes and buys the cheapest version and blogs about all the problems that he runs into with it and how expensive or inexpensive it is. Yeah, so there’s that.

Richie Rump: What?

 

Why didn’t Microsoft catch this bug in 2016 SP2 CU2?

Brent Ozar: Eric, different Eric, asks, “I applied…” it’s not really an ask; he’s just kind of making a comment. “I applied Service Pack 2 Cumulative Update 2 for 2016 a couple of weeks ago and my Database Mail stopped working. To resolve it, I had to reconfigure .NET 3.5 on the server. How did Microsoft miss this? I might not apply Cumulative Updates anymore.”

Erik Darling: It’s amazing when there are bugs in something that gets released monthly.

Brent Ozar: And Database Mail – the further you go off campus, into features that are kind of not mainstream… SQL Server’s a great relational database, but the more you start to use a feature that automatically lowers and raises your car’s antenna, the less reliable and tested that stuff is. There’s just only so many tests they run.

Erik Darling: Well even the last round of CUs for 2017, they left a bunch of trace flags on in one of their releases and they had to roll that back. Not that I blame them. I get oversights, especially with a product that big, CU releases that frequent, and all the stuff you have to do. I get it. I’m not trying to harsh the Microsoft vibe there, but man.

Brent Ozar: Yeah, it was 2016 CU2 – there was a GDR security patch. They unreleased it because it had trace flags turned on, then rereleased it again like a week later. And if you’re pushing the updates out as fast as they come out, man, you’re going to have a bad time.

Erik Darling: I think if I had to deal with the CU release cycle now for prod, I would probably be a month or two behind.

Brent Ozar: I wouldn’t even put it to development within the first week.

Erik Darling: No, no I mean, development would probably be a month behind. Prod would probably be three months behind. I don’t know, just staggered somehow so I can let some other poor fool jump on that bleeding edge.

Richie Rump: That’s usually you, Erik, by the way.

Brent Ozar: Jim says he had the same issue. You can also resolve it by recreating your DatabaseMail.exe.config file. And he said that the mail issue will be resolved in Cumulative Update 3. So see you next month.

Richie Rump: Which will break something else…

Brent Ozar: And we’re not saying not to patch either. Like, oh Brent doesn’t like patching – no, we love – I am a thorough believer that you should be on a relatively current version. The RTM version sucks but…

 

When should you change MAXDOP?

Brent Ozar: Let’s see. Gus says, “When should you change MAXDOP? If you have a 16 core SQL Server, should you change MAXDOP from the default of zero?”

Erik Darling: Immediately.

Brent Ozar: Would you take a coffee break first or would you change it immediately.

Erik Darling: I would do it immediately. I wouldn’t even hesitate. I would be like, done it, eight.

Brent Ozar: Eight, okay, I was going to ask you what would you change it to, yeah. And at the same time you change it, are there any other changes that you would also make?

Erik Darling: Oh dear god, do we need to stretch our time that badly? Yes, I’d pour flavored vodka on it, strawberry jam, throw some frozen English muffins in there, cook a pancake, I don’t know. Yeah, there’s lots of changes I make to a SQL Server. I try to write about them so people don’t have to continue to be befuddled by setting up SQL Server. This is one of those things I blogged about sort of recently, whether people still need help, basic help, with the setup. When Microsoft stuck tempdb in front of everyone during setup and said, hey, multiple files, size them the same – that was some good stuff, nudging people in the right direction. And by making a bunch of trace flags the default behavior, so that you don’t have to be a DBA with that checklist of things you do, they started putting some good things in the setup of the product and the way it’s presented to people. That just makes people’s lives easier. MAXDOP and cost threshold are totally some of those things that people just need in front of them when they do setup. And if Microsoft is going to have pages on the internet in which they give you best practices for things, those best practices should show up during setup on those screens.

Even if they’re not filled in for you, just say, we recommend this, you should change it to this. Fine, I would take that over, meh just click next a few more times and out you go. Just put it in front of people so they know. If people still need this much help with it 20 something years after the defaults are added to the product, maybe it’s time to put them up front.

Brent Ozar: And you think too, the versions that go out today – so whenever SQL Server vNext comes out – Microsoft Ignite is coming up in about a week. Ignite is where they set fire to all your plans to go build your next SQL Server because they’re about to release the new version. So they drop the new version out. Whatever version next comes out next week – and it obviously won’t come out for production use next week, it will just be a preview next week probably, and I’m just guessing, I have no inside knowledge, but they did the same thing last year at Ignite. So whatever version they bring out, say that the final comes out in January of 2019, people are going to be installing that for the next seven years, eight years. The setup needs to go, hey here’s a place on the web where you can go for information that wasn’t written by Dwight Eisenhower, you know, things that are vaguely up to date.

Erik Darling: Like written by someone who has since hit retirement age and left. Like, I’m done. Like, I don’t even work with the product anymore. I took a job at a bigger company than Microsoft somehow. Just give us something.

Brent Ozar: Now I’m going to work in a monitoring company and immediately try to fix all of the things that I left behind in setup.
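For reference, the two knobs in question are server-level sp_configure settings – a minimal sketch, where the values are just common starting points, not gospel:

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

EXEC sys.sp_configure N'max degree of parallelism', 8;
EXEC sys.sp_configure N'cost threshold for parallelism', 50;
RECONFIGURE;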

 

Should you use Amazon’s SQL Server images?

Brent Ozar: Kevin says, “Morning…” morning. Afternoon – afternoon web services. He says, “AWS, do you use your own SQL Server install from the images in EC2 or do you use theirs?”

Erik Darling: I didn’t know we had a choice.

Brent Ozar: Well the licensing is the big driver. So if you use AWS’s instances that have licensing included, so like Enterprise, Standard, whatever, then they’re going to make you do one of two things. Either you have to pay by the hour for that or bring your own licensing. If you want to use it for any kind of high availability or disaster recovery, it usually makes sense to build your own image and install your own SQL Server from there. Not because you’re going to do a better job, just because you’re not going to have to hassle with their licensing police. And then you can manage the number of replicas and all that without having to hassle it. Even if you do use their installs, it’s very next, next, next, finish. It’s just like what you would do if you were just throwing the DVD in and you were drunk working for a cloud company and you had to ship it out as quickly as possible.

Microsoft is the same way. Google’s the same way. The stuff that they ship out as defaults isn’t necessarily best practices – cost threshold for parallelism. So you would still want to go through your own setup checklist to get it the way that you would want it as opposed to somebody at a vendor.

Erik Darling: Do you remember what stuff was set to in managed instances?

Brent Ozar: Oh, I might have blogged about that.

Erik Darling: I remember looking at a bunch of stuff. I don’t remember what they had the defaults for there, but I think that would be a pretty good measuring stick…

Brent Ozar: No, I sure didn’t. I didn’t blog about what they did in managed instances. And I bet you…

Erik Darling: I don’t remember looking, now that I think about it.

Brent Ozar: No, and I bet you one crispy burrito that it is exactly the same as the box product. I bet they’re not changing that either.

Erik Darling: Yeah, which would be weird.

Brent Ozar: But remember, they’re charging you for CPU consumed, so on one hand, they want it to perform fast, but on the other hand, they really want to fleece you for all you’ve got.

 

What’s the perfect vodka martini recipe?

Brent Ozar: And then the last question – well it’s more of an observation from Michael Tilley. Michael says, “Erik, the perfect vodka martini recipe. He wants a vodka with more than 6x distillation, like Tito’s or Dripping Springs Vodka from here in Texas. The number of distillations has to do with the sipability or the smoothness of vodka and the ensuing quality of the martini being made.”

Erik Darling: Can I give you the perfect recipe for a glass of Lagavulin?

Brent Ozar: Yes, by all means.

Erik Darling: Glass, Lagavulin, enjoyment. Too much work.

Brent Ozar: There’s two kinds of people in the world: the people who love to cook and make things, like craft things from multiple ingredients, and the people who don’t. I’m one of those people where if I could reach into the fridge and get out a burrito that wouldn’t kill me and just put it in the microwave and I could magically stay healthy, I would be all over that. That would be perfect to me.

Erik Darling: It’s called a Hot Pocket, Brent.

Brent Ozar: Yeah, but that would kill you. If I only ate Hot Pockets, I would die. And I like Hot Pockets, but – same thing with the idea of a drink. I like wine. You open the bottle, you drink the bottle. Tequila, you open the bottle, you can drink the bottle. But the instant that there’s multiple ingredients involved – I’m all for fancy complex drinks, but someone else needs to be making them; not me.

Erik Darling: Yeah, it’s like the same way that I’m all for fancy complex stored procedures. Someone else should be writing them.

Richie Rump: Brent’s the same way with the software. He’s all for fancy complex software, as long as someone else writes it.

Brent Ozar: Absolutely, yes.

 

Can I connect to SQL Server running in VirtualBox?

Brent Ozar: Alright, so we’ll take one more question because Mike added one more. Mike says, “Is there any way to connect to a SQL Server instance running on a VirtualBox Linux VM, from the desktop where VirtualBox is running?” Yes, it will likely involve – oh yeah, you did that for a while.

Erik Darling: I still have VirtualBox on here. So to connect to the virtual – I’ll go look at my settings right now because I can pull them up quickly for you. I mean, it’s Windows, so it’s a little bit different, but from my settings – and I can connect to SQL Server here – over in networking, I have enabled network adapter, attached to NAT, and then in promiscuous mode, I have to allow all. So that will make things magically work. And I can connect to all my VMs from the desktop and all my VMs can connect out to the internet. So if you want something different then you’ll have to Google for that yourself. But those settings work for me with VirtualBox to connect in and to have those boxes be able to go to websites so I can look at things in a VM on a website. Let’s talk about something else.

Brent Ozar: And that’s a perfect way for us to end Office Hours. We’ll all get back to surfing the webernets. Alright, see y’all later; adios.

Free Training Classes for Those Who Give Back: Announcing our 2019 Scholarships


You work at a charity or non-profit, helping them make a difference with data.

Maybe you write reports to help fundraisers do a better job of raising money to find a cure for a disease. Or maybe you’re a developer at a non-government organization whose mission is to speak for those who can’t speak for themselves. Or maybe you’re a sysadmin who’s been stuck managing vital databases, but your non-profit is already stretched to the limit and can’t afford training.

Pocket Square

Time for the heart.

That’s where we come in. We wanna help.

Every year, we offer scholarships to dozens of people just like you. We want to empower you to continue making a difference. Our scholarship program is simple: recipients get a Live Class Season Pass with access to all of Brent & Erik’s training classes, both live and recorded, during 2019, plus SQL ConstantCare® mentoring.

To give you an idea of the kinds of organizations we’ve supported:

The fine print:

  • You must work for a foundation, non-profit, charity, or similar company that’s doing good work. It can totally be a for-profit company, just as long as they’re making a difference. (If you work for Ginormous Profitable Global Corporation, you’re probably not going to make the cut.)
  • Your company or government rules must allow you to receive free training. (Some companies prohibit their employees from accepting gifts.)
  • You must already have a job working with SQL Server. (This isn’t about getting a new job.)

Apply here. Applications close October 31st.


One Hundred Percent CPU


Raise Your Hand If

You’ve ever wanted to play a prank on your co-workers, but just didn’t have any ideas that didn’t involve exploding Hot Pockets.

Now you have something even less safe than molten cheese squirts!

A stored procedure that pushes CPUs to 100%.

All of’em.

CREATE OR ALTER PROCEDURE dbo._keep_it_100
AS
BEGIN

WITH e1(n) AS 
    (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL 
    ),                          
e2(n) AS (SELECT TOP 2147483647 NEWID() FROM e1 a, e1 b, e1 c, e1 d, e1 e, e1 f, e1 g, e1 h, e1 i, e1 j)
SELECT MAX(ca.n)
FROM e2
CROSS APPLY
(
    SELECT TOP 2147483647 *
    FROM (
            SELECT TOP 2147483647 *
            FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
            UNION ALL SELECT * FROM e2
         ) AS x
    WHERE x.n = e2.n
    ORDER BY x.n
) AS ca
OPTION(MAXDOP 0, LOOP JOIN, QUERYTRACEON 8649);

END;

 

Probably Don’t Do This In Production

EXEC dbo._keep_it_100
We got this tested on 48 cores, and it was pretty sweet.

RUN, FORREST, RUN

Special thanks to Forrest for testing this out on his dev server for me.

At least I hope it was.

Thanks for reading!

Brent says: wanna test your monitoring or alerting processes and see how quickly folks get alerted, start troubleshooting, and get to a root cause? Run this in your QA environment and see how fast folks react.
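And when the prank’s over, find the burning session and put it out of its misery – the session id below is just an example:

-- Find the CPU burner:
SELECT r.session_id, r.cpu_time, r.status
FROM sys.dm_exec_requests AS r
ORDER BY r.cpu_time DESC;

-- Then kill it (session id is an example):
KILL 53;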



A Simple Stored Procedure Pattern To Avoid


Get Yourself Together

This is one of the most common patterns that I see in stored procedures. I’m going to simplify things a bit, but hopefully you’ll get enough to identify it when you’re looking at your own code.

Here’s the stored procedure:

CREATE OR ALTER PROCEDURE dbo.you_are_not_clever (@cd DATETIME = NULL)
AS
    BEGIN

        DECLARE @id INT;

        SELECT   TOP 1
                 @id = u.Id 
        FROM     dbo.Users AS u
        WHERE    ( u.CreationDate >= @cd OR @cd IS NULL )
        ORDER BY u.Reputation DESC;

        SELECT   p.OwnerUserId, pt.Type, SUM(p.Score) AS ScoreSum
        FROM     dbo.Posts AS p
        JOIN     dbo.PostTypes AS pt
            ON p.PostTypeId = pt.Id
        WHERE    p.OwnerUserId = @id
        GROUP BY p.OwnerUserId, pt.Type;

    END;

There’s a lot of perceived cleverness in here.

The problem is that SQL Server doesn’t think you’re very clever.

Pattern 1: Assigning The Variable

Leaving aside that optional parameters aren’t SARGable, something even stranger happens here.

And I know what you’re thinking (because you didn’t read the post I just linked to), that you can get the “right” plan by adding a recompile hint.

Let’s test that out.

DECLARE @id INT;
DECLARE @cd DATETIME = '2017-01-01';

        SELECT   TOP 1
                 @id = u.Id --Assigning the id to a variable
        FROM     dbo.Users AS u
        WHERE    ( u.CreationDate >= @cd OR @cd IS NULL )
        ORDER BY u.Reputation DESC;

        SELECT   TOP 1
                 u.Id -- Not assigning the id to a variable
        FROM     dbo.Users AS u
        WHERE    ( u.CreationDate >= @cd OR @cd IS NULL )
        ORDER BY u.Reputation DESC
        OPTION ( RECOMPILE );

Here are the plans, for all you nice people NOT WORKING AT WORK.

Just like in the post I linked up there (that you still haven’t read yet — for SHAME), the first plan refuses to recognize that an index might make things better.

The second plan does, but as usual, the missing index request is rather underwhelming.

But what’s more interesting is that even with a recompile hint, the optimizer doesn’t ‘sniff’ the variable value in the first query – the one that assigns to @id.

you’re crazy

This only happens in queries that test if a variable is null, like the pattern above, and it becomes more obvious with a better index in place.

CREATE INDEX ix_helper ON dbo.Users(CreationDate, Reputation DESC)

Parades!

We’ve got it all! A seek vs a scan, a bad estimate! Probably some other stuff!

Pattern 2: Using That Assigned Variable

The Great And Powerful Kendra blogged about a related item a while back. But it seems like you crazy kids either didn’t get the message, or think it doesn’t apply in stored procedures, but it does.

If we look at the plan generated, the cardinality estimate gets the density vector treatment just like Kendra described.

Commotion

In my case, the values plug in below and give us 79.181327288022:

SELECT (38485046 * 2.057457E-06) AS lousy
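That 2.057457E-06 is the ‘All density’ value for the column, multiplied by the table’s row count. You can see it yourself with DBCC SHOW_STATISTICS – the statistic name here is an assumption:

DBCC SHOW_STATISTICS ('dbo.Posts', 'ix_OwnerUserId') WITH DENSITY_VECTOR;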

This gets worse with a user who has a lot of data, where other parts of the plan start to go downhill.

Everyday I’m Suffering.

How To Avoid These Problems

Both of the problems in these patterns can be avoided with dynamic SQL, or sub stored procedures.

Unfortunately, IF branching doesn’t work the way you’d hope.
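Here’s a minimal sketch of the dynamic SQL route for the first query – build the WHERE clause only when @cd is supplied, so each variation gets its own plan. The procedure name is made up:

CREATE OR ALTER PROCEDURE dbo.you_are_actually_clever (@cd DATETIME = NULL)
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) = N'
    SELECT TOP 1 u.Id
    FROM dbo.Users AS u';

    IF @cd IS NOT NULL
        SET @sql += N'
    WHERE u.CreationDate >= @cd';

    SET @sql += N'
    ORDER BY u.Reputation DESC;';

    EXEC sys.sp_executesql @sql, N'@cd DATETIME', @cd;
END;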

If you’re ever confused by bad query plans that crop up in a stored procedure, sneaky stuff like this can totally be to blame.

Thanks for reading!


Finding & Fixing Statistics Without Histograms


Men Without Hats – now those guys were cool:

Statistics Without Histograms – not so much.

If you have a database that’s been passed along from one SQL Server to another, gradually upgraded over the years, or if you’ve had a table that’s been loaded but never queried, you can end up with a curious situation. For example, take the StackOverflow2010 database: I build it on SQL Server 2008, detach it, and then make it available for you to download and play with. The nice thing about that is you can attach it to any currently supported version – SQL Server 2017 attaches the database, runs through an upgrade process, and it just works.

Mostly.

But when you go to look at statistics:

We can leave your stats behind

That means when we ask SQL Server to guess how many rows are going to come back for queries, it’s going to act real rude and totally removed.

Finding This Problem in SQL Server 2016 & Newer

Starting with SQL Server 2016 Service Pack 1 Cumulative Update 2, you can run this in an affected database to find the tables with missing stats, and generate an UPDATE STATISTICS command to fix them:

SELECT DISTINCT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name AS table_name,
    'UPDATE STATISTICS ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) + ' WITH FULLSCAN;' AS the_fix
  FROM sys.all_objects o 
  INNER JOIN sys.stats s ON o.object_id = s.object_id AND s.has_filter = 0
  OUTER APPLY sys.dm_db_stats_histogram(o.object_id, s.stats_id) h
  WHERE o.is_ms_shipped = 0 AND o.type_desc = 'USER_TABLE'
    AND h.object_id IS NULL
    AND 0 < (SELECT SUM(row_count) FROM sys.dm_db_partition_stats ps WHERE ps.object_id = o.object_id)
  ORDER BY 1, 2;

That gives you a list of fixes:

Everything’ll work out right

Run those in a way that makes you feel – what’s the word I’m looking for – safe. Depending on your server’s horsepower and the size of the objects, you may want to do it after hours.

Updating statistics can be a read-intensive operation since it’s going to scan the table, but at least it’s not write-intensive (since statistics are just single 8KB pages.) However, be aware that this can also cause recompilations of query plans that are currently cached.

Finding This Problem in Earlier Versions

That’s left as an exercise for the reader. Parsing DBCC SHOW_STATISTICS would be a bit of a pain in the rear, and I’m already dancing to a list of YouTube’s related videos. Good luck!
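For a one-off spot check, though, DBCC SHOW_STATISTICS will at least show you whether a histogram exists (the statistic name below is an example):

DBCC SHOW_STATISTICS ('dbo.Users', 'PK_Users_Id') WITH HISTOGRAM;
-- Zero rows back = no histogram. UPDATE STATISTICS dbo.Users WITH FULLSCAN; fixes it.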


6 DBA Lessons I Wish Someone Would Have Taught Me Earlier


I was talking to a DBA friend of mine, reminiscing about some of the hard lessons we learned early on in our career. The more we talked, the more we realized that there should probably be a one-page cheat-sheet that you’re required to read before you open SQL Server Management Studio.

1. The name of the job isn’t necessarily what it does. That “Backup All Databases” job probably doesn’t, and it probably has a shrink step in there for good measure.

2. A completed backup job doesn’t mean anything. Maybe the job isn’t set up to back up all of the databases. Maybe it’s a homegrown script that has a bug. Maybe it’s writing the backups to the very same drive where the databases live.

3. A lack of failure emails doesn’t mean success. It can also mean the failure emails stopped working, or they were being sent to a distribution list that has been deleted, or that the mail server is down, or that the email filter you set up the other day is wrong.

4. The last admin meant well. They weren’t incompetent, just overworked and undertrained.

5. Software vendors aren’t psychic. You can complain all you want about their crappy performance, but the reality is that your users might be using the software in a totally different way than anybody else. If you don’t give them clear, easy-to-understand performance data about query and index issues in your environment, they’re not going to be able to guess about it, much less fix it.

6. For maximum learning, you need peers and challenges. If you’re the only DBA in a shop, and you get your servers under control, you’re not going to grow. You need to tackle new challenges that you haven’t seen before, and you need outside opinions to challenge what you think you already know. You might be a big fish in a little pond today, but when you take a job in a bigger pond, be humble about what you think you know. You might be wildly incorrect.

What about you? What do you wish someone would have told you earlier?


Leaked: SQL Server 2019 Big Data Clusters Introduction Video


Psst – you’re probably not supposed to see this yet, but look what @WalkingCat found:

What the video says

Growing volumes of data create deep pools of opportunity for those who can navigate it. SQL Server 2019 helps you stay ahead of the changing time by making data integration, management, and intelligence easier and more intuitive than ever before. 

Yep, that’s a Microsoft video alright.

Polybase

With SQL Server 2019 you can create a single virtual data layer that’s accessible to nearly every application. Polybase data virtualization handles the complexity of integrating all your data sources and formats without requiring you to replicate or move it. You can streamline data management using SQL Server 2019 Big Data Clusters deployed in Kubernetes. Every node of a Big Data Cluster includes SQL Server’s relational engine, HDFS storage, and Spark, which allow you to store and manage your data using the tools of your choice.

Big Data Cluster

SQL Server 2019 makes it easier to build intelligent apps with big data. Now you can run Spark jobs to analyze structured and unstructured data, train models over data from anywhere with SQL Server Machine Learning Services or Spark ML, and query data from anywhere using a rich notebook experience embedded in Azure Data Studio. The torrent of data isn’t slowing down, but it doesn’t have to sink your business. Sail through with SQL Server 2019, and shorten the distance between data and action.

My take on the Big Data Clusters thing

<sarcasm> It’s like linked servers, but since they don’t perform well, we need to scale out across containers. </sarcasm>

Today, Polybase is a rare and interesting animal. You’ve probably never used it – here’s a quick introduction from James Serra – but it wasn’t really targeted at the mainstream database professional. It first shipped in PDW/APS to let data warehouses run queries against Hadoop, and then it was later added to the boxed product in SQL Server 2016.

Polybase is for data warehouse builders who want to run near-real-time reports against data without doing ETL projects. That’s really compelling to me – report on data where it’s at. That seems like a smart investment as the sizes of data grow, and our willingness to move it decreases.

I like that Microsoft is making a risky bet, planting a flag where nobody else is, saying, “We’re going to be at the center of the new modern data warehouse.” What they’re proposing is hard work – we all know first-hand the terrible performance and security complexities of running linked server queries, and this is next-level-harder. It’s going to take a lot of development investments to make this work well, and this is where the licensing revenues of a closed-source database make sense.

If you want to hitch your career caboose to this train, there are all kinds of technologies you could specialize in: machine learning, Hadoop, Spark, Kubernetes, or…just plain SQL. See, here’s the thing: there’s a whole lot of SQL Server in this image:

Big Data Cluster

If you’re good at performance tuning the engine, and this feature takes off, you’re going to have a lot of work to do, and the licensing costs of this image make consulting look inexpensive. This feature’s primary use case isn’t folks with Standard Edition running on an 8-core VM. (I can almost hear the marketers wailing, “But you COULD do it with that,” hahaha.)


[Video] Office Hours 2018/9/19 (With Transcriptions)


This week, Brent, Tara, Erik, and Richie discuss moving views between databases, Cluster Quorum File Share on AlwaysOn, checking for corruption, rebooting/restarting SQL Server after applying cumulative updates, how many drive letters to use for a VM, decimal precision and scale value settings on SQL Server 2016 vs 2008, and nested views.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours Webcast – 2018-09-19

 

Should I put views in a different database?

Brent Ozar: Justin says, “If you have views in a database, would there be any reason to move those views to a different database?”

Tara Kizer: Interesting, odd question.

Richie Rump: Try and get more information.

Brent Ozar: I’m trying to come up with a reason why I would do it and I can’t think of a reason. Oh okay, I got one: if you wanted to restore the database. I used to do this on a log shipping secondary. We would have complex views set up and then replace the database underneath from time to time because the views couldn’t be on the original database. That’s the only thing I’ve got, though. I can’t think of a reason why… Okay, so I’m going to stretch: SQL Server 2016 has database-level things like MAXDOP and parameter sniffing. So you could put views in different databases if you wanted people to have different MAXDOP settings and you didn’t want to use Resource Governor.

Tara Kizer: Would that work though, because you’re going to be referencing objects in the other database? I guess that would work.

Brent Ozar: Yeah, it’s based off whatever database you’re currently in. Justin follows up with, “We have developers who have two databases. One has tables and the other has views pointing back to the tables.” Okay, no, no, no credit there.

Richie Rump: This reminds me of the early days of Access, where we had one database with the data and then we would have another database with all the screens and reports and all that stuff.

Brent Ozar: Thank god that’s over. Justin says, “We think it’s a bad idea.”

Tara Kizer: It’s not necessarily a bad idea, just it’s odd. I wonder why he thinks he needs to do that.

Brent Ozar: Version control is going to be harder; deployments are going to be harder. Yeah, it strikes us as a bad smell.

Richie Rump: It’s a bad idea.

Brent Ozar: I can’t come up with a scenario where it’s a good idea to do new development that way.
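For reference, the database-level settings Brent mentions up above (SQL Server 2016 and newer) are set per database – a minimal sketch:

-- Runs in the context of the current database:
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;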

 

How should I configure quorum?

Brent Ozar: Rob says, “Hi, we’re about to build a two node Always On cluster with four named instances.” I think, and you may want to follow this up, Rob, I’m not sure if you mean an Always On Availability Group, or a failover clustered instance, your grandpa’s cluster. “As far as quorum, will I need to specify a file share on some server when I install clustering for quorum due to the even number of nodes?”

Tara Kizer: Yes. You need a third guy for quorum, and most people for Availability Groups are using a file share witness. On failover cluster instances, a lot of us use a quorum disk, you know, a SAN drive, a [Q] drive maybe.

 

Where should I run CHECKDB in my AG?

Brent Ozar: Pablo says, “Hello, folks. Can you suggest the best way to check for corruption on replicas on a high transactional server with Always On…” I assume he means Always On Availability Groups, since he means replicas. “Best way to do CHECKDB…”

Tara Kizer: I like to offload that task. I mean, if your system is 24/7, which a lot of people’s are, and you can’t take the hit of CHECKDB at any time of the day, then you offload that task to another server – backup restore, SAN snapshot, something on another box. But you just have to keep in mind that you’re supposed to also CHECKDB your replicas, and that brings in licensing concerns, because you’re offloading production work to another server – and the same goes for that other server doing the backup restore.

Brent Ozar: And make sure you’re doing it wherever you do backups. Like, you want to be checking the one that you’re doing backups on, otherwise, your backups may be garbage.

Tara Kizer: You could break up the task. So CHECKDB is just a bunch of other check things, you know, like CHECKTABLE, CHECKALLOC. It’s a bunch of stuff. So I know when I worked for Qualcomm, we had our large systems where we couldn’t offload tasks to another server. Maybe because they were too big, we didn’t have enough SAN storage, I don’t know. But we broke that up into multiple steps and by the end of the week, all steps would have been performed. So a full CHECKDB would be performed every week, but it would be a daily task of small pieces.

Brent Ozar: I haven’t used this myself, but I’ve heard Sean and Jen McCown – god, their site’s hacked again. Alright, well so much for that… A part of me wants to recommend this, but their site’s hacked again. Whenever their site gets unhacked, Minion CheckDB has the ability – they have a set of open source scripts. I’m not sure if they’re free, open source, whatever, but a set of scripts where you can offload parts of CHECKDB to different servers. You can have this server check catalogues, this server check this file group. It looks really slick. I’ve never used it myself, but also, when you click on it, you get a choose your price sweepstakes, so clearly something’s wrong with their site at the moment, so we’re going to just close that browser and go back.

Richie Rump: Burn that VM to the ground.

Brent Ozar: Websites suck.

 

Do I need to restart after patching SQL Server?

Brent Ozar: [Joy Anne] asks, “Is a Windows reboot or SQL Server restart recommended after applying Cumulative Updates? I just realized that the CU9 security update requires a Windows reboot before I can apply CU10.”

Tara Kizer: I mean, it goes through the restarts during the installation – the restarts of SQL Server, at least, as needed. But yeah, I mean, I would definitely reboot. Now, if it’s my own desktop machine, I don’t. I mean, if I can get away with not rebooting and it’s not asking for a reboot, I do not. But on production, I’m in a maintenance window, so I’ll probably reboot it at the end, even if it doesn’t ask.

Brent Ozar: Plus, usually, you want to do Windows patching at the same time. If I’ve got to restart SQL Server, I want to fix the Windows patches, which seem to come out more frequent than the SQL patches.

 

Should I put everything on the C drive?

Brent Ozar: J.H asks, “When setting up a SQL Server 2016 Always On within two VMs, is it theoretically okay to have one large C drive, like 1TB, on each VM, that holds everything? SQL Server install all system databases, tempdb, user databases, log files, et cetera, or should I separate them out onto smaller drives?”

Tara Kizer: I don’t ever put database files on the C drive. The C drive is the number one place to run out of disk space because of Windows security updates. You could put a large drive there. Definitely, the SQL installation and all the shared stuff – shared files, not the shared system databases. I would not put your database files on the C drive. There’s no reason to and it’s so easy to create a drive.

Brent Ozar: Yeah, the filling up thing is the one thing that scares me. The other thing that scared me before is when you want to do VSS Snapshots, it has to quiesce everything on that drive, you know, so you may just want to snap databases and not the OS. And there was another that hit me and I was – so if you ever have, for some reason, I’ve had situations where I needed to just get copies of the user databases and log files somewhere else, like I wanted to do an upgrade of a Windows or SQL Server and I wanted to go test it somewhere else, so much easier to take a snapshot of just where the data and log files live and present that over instead of having the whole C drive.

Tara Kizer: I would have the minimum of three drive letters for a VM. You know, C drive, maybe a D drive for the user databases and some of the system databases, and then another drive for tempdb, putting tempdb on its own drive.

Brent Ozar: I’m with you, because plus too, it’s tuning. For the SAN admin, it’s easier for them to tune for different access patterns. So, like, tempdb, I might want that on blazing insane fast storage if I have a tempdb problem; otherwise, I might want it on garbage. I just might want it on junk storage if I don’t care about it.

Tara Kizer: Richie just got a better offer, I think. See-ya…

Brent Ozar: In the middle of the webcast, like, forget it… That, I think, might be the first time we’ve ever seen Richie disappear when it didn’t involve a blue screen of death, because usually Richie drops it when he has blue screens of death. Richie, your computer stayed on and you left. That might be a first.

Richie Rump: Yeah, I had a delivery notification.

 

Have precision and scale changed across versions?

Brent Ozar: Robert asks, “Is there any difference in decimal precision or scale between SQL Server 2016 and 2008? I’m getting a different value on the two servers and they’re both set to the same precision and scale settings.”

Tara Kizer: That’s interesting. I would imagine, if the version number is the reason, that people would have blogged about this or, you know, hit it.

Brent Ozar: It should be easy to do a repro query there, just like repro the exact same query on the two boxes and then post it to, say, dba.stackexchange.com.

Tara Kizer: Declare a variable, just in a Management Studio test, and set it to whatever data type and size it is and see what you get on the two boxes. I just wonder, is it a query issue instead?

Richie Rump: Or data issue.

Brent Ozar: Or regional settings. I’m trying to think if there would be a way that regional settings would hose you, like if it’s different currency formats. I’m not sure what that would be.
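A bare-bones version of the repro Tara suggests – run the same batch on both servers and compare the output (the values and types here are made up):

DECLARE @d DECIMAL(18, 6) = 1.0 / 3.0;
SELECT @d AS result,
       SQL_VARIANT_PROPERTY(@d, 'Precision') AS precision_used,
       SQL_VARIANT_PROPERTY(@d, 'Scale') AS scale_used;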

 

Is there ever a good reason to nest a view?

Brent Ozar: And then Joe asks, “Is there ever a good reason to nest a view?”

Tara Kizer: I tell you, as a performance consultant, I cannot stand nested views, and I’ve had a client where I gave up after like five times. I was like, okay, now we’ve got to open up this view, now we’ve got to open up this view. I was like, I’m out. I mean, we have limited time on this call. There’s just no way I’m going through this.

Richie Rump: I mean, nested views make sense from a developer perspective, but when you take a look at it performance-wise, it makes zero sense. I mean, it’s like going back to the old days of object-oriented programming. We had this and we’d build on top of another one and build on top… And it makes a lot of sense to developers, but they don’t ever go under the hood and see the garbage that it’s doing underneath. Just don’t do it.

Brent Ozar: And it’s one of those things where it works on my machine when it starts, you know. You have really limited data sets, you’re just getting started with the application, nobody’s using it. So it seems like everything’s cool, and then later, when you get to real-world scale, performance goes to hell in a handbasket. It’s one of those that I would kind of coach people towards, hey, if you have a choice, I wouldn’t do it. I’d rather you do something else.

Alright, well a short list of questions here today. Y’all didn’t have any other questions, so we will bail out early and go start using our unlimited drink package. So we will see y’all next week at Office Hours. Adios, everybody.

