SQL Server – Is physical file fragmentation of the MDF file an issue? – Education Career Blog

I ran a defrag report on a SQL Server machine, and the .MDF file for the database is reported as 99% fragmented. I asked a colleague, who said this may be normal because it is just the way that type of file works internally.

Is it an issue, and would running a disk defrag on this file improve DB performance? It is a pain to do because the file is huge and defragmenting it requires a lot of free disk space, so I want to be sure it is necessary.


A follow-up question arising from the answers: how would a striped RAID configuration affect this? If the data is striped over multiple disks, would defragmentation be less of an issue?


As long as you have determined that you do, in fact, have a database response problem, this would be a good candidate for improvement. Presumably you have at least established that there is a problem to be solved and that it is in the database (i.e. you have measured queries that are slower than you would like, you see locking, etc.).

Otherwise it starts to look like guessing and trying things. Caching naturally compensates for a lot of fragmentation, depending on your query patterns.


Short answer: yes, a defrag would help.

Clustered indexes try to organize the data so that only sequential reads are required. This works well if the .MDF is contiguous on disk, but not so well if the file is scattered all over the disk.
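As an aside, logical fragmentation of the indexes inside the file is a separate issue from NTFS fragmentation of the .MDF itself, and a disk defrag fixes only the latter. A sketch of checking the former against the current database, using the standard DMV (table and index names come from whatever exists in your database):

```sql
-- Report logical fragmentation per index in the current database.
-- 'LIMITED' scans only the non-leaf levels, so it is the cheapest mode.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```

High values here call for an index rebuild or reorganize rather than a filesystem defrag.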

Also, even though SQL Server allocates space in 8 KB pages, having to move the disk head all over the place leads to much slower access times. You really want the file to be contiguous.


A good way to avoid this is to not rely on autogrow. Pre-size the MDF and give it room to breathe.
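A minimal sketch of pre-sizing, with a hypothetical database name (MyDb), logical file name (MyDb_Data), and sizes chosen purely for illustration: growing the file once, up front, and using a fixed growth increment keeps autogrow events (each of which can add a new fragment on disk) rare:

```sql
-- Hypothetical names and sizes; adjust to your environment.
-- Pre-size the data file and replace the default percentage
-- autogrow with a fixed 1 GB increment.
ALTER DATABASE MyDb
    MODIFY FILE (NAME = MyDb_Data, SIZE = 50GB, FILEGROWTH = 1GB);
```

Do this when the disk still has contiguous free space; a defrag first, then one big grow, leaves the file in a small number of large fragments.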
