SQLServerCentral Editorial

Physical or Virtual Storage

When I started working with SQL Server, every server had what we'd call "das-dee," or DASD (Direct Access Storage Devices). These were hard drives inside the same physical case as the rest of the Windows computer. I've added lots of drives to various server systems over the years, and as databases grew, we even had separate boxes in our racks, attached to the main server but filled with nothing except drives.

Technology has changed, and today most of us work with SAN or NAS devices, where the storage is addressed across some type of network: either a private one (copper or fiber) or the same Ethernet that connects the various computers together. A few of us might even have cloud storage located at Microsoft, Amazon, or elsewhere; the Stretch Database feature takes advantage of this last configuration. In all these cases, the storage our databases see is often cobbled together from other disks in a way that hides the underlying organization from the system.

Recently I read a piece from Randolph West about recovering data from a RAID array. It reminded me of my early career, when I had to make decisions about how to structure storage. I've run RAID 1, 5, 10, 0+1, 6, and maybe more over the years to store data files. At some point, however, I stopped worrying about the underlying configuration. I just expected, and trusted, the storage people to ensure that space was available. I even stopped thinking of the z: or y: drives on my database server as disks. They were just storage that existed somewhere in the ether, available for the database to use.

In thinking about Randolph's experiences, I wondered how many of you still deal with physical drives. Do you still make decisions about RAID levels? Do you even know what RAID levels your databases are using? If you're a storage admin, you might, but for those of you who aren't, do you know anything about your storage configuration?
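If you want a quick starting point, SQL Server itself can at least show you the volumes underneath your data files. Here's a minimal sketch using the documented sys.dm_os_volume_stats function; it reports the mount point, file system, and free space for each volume holding a database file, though not the RAID level, which stays hidden behind the volume:

-- List the distinct volumes that hold database files, with capacity and free space
SELECT DISTINCT
    vs.volume_mount_point,
    vs.file_system_type,
    vs.total_bytes / 1048576 AS total_mb,
    vs.available_bytes / 1048576 AS available_mb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;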

Really, I'm speaking of production systems, not development ones. Certainly some of us might know there's a development server with RAID 5 holding a bunch of dev/test VMs, but I'd expect even that to be rare. Outside of our own workstations, we likely don't know the storage setup. Plenty of development systems these days probably use a SAN for storage, maybe even the same one as production.

As for me, I have no idea how our systems are configured. I used to build the SQLServerCentral servers, and when Redgate took over that part of the business, I helped spec the initial machines we rented as physical hosts. At some point we moved to virtual machines, and while I was asked about the specifications, I didn't care about any of the hardware. I just said I wanted enough CPU, RAM, space, and IOPS to handle the load. Deciding what that meant, and ensuring it was available, was someone else's job.

If you spec hardware, or pay attention to it, let me know. There certainly are plenty of hardware geeks, like Glenn Berry, who pay attention and prefer particular configurations. Those are the people I'm glad I can ask for advice if I need it. I certainly ask for help with my personal systems, but for servers, I just need capacity. Do you feel the same way?
