• Yes - if your row size means that a lot of space will be left unused on table pages (e.g. a fixed row size of 3KB means only two rows fit on an 8KB page, 2 * 3KB = 6KB, leaving roughly 2KB per page unusable) then you will swell the database size and slow down table scans to some extent.
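
    For illustration, a minimal sketch (table and column names are made up) of a fixed-width table that wastes space this way:

        -- Each row is a little over 3,000 bytes, so only two rows fit
        -- on an 8KB page and roughly 2KB per page goes unused.
        CREATE TABLE dbo.WideFixed (
            Id      INT NOT NULL,
            Payload CHAR(3000) NOT NULL  -- fixed size regardless of content
        );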

    But in practice, using variable-length columns means that row sizes will typically be much smaller than the maximum. You need to consider average size rather than maximum size when weighing the issue.

    SQL Server uses the actual size of the data, not the defined maximum, when allocating page space to records.
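
    A quick way to check this (a sketch; dbo.Orders and its Notes VARCHAR(2000) column are hypothetical) is to compare the declared maximum against what is actually stored:

        -- DATALENGTH returns the real byte count for variable-length data,
        -- so the average here is what actually drives page usage.
        SELECT
            AVG(DATALENGTH(Notes)) AS AvgBytes,
            MAX(DATALENGTH(Notes)) AS MaxBytes
        FROM dbo.Orders;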

    Also, since table scans are so inefficient in most cases, your indexing strategy will probably mean that they hardly ever occur on large tables anyway.
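
    For example, a simple nonclustered index (names are illustrative) turns most lookups into seeks rather than scans:

        -- With this index, queries filtering on CustomerId can seek
        -- instead of scanning the whole (wide) table.
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
            ON dbo.Orders (CustomerId);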

    In the final analysis, there is usually little you can do if the requirements of the data model dictate a large record size. I suppose a vertical partition separating frequently accessed from rarely accessed columns is a possibility.
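
    A rough sketch of that kind of split (all names hypothetical): the hot columns stay in a narrow table, and the wide, rarely-read columns move to a companion table sharing the same key:

        -- Narrow "hot" table: scanned and joined frequently.
        CREATE TABLE dbo.Product (
            ProductId INT NOT NULL PRIMARY KEY,
            Name      VARCHAR(100) NOT NULL,
            Price     MONEY NOT NULL
        );

        -- Wide "cold" table: fetched only when the detail is needed.
        CREATE TABLE dbo.ProductDetail (
            ProductId INT NOT NULL PRIMARY KEY
                      REFERENCES dbo.Product (ProductId),
            LongSpec  VARCHAR(8000) NULL
        );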

    Also, space calculations don't include text and image data, of course, which are stored on separate pages, with the row itself holding only a pointer (16 bytes, if I remember rightly).
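
    If you want small text values kept in the row rather than behind a pointer, there is a table option for that (a sketch; dbo.Product is hypothetical):

        -- Store text/image values up to 256 bytes in the row itself;
        -- larger values still live on separate pages behind a pointer.
        EXEC sp_tableoption 'dbo.Product', 'text in row', '256';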

    Tim

    Tim Wilkinson

    "If it doesn't work in practice, you're using the wrong theory"
    - Immanuel Kant