table design

  • Hi friends

    We are in the process of moving from a Visual FoxPro database to SQL Server 2000.

    Most of our FoxPro tables have more than 25 columns, and we have some 10 tables with 200 columns.

    I know that in SQL Server we can have 1,024 columns in a table. My question is: how many columns should we ideally have in a table (i.e., what is a sensible maximum) for good performance?

    Thanks for your ideas.

    I am not familiar with any correlation between the number of columns and performance (assuming the design is normalized); however, keep in mind that the total storage for each row cannot exceed 8,060 bytes (roughly 8K).
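
    If you want a rough idea of how close a table comes to that limit, something like this should work in SQL Server 2000 (table1 is just an example name; variable-length columns are counted at their declared maximum here, so actual rows may be shorter):

    -- Sum of the declared maximum column lengths for one table,
    -- using the SQL Server 2000 system tables
    SELECT SUM(length) AS max_row_bytes
    FROM syscolumns
    WHERE id = OBJECT_ID('table1')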

    Thanks for the post, mssql_rules.

    Actually, I exported one table with 180 columns and some 8,000 records to SQL Server.

    When I ran select * from table1, it took 7 seconds.

    I have another table in SQL Server with 25 columns and 15,000 records.

    When I ran select * from table2, it took 0 seconds.

    It looks to me like it's because of the number of columns!

  • If you're running these queries in Query Analyzer, turn on Client Statistics (Ctrl+Shift+K).

    The delay you're experiencing is probably due to the client formatting the output.

    Also, make sure you are not outputting to grid. That slows things down drastically while the output is formatted into rows and columns.

    As for the best number of columns, a very general rule is the fewer the better.
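
    You can also have the server report its own timings to confirm where the time is going. As a quick sketch (reusing table1 from the earlier post), if the server-side elapsed time is small, those 7 seconds are being spent on the client:

    -- Reports parse/compile and execution times measured on the server;
    -- client-side formatting time is not included in these numbers
    SET STATISTICS TIME ON
    SELECT * FROM table1
    SET STATISTICS TIME OFF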

    --------------------
    Colt 45 - the original point and click interface

    Hey, you are lucky. I have a project that came to me about a month ago when my company bought out another company... and their database CAME FROM Q&A! (which, for you young people, was before FoxPro). Just to give you a clue, on THE SMALLEST table we got: we imported it into Excel from a CSV, and after it was in, the letters on the column names ended at QZ! (all the way through A-Z, like 17, almost 18 TIMES!). We are having a hell of a time with this stuff. Right now that one table is split up between 4 "sub" tables. Anyone have any other ideas on how to speed things up or how things could be laid out? We are still in the dev stage and would love to change things now rather than later.

    On a side note, I come from a *nix background in programming and my new company is all Windows, and since this project had to be done quickly I just installed PHP on a Win2k3 server. Later I am going to move it to .NET, but does anyone know how much of a hit it will take because of the use of PHP?


    -- Sal.

    I suggest that if tables have that many columns, the database needs to be redesigned. This is an indication that the database is far from third normal form.

    Of course, you should shoot for fourth or fifth normal form -- meaning, for one thing, that any fields in the main tables whose values should be limited to a discrete set should be linked through foreign keys (these are typically alphanumeric codes that should be designed to be mnemonic) to what some call "lookup tables". Such tables typically contain two or three fields -- the code (primary key), a name for the value, and possibly a description field. For example, States and ZipCodes can be in lookup tables. Constraints can be created and used to check for valid values. Before updates are written to the database, software can, if it wants to be ultra-safe, check to make sure that such fields contain valid values. One common way to do this is to use these lookup tables to load dropdown listboxes for user selection.
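
    As a minimal sketch (the table and column names here are made up for illustration), a lookup table and a foreign key reference to it might look like this:

    -- Hypothetical lookup table: mnemonic code, name, optional description
    CREATE TABLE States (
        StateCode   char(2)      NOT NULL PRIMARY KEY,
        StateName   varchar(50)  NOT NULL,
        Description varchar(200) NULL
    )

    -- A main table references the lookup, so only valid codes can be stored
    CREATE TABLE Customers (
        CustomerID   int          NOT NULL PRIMARY KEY,
        CustomerName varchar(100) NOT NULL,
        StateCode    char(2)      NOT NULL REFERENCES States (StateCode)
    )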

    Of course, redesigning the database would make it necessary to rewrite the software. One approach is to redesign the database, import the data (a big chore in itself), and then create views that present the same names as the big tables the software uses. Then the software can be redesigned incrementally. I think performance will improve in the long run.
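
    For example (again with hypothetical names), a view can present the old wide-table shape on top of the new normalized tables so the existing queries keep working:

    -- Legacy code keeps selecting from CustomerInfo while the underlying
    -- storage is normalized into the Customers and States tables above
    CREATE VIEW CustomerInfo
    AS
    SELECT c.CustomerID, c.CustomerName, c.StateCode, s.StateName
    FROM Customers c
    JOIN States s ON s.StateCode = c.StateCode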

    I once worked on a customer service project where the database was designed by a professor from a local university who was supposed to be an expert but who was more of a buddy to the VP in the local DC-beltway-bandit company running the project. The database consisted of one table with 200 columns (duh!) and was running on an early TeraData DBMS. A simple query would take 20 minutes or more. Needless to say, the productivity of the programmers was abysmal. That system never got to production. The prime contractor came in, took over the project, and replaced TeraData with a DataEase system. They periodically exported the DataEase data back to the TeraData system. (Why, I don't know, except that they had paid millions of dollars for the TeraData system.)


    Thank you all for your great input.

    Hi Phill Carter, could you tell me what Client Statistics does?

    Thanks

    Turning on Client Statistics in Query Analyzer gives you the following info on a separate tab:

    Application Profile Statistics

    Timer resolution (milliseconds) 0 0

    Number of INSERT, UPDATE, DELETE statements 0 0

    Rows affected by INSERT, UPDATE, DELETE statements 0 0

    Number of SELECT statements 1 1

    Rows affected by SELECT statements 1 1

    Number of user transactions 1 1

    Average fetch time 0 0

    Cumulative fetch time 0 0

    Number of fetches 0 0

    Number of open statement handles 0 0

    Max number of opened statement handles 0 0

    Cumulative number of statement handles 0 0

    Network Statistics

    Number of server roundtrips 1 1

    Number of TDS packets sent 1 1

    Number of TDS packets received 1 1

    Number of bytes sent 502 502

    Number of bytes received 489 489

    Time Statistics

    Cumulative client processing time 1 1

    Cumulative wait time on server replies 0 0

    This was a simple select on a table with one row.

    --------------------
    Colt 45 - the original point and click interface

