Number of columns

  • Hi friends,

    We need your advice on database design. We are currently in the process of migrating from Visual FoxPro to SQL Server, and some of our FoxPro tables have more than 100 columns each.

    If we migrate them to SQL Server as they are, is there any performance loss?

    What is the ideal number of columns to keep in a SQL table?

    Are there any other design issues we should consider?

    Thanks for your advice.

    Cheers

  • SQL Server should be able to cope with the volume of data you stored in FoxPro, although you might consider partitioning some of those wide tables and joining them back together when necessary.

    Additionally, with DBs like FoxPro, the tendency on the client side is to use lots of cursors with the concept of a "current row pointer".  SQL Server supports this, but it (along with many other "server" databases) is optimised for set-based logic - you ask it for a set of results, it returns them, and that's the end of the conversation.
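    As a minimal sketch of the difference (the Orders table and its OrderID/OrderDate/Discount columns are just made up for illustration) - the row-by-row cursor approach versus a single set-based statement:

    -- Cursor style: process one row at a time (the way FoxPro client code tends to think)
    DECLARE @OrderID int
    DECLARE order_cursor CURSOR FOR
        SELECT OrderID FROM Orders WHERE OrderDate < '19990101'
    OPEN order_cursor
    FETCH NEXT FROM order_cursor INTO @OrderID
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE Orders SET Discount = 0.10 WHERE OrderID = @OrderID
        FETCH NEXT FROM order_cursor INTO @OrderID
    END
    CLOSE order_cursor
    DEALLOCATE order_cursor

    -- Set-based style: one statement, the server handles all the rows in one pass
    UPDATE Orders SET Discount = 0.10 WHERE OrderDate < '19990101'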

    Finally, something I bump up against as we work with several pieces of software written in FoxPro - the date fields in FoxPro are a LOT more forgiving.  We see dates in the year 1002 (the user had intended 2001) - whilst FoxPro will accept such a date, a SQL Server datetime field will not.  DTS just errors in an unhelpful way when transferring such data.  The trick is to transfer it to a char/varchar column in SQL Server using a YYYYMMDD format, then parse it in SQL and replace bad dates (less than 1900) with some predefined value.  I spent a while digging around the net and finally found the FoxPro command for it - use DTOS() to format the FoxPro date this way...
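    On the SQL Server side it looks roughly like this (the ClaimStaging table and its ClaimDate_raw/ClaimDate columns are hypothetical names, and 1 Jan 1900 is just an arbitrary placeholder value):

    -- ClaimDate_raw holds the FoxPro date as char(8) in YYYYMMDD form (produced by DTOS() during the transfer)
    -- Replace anything before 1900, or anything that is not a valid date, with the placeholder
    UPDATE ClaimStaging
    SET ClaimDate_raw = '19000101'
    WHERE ClaimDate_raw < '19000101' OR ISDATE(ClaimDate_raw) = 0

    -- YYYYMMDD (style 112) converts unambiguously to datetime regardless of the server's date settings
    UPDATE ClaimStaging
    SET ClaimDate = CONVERT(datetime, ClaimDate_raw, 112)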

    Sorry to rant on about the last bit - I spent a while just the other day on this annoying problem on a large data import

  • Thanks for the post, Ian. We'll consider your suggestions in our design.
    BTW, can you suggest any websites for good practices on database design, indexes, etc.?
    Thanks again
     
  • This site

  • Quick question, Ian.
    I am just importing a table from FoxPro to SQL Server and got the following error:

    Error: Preparation SQL Task: Warning: The table 'claim' has been created but its maximum row size (87839) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes. (SQL Server Import and Export Wizard)

    Any ideas? What does that mean? BTW, this table has 183 columns!!

  • The table is too wide. A row MUST fit in a data page or the insert fails. The maximum amount of data a row can occupy on a page is 8060 bytes, so any row containing more than 8060 bytes of data cannot be written to the page, and the insert/update would fail. The message is just a warning that this may happen. That is where normalization is important... or vertical partitioning.

  • 183 columns is HUGE!   I hope you aren't selecting them all with select * all of the time - best to partition it and only select the columns you need...

    As Remi said, your table will work just fine unless you put > 8060 bytes of data in a row (remembering that an int is 4 bytes, a GUID is 16, char(n)/varchar(n) are n bytes, 8 bit columns in a table share 1 byte, etc.), in which case the insert will fail.  You could partition the table into several, and if you really, absolutely must have that many columns (at least until you recode your app a bit more), then make a view with the name of the original FoxPro table which joins the tables back together...  That would give you the benefit of allowing your legacy code to work without too many changes, while any new code you write can use the more specific underlying tables.
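    A rough sketch of that approach (the claim_core/claim_details split and their columns are purely illustrative - pick groupings that make sense for your data):

    -- Two narrower tables sharing the same key
    -- (after moving the data out and dropping or renaming the original wide claim table)
    CREATE TABLE claim_core    (ClaimID int NOT NULL PRIMARY KEY, ClaimDate datetime, Amount money)
    CREATE TABLE claim_details (ClaimID int NOT NULL PRIMARY KEY, Notes varchar(500), HandlerName varchar(100))
    GO

    -- A view with the old FoxPro table name lets legacy code keep working unchanged
    CREATE VIEW claim
    AS
    SELECT c.ClaimID, c.ClaimDate, c.Amount, d.Notes, d.HandlerName
    FROM claim_core c
    JOIN claim_details d ON d.ClaimID = c.ClaimID
    GO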

    If you have lots of varchar(x) columns, try reducing them down to what you really need...  E.g. if you made all 183 columns varchar(500), then you have a maximum row size of 91500 bytes (plus overheads).  You may only store 10 chars per column, but since you *might* exceed the maximum, you get the warning.  Note that if you made the columns char(500), then regardless of the string length, 500 bytes is always occupied and you could not insert rows into the table at all (Remi: do nulls take up space?)
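    To see where the declared row size is going, something like this lists the widest columns of the imported table first (assuming the table is still called claim):

    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'claim'
    ORDER BY CHARACTER_MAXIMUM_LENGTH DESC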

    Cheers

  • Note that if you made the columns char(500), then regardless of the string length, 500 bytes is always occupied and you could not insert rows into the table (Remi: do nulls take up space?)

    I'm not Remi, but I'll bite anyway. Yes, fixed-size columns (like char) use the full space. A varying-size column will not store anything (other than the 2 bytes of overhead needed for every varying-size column) when the column is set to null.
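    A quick way to see the difference for yourself (the table names here are just for illustration):

    CREATE TABLE FixedWidth (c char(500) NULL)
    CREATE TABLE VarWidth   (c varchar(500) NULL)

    -- Fill each with 1000 NULL rows
    DECLARE @i int
    SET @i = 0
    WHILE @i < 1000
    BEGIN
        INSERT FixedWidth (c) VALUES (NULL)
        INSERT VarWidth   (c) VALUES (NULL)
        SET @i = @i + 1
    END

    -- FixedWidth reserves the full 500 bytes per row; VarWidth stores next to nothing
    EXEC sp_spaceused 'FixedWidth'
    EXEC sp_spaceused 'VarWidth'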

  • Well, I'm not Remi either, but I'll have to agree.

  • I only asked Remi because I reckoned I could bet the farm (if I owned one!) that Remi would be the one to answer the question after I posted it...

    I guess it's a typical reason why I don't gamble!

  • Well I did... once I got up from bed.

  • I have a farm, but I'm smart enough (barely) not to bet anything on it.
