Changing a column from INT to BIGINT - Urgent

  • Hi Guys & Girls,

    We are experiencing a major headache, and any help would be greatly appreciated.

    We have a relatively large database (60 GB) and need to upgrade an int column to a bigint column. There are two major problems:

    1. We don't want to wait 24 hours.

    2. We don't want the log file to grow too large (doing an ALTER COLUMN fills up the 20 GB log disk).

    We figure we could create a second column, copy the data across in chunks, drop the first column and then rename the new one. This is an option, but it is really slow. Since we did a backup before this upgrade, it would be really nifty to be able to disable SQL's transaction logging while we do the ALTER TABLE thing... that would solve both our problems.
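    For the record, here's a rough T-SQL sketch of the chunked-copy idea (the table dbo.BigTable and column id are made up; it assumes simple recovery so log space is reused between batches):

      ALTER TABLE dbo.BigTable ADD id_new bigint NULL
      GO

      -- Copy across in batches so each transaction stays small
      SET ROWCOUNT 10000
      WHILE 1 = 1
      BEGIN
          UPDATE dbo.BigTable SET id_new = id WHERE id_new IS NULL
          IF @@ROWCOUNT = 0 BREAK
      END
      SET ROWCOUNT 0
      GO

      -- Then swap the columns over
      ALTER TABLE dbo.BigTable DROP COLUMN id
      EXEC sp_rename 'dbo.BigTable.id_new', 'id', 'COLUMN'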

    Thanks in advance! 😉

    Aside:

    We also have some tables with primary keys that are going to need the same upgrade sometime... just think of all the fun with references, indexes and other monsters.

    🙁

  • You cannot disable transaction logging.

    You give the size of the db but not of the table. How many rows and how many indexes?

    Have you considered using bcp to copy out the data, truncating the table, altering it, and then using bcp (with minimally logged bulk copy) to refill the table? Use the ORDER and TABLOCK hints with bcp. Dropping the nonclustered indexes before loading and recreating them afterwards can also save time, particularly if tempdb is on its own disks and you use the WITH SORT_IN_TEMPDB option.
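    Roughly like this (server, path and table names are placeholders; the ORDER hint should match the clustered index):

      rem From a command prompt (native format, trusted connection):
      bcp MyDB.dbo.BigTable out D:\dump\BigTable.dat -n -T -S MYSERVER

      -- In Query Analyzer (any PK or FK on the column must be dropped first;
      -- the type change is quick on an empty table):
      TRUNCATE TABLE dbo.BigTable
      ALTER TABLE dbo.BigTable ALTER COLUMN id bigint NOT NULL

      rem Back at the command prompt:
      bcp MyDB.dbo.BigTable in D:\dump\BigTable.dat -n -T -S MYSERVER -h "ORDER (id ASC), TABLOCK"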

    --Jonathan

    Thanks Jonathan, it looks like our copy will do the trick. We need to grow the log file manually to around 1 GB before we upgrade, to avoid the extra time SQL takes to grow the file. BCP would be fantastic if we could BCP only one row, without doing the whole table. There are a couple of indexes and plenty of references, making it a little tricky to do the new-table thing.
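    Pre-growing it in one go, for anyone following along (the logical file name is a guess; sp_helpfile shows the real one):

      -- Check the logical name of the log file first
      EXEC sp_helpfile

      -- Grow the log to 1 GB in a single operation instead of many autogrows
      ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Log, SIZE = 1024MB)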

    Another problem with the approach I mentioned is that when we change the column from NULL to NOT NULL, the transaction log grows again! (And it takes hours...)

    No idea what it's doing there. It looks like it's doing it row by row, rather than doing one scan and then just changing the schema. I'm going to have lots of grey hair after this.

    The DB size is made up mainly by this table, probably 2/3 or so, so around 40 GB.

    I know the docs say you can't disable transaction logging (but I was hoping for an undocumented hack).


    Interesting statement... "would be fantastic if we could BCP only one row". Care to explain further? BTW, you can bcp using a query.
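    For example, with queryout you can bcp the result of any SELECT, so a single row is no problem (names and path are placeholders):

      bcp "SELECT id FROM MyDB.dbo.BigTable WHERE id = 42" queryout D:\dump\onerow.dat -n -T -S MYSERVER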

    Depending on how much the table is increasing, you may find it as easy to copy the database elsewhere, make the changes on a separate machine, then copy it back. If there weren't many new additions, they could just be copied over to the changed database before the copy back. It still takes time, but you could use any fast method (I also like bcp) and could do a number of changes without affecting the live system.

    Guarddata-

  • Thanks,

    Yeah, I can get the data out of one row with bcp, but I can't bcp only one row back. 😉

    The upgrade is on a test machine, so it's not on a live machine... yet.

    Why can't you change the database to bulk-logged or simple recovery, change the type to bigint, then change back to full recovery?
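    i.e. something along these lines (MyDB and the table are placeholders):

      ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED
      ALTER TABLE dbo.BigTable ALTER COLUMN id bigint NOT NULL
      ALTER DATABASE MyDB SET RECOVERY FULL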

    We're already using the simple recovery model, but since the ALTER TABLE command is one "atomic" command, SQL is forced to write all the changes to the log first; otherwise, if something goes wrong, the database is stuffed.

    In this case I don't really care, though, because I know I have a tried and tested backup if things go fuzzy. Just need to convince SQL of that too 😉

    I know you can truncate the log under the full model when you back up the log, but I'm not sure whether it will actually truncate, since this will all be part of the "active portion".
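    (i.e. something like this, with a placeholder path:)

      BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_log.bak'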

    It still won't help my performance problems though.

  • quote:


    Thanks,

    Yeah, I can get the data out of one row with bcp, but I can't bcp only one row back. 😉


    I also don't understand these statements about "one row." Why can't you bcp the entire table out, drop the DRI, drop the nonclustered indexes, truncate the table, alter the table, then bcp the rows back in and recreate the indexes and DRI constraints?
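    In outline, with made-up table, index and constraint names (and assuming the referencing column was already NOT NULL):

      -- 1. Copy the data out (command prompt, native format):
      --    bcp MyDB.dbo.BigTable out D:\dump\BigTable.dat -n -T -S MYSERVER

      -- 2. Drop the DRI (FKs and the PK) and the nonclustered indexes
      ALTER TABLE dbo.OrderItems DROP CONSTRAINT FK_OrderItems_BigTable
      ALTER TABLE dbo.BigTable DROP CONSTRAINT PK_BigTable
      DROP INDEX BigTable.IX_BigTable_Customer

      -- 3. Empty the table and change the types (instant on the empty table;
      --    the referencing column must match the new type)
      TRUNCATE TABLE dbo.BigTable
      ALTER TABLE dbo.BigTable ALTER COLUMN id bigint NOT NULL
      ALTER TABLE dbo.OrderItems ALTER COLUMN bigtable_id bigint NOT NULL

      -- 4. Recreate the PK, then bcp back in (ORDER matches the clustered PK):
      ALTER TABLE dbo.BigTable ADD CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (id)
      --    bcp MyDB.dbo.BigTable in D:\dump\BigTable.dat -n -T -S MYSERVER -h "ORDER (id ASC), TABLOCK"

      -- 5. Recreate the nonclustered indexes and the FK
      CREATE INDEX IX_BigTable_Customer ON dbo.BigTable (customer_id) WITH SORT_IN_TEMPDB
      ALTER TABLE dbo.OrderItems ADD CONSTRAINT FK_OrderItems_BigTable
          FOREIGN KEY (bigtable_id) REFERENCES dbo.BigTable (id)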

    --Jonathan

    Sorry guys, I meant one column, not one row. I can do this, but the overhead and risk aren't really worth it, since we would also have to find each and every table that references data in the table needing the upgrade, drop the references and recreate them.

    I just can't believe that nobody has ever run out of ids in a big table before and that the powers that be never catered for such a situation.

    Oh well... you live, you learn 😉

    Why are you so hungry for a bigint? You could have an ID which spreads over two columns.
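    A sketch of what I mean (purely illustrative names; when id_lo runs out, you reseed it and bump the default for id_hi):

      CREATE TABLE dbo.BigTable2 (
          id_hi int NOT NULL DEFAULT 0,
          id_lo int IDENTITY(1,1) NOT NULL,
          CONSTRAINT PK_BigTable2 PRIMARY KEY (id_hi, id_lo)
      )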

  • Thanks Kay,

    That's something I hadn't thought of.

  • quote:


    Sorry guys, I meant one column, not one row. I can do this, but the overhead and risk aren't really worth it, since we would also have to find each and every table that references data in the table needing the upgrade, drop the references and recreate them.


    This isn't difficult, particularly if you use DMO. I can send you a tiny VB utility to create all the scripts if you like.

    The time required to script out all the drops and creates is negligible, and the time to bcp out, bcp in, and recreate indexes will be a fraction of the time that it will take to alter the datatype.
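    If you just want a quick list of what references the table before scripting, sp_fkeys shows every foreign key pointing at it (BigTable is the running placeholder):

      -- Lists all foreign keys that reference BigTable's primary key
      EXEC sp_fkeys @pktable_name = N'BigTable'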

    --Jonathan

  • Thanks Jonathan, that would be most helpful.

    What about changing the seed to a negative? Ints have an upper bound of 2 billion or so, but they also have a lower bound of -2 billion.
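    A sketch, assuming the column is an identity on dbo.BigTable (the name is a placeholder; reseeding one above the minimum to be safe):

      -- Current identity value becomes -2147483647;
      -- the next insert uses that plus the increment
      DBCC CHECKIDENT ('dbo.BigTable', RESEED, -2147483647)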

    Andy

    http://qa.sqlservercentral.com/columnists/awarren/
