Performance issue.

  • Hi all,

    I have been studying filegroups in SQL Server 2008, and I have one question about them.

    " i have database and it contains near about 20 tables. and each tables contains 10000 record for each day. for a year it will be 10000*30*12. if i will create the 20 file group and create single table in each file group then it will increase performance or not."

    create database TestProject
    on primary
        (name = ProjectPrimary,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectPrimary.mdf'),
    filegroup ProjectsFG
        (name = ProjectData1,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData1.ndf'),
        (name = ProjectData2,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData2.ndf'),
        (name = ProjectData3,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData3.ndf'),
        (name = ProjectData4,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData4.ndf'),
        (name = ProjectData5,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData5.ndf'),
        (name = ProjectData6,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData6.ndf'),
        (name = ProjectData7,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData7.ndf'),
        (name = ProjectData8,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData8.ndf'),
        (name = ProjectData9,  filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData9.ndf'),
        (name = ProjectData10, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData10.ndf'),
        (name = ProjectData11, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData11.ndf'),
        (name = ProjectData12, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData12.ndf'),
        (name = ProjectData13, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData13.ndf'),
        (name = ProjectData14, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData14.ndf'),
        (name = ProjectData15, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData15.ndf'),
        (name = ProjectData16, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData16.ndf'),
        (name = ProjectData17, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData17.ndf'),
        (name = ProjectData18, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData18.ndf'),
        (name = ProjectData19, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData19.ndf'),
        (name = ProjectData20, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectData20.ndf')
    log on
        (name = Archlog1, filename = 'D:\pankaj kuchaliya\Database Projects\ProjectLog.ldf')

    If I create different filegroups, will it increase performance?

    Please help.

    Appropriate indexing will have a far more significant effect on read performance than creating filegroups.

    If you need to physically separate the data for other reasons then it may be worth partitioning the data (a minimal partitioning sketch is at the end of this post).

    Why do you think that adding filegroups and fragmenting your data across them would increase performance?
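
    To make the partitioning idea concrete, here is a minimal sketch; every object name, the date column and the boundary dates are hypothetical and would need to be mapped onto your own schema:

    -- Rough illustration only: range-partition a daily-feed table by date.
    create partition function pfByYear (date)
        as range right for values ('2011-01-01', '2012-01-01', '2013-01-01');

    -- Everything is mapped to PRIMARY here; each range could instead go to its own filegroup.
    create partition scheme psByYear
        as partition pfByYear all to ([primary]);

    create table dbo.DailyFeed
    (
        FeedDate  date        not null,
        FeedValue varchar(50) null
    ) on psByYear (FeedDate);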

  • "I have studied in sql book , data should be .ndf files."

    Should I create a separate filegroup to load the data, or should I use only the primary filegroup?

    I also faced one problem: after loading the data the database size was 80 GB. I deleted some tables that were not required, but the database still showed the same size, so I shrank the database, and that took about 15 hours.

    If I create separate filegroups, will query processing and shrinking still take the same amount of time?

  • Yes, I prefer to have data in a Filegroup for Data, and leave the PRIMARY Filegroup for the system tables. It won't hurt you to do this.

    My default database setup is PRIMARY @ 4MB, DATA (sized appropriately) and INDEX (sized to contain the non-clustered indexes). That way the INDEX filegroup can be moved to a separate physical disk to improve access (see the layout sketch at the end of this post).

    Shrinking a large database that has become very fragmented will always take a long time. Shrinking a database is something that should be done VERY rarely, if ever. Having removed a bunch of tables from the database is one occasion where I can accept that it might be appropriate.

    I'm struggling to understand what you are hoping to achieve with this. A table with 3.6m rows is barely going to stretch SQL Server on any competent server build if you have a sensible index strategy. If you were talking about 300m rows I think we could start to worry.

    Edit: It helps if I read the question properly - 20 tables each having 3.6m rows.

    You will only get a benefit from separating the tables into their own filegroups if you can put those filegroups on separate disks (PHYSICAL disks... not Logical ones!)
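
    As a rough sketch of that layout (all names, paths and sizes below are examples only, with the E: drive standing in for a separate physical disk):

    -- Illustrative only: PRIMARY kept small for system tables, data and indexes split out.
    create database TestProject
    on primary
        (name = ProjectPrimary, filename = 'D:\SQLData\ProjectPrimary.mdf', size = 4MB),
    filegroup [DATA]
        (name = ProjectData,    filename = 'D:\SQLData\ProjectData.ndf',    size = 10GB),
    filegroup [INDEX]
        (name = ProjectIndex,   filename = 'E:\SQLIndex\ProjectIndex.ndf',  size = 5GB)
    log on
        (name = ProjectLog,     filename = 'D:\SQLLog\ProjectLog.ldf',      size = 2GB);

    -- Tables then go ON [DATA] and non-clustered indexes ON [INDEX], for example:
    -- create nonclustered index IX_MyTable_MyColumn on dbo.MyTable (MyColumn) on [INDEX];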

    "Yes, I prefer to have data in a Filegroup for Data, and leave the PRIMARY Filegroup for the system tables. It won't hurt you to do this."

    Yes, 20 tables each having 3.6m rows, but that is not just one year of data; I have to load 5 years of data.

    I have the data in XML format and I just import it into the 20 tables.

    After that I join them together and insert the result into a new single table (CombinationOf20Tables), roughly as sketched at the end of this post.

    On the basis of some rules I then order that single table (CombinationOf20Tables).

    After the ordering I generate the reports my senior wants.

    The problem is that when I create the single table (CombinationOf20Tables) from the joins, I need some extra tables to help me store the intermediate results. After the single table is created I don't need the extra tables any more, so I delete them, but deleting them does not release the space, so I use the shrink option, and that takes too much time.

    So I think I should use a different filegroup and also use proper indexing (as you suggest).

    And as you suggest, the filegroups should be on different physical disks (meaning different hard disks). Am I right?
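
    The combine step I mean looks roughly like this (only three of the 20 source tables are shown here, and the table and column names are just placeholders for my real ones):

    -- Simplified illustration of building the combined table from the import tables.
    insert into dbo.CombinationOf20Tables (FeedDate, ValueA, ValueB, ValueC)
    select t1.FeedDate, t1.ValueA, t2.ValueB, t3.ValueC
    from dbo.ImportTable1 as t1
    join dbo.ImportTable2 as t2 on t2.FeedDate = t1.FeedDate
    join dbo.ImportTable3 as t3 on t3.FeedDate = t1.FeedDate;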

    As above, I think it is more about how the queries that fetch data from these tables will perform. Your solution (20 filegroups) might be appropriate, but it depends on the queries accessing the data.

    If you do split the database into multiple files like that, it is best to have completely separate disks as well.

    "The problem is that when I create the single table (CombinationOf20Tables) from the joins, I need some extra tables to help me store the intermediate results. After the single table is created I don't need the extra tables any more, so I delete them, but deleting them does not release the space, so I use the shrink option, and that takes too much time."

    Putting the "extra table" into a separate filegroup will take care of the shrink issues, but its still better not to shrink if you can avoid it.

    Every time you run this, you are going to grow the db with the extra tables. It may be better to leave the addtional space allocated to the db to prevent it having to extend each time you run this.
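
    A minimal sketch of that idea (the filegroup name, file path, size and table definition are all examples, not a recommendation for your exact setup):

    -- Put the work/"extra" tables on their own filegroup.
    alter database TestProject add filegroup Staging;

    alter database TestProject
        add file (name = ProjectStaging,
                  filename = 'E:\SQLStaging\ProjectStaging.ndf',
                  size = 10GB)
        to filegroup Staging;

    -- Work tables created ON Staging keep their growth away from the main data files.
    create table dbo.WorkTable1
    (
        FeedDate date        not null,
        ValueA   varchar(50) null
    ) on Staging;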

    Thanks for the reply.

    OK, I will put the extra tables into a different filegroup, but when I delete the extra data it does not automatically shrink the database; I have to shrink the database manually, and that takes time. How can I prevent the database from growing because of the extra tables?

  • After you truncate the extra table, leave the file as it is. Why shrink it? The next time you build, it will have to reallocate space and grow. Skip the shrink and let it be.
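
    In other words, at the end of each run do something like this (the table name is a placeholder):

    -- Empty the work table but keep the allocated space for the next run.
    truncate table dbo.WorkTable1;
    -- ...and deliberately skip DBCC SHRINKDATABASE / DBCC SHRINKFILE afterwards;
    -- the freed pages stay inside the file and are reused on the next build.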

  • My thoughts exactly.

    When you delete the contents of these tables, the space will be free within the database, and when you do the next iteration that space will get used first.

    The database size on disk should remain relatively static if you follow this approach (you can confirm that with a query like the one below).
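
    A generic check of the free space inside each file of the current database looks like this:

    -- Size, used and free space (in MB) per file; size and SpaceUsed are reported in 8 KB pages.
    select name,
           size / 128.0                                     as FileSizeMB,
           fileproperty(name, 'SpaceUsed') / 128.0          as SpaceUsedMB,
           (size - fileproperty(name, 'SpaceUsed')) / 128.0 as FreeSpaceMB
    from sys.database_files;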

    OK, I have now started working on it. Let's see what happens; I will let you know the result.

    Great, thanks. ;-)
