Number of tables in database

  • Hi all,

    Can anyone clear up a doubt for me, please?

    We have a database where we will store data from multiple electricity meters.

    We receive data from the meters in the form of XML files, format that data according to our requirements, and store it in the database.

    Since the number of columns is not fixed for each meter, we planned to maintain separate tables per meter. Each meter will need a maximum of 15 tables.

    We may need to maintain data for 10,000 meters in a single database, so there will be approximately 150,000 tables in the database.

    My question is: will this affect performance? We need to generate reports from this database, so will reporting become slow?

    If it does become slow, is there any way to make it fast?

    Will indexes help in this situation?

    Currently we have 20K tables and performance is good; we are able to get reports quickly.

    If the number of tables increases, what problems might we face? I am asking about the reporting side.

    One more thing to note: the tables will not hold much data. For each meter, one or two tables will hold up to 5K records; the remaining tables will hold no more than 500 records each.

    Please share your valuable suggestions on this.

    Thanks.

  • As long as your tables don't fill many pages in the database and statistics are up to date, there shouldn't be major performance issues. Performance doesn't depend only on these factors, though; there can also be performance issues from non-SQL resources (network, disk, memory, other tasks running on the server, and so on).
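
    If stale statistics are ever the concern, refreshing them across the whole database is one line (a standard SQL Server procedure):

    -- Refresh out-of-date statistics for all tables in the current database.
    exec sp_updatestats;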

    Also, since you mentioned the number of tables is going to be huge, I will suggest two options:

    1) Group the tables into schemas. For example, for east-area meter readings, create a schema named east and bind all the related tables to that schema, and so on. It will be easier to manage both object-wise and permission-wise: you can grant permissions to the users who need data from a particular schema (see the sketch after this list).

    2) Instead of schemas, you can split things into separate databases in the same way.
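
    A minimal sketch of option 1; the schema and grant syntax are standard SQL Server, but all the object names here are hypothetical:

    -- Hypothetical: one schema per area, with related tables bound to it.
    create schema east authorization dbo;
    go

    create table east.MeterReadings (
        ReadingID bigint identity(1,1) not null primary key,
        MeterNumber varchar(40) not null,
        Reading decimal(12,2) not null,
        ReadingDate datetime2 not null
    );
    go

    -- EastReportingUser is a hypothetical database user/role; one grant
    -- covers every table in the schema, now and in the future.
    grant select on schema::east to EastReportingUser;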

    ----------
    Ashish

  • I don't have experience with that many tables, so I think it should just be tested in your environment. The number of tables is limited by server capacity, but the total number of objects in a database can never exceed 2,147,483,647.
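
    For reference, a quick way to see how many tables and objects a database currently holds, using the standard catalog views:

    -- Counts for the current database.
    select count(*) as TableCount from sys.tables;
    select count(*) as ObjectCount from sys.objects;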

    But here is what I understand: you receive data from a meter, create a table if it's a new meter, and insert the data there. Or is a table only created for a new type of meter? I would think it would be logical to have all the data in one table for reporting. For a new type of meter you might then need the metadata set up in the right way, so you know which column each value needs to go to. If that works, and it results in that one table, you don't need all the other tables, right? If it really means you have tens of thousands of different types of meters, then the world of electricity meters is far more complex than I would have ever known.
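
    To make that metadata idea a little more concrete, here is one hedged sketch (all names invented): a mapping table that tells the loader, per meter type, which XML element feeds which target column.

    -- Hypothetical metadata: one row per (meter type, source element),
    -- naming the destination column for that element's value.
    create table dbo.MeterTypeColumnMap (
        MeterTypeID int not null,
        XmlElementName varchar(128) not null,
        TargetColumnName sysname not null,
        primary key (MeterTypeID, XmlElementName)
    );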

    Ronald Hensbergen
    Help us, help yourself... Post data so we can read and use it: http://www.sqlservercentral.com/articles/Best+Practices/61537/
    -------------------------------------------------------------------------
    2+2=5 for significant large values of 2

  • There will be different meter makers, and the data from each make will be in a different format; each follows some standard.

    Sometimes the data even differs between meters from the same maker.

    That is why we are maintaining different tables.

  • Can any DBA help clear this up for me, please?

    Thanks in advance.

  • I can't, because I've never seen a system like that.

    Is there any way to figure out how many types of meters you have and merge them all into the same table?

    That way I can tell you that even with millions of rows there's a way to have very good performance.
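
    To make that concrete, a minimal sketch with hypothetical names: one combined readings table, clustered to match a per-meter reporting pattern.

    create table dbo.AllMeterReadings (
        MeterID int not null,
        ReadingDate date not null,
        Reading decimal(12,2) not null
    );

    -- Clustering on (MeterID, ReadingDate) keeps each meter's history
    -- physically together, so per-meter report queries stay fast even
    -- with millions of rows.
    create clustered index CIX_AllMeterReadings
        on dbo.AllMeterReadings (MeterID, ReadingDate);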

    More tables just seem like a coding nightmare. Performance could be fine, but you need to test at full scale to be sure.

  • I would assume performance for querying is going to start degrading, but honestly, I've never seen a system with 150,000 tables before, so I couldn't tell you when.

    I suspect, really strongly, that there's a better way to do it than simply assigning individual tables. Surely, if these are mechanical devices, there are only a limited number of inputs, most of them common. The common values make up one table. Now, if there are 5 devices that have the common columns plus three extra columns, fine, add a child table for those devices, and so on, until you're looking at 15-30 tables, not 150,000.
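
    As a rough illustration of that shape (all names hypothetical): a common readings table, plus one small child table for the few devices that report extra values.

    -- Common columns shared by every device type.
    create table dbo.DeviceReadings (
        ReadingID bigint identity(1,1) not null primary key,
        DeviceID int not null,
        Reading decimal(12,2) not null,
        ReadingDate datetime2 not null
    );

    -- Extension table only for the device types with extra inputs.
    create table dbo.DeviceReadingsExtended (
        ReadingID bigint not null primary key
            references dbo.DeviceReadings(ReadingID),
        PhaseVoltage decimal(9,2) null,
        PowerFactor decimal(5,4) null
    );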

    Frankly, it doesn't sound like you're using the database as a relational storage engine, which is what it is. You're using it to store flat files, which I don't think it's going to do very well.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • It sounds like you would be better off simply storing the XML files in a column of one table. Sure, it would not perform well, but it would be better than having 150,000 tables.

  • You are most likely not going to use all the information from these differently formatted XML files; likely you will only use a subset of the elements and attributes of each format. I would suggest (as was suggested before) storing each file in a single table, assigning each file a unique key in that table. Then, in another table, you can put all the information digested from the XML file, keeping the key of the original file with the digested data.

    This way you get a normalized set of data containing only the data you're actually interested in, and you still have the original XML file for reference purposes (or for re-loading the data if need be). I would also suggest storing the XML file in a varbinary(max) column, not xml. SQL Server has a nasty habit of messing with the encoding of XML files, plus you would not be able to store a file if its content is not perfectly valid XML. By storing it as varbinary(max), SQL Server will accept anything your users feed in, and it is up to your processing routines to extract the data into the tables or notify someone if a file cannot be processed.

    Just something I typed together in 3 minutes as a suggestion for the needed tables:

    use tempdb;

    -- Raw file archive: one row per received XML file, stored as
    -- varbinary(max) so any byte stream is accepted as-is.
    create table dbo.Files (
        FileID bigint identity(1,1) not null,
        [ProcessedDate] datetime null,
        [XML] varbinary(max) null,
        [CreateDate] datetime2 not null default getutcdate(),
        primary key clustered (FileID)
    );

    -- One row per meter, linked back to the file that introduced it.
    create table dbo.Meters (
        MeterID int identity(1,1) not null,
        Number varchar(40) not null,
        ReplaceDate datetime2 null,
        FileID bigint not null,
        [CreateDate] datetime2 not null default getutcdate(),
        primary key clustered (MeterID),
        foreign key (FileID) references dbo.Files(FileID)
    );

    -- Digested readings extracted from the files.
    create table dbo.Readings (
        ReadingID bigint not null,
        MeterID int not null,
        Reading decimal(12,2) not null,
        [Date] date not null,
        FileID bigint not null,
        [CreateDate] datetime2 not null default getutcdate(),
        primary key clustered (ReadingID),
        foreign key (MeterID) references dbo.Meters(MeterID),
        foreign key (FileID) references dbo.Files(FileID)
    );
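
    A hypothetical usage sketch to round this out: store one file, then shred it into rows with the xml type's nodes()/value() methods. The element and attribute names are invented, and the cast back to xml assumes the stored bytes are plain ASCII/UTF-8.

    declare @FileID bigint;

    -- Store the raw bytes; a varchar literal stands in for a real file here.
    insert into dbo.Files ([XML])
    values (cast('<Readings><Reading meter="M-001" value="123.45" date="2012-01-01" /></Readings>' as varbinary(max)));

    set @FileID = scope_identity();

    -- Shred the stored bytes back into relational rows.
    select r.x.value('@meter', 'varchar(40)')   as MeterNumber,
           r.x.value('@value', 'decimal(12,2)') as Reading,
           r.x.value('@date',  'date')          as ReadingDate
    from dbo.Files as f
    cross apply (select cast(f.[XML] as xml)) as c(doc)
    cross apply c.doc.nodes('/Readings/Reading') as r(x)
    where f.FileID = @FileID;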



    Posting Data Etiquette - Jeff Moden
    Posting Performance Based Questions - Gail Shaw
    Hidden RBAR - Jeff Moden
    Cross Tabs and Pivots - Jeff Moden
    Catch-all queries - Gail Shaw


    If you don't have time to do it right, when will you have time to do it over?
