Many-to-Many relationships with foreign key constraints?

  • Grr. This is driving me nuts. I'm sure there has to be a way of accomplishing the following.

    I have two tables. Table A has a compound primary key on Col1 & Col2, because you can have multiple copies of Col1 but only one copy of the two columns together. Table B has TableA.Col1 in it. I want to create a foreign key reference on Table B pointing to Col1 only (I don't want TableA.Col2 in Table B).

    Yet every time I try to create this Foreign Key reference I get "There are no primary or candidate keys in the referenced table..."
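    A minimal sketch of what I'm doing (names simplified; TableBId is just a stand-in):

    ```sql
    -- Table A: compound primary key, because Col1 repeats
    CREATE TABLE TableA (
        Col1 INT NOT NULL,
        Col2 INT NOT NULL,
        CONSTRAINT PK_TableA PRIMARY KEY (Col1, Col2)
    );

    -- Table B: I only want to carry and reference Col1
    CREATE TABLE TableB (
        TableBId INT NOT NULL PRIMARY KEY,
        Col1     INT NOT NULL
    );

    -- This is the statement that fails with the error above
    ALTER TABLE TableB
        ADD CONSTRAINT FK_TableB_TableA
        FOREIGN KEY (Col1) REFERENCES TableA (Col1);
    ```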

    Surely there is a way to create a Many-to-Many foreign key relationship between two tables without including all parts of a Primary Key from the other table???

    I've googled, but all the articles keep saying things like "Only use Col1 as the PK" (which I can't) or "Use both columns in Table B" (which would make Table B too bulky considering how many FKs are about to be on it).

    Thoughts?

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • You're stuck. A foreign key can only reference a candidate key (a primary key or a unique index), and it has to cover the whole key. You could create a second unique index with a different leading edge... not attractive.

    I'd reassess the structure because if you're trying to skip out on parts of the key, something isn't quite right.
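    Roughly, these are the only shapes the engine will accept (a sketch, using the simplified names from above):

    ```sql
    -- Option 1: carry Col2 in TableB and reference the whole candidate key
    ALTER TABLE TableB ADD Col2 INT NOT NULL;  -- assumes TableB is still empty
    ALTER TABLE TableB
        ADD CONSTRAINT FK_TableB_TableA_Full
        FOREIGN KEY (Col1, Col2) REFERENCES TableA (Col1, Col2);

    -- Option 2: a unique index on Col1 alone would allow the single-column FK,
    -- but it can't exist here, because Col1 has duplicates in TableA
    -- CREATE UNIQUE INDEX UX_TableA_Col1 ON TableA (Col1);
    ```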

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Actually re-assessing is what I'm in the middle of.

    The problem is data warehouse design, where I have to account for changing Entity records (different types of people, where each "type" is in a different table). So I have Stores, Customers, Suppliers, etc. If a Customer record changes, I need to be able to add a whole new copy of the record with the changed (and unchanged) elements.

    Unfortunately, because Products (and other dependent objects) are in separate tables, I either need to keep the same StoreKey / CustomerKey / SupplierKey for the new Entity records or I need to do some serious re-coding to update the dependent records in addition to the Entity records if I use brand new Entity keys when those records get updated.

    The example is:

    John Doe
    114 Lois Lane
    Chicago, IL 35587

    Well, someone got the zip wrong in the original OLTP DB. So I find CustomerKey 1234 (which is John Doe above, and happens to have an EffectiveDate of 01/01/2006) and I stick in a new record, which generates a CustomerKey of 9957 with all the same info except the corrected zip and an EffectiveDate of 07/14/2008.

    Once I do this, all tables with records that link back up to CustomerKey 1234 are now "bad". I need to go into those records and add new records that match back to CustomerKey 9957. And that is a LOT of coding.
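    To make the scale of it concrete, every dependent table needs something along these lines (Product is just an example name):

    ```sql
    -- Hypothetical sketch: clone the dependent rows that point at the old key
    -- so matching rows exist under the new key as well
    INSERT INTO dbo.Product (CustomerKey, ProductName)  -- ...plus every other column
    SELECT 9957, ProductName                            -- new key, same data
    FROM dbo.Product
    WHERE CustomerKey = 1234;                           -- old key
    ```

    And that has to be repeated for every table that carries CustomerKey.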

    The thought was (after discussion with another DBA) that we would add a secondary surrogate column that would allow us to do:

        CustomerKey  CustKey2
        -----------  --------
        1234         1
        1234         2

    Sort of like that, so that we could keep the same CustomerKey and just know which one was "current" by the CustKey2 and EffectiveDate, but only reference the CustomerKey itself in the foreign key constraints. The problem is that CustomerKey IS the PK & clustered index of the Customer table, and it won't allow multiple copies of key 1234 without adding CustKey2 to the PK, which leads to the problem I originally posted about.
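    In DDL terms, the sticking point looks like this (sketch):

    ```sql
    CREATE TABLE dbo.Customer (
        CustomerKey   INT  NOT NULL,  -- the key everything references
        CustKey2      INT  NOT NULL,  -- version counter
        EffectiveDate DATE NOT NULL,
        -- To allow two rows with CustomerKey = 1234, CustKey2 has to
        -- join the PK...
        CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerKey, CustKey2)
    );
    -- ...and then a single-column FK on CustomerKey is rejected,
    -- which is exactly the error from my first post.
    ```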

    At this point, I'm sure you can see why I'm getting aggravated.

    Any thoughts of a better way to tackle this problem?

    Brandie Tarvin, MCITP Database Administrator

  • We ran into a similar situation once, but in an OLTP system. Long story short, we only had to do this in a few places and the data load was fairly benign, so we put a trigger in place to enforce the referential integrity and just tossed the concept of FKs. It still sticks in my craw, but it worked.
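    Very roughly, the kind of thing we put in place (a sketch, not our production code; the Product/Customer names are just for illustration):

    ```sql
    -- Reject child rows whose CustomerKey has no match in the
    -- (non-unique) parent column
    CREATE TRIGGER trg_Product_CheckCustomer
    ON dbo.Product
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        IF EXISTS (SELECT 1
                   FROM inserted AS i
                   WHERE NOT EXISTS (SELECT 1
                                     FROM dbo.Customer AS c
                                     WHERE c.CustomerKey = i.CustomerKey))
        BEGIN
            RAISERROR('CustomerKey does not exist in dbo.Customer.', 16, 1);
            ROLLBACK TRANSACTION;
        END
    END;
    ```

    A matching trigger on the parent is needed to cover deletes and key updates from the other direction.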

    ----------------------------------------------------
    The Scary DBA

  • Triggers are the only way I can think of to comply with all of the stated needs.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • So you have type-2 SCD attributes on your snowflaked dimension, but you want the snowflake attribute to basically be a type-1 SCD.

    Since your Product dimension has a CustomerKey, changing the Customer record forces a new record in the Product dimension if you use CustomerKey, because that is your surrogate key.

    You should be careful with this. The fact that you have a snowflake to a dimension suggests you care about the relationship between the Product and the Customer. The fact that you have a Type-2 SCD field of ZipCode suggests you care about ZipCode change history. Now, if you create a Customer dimension that has the change history and a Product dimension that carries the customer ZipCode, you have the same attribute in your data warehouse twice with two different values. This may be what you want, but be careful. You also have to watch out for hierarchies and attribute relationships. If you have a rigid attribute relationship for ZipCode on your Product (not that you would, but I am just going with the example), your cube will require a full process when a ZipCode changes, because from that perspective it will not behave as a Type 2.

    Taking the approach of using the business key will work fine, just be careful with the display of this information and its use in the cube. It can get confusing pretty quickly.
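    For example, if you go with the business key, "current customer" has to be resolved at query time, something like this (a sketch, assuming the latest EffectiveDate wins):

    ```sql
    -- Resolve the current version of each customer by business key
    SELECT c.CustomerKey, c.CustKey2, c.ZipCode
    FROM dbo.Customer AS c
    WHERE c.EffectiveDate = (SELECT MAX(c2.EffectiveDate)
                             FROM dbo.Customer AS c2
                             WHERE c2.CustomerKey = c.CustomerKey);
    ```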

  • Hmm. That's a point I hadn't taken into consideration. I'm pretty new to all this DW / Cube stuff and am following the requirements laid out by the business. I didn't think that the business rules were going to create such a complicated issue.

    But then again, since when does the business care about how difficult a DBA's life is as long as they get their data in a timely manner and in the right order? @=)

    Brandie Tarvin, MCITP Database Administrator

  • It looks like what you are trying to do is versioning. That is, keeping old values intact for historical or audit purposes as you make modifications to the data. This is a big problem, in that FK relationships can't be maintained to a versioned table. I have a design pattern that solves this problem, but my write-up is currently 38 pages and growing. A bit long to post here. Give me a day or two to clean it up and, if this sounds like something that would help you, let me know and I will post it as an attachment. It won't be entirely finished in that time, but there is enough for you to implement a versioned table for testing. If it solves your problem, good.
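    In the meantime, here is the general shape of the idea, greatly stripped down (a sketch only; the write-up differs in the details):

    ```sql
    -- Generic sketch, not the full pattern: FKs target a stable identity
    -- table, while the attributes that change over time live in a version
    -- table keyed by identity + effective date
    CREATE TABLE dbo.Customer (
        CustomerKey INT NOT NULL CONSTRAINT PK_Customer PRIMARY KEY
    );

    CREATE TABLE dbo.CustomerVersion (
        CustomerKey   INT     NOT NULL
            CONSTRAINT FK_CustomerVersion_Customer
            REFERENCES dbo.Customer (CustomerKey),
        EffectiveDate DATE    NOT NULL,
        ZipCode       CHAR(5) NOT NULL,  -- ...and the other changing attributes
        CONSTRAINT PK_CustomerVersion PRIMARY KEY (CustomerKey, EffectiveDate)
    );

    -- Dependent tables keep a plain single-column FK to dbo.Customer,
    -- which never breaks when a new version row appears
    ```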

    Tomm Carr
    --
    Version Normal Form -- http://groups.google.com/group/vrdbms

  • Tomm,

    You are correct. This is exactly what my BUs and Boss want to accomplish with this. Historical auditing as well as reporting.

    I'd love to see this paper of yours. Please do post it when you get it cleaned up. @=)

    Thank you muchly for the offer.

    Brandie Tarvin, MCITP Database Administrator

  • I'm not sure of ALL the implications, but why not have a table dedicated to "old" data, such that after a change, the "old" record is archived to the "old data" table, and the regular table just holds current information? This way, the key info never needs to change, and the way to maintain the date information is to store an effective date with the current data, which then becomes part of the archive.

    This archive table's primary key would include the unique customer's id AND the effective date, while the current table's key wouldn't need to. I'm not sure if that concept would work, but it sounds like something that perhaps should be investigated...
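    Something along these lines (just a sketch, with made-up table names):

    ```sql
    -- Current table: one row per customer, a simple key for FKs to target
    CREATE TABLE dbo.Customer_Current (
        CustomerKey   INT     NOT NULL PRIMARY KEY,
        ZipCode       CHAR(5) NOT NULL,
        EffectiveDate DATE    NOT NULL  -- when this version took effect
    );

    -- Archive table: prior versions, keyed by customer + effective date
    CREATE TABLE dbo.Customer_Old (
        CustomerKey   INT     NOT NULL,
        ZipCode       CHAR(5) NOT NULL,
        EffectiveDate DATE    NOT NULL,
        CONSTRAINT PK_Customer_Old PRIMARY KEY (CustomerKey, EffectiveDate)
    );

    -- On a change: copy the current row into Customer_Old,
    -- then update Customer_Current in place
    ```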

    Steve

    (aka smunson)

    :):):)

  • I've actually been thinking about that, but it would involve adding at least 12 more tables to my DB.

    I did find an interesting article on "Junk Dimensions" in SQL Server Magazine from May '08, which unfortunately describes most of my warehouse. @=) Because the two main OLTP DBs I'm pulling from track slightly different things, only about 2-4 columns out of each table are the same, and those are the keys, the foreign keys, and the time keys. Bleargh. So keeping Junk Dimensions in their own tables would again mean adding lots and lots more tables.

    I'm researching versioning before I'll make a final decision. Also, boning up on my Type 1s and Type 2s.

    One good thing about having a problem DW is that by the time I'm done, I'll know a lot more about this thing than I would have if it went smoothly the first time around. @=)

    Brandie Tarvin, MCITP Database Administrator

  • Just a thought, and again, testing for performance implications would be essential, but perhaps you just have ONE extra table, capable of accommodating any/all OLD records from the various tables, which means it would have the fields in common as well as all the unique ones, plus a "type" identifier that would indicate which table the record is an old record for. Not sure if you could get a useful primary key on it, but you could certainly index all the fields that would be keys or be indexed in their original tables. Your thoughts?
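    Roughly this shape (a sketch; the names are made up):

    ```sql
    -- One shared archive for all entity types; EntityType says which
    -- source table each row came from
    CREATE TABLE dbo.EntityArchive (
        EntityType    VARCHAR(20) NOT NULL,  -- 'Customer', 'Store', 'Supplier', ...
        EntityKey     INT         NOT NULL,  -- the key from the source table
        EffectiveDate DATE        NOT NULL,
        ZipCode       CHAR(5)     NULL,      -- common fields here, with the
                                             -- type-specific ones as nullable columns
        CONSTRAINT PK_EntityArchive PRIMARY KEY (EntityType, EntityKey, EffectiveDate)
    );
    ```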

    Steve

    (aka smunson)

    :):):)

  • Brandie:

    Have you seen this?

    http://en.wikipedia.org/wiki/Slowly_changing_dimension

    I think you need a Type 6 SCD, possibly the alternate implementation.

    I was going to suggest that your PK was 'wrong'... er... 'incorrectly implemented' (working on my 'Soft Skills' 🙂 ) and should be a single-column surrogate key, something like the sketch below.
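    A sketch of that shape (DimCustomer is just an example name):

    ```sql
    -- Single-column surrogate PK; the business key can repeat freely,
    -- with versions told apart by EffectiveDate and/or an IsCurrent flag
    CREATE TABLE dbo.DimCustomer (
        CustomerSK    INT IDENTITY(1,1) NOT NULL
            CONSTRAINT PK_DimCustomer PRIMARY KEY,
        CustomerKey   INT     NOT NULL,  -- business key from the OLTP system
        ZipCode       CHAR(5) NOT NULL,
        EffectiveDate DATE    NOT NULL,
        IsCurrent     BIT     NOT NULL
    );
    ```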

    --Shaun

    Hiding under a desk from SSIS Implementation Work :crazy:

  • Steve, unfortunately, all my different entities have different fields. Again with the Junk Dimensions issue. I might end up with a 40+ column wide "archive" table with millions of records. @=) Though there are a small # of fields that are similar enough... Hmm. I'll have to ponder on this.

    Shaun, This is the first I've heard of a Type 6. Thanks for the link. I'll definitely check this out.

    Brandie Tarvin, MCITP Database Administrator

  • Ok then, with so many fields, perhaps you could find some way to narrow it down to just 4 or 5 tables or so? Perhaps find some kind of "common ground" - combining data from tables for "similar entities" (as you would define them, or perhaps merely by sufficient similarity of field content). Just a thought, as this could still potentially drive up the maintenance effort quite a bit.

    Steve

    (aka smunson)

    :):):)
