• If you have room for two copies of the data, I think this might run faster than the self-join and DELETE method.

    First import the data into a staging table.
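
    As a sketch of that step, assuming the source is a comma-separated file (the file path and the column types are placeholders; only Addr1, Addr2, and PostCode are taken from the queries below):

      -- Hypothetical staging layout; only these three columns are referenced later
      CREATE TABLE Staging (
          Addr1    VARCHAR(100),
          Addr2    VARCHAR(100),
          PostCode VARCHAR(10)
      )

      -- Assumed file path and format; adjust to your source
      BULK INSERT Staging FROM 'C:\data\addresses.csv'
      WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)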

    Then create another staging table with the same structure, and put a unique index on the dedupe fields with IGNORE_DUP_KEY = ON.

      SELECT * INTO StageUnique FROM Staging WHERE 1=0

      CREATE UNIQUE INDEX IX_StageUnique ON StageUnique (Addr1, PostCode)
        WITH (IGNORE_DUP_KEY = ON)

    Now copy the data into the second table.  Duplicate rows are silently discarded on insert (SQL Server raises a "Duplicate key was ignored." warning rather than an error), which is faster than locating and deleting them afterwards.

      INSERT INTO StageUnique
      SELECT * FROM Staging
      ORDER BY Addr1, PostCode

    You can extend the ORDER BY clause if you have a preference for which record among a set of duplicates is retained: the first row inserted for each key is kept, and later ones are rejected.  For instance, you may want to keep records where Addr2 is populated when other records for the same address are missing it.

      INSERT INTO StageUnique
      SELECT * FROM Staging
      ORDER BY Addr1, PostCode, CASE WHEN ISNULL(Addr2, '') = '' THEN 1 ELSE 0 END
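
    As a quick sanity check (my addition, not part of the original steps), you can count how many duplicates were rejected:

      -- Rows in Staging minus rows that survived the unique index
      SELECT (SELECT COUNT(*) FROM Staging) - COUNT(*) AS DuplicatesDropped
      FROM StageUnique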