An approach I used on a weekly data load a while back might be useful:
The data we received was, essentially, a change log from the true owner of the data. We might see several records with the same primary key at various points within the file. Sounds similar to your situation.
I set a baseline number of records to process at once (5,000 or 10,000; I don't recall, and I no longer have access to the routine, as it belongs to a former employer). If an insert attempt failed, I checked whether the failure was due to a duplicate key. If so, I cut the number of records in the batch in half and tried again, repeating as necessary until I was down to a single record. Once inserts were succeeding again, I doubled the batch size on each successful run until I was back at my baseline.
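A rough sketch of that halving/doubling loop, using Python and an in-memory SQLite table (my own choices here, not what the original routine used). The table name, columns, and the policy of skipping a lone duplicate record are all assumptions for illustration:

```python
import sqlite3

def load_batches(conn, rows, baseline=5000):
    """Adaptive batch insert: halve the batch on a duplicate-key
    failure, double it back up after each successful run.
    Assumes a table t(pk INTEGER PRIMARY KEY, val TEXT)."""
    cur = conn.cursor()
    size = baseline
    i = 0
    while i < len(rows):
        batch = rows[i:i + size]
        try:
            cur.executemany("INSERT INTO t (pk, val) VALUES (?, ?)", batch)
            conn.commit()
            i += len(batch)
            size = min(size * 2, baseline)   # successful run: bump back up
        except sqlite3.IntegrityError:
            conn.rollback()                  # discard the partial batch
            if size == 1:
                i += 1                       # lone duplicate: skip it (assumed policy)
            else:
                size = max(size // 2, 1)     # halve and retry the same span
```

With a change-log feed like the one described, the skip-on-duplicate branch effectively keeps the first record seen for each key; a real routine would more likely update the existing row there instead.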
Mind you, if I had it all to do over again, I would probably at least consider some sort of pre-processing step.
R David Francis