Bad practices

  • One thing just came to my mind.

    Another really bad practice is NOT to use a think tank like this forum. You're really lost trying to do it all by yourself!

    Cheers,

    Frank

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • quote:


    If so, sp_defaultdb [ @loginame = ] 'login' , [ @defdb = ] 'database' should help

    Cheers,

    Frank


    Thanks for the response. Your instructions are right on.
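
    In case anyone finds this thread later, a minimal sketch of that call (the login and database names here are invented):

    -- Point an existing login at a different default database
    EXEC sp_defaultdb @loginame = 'AppLogin', @defdb = 'SalesDb'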

  • Not sure I agree about the locking mechanism being a bad practice. Most modern apps are designed to work without a constant connection (very optimistic locking), so there is no way for SQL Server to hold the lock. SQL locks are for data integrity; application locks are often more for business rules. I know they sound the same, but they can be different!

    Andy

    http://qa.sqlservercentral.com/columnists/awarren/
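
    To illustrate Andy's point, here is a rough sketch of optimistic concurrency with a timestamp column (table and column names are invented). No lock is held between the read and the write; a stale write simply matches zero rows:

    DECLARE @OrderID int, @RowVer binary(8)
    -- The app read these values earlier, then disconnected
    SELECT @OrderID = 1234, @RowVer = 0x00000000000007D1

    UPDATE Orders
    SET    OrderStatus = 'Shipped'
    WHERE  OrderID = @OrderID
      AND  RowVer = @RowVer          -- timestamp column; changes on every update

    IF @@ROWCOUNT = 0
        RAISERROR('Row was changed by someone else. Reload and retry.', 16, 1)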

  • Only use cursors as a last resort.
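
    For instance, a per-group total that tempts people into row-by-row processing is usually a single set-based statement (a sketch; table and column names are invented):

    -- Set-based alternative to looping over rows with a cursor
    SELECT CustomerID, SUM(OrderTotal) AS TotalSpent
    FROM   Orders
    GROUP  BY CustomerID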

  • What about using direct SQL statements instead of stored procedures?

    Or even better, dynamic SQL? (A sketch of where that road leads follows below.)

    Cheers,

    Frank

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/
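
    For anyone tempted to take that literally: a sketch (names invented) of why concatenated dynamic SQL is dangerous, and the parameterized form that at least avoids injection:

    -- Dangerous: user input concatenated straight into the statement.
    -- O'Brien merely breaks the query; crafted input does far worse.
    DECLARE @name nvarchar(50), @sql nvarchar(200)
    SET @name = N'O''Brien'   -- imagine this arrived from a web form
    SET @sql = N'SELECT * FROM Customers WHERE LastName = ''' + @name + N''''
    EXEC (@sql)

    -- Safer: let sp_executesql carry the value as a real parameter
    EXEC sp_executesql
         N'SELECT * FROM Customers WHERE LastName = @p',
         N'@p nvarchar(50)',
         @p = @name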

  • My first exposure to triggers was trying to debug triggers left behind by some whacko they had gotten rid of. He had error handling built into all the triggers (IF @@ERROR...) and it took me the longest time to figure out why it never worked.

    Most of you probably already know why, but the answer is that an error aborts the trigger (and the batch with it), so the IF @@ERROR check never executes.
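
    Roughly the anti-pattern described, as I understand it (names invented); if the INSERT fails, the error aborts the trigger and the batch, so the check on the next line is dead code:

    CREATE TRIGGER trg_AuditOrders ON Orders FOR UPDATE
    AS
    BEGIN
        INSERT INTO OrderAudit (OrderID)
        SELECT OrderID FROM inserted   -- suppose this statement fails

        IF @@ERROR <> 0                -- never reached after a real error
            ROLLBACK TRANSACTION
    END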

  • How about manual ad-hoc changes to production data?

    I worked for a company that used a web-based system to handle used car sales between leasing companies and wholesale dealers. We concentrated on building a web interface to handle all the stages of the transaction, but there was no way (by design) to go back and unsell a car or change an agreed-on price.

    When the inevitable change requests came in, the only way to do it was manual updates, and I had Query Analyzer scripts to handle the common tasks. For some reason maintenance programs for the support staff never get as high on the priority list as more bells and whistles for the next release.

    When you have a query like UPDATE Transactions SET TransactionStatus = 'Sold' WHERE TransactionID = 1234, and you highlight the statement but neglect to include the WHERE clause before hitting F5, it is a bad thing.
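
    One habit that softens the blow (a sketch, not a cure): wrap the ad-hoc fix in an explicit transaction and sanity-check the row count before committing:

    BEGIN TRANSACTION

    UPDATE Transactions
    SET    TransactionStatus = 'Sold'
    WHERE  TransactionID = 1234

    -- Expecting exactly one row; anything else means the WHERE clause
    -- didn't run, or didn't match what we thought it would
    IF @@ROWCOUNT = 1
        COMMIT TRANSACTION
    ELSE
        ROLLBACK TRANSACTION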

  • Well I'll probably get flamed a little here, but here goes.

    And some of this I continue to blame MS for making SQL Server too easy to use.

    I am moving a number of servers from outlying offices to a consolidated server here.

    So far 3 out of 4 have had just plain rookie developer errors.

    One digs into system tables, and the vendor is gone (it was in-house; the developer left and started his own firm). So without their input we couldn't convert from SQL 7 to SQL 2000.

    Then another, riddled with nvarchar everywhere; the DB came in at 15 gig, it's now 9, and we've added several gig since moving it.

    Then a beauty of an SP process, a batch-type thing for us older folks. Theirs runs 12 hours (85k inserts!!!); I have it running in 4 minutes.

    How did I get such an improvement, you ask?

    Theirs summed maybe 200k rows: read a row and move all the REAL numbers into varchar, read the next row and move all its numbers into varchar, convert both back to REAL, sum them, convert the result back to varchar, and repeat, outputting 85k rows. Just fixing this cut it to about 3 hours.
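
    In other words, the whole convert-and-loop dance collapses into one set-based statement (a sketch; the table and columns are invented):

    -- Sum the REAL columns directly; no varchar round-trips required
    SELECT AccountID,
           SUM(Amount1) AS Amount1,
           SUM(Amount2) AS Amount2
    FROM   MonthlyDetail
    GROUP  BY AccountID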

    Next they did

    If row type 1 ....

    If row type 2 ....

    for about 15 types.

    I changed it to IF ... ELSEs (roughly the sketch below); 4:09 is my best run time. And note the original crashed at least 30 percent of the time for one reason or another.
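
    The shape of that change, sketched with an invented variable and procedure names:

    DECLARE @RowType int
    SET @RowType = 2

    -- Before: every IF is tested on every pass
    IF @RowType = 1  EXEC dbo.HandleType1
    IF @RowType = 2  EXEC dbo.HandleType2
    -- ... and so on for about 15 types

    -- After: testing stops at the first match
    IF @RowType = 1       EXEC dbo.HandleType1
    ELSE IF @RowType = 2  EXEC dbo.HandleType2
    -- ... and so on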

    I wouldn't consider this bad, except this proc has been running monthly for several years. AND I probably spent all of a couple of hours fixing it.

    Too many IT workers, not enough IT professionals.

    KlK, MCSE



  • Hi kknudson,

    quote:


    Well I'll probably get flamed a little here, but here goes.

    And some of this I continue to blame MS for making SQL Server too easy to use.


    To a certain degree you're absolutely right.

    However, I haven't come across an M$ product that is REALLY easy to use in the long run.

    Cheers,

    Frank

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Well, if you have ever had the opportunity to support any of the other major DBMSs (IMS, DB2 on the mainframe or the Fisher-Price Unix/NT flavors, or Oracle), you know that what requires a DBA on the others can be done by anyone on SQL Server, developers included.

    Done right? NO. But they can build a working system. Then when it has problems they blame the DBMS.

    If you knew the number of queries, SPs, or whatever I have fixed here for the developers over the past couple of years, where a few minutes' work cut the overall time down by 80%, you'd die.

    Had one go from 118,000 reads to 2,000; CPU and duration improved somewhat less, only 80 or 90%. This query was running hundreds of times an hour, and the plan was to buy a bigger box.

    True, some of these have been DB2 and mainframe.

    Funny thing is, I initially applied here as a developer, but "I didn't know the SQL language". Per the HR person I didn't have official experience. So I used a friend, and got hired as a DBA. And I'm the one some of the developers come to now when they need to improve their SQL statements.

    KlK, MCSE



  • Really bad practice - assuming web hosting companies know how to set up SQL Server boxes.

    This is the second company I've been at where the hosted SQL servers were set up without my input, and their favorite configuration always seems to be a C: drive (that you can't touch) for the OS and one big RAID 5 volume for everything else. They can't be bothered putting the logs or tempdb on separate drives and controllers.

    The C: drive was a 4GB partition that included the system databases. There was normally about 1GB of free space on C:, and 30GB on D: (50GB used). There was only around 10GB of actual data; the rest was online database and log backup files. More than enough disk space for the foreseeable future.

    This time the setup included replication, which I'd never dealt with before. There are a dozen databases running transactional replication on the publisher. The distribution db was located on C: with the other system databases. (Bonus points if you see where this is headed).

    Over the July 4 weekend there was enough web traffic to cause the distribution db to fill up C:. When the distribution db could not grow, replication died. When replication died, the replicated databases' transaction logs could not be truncated. The transaction logs kept growing, and so did the database and log backups. It's amazing how fast 30GB of free space can evaporate over a long holiday weekend.
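
    The cheap lesson I took from it (a sketch; both commands are standard on SQL 2000): keep an eye on log and disk space before a long weekend does it for you:

    -- Log size and percent used for every database on the instance
    DBCC SQLPERF(LOGSPACE)

    -- Free megabytes per drive letter (undocumented but long-standing)
    EXEC master..xp_fixeddrives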
