Tips for optimizing performance?? - VB/SQL Server

  • Hi there,

    I have a VB front end and have just upgraded from Access to a SQL Server backend.

    I can connect to the database, but I'm finding my application is running VERY slowly, and it actually times out in the middle of navigating through my recordset.

    Somebody mentioned indexing certain fields, and I'm off to read about that now. Does anyone have any other hints or tips for improving speed?  Obviously, storing my records in SQL Server HAS to be faster than Access, right?

    Thanks!

    Christy

  • I know nothing about Access, but here are some good tips:

    http://www.sql-server-performance.com/visual_basic_performance.asp

    I also suggest reading Andy Warren's Introduction to ADO articles on this site.



    --Jonathan

    Much depends on the version of VB being used; however, using VB 6.0 or above with ADO (as opposed to JET, which Access uses natively, or even RDO) yields extensive performance gains.

    We use ADO quite a bit here in accessing SQL Server as well as an "Open Source" database (PostgreSQL).  Performance against SQL Server is blazingly fast.

    The other area to check out is your indexes.  Indexes are a little different with SQL Server than with Access.  The number of indexes as well as the TYPE of indexes used could be contributing to the problem.  Keep the number of indexes down to what is actually required; the more indexes you have, the more work SQL Server must do to maintain them.  Clustered indexes, if not implemented properly, can create a big performance hit, as SQL Server must keep these indexes "in order" (sorted).  Clustered indexes are great for the right type of query, and there really is no hard and fast rule for using them.  Yesterday's SQL Server Central newsletter had a great article on indexing strategies; it can be found at http://qa.sqlservercentral.com/columnists/rgummadi/indexingstrategies.asp
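    For example, here's a minimal sketch (the table, column, and index names are made up; you'd pick a column your queries actually filter or join on, and you'd normally run the T-SQL in Query Analyzer rather than from code):

    Dim cn As New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI;"

    ' hypothetical nonclustered index on a column used in WHERE clauses and joins
    cn.Execute "CREATE NONCLUSTERED INDEX IX_Orders_CustomerID " & _
               "ON dbo.Orders (CustomerID)", , adExecuteNoRecords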

    -- Joe


  • A couple of things.

    Start thinking in terms of "buckets".  When you had a purely VB and Access product, and your Access db was on a network drive, any time you did a "select x,y,z from myTable where x<10" (example), in the Access world you had to bring back the ENTIRE myTable across the network.  YES, that sucks.

    If you did an inner join on 2 tables (from myTable inner join myTable2 on ...), it brought back BOTH TABLES (in their entirety).  YES, that sucks.

    Enter the RDBMS.  The quickest performance gain you can get is to use a stored procedure.

    "select x,y,z from myTable where x<10"

    With an RDBMS (relational database management system), all the processing takes place on the server.  When you call a precompiled stored procedure, the ONLY data that gets passed back is the X number of rows (for example, the above query might bring back 9 rows) and ONLY the columns you ask for (x, y, z).

    This is the quickest gain you can get.
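    Just to make that concrete, here's a minimal sketch (the procedure name, parameter, and connection details are all made up) of calling a stored procedure like that from VB through an ADO Command object, instead of sending the raw SQL:

    Dim cn As New ADODB.Connection
    Dim cmd As New ADODB.Command
    Dim rs As ADODB.Recordset

    ' placeholder server/database - use your own
    cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI;"

    cmd.ActiveConnection = cn
    cmd.CommandType = adCmdStoredProc
    cmd.CommandText = "usp_GetRowsUnderTen"   ' hypothetical proc wrapping "select x,y,z from myTable where x < @MaxX"
    cmd.Parameters.Append cmd.CreateParameter("@MaxX", adInteger, adParamInput, , 10)

    Set rs = cmd.Execute   ' only the matching rows, and only columns x, y, z, come back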

    What I just shared with you is getting data from the db, which is fine.

    You will then be TEMPTED to put all your business logic into SQL Server stored procedures.  6-7 years ago, this was a "2-tiered" system, and your front-end app became kind of a thin client.  This is NOT the preferred architecture of the day, but you can cross that bridge at a later time.

    ..

    A different thing which affects performance.  I'm shifting gears now, so forget everything I've said so far.

    When connecting to a SQL Server db, you go across the network.  You can choose different protocols for going across the network.  The most common are "named pipes" and "tcp/ip".

    When you install a Developer Edition of SQL Server on your computer, you'll find a program called "Client Network Utility".

    What this program does is let you configure which method you use to access a SQL Server (7.0 or 2000) db over the network.  There are probably more options than you need, and you'll probably go with either named pipes or tcp/ip.

    The tcp/ip setup is pretty much self-explanatory.  The "port" defaults to 1433, so let that remain.

    This program (Client Network Utility) is JUST a FANCY WRAPPER for adding/editing settings in a registry key at:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo

    Let me say that the chance of this affecting your app performance is probably small, <5% maybe (just pulling a number out of the air).  HOWEVER, I have personally witnessed a site where changing from named pipes to tcp/ip had a DRASTIC effect.  It's all based on the network configuration at any given place, but it was the holy grail at this one site I was at, because performance improved 800% at the least.  But again, there is only a slim chance this is what is killing you.

    If you find by running the Client Network Utility that you get a gain, you can just copy the registry settings out (for that ONE server you are working with) and put them on any client computers.  Use "export registry file"; I'd probably open it in Notepad and remove any settings not related to the ONE db server in question, and then "run" the .reg file on any client machine.  Again, this is probably NOT the issue, but it could be the issue.
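    One related trick (this is from memory, so treat the exact keyword as an assumption): the SQL Server ODBC driver also lets you force the network library right in the connection string - "Network=dbmssocn" for TCP/IP sockets, "Network=dbnmpntw" for named pipes - which avoids touching the registry on each client.  A rough sketch with placeholder server/database names:

    Dim cn As New ADODB.Connection
    cn.Open "Driver={SQL Server};Server=MyServer;Database=MyDb;" & _
            "Trusted_Connection=yes;Network=dbmssocn;"   ' force TCP/IP sockets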

     

    ..

     

    Hopefully that helps some.

     

     

  • Thank you everyone for your replies!

     

    I read the article that Jonathan posted, "Performance Tuning Tips for VB Applications Using SQL Server", and I found it enlightening, to say the least.

     

    Apparently I am doing a LOT of things wrong in my code.  Do you agree?  For example:

     

    1) I don't have any stored procedures - I do all my data manipulation through ADO recordsets.  I should instead create stored procedures, and call them from my code.

     

    2) I don't use SQL UPDATE, INSERT, etc - again, I use the recordset's fields collection, as well as .AddNew, .Update, etc. to add data to my database.  Should I use stored procedures for this as well?

     

    3) I always use Client-side cursors, and I never use forward-only cursors.  This is what all my connections look like: Does it look bad?

     

    ---------------

    sConn = "Driver={SQL Server};Server=GENWEB;Database=RollSQL;Trusted_Connection=yes;"

     

    Set dbAN = New ADODB.Connection

    dbAN.Mode = adModeReadWrite

    dbAN.Open sConn

     

    'opening roll recordset (about 20,000 records)

    sSQL = "SELECT * FROM tblRoll ORDER BY ID"

    Set rsAN = New ADODB.Recordset

    rsAN.CursorLocation = adUseServer

    rsAN.Open sSQL, sConn, adOpenKeyset, adLockPessimistic, adCmdText

    ---------------

     

    4) I use the ODBC provider instead of OLE DB - this article says you should use OLE DB because it's faster.

     

    5) By not setting the CacheSize property manually, does this mean only one row is being retrieved from the server at a time?  Since we have 20,000 records in our main table, no wonder it takes a long time!

     

    6) I don't ever close my database connections or recordsets. OOPS!  I think I should definitely do that.

     

    7) It says not to use the Fields collection of ADO recordset, but I use it all the time.

     

    8) It says to use dot notation (i.e. WITH and END WITH) for connections, and I don't - I write it out every time.

     

    So it turns out I know VERY little about what kind of code works best with SQL Server!  Any opinions on whether or not I should implement ALL of the changes I mentioned, and whether you think this is what could be compromising the speed of my application?

     

    Thanks!

    Christy

    🙂

  • 1) I don't have any stored procedures - I do all my data manipulation through ADO recordsets.  I should instead create stored procedures, and call them from my code.

     

    You can manipulate the data once you have it in a local ADO object.

    However, to GET the data, you want to populate the initial ADO object from a stored procedure.

     

    4) I use the ODBC provider instead of OLE DB - this article says you should use OLE DB because it's faster

    Think of this as "layers".  ODBC goes through an extra layer to get your ADO recordset to talk to the db.  By using OLE DB, you remove a layer, thus you get better performance.  Also, the SQL Server (and Access, I guess) OLE DB providers are fine-tuned to a specific product, so they will work better than the "generic" ODBC stuff.

    ODBC was retained for backwards compatibility, because "earlier in the game" of programming, ODBC was the standard.
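    For what it's worth, the difference is mostly just the connection string.  A rough sketch of both flavours, using the server/database names from your connection string above:

    Dim sConn As String

    ' through the ODBC layer (the generic route):
    sConn = "Driver={SQL Server};Server=GENWEB;Database=RollSQL;" & _
            "Trusted_Connection=yes;"

    ' straight to the SQL Server OLE DB provider (one less layer):
    sConn = "Provider=SQLOLEDB;Data Source=GENWEB;" & _
            "Initial Catalog=RollSQL;Integrated Security=SSPI;"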

     

    7) It says not to use the Fields collection of ADO recordset, but I use it all the time.

    Uh, it's better to reference a field ordinally rather than by name.  I.e., if your fields are uid, lastname, firstname:

    rs.Fields(0).Value = "123"

    rs.Fields(1).Value = "smith"

    rs.Fields(2).Value = "john"

    HOWEVER, while using the ordinal values is technically faster, it's less readable.  So it's a judgment call.  I wouldn't get too hung up on this; I still use "by name" most of the time.  However, I'm not trying to get every last bit of performance out of the app either.

     

    sSQL = "SELECT * FROM tblRoll ORDER BY ID"

    You should NEVER (or almost never) use a SELECT *.  You should pick the exact columns you want, especially when you get into T-SQL stored procedures.  Remember the "buckets": you only want to bring back the data you need, not all the columns.

     

    ..

     

    Again, if you're going to the trouble of setting up a SQL Server database, you need to start migrating (especially large recordsets) into stored procedures.  Dynamically sending a SQL statement to open an ADO recordset is bad.

    A stored procedure is "precompiled", meaning it retains the best way to get the data to you.  If you run a query from ADO, when SQL Server gets the statement it has to figure out EVERY TIME what the best way is to find you the data.  This is called an "execution plan".
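    As a rough sketch of what that setup looks like (reusing the made-up usp_GetRowsUnderTen procedure from the earlier example), you create the procedure once - normally in Query Analyzer; it's wrapped in a VB string here only to keep the example in one place - and SQL Server keeps a plan for it:

    Dim cn As New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=GENWEB;" & _
            "Initial Catalog=RollSQL;Integrated Security=SSPI;"

    ' one-time setup: hypothetical proc over the myTable example from above
    cn.Execute "CREATE PROCEDURE usp_GetRowsUnderTen @MaxX int AS " & _
               "SELECT x, y, z FROM myTable WHERE x < @MaxX", , adExecuteNoRecords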

     

    Lastly, if you get the opportunity to move to VB.NET and (typed) DataSets, they are much easier/better to work with than the ADO object.  But you can cross that bridge later; you're getting enough info now to get you started.

    ..

     

     

    Thanks, sholliday!  I'm really learning a lot here...and learning that I have a lot to do.

    Most of your suggestions I think I can handle, but I'm concerned about never using SELECT * - I almost always need to select everything because I'm usually populating a form with my recordsets, and I need all the fields to display on the form.  However, I'm going to see if there are any recordsets where I don't need to do this.

    As for everything else, I'm off to get started.  Thanks again for all your help, and I'll probably be back with more questions soon!

    Christy

     

  •  

    Even if most times you need "*", you should still write them out.

    Why?

    The table schema can change at a later time. 

    like

    June 2002, your table has the following columns:

    uid, lastname, firstname

    ..

    You come along and develop a stored procedure, and you need uid, lastname and firstname.

    so you do the easy thing and go

    select *

    May 2003:

    the table needs to be updated.

    now you add

    addr1, addr2, city, state, zip, homephone, workphone, etc, email, favorite_colorid, jobfunctionid

    ..

    guess what?

    your procedure of "select *" now brings back 11 more columns than it needs to.  youch.

    While this example is extreme, it is not unlikely that the user table will change, especially to add a foreign key reference to another table.

    you should go ahead and get into the habit of

    select a, b, c from mytable .. not * ...

    make sense?

     

    ..

     

    good luck

     

    Sloan

     

  • Christy,

    As sholliday stated, you really don't want to use SELECT * in your app. It will come back as a bug later on.

     

    Also, sholliday said "

    -------------------------------------------------------------------------------------------

    7) It says not to use the Fields collection of ADO recordset, but I use it all the time.

    Uh, it's better to reference a field ordinally rather than by name.  I.e., if your fields are uid, lastname, firstname:

    rs.Fields(0).Value = "123"

    rs.Fields(1).Value = "smith"

    rs.Fields(2).Value = "john"

    HOWEVER, while using the ordinal values is technically faster, it's less readable.  So it's a judgment call.  I wouldn't get too hung up on this; I still use "by name" most of the time.  However, I'm not trying to get every last bit of performance out of the app either."

    -------------------------------------------------------------------------------------------

    What I used to do in this situation was to create an enum in VB with the field names, so that they corresponded to the column IDs of the recordset (another reason you want to explicitly specify the return columns!). The enum would be named the same as the table I would be pulling back from the SP. In this way I could simply use something like the following...

     

    Private Enum Logins

        userid = 0

        lastname = 1

        firstname = 2

        '...

    End Enum

    '...

    Dim RS As ADODB.Recordset

    'open RS from your stored procedure as usual, then:

    Debug.Print RS(Logins.userid)

    Debug.Print RS(Logins.firstname)

    Debug.Print RS(Logins.lastname)

    Please note that I didn't test this code. I may have the syntax off just a bit as I haven't used VB in almost 2 years!




    Gary Johnson
    Microsoft Natural Language Group
    DBA, Sr. DB Engineer

    This posting is provided "AS IS" with no warranties, and confers no rights. The opinions expressed in this post are my own and may not reflect that of my employer.

  • For best performance and security use SPs for updates, inserts, deletes, etc.

    Generally I've found the ADO client-side cursor location (set on the Connection) to be fastest, but you should test both server-side and client-side to see which holds up best.

    I would use ordinal references as long as you are using SPs or views and are sure they will not change, but make notes as to what they reference; otherwise use names (a bit slower), which still work the same if you alter the SP, view, or table being referenced.

    Try to limit chatter where you can with SPs by using no-recordset execution when no return is needed, or an OUTPUT variable when only a few items come back (like 1 int).
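    For example, a rough sketch of both of those (the procedure and parameter names are made up): adExecuteNoRecords skips building a recordset, and the OUTPUT parameter carries back the single value:

    Dim cn As New ADODB.Connection
    Dim cmd As New ADODB.Command

    cn.Open "Provider=SQLOLEDB;Data Source=GENWEB;" & _
            "Initial Catalog=RollSQL;Integrated Security=SSPI;"

    cmd.ActiveConnection = cn
    cmd.CommandType = adCmdStoredProc
    cmd.CommandText = "usp_UpdateRoll"   ' hypothetical proc with an int OUTPUT param
    cmd.Parameters.Append cmd.CreateParameter("@ID", adInteger, adParamInput, , 42)
    cmd.Parameters.Append cmd.CreateParameter("@RowsAffected", adInteger, adParamOutput)

    cmd.Execute , , adExecuteNoRecords   ' no recordset is built or sent back
    Debug.Print cmd.Parameters("@RowsAffected").Value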

    A lot is learned by building and rebuilding a project and asking questions. But there are always better ways to do many things; just ask when you are curious or see a specific issue.

  • Thanks again, everyone!

    I will start by creating stored procedures - most of you mention them just for inserts, deletes, and updates.  Would you also use an SP for simply selecting data to display on a form?  Or should I continue to do that through ADO, with SQL statements in my code?

    I will also list the fields instead of saying "Select *", as well as try client side cursors to see if they are any faster.  I also have to close my connections when I am finished with them, as I don't do that at all right now - yikes!

    Thanks a LOT for your suggestions - I'm sure I'll be back with more questions as I start to implement them.

    Christy

     

  • Christy,

    Some good advice there with regard to specifics, so thought I'd add some more general troubleshooting advice.

    A very simple way to ascertain if you need to concentrate on the application or the database is as follows:

    Look at what queries are running the slowest in your application, and make a note of the SQL that they're throwing out at the database.

    Put a breakpoint in your VB code at the point where you execute one of these queries, and run your app up to that point. (NB : The following applies whether you're calling a stored proc from your VB, or throwing dynamically created SQL at the database).

    Do a rough timing (the Windows clock will do) on your application of the time it takes for the SQL query to execute and come back - use F8 and time how long it takes to step to the next line. This tells you how long it takes to both run the query and build the result set on the VB side in your development PC's memory.
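    If you want something a bit more precise than the wall clock, here's a small sketch (the query being timed is whatever your app actually executes at that point; cn is assumed to be your open ADODB.Connection):

    Dim tStart As Single
    Dim rs As New ADODB.Recordset

    tStart = Timer   ' seconds since midnight
    rs.Open "EXEC usp_GetRoll", cn, adOpenForwardOnly, adLockReadOnly   ' hypothetical call
    Debug.Print "Query + client-side fetch took "; Timer - tStart; " seconds"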

    Next, copy and paste the SQL query that was just sent to your database into SQL Server Query Analyser.

    Run the SQL in Query Analyser, timing it from the point you hit ctrl-E / the run button, to the point at which you get your results back.

    Now compare the two timings:

    If the VB timing is slow and the Query Analyser timing is fast (ie : what you would class as acceptable for your application), then the way the application is handling the recordset is the problem.

    Conversely, if the Query Analyser timing is 90% or more of the total time taken to return results, then it's your database that needs addressing.

    If the two are roughly the same, then it's obviously both the database AND the app which you have to address for performance.

    As an aside, look for bad database design, large numbers of joins, and joins between tables on columns which do not have indexes. Also, make sure your queries have a 'WHERE...' clause in them, otherwise you're going to have to build and transmit over your network potentially tens of thousands of rows. If a WHERE clause slows down the time it takes for the first row to be returned when running the SQL in Query Analyser, you probably need an index on the column(s) which the WHERE clause references. Even if you limit the number of columns returned to just those you need, returning lots of unwanted rows can severely hit your app's performance.

    Finally, check memory usage on your client PC using Task Manager's "Performance" tab. If you run a query from your app and it gobbles up lots of memory, and that memory isn't freed up after you leave the form where it ran (look for rsRecordSet.Close / Set rsRecordSet = Nothing in your VB), then you can easily run out of client-side memory, which will cause your PC to thrash as memory pages get swapped out to the paging file.
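    For completeness, the kind of clean-up to look for is simply this (assuming rsRecordSet / cnConnection are the ADO objects your form opened):

    rsRecordSet.Close            ' release the recordset's memory and server resources
    Set rsRecordSet = Nothing

    cnConnection.Close           ' and give the connection back
    Set cnConnection = Nothing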

    As a general rule, bad database design and badly written queries are going to hobble your applications more than exactly how you reference your data once it's retrieved, so it's always a good idea to check how fast your queries are running 'directly' on top of the database, without involving the VB code layer. The cut and paste query running technique outlined above is one I've used successfully on many occasions, and helps me to determine which machine the problem part of the code is running on, as well as helping me to isolate where the problem is.

    Once you know the location of the problem, you can determine if it's database or application based, and focus your efforts on fixing things in the right place. 

    Happy performance troubleshooting


    Jon

  • Just spotted something else with regard to an earlier post about clustered indexes:

    "Keep the number of indexes down to what is actually required as the more indexes used, the more SQL Server must do to maintain all of these indexes.  Clustered indexes, if not implemented properly, can create a big performance hit as SQL Server must keep these indexes "in order" (sorted).  ..."

    Without going into too much depth, yes, this is technically right and generally good advice, but only with respect to when you are inserting, updating or deleting records.  If your problematic queries are mainly SELECTs, then updating the indexes isn't an issue, so don't pay too much attention to this.

    Yes, SQL Server does have to maintain indexes, and in the case of a clustered index, it has to keep the data which the index has been created on in (roughly) the correct order on disk. However, if you're not changing the data, this is not an issue: if the data's not being changed, then neither are the indexes! So don't consider it an issue UNLESS your selects are fast AND your updates/inserts/deletes are slow. This is usually a pointer that something may be amiss in the "number of indexes" area, though it is by no means conclusive.

    As a *general rule*, one clustered index and no more than 5-6 non-clustered indexes per table is a good rule of thumb.

    (NB : You can have as many indexes as you want on a read-only database, so long as you have the time to build them, and the space to store them).


    Jon

  • Christy,

    You've gotten a lot of good advice here. I just wanted to answer your specific question on using SP calls for "Simple Selects"....

    Yes, I would create an SP for simply selecting the data for your form. There are two ways to look at this: you can have your resultset return only the data for the record you are looking at, or you can have it return all the data of that type. Typically, what I do when building a form is have one SP that returns all the IDs for the data the form will be looking at, and then another SP that takes the current ID as a parameter to fill in the portion of the form relating to that ID. I may make multiple SP calls depending on the complexity of the form as well. This way you may have a lot of small network traffic in your app, but the client sees the app as fairly quick to respond. If you need to, you can certainly cache up more data, but remember that in a client/server app that can be dangerous, due to two or more people modifying records next to each other (or even the same record) in your dataset.
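    A rough sketch of that pattern (all the procedure, column, and connection names here are made up):

    Dim cn As New ADODB.Connection
    Dim cmd As New ADODB.Command
    Dim rsIDs As ADODB.Recordset
    Dim rsDetail As ADODB.Recordset

    cn.Open "Provider=SQLOLEDB;Data Source=GENWEB;" & _
            "Initial Catalog=RollSQL;Integrated Security=SSPI;"

    ' 1) a light proc that returns just the list of IDs the form can navigate
    Set rsIDs = cn.Execute("EXEC usp_GetRollIDs")

    ' 2) a heavier proc, called per record, that fills the form for one ID
    cmd.ActiveConnection = cn
    cmd.CommandType = adCmdStoredProc
    cmd.CommandText = "usp_GetRollByID"
    cmd.Parameters.Append cmd.CreateParameter("@ID", adInteger, adParamInput, , rsIDs("ID").Value)
    Set rsDetail = cmd.Execute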




    Gary Johnson
    Microsoft Natural Language Group
    DBA, Sr. DB Engineer

    This posting is provided "AS IS" with no warranties, and confers no rights. The opinions expressed in this post are my own and may not reflect that of my employer.

  • Thanks for all the replies, everyone!  I've started migrating my recordset creation, insertion, etc, into stored procedures, but am having some issues.

    I use my main recordset to populate my form, and then users can scroll around between the records on the form. I need to be able to use commands like MoveFirst and MoveLast - when I create my recordset using a stored procedure, I get an error on rs.MoveLast - "Rowset does not support fetching backward."

    I know now that this is because of the cursor type, but I don't know how to change it.  I tried setting the cursor type to keyset or dynamic, but it doesn't seem to recognize that code when I'm calling a stored procedure to populate the recordset.

    Is there any way to use a stored procedure but still use a cursor type that supports MoveLast?  Otherwise stored procedures are pretty much useless to me!

    thanks!

    christy

