A Good Word for ODBC

  • bob.willsie (9/10/2009)


    ...I typically write a string of queries that extract small chunks from the server and store them locally, then merge, extrapolate, etc. locally. I found this executes much faster than trying to do lots of complex joins and subjoins all in one query and passing that query to the server. Typically I never join more than three tables if at all possible.

    Are small, highly targeted queries what you mean by "pass-thru" queries? If not, please explain...

    Richard described most of how this works already. Pass-Through is a property of your named queries in Access. It's been a while since I've used Access, and I only have Access 2007 installed on my PC right now, but in 2007 it's a property in the Design ribbon when you're working on a query. A Pass-Through query will always run on the server side; with regular ones you're taking chances, especially if there are joins or complex criteria. It's limited to being edited in the SQL View mode though, not the nice Design View mode.

    If you're importing data locally and then working with the local copies instead of doing all your queries directly on the linked tables, then you should be ok.
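
    To make the difference concrete, here's a rough sketch (the table names are invented for illustration). A regular Access query over ODBC-linked tables leaves it up to Access/Jet to decide how much of the work it can hand to the server:

        SELECT o.OrderID, c.CompanyName
        FROM dbo_Orders AS o INNER JOIN dbo_Customers AS c
          ON o.CustomerID = c.CustomerID
        WHERE o.OrderDate >= #1/1/2009#;

    The pass-through version is written in the server's own dialect (here T-SQL) and is sent to it verbatim, so all of the join and filtering work is guaranteed to happen on the server:

        SELECT o.OrderID, c.CompanyName
        FROM dbo.Orders AS o
        INNER JOIN dbo.Customers AS c ON o.CustomerID = c.CustomerID
        WHERE o.OrderDate >= '20090101';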

  • Hmm. I just popped into Access 2003, opened an existing query in SQL design mode, right-clicked and selected Properties. However, I don't see anything that lists or suggests "Pass-Thru" as an option.

    However, I am sure a portion of my queries can't be pass-thru due to my chopping-up approach. For instance, in the query I just checked I was joining a local table with a server-linked table. I think that pretty much eliminates that one from being a pass-thru query, if I understand things right.

    Is it possible that all queries that could be pass-thru (i.e., 100% server-based joins, for instance) are automatically run that way, and that all others are local (joining a server table with a local table as I described above)?

    To be honest, I never put much thought into where the query was executing unless I know it is going to be a bear; then I make sure all the data is local.

    I guess I just expected it to automatically happen on the server or local depending on whichever was more efficient.

  • gah (9/10/2009)


    rf44 (9/10/2009)


    This is one of the most arrogant, presumptuous and unfounded assertions I have ever read.

    Strong words indeed...would you care to elaborate....maybe some examples.

    ..

    The wording is on the strong side, for sure. That said, I can't say I disagree about Access having better number/analysis features, and having had them for longer than SQL Server, as long as you stay within the "core" database engine. SQL Server might get the results FASTER, but in terms of functions, Access has had a lot of data functions that SQL Server hasn't had.

    For example, Access' PIVOT functions have been in place since 2.0, and do a much better job of it than even what was implemented in 2005 (that's not exactly SQL Server's fault, since ANSI got hold of the command and turned it into the syntax disaster you see in 2005).
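
    To illustrate (the table and column names are invented), an Access crosstab works out the output columns for you at run time:

        TRANSFORM Sum(Amount)
        SELECT Region
        FROM Sales
        GROUP BY Region
        PIVOT SaleYear;

    whereas the 2005 T-SQL PIVOT makes you spell out every column in advance (or fall back to dynamic SQL):

        SELECT Region, [2007], [2008], [2009]
        FROM (SELECT Region, SaleYear, Amount FROM Sales) AS src
        PIVOT (SUM(Amount) FOR SaleYear IN ([2007], [2008], [2009])) AS p;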

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • bob.willsie (9/10/2009)


    Hmm. I just popped into Access 2003, opened an existing query in SQL design mode, right-clicked and selected Properties. However, I don't see anything that lists or suggests "Pass-Thru" as an option.

    However, I am sure a portion of my queries can't be pass-thru due to my chopping-up approach. For instance, in the query I just checked I was joining a local table with a server-linked table. I think that pretty much eliminates that one from being a pass-thru query, if I understand things right.

    Is it possible that all queries that could be pass-thru (i.e., 100% server-based joins, for instance) are automatically run that way, and that all others are local (joining a server table with a local table as I described above)?

    To be honest, I never put much thought into where the query was executing unless I know it is going to be a bear; then I make sure all the data is local.

    I guess I just expected it to automatically happen on the server or local depending on whichever was more efficient.

    No - you have to MAKE queries be pass-through.

    Within the query designer: Query menu, SQL Specific, Pass-Through (then fill in the extra info requested).
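
    The "extra info" it asks for is essentially the ODBC connect string (you can also set it later in the query's ODBC Connect Str property). A typical value, with made-up server and database names, looks something like:

        ODBC;DRIVER={SQL Server};SERVER=MyServer;DATABASE=MyDatabase;Trusted_Connection=Yes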

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Ah. Got it. Thanks Matt.

    Am I correct in assuming that forcing it to be pass-thru is only useful if all the tables being joined reside on the server?

  • bob.willsie (9/10/2009)


    Ah. Got it. Thanks Matt.

    Am I correct in assuming that forcing it to be pass-thru is only useful if all the tables being joined reside on the server?

    It will only work if all of the tables are on the server. Pass-through physically moves the execution context to the server, so it won't "see" any local tables.

    These are best for read-only queries that severely limit big tables on the server (so - pulling small resultsets back from "big" tables on the server).
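
    As a rough example of that pattern (object names invented), a pass-through body like the one below lets the server do all the scanning and aggregation of a big table, and only the small summary travels back over the network:

        SELECT CustomerID, COUNT(*) AS OrderCount, SUM(TotalDue) AS Revenue
        FROM dbo.SalesOrderHeader
        WHERE OrderDate >= '20090101'
        GROUP BY CustomerID;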

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • I apologize if I was a bit rough. My intention was not to insult anybody; that was just the unpolished expression of my thoughts. Many DBAs seem to live in their "ivory tower", far from everything and everyone else (slow down: I said "many", not "all"!) and do not seem to realize that most of their certitudes are based on biased and/or inadequate and outdated concepts. They strongly favour dogmatism over pragmatism.

    Finding examples should not be very difficult, among others:

    1. "What can be done on a server must be done on a server" (or something like that): wrong! It depends on several factors, and among them:

    a) The total "power" of this server as compared to the "power" of the user's workstation. Here you have to take into account that a server has many clients (so its "power" has to be divided among them). In my work environment it is not rare to have workstations that in terms of "power" (performance of the processor(s), disk speed, capacity and access time, available memory, etc.) are not far behind some servers. The time when servers were "super-computers" and their clients were just a bit more than terminals is long gone.

    b) Security issues. That can be a problem, but giving access to read-only views and only allowing data changes through the use of stored procedures, with a strong model of security, roles and access rights, can ensure that no real harm can be done to the data (see the sketch after this list). Of course I do not condone giving access to everything to everyone without any control or supervision.

    c) Availability. This is a big one. If a user or a department has to wait for an overstretched DBA (or team of DBAs, for those who are lucky enough to work in teams) to get a more or less adequate set of data, while he or she (the user) or it (the department) has the knowledge, the skills and the technology to process raw data and get results quickly, what's the point in waiting? (About that, I enjoyed immensely Phil Factor's blog on science fiction.)

    On those last two points, I often have the feeling that it's more a question of who gets the power than anything else. Some DBAs seem to forget that they are the guardians of the data, not its owners, and that the data has to be used by people who need to process and analyse it, often quickly.

    2. As some others pointed out here, Access has a richer set of functions to process data, and to many these functions are easier to use. Assertions such as "Access is bad, SQL is good" are futile and preposterous to me. Both are powerful tools, both have their "niche" and both can coexist in harmony. It mainly depends on the context.
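
    As mentioned under point 1b, here is a minimal T-SQL sketch of the views-plus-procedures-plus-roles idea (all object names are invented):

        CREATE ROLE AnalystReader;
        GRANT SELECT ON dbo.vSalesSummary TO AnalystReader;        -- read-only view
        GRANT EXECUTE ON dbo.usp_UpdateForecast TO AnalystReader;  -- changes only through a procedure
        DENY SELECT, INSERT, UPDATE, DELETE ON dbo.SalesOrderHeader TO AnalystReader;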

    Once more I apologize for my harsh words, even if I don't change my mind about the matter.

    Have a nice day! 😉

  • gah (9/10/2009)


    rf44 (9/10/2009)


    This is one of the most arrogant, presumptuous and unfounded assertions I have ever read.

    Strong words indeed...would you care to elaborate....maybe some examples.

    ..

    Perhaps not...that would turn into Religious Wars (MS Access vs MS SQL) that do nobody any good ... let us keep this religion-free, and each to their own database / programming God. ODBC is not MS Access.

  • The biggest problem I've had with access databases is where they are held and what they contain.

    The "I've built this database" brigade invariably store them on a local pc drive, accessible only to the one pc or laptop and with no security or backups. Then they put personal data into them and take the laptop home..........

    I've dealt with departments that get asked for their data and say "It's all on the system", but it turns out they mean a local Access database, not the main business system! Thus they appear to be performing poorly and not generating revenue, because that data is not included in the corporate totals.

    It has resulted in a strong desire to monitor all Access databases and, where possible, convert them to centrally held SQL Server databases with web pages and reports - we do have the resources and skills!

    I still use ODBC regularly for data links in Excel and also, in one of my favourites, Visio, for reverse-engineering a database into a database diagram. I feel Visio went backwards after Microsoft bought it, as you could previously both reverse-engineer and create a database right through from an ORM diagram.

  • rf44, I was merely saying that if you run any given query on SQL Server versus Access then SQL Server will be faster, particularly bearing in mind Bob's tables are on the SQL Server. I don't want to get into a slanging match about this; I really can't see anything contentious there.

    True, the downside of ODBC is that if you're not careful it allows proliferation of "data islands", which can cause big problems. Having said that, I'd rather the users were using ODBC connections than stand-alone Excel "databases" (and if users start using the term "Excel database" you are in BIG trouble :-P).

  • I realise that this is off-topic, but you'd be surprised how many people are using Excel databases. An old friend of mine is a City of London financial analyst and told me he had a spreadsheet of 2 gigabytes. On another occasion, I popped in to see one of the largest investment banks in the world. They were using Excel spreadsheets to try to value some complex derivatives that had cost them billions of dollars. Whilst doing an audit of the database systems of the UK government, I came across a spreadsheet that was used to calculate the grants given to Local Government by the UK government.

    The size of some of these ..er.. databases is larger and more complex than we ever dreamed were possible in the old days of Sybase on IBMs.

    In almost every case, Excel was used as the business staff had to get things up and running in a far shorter timescale than the IT departments were prepared to countenance. As Excel is considered to be an 'office productivity tool' and therefore not subject to the Gormenghast-style restrictions that were inherent in the IT department's policies toward applications, everybody was sneakily writing their own applications in it and hoping nobody would notice.

    Best wishes,
    Phil Factor

  • Phil Factor (9/11/2009)


    I realise that this is off-topic, but you'd be surprised how many people are using Excel databases. An old friend of mine is a City of London financial analyst and told me he had a spreadsheet of 2 gigabytes. On another occasion, I popped in to see one of the largest investment banks in the world. They were using Excel spreadsheets to try to value some complex derivatives that had cost them billions of dollars. Whilst doing an audit of the database systems of the UK government, I came across a spreadsheet that was used to calculate the grants given to Local Government by the UK government.

    The size of some of these ..er.. databases is larger and more complex than we ever dreamed were possible in the old days of Sybase on IBMs.

    In almost every case, Excel was used as the business staff had to get things up and running in a far shorter timescale than the IT departments were prepared to countenance. As Excel is considered to be an 'office productivity tool' and therefore not subject to the Gormenghast-style restrictions that were inherent in the IT department's policies toward applications, everybody was sneakily writing their own applications in it and hoping nobody would notice.

    And they did SUCH A GREAT JOB!!!! Well done Excel....:-D

  • Well done Excel....

    :w00t:

    As someone who spends a large amount of time beating crappy Excel workbooks from various clients back into usable SQL data, I cringe. :crying:

  • chrisn (9/11/2009)


    Well done Excel....

    :w00t:

    As someone who spends a large amount of time beating crappy Excel workbooks from various clients back into usable SQL data, I cringe. :crying:

    I do some of the same type of work. I cringe as well, because usually what happens is there is information in the spreadsheet, not data. This makes for some very difficult ETL on some of these spreadsheets. Then the clients whine and cry because it "costs" so much, when in reality we are cheaper than all our competitors, by a good amount.

    Oh, and what I mean by information is an easily understood format that a human can read and digest, rather than raw data that can be easily loaded into SQL Server.
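
    For what it's worth, when a workbook is at least rectangular, one way to stage it into SQL Server for cleanup is OPENROWSET against the Excel provider, roughly along these lines (the provider version, file path, sheet and staging table are all assumptions, and 'Ad Hoc Distributed Queries' has to be enabled):

        SELECT *
        INTO dbo.Staging_ClientData
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        'Excel 12.0;Database=C:\Imports\ClientData.xlsx;HDR=YES',
                        'SELECT * FROM [Sheet1$]');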

  • Matt Miller (9/10/2009)


    ...These are best for read-only queries that severely limit big tables on the server (so - pulling small resultsets back from "big" tables on the server).

    Matt said it best. Each way of processing has its own rewards and best-use scenarios. If you need to read most or all the data in the server table and are doing a lot of massaging of it for a report or whatever, importing or copying the data locally and doing Access queries is fine. If you need to read smaller bits and pieces of data from the server, or need multiple joins to properly limit the number of results, Pass-Through will be more efficient.
