how to remove numbers from strings?

This was/is an interesting thread. But I would suggest that before diving into comparisons of elapsed time, people get as broad a view as possible of the objects of their exercises. And that means not limiting your understanding of something to just what's written about it in BOL.

    Anyone interested in OVER should at least read these two papers:

    ISO/ANSI: Introduction to OLAP functions

    http://tinyurl.com/2taahc

    Itzik Ben-Gan and Sujata Mehta

    OVER Clause and Ordered Calculations

    http://www.insidetsql.com/OVER_Clause_and_Ordered_Calculations.doc

Where to start? :) Ok, an INNER JOINER asked an OVER, I guess quite sarcastically, the question:

>What is a single advantage of OVER() for someone who knows how to

>write T-SQL statements?

One can ask the same question of CASE, APPLY, FULL OUTER JOIN and just about all sql additions. The construct has utility; it's intended to make life easier for a user. It does not add anything new to the language that couldn't be done prior to its introduction. But, and here is the interesting part, the really big bonus is performance, and the real kicker is the nature of the performance gain. But let me back up for a minute. Just to be clear, we're talking about the "full" implementation of the OVER clause, which is really an sql "window". What MS did is offer an "incomplete" implementation of an sql window, which is what you have in S2005. The real intent of a window/OVER is not another (and simpler) way to write a group by and a join. Its real intent is to ease the burden of writing queries for cumulative aggregates (i.e. running sums), and to ease the burden on the server for such queries. Another way to say this is that sql wanted a new way to optimize a non-equi join. With a join, the only way to obtain running sums, counts and ranks is to use predicates like '<='. But MS left this entire functionality out of OVER (it is available in Oracle). What MS offers is the case of the equi-join, the most trivial case of OVER/aggregates, where a running sum is not wanted but only an aggregate over a whole partition (group). The new ranking functions in S2005 are based on the full window implementation and are but special cases using the count(*) aggregate. But that is the only case of MS using the sql window as it was intended.
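To make that "trivial case" concrete, here's a quick sketch (table and column names are invented for illustration) of the only aggregate form of OVER that S2005 accepts:

[Code]
-- S2005 aggregate OVER: one value repeated across the whole partition,
-- i.e. a group by result without collapsing the rows.
-- dbo.Ledger(AccountID, EntryDate, Amount) is a made-up example table.
SELECT AccountID, Amount,
       SUM(Amount) OVER (PARTITION BY AccountID) AS PartitionTotal
FROM   dbo.Ledger
-- No ORDER BY is allowed inside OVER() for aggregates here, so no
-- row-by-row running sum -- that is the missing "full window" piece.
[/Code]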

Prior to sql-99 OLAP there never was a way to optimize queries that use non-equal predicates to accumulate data. No indexing scheme could make up for the fact that such queries involved N^2 (N = rows in a group/partition) comparisons, which translated into scans of data. The sheer weight of this number for large data sets (think of the test data in this thread, 1 million squared) is beyond the ability of any optimizer to handle. Even with a covered index the scans are only cut in half. (See this article for what accumulations really look like and what they do to an optimizer: http://beyondsql.blogspot.com/2007/06/dataphor-sql-visualizing-ranking-query.html)
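Here is a sketch of the kind of query that causes the problem, the classic non-equi ("triangular") join for a running sum (names invented for illustration):

[Code]
-- For each row, the join matches every earlier row in the partition,
-- so a partition of N rows costs on the order of N^2/2 comparisons.
SELECT a.AccountID, a.EntryDate,
       SUM(b.Amount) AS RunningTotal
FROM   dbo.Ledger a
JOIN   dbo.Ledger b
  ON   b.AccountID = a.AccountID
 AND   b.EntryDate <= a.EntryDate
GROUP BY a.AccountID, a.EntryDate
[/Code]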

The question the sql window/MS ranking functions answer is how to efficiently obtain any number of aggregates with just a single pass over the data. The answer sql came up with I'm sure everyone knows, and knows very well. And I said it was a kicker. The answer is obvious: it's a 'cursor'! The S2K optimizer simply does not know how to efficiently "reuse" data. In S2005 when you use the ranking functions the ORDER BY is really a cursor. If an index is available the server doesn't even have to sort the data; if one isn't, it will sort the data just as you would in a cursor.

In this thread what is seen with OVER is setting a cursor appropriately for each partition and accumulating the aggregates just as a group by would, with a single pass over the table. It's just that you do not see the efficiency of it, because it's over the entire partition as opposed to accumulation(s) for each row within the partition.

    An INNER JOINER asked an OVER:

    >Where are those advantages you are so excited about?

To see the huge advantage of OVER, change the test to use any of the ranking function(s) vs. a join and non-equal predicate(s), as in the sketch below. If you really want to have fun, try a million rows without any partition :)
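Something like this (invented names; the correlated form assumes unique Amount values so the two ranks agree):

[Code]
-- S2005 ranking function: one ordered pass over the data.
SELECT Amount,
       ROW_NUMBER() OVER (ORDER BY Amount) AS rk
FROM   dbo.Ledger

-- Pre-2005 equivalent with a non-equal predicate: every row is
-- compared against all rows before it.
SELECT a.Amount,
       (SELECT COUNT(*) FROM dbo.Ledger b WHERE b.Amount <= a.Amount) AS rk
FROM   dbo.Ledger a
[/Code]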

    www.beyondsql.blogspot.com

I am trying to figure out a way to remove a number from a string. The value of the string is entered from an input text field. I need to detect leading zeros and remove them from the string. I understand the logic but I'm not familiar enough with the AS syntax yet.

//Found this to detect the first digit in the string

check = myString.charAt(0);

Need help forming the if statement to detect if the leading digit is '0' and, if so, remove it from the string.

Make a loop over your string and get the char pointed to by the index. If it's a '0', slice your string...

ActionScript Code:

for (var i:Number = 0; i < yourstring.length; i++) {
    if (yourstring.charAt(i) == '0') {
        // drop the character at index i by joining the slices around it
        yourstring = yourstring.slice(0, i) + yourstring.slice(i + 1);
        i--; // the string shrank, so re-check this index
    }
}

  • I have been away from the forums for a while, but took the time to read this entire thread! 🙂 I think I'm still sane at the end of it!!

I'm not sure if the post below is a joke or someone genuinely asking a question - if the latter, I suggest you start a new thread. Are you trying to get the script converted into T-SQL? If so, there's a quick sketch after the quote below.

    VAIYDEYANATHAN.V.S (10/25/2007)


I am trying to figure out a way to remove a number from a string. The value of the string is entered from an input text field. I need to detect leading zeros and remove them from the string. I understand the logic but I'm not familiar enough with the AS syntax yet.

//Found this to detect the first digit in the string

check = myString.charAt(0);

Need help forming the if statement to detect if the leading digit is '0' and, if so, remove it from the string.

Make a loop over your string and get the char pointed to by the index. If it's a '0', slice your string...

ActionScript Code:

for (var i:Number = 0; i < yourstring.length; i++) {
    if (yourstring.charAt(i) == '0') {
        // drop the character at index i by joining the slices around it
        yourstring = yourstring.slice(0, i) + yourstring.slice(i + 1);
        i--; // the string shrank, so re-check this index
    }
}
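For what it's worth, stripping leading zeros in T-SQL needs no loop at all; here's a quick sketch (@s is just an illustrative variable):

[Code]
DECLARE @s varchar(50)
SET @s = '000123045'
-- PATINDEX finds the first non-zero character; the '.' sentinel makes an
-- all-zero string come back empty instead of failing.
SELECT SUBSTRING(@s, PATINDEX('%[^0]%', @s + '.'), LEN(@s))
-- returns '123045'
[/Code]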

    Anyhow, Jeff, I have both SQL 2k developer (as my default instance) and SQL 2k5 developer (as an instance called SQL2005 oddly enough) on my development machine. They both coexist beautifully. The new SQL Management Studio takes a bit of getting used to, and its keyboard defaults are not the same as QA's - I've chosen the option to set them back to SQL 2000's keys. I myself have not yet become familiar with SQL 2k5's new bells and whistles because several of our customers still use SQL 2k so I need to code using it, but then test in both 2k and 2k5. What fun when 2k8 comes out 🙂

There are a few things in 2k5 that I would like to use in my code if possible but so far I haven't had any problems working around them in 2k (eg CTEs vs temp tables - see the sketch below).
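For instance (a rough sketch with invented names), the 2k5 CTE form and its 2k temp-table workaround:

[Code]
-- SQL 2005: common table expression
WITH Totals AS (
    SELECT AccountID, SUM(Amount) AS Total
    FROM   dbo.Ledger
    GROUP BY AccountID
)
SELECT * FROM Totals WHERE Total > 0

-- SQL 2000 workaround: materialize the same set in a temp table
SELECT AccountID, SUM(Amount) AS Total
INTO   #Totals
FROM   dbo.Ledger
GROUP BY AccountID

SELECT * FROM #Totals WHERE Total > 0
DROP TABLE #Totals
[/Code]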

    Good thread - I think it's gradually coming to a halt, although I'll happily look at any new code that's posted to compare the 2k way of doing things with the 2k5 way.

Have to revise my previous ramblings on insert order and identity above, apologies for that.

(though it's somewhat a sidetrack, it deserves to be correct)

Contrary to what I wrote before, if you have a table (id identity, col1...) and populate it with: INSERT tbl (col1) SELECT col1 FROM wherever ORDER BY col1, then the identities *will* be guaranteed to be generated according to the order stipulated. It's not guaranteed to be physically entered in that order, but that's another matter, and doesn't affect the logical 'sequence-generation'.
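A minimal sketch of what that guarantee looks like (made-up names):

[Code]
CREATE TABLE dbo.tbl (id int IDENTITY(1,1), col1 varchar(10))

INSERT dbo.tbl (col1)
SELECT col1 FROM dbo.wherever ORDER BY col1

-- id is guaranteed to ascend in step with col1, regardless of the
-- physical order in which the rows landed on disk.
SELECT id, col1 FROM dbo.tbl ORDER BY id
[/Code]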

    Again, sorry for misleading rants... :unsure:

    /Kenneth

Kenneth, I could not answer you immediately, which gave you some time to withdraw your post.

    But you did not use that chance.

    So, it's your fault. 😉

    Kenneth Wilhelmsson (10/25/2007)


    Ah, just small details that tends to get lost in the overall excitement...

Like first, when performance was measured by just executing something and, as I assumed, also returning the results. This is just a competition in who has the fastest box to render the result.

Though that was indeed noted, so it was changed to inserting into a temptable instead.

However, then it became a competition about who has the fastest disk drives.. 😉

    You did not put much thinking (actually - no thinking at all) into this.

    We did not compare INNER JOIN on my machine to OVER on Matt's one. Matt did everything on his server and guaranteed that disks and memory settings are the same for both tests.

Ofc, one could say that when running the 'competitors' back to back, the relative results between them could serve as an indicator of which is 'best', but still...

    Still what?

    Just disagree - and that's it?

Personally, I don't think that timings are that good at all when deciding what is most efficient anyway.

    No harm doing it, but there's so much more to what makes a query 'good' or 'bad'.

IMO, the plans generated are more useful to understand how the optimizer resolves the query, and also to find out what potential there may be to improve upon it. Both by syntax, and also what happens with different index strategies.

Plans are not always useful either.

I saw execution plans for 2 queries where SQL Server estimated resources for each of them as 50%.

When executed one by one, it was 2 seconds vs. 12 seconds of execution time.

You must understand what that execution plan means and how it will translate into execution time. If your execution plan takes more memory but less I/O load than another one, then it will be more effective on a server with much memory and slow drive(s). On a server with a flash disk system the other one may be preferable. You will see it by execution times for different options.

At the end of the day execution time is the only thing that matters. The server does not take coffee breaks and does not waste time posting on web forums, so if it takes longer to proceed then it's taking more resources.

    I've only seen one post with actual plans. (remember, haven't scanned the entire thread).

There they were identical, and indeed, they were. However, there's a lot of 'tweaking' one can do, like remove the ORDER BY. Sorts are always very costly in terms of performance, so we like to avoid them if we can. Now, that was only unnecessary in the example that selected into a temptable. Assuming that was a requirement, ORDER BY won't guarantee how rows are inserted anyway. 😎

    What do you mean "how rows are inserted"?

    How do you know how rows were inserted? Were you sitting there watching?

    Or you check it by SELECT from populated table?

    Then you need to read a little bit about what actually happens when you're inserting into a table.

About reserving data pages, reserving index pages, reordering index pages (including clustered index - data pages), rebuilding indexes, building B-trees, etc. It all happens AFTER the query we are discussing completed its job. It's all in the hands of the code defined by the CREATE TABLE statement.

    OTOH, if you're in the situation that you do need to have absolute control over a 'generated' sequence and store that in a table, then Sergiy's 'absolutely useless' OVER() clause is one of the very few tools that would allow you to do just that.

And no, 'INSERT myTmpTblWithIdentity SELECT foo FROM bar ORDER BY foo' will *NOT* guarantee that rows will be inserted according to the ORDER BY.

    Though, the 'old' way that is supported (if I understood that correctly) is if you do a construct like:

    SELECT (inline identity col), col1, col2... INTO tmpTbl FROM otherTable ORDER BY col1
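In concrete syntax that would be something like (a sketch; table and column names are just placeholders):

[Code]
-- The IDENTITY() function is only legal in SELECT ... INTO.
SELECT IDENTITY(int, 1, 1) AS id, col1, col2
INTO   #tmpTbl
FROM   dbo.otherTable
ORDER BY col1
[/Code]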

    I know, I'm a terrible person, and I deserve to be discredited in any possible way. :hehe:

    But what you just wrote is incredibly stupid.

I don't think the reputation of the name "Kenneth Wilhelmsson" on this forum is so worthless it may be sacrificed for the sake of the OVER argument.

I believe you admitted your mistake in your following post.

    It saves your reputation a little bit. 😎

The only way to check which way rows were actually inserted is to assign sequential IDENTITY values to the rows being inserted. This check proves that ORDER BY guarantees that rows will be inserted according to the specified order.

    And OVER() still remains useless for those who know how to use ORDER BY. 🙂

    So, Sergiy was wrong about saying OVER() is totally useless 😉

    So, Kenneth was wrong saying that Sergiy was wrong. 😛

Though if we go back to the examples that were competing, remove the ORDER BY from both, then the inner join version has a much more efficient plan. (and shorter time)

    But IF the ORDER BY is deemed necessary for the task given, then it's a tie between the two.

    Tie means "no advantage" - right?

In fact ORDER BY is much heavier than all other operations involved in that query. About 10 times heavier. It takes 2-4 seconds to process the data and 26 seconds to sort it.

So, the 2 seconds difference you name a "tie" is actually a 2 times difference in data processing.

Also as noted, though not yet followed up on(?), is what different index strategies would do to the statements in terms of changing the plans. From what I found, the OVER() version has the most to gain from playing around with where the clustered index is, or if we have a covered index, and exactly how that would look... Try it, and note how sort and join operators in the plans change or go away.

    Yes, that's also part of a tuning process, as you mentioned earlier, Jeff.

Where did you notice any advantage the OVER() version gained from indexes?

    Anyhow, bottom line...

Timings aren't everything. Don't forget the plans. Imo plans are better tools than timings, because timings are very much dependent on hardware. (ie the same plan may have different times depending on what else goes on in the system at that particular time)

Right - timings are very much dependent on hardware.

And as I said, the same plan may be effective on one set of hardware and useless on another.

But at the end of the day, timing is the only thing we are interested in. Nothing else matters.

And you must choose the execution plan most effective on this particular hardware.

    When feeling too excited, get a bucket of water, apply to head, and chill out a bit 😀

Then come back and be constructive (none mentioned, none forgotten)

    Exactly.

    _____________
    Code for TallyGenerator

  • rog pike,

    Did not have time to read all articles you referenced, but this one:

    Itzik Ben-Gan and Sujata Mehta

    OVER Clause and Ordered Calculations

    http://www.insidetsql.com/OVER_Clause_and_Ordered_Calculations.doc

    is horrible.

They use correlated subqueries everywhere, name it a "set-based approach" and show how cool OVER() is compared to that "set-based query". :ermm:

    I don't think any conclusions of such lame T-SQL programmers should be considered.

    They should learn T-SQL first, before they write long articles.

    As I said:

OVER() is useful for those who do not know T-SQL.

    _____________
    Code for TallyGenerator

  • Sergiy (10/29/2007)


    You did not put much thinking (actually - no thinking at all) into this.

    We did not compare INNER JOIN on my machine to OVER on Matt's one. Matt did everything on his server and guaranteed that disks and memory settings are the same for both tests.

You misunderstood. The argument was only about the different timings. The 'extra' that comes after the query is processed by the query engine seemed to be forgotten. (ie render the result, insert into a table..)

Plans are not always useful either.

I saw execution plans for 2 queries where SQL Server estimated resources for each of them as 50%.

When executed one by one, it was 2 seconds vs. 12 seconds of execution time.

You must understand what that execution plan means and how it will translate into execution time. If your execution plan takes more memory but less I/O load than another one, then it will be more effective on a server with much memory and slow drive(s). On a server with a flash disk system the other one may be preferable. You will see it by execution times for different options.

At the end of the day execution time is the only thing that matters. The server does not take coffee breaks and does not waste time posting on web forums, so if it takes longer to proceed then it's taking more resources.

    So you agree? 😉

'Not always useful' does mean that it does have its usefulness?

    At the end of the day, the execution plan is what decides how things are done.

    If it's a good plan or a bad plan, ofc that depends on you having flash drives or whatever..

    My point is simply that the plan is also important input. By looking at the plan you may even see how to improve the overall execution time. If you never look at the plans, you have no clue what the box is actually doing with your queries.

    Also, the estimated execution plans are just that - estimates.

We can only be certain, from time to time, if we look at the actual plans, whether the estimate was 'good' or not.

    What do you mean "how rows are inserted"?

    How do you know how rows were inserted? Were you sitting there watching?

    Or you check it by SELECT from populated table?

    Then you need to read a little bit about what actually happens when you're inserting into a table.

About reserving data pages, reserving index pages, reordering index pages (including clustered index - data pages), rebuilding indexes, building B-trees, etc. It all happens AFTER the query we are discussing completed its job. It's all in the hands of the code defined by the CREATE TABLE statement.

    I mean: INSERT t2 (col1) SELECT col1 FROM t1 ORDER BY col1

    (t2 has no identity column..)

In such cases ORDER BY is redundant, since it won't guarantee the order of the rows inserted, but as you agree, any SORT op is very expensive to do.

So, if we measure a statement with an operator that doesn't fulfill any practical function, then that statement will be unnecessarily 'heavy'. ie a moot point to test.

    I know, I'm a terrible person, and I deserve to be discredited in any possible way. :hehe:

    But what you just wrote is incredibly stupid.

    I don't think reputation of the name "Kenneth Wilhelmsson" on this forum is so worthless it may be sacrificed for sake of OVER argument.

I believe you admitted your mistake in your following post.

    It saves your reputation a little bit. 😎

The only way to check which way rows were actually inserted is to assign sequential IDENTITY values to the rows being inserted. This check proves that ORDER BY guarantees that rows will be inserted according to the specified order.

    And OVER() still remains useless for those who know how to use ORDER BY. 🙂

    Nah... you're not *that* terrible :kiss:

Well, this is a little hairsplitting, but that's actually not quite correct.

It doesn't guarantee how the rows will be inserted; what it does guarantee is the logical sequence in which the identities will be generated. The rows may still be physically entered into the table in another order..

Ofc this is just something I've read, but I do have the utmost respect for Conor, in that he knows his stuff. If anyone's interested, this info can be found here:

http://blogs.msdn.com/sqltips/archive/2005/07/20/441053.aspx

It details the 6 scenarios where ORDER BY is obeyed by the query optimizer.

    Tie means "no advantage" - right?

In fact ORDER BY is much heavier than all other operations involved in that query. About 10 times heavier. It takes 2-4 seconds to process the data and 26 seconds to sort it.

So, the 2 seconds difference you name a "tie" is actually a 2 times difference in data processing.

Those two particular plans were a tie, yes. Neither gave the other any advantage, since they were identical. The more intriguing question is '*must* the plans be identical'?

Where did you notice any advantage the OVER() version gained from indexes?

Well, the OVER() function 'suffers' from what seems to be some kind of 'embedded' sort. That is, all plans I looked at did have a sort step when OVER() was used. Though sort wasn't all; these plans also had joins, and joins may change depending on index strategies. So, playing around a bit with different indexes did produce different joins, and those may also affect the plan's attributes (memory grants, size etc) as well as, in the end, the overall resources needed to resolve it.

I did not say OVER() had any 'advantage', I said it had the 'most to gain', being the 'heaviest', by exploring that path.

Right - timings are very much dependent on hardware.

And as I said, the same plan may be effective on one set of hardware and useless on another.

But at the end of the day, timing is the only thing we are interested in. Nothing else matters.

And you must choose the execution plan most effective on this particular hardware.

    Oh I agree, choosing is key. But to be able to choose you must remember to look at it also 😎

    /Kenneth

  • Kenneth Wilhelmsson (10/30/2007)


    So you agree? 😉

    With what?

'Not always useful' does mean that it does have its usefulness?

    At the end of the day, the execution plan is what decides how things are done.

    If it's a good plan or a bad plan, ofc that depends on you having flash drives or whatever..

    My point is simply that the plan is also important input. By looking at the plan you may even see how to improve the overall execution time. If you never look at the plans, you have no clue what the box is actually doing with your queries.

An execution plan is the evidence and testimony; timing is the verdict.

By studying the evidence you may see what you missed during the trial and whether you have any chance to successfully appeal the verdict.

Depending on the judge or jury (hardware) you may put the accent on different evidence, call different witnesses.

But at the end of the day the only thing which matters is the verdict.

Clients won't share your excitement about a perfect execution plan if the system is slow.

    Also, the estimated execution plans are just that - estimates.

We can only be certain, from time to time, if we look at the actual plans, whether the estimate was 'good' or not.

    I probably was not precise in my expression. Those were actual plans, not estimated.

    I mean: INSERT t2 (col1) SELECT col1 FROM t1 ORDER BY col1

    (t2 has no identity column..)

In such cases ORDER BY is redundant, since it won't guarantee the order of the rows inserted,

Again, ORDER BY WILL guarantee the order of the rows inserted, but it does not guarantee that the pages of the table holding the rows will be physically placed on disk in the same order as those rows were inserted. And there is no guarantee SELECT will read pages in the order they are placed on disk, because the Server will read them in the optimal order depending on the position of the last page accessed and on any I/O activity going on at the same moment.

Well, this is a little hairsplitting, but that's actually not quite correct.

It doesn't guarantee how the rows will be inserted; what it does guarantee is the logical sequence in which the identities will be generated. The rows may still be physically entered into the table in another order..

That was actually absolutely correct.

Identities are assigned according to the order the rows are being inserted.

What it does not guarantee is that pages will not be reshuffled due to disk space fragmentation, indexing of the table or some other reason.

Well, the OVER() function 'suffers' from what seems to be some kind of 'embedded' sort. That is, all plans I looked at did have a sort step when OVER() was used. Though sort wasn't all; these plans also had joins, and joins may change depending on index strategies. So, playing around a bit with different indexes did produce different joins, and those may also affect the plan's attributes (memory grants, size etc) as well as, in the end, the overall resources needed to resolve it.

I did not say OVER() had any 'advantage', I said it had the 'most to gain', being the 'heaviest', by exploring that path.

It's like that gambler who could save more than everybody else if he would stop gambling.

Problem is, nobody can make him stop gambling.

That's why he will never save.

    _____________
    Code for TallyGenerator

  • I'll just quote it from the guys who wrote the stuff we're all using:

    -- quote --

    INSERT queries that use SELECT with ORDER BY to populate rows guarantees how identity values are computed but not the order in which the rows are inserted

    -- end quote --

I'm not sure if this is what you're saying, only in other words, or if you're saying that this isn't how it works..?

    /Kenneth

  • Kenneth Wilhelmsson (10/30/2007)


    I'll just quote it from the guys who wrote the stuff we're all using:

    Nobody is perfect.

Don't ever copy someone's words without thinking well about it, no matter how big or significant those guys sound.

    Because then YOU put your sign under this.

Do a simple reality check.

    Replace in that sentence "table" with "letter box" and "rows" with "letters".

    How stupid does it sound now?

    _____________
    Code for TallyGenerator

  • Sergiy (10/29/2007)


    rog pike,

    Itzik Ben-Gan and Sujata Mehta

    OVER Clause and Ordered Calculations

    http://www.insidetsql.com/OVER_Clause_and_Ordered_Calculations.doc

    is horrible.

    They should learn T-SQL first, before they write long articles.

    As I said:

OVER() is useful for those who do not know T-SQL.

Sometimes we mistake a truly handicapped person for the village idiot. I don't think that's the case here. I don't suffer fools well, so you'll have to forgive me for talking in the simplest way possible. But if it helps someone else, the lack of sophistication is worth the effort.

You didn't understand what you thought you read. The entire sql community, that means MS, Oracle, IBM etc., acknowledged before 1999 that sql cannot offer an efficient solution to ranking or cumulative aggregate queries. The paper you refer to as horrible attempts to show that t-sql is handicapped because the only way to write these queries in t-sql is with correlated subqueries/joins using a non-equi predicate. It's the use of these predicates (less than, greater than), which cannot be avoided, that sql chokes on. The paper goes into detail why this is so. Now, if you think you know something the rest of the whole world doesn't, please share. Write a t-sql query that doesn't use these predicates to get a rank or cumulative sum. As if you don't have enough problems with these issues, MS further screwed you up by only offering the most trivial form of OVER that can be used with aggregates. The MS version cannot be used for accumulating aggregates (on a row by row basis) but only over an entire partition, just like a group by. And it is this trivial OVER that you are going on about.

The full OVER functionality is 'not' in sql server for cumulative aggregates, you nitwit. You would have to go to Oracle or IBM DB2 to see the benefit of these functions over standard sql syntax for cumulative aggregates, as below.
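For reference, the full windowed-aggregate form being described looks roughly like this (a sketch with invented names; Oracle and DB2 ran this at the time, and SQL Server itself only picked it up in 2012):

[Code]
SELECT AccountID, EntryDate, Amount,
       SUM(Amount) OVER (PARTITION BY AccountID
                         ORDER BY EntryDate
                         ROWS BETWEEN UNBOUNDED PRECEDING
                              AND CURRENT ROW) AS RunningTotal
FROM   Ledger
[/Code]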

    You're making statements about things you really don't know anything

    about. But I don't judge the entire village by its idiot.

  • Sergiy (10/30/2007)


    Kenneth Wilhelmsson (10/30/2007)


    I'll just quote it from the guys who wrote the stuff we're all using:

    Nobody is perfect.

    Oh, I totally agree. Not even you, then?

Don't ever copy someone's words without thinking well about it, no matter how big or significant those guys sound.

    Because then YOU put your sign under this.

Well, this is straight from the horse's mouth. I do trust them to know this stuff in detail, and I'll gladly sign under what they tell us when asked about specifics.

Since they sit on the source code, and also are the ones who designed the thing that handles this particular part, they should know what guarantees are given with their product, and what are not, don't you think?

But, since we agree that no one is perfect, perhaps they are wrong and you're right.

    In that case, I urge you to go to

    http://blogs.msdn.com/sqltips/archive/2005/07/20/441053.aspx

and please post in the thread following the article that they are mistaken.

It's really important, since what is written there carries some heavy weight that reaches far, far outside this tiny forum here.

Do a simple reality check.

    Replace in that sentence "table" with "letter box" and "rows" with "letters".

    How stupid does it sound now?

Oh, I agree again. Replacing words in sentences just to come up with something else sounds incredibly stupid. Especially when it's words that don't even exist in the original quote 😀

    Come on, Sergiy.

    I know you can do better than just troll everything that passes by...?

    /Kenneth

  • rog pike (10/30/2007)


The entire sql community, that means MS, Oracle, IBM etc., acknowledged before 1999 that sql cannot offer an efficient solution to ranking or cumulative aggregate queries.

I don't care about the entire community.

The entire community is useless when it comes to matching Google performance.

The entire community wasted millions and millions in many different currencies on a standard health care system, and they ended up just passing .pdf files around.

The entire community failed on the global project for a big customer that I and 2 of my colleagues completed a year ago. Before us, 7 different companies from the top 10 lists in different countries failed to build even a prototype for a customer who could pay any bill.

That's why I do not care about the opinion of such a dumb community.

    BUT! I know several people who can build efficient solutions for cumulative aggregate queries. And their opinions do really matter to me.

    the only way to write these queries in t-sql is with

    correlated subqueries/joins using a non-equi predicate. It's the use of

    these predicates (less than,greater than), which cannot be avoided, that sql chokes on.

If you don't know another way, it does not mean it's the only way.

    Now if you think you know

    something the rest of the whole world doesn't please share. Write a

t-sql query that doesn't use these predicates to get a rank or cumulative

    sum.

    If you're so interested in this we can discuss the rates.

    I do cumulative monetary reports all the time. Aggregated statistics is the main thing people are interested in when it comes to money matters.

I'm afraid I cannot even count all the reports I built over the last year. And I never needed to use correlated subqueries.

Well, somewhere in my dark past there are several reports that used correlated subqueries. But those were small, rarely invoked reports, so I considered it not a big deal if I'd be lazy and dumb in those cases. Lord, forgive me my weaknesses.

    As if you don't have enough problems with these issues MS further

    screwed you up by only offering the most trivial form of OVER that can be

    used with aggregates.

It's not the only and not the worst case where MS screwed up.

My colleague gave up on reading a book about SQL 2005 published by MS after I pointed out 2 or 3 critical errors in every chapter. And that was a handbook for self-preparation for the MS certification exams.

I wonder how big is the community which got their SQL understanding from books like that?

    You're making statements about things you really don't know anything

    about. But I don't judge the entire village by its idiot.

Never and nowhere in history have the numbers proved themselves genius. It's the village idiots who appeared to be geniuses after all.

Remember, the entire community considered Jesus an idiot. So what?

The mediocre always admire the mediocre and consider everybody who exceeds their capabilities an idiot.

    There is nothing new here.

    _____________
    Code for TallyGenerator

  • guys, i'm really loving this. keep them coming. I'm in manila and it's a 4-day weekend here, Nov 1-4. This post should keep the weekend interesting. 🙂

    Mathematics is the universal language.


    Ronald San Juan | SQL DBA
    ID 710124 ~ Code "Northwind"

  • Kenneth,

they are too dumb for it to be worth putting effort into explaining to them how SQL Server works.

Perfect members of the community. 😀

They were informed about the error right in the first comment. They just cannot get it.

If you wish - go ahead, try.

Post this code over there and ask them whether a LOOP with a TIME DELAY can guarantee the order of inserting rows into the table.

For your convenience I added a PRINT statement to let you see the order in which the rows were actually INSERTED.

    Compare it to the orders in SELECT statements.

[Code]
-- Test 1: clustered PK on Name. Rows are inserted one at a time, half a
-- second apart, in the order of the string 'asdfghjkl;'. The PRINT output
-- shows the true insert order; compare it to the final SELECT.
CREATE TABLE [dbo].[Test] (
    [Name] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED ([Name]) WITH FILLFACTOR = 90
)
GO
DECLARE @I int, @String nvarchar(20)
SET @String = 'asdfghjkl;'
SET @I = 1
WHILE @I <= 10
BEGIN
    INSERT INTO dbo.Test (Name)
    SELECT SUBSTRING(@String, @I, 1)
    PRINT SUBSTRING(@String, @I, 1)
    WAITFOR DELAY '00:00:00.5'
    SET @I = @I + 1
END

SELECT * FROM dbo.Test   -- comes back in clustered index (Name) order

DROP TABLE [dbo].[Test]
GO

-- Test 2: same loop, but now with an IDENTITY column and the clustered
-- index still on Name. The Id column records the actual insert sequence.
CREATE TABLE [dbo].[Test] (
    [Id] [smallint] IDENTITY(1, 1) NOT NULL,
    [Name] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    CONSTRAINT [PK_Test] PRIMARY KEY NONCLUSTERED ([Id]) WITH FILLFACTOR = 90
)
CREATE UNIQUE CLUSTERED INDEX [UX_Test] ON [dbo].[Test]([Name]) WITH FILLFACTOR = 99
GO
DECLARE @I int, @String nvarchar(20)
SET @String = 'asdfghjkl;'
SET @I = 1
WHILE @I <= 10
BEGIN
    INSERT INTO dbo.Test (Name)
    SELECT SUBSTRING(@String, @I, 1)
    PRINT SUBSTRING(@String, @I, 1)
    WAITFOR DELAY '00:00:00.5'
    SET @I = @I + 1
END

SELECT * FROM dbo.Test   -- Name order again, but Id shows the insert order

DROP TABLE [dbo].[Test]
[/Code]

    _____________
    Code for TallyGenerator
