Getting Close to the 2017 RTM

  • SilentOne - Tuesday, July 18, 2017 1:58 AM

    Microsoft's development and release cadence is really impressive, but one of the biggest concerns I see is how developers and admins stay up to date with the latest features, improvements, and differences between versions.

    For example, CI/CD and DevOps have been around for many years, but SQL Server 2017's use of containers, and Microsoft's approach of using containers for the CI/CD and release process of SQL Server itself, is a huge change that will require large changes to established CI/CD pipelines and release processes. The reason I highlight this change is that without it I can't see how an organisation can keep up.

    How is everyone keeping up with these changes? It occurs to me that developers and admins (maybe more so admins?) will need to specialise in particular areas of SQL Server, because as a generalist you will never know the technology in enough depth to be considered an expert.

    Would be really interested in your thoughts.

    Fair question, but what I'd say is that the changes from 2016 to 2017 are minimal. There are some performance changes, and certainly Linux/containers, but those are more infrastructure changes, not dev ones. If you look at 2014 to 2016, there are quite a few changes, but in any particular area they're minimal. The HA story is improved, but it's evolutionary from 2012-2014-2016. If you know it from somewhere in 2012-2016, the jump is minimal.
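
    One low-effort way to keep track of what a given server actually runs is to query it; a minimal T-SQL sketch (the database name is a placeholder, not from the thread):

    -- Version, build, and per-database compatibility level.
    SELECT @@VERSION                         AS engine_version,
           SERVERPROPERTY('ProductVersion')  AS product_version,
           d.compatibility_level
    FROM   sys.databases AS d
    WHERE  d.name = N'YourDatabase';         -- placeholder name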

  • riversoft - Tuesday, July 18, 2017 8:05 AM

    In my opinion, disk-based SQL Server will be replaced with In-Memory for OLTP. With the changes in hardware (memory) it just makes sense.
    If you think about it, you should have the entire OLTP database in memory even for disk-based SQL Server.

    Maybe, but I doubt it. So many systems have no need of the performance increases from In-Memory OLTP. Hardware easily handles the workload.

  • Part of me wonders if MS is looking to take their on-premises enterprisey applications (SQL Server, Exchange, etc.) in the direction they're going with Windows 10 and Windows Server:
    more of a "subscription-based" model, with lots of little updates over the course of a year and a once- or twice-a-year "point update" (think Windows 10 Creators Update).

    As for where I work, we're *just* about finished migrating from SQL2008R2 to SQL2014, so I have no intention of planning a migration to SQL2016 or SQL2017 at present.  Now, if one of my customers came to me and said "there's this feature in SQL201X that we just HAVE to have," I'd work with my supervisor and the customer to migrate them to it.  But thus far, everyone seems to be happy with SQL2014 (well, those that have migrated so far...)

  • Steve Jones - SSC Editor - Tuesday, July 18, 2017 10:51 AM

    riversoft - Tuesday, July 18, 2017 8:05 AM

    In my opinion, disk-based SQL Server will be replaced with In-Memory for OLTP. With the changes in hardware (memory) it just makes sense.
    If you think about it, you should have the entire OLTP database in memory even for disk-based SQL Server.

    Maybe, but I doubt it. So many systems have no need of the performance increases from In-Memory OLTP. Hardware easily handles the workload.

    Agreed. Also, aside from whether the performance benefit is important to a business or not, while memory has gotten cheaper, TBs of memory is still quite the investment 🙂

    Is it great to have enough memory to fit your entire workload into memory? Absolutely! Financially feasible? Not always 🙂

    Cheers!
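
    To put a number on that investment for databases already using the feature, the In-Memory OLTP DMVs can help; a minimal sketch, assuming SQL Server 2014 or later:

    -- Memory consumed by memory-optimized tables in the current database.
    SELECT OBJECT_NAME(object_id)    AS table_name,
           memory_used_by_table_kb,
           memory_used_by_indexes_kb
    FROM   sys.dm_db_xtp_table_memory_stats
    ORDER BY memory_used_by_table_kb DESC;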

  • Steve Jones - SSC Editor - Tuesday, July 18, 2017 10:51 AM

    riversoft - Tuesday, July 18, 2017 8:05 AM

    In my opinion, disk-based SQL Server will be replaced with In-Memory for OLTP. With the changes in hardware (memory) it just makes sense.
    If you think about it, you should have the entire OLTP database in memory even for disk-based SQL Server.

    Maybe, but I doubt it. So many systems have no need of the performance increases from In-Memory OLTP. Hardware easily handles the workload.

    I see what you are saying, and that is the probable outcome. What I meant was for applications that need near real-time processing. The only caveat is that, in my experience, no database can be too fast :-).

  • Jacob Wilkins - Tuesday, July 18, 2017 11:06 AM

    Steve Jones - SSC Editor - Tuesday, July 18, 2017 10:51 AM

    riversoft - Tuesday, July 18, 2017 8:05 AM

    In my opinion, disk-based SQL Server will be replaced with In-Memory for OLTP. With the changes in hardware (memory) it just makes sense.
    If you think about it, you should have the entire OLTP database in memory even for disk-based SQL Server.

    Maybe, but I doubt it. So many systems have no need of the performance increases from In-Memory OLTP. Hardware easily handles the workload.

    Agreed. Also, aside from whether the performance benefit is important to a business or not, while memory has gotten cheaper, TBs of memory is still quite the investment 🙂

    Is it great to have enough memory to fit your entire workload into memory? Absolutely! Financially feasible? Not always 🙂

    Cheers!

    No doubt you are correct for very large databases. But aren't most TB-scale databases non-OLTP? I mostly work with databases < 500 GB, and they have to be in memory for us.

  • Across all the DBs in the world, I wouldn't be surprised if most DBs >= 1 TB are non-OLTP, but I work with some OLTP DBs of that size.

    The other thing to consider is that very often there is a one-to-many relationship between instances and databases.

    I work with a lot of instances that contain many OLTP databases (most commonly separated per client, but there are other schemes, of course), none of which are more than a couple hundred GB, but together they are many TB.

    At any rate, as you say it is definitely a desirable goal; the problem is the $$$ 🙂

    Cheers!

  • Jacob Wilkins - Tuesday, July 18, 2017 11:20 AM

    Across all the DBs in the world, I wouldn't be surprised if most DBs >= 1 TB are non-OLTP, but I work with some OLTP DBs of that size.

    The other thing to consider is that very often there is a one-to-many relationship between instances and databases.

    I work with a lot of instances that contain many OLTP databases (most commonly separated per client, but there are other schemes, of course), none of which are more than a couple hundred GB, but together they are many TB.

    At any rate, as you say it is definitely a desirable goal; the problem is the $$$ 🙂

    Cheers!

    We too have many databases on one instance. The max db size is < 100 GB, and they total < 500 GB, so we are lucky in that regard. Since I have never worked with databases > 100 GB, I appreciate your point of view. I will be more informed should I ever have that pleasure 🙂
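
    For anyone curious where their own instance sits, per-database allocated data size is easy to total; a minimal sketch (sys.master_files reports size in 8 KB pages; log files are excluded here):

    SELECT DB_NAME(database_id)                 AS database_name,
           SUM(CAST(size AS bigint)) * 8 / 1024 AS data_mb
    FROM   sys.master_files
    WHERE  type_desc = N'ROWS'
    GROUP BY database_id
    ORDER BY data_mb DESC;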

  • jasona.work - Tuesday, July 18, 2017 10:58 AM

    Part of me wonders if MS is looking to take their on-premises enterprisey applications (SQL Server, Exchange, etc.) in the direction they're going with Windows 10 and Windows Server:
    more of a "subscription-based" model, with lots of little updates over the course of a year and a once- or twice-a-year "point update" (think Windows 10 Creators Update).

    As for where I work, we're *just* about finished migrating from SQL2008R2 to SQL2014, so I have no intention of planning a migration to SQL2016 or SQL2017 at present.  Now, if one of my customers came to me and said "there's this feature in SQL201X that we just HAVE to have," I'd work with my supervisor and the customer to migrate them to it.  But thus far, everyone seems to be happy with SQL2014 (well, those that have migrated so far...)

    Perhaps, but I think what you're doing (R2 -> 2014) is about what I'd expect. Maybe R2->2016 is what I'd have aimed for this year (or next). I wouldn't expect those systems to move again until 2022.

  • riversoft - Tuesday, July 18, 2017 11:06 AM

    I see what you are saying, and that is the probable outcome. What I meant was for applications that need near real-time processing. The only caveat is that, in my experience, no database can be too fast :-).

    That I would agree with. Those systems that need high rates of data change will benefit. That said, 3D SSDs are likely to dramatically change architectures as well. Combined with In-Memory, you'll get amazing performance.
    http://wccftech.com/intel-3d-xpoint-optane-ssd-benchmark/

  • So, all you folks who recently acquired, or are currently studying for, the MCSA SQL 2016 certification: you're outdated already.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Steve Jones - SSC Editor - Tuesday, July 18, 2017 11:46 AM

    riversoft - Tuesday, July 18, 2017 11:06 AM

    I see what you are saying, and that is the probable outcome. What I meant was for applications that need near real-time processing. The only caveat is that, in my experience, no database can be too fast :-).

    That I would agree with. Those systems that need high rates of data change will benefit. That said, 3D SSDs are likely to dramatically change architectures as well. Combined with In-Memory, you'll get amazing performance.
    http://wccftech.com/intel-3d-xpoint-optane-ssd-benchmark/

    That's a good point. Even with memory-optimized tables and delayed writes, there is still disk access involved.
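
    The delayed writes mentioned here are controlled at the database level; a minimal sketch of the setting (the database name is a placeholder). It batches log flushes, trading a small window of potential data loss for throughput:

    -- FORCED delays log flushes for all transactions; ALLOWED makes it opt-in per transaction.
    ALTER DATABASE [YourDatabase]
        SET DELAYED_DURABILITY = FORCED;   -- or ALLOWED / DISABLED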

  • riversoft - Tuesday, July 18, 2017 8:05 AM

    I have been testing In-Memory OLTP. With the release of 2017, most of the restrictions I have encountered have been addressed (support for CASE and unlimited indexes on memory-optimized tables, for example).
    One of the features I really like is the ability to have schema-only memory-optimized tables.
    There are still some features I would like to see, subquery support for example.
    The combination of memory-optimized tables and natively compiled procedures and functions has shown incredible performance results.
    In my opinion, disk-based SQL Server will be replaced with In-Memory for OLTP. With the changes in hardware (memory) it just makes sense.
    If you think about it, you should have the entire OLTP database in memory even for disk-based SQL Server.

    I have to ask... what are you calling "incredible performance results"?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
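
    For readers who haven't tried the features riversoft describes above, a minimal sketch of a schema-only memory-optimized table plus a natively compiled procedure; it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and all object names are illustrative:

    -- Schema survives restarts; data does not (useful for caches and staging).
    CREATE TABLE dbo.SessionCache
    (
        SessionId int           NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload   nvarchar(200) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
    GO

    -- Natively compiled procedures bypass the interpretive engine entirely.
    CREATE PROCEDURE dbo.AddSession @SessionId int, @Payload nvarchar(200)
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        INSERT dbo.SessionCache (SessionId, Payload)
        VALUES (@SessionId, @Payload);
    END;
    GO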

  • Steve Jones - SSC Editor - Tuesday, July 18, 2017 9:54 AM

    peter.row - Tuesday, July 18, 2017 1:36 AM

    If MS are going to release at this cadence, then they need to start selling licensing such that it covers two versions, i.e. if you buy 2017 it should cover 2017 and 2018.
    It can't be in MS's interests to have so many versions of SQL Server out there.
    ...

    Why? Do you think that you get less value from SS2017 if SS2018 (or, likely, early SS2019) is released 18 months later? The support is the same: you'll get 5 years of SS2017 support + 5 more of security fixes, and potentially 6 more if you pay.

    Is it in MS's interest? I think it is. They only get x% upgrading to each version. If they release every 12-18 months, I think they get more upgrades along the way, especially as new features come along.

    I do think that there is a slight support increase, but with the way they've moved to a streamlined engineering process and feature flags, I think providing support for SS2016+ is easier than for previous versions.

    Okay, I'm going to re-frame the statement: so many SQL Server versions is going to be a pain in the arse for devs.
    5 years of support + 5 more of security fixes means that, at this rate, by 2022 a dev using SQL Server for their product might, depending on customers, have to support 5-8 different versions of SQL Server, and that doesn't even include service pack levels. That might not be a problem for MS, with its billions of dollars and endless resources for its projects, but for small-to-medium companies it's going to be painful and will force some hard decisions, i.e. you simply tell the customer no, or you end up having to support much older versions.

    The other issue could be upgrades for a company that is on an older version. With yearly releases there will always be a newer version just around the corner, which gives those in charge of the purse strings a good excuse to put off needed upgrades continuously.

    Yes, I do think you'll get less value for your money. If they charge however many thousands they do for licensing but you are only getting a few features, then depending on what you are upgrading from it's simply not worth it. However, if the license covered two versions, you could buy with the knowledge that you can upgrade next year and get those features.

    Alternatively, I can see SQL Server maybe going the subscription route: you pay nothing upfront, just a monthly cost. You upgrade to newer versions when you choose, not when they are available, much like you do now. However, unlike now, older versions would get dropped quicker.

    Before you say it, moving to the cloud is quite clearly not the answer, especially if you use anything more than the database engine, because SQL Azure doesn't support any of the other SQL Server components: SSRS, SSAS, SSIS.
    In my case, customers have sensitive data that they simply would not allow in the cloud, end of story. That might change in 20 years, and only if no company offers an on-site DB system, but that's not going to happen for a long time.

  • peter.row - Wednesday, July 19, 2017 1:19 AM

    Steve Jones - SSC Editor - Tuesday, July 18, 2017 9:54 AM

    peter.row - Tuesday, July 18, 2017 1:36 AM

    If MS are going to release at this cadence, then they need to start selling licensing such that it covers two versions, i.e. if you buy 2017 it should cover 2017 and 2018.
    It can't be in MS's interests to have so many versions of SQL Server out there.
    ...

    Why? Do you think that you get less value from SS2017 if SS2018 (or, likely, early SS2019) is released 18 months later? The support is the same: you'll get 5 years of SS2017 support + 5 more of security fixes, and potentially 6 more if you pay.

    Is it in MS's interest? I think it is. They only get x% upgrading to each version. If they release every 12-18 months, I think they get more upgrades along the way, especially as new features come along.

    I do think that there is a slight support increase, but with the way they've moved to a streamlined engineering process and feature flags, I think providing support for SS2016+ is easier than for previous versions.

    Okay, I'm going to re-frame the statement: so many SQL Server versions is going to be a pain in the arse for devs.
    5 years of support + 5 more of security fixes means that, at this rate, by 2022 a dev using SQL Server for their product might, depending on customers, have to support 5-8 different versions of SQL Server, and that doesn't even include service pack levels. That might not be a problem for MS, with its billions of dollars and endless resources for its projects, but for small-to-medium companies it's going to be painful and will force some hard decisions, i.e. you simply tell the customer no, or you end up having to support much older versions.

    The other issue could be upgrades for a company that is on an older version. With yearly releases there will always be a newer version just around the corner, which gives those in charge of the purse strings a good excuse to put off needed upgrades continuously.

    Yes, I do think you'll get less value for your money. If they charge however many thousands they do for licensing but you are only getting a few features, then depending on what you are upgrading from it's simply not worth it. However, if the license covered two versions, you could buy with the knowledge that you can upgrade next year and get those features.

    Alternatively, I can see SQL Server maybe going the subscription route: you pay nothing upfront, just a monthly cost. You upgrade to newer versions when you choose, not when they are available, much like you do now. However, unlike now, older versions would get dropped quicker.

    Before you say it, moving to the cloud is quite clearly not the answer, especially if you use anything more than the database engine, because SQL Azure doesn't support any of the other SQL Server components: SSRS, SSAS, SSIS.
    In my case, customers have sensitive data that they simply would not allow in the cloud, end of story. That might change in 20 years, and only if no company offers an on-site DB system, but that's not going to happen for a long time.

    Amen to that... especially that last paragraph.  I'll also add that forced upgrades would be horrible as well.  Ironically, and with the understanding that security fixes will be missing, the cool part about a version going out of support is that it finally becomes stable, with no chance of something new breaking, because they've stopped messing with the code.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
    "Change is inevitable... change for the better is not".

    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
    Intro to Tally Tables and Functions
