36 Changes

  • Comments posted to this topic are about the item 36 Changes

  • Actually - that kind of pace is what you'd expect in a continuous integration scenario. At least for every environment except Prod (where you probably want it a little more controlled).

    In particular - if you really are doing test-driven development and have all of your business requirements coded as tests, deploying new code simply means that the business requirements can be validated *automatically* by the testing engine, so the more check-ins/builds, the less likely you are to end up with code conflicts.

    With a big enough team - it wouldn't be that unusual to see many builds in a week.
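    The idea of "business requirements coded as tests" can be sketched with a minimal, hypothetical example; the rule and the names (`calculate_discount`, the $100 threshold) are assumptions for illustration, not anything from the thread. Each test case encodes one requirement, so every build that breaks a requirement fails automatically in CI:

    ```python
    import unittest

    # Hypothetical business rule: orders of $100 or more get a 10% discount.
    def calculate_discount(order_total):
        """Return the discount amount for an order (assumed business rule)."""
        if order_total >= 100:
            return round(order_total * 0.10, 2)
        return 0.0

    class DiscountRequirementTests(unittest.TestCase):
        """Each test encodes one business requirement, so any check-in
        that violates a requirement is caught by the testing engine."""

        def test_large_orders_get_ten_percent_off(self):
            self.assertEqual(calculate_discount(200), 20.0)

        def test_threshold_order_gets_discount(self):
            self.assertEqual(calculate_discount(100), 10.0)

        def test_small_orders_get_no_discount(self):
            self.assertEqual(calculate_discount(99.99), 0.0)

    if __name__ == "__main__":
        unittest.main(exit=False)  # run the requirement suite without exiting
    ```

    With the suite wired into the build, "deploy" just means "the requirements still pass" - no manual re-validation per check-in.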

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?

  • My concern with this number of releases to production would be alienating the audience. Every time they visited the site there could be new features to learn and navigate around. To begin with I imagine this being exciting for them, but for those that just want your application to work I can imagine this being off-putting. There is also a high risk of not being able to drop or radically refine rapidly deployed features that don't work for most users, for fear of alienating the users that adopted them early.

    But then maybe I am just an overly cautious stick-in-the-mud 🙂

  • One point that struck me most in that article is

    "No other industry would accept a failure rate that we have in our industry,"

    Very much true... I remember Steve's earlier article comparing the software profession with other industries... a doctor can't keep trying new releases from a pharma company so frequently on his patients... or a civil engineer try a new model so frequently... it is the software industry that currently has the privilege of experimenting on live data with many releases...

  • Gavin Reid (10/22/2008)


    My concern with this number of releases to production would be alienating the audience. Every time they visited the site there could be new features to learn and navigate around. To begin with I imagine this being exciting for them, but for those that just want your application to work I can imagine this being off-putting. There is also a high risk of not being able to drop or radically refine rapidly deployed features that don't work for most users, for fear of alienating the users that adopted them early.

    But then maybe I am just an overly cautious stick-in-the-mud 🙂

    I agree - can you imagine trying to use any form of application that changes so often? Doesn't this imply that there was something wrong with the original release, if there needs to be so much regular change? An application with wriggling feature creep that moves as you watch. How can anyone make quality decisions so quickly? How does anyone decide what to include and exclude, and keep a consistent user interface, at such a pace of development? If a user is subjected to so much change, how can they even keep up with it?

  • Where I am working, we have dedicated testers who test a developer's work after we think we are done with something. That increases quality by a lot and is really good, since the system is huge and complex. If you change a function that is being used by several objects or other methods/functions, you had better test it afterwards. The unit tests that we write, plus having the testers do their tests (as well as doing our own), make sure that the quality of what we produce remains high.

  • Sorry, I'm not suggesting low quality of code or application. I'm suggesting a lack of quality in the decision making process for the user experience, look and feel, and direction of the application.

    I myself make changes regularly to business web applications to fix bugs and improve performance or handle new conditions but I think that adding new functionality to an application needs some buy-in from the people who are targeted to use the application. My experience with this is that it takes longer for users to decide what they do and don't like.

    I was commenting in the spirit of the post I quoted on the subject of alienating users, not quality of build. Of course one can run a daily release cycle of high quality code, but who takes a step back and sees the sanity of the overall user experience in such a change rich environment?

  • I very much agree with the article's comment that developers interact with users more directly. That's my favorite part of a project!

    As for the large number of changes - this suggests to me that the application is not a final product yet - still greatly expanding the functionality. I could see that part of the changes are bug fixes (which should have been caught in testing) but that others are new features. I don't think this would necessarily confuse and frustrate novice users, because they might not have needed them yet, and the more sophisticated users were the ones who requested them.

    Still, I would think that the pace of changes would slow down as the product matures. One difference between Web 2.0 products and packaged products is that you don't have to worry about distributing the changes, i.e. service packs, etc. So, putting the newest features out there as soon as they are ready is not such a big deal.

  • If you deploy 36 times in one week, it means (assuming your coders are coding and testers are testing 24/7) you're deploying roughly every 4.7 *hours*.

    Even with scripting languages that's outrageous.

    Assuming an 8-hour, 5-day week, you're talking about deploying roughly once per hour.

    Sounds like BS to me...

    Let's assume Flickr has 24/7 coding/testing. How in the world can you find the code to change, change it, and then test it in under five hours? Not to mention actually doing the deployment itself?

    Automated tests are all well and good, but at such a pace it would mean you're depending almost entirely on them.

    No wonder the web is so screwed up.

    Now, if you told me they had a couple of deployments, with a grand total of 36 changes, that's hardly noteworthy, especially for a large coding/testing team. But 36 deployments a week? Very, very hard to believe.
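    The cadence arithmetic in this post works out as a quick back-of-envelope calculation; nothing is assumed beyond the 36-per-week figure from the article:

    ```python
    # Back-of-envelope deployment cadence for 36 deployments per week.
    DEPLOYS_PER_WEEK = 36

    hours_24_7 = 24 * 7   # 168 hours in a full 24/7 week
    hours_40h = 8 * 5     # 40 hours in a standard 8-hour, 5-day week

    # Average gap between deployments under each schedule.
    print(round(hours_24_7 / DEPLOYS_PER_WEEK, 2))  # -> 4.67 (hours, 24/7)
    print(round(hours_40h / DEPLOYS_PER_WEEK, 2))   # -> 1.11 (hours, business week)
    ```

    So the "once per hour" figure only holds on a 40-hour week; round-the-clock, it is closer to one deployment every four and two-thirds hours.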

  • To be fair Roger, although the team is releasing a change every hour, with a sizable team you could have each developer releasing only one change per day, for example.

    It also just says 36 changes in a week.

    There could be 36 developers, each working on their own thing for a week and then deploying their own change on Friday.

    Although on the surface it sounds crazy, it could be done quite easily by throwing developers at it, or by keeping the individual changes really small.

  • You hit the nail on the head when you mentioned "clients". In my experience, many of them work on IT projects above and beyond their everyday tasks, so involving them more in the testing process has always been an insufficiently met need. Adding more frequent testing may work from a technical standpoint, but only those projects that can justify user testing, or that require no user testing, are likely to be approved and/or sufficiently tested for production.

    Just my view of the world.

    --John

  • It all comes down to what the TEAM is comfortable doing. Implementing 36 changes a week may seem overly ambitious and possibly nerve-racking, but if it is doable for the type of applications being implemented, then I say why not.

    However, what I find conspicuously missing, is an average time frame between discovery, planning, design, testing, and implementation. It may be possible to roll-out 36 changes per week, but how long does it take for a single change to progress through each phase of the overall process? I have worked in places (that I hated) where a simple five minute change (to write) could take over six months to make it through all the QA and testing process levels before being implemented, because of all the red-tape and setting up and tearing down of multiple test environments.

    Ron K.

    "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." -- Martin Fowler

  • "I've seen people worry about testing, but I don't think this requires shortcuts in testing. You have less to test because there are smaller deployments, less changes. Everything gets speeded up, and that should mean simpler, and hopefully better, testing."

    Unless there is excellent understanding of the interactions of the existing code and the new code, less testing can be the road to failure.

    I've seen a "simple one-line change" shut down a web-based application. Unfortunately, the in-house testers had trusted access to the application (intranet) and their login process was not the same as that of the remote users (coming from 20 countries via the Internet). Being a telecommuter has its advantages - I encountered the problem as soon as it was deployed and had the phone numbers of the responsible parties.

    Not a common problem, but it does reinforce the need for testing more than just the current hour's changes.

    John

  • Follow the money. Correct me if I'm wrong, but Flickr is free to all of its "customers" who upload photos. Its real customers are the businesses whose ads you see on there. So if its non-paying customers suffer a little in the quality of the perpetual beta, then they'll just understand - you get what you pay for. And the paying customers are almost at the mercy of Flickr to tell them how many impressions were flashed before non-paying customers. And no money is changing hands in the actual application. No lives are really at stake. Wow. Now you can understand how they can run the perpetual beta. I'm sure it doesn't often happen, but if a picture on Flickr is somehow lost, then the owner simply has to upload it again. No big deal.

    Now if I tried to do that with my product (a higher education ERP), it would be a little different. In my case real money is changing hands and needs to be accounted for at a number of levels (GL, AP, AR, Payroll, Personnel/Benefits, Budgeting, Requisitioning/Purchasing, Registration/Tuition, Development, to name a few). And real data that affects real people's lives and carries real privacy information - your college transcript, for example - must be maintained without the danger of corruption, compromise, or loss. We generally roll out 2-3 main releases a year that are fully QA'ed and beta tested on five different independent signed-off levels of testing, and maybe 4-6 patches that have three different independent levels of testing.

    So don't hold up an essentially no-money, one-off application like the ad-popper Flickr when you talk about 36 refreshes a week. They know they can get away with it (by the way, the same goes for Google's main application). No one really raises a big beef (you get what you pay for), or it is difficult for the paying customers to know if something really didn't work well. I'll bet that they are extremely careful with the Accounts Receivable portion of their application! Show me a main-line-of-business application that affects real money and real lives doing this. Then I'll be really impressed.

  • For a counterpoint, look at Paul Graham's site, http://www.paulgraham.org, under the essays section.

    --

    JimFive
