
AGILE INSIDER REALITY BYTES…



10 Apr 2014


THE VALUE OF PASSION


My last engagement has left me a little scarred and bruised.  It has really
tested my core agile values and, as I reflect on it, I have come to a surprising
conclusion.

The engagement involved introducing and advocating cloud/virtualisation to
improve the testing capabilities within a global tier 1 bank.  The bank had all
the elements in place, so the challenge was not remotely technical but 100%
cultural.

The project was driven and underpinned by a vision rather than a backlog; this
was to prevent the project being derailed by cultural dead-ends or technical
side-shows.  It wasn't so much "we'll know it when we see it" as that we knew
what was acceptable, and anything that prevented this could and would be
challenged.  It was firmly based on the devops principles of completely
automated, repeatable environments.

Going back to the values, I personally place high value on simplicity, feedback
and working software, so rather than PowerPoint the project to death we developed
and released a working solution.  This was not a theoretically working solution,
but a working solution on real infrastructure provided by the men in black.
 Sadly, the real infrastructure we were using was on the wrong continent and was
only half-heartedly supported by the men in black, preventing the transfer of
data required for testing, and at this point the wheels came off.

We had performed live demonstrations and key people were heavily engaged and
excited about making use of what we had developed, yet what should have been a
simple lift and shift to the correct continent proved instead to be the slow and
excruciatingly painful death of the vision.  Everyone agreed with the vision,
but culture, policies, processes and bureaucracy all conspired against us.

The first wheel to come off was our use of an unsupported operating system.  It
was the correct operating system, but it hadn't been built by the men in black
so wasn't sufficiently opaque.  It took a few months of unpicking and reverse
engineering just to get back to some of the basic capabilities that are
mandatory for deploying software à la devops in a highly restrictive financial
organisation.  Those months did however allow us to get back to where we were,
and at this stage the feeling was that not only had we built it again, but that
this time we had built it better, as it was more in line with wider strategies.
 So finally, we want to get people in there, but wait... we have no disk space.

Popping into your local PC World for a few TBs of disk is easy and will set you
back a couple of hundred bucks at most.  In a corporate data-centre, however,
disk space is like gold dust and is charged by the ounce.  This was the first
stage in the project where we needed real funds and investment.  We were at the
point where we had a working solution, an eager customer base and genuine
excitement.  This was the game-changer and we were very, very excited...

What followed sums up the cultural challenges: instead of capitalising on the
solution and looking for opportunities to deploy it to other groups, we spent
months creating detailed business cases, investment plans, roadmaps, etc., to
get modest sums to fund the final rollout of the solution.  During these months,
I had to put my personal values aside in favour of documentation, process and
all those other things that are less valuable in agile, but I was playing the
long game.  Our strategy was realising our vision, and that meant enduring these
little tactical battles where necessary.  What I wasn't prepared for was how
demoralising this would be and just how much of my passion would be destroyed in
the process.  This wasn't a case of everyone clubbing together to devise a
brighter future; it was a horse-trading exercise of compromise and trade-offs.

As I look back now, detached from the project, it would be very easy to view
this as a failure; we certainly failed to get the funding or deliver our vision.
 What we did achieve, though, was the planting of a seed.  It will take several
years for the seed to grow, just as agile typically takes a few years to embed
itself in a large multi-national organisation (and even then the use of the
word agile probably means nothing more than the team doing a stand-up each day).
 I'm hoping that when the time is right, people may be able to dust off a few of
the blog articles I wrote explaining how devops can strengthen governance and
auditing, or why creating an environment automatically in minutes is better than
building one manually over weeks (even if the steps are all self-service).

The bank in question is a bank I personally love.  The people are great, the
technology (when you can use it) is cutting edge and the challenges are anything
but trivial.  I had the opportunity to stay at the bank in question and opted
not to, which came as a surprise even to me.  It wasn't because of the lack of
feedback, the scepticism towards simplicity, the illusion of control or the lack
of trust.  It was because I had lost my passion.  It turns out that my most
important value is passion, and this is the one fire they failed to ignite and
instead extinguished.

I have also realised (again) that every single assumption you make at the outset
of a project needs to be made explicit and validated.  I'm heartened that our
project was not a big costly disaster; it was a (relatively) small, well
contained experiment in the art of the (im)possible.  We delivered working
software, but failed to get it into the hands of the users.  We found simplicity
hiding in a web of complexity.  We were open and honest with all our
information, and everything we did was made available to everyone.

My passion is always to get high quality, working software into the hands of the
customer as quickly as possible and delight them.  To drive my passion I rely on
my own core values of simplicity, feedback, transparency and trust.  I have no
doubt the bank in question will deliver yet another highly compromised version
of what we have already built and demonstrated twice; I can only hope our
original vision remains the yardstick.

Filed under: Agile, enterprise, value driven


21 Mar 2014


FROM BEHIND CLOSED DOORS

It would be hard to tell from looking at this blog, but for the last two years I
have been actively blogging on the internal blogging platform within a global
bank.  When my contract ends on 31st March I will feel a small sense of loss,
because I will lose access to those articles, as well as to many other extremely
insightful articles only available within the organisation, written by people I
respect extremely highly and would love to work with again.  A few brave souls
have started blogging publicly, but only about material not related to their day
jobs, which means the important and insightful stuff remains locked behind those
closed doors.

I would love to be able to blog now about how successful my project has been, or
provide you with some nitty-gritty details about the challenges of corporate
working, but that would not be appropriate, or indeed fair.  Every corporate is
unique, and despite the constant, expensive search for silver bullets and
one-size-fits-all recipes, these will remain forever elusive.

I suspect that every corporate with any relevance to financial markets and
stability is facing similar challenges to meet increasing pressure from
regulators.  Tighter controls, more transparent models and increased
accountability make it much harder to deliver innovation in these organisations.
 For the last two years I've been exploring and demonstrating how a DevOps
mentality within these extremely important establishments introduces
opportunities, techniques and practices which can alter the balance from a
tick-box, form-filling, blame-shifting culture to a more proactive, rigorous and
scientific one.

Of course, when I refer to devops, just as when I refer to agile, I mean the
culture and values.  Debating puppet vs chef vs foobar is an entertaining
side-show: the search for the perfect silver bullet, which takes the focus away
from the value of delivering working software regularly, repeatably and
reliably.  Done right, delivering software or scaling infrastructure should be
a non-event, having been practised repeatedly.  The gulf between a cycle time of
minutes/hours and months/years won't be bridged with small incremental
improvements; it requires radical thinking, cultural change and the occasional
leap of faith...

Etsy, Netflix, Amazon - thanks for the leadership; now move over and smile as
the big boys try to 'buy' your culture.

 

Filed under: Agile


15 Feb 2012


TOP TIPS – ADVANCED ACCEPTANCE TEST DRIVEN DEVELOPMENT

Focus on Success

Over the course of my career I've had the pleasure of working with some great
agile teams. I've also had some bitter disappointments working with great
developers, testers and BAs who just don't get it...

Many of the teams that get it didn't actually use natural language to create
executable acceptance tests; however, they did have extensive suites of automated
acceptance tests, usually written by the business analysts or developers, but in
a language and style that is not normal for the non-agile developers I have
encountered. So in an attempt to capture the difference, I'm going to try
to provide some useful tips and techniques to challenge those attempting to
adopt acceptance test driven development within a corporate environment.

I will begin by recommending the various conference videos from GTAC. I'm not
saying Google are doing it perfectly (I just don't know), but I am happy to
believe they are probably doing lots of things right...

And most important, if we are going to go to the bother of creating executable
acceptance tests, think carefully about who is accepting them. If the only
person who will accept them (and I mean really accept, as in understand and
even be happy to take ownership of them) is the developer, then use the most
appropriate tool.

So the tips and techniques...

 1.  Make sure the story is the right story... If you have a story that is
     purely technical, then it's possibly better to test it using developer
     tests; it's unlikely to be something the business "really" care about... If
     the story isn't for a paying customer but for an internal user, try to find
     out what benefit that internal user is going to provide for the customer
     and reword the story from the end user perspective.
 2.  Don't clean up after tests... More important for acceptance testing is
     ensuring you know the state of the environment at the beginning of the test
     and that the test can run based on that state. Leaving the data created by
     the test can help immensely when issues are found. The amount and
     complexity of changes an acceptance test can inflict on an environment,
     combined with the number of points at which it can fail, makes cleaning up
     extremely complex and error prone, and it does not provide the same level
     of ROI as it does for unit tests. This has the added benefit of building
     a business case for better, more flexible environments and continuous
     delivery...
 3.  Create unique contexts for each test... To prevent tests stepping on each
     other's toes if they are run in parallel, create a unique context for the
     test; this could be as simple as creating a user with a unique id for that
     test, or might require creating a collection of unique items you plan to use
     (e.g. instruments in a trading app, pages in a cms, articles for a blog).
     There is a small sketch of this after the list.
 4.  Don't wait for something, make it happen... Where you need to wait for
     something, prod the system to make it happen and use a spin loop (poll with
     a timeout) so that the test still passes in an environment where you can't
     prod; the sketch after the list shows one way to do this.
 5.  Question everything, even the test framework... As you develop your
     acceptance tests, the supporting test framework and ultimately the
     application, continually ask yourself what would happen if you replaced x
     with y. For a web based application, the questions you might ask could be:
     what would happen if we wanted to make this available on an Android device
     or iPhone, does my acceptance test still hold true? Can my test framework
     support this easily without visiting all the fixtures? What if I change the
     test framework I use?
 6.  Use the English language to drive the domain model... Good acceptance tests
     usually make explicit the domain model needed to support the testing,
     and more often than not this drives the actual domain model needed within
     the application.
 7.  Use the real application code if at all possible... Rather than completely
     decouple your tests from the implementation, use the real implementation at
     the appropriate layer. This adds the benefit that changes to the
     implementation require no changes to the tests. Achieving this requires a
     suitably layered test framework to prevent those implementation changes
     rippling too far up and resulting in broken tests. The best candidates for
     reuse are typically the domain models, data access components and service
     interfaces.
 8.  Assume you are running everything on your own machine until you can't...
     Start with the assumption that everything you need is running on your local
     development machine, since ultimately the goal is that you can actually run
     these tests locally to check the functionality works. Once you have a test
     running and passing locally, you know the functionality is working and are
     then in a better place to refactor the test to support different
     environments.
 9.  Keep the tests isolated... Don't try to optimise tests by adding additional
     verifications or steps to existing stories. Create new tests. This might
     expose problems with running the tests quickly, but explore other solutions
     to that rather than creating huge tests that test too much. And imagine how
     the business will react when you say you are running 500 business tests and
     getting a 100% pass rate but can't test their new features because you
     don't have enough kit...
 10. Don't write the test at all... If the story doesn't have much value, or
     the systems you are using are not in your control and are not test friendly,
     then stop just short of automating it... Work out how you might automate
     it; the exercise will highlight the blockers and drive a better story and
     clearer acceptance criteria. But weigh up the cost of writing, maintaining
     and executing the test against the value of the story and the true
     cost/likelihood should a defect occur in that story...
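
To make tips 2, 3 and 4 a little more concrete, below is a minimal sketch of the
kind of helpers I have in mind: something to give each test a unique context and
a spin loop that polls for an outcome instead of sleeping for a fixed time. It is
only an illustration written against plain Java and JUnit-style tests; the class
and method names are made up rather than taken from any real project.

import java.util.UUID;
import java.util.function.BooleanSupplier;

/**
 * Minimal acceptance-test helpers: a unique per-test context (tip 3) and a
 * spin loop that polls for an outcome rather than sleeping (tip 4).
 * A sketch, not a framework; the names here are illustrative only.
 */
public final class AcceptanceTestSupport {

    private AcceptanceTestSupport() {
    }

    /** Tip 3: give every test run its own identity so parallel runs never collide. */
    public static String uniqueId(String testName) {
        return testName + "-" + UUID.randomUUID();
    }

    /**
     * Tip 4: re-check the condition every pollMillis until it is true or the
     * timeout expires; the final result is returned so the test can assert on it.
     */
    public static boolean pollUntil(BooleanSupplier condition, long timeoutMillis,
                                    long pollMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMillis);
        }
        return condition.getAsBoolean();
    }
}

A test built on these helpers would create its own user up front (for example
String user = uniqueId("placeOrder"), passed to whatever driver wraps your real
service interfaces, as per tip 7), prod the system, and then assert on something
like pollUntil(() -> client.ordersFor(user).contains(orderId), 30000, 500),
where client, ordersFor and orderId all come from your own application. Nothing
gets deleted afterwards (tip 2), which leaves the evidence in place when a test
fails.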

I'm sure a few of these will feel a little controversial or sit uncomfortably
depending on your experience. I'm also sure some appear, on the face of it, to
conflict with others. For those who reach nirvana, you will end up with a suite
of extremely robust acceptance tests (owned and fully understood by the
business), which a developer can run locally before committing code and which
are then run again in a virtual, production-like cloud.

Tagged as: acceptance tests, atdd, tdd, test driven, test first


19 Jan 2012


WHY BOTHER WITH AUTOMATED ACCEPTANCE TESTS

Who's this for?

I'm about to write a few articles covering some advanced acceptance testing
techniques. I don't plan to get into the nitty-gritty technical details and
instead want to discuss the whys... For some great material around acceptance
testing I highly recommend looking at the Concordion techniques page. I also
can't speak highly enough of Gojko Adzic and recommend you look at his blog, and
in particular the comments on his posts.

The question I want to ask is slightly more philosophical. Why are we really
writing automated acceptance tests and who are they really for?

If you have a customer on-site who writes the stories and tests (acceptance
criteria) in a non-implementation-specific way, then stop reading right now and
send me the link to your blog, otherwise...

In an acceptance test driven environment, the acceptance tests help ensure you
have solved the problem, and developer tests help ensure you are solving the
problem the right way. To validate that you are solving the right problem, we
need to express the tests in a way which doesn't tie you to a particular
implementation. We probably want to drive this more from the user experience: in
particular, the functionality we expose to the user and what the user can expect
when using that functionality. So we are writing acceptance tests that check
that the functionality we are making available to our customers is working
correctly, without worrying how we will provide that functionality. But does
that mean we are expecting our customer to "accept" those acceptance tests?

In agile teams you probably have a product owner, and in an ideal world we would
want the product owner to "own" these acceptance tests. More often than not, the
product owner will happily own the story but will delegate owning the specifics
(which sadly often includes testing) to a business analyst. Our goal is to get
the product owner to own these tests, but with a business analyst in the way we
are probably already at the stage where any tests will be implementation
specific, since the business analyst is probably doing exactly that: working out
how to solve the problem... In fact, business analysts probably don't want to
own the tests either, which leaves the team...

Let's reflect for a moment... We want the customer or product owner to own
acceptance tests, but instead it usually ends up being the team that owns them,
so let's explore what typically happens... The team search the web for
acceptance testing techniques, they come across BDD and see there is a wealth
of tools out there supporting BDD. They pick up a tool (Cucumber, JBehave, etc.)
and all tests are now captured and represented in pseudo-English in the hope
that the product owner or business analyst can start creating these tests
themselves. I've yet to meet a product owner or business analyst (or indeed a
developer) who uses this style of language:

> a product owner walks in to a bar and says to the barman
> 
> "Given I am thirsty
> and I am standing at a bar
> and there is a beer mat in front of me,
> When I ask the barman for a pint of his best bitter
> Then the barman will pour a pint of his best bitter
> and place it on the beer mat in front of me".

Just a little bit verbose (not to mention slightly implementation specific) for
expressing the ordering of a pint of best bitter. So my point is that BDD is a
technique: it is invaluable for exploring the problem domain and capturing
scenarios and examples to help understand a problem, but those scenarios are not
a specification in and of themselves. Using a tool too early to automate them
ties you into this form of unnatural expression and eliminates a choice of how
to engage with the customer later.

As a team, use the technique in discussions but then use a tool or framework
(e.g. xUnit) more suited to the real owner of the executable part (the
developers). That means you can leave the choice of customer-facing tool to a
more appropriate moment, when the customer actually does want to engage, and in
the meantime benefit from an environment and language the developers are most
comfortable with...  I've written previously that even working out what you plan
to test or demonstrate before working out how to implement it can add immense
value as a thought exercise.
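
As an illustration of that point, here is the bar scenario from the quote above
written as a plain xUnit (JUnit) test instead of Given/When/Then. It is only a
sketch: Barman, BeerMat and Pint are hypothetical domain classes, stubbed inline
purely so the example stands on its own.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The thirsty product owner's scenario, expressed in the team's own language.
public class OrderingAPintTest {

    @Test
    public void barmanPlacesAPintOfBestBitterOnTheBeerMat() {
        Barman barman = new Barman();
        BeerMat mat = new BeerMat();

        barman.serve("best bitter", mat);

        assertEquals("best bitter", mat.pint().style());
    }

    // Hypothetical domain stubs, just enough to make the sketch compile.
    static class Barman {
        void serve(String style, BeerMat mat) {
            mat.place(new Pint(style));
        }
    }

    static class BeerMat {
        private Pint pint;
        void place(Pint pint) { this.pint = pint; }
        Pint pint() { return pint; }
    }

    static class Pint {
        private final String style;
        Pint(String style) { this.style = style; }
        String style() { return style; }
    }
}

The intent is identical to the Given/When/Then version, but it lives in a
language the team already owns; if the product owner later wants to engage with
a natural-language tool, the scenario can be lifted into one at that moment.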

Toxic Waste

There is also another scenario which is by far the most dangerous... Having
browsed the web, we want a cross-functional team, so we embed a tester into the
team to perform this function. The tester works closely with the business
analyst and creates/owns functional tests. Most testers are new to development
and don't have the skills or experience of the developers to be writing code,
and worse, we are trusting these inexperienced developers with writing the most
important code in the system, the tests that validate that what we are doing is
correct... Inevitably we will end up with an enormous suite of functional tests
that are very "script" based, not easy to understand, and which add little if
any value to the day to day activities of the team.

So to recap, we want to write acceptance tests to validate we are building the
right thing (and that once built, or reimplemented, it continues to work),
and we want the customer (or product owner) to "own" them. If either of these is
not true in your organisation, then seriously ask yourself why you are doing
what you are doing; put the customer's hat on and ask whether you (as a customer
with limited technical knowledge) would ever "accept" what is being done on your
"behalf"...

Tagged as: acceptance tests, atdd, customer, tdd, test driven, user stories


17 Jan 2012


AGILE, A POEM

Agile is a Journey

I thought I'd have a little blast at poetry for fun...

> Agile is not a Gift I can Give,
> Nor is it a Method I can Teach,
> It is a Choice You must willingly Take,
> And a Journey You are willing to Make.
> 
> The road Never ends,
> It Twists and it Turns,
> But the Road is your Road,
> And it's your Trail which Blazes.
> 
> Don't be a Passenger,
> Don't pay a Chauffeur,
> Grab hold of the wheel,
> And Pick your own Pace.
> 
> Take those Detours,
> Enjoy the Delights,
> Splash in the Fountains,
> Chase those Green Lights.

Tagged as: fun, poems

