Thursday 28 August 2008

When the site hits the fan

Woke up this morning, checked in on a client site, nothing there. No database access, content management access or website front-end visible.

Not good.

After some frustrating phone calls, another freelancer working on the site (Unit A) started playing the 'blunderbuss blame game' to deflect any responsibility for the problem.

Unit A was put in place to ensure the site project would have failsafes for when a server went down: mirrors and database snapshots with smart load balancing, a clustered server setup, three different geographical locations, and so on.

Previously the site had 99.7% uptime on a Fasthosts dedicated server, but Unit A was keen to force through 'scaling up' tactics rather than wait for the worst to happen. Sensible chat indeed. However, it seems Unit A is a chatter and not a doer, and is now squirming while all the teams working on the project add yet another black mark to the Unit A gripe-sheet.

This is the culmination of a month of Unit A speaking with a forked tongue about a new, safer, more efficient system being put in place for developers to test and deploy code to the live site. That system hasn't been delivered.

What we now have is a system whereby Unit B (a friend of Unit A) outsources development work to Unit C, and all work must go through Unit C's team server environment.

Once we check files in and out, we now have to merge those files on the team server, run a script or batch file to update THE WHOLE SITE from the team server to the development server, and test on the development server. Then we snapshot that instance of the site and submit it to the Live-Candidate stage using a multi-step, error-prone system, and wait for Unit A to hear from Unit B (which in practice means me phoning Unit A at least once a day, asking that he send an email confirmation, which he is very resistant to doing). Then we carry out UAT (user acceptance testing), feed back to Unit A, who passes this back to Unit B, who charges the work to Unit C to revise the development server version, resubmit to the Live-Candidate stage, and so on.
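
To give a flavour of the most fragile step, the whole-site update, here is a rough, purely hypothetical sketch; the hostnames, paths and rsync/tar calls are my own assumptions for illustration, not Unit C's actual script. The point is that every check-in, however small, pushes the entire site tree to the development server before the manual sign-off chain even starts.

```python
# Hypothetical sketch of the "update THE WHOLE SITE" step.
# The hostnames, paths and commands below are assumptions for illustration,
# not Unit C's actual deployment script.
import datetime
import subprocess

TEAM_SERVER = "teamserver.example.com"   # assumed hostname
SITE_ROOT = "/var/www/site/"             # assumed path on both servers

def push_whole_site_to_dev():
    """Copy the entire site tree from the team server to the dev server,
    regardless of how small the change actually was."""
    subprocess.run(
        ["rsync", "-az", "--delete",
         f"{TEAM_SERVER}:{SITE_ROOT}",
         SITE_ROOT],
        check=True,
    )

def snapshot_for_live_candidate():
    """Archive the dev instance so it can be submitted to the Live-Candidate stage."""
    stamp = datetime.date.today().isoformat()
    subprocess.run(
        ["tar", "-czf", f"/backups/live-candidate-{stamp}.tar.gz", SITE_ROOT],
        check=True,
    )

if __name__ == "__main__":
    push_whole_site_to_dev()       # one changed file still means syncing everything
    snapshot_for_live_candidate()  # then the manual UAT / sign-off chain begins
```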

In the meantime, we have work queued up on the system that can't be deployed until Unit C fix their errors.

It's the perfect bureaucracy: impossible to understand, unduly complicated, with far too many agencies involved, and the people who want to get on with their work can't. Oh, and the CEO is under the illusion that we need Unit A and that everything will be better soon.

On top of that, the most creative developer is so demoralised by the situation that he is now leaving the team. This follows the entire existing team repeatedly expressing justified reservations to the CEO about the involvement of Unit A and the subsequent issues.

Some other key failings of Unit A: making no allowance for the fact that the whole team uses Macs, which aren't compatible with the team server; not being open with basic permissions and access details for the team; not communicating clearly; and so on.

Sometimes I feel like laughing.