
Thursday 29 January 2009

Fancy a Hummer in Carlisle

This has been an interesting journey: installing WordPress on a Callisti server and then configuring it to behave much like a regular website, but with added widgets. In a way the back end is more interesting than the front!

There is a Flash slideshow plugin based on the NextGEN image galleries, and aside from some email settings on the server and a lot of CSS editing, I've been impressed with the scope for customisation that the WordPress community offers.

The next part is testing the site and getting the content and up-to-date images uploaded, as well as the Google account duties.

sitemap.xml CHMOD 777 and all that...
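For the record, the permissions tweak amounts to the following. A minimal sketch in Python, assuming a hypothetical path for sitemap.xml in the web root: CHMOD 777 is the quick fix that lets the plugin write the file, though a tighter mode is enough once the web-server user owns it.

import os
import stat

# Hypothetical location of the sitemap in the WordPress web root (an assumption).
SITEMAP = "/var/www/html/sitemap.xml"

# The quick fix: CHMOD 777 gives read/write/execute to everyone, which is
# what gets the sitemap plugin writing the file without complaint.
# os.chmod(SITEMAP, 0o777)

# A tighter alternative once the web-server user owns the file:
# owner read/write, group and others read-only (the equivalent of CHMOD 644).
os.chmod(SITEMAP, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)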

Monday 1 September 2008

Google Listings

Just tried a speculative search in the UK results for 'usability and seo' and Callisti comes up in position 8. Remove the 'and' and it shifts to the 3rd or 4th page. Encouraging all the same. I must update the sitemap now that some content for user testing and SEO consultancy has been added in a subdirectory of the website.
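Updating the sitemap is just a matter of appending url entries for the new pages. A rough sketch of that in Python, assuming the standard sitemap schema and hypothetical domain and paths (the real URLs will differ):

import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)

# Load the existing sitemap from the web root (hypothetical filename/location).
tree = ET.parse("sitemap.xml")
root = tree.getroot()

# Hypothetical URLs for the new subdirectory pages (assumptions, not the real paths).
for path in ("/usability/user-testing/", "/usability/seo-consultancy/"):
    url = ET.SubElement(root, "{%s}url" % NS)
    ET.SubElement(url, "{%s}loc" % NS).text = "https://www.example.com" + path

tree.write("sitemap.xml", xml_declaration=True, encoding="UTF-8")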

User testing or website usability

I'm not even sure if usability is actually a word beyond the realms of websites and graphical user interfaces. Anyhow, user testing is a research component of the website usability consultancy service offered by Callisti.

A client's website is examined in several ways: what does the unassuming user discover unprompted when visiting the website; how does the user navigate through the site and what is her experience of this; how does the client intend the website to work; etc.

By compiling the findings, we can report back on a convergence strategy that aims to improve ROI and goal conversion from the client's point of view while also improving the experience users report.

Thursday 28 August 2008

When the site hits the fan

Woke up this morning, checked in on a client site, nothing there. No database access, content management access or website front-end visible.

Not good.

After some frustrating phone calls, another freelancer working on the site (Unit A) started playing the 'blunderbuss blame game' to deflect any responsibility for the problem.

Unit A was put in place to ensure the site project would have failsafes for when a server went down: mirrors and database snapshots with smart load balancing, a clustered server setup, three different geographical locations, and so on.

Previously the site had 99.7% uptime on a Fasthosts dedicated server, but Unit A was keen to force through 'scaling up' tactics rather than wait for the worst to happen. Sensible chat indeed. However, Unit A is a chatter and not a doer, it seems, and is now squirming while all the teams working on the project add yet another black mark to the Unit A gripe-sheet.

This is the culmination of a month of Unit A speaking with a forked tongue about a new, safer and more efficient system being put in place for developers to test and deploy code to the live site, which hasn't been delivered.

What we now have is a system whereby Unit B (a friend of Unit A) outsources development work to Unit C, and all work must go through Unit C's team server environment.

Once we check files in and out, we now have to merge those files on the team server, run a script or batch file to update THE WHOLE SITE from the team server to the development server, test on the development server, then snapshot that instance of the site and submit it to the Live-Candidate stage using a multi-step, error-prone system. Then we wait for Unit A to hear from Unit B (which in practice means me phoning Unit A at least once a day asking him to send an email confirmation, which he is very resistant to doing), then carry out UAT (user acceptance testing), then feed back to Unit A, who passes this back to Unit B, who charges the work to Unit C to revise the development server version and resubmit it to the live-candidate version, and so on.

In the meantime we have work queued up on the system which can't be deployed until Unit C fix their errors.

It's the perfect bureaucracy: impossible to understand, unduly complicated, with far too many agencies involved, and the people who want to get on with their work can't. Oh, and the CEO is under the illusion that we need Unit A and that everything will be better soon.

On top of that, the most creative developer is so demoralised by the situation that he is now leaving the team. This comes after the whole existing team repeatedly expressed justified reservations to the CEO about the involvement of Unit A and the issues that followed.

Some other key failings of Unit A: making no allowance for the fact that the whole team uses Macs, which aren't compatible with the team server; not being open with basic permissions and access details for the team; not communicating clearly; and so on.

Sometimes I feel like laughing.