Why didn't my entry get the recognition it deserved?

Just after the publication of the shortlist, I (as chairman of the judges) got several emails from entrants asking why their products or projects didn't win or didn't make the shortlist. I know that I will also get many queries after the event on 15 March, asking why an entry didn't win.
 
I couldn't really answer these in any detail, partly because I couldn't remember every entry in detail and partly because I couldn't answer for our 20 judges. Please bear in mind one thing: there are a lot of judges, all with their own beliefs about what makes a good entry. Some put a high premium on innovation, some on business benefits; some judge projects on management, some on technology - there's no easy answer.
 
Some ascribe sinister motives: I've heard whispers that X won because it's a big company, or that Y won because it advertises in Cloud Pro or is a member of CIF. None of these are true: all entries are considered on their merits. Some of the winners are big companies, some are small; there's an excellent mix this year. As for advertising and CIF, only one judge is a Cloud Pro employee and only two are part of the CIF infrastructure - they couldn't influence the result even if they wanted to (and they don't). All the judges abide by a set of rules: if they have a financial interest in any of the products, providers or projects in a category, they drop out of that category altogether. All the judges take their responsibilities seriously and put in a lot of work assessing the entries.
 
Having said that, and having consulted with other judges, there are some general themes to consider.
 
User comments
If your entry has no user comment or support, you're not going to get anywhere. You can tell us it's the best product or service ever, but if you can't find a single customer to support that point of view, it doesn't fill the judges with any confidence. And we like to see those customers say something meaningful: "This is a great product" doesn't really cut it. "Since using X, we've saved 30 percent of our running costs, completed projects 10 percent quicker and achieved better staff happiness" does.
 
What does the product really do?
There were a couple of instances where a judge would fire off a question to the rest of the group: "It may be me, but I don't understand what this product does." This would generally lead to an admission that he or she wasn't alone, and that the entry was couched in such terms that no-one knew what it did. Don't use jargon to disguise any uncertainty about the product's role. Tell us what it does in plain, simple terms.
 
Just the facts, please
We like figures. Telling us a project saved money is all well and good; telling us it saved 15 percent of IT costs or improved profits by 20 percent is much better. We have as much respect for facts as Mr Gradgrind - so the more you can tell us, the better your entry will be received.
 
Counting the cost
And our love for precise figures means that we like seeing costs. In the old days of technology, the catalogue price and the actual price were very different - there was a wide variety of discounts to be negotiated - so it was hard to judge actual costs. Cloud isn't like that, or shouldn't be. There were a lot of entries where costs were left off the entry form, which makes it hard to judge value for money. Tell us as much as possible about what the cost is going to be; it will help our thought process. What's surprising is that, over the years, there's been less information on costs, not more. Be upfront about what your service will mean for your customers' bottom lines.
 
Support documents
Have you had a good magazine review? Have you been recognised by Gartner or another analyst? Have you been subject to a rigorous benchmark test? Have you won other awards? If you can say 'yes' to any of these, tell us about it - and include that material with your entry; it helps the judges immensely.
 
Have a good reputation
Our judges don't live in a vacuum. They know this industry very well and have heard of many of the products they're judging. Hell, many of them have used some of them. If a judge has heard only good things about a product, that's going to stand an entry in good stead; if a judge has heard bad things, then that product is going to be marked down. We judge most of the entries on the forms themselves, but occasionally real life sneaks in, and there were a couple of instances where products were marked up or down because of judges' personal experiences.
 
Ask yourself "is it cloud?"
Most companies know what a cloud service is but, looking at the entries we get, there are still one or two who don’t. These are fewer in number than there were, but if you’re not sure, have a look at the NIST definition.
 
Is it up to date?
We're rewarding the best in cloud. If your product was released ten years ago and has had no major upgrade since, it's not going to win. The business (and cloud) landscape has changed a lot since then, and judges tend to recognise products that try to move with the times. We also like projects that are fresh and new: one that was basically completed three years ago and has just been tweaked since then will not win.
 
Of course, you can do all of these and still not be a winner. If we have 10 entries from companies who all do all of these, there are going to be five or six disappointed candidates. This was particularly relevant this year: there were categories where the entries were so good that every one could have been a winner a couple of years ago. It's great for us that the quality has improved, but it does mean some disappointment out there. And when the quality is so high, judges' little quirks can make all the difference … and there's little that can be done about that.