
By Nigel Evans, member of the BPS Standards in Assessment Centres Working Group

AC Top Ten Tips

 A practical ‘in the field’ spotlight on what is actually happening in assessment centres, with the intention of raising the standards of assessment centre practice.

Assessment centres have enjoyed a reputation as the superior method for assessing individuals for selection and development. Assessment methodology is well established [1], with new research insights continually emerging [2]. However, assessment centres are plagued by practical issues that tend to limit their effectiveness [3].

Whilst there are some classic texts to reference in order to understand assessment centre methodology [4], most practitioners learn more from concrete examples of what to do and what not to do.


With this in mind, this article draws on the recommendations of experienced assessors to provide a platform of practical suggestions for improving assessment centre implementation. These top tips link directly to quality assurance work in assessment centre design and delivery, with reference to the British Assessment Centre Standards [5] and the International Standards on Assessment in Organisations (ISO 10667).

Top Ten Tips for Better Design and Delivery of Talent Assessment Centres

1. Competency Definitions: Redesign – do not automatically use as given

There can be a lot of detail in a competency definition, yet how much of that can readily be observed in each assessment centre exercise? Rather than trying to capture everything, focus on the parts that a particular exercise is best able to elicit.

2. Assessor Training: Train – do not leave them untrained

Assessing is a skill and can be readily taught, but people need time to practise. Furthermore, even skilled assessors, be they internal or external to the organisation, need some training on the specific centre to be delivered, as timetables, exercises, marking guides, and the like vary enormously.

3. Support Staff: Nominate administrators – do not skimp on administration support

Assessors are there to do their principal job – that is, to assess well. They should not be spending their time copying exercise guides and taking lunch orders. Time is tight enough, so relieve the assessors of excessive administration tasks.

4. Role Players: Use trained ‘actors’ – do not let assessors ‘ham it up’

In specific work simulations (e.g., meetings), role players interact with the participants so as to generate behaviour to be assessed. To gain the required consistency, use an actor who has also been trained to understand the whole process, rather than dragging in a spare assessor or employee.

5. Assessor-Candidate Ratios: Ratio of 2:1 is possible – do not go further than 1:2

Sometimes there can appear to be more assessors in the room than participants. This is good, especially if you really need certainty on the specific behaviour assessed. Spreading assessors too thinly dilutes their observation capacity.

6. Exercises: Bespoke is best – do not just take it off the shelf

If you are aiming to assess top talent for your organisation, why blindly rely on generic exercises? There are exceptions (see tip number 7: Psychometrics), but exercises designed specifically for your context will get at the core of the behaviour you are looking for, at the right level.

7. Psychometrics: Use experts – do not forget to audit their qualifications

Standardised psychometric tools are very powerful in the hands of an expert. Unfortunately, many ‘users’ do not have the formally recognised qualifications in broader test use which includes applying test scores to assessment centre competency models. These broader qualifications can be checked on national professional registers.

8. Scoring: Unify scoring systems – do not use multiple scale scores

Grading participants on an A–E system should go across the board for all assessment exercises. If you have a 2 rating, an 80th percentile, a sten 6, a ‘good’ comment, or anything other than an A–E in your final scoring column, then something is wrong!
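The conversion described above can be made explicit and auditable rather than done ad hoc in assessors' heads. The sketch below shows one way to map sten scores and percentiles onto a single A–E scale; the band boundaries are illustrative assumptions for this example, not prescribed by any standard, and should be agreed by the centre designer in advance.

```python
# Sketch: normalising mixed score formats onto one A-E scale.
# Band boundaries here are illustrative assumptions, not standards.

def sten_to_grade(sten: int) -> str:
    """Map a sten score (1-10) to an A-E grade."""
    bands = {(9, 10): "A", (7, 8): "B", (5, 6): "C", (3, 4): "D", (1, 2): "E"}
    for (low, high), grade in bands.items():
        if low <= sten <= high:
            return grade
    raise ValueError(f"sten out of range: {sten}")

def percentile_to_grade(pct: float) -> str:
    """Map a percentile (0-100) to an A-E grade."""
    for cutoff, grade in [(90, "A"), (70, "B"), (40, "C"), (20, "D")]:
        if pct >= cutoff:
            return grade
    return "E"
```

Writing the mapping down once, before the centre runs, means every assessor converts a sten 6 or an 80th percentile to the same grade, and the conversion can be checked during quality assurance.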

9. Weighting: Be actuarial – do not ‘guesstimate’

Certain competencies, and even specific exercises, need not carry equal weight. This is fine with supporting evidence, but decide and confirm the weightings before you start comparing participants. Putting all scores in a spreadsheet is useful for building the scoring algorithm; do check it forensically before making that top hiring or promotion decision.
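An actuarial weighting of the kind described above might look like the sketch below. The competency names and weight values are purely illustrative assumptions; the point is that the weights are fixed and checked before any participant comparison takes place.

```python
# Sketch: a pre-agreed actuarial weighting scheme.
# Competency names and weights are illustrative assumptions.

GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

WEIGHTS = {  # agreed and confirmed before the centre runs
    "leadership": 0.4,
    "analysis": 0.35,
    "communication": 0.25,
}

def weighted_score(grades: dict) -> float:
    """Combine per-competency A-E grades into one weighted composite."""
    # Forensic check: weights must sum to 1 before scores are compared.
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(GRADE_POINTS[grades[c]] * w for c, w in WEIGHTS.items())
```

For example, a participant graded A on leadership, B on analysis, and C on communication scores 5(0.4) + 4(0.35) + 3(0.25) = 4.15 – a figure any panel member can reproduce from the spreadsheet.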

10. Providing results to the Panel: Discuss – do not just report scores from a distance

The time and effort of designing and delivering an assessment centre does not have to simply boil down to a composite score. You will have much richer information with which to paint the full picture, or provide deep-dive detail, for the panel who will ultimately make the final hiring or promotion decision. Given that the assessment centre is usually only one part of the decision-making process, feeding through key points will help inform next steps for combining data points or focussing final interviews.


There is a large ‘bandwidth’ of practice – ranging from what could be classified as ‘Best’ to ‘Questionable’. Hopefully these top tips will help you stay on the right side. Illustrations of best practice show what is possible to achieve within an organisational context, especially when consultants are challenged to rationalise assessment centre time and budgets.


About the author

Nigel Evans is a Business Psychologist with over 20 years’ experience of providing consulting services to leading companies (including AXA, BBC, Nokia, UBS and numerous government departments). He is a recognised expert in psychometrics and assessment, and is truly international in his approach, having delivered assignments in over 25 countries.

He is currently the BPS representative for the International Test Commission and is a core member of the BPS working group which has formulated the new BPS standard in assessment centres.


Article references:

1) Woodruffe, C. (2000). Development and Assessment Centres. London: Chartered Institute of Personnel and Development.

2) Putka, D. J. & Hoffman, B. J. (2013). Clarifying the contribution of assessee-, dimension-, exercise-, and assessor-related effects to reliable and unreliable variance in assessment center ratings. Journal of Applied Psychology, 98(1), 114–133.

3) Evans, N. (2014). Differences observed when implementing Assessment Centres – Best practice vs Questionable methods. BPS Division of Occupational Psychology Conference, Brighton, UK.

4) Ballantyne, I. & Povah, N. (2004). Assessment and Development Centres (2nd ed.). Hampshire: Gower.

5) BPS (2014). Standard in the Design and Delivery of Assessment Centres. Division of Occupational Psychology.

6) Bowler, M. C. & Woehr, D. J. (2006). A meta-analytic evaluation of the impact of dimension and exercise factors on assessment center ratings. Journal of Applied Psychology, 91(5), 1114–1124.

7) BPS (2012). Design, Implementation and Evaluation of Assessment and Development Centres: Best Practice Guidelines. Psychological Testing Centre.

8) Jansen, A., Melchers, K. G., Lievens, F., Kleinmann, M., Brändli, M., Fraefel, L. & König, C. J. (2013). Situation assessment as an ignored factor in the behavioural consistency paradigm underlying the validity of personnel selection procedures. Journal of Applied Psychology, 98(2), 326–341.
