
360 Degree Feedback for Leadership Development needs to be carefully planned and implemented. If you’re going to create a bespoke 360 to measure your organisation’s specific leadership behaviours and values, you will need to think about how to tailor your content, engage respondents and report the results. This Complete Guide will help you do just that.
This Guide to Designing 360 Degree Feedback for Leadership Development covers:
- Why build a bespoke 360 Degree Feedback?
- Where do you start with bespoke 360 Feedback Design?
- Using competency and other frameworks
- Turning competencies into 360 Degree feedback questions or statements
- Should we use rating feedback (quantitative) or written feedback (qualitative)
- What rating scale to use
- How to use free text questions in the 360
- How many questions should be in the 360?
- Should 360 Degree Feedback always be anonymous?
- Is it necessary to run a pilot?
- What should a 360 Feedback report include?
Why build a bespoke 360 Degree Feedback?
Designing a customised 360 Degree Feedback tool is an essential part of a bespoke development programme. Using a customised 360 will:
- Provide a measure of how effectively learners or leaders are demonstrating the target behaviours
- Clearly define for other employees what behaviours they can expect from leaders
- Measure changes in observed behaviours following learning or other development activities
- Gather group data that helps you to assess areas of strength and great leadership behaviours across the organisation, which you can then share
- Provide data on further learning and development needs
Where do you start with designing 360 Degree Feedback?
The best place to start is to decide on what you want to get out of the 360. So, ask yourself what benefit or value the 360 feedback will add to your leadership programme. Do you want to use the 360 as a pre-assessment tool, to decide whom you will invite to join the programme? Or will it be an initial benchmark to measure the skills that you want to develop through your programme? Alternatively, the 360 could be the key talent management tool for keeping track of leadership and succession planning.
Clearly defining your goals for leadership skills will help to define the behaviours that demonstrate those skills. Those behaviours will form the basis of your bespoke 360 Feedback design.
Using competency and other frameworks
If you have already developed one of the following, you can use it to start your 360 design:
- Competency framework, whether that is a management, leadership or other framework.
- Job families and specifications
- Company values and their definitions
Alternatively, you can look at using one of the published competency lists. A good example is Korn Ferry’s excellent For Your Improvement. (Note: we have no commercial relationship with, or interest in, this product.)
A competency framework can be a very good basis for a bespoke 360 Degree Feedback tool, because it includes competencies that are already familiar and specific to the organisation. If the organisation uses the framework for recruitment, performance management and succession planning, then the 360 can measure the same competencies consistently across those processes.
All these models are likely to need some work to turn them into a robust 360 feedback tool. In particular, for quality feedback, you will need to draw specific behaviours relating to the competencies out of the model.
Turning competencies into 360 Degree feedback questions or statements
The competencies in the 360 feedback should align with the outcomes of your leadership programme. Once you have decided on the competencies you are going to use, follow these guidelines.
Describing the behaviours
- Within each set of competencies, create 4-6 questions or statements (also called ‘items’) against which the feedback will be given. Each statement should describe one behaviour or action that demonstrates the competency.
- Keep each statement short and easy to understand.
For example, ‘appropriately applies moderately complex learning methodologies’ sounds like corporate jargon, which will exclude potential respondents who don’t understand what ‘appropriately’ and ‘moderately complex’ indicate. A simpler way of saying this might be ‘Solves problems with the right tools for the job’ – it’s accessible to a wider group of respondents, and more natural in tone.
- Ask yourself, “Can this behaviour be observed and rated?” If not, you need to change your question.
For example, can you see someone doing the following: ‘Understands own strengths and weaknesses’? In other words, can you see someone understanding something? Perhaps a better way to phrase this would be ‘Openly talks about their own strengths and weaknesses’, which would enable the respondent to say that they observe this behaviour ‘Rarely’.
Write action-based and observable statements
- Always use an active verb rather than a description of a trait. So instead of ‘is future focused’, we would recommend ‘develops future-focused plans’. This describes something that the person does, rather than a trait that might or might not be observable.
- Only ask about one observable behaviour in each statement. For example, here is a statement provided by a client: ‘Actively seeks and acts on feedback from others, reflects and acts on own learning’. In this case, a number of separate behaviours are being described. Actively seeking feedback is not equivalent to acting on feedback, therefore it is unclear which behaviour we are asking respondents to rate. This statement should be split into at least three statements:
– ‘Requests feedback from others’,
– ‘Acts on feedback from others’, and
– ‘Demonstrates that he/she has acted on his/her own learning’
Avoid ‘Does not’ statements
- It’s important to phrase statements consistently in the positive rather than the negative, e.g., ‘Stays on course when difficulties arise’, rather than ‘Doesn’t change direction when difficulties arise’. Otherwise, it’s hard for the rater to be sure they are providing the right rating.
- A mixture of positively and negatively stated questions is very confusing for respondents working with a rating scale. If I choose ‘Frequently’ in response to the second statement above, what exactly am I saying? Also, a respondent may feel that the questions are somehow ‘trick’ questions.
Avoid generalisations, theories and ‘psychological’ analysis
- Avoid questions that ask for an opinion, talk about general or theoretical concepts or sound like a psychological assessment, e.g., ‘Demonstrates emotional intelligence’. A question like this can be widely interpreted – what you think emotional intelligence is may be different to my view. These kinds of questions can also have an implicit judgement in them, and it’s therefore advisable not to use them.
Using rating feedback (quantitative) or written feedback (qualitative)
When designing 360 Degree Feedback, it’s important to understand the benefits and disadvantages of quantitative versus qualitative feedback.
With quantitative or rating scale feedback, respondents rate their response to a statement. For 360, the statement describes a behaviour, e.g., ‘Delegates tasks that will help the team to learn and develop their skills’. The respondent chooses a response on a scale, e.g., Strongly Agree to Strongly Disagree. The rating scale gives the respondent a very specific way of reporting on their observations of the learner’s actions. The responses of all raters are then put together to show an average rating for that statement.
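The aggregation step described above can be sketched in a few lines of Python. This is a minimal illustration, assuming a 5-point Agree/Disagree scale mapped to 1–5; the respondent names and ratings are invented for the example.

```python
# Hypothetical ratings for one statement, e.g. 'Delegates tasks that will
# help the team to learn and develop their skills', on a 1-5 scale
# (5 = Strongly Agree ... 1 = Strongly Disagree). Names are invented.
ratings = {
    "peer_1": 4,
    "peer_2": 5,
    "direct_report_1": 3,
    "direct_report_2": 4,
}

# Put all raters' responses together to show an average rating
average = sum(ratings.values()) / len(ratings)
print(f"Average rating: {average:.1f}")  # Average rating: 4.0
```

In a real 360, this averaging is normally done per respondent group as well as overall, so that (for example) peers and direct reports can be compared.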
The benefits of using Quantitative or Rating scale feedback
- Consistency of theme
By rating specific statements, the learner and respondents can focus very clearly on one behaviour at a time. Because they are responding to the same statement, the ratings can then be grouped and compared. This gives a clear message about how the behaviour is being demonstrated.
- Key organisational skills
The structured 360 allows everyone engaging with the process to think consistently about the key capabilities, skills and values that are important to the organisation. This helps to embed those capabilities in the culture of the organisation, and reinforces their importance for performance and development.
- Data on group strengths and development needs
Again, because the quantitative feedback is structured and consistent, the organisation will obtain clear data not only on individual learners, but also on group, departmental and regional training needs. Similarly, there will be clear indicators to support decisions on leadership development, talent management and succession planning.
The disadvantage of Quantitative feedback
The disadvantage of using quantitative feedback on its own is that ratings alone do not provide context or further information about the respondent’s observations. So, although you can get a clear idea of consistently high and low ratings, you must then make assumptions about why those ratings have been given.
The benefits of using Qualitative or text feedback
Qualitative feedback consists of free-text information provided by the learner and their respondents. Unlike quantitative feedback, qualitative feedback is prompted by open questions, such as “How could your colleague become a better leader?” or “What one thing could your colleague start doing to be more effective?”
The great thing about qualitative feedback is that the respondent can provide examples and experiences from working with the learner. This allows the learner to better understand the experience that their colleague has had. It also points the way to what specifically the learner can do to improve, based on those examples. Equally, examples of great behaviours are an indicator of what the learner could do more of. Text feedback can be in a section of the questionnaire that’s associated with a skillset, or separately, as above.
The disadvantage of Qualitative feedback
The disadvantage of using only qualitative feedback is that, unless you provide key themes, the subject matter of the comments can vary widely. Therefore, the feedback may be scattered without clear or consistent themes. The comments may have more to do with the respondents’ own agendas and issues than the areas that the learner may need to develop. Also, respondents can sometimes be reluctant to provide written feedback, as they may fear it will compromise their anonymity.
Quantitative and Qualitative feedback work best together
We have found that using both quantitative and qualitative options within the same 360 framework is a highly effective way of obtaining feedback that is both measurable and contextual. With careful design and integration of the best of both options, learners can get a consistent set of messages about their key skills and competencies, as well as specific examples which will help them to continue to do what is effective, and to change what they need to improve their performance.
Remember to test all the statements against the rating scale you decide to use, so that each statement matches what the scale is asking. For example, a statement like ‘Always asks for feedback from colleagues’ will not work with a frequency rating scale like ‘Always, Often, Seldom, or Never’, because a rating of ‘Seldom’ against ‘Always asks’ is contradictory.
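This check can even be partly automated before the questionnaire goes out. Below is a minimal sketch, assuming a frequency rating scale: it flags draft statements containing absolute words like ‘always’ or ‘never’, which clash with frequency ratings. The word list and statements are illustrative, not exhaustive.

```python
# Words that conflict with a frequency scale ('Always' ... 'Never').
# This list is an illustrative assumption, not a definitive lint rule.
ABSOLUTES = {"always", "never", "constantly"}

def clashes_with_frequency_scale(statement: str) -> bool:
    """Return True if the statement contains an absolute word."""
    words = statement.lower().split()
    return any(word.strip(".,") in ABSOLUTES for word in words)

statements = [
    "Always asks for feedback from colleagues",  # clashes with the scale
    "Requests feedback from others",             # fine
]
flagged = [s for s in statements if clashes_with_frequency_scale(s)]
print(flagged)  # ['Always asks for feedback from colleagues']
```

A human review is still needed, of course; this only catches the most obvious wording clashes.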

Which rating scale to use
Rating scales of all kinds are used in 360 Degree Feedback, ranging from 3-point to 10-point scales. The most popular is the 5-point scale. My preferred scales are between 5 and 7 points: not too limiting for respondents, like 3-point scales, and not so wide that the ratings and averages become meaningless, like 10-point scales.
Similarly, rating points can be labelled in different ways: agreement (Agree/Disagree), expectations (exceeded or not reached), or satisfaction (Satisfactory/Unsatisfactory).
In my experience, the most effective rating scale descriptors are those that are based on the frequency of observing the behaviour, rather than a judgment on the effectiveness of the behaviour, or indeed, the individual learner themselves.
I generally recommend frequency scales, such as ‘Always’ to ‘Never’ (as in, the behaviour is always observed) or ‘Almost always’ to ‘Almost never’.
There should be a clear difference between each rating point and if necessary, each should be defined in more detail so that users are clear what each rating looks like.
How to use free text questions in the 360
There are different ways to include free text questions. You can include them within each competency. So you could, for example, have a free text box that relates to the competency ‘managing team performance’. This helps people to focus their free text comments on that competency.
Alternatively, use free text questions as a ‘wrap up’. This allows respondents to give their own views, with no direction from the questionnaire as to what they should focus on.
Another option is to direct respondents by asking open, coaching-type questions such as, “What would you like your colleague to Start doing (or Stop or Continue)?” or, “What one thing would make your colleague a better manager/leader/team member?” This provides a very easy and concrete way for respondents to provide examples of things that they’ve seen or things they’d like to see. And where it includes a ‘Continue doing’ (e.g., ‘continue being bright and happy in the mornings because it cheers us all up’), there is always a positive element.
How many questions should be in the 360?
After many years of experience in designing 360 Degree Feedback, we’ve found that between 30 and 40 questions (a mix of ratings and text) is the best number for an effective 360 survey.
A very short ratings/text questionnaire, while quick to complete, might not provide enough detail for the feedback to be informative to the reviewee (ratee). Long questionnaires, over 40 questions, do not engage respondents. Respondents may not complete them. Or they may give any old answer, just to get finished. Either way, the quality of the feedback reduces as the 360 gets longer!
We recommend around 6 competencies, each with 5 questions, so 30 ratings, plus 3 text questions, giving a total of 33.
Should 360 Degree Feedback always be anonymous?
Best practice states that 360 Degree Feedback should always be reported anonymously. The British Psychological Society’s guidelines* advise anonymity for 360 feedback. By doing this, you give respondents the opportunity to provide more honest, and therefore more valuable, feedback to learners. The only exception to this is the feedback of the Line Manager. In most cases their feedback is directly attributable and visible in the 360 report. If required, the 360 administration should be able to anonymise this feedback. However, in general, line managers should be willing and able to provide honest feedback that is attributable to them.
The process and the reporting should support this anonymity. Coaches or managers who are debriefing the 360 with learners should also ensure that learners focus on the feedback content, rather than the feedback givers.
However, not all organisations use 360 in this way. In Managing and Measuring Employee Performance**, the authors, Houldsworth and Jirasinghe, describe how BAE Systems successfully uses open and attributable 360 Degree Feedback as part of its performance appraisal process.
Is it necessary to run a pilot?
Whether you run a pilot 360 depends on the size of the target group and the extent to which you will use the 360 in the future.
If you’re going to use the 360 from time to time for small cohorts or coaching individual leaders, a pilot may not be necessary. We would still recommend a limited rollout to a small number of colleagues to get their views before the main launch. You should ask colleagues who may be critical, as well as those you know will give a positive response.
For larger 360 assessments across the organisation, we would recommend a pilot. This will help you to get content and process right first time.
We would also recommend a content validation process for larger projects, to ensure that the questions being asked will measure what they are meant to measure.
What should a 360 Feedback report include?
Information that’s easy to understand
Research and experience designing 360 Degree Feedback indicates that the key features of a good 360 report are:
- Simplicity and ease of understanding
- Key feedback messages that are clear and easy to find
- Self-reflection prompts
- Next steps and development plan
Headlines and detailed feedback
A well designed 360 Degree Feedback report should include the following as basic:
- Self-review ratings by statement/question
- Top-rated statements as rated by all respondents
- Lowest-rated statements as rated by all respondents
- All statements rated top to bottom (as rated by all respondents)
- Anonymised free text comments
Here’s an example of top 5 and bottom 5 rated skills:

Comparison between self and others, between different groups
Additional reporting information that can add value to the feedback and the discussion with the learner includes:
- Comparison between self-ratings and respondents’ ratings
- Comparison between different respondent group ratings
- An indication of the spread of scores for each question (to highlight any large deviations, which could indicate respondents seeing inconsistent behaviour)
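The spread indicator in the last point above is usually just a standard deviation per question. As a minimal sketch, assuming 1–5 ratings and an arbitrary flagging threshold of 1.5 (both assumptions, not fixed rules), with invented question names and scores:

```python
import statistics

# Hypothetical per-question ratings from all respondents (1-5 scale).
question_ratings = {
    "Delegates tasks that develop the team": [4, 4, 5, 4],
    "Openly talks about own strengths and weaknesses": [1, 5, 2, 5],
}

for question, scores in question_ratings.items():
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)  # sample standard deviation
    if spread > 1.5:  # arbitrary threshold for a 'large deviation'
        print(f"Review: '{question}' (mean {mean:.1f}, spread {spread:.1f})")
```

A large spread, as in the second question here, suggests respondents are seeing very different behaviour, which is worth exploring in the debrief rather than hiding behind the average.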
Keep the report format simple
Some reports provide the feedback data in statistical format (means, modes and variances), and in long passages of text. This can be difficult for learners to wade through and understand. Ideally, as much of the reporting as possible should be in graphical format. Graphs are more engaging to the viewer and are easier to understand than complex statistics.
The key purpose of the report is to highlight the main messages of the 360 for the learner and their coach. Too much data, in a very long report, can be off-putting.
In addition, with complex reports, analytical people tend to get their calculators out and start to figure out the ratings. Clearly we want to steer them away from analysis, and towards a discussion.
Ask your provider
Ideally you should not have to design your own 360 Degree Feedback report. There are many good templates already available. Therefore, you should be able to fit your 360 questionnaire, ratings and reporting relationships into a standard template. If you are using an online 360 system, it should allow you to do this and then show the data in a standard or agreed format. Each system provider will have a standard format for reporting the 360 Degree Feedback.
Putting it all together

Your 360 system provider should also be able to support you in customising the report, including branding and look-and-feel. This is important if you have a training or development offering and 360 Feedback is part of the overall programme. Here, consistent branding and content is critical for giving the 360 report a professional appearance.
More articles on this topic
If you’d like more detail on some of the topics covered in this article, explore our other articles below:
360 Degree Feedback Design: Why you should always include an ‘Unable to Rate’ option
Anonymous text 360 feedback: how does it work?
Should Feedback include Self-Assessment by the ratee?
Getting a high response rate to your 360 Degree Feedback, and why it matters
How to make your 360 Degree Feedback more user-friendly
Getting 360 Feedback right first time
How to improve 360 degree feedback
If you want better response rates for your 360 Feedback, do these things.
Leading organisations use 360 Feedback
360 Feedback For Performance Review
Competency frameworks: are they still relevant?
Using 360 Feedback to identify future leaders
For more information, and to see how Track 360 Feedback could help you to use 360 Degree Feedback in your organisation, call us or contact us now.
References:
*BPS Best Practice Guidelines
**Managing and Measuring Employee Performance, by Elizabeth Houldsworth and Dilum Jirasinghe, 2006