
Background

A study has previously been undertaken of how experts, or the kind of people who are recruited as assessors, perceive the validity of different welfare indicators. In an abstract based on that study, recently presented at a conference, it was concluded that these people “(…) do not agree on the level of validity of measures” and that “There were some significant differences between groups of experts but these were minor compared to the overall disagreement.”

If there are substantial differences between assessors in how they apply the scores, overall index values may depend on who does the scoring. Changes in the index over time may then reflect changes in who does the scoring rather than changes in the welfare of the animals. A central endeavour in this respect is to provide the assessor with a relevant scoring scheme (or scale) through which variation in animal welfare can be reported, while at the same time aiming to improve inter-observer reliability. The type of scale used is likely to influence both the reliability of the scoring and the information that can be gained from the measure.
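
As a purely illustrative sketch of how inter-observer reliability could be quantified for ordinal welfare scores under different scale resolutions, the following Python snippet computes a quadratically weighted Cohen's kappa for two hypothetical assessors scoring the same animals on a 5-point scale and on the same data collapsed to a 3-point scale. The scores, the scale levels, and the choice of weighted kappa as the agreement statistic are assumptions made for this example and are not part of the project.

```python
import numpy as np

def weighted_kappa(scores_a, scores_b, n_levels):
    """Quadratically weighted Cohen's kappa for two observers' ordinal scores (coded 0..n_levels-1)."""
    observed = np.zeros((n_levels, n_levels))
    for a, b in zip(scores_a, scores_b):
        observed[a, b] += 1          # contingency table of paired scores
    observed /= observed.sum()       # convert counts to proportions
    # Expected table under independence of the two observers (outer product of marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: larger penalty the further apart the two scores are.
    i, j = np.indices((n_levels, n_levels))
    weights = (i - j) ** 2 / (n_levels - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical scores from two assessors on 10 animals, on a 5-point scale (0-4),
# and the same scores collapsed onto a coarser 3-point scale (0-2).
fine_a = [0, 1, 2, 2, 3, 4, 1, 0, 3, 2]
fine_b = [0, 2, 2, 3, 3, 4, 1, 1, 2, 2]
coarse_a = [s // 2 for s in fine_a]
coarse_b = [s // 2 for s in fine_b]

print("kappa, 5-point scale:", round(weighted_kappa(fine_a, fine_b, 5), 2))
print("kappa, 3-point scale:", round(weighted_kappa(coarse_a, coarse_b, 3), 2))
```

Comparing the statistic across scale resolutions in this way is one simple means of examining how the choice of scale affects the agreement that can be achieved between observers.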

The role of experts may also influence how scores for welfare indicators are aggregated into scores at the criterion level and total welfare scores at farm level. It is further important to consider whether the aggregated welfare scores are in accordance with how experts view the total welfare of the farms they score. A large discrepancy between the aggregated welfare score and the welfare an expert would perceive when visiting the farm would be cause for concern. Indeed, one way of validating the aggregation procedure is to check how well it agrees with the holistic impression of the experts. In this way, the different aggregation procedures may be evaluated.
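
As a minimal sketch of how an aggregation procedure could be checked against experts' holistic impressions, the following Python snippet aggregates hypothetical criterion-level scores into farm-level scores under two candidate weighting schemes and compares each with hypothetical holistic expert judgements using Spearman rank correlation. All farm numbers, criterion names, weights, and scores are invented for illustration; the actual aggregation procedures to be evaluated are not specified here.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical criterion-level scores (0-100) for 5 farms on 4 welfare criteria,
# e.g. feeding, housing, health, behaviour (placeholder names, not the project's criteria).
criterion_scores = np.array([
    [80, 70, 60, 75],
    [55, 60, 40, 50],
    [90, 85, 80, 88],
    [65, 50, 55, 60],
    [40, 45, 30, 35],
])

def aggregate(scores, weights):
    """One candidate aggregation: a weighted arithmetic mean of criterion scores per farm."""
    weights = np.asarray(weights, dtype=float)
    return scores @ (weights / weights.sum())

# Two candidate aggregation procedures differing only in the criterion weights.
equal_weights = aggregate(criterion_scores, [1, 1, 1, 1])
health_heavy = aggregate(criterion_scores, [1, 1, 3, 1])

# Hypothetical holistic farm-level judgements by an expert panel (0-100).
expert_holistic = np.array([72, 48, 86, 58, 36])

# Rank agreement between each aggregated index and the experts' holistic view.
for name, agg in [("equal weights", equal_weights), ("health-weighted", health_heavy)]:
    rho, _ = spearmanr(agg, expert_holistic)
    print(f"{name}: Spearman rho = {rho:.2f}")
```

Under this kind of set-up, a higher rank correlation with the experts' holistic view would count in favour of the corresponding candidate aggregation, which is one simple way the different procedures could be compared.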

Research plan