RA Diagnostic

We are at ease when multiple conflicting priorities exist—we don’t rush to resolve tension prematurely.
When unexpected disruptions occur, we treat them as learning opportunities, not just risks to be controlled.
People here are encouraged to challenge the way things are done, even when systems are performing well.
Decisions are often made without full information, yet we still act decisively and review intelligently.
We listen carefully to outlier voices, dissent, and weak signals—even when they are inconvenient.
We are good at noticing when the context changes—and adjusting accordingly, even mid-flight.
Our long-term success depends on the health of the wider systems (environmental, social, economic) we operate within—and we act like it.
Environmental and social impacts are considered at the start of strategic decisions, not added as an afterthought.
Our core business model helps to solve problems in the wider world, not just avoid creating them.
We routinely consider the second- and third-order effects of our actions, not just direct impacts.
Teams are encouraged to design solutions that serve both immediate needs and wider system health.
Our performance measures include indicators of ecosystem health or social wellbeing—not only financial results.
Collaborations often result in outcomes that could not have been predicted or planned in advance.
We have spaces (formal or informal) where people from unrelated functions come together to make sense of complex problems.
Language barriers (jargon, disciplinary assumptions) are actively worked with, not ignored or papered over.
We deliberately bring in perspectives from outside our sector or field when facing novel challenges.
Accountability here isn’t just structural—it’s personal and cultural.
People feel safe raising ethical concerns, even when those concerns challenge powerful interests.
We actively question whether our governance processes reinforce or undermine trust.
When using AI, we start with the question: what kind of intelligence is needed here—human, artificial, or both?
Our teams are trained to understand both the power and the limits of AI—not just how to use it.
Decisions shaped by AI are routinely reviewed and questioned by people, especially where values or consequences are involved.
We have mechanisms to identify when human judgement should override or constrain automated outcomes.
We have scenarios and strategies for how AI could reshape our business, our workforce, or our impact.
People are encouraged to reflect in action, not only after the fact.
We make time and space for learning—even when it competes with delivery pressures.
The organisation adapts its structures and strategies in response to what individuals and teams are learning.
Learning loops span the organisation—insight in one area regularly informs others.