Why Desiderata Matter
When we set out to build AI tools for structural engineering, we faced a question that most technology companies avoid: what should we actually be building toward? We found ourselves asking this not in terms of technical capability or commercial viability, but in terms of what is genuinely desirable for the engineers who will use these tools, the people who will occupy the buildings they design, and the society that depends on safe, sustainable infrastructure.
This question led us to a concept from ethics literature that has shaped our approach: desiderata.
What Are Desiderata?
Desiderata, from the Latin for "things desired," are the things we want to achieve in an ideal world. Before jumping to principles, guidelines or implementation details, we need to articulate what we are actually aiming for.
This matters because most AI ethics frameworks start in the wrong place. They begin with compliance, asking what the law requires, or with principles like fairness and transparency. These are important, but they skip a crucial step. They assume we already know what we want AI to do in a given context.
In structural engineering, that assumption does not hold. Nobody has systematically asked what responsible AI looks like in this specific field or what a genuinely good outcome would be for engineers, for buildings and for the people who use them.
Desiderata force us to answer that question first.
Alignment with the Institution of Structural Engineers
We are not building this framework in isolation from the profession's existing values and direction. The Institution of Structural Engineers has established three pillars that guide its work: Advancing the Profession, Supporting Professionals and Enhancing Professionalism. Its mission is to secure a safe and sustainable society by advancing structural engineering, raising professional standards and sharing knowledge. Its vision is to create an engaged global community of structural experts who are inspired and supported to innovate, collaborate, and generate a safe and sustainable built environment.
These pillars and this vision directly inform our desiderata for AI in structural engineering.
Advancing the Profession means AI should expand what structural engineers can achieve, not diminish their role. Supporting Professionals means AI tools should make engineers more effective, not replace their judgment or create dependencies that undermine their development. Enhancing Professionalism means AI must be implemented in ways that uphold the standards, accountability and ethical obligations that define chartered practice.
Last week we attended the new IStructE President Brian Uy's inaugural address, where he outlined three themes for his 2026 presidential year: registration and supervision; structural efficiency; and technical competence and research. Each of these intersects directly with questions about AI in structural engineering.
On registration and supervision, AI raises questions about what competencies engineers need to demonstrate, how supervised practice should account for AI-assisted work and what responsibilities supervising engineers bear when trainees use AI tools. On structural efficiency, AI offers genuine potential to optimise designs and reduce material use, but only if implemented with appropriate oversight and verification. On technical competence and research, AI challenges us to define what knowledge and judgment engineers must retain even as tools become more capable and what research is needed to understand AI's impact on practice.
Our framework is designed to support these institutional priorities rather than operate in parallel to them.
Our Three-Step Framework
In collaboration with Alva Markelius, an AI ethics researcher at Cambridge's Affective Intelligence and Robotics Laboratory, we are developing a framework that moves from aspiration to implementation in three stages.
Step One: Desiderata
The first step asks what we actually want from AI in structural engineering. This is not about constraints or guardrails but about vision. In a utopian scenario, what would AI contribute to the field?
Our manifesto offers a starting point. We believe buildings should be safe, good for occupants and responsible to the future. We believe structural engineers should be creative, masterful and accountable. We believe the built environment should serve humanity's fundamental need for shelter.
These values align with the IStructE vision of a safe and sustainable built environment. They inform what we want AI to do: help engineers deliver better buildings faster, without compromising safety or eroding the judgment that makes engineering trustworthy.
But our perspective is limited. We are building this framework through consultation with practising engineers, professional bodies including the Institution of Structural Engineers, and researchers working at the intersection of AI ethics and embodied systems. The desiderata must reflect the field's collective aspirations, not just our own.
Step Two: Principles
From desiderata, we derive principles. These are established concepts from AI ethics literature, grounded in the specific context of structural engineering.
The usual suspects appear here: fairness, transparency and explainability. But we are also drawing on broader principles that are often overlooked. Data justice asks who benefits from the data being collected and who might be harmed. Environmental impact considers the carbon footprint of AI systems and the infrastructure that supports them, a consideration that resonates with the profession's growing focus on sustainability. Accountability addresses who bears responsibility when AI-assisted design goes wrong, a question central to Brian Uy's focus on registration and supervision.
The key is connecting these principles back to the desiderata. We do not adopt transparency because it is fashionable. We adopt it because it serves what we actually want: engineers who remain in control, buildings that are safe, decisions that can be traced and verified.
Step Three: Implementation
Principles without practice remain academic. The third step translates principles into concrete methods that structural engineering firms can actually use.
This is where participatory design comes in, involving engineers as co-designers of the tools they will use. This is where we address questions like how AI outputs should be presented so engineers can verify them, what audit trails are needed for regulatory compliance and how we prevent over-reliance on AI among junior engineers.
The implementation stage must also address Brian Uy's theme of technical competence. AI tools should reinforce engineering understanding, not replace it. Junior engineers growing up with AI must still develop the intuition and judgment that characterise competent practice. Our framework includes guidance on designing AI tools that educate as they assist, showing working rather than just answers and prompting verification rather than blind acceptance.
We are gathering implementation insights through our design partnerships with UK structural engineering firms. Every workflow we discover, every piece of product feedback and every concern raised about AI in practice feeds into this part of the framework.
Beyond Compliance
A note on legal requirements: compliance is not sufficient.
The EU AI Act, emerging UK regulations, and professional standards from engineering institutions set a floor rather than a ceiling. Something can be legal and still be irresponsible. A tool can meet every regulatory requirement and still erode engineering judgment, concentrate power inappropriately or create dependencies that harm the profession.
The IStructE mission speaks of raising professional standards, not merely meeting them. Our framework is explicitly ambitious in the same spirit. We want to go beyond what is required to what is genuinely good. This is not naïve idealism but a recognition that engineers trust tools built by people who understand their responsibilities. Firms adopt technology from companies that share their values. In a field where mistakes cost lives, building ethically is the only sustainable approach.
Grounding the Framework in Evidence
We are not developing this framework in isolation. In the coming months, we will be running focus groups with structural engineers and AI ethics researchers to test and refine our thinking.
This matters for two reasons. First, it ensures the framework reflects diverse perspectives rather than just our assumptions. Engineers from different backgrounds, working in different contexts, will surface considerations we have not thought of. AI ethics researchers will challenge us to be more rigorous, more comprehensive, and more honest about limitations.
Second, it gives the framework methodological weight. A white paper based on systematic evidence collection carries more authority than one based on three people thinking hard in a room. If we are going to propose industry-wide guidance, we need to demonstrate that the guidance emerged from the industry itself.
This approach aligns with Brian Uy's emphasis on research and the IStructE commitment to sharing knowledge. We intend to publish our findings openly so that the profession can build on them.
An Invitation to Participate
We are opening a consultation on this framework. If you work in structural engineering and have views on how AI should be implemented in your workflows, we want to hear from you. If you are an AI ethics researcher with expertise relevant to safety-critical industries, we would welcome your contribution. If you are at a professional body thinking about how standards and chartership requirements should evolve, your perspective is essential.
The IStructE vision calls for an engaged global community that collaborates and innovates. This framework is an opportunity to do exactly that, to shape the future of AI in structural engineering together rather than letting it be shaped for us.
The decisions made now about AI in structural engineering will shape the field for decades. We would rather build this framework together than impose it from outside.
What Comes Next
Over the coming months, we will publish more detailed thinking on each component of the framework. We will share what we learn from focus groups and consultations. We will release practical guidance that firms can use immediately.
This blog post is a starting point rather than an endpoint. The framework will evolve as we learn more, as the technology develops, and as the industry's understanding deepens.
But it starts with desiderata. It starts with asking what we actually want.
Bite Engineering builds AI-powered workflow automation for structural engineers. We are currently working with design partners across the UK to develop tools that save time without compromising safety. If you are interested in contributing to our AI ethics framework consultation, get in touch at [email protected].
This post was informed by ongoing collaboration with Alva Markelius at Cambridge's Affective Intelligence and Robotics Laboratory. We are grateful for perspectives that challenge and refine our thinking.

