Defining the outcomes we are building toward, before principles or implementation

In January, we published our thinking on why desiderata matter. We argued that most AI ethics frameworks start in the wrong place. They begin with principles like fairness and transparency, or with compliance requirements, and skip a crucial step: articulating what we actually want AI to achieve in a specific context.

This post defines our desiderata for AI in structural engineering. These are the outcomes we want in an ideal world, derived from our manifesto and tested through ongoing collaboration with Alva Markelius at Cambridge's Affective Intelligence and Robotics Laboratory.

We publish them not as finished answers but as a starting point for consultation. The focus groups we are running with structural engineers and AI ethics researchers will challenge, refine and extend this thinking. But we believe it is important to articulate where we stand before asking others to help us improve.

What Desiderata Are

Desiderata are not principles. Principles tell you how to act. Desiderata tell you what you are acting toward.

Desiderata are not constraints. Constraints tell you what to avoid. Desiderata tell you what to pursue.

Desiderata are not features. Features describe what a product does. Desiderata describe why it matters.

The Latin root means "things desired." Before we can decide whether an AI system is fair or transparent or accountable, we need to know what it would mean for that system to succeed. What are we trying to achieve? What would a good outcome look like?

In structural engineering, that question has not been systematically asked. We are attempting to answer it.

Our Desiderata

We derive these from our manifesto, which articulates what we believe about buildings, about structural engineers and about the built environment. Each desideratum connects an aspiration from the manifesto to a specific outcome we want AI to enable.

1. Buildings that protect life

Our manifesto states that safety is non-negotiable. Buildings must stand up. This is not a feature; it is the baseline.

The desideratum: AI in structural engineering should increase the probability that buildings perform safely under both expected and unexpected conditions. It should never create pathways to structural failure that would not otherwise exist.

This means AI tools must reinforce rather than erode the engineer's capacity to ensure safety. It means audit trails that allow decisions to be traced and verified. It means human judgment remaining in the loop for all safety-critical determinations.

A building that collapses kills people. No efficiency gain, no time saving, no commercial benefit justifies increasing that risk. This desideratum is absolute.

2. Engineers who remain in control

Our manifesto states that the engineer signs the drawings and the engineer must remain in control.

The desideratum: AI should expand what structural engineers can achieve without diminishing their agency, judgment or professional responsibility. Engineers should feel more capable with AI tools, not more dependent on them.

This means AI outputs must be verifiable. Engineers must be able to understand why a recommendation was made, not merely that it was made. The black box is unacceptable in a field where professional liability attaches to individuals.

It also means AI should not create learned helplessness. Junior engineers must still develop the intuition and judgment that characterise competent practice. AI that does the thinking for engineers produces engineers who cannot think. That is not progress.

3. Time returned to judgment

Our manifesto states that structural engineers should be creative, not just compliant. They should be empowered to do more of the work that requires judgment, not buried in coordination tasks.

The desideratum: AI should reduce time spent on repetitive, low-judgment tasks so that engineers can spend more time on work that requires creativity, mastery and professional reasoning.

Today, structural engineers lose hours each week to coordination overhead, to hunting for files, to propagating changes across documents, to tasks that require attention but not expertise. This is waste. It is time that could be spent on design decisions that actually matter.

AI should return that time. But it should return it to judgment, not to more tasks. The goal is not engineers who process more work but engineers who do better work.

4. Knowledge that persists

Our manifesto acknowledges that buildings outlast us. A building may stand for decades or centuries, occupied by generations of people who had no say in its design.

The desideratum: AI should make institutional knowledge accessible and durable, so that the reasoning behind design decisions can be understood by those who inherit them.

Consider what happens when a building needs repair decades after construction. A prominent building in central London developed significant cracking in its load-bearing stone facade. Engineers brought in to assess the damage reviewed the original drawings and proposed repair strategies based on what they found. But the drawings did not tell the whole story.

What the drawings did not show was that during original construction, a cost-saving decision on one floor had altered how the stone connected to the concrete structure. This deviation from the design created stress concentrations that caused the cracking. The engineers proposing repairs had no way of knowing this. The information existed only in the memory of those who had been present decades earlier.

The repair strategy only became viable when a retired engineer who had worked on the original construction was contacted. He remembered the deviation. He still had his notebook from the site visits. Without those memories, without that notebook, the building's history would have been lost and the repair approach would have been based on incomplete understanding.

This is not an isolated case. It is the normal state of institutional knowledge in structural engineering. Critical information lives in the heads of senior engineers, in personal notebooks, in files that cannot be found. When those engineers retire or move on, the knowledge often goes with them.

This is why we are building Bite. The first version of our product is an intelligent file management system that makes a firm's collective knowledge accessible through natural language. Rather than navigating folder hierarchies or remembering project codes, engineers describe what they need and Bite surfaces relevant files from across the firm's history. It indexes not just filenames but the contextual information in reports and documents that explains what a project involved, what decisions were made and why.
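To make the idea of indexing content rather than filenames concrete, here is a deliberately simplified sketch. It scores a firm's documents against a natural-language query using TF-IDF weighting, a toy stand-in for the retrieval a production system like Bite would plausibly use; the class and filenames are illustrative inventions, not Bite's actual implementation.

```python
import re
from collections import Counter
from math import log, sqrt

def tokenize(text):
    """Lowercase word tokens; a stand-in for real text preprocessing."""
    return re.findall(r"[a-z]+", text.lower())

class FileIndex:
    """Toy index over file *content*, not just filenames.

    Documents are ranked against a plain-language query with TF-IDF
    scoring, so that a query like "stone facade deviation" can surface
    a report that never uses those words in its filename.
    """

    def __init__(self):
        self.docs = {}             # filename -> token counts
        self.doc_freq = Counter()  # token -> number of docs containing it

    def add(self, filename, text):
        counts = Counter(tokenize(text))
        self.docs[filename] = counts
        self.doc_freq.update(counts.keys())

    def _weight(self, token, counts):
        # TF-IDF: tokens that are rare across the corpus count more.
        df = self.doc_freq.get(token, 0)
        if df == 0:
            return 0.0
        return counts[token] * log(len(self.docs) / df + 1.0)

    def search(self, query, top_k=3):
        q_counts = Counter(tokenize(query))
        results = []
        for name, counts in self.docs.items():
            score = sum(self._weight(t, counts) * q
                        for t, q in q_counts.items() if t in counts)
            norm = sqrt(sum(self._weight(t, counts) ** 2 for t in counts))
            if norm and score:
                results.append((name, score / norm))
        return [name for name, _ in sorted(results, key=lambda r: -r[1])[:top_k]]
```

A system of real value would go well beyond this, indexing the reasoning captured in reports and correspondence, but the shape of the problem is the same: match what an engineer is asking for against what documents actually say.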

The goal is not better search. The goal is that a graduate engineer starting work on a complex refurbishment has access to the same institutional context that a thirty-year veteran carries in their head. The goal is that when buildings need attention decades from now, the reasoning behind original design decisions is retrievable rather than lost.

5. Buildings that account for their impact

Our manifesto states that sustainability is not optional, but we are honest about our limits. The built environment contributes nearly half of global carbon emissions. We cannot ignore this.

The desideratum: AI should enable structural engineers to understand and minimise the environmental impact of their design decisions, without pretending that technology alone solves the problem.

This means giving engineers visibility into the carbon implications of material choices, structural systems and design alternatives. It means surfacing trade-offs that might otherwise remain hidden. It means supporting better decisions where decisions exist.

It does not mean greenwashing. More buildings built faster is not inherently sustainable. AI that enables speed without accountability for impact is not serving this desideratum. We are honest about the tension and we do not pretend it away.

6. Buildings that serve their occupants

Our manifesto states that buildings are meant to serve humans, not the other way around. Every building should enhance the life of the person inside it.

The desideratum: AI should support the design of buildings that provide genuine shelter, comfort and utility to the people who occupy them.

Structural engineering is one discipline among many that shapes a building. But the structural engineer's decisions affect everything from floor vibration to how perceptible deflection shapes occupants' sense of safety. These are not secondary concerns. They determine whether a building is a place people want to be or a place people endure.

AI should help engineers see these connections. It should surface how structural choices affect occupant experience. It should support buildings that work for the humans inside them, not merely buildings that stand up.

7. A profession that grows stronger

Our manifesto commits to building AI tools that make structural engineers more capable, not less essential.

The desideratum: AI should strengthen the structural engineering profession as a whole, raising standards and expanding capacity without concentrating power or hollowing out expertise.

This means AI should not create winner-take-all dynamics where a few firms with access to sophisticated tools dominate while others fall behind. It means AI should support the development of junior engineers, not replace them. It means the profession should emerge from the AI transition with more capability, not less.

We reject the framing that positions AI as a threat to engineers. AI is a tool. Tools can be used well or badly. Our desideratum is that AI be used in ways that leave the profession stronger than it found it.

The Relationships Between Desiderata

These seven desiderata are not independent. They reinforce each other, but they also create tensions that must be navigated.

Safety and efficiency exist in tension. Our manifesto is explicit: we choose safety. But that choice must be made concrete in specific design decisions. AI should help engineers see where the tension exists and support them in making the right call.

Returning time to judgment only matters if engineers use that time for judgment. AI that saves time but encourages volume over quality does not serve the desiderata. The goal is better work, not more work.

Knowledge that persists is only valuable if it remains accessible to those who need it. AI systems that lock institutional knowledge inside proprietary platforms may preserve knowledge while restricting access. The desideratum requires both preservation and accessibility.

Occupant wellbeing and environmental impact can conflict. A building with larger windows admits more natural light but may require more material and more energy. AI should help engineers navigate these trade-offs, not pretend they do not exist.

A stronger profession requires that AI tools serve engineers broadly, not just early adopters. If AI concentrates advantage among large firms while leaving smaller practices behind, the profession does not grow stronger. It fragments.

These tensions are not problems to be solved. They are realities to be navigated. The desiderata provide orientation. They do not provide easy answers.

How We Will Test These Desiderata

We are not publishing these desiderata as final conclusions. We are publishing them as hypotheses to be tested.

Over the coming months, we will run focus groups with structural engineers and AI ethics researchers. We will ask whether these desiderata capture what matters. We will ask what we have missed. We will ask where our framing is naïve or incomplete.

We expect to be challenged. Engineers may tell us that some desiderata are unrealistic given commercial pressures. Ethicists may tell us that some desiderata are too narrow or too focused on our own interests as a company building AI tools. Both critiques would be valuable.

We will also test these desiderata against our own product decisions. When we face choices about what to build and how to build it, we will ask which option better serves the desiderata. If we find ourselves making decisions that conflict with what we have articulated here, that is information. Either the decision is wrong or the desideratum needs revision.

An Invitation

We are assembling participants for focus groups that will test and refine this thinking. If you are a structural engineer with views on what AI should achieve in your practice, we want to hear from you. If you are an AI ethics researcher with relevant expertise, your contribution would strengthen this work.

We are also interested in responses to this post. Tell us what we got wrong. Tell us what we missed. Tell us where our framing reveals assumptions we have not examined.

The desiderata will evolve. This is version one. We would rather publish something imperfect and improve it through dialogue than wait for perfection that never comes.

Why This Matters

AI is coming to structural engineering. The question is not whether but how.

We believe that how matters enormously. AI implemented carelessly could erode engineering judgment, concentrate power inappropriately and create risks that the profession is not equipped to manage. AI implemented thoughtfully could expand what engineers achieve, preserve knowledge that would otherwise be lost and support buildings that better serve the people who occupy them.

The difference lies in intention. It lies in being clear about what we are building toward before we start building.

That is what desiderata are for. They force us to articulate the destination before we optimise the route.

We have articulated ours. Now we invite the profession to help us get them right.

Bite Engineering builds AI-powered workflow automation for structural engineers. We are currently working with design partners across the UK to develop tools that make engineers more effective without compromising safety or eroding professional judgment.

This post describes ongoing collaboration with Alva Markelius at Cambridge's Affective Intelligence and Robotics Laboratory. The desiderata presented here will be tested and refined through focus groups with structural engineers and AI ethics researchers.

If you want to participate in this research or respond to what we have written, contact us at [email protected].
