Structural Design is Critical

When AI makes a mistake recommending a film, you waste two hours. When AI makes a mistake in structural engineering, buildings fail and people get hurt. This distinction matters. As AI tools become more capable and more present in engineering workflows, we need to think carefully about how they're used, not to resist progress, but to ensure it happens responsibly.

At Bite, we're building AI-powered automation for structural design. That means we think about safety constantly. This post outlines our perspective on using AI safely in structural design: not as a set of rules, but as the start of an industry-wide conversation.

Three Pillars of AI Ethics

As we learnt from a discussion with Alva Markelius, a leading AI ethics researcher at Cambridge's Affective Intelligence and Robotics Laboratory (AFAR), AI ethics isn't just philosophy, just policy, or just engineering. It requires all three working together.

The philosophical pillar asks what principles should guide how we build and deploy AI in safety-critical contexts, and what we owe to the people affected by these systems. The legal and governance pillar addresses who is accountable when things go wrong, what regulations and professional standards should apply, and how bodies like the Institution of Structural Engineers should engage with AI ethics. The technical pillar concerns how we actually build systems that embody these principles, what guardrails constrain harmful outputs, and how we design for transparency and oversight.

The mistake many make is focusing on only one pillar. Technical solutions alone can't solve problems that are fundamentally social or structural. Philosophical frameworks without practical implementation remain academic exercises. Policy without technical grounding produces regulation that doesn't match reality. Effective AI ethics in structural engineering requires all three, integrated rather than siloed.

Human-in-the-Loop as a Design Principle

AI in structural engineering should augment engineers, not replace them. The engineer remains responsible for every decision, every calculation, every drawing that leaves the office. AI is a tool, like analysis software or a calculator, that helps engineers work faster and make fewer errors.

This isn't a limitation. It's a design principle.

When we build features at Bite, we ask whether each one keeps the engineer in control. If the answer is no, we redesign it. Automation without oversight isn't efficiency; it's a transfer of risk to people who didn't consent to it.

There's a concept from design theory called participatory design: the idea that people affected by a technology should have agency in shaping how it's built. This approach originated in Scandinavian workplaces in the 1970s and has since become central to AI ethics, particularly in contexts involving vulnerable or affected communities. For structural engineering AI, participatory design means engineers aren't just end users but co-designers. The people who carry liability for buildings should shape the tools they use to design them.

This is why design partnerships matter to us. We're not building in isolation and then selling to an industry we don't understand. We're building with practising engineers, incorporating their feedback at every stage, learning how they actually work rather than how we imagine they work. Participatory design is human-in-the-loop applied to the development process itself.

Where AI Helps and Where It Doesn't

AI excels at searching large volumes of documents quickly, identifying patterns across project files, flagging potential inconsistencies between models and drawings, summarising information from multiple sources, and tracking what changed between revisions.

AI is not good at making engineering judgments, understanding context that isn't in the data, knowing when something feels wrong, or taking responsibility for outcomes.

The value of AI in structural engineering lies in handling the tedious, error-prone, time-consuming tasks that drain engineers' capacity, so they can spend more time on the work that actually requires engineering judgment.

The Hallucination Problem

Large language models can generate plausible-sounding nonsense. In casual conversation, this is annoying. In structural engineering, it's dangerous.

Our approach is that AI should say it doesn't know rather than guess. When Bite's Company Intelligence searches project files, it returns answers based only on what it finds, with citations. If the information isn't there, it says so. No fabrication, no gap-filling, no confident wrong answers.

This requires deliberate technical choices. We constrain our models to work only with verified project data. We surface the sources of every answer. We make it easy for engineers to verify what the AI returns. Trust is built through transparency, not confidence.
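To make the idea concrete, here is a minimal sketch of a retrieval-grounded answering step that refuses to answer when no relevant source is found, rather than guessing. All names here (`GroundedAnswer`, `answer_from_project_files`, the document fields) are hypothetical illustrations, not Bite's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    text: str
    citations: list  # names of the source documents supporting the answer

def answer_from_project_files(question, documents, is_relevant):
    """Answer only from retrieved documents; refuse if nothing relevant is found."""
    sources = [doc for doc in documents if is_relevant(question, doc)]
    if not sources:
        # No verified source: say so instead of fabricating an answer.
        return GroundedAnswer(
            text="This information is not in the project files.",
            citations=[],
        )
    summary = "; ".join(doc["snippet"] for doc in sources)
    return GroundedAnswer(text=summary, citations=[doc["name"] for doc in sources])

# Toy usage: a crude keyword matcher stands in for real retrieval.
docs = [
    {"name": "GA-101.pdf", "snippet": "Level 2 slab is 250 mm RC", "topic": "slab"},
    {"name": "calcs-B3.pdf", "snippet": "Beam B3 is 406x178 UB 54", "topic": "beam"},
]
match = lambda q, d: d["topic"] in q

print(answer_from_project_files("what is the slab thickness", docs, match).citations)
# ['GA-101.pdf']
print(answer_from_project_files("what is the wind load", docs, match).text)
# This information is not in the project files.
```

The point of the sketch is the early return: the refusal path is designed in, not bolted on, and every answer carries the citations an engineer needs to verify it.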

Liability and Accountability

Who is responsible when AI-assisted design goes wrong? Our view is that the engineer is responsible, always. This isn't about shifting blame. It's about maintaining the professional accountability that makes engineering trustworthy. Engineers carry professional indemnity insurance, maintain chartership and stake their reputations on their work. That responsibility doesn't diminish because they used a tool.

What changes is the tool-maker's obligation. We have a duty to build tools that support good engineering practice, not undermine it. That means clear audit trails showing what the AI did and why, easy override and correction mechanisms, no automation of safety-critical decisions without explicit human approval, and honest communication about what the tool can and cannot do.
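As an illustration of the first of those obligations, an audit trail can be as simple as an append-only log recording what the tool did, on what inputs, and whether a named engineer approved it. This is a sketch under assumed field names, not Bite's actual schema.

```python
import json
from datetime import datetime, timezone

def audit_record(action, inputs, output, approved_by=None):
    """Build one append-only audit entry for an AI-assisted action.

    All field names are illustrative assumptions."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,            # what the AI did
        "inputs": inputs,            # what it acted on
        "output": output,            # what it produced
        "approved_by": approved_by,  # stays None until an engineer signs off
        "approved": approved_by is not None,
    }

log = []
log.append(audit_record(
    action="flag_inconsistency",
    inputs=["schedule-rev-C.xlsx", "GA-102-rev-C.pdf"],
    output="Beam B3 depth differs between schedule and drawing",
))
# Safety-critical actions remain unapproved until a human reviews them.
print(json.dumps(log[0], indent=2))
```

The design choice worth noting is that approval is explicit and attributed: the record defaults to unapproved, so a missing sign-off is visible rather than silently assumed.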

Professional bodies like the Institution of Structural Engineers and the Institution of Civil Engineers will need to engage with these questions. How should chartership requirements evolve? What competencies should engineers demonstrate regarding AI tools? How do existing codes of conduct apply to AI-assisted design? These aren't questions we can answer alone. They require industry-wide conversation.

The Over-Reliance Risk

The more useful a tool becomes, the more people depend on it. Dependency creates risk when the tool fails or misleads.

We've heard this concern from experienced engineers: younger engineers who grow up with AI might never develop the intuition to know when something is wrong. If the computer says it's fine, it must be fine. This happened with finite element analysis. It's happening with AI.

Our response isn't to make tools less useful. It's to design them in ways that reinforce engineering thinking rather than replace it. That means showing working rather than just answers, prompting engineers to verify rather than just accept, building tools that educate as they assist, and never presenting AI outputs as authoritative. The goal is engineers who use AI effectively, not engineers who can't function without it.

Sustainability and Broader Infrastructure

There's a tendency to focus narrowly on the AI model itself, its capabilities, its outputs, its errors. But AI systems are embedded in much larger infrastructures involving the raw materials for semiconductors, the energy consumption of data centres, and the human labour involved in training and refining models.

Structural engineers already think in these terms. When specifying concrete, they consider where it comes from, what admixtures it contains, how it affects global supply chains. The same systems thinking should apply to AI tools. We don't have all the answers here, but we believe transparency about these broader impacts is part of responsible AI development.

Balancing Caution and Usefulness

There's a tension in AI ethics discourse between those focused on catastrophic, long-term risks and those concerned with immediate, practical harms. Both matter.

In structural engineering, the immediate risks are concrete. Over-reliance, hallucinated outputs, erosion of engineering judgment, unclear liability. These aren't speculative. They're happening now, in workflows across the industry. Our focus is on these near-term risks because they're what we can actually address through how we build our tools. That doesn't mean longer-term questions are unimportant. It means we believe in solving the problems in front of us while remaining attentive to where the technology is heading.

There's also a commercial pressure to make AI do more, faster, with less human involvement. Efficiency sells. Oversight doesn't. We reject this framing. In structural engineering, the cost of failure is measured in lives, not just money. The efficiency gains from AI are real, but they have to be captured in the right places. Faster document search, yes. Automated beam sizing without review, no. The line isn't always obvious. That's why we engage with practising engineers, not just technologists.

What We're Building Toward

Our vision for AI in structural engineering develops in stages.

In the first stage, we're building tools that handle information management: finding files, tracking changes, and maintaining context across complex projects. Engineers stay fully in control of all design decisions. In the second stage, we're building AI that can flag potential issues such as inconsistencies between documents, calculations that don't match drawings, and changes that might have downstream impacts. Engineers review and decide. In the third stage, we see assistants that can execute well-defined tasks under supervision, updating a schedule when a beam size changes or generating a first-pass calculation for review, always with human approval and always with full traceability.
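A toy sketch of the kind of check the second stage describes: compare member sizes in a schedule against those extracted from a drawing, and flag disagreements for the engineer rather than fixing them automatically. The data and member names are invented for illustration.

```python
def flag_mismatches(schedule, drawing):
    """Flag, don't fix: report disagreements and leave the decision to the engineer."""
    flags = []
    for member, size in schedule.items():
        drawn = drawing.get(member)
        if drawn is None:
            flags.append(f"{member}: in schedule but not found on drawing")
        elif drawn != size:
            flags.append(f"{member}: schedule says {size}, drawing says {drawn}")
    return flags

# Invented example data: UK universal beam designations.
schedule = {"B1": "406x178 UB 54", "B2": "305x165 UB 40", "B3": "254x146 UB 31"}
drawing = {"B1": "406x178 UB 54", "B2": "305x165 UB 46"}

for flag in flag_mismatches(schedule, drawing):
    print(flag)
# B2: schedule says 305x165 UB 40, drawing says 305x165 UB 46
# B3: in schedule but not found on drawing
```

The function never writes back to either source; its only output is a list of flags, which keeps the review-and-decide step with the engineer.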

At every stage, the engineer remains the engineer. AI handles the busywork so humans can focus on the judgment calls that actually matter.

An Invitation

This isn't a solved problem. We don't have all the answers.

We're publishing this to start a conversation, but also to announce something more substantive. We're developing a white paper on AI safety and ethics in structural engineering, and we're opening a consultation to shape it.

We're writing this paper in collaboration with industry leaders from structural engineering practices across the UK, alongside Alva Markelius and other researchers working at the intersection of AI ethics and embodied systems. Alva is a PhD candidate at Cambridge's Affective Intelligence and Robotics Laboratory, recipient of the Top 100 Brilliant Women in AI Ethics award, and founder of EthicAI. She brings academic rigour in AI ethics that complements the practical, industry-facing perspective we're developing with our design partners.

We want this paper to reflect the views of practising engineers, not just technologists and academics. If you work in structural engineering and have thoughts on how AI should be implemented in your workflows, we want to hear from you. If you lead a firm thinking through these questions, we'd welcome your contribution. If you're at a professional body considering how chartership and codes of conduct should evolve, your perspective is essential.

The decisions made now about how AI is used in structural engineering will shape the industry for decades. We'd rather get it right than get it first, and getting it right means building this framework together.

If you'd like to contribute to the white paper consultation, get in touch. We'll be gathering input over the coming months and publishing later this year.

Bite Engineering builds AI-powered workflow automation for structural engineers. We're currently working with design partners across the UK to develop tools that save time without compromising safety. If you're interested in learning more, read our manifesto or get in touch at [email protected].

This post was informed by an educational and engaging discussion with Alva Markelius. We're grateful for perspectives that challenge and refine our thinking.
