
building the “mechanical knight”

  • michellekuepper
  • 4 hours ago
  • 6 min read

The dream of building human-like machines is older than modern science.


In the late 15th century, Leonardo da Vinci sketched what is often considered the first humanoid robot, the “mechanical knight”. Hidden inside German-Italian plate armour and powered by pulleys, cables, and gears, it was designed to sit, stand, and move its arms, with defence in mind. Looking back, this reads like the first attempt to give a human-shaped body a programmable “inside”.

Reconstruction of Leonardo’s mechanical knight. Photo: Erick Möller. Leonardo da Vinci. Mensch- Erfinder – Genie exhibit, Berlin 2005

Five hundred years later, the armour is gone and the insides have been replaced by silicon, sensors, and neural networks, but the underlying ambition hasn’t changed. Fuelled by the advent of AI and breakthroughs in high-performance compute, simulation tooling, and the hardware itself, this mechatronic dream is finally approaching reality.


We could not be more excited to back Flexion’s $57M round and double down on Nikita, David, Julian, Fabian and the entire team building the brain and the missing intelligence layer for the next era of robots. We believe that this will become one of the most central pieces of technology and infrastructure of this century. 

Team Flexion

Closing the blue-collar labour gap


Roughly half of the global workforce is employed in blue-collar roles, representing more than $50 trillion in annual labour expenditure, with a significant share concentrated in manual and operations-intensive work. The pool of workers available for these roles is shrinking: working-age populations are projected to decline by around 8% by 2060, with some countries facing drops of over 30%, according to OECD estimates.


In parallel, middle-skill routine jobs have been eroding for decades as manufacturing and clerical tasks were automated or offshored. Combined with persistently low birth rates and several major European economies now in “ultra-low fertility” territory, this points to enduring labour shortages in precisely the physical roles that underpin the real economy and the reindustrialisation of the continent.


The physical world is the next productivity frontier


By contrast, AI has already begun to transform knowledge work. Large language models can draft emails, write code, and orchestrate complex workflows with humans in the loop, while generative tools produce images, videos, and full campaigns on demand. The physical world, however, remains comparatively under-automated: most blue-collar occupations have seen little benefit from recent AI advances, even as many countries face shrinking workforces, ageing populations, and a worsening worker-to-retiree ratio. Together, these dynamics create strong structural tailwinds for a new era of robotics, in which embodied systems evolve from point solutions into critical infrastructure for the global labour market.


The previous wave of robotics

Industrial robots already exist. They weld, paint, and assemble at scale. But they live in carefully orchestrated environments: fenced-off zones with pre-defined motion paths, tightly controlled inputs, and little variation. For most of the industrial era, robots were designed on the assumption that everything important could be specified in advance: part pose, fixture location, motion path, cycle time. When anything changed, from tooling to product variants to conveyor timing, engineers went back into the code, edited waypoints, re-tuned controllers, and revalidated safety.

This led to deeply vertical stacks. Each cell was its own ecosystem: proprietary hardware, firmware, fieldbus, perception code, PLC logic, motion planning, and HMI, all custom-wired around a narrow workflow. Because there was no common abstraction layer, software ended up hard bound to a specific robot body and layout. Porting the same application from one plant to another often meant rebuilding it almost from scratch. 

Extending this paradigm to humanoids, where environments are cluttered, contacts are complex, and tasks change daily, would require hand-engineering an explosion of edge cases. The result has been impressive demos and highly optimised islands of automation, but no shared “robot brain” that can move across bodies, workflows, or customers. That is the gap the new generation of robotics, and Flexion in particular, is aiming to close. 

A brain for every robot

The shift from hard-coded robots to general-purpose autonomy is enabled by three intertwined breakthroughs on the software layer: large multimodal models, reinforcement learning at scale, and high-fidelity simulation.

First, LLM/VLM agents provide a generic decision layer. Recent work (SayCan, RT-2, OpenVLA) showed that language models can decompose tasks into subgoals, reason over text and perception, and choose which skills to invoke rather than emitting raw force. Flexion builds directly on this idea: an LLM/VLM agent sits at the top of the stack, parsing instructions, choosing tools, and encoding everyday conventions through prompting and fine-tuning, rather than hand-written state machines.
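To make the idea of a generic decision layer concrete, here is a minimal sketch of an agent that maps a natural-language instruction to a sequence of named skills instead of emitting raw motor commands. The skill names and the `plan()` stub are hypothetical; in a real stack, the plan would come from an LLM/VLM call rather than a lookup table.

```python
# Illustrative only: an agent-style decision layer that decomposes an
# instruction into subgoals and dispatches to reusable skills.

SKILLS = {
    "walk_to": lambda target: f"walking to {target}",
    "grasp": lambda obj: f"grasping {obj}",
    "place": lambda obj, target: f"placing {obj} on {target}",
}

def plan(instruction: str) -> list[tuple]:
    """Stand-in for a language model: break an instruction into skill calls."""
    if "move the box" in instruction:
        return [("walk_to", ("shelf",)),
                ("grasp", ("box",)),
                ("place", ("box", "pallet"))]
    return []

def execute(instruction: str) -> list[str]:
    # The agent chooses *which* skills to invoke; the skills handle motion.
    return [SKILLS[name](*args) for name, args in plan(instruction)]

print(execute("move the box from the shelf to the pallet"))
```

The point of the pattern is the separation of concerns: swapping the stubbed `plan()` for a model call changes the reasoning, while the skill library and dispatch loop stay the same.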

Second, reinforcement learning (RL) has matured into a practical engine for motor intelligence. Instead of replaying demonstrations, RL learns a policy: a mapping from observations to actions that adapts online. A leg does not just execute a precomputed gait; it redistributes forces when friction changes underfoot or the robot is pushed. A hand does not follow a single grasp trajectory; it stabilises objects when contact conditions shift.
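The contrast between a replayed trajectory and a policy can be sketched in a few lines. The gains and observation fields below are illustrative inventions; a learned RL policy plays the same role with a neural network in place of the hand-written rule.

```python
# Open-loop control: the same actions regardless of what happens.
PRECOMPUTED_TORQUES = [0.0, 0.0, 0.0]

def policy(observation: dict) -> float:
    """Closed-loop control: an action computed from the current observation.
    Here, a toy proportional correction toward upright balance."""
    tilt = observation["tilt"]             # radians from upright (assumed sensor)
    push = observation["external_force"]   # sensed disturbance (assumed sensor)
    return -2.0 * tilt - 0.5 * push        # corrective torque; gains are made up

# The same policy responds differently as conditions change:
print(policy({"tilt": 0.1, "external_force": 0.0}))  # mild correction
print(policy({"tilt": 0.1, "external_force": 3.0}))  # stronger correction after a push
```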

However, reinforcement learning needs orders of magnitude more data than any physical robot can experience without destroying hardware. The key is simulation. In richly parameterised virtual worlds, Flexion trains locomotion and manipulation policies entirely in sim, with massive randomisation and perturbations: varying surfaces, external pushes, sensor delays, and friction changes, so controllers learn to maintain balance and recover gracefully from unexpected events.


Naive domain randomisation inflates the space of possible worlds and yields over-cautious behaviours. Flexion instead uses real-to-sim pipelines to calibrate dynamics, contacts, actuation limits, and sensing so that simulators match reality where accuracy matters. Randomisation is then applied selectively to parameters that genuinely vary across deployments (e.g. ground friction, load distribution), training policies on realistic variation rather than arbitrary extremes.
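A minimal sketch of this selective approach, with invented parameter names and ranges: calibrated quantities stay fixed at their real-to-sim values, while only the parameters that genuinely vary across deployments are sampled per training episode.

```python
import random

# Parameters matched to reality via real-to-sim calibration (values invented).
CALIBRATED = {"motor_torque_limit": 40.0, "sensor_delay_ms": 8.0}

def sample_world(rng: random.Random) -> dict:
    """Build one simulated world: calibrated baseline + selective randomisation."""
    world = dict(CALIBRATED)                           # accuracy where it matters
    world["ground_friction"] = rng.uniform(0.4, 1.0)   # genuinely varies per site
    world["payload_kg"] = rng.uniform(0.0, 15.0)       # load distribution varies
    return world

rng = random.Random(0)  # seeded for reproducible training runs
worlds = [sample_world(rng) for _ in range(3)]
```

Training across many such sampled worlds exposes the policy to realistic variation without inflating the space of possible worlds the way blanket randomisation would.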


Finally, generative models extend the training distribution by synthesising plausible variations grounded in real data. Instead of injecting white noise, these models perturb environments and trajectories in physically coherent ways, turning simulation into a source of meaningful diversity and a lens for inspecting failure modes.

Coupled with modern on-board compute (Jetson Orin today, Jetson Thor next) and dense RGB-D sensing, these advances make it viable to run high-frequency control locally while offloading heavy reasoning to the cloud, a hardware/software co-design that simply wasn’t available to the last generation of robots.

The humanoid in the room…


When thinking about the right form factor for embodied AI, our built environment quietly assumes a human body. Everything from shelf heights and stair geometry to pallet jacks and power tools was designed around two legs, two arms and a certain reach. If you want robots to step into this world without rebuilding every warehouse, hospital and factory, you need machines that can handle the same class of problems a person can: moving through cluttered 3D space, using both hands, and making sense of messy, changing situations.

That’s why humanoids are such a big deal. They bundle locomotion, dexterity, perception and decision-making into a single, tightly coupled system. The same “brain” must keep balance on a ramp, line up a grasp on a deformable object, and plan the next few seconds of motion without freezing. Traditional automation typically demands rebuilding the environment around the machine incl. new layouts, fixtures, conveyors, and custom integrations. Humanoids invert this equation: they arrive compatible with the world as it is.

But the goal is not to mimic the exact human form factor; it is to inherit compatibility with human environments by default, then go beyond it where it helps, such as adding extra degrees of freedom, non-human sensor ranges, or hybrid modes of movement.


Flexion is betting that solving autonomy for this hardest setting pays off everywhere else. If you can coordinate a full humanoid body, simpler form factors become straightforward. Starting with humanoids forces the platform to master the fundamentals of embodied intelligence that every future robot will end up needing.

Flexion robot in the wild

redalpine + Flexion 

For the longest time, hardware remained the centre of gravity in robotics. Today, we’re shifting to a world where the most valuable part of a robot is invisible: the software brain that coordinates across perception, planning, and control.

Our conviction at redalpine is that Flexion is on its way to becoming that shared brain for humanoid robots and beyond. A clear understanding that the hardest part of robotics is not making things move but making them generalise, a technology stack that leverages the best of modern AI without collapsing everything into fragile end-to-end monoliths, and a business model that scales with each deployment, turning every robot’s experience into shared progress: this is what gave us the confidence to back Flexion from the very beginning by investing in their $57M funding round alongside DST Global Partners, Ventures, NVentures (NVIDIA’s venture capital arm), Prosus Ventures, and Moonfire.

If Leonardo’s mechanical knight was the first sketch of a humanoid body, Flexion is building the brain that finally makes such bodies economically useful at scale.

When people look back at the history of embodied AI, the story will not just be about shiny metal frames, but about the moment robots began to share a common intelligence layer, downloading their abilities rather than hand-coding them task by task.


That is the world we’re backing Flexion to build.
