How industrial automation evolved - and why it still isn’t finished
- michellekuepper
- Nov 12, 2025
- 9 min read

Industrial automation did not begin with algorithms or robots. Its roots go back to the late 18th century, when James Watt’s 1781 rotary steam engine and Edmund Cartwright’s powered loom replaced human muscle with mechanical strength. Textile output in Britain increased nearly 20-fold between 1780 and 1850 - not because of algorithms, but because energy and motion were finally decoupled from the human body.
A century later, Henry Ford turned manufacturing from craft to system. In 1913, he introduced the moving assembly line at Highland Park. The time to build a Model T dropped from 12.5 hours to 93 minutes. Productivity surged, wages doubled to $5 a day, and unit costs fell by more than 60%.

The next leap came with control theory, formalized by engineers such as Norbert Wiener in the 1940s. In 1969, General Motors installed the first programmable logic controller (PLC) at its Hydra-Matic transmission plant. Machines could now correct themselves in real time - regulate pressure, temperature, torque - instead of blindly executing motion. Before PLCs, factory control systems relied on sprawling relay panels - expensive, rigid, and difficult to modify. PLCs changed that. They offered a reprogrammable, software-based way to control industrial equipment using ladder logic - a language that mirrored the electrical diagrams familiar to technicians. More importantly, they could retain memory through power failures and survive harsh factory environments. What began as simple on/off control gradually evolved to include timers, counters, and synchronization across complex systems.
At the same time, industrial robots entered the factory. General Motors installed the first Unimate robot in 1961. By the 1970s and 1980s, robots became the workhorses of automotive plants - welding, assembling, and handling materials with speed and consistency no human could match. But these systems were expensive and hard to program. Only large manufacturers with deep engineering teams could afford to deploy and maintain them.
The 1990s brought standardization and software. Standards such as IEC 61131-3 formalized the programming languages for PLCs, eventually allowing a PLC program written in Stuttgart to be understood by an engineer in Osaka. Fun fact: The committee behind some of these standardization efforts included not only engineers from traditional manufacturing players like Siemens or Schneider Electric, but also aerospace firms - because the same abstract logic used to control a milling machine could orchestrate hydraulic valves on a fighter jet. Human-Machine Interfaces (HMIs) and Manufacturing Execution Systems (MES) arrived.
By the early 2010s Germany put a name to its industrial ambition: Industry 4.0. It began in 2011 as a Ministry-backed research working group, but it quickly became a doctrine. The premise was straightforward: wire every machine, capture every signal, and treat the factory not as a collection of assets but as a single, thinking system. Behind the branding sat a geopolitical calculation. Europe could not match China’s scale or America’s capital intensity. What it could do was integrate - link the mechanical and digital worlds more tightly than anyone else. Industry 4.0 was not a product but an operating system for German industry, especially its Mittelstand suppliers. It offered a common language for digitizing production and protecting the country’s export base. Soon, the approach spread across Europe and was copied far beyond it.
Three shifts enabled this decade of progress.
First, sensors became almost free. A MEMS accelerometer that cost five dollars in 2005 could be bought for less than fifty cents by 2015. Suddenly it made economic sense to measure everything - every gearbox, motor, and valve. Siemens built MindSphere, GE built Predix, and both tried to turn this stream of raw telemetry into “digital twins” - live software models of physical machines.
Second, data could finally move at the speed of the factory floor. In the early 2000s, most plants recorded a single data point per second. By 2020, cloud and edge systems could process millions of data points per second across networks. That made predictive maintenance possible. Rolls-Royce’s Power by the Hour became the template: jet engines streamed flight data via onboard sensors, and maintenance shifted from a reactive cost to a service built on continuous insight.
Third, machine learning began to handle real industrial work - maintenance, scheduling, quality control. In 2014 FANUC and Cisco launched Zero Downtime (ZDT), which analyzed robot torque patterns to detect failures weeks in advance. By 2019 more than 27,000 robots were connected to the system, sending 1.6 billion data points every day.
Hardware evolved too. Collaborative robots - cobots - moved out of steel cages. Universal Robots’ UR5, released in 2008 at under $35,000, could be taught by hand and safely share space with people. Its torque sensors and compliant motion made it the first robot designed to work with humans rather than around them. It was the first real break in the logic of Ford’s assembly line.

By the close of the 2010s, the modern factory had crossed a threshold. It was no longer just a place that produced goods, but a living network of sensors and software - constantly measuring, transmitting, and learning. Yet the promise of Industry 4.0 came with an irony. Every major vendor built its own “platform,” but few of them could talk to one another. The result was a flood of data without true interoperability. Connectivity was achieved, coordination was not.
As the 2020s began, the frontier moved from linking machines to aligning their intelligence. But progress remained uneven. Automotive and semiconductor plants reached near-perfect synchronization - production lines running with the precision of code - while tens of thousands of smaller factories across Europe still operated on manual routines and legacy control systems. For all the talk about Industry 4.0, the reality inside most factories - especially small and mid-sized ones - is far more grounded. Many have not automated, not because they don’t understand the future, but because they can’t make the present work.
How factory programming actually works
Walk into any modern factory and you’ll find that beneath the machinery and robotics, production still runs on decades-old logic. At the core are programmable logic controllers, or PLCs - the silent infrastructure of industrial automation. They are programmed mostly in ladder logic, a graphical language that mimics electrical relay circuits, or in other standards like Structured Text and Function Block Diagrams. Siemens has TIA Portal, Rockwell offers Studio 5000. These environments are functional, but highly proprietary. They control everything from the motion of a conveyor belt to the PID loops behind chemical processing.
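To make that concrete, here is a rough sketch - in Python, purely for readability - of the scan-cycle pattern a PLC executes: read inputs, evaluate the logic, write outputs, repeat every few milliseconds. Real controllers express this in ladder logic or Structured Text inside tools like TIA Portal or Studio 5000; the signal names below are invented for illustration.

```python
import time

# Hypothetical I/O map - on a real PLC these are physical input/output
# addresses configured in the vendor's engineering tool.
inputs = {"start_button": False, "guard_closed": True, "overload_fault": False}
outputs = {"conveyor_motor": False, "fault_lamp": False}

def read_inputs():
    # On real hardware this snapshots the physical input terminals.
    return dict(inputs)

def write_outputs(out):
    # On real hardware this drives contactors, valves, and indicator lamps.
    outputs.update(out)

def evaluate(io):
    # The "logic" step: the same interlock a rung of ladder logic would express.
    run = io["start_button"] and io["guard_closed"] and not io["overload_fault"]
    return {"conveyor_motor": run, "fault_lamp": io["overload_fault"]}

# The scan cycle: read -> evaluate -> write, typically every 1-10 ms.
while True:
    io_image = read_inputs()
    write_outputs(evaluate(io_image))
    time.sleep(0.01)  # a real PLC guarantees this cycle time deterministically
```

The point is the model: everything reduces to a fixed, deterministic loop over named inputs and outputs - which is what makes PLCs reliable, and also what makes them tedious to reprogram.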

Robots operate in a separate world. ABB uses RAPID. KUKA uses KRL. Fanuc uses KAREL. Each comes with its own syntax, semantics, and development environment. When a robot joins a production line, the PLC doesn’t “talk” to it in any intuitive sense - it sends digital signals or calls subroutines. Engineers then stitch logic across two different worlds: deterministic PLC timing on one side, kinematic control systems on the other. The more complex the line, the more brittle the integration.
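That stitching usually comes down to a handshake over a few shared bits or fieldbus registers. Below is a minimal sketch of the pattern, written in Python only for readability - the signal names are invented, and on real hardware the robot side would be RAPID, KRL, or KAREL rather than a Python thread.

```python
import threading
import time

# Hypothetical shared signals - on a real line these are digital I/O points or
# fieldbus registers, not Python variables.
signals = {"plc_start_job": False, "robot_done": False}

def pick_and_place():
    time.sleep(1.0)  # stand-in for vendor-specific motion code

def plc_side():
    """PLC sequencing: raise the start bit, then wait for the robot's done bit."""
    signals["plc_start_job"] = True
    while not signals["robot_done"]:
        time.sleep(0.01)               # the PLC step stays blocked until done
    signals["plc_start_job"] = False
    print("PLC: robot cycle complete, advancing conveyor")

def robot_side():
    """Robot program: poll the start bit, run the motion, report completion."""
    while not signals["plc_start_job"]:
        time.sleep(0.01)
    pick_and_place()
    signals["robot_done"] = True

threading.Thread(target=robot_side).start()
plc_side()
```

Multiply this handshake by every station, gripper, and safety interlock on a line, and the brittleness becomes easy to see.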
In research labs and a handful of flexible manufacturing cells, ROS - the Robot Operating System - has become the standard framework for building robotic applications. ROS and its successor ROS2 offer modular libraries for sensing, planning, and actuation. But factories don’t run on academic timelines. They run on uptime. ROS is not yet built for hard real-time control, redundancy, or safety certifications. Efforts like ROS-Industrial are trying to close the gap, but we’re still early.
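For contrast, a minimal ROS2 node using the rclpy client library looks roughly like this - the node and topic names are placeholders, but the modular publish/subscribe style is the point, and it is a very different world from a PLC scan cycle.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32


class GripperForcePublisher(Node):
    """Illustrative ROS2 node: publishes a (fake) force reading at 10 Hz."""

    def __init__(self):
        super().__init__('gripper_force_publisher')
        self.pub = self.create_publisher(Float32, 'gripper/force', 10)
        self.timer = self.create_timer(0.1, self.tick)

    def tick(self):
        msg = Float32()
        msg.data = 4.2  # stand-in for a real sensor driver reading
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = GripperForcePublisher()
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```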
So today’s factory is a patchwork of systems that were never designed to work together. A PLC handles machine sequencing. Robots run their own proprietary code. Vision systems operate on industrial PCs with standalone software. Sensors sit on fieldbus networks. Every connection - every exchange of data or timing - requires custom code, bespoke interfaces, and engineers fluent in not one language, but five or six. Siemens speaks Profinet. Rockwell speaks Ethernet/IP. None of it is truly open.
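Bridging those islands means writing glue code against whatever interface each device happens to expose. As one illustration, reading a single value over OPC UA with the open-source python-opcua client might look like the sketch below - the endpoint and node ID are hypothetical, and a real integration involves dozens of such hand-configured mappings.

```python
from opcua import Client  # python-opcua, a common open-source OPC UA client

# Hypothetical endpoint and node ID - every plant, vendor, and device exposes
# its own address space, which is exactly why this glue code rarely transfers.
client = Client("opc.tcp://192.168.0.10:4840")
client.connect()
try:
    temperature_node = client.get_node("ns=2;s=Line1.Oven.Temperature")
    print("Oven temperature:", temperature_node.get_value())
finally:
    client.disconnect()
```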

The problem Forgis sits in
Modern manufacturing runs on a quiet layer of specialists: system integrators. These firms design, program, and install automation on factory floors. They translate machines into productivity - configuring robots, PLCs, sensors, and conveyor systems into something that works day after day. Their business model is essentially time and expertise: billable hours from skilled engineers, plus a small margin on hardware. On a good project, hardware margins might reach 10-20%, but most of the economics come from engineering hours.
The reason is simple: every factory is different, and very little code can be reused. A robotic palletizer built for one packaging line often has to be programmed from scratch for another - even if the task is similar. Different robot brands, different PLCs, different product formats mean bespoke code and long engineering cycles. Hardware might represent about half of the upfront cost - the robot arm, the conveyor, the PLC.
The other half is spent on design, programming, installation, and commissioning. A $100,000 robot can easily require another $100,000 in engineering to make it productive. On more complex lines, engineering becomes the majority cost. And the work does not end after commissioning. Change the product, retool a production line, or update a process, and engineers must rewrite logic and robot paths, often leaving expensive machines idle for days. Hence, many consider systems integration - not hardware cost - the main barrier to further automation.

Compounding this is a structural constraint in labor. The world does not have enough automation engineers. In the United States, roughly 290,000 workers are employed in automation-adjacent roles, yet more than 44,000 open positions remain unfilled. Japan expects a deficit of 700,000 engineers in manufacturing by 2030. Europe faces similar pressures. Scarcity makes integrators expensive and slow to respond; projects get delayed or shelved entirely. The industry is trapped in a cycle: integration is costly, so fewer factories invest; the limited pool of engineers gets stretched thinner; and integration costs rise further.
This is precisely where the opportunity for an “automation operating system” lies. If software can reduce integration friction - not marginally, but structurally - it can tap into the software and services portion of a $200 billion-plus market. The total addressable market is hard to define precisely because it spans hardware productivity, labor substitution, and process optimization.
But even simple logic suggests its magnitude: there are around 2-3 million industrial robots in operation worldwide and tens of millions of machines and PLCs. Each robot cell may consume $50,000 or more in programming and reprogramming over its life. Even partially automating that process creates a multi-billion-dollar annual opportunity. If smarter automation software improves factory productivity by only a few percentage points, it unlocks tens of billions in value in a multi-trillion-dollar manufacturing sector. And if it enables thousands of factories that currently cannot automate to finally do so, the upside is far larger.
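A quick back-of-envelope calculation, using only the figures above plus an assumed ten-year life per robot cell, illustrates the order of magnitude:

```python
# Back-of-envelope sizing using the figures cited above.
robots_installed = 2_500_000          # midpoint of the 2-3 million estimate
lifetime_programming_cost = 50_000    # USD per robot cell over its life, per the text
robot_lifetime_years = 10             # assumption: typical depreciation horizon

annual_programming_spend = robots_installed * lifetime_programming_cost / robot_lifetime_years
print(f"Annual robot programming spend: ~${annual_programming_spend / 1e9:.1f}B")
# ~$12.5B per year from robot cells alone - before counting the tens of millions
# of PLCs and machines, or any productivity gains on top.
```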
This is the backdrop for Forgis. The need for automation is rising, especially in high-cost economies reshoring production. Yet the current system depends on manual coding, tribal knowledge, and scarce engineering labor. System integrators fill the gap, but the model does not scale.
Forgis is constructing the missing software layer of industrial automation - the infrastructure that allows factories to operate with the same flexibility and intelligence that cloud computing brought to the digital world. Its system transforms automation from an artisanal, code-heavy discipline into an adaptive, learning process. At its center lies an agentic development environment (ADE) designed around three interlocking components: the Integrator, the Coder, and the Tester.
The “Integrator” acts as the orchestration layer, discovering and connecting every element of a production cell - PLCs, robots, sensors, drives - into a unified topology. It abstracts away vendor differences and network complexity, giving engineers a single coherent interface to control heterogeneous systems. The “Coder” then takes functional intent and converts it into deterministic, production-grade automation logic.
Trained on domain-specific industrial datasets, it writes code that adheres to the strict temporal and safety constraints of factory environments. This enables a level of portability across Siemens, ABB, Fanuc, and Beckhoff ecosystems that was previously impossible. The “Tester” closes the loop. It runs high-fidelity simulations to validate the generated logic against real-world operating conditions, detecting and resolving faults before deployment. Each iteration feeds back into the system, creating a continuous cycle of learning and refinement.
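Conceptually, the loop looks something like the sketch below. To be clear, this is not Forgis’s actual API - every name here is invented - but it mirrors the described flow: discover the cell, generate vendor-specific logic from functional intent, validate it in simulation, and feed faults back into generation.

```python
from dataclasses import dataclass

@dataclass
class CellTopology:
    devices: list  # PLCs, robots, sensors, drives discovered on the network

# All names below are hypothetical - a sketch of the described workflow,
# not Forgis's actual interfaces.
def integrate(network_scan) -> CellTopology:
    """'Integrator': discover devices and abstract vendor/network differences."""
    return CellTopology(devices=list(network_scan))

def generate_logic(topology: CellTopology, intent: str) -> dict:
    """'Coder': turn functional intent into vendor-specific automation code."""
    return {device: f"// generated logic for: {intent}" for device in topology.devices}

def validate(logic: dict) -> list:
    """'Tester': run the logic against a simulation and collect fault reports."""
    return []  # stand-in for simulated fault reports

def deploy_cell(network_scan, intent):
    topology = integrate(network_scan)
    logic = generate_logic(topology, intent)
    while faults := validate(logic):                      # feedback loop
        logic = generate_logic(topology, f"{intent} | fix: {faults}")
    return logic

print(deploy_cell(["siemens_plc", "abb_robot"], "palletize cartons at 12 per minute"))
```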
Taken together, these modules form the foundation for self-improving automation - a software environment where integration, code generation, and validation reinforce one another. Over time, Forgis’s platform evolves with each deployment, internalizing the accumulated knowledge of automation engineering itself. The implications are structural. For the first time, factories can be reconfigured at software speed. Integration cycles measured in months collapse to days. Vendor lock-in gives way to interoperability. In effect, Forgis is making industrial automation composable - turning the world’s physical production systems into programmable assets.