The Myth of Concurrent Computing vs. Distributed (Edge) Computing in Supply Chain Planning
So-called concurrent computing has recently been promoted in the context of supply chain planning applications. The term implies that the system somehow plans everything, including sales, operations, capacities, and suppliers, all at the same time. Two questions arise:
- Computers are inherently sequential devices that execute instructions one after another, so how can everything be planned at once?
- Or is it implied that quantum computing or parallel processors are being used?
Let’s assume the latter is the case. Even then, each processor has to wait for the results of the others before it can perform its own task. In other words, we have to know or predict sales, demand, or inventory before we can plan anything, and we have to plan before we can tell suppliers what we need. Then we have to get confirmation from the suppliers to adjust the plan, so that we can tell clients what we can deliver. We can all agree that this is an inherently sequential process, not a parallel or concurrent one. Things appear to happen all at the same time only because of the speed of computing. However, as the size of the model and the data grows in the real world, even the appearance of concurrency goes away, and long wait times are expected because of the sequential nature of the process. This is how S&OP solutions have been designed for the last 30 years, but there are now better and faster ways of doing it.
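To make that dependency concrete, here is a toy Python sketch (the step names and durations are illustrative, not taken from any real S&OP system): even when each step is written as an asynchronous task, each one must wait for the output of the step before it, so the total time is simply the sum of the steps, no matter how many processors are available.

```python
import asyncio, time

async def forecast_demand():            # step 1: predict sales/demand
    await asyncio.sleep(0.2); return 100

async def build_plan(demand):           # step 2: needs the forecast first
    await asyncio.sleep(0.2); return demand * 1.1

async def confirm_suppliers(plan):      # step 3: needs the plan first
    await asyncio.sleep(0.2); return plan

async def main():
    start = time.perf_counter()
    demand = await forecast_demand()
    plan = await confirm_suppliers(await build_plan(demand))
    # Each step had to wait for the previous one, so total elapsed time is
    # roughly the sum of all steps (~0.6 s here), regardless of core count.
    print(f"committed plan of {plan:.0f} units in {time.perf_counter() - start:.2f}s")

asyncio.run(main())
```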
Distributed computing allows independent distributed processes to run in parallel and concurrently when there is no dependency between them. When there is, they stop, exchange information, and then resume running on their own. A “purchasing agent” can do its job, once it knows what to purchase and when to expect delivery, in parallel with a “customer agent” that talks to customers about incoming orders or reports order-status updates. An inventory agent can constantly recommend adjustments to inventory levels on its own, based on its ability to learn from past patterns of usage. This type of fast, independent computing is accomplished by deploying independent, distributed agents called Adexa Genies©.
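As a rough illustration of this pattern, the following Python sketch (the agent names, queue, and order quantities are hypothetical, not Adexa's implementation) runs three agents concurrently: the inventory agent proceeds fully in parallel, while the purchasing agent pauses only at the point where it depends on the customer agent's output.

```python
import asyncio

async def customer_agent(order_queue: asyncio.Queue) -> None:
    """Collects incoming orders independently and passes them downstream."""
    for qty in (120, 80, 200):            # illustrative incoming orders
        await asyncio.sleep(0.1)          # stands in for talking to customers
        await order_queue.put(qty)
    await order_queue.put(None)           # signal: no more orders

async def purchasing_agent(order_queue: asyncio.Queue) -> None:
    """Waits only where it depends on the customer agent's output."""
    while True:
        qty = await order_queue.get()     # dependency: needs an order first
        if qty is None:
            break
        print(f"purchasing agent: ordering materials for {qty} units")

async def inventory_agent() -> None:
    """Runs fully in parallel -- no dependency on the other two agents."""
    for _ in range(3):
        await asyncio.sleep(0.15)
        print("inventory agent: adjusting safety-stock recommendation")

async def main() -> None:
    orders: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(
        customer_agent(orders),
        purchasing_agent(orders),
        inventory_agent(),
    )

asyncio.run(main())
```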
Moreover, distributed agents are capable of scaling and growing. Each agent is like a new hire placed in charge of a process, and it keeps learning more and more. Agents can also collaborate with each other and, more importantly, negotiate with one another using Swarm Methodology. Traditional decision support systems rely on polling or consensus to make decisions. In Swarm technology, however, the parties can go back and forth to find the best overall solution, not necessarily the majority-voted one. This is how bees find the best location for their hive: by signaling their findings to all the others.
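The contrast with voting can be shown with a much-simplified, single-round Python sketch (the agent names and scores are made up, and real swarm negotiation is iterative rather than one-shot): when agents signal their full preferences instead of casting a single vote, the group can settle on the plan with the best combined outcome even when it is not the majority favorite.

```python
# Three agents score three candidate plans (higher is better); values are illustrative.
scores = {
    "capacity_agent":  {"plan_A": 9, "plan_B": 8, "plan_C": 2},
    "inventory_agent": {"plan_A": 6, "plan_B": 5, "plan_C": 4},
    "supplier_agent":  {"plan_A": 1, "plan_B": 9, "plan_C": 3},
}

# Majority vote: each agent names only its own favorite plan.
votes = [max(agent_scores, key=agent_scores.get) for agent_scores in scores.values()]
majority_pick = max(set(votes), key=votes.count)

# Swarm-style signaling: agents broadcast their full scores and the group
# converges on the plan with the best combined outcome.
plans = {plan for agent_scores in scores.values() for plan in agent_scores}
swarm_pick = max(plans, key=lambda p: sum(s[p] for s in scores.values()))

print("majority vote picks:  ", majority_pick)   # plan_A: favored by most agents
print("swarm signaling picks:", swarm_pick)      # plan_B: best for the group overall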
Given its scalability and its ability to receive, process, and respond to the vast number of events in a supply chain without data latency or decision latency, edge (distributed) computing is the way forward. In contrast, one big centralized solution cannot receive and respond to events and data in a timely manner, let alone plan “concurrently.” Having a central planning system, or a so-called control tower, is like having a car that uses a central engine or processor to smooth the ride over every little bump instead of local shock absorbers. A distributed system has many smart sensors (distributed agents) that are intelligent enough to sense events, act on them, and learn from them locally.
To learn more about Adexa’s distributed architecture, click here.