Chapter 3

The Design, Deployment, and Assessment of Industrial Agent Systems

Luis Ribeiro    Department of Management and Engineering (IEI), Division of Manufacturing Engineering, Linköping University, Linköping, Sweden

Abstract

The design, deployment, and assessment of industrial agent systems have been influenced both by technology and by an emerging set of bio-inspired production paradigms. One of the main challenges spanning these three stages is that the conceptual simplicity of the main architectural constructs is not echoed by the corresponding implementations. This conceptualization-implementation gap results from the fact that bio-inspired systems and engineered systems obey fundamentally different laws; hence, there is a considerable mismatch in design objectives that affects most implementations. In this context, this chapter briefly surveys the latest scientific efforts from the industrial agents community and discusses the virtues and challenges associated with the main architectural design approaches. The chapter then articulates this discussion with the main deployment and implementation barriers, exploring both conceptual and technical constraints, and finally considers the assessment problem, which remains one of the main open challenges for the success of agent-based solutions.

Keywords

Design of industrial agent systems

Deployment of industrial agent systems

Assessment of industrial agent systems

3.1 Introduction

Agent-based systems have been explored, if not practically then at least conceptually, in a wide range of domains. The notion of an agent has also taken on many shapes and meanings according to the application area, ranging from purely computational applications, such as UNIX daemons, Internet crawlers, and optimization algorithms, to embodied agents used in mobile robotics. The notion of cyber-physical systems was recently coined to denote the next generation of embedded systems. Unlike an embedded system, a cyber-physical system is designed from scratch to promote the symbiosis and fusion between a physical element, its controller, and its abstract or logical representation/existence. To a great extent, the concept echoes the ideas of embodiment (Pfeifer et al., 2007), whereby the body shapes the cognitive abilities of its control gear, and self-organization (Holland and Melhuish, 1999), in the sense that a resilient whole results from the collective interactions of many parts. Rather similar principles have been the basis for holonic manufacturing systems (HMSs) (Bussmann and Mcfarlane, 1999), bionic manufacturing systems (BMSs) (Ueda, 1992), evolvable assembly systems (EASs) (Onori, 2002), and the overwhelming number of industrial agent-based architectures that have followed them (Van Brussel et al., 1998; Leitao et al., 2005; Barata, 2003; Lastra, 2004; Shen et al., 2006; Marik and Lazansky, 2007; Vrba et al., 2011; Leitão, 2009; Monostori et al., 2006).

It is therefore safe to assert that industrial agent systems are an earlier, and probably more restricted, case of cyber-physical systems.

Although each application area has its specific challenges, arguably the design, deployment, and assessment of industrial agent systems are particularly complex. Given the multidisciplinary nature of today's industrial systems, their cyber-physical realization entails challenges that range from pure computer science and embedded controller design to production optimization and sustainability.

This chapter therefore examines the main challenges in the design, deployment, and assessment of industrial agent-based systems.

Multiagent systems (MASs) have long been regarded as a basis for inherently robust and available systems, and many characteristics (Wooldridge and Jennings, 1994, 1995), such as autonomy, social ability, reactivity, pro-activeness, and self-organization, have been identified as core ingredients of MAS reliability.

However, calling a software abstraction an “agent” and creating a system based on such abstractions is no guarantee that the system will exhibit the expected characteristics. Unfortunately, this misconception is quite common.

Significant international and industrial efforts have been made to address the different design, deployment, and assessment challenges. The reader is naturally referred to the contents of this book to learn about the latest results and technical details. Previous international projects include, but are not limited to: SIRENA—early development of the device profiles for web services (DPWS) stacks (Jammes and Smit, 2005; Bohn et al., 2006); the subsequent project SODA—focusing on the development of a service-based ecosystem using DPWS; Inlife—focusing on the service-oriented diagnosis of distributed intelligent systems (Barata et al., 2007); SOCRADES—investigating the creation of new methodologies, technologies, and tools for the modeling, design, implementation, and operation of networked hardware/software systems embedded in smart physical objects (De Souza et al., 2008); AESOP—tackling web service-oriented process monitoring and control (Karnouskos et al., 2010); GRACE—exploring process and quality control integration using a MAS framework (Stroppa et al., 2012); and IDEAS—focusing on instant deployment of agentified components (Ribeiro et al., 2011a).

The subsequent sections are therefore organized to first highlight the most common structural arrangements considered in current agent architectures and, more specifically, to bring some context to their potential applications and limitations. Secondly, because emerging architectures are increasingly inspired by concepts and methods from the complexity sciences, the gaps between these concepts and the concrete instantiation of industrial MASs are discussed. The presentation of the design challenges and opportunities follows, as well as the conventional deployment approaches. Finally, the impact of MAS design is discussed from a system validation perspective.

3.2 Distributed Versus Self-Organizing Design

MASs are by design logically distributed. From a computational point of view, this distribution can take the form of several processes and/or threads within a single controller or can spread across a network of controllers. Computational distribution does not necessarily guarantee the robustness or the availability of the MAS.

Control architectures have traditionally been classified as centralized, hierarchical, modified hierarchical, or heterarchical (Dilts et al., 1991). Almost all modern shop floors rely on hierarchical architectures, and the control logic is distributed across several controllers. In this context, each controller stands as a single point of failure because there is a unique control path connecting the different controllers. Process-related information flows in a sequential way.

Agent architectures are not immune to this effect. In fact, if the MAS consists of a nonredundant set of agents, and architecturally each agent is logically coupled to the others, then there are no obvious benefits of using agents at all, nor does the system exhibit any MAS-like behavior. Logical decoupling between the system's components is a precondition for a proper MAS response.

The notion of logical decoupling is based on self-organizing design. This means that regardless of the existence of a hierarchy of agents, or the adoption of a more flat model, the interactions between agents in the system should be defined by the scope of their classes, not by the scope of their instances. This entails that all the instances that compose a MAS have a set of well-characterized interactions with other instances based on their kind. Therefore, agents abstracting resources are able to interact with agents abstracting processes, which may include those resources, rather than having a purposely designed robot agent interacting with a purposely designed pick-and-place agent, which is normally the case in traditional automation.
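As an illustration, the class-scoped interaction rule can be sketched in a few lines of Python. The registry, agent classes, and skill names below are illustrative assumptions, not part of any particular reference architecture:

```python
class Directory:
    """Minimal registry that lets agents discover peers by kind (class)."""
    def __init__(self):
        self._agents = []

    def register(self, agent):
        self._agents.append(agent)

    def find(self, kind):
        return [a for a in self._agents if isinstance(a, kind)]


class ResourceAgent:
    """Abstracts a physical resource; offers processes by skill name."""
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)

    def can_execute(self, skill):
        return skill in self.skills


class ProcessAgent:
    """Composes processes from whatever ResourceAgents happen to be present."""
    def __init__(self, directory):
        self.directory = directory

    def allocate(self, skill):
        # The interaction is defined against the ResourceAgent *class*:
        # any instance of that kind may answer, whichever is plugged in.
        candidates = self.directory.find(ResourceAgent)
        return next((r for r in candidates if r.can_execute(skill)), None)


d = Directory()
d.register(ResourceAgent("robot-1", {"pick", "place"}))
d.register(ResourceAgent("gluer-1", {"glue"}))
proc = ProcessAgent(d)
print(proc.allocate("glue").name)  # gluer-1
```

Note that the process agent never names a specific instance; replacing "gluer-1" with any other resource of the same kind requires no change to the interaction logic, which is the essence of defining interactions in the class space.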

In this context, the common idea that a hierarchical system cannot self-organize does not hold true, and hierarchical architectures also become valid and useful models in an industrial MAS context. In fact, hierarchical and semi-hierarchical models have been the basis of almost all classical agent-based paradigms, such as HMS, BMS, EAS, etc. Again, the important point is the definition of the hierarchy in the class space rather than the instance space as it occurs in current programmable logic controller (PLC)-based control.

Self-organizing architectures take inspiration from natural systems or from abstract concepts, which have been the object of study of complexity sciences. They are supported by elusive concepts such as emergence, self-organization, and evolvability.

Emergence has a deep connection with traceability. Emergent behavior occurs when an observer is not able to trace a process back to the operation of its components or, as first noted by Lewes, “although each effect is the resultant of its components, we cannot always trace the steps of the process, so as to see in the product the mode of operation of each factor” (Goldstein, 1999). This view is shared by Bedau, who proposed the notion of weak emergence to characterize a system whose behavior can only be explained through simulation due to the complexity of the causal matrix resulting from the dynamics between the components (Bedau, 2008). Emergence is an appealing construct for engineered systems because its manifestation entails (De Wolf and Holvoet, 2005; Goldstein, 1999): radical novelty, coherence and correlation, macro-level expression, ostensive manifestation, micro-macro and macro-micro influence, etc. Further, complex natural interactions appear to be the by-product of the joint action of seemingly simple individuals.

The relationship between emergence and self-organization is intricate because one seems to entail the other. Haken (2006) defines a self-organizing system this way: “if it acquires a spatial, temporal or functional structure without specific interference from the outside (…) that the structure or functioning is not impressed on the system, but that the system is acted upon from the outside in a nonspecific fashion.” De Wolf and Holvoet (2005) propose some criteria for classifying pure emergent, self-organizing and other systems that denote both, based on some salient features. Their definition is supported by conceptual examples where the main distinction lies in the notion of changing order, for self-organization, and the micro-macro effect, for emergence.

“Evolvability” is a third key metaphor and expresses the ability of organisms to slowly adapt to evolutionary challenges. Resilience and sustainability seem to be core components of an evolvable system (Urken and Schuck, 2012).

There is, however, a huge gap between these concepts and an industrial agent-based system. Even if some paradigms are, at their roots, faithful to these natural principles, their outcomes are typically very distant with respect to implementation and operational principles (Ribeiro and Barata, 2013a,b).

In Ribeiro and Barata (2013a,b), the main pitfalls are identified and fall into one of these gaps:

 “The generic and the adapted concepts”—Natural systems, as studied by the complexity sciences, are the outcome of billions of evolutionary cycles or result from the exposure to complex physical and chemical processes over time. Both lead to the emergence of certain structures or patterns, some of which are replicable and others unique. Unlike nature, industrial agent systems do not have an infinite amount of time to overcome a disturbance. Further, their response must be predictable and kept framed within a desired working region. The first engineering step is to accept that unless an enormous computational power is available, certain natural processes cannot be imported to an engineering framework. They are simply unfeasible. This creates a subtle gap between a working system and something that needs to be modeled to explore a similar behavioral principle but that is not guaranteed to work.

 “The set of adapted concepts and reference architectures”—After the conceptualization phase and the preliminary translation to an engineering framework, these naturally inspired concepts still need to find their form as agents, services, intelligent modules, nodes, etc., in the context of a reference architecture. This creates a problem of conceptual interoperability. In the literature, there is a lack of unified architectures for HMS, BMS, EAS, etc. Therefore, these have gained the status of paradigms, offering only an extremely high-level conceptualization of how such a system could work. There is no consensus on the meaning and outcome of self-organization, emergence, and “evolvability.” This has a tremendous impact on design and validation because it is not possible to sufficiently define the behavior of the system, nor how it comes together as a whole, at this level. System architectures are their authors' interpretations of a specific paradigm or high-level model and are not quantitatively comparable among themselves.

 “The reference architecture and existing technology”—At this stage, the architecture is probably already far from the bio-inspired concepts and, in most cases, the unique properties that were to be incorporated have been lost. Even a clearly defined architecture must be instantiated in some technological framework, and technological instantiation is a significant challenge. Agent-based architectures implicitly make assumptions about the underlying technology. Recall that the modeling is to occur in the class space, not in the instance space. The immediate consequence is that at least an object-oriented representation of the architectural constructs is required. This is still not the standard in automation technology, although object orientation has recently been incorporated into IEC 61131-3.

One additional remark is that, even if all these gaps are properly addressed, the effort of developing and prototyping these systems and architectures has been mainly led by academia and has hardly ever been tested in a true industrial context. Hence, there is a general disregard for industrial safety and security standards. Who is liable if a system harms a human in the process of using its autonomy? And how is autonomy defined such that it is suitable and accepted in an industrial environment?

3.3 Design Challenges and Directions

Current system design is still dominated by the notion of lean. Lean manufacturing emerged as a reaction to the socio-economic changes and oil crises of the 1950s-1970s. A lean manufacturing system is one that meets high throughput or service demands with very little inventory and minimal waste. Its advent is due to Japanese industry, and especially Toyota (Barata, 2003; Ribeiro and Barata, 2011). Under a lean system, processes and operations are streamlined, and factory workers are highly involved in continuous improvement. The alternative to lean mostly followed by European and American companies at the time was essentially technological. Western companies perceived production flexibility as the key factor in boosting competitiveness, which led to the concept of flexible manufacturing systems (FMSs). The main known issue with FMS, especially in its early instantiations, was cost effectiveness. In developing multipurpose equipment, there is a significant risk of misestimating the family of products to be manufactured and ending up with under- or over-engineered equipment (Barata, 2003; Ribeiro and Barata, 2011).

Modern FMS is now focused on deep customization, which led to the emergence of the concept of reconfigurable manufacturing systems (RMSs) (Koren et al., 1999; Mehrabi et al., 2000, 2002). RMSs target mass customization environments and maintain that reconfiguration and flexibility should be attained through an open, reconfigurable control approach and dedicated intermodular tools rather than optimized controls and multipurpose tools. Agent-based systems may have a decisive role in such contexts and, while there is still no suitable agent-based middleware for industrial applications, agents have started to find their way into industrial contexts.

MAS-based architectures and technologies have been developed to operate with existing technology in two distinct perspectives:

 Coupled—One or more agents collect and process data from the existing infrastructure. The analysis of that data may produce results that, when applied back to the infrastructure, change its behavior. The infrastructure retains its native control elements and the control is still considered in the instance space. The native infrastructure still operates even if the MAS is absent.

 Embedded—The automation platform is agent-based. Agents may have access to native control operations in the system's controllers, or the control is mediated by a gateway or any other integration mechanism. Agents exert a direct influence on the behavior of the native system. Controlling actions do not exist in the absence of the agent platform.

Although it is the author's opinion that the second approach will prevail in the long term, the first approach is immediately applicable and can, in principle, seamlessly integrate with traditional automation technology.

3.3.1 Coupled Design

Coupled design is the entry point for agent-related principles and technologies in current automation scenarios (Figure 3.1).

Figure 3.1 A coupled agent-based design.

As mentioned before, the golden rule in such a design is to completely decouple the agent infrastructure from the native system. In other words, the native system should be able to perform its normal tasks in the absence of the agent platform. Agents may be able to influence the underlying system, but they do not perform real-time control. The interaction must be mediated by an integration artifact. Normally, the interaction between the agents and the system happens over an OPC-UA, web service, or socket connection, or by using any other network connectivity object. This implies that, at the controller level, in the system space, the relevant controllers have had their processes modified to consider a fixed number of interactions with an external system. This level of integration is quite common in current shop floors because several tools, not necessarily agent-based, interact with and collect data from the native control infrastructure.

The contribution of an agent platform in such a context, as opposed to a set of cooperating tools, is questionable if the agents behave merely as logical information-processing blocks that interact in a predefined and static way. In that case, the malfunctioning of one agent/application would hinder the behavior of the agent system and stop the advanced functionalities in the system space. In fact, when agents are considered in such a context, the application scenario is normally one where one or several mutually dependent agents execute to optimize a given aspect of the underlying system. Typical applications include scheduling, line balancing/process planning problems (Tasan and Tunali, 2008; Onwubolu and Davendra, 2006; Li and McMahon, 2007), and cell formation/design configuration (Stawowy, 2006; Noktehdan et al., 2010). These algorithms present a reasonable performance for nearly static systems. When the underlying system/problem is subject to frequent changes, these approaches start to struggle because the computational complexity grows with the size of the system.
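A minimal sketch of this coupled arrangement, assuming a hypothetical gateway object standing in for an OPC-UA or web service bridge, might look as follows; the tag names and the threshold are invented for illustration:

```python
class GatewayStub:
    """Stands in for an OPC-UA/web-service bridge to the native controller.
    The native control loop keeps running whether or not any agent exists."""
    def __init__(self):
        self.tags = {"conveyor.speed": 0.5, "station.queue_length": 7}

    def read(self, tag):
        return self.tags[tag]

    def write(self, tag, value):
        # The native controller decides whether to honor the request;
        # the agent never takes over real-time control.
        self.tags[tag] = value


class MonitoringAgent:
    """Collects data, analyzes it, and applies a result back."""
    def __init__(self, gateway):
        self.gateway = gateway

    def step(self):
        queue = self.gateway.read("station.queue_length")
        if queue > 5:
            self.gateway.write("conveyor.speed", 0.3)  # slow the feed


gw = GatewayStub()
MonitoringAgent(gw).step()
print(gw.read("conveyor.speed"))  # 0.3
```

If the agent process dies, the gateway and the native controller behind it continue to operate, which is precisely the decoupling the golden rule demands.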

The alternative is the creation of identity relations between agents and specific functionalities at the controller level. In this scenario, an agent may abstract a function or a block of code programmed in a way that allows its behavior to be slightly modified/parameterized. The native control infrastructure still retains its abilities in the absence of the agent platform; however, the agent is able to introduce local behavioral changes. This means that actions in the agent space, from simple interactions to activity orchestration, are echoed by the native control system. As opposed to the first case, the entities in the agent space are necessarily much more decoupled and can rely on self-organizing interaction patterns to influence the underlying system.

This naturally impacts the choice of methods that can be applied in either case. The single-agent or sequential process design has its origins in traditional AI. Complex problems, typically optimization problems that are not linearly divisible, will normally be solved using such an approach. There is, in this context, a wide range of methods that can be applied to model the problem, from logical reasoning to heuristic and meta-heuristic search algorithms such as genetic, particle swarm, ant colony, and simulated annealing algorithms (see Michalewicz and Fogel, 2000, and Russell and Norvig, 2003, for a full set of references). The specification of the agent behavior normally excludes open communication with other agents because the processing in the agent space is not meant to be further distributed. In the second case, although individual behavioral modeling is also considered, there is an extra focus on the definition of the agent interactions. One fundamental aspect is that these systems have to be generic and open because underlying changes may create new identity relations. Hence, the number and roles of agents will change, and the MAS must adapt.
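The identity-relation alternative can be sketched as follows; the function block and its tuning rule are hypothetical, but they illustrate an agent that introduces local behavioral changes while the native routine keeps running on its own:

```python
class FunctionBlock:
    """Native control routine that executes with or without the agent,
    but exposes one parameter the agent may tune."""
    def __init__(self, gain=1.0):
        self.gain = gain

    def execute(self, x):
        # This runs in the native control infrastructure regardless of
        # the agent platform's presence.
        return self.gain * x


class BlockAgent:
    """Identity relation: exactly one agent per function block."""
    def __init__(self, block):
        self.block = block

    def retune(self, observed_error):
        # A local behavioral change only; no real-time control is taken over.
        if abs(observed_error) > 0.1:
            self.block.gain *= 0.9


fb = FunctionBlock(gain=2.0)
agent = BlockAgent(fb)
agent.retune(observed_error=0.5)
print(fb.execute(10))  # 18.0
```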

3.3.2 Embedded Design

When considering an embedded design approach, one is assuming an agent-based infrastructure, where agents natively exert control (Figure 3.2).

Figure 3.2 An embedded design.

Agents should be able to dynamically establish interactions between themselves according to the system's status. While in the coupled design pattern agents exist outside the controller and communicate with it using one or more communication protocols, in the embedded control pattern the agent binaries are hosted by the controller itself. Depending on the computational power available and the purpose of the agent, it may operate on top of an operating system (OS) or directly over the controller's computational infrastructure.

With the proper architectural support, embedded design promotes the logical and geographical decoupling of agents, effectively enabling the creation of plug-and-produce entities comprising the artifact being controlled, the controller, and the agent. Unlike the previous case, this approach has rarely been explored in industrial scenarios.

There is a plethora of design decisions to be considered. It is important to recall that in such a pluggable environment the agents' behaviors are generically defined in their class space. In this context, they will be specialized before or during deployment as a reconfiguration action. For example, each agent retains both a generic communication interface and the self-organizing logic that enables each instance to interact with the other instances according to their classes.
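A toy example of behavior written once in the class space and then specialized at deployment time, under the assumption of a simple configuration dictionary (the keys and names are invented):

```python
class GenericResourceAgent:
    """Generic communication interface plus self-organizing logic;
    the concrete skill set arrives only at deployment time."""
    def __init__(self):
        self.skills = set()
        self.station = None

    def specialize(self, config):
        # Reconfiguration action performed before or during deployment:
        # the class defines *how* to interact, the config defines *what*.
        self.skills = set(config["skills"])
        self.station = config["station"]

    def advertise(self):
        # Generic interface every instance exposes to its peers.
        return {"station": self.station, "skills": sorted(self.skills)}


agent = GenericResourceAgent()
agent.specialize({"skills": ["drill", "deburr"], "station": "st-07"})
print(agent.advertise())
```

Any number of instances can be stamped out of the same class and specialized differently, so plugging a new resource never requires writing a new agent.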

One of the first design decisions concerns which entities are pluggable in a system. One common solution is the development of a low-level entity (a resource agent/holon) that in the embedded approach encapsulates the tuple (agent + controller + equipment). This agent exposes a certain process that matches the ones offered by the equipment. Under these circumstances, this whole should be seamlessly pluggable and unpluggable to and from the system. This requires physical adaptations in respect to mechanical, electrical, pneumatic … interfaces and logical adaptations so that other agents in the environment can react to the presence and absence of that agent.

However, if one allows more than one agent to sit on a controller, what does that mean? The identity is broken, so plug and produce is no longer as simple as disconnecting and reconnecting the whole. The identity relation just detailed implies that the controller is sized just right for the computing power required to handle the equipment; failing to observe this constraint most likely means a very expensive system. The rationale for using an oversized controller could be to increase the logical flexibility of a system with interchangeable tools. From an architectural point of view, however, one needs to consider how to plug and unplug that system. If the components are not meaningful in any other context, then this scenario falls back to the first case. Nevertheless, if some components can exist on their own, the architecture must support the selective shutdown and re-plug of specific agents without affecting the behavior of all the others running on the same controller. The tuple is now (agents + controller + equipment), and the controller is the most critical piece.
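The selective shutdown requirement can be sketched as follows, assuming a hypothetical controller object that hosts several agents; unplugging one agent must leave its siblings untouched:

```python
class ResourceAgent:
    """One agent per piece of interchangeable equipment on the controller."""
    def __init__(self, name, process):
        self.name = name
        self.process = process
        self.alive = False


class Controller:
    """Hosts several agent binaries: the (agents + controller + equipment)
    tuple, where the controller is the critical shared piece."""
    def __init__(self, name):
        self.name = name
        self.agents = {}

    def plug(self, agent):
        self.agents[agent.name] = agent
        agent.alive = True

    def unplug(self, name):
        # Only the named agent is stopped; siblings keep running.
        agent = self.agents.pop(name)
        agent.alive = False
        return agent


ctrl = Controller("cell-ctrl-1")
ctrl.plug(ResourceAgent("gripper-a", "grasp"))
ctrl.plug(ResourceAgent("tool-changer", "swap"))
ctrl.unplug("gripper-a")
print(list(ctrl.agents))  # ['tool-changer']
```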

The processes offered directly by these resource agents often need to be composed to create higher-order processes.

This raises another important question, which is the definition of the lowest and highest abstraction units (i.e., the granularity of the system components). The granularity of the system relates to the number of logical layers that must be traversed before a control command originating in a top-level entity can reach the physical execution level. The impact on the overall performance of the system is therefore obvious. Fine granularity systems offer more logical flexibility at the cost of execution time.

One can hardly make assumptions about a universal granularity because it varies from architecture to architecture and closely relates to the functional requirements of the system. Smaller systems will normally allow the definition of finer-granularity components, whereas in bigger systems the performance burden is more significant. Granularity is mostly a physical feature of a system. From an architectural point of view, a resource agent/holon (normally the smallest logical unit) should in principle be able to abstract equipment ranging from a sensor to an entire station, or even an entire system (if that would make any sense). Above the resource level, several approaches have been attempted; however, the most common practice is either to consider a resource as a sort of recursive container that can hold and coordinate other resources, which can hold other resources, and so on, or to define a separate entity that is responsible for the management of other entities of the same kind and/or other resources.

The difference between both approaches is subtle yet important. If the notion of resources is taken as a recursive entity, this implies that, from an implementation point of view, the resource should be able to handle direct interactions with existing equipment, as well as manage purely virtual processes that result from composing resources that do not entail direct hardware interfacing.

In the second approach, there is a clear separation between virtual processes and direct hardware interaction. This requires the creation of an extra class of agents specialized in the management of these virtual processes.
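The two composition approaches can be contrasted in a short sketch; the class names and the string returned by the hardware call are purely illustrative:

```python
class Resource:
    """Recursive variant: a resource either interfaces hardware directly
    or coordinates child resources as a purely virtual process."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def execute(self):
        if self.children:                 # virtual, composed process
            return [c.execute() for c in self.children]
        return f"{self.name}:hw-call"     # direct hardware interaction


class Orchestrator:
    """Second variant: virtual processes live in a separate class that
    never touches hardware itself."""
    def __init__(self, resources):
        self.resources = list(resources)

    def execute(self):
        return [r.execute() for r in self.resources]


station = Resource("station", [Resource("robot"), Resource("gripper")])
print(station.execute())                 # recursive composition
print(Orchestrator([Resource("feeder")]).execute())  # managed composition
```

The recursive variant forces one class to handle both hardware interfacing and virtual composition; the orchestrator variant separates the two concerns at the cost of an extra agent class.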

In either case, the higher the abstraction level, the more independent these entities are from the execution details. This means that, theoretically, these higher-order entities are free to allocate and reallocate other entities and manage the logical workload distribution in a freer, more self-organizing way. If not properly managed, this flexibility may lead to catastrophic results. Figure 3.3 shows a potential MAS organized in distinct layers and denotes the ongoing, and potential, interactions in blue (darker lines), as well as the harmful interactions in red (lighter lines). Dashed lines denote potential interactions resulting from MAS self-organization.

Figure 3.3 Constraints in resource orchestration.

In a nutshell, the dangerous interactions are the ones that either bypass the orchestration layer immediately above the resource layer or may lead to resource allocation from a noncompatible area.

We shall now focus this analysis on the former case. Although two resources may be identical from a mechanical and logical point of view, the MAS should not be allowed to allocate resources that, despite being potentially interchangeable, cannot work together in a specific context. In the earlier example, the orchestration agent in the first layer that is using a conveyor belt, a manipulator, and a gripper should not be allowed to use other grippers unless it can physically interact with those grippers.

The first layer of orchestrating agents should ensure consistency at this level. This is a clear scenario where the physical setup imposes restrictions on the MAS.

For that very same reason, an orchestrator on any other layer should not be allowed to use resources directly.

This does not mean, however, that the agents cannot assist the user in mechanically modifying the system to keep it running. In fact, if the MAS detects one of these restrictions it should, because it cannot take action on its own, make it obvious to the user.
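A possible way of encoding this physical restriction, assuming compatibility can be reduced to a simple area attribute (a strong simplification), is sketched below; a failed allocation is returned as a flag so the user can be asked to intervene:

```python
class Resource:
    def __init__(self, name, skill, area):
        self.name, self.skill, self.area = name, skill, area


class FirstLayerOrchestrator:
    """Orchestrates resources it can physically reach; the physical setup
    imposes this restriction on the MAS."""
    def __init__(self, area):
        self.area = area

    def allocate(self, skill, pool):
        # Reject resources from non-compatible areas: replaceable in
        # principle, but not reachable in this physical configuration.
        for r in pool:
            if r.skill == skill and r.area == self.area:
                return r
        return None  # flag to the user: a mechanical change is needed


pool = [Resource("gripper-1", "grasp", area="cell-A"),
        Resource("gripper-2", "grasp", area="cell-B")]
orch = FirstLayerOrchestrator(area="cell-A")
print(orch.allocate("grasp", pool).name)  # gripper-1, never gripper-2
```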

An alternative to freezing the configuration of the first layer of orchestrating agents is to consider that the orchestrating agents can completely abandon a set of resources and scan the system for the possibility of offering the same functionality using a different set of resources in another area. These agents should therefore have an auto-redeployment capability.

Above the first orchestration layer, the execution problem is kept strictly within the logical domain. At this level, it is therefore possible to make extensive use of the self-organizing abilities of the MAS. It is worth recalling that, at this level, processes are logical and the orchestrating agents are free to allocate other agents as they see fit.

Unlike the resource layer, there are no fixed identities. This also means that auto-redeployment is not required at this level. Agents can share computational resources without directly affecting the plug-ability of the system.

Embedded design is challenging because it attempts to be generic. It attempts to explore the ability of the system to self-organize while eliminating logical central points of failure. In doing so, the system itself imposes design rules on the MAS that must be captured in a generic way as well.

Yet another important design decision regards agent specialization, particularly resource specialization. So far, the MAS has been defined in respect to resources and orchestrators (higher-order entities); however, given the multiplicity of resource types, one will hardly be able to manage the complexity of defining a universal resource that could simultaneously handle a robot and the interactions of a complex conveyor network. Resource specialization should not influence the granularity of the system, but rather support the fact that certain resources exhibit specialized behaviors. If the main consequence of underspecialization is an intractable implementation, overspecialization results in an overwhelming combination of potential interactions. Recall that the point of designing an agent-based system is the definition of the agent interactions in the class space so that the instances can have a convergent, self-organizing behavior. For a system with n specialized agents, a maximum of n(n − 1)/2 peer-to-peer interactions can be developed. Not all these interactions will be considered, but one still has to clearly define the set of those that are possible. In addition, if several interaction protocols are considered over these communication links, then the set of possible cases that needs to be addressed grows further.
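The quadratic growth of the interaction space is easy to verify:

```python
def max_links(n):
    """Maximum number of distinct peer-to-peer links among n agent classes."""
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(n, max_links(n))
# 5 classes yield 10 links, 10 yield 45, 20 yield 190: the design space
# inflates quadratically, and each extra interaction protocol multiplies
# the cases that must be specified.
```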

3.3.3 Design Guidelines

So far, the chapter has tried to convey that there is a set of meaningful ways of designing agent-based systems suitable for mechatronic applications. Several factors that influence MAS design have been informally identified:

 Available technology and costs—Technology is still the main barrier in the development of industrial agent-based systems. With development efforts mainly led by academia, there is a lack of suitable hardware and software to support industry-standard agent systems. A cost-effective agent system requires having just about the right controller for the right agent(s). Before designing any MAS, it is fundamental to understand the characteristics of the underlying IT platform. There are currently a few agent-based stacks. They have been mainly designed for general-purpose use and therefore fail to meet the specificities of industrial use. Among the agent environments that have been consistently used, one may mention: JADE (JADETeam, 2014), JACK (AOS, 2014), Cougaar (CougaarTeam, 2014), MaDKit (MaDKitTeam, 2014), and Mobile-C (MobileCTeam, 2014). The first four have been developed in JAVA, which means that they are highly portable but cannot reach hard real-time performance. Mobile-C uses a C core, but the agents are defined in an interpreted language that is very close to C. The reason for using interpreted or virtual machine-supported languages is to cater to code mobility and to enable the creation of agents at runtime. However, these improvements come at the cost of reduced performance. The other fundamental aspect is maintenance and licensing. JACK is a commercial platform, while the others are open source and licensed in different ways. This may affect the potential for commercialization. Most “industrial” agent platforms have been developed on top of these platforms. This creates a complex and heavy technological stack that is only suitable for proof-of-concept demonstrations. FIPA compliance is an important characteristic because it ensures a certain degree of interoperability. FIPA provides a list of compliant implementations at http://fipa.org/resources/livesystems.html, most of which have been discontinued.
In respect to the aforementioned platforms, only JADE and JACK are compliant. Different platforms also provide distinct agent models. Their selection is therefore directly related to the functional requirements of the system being implemented. The variations in the models can be quite substantial. JACK supports BDI modeling, while JADE's model is better suited to the development of reactive and behavior-based agents. Mobile-C promotes mobility, Cougaar is blackboard-based, and MaDKit does not enforce a specific agent model but instead focuses on organizational aspects. The technological stack plays, in this context, an important role in the performance of the agent platform. Although the usage of a virtual machine creates some isolation from the underlying platform, most virtual machines still require a native OS to support them. This is valid even for real-time virtual machines. A real-time OS can improve the overall performance of the agent platform. However, in the case of JAVA-related technologies, the virtual machine does its own thread scheduling, which is independent of the native OS. The JAVA concurrency model is based on threads, not processes, and the virtual machine will itself be a process of the native OS. There are no standardized concurrency computing models for agent platforms. JADE, for instance, uses one thread per agent and one instance of the virtual machine per container. This means it is possible to improve the performance of the underlying system by balancing the number of agents and the number of containers that execute in the platform. Performance is maximized when the agent platform is compiled directly into controller-specific binaries, without the need for a supporting OS. This, however, may be unfeasible because the platform would have to incorporate controller-specific code for all the target platforms, which, to a certain extent, is the role already played by virtual machines. It also creates an interoperability and openness problem.
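As an illustration of the one-thread-per-agent model, the following sketch mimics that concurrency scheme in plain Python; the `Agent` and `Container` classes are illustrative stand-ins and do not reproduce the actual JADE API:

```python
import threading
import queue

class Agent(threading.Thread):
    """Illustrative agent: one thread per agent, as in JADE."""
    def __init__(self, agent_name):
        super().__init__(daemon=True)
        self.agent_name = agent_name
        self.inbox = queue.Queue()   # messages delivered to this agent
        self.processed = []
    def run(self):
        while True:
            msg = self.inbox.get()
            if msg is None:          # poison pill -> shut the agent down
                break
            self.processed.append(msg)

class Container:
    """Illustrative container grouping agents; in JADE each
    container is hosted by its own virtual machine instance."""
    def __init__(self):
        self.agents = {}
    def start(self, agent_name):
        a = Agent(agent_name)
        self.agents[agent_name] = a
        a.start()
        return a
    def shutdown(self):
        for a in self.agents.values():
            a.inbox.put(None)
        for a in self.agents.values():
            a.join()

c = Container()
a1, a2 = c.start("drill"), c.start("conveyor")
a1.inbox.put("start-op")
a2.inbox.put("move-pallet")
c.shutdown()
print(a1.processed, a2.processed)  # ['start-op'] ['move-pallet']
```

Balancing how many agents share a container (and thus a virtual machine process) against how many containers exist is exactly the performance tuning knob mentioned above.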

 System size—Size impacts the performance of the system because agent systems rely on peer-to-peer communication. Therefore, size also acts as a limiting factor for the granularity.

 Functional requirements—Agent-based systems should be generic, but only as generic as required to fulfill a specific function in a specific domain. It is fundamental to grasp the main functional requirements so that extra complexity, without obvious added value, is not introduced. Agents should be specialized accordingly.

Some of the main steps involved in agent-based design for industrial automation should therefore be:

1. Get to know the problem that needs to be solved—As discussed so far, agents combine distributed problem solving with distributed computation. If the problem at hand only requires one of the two, then other solutions will probably perform better. A MAS in an industrial context is all about trading performance for adaptability. From a logical point of view, a MAS can be very adaptable; however, the physical system must be mechanically prepared to benefit from the MAS. If full optimization is a requirement, a MAS is in principle out of the question.

2. Assess the constraints introduced by the physical system itself—It will impose interfacing and physical limitations that restrict the behavior of the MAS. Capture and model the constraints, keeping in mind that the MAS should provide a solution for that system and also for a family of similar systems, but not for all systems in the world.

3. Assess the technological constraints—Recall that most MAS platforms cannot meet real-time performance. In this context, make sure that the MAS can deliver a performance that is superior to the most demanding case. Tune both hardware and software accordingly.

4. (Re)sketch out the agents—Define the smallest pluggable entity, the highest-order entity, and an intermediary entity that will support the composition and adaptation of system functionalities. Further specialize the entities by attaining a balance between code and interaction complexity.

5. Understand the system as a whole and validate the assumptions made for each agent—Recall that as the number of agent instances varies, the whole may denote distinct collective behaviors. Try to understand if different combinations of the several agent types may unbalance the whole.

6. Go back to step 4 until the architecture seems convergent and stable. The design for self-organization is frequently an iterative process.

7. Implement and validate.

Most importantly, always avoid the temptation of obsessively sticking to bio-inspiration. Some natural mechanisms, although fashionable, are simply not adequate. Indeed, one of the biggest pitfalls arises when people introduce the notion of adaptation or evolution through learning. Learning is an attractive concept. However, artificial learning techniques require either a significant set of examples or allowing the system the freedom to make mistakes and to receive feedback from those mistakes. System-wide learning, in the context discussed so far, is a true challenge and is greatly affected by information myopia. Learning in self-organizing systems has to be very well assessed to be successful.

Finally, it is important to recall that agent design is defined in the class space of the agents. Subsequent instantiation introduces mechatronic constraints that are system-dependent and may limit the agent's action scope. These constraints have a direct impact on the reconfigurability of the system. The main challenge is that a generic design (potentially applicable to many systems) will perform differently in different systems. Simulation may help in predicting the behavior of the system, but preferably the design should be informed by a reconfigurability framework (Farid and McFarlane, 2008), where the mechatronic constraints can be consistently evaluated.

3.4 Deployment

Deployment is of paramount importance in agent-based systems. It is probably the most distinguishing feature separating a MAS-based approach from a standard system, and it is responsible, to a great extent, for the mechatronic self-organization of the system.

In a conventional system, and in a very simplified way, two stages can be considered in the deployment process. The first is the physical connection of a device and the compilation of the code. The second is the subsequent deployment of the generated binaries in the device's controllers. As discussed before, the code is almost always developed in the scope of the instance and not generically in the scope of the class of that device. This means that any redeployment activity will entail reprogramming of the device's controller in the new context.

Embedded design changes this process quite substantially because modules should be seamlessly pluggable, unpluggable, and re-pluggable. This is made possible because the agents' interactions are harmonized and context changes result in automatic reconfiguration actions rather than reprogramming.

MASs have to support all these dynamics by identifying agent types and providing the agents' instances with information about the deployment context.

Current agent stacks are not ready for mechatronic deployment. Instead, they cover only specific cases of logical redeployment in the form of agent mobility. This means that an agent may be able to migrate from one platform to another.

What is inherently difficult about mechatronic deployment, in the context discussed, is that it is not always a case of mobility. In particular, mobility only applies if agents share a controller. If there is an identity between agent, controller, and equipment/device, then redeployment entails that this tuple must be shut down, physically displaced or reconfigured, and finally reconnected. Unlike pure logical redeployment or mobility, in this scenario the agent is disconnected from the system, which means that neither does the agent know about the system nor does the system know about the agent. Contextual information has to be generated at runtime when the agent reconnects, based on the information that the agent has potentially stored before shutting down. Both the agent and the system need to assess and validate that the agent can operate in the new context. This surely includes validating physical features such as geometric constraints. In the case of logical redeployment, it must additionally be verified that the agent can operate the equipment associated with the controller to which it has migrated. Ideally, this will be checked before the agent is redeployed.
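A minimal sketch of this re-plug validation, under the assumption that the stored context reduces to a skill description plus one geometric constraint; all names and checks are hypothetical:

```python
# Hypothetical sketch of mechatronic (re)deployment: the agent
# persists its context on unplug and both sides validate it on re-plug.

class ResourceAgent:
    def __init__(self, skill, reach_mm):
        self.skill = skill
        self.reach_mm = reach_mm
        self.stored_context = None
    def unplug(self):
        # persist what the agent knew before being disconnected
        self.stored_context = {"skill": self.skill,
                               "reach_mm": self.reach_mm}

class Cell:
    """Stand-in for the system side of the validation."""
    def __init__(self, required_skill, required_reach_mm):
        self.required_skill = required_skill
        self.required_reach_mm = required_reach_mm
    def replug(self, agent):
        """Validate that the agent can operate in the new context,
        including a physical (geometric) constraint."""
        ctx = agent.stored_context or {}
        return (ctx.get("skill") == self.required_skill
                and ctx.get("reach_mm", 0) >= self.required_reach_mm)

robot = ResourceAgent("pick-and-place", reach_mm=600)
robot.unplug()                       # shut down and physically displaced
cell = Cell("pick-and-place", required_reach_mm=500)
print(cell.replug(robot))            # True: context valid in the new cell
```

In a real system, the context would of course be far richer (interfaces, calibration data, safety envelopes), but the structure of the check — stored context versus new-context requirements — stays the same.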

There are also technological challenges that cannot be disregarded. Agent mobility implies that the controllers hosting the incoming agents have enough computational resources to accept them. This includes not only memory to accommodate the agent's footprint, but also the processing power to ensure that the other agents that might already be running are able to execute with the desired performance.

In this context, several deployment models have been considered to potentially overcome the technological limitations (Ribeiro and Barata, 2013a,b).

Figure 3.4 shows that regardless of the approach, a hardware abstraction layer (HAL) should always be considered. The HAL creates a harmonization layer that enables the definition of the agents in a hardware-neutral language or meta-language. This is a requirement if one is to define the agents generically in their class scope. It also allows the definition of an agent platform that is reconfigurable rather than reprogrammable by eliminating the need to recompile whenever an agent changes. If agent mobility is a requirement, then the HAL is compulsory. It is therefore the bridge between the agent code and the real-time execution kernel at the controller level. In this context, the HAL can be as simple as an integration library that limits its action scope to the translation of commands described in the agent scope to the native scope, or as complex as a fully featured virtual machine providing more advanced functionalities that include the deployment infrastructure itself.
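At its simplest, the HAL can be sketched as a translation table between a neutral agent-scope command set and controller-specific opcodes; both command sets below are hypothetical:

```python
# Minimal sketch of a HAL as an integration library: it limits its
# action scope to translating agent-scope commands to the native scope.

class NativeController:
    """Stand-in for a vendor-specific controller API."""
    def __init__(self):
        self.log = []
    def exec_native(self, opcode, *args):
        self.log.append((opcode, args))

class HAL:
    """Hardware abstraction layer: agents speak a neutral
    meta-language; the HAL maps it to controller opcodes."""
    COMMAND_MAP = {"move": "MOV", "grip": "GRP", "release": "REL"}
    def __init__(self, controller):
        self.controller = controller
    def execute(self, agent_command, *args):
        opcode = self.COMMAND_MAP[agent_command]   # neutral -> native
        self.controller.exec_native(opcode, *args)

ctrl = NativeController()
hal = HAL(ctrl)
hal.execute("move", 10, 20)
hal.execute("grip")
print(ctrl.log)   # [('MOV', (10, 20)), ('GRP', ())]
```

Swapping `NativeController` for a different vendor's stand-in only requires a new `COMMAND_MAP`, which is the essence of defining agents in their class scope while keeping them hardware-neutral.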

f03-04-9780128003411
Figure 3.4 Deployment models.

Among the existing agent-based implementations, and depending on the objectives of each, different authors and practitioners have chosen to locate the HAL in two different locations: outside and within the controller's scope (Figure 3.4(a) and (b) or (c)). The first approach is what would currently be used to connect with most standard industrial controllers. The HAL exists in some external device that has the ability to communicate with the controller. In cases (b) and (c), the HAL is within the controller. These approaches have the advantage of cutting a potential communication link between the HAL and the controllers in case (b), and between the agent, the HAL, and the controller in case (c). Although this latter case would be the ideal one for a MAS in an industrial context, because it would improve performance and facilitate plug-ability, it is also the one that entails the most technological implications. In particular, it assumes that an agent can be described in the controller's language or, otherwise, in a language that the controller can seamlessly interpret. It also implies that the controller should have greater computational resources to support the increased footprint.

It is, however, important to note that all three presented approaches are seamlessly interoperable and can co-exist.

The available technology strongly influences the deployment infrastructure, and this has to be considered from an architectural point of view when defining the MAS. There are at least three main points to consider:

 What are the functionalities of the HAL (full deployment infrastructure or just integration libraries)?

 Do all the controllers in the system provide a deployment infrastructure that can contain the maximum number of agents that might autonomously migrate therein?

 How to assess the number of potentially moving agents?

There are only a few reported and detailed cases of agent deployment at the controller level. Recent cases include the work developed by Rockwell Automation (Vrba et al., 2011), the work reported in Cândido et al. (2011) and in the SOCRADES project (De Souza et al., 2008) in respect to the dynamic deployment of services in service-oriented architectures, and the work carried out under the FP7 IDEAS project (Ribeiro and Barata, 2013a,b). There are additional well-known cases of industrial applications of agents; however, the technical details about their deployment approach are not known (Pěchouček and Mařík, 2008; Ribeiro et al., 2011b).

Most of the challenges posed so far relate to the final topic of this chapter: how can a MAS be assessed in a mechatronic context?

3.5 Assessment

The assessment of a self-organizing mechatronic system is fairly different from that of a traditional system. Ultimately, both will be evaluated based on some common metric, such as throughput, work-in-progress, makespan, etc. However, in the absence of disturbances, a traditional system is expected to behave as a highly predictable entity and operate with steady performance values. In the presence of disturbances, these highly predictable systems will typically struggle because the redundancy of the system is usually low.

Introducing redundancy is a tricky business. In a conventional system, this mostly implies physical redundancy, which normally translates into an excess of resources and low utilization. Such excess simply does not exist in lean systems.

In a MAS, an extra layer of redundancy can be easily introduced. Process-level redundancy implies that, in the absence of some resources, the MAS may try to reorganize itself and find the missing processes elsewhere in the system. Note that this still entails some physical redundancy; yet, given the adaptable nature of the agents, the system can more easily find an adaptation within the installed system, or even suggest a change. The price to pay for such a capacity is that during these reorganization moments the dynamics of the system are transient and may eventually stabilize to a steady state or, alternatively, create cyclic patterns whereby the system jumps between set points. In the worst-case scenario, the self-organizing system can exhibit chaotic behavior. This means that in the scope of a self-organizing system one can consider at least three working regions (Frei and Serugendo, 2011):

1. An ideal working region—Where the system denotes a near optimal behavior.

2. An allowed working region—Where the system behavior is far from optimal but still acceptable.

3. A forbidden region—Where the behavior of the system is not acceptable.
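The three regions above can be sketched as a simple classifier over a normalized performance metric; the thresholds are hypothetical and would be calibrated for each system during the design phase:

```python
# Illustrative classifier for the three working regions of a
# self-organizing system. Thresholds are hypothetical: fractions
# of the nominal throughput that bound each region.

IDEAL_MIN, ALLOWED_MIN = 0.9, 0.7

def working_region(throughput_ratio):
    """Map a normalized throughput observation to a working region."""
    if throughput_ratio >= IDEAL_MIN:
        return "ideal"
    if throughput_ratio >= ALLOWED_MIN:
        return "allowed"     # acceptable, e.g., during reorganization
    return "forbidden"       # must trigger containment or supervision

print(working_region(0.95))  # ideal
print(working_region(0.80))  # allowed (transient reorganization)
print(working_region(0.50))  # forbidden
```

Whether the system may roam in the "allowed" band or must only cross it between steady states is precisely the design decision discussed next.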

While it is obvious that a MAS should not be able to enter the “forbidden” region, it is less obvious if it should be allowed to roam in the allowed region or if this region should only be used as a path between steady states. This sort of control has to be thought through during the design phase. Still, it is more of an art than a science. The behavioral assessment of a self-organizing MAS is inherently complex because the architecture is based on interacting generic constructs. Although in most cases it is easy to foresee that the MAS will have a convergent behavior, it is not possible to guarantee that the same set of generic agents will behave similarly in all possible instantiations.

Different methods and tools must therefore be applied to study their macro-level behavior.

Recall that, by design, individual resources should have little impact on the whole; however, their collective dynamic has a considerable impact.

Simulation plays a decisive role in studying these systems. The difficulty is in knowing what to simulate. Under stable conditions, the MAS should denote a stable behavior as well. Although this sort of simulation is important, it is much more important to simulate under the presence of disturbances and with different mixes of agents in the environment.

In an abstract MAS (without any connection to a physical environment), the simulations can be done in a relatively simple way. In a mechatronic context, however, and as mentioned before, the system imposes constraints on the model and also on its simulation. The simulation has to take into account:

 The dynamics of the agents controlling both the transport system and the transforming equipment, such as tools, robots, grippers, etc.

 The disturbances that the system will be subjected to. Will modules be plugged, unplugged, or both? Will the system change its topology? Are all the interactions allowed at all times?

After a few simulation rounds, it should be possible to have an idea about whether the behavior of the system will converge or not. This has to be assessed statistically because simulation rounds are necessarily different. It is of great importance to further analyze the worst-case scenarios in the simulation because they will uncover faults either in the architecture or in the system, and they will help with re-assessing the forbidden zone.
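The statistical flavor of such an assessment can be sketched with a toy Monte Carlo loop in which random unplug disturbances are injected and throughput is summarized across rounds; the model and all its parameters are purely illustrative:

```python
import random
import statistics

def simulate_round(rng, n_resources=8, p_unplug=0.2):
    """Toy simulation round: each resource may be unplugged by a
    disturbance; the remaining agents reorganize and share the load.
    Returns a normalized throughput in [0, 1]."""
    active = [r for r in range(n_resources) if rng.random() > p_unplug]
    if not active:
        return 0.0
    # throughput degrades sub-linearly because the MAS reallocates work
    return min(1.0, len(active) / n_resources + 0.05 * len(active))

rng = random.Random(42)                      # fixed seed for repeatability
rounds = [simulate_round(rng) for _ in range(200)]
mean = statistics.mean(rounds)
spread = statistics.pstdev(rounds)
worst = min(rounds)                          # worst case: analyze further
print(f"mean={mean:.2f} stdev={spread:.2f} worst={worst:.2f}")
```

The mean and spread indicate whether the collective behavior converges, while the worst observed rounds are exactly the scenarios worth replaying in detail to re-assess the forbidden zone.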

Assessment should, in this context, be a continuous process that can be started offline before the first version of the system is deployed and then continued to constantly improve the system. Worst-case scenarios are often triggered by conditions that are very particular of a specific system. It is important to properly acknowledge these cases and treat them as specific contextual exceptions rather than extending the fix to the entire infrastructure. Incorporating one of these specific exceptions into the general architecture may upset the balance of the MAS and create more problems.

Unfortunately, there is no general methodology for assessing a MAS in a mechatronic context. This continues to fuel the idea of unpredictable behavior when, as with any other set of concepts and technologies in their infancy, the “right” tools have yet to be investigated. Assessment and validation are now of paramount importance in the MAS community. Showing that it works is not enough. People generally know that self-organizing MASs work, and also know, to a certain extent, how to build them. It is important to quantitatively explain how they work and to describe their dynamics.

3.6 Conclusions

The use of MAS in a mechatronic/industrial context has been around for some years now. There have been several test cases and application scenarios, yet deployment in production environments has remained elusive so far.

As with any other subject in its infancy, the design and implementation of industrial agent-based systems have many challenges that need to be overcome. Almost all of these challenges fall into three areas: design, technology, and assessment.

There is necessarily considerable feedback between these areas, with technology playing a leading role in influencing architectural design. In the industrial informatics domain, most implementations are dominated by the JADE platform which, although suitable for describing most models, struggles with performance. Some important technological developments need to occur before embedded and truly pluggable industrial agents can be considered. In particular, there is room for lightweight agent stacks that are suitable for embedded industrial controllers, which in turn require modifications to meet the software requirements of industrial agent platforms. It is therefore a bidirectional development.

On the architectural/conceptual side, there is one important balance to be achieved. It is important to resist the temptation of forcing too much bio-inspiration into industrial systems, yet at the same time it is important to prevent agent systems from becoming too similar to standard automation systems because they will lose their value.

While both the conceptual and technological dimensions are fairly advanced, the same cannot be said about assessment.

Assessment is probably the most complex of the three because it means that the first two have been brought to a stable state, upon which it is possible to make assumptions about the behavior of the system and its components. The complexity of the assessment is compounded because two different systems based on the same agent architecture will most likely exhibit distinct behaviors. This does not apply to the interactions of their parts but rather to the wholes, and it is at the level of the whole that the performance of the system can be assessed.

Although assessment has been presented from a more technical perspective, ultimately these systems need to be assessed regarding their running costs (the economic perspective). This can only be attained once the three pinpointed dimensions become stabilized and an industrial system of relevant scale is considered. This is a giant leap from where agent systems and technology stand nowadays. However, it can be offered as a prospect that, if properly designed, a self-organizing agent-based industrial system can play an active role in reducing costs by continuously assessing its state and proactively either recommending system changes or implementing those envisioned within its design constraints.

References

AOS, 2014. JACK, AOS—Autonomous Decision Making Software. http://www.aosgrp.com/ (accessed 24.03.14).

Barata J. Coalition Based Approach for Shop Floor Agility. (Ph.D. thesis) Universidade Nova de Lisboa; 2003.

Barata J, Ribeiro L, Colombo A. Diagnosis using service oriented architectures (SOA). In: 2007 5th IEEE International Conference on Industrial Informatics; Vienna, Austria: IEEE; 2007:1203–1208.

Bedau MA. Is weak emergence just in the mind? Mind. Mach. 2008;18(4):443–459.

Bohn H, Bobek A, Golatowski F. SIRENA—service infrastructure for real-time embedded networked devices: a service oriented framework for different domains. In: International Conference on Networking, International Conference on Systems and International Conference on Mobile Communications and Learning Technologies, 2006. ICN/ICONS/MCL 2006; 2006 43 pp.

Bussmann S, Mcfarlane DC. Rationales for holonic manufacturing. In: Second International Workshop on Intelligent Manufacturing Systems, Leuven, Belgium; 1999:177–184.

Cândido G, Colombo AW, Barata J, Jammes F. Service-oriented infrastructure to support the deployment of evolvable production systems. IEEE Trans. Ind. Inform. 2011;7(4):759–767.

CougaarTeam, 2014. Cognitive Agent Architecture (Cougaar). http://www.cougaar.org/ (accessed 24.03.14).

De Souza LMS, Spiess P, Guinard D, Köhler M, Karnouskos S, Savio D. Socrades: a web service based shop floor integration infrastructure. In: The Internet of Things. Berlin, Heidelberg: Springer; 2008:50–67.

De Wolf T, Holvoet T. Emergence versus self-organisation: different concepts but promising when combined. In: Engineering Self-Organising Systems. Berlin, Heidelberg: Springer-Verlag; 2005:1–15.

Dilts D, Boyd N, Whorms H. The evolution of control architectures for automated manufacturing systems. J. Manuf. Syst. 1991;10(1):79–93.

Farid AM, McFarlane D. Production degrees of freedom as manufacturing system reconfiguration potential measures. Proc. Inst. Mech. Eng. B J. Eng. Manuf. 2008;222(10):1301–1314.

Frei R, Serugendo GDM. Concepts in complexity engineering. Int. J. Bio-Inspired Comput. 2011;3(2):123–139.

Goldstein J. Emergence as a construct: history and issues. Emergence. 1999;1(1):49–72.

Haken H. Information and Self-Organization: A Macroscopic Approach to Complex Systems. Berlin, Heidelberg, New York: Springer Verlag; 2006.

Holland O, Melhuish C. Stigmergy, self-organization, and sorting in collective robotics. Artif. Life. 1999;5(2):173–202.

JADETeam, 2014. JADE—Java Agent DEvelopment Framework, JADE Board. http://jade.tilab.com/ (accessed 24.03.14).

Jammes F, Smit H. Service-oriented paradigms in industrial automation. IEEE Trans. Ind. Inform. 2005;1(1):62–70.

Karnouskos S, Colombo AW, Jammes F, Delsing J, Bangemann T. Towards an architecture for service-oriented process monitoring and control. In: IECON 2010—36th Annual Conference on IEEE Industrial Electronics Society; 2010:1385–1391.

Koren Y, Heisel U, Jovane F, Moriwaki T, Pritchow G, Ulsoy AG, Van Brussel H. Reconfigurable manufacturing systems. CIRP Ann. Manuf. Technol. 1999;48(2):527–540.

Lastra J. Reference Mechatronic Architecture for Actor-Based Assembly Systems. (Ph.D. thesis) Tampere University of Technology; 2004.

Leitão P. Agent-based distributed manufacturing control: a state-of-the-art survey. Eng. Appl. Artif. Intell. 2009;22(7):979–991.

Leitao P, Colombo AW, Restivo FJ. ADACOR: a collaborative production automation and control architecture. IEEE Intell. Syst. 2005;20(1):58–66.

Li W, McMahon CA. A simulated annealing-based optimization approach for integrated process planning and scheduling. Int. J. Comput. Integr. Manuf. 2007;20(1):80–95.

MaDKitTeam, 2014. MaDKit. http://www.madkit.org/ (accessed 24.03.14).

Marik V, Lazansky J. Industrial applications of agent technologies. Control. Eng. Pract. 2007;15(11):1364–1380 (special issue on Manufacturing Plant Control: Challenges and Issues—INCOM 2004, 11th IFAC INCOM'04 Symposium on Information Control Problems in Manufacturing).

Mehrabi MG, Ulsoy AG, Koren Y. Reconfigurable manufacturing systems and their enabling technologies. Int. J. Manuf. Technol. Manag. 2000;1:113–130.

Mehrabi MG, Ulsoy AG, Koren Y, Heytler P. Trends and perspectives in flexible and reconfigurable manufacturing systems. J. Intell. Manuf. 2002;3(2):135–146.

Michalewicz Z, Fogel DB. How to Solve It: Modern Heuristics. New York: Springer; 2000.

MobileCTeam, 2014. Mobile-C. http://www.mobilec.org/ (accessed 24.03.14).

Monostori L, Váncza J, Kumara SRT. Agent-based systems for manufacturing. CIRP Ann. Manuf. Technol. 2006;55(2):697–720.

Noktehdan A, Karimi B, Husseinzadeh Kashan A. A differential evolution algorithm for the manufacturing cell formation problem using group based operators. Expert Syst. Appl. 2010;37(7):4822–4829.

Onori M. Evolvable assembly systems—a new paradigm? In: 33rd International Symposium on Robotics; 2002.

Onwubolu G, Davendra D. Scheduling flow shops using differential evolution algorithm. Eur. J. Oper. Res. 2006;171(2):674–692.

Pěchouček M, Mařík V. Industrial deployment of multi-agent technologies: review and selected case studies. Auton. Agent. Multi-Agent Syst. 2008;17(3):397–431.

Pfeifer R, Lungarella M, Iida F. Self-organization, embodiment, and biologically inspired robotics. Science. 2007;318(5853):1088.

Ribeiro L, Barata J. Re-thinking diagnosis for future automation systems: an analysis of current diagnostic practices and their applicability in emerging IT based production paradigms. Comput. Ind. 2011;62(7):639–659.

Ribeiro L, Barata J. Deployment of multiagent mechatronic systems. In: Industrial Applications of Holonic and Multi-Agent Systems. Berlin, Heidelberg: Springer; 2013a:71–82.

Ribeiro L, Barata J. Self-organizing multiagent mechatronic systems in perspective. In: IEEE International Conference on Industrial Informatics (INDIN 2013); 2013b.

Ribeiro L, Barata J, Onori M, Hanisch C, Hoos J, Rosa R. Self-organization in automation—the IDEAS pre-demonstrator. In: IECON 2011—37th Annual Conference on IEEE Industrial Electronics Society; Melbourne, Australia: IEEE; 2011a:2752–2757.

Ribeiro L, Candido G, Barata J, Schuetz S, Hofmann A. IT support of mechatronic networks: a brief survey. In: 2011 IEEE International Symposium on Industrial Electronics (ISIE); Gdansk, Poland: IEEE; 2011b:1791–1796.

Russel S, Norvig P. Artificial Intelligence a Modern Approach. NJ: Prentice Hall; 2003.

Shen W, Hao Q, Yoon HJ, Norrie DH. Applications of agent-based systems in intelligent manufacturing: an updated review. Adv. Eng. Inform. 2006;20(4):415–431.

Stawowy A. Evolutionary strategy for manufacturing cell design. Omega. 2006;34(1):1–18.

Stroppa L, Rodrigues N, Leitao P, Paone N. Quality control agents for adaptive visual inspection in production lines. In: IECON 2012—38th Annual Conference on IEEE Industrial Electronics Society; 2012:4354–4359.

Tasan SO, Tunali S. A review of the current applications of genetic algorithms in assembly line balancing. J. Intell. Manuf. 2008;19(1):49–69.

Ueda K. A concept for bionic manufacturing systems based on DNA-type information. In: Proceedings of the IFIP TC5/WG5.3 Eighth International PROLAMAT Conference on Human Aspects in Computer Integrated Manufacturing; Amsterdam: North-Holland Publishing; 1992:853–863.

Urken AB, Schuck TM. Designing evolvable systems in a framework of robust, resilient and sustainable engineering analysis. Adv. Eng. Inform. 2012;26(3):553–562.

Van Brussel H, Wyns J, Valckenaers P, Bongaerts L, Peeters P. Reference architecture for holonic manufacturing systems: PROSA. Comput. Ind. 1998;37(3):255–274.

Vrba P, Tichý P, Mařík V, Hall KH, Staron RJ, Maturana FP, Kadera P. Rockwell automation’s holonic and multiagent control systems compendium. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 2011;41(1):14–30.

Wooldridge MJ, Jennings NR. Agent theories, architectures, and languages: a survey. In: ECAI—Workshop on Agent Theories, Architectures and Languages, Amsterdam; 1994:1–32.

Wooldridge M, Jennings NR. Intelligent agents—theory and practice. Knowl. Eng. Rev. 1995;10(2):115–152.
