
Architecture of a dynamic production controller in CIM enterprise environments

KWAK, Choonjong
In: International journal of production research, Jg. 48 (2010), Heft 1, S. 167-182


This paper deals with a production control problem for the testing and rework cell in a dynamic and stochastic computer integrated manufacturing (CIM) system. A dynamic controller is designed to handle three different production control decisions: dispatching, pre-emption, and dispatching within pre-emption. The model of this system also helps users deal with a large number of input features in real situations. This paper describes a technical architecture of dynamic production control and design issues of a dynamic production controller. It explains the overall development processes of the system and functional diagrams in order to give readers a design guideline for the dynamic production controller.

Keywords: production control; dispatching; pre-emption; dispatching within pre-emption; testing; repair; data mining; feature selection

1. Introduction

According to a US Department of Defense report (1998), architecture is defined as 'the structures or components, their relationships, and the principles and guidelines governing their design and evolution over time'. Raheja et al. ([27]) noted that, without a generic architecture, each domain or area might not be compatible with the requirements of other applications. They proposed the application of data-fusion and data-mining processes for condition-based maintenance and tried to understand the interrelationships of the various architectural modules, by providing a conceptual description and explanation of the modules. Cheung et al. ([4]) discussed the fundamental infrastructure of the product development integration architecture and its implementation issues with a product lifecycle management system. Cavalieri et al. ([5]) provided design principles for the development of good benchmark cases and technical issues related to the provision of a web-based simulation environment for manufacturing control. Zulch et al. ([32]) presented a simulation aided approach for designing organisational structures in manufacturing systems. Note that the choice of a development strategy and architecture affects how a system responds to changes and evolves over time (McKay and Black [19]).

A very challenging problem in dynamic and stochastic computer integrated manufacturing (CIM) systems is production control such as dispatching and job pre-emption. Manufacturing control is considered one of the most commercially important topics in the manufacturing area and a popular research domain in a variety of academic fields (Cavalieri et al.[5]). Dispatching rules are very effective for shop floor control, and most shop floor control approaches adopt dispatching rules to determine which job should be selected next when a machine is available (Kwak and Yih [13]). It is well known that dispatching rules influence delay costs, inventory costs, and setup costs, but no dispatching rule consistently guarantees optimal system performance in shop-floor environments (Blackstone et al.[3]). Good performance for individual dispatching rules is limited to certain shop floor conditions such as shop-floor congestion levels and due-date status (Anderson and Nyirenda [1], Grabot and Geneste [6]). A significant amount of research over the decades has focused on intelligently finding an appropriate dispatching rule given the current system status and performance measures (Ishii and Talavage [9], Nakasuka and Yoshida [20], Park et al.[22], Soon and De Souza [29], Jeong and Kim [10], Kwak and Yih [15]).

Job pre-emption is to interrupt the processing of a job and to put a different job on a workstation (Pinedo [25]). Pre-emption is allowed when jobs have several different priority levels (Paterok and Ettl [23], Pinedo [25]). Pre-emption frequently occurs in practice (Manko [18]). It is very difficult to schedule the purchase of products exactly in advance. Constant design changes make the problem worse. Cost must often be sacrificed to obtain fast service. Job pre-emption is an attractive choice for such rush and priority orders, because their deadlines are enforced by interrupting less urgent jobs (Paterok and Ettl [23], Manko [18]). On the other hand, frequent pre-emption may lead to significant system overhead and additional costs (Paterok and Ettl [23]). A control issue on pre-emption is motivated by the trade-off between deadline enforcement and system costs.

Kwak and Yih ([14]) defined a generic testing-repair model with a single tester shared by multiple types of jobs. Such a testing-repair model is very important to ensure high quality products in highly competitive business environments. This is particularly true in high-cost aerospace, military, and medical industries. By extending the model, Kwak and Yih ([15]) developed a competitive decision selector (CDS) for the testing and rework cell of a dynamic and stochastic CIM system. It was an effort to deal with a production control problem for both job pre-emption and dispatching by intelligently finding their appropriate rules given the current system status and performance measures. The proposed CDS observed the status of the system and jobs at every decision point and made its decisions on job pre-emption and dispatching rules in real time. In comparisons with five static control rules, the CDS dynamic control showed better performance than each of the static rules.

Their results, however, indicated through a few exceptional cases that job pre-emption by itself, without finer control, may not be enough to outperform non-pre-emption in all cases. They concluded that finer control of job pre-emption would be necessary to achieve better performance, and left it as a possible future research direction. In fact, Kwak and Kim ([16]) noted dispatching decisions within pre-emption, in addition to dispatching and pre-emption. With a standard pre-emption procedure, they explored two dispatching decisions within the pre-emption procedure and showed how different dispatching decisions within it affect system performance. Another problem with the CDS approach (Kwak and Yih [15]) is that real situations often involve a tremendous amount of data, so their feature selection approach can demand considerable time and effort. We may need to 'narrow down' feature candidates from a set of initially selected features (Liu et al.[17]) before their sophisticated feature selection approach is performed.

This paper addresses a technical architecture of dynamic production control and design issues of a dynamic production controller in the testing and rework cell of a dynamic and stochastic CIM system. The proposed dynamic controller handles three different decisions on dispatching, pre-emption, and dispatching within pre-emption simultaneously for production control purposes, for which this research provides the logical integration of its internal modules. This paper explains the overall development processes of the system with functional diagrams, as a design guideline for the dynamic production controller. The model of this system also assists users in dealing with a large number of input features in real situations.

The rest of this paper is organised as follows. Section 2 states a production control problem and defines a standard pre-emption procedure that has two internal dispatching decisions. Presented in Section 3 are the architecture of knowledge discovery in databases (KDD) in CIM enterprise environments and the design process of the proposed dynamic production controller with four internal modules. A case study is provided in Section 4 to show how the production controller is designed, with several design considerations in Section 5. Finally, general conclusions follow in Section 6.

2. Problem statement

Consider a testing and rework cell with identical parallel workstations. See Figures 1 and 2 for a general example of product flow in testing environments and for a testing and rework cell of this research, respectively. Jobs come to the single queue of this testing and rework cell from the previous assembly cell. All jobs are served by a dispatching rule. Once a test/rework workstation receives a job, the job is tested. A major setup is incurred when a tester shifts from one job family to another, while a minor setup occurs when it shifts from one job type to another within the same family. If the job passes the test, it goes to the next stage. Otherwise, it is repaired in the same workstation so that it can be tested again in the same place without another setup after its rework. That is, each workstation can process one operation, either testing or rework, at a time. It is often desirable in just-in-time (JIT) environments to utilise the skills of the multifunction operators in multi-purpose workstations.
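
The setup logic of the cell can be sketched as a small function. The mean setup times are those given later in Table 2; treating a repeat of the same job type (e.g., a retest after rework on the same workstation) as requiring no setup is an assumption consistent with the description above.

```python
def setup_time(prev_job, next_job, major=7.5, minor=2.5):
    """Setup incurred when a test/rework workstation switches jobs: a major
    setup on a job family change, a minor setup on a job type change within
    the same family.  Mean times (minutes) follow Table 2; zero setup for a
    repeat of the same job type is an assumption."""
    if prev_job is None or prev_job['family'] != next_job['family']:
        return major            # major setup between families
    if prev_job['type'] != next_job['type']:
        return minor            # minor setup within the same family
    return 0.0                  # same type again, e.g. retest after rework
```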

Graph: Figure 1. Product flow in testing environments.

Graph: Figure 2. Layout of the testing and rework cell.

The objective is to minimise the number of tardy jobs. Pre-emption of jobs is allowed at any moment of the testing and rework stage. Once a job is pre-empted, it goes back to the queue. When it is served again, the job is either tested or repaired, depending on its status at the moment of pre-emption. If the job was in the middle of testing, it is assumed that all testing progress made on the job is lost and testing must restart from scratch. If the job was at rework, it is assumed to resume later from the point of pre-emption.

Dispatching and pre-emption in dynamic environments are involved with the frequency of decision making (Robb and Rohleder [28]). Dispatching is done at each job completion point, by selecting the next job for an available workstation, based on a dispatching rule. On the other hand, pre-emption is done at each job arrival point. Jobs that come to the queue are either a new arrival or a pre-empted job.

When an urgent job arrives, it may be beneficial to pre-empt one of the current jobs being served, considering the overall objective function. When job j comes to the queue at time TNOW with total expected processing time pj and due date (or deadline) dj, job pre-emption can be done according to the following procedure:

Step 1

If any workstation is idle, no pre-emption; done. Otherwise, go to Step 2.

Step 2

If dj - TNOW - pj > 0, go to Step 5. Otherwise, go to Step 3.

Step 3

Find the job b with the latest due date in the workstations, obtain its due date db, and go to Step 4.

Step 4

If db ≤ dj, no pre-emption; done. Otherwise, pre-empt job b; done.

Step 5

Schedule the next decision point of the job j for time dj - pj; done.

Job pre-emption in the above procedure is based on the slack time, dj - TNOW - pj, of the incoming job j and the latest due date of the jobs in the workstations, and is independent of the status (either testing or rework) of the jobs. The pre-emption procedure has two decision phases. The first phase is to determine whether the arriving job is urgent in Step 2, with its due date dj and a pre-emption threshold of zero slack. If the incoming job j is not urgent at its arrival time, the job j is checked again at time dj - pj as in Step 5. The procedure tries again to pre-empt other jobs in the workstations to prevent tardiness, if job j is still not being served at time dj - pj. The second phase is to select a job to be pre-empted by the LDD (latest due date) rule in Steps 3 and 4, when the arriving job is urgent.

The next obvious problem is how to control not only job pre-emption and dispatching but also dispatching within pre-emption efficiently, to make appropriate decisions based on the status of the system and jobs. For dispatching within pre-emption, other dispatching rules can be adopted for Steps 2, 3, and 4 of the above pre-emption procedure. For example, the critical ratio, (dj - TNOW)/pj, can replace slack time in Step 2. Then, the modified Step 2 is 'If (dj - TNOW)/pj > 1.2, go to Step 5. Otherwise, go to Step 3,' in this example with its pre-emption threshold of 1.2. Other dispatching rules can also be used instead of the LDD rule in Steps 3 and 4.
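
The five-step procedure can be sketched as a single decision function. The zero slack threshold in Step 2 and the recheck time dj - pj in Step 5 are assumptions consistent with the surrounding description, and the dict-based job representation is illustrative only.

```python
def preempt_decision(job_j, jobs_in_service, tnow, num_workstations=3,
                     threshold=0.0):
    """Sketch of the five-step pre-emption procedure above.  Jobs are dicts
    with 'due' (due date dj) and 'proc' (total expected processing time pj).
    Returns ('no_preempt', None), ('preempt', job_b), or
    ('recheck', next_decision_time)."""
    # Step 1: with an idle workstation, job j can start without pre-emption.
    if len(jobs_in_service) < num_workstations:
        return ('no_preempt', None)
    # Step 2: if job j still has slack above the threshold, defer (Step 5).
    slack = job_j['due'] - tnow - job_j['proc']
    if slack > threshold:
        # Step 5: re-examine job j when its slack reaches the threshold.
        return ('recheck', job_j['due'] - job_j['proc'] - threshold)
    # Step 3: job b is the in-service job with the latest due date (LDD).
    job_b = max(jobs_in_service, key=lambda job: job['due'])
    # Step 4: pre-empt b only if it is less urgent than the incoming job j.
    if job_b['due'] <= job_j['due']:
        return ('no_preempt', None)
    return ('preempt', job_b)
```

Swapping the slack-time test in Step 2 for a critical-ratio test, as in the modified Step 2 above, only changes the `slack > threshold` line.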

3. Architecture of dynamic production control

Figure 3 illustrates a knowledge discovery in databases (KDD) architecture in CIM enterprise environments. The hierarchical control architecture in CIM enterprise environments consists of four levels: factory, cell, workstation, and resources. With respect to the given problem, the cell level in Figure 3 is limited to both the final assembly and testing and rework cells of Figure 2 in this research. The workstation level in Figure 3 corresponds to the test/rework workstations in Figure 2, while the resource level of Figure 3 includes testers, rework devices, operators, and so forth. In Figure 3, the status of the system is collected and saved in the cell and workstation databases in real time. The data is preprocessed to obtain necessary information (features) that is then sent to the dynamic controller. The dynamic controller makes its control decisions with the features that represent the system status and job characteristics. Cell central and local controllers implement the decisions in the cell and workstations, respectively, which affect the system status again.

Graph: Figure 3. Architecture of knowledge discovery in databases (KDD) in CIM enterprise environments.

Figure 4 depicts the design process of the dynamic controller with four internal modules: training data generation, data partition/transformation, rule extraction, and controller. The four modules are constructed off-line in advance. Then, the constructed controller with knowledge bases interacts on-line with the CIM enterprise environment, specifically the final assembly and testing and rework cells of Figure 2 in this problem. The design process, which is iterative in nature, provides the logical integration of the internal modules. Each module is explained in the following section, with emphasis on feature extraction and selection.

Graph: Figure 4. Design of the dynamic production controller.

4. Case study

4.1 Training data generation

Features are attributes or variables that are used to construct models describing data (Liu et al.[17]). When a predictive model for gas mileage is generated, for instance, the inputs (features) to be fed into the model must be determined. Vehicle weight is probably considered, while colour may be ignored. This example sounds obvious, but real-world data sets tend to involve large and often complex feature selection choices. If feature selection is done poorly, no clever model can compensate: consider, for example, trying to predict gas mileage from colour alone.

While many have tried to reduce the level of detail in simulation modelling for practical and economic reasons (Hung and Leachman [8]), the number of features should be minimised only within the extent of not losing significant information of the specific problem domain (Hood [7], Kwak et al.[12], Kwak and Yih [13]). In fact, the pitfall of the simplifying assumption also happens in analytical models (Hung and Leachman [8]).

The wrapper and filter approaches are the two major categories of feature selection methods. The wrapper approach considers a mining algorithm and uses its performance as the evaluation criterion to judge a feature's usefulness. However, this approach may not be computationally feasible with a great number of features. The filter approach, on the other hand, selects features based on general characteristics of the data, independent of any mining algorithm. Thus, the features selected by the filter approach have better generalisation properties.

Figure 5 shows the process of feature extraction and selection used in this research. The process consists of three sequential steps: the first step for feature extraction and the second two steps for feature selection. In our case study, features were first extracted based on preliminary experiments as well as the literature. Second, a filter feature selection approach was applied to narrow down features. Finally, C4.5 (Quinlan [26]) was used with its internal feature selection process.

Graph: Figure 5. Feature extraction and selection.

The feature extraction process is explained in further detail in Figure 6. Feature extraction is the general activity of creating new features through transformations or combinations of the original primitive features. Feature extraction in this case study started with collecting initial feature candidates based on the literature (Paterok and Ettl [23], Robb and Rohleder [28], Julien et al.[11], Park et al.[22], Kwak and Yih [15]), because domain-specific knowledge is typically used in this process. The initial feature candidates represent the shop-floor status and job characteristics. Note also that feature extraction, like feature selection, is an iterative process that continues until we find a reasonable set of features that adequately explains a particular problem. Through preliminary experiments with the initial feature candidates (Kwak and Yih [15]), 36 feature candidates were extracted in this case study; they are listed in Table 1.

Graph: Figure 6. Feature extraction.

Table 1. Thirty-six feature candidates.

 1: Minimum relative deadline of jobs in queue.
 2: Average relative deadline of jobs in queue.
 3: Minimum relative deadline of jobs in servers.
 4: Average relative deadline of jobs in servers.
 5: Minimum critical ratio of jobs in queue.
 6: Average critical ratio of jobs in queue.
 7: Minimum critical ratio of jobs in servers.
 8: Average critical ratio of jobs in servers.
 9: Minimum slack time of jobs in queue.
10: Average slack time of jobs in queue.
11: Minimum slack time of jobs in servers.
12: Average slack time of jobs in servers.
13: Minimum relative deadline of jobs in queue–minimum relative deadline of jobs in servers.
14: Average relative deadline of jobs in queue–average relative deadline of jobs in servers.
15: Minimum critical ratio of jobs in queue–minimum critical ratio of jobs in servers.
16: Average critical ratio of jobs in queue–average critical ratio of jobs in servers.
17: Minimum slack time of jobs in queue–minimum slack time of jobs in servers.
18: Average slack time of jobs in queue–average slack time of jobs in servers.
19: The number of tardy jobs in queue.
20: The number of tardy jobs in servers.
21: The number of potentially tardy jobs in queue.
22: The number of potentially tardy jobs in servers.
23: Number of tardy jobs in queue–number of tardy jobs in servers.
24: Number of potentially tardy jobs in queue–number of potentially tardy jobs in servers.
25: The number of jobs in queue/the number of jobs in queue and servers.
26: Minimum current tardiness of jobs in queue.
27: Average current tardiness of jobs in queue.
28: Maximum current tardiness of jobs in queue.
29: Minimum current tardiness of jobs in servers.
30: Average current tardiness of jobs in servers.
31: Maximum current tardiness of jobs in servers.
32: Mean tardiness in recent 10 jobs.
33: The number of tardy jobs in recent 10 jobs/10.
34: Minimum current tardiness of jobs in queue–minimum current tardiness of jobs in servers.
35: Average current tardiness of jobs in queue–average current tardiness of jobs in servers.
36: Maximum current tardiness of jobs in queue–maximum current tardiness of jobs in servers.

The relevance of problem domains is subjective, because the exact same problem is not usually described in the literature. Most of the cited literature is on production systems with dispatching rules and/or pre-emption. The literature offers only broad coverage of the feature space before the search space is further narrowed down by the following feature selection process.

Figure 7 explains the feature selection process of this case study. Feature selection is the task of finding the most reasonable subset of features for a classifier, seeking fewer features and maximum class separability (Kwak et al.[12]). Feature selection is a key issue in pattern classification problems, and which feature set is best depends upon the given classification task (Augusteijn et al.[2]).

Graph: Figure 7. Feature selection.

Once the 36 intermediate features were extracted, discretisation was first performed. Discretisation is the process of transforming continuous data into discrete counterparts. It reduces the number of values for a given continuous feature by dividing the value range of the feature into several intervals; interval labels can then replace actual data values. Each interval contains approximately the same number of instances, and the instances within an interval are all data whose values for a particular feature fall in the same range. The following filter feature selection approach used the discretised data of the SEPT (shortest expected processing time) rule without job pre-emption as the basis of all experiments. Each of the 36 features of the SEPT rule was divided into appropriate intervals in the discretisation.
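
The equal-frequency binning described above can be sketched as follows; the rank-based assignment is one common way to give every interval roughly the same number of instances.

```python
def equal_frequency_bins(values, n_bins):
    """Discretise one continuous feature so that each interval holds
    roughly the same number of instances, as described above.  Returns an
    interval label (0 .. n_bins-1) for each value, in the original order."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, idx in enumerate(order):
        # consecutive ranks map to consecutive, near-equal-sized intervals
        labels[idx] = min(rank * n_bins // len(values), n_bins - 1)
    return labels
```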

In the filter approach, each interval of a feature takes the average performance of the instances that fall in that interval. The criterion for selecting a particular feature is the association between the total average and the interval averages of performance (the number of tardy jobs in this case study). The interval averages of performance are individually compared with the total average of performance for each feature. If they show a relatively large variation, the feature is selected as an element of the final feature candidates.
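
A minimal sketch of this filter criterion follows. The paper only asks for a 'relatively large variation', so scoring a feature by the maximum absolute deviation of its interval averages from the total average, and comparing against a cutoff, are assumptions for illustration.

```python
def interval_deviation(perf, labels):
    """Filter-style relevance score: how far the per-interval average
    performance (e.g. number of tardy jobs) deviates from the overall
    average.  A larger score suggests the feature separates good from bad
    outcomes better."""
    total_avg = sum(perf) / len(perf)
    intervals = {}
    for p, lab in zip(perf, labels):
        intervals.setdefault(lab, []).append(p)
    return max(abs(sum(v) / len(v) - total_avg) for v in intervals.values())

def select_features(feature_labels, perf, cutoff):
    """Keep features whose interval averages vary enough from the total."""
    return [name for name, labels in feature_labels.items()
            if interval_deviation(perf, labels) >= cutoff]
```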

The feature selection by C4.5 was actually done in the rule extraction module. Determining the relative importance of features is one of the basic tasks in generating decision trees. At each level of the tree building process, only one feature, the one with the highest value for the selection criterion, is picked out of the final feature candidates. The sample set is then split into several subsets according to the values of that feature. The whole procedure is recursively repeated until no further splits are possible and the tree building process stops.
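
The single-level selection step described above can be sketched with plain information gain; note that C4.5 proper uses the gain ratio criterion, so this is a simplification.

```python
import math

def entropy(classes):
    """Shannon entropy of a list of class labels."""
    counts = {}
    for c in classes:
        counts[c] = counts.get(c, 0) + 1
    return -sum((n / len(classes)) * math.log2(n / len(classes))
                for n in counts.values())

def best_feature(instances, classes):
    """One level of tree building: pick the feature whose split yields the
    highest information gain.  `instances` is a list of dicts of discrete
    feature values; `classes` holds the matching class labels."""
    base = entropy(classes)
    def gain(feat):
        subsets = {}
        for inst, cls in zip(instances, classes):
            subsets.setdefault(inst[feat], []).append(cls)
        remainder = sum(len(s) / len(classes) * entropy(s)
                        for s in subsets.values())
        return base - remainder
    return max(instances[0], key=gain)
```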

A surface mount technology (SMT) process was chosen as a case study to construct a simulation test bed for the CIM enterprise environment of Figure 4 in this research. It is very difficult or often impossible to model and solve the overall test cell as a dynamic and stochastic analytical model without simulation (Yang and Chang [31]). Basic system information came from Kwak and Yih ([15]). Table 2 gives the system parameter values used in the experiment. Two job families, each with seven different job types, were considered. Job inter-arrival times were modelled by an exponential distribution. An arriving job was assigned with equal probability to one of the 14 job types. The actual processing times were generated from exponential distributions with the given means in Table 2 but were not known to the dynamic controller, which had only mean values from the data history.

Table 2. Data for the modelled system.

System parameters                         Values
Mean testing time in family 1             0.608 (min.)
Mean testing time in family 2             3.008 (min.)
The number of testing stations            3
Mean setup time between families          7.5 (min.)
Mean setup time within the same family    2.5 (min.)
Mean repair time in family 1              8.5 (min.)
Mean repair time in family 2              6.3 (min.)
Mean allowance factor                     8.0
Mean rework rate in family 1              0.30
Mean rework rate in family 2              0.37

Table 3 shows the 18 different decision rules considered in the experiments. When a workstation was available, the controllers selected a job by the SEPT rule or the EDD (earliest due date) rule. Whenever a new job came into the single queue of this testing and rework stage, the pre-emption procedure was activated with its two decision phases. The first phase used either the slack time based or the critical ratio based threshold. Four rules were tested in the second phase: latest due date, largest critical ratio, largest slack time, and SEPT. Together with the two no-pre-emption options, this yields the 18 decision rules of Table 3.

Table 3. Expression of decision rules.

(Decision phases 1 and 2 belong to the pre-emption procedure.)

Dispatching rule   Decision phase 1   Decision phase 2   Expression
SEPT               Slack time-based   Due date           Decision 1
SEPT               Slack time-based   Critical ratio     Decision 2
SEPT               Slack time-based   Processing time    Decision 3
SEPT               Slack time-based   Slack time         Decision 4
SEPT               CR-based           Due date           Decision 5
SEPT               CR-based           Critical ratio     Decision 6
SEPT               CR-based           Processing time    Decision 7
SEPT               CR-based           Slack time         Decision 8
SEPT               No pre-emption     -                  Decision 9
EDD                Slack time-based   Due date           Decision 10
EDD                Slack time-based   Critical ratio     Decision 11
EDD                Slack time-based   Processing time    Decision 12
EDD                Slack time-based   Slack time         Decision 13
EDD                CR-based           Due date           Decision 14
EDD                CR-based           Critical ratio     Decision 15
EDD                CR-based           Processing time    Decision 16
EDD                CR-based           Slack time         Decision 17
EDD                No pre-emption     -                  Decision 18
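
The combinatorial structure of Table 3 can be reproduced in a few lines, which also serves as a sanity check on the count of 18 = 2 x (2 x 4 + 1) rules; the short labels are abbreviations, not the paper's notation.

```python
from itertools import product

# Reconstruction of Table 3: each dispatching rule is paired with every
# phase-1 threshold and phase-2 selection rule, plus a no-pre-emption option.
dispatching = ['SEPT', 'EDD']
phase1 = ['slack-based', 'CR-based']
phase2 = ['due date', 'critical ratio', 'processing time', 'slack time']

decisions = []
for disp in dispatching:
    for p1, p2 in product(phase1, phase2):
        decisions.append((disp, p1, p2))
    decisions.append((disp, 'no pre-emption', None))
```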

The simulation models were developed for the CIM enterprise environment of Figure 4 by using SIMAN (Pegden et al.[24]). The controller module was written in C and inserted into SIMAN for real-time control purposes (Kwak and Yih [15], Park [21]). In all simulation replications, a warm-up period of 115,200 minutes out of each replication of 172,800 minutes was used as a common initialisation period by using the SEPT rule without job pre-emption.

4.2 Data partition/transformation

Data partition by decision rules was first made in the training data generation module. A decision rule is applied to each simulation scenario as a static control rule and necessary attributes are collected at data collection points and saved in each corresponding file.

The data partition/transformation module then transforms the given data structure to the standard data structure, each instance of which consists of input features and tardy status. This transformation is necessary due to the timing difference of data collection between input features and performance. Necessary input features are collected at the point of each job arrival to the queue, while tardy status can be checked when a job is completed. The job that comes to the queue is either a new arrival or a pre-empted job.
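
The join implied by this timing difference can be sketched as follows; the log layout (job id plus feature vector at arrival, job id plus tardy flag at completion) is an assumed schema, not the paper's.

```python
def build_training_instances(arrival_log, completion_log):
    """Sketch of the transformation step: input features are logged at each
    arrival to the queue, while tardy status is only known at completion,
    so the two logs are joined on a job id to form standard
    (features, tardy) instances.  A pre-empted job re-arrives, so one
    completed job may contribute several instances, all sharing its final
    tardy label."""
    tardy_by_job = dict(completion_log)
    return [(features, tardy_by_job[job_id])
            for job_id, features in arrival_log
            if job_id in tardy_by_job]      # skip jobs still in the system
```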

This data partition/transformation module finally partitions the transformed training data by system utilisation level. This second partition process is done individually on the partitions that have previously been made by decision rules. A knowledge base is constructed on each sub-partition in the next rule extraction module.

4.3 Rule extraction

In order to make on-line control decisions in the dynamic controller module, knowledge bases containing control decision rules should be constructed off-line in advance. A knowledge base is constructed in the off-line rule extraction module by generating control decision rules with the decision tree algorithm C4.5 (Quinlan [26]), within each sub-partition by decision rule and utilisation level. All default parameters of C4.5 are used, and its significance test is performed for pruning. The knowledge bases are then grouped by utilisation level in this rule extraction module.

4.4 Dynamic controller

The dynamic controller is constructed off-line with knowledge bases containing control decision rules. The controller interacts on-line with a simulation test bed that is synchronised with a real system, the final assembly and testing and rework cells in this problem. When a production control decision is about to be made, the dynamic controller first finds the current utilisation level of the system and then activates the corresponding group of knowledge bases. In that group, each knowledge base, which represents a decision rule, individually analyses the input features and submits the short-term performance expected when its decision rule is applied to the current status of the system. The knowledge bases then compete with one another in the dynamic controller module on their expected short-term performance. With a tie-breaking vector, the dynamic controller eventually selects a winner: the control rule with the minimum expected number of tardy jobs in the near future.
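
The competition step can be sketched as a one-line minimisation; representing each knowledge base as a predictor function and the tie-breaking vector as an ordered list of decision ids are assumptions for illustration.

```python
def select_decision(knowledge_bases, features, tie_break):
    """Sketch of the on-line competition: each knowledge base predicts the
    short-term number of tardy jobs its decision rule would produce; the
    controller picks the minimum, breaking ties with a preference vector.
    `knowledge_bases` maps decision id to a predictor of expected tardy
    jobs; earlier entries of `tie_break` win ties."""
    bids = {dec: kb(features) for dec, kb in knowledge_bases.items()}
    return min(bids, key=lambda dec: (bids[dec], tie_break.index(dec)))
```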

5. Design considerations

An important observation in the design process was that the utilisation of the system was one of the most significant factors in the performance of the static rules and the dynamic controller. That is, as the inter-arrival time changed, the difference in their performance also changed. Note that a high inter-arrival time directly leads to low utilisation. Table 4 shows the results of the simulation for inter-arrival times from 4.5 to 5.0, each with three replications. When the inter-arrival time is 4.5, for example, the SEPT group (decisions 1 to 9) shows better performance than the EDD group (decisions 10 to 18). As the inter-arrival time becomes larger, the difference in performance between the EDD and SEPT groups becomes smaller. With an inter-arrival time of 4.7, the two groups show similar performance. As the inter-arrival time changes from 4.7 to 5.0, the EDD group starts to show better performance than the SEPT group. In the middle of the range, at the inter-arrival time of 4.7 where the EDD and SEPT groups showed similar performance, the proposed dynamic controller worked best with respect to the number of tardy jobs. This is why utilisation is considered an important factor in the design process, particularly in the data partition module.

Table 4. The number of tardy jobs for different inter-arrival times.

Inter-arrival time          4.5                    4.7                    5.0
Decision            Rep 1  Rep 2  Rep 3    Rep 1  Rep 2  Rep 3    Rep 1  Rep 2  Rep 3
 1                  2903   2867   2969     2690   2639   2583     2335   2433   2344
 2                  3021   3024   2912     2554   2658   2571     2320   2337   2202
 3                  2986   3195   3015     2759   2845   2741     2432   2401   2350
 4                  3034   2987   2985     2675   2603   2624     2252   2329   2363
 5                  2953   3006   3026     2628   2645   2674     2315   2409   2250
 6                  2939   2978   2706     2536   2632   2664     2311   2381   2220
 7                  3003   3096   3088     2661   2937   2689     2383   2486   2377
 8                  2954   3080   2984     2597   2636   2544     2288   2381   2190
 9                  3100   3127   3022     2684   2687   2655     2394   2438   2384
10                  2960   3851   3280     2727   2791   3044     2000   2219   2300
11                  3296   3442   3349     2969   2714   2605     2397   2170   2222
12                  3503   3624   3450     2988   2861   2965     2430   2487   2481
13                  3428   3315   3485     3066   2884   2772     2232   2158   2109
14                  3445   3022   2884     2758   2735   2585     2388   2229   2258
15                  2896   3237   3697     2396   3240   2443     2206   2308   2079
16                  3397   3647   3646     2930   2982   2844     2383   2557   2260
17                  3080   3719   3013     2590   2806   2405     2092   2231   2261
18                  3114   3330   3494     3271   2902   2663     2333   2384   2528

It was also observed that it was desirable to incorporate appropriate prior knowledge into mined patterns. This not only improved system performance but also helped interpret mined patterns properly. It is important to note that blind application of data mining (data dredging) can be dangerous because meaningless patterns may be used. As a heuristic choice based on preliminary experiments, for example, we replaced the mined classification rules of the highest utilisation level by a single classification rule: if the current utilisation of the system is at the highest level (i.e., above 0.67), then apply the SEPT rule with job pre-emption (decision 1). This rule combines the well-known heuristic knowledge that the SEPT rule performs well in congested shop floors with the mined patterns.
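
This override can be sketched in a few lines; the 0.67 cut-off is the one quoted above, while the predictor interface is an assumption.

```python
def control_decision(utilisation, mined_classifier, features):
    """Sketch of folding prior knowledge into the mined rules: at the
    highest utilisation band the mined classifier is replaced outright by
    the SEPT-with-pre-emption heuristic (decision 1 in Table 3)."""
    if utilisation > 0.67:      # congested shop floor: trust the heuristic
        return 1                # decision 1 = SEPT with job pre-emption
    return mined_classifier(features)
```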

Finally, features were extracted from the initial candidates to the intermediate ones of Figure 6 with several considerations. First, considering the objective function of this case study, minimising the number of tardy jobs, features were reinforced with more deadline-related statistics and counters, while several flow time-related features were eliminated from the initial feature candidates. Second, useful features can often be generated by taking averages, sums, differences, and ratios of primitive features. Relative values were desirable to avoid possible negative effects of variation by other factors; for example, relative deadlines were preferred to deadlines because the former do not depend on the absolute simulation clock. Finally, position information was included for control purposes so that job status in the queue and servers was represented. When it is known that there are urgent waiting jobs in the queue, for instance, different control decisions on the servers (i.e., job pre-emption) may have to be considered.
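
A few of these constructions can be sketched directly; the tuple-based job representation is an assumption, and Table 1 lists the full 36-feature set that this abbreviates.

```python
def derive_features(queue_jobs, server_jobs, tnow):
    """Sketch of the feature-construction guidelines above: clock-shifted
    relative deadlines (deadline - TNOW), a queue-minus-servers difference,
    and a queue-position ratio.  Jobs are (deadline, processing_time)
    tuples."""
    rel_q = [d - tnow for d, _ in queue_jobs]
    rel_s = [d - tnow for d, _ in server_jobs]
    return {
        'min_rel_deadline_queue': min(rel_q),                    # feature 1
        'min_rel_deadline_servers': min(rel_s),                  # feature 3
        'diff_min_rel_deadline': min(rel_q) - min(rel_s),        # feature 13
        'queue_share': len(rel_q) / (len(rel_q) + len(rel_s)),   # feature 25
    }
```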

6. Conclusions

Recent studies on production control have focused on typical dispatching rules and pre-emption, but have paid little attention to dispatching decisions within the pre-emption mechanism. This research adopted a standard pre-emption procedure (Kwak and Yih [15]) with an example of a testing and rework cell. Two dispatching decisions within the pre-emption procedure were newly considered for a dynamic production controller in this case study.

This paper presented a technical architecture of dynamic production control and the design of a new dynamic production controller for the testing and rework cell of a dynamic and stochastic CIM system. The proposed dynamic controller considered not only job pre-emption and dispatching but also dispatching within pre-emption at the same time, for which this research provides the logical integration of its internal modules. As a design guideline for the dynamic production controller, this paper described the overall development processes of the system and functional diagrams. The model of this paper will also help users construct a system that deals with a large number of input features in real situations.

This research makes three major contributions. First, it provides a reference design architecture for a dynamic production controller with four modules, together with the logical integration of those four internal modules. Second, it introduces the design process of a dynamic production controller along with its practical design issues; even though the aim of this paper is to present a guideline for designing production controllers, an example of dynamic production control is also given with practical design considerations. Finally, the proposed system bridges the gap between conceptual production control levels and practical controller design levels.

The proposed dynamic controller still needs further investigation to demonstrate its performance. Such results may become available in the near future as part of a research project based on the proposed architecture and design processes.

Acknowledgements

The author would like to thank Ms. Enumi Park for helping to modify the author's program and to set up part of the experiments.

References

1. Anderson, E.J. and Nyirenda, J.C., 1990. Two new rules to minimize tardiness in a job shop. International Journal of Production Research, 28(12): 2277–2292.
2. Augusteijn, M.F., Clemens, L.E. and Shaw, K.A., 1995. Performance evaluation of texture measures for ground cover identification in satellite images by means of a neural network classifier. IEEE Transactions on Geoscience and Remote Sensing, 33(3): 616–626.
3. Blackstone, J.H. Jr., Phillips, D.T. and Hogg, G.L., 1982. A state-of-the-art survey of dispatching rules for manufacturing job shop operations. International Journal of Production Research, 20(1): 27–45.
4. Cheung, W.M., 2008. Advanced product development integration architecture: an out-of-box solution to support distributed production networks. International Journal of Production Research, 46(12): 3185–3206.
5. Cavalieri, S., Macchi, M. and Valckenaers, P., 2003. Benchmarking the performance of manufacturing control systems: design principles for a web-based simulation testbed. Journal of Intelligent Manufacturing, 14(1): 43–58.
6. Grabot, B. and Geneste, L., 1994. Dispatching rules in scheduling: a fuzzy approach. International Journal of Production Research, 32(4): 903–915.
7. Hood, S.J., 1990. Detail vs. simplifying assumptions for simulating semiconductor manufacturing lines. In: Proceedings of the ninth international electronic manufacturing technology symposium ('Competitive manufacturing for the next decade', IEEE/CHMT '90), 1–3 October 1990, Washington, DC, pp. 103–108.
8. Hung, Y.F. and Leachman, R.C., 1999. Reduced simulation models of wafer fabrication facilities. International Journal of Production Research, 37(12): 2685–2701.
9. Ishii, N. and Talavage, J.J., 1991. A transient-based real-time scheduling algorithm in FMS. International Journal of Production Research, 29(12): 2501–2520.
10. Jeong, K.-C. and Kim, Y.-D., 1998. A real-time scheduling mechanism for a flexible manufacturing system: using simulation and dispatching rules. International Journal of Production Research, 36(9): 2609–2626.
11. Julien, F.M., Magazine, M.J. and Hall, N.G., 1997. Generalized preemption models for single-machine dynamic scheduling problems. IIE Transactions, 29(5): 359–372.
12. Kwak, C., Ventura, J.A. and Tofang-Sazi, K., 2000. A neural network approach for defect identification and classification of leather fabric. Journal of Intelligent Manufacturing, 11(5): 485–499.
13. Kwak, C. and Yih, Y., 2001. Simulation comparison of collaboration protocol-based testing models. International Journal of Production Research, 39(13): 2947–2956.
14. Kwak, C. and Yih, Y., 2003. Statistical analysis of factors influencing the performance of the timeout-based testing model. International Journal of Production Research, 41(5): 1033–1044.
15. Kwak, C. and Yih, Y., 2004. Data mining approach to production control in the computer integrated testing cell. IEEE Transactions on Robotics and Automation, 20(1): 107–116.
16. Kwak, C. and Kim, C.O., 2005. Dispatching decisions within preemption procedures. International Journal of Industrial Engineering - Theory, Applications and Practice, 12(1): 21–25.
17. Liu, H., 2005. Evolving feature selection. IEEE Intelligent Systems, 20(6): 64–76.
18. Manko, H.H., 1995. Soldering handbook for printed circuits and surface mounting: design, materials, processes, equipment, trouble-shooting, quality, economy, and line management. New York: Van Nostrand Reinhold.
19. McKay, K.N. and Black, G.W., 2007. The evolution of a production planning system: a 10-year case study. Computers in Industry, 58(8): 756–771.
20. Nakasuka, S. and Yoshida, T., 1992. Dynamic scheduling system utilizing machine learning as a knowledge acquisition tool. International Journal of Production Research, 30(2): 411–443.
21. Park, E., 2006. Situation dependent decision selector for production control in testing rework cell. MS thesis, Department of Computer Science and Industrial System Engineering, Yonsei University, Seoul, South Korea.
22. Park, S.C., Raman, N. and Shaw, M.J., 1997. Adaptive scheduling in dynamic flexible manufacturing systems: a dynamic rule selection approach. IEEE Transactions on Robotics and Automation, 13(4): 486–502.
23. Paterok, M. and Ettl, M., 1994. Sojourn time and waiting time distributions for M/G/1 queues with preemption distance priorities. Operations Research, 42(6): 1146–1161.
24. Pegden, C.D., Shannon, R.E. and Sadowski, R.P., 1995. Introduction to simulation using SIMAN. New York: McGraw-Hill.
25. Pinedo, M., 1995. Scheduling: theory, algorithms, and systems. Englewood Cliffs, NJ: Prentice-Hall.
26. Quinlan, J.R., 1993. C4.5: programs for machine learning. San Mateo, CA: Morgan Kaufmann Publishers.
27. Raheja, D., 2006. Data fusion/data mining-based architecture for condition-based maintenance. International Journal of Production Research, 44(14): 2869–2887.
28. Robb, D.J. and Rohleder, T.R., 1996. An evaluation of scheduling heuristics for dynamic single-processor scheduling with early/tardy costs. Naval Research Logistics, 43(3): 349–364.
29. Soon, T.H. and De Souza, R., 1997. Intelligent simulation-based scheduling of workcells: an approach. Integrated Manufacturing Systems, 8(1): 6–23.
30. US Department of Defense, 1998. DoD joint technical architecture version 2.0. Washington, DC: DoD.
31. Yang, J. and Chang, T.-S., 1998. Multiobjective scheduling for IC sort and test with a simulation test bed. IEEE Transactions on Semiconductor Manufacturing, 11(2): 304–315.
32. Zulch, G., 2004. Simulation aided design of organizational structures in manufacturing systems using structuring strategies. Journal of Intelligent Manufacturing, 15(4): 431–437.


Title: Architecture of a dynamic production controller in CIM enterprise environments
Author: KWAK, Choonjong
Journal: International journal of production research, Vol. 48 (2010), No. 1, pp. 167-182
Publication: Abingdon: Taylor & Francis, 2010
Media type: academic journal
Extent: print, 1 p.1/4
ISSN: 0020-7543 (print)
Schlagwort:
  • Control theory, operational research
  • Automatique, recherche opérationnelle
  • Sciences exactes et technologie
  • Exact sciences and technology
  • Sciences appliquees
  • Applied sciences
  • Recherche operationnelle. Gestion
  • Operational research. Management science
  • Recherche opérationnelle et modèles formalisés de gestion
  • Operational research and scientific management
  • Gestion des stocks, gestion de la production. Distribution
  • Inventory control, production control. Distribution
  • Informatique; automatique theorique; systemes
  • Computer science; control theory; systems
  • Logiciel
  • Software
  • Organisation des mémoires. Traitement des données
  • Memory organisation. Data processing
  • Traitement des données. Listes et chaînes de caractères
  • Data processing. List processing. Character string processing
  • Analyse donnée
  • Data analysis
  • Análisis datos
  • Extraction information
  • Information extraction
  • Extracción información
  • Fouille donnée
  • Data mining
  • Busca dato
  • Gestion intégrée
  • Integrated management
  • Gestión integrada
  • Gestion production
  • Production management
  • Gestión producción
  • Modélisation
  • Modeling
  • Modelización
  • Problème livraison
  • Dispatching problem
  • Problema reparto
  • Productique
  • Computer integrated manufacturing
  • Robótica
  • Préemption
  • Preemption
  • Preempción
  • Réparation
  • Repair
  • Reparación
  • Synthèse commande
  • Control synthesis
  • Síntesis control
  • Traitement donnée
  • Data processing
  • Tratamiento datos
  • data mining
  • dispatching within pre-emption
  • dispatching
  • feature selection
  • pre-emption
  • production control
  • repair
  • testing
Other:
  • Indexed in: PASCAL Archive
  • Languages: English
  • Original Material: INIST-CNRS
  • Document Type: Article
  • File Description: text
  • Language: English
  • Author Affiliations: Korea Aerospace Research Institute, Research Evaluation and Planning Division, Daejeon, Republic of Korea
  • Rights: Copyright 2015 INIST-CNRS; CC BY 4.0; Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS
  • Notes: Computer science; theoretical automation; systems; Operational research. Management

