This paper deals with a production control problem for the testing and rework cell in a dynamic and stochastic computer integrated manufacturing (CIM) system. A dynamic controller is designed to handle three different production control decisions: dispatching, pre-emption, and dispatching within pre-emption. The model of this system also helps users deal with a large number of input features in real situations. This paper describes a technical architecture of dynamic production control and design issues of a dynamic production controller. It explains the overall development process of the system, with functional diagrams, to give readers a design guideline for the dynamic production controller.
Keywords: production control; dispatching; pre-emption; dispatching within pre-emption; testing; repair; data mining; feature selection
According to a US Department of Defense report (1998), architecture is defined as 'the structures or components, their relationships, and the principles and guidelines governing their design and evolution over time'.
A very challenging problem in dynamic and stochastic computer integrated manufacturing (CIM) systems is production control, such as dispatching and job pre-emption. Manufacturing control is considered one of the most commercially important topics in the manufacturing area and a popular research domain in a variety of academic fields (Cavalieri et al.).
Job pre-emption interrupts the processing of a job so that a different job can be put on the workstation (Pinedo).
Kwak and Yih studied job pre-emption in this type of testing and rework environment. Their results, however, indicated through a few exceptional cases that job pre-emption by itself, without finer control, may not achieve better performance than non-pre-emption in all cases. They concluded that finer control would be necessary for job pre-emption to achieve better performance, and suggested it as a possible future research direction.
This paper addresses a technical architecture of dynamic production control and design issues of a dynamic production controller in the testing and rework cell of a dynamic and stochastic CIM system. The proposed dynamic controller handles three different decisions on dispatching, pre-emption, and dispatching within pre-emption simultaneously for production control purposes, for which this research provides the logical integration of its internal modules. This paper explains the overall development processes of the system with functional diagrams, as a design guideline for the dynamic production controller. The model of this system also assists users in dealing with a large number of input features in real situations.
The rest of this paper is organised as follows. Section 2 states a production control problem and defines a standard pre-emption procedure that has two internal dispatching decisions. Presented in Section 3 are the architecture of knowledge discovery in databases (KDD) in CIM enterprise environments and the design process of the proposed dynamic production controller with four internal modules. A case study is provided in Section 4 to show how the production controller is designed, with several design considerations in Section 5. Finally, general conclusions follow in Section 6.
Consider a testing and rework cell with identical parallel workstations. See Figures 1 and 2 for a general example of product flow in testing environments and for a testing and rework cell of this research, respectively. Jobs come to the single queue of this testing and rework cell from the previous assembly cell. All jobs are served by a dispatching rule. Once a test/rework workstation receives a job, the job is tested. A major setup is incurred when a tester shifts from one job family to another, while a minor setup occurs when it shifts from one job type to another within the same family. If the job passes the test, it goes to the next stage. Otherwise, it is repaired in the same workstation so that it can be tested again in the same place without another setup after its rework. That is, each workstation can process one operation, either testing or rework, at a time. It is often desirable in just-in-time (JIT) environments to utilise the skills of the multifunction operators in multi-purpose workstations.
Graph: Figure 1. Product flow in testing environments.
Graph: Figure 2. Layout of the testing and rework cell.
The objective is to minimise the number of tardy jobs. Pre-emption of jobs is allowed at any moment of the testing and rework stage. Once a job is pre-empted, it goes back to the queue. When it is served again, the job is either tested or repaired, depending on its status at the moment of pre-emption. If the job was in the middle of testing, all testing progress made on the job is assumed lost, and testing has to be done from scratch. If the job was at rework, it is assumed to resume later from the point of pre-emption.
Dispatching and pre-emption in dynamic environments involve the frequency of decision making (Robb and Rohleder).
When an urgent job arrives, it may be beneficial to pre-empt one of the jobs currently being served, considering the overall objective function. When job j comes to the queue at time TNOW with total expected processing time p_j and due date (or deadline) d_j, the following pre-emption procedure is applied:
Step 1. If any workstation is idle, no pre-emption; done. Otherwise, go to Step 2.
Step 2. If d_j - p_j - TNOW > 0, go to Step 5. Otherwise, go to Step 3.
Step 3. Find the job b with the latest due date in the workstations, obtain its due date d_b, and go to Step 4.
Step 4. If d_b > d_j, pre-empt job b, send it back to the queue, and put job j on the freed workstation; done. Otherwise, go to Step 5.
Step 5. Schedule the next decision point of the job j for time d_j - p_j; done.
Job pre-emption in the above procedure is based on the slack time, d_j - p_j - TNOW, of the incoming job j and the latest due date of the jobs in the workstations, and is independent of the status (either testing or rework) of the jobs. The pre-emption procedure has two decision phases. The first phase, Step 2, determines whether the arriving job j is urgent, from its due date d_j, its processing time p_j, and the current time TNOW. The second phase, Steps 3 and 4, selects which job in the workstations to pre-empt, by the latest due date (LDD) rule.
The next obvious problem is how to control not only job pre-emption and dispatching but also dispatching within pre-emption efficiently, to make appropriate decisions based on the status of the system and jobs. For dispatching within pre-emption, other dispatching rules can be adopted for Steps 2, 3, and 4 of the above pre-emption procedure. For example, the critical ratio, (d_j - TNOW)/p_j, can replace slack time in Step 2. The modified Step 2 is then 'If (d_j - TNOW)/p_j > 1.2, go to Step 5. Otherwise, go to Step 3,' in this example with its pre-emption threshold of 1.2. Other dispatching rules can also be used instead of the LDD rule in Steps 3 and 4.
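The five-step procedure above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the function signature, job fields, and return values are assumptions made for clarity.

```python
# Hedged sketch of the five-step pre-emption procedure described above.
# The data layout (due dates, processing times as plain numbers) and the
# return convention are illustrative assumptions, not the paper's design.

def preemption_decision(tnow, j_due, j_proc, busy_due_dates, any_idle):
    """Return ('none' | 'defer' | 'preempt', payload) for arriving job j.

    tnow           -- current simulation time (TNOW)
    j_due, j_proc  -- due date d_j and total expected processing time p_j
    busy_due_dates -- due dates of the jobs currently on the workstations
    any_idle       -- True if some workstation is idle
    """
    # Step 1: an idle workstation means no pre-emption is needed.
    if any_idle:
        return ('none', None)
    # Step 2: positive slack (d_j - p_j - TNOW > 0) means job j is not urgent.
    slack = j_due - j_proc - tnow
    if slack > 0:
        # Step 5: revisit job j when its slack runs out, at time d_j - p_j.
        return ('defer', j_due - j_proc)
    # Step 3: find the job b with the latest due date (LDD) in the workstations.
    b_due = max(busy_due_dates)
    # Step 4: pre-empt b only if it is less urgent than the incoming job j.
    if b_due > j_due:
        return ('preempt', busy_due_dates.index(b_due))
    return ('defer', j_due - j_proc)
```

Replacing the Step 2 test with the critical ratio condition (d_j - TNOW)/p_j > 1.2 yields the dispatching-within-pre-emption variant discussed above.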
Figure 3 illustrates a knowledge discovery in databases (KDD) architecture in CIM enterprise environments. The hierarchical control architecture in CIM enterprise environments consists of four levels: factory, cell, workstation, and resources. With respect to the given problem, the cell level in Figure 3 is limited to both the final assembly and testing and rework cells of Figure 2 in this research. The workstation level in Figure 3 corresponds to the test/rework workstations in Figure 2, while the resource level of Figure 3 includes testers, rework devices, operators, and so forth. In Figure 3, the status of the system is collected and saved in the cell and workstation databases in real time. The data is preprocessed to obtain necessary information (features) that is then sent to the dynamic controller. The dynamic controller makes its control decisions with the features that represent the system status and job characteristics. Cell central and local controllers implement the decisions in the cell and workstations, respectively, which affect the system status again.
Graph: Figure 3. Architecture of knowledge discovery in databases (KDD) in CIM enterprise environments.
Figure 4 depicts the design process of the dynamic controller with four internal modules: training data generation, data partition/transformation, rule extraction, and controller. The four modules are constructed off-line in advance. Then, the constructed controller with its knowledge bases interacts on-line with the CIM enterprise environment, specifically the final assembly and the testing and rework cells of Figure 2 in this problem. The design process, which is iterative in nature, provides the logical integration of the internal modules. Each module is explained in the following section, with emphasis on feature extraction and selection.
Graph: Figure 4. Design of the dynamic production controller.
Features are attributes or variables that are used to construct models describing data (Liu et al.).
Many have tried to reduce the level of detail in simulation modelling for practical and economic reasons (Hung and Leachman).
The wrapper and filter approaches are two major categories in feature selection. The wrapper approach considers a mining algorithm and uses its performance as the evaluation criterion to judge a feature's usefulness. However, this approach may not be computationally feasible with a great number of features. The filter approach, on the other hand, selects features based on general characteristics of the data independent of any mining algorithm. Thus, the features selected by the filter approach have better generalisation properties.
Figure 5 shows the process of feature extraction and selection used in this research. The process consists of three sequential steps: the first step for feature extraction and the last two steps for feature selection. In our case study, features were first extracted based on preliminary experiments as well as the literature. Second, a filter feature selection approach was applied to narrow down the features. Finally, C4.5 (Quinlan) performed the last stage of feature selection as part of building the decision trees in the rule extraction module.
Graph: Figure 5. Feature extraction and selection.
The feature extraction process is explained in further detail in Figure 6. Feature extraction is the general activity of creating new features through transformations or combinations of the original primitive features. Feature extraction in this case study started with collecting initial feature candidates based on the literature (Paterok and Ettl, among others); the resulting 36 intermediate feature candidates are listed in Table 1.
Graph: Figure 6. Feature extraction.
Table 1. Thirty-six feature candidates.
1. Minimum relative deadline of jobs in queue.
2. Average relative deadline of jobs in queue.
3. Minimum relative deadline of jobs in servers.
4. Average relative deadline of jobs in servers.
5. Minimum critical ratio of jobs in queue.
6. Average critical ratio of jobs in queue.
7. Minimum critical ratio of jobs in servers.
8. Average critical ratio of jobs in servers.
9. Minimum slack time of jobs in queue.
10. Average slack time of jobs in queue.
11. Minimum slack time of jobs in servers.
12. Average slack time of jobs in servers.
13. Minimum relative deadline of jobs in queue minus minimum relative deadline of jobs in servers.
14. Average relative deadline of jobs in queue minus average relative deadline of jobs in servers.
15. Minimum critical ratio of jobs in queue minus minimum critical ratio of jobs in servers.
16. Average critical ratio of jobs in queue minus average critical ratio of jobs in servers.
17. Minimum slack time of jobs in queue minus minimum slack time of jobs in servers.
18. Average slack time of jobs in queue minus average slack time of jobs in servers.
19. Number of tardy jobs in queue.
20. Number of tardy jobs in servers.
21. Number of potentially tardy jobs in queue.
22. Number of potentially tardy jobs in servers.
23. Number of tardy jobs in queue minus number of tardy jobs in servers.
24. Number of potentially tardy jobs in queue minus number of potentially tardy jobs in servers.
25. Number of jobs in queue / number of jobs in queue and servers.
26. Minimum current tardiness of jobs in queue.
27. Average current tardiness of jobs in queue.
28. Maximum current tardiness of jobs in queue.
29. Minimum current tardiness of jobs in servers.
30. Average current tardiness of jobs in servers.
31. Maximum current tardiness of jobs in servers.
32. Mean tardiness in the 10 most recent jobs.
33. Number of tardy jobs in the 10 most recent jobs / 10.
34. Minimum current tardiness of jobs in queue minus minimum current tardiness of jobs in servers.
35. Average current tardiness of jobs in queue minus average current tardiness of jobs in servers.
36. Maximum current tardiness of jobs in queue minus maximum current tardiness of jobs in servers.
The relevance of problem domains is subjective, because the exact same problem is not usually described in the literature. Most of the cited literature is on production systems with dispatching rules and/or pre-emption. The literature offers only broad coverage of the feature space before the search space is further narrowed down by the following feature selection process.
Figure 7 explains the feature selection process of this case study. Feature selection is the task of finding the most reasonable subset of features for a classifier, seeking fewer features and maximum class separability (Kwak et al.).
Graph: Figure 7. Feature selection.
Once the 36 intermediate features were extracted, discretisation was first performed. Discretisation is the process of transforming continuous data into discrete counterparts. It reduces the number of values for a given continuous feature by dividing the value range of the feature into several intervals, whose labels can then replace the actual data values. Each interval contains approximately the same number of instances, namely all data whose values for the particular feature fall in the same range. The following filter feature selection approach used the discretised data of the SEPT (shortest expected processing time) rule without job pre-emption as the basis of all experiments. Each of the 36 features of the SEPT rule was divided into appropriate intervals in the discretisation.
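The equal-frequency discretisation described above can be sketched as follows; the function and the number of bins are illustrative assumptions, not taken from the paper.

```python
def equal_frequency_bins(values, n_bins):
    """Assign each value an interval label so that every interval
    holds approximately the same number of instances (equal-frequency
    discretisation)."""
    # Rank the instances by value, then map ranks onto n_bins intervals.
    order = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, i in enumerate(order):
        labels[i] = min(rank * n_bins // len(values), n_bins - 1)
    return labels
```

For example, six feature values split into three intervals get two instances per interval, and the interval label replaces the raw value in later processing.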
In the filter approach, each interval of a feature takes the average performance of the instances that fall in the interval; performance here is the number of tardy jobs. The criterion for selecting a particular feature is the association between the total average and the interval averages of performance: the interval averages are individually compared with the total average, for each feature. If they show a relatively large variation, the feature is selected as an element of the final feature candidates.
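The filter criterion above can be scored, for instance, as the mean squared deviation of the interval averages from the overall average; the scoring function and the selection threshold below are assumptions made for illustration.

```python
def interval_variation(performance, labels):
    """Spread of the per-interval average performance around the overall
    average. A feature whose intervals separate performance well scores
    high; an uninformative feature scores near zero."""
    total_avg = sum(performance) / len(performance)
    groups = {}
    for p, lab in zip(performance, labels):
        groups.setdefault(lab, []).append(p)
    # Mean squared deviation of each interval's average from the total average.
    devs = [(sum(g) / len(g) - total_avg) ** 2 for g in groups.values()]
    return sum(devs) / len(devs)

def select_features(feature_labels, performance, threshold):
    """Keep the features whose interval averages vary enough (filter approach)."""
    return [name for name, labels in feature_labels.items()
            if interval_variation(performance, labels) > threshold]
```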
The feature selection by C4.5 was actually done in the rule extraction module. Determining the relative importance of features is one of the basic tasks in generating decision trees. At each level of the tree building process, only one feature, the one with the highest value of the selection criterion, is picked out of the final feature candidates. The sample set is then split into several subsets according to the values of that feature. The whole procedure is repeated recursively until no further splits are possible and the tree building process stops.
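The per-level feature pick can be illustrated with information gain (C4.5 itself uses the closely related gain ratio; plain information gain is used here as a simpler stand-in, and the toy data is hypothetical).

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def information_gain(feature_column, labels):
    """Entropy reduction from splitting the sample set on one feature."""
    base = entropy(labels)
    n = len(labels)
    splits = {}
    for x, y in zip(feature_column, labels):
        splits.setdefault(x, []).append(y)
    return base - sum(len(s) / n * entropy(s) for s in splits.values())

def best_feature(columns, labels):
    """The single feature picked at this level of the tree building process."""
    return max(columns, key=lambda name: information_gain(columns[name], labels))
```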
A surface mount technology (SMT) process was chosen as a case study to construct a simulation test bed for the CIM enterprise environment of Figure 4 in this research. It is very difficult or often impossible to model and solve the overall test cell as a dynamic and stochastic analytical model without simulation (Yang and Chang).
Table 2. Data for the modelled system.
System parameter: value
Mean testing time in family 1: 0.608 min
Mean testing time in family 2: 3.008 min
Number of testing stations: 3
Mean setup time between families: 7.5 min
Mean setup time within the same family: 2.5 min
Mean repair time in family 1: 8.5 min
Mean repair time in family 2: 6.3 min
Mean allowance factor: 8.0
Mean rework rate in family 1: 0.30
Mean rework rate in family 2: 0.37
Table 3 shows the 18 different decision rules considered in the experiments. When a workstation was available, the controllers selected a job by the SEPT rule or the EDD (earliest due date) rule. Whenever a new job came into the single queue of this testing and rework stage, the pre-emption procedure was activated with its two decision phases. The first phase used either the slack time based or the critical ratio based threshold. Four rules were tested in the second phase: latest due date, largest critical ratio, SEPT, and largest slack time. With the no pre-emption option included, this gives 2 x (2 x 4 + 1) = 18 decision rules, as shown in Table 3.
Table 3. Expression of decision rules.
Dispatching rule SEPT:
  Decision 1: slack time based (phase 1), due date (phase 2)
  Decision 2: slack time based, critical ratio
  Decision 3: slack time based, processing time
  Decision 4: slack time based, slack time
  Decision 5: CR based, due date
  Decision 6: CR based, critical ratio
  Decision 7: CR based, processing time
  Decision 8: CR based, slack time
  Decision 9: no pre-emption
Dispatching rule EDD:
  Decision 10: slack time based, due date
  Decision 11: slack time based, critical ratio
  Decision 12: slack time based, processing time
  Decision 13: slack time based, slack time
  Decision 14: CR based, due date
  Decision 15: CR based, critical ratio
  Decision 16: CR based, processing time
  Decision 17: CR based, slack time
  Decision 18: no pre-emption
The simulation models were developed for the CIM enterprise environment of Figure 4 by using SIMAN (Pegden et al.).
Data partition by decision rules was first made in the training data generation module. A decision rule is applied to each simulation scenario as a static control rule and necessary attributes are collected at data collection points and saved in each corresponding file.
The data partition/transformation module then transforms the given data structure to the standard data structure, each instance of which consists of input features and tardy status. This transformation is necessary due to the timing difference of data collection between input features and performance. Necessary input features are collected at the point of each job arrival to the queue, while tardy status can be checked when a job is completed. The job that comes to the queue is either a new arrival or a pre-empted job.
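The transformation step can be sketched as a join on job identity between the feature records collected at arrival and the tardy flags known at completion. The field layout below is a hypothetical illustration of this timing-difference join, not the paper's data structures.

```python
def build_training_instances(arrival_features, completions):
    """Pair the input features collected when a job arrived at the queue
    with the tardy status that is known only when the job completes.

    arrival_features -- {job_id: [feature values at arrival]}
    completions      -- {job_id: (completion_time, due_date)}
    """
    instances = []
    for job_id, feats in arrival_features.items():
        if job_id not in completions:   # job still in the system: no label yet
            continue
        finish, due = completions[job_id]
        # Standard instance: input features plus a 0/1 tardy status label.
        instances.append(feats + [1 if finish > due else 0])
    return instances
```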
This data partition/transformation module finally partitions the transformed training data by system utilisation level. This second partition process is done individually on the partitions that have previously been made by decision rules. A knowledge base is constructed on each sub-partition in the next rule extraction module.
In order to make on-line control decisions in the dynamic controller module, knowledge bases containing control decision rules should be constructed off-line in advance. A knowledge base is constructed in the off-line rule extraction module by generating control decision rules with the decision tree algorithm C4.5 (Quinlan).
The dynamic controller is constructed off-line with knowledge bases containing control decision rules. The controller interacts on-line with a simulation test bed that is synchronised with a real system, here the final assembly and the testing and rework cells. When a production control decision is about to be made, the dynamic controller first finds the current utilisation level of the system and then activates the corresponding group of knowledge bases. In that group, each knowledge base, representing one decision rule, individually analyses the input features and submits the short-term performance expected if its decision rule were applied to the current status of the system. The knowledge bases then compete with one another in the dynamic controller module on their expected short-term performance. With a tie-breaking vector, the dynamic controller eventually selects a winner, that is, the control rule with the minimum expected number of tardy jobs in the near future.
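The competition among knowledge bases can be sketched as follows. Knowledge bases are reduced here to callables returning an expected number of tardy jobs, and the tie-breaking vector is modelled as a hypothetical preference order; both are illustrative assumptions.

```python
def select_decision(knowledge_bases, features, tie_break):
    """Pick the decision rule with the minimum expected number of tardy jobs.

    knowledge_bases -- {rule_name: function(features) -> expected tardy jobs}
    features        -- current system status and job characteristics
    tie_break       -- rule names in preference order, used to resolve ties
    """
    # Each knowledge base submits its expected short-term performance.
    predictions = {name: kb(features) for name, kb in knowledge_bases.items()}
    best = min(predictions.values())
    # Ties are resolved by the tie-breaking vector.
    winners = [name for name, p in predictions.items() if p == best]
    return min(winners, key=tie_break.index)
```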
An important observation in the design process was that the utilisation of the system was one of the most significant factors in the performance of the static rules and the dynamic controller. That is, as the inter-arrival time changed, the difference in their performance also changed. Note that a high inter-arrival time directly leads to low utilisation. Table 4 shows the results of the simulation runs that changed the inter-arrival time from 4.5 to 5.0, each with three replications. When the inter-arrival time is 4.5, for example, the SEPT group (decisions 1 to 9) shows better performance than the EDD group (decisions 10 to 18). As the inter-arrival time becomes larger, the difference in performance between the EDD and SEPT groups becomes smaller. With the inter-arrival time of 4.7, the two groups show similar performance. As the inter-arrival time changes from 4.7 to 5.0, the EDD group starts to show better performance than the SEPT group. In the middle of the range, at the inter-arrival time of 4.7 where the EDD and SEPT groups showed similar performance, the proposed dynamic controller worked best with respect to the number of tardy jobs. This is why utilisation is treated as an important factor in the design process, particularly in the data partition module.
Table 4. The number of tardy jobs for different inter-arrival times.
Decision | Inter-arrival time 4.5 (reps 1-3) | 4.7 (reps 1-3) | 5.0 (reps 1-3)
 1 | 2903 2867 2969 | 2690 2639 2583 | 2335 2433 2344
 2 | 3021 3024 2912 | 2554 2658 2571 | 2320 2337 2202
 3 | 2986 3195 3015 | 2759 2845 2741 | 2432 2401 2350
 4 | 3034 2987 2985 | 2675 2603 2624 | 2252 2329 2363
 5 | 2953 3006 3026 | 2628 2645 2674 | 2315 2409 2250
 6 | 2939 2978 2706 | 2536 2632 2664 | 2311 2381 2220
 7 | 3003 3096 3088 | 2661 2937 2689 | 2383 2486 2377
 8 | 2954 3080 2984 | 2597 2636 2544 | 2288 2381 2190
 9 | 3100 3127 3022 | 2684 2687 2655 | 2394 2438 2384
10 | 2960 3851 3280 | 2727 2791 3044 | 2000 2219 2300
11 | 3296 3442 3349 | 2969 2714 2605 | 2397 2170 2222
12 | 3503 3624 3450 | 2988 2861 2965 | 2430 2487 2481
13 | 3428 3315 3485 | 3066 2884 2772 | 2232 2158 2109
14 | 3445 3022 2884 | 2758 2735 2585 | 2388 2229 2258
15 | 2896 3237 3697 | 2396 3240 2443 | 2206 2308 2079
16 | 3397 3647 3646 | 2930 2982 2844 | 2383 2557 2260
17 | 3080 3719 3013 | 2590 2806 2405 | 2092 2231 2261
18 | 3114 3330 3494 | 3271 2902 2663 | 2333 2384 2528
It was also observed that it was desirable to incorporate appropriate prior knowledge into the mined patterns. This not only improved system performance but also helped interpret the mined patterns properly. It is important to note that blind application of data mining (data dredging) can be dangerous, as meaningless patterns may be applied. As a heuristic choice based on preliminary experiments, for example, we replaced the mined classification rules of the highest utilisation level with a single classification rule: if the current utilisation of the system is at the highest level (i.e., above 0.67), then apply the SEPT rule with job pre-emption (decision 1). This rule combines the well-known heuristic knowledge that the SEPT rule performs well in congested shop floors with the mined patterns.
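The override described above amounts to a guard placed in front of the mined rules. The 0.67 threshold and the decision names come from the text; the function shape is an illustrative assumption.

```python
def control_decision(utilisation, mined_rules, features):
    """Apply the prior-knowledge rule before consulting mined patterns:
    in a congested shop (utilisation above 0.67), use the SEPT rule with
    job pre-emption (decision 1) instead of the mined classification rules."""
    if utilisation > 0.67:
        return 'decision 1'
    # Otherwise defer to the rules mined from the training data.
    return mined_rules(features)
```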
Finally, features were extracted from the initial candidates to the intermediate ones of Figure 6 with several considerations. First, considering the objective function of this case study, minimising the number of tardy jobs, features were reinforced with more deadline-related statistics and counters, while several flow time related features were eliminated from the initial feature candidates. Second, useful features can often be generated by taking averages, sums, differences, and ratios of primitive features. Relative values were desirable to avoid any possible negative effect of variation due to other factors; for example, relative deadlines were preferred to deadlines because the former are time invariant. Finally, position information was included for control purposes so that job status in the queue and the servers was represented. When it is known that there are urgent jobs waiting in the queue, for instance, different control decisions on the servers (i.e., job pre-emption) may have to be considered.
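A few of the Table 1 features can be computed directly from job records; relative deadline is taken here as d_j - TNOW and critical ratio as (d_j - TNOW)/p_j, consistent with the pre-emption procedure above. The job record layout is a hypothetical illustration.

```python
def queue_features(tnow, queue_jobs):
    """Compute a few Table 1 style features for the jobs in queue.
    Each job is a (due_date, processing_time) pair; relative deadline
    is d_j - TNOW and critical ratio is (d_j - TNOW) / p_j."""
    rel = [d - tnow for d, p in queue_jobs]
    cr = [(d - tnow) / p for d, p in queue_jobs]
    return {
        'min_relative_deadline': min(rel),
        'avg_relative_deadline': sum(rel) / len(rel),
        'min_critical_ratio': min(cr),
        # A job whose due date has already passed counts as tardy.
        'tardy_in_queue': sum(1 for r in rel if r < 0),
    }
```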
Recent studies on production control have focused on typical dispatching rules and pre-emption, but have not noted dispatching decisions within the pre-emption mechanism. This research adopted a standard pre-emption procedure (Kwak and Yih) and addressed the dispatching decisions inside that procedure as well.
This paper presented a technical architecture of dynamic production control and the design of a new dynamic production controller for the testing and rework cell of a dynamic and stochastic CIM system. The proposed dynamic controller considered not only job pre-emption and dispatching but also dispatching within pre-emption at the same time, for which this research provides the logical integration of its internal modules. As a design guideline for the dynamic production controller, this paper described the overall development process of the system and its functional diagrams. The model of this paper will also help users construct a system that deals with a large number of input features in real situations.
The major contributions of this research are as follows. First, this paper provides a reference design architecture for a dynamic production controller with four modules, together with the logical integration of the four internal modules. Second, it introduces the design process of a dynamic production controller with its practical design issues. Even though the aim of this paper is to present a guideline for designing production controllers, an example of dynamic production control is also given with practical design considerations. Finally, the proposed system bridges the gap between conceptual production control levels and practical controller design levels.
The performance of the proposed dynamic controller still needs further investigation. Those results may become available in the near future, as part of a research project based on the proposed architecture and design processes.
The author would like to thank Ms. Enumi Park for helping to modify the author's program and set up part of the experiments.
By Choonjong Kwak