<!-- metadata commented in wiki content

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
''' Integration of Game theory and Response Surface Method for Robust Parameter Design'''</div>
-->

== Abstract ==
Robust parameter design (RPD) aims to determine the optimal controllable factor settings that minimize the variation of quality performance caused by noise factors. The dual response surface approach is one of the most commonly applied approaches in RPD; it attempts to simultaneously minimize the process bias (i.e., the deviation of the process mean from the target) and the process variability (i.e., variance or standard deviation). In order to address the tradeoff between the process bias and variability, a number of RPD methods reported in the literature assign relative weights or priorities to the process bias and variability. However, these weights or priorities are often subjectively determined by a decision maker (DM) who, in some situations, may not have enough prior knowledge to determine the relative importance of the process bias and variability. To address this problem, this paper proposes an alternative approach that integrates bargaining game theory into an RPD model to determine the optimal factor settings. The process bias and variability are considered as two rational players that negotiate how the input variable values should be assigned. The Nash bargaining solution technique is then applied to determine an optimal, fair, and unique solution (i.e., a balanced agreement point) for this game. This technique may provide a valuable recommendation for the DM to consider before making the final decision. The proposed method requires no preference information from the DM because it accounts for the interaction between the process bias and variability. To verify the efficiency of the obtained solutions, a lexicographic weighted Tchebycheff method, which is often used in bi-objective optimization problems, is utilized. Finally, in two numerical examples, the proposed method provides non-dominated tradeoff solutions for particular convex Pareto frontier cases. Furthermore, sensitivity analyses associated with the disagreement and agreement points are conducted for verification purposes.

'''Keywords''': Robust parameter design, lexicographic weighted Tchebycheff, bargaining game, response surface methodology, dual response model

==1. Introduction==

Due to fierce competition among manufacturing companies and an increase in customer quality requirements, robust parameter design (RPD), an essential method for quality management, is becoming ever more important. RPD was developed to decrease the degree of unexpected deviation from the requirements proposed by customers or a decision maker (DM) and thereby helps to improve the quality and reliability of products or manufacturing processes. The central idea of RPD is to build quality into the design process by identifying an optimal set of control factors that make the system impervious to variation [1]. The objectives of RPD are to ensure that the process mean is at the desired level and that the process variability is minimized. However, in reality, a simultaneous realization of those two objectives is sometimes not possible. As Myers et al. [2] stated, there are circumstances where the process variability is robust against the effects of noise factors but the mean value is still far from the target. In other words, a set of parameter values that satisfies these two conflicting objectives may not exist. Hence, the tradeoffs that exist between the process mean and variability are crucial in determining a set of controllable parameters that optimize quality performance.

The tradeoff issue between the process bias and variability can be addressed by assigning different weights or priority orders. Weight-based methods assign different weights to the process bias and variability to establish their relative importance and transform the bi-objective problem into a single-objective problem. The two most commonly applied weight-based methods are the mean square error model [3] and the weighted sum model [4,5]. Alternatively, priority-based methods sequentially assign priorities to the objectives (i.e., minimization of the process bias or variability). For instance, if the minimization of the process bias is prioritized, then the process variability is optimized under a constraint of zero process bias [6]. Other priority-based approaches are discussed by Myers and Carter [7], Copeland and Nelson [8], Lee et al. [9], and Shin and Cho [10]. In both weight-based and priority-based methods, the relative importance can be assigned according to the DM's preference, which is obviously subjective. Additionally, there are situations in which the DM could be unsure about the relative importance of the process parameters in bi-objective optimization problems.

Therefore, this paper aims to solve this tradeoff problem from a game theory point of view by integrating bargaining game theory into the RPD procedure. First, the process bias and variability are considered as two rational players in the bargaining game. The relationship functions for the process bias and variability are separately estimated by using the response surface methodology (RSM), and those estimated functions are regarded as utility functions that represent the players' preferences and objectives in this bargaining game. Second, a disagreement point, signifying a pair of values that the players expect to receive when the negotiation between them breaks down, is defined by using the minimax-value theory, which is often used as a decision rule in game theory. Third, the Nash bargaining solution technique is incorporated into the RPD model to obtain the optimal solutions. Then, to verify the efficiency of the obtained solutions, a lexicographic weighted Tchebycheff approach is used to generate the associated Pareto frontier so that it can be visually observed whether the obtained solutions lie on the Pareto frontier. Two numerical examples are conducted to show that the proposed model can efficiently locate well-balanced solutions. Finally, a series of sensitivity analyses are conducted to demonstrate the effects of the disagreement point value on the final agreed solutions.

This paper is laid out as follows: Section 2 discusses the existing literature on RPD and game theory applications. In Section 3, the dual response optimization problem, the lexicographic weighted Tchebycheff method, and the Nash bargaining solution are explained. Next, in Section 4, the proposed model is presented. Then, in Section 5, two numerical examples are addressed to show the efficiency of the proposed method, and sensitivity studies are performed to reveal the influence of disagreement point values on the solutions. In Section 6, conclusions and further research directions are discussed.

==2. Literature review ==

===2.1 Robust parameter design ===

Taguchi introduced both experimental design concepts and parameter tradeoff considerations into the quality design process. In addition, Taguchi developed an orthogonal-array-based experimental design and used the signal-to-noise (SN) ratio to measure the effects of factors on the desired output responses. As discussed by Leon et al. [11], in some situations the SN ratio is not independent of the adjustment parameters, so using the SN ratio as a performance measure may lead to design parameter settings that are far from optimal. Box [12] also argued that statistical analyses based on experimental data should be introduced, rather than relying only on the maximization of the SN ratio. The controversy about the Taguchi method is further discussed and addressed by Nair et al. [13] and Tsui [14].

Based on Taguchi's philosophy, further statistically based methods for RPD have been developed. Vining and Myers [6] introduced a dual response method, which takes zero process bias as a constraint and minimizes the variability. Copeland and Nelson [15] proposed an alternative method for the dual response problem by introducing a predetermined upper limit on the deviation from the target. Similar approaches related to the upper-limit concept are further discussed by Shin and Cho [10] and Lee et al. [9]. For the estimation phase, Shoemaker et al. [16] and Khattree [17] suggested the utilization of response surface model approaches. However, when the homoscedasticity assumption for regression is violated, other methods, such as the generalized linear model, can be applied [18]. Additionally, in cases where there are incomplete data, Lee and Park [19] suggested an expectation-maximization (EM) algorithm to provide an estimation of the process mean and variance, while Cho and Park [20] suggested a weighted least squares (WLS) method. However, Lin and Tu [3] pointed out that the dual response approach had some deficiencies and proposed an alternative method called the mean-squared-error (MSE) model. Jayaram and Ibrahim [21] modified the MSE model by incorporating capability indices and considered the minimization of the total deviation of capability indices to achieve a multiple-response robust design. More flexible alternative methods that can obtain Pareto optimal solutions based on a weighted sum model were introduced by many researchers [4,5,22]. In fact, the weighted sum model is more flexible than conventional dual response models, but it cannot be applied when the Pareto frontier is nonconvex [23]. In order to overcome this problem, Shin and Cho [23] proposed an alternative method called the lexicographic weighted Tchebycheff method, which uses an <math display="inline">L_{\infty}</math> norm.

More recently, RPD has become more widely used not only in manufacturing but also in other science and engineering areas, including pharmaceutical drug development. New approaches such as simulation, multiple optimization techniques, and neural networks (NN) have been integrated into RPD. For example, Le et al. [24] proposed a new RPD model by introducing an NN approach to estimate the dual response functions. Additionally, Picheral et al. [25] estimated the process bias and variance functions by using the propagation of variance method. Two new robust optimization methods, the gradient-assisted and quasi-concave gradient-assisted robust optimization methods, were presented by Mortazavi et al. [26]. Bashiri et al. [27] proposed a robust posterior preference method that introduced a modified robust estimation method to reduce the effects of outliers on function estimation and used a non-robustness distance to compare non-dominated solutions; however, the responses are assumed to be uncorrelated. To address the correlation among multiple responses and the variation of noise factors over time, Yang et al. [28] extended offline RPD to online RPD by applying Bayesian seemingly unrelated regression and time series models so that the set of optimal controllable factor values can be adjusted in real time.

===2.2 Game theory ===

The field of game theory presents mathematical models of strategic interactions among rational agents. These models can serve as analytical tools to find the optimal choices for interactional and decision-making problems. Game theory is often applied in situations where the "roles and actions of multiple agents affect each other" [29]. Thus, game theory serves as an analysis model that aims at helping agents make optimal decisions when the agents are rational and their decisions are interdependent. Because of this interdependence, each agent has to consider the other agents' possible decisions when formulating a strategy. Owing to these characteristics, game theory is widely applied in multiple disciplines, such as computer science [30], network security and privacy [31], cloud computing [32], cost allocation [33], and construction [34]. Because game theory has a degree of conceptual overlap with optimization and decision-making, the three concepts (i.e., game theory, optimization, and decision-making) are often combined. According to Sohrabi and Azgom [29], there are three basic combinations of these concepts: game theory and optimization; game theory and decision-making; and game theory, optimization, and decision-making.

The first type of combination (i.e., game theory and optimization) covers two possible situations. In the first situation, optimization techniques are used to solve a game problem and prove the existence of an equilibrium [35,36]. In the second situation, game theory concepts are integrated to solve an optimization problem. For example, Leboucher et al. [37] used evolutionary game theory to improve the performance of a particle swarm optimization (PSO) approach. Additionally, Annamdas and Rao [38] solved a multi-objective optimization problem by using a combination of game theory and a PSO approach. The second type of combination (i.e., game theory and decision-making) integrates game theory to solve a decision-making problem, as discussed by Zamarripa et al. [39], who applied game theory to assist with decision-making problems in supply chain bottlenecks. More recently, Dai et al. [40] attempted to integrate the Stackelberg leadership game into an RPD model to solve a dual response tradeoff problem. The third type of combination (i.e., game theory, optimization, and decision-making) integrates game theory and optimization into a decision-making problem. For example, a combination of linear programming and game theory was introduced to solve a decision-making problem [41]. Doudou et al. [42] used a convex optimization method and game theory to settle a wireless sensor network decision-making problem.

===2.3 Bargaining game===

A bargaining game can be applied in a situation where a set of agents have an incentive to cooperate but have conflicting interests over how to distribute the payoffs generated from the cooperation [43]. Hence, a bargaining game essentially has two features: cooperation and conflict. Because the bargaining game considers cooperation and conflicts of interest as a joint problem, it is more complicated than a simple cooperative game that ignores individual interests and maximizes the group benefit [44]. Three typical bargaining game examples include a price negotiation problem between product sellers and buyers, a union and firm negotiation problem over wages and employment levels, and a simple cake distribution problem.

Significant discussions of the bargaining game were presented by Nash [45,46]. Nash [45] presented a classical bargaining game model aimed at solving an economic bargaining problem and used a numerical example to prove the existence of solutions. In addition, Nash [46] extended his research to a more general form and demonstrated that there are two possible approaches to solving a two-person cooperative bargaining game. The first approach, called the negotiation model, obtains the solution through an analysis of the negotiation process. The second approach, called the axiomatic method, solves a bargaining problem by specifying axioms or properties that the solution should satisfy. For the axiomatic method, Nash specified four axioms that the agreed solution, called the Nash bargaining solution, should possess. Based on Nash's philosophy, many researchers attempted to modify Nash's model and proposed a number of different solutions based on different axioms. One famous modified model replaces one of Nash's axioms in order to reach a fairer unique solution, which is called the Kalai-Smorodinsky solution [47]. Later, Rubinstein [48] addressed a bargaining problem by specifying a dynamic model that explains the bargaining procedure.

==3. Models and methods ==

===3.1 Bi-objective robust design model===

A general bi-objective optimization problem involves the simultaneous optimization of two conflicting objectives (e.g., <math>f_1({\boldsymbol{\text{x}}})</math> and <math>f_2({\boldsymbol{\text{x}}})</math>) and can be described in mathematical terms as <math>\min[f_1({\boldsymbol{\text{x}}}), f_2({\boldsymbol{\text{x}}})]</math>. The primary objective of RPD is to minimize the deviation of the performance of the production process from the target value as well as the variability of the performance, where the performance deviation can be represented by the process bias and the performance variability can be represented by the standard deviation or variance. For example, Koksoy [49], Goethals and Cho [50], and Wu and Chyu [51] utilized estimated variance functions to represent process variability. On the other hand, Shin and Cho [10,52] and Tang and Xu [53] used estimated standard deviation functions to measure process variability. Steenackers and Guillaume [54] discussed the effect of different response surface expressions on the optimal solutions and concluded that both the standard deviation and the variance can capture the process variability well but can lead to different optimal solution sets. Since it can be infeasible to minimize the process bias and variability simultaneously, the simultaneous optimization of these two process parameters, which are separately estimated by applying RSM, is transformed into a tradeoff problem between the process bias and variability. This tradeoff problem can be formally expressed as a bi-objective optimization problem [23] as:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>min</math>
| <math display="inline">{\left[ {\left\{ \hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau \right\} }^{2},\, {\hat{\sigma }}^{2}({\boldsymbol{\text{x}}})\right] }^{T}</math> 
|-
|<math>s.t.</math>
|<math display="inline"> {\boldsymbol{\text{x}}}\in X\,</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (1)
|}

where <math display="inline">{\boldsymbol{\text{x}}}</math>, <math display="inline">X</math>, <math display="inline">\tau</math>, <math display="inline">{\left\{ \hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau \right\} }^{2}</math>, and <math display="inline">{\hat{\sigma }}^{2}({\boldsymbol{\text{x}}})</math> represent a vector of design factors, the set of feasible solutions under specified constraints, the target process mean value, and the estimated functions for process bias and variability, respectively.
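
To make this formulation concrete, the two objectives in Equation (1) can be written as ordinary functions of the controllable factors once the response surface models have been fitted. The following minimal Python sketch uses placeholder coefficients for a two-factor problem; the values are illustrative only and are not estimates from any experiment in this paper.

<syntaxhighlight lang="python">
import numpy as np

# Placeholder RSM coefficients for a two-factor problem (illustrative only).
a0, a1 = 50.0, np.array([1.2, -0.8])      # intercept and linear terms of the mean model
G = np.array([[0.6, 0.1],
              [0.1, 0.9]])                # quadratic terms of the mean model
b0, b1 = 4.0, np.array([0.3, 0.5])        # intercept and linear terms of the variance model
D = np.array([[0.2, 0.05],
              [0.05, 0.4]])               # quadratic terms of the variance model
tau = 52.0                                # assumed target value for the process mean

def process_bias(x):
    """First objective in Equation (1): {mu_hat(x) - tau}^2."""
    mu_hat = a0 + x @ a1 + x @ G @ x
    return (mu_hat - tau) ** 2

def process_variability(x):
    """Second objective in Equation (1): the estimated variance sigma_hat^2(x)."""
    return b0 + x @ b1 + x @ D @ x
</syntaxhighlight>
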
===3.2 Lexicographic weighted Tchebycheff method===

A bi-objective robust design problem is generally addressed by introducing a set of parameters, determined by a DM, which represents the relative importance of the two objectives. With the introduced parameters, the bi-objective functions can be transformed into a single integrated function, and thus the bi-objective optimization problem can be solved by simply optimizing the integrated function. One way to construct this integrated function is to use the weighted sum of the distances between the objective function values and a reference (utopia) point. Different ways of measuring distance can lead to different solutions, and one of the most common measures is the <math display="inline">{L}_{p}</math> metric, where <math>p=1,2,\, \mbox{or}\, \infty</math>. When <math>p=1</math>, the metric is called the Manhattan metric, whereas when <math display="inline">p=\infty</math>, it is called the Tchebycheff metric [47]. The utopia point serves as the reference point for applying the <math>L_{\infty}</math> metric in the weighted Tchebycheff method and can be obtained by minimizing each objective function separately. Weak Pareto optimal solutions can be obtained by introducing different weights:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">\mathrm{min}\,\left(\sum _{i=1}^{p}{w}_{i}\left| {f}_{i}\left( \boldsymbol{\text{x}}\right) -{u}_{i}^{\ast }\right| ^{p}\right)^{\frac{1}{p}}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (2)
|}

where <math display="inline">{u}_{i}^{\ast }</math> and <math display="inline">{w}_{i}</math> denote the utopia point values and the weights associated with the objective functions, respectively. When <math>p=\infty</math>, the above function (i.e., Equation 2) only considers the largest deviation. Although the weighted Tchebycheff method is an efficient approach, its main drawback is that only weakly non-dominated solutions can be guaranteed [56], which is not desirable for the DM. Therefore, Steuer and Choo [57] introduced an interactive weighted Tchebycheff method, which can generate every non-dominated point provided that the weights are selected appropriately. Shin and Cho [23] introduced the lexicographic weighted Tchebycheff method to the RPD area. This method is proven to be efficient and capable of generating all Pareto optimal solutions when the process bias and variability are treated as a bi-objective problem. The mathematical model is shown below [23]:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>min</math>
| <math display="inline">\mathrm{\, \, \, \, }\left[\,\xi ,\, \left[ {\left\{ \hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau \right\} }^{2}-{u}_{1}^{\ast }\right] +\left[{\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}\right) -{u}_{2}^{\ast }\right]\right]</math>
|-
|<math>s.t.</math>
|<math> \, \lambda \left[{\left\{ \hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau \right\} }^{2}-{u}_{1}^{\ast }\right]\leq \xi</math>
|-
|
|<math>\left( 1-\lambda \right) \left[{\hat{\sigma }}^{2}\left({\boldsymbol{\text{x}}}\right) -{u}_{2}^{\ast }\right]\leq \xi</math>
|-
|
|<math>{\boldsymbol{\text{x}}}\in X</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (3)
|}

where <math>\xi</math> and <math>\lambda</math> represent a non-negative variable and the weight assigned to the process bias (with <math>1-\lambda</math> assigned to the process variability), respectively. The lexicographic weighted Tchebycheff method is utilized as a verification method in this paper.

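As an illustration of how Equation (3) can be solved in practice, the sketch below performs the two-stage (lexicographic) optimization with SciPy for generic objective callables. It is only one possible realization under stated assumptions (generic <code>f1</code>, <code>f2</code>, and a feasibility function <code>g</code>); the paper itself does not report an implementation.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def lex_weighted_tchebycheff(f1, f2, u1_star, u2_star, lam, x0, g):
    """Two-stage (lexicographic) weighted Tchebycheff solution of Equation (3).

    f1, f2           : objective functions (process bias and variability)
    u1_star, u2_star : utopia-point values of f1 and f2
    lam              : weight on f1 (1 - lam is placed on f2)
    x0               : starting point for the controllable factors
    g                : feasibility function, g(x) >= 0 encodes x in X
    """
    n = len(x0)
    z0 = np.append(x0, 1.0)  # decision vector z = [x, xi]

    cons = [
        NonlinearConstraint(lambda z: z[-1] - lam * (f1(z[:n]) - u1_star), 0, np.inf),
        NonlinearConstraint(lambda z: z[-1] - (1 - lam) * (f2(z[:n]) - u2_star), 0, np.inf),
        NonlinearConstraint(lambda z: g(z[:n]), 0, np.inf),
    ]

    # Stage 1: minimize the Tchebycheff variable xi.
    res1 = minimize(lambda z: z[-1], z0, constraints=cons)
    xi_opt = res1.x[-1]

    # Stage 2: keep xi at (or below) its optimum and minimize the sum of the
    # deviations, which screens out weakly non-dominated points.
    cons2 = cons + [NonlinearConstraint(lambda z: z[-1], -np.inf, xi_opt)]
    res2 = minimize(lambda z: (f1(z[:n]) - u1_star) + (f2(z[:n]) - u2_star),
                    res1.x, constraints=cons2)
    return res2.x[:n]
</syntaxhighlight>
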
===3.3 Nash bargaining solution===

A two-player bargaining game can be represented by a pair <math display="inline">\, (U,d)</math>, where <math display="inline">U\subset {R}^{2}</math> and <math display="inline">d\in {R}^{2}</math>. <math display="inline">U=({u}_{1}({\boldsymbol{\text{x}}}){,u}_{2}({\boldsymbol{\text{x}}}))</math> denotes a pair of obtainable payoffs of the two players, where <math display="inline">{u}_{1}({\boldsymbol{\text{x}}})</math> and <math display="inline">{\, u}_{2}\left({\boldsymbol{\text{x}}}\right) \,</math> represent the utility functions for players 1 and 2, respectively, and <math display="inline">{\boldsymbol{\text{x}}}{\, =}{(}{x}_{1},\, {x}_{2})\,</math> denotes a vector of actions taken by the players. <math display="inline">{d=(d}_{1},{d}_{2})</math>, defined as the disagreement point, represents the payoffs that each player will gain from this game when the two players fail to reach a satisfactory agreement. In other words, the disagreement point values are the payoffs that each player can expect to receive if the negotiation breaks down. Assuming <math display="inline">{u}_{i}({\boldsymbol{\text{x}}})>{d}_{i}</math>, where <math display="inline">{u}_{i}({\boldsymbol{\text{x}}})\in U</math> for <math display="inline">\, i\, =1,2</math>, the set <math display="inline">U\cap \left\{ \left( {u}_{1}({\boldsymbol{\text{x}}}),{u}_{2}({\boldsymbol{\text{x}}})\right) \in \, {R}^{2}:\, {u}_{1}({\boldsymbol{\text{x}}})\geq {d}_{1};\, {u}_{2}({\boldsymbol{\text{x}}})\geq {d}_{2}\right\}</math> is non-empty. As suggested by the expression of the Nash bargaining game <math display="inline">(U, d)</math>, the Nash bargaining solution is affected by both the reachable utility range (<math display="inline">U</math>) and the disagreement point value (<math display="inline">d</math>). Since <math display="inline">U</math> cannot be changed, rational players will choose a disagreement point value that optimizes their bargaining position. According to Myerson [59], there are three possible ways to determine the value of a disagreement point. One standard way is to calculate the minimax value for each player.
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|<math>{d}_{1}=\mathrm{min}\,\mathrm{max}\,{u}_{1}({x}_{1},{x}_{2})\,\,and\,\,{d}_{2}=\mathrm{min}\,\mathrm{max}\,{u}_{2}({x}_{1},{x}_{2})</math>
|(4)
|}

To be more specific, Equation 4 states that, for each possible action of player 2, player 1 has a corresponding best response strategy. Then, among all those best response strategies, player 1 chooses the one that returns the minimum payoff, which is defined as the disagreement point value. Following this logic, player 1 is guaranteed an acceptable payoff. Another possibility is to derive the disagreement point value as an effective and rational threat that ensures the establishment of an agreement. The last possibility is to set the disagreement point as the focal equilibrium of the game.

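For a finite game given in payoff-matrix form, the minimax calculation in Equation (4) reduces to a maximum over the player's own actions followed by a minimum over the opponent's actions. The small sketch below uses an arbitrary illustrative payoff matrix (not taken from the paper):

<syntaxhighlight lang="python">
import numpy as np

# u1[i, j] = payoff to player 1 when player 1 plays action i and player 2 plays action j.
u1 = np.array([[3.0, 1.0],
               [2.0, 4.0]])

# For each action j of player 2, player 1 best-responds (max over i); the
# disagreement value d1 is the worst of those best responses (min over j).
d1 = u1.max(axis=0).min()
print(d1)  # 3.0 for this illustrative matrix
</syntaxhighlight>
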
Nash proposed four axioms that the bargaining game solution should satisfy [58,59]:

:* Pareto optimality

:* Independence of equivalent utility representation (IEUR)

:* Symmetry

:* Independence of irrelevant alternatives (IIA)

The first axiom states that the solution should be Pareto optimal, which means it should not be dominated by any other point. If the notation <math display="inline">f\left( U,d\right) =\left( {f}_{1}\left( U,d\right) ,\, {f}_{2}\left( U,d\right) \right) \,</math> stands for the Nash bargaining solution to the bargaining problem <math display="inline">(U,d)</math>, then the solution <math display="inline">{u}^{\ast }=\left( {u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\mathit{\boldsymbol{\ast }}}\right) ,{u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right) \right)</math> is Pareto efficient if and only if there exists no other point <math display="inline">{u}^{'}=\left( {u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\mathit{\boldsymbol{'}}}\right) ,\, {u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{'}}\right) \right) \in U</math> such that <math display="inline">{u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{'}}\right) >{u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right) ;{u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\mathit{\boldsymbol{'}}}\right) \geq {u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right)</math> or <math display="inline">{\, u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{'}}\right) \geq {u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right) ;</math> <math display="inline">{u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{'}}\right) >{u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right)</math>. This implies that there is no alternative feasible solution that is better for one player without worsening the payoff of the other player.

The second axiom, IEUR, also referred to as scale covariance, states that the solution should be independent of positive affine transformations of the utilities. In other words, if a new bargaining game <math display="inline">(G,w)</math> exists, where <math>G=\{ {\alpha }_{1}{u}_{1}({\boldsymbol{\text{x}}})+ {\beta }_{1},{\alpha }_{2}{u}_{2}({\boldsymbol{\text{x}}})+{\beta }_{2}\}</math> and <math>w=({\alpha }_{1}{d}_{1}+{\beta }_{1},{\alpha }_{2}{d}_{2}+{\beta }_{2})\,</math>, with <math>\bigl({u}_{1}({\boldsymbol{\text{x}}}),{u}_{2}({\boldsymbol{\text{x}}})\bigr)\in U</math> and <math> {\alpha }_{1}>0,{\alpha }_{2}>0</math>, then the solution for this new bargaining game (i.e., <math display="inline">f(G,w)</math>) can be obtained by applying the same transformations, as demonstrated by Equation 5 and Figure 1:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|<math>f(G,\, w)=({\alpha }_{1}{f}_{1}(U,d)+{\beta }_{1},\, {\alpha }_{2}{f}_{2}(U,\, d)+{\beta }_{2})</math>
|(5)
|}

[[File:Dail2.png|alt=Figure 1.   Explanation of IEUR axiom|centre|thumb|354x354px|'''Figure 1'''.   Explanation of IEUR axiom]]

The third axiom, symmetry, states that the solutions should be symmetric when the bargaining positions of the two players are completely symmetric. In other words, if there is no information that can be used to distinguish one player from the other, then the solutions should also be indistinguishable between the players [46].

As shown in Figure 2, the last axiom states that if <math display="inline">{U}_{1}\subset {U}_{2}</math> and <math display="inline">f({U}_{2},d)</math> is located within the feasible area <math display="inline">{U}_{1}</math>, then <math display="inline">f\left( {U}_{1},d\right) =f({U}_{2},d)</math> [59]. <div class="center" style="width: auto; margin-left: auto; margin-right: auto;">[[File:Draft_Shin_691882792-image2.png|centre|thumb|374x374px|'''Figure 2.''' Explanation of IIA axiom]]</div>

The solution function introduced by Nash [46] that satisfies all four axioms identified above can be defined as follows:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">f\left( U,\, d\right) =Max\prod _{i=1,2}^{}({u}_{i}({\boldsymbol{\text{x}}})-{d}_{i})=Max\, ({u}_{1}({\boldsymbol{\text{x}}})-{d}_{1})({u}_{2}({\boldsymbol{\text{x}}})-{d}_{2})</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (6)
|}

where <math display="inline">{u}_{i}({\boldsymbol{\text{x}}})>{d}_{i},\, i=1,2</math>. Intuitively, this function selects an agreement point <math display="inline">({u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\ast }\right) ,{u}_{2}({{\boldsymbol{\text{x}}}}^{\ast }))</math> that maximizes the product of the players' utility gains over the disagreement point <math display="inline">\, ({d}_{1},{d}_{2})</math>.

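To make Equation (6) concrete, consider a stylized surplus-splitting game in which the players divide one unit of utility, so that <math display="inline">{u}_{1}=s</math> and <math display="inline">{u}_{2}=1-s</math> for a share <math display="inline">s</math>, with an illustrative disagreement point <math display="inline">d=(0.2,\,0.3)</math>. These numbers are hypothetical and only serve to demonstrate the calculation:

<syntaxhighlight lang="python">
from scipy.optimize import minimize_scalar

d1, d2 = 0.2, 0.3                     # illustrative disagreement payoffs

def neg_nash_product(s):
    u1, u2 = s, 1.0 - s               # utilities from splitting one unit of surplus
    return -(u1 - d1) * (u2 - d2)     # negate because SciPy minimizes

res = minimize_scalar(neg_nash_product, bounds=(d1, 1.0 - d2), method="bounded")
print(res.x)  # approximately 0.45, i.e., s* = (1 + d1 - d2) / 2
</syntaxhighlight>

The optimizer splits the surplus above the disagreement point equally between the two players, which is the usual interpretation of the Nash bargaining solution for this kind of game.
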
== 4. The proposed model ==

The proposed method integrates bargaining game concepts into the tradeoff issue between the process bias and variability, so that not only the interaction between the process bias and variability can be incorporated but also a unique optimal solution can be obtained. The detailed procedure, which includes the problem description, the calculation of the response functions and the disagreement point, the bargaining game based RPD model, and the verification, is illustrated in Figure 3. As illustrated in Figure 3, the objective of the proposed method is to address the tradeoff between the process bias and variability. In the calculation phase, a utopia point is calculated based on the separately estimated functions for the process bias and variability. However, this utopia point lies in an infeasible region, which means that a simultaneous minimization of the process bias and variability is unachievable. The disagreement point is calculated by, first, optimizing only one of the objective functions (i.e., the estimated process variability or process bias function) to obtain a solution set and, second, inserting the obtained solution set into the other objective function to generate the corresponding value. In the proposed model, the Nash bargaining solution concept is then applied to solve the bargaining game based on the obtained disagreement point. In the verification phase, the lexicographic weighted Tchebycheff method is applied to generate the associated Pareto frontier, so that the obtained game solution can be compared with other efficient solutions.

The integration of the Nash bargaining game model involves three steps. In the first step, the two players and their corresponding utility functions are defined. The process bias is defined as player A, and the variability is regarded as player B. The RSM-based estimated functions of both responses are regarded as the players' utility functions in this bargaining game (i.e., <math display="inline">{\, u}_{A}\left( {\boldsymbol{\text{x}}}\right) \,</math> and <math display="inline">{\, u}_{B}\left({\boldsymbol{\text{x}}}\right)</math>), where <math>{\boldsymbol{\text{x}}}</math> stands for a vector of controllable factors. The goal of each player is then to choose a set of controllable factors that minimizes its individual utility function. In the second step, a disagreement point is determined by applying the minimax-value concept introduced in Section 3.3. Based on the tradeoff between the process bias and variability, the modified disagreement point functions can be defined as follows:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">{d}_{A}=\mathrm{max}\,\mathrm{min}\,{u}_{A}({\boldsymbol{\text{x}}})\,\,and\,\,{d}_{B}=\mathrm{max}\,\mathrm{min}\,{u}_{B}({\boldsymbol{\text{x}}})</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (7)
|}

In this way, both player A (i.e., the process bias) and player B (i.e., the process variability) are guaranteed to receive no less than their worst acceptable payoffs. In this case, the disagreement point, defined by the maximum of the minimum utility values, can be calculated by minimizing only one objective (the process variability or the process bias) at a time. The computational functions for the disagreement point values can be formulated as:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">\left\{ {d}_{A}={u}_{A}({\boldsymbol{\text{x}}})|{\boldsymbol{\text{x}}}=\arg\min{u}_{B}({\boldsymbol{\text{x}}})\,and\,{\boldsymbol{\text{x}}}\in X\right\} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (8)
|}

and

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">\left\{ {d}_{B}={u}_{B}({\boldsymbol{\text{x}}})|{\boldsymbol{\text{x}}}=\arg\min{u}_{A}({\boldsymbol{\text{x}}})\,and\,{\boldsymbol{\text{x}}}\in X\right\} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (9)
|}

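Equations (8) and (9) translate directly into two auxiliary minimizations: minimize one player's utility over the feasible region and then evaluate the other player's utility at that minimizer. The sketch below is a generic SciPy realization under the same assumptions as the earlier sketches (generic callables <code>u_A</code>, <code>u_B</code>, and a feasibility function <code>g</code>); it is not the authors' implementation.

<syntaxhighlight lang="python">
from scipy.optimize import minimize, NonlinearConstraint

def disagreement_point(u_A, u_B, x0, g):
    """Compute (d_A, d_B) according to Equations (8) and (9).

    u_A, u_B : utility functions of player A (process bias) and player B (variability)
    x0       : starting point for the controllable factors
    g        : feasibility function, g(x) >= 0 encodes x in X
    """
    feas = NonlinearConstraint(g, 0, float("inf"))

    # Equation (8): minimize player B's utility, then evaluate player A's utility there.
    x_B = minimize(u_B, x0, constraints=[feas]).x
    d_A = u_A(x_B)

    # Equation (9): minimize player A's utility, then evaluate player B's utility there.
    x_A = minimize(u_A, x0, constraints=[feas]).x
    d_B = u_B(x_A)

    return d_A, d_B
</syntaxhighlight>
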
[[File:New2.png|centre|thumb|820x820px|'''Figure 3.''' The proposed procedure by integrating of bargaining game into RPD]]

Thus, the idea of the proposed method for finding the optimal solutions is to continuously perform bargaining games from the specified disagreement point <math display="inline">({d}_{A} ,\, {d}_{B})</math> toward the Pareto frontier, as illustrated in Figure 4. To be more specific, as demonstrated in Figure 4, if the convex curve represents all Pareto optimal solutions, then each point on the curve can be regarded as a minimum utility value for one of the two process parameters (i.e., the process variability or bias). For example, at point A, when the process bias is minimized within the feasible area, the corresponding variability value is the minimum utility value for the process variability, since other utility values would be either dominated or infeasible. These solutions may provide useful insight for a DM when the relative importance of the process bias and variability is difficult to identify.

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">[[File:New.png|centre|thumb|391x391px|'''Figure 4'''. Solution concepts for the proposed bargaining game based RPD method integrating the tradeoff between the process bias and variability]]</div>

In the final step, the Nash bargaining solution function <math display="inline">Max\left( {u}_{A}({\boldsymbol{\text{x}}})-{d}_{A}\right) \left( {u}_{B}({\boldsymbol{\text{x}}})-{d}_{B}\right)</math> is utilized. In an RPD problem, the objective is to minimize both the process bias and variability, so the constraint <math display="inline">{u}_{i}({\boldsymbol{\text{x}}})\leq {d}_{i},\, i=A,B</math> is applied. After the players' utility functions and the disagreement point are identified, the Nash bargaining solution function is applied as below:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>max </math>
| <math display="inline"> \left( {u}_{A}({\boldsymbol{\text{x}}})-{d}_{A}\right) \left( {u}_{B}({\boldsymbol{\text{x}}})-{d}_{B}\right) </math>
|-
|<math>s.t. </math>
|<math>{u}_{A}({\boldsymbol{\text{x}}})\leq {d}_{A},\,{u}_{B}({\boldsymbol{\text{x}}})\leq {d}_{B},\,and \, {\boldsymbol{\text{x}}}\in \,X</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (10)
|}

where

{| style="text-align: center; margin:auto;"  
|-
|<math>u_A({\boldsymbol{\text{x}}})=\bigl(\widehat{\mu}({\boldsymbol{\text{x}}})-\tau\bigr)^2,\,u_B({\boldsymbol{\text{x}}})=\hat{\sigma}^2({\boldsymbol{\text{x}}})\,or\,\hat{\sigma}({\boldsymbol{\text{x}}})</math>
|-
|<math>\hat{\mu}({\boldsymbol{\text{x}}})=\alpha_0+{\boldsymbol{\text{x}}}^T\boldsymbol{\alpha_1}+{\boldsymbol{\text{x}}}^T\boldsymbol{\Gamma}{\boldsymbol{\text{x}}},\,and\,\hat{\sigma}^2({\boldsymbol{\text{x}}})=\beta_0+{\boldsymbol{\text{x}}}^T\boldsymbol{\beta_1}+{\boldsymbol{\text{x}}}^T\Delta{\boldsymbol{\text{x}}}</math>
|-
|<math>\hat{\sigma}({\boldsymbol{\text{x}}})=\gamma_0+{\boldsymbol{\text{x}}}^T\boldsymbol{\gamma_1}+{\boldsymbol{\text{x}}}^T\Epsilon{\boldsymbol{\text{x}}}</math>
|}

and where

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">{\boldsymbol{\text{x}}}=\left[ \, \begin{matrix}{x}_{1}\\{x}_{2}\\\, \begin{matrix}\vdots \\{x}_{n-1}\\{x}_{n}\end{matrix}\end{matrix}\right], \,{\mathit{\boldsymbol{\alpha }}}_{\mathit{\boldsymbol{1}}}=\left[ \, \begin{matrix}{\hat{\alpha }}_{1}\\{\hat{\alpha }}_{2}\\\, \begin{matrix}\vdots \\{\hat{\alpha }}_{n-1}\\{\hat{\alpha }}_{n}\end{matrix}\end{matrix}\right],\,{\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{1}}}=\, \left[ \begin{matrix}{\hat{\beta }}_{1}\\\begin{matrix}{\hat{\beta }}_{2}\\\vdots \end{matrix}\\\begin{matrix}{\hat{\beta }}_{n-1}\\{\hat{\beta }}_{n}\end{matrix}\end{matrix}\right], \,{\mathit{\boldsymbol{\gamma }}}_{\mathit{\boldsymbol{1}}}=\, \left[ \begin{matrix}{\hat{\gamma }}_{1}\\\begin{matrix}{\hat{\gamma }}_{2}\\\vdots \end{matrix}\\\begin{matrix}{\hat{\gamma }}_{n-1}\\{\hat{\gamma }}_{n}\end{matrix}\end{matrix}\right],and\, \boldsymbol{\Gamma}=\, \left[ \begin{matrix}\begin{matrix}{\hat{\alpha }}_{11}&{\hat{\alpha }}_{12}/2\\{\hat{\alpha }}_{12}/2&{\hat{\alpha }}_{22}\end{matrix}&\cdots &\begin{matrix}{\hat{\alpha }}_{1n}/2\\{\hat{\alpha }}_{2n}/2\end{matrix}\\\vdots &\ddots &\vdots \\\begin{matrix}{\hat{\alpha }}_{1n}/2&{\hat{\alpha }}_{2n}/2\end{matrix}&\cdots &{\hat{\alpha }}_{nn}\end{matrix}\right] </math>
|}
|-
|<math>\boldsymbol{\Delta}=\, \left[ \begin{matrix}\begin{matrix}{\hat{\beta }}_{11}&{\hat{\beta }}_{12}/2\\{\hat{\beta }}_{12}/2&{\hat{\beta }}_{22}\end{matrix}&\cdots &\begin{matrix}{\hat{\beta }}_{1n}/2\\{\hat{\beta }}_{2n}/2\end{matrix}\\\vdots &\ddots &\vdots \\\begin{matrix}{\hat{\beta }}_{1n}/2&{\hat{\beta }}_{2n}/2\end{matrix}&\cdots &{\hat{\beta }}_{nn}\end{matrix}\right],\,and \,\boldsymbol{\Epsilon}=\, \left[ \begin{matrix}\begin{matrix}{\hat{\gamma }}_{11}&{\hat{\gamma }}_{12}/2\\{\hat{\gamma }}_{12}/2&{\hat{\gamma }}_{22}\end{matrix}&\cdots &\begin{matrix}{\hat{\gamma }}_{1n}/2\\{\hat{\gamma}}_{2n}/2\end{matrix}\\\vdots &\ddots &\vdots \\\begin{matrix}{\hat{\gamma}}_{1n}/2&{\hat{\gamma }}_{2n}/2\end{matrix}&\cdots &{\hat{\gamma }}_{nn}\end{matrix}\right]</math>
|}

where <math>(d_A, d_B)</math>, <math>u_A({\boldsymbol{\text{x}}})</math>, <math>u_B({\boldsymbol{\text{x}}})</math>, <math>\hat{\mu}({\boldsymbol{\text{x}}})</math>, <math>\hat{\sigma}^2({\boldsymbol{\text{x}}})</math>, <math>\hat{\sigma}({\boldsymbol{\text{x}}})</math>, <math>\tau</math>, <math>X</math>, and '''x''' represent a disagreement point, the utility functions for players A and B, the estimated process mean, variance, and standard deviation functions, the target value, the feasible area, and the vector of controllable factors, respectively. In Equation (10), <math>\boldsymbol{\alpha_1} </math>, <math>\boldsymbol{\beta_1} </math>, <math>\boldsymbol{\gamma_1} </math>, <math>\boldsymbol{\Gamma} </math>, <math>\boldsymbol{\Delta} </math>, and <math>\boldsymbol{\Epsilon} </math> denote the vectors and matrices of estimated regression coefficients for the process mean, variance, and standard deviation, respectively. Here, the constraint <math>u_i({\boldsymbol{\text{x}}})\leq d_i </math>, where <math>i=A,B </math>, ensures that the obtained agreement point payoffs will be at least as good as the disagreement point payoffs; otherwise, there is no reason for the players to participate in the negotiation.

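Continuing the earlier sketches, Equation (10) can be handed to a constrained nonlinear solver by negating the Nash product. The version below is a hedged illustration under the same assumptions (generic callables and a feasibility function <code>g</code>), not the implementation used by the authors:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def nash_bargaining_rpd(u_A, u_B, d_A, d_B, x0, g):
    """Solve Equation (10): maximize (u_A(x) - d_A)(u_B(x) - d_B)
    subject to u_A(x) <= d_A, u_B(x) <= d_B, and x in X (encoded as g(x) >= 0)."""
    cons = [
        NonlinearConstraint(lambda x: d_A - u_A(x), 0, np.inf),
        NonlinearConstraint(lambda x: d_B - u_B(x), 0, np.inf),
        NonlinearConstraint(g, 0, np.inf),
    ]
    # Negate the Nash product because SciPy minimizes.
    res = minimize(lambda x: -(u_A(x) - d_A) * (u_B(x) - d_B), x0, constraints=cons)
    return res.x, u_A(res.x), u_B(res.x)
</syntaxhighlight>

Because both factors are non-positive under the constraints, maximizing their product is equivalent to maximizing the product of the reductions <math display="inline">({d}_{A}-{u}_{A}({\boldsymbol{\text{x}}}))({d}_{B}-{u}_{B}({\boldsymbol{\text{x}}}))</math> achieved relative to the disagreement point.
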
== 5. Numerical illustrations and sensitivity analysis ==

===5.1 Numerical example 1===

Two numerical examples are conducted to demonstrate the efficiency of the proposed method. As explained in Section 3.1, the process variability can be measured in terms of either the estimated standard deviation or the estimated variance function, but the optimal solutions can differ depending on which response surface expression is used. Therefore, the equations estimated in the original example are utilized for better comparison. Example 1 investigates the relationship between the coating thickness of bare silicon wafers (<math>y </math>) and three control variables: mould temperature <math display="inline">({x}_{1})</math>, injection flow rate <math display="inline">({x}_{2})</math>, and cooling rate <math display="inline">{(x}_{3})</math> [10]. A central composite design with replicated observations at each design point was conducted, and the detailed experimental data with coded values are shown in Table 1.

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 1.''' Data for numerical example 1</div>

{| class="wikitable" style="margin: 1em auto 0.1em auto; text-align: center;"
|-
! Experiment number !! <math>x_1</math> !! <math>x_2</math> !! <math>x_3</math> !! <math>y_1</math> !! <math>y_2</math> !! <math>y_3</math> !! <math>y_4</math> !! <math>\overline{y}</math> !! <math>\sigma</math>
|-
| 1 || -1 || -1 || -1 || 76.30 || 80.50 || 77.70 || 81.10 || 78.90 || 2.28
|-
| 2 || 1 || -1 || -1 || 79.10 || 81.20 || 78.80 || 79.60 || 79.68 || 1.07
|-
| 3 || -1 || 1 || -1 || 82.50 || 81.50 || 79.50 || 80.90 || 81.10 || 1.25
|-
| 4 || 1 || 1 || -1 || 72.30 || 74.30 || 75.70 || 72.70 || 73.75 || 1.56
|-
| 5 || -1 || -1 || 1 || 70.60 || 72.70 || 69.90 || 71.50 || 71.18 || 1.21
|-
| 6 || 1 || -1 || 1 || 74.10 || 77.90 || 76.20 || 77.10 || 76.33 || 1.64
|-
| 7 || -1 || 1 || 1 || 78.50 || 80.00 || 76.20 || 75.30 || 77.50 || 2.14
|-
| 8 || 1 || 1 || 1 || 84.90 || 83.10 || 83.90 || 83.50 || 83.85 || 0.77
|-
| 9 || -1.682 || 0 || 0 || 74.10 || 71.80 || 72.50 || 71.90 || 72.58 || 1.06
|-
| 10 || 1.682 || 0 || 0 || 76.40 || 78.70 || 79.20 || 79.30 || 78.40 || 1.36
|-
| 11 || 0 || -1.682 || 0 || 79.20 || 80.70 || 81.00 || 82.30 || 80.80 || 1.27
|-
| 12 || 0 || 1.682 || 0 || 77.90 || 76.40 || 76.90 || 77.40 || 77.15 || 0.65
|-
| 13 || 0 || 0 || -1.682 || 82.40 || 82.70 || 82.60 || 83.10 || 82.70 || 0.29
|-
| 14 || 0 || 0 || 1.682 || 79.70 || 82.40 || 81.00 || 81.20 || 81.08 || 1.11
|-
| 15 || 0 || 0 || 0 || 70.40 || 70.60 || 70.80 || 71.10 || 70.73 || 0.30
|-
| 16 || 0 || 0 || 0 || 70.90 || 69.70 || 69.00 || 69.90 || 69.88 || 0.78
|-
| 17 || 0 || 0 || 0 || 70.70 || 71.90 || 71.70 || 71.20 || 71.38 || 0.54
|-
| 18 || 0 || 0 || 0 || 70.20 || 71.00 || 71.50 || 70.40 || 70.78 || 0.59
|-
| 19 || 0 || 0 || 0 || 71.50 || 71.10 || 71.20 || 70.00 || 70.95 || 0.66
|-
| 20 || 0 || 0 || 0 || 71.00 || 70.40 || 70.90 || 69.90 || 70.55 || 0.51
|}

The fitted response functions for the process mean and standard deviation of the coating thickness are estimated by the least squares method (LSM) using the MINITAB software package as:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\hat{\mu }\left({\boldsymbol{\text{x}}}\right) =\,72.21+\,{{\boldsymbol{\text{x}}}}^{T}{\boldsymbol{\alpha }}_{\boldsymbol{1}}\mathit{\boldsymbol{+\, }}{{\boldsymbol{\text{x}}}}^{T}\boldsymbol{\Gamma \text{x}} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (11)
|}

where

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>{\boldsymbol{\alpha }}_{1}=\, \left[ \begin{matrix}0.59\\-0.35\\-0.01\end{matrix}\right],  and  \quad \boldsymbol\Gamma=\, \left[ \begin{matrix}0.28&0.045&0.83\\0.045&1.29&0.755\\0.83&0.755&1.85\end{matrix}\right] </math>
|}
|}

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">{\hat{\sigma }}\left( {\boldsymbol{\text{x}}}\right)=\, 2.55\,+ {\boldsymbol{\text{x}}}^T\boldsymbol{\gamma_1}+{\boldsymbol{\text{x}}}^T\Epsilon{\boldsymbol{\text{x}}} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (12)
|}

where

{| style="text-align: center; margin:auto;"  
512
|-
513
|<math>\boldsymbol\gamma_1=\, \left[ \begin{matrix}0.38\\-0.43\\0.56\end{matrix}\right],  and  \quad \boldsymbol\mathrm{E}=\, \left[ \begin{matrix}0.49&-0.235&0.36\\-0.235&0.61&-0.06\\0.36&-0.06&0.85\end{matrix}\right] </math>
514
|}Based on the proposed RPD procedure as described in Figure 3, those two functions (i.e., process bias and standard deviation) as shown in Equations (11) and (12) are regarded as two players and also their associated utility functions in the bargaining game.  The disagreement point as shown in Figure 4 can be computed as <math display="inline">d=({d}_{\mathit{\boldsymbol{A}}}</math>, <math display="inline">{d}_{\mathit{\boldsymbol{B\, }}})</math>=(1.2398, 3.1504)  by using Equations (8) and (9). Then, the optimization problem can be solved by applying Equation (10) under an additional constraint, <math display="inline">\sum _{l=1}^{3}{{x}_{l}}^{2}\leq 3</math>. which represents a feasible experiment region.
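The authors report solving Equation (10) in MATLAB. For illustration only, the following Python sketch sets up the same bargaining problem for example 1, assuming, as the product column of Table 3 suggests, that Equation (10) maximizes the product <math display="inline">(f_A(\boldsymbol{\text{x}})-d_A)(f_B(\boldsymbol{\text{x}})-d_B)</math> subject to <math display="inline">f_A\leq d_A</math>, <math display="inline">f_B\leq d_B</math> and the experimental region constraint. The target value <code>TAU</code> is a placeholder (the actual target is defined earlier in the paper), and <code>scipy</code> stands in for the authors' MATLAB routines.

<syntaxhighlight lang="python">
# A sketch only: Nash bargaining formulation of Equation (10) for example 1.
# Assumptions (not taken from the paper): the target TAU, the SLSQP solver, and
# the exact product form of the objective; the authors used MATLAB.
import numpy as np
from scipy.optimize import minimize

# Coefficients of the fitted surfaces, Equations (11) and (12)
a1 = np.array([0.59, -0.35, -0.01])
G = np.array([[0.28, 0.045, 0.83],
              [0.045, 1.29, 0.755],
              [0.83, 0.755, 1.85]])
g1 = np.array([0.38, -0.43, 0.56])
E = np.array([[0.49, -0.235, 0.36],
              [-0.235, 0.61, -0.06],
              [0.36, -0.06, 0.85]])

TAU = 72.5                       # placeholder target (defined earlier in the paper)
dA, dB = 1.2398, 3.1504          # disagreement point from Equations (8) and (9)


def f_A(x):                      # player A: squared process bias
    mu = 72.21 + x @ a1 + x @ G @ x
    return (mu - TAU) ** 2


def f_B(x):                      # player B: process standard deviation
    return 2.55 + x @ g1 + x @ E @ x


def neg_nash_product(x):         # maximize the Nash product <=> minimize its negative
    return -(f_A(x) - dA) * (f_B(x) - dB)


cons = [{"type": "ineq", "fun": lambda x: 3.0 - x @ x},   # experimental region
        {"type": "ineq", "fun": lambda x: dA - f_A(x)},   # A no worse than dA
        {"type": "ineq", "fun": lambda x: dB - f_B(x)}]   # B no worse than dB

res = minimize(neg_nash_product, x0=np.zeros(3), constraints=cons, method="SLSQP")
print(res.x, f_A(res.x), f_B(res.x))
</syntaxhighlight>

With the actual target value in place of the placeholder, this formulation should approximately recover the reported solution up to solver tolerance.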
The solution (i.e., <math display="inline">{\left( \hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) -\tau \right) }^{2}=</math> 0.2967 and <math display="inline">\hat\sigma \left({\boldsymbol{\text{x}}}^*\right)</math> = 2.6101) is calculated using the MATLAB software package. For a comparative study, the optimization results of the proposed method and the conventional dual response approach are summarized in Table 2, which shows that the proposed method provides a slightly smaller MSE in this particular numerical example. To check the efficiency of the obtained result, the lexicographic weighted Tchebycheff approach is adopted to generate the associated Pareto frontier shown in Figure 5.
517
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
518
'''Table 2.''' The optimization results of example1</div>
519
520
{| style="width: 100%;border-collapse: collapse;" 
521
|-
522
|  style="border: 1pt solid black;text-align: center;"|
523
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{1}^{\ast }</math>
524
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{2}^{\ast }</math>
525
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{3}^{\ast }</math>
526
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{\left( \hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) -\tau \right) }^{2}</math> 
527
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{\hat{\sigma }}^{2}\mathit{\boldsymbol{(}}{\boldsymbol{\text{x}}}^*\mathit{\boldsymbol{)}}</math>
528
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|'''MSE'''
529
|-
530
|  style="border: 1pt solid black;text-align: center;"|'''Dual response model with WLS'''
531
|  style="border: 1pt solid black;text-align: center;"|-1.4561
532
|  style="border: 1pt solid black;text-align: center;"|-0.1456
533
|  style="border: 1pt solid black;text-align: center;"|0.5596
534
|  style="border: 1pt solid black;text-align: center;"|0
535
|  style="border: 1pt solid black;text-align: center;"|3.0142
536
|  style="border: 1pt solid black;text-align: center;"|9.0854
537
|-
538
|  style="border: 1pt solid black;text-align: center;"|'''Proposed model'''
539
|  style="border: 1pt solid black;text-align: center;"|-0.8473
540
|  style="border: 1pt solid black;text-align: center;"|0.0399
541
|  style="border: 1pt solid black;text-align: center;"|0.2248
542
|  style="border: 1pt solid black;text-align: center;"|0.2967
543
|  style="border: 1pt solid black;text-align: center;"|2.6101
544
|  style="border: 1pt solid black;text-align: center;"|7.1093
545
|}
546
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
547
[[File:News.png|alt=|centre|thumb|404x404px|'''Figure 5.''' The optimization results plot with the Pareto frontier of example 1]]</div>
548
549
As exhibited in Figure 5, the obtained Nash bargaining solution, plotted as a star, lies on the Pareto frontier. By using the concept of bargaining game theory, the interaction between the process bias and variability can be incorporated while a unique tradeoff result is identified. As a result, the proposed method provides a well-balanced optimal solution for the process bias and variability in this particular example.
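For readers who wish to reproduce this frontier check, the short sketch below (continuing directly from the previous sketch and reusing <code>f_A</code> and <code>f_B</code> defined there) traces a Pareto frontier by sweeping a weighted Tchebycheff scalarization. For brevity it uses the augmented form with a small <math display="inline">\rho</math>-weighted sum term as a stand-in for the two-stage lexicographic procedure used in the paper; the utopia point is obtained by minimizing each objective separately over the experimental region.

<syntaxhighlight lang="python">
# Sketch of a Pareto frontier via an augmented weighted Tchebycheff scalarization;
# the augmentation term is a simplification of the lexicographic second stage.
# Reuses f_A and f_B from the previous sketch.
import numpy as np
from scipy.optimize import minimize

region = [{"type": "ineq", "fun": lambda x: 3.0 - x @ x}]


def argmin(obj, args=()):
    return minimize(obj, x0=np.zeros(3), args=args,
                    constraints=region, method="SLSQP").x


# Utopia (ideal) point: per-objective minima over the experimental region
z_star = np.array([f_A(argmin(f_A)), f_B(argmin(f_B))])
rho = 1e-4


def tcheby(x, w):
    dev = np.array([f_A(x) - z_star[0], f_B(x) - z_star[1]])
    return np.max(w * dev) + rho * dev.sum()


frontier = []
for w1 in np.linspace(0.02, 0.98, 25):          # sweep the weight vector
    x_w = argmin(tcheby, args=(np.array([w1, 1.0 - w1]),))
    frontier.append((f_A(x_w), f_B(x_w)))
</syntaxhighlight>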
===5.2 Sensitivity analysis for numerical example 1===
552
Based on the optimization results, a sensitivity analysis over different disagreement point values is conducted for verification purposes, as shown in Table 3. The value of <math>d_A</math> is changed by successive 10% increments and decrements while <math>d_B</math> is fixed at 3.1504, and the resulting changes in the process bias and variability values are investigated.
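The perturbed-<math display="inline">{d}_{A}</math> runs reported in Table 3 below can be generated with a short loop of the same kind. This is again only a sketch: it reuses <code>f_A</code>, <code>f_B</code>, the region constraint, and the placeholder target from the first sketch, so the printed values will match Table 3 only when the actual target is used.

<syntaxhighlight lang="python">
# Sketch of the Table 3 sensitivity runs: d_B is held at 3.1504 while d_A is scaled
# by successive 10% decrements and increments around 1.2398.
import numpy as np
from scipy.optimize import minimize


def nash_solve(dA, dB):
    obj = lambda x: -(f_A(x) - dA) * (f_B(x) - dB)
    cons = [{"type": "ineq", "fun": lambda x: 3.0 - x @ x},
            {"type": "ineq", "fun": lambda x: dA - f_A(x)},
            {"type": "ineq", "fun": lambda x: dB - f_B(x)}]
    return minimize(obj, x0=np.zeros(3), constraints=cons, method="SLSQP").x


dA0, dB0 = 1.2398, 3.1504
grid = [dA0 * 0.9 ** k for k in range(6, 0, -1)] + [dA0 * 1.1 ** k for k in range(8)]
for dA in grid:
    x_s = nash_solve(dA, dB0)
    print(round(dA, 4), np.round(x_s, 4), round(f_A(x_s), 4), round(f_B(x_s), 4))
</syntaxhighlight>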
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
554
'''Table 3.''' Sensitivity analysis results for numerical example 1 by changing <math display="inline">{d}_{A}</math></div>
555
556
{| style="width: 100%;border-collapse: collapse;" 
557
|-
558
|  style="border-top: 1pt solid black;text-align: center;"|<math>{d}_{A}</math>
559
|  style="border-top: 1pt solid black;text-align: center;"|<math>{d}_{B}</math>
560
|  style="border-top: 1pt solid black;text-align: center;"|<math>{\mathit{\boldsymbol{((}}\hat{\mu }\left( {\boldsymbol{\text{x}}}\right) \mathit{\boldsymbol{-}}t\mathit{\boldsymbol{)}}}^{\mathit{\boldsymbol{2}}}-</math><math>{d}_{A}\boldsymbol{)\ast (}\hat{\sigma }\left({\boldsymbol{\text{x}}}\right) -</math><math>{d}_{B}\mathit{\boldsymbol{)}}</math>
561
|  style="border-top: 1pt solid black;text-align: center;"|<math>{\boldsymbol{\text{x}}}</math>
562
|  style="border-top: 1pt solid black;text-align: center;"|<math>{\boldsymbol{(}\mu \left( {\boldsymbol{\text{x}}}^*\right) \boldsymbol{-}\tau \boldsymbol{)}}^{\boldsymbol{2}}</math>
563
|  style="border-top: 1pt solid black;text-align: center;"|<math>\sigma \left({\boldsymbol{\text{x}}}^*\right)</math> 
564
|-
565
|  style="border-top: 1pt solid black;text-align: center;"|0.6589
566
|  style="border-top: 1pt solid black;text-align: center;"|3.1504
567
|  style="border-top: 1pt solid black;text-align: center;"|0.2218
568
|  style="border-top: 1pt solid black;text-align: center;"|[-1.0281  -0.0159  0.3253]
569
|  style="border-top: 1pt solid black;text-align: center;"|0.157
570
|  style="border-top: 1pt solid black;text-align: center;"|2.7085
571
|-
572
|  style="text-align: center;"|0.7321
573
|  style="text-align: center;"|3.1504
574
|  style="text-align: center;"|0.2547
575
|  style="text-align: center;"|[-1.0017   -0.0078   0.3107]
576
|  style="text-align: center;"|0.1753
577
|  style="text-align: center;"|2.6930
578
|-
579
|  style="text-align: center;"|0.8134
580
|  style="text-align: center;"|3.1504
581
|  style="text-align: center;"|0.2925
582
|  style="text-align: center;"|[-0.9739   0.0007   0.2953]
583
|  style="text-align: center;"|0.1953
584
|  style="text-align: center;"|2.6771
585
|-
586
|  style="text-align: center;"|0.9038
587
|  style="text-align: center;"|3.1504
588
|  style="text-align: center;"|0.3361
589
|  style="text-align: center;"|[-0.9445   0.0098  0.2790]
590
|  style="text-align: center;"|0.2174
591
|  style="text-align: center;"|2.6608
592
|-
593
|  style="text-align: center;"|1.0042
594
|  style="text-align: center;"|3.1504
595
|  style="text-align: center;"|0.3861
596
|  style="text-align: center;"|[-0.9137    0.0193    0.2619]
597
|  style="text-align: center;"|0.2416
598
|  style="text-align: center;"|2.6441
599
|-
600
|  style="text-align: center;"|1.1158
601
|  style="text-align: center;"|3.1504
602
|  style="text-align: center;"|0.4435
603
|  style="text-align: center;"|[-0.8813    0.0293    0.2438]
604
|  style="text-align: center;"|0.2680
605
|  style="text-align: center;"|2.6272
606
|-
607
|  style="text-align: center;"|'''1.2398'''
608
|  style="text-align: center;"|'''3.1504'''
609
|  style="text-align: center;"|'''0.5095'''
610
|  style="text-align: center;"|'''[-0.8473    0.0399    0.2248]'''
611
|  style="text-align: center;"|'''0.2967'''
612
|  style="text-align: center;"|'''2.6101'''
613
|-
614
|  style="text-align: center;"|1.3638
615
|  style="text-align: center;"|3.1504
616
|  style="text-align: center;"|0.5775
617
|  style="text-align: center;"|[-0.8153    0.0499    0.2069]
618
|  style="text-align: center;"|0.3248
619
|  style="text-align: center;"|2.5946
620
|-
621
|  style="text-align: center;"|1.5002
622
|  style="text-align: center;"|3.1504
623
|  style="text-align: center;"|0.6543
624
|  style="text-align: center;"|[-0.7820    0.0603    0.1881]
625
|  style="text-align: center;"|0.3549
626
|  style="text-align: center;"|2.5791
627
|-
628
|  style="text-align: center;"|1.6502
629
|  style="text-align: center;"|3.1504
630
|  style="text-align: center;"|0.7412
631
|  style="text-align: center;"|[-0.7475    0.0711    0.1687]
632
|  style="text-align: center;"|0.3869
633
|  style="text-align: center;"|2.5637
634
|-
635
|  style="text-align: center;"|1.8152
636
|  style="text-align: center;"|3.1504
637
|  style="text-align: center;"|0.8393
638
|  style="text-align: center;"|[-0.7120    0.0824    0.1486]
639
|  style="text-align: center;"|0.4209
640
|  style="text-align: center;"|2.5484
641
|-
642
|  style="text-align: center;"|1.9967
643
|  style="text-align: center;"|3.1504
644
|  style="text-align: center;"|0.9499
645
|  style="text-align: center;"|[-0.6754    0.0939    0.1278]
646
|  style="text-align: center;"|0.4567
647
|  style="text-align: center;"|2.5335
648
|-
649
|  style="text-align: center;"|2.1964
650
|  style="text-align: center;"|3.1504
651
|  style="text-align: center;"|1.0746
652
|  style="text-align: center;"|[-0.6381    0.1058    0.1065]
653
|  style="text-align: center;"|0.4942
654
|  style="text-align: center;"|2.5191
655
|-
656
|  style="border-bottom: 1pt solid black;text-align: center;"|2.4160
657
|  style="border-bottom: 1pt solid black;text-align: center;"|3.1504
658
|  style="border-bottom: 1pt solid black;text-align: center;"|1.2148
659
|  style="border-bottom: 1pt solid black;text-align: center;"|[-0.6002    0.1180    0.0847]
660
|  style="border-bottom: 1pt solid black;text-align: center;"|0.5331
661
|  style="border-bottom: 1pt solid black;text-align: center;"|2.5052
662
|}
663
664
As shown in Table 3, if only <math display="inline">{d}_{A}</math> increases, the optimal squared bias <math display="inline">{(\hat{\mu }(\boldsymbol{x}^*)-\tau )}^{2}</math> increases while the process variability <math display="inline">\hat{\sigma }\left( \boldsymbol{x}^*\right)</math> decreases. All of the optimal solutions obtained by the proposed method are plotted as circles and compared with the Pareto optimal solutions generated by the lexicographic weighted Tchebycheff method. Clearly, the obtained solutions lie on the Pareto frontier, as shown in Figure 6.[[File:Draft_Shin_691882792-image7.png|centre|thumb|463x463px|'''Figure 6.  '''Plot of sensitivity analysis results with the Pareto frontier for numerical example 1 by changing <math display="inline">{d}_{A}</math>.]]On the other hand, if <math display="inline">{d}_{A}</math> is held constant and <math display="inline">{d}_{B}</math> is changed by 5% each time, the results are summarized in Table 4 and plotted in Figure 7.
665
666
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
667
'''Table 4.''' Sensitivity analysis results for numerical example 1 by changing <math display="inline">{d}_{B}</math></div>
668
669
{| style="width: 100%;border-collapse: collapse;" 
670
|-
671
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{d}_{A}</math>
672
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{d}_{B}</math>
673
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{\boldsymbol{((}}\hat{\mu }\left( {\boldsymbol{\text{x}}}\right) \mathit{\boldsymbol{-}}t\mathit{\boldsymbol{)}}}^{\mathit{\boldsymbol{2}}}-</math><math>{d}_{A}\boldsymbol{)\ast (}\hat{\sigma }\left({\boldsymbol{\text{x}}}\right) -</math><math>{d}_{B}\mathit{\boldsymbol{)}}</math>
674
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\boldsymbol{\text{x}}}</math>
675
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\boldsymbol{(}\hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) \boldsymbol{-}\tau \boldsymbol{)}}^{\boldsymbol{2}}</math>
676
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>\hat{\sigma }\left({\boldsymbol{\text{x}}}^*\right)</math> 
677
|-
678
|  style="border-top: 1pt solid black;text-align: center;"|1.2398
679
|  style="border-top: 1pt solid black;text-align: center;"|2.4377
680
|  style="border-top: 1pt solid black;text-align: center;"|0.0076
681
|  style="border-top: 1pt solid black;text-align: center;"|[-0.2082    0.2495   -0.1539]
682
|  style="border-top: 1pt solid black;text-align: center;"|0.9764
683
|  style="border-top: 1pt solid black;text-align: center;"|2.4089
684
|-
685
|  style="text-align: center;"|1.2398
686
|  style="text-align: center;"|2.5660
687
|  style="text-align: center;"|0.0592
688
|  style="text-align: center;"|[-0.4198    0.1770   -0.0212]
689
|  style="text-align: center;"|0.7286
690
|  style="text-align: center;"|2.4501
691
|-
692
|  style="text-align: center;"|1.2398
693
|  style="text-align: center;"|2.7011
694
|  style="text-align: center;"|0.1394
695
|  style="text-align: center;"|[-0.5607    0.1307    0.0618]
696
|  style="text-align: center;"|0.5746
697
|  style="text-align: center;"|2.4916
698
|-
699
|  style="text-align: center;"|1.2398
700
|  style="text-align: center;"|2.4832
701
|  style="text-align: center;"|0.2425
702
|  style="text-align: center;"|[-0.6726    0.0948    0.1262]
703
|  style="text-align: center;"|0.4595
704
|  style="text-align: center;"|2.5324
705
|-
706
|  style="text-align: center;"|1.2398
707
|  style="text-align: center;"|2.9929
708
|  style="text-align: center;"|0.3664
709
|  style="text-align: center;"|[-0.7666    0.0651    0.1795]
710
|  style="text-align: center;"|0.3690
711
|  style="text-align: center;"|2.5721
712
|-
713
|  style="text-align: center;"|1.2398
714
|  style="text-align: center;"|3.1504
715
|  style="text-align: center;"|0.5095
716
|  style="text-align: center;"|[-0.8473    0.0399    0.2248]
717
|  style="text-align: center;"|0.2967
718
|  style="text-align: center;"|2.6101
719
|-
720
|  style="text-align: center;"|1.2398
721
|  style="text-align: center;"|3.3079
722
|  style="text-align: center;"|0.6626
723
|  style="text-align: center;"|[-0.9141    0.0192    0.2621]
724
|  style="text-align: center;"|0.2412
725
|  style="text-align: center;"|2.6444
726
|-
727
|  style="text-align: center;"|1.2398
728
|  style="text-align: center;"|3.4733
729
|  style="text-align: center;"|0.8316
730
|  style="text-align: center;"|[-0.9727    0.0011    0.2946]
731
|  style="text-align: center;"|0.1962
732
|  style="text-align: center;"|2.6764
733
|-
734
|  style="text-align: center;"|1.2398
735
|  style="text-align: center;"|3.6470
736
|  style="text-align: center;"|1.0162
737
|  style="text-align: center;"|[-1.0241   -0.0147    0.3231]
738
|  style="text-align: center;"|0.1597
739
|  style="text-align: center;"|2.7061
740
|-
741
|  style="text-align: center;"|1.2398
742
|  style="text-align: center;"|3.8293
743
|  style="text-align: center;"|1.2159
744
|  style="text-align: center;"|[-1.0692   -0.0285    0.3480]
745
|  style="text-align: center;"|0.1303
746
|  style="text-align: center;"|2.7334
747
|-
748
|  style="border-bottom: 1pt solid black;text-align: center;"|1.2398
749
|  style="border-bottom: 1pt solid black;text-align: center;"|4.0208
750
|  style="border-bottom: 1pt solid black;text-align: center;"|1.4308
751
|  style="border-bottom: 1pt solid black;text-align: center;"|[-1.1088   -0.0406    0.3698]
752
|  style="border-bottom: 1pt solid black;text-align: center;"|0.1065
753
|  style="border-bottom: 1pt solid black;text-align: center;"|2.7583
754
|}
755
[[File:Draft_Shin_691882792-image8.png|centre|thumb|435x435px|'''Figure 7.''' Plot of sensitivity analysis results with the Pareto frontier for numerical example 1 by changing <math display="inline">{d}_{B}</math>]]As demonstrated in Table 4, the value of <math display="inline">{(\hat{\mu }(\boldsymbol{x}^*)-\tau )}^{2}</math> declines while <math display="inline">\hat{\sigma }\left( {\boldsymbol{\text{x}}}^*\right)</math> grows when <math display="inline">{d}_{B}</math> is increased and <math display="inline">{d}_{A}</math> is kept constant. However, all of the solution points remain on the Pareto frontier, as shown in Figure 7.
756
757
===5.3 Numerical example 2===
758
In the second example [20], an unbalanced data set is utilized to investigate the relationship between coating thickness (<math>y</math>), mould temperature (<math display="inline">{x}_{1}</math>), and injection flow rate (<math display="inline">{x}_{2}</math>). A 3<sup>2</sup> factorial design with three levels (-1, 0, and +1) is applied, as shown in Table 5.<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
759
'''Table 5.'''  Experimental data for example 2</div>
760
761
{| style="width: 100%;border-collapse: collapse;" 
762
|-
763
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|'''Experiments'''
764
765
'''number'''
766
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{x}}}_{{1}}</math>
767
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{x}}}_{{2}}</math>
768
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{y}}}_{{1}}</math>
769
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{y}}}_{{2}}</math>
770
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{y}}}_{{3}}</math>
771
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{y}}}_{{4}}</math>
772
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{y}}}_{{5}}</math>
773
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{y}}}_{{6}}</math>
774
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{y}}}_{{7}}</math>
775
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>\overline{\mathit{{y}}}</math>
776
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{{\sigma }}}^{\mathit{{2}}}</math>
777
|-
778
|  style="border-top: 1pt solid black;text-align: center;"|1
779
|  style="border-top: 1pt solid black;text-align: center;"|-1
780
|  style="border-top: 1pt solid black;text-align: center;"|-1
781
|  style="border-top: 1pt solid black;text-align: center;"|84.3
782
|  style="border-top: 1pt solid black;text-align: center;"|57.0
783
|  style="border-top: 1pt solid black;text-align: center;"|56.5
784
|  style="border-top: 1pt solid black;text-align: center;"|
785
|  style="border-top: 1pt solid black;text-align: center;"|
786
|  style="border-top: 1pt solid black;text-align: center;"|
787
|  style="border-top: 1pt solid black;text-align: center;"|
788
|  style="border-top: 1pt solid black;text-align: center;"|65.93
789
|  style="border-top: 1pt solid black;text-align: center;"|253.06
790
|-
791
|  style="text-align: center;"|2
792
|  style="text-align: center;"|0
793
|  style="text-align: center;"|-1
794
|  style="text-align: center;"|75.7
795
|  style="text-align: center;"|87.1
796
|  style="text-align: center;"|71.8
797
|  style="text-align: center;"|43.8
798
|  style="text-align: center;"|51.6
799
|  style="text-align: center;"|
800
|  style="text-align: center;"|
801
|  style="text-align: center;"|66.00
802
|  style="text-align: center;"|318.28
803
|-
804
|  style="text-align: center;"|3
805
|  style="text-align: center;"|1
806
|  style="text-align: center;"|-1
807
|  style="text-align: center;"|65.9
808
|  style="text-align: center;"|47.9
809
|  style="text-align: center;"|63.3
810
|  style="text-align: center;"|
811
|  style="text-align: center;"|
812
|  style="text-align: center;"|
813
|  style="text-align: center;"|
814
|  style="text-align: center;"|59.03
815
|  style="text-align: center;"|94.65
816
|-
817
|  style="text-align: center;"|4
818
|  style="text-align: center;"|-1
819
|  style="text-align: center;"|0
820
|  style="text-align: center;"|51.0
821
|  style="text-align: center;"|60.1
822
|  style="text-align: center;"|69.7
823
|  style="text-align: center;"|84.8
824
|  style="text-align: center;"|74.7
825
|  style="text-align: center;"|
826
|  style="text-align: center;"|
827
|  style="text-align: center;"|68.06
828
|  style="text-align: center;"|170.35
829
|-
830
|  style="text-align: center;"|5
831
|  style="text-align: center;"|0
832
|  style="text-align: center;"|0
833
|  style="text-align: center;"|53.1
834
|  style="text-align: center;"|36.2
835
|  style="text-align: center;"|61.8
836
|  style="text-align: center;"|68.6
837
|  style="text-align: center;"|63.4
838
|  style="text-align: center;"|48.6
839
|  style="text-align: center;"|42.5
840
|  style="text-align: center;"|53.46
841
|  style="text-align: center;"|139.89
842
|-
843
|  style="text-align: center;"|6
844
|  style="text-align: center;"|1
845
|  style="text-align: center;"|0
846
|  style="text-align: center;"|46.5
847
|  style="text-align: center;"|65.9
848
|  style="text-align: center;"|51.8
849
|  style="text-align: center;"|48.4
850
|  style="text-align: center;"|64.4
851
|  style="text-align: center;"|
852
|  style="text-align: center;"|
853
|  style="text-align: center;"|55.40
854
|  style="text-align: center;"|83.11
855
|-
856
|  style="text-align: center;"|7
857
|  style="text-align: center;"|-1
858
|  style="text-align: center;"|1
859
|  style="text-align: center;"|65.7
860
|  style="text-align: center;"|79.8
861
|  style="text-align: center;"|79.1
862
|  style="text-align: center;"|
863
|  style="text-align: center;"|
864
|  style="text-align: center;"|
865
|  style="text-align: center;"|
866
|  style="text-align: center;"|74.87
867
|  style="text-align: center;"|63.14
868
|-
869
|  style="text-align: center;"|8
870
|  style="text-align: center;"|0
871
|  style="text-align: center;"|1
872
|  style="text-align: center;"|54.4
873
|  style="text-align: center;"|63.8
874
|  style="text-align: center;"|56.2
875
|  style="text-align: center;"|48.0
876
|  style="text-align: center;"|64.5
877
|  style="text-align: center;"|
878
|  style="text-align: center;"|
879
|  style="text-align: center;"|57.38
880
|  style="text-align: center;"|47.54
881
|-
882
|  style="border-bottom: 1pt solid black;text-align: center;"|9
883
|  style="border-bottom: 1pt solid black;text-align: center;"|1
884
|  style="border-bottom: 1pt solid black;text-align: center;"|1
885
|  style="border-bottom: 1pt solid black;text-align: center;"|50.7
886
|  style="border-bottom: 1pt solid black;text-align: center;"|68.3
887
|  style="border-bottom: 1pt solid black;text-align: center;"|62.9
888
|  style="border-bottom: 1pt solid black;text-align: center;"|
889
|  style="border-bottom: 1pt solid black;text-align: center;"|
890
|  style="border-bottom: 1pt solid black;text-align: center;"|
891
|  style="border-bottom: 1pt solid black;text-align: center;"|
892
|  style="border-bottom: 1pt solid black;text-align: center;"|60.63
893
|  style="border-bottom: 1pt solid black;text-align: center;"|81.29
894
|}
895
896
Based on Cho and Park [20], a weighted least squares (WLS) method was applied to estimate the process mean and variability functions as:

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">\hat{\mu }\left({\boldsymbol{\text{x}}}\right) =\,55.08\,\mathit{\boldsymbol{+\, }}{{\boldsymbol{\text{x}}}}^{T}{\boldsymbol{\alpha }}_{\boldsymbol{1}}\mathit{\boldsymbol{+\, }}{{\boldsymbol{\text{x}}}}^{T}\boldsymbol\Gamma{\boldsymbol{\text{x}}} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (13)
|}

where

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\boldsymbol\alpha_1=\, \left[ \begin{matrix}-5.76\\-0.52\\\end{matrix}\right], and \quad \boldsymbol\Gamma=\, \left[ \begin{matrix}5.51&-0.92\\-0.92&5.47\\\end{matrix}\right] </math>
|}
|}

{| class="formulaSCP" style="width: 100%; text-align: center;"  
|-
| 
{| style="text-align: center; margin:auto;"  
|-
| <math display="inline">{\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}\right)=\, 154.26\,+ {{{\boldsymbol{\text{x}}}}^{\mathit{\boldsymbol{T}}}\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{1}}}\mathit{\boldsymbol{+\, }}{{\boldsymbol{\text{x}}}}^{\mathit{\boldsymbol{T}}}\boldsymbol\Delta{\boldsymbol{\text{x}}} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (14)
|}

where

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\boldsymbol\beta_1=\, \left[ \begin{matrix}-39.34\\-93.09\\\end{matrix}\right], and \quad \boldsymbol\Delta=\, \left[ \begin{matrix}-38.31&22.07\\22.07&17.81\\\end{matrix}\right] </math>
|}
|}
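Such fits can be reproduced from the cell means and variances in Table 5. The sketch below is only an illustration: the weighting <math display="inline">n_i/s_i^2</math> for the mean model and the use of the raw cell variances as the dispersion response are assumptions made for this sketch and may differ from the exact procedure of Cho and Park [20].

<syntaxhighlight lang="python">
# Illustrative WLS fit to the unbalanced data of Table 5 (assumed weights n_i/s_i^2).
import numpy as np
import statsmodels.api as sm

x1 = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1], dtype=float)
x2 = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], dtype=float)
ybar = np.array([65.93, 66.00, 59.03, 68.06, 53.46, 55.40, 74.87, 57.38, 60.63])
s2 = np.array([253.06, 318.28, 94.65, 170.35, 139.89, 83.11, 63.14, 47.54, 81.29])
n = np.array([3, 5, 3, 5, 7, 5, 3, 5, 3], dtype=float)   # replicates per run

# Full second-order model in x1 and x2
X = np.column_stack([np.ones(9), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

mean_fit = sm.WLS(ybar, X, weights=n / s2).fit()   # process mean, cf. Equation (13)
var_fit = sm.OLS(s2, X).fit()                      # process variance, cf. Equation (14)
print(mean_fit.params, var_fit.params, sep="\n")
</syntaxhighlight>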
Applying the same logic as in example 1, the ranges of the process bias and variability are calculated as [12.0508, 420.25] and [45.53, 310.39], respectively, and the disagreement point is computed as <math display="inline">{d}_{A}</math> = 63.0436 and <math display="inline">{d}_{B}</math> = 112.0959. Applying Equation (10), the optimal solution is obtained as <math display="inline">{(\hat{\mu }({\boldsymbol{\text{x}}}^*)-\tau )}^{2}</math> = 23.6526 and <math display="inline">{\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}^*\right) =</math> 58.3974. The optimization results of both the proposed method and the conventional MSE model are summarized in Table 6; in this particular example, the proposed method yields a considerably smaller MSE than the conventional MSE model.
934
935
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
936
'''Table 6.''' The optimization results of example2</div>
937
938
{| style="width: 100%;border-collapse: collapse;" 
939
|-
940
|  style="border: 1pt solid black;text-align: center;"|
941
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{1}^{\ast }</math>
942
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{2}^{\ast }</math>
943
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>\left| \hat{\mu }\mathit{\boldsymbol{(}}{\boldsymbol{\text{x}}}^*\mathit{\boldsymbol{)-}}\tau \right|</math> 
944
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{\hat{\sigma }}^{2}\mathit{\boldsymbol{(}}{\boldsymbol{\text{x}}}^*\mathit{\boldsymbol{)}}</math>
945
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|'''MSE'''
946
|-
947
|  style="border: 1pt solid black;text-align: center;"|'''MSE model'''
948
|  style="border: 1pt solid black;text-align: center;"|0.998
949
|  style="border: 1pt solid black;text-align: center;"|0.998
950
|  style="border: 1pt solid black;text-align: center;"|7.93
951
|  style="border: 1pt solid black;text-align: center;"|45.66
952
|  style="border: 1pt solid black;text-align: center;"|108.48
953
|-
954
|  style="border: 1pt solid black;text-align: center;"|'''Proposed model'''
955
|  style="border: 1pt solid black;text-align: center;"|1.000
956
|  style="border: 1pt solid black;text-align: center;"|0.4440
957
|  style="border: 1pt solid black;text-align: center;"|4.8606
958
|  style="border: 1pt solid black;text-align: center;"|58.3974
959
|  style="border: 1pt solid black;text-align: center;"|82.023
960
|}
961
962
A Pareto frontier including all non-dominated solutions can be obtained by applying a lexicographic weighted Tchebycheff approach. As illustrated by Figure 8, the Nash bargaining solution is on the Pareto frontier, which may clearly verify the efficiency of the proposed method.
963
964
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">[[File:Ty1.png|centre|thumb|407x407px|''' Figure 8. '''The optimization results plot with the Pareto frontier of example 2]]</div>
965
966
===5.4 Sensitivity analysis for numerical example 2===
967
968
Applying the same logic to example 2, <math display="inline">{d}_{B}</math> is kept constant while <math display="inline">{d}_{A}</math> is changed in 10% increments and decrements. Table 7 exhibits the effect of the changes in <math display="inline">{d}_{A}</math>, and Figure 9 demonstrates the efficiency of the calculated solutions.
969
970
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
971
'''Table 7.''' Sensitivity analysis results for numerical example 2 by changing <math display="inline">{d}_{A}</math></div>
972
973
{| style="width: 100%;margin: 1em auto 0.1em auto;border-collapse: collapse;" 
974
|-
975
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{d}_{A}</math>
976
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{d}_{B}</math>
977
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{\boldsymbol{((}}\hat{\mu }\left( {\boldsymbol{\text{x}}}\right) \mathit{\boldsymbol{-}}\tau \mathit{\boldsymbol{)}}}^{\mathit{\boldsymbol{2}}}-</math><math>{d}_{A}\boldsymbol{)\ast (}{\hat{\sigma }}^{2}\left({\boldsymbol{\text{x}}}\right) -</math><math>{d}_{B}\mathit{\boldsymbol{)}}</math>
978
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\boldsymbol{\text{x}}}</math>
979
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{{(}\hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) {-}\tau {)}}^{{2}}</math>
980
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\hat{\sigma }}^{\mathit{{2}}}\left( {\boldsymbol{\text{x}}}^*\right)</math> 
981
|-
982
|  style="border-top: 1pt solid black;text-align: center;"|37.2266
983
|  style="border-top: 1pt solid black;text-align: center;"|112.0959
984
|  style="border-top: 1pt solid black;text-align: center;"|790.0487
985
|  style="border-top: 1pt solid black;text-align: center;"|[ 0.9510    0.3554]
986
|  style="border-top: 1pt solid black;text-align: center;"|19.9778
987
|  style="border-top: 1pt solid black;text-align: center;"|66.2928
988
|-
989
|  style="text-align: center;"|41.3629
990
|  style="text-align: center;"|112.0959
991
|  style="text-align: center;"|986.2591
992
|  style="text-align: center;"|[0.9813    0.3624]
993
|  style="text-align: center;"|21.2437
994
|  style="text-align: center;"|63.0751
995
|-
996
|  style="text-align: center;"|45.9588
997
|  style="text-align: center;"|112.0959
998
|  style="text-align: center;"|1218.3
999
|  style="text-align: center;"|[1.0000    0.3751]
1000
|  style="text-align: center;"|22.2248
1001
|  style="text-align: center;"|60.7647
1002
|-
1003
|  style="text-align: center;"|51.0653
1004
|  style="text-align: center;"|112.0959
1005
|  style="text-align: center;"|1482.5
1006
|  style="text-align: center;"|[1.0000    0.3978]
1007
|  style="text-align: center;"|22.6267
1008
|  style="text-align: center;"|59.9662
1009
|-
1010
|  style="text-align: center;"|56.7392
1011
|  style="text-align: center;"|112.0959
1012
|  style="text-align: center;"|1780.6
1013
|  style="text-align: center;"|[1.000    0.4208]
1014
|  style="text-align: center;"|23.0925
1015
|  style="text-align: center;"|59.1766
1016
|-
1017
|  style="text-align: center;"|'''63.0436'''
1018
|  style="text-align: center;"|'''1120959'''
1019
|  style="text-align: center;"|'''2116.7'''
1020
|  style="text-align: center;"|'''[1.0000    0.4440]'''
1021
|  style="text-align: center;"|'''23.6256'''
1022
|  style="text-align: center;"|'''58.3974'''
1023
|-
1024
|  style="text-align: center;"|69.3480
1025
|  style="text-align: center;"|112.0959
1026
|  style="text-align: center;"|2457.5
1027
|  style="text-align: center;"|[1.0000    0.4653]
1028
|  style="text-align: center;"|24.1686
1029
|  style="text-align: center;"|57.7026
1030
|-
1031
|  style="text-align: center;"|76.2828
1032
|  style="text-align: center;"|112.0959
1033
|  style="text-align: center;"|2837.1
1034
|  style="text-align: center;"|[1.0000    0.4867]
1035
|  style="text-align: center;"|24.7721
1036
|  style="text-align: center;"|57.0185
1037
|-
1038
|  style="text-align: center;"|83.9110
1039
|  style="text-align: center;"|112.0959
1040
|  style="text-align: center;"|3259.8
1041
|  style="text-align: center;"|[1.0000    0.5083]
1042
|  style="text-align: center;"|25.4386
1043
|  style="text-align: center;"|56.346
1044
|-
1045
|  style="text-align: center;"|92.3021
1046
|  style="text-align: center;"|112.0959
1047
|  style="text-align: center;"|3730.5
1048
|  style="text-align: center;"|[1.0000    0.5300]
1049
|  style="text-align: center;"|26.1709
1050
|  style="text-align: center;"|55.686
1051
|-
1052
|  style="text-align: center;"|101.5323
1053
|  style="text-align: center;"|112.0959
1054
|  style="text-align: center;"|4254.2
1055
|  style="text-align: center;"|[1.0000    0.5518]
1056
|  style="text-align: center;"|26.9716
1057
|  style="text-align: center;"|55.0393
1058
|-
1059
|  style="text-align: center;"|111.6856
1060
|  style="text-align: center;"|112.0959
1061
|  style="text-align: center;"|4836.8
1062
|  style="text-align: center;"|[1.0000    0.5738]
1063
|  style="text-align: center;"|27.8435
1064
|  style="text-align: center;"|54.407
1065
|-
1066
|  style="text-align: center;"|122.8541
1067
|  style="text-align: center;"|112.0959
1068
|  style="text-align: center;"|5484.6
1069
|  style="text-align: center;"|[1.0000    0.5958]
1070
|  style="text-align: center;"|28.7892
1071
|  style="text-align: center;"|53.7896
1072
|-
1073
|  style="border-bottom: 1pt solid black;text-align: center;"|135.1396
1074
|  style="border-bottom: 1pt solid black;text-align: center;"|112.0959
1075
|  style="border-bottom: 1pt solid black;text-align: center;"|6204.7
1076
|  style="border-bottom: 1pt solid black;text-align: center;"|[1.0000    0.6179]
1077
|  style="border-bottom: 1pt solid black;text-align: center;"|29.8115
1078
|  style="border-bottom: 1pt solid black;text-align: center;"|53.1879
1079
|}
1080
1081
1082
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">[[File:Draft_Shin_691882792-image9.png|centre|thumb|'''Figure 9.''' Plot of sensitivity analysis results with the Pareto frontier for numerical example 2 by changing
1083
<math display="inline">{d}_{A}</math>
1084
|445x445px]]</div>
1085
1086
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;"></div>
1087
1088
On the other hand, another sensitivity analysis is conducted by changing <math display="inline">{d}_{B}</math> in 10% increments and decrements while holding <math display="inline">{d}_{A}</math> fixed at 63.0436, as shown in Table 8 and plotted in Figure 10.
1089
1090
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
1091
'''Table 8.''' Sensitivity analysis results for numerical example 2 by changing <math display="inline">{d}_{B}</math></div>
1092
1093
{| style="width: 100%;border-collapse: collapse;" 
1094
|-
1095
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{d}_{A}</math>
1096
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{d}_{B}</math>
1097
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\mathit{\boldsymbol{((}}\hat{\mu }\left( {\boldsymbol{\text{x}}}\right) \mathit{\boldsymbol{-}}\tau \mathit{\boldsymbol{)}}}^{\mathit{\boldsymbol{2}}}-</math><math>{d}_{A}\boldsymbol{)\ast (}{\hat{\sigma }}^{\mathit{\boldsymbol{2}}}\left( {\boldsymbol{\text{x}}}\right) -</math><math>{d}_{B}\mathit{\boldsymbol{)}}</math>
1098
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\boldsymbol{\text{x}}}</math> 
1099
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;"|<math display="block">{\boldsymbol{(}\hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) \boldsymbol{-}\tau \boldsymbol{)}}^{{2}}</math>
1100
|  style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"|<math>{\hat{\sigma }}^{\mathit{{2}}}\left( {\boldsymbol{\text{x}}}^*\right)</math> 
1101
|-
1102
|  style="border-top: 1pt solid black;text-align: center;vertical-align: top;"|63.0436
1103
|  style="border-top: 1pt solid black;text-align: center;vertical-align: top;"|48.2536
1104
|  style="border-top: 1pt solid black;text-align: center;vertical-align: top;"|15.4253
1105
|  style="border-top: 1pt solid black;text-align: center;vertical-align: top;"|[1.0000    0.9166]
1106
|  style="border-top: 1pt solid black;text-align: center;vertical-align: top;"|52.7429
1107
|  style="border-top: 1pt solid black;text-align: center;vertical-align: top;"|46.7561
1108
|-
1109
|  style="text-align: center;vertical-align: top;"|63.0436
1110
|  style="text-align: center;vertical-align: top;"|53.6151
1111
|  style="text-align: center;vertical-align: top;"|102.0493
1112
|  style="text-align: center;vertical-align: top;"|[1.0000    0.8094]
1113
|  style="text-align: center;vertical-align: top;"|42.2936
1114
|  style="text-align: center;vertical-align: top;"|48.6971
1115
|-
1116
|  style="text-align: center;vertical-align: top;"|63.0436
1117
|  style="text-align: center;vertical-align: top;"|59.5724
1118
|  style="text-align: center;vertical-align: top;"|245.6339
1119
|  style="text-align: center;vertical-align: top;"|[1.0000    0.7284]
1120
|  style="text-align: center;vertical-align: top;"|36.1567
1121
|  style="text-align: center;vertical-align: top;"|50.4366
1122
|-
1123
|  style="text-align: center;vertical-align: top;"|63.0436
1124
|  style="text-align: center;vertical-align: top;"|66.1915
1125
|  style="text-align: center;vertical-align: top;"|438.1362
1126
|  style="text-align: center;vertical-align: top;"|[1.0000    0.6620]
1127
|  style="text-align: center;vertical-align: top;"|32.0892
1128
|  style="text-align: center;vertical-align: top;"|52.0372
1129
|-
1130
|  style="text-align: center;vertical-align: top;"|63.0436
1131
|  style="text-align: center;vertical-align: top;"|73.5461
1132
|  style="text-align: center;vertical-align: top;"|677.0693
1133
|  style="text-align: center;vertical-align: top;"|[1.0000    0.6055]
1134
|  style="text-align: center;vertical-align: top;"|29.2313
1135
|  style="text-align: center;vertical-align: top;"|53.5218
1136
|-
1137
|  style="text-align: center;vertical-align: top;"|63.0436
1138
|  style="text-align: center;vertical-align: top;"|81.7179
1139
|  style="text-align: center;vertical-align: top;"|962.4337
1140
|  style="text-align: center;vertical-align: top;"|[1.0000    0.5567]
1141
|  style="text-align: center;vertical-align: top;"|27.1579
1142
|  style="text-align: center;vertical-align: top;"|54.8985
1143
|-
1144
|  style="text-align: center;vertical-align: top;"|63.0436
1145
|  style="text-align: center;vertical-align: top;"|90.7977
1146
|  style="text-align: center;vertical-align: top;"|1295.7
1147
|  style="text-align: center;vertical-align: top;"|[1.0000    0.5140]
1148
|  style="text-align: center;vertical-align: top;"|25.6262
1149
|  style="text-align: center;vertical-align: top;"|56.1698
1150
|-
1151
|  style="text-align: center;vertical-align: top;"|63.0436
1152
|  style="text-align: center;vertical-align: top;"|100.8863
1153
|  style="text-align: center;vertical-align: top;"|1679.3
1154
|  style="text-align: center;vertical-align: top;"|[1.0000    0.4767]
1155
|  style="text-align: center;vertical-align: top;"|24.4832
1156
|  style="text-align: center;vertical-align: top;"|57.3359
1157
|-
1158
|  style="text-align: center;vertical-align: top;"|'''63.0436'''
1159
|  style="text-align: center;vertical-align: top;"|'''112.0959'''
1160
|  style="text-align: center;vertical-align: top;"|'''2116.7'''
1161
|  style="text-align: center;vertical-align: top;"|'''[1.0000    0.4440]'''
1162
|  style="text-align: center;vertical-align: top;"|'''23.6256'''
1163
|  style="text-align: center;vertical-align: top;"|'''58.3974'''
1164
|-
1165
|  style="text-align: center;vertical-align: top;"|63.0436
1166
|  style="text-align: center;vertical-align: top;"|123.3066
1167
|  style="text-align: center;vertical-align: top;"|2562.1
1168
|  style="text-align: center;vertical-align: top;"|[1.0000    0.4181]
1169
|  style="text-align: center;vertical-align: top;"|23.0342
1170
|  style="text-align: center;vertical-align: top;"|59.2691
1171
|-
1172
|  style="text-align: center;vertical-align: top;"|63.0436
1173
|  style="text-align: center;vertical-align: top;"|135.6372
1174
|  style="text-align: center;vertical-align: top;"|3058.4
1175
|  style="text-align: center;vertical-align: top;"|[1.0000    0.3951]
1176
|  style="text-align: center;vertical-align: top;"|22.5764
1177
|  style="text-align: center;vertical-align: top;"|60.0593
1178
|-
1179
|  style="text-align: center;vertical-align: top;"|63.0436
1180
|  style="text-align: center;vertical-align: top;"|149.2010
1181
|  style="text-align: center;vertical-align: top;"|3609.9
1182
|  style="text-align: center;vertical-align: top;"|[1.0000    0.3748]
1183
|  style="text-align: center;vertical-align: top;"|22.2214
1184
|  style="text-align: center;vertical-align: top;"|60.7721
1185
|-
1186
|  style="text-align: center;vertical-align: top;"|63.0436
1187
|  style="text-align: center;vertical-align: top;"|164.1211
1188
|  style="text-align: center;vertical-align: top;"|4223.7
1189
|  style="text-align: center;vertical-align: top;"|[0.9829    0.3628]
1190
|  style="text-align: center;vertical-align: top;"|21.3147
1191
|  style="text-align: center;vertical-align: top;"|62.9025
1192
|-
1193
|  style="text-align: center;vertical-align: top;"|63.0436
1194
|  style="text-align: center;vertical-align: top;"|180.5332
1195
|  style="text-align: center;vertical-align: top;"|4919.9
1196
|  style="text-align: center;vertical-align: top;"|[0.9512    0.3554]
1197
|  style="text-align: center;vertical-align: top;"|19.9864
1198
|  style="text-align: center;vertical-align: top;"|66.2698
1199
|-
1200
|  style="border-bottom: 1pt solid black;text-align: center;vertical-align: top;"|63.0436
1201
|  style="border-bottom: 1pt solid black;text-align: center;vertical-align: top;"|198.5865
1202
|  style="border-bottom: 1pt solid black;text-align: center;vertical-align: top;"|5708.3
1203
|  style="border-bottom: 1pt solid black;text-align: center;vertical-align: top;"|[0.9199    0.3472]
1204
|  style="border-bottom: 1pt solid black;text-align: center;vertical-align: top;"|18.7938
1205
|  style="border-bottom: 1pt solid black;text-align: center;vertical-align: top;"|69.5842
1206
|}
1207
1208
1209
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">[[File:Draft_Shin_691882792-image10.png|centre|423x423px|thumb|'''Figure 10.''' Plot of sensitivity analysis results with the Pareto frontier for numerical example 2 by changing <math display="inline">{d}_{B}</math>]]</div>
1210
1211
In general, for both cases, an increase in the value of <math display="inline">{d}_{i}</math> increases the corresponding bargaining solution value for player <math display="inline">i</math>. For example, an increase in <math display="inline">{d}_{A}</math> leads to an increase in the process bias and a decrease in the variability value. This observation also makes sense from the perspective of game theory, since it can be explained by disagreement point monotonicity [60], which is defined as follows:
1212
1213
For two disagreement points <math display="inline">d=\left( {d}_{A},{d}_{B}\right)</math> and <math display="inline">{d}^{'}=\left( {d}_{A}^{'},{d}_{B}^{'}\right)</math>, if <math display="inline">{d}_{i}^{'}\geq {d}_{i}</math> and <math display="inline">{d}_{j}^{'}={d}_{j}</math>, then <math display="inline">{f}_{i}\left( U,{d}^{'}\right) \geq {f}_{i}\left( U,d\right)</math>, where <math display="inline">j\neq i</math> and <math display="inline">i,j\in \lbrace A,B\rbrace</math>,
1214
1215
where <math display="inline">{f}_{i}\left( U,{d}^{'}\right)</math> and <math display="inline">{f}_{i}\left( U,d\right)</math>  represent the solution payoff for player ''i'' after and before the incensement of his disagreement point payoff, respectively. More specifically, the more disagreement point value ( <math display="inline">{d}_{i}</math>) a player demands for participation in an agreement, the more the player will get. Although, a gain achieved by one player comes at the expense of the other player. This is because if the agreed solution is not an improvement for one player, then the player would not have any incentive to participate in the bargaining game. However, in the RPD case, the objective for a player is to minimize instead of maximize the utility value, so the less <math display="inline">{d}_{i}</math> a player proposes, the higher the requirement the player is actually proposing to participate in a bargaining game.
1216
1217
==6. Conclusion and future direction ==
1218
1219
In a robust design model, when the simultaneous minimization of the process bias and variability is considered as a bi-objective problem, an intractable tradeoff arises between the two objectives. Most existing methods tackle this tradeoff by either prioritizing one process parameter or assigning weights that reflect the relative importance determined by a DM. However, the DM may struggle to assign weights or priority orders to responses of different types and units. Furthermore, the prioritizing or response-combining procedure involves a certain degree of subjectivity, as different DMs may hold different viewpoints on which process parameter is more important. Thus, in this paper, a bargaining game-based RPD method is proposed to solve this tradeoff problem by integrating the Nash bargaining solution technique and letting the two objectives (i.e., the process bias and variability) “negotiate”, so that unique, fair, and efficient solutions can be obtained. These solutions can provide valuable suggestions to the DM, especially when there is no prior information on the relative importance of the process bias and variability. To inspect the efficiency of the obtained solutions, the associated Pareto frontier was generated by applying the lexicographic weighted Tchebycheff method, and the position of each solution was confirmed visually. As validated by the two numerical examples, the proposed method provides more efficient solutions, based on the MSE criterion, than the conventional dual response surface method and the mean squared error method. In addition, a number of sensitivity studies were conducted to investigate the relationship between the disagreement point values (<math>d_i</math>) and the agreement solutions.

This research illustrates the possibility of combining the concept of game theory with an RPD model. For further study, the proposed method will be extended to multiple response optimization problems. The tradeoff among multiple responses can be addressed by applying multilateral bargaining game theory, where each quality response is regarded as a rational player attempting to reach an agreement with the others on which set of control factors to choose. In this game, each response proposes a solution set that optimizes its own estimated response function, subject to the expectations of the other responses.
1220
1221
== Acknowledgment ==
1222
This research was a part of the project titled ‘Marine digital AtoN information management and service system development (2/5) (20210650)’, funded by the Ministry of Oceans and Fisheries, Korea.
1223
1224
== References ==
1225
<div id="1">
1226
[1] Park, G. J., Lee, T. H., Lee, K. H., & Hwang, K. H. Robust design: an overview. AIAA Journal, 44(1): 181-191,2006.
1227
1228
[2] Myers, W. R., Brenneman, W. A., & Myers, R. H. A dual-response approach to robust parameter design for a generalized linear model. Journal of Quality Technology, 37(2), 130-138, 2005.
1229
1230
[3] Lin, D. K. J., and Tu, W.  “Dual response surface optimization.” Journal of Quality Technology 27:34-39, 1995.
1231
1232
[4] Cho, B. R., Philips, M. D., and Kapur, K. C.  “Quality improvement by RSM modeling for robust design.” The 5th Industrial Engineering Research Conference, Minneapolis, 650-655, 1996.
1233
1234
[5] Ding, R., Lin, D. K. J., and Wei, D.  “Dual response surface optimization: A weighted MSE approach.” Quality engineering 16(3):377-385, 2004.
1235
1236
[6] Vining, G. G., and Myers, R. H.  “Combining Taguchi and response surface philosophies: A dual response approach.” Journal of Quality Technology 22:38-45, 1990.
1237
1238
[7] Myers, R. H. and Carter, W. H.  Response Surface Methods for Dual Response Systems, Technometrics, 15(2), 301-307,1973. 
1239
1240
[8] Copeland, K. A. and Nelson, P. R. Dual Response Optimization via Direct Function Minimization, Journal of Quality Technology, 28(3), 331-336, 1996.
1241
1242
[9] Lee, D., Jeong, I., and Kim, K. A Posterior Preference Articulation Approach to Dual-Response Surface Optimization, IIE Transaction, 42(2), 161-171, 2010. 
1243
1244
[10] Shin, S. and Cho, B. R.  Robust design models for customer-specified bounds on process parameters, Journal of Systems Science and Systems Engineering, 15, 2-18, 2006.
1245
</div>
1246
[11] Leon R.V., Shoemaker A.C., Kackar R.N. Performance Measures Independent of Adjustment: an Explanation and Extension of Taguchi’s Signal-To-Noise Ratios. Technometrics, 29(3), 253-265, 1987. 
1247
1248
[12] Box G. Signal-to-noise ratios, performance criteria, and transformations. Technometrics, 30(1): 1-17, 1988.
1249
1250
[13] Nair V N, Abraham B, MacKay J, et al. Taguchi's parameter design: a panel discussion. Technometrics, 34(2): 127-161, 1992.
1251
1252
[14] Tsui K L. An overview of Taguchi method and newly developed statistical methods for robust design. IIE Transactions, 24(5): 44-57, 1992.
1253
1254
[15] Copeland, K. A. and Nelson, P. R.  Dual Response Optimization via Direct Function Minimization, Journal of Quality Technology, 28(3), 331-336, 1996.
1255
1256
[16] Shoemaker A C, Tsui K L, Wu C F J. Economical experimentation methods for robust design. Technometrics'','' 33(4): 415-427, 1991.
1257
1258
[17] Khattree R. Robust parameter design: A response surface approach. Journal of Quality Technology, 28(2): 187-198, 1996.
1259
1260
[18] Pregibon, Daryl. Generalized linear models. The Annals of Statistics, 12(4): 1589–1596, 1984.  
1261
1262
[19] Lee S B, Park C.  Development of robust design optimization using incomplete data. Computers & industrial engineering, 50(3): 345-356, 2006.
1263
1264
[20] Cho B R, Park C. Robust design modeling and optimization with unbalanced data. Computers & industrial engineering, 48(2): 173-180, 2005.
1265
1266
[21] Jayaram, J.S.R. and Ibrahim, Y.  Multiple response robust design and yield maximization. International Journal of Quality & Reliability Management, 16(9): 826-837, 1999.  
1267
1268
[22] Köksoy O, Doganaksoy N. Joint optimization of mean and standard deviation using response surface methods. Journal of Quality Technology, 35(3): 239-252, 2003.
1269
1270
[23] Shin S, Cho B R. Studies on a biobjective robust design optimization problem. IIE Transactions, 41(11): 957-968, 2009.
1271
1272
[24] Le T H, Tang M, Jang J H, et al. Integration of Functional Link Neural Networks into a Parameter Estimation Methodology[J]. Applied Sciences, 2021, 11(19): 9178.
1273
1274
[25] Picheral L, Hadj-Hamou K, Bigeon J.  Robust optimization based on the Propagation of Variance method for analytic design models. International Journal of Production Research, 52(24): 7324-7338, 2014. 
1275
1276
[26] Mortazavi A, Azarm S, Gabriel S A. Adaptive gradient-assisted robust design optimization under interval uncertainty. Engineering Optimization, 45(11): 1287-1307, 2013. 
1277
1278
[27] Bashiri, M., Moslemi, A., & Akhavan Niaki, S. T. Robust multi‐response surface optimization: a posterior preference approach. International Transactions in Operational Research, 27(3), 1751-1770, 2020. 
1279
1280
[28] Yang, S., Wang, J., Ren, X., & Gao, T. Bayesian online robust parameter design for correlated multiple responses. Quality Technology & Quantitative Management, 18(5), 620-640, 2021. 
1281
1282
<div id="11">
1283
[29] Sohrabi M K, Azgomi H. A survey on the combined use of optimization methods and game theory. Archives of Computational Methods in Engineering'','' 27(1): 59-80, 2020. 
1284
1285
[30] Shoham Y. Computer science and game theory. Communications of the ACM, 51(8): 74-79, 2008. 
1286
1287
[31] Manshaei M H, Zhu Q, Alpcan T, et al. Game theory meets network security and privacy. ACM Computing Surveys (CSUR), 45(3): 1-39, 2013.
1288
1289
[32] Pillai P S, Rao S. Resource allocation in cloud computing using the uncertainty principle of game theory. IEEE Systems Journal, 10(2): 637-648, 2014. 
1290
1291
[33] Lemaire J. An application of game theory: cost allocation. ASTIN Bulletin: The Journal of the IAA. 1984; 14(1): 61-81, 2014.
1292
1293
[34] Barough A S, Shoubi M V, Skardi M J E. Application of game theory approach in solving the construction project conflicts. Procedia-Social and Behavioral Sciences'','' 58: 1586-1593, 2012. 
1294
1295
[35] Gale D, Kuhn H W, Tucker A W. Linear programming and the theory of game. Activity analysis of production and allocation, 13: 317-335, 1951.
1296
1297
[36] Mangasarian O L, Stone H. Two-person nonzero-sum games and quadratic programming. Journal of mathematical analysis and applications, 9(3): 348-355, 1964. 
1298
1299
[37] Leboucher C, Shin H S, Siarry P, et al. Convergence proof of an enhanced particle swarm optimization method integrated with evolutionary game theory. Information Sciences, 346: 389-411, 2016. 
1300
1301
[38] Annamdas K K, Rao S S. Multi-objective optimization of engineering systems using game theory and particle swarm optimization. Engineering optimization, 41(8): 737-752, 2009. 
1302
1303
[39] Zamarripa, M. A., Aguirre, A. M., Méndez, C. A., & Espuña, A.  Mathematical programming and game theory optimization-based tool for supply chain planning in cooperative/competitive environments. Chemical Engineering Research and Design, 91(8): 1588-1600, 2013.
1304
1305
[40] Dai, L., Tang, M., & Shin, S. Stackelberg game approach to a bi-objective robust design optimization. Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería, 37(4), 2021.
1306
1307
[41] Matejaš J, Perić T. A new iterative method for solving multiobjective linear programming problem. Applied mathematics and computation, 243: 746-754, 2014. 
1308
1309
[42] Doudou, M., Barcelo-Ordinas, J. M., Djenouri, D., Garcia-Vidal, J., Bouabdallah, A., & Badache, N. Game theory framework for MAC parameter optimization in energy-delay constrained sensor networks. ACM Transactions on Sensor Networks (TOSN), 12(2), 1-35, 2016.
1310
</div>
1311
[43] Muthoo A. Bargaining theory with applications. Cambridge University Press, 1999.
1312
1313
[44] Goodpaster G. Rational decision-making in problem-solving negotiation: Compromise, interest-valuation, and cognitive error. Ohio St. J. on Disp. Resol. 8: 299, 1992.
1314
1315
[45] Nash, J. F. The Bargaining Problem.  Econometrica, 18(2):155-162, 1950. 
1316
1317
[46] Nash, J. F.  Two-Person Cooperative Games. Econometrica, 21(1):128-140, 1953. 
1318
1319
[47] Kalai E, Smorodinsky M. Other solutions to Nash's bargaining problem.  Econometrica: Journal of the Econometric Society, 513-518, 1975. 
1320
1321
[48] Rubinstein A.  Perfect equilibrium in a bargaining model.  Econometrica: Journal of the Econometric Society, 97-109, 1982. 
1322
1323
[49] Köksoy, O. A nonlinear programming solution to robust multi-response quality problem. Applied mathematics and computation, 196(2), 603-612, 2008. 
1324
1325
[50] Goethals, P. L., & Cho, B. R. Extending the desirability function to account for variability measures in univariate and multivariate response experiments. Computers & Industrial Engineering, 62(2), 457-468, 2012. 
1326
1327
[51] Wu, F. C., & Chyu, C. C.  Optimization of robust design for multiple quality characteristics. International Journal of Production Research, 42(2), 337-354, 2004. 
1328
1329
[52] Shin, S., & Cho, B. R. Bias-specified robust design optimization and its analytical solutions. Computers & Industrial Engineering, 48(1), 129-140, 2005. 
1330
1331
[53] Tang, L. C., & Xu, K. A unified approach for dual response surface optimization. Journal of quality technology, 34(4), 437-447, 2002. 
1332
1333
[54]  Steenackers, G., & Guillaume, P. Bias-specified robust design optimization: A generalized mean squared error approach. Computers & Industrial Engineering, 54(2), 259-268, 2008. 
1334
1335
[55] Mandal W A. Weighted Tchebycheff optimization technique under uncertainty. Annals of Data Science, 1-23, 2020.
1336
1337
[56] Dächert K, Gorski J, Klamroth K. An augmented weighted Tchebycheff method with adaptively chosen parameters for discrete bicriteria optimization problems. Computers & Operations Research, 39(12): 2929-2943, 2012.
1338
1339
[57] Steuer R E, Choo E U. An interactive weighted Tchebycheff procedure for multiple objective programming. Mathematical programming, 26(3): 326-344, 1983.
1340
1341
[58] Rausser G C, Swinnen J, Zusman P. Political power and economic policy: Theory, analysis, and empirical applications''.'' Cambridge University Press, 2011.
1342
1343
[59] Myerson R B. Game Theory: Analysis of Conflict. Harvard University Press, Cambridge, MA. London England, 1991.
1344
1345
[60] Thomson W. Cooperative models of bargaining. In: Handbook of Game Theory with Economic Applications, 2: 1237-1284, 1994.
1346

Document information

Published on 20/06/22
Accepted on 08/06/22
Submitted on 18/03/22

Volume 38, Issue 2, 2022
DOI: 10.23967/j.rimni.2022.06.002
Licence: CC BY-NC-SA license
