<!-- metadata commented in wiki content
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
''' Integration of Game theory and Response Surface Method for Robust Parameter Design'''</div>
-->
== Abstract ==

Robust parameter design (RPD) seeks to determine the optimal settings of controllable factors that minimize the variation in quality performance caused by noise factors. The dual response surface approach is one of the most commonly applied approaches in RPD; it attempts to simultaneously minimize the process bias (i.e., the deviation of the process mean from the target) and the process variability (i.e., the variance or standard deviation). To address the tradeoff between the process bias and variability, a number of RPD methods reported in the literature assign relative weights or priorities to the two objectives. However, these weights or priorities are often determined subjectively by a decision maker (DM), who in some situations may not have enough prior knowledge to judge the relative importance of the process bias and variability. To address this problem, this paper proposes an alternative approach that integrates bargaining game theory into an RPD model to determine the optimal factor settings. The process bias and variability are treated as two rational players that negotiate how the input variable values should be assigned. The Nash bargaining solution technique is then applied to determine an optimal, fair, and unique solution (i.e., a balanced agreement point) for this game. This solution may provide a valuable recommendation for the DM to consider before making the final decision. Because the interaction between the process bias and variability is taken into account, the proposed method does not require any preference information from the DM. To verify the efficiency of the obtained solutions, the lexicographic weighted Tchebycheff method, which is often used in bi-objective optimization problems, is utilized. Finally, in two numerical examples, the proposed method provides non-dominated tradeoff solutions for the particular convex Pareto frontier cases considered. Furthermore, sensitivity analyses associated with the disagreement and agreement points are conducted for verification purposes.

'''Keywords''': Robust parameter design, lexicographic weighted Tchebycheff, bargaining game, response surface methodology, dual response model
==1. Introduction==
Due to fierce competition among manufacturing companies and increasing customer quality requirements, robust parameter design (RPD), an essential method for quality management, is becoming ever more important. RPD was developed to decrease the degree of unexpected deviation from the requirements proposed by customers or a decision maker (DM) and thereby helps to improve the quality and reliability of products and manufacturing processes. The central idea of RPD is to build quality into the design process by identifying an optimal set of control factors that make the system impervious to variation [1]. The objectives of RPD are to ensure that the process mean is at the desired level and that the process variability is minimized. However, in reality, a simultaneous realization of these two objectives is sometimes not possible. As Myers et al. [2] stated, there are circumstances where the process variability is robust against the effects of noise factors but the mean value is still far away from the target. In other words, a set of parameter values that satisfies these two conflicting objectives may not exist. Hence, the tradeoffs that exist between the process mean and variability are undoubtedly crucial in determining a set of controllable parameters that optimize quality performance.

The tradeoff issue between the process bias and variability can be handled by assigning different weights or priority orders. Weight-based methods assign different weights to the process bias and variability to establish their relative importance and transform the bi-objective problem into a single-objective problem. The two most commonly applied weight-based methods are the mean square error model [3] and the weighted sum model [4,5]. Alternatively, priority-based methods sequentially assign priorities to the objectives (i.e., minimization of the process bias or variability). For instance, if the minimization of the process bias is prioritized, then the process variability is optimized under a constraint of zero process bias [6]. Other priority-based approaches are discussed by Myers and Carter [7], Copeland and Nelson [8], Lee et al. [9], and Shin and Cho [10]. In both weight-based and priority-based methods, the relative importance can be assigned according to the DM's preference, which is inherently subjective. Additionally, there are situations in which the DM may be unsure about the relative importance of the process parameters in bi-objective optimization problems.

Therefore, this paper aims to solve this tradeoff problem from a game theory point of view by integrating bargaining game theory into the RPD procedure. First, the process bias and variability are considered as two rational players in the bargaining game, and the relationship functions for the process bias and variability are separately estimated by using the response surface methodology (RSM). These estimated functions are regarded as utility functions that represent the players' preferences and objectives in this bargaining game. Second, a disagreement point, signifying a pair of values that the players expect to receive when the negotiation between them breaks down, is defined by using minimax-value theory, which is often used as a decision rule in game theory. Third, the Nash bargaining solution technique is incorporated into the RPD model to obtain the optimal solutions. Then, to verify the efficiency of the obtained solutions, a lexicographic weighted Tchebycheff approach is used to generate the associated Pareto frontier so that it can be visually observed whether the obtained solutions lie on the Pareto frontier. Two numerical examples are conducted to show that the proposed model can efficiently locate well-balanced solutions. Finally, a series of sensitivity analyses is conducted to demonstrate the effects of the disagreement point value on the final agreed solutions.

This paper is laid out as follows: Section 2 discusses the existing literature on RPD and game theory applications. In Section 3, the dual response optimization problem, the lexicographic weighted Tchebycheff method, and the Nash bargaining solution are explained. Next, in Section 4, the proposed model is presented. Then, in Section 5, two numerical examples are addressed to show the efficiency of the proposed method, and sensitivity studies are performed to reveal the influence of the disagreement point values on the solutions. In Section 6, conclusions and further research directions are discussed.
==2. Literature review ==

===2.1 Robust parameter design ===

Taguchi introduced both experimental design concepts and parameter tradeoff considerations into the quality design process. In addition, Taguchi developed an orthogonal-array-based experimental design and used the signal-to-noise (SN) ratio to measure the effects of factors on the desired output responses. As discussed by Leon et al. [11], in some situations the SN ratio is not independent of the adjustment parameters, so using the SN ratio as a performance measure may lead to design parameter settings that are far from optimal. Box [12] also argued that statistical analyses based on experimental data should be introduced, rather than relying only on the maximization of the SN ratio. The controversy about the Taguchi method is further discussed and addressed by Nair et al. [13] and Tsui [14].

Based on Taguchi's philosophy, further statistically based methods for RPD have been developed. Vining and Myers [6] introduced a dual response method, which takes zero process bias as a constraint and minimizes the variability. Copeland and Nelson [15] proposed an alternative method for the dual response problem by introducing a predetermined upper limit on the deviation from the target. Similar approaches related to the upper-limit concept are further discussed by Shin and Cho [10] and Lee et al. [9]. For the estimation phase, Shoemaker et al. [16] and Khattree [17] suggested the utilization of response surface model approaches. However, when the homoscedasticity assumption of regression is violated, other methods, such as the generalized linear model, can be applied [18]. Additionally, in cases of incomplete data, Lee and Park [19] suggested an expectation-maximization (EM) algorithm to provide an estimation of the process mean and variance, while Cho and Park [20] suggested a weighted least squares (WLS) method. However, Lin and Tu [3] pointed out that the dual response approach had some deficiencies and proposed an alternative method called the mean-squared-error (MSE) model. Jayaram and Ibrahim [21] modified the MSE model by incorporating capability indexes and considered the minimization of the total deviation of capability indexes to achieve a multiple response robust design. More flexible alternative methods that can obtain Pareto optimal solutions based on a weighted sum model were introduced by many researchers [4,5,22]. In fact, the weighted sum model is more flexible than conventional dual response models, but it cannot be applied when the Pareto frontier is nonconvex [23]. In order to overcome this problem, Shin and Cho [23] proposed an alternative method, called the lexicographic weighted Tchebycheff method, by using an <math display="inline">L_{\infty}</math> norm.

More recently, RPD has become more widely used not only in manufacturing but also in other science and engineering areas, including pharmaceutical drug development. New approaches such as simulation, multiple optimization techniques, and neural networks (NN) have been integrated into RPD. For example, Le et al. [24] proposed a new RPD model by introducing an NN approach to estimate the dual response functions. Additionally, Picheral et al. [25] estimated the process bias and variance functions by using the propagation of variance method. Two new robust optimization methods, the gradient-assisted and quasi-concave gradient-assisted robust optimization methods, were presented by Mortazavi et al. [26]. Bashiri et al. [27] proposed a robust posterior preference method that introduced a modified robust estimation method to reduce the effects of outliers on function estimation and used a non-robustness distance to compare non-dominated solutions; however, the responses are assumed to be uncorrelated. To address the correlation among multiple responses and the variation of noise factors over time, Yang et al. [28] extended offline RPD to online RPD by applying Bayesian seemingly unrelated regression and time series models so that the set of optimal controllable factor values can be adjusted in real time.
===2.2 Game Theory ===
The field of game theory provides mathematical models of strategic interactions among rational agents. These models serve as analytical tools to find the optimal choices in interactive decision-making problems. Game theory is often applied in situations where the "roles and actions of multiple agents affect each other" [29]. Thus, game theory serves as an analysis model that aims at helping agents make optimal decisions when the agents are rational and their decisions are interdependent. Because of this interdependence, each agent has to consider the other agents' possible decisions when formulating a strategy. Owing to these characteristics, game theory is widely applied in multiple disciplines, such as computer science [30], network security and privacy [31], cloud computing [32], cost allocation [33], and construction [34]. Because game theory has a degree of conceptual overlap with optimization and decision-making, these three concepts (i.e., game theory, optimization, and decision-making) are often combined. According to Sohrabi and Azgom [29], there are three basic combinations of these concepts: game theory and optimization; game theory and decision-making; and game theory, optimization, and decision-making.

The first type of combination (i.e., game theory and optimization) has two possible forms. In the first, optimization techniques are used to solve a game problem and prove the existence of an equilibrium [35,36]. In the second, game theory concepts are integrated to solve an optimization problem. For example, Leboucher et al. [37] used evolutionary game theory to improve the performance of a particle swarm optimization (PSO) approach. Additionally, Annamdas and Rao [38] solved a multi-objective optimization problem by using a combination of game theory and a PSO approach. The second type of combination (i.e., game theory and decision-making) integrates game theory to solve a decision-making problem, as discussed by Zamarripa et al. [39], who applied game theory to assist with decision-making problems in supply chain bottlenecks. More recently, Dai et al. [40] attempted to integrate the Stackelberg leadership game into an RPD model to solve a dual response tradeoff problem. The third type of combination (i.e., game theory, optimization, and decision-making) applies game theory and optimization to a decision-making problem. For example, a combination of linear programming and game theory was introduced to solve a decision-making problem [41]. Doudou et al. [42] used a convex optimization method and game theory to settle a wireless sensor network decision-making problem.
===2.3 Bargaining game===
A bargaining game can be applied in a situation where a set of agents have an incentive to cooperate but have conflicting interests over how to distribute the payoffs generated from the cooperation [43]. Hence, a bargaining game essentially has two features: cooperation and conflict. Because the bargaining game considers cooperation and conflicts of interest as a joint problem, it is more complicated than a simple cooperative game that ignores individual interests and maximizes the group benefit [44]. Three typical bargaining game examples are a price negotiation problem between product sellers and buyers, a union and firm negotiation problem over wages and employment levels, and a simple cake distribution problem.

Significant discussions of the bargaining game can be found in Nash [45,46]. Nash [45] presented a classical bargaining game model aimed at solving an economic bargaining problem and used a numerical example to prove the existence of multiple solutions. In addition, Nash [46] extended his research to a more general form and demonstrated that there are two possible approaches to solving a two-person cooperative bargaining game. The first approach, called the negotiation model, obtains the solution through an analysis of the negotiation process. The second approach, called the axiomatic method, solves a bargaining problem by specifying axioms or properties that the solution should possess. For the axiomatic method, Nash specified four axioms that the agreed solution, called the Nash bargaining solution, should satisfy. Based on Nash's philosophy, many researchers attempted to modify Nash's model and proposed a number of different solutions based on different axioms. One famous modified model replaces one of Nash's axioms in order to reach a fairer unique solution, called the Kalai-Smorodinsky solution [47]. Later, Rubinstein [48] addressed the bargaining problem by specifying a dynamic model that describes the bargaining procedure.
==3. Models and methods ==
===3.1 Bi-objective robust design model===

A general bi-objective optimization problem involves the simultaneous optimization of two conflicting objectives (e.g., <math>f_1({\boldsymbol{\text{x}}})</math> and <math>f_2({\boldsymbol{\text{x}}})</math>) and can be described in mathematical terms as <math>\min[f_1({\boldsymbol{\text{x}}}), f_2({\boldsymbol{\text{x}}})]</math>. The primary objective of RPD is to minimize the deviation of the performance of the production process from the target value as well as the variability of the performance, where the performance deviation can be represented by the process bias and the performance variability can be represented by the standard deviation or variance. For example, Koksoy [49], Goethals and Cho [50], and Wu and Chyu [51] utilized estimated variance functions to represent process variability. On the other hand, Shin and Cho [10,52] and Tang and Xu [53] used estimated standard deviation functions to measure process variability. Steenackers and Guillaume [54] discussed the effect of different response surface expressions on the optimal solutions and concluded that both the standard deviation and the variance can capture the process variability well but can lead to different optimal solution sets. Since it can be infeasible to minimize the process bias and variability simultaneously, the simultaneous optimization of these two process parameters, which are separately estimated by applying RSM, is transformed into a tradeoff problem between the process bias and variability. This tradeoff problem can be formally expressed as a bi-objective optimization problem [23] as:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>min</math>
| <math>\left[ \left\{ \hat{\mu }\left( \boldsymbol{\text{x}}\right) -\tau \right\}^{2},\, \hat{\sigma }^{2} (\boldsymbol{\text{x}})\right]^{T}</math> 
|-
|<math>s.t.</math>
|<math display="inline"> {\boldsymbol{\text{x}}}\in X\,</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (1)
|}
where <math display="inline">{\boldsymbol{\text{x}}}</math>, <math display="inline">X</math>, <math display="inline">\tau</math>, <math display="inline">{\left\{ \hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau \right\} }^{2}</math>, and <math display="inline">{\hat{\sigma }}^{2}({\boldsymbol{\text{x}}})</math> represent a vector of design factors, the set of feasible solutions under specified constraints, the target process mean value, and the estimated functions for process bias and variability, respectively.
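The following Python sketch illustrates how the two objectives in Eq.(1) can be evaluated once the response surfaces have been fitted. All coefficient values and the target value below are hypothetical placeholders, not values taken from this paper; the sketch only shows the structure of the two quadratic response surface models and the resulting bias and variance objectives.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical second-order RSM coefficients (placeholders, not values from this paper)
a0 = 72.0
a1 = np.array([0.5, -0.3, 0.1])        # linear terms of the mean model
G = np.array([[0.3, 0.0, 0.4],         # quadratic/interaction terms of the mean model
              [0.0, 1.2, 0.7],
              [0.4, 0.7, 1.8]])
b0 = 2.5
b1 = np.array([0.4, -0.4, 0.5])        # linear terms of the variance model
D = np.array([[0.5, -0.2, 0.3],
              [-0.2, 0.6, -0.1],
              [0.3, -0.1, 0.9]])
tau = 75.0                             # target process mean (assumed)

def mu_hat(x):
    """Estimated process mean: a quadratic response surface in x."""
    return a0 + x @ a1 + x @ G @ x

def var_hat(x):
    """Estimated process variance: a second quadratic response surface."""
    return b0 + x @ b1 + x @ D @ x

def bias_sq(x):
    """First objective of Eq. (1): squared process bias."""
    return (mu_hat(x) - tau) ** 2

x = np.array([0.1, -0.2, 0.3])
print(bias_sq(x), var_hat(x))          # the two quantities to be traded off
</syntaxhighlight>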
===3.2 Lexicographic weighted Tchebycheff method===

A bi-objective robust design problem is generally addressed by introducing a set of parameters, determined by a DM, that represents the relative importance of the two objectives. With the introduced parameters, the bi-objective functions can be transformed into a single integrated function, and the bi-objective optimization problem can then be solved by simply optimizing the integrated function. One way to construct this integrated function is by using the weighted sum of the distances between the optimal solution and the estimated functions. Different ways of measuring distance can lead to different solutions, and one of the most common measures is the <math display="inline">{L}_{p}</math> metric, where <math>p=1,2,\, \mbox{or}\, \infty</math>. When <math>p=1</math>, the metric is called the Manhattan metric, whereas when <math display="inline">p=\infty</math>, it is named the Tchebycheff metric [47]. The utopia point represents the reference point for applying the <math display="inline">L_{\infty}</math> metric in the weighted Tchebycheff method and can be obtained by minimizing each objective function separately. Weak Pareto optimal solutions can then be obtained by introducing different weights:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\mathrm{min}\,\left(\displaystyle\sum _{i=1}^{p}{w}_{i}\left| {f}_{i}\left( \boldsymbol{\text{x}}\right) -{u}_{i}^{\ast }\right| ^{p}\right)^{\frac{1}{p}}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (2)
|}
where <math display="inline">{u}_{i}^{\ast }</math> and <math display="inline">{w}_{i}</math> denote the utopia point values and the weights associated with the objective functions, respectively. When <math>p=\infty</math>, the above function (i.e., Eq.(2)) considers only the largest deviation. Although the weighted Tchebycheff method is an efficient approach, its main drawback is that only weak non-dominated solutions can be guaranteed [56], which is obviously not optimal for the DM. Therefore, Steuer and Choo [57] introduced an interactive weighted Tchebycheff method, which can generate every non-dominated point provided that the weights are selected appropriately. Shin and Cho [23] introduced the lexicographic weighted Tchebycheff method to the RPD area. This method is proved to be efficient and capable of generating all Pareto optimal solutions when the process bias and variability are treated as a bi-objective problem. The mathematical model is shown below [23]:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>min</math>
| <math>\left[\,\xi ,\, \left[ {\left\{ \hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau \right\} }^{2}-{u}_{1}^{\ast }\right] +\left[{\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}\right) -{u}_{2}^{\ast }\right]\right]</math>
|-
|<math>s.t.</math>
|<math> \, \lambda \left[{\left\{ \hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau \right\} }^{2}-{u}_{1}^{\ast }\right]\leq \xi</math>
|-
|
|<math>\left( 1-\lambda \right) \left[{\hat{\sigma }}^{2}\left({\boldsymbol{\text{x}}}\right) -{u}_{2}^{\ast }\right]\leq \xi</math>
|-
|
|<math>{\boldsymbol{\text{x}}}\in X</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (3)
|}
where <math>\xi</math> and <math>\lambda</math> represent a non-negative variable and a weight term associated with the process bias and variability, respectively. The lexicographic weighted Tchebycheff method is utilized as a verification method in this paper.
===3.3 Nash bargaining solution===

A two-player bargaining game can be represented by a pair <math display="inline">\, (U,d)</math>, where <math display="inline">U\subset {R}^{2}</math> and <math display="inline">d\in {R}^{2}</math>. <math display="inline">U=({u}_{1}({\boldsymbol{\text{x}}}){,u}_{2}({\boldsymbol{\text{x}}}))</math> denotes the set of obtainable payoff pairs of the two players, where <math display="inline">{u}_{1}({\boldsymbol{\text{x}}})</math> and <math display="inline">{\, u}_{2}\left({\boldsymbol{\text{x}}}\right) \,</math> represent the utility functions for players 1 and 2, respectively, and <math display="inline">{\boldsymbol{\text{x}}}{\, =}{(}{x}_{1},\, {x}_{2})\,</math> denotes a vector of actions taken by the players. <math display="inline">d</math> (<math display="inline">{d=(d}_{1},{d}_{2})</math>), defined as the disagreement point, represents the payoffs that each player will gain from this game when the two players fail to reach a satisfactory agreement. In other words, the disagreement point values are the payoffs that each player can expect to receive if the negotiation breaks down. Assuming <math display="inline">{u}_{i}({\boldsymbol{\text{x}}})>{d}_{i}</math> where <math display="inline">{u}_{i}({\boldsymbol{\text{x}}})\in U</math> for <math display="inline">\, i\, =1,2</math>, the set <math display="inline">U\cap \left\{ \left( {u}_{1}({\boldsymbol{\text{x}}}),{u}_{2}({\boldsymbol{\text{x}}})\right) \in \, {R}^{2}:\, {u}_{1}({\boldsymbol{\text{x}}})\geq {d}_{1};\, {u}_{2}({\boldsymbol{\text{x}}})\geq {d}_{2}\right\}</math> is non-empty. As suggested by the expression of the Nash bargaining game <math display="inline">(U, d)</math>, the Nash bargaining solution is affected by both the reachable utility range (<math display="inline">U</math>) and the disagreement point value (<math display="inline">d</math>). Since <math display="inline">U</math> cannot be changed, rational players will choose a disagreement point value that optimizes their bargaining position. According to Myerson [59], there are three possible ways to determine the value of a disagreement point. One standard way is to calculate the minimax value for each player:
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
| 
{| style="text-align: center; margin:auto;width: 100%;" 
|-
| style="text-align: center;" | <math>{d}_{1}=\mathrm{min}\,\mathrm{max}\,{u}_{1}({x}_{1},{x}_{2})\,\quad \mbox{and} \,\quad{d}_{2}=\mathrm{min}\,\mathrm{max}\,{u}_{2}({x}_{1},{x}_{2})</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |(4)
|}
To be more specific, Eq.(4) states that, for each possible action of player 2, player 1 has a corresponding best response strategy. Then, among all those best response strategies, player 1 identifies the one that returns the minimum payoff, and this payoff is defined as the disagreement point value. Following this logic, player 1 is guaranteed to receive at least this payoff. Another possible way of determining the disagreement point value is to derive it as an effective and rational threat that ensures the establishment of an agreement. The last possibility is to set the disagreement point as the focal equilibrium of the game.
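For a concrete feel for the minimax rule in Eq.(4), the short Python sketch below computes a disagreement point for a small two-player game with discrete actions; the payoff tables are invented purely for illustration and have no connection to the RPD models discussed later.

<syntaxhighlight lang="python">
import numpy as np

# Invented payoff tables: rows = player 1's actions, columns = player 2's actions
u1 = np.array([[3.0, 1.0, 4.0],
               [2.0, 5.0, 2.0]])
u2 = np.array([[2.0, 4.0, 1.0],
               [3.0, 2.0, 5.0]])

# Eq. (4): for each opponent action take the best response, then the worst of these
d1 = u1.max(axis=0).min()   # player 1's guaranteed (minimax) payoff
d2 = u2.max(axis=1).min()   # player 2's guaranteed (minimax) payoff
print("disagreement point:", (d1, d2))
</syntaxhighlight>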
Nash proposed four axioms that the bargaining game solution should possess [58,59]:

:* Pareto optimality
:* Independence of equivalent utility representation (IEUR)
:* Symmetry
:* Independence of irrelevant alternatives (IIA)
The first axiom states that the solution should be Pareto optimal, which means it should not be dominated by any other point. If the notation <math display="inline">f\left( U,d\right) =\left( {f}_{1}\left( U,d\right) ,\, {f}_{2}\left( U,d\right) \right) \,</math> stands for the Nash bargaining solution to the bargaining problem <math display="inline">(U,d)</math>, then the solution <math display="inline">{u}^{\ast }=</math><math>\left( {u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\mathit{\boldsymbol{\ast }}}\right) ,{u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right) \right)</math> is Pareto efficient if and only if there exists no other point <math display="inline">{u}^{'}=</math><math>\left( {u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\mathit{\boldsymbol{'}}}\right) ,\, {u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{'}}\right) \right) \in U</math> such that <math display="inline">{u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{'}}\right) >{u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right) ;{u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\mathit{\boldsymbol{'}}}\right) \geq {u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right)</math>  or <math display="inline">{\, u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{'}}\right) \geq {u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right) ;</math> <math display="inline">{u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{'}}\right) >{u}_{2}\left( {{\boldsymbol{\text{x}}}}^{\boldsymbol{\ast }}\right)</math>. This implies that there is no alternative feasible solution that is better for one player without worsening the payoff of the other player.

The second axiom, IEUR, also referred to as scale covariance, states that the solution should be independent of positive affine transformations of the utilities. In other words, if a new bargaining game <math display="inline">(G,w)</math> exists, where <math>G=\left\{ {\alpha }_{1}{u}_{1}({\boldsymbol{\text{x}}})+ {\beta }_{1},{\alpha }_{2}{u}_{2}({\boldsymbol{\text{x}}})+{\beta }_{2}\right\}</math> and <math>w=\left({\alpha }_{1}{d}_{1}+{\beta }_{1},{\alpha }_{2}{d}_{2}+{\beta }_{2}\right)\,</math> with <math>\left({u}_{1}({\boldsymbol{\text{x}}}),{u}_{2}({\boldsymbol{\text{x}}})\right)\in U</math> and <math> {\alpha }_{1}>0,{\alpha }_{2}>0</math>, then the solution for this new bargaining game (i.e., <math display="inline">f(G,w)</math>) can be obtained by applying the same transformations, as demonstrated by Eq.(5) and [[#img-1|Figure 1]]:
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
| 
{| style="text-align: center; margin:auto;width: 100%;" 
|-
| style="text-align: center;" |<math>\, \, f(G,\, w)=({\alpha }_{1}{f}_{1}(U,d)+{\beta }_{1},\, {\alpha }_{2}{f}_{2}(U,\, d)+{\beta }_{2})\,</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |(5)
|}
<div id='img-1'></div>
{| style="text-align: center; border: 1px solid #BBB; margin: 1em auto; width: auto;max-width: auto;"
|-
|style="padding:10px;"| [[File:Dail2.png|450px]]
|- style="text-align: center; font-size: 75%;"
| colspan="1" style="padding:10px;"| '''Figure 1'''. Explanation of IEUR axiom
|}
The third axiom, symmetry, states that the solution should be symmetric when the bargaining positions of the two players are completely symmetric. In other words, if there is no information that can be used to distinguish one player from the other, then the solution should also be indistinguishable between the players [46].

As shown in [[#img-2|Figure 2]], the last axiom states that if <math display="inline">{U}_{1}\subset {U}_{2}</math> and <math display="inline">f({U}_{2},d)</math> is located within the feasible area <math display="inline">{U}_{1}</math>, then <math display="inline">f\left( {U}_{1,}d\right) =</math><math>f({U}_{2},d)</math> [59].
<div id='img-2'></div>
{| style="text-align: center; border: 1px solid #BBB; margin: 1em auto; width: auto;max-width: auto;"
|-
|style="padding:10px;"| [[File:Draft_Shin_691882792-image2.png|centre|374x374px|]]
|- style="text-align: center; font-size: 75%;"
| colspan="1" style="padding:10px;"| '''Figure 2'''. Explanation of IIA axiom
|}
The solution function introduced by Nash [46], which satisfies all four axioms identified above, can be defined as follows:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">f\left( U,\, d\right) =Max\prod _{i=1,2}^{}({u}_{i}({\boldsymbol{\text{x}}})-{d}_{i})=</math><math>Max\, ({u}_{1}({\boldsymbol{\text{x}}})-{d}_{1})({u}_{2}({\boldsymbol{\text{x}}})-{d}_{2})</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (6)
|}
where <math display="inline">{u}_{i}({\boldsymbol{\text{x}}})>{d}_{i},\, i=1,2</math>. Intuitively, this function seeks the solution that maximizes each player's gain in payoff between the cooperative agreement point and the disagreement point. In simpler terms, Nash selects the agreement point <math display="inline">({u}_{1}\left( {{\boldsymbol{\text{x}}}}^{\ast }\right) ,{u}_{2}({{\boldsymbol{\text{x}}}}^{\ast }))</math> that maximizes the product of the utility gains over the disagreement point <math display="inline">\, ({d}_{1},{d}_{2})</math>.
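The following Python sketch illustrates Eq.(6) on a toy one-dimensional bargaining problem by maximizing the Nash product with scipy; the utility functions and disagreement payoffs are invented for illustration and are not part of this paper.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Toy utilities over a single shared action x in [0, 1] (invented for illustration)
u1 = lambda x: 1.0 - x[0]                  # player 1 prefers a small x
u2 = lambda x: x[0]                        # player 2 prefers a large x
d1, d2 = 0.2, 0.3                          # assumed disagreement payoffs

# Eq. (6): maximize the Nash product (u1 - d1)(u2 - d2) subject to u_i >= d_i
neg_nash = lambda x: -(u1(x) - d1) * (u2(x) - d2)
cons = [{'type': 'ineq', 'fun': lambda x: u1(x) - d1},
        {'type': 'ineq', 'fun': lambda x: u2(x) - d2}]
res = minimize(neg_nash, x0=np.array([0.5]), bounds=[(0.0, 1.0)], constraints=cons)
print("agreement point:", (u1(res.x), u2(res.x)))   # for this toy problem: about (0.45, 0.55)
</syntaxhighlight>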
== 4. The proposed model ==

The proposed method integrates bargaining game concepts into the tradeoff between the process bias and variability, so that the interaction between the process bias and variability is incorporated and a unique optimal solution can be obtained. The detailed procedure, which includes the problem description, the calculation of the response functions and the disagreement point, the bargaining game based RPD model, and the verification, is illustrated in [[#img-3|Figure 3]]. As illustrated in [[#img-3|Figure 3]], the objective of the proposed method is to address the tradeoff between the process bias and variability. In the calculation phase, a utopia point is calculated based on the separately estimated functions for the process bias and variability. However, this utopia point lies in an infeasible region, which means that a simultaneous minimization of the process bias and variability is unachievable. The disagreement point is calculated by, first, optimizing only one of the objective functions (i.e., the estimated process variability or process bias function) to obtain a solution set and, second, inserting the obtained solution set into the other objective function to generate the corresponding value. In the proposed model, based on the obtained disagreement point, the Nash bargaining solution concept is applied to solve the bargaining game. In the verification phase, the lexicographic weighted Tchebycheff method is applied to generate the associated Pareto frontier, so that the obtained game solution can be compared with other efficient solutions.
<div id='img-3'></div>
{| style="text-align: center; border: 1px solid #BBB; margin: 1em auto; width: auto;max-width: auto;"
|-
|style="padding:10px;"| [[File:New2.png|centre|820x820px|]]
|- style="text-align: center; font-size: 75%;"
| colspan="1" style="padding:10px;"| '''Figure 3'''. The proposed procedure for integrating the bargaining game into RPD
|}
An integration of the Nash bargaining game model involves three steps. In the first step, the two players and their corresponding utility functions are defined. The process bias is defined as player A, and the variability is regarded as player B. The RSM-based estimated functions of both responses are regarded as the players' utility functions in this bargaining game (i.e., <math display="inline">u_{A}({\boldsymbol{\text{x}}})</math> and <math display="inline"> u_{B}({\boldsymbol{\text{x}}})</math>), where <math>{\boldsymbol{\text{x}}}</math> stands for a vector of controllable factors. The goal of each player is then to choose a set of controllable factors that minimizes its individual utility function. In the second step, a disagreement point is determined by applying the minimax-value concept identified in Eq.(4). Based on the tradeoff between the process bias and variability, the modified disagreement point functions can be defined as follows:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>{d}_{A}=\mathrm{max}\,\mathrm{min}\,{u}_{A}({\boldsymbol{\text{x}}})\,\quad \mbox{and}\,\quad {d}_{B}=\mathrm{max}\,\mathrm{min}\,{u}_{B}({\boldsymbol{\text{x}}})</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (7)
|}
In this way, both player A (i.e., the process bias) and player B (i.e., the process variability) are guaranteed a payoff no worse than their worst acceptable value. In this case, the disagreement point, defined as the maximum minimum utility value, can be calculated by minimizing only one objective (the process variability or bias) at a time. The computational functions for the disagreement point values can be formulated as:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\left\{ {d}_{A}={u}_{A}({\boldsymbol{\text{x}}})|{\boldsymbol{\text{x}}}=\arg\min{u}_{B}({\boldsymbol{\text{x}}})\qquad \mbox{and}\qquad {\boldsymbol{\text{x}}}\in X\right\} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (8)
|}

and
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\left\{ {d}_{B}={u}_{B}({\boldsymbol{\text{x}}})|{\boldsymbol{\text{x}}}=\arg\min{u}_{A}({\boldsymbol{\text{x}}})\qquad \mbox{and}\qquad{\boldsymbol{\text{x}}}\in X\right\} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (9)
|}
Thus, the idea of the proposed method for finding the optimal solutions is to perform the bargaining game from the specified disagreement point <math display="inline">({d}_{A} ,\, {d}_{B})</math> toward the Pareto frontier, as illustrated in [[#img-4|Figure 4]]. To be more specific, as demonstrated in [[#img-4|Figure 4]], if the convex curve represents all Pareto optimal solutions, then each point on the curve can be regarded as a minimum utility value for one of the two process parameters (i.e., the process variability or bias). For example, at point A, when the process bias is minimized within the feasible area, the corresponding variability value is the minimum utility value for the process variability, since other utility values would be either dominated or infeasible. These solutions may provide useful insight for a DM when the relative importance between the process bias and variability is difficult to identify.
<div id='img-4'></div>
{| style="text-align: center; border: 1px solid #BBB; margin: 1em auto; width: auto;max-width: auto;"
|-
|style="padding:10px;"| [[File:New.png|centre|391x391px|]]
|- style="text-align: center; font-size: 75%;"
| colspan="1" style="padding:10px;"| '''Figure 4'''. Solution concepts for the proposed bargaining game based RPD method<br> integrating the tradeoff between the process bias and variability
|}
In the final step, the Nash bargaining solution function <math display="inline">Max\left( {u}_{A}({\boldsymbol{\text{x}}})- {d}_{A}\right) \left( {u}_{B}({\boldsymbol{\text{x}}})-{d}_{B}\right)</math> is utilized. In an RPD problem, the objective is to minimize both the process bias and variability, so the constraint <math display="inline">{u}_{i}({\boldsymbol{\text{x}}})\leq {d}_{i},\, i=A,B</math> is applied. After the players, their utility functions, and the disagreement point are identified, the Nash bargaining solution function is applied as follows:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>\max </math>
| <math display="inline"> \left( {u}_{A}({\boldsymbol{\text{x}}})-{d}_{A}\right) \left( {u}_{B}({\boldsymbol{\text{x}}})-{d}_{B}\right) </math>
|-
|<math>s.t. </math>
|<math>{u}_{A}({\boldsymbol{\text{x}}})\leq {d}_{A},{u}_{B}({\boldsymbol{\text{x}}})\leq {d}_{B}, \qquad \mbox{and} \qquad {\boldsymbol{\text{x}}}\in \,X</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (10)
|}

where
{| style="text-align: center; margin:auto;"  
|-
|<math>u_A({\boldsymbol{\text{x}}})=\bigl(\widehat{\mu}({\boldsymbol{\text{x}}})-\tau\bigr)^2,\quad u_B({\boldsymbol{\text{x}}})=\hat{\sigma}^2({\boldsymbol{\text{x}}})\,\,\mbox{or}\,\,\hat{\sigma}({\boldsymbol{\text{x}}})</math>
|-
|<math>\hat{\mu}({\boldsymbol{\text{x}}})=\alpha_0+{\boldsymbol{\text{x}}}^T\boldsymbol{\alpha_1}+{\boldsymbol{\text{x}}}^T\boldsymbol{\Gamma}{\boldsymbol{\text{x}}},\quad \mbox{and}\quad \hat{\sigma}^2({\boldsymbol{\text{x}}})=\beta_0+{\boldsymbol{\text{x}}}^T\boldsymbol{\beta_1}+{\boldsymbol{\text{x}}}^T\Delta{\boldsymbol{\text{x}}}</math>
|-
|<math>\hat{\sigma}({\boldsymbol{\text{x}}})=\gamma_0+{\boldsymbol{\text{x}}}^T\boldsymbol{\gamma_1}+{\boldsymbol{\text{x}}}^T\Epsilon{\boldsymbol{\text{x}}}</math>
|}

and, where 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">{\boldsymbol{\text{x}}}=\left[ \, \begin{matrix}{x}_{1}\\{x}_{2}\\\, \begin{matrix}\vdots \\{x}_{n-1}\\{x}_{n}\end{matrix}\end{matrix}\right], \,{\mathit{\boldsymbol{\alpha }}}_{\mathit{\boldsymbol{1}}}=\left[ \, \begin{matrix}{\hat{\alpha }}_{1}\\{\hat{\alpha }}_{2}\\\, \begin{matrix}\vdots \\{\hat{\alpha }}_{n-1}\\{\hat{\alpha }}_{n}\end{matrix}\end{matrix}\right],\,{\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{1}}}=\, \left[ \begin{matrix}{\hat{\beta }}_{1}\\\begin{matrix}{\hat{\beta }}_{2}\\\vdots \end{matrix}\\\begin{matrix}{\hat{\beta }}_{n-1}\\{\hat{\beta }}_{n}\end{matrix}\end{matrix}\right], \,{\mathit{\boldsymbol{\gamma }}}_{\mathit{\boldsymbol{1}}}=\, \left[ \begin{matrix}{\hat{\gamma }}_{1}\\\begin{matrix}{\hat{\gamma }}_{2}\\\vdots \end{matrix}\\\begin{matrix}{\hat{\gamma }}_{n-1}\\{\hat{\gamma }}_{n}\end{matrix}\end{matrix}\right],\quad \mbox{and}\quad \boldsymbol{\Gamma}=\, \left[ \begin{matrix}\begin{matrix}{\hat{\alpha }}_{11}&{\hat{\alpha }}_{12}/2\\{\hat{\alpha }}_{12}/2&{\hat{\alpha }}_{22}\end{matrix}&\cdots &\begin{matrix}{\hat{\alpha }}_{1n}/2\\{\hat{\alpha }}_{2n}/2\end{matrix}\\\vdots &\ddots &\vdots \\\begin{matrix}{\hat{\alpha }}_{1n}/2&{\hat{\alpha }}_{2n}/2\end{matrix}&\cdots &{\hat{\alpha }}_{nn}\end{matrix}\right] </math>
|}
|-
|<math>\boldsymbol{\Delta}=\, \left[ \begin{matrix}\begin{matrix}{\hat{\beta }}_{11}&{\hat{\beta }}_{12}/2\\{\hat{\beta }}_{12}/2&{\hat{\beta }}_{22}\end{matrix}&\cdots &\begin{matrix}{\hat{\beta }}_{1n}/2\\{\hat{\beta }}_{2n}/2\end{matrix}\\\vdots &\ddots &\vdots \\\begin{matrix}{\hat{\beta }}_{1n}/2&{\hat{\beta }}_{2n}/2\end{matrix}&\cdots &{\hat{\beta }}_{nn}\end{matrix}\right],\quad \mbox{and}\quad\boldsymbol{\Epsilon}=\, \left[ \begin{matrix}\begin{matrix}{\hat{\gamma }}_{11}&{\hat{\gamma }}_{12}/2\\{\hat{\gamma }}_{12}/2&{\hat{\gamma }}_{22}\end{matrix}&\cdots &\begin{matrix}{\hat{\gamma }}_{1n}/2\\{\hat{\gamma}}_{2n}/2\end{matrix}\\\vdots &\ddots &\vdots \\\begin{matrix}{\hat{\gamma}}_{1n}/2&{\hat{\gamma }}_{2n}/2\end{matrix}&\cdots &{\hat{\gamma }}_{nn}\end{matrix}\right]</math>
|}
where <math>(d_A, d_B)</math>, <math>u_A({\boldsymbol{\text{x}}})</math>, <math>u_B({\boldsymbol{\text{x}}})</math>, <math>\hat{\mu}({\boldsymbol{\text{x}}})</math>, <math>\hat{\sigma}^2({\boldsymbol{\text{x}}})</math>, <math>\hat{\sigma}({\boldsymbol{\text{x}}})</math>, <math>\tau</math>, <math>X</math>, and <math>\bf x</math> represent the disagreement point, the utility functions for players A and B, the estimated process mean, variance, and standard deviation functions, the target value, the feasible area, and the vector of controllable factors, respectively. In Eq.(10), <math>\boldsymbol{\alpha_1} </math>, <math>\boldsymbol{\beta_1} </math>, <math>\boldsymbol{\gamma_1} </math>, <math>\boldsymbol{\Gamma} </math>, <math>\boldsymbol{\Delta} </math>, and <math>\boldsymbol{\Epsilon} </math> denote the vectors and matrices of estimated regression coefficients for the process mean, variance, and standard deviation, respectively. Here, the constraint <math>u_i({\boldsymbol{\text{x}}})\leq d_i </math>, where <math>i=A,B </math>, ensures that the obtained agreement point payoffs will be at least as good as the disagreement point payoffs; otherwise, there is no reason for the players to participate in the negotiation.
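A minimal sketch of how the constrained model in Eq.(10) can be solved with an off-the-shelf nonlinear solver is given below in Python. The two utilities, the disagreement point, and the feasible region are stand-ins chosen only to make the sketch self-contained; for the models of this paper they would be replaced by the fitted bias and variability functions.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Stand-in minimization-type utilities for the bias (player A) and variability (player B)
u_A = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
u_B = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2
d_A, d_B = 2.0, 2.0          # disagreement point (as Eqs. (8)-(9) would give for these stand-ins)

# Eq. (10): maximize (u_A - d_A)(u_B - d_B) subject to u_A <= d_A, u_B <= d_B, x in X
neg_obj = lambda x: -(u_A(x) - d_A) * (u_B(x) - d_B)
cons = [{'type': 'ineq', 'fun': lambda x: d_A - u_A(x)},          # u_A(x) <= d_A
        {'type': 'ineq', 'fun': lambda x: d_B - u_B(x)},          # u_B(x) <= d_B
        {'type': 'ineq', 'fun': lambda x: 2.0 - np.sum(x ** 2)}]  # feasible region X (assumed)
res = minimize(neg_obj, x0=np.array([0.4, 0.4]), constraints=cons)
print("x* =", res.x.round(4), " u_A =", round(u_A(res.x), 4), " u_B =", round(u_B(res.x), 4))
</syntaxhighlight>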
== 5. Numerical illustrations and sensitivity analysis ==
===5.1 Numerical example 1===

Two numerical examples are conducted to demonstrate the efficiency of the proposed method. As explained in Section 3.1, the process variability can be measured in terms of either the estimated standard deviation or the estimated variance function, but the optimal solutions can differ when different response surface expressions are used. Therefore, the equations estimated in the original example were utilized for better comparison. Example 1 investigates the relationship between the coating thickness of bare silicon wafers (<math>y </math>) and three controllable variables: mould temperature <math display="inline">({x}_{1})</math>, injection flow rate <math display="inline">({x}_{2})</math>, and cooling rate <math display="inline">{(x}_{3})</math> [10]. A central composite design with four replications at each design point was conducted, and the detailed experimental data with coded values are shown in [[#tab-1|Table 1]].

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 1'''. Data for numerical example 1</div>
<div id='tab-1'></div>
{| class="wikitable" style="margin: 1em auto 0.1em auto;border-collapse: collapse;font-size:85%;width:auto;text-align:center;" 
|-
! Experiment number !! <math>x_1</math> !! <math>x_2</math> !! <math>x_3</math> !! <math>y_1</math> !! <math>y_2</math> !! <math>y_3</math> !! <math>y_4</math> !! <math>\overline{\mathit{y}}</math> !! <math>\mathit{\sigma }</math>
|-
| 1 || -1 || -1 || -1 || 76.30 || 80.50 || 77.70 || 81.10 || 78.90 || 2.28
|-
| 2 || 1 || -1 || -1 || 79.10 || 81.20 || 78.80 || 79.60 || 79.68 || 1.07
|-
| 3 || -1 || 1 || -1 || 82.50 || 81.50 || 79.50 || 80.90 || 81.10 || 1.25
|-
| 4 || 1 || 1 || -1 || 72.30 || 74.30 || 75.70 || 72.70 || 73.75 || 1.56
|-
| 5 || -1 || -1 || 1 || 70.60 || 72.70 || 69.90 || 71.50 || 71.18 || 1.21
|-
| 6 || 1 || -1 || 1 || 74.10 || 77.90 || 76.20 || 77.10 || 76.33 || 1.64
|-
| 7 || -1 || 1 || 1 || 78.50 || 80.00 || 76.20 || 75.30 || 77.50 || 2.14
|-
| 8 || 1 || 1 || 1 || 84.90 || 83.10 || 83.90 || 83.50 || 83.85 || 0.77
|-
| 9 || -1.682 || 0 || 0 || 74.10 || 71.80 || 72.50 || 71.90 || 72.58 || 1.06
|-
| 10 || 1.682 || 0 || 0 || 76.40 || 78.70 || 79.20 || 79.30 || 78.40 || 1.36
|-
| 11 || 0 || -1.682 || 0 || 79.20 || 80.70 || 81.00 || 82.30 || 80.80 || 1.27
|-
| 12 || 0 || 1.682 || 0 || 77.90 || 76.40 || 76.90 || 77.40 || 77.15 || 0.65
|-
| 13 || 0 || 0 || -1.682 || 82.40 || 82.70 || 82.60 || 83.10 || 82.70 || 0.29
|-
| 14 || 0 || 0 || 1.682 || 79.70 || 82.40 || 81.00 || 81.20 || 81.08 || 1.11
|-
| 15 || 0 || 0 || 0 || 70.40 || 70.60 || 70.80 || 71.10 || 70.73 || 0.30
|-
| 16 || 0 || 0 || 0 || 70.90 || 69.70 || 69.00 || 69.90 || 69.88 || 0.78
|-
| 17 || 0 || 0 || 0 || 70.70 || 71.90 || 71.70 || 71.20 || 71.38 || 0.54
|-
| 18 || 0 || 0 || 0 || 70.20 || 71.00 || 71.50 || 70.40 || 70.78 || 0.59
|-
| 19 || 0 || 0 || 0 || 71.50 || 71.10 || 71.20 || 70.00 || 70.95 || 0.66
|-
| 20 || 0 || 0 || 0 || 71.00 || 70.40 || 70.90 || 69.90 || 70.55 || 0.51
|}
The fitted response functions for the process mean and standard deviation of the coating thickness are estimated by using the least squares method (LSM) through the MINITAB software package as:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\hat{\mu }\left({\boldsymbol{\text{x}}}\right) =72.21+{{\boldsymbol{\text{x}}}}^{T}{\boldsymbol{\alpha }}_{\boldsymbol{1}}+{{\boldsymbol{\text{x}}}}^{T}\boldsymbol{\Gamma}{\boldsymbol{\text{x}}} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (11)
|}

where
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>{\boldsymbol{\alpha }}_{1}=\, \left[ \begin{matrix}0.59\\-0.35\\-0.01\end{matrix}\right],\quad \mbox{and}\quad \boldsymbol\Gamma=\, \left[ \begin{matrix}0.28&0.045&0.83\\0.045&1.29&0.755\\0.83&0.755&1.85\end{matrix}\right] </math>
|}
|}
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">{\hat{\sigma }}\left( {\boldsymbol{\text{x}}}\right)=\, 2.55\,+ {\boldsymbol{\text{x}}}^T\boldsymbol{\gamma_1}+{\boldsymbol{\text{x}}}^T\Epsilon{\boldsymbol{\text{x}}} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (12)
|}
where
554
{| style="text-align: center; margin:auto;"  
555
|-
556
|<math>\boldsymbol\gamma_1=\, \left[ \begin{matrix}0.38\\-0.43\\0.56\end{matrix}\right],  and  \quad \boldsymbol\mathrm{E}=\, \left[ \begin{matrix}0.49&-0.235&0.36\\-0.235&0.61&-0.06\\0.36&-0.06&0.85\end{matrix}\right] </math>
557
|}Based on the proposed RPD procedure as described in Figure 3, those two functions (i.e., process bias and standard deviation) as shown in Equations (11) and (12) are regarded as two players and also their associated utility functions in the bargaining game.  The disagreement point as shown in Figure 4 can be computed as <math display="inline">d=({d}_{\mathit{\boldsymbol{A}}}</math>, <math display="inline">{d}_{\mathit{\boldsymbol{B\, }}})</math>=(1.2398, 3.1504)  by using Equations (8) and (9). Then, the optimization problem can be solved by applying Equation (10) under an additional constraint, <math display="inline">\sum _{l=1}^{3}{{x}_{l}}^{2}\leq 3</math>. which represents a feasible experiment region.
The solution values (i.e., <math display="inline">{\left( \hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) -\tau \right) }^{2}=</math> 0.2967 and <math display="inline">\hat\sigma \left({\boldsymbol{\text{x}}}^*\right)</math> = 2.6101) are calculated using the MATLAB software package. To perform a comparative study, the optimization results of the proposed method and the conventional dual response approach are summarized in Table 2; the proposed method yields a slightly smaller MSE in this particular numerical example. To check the efficiency of the obtained results, the lexicographic weighted Tchebycheff approach is adopted to generate the associated Pareto frontier shown in Figure 5.
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 2.''' The optimization results of example 1</div>

{| style="width: 100%;border-collapse: collapse;" 
|-
|  style="border: 1pt solid black;text-align: center;"|
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{1}^{\ast }</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{2}^{\ast }</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{3}^{\ast }</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{\left( \hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) -\tau \right) }^{2}</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>\hat{\sigma }\left({\boldsymbol{\text{x}}}^*\right)</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|'''MSE'''
|-
|  style="border: 1pt solid black;text-align: center;"|'''Dual response model with WLS'''
|  style="border: 1pt solid black;text-align: center;"|-1.4561
|  style="border: 1pt solid black;text-align: center;"|-0.1456
|  style="border: 1pt solid black;text-align: center;"|0.5596
|  style="border: 1pt solid black;text-align: center;"|0
|  style="border: 1pt solid black;text-align: center;"|3.0142
|  style="border: 1pt solid black;text-align: center;"|9.0854
|-
|  style="border: 1pt solid black;text-align: center;"|'''Proposed model'''
|  style="border: 1pt solid black;text-align: center;"|-0.8473
|  style="border: 1pt solid black;text-align: center;"|0.0399
|  style="border: 1pt solid black;text-align: center;"|0.2248
|  style="border: 1pt solid black;text-align: center;"|0.2967
|  style="border: 1pt solid black;text-align: center;"|2.6101
|  style="border: 1pt solid black;text-align: center;"|7.1093
|}
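
For reference, the MSE values reported in Table 2 combine the squared process bias with the squared standard deviation, i.e., <math display="inline">MSE={(\hat{\mu }({\boldsymbol{\text{x}}}^*)-\tau )}^{2}+{\hat{\sigma }}^{2}({\boldsymbol{\text{x}}}^*)</math>; for instance, <math display="inline">0.2967+{2.6101}^{2}=7.1093</math> for the proposed model and <math display="inline">0+{3.0142}^{2}=9.0854</math> for the dual response model with WLS.
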
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
[[File:News.png|alt=|centre|thumb|404x404px|'''Figure 5.''' The optimization results plot with the Pareto frontier of example 1]]</div>

As exhibited in Figure 5, the obtained Nash bargaining solution, which is plotted as a star, lies on the Pareto frontier. By using the concept of bargaining game theory, the interaction between the process bias and variability can be incorporated while identifying a unique tradeoff result. As a result, the proposed method can provide a well-balanced optimal solution for the process bias and variability in this particular example.

===5.2 Sensitivity analysis for numerical example 1===
Based on the optimization results, sensitivity analyses for different disagreement point values are conducted for verification purposes, as shown in Table 3. In this analysis, <math>d_A</math> is changed by 10% increments and decrements while <math>d_B</math> is fixed at 3.1504, and the resulting changes in the process bias and variability values are investigated (a minimal scripting sketch of this sweep is given below).
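
As a rough illustration (not part of the original analysis), the sweep reported in Table 3 could be scripted by re-solving the bargaining problem for scaled values of <math display="inline">{d}_{A}</math>. The snippet below continues the Python sketch given after Equation (12) and assumes that <code>solve_nash</code>, <code>bias2</code>, <code>sigma</code>, <code>D_A</code> and <code>D_B</code> defined there are still in scope.

<syntaxhighlight lang="python">
# Continues the earlier sketch: solve_nash, bias2, sigma, D_A and D_B are assumed to be in scope.
scales = [0.9 ** k for k in range(6, 0, -1)] + [1.0] + [1.1 ** k for k in range(1, 8)]
for s in scales:
    d_a = D_A * s                                   # perturb d_A in 10% steps; d_B stays fixed
    x = solve_nash(d_a, D_B)
    print(f"d_A = {d_a:6.4f}   squared bias = {bias2(x):6.4f}   sigma = {sigma(x):6.4f}")
</syntaxhighlight>
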
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 3.''' Sensitivity analysis results for numerical example 1 by changing <math display="inline">{d}_{A}</math></div>

{| style="width: 100%;border-collapse: collapse;" 
|- style="border-top: 1pt solid black;text-align: center;"
| <math>{d}_{A}</math> || <math>{d}_{B}</math> || <math>((\hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau )^{2}-{d}_{A})\ast (\hat{\sigma }\left({\boldsymbol{\text{x}}}\right) -{d}_{B})</math> || <math>{\boldsymbol{\text{x}}}</math> || <math>{(\hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) -\tau )}^{2}</math> || <math>\hat{\sigma }\left({\boldsymbol{\text{x}}}^*\right)</math>
|- style="border-top: 1pt solid black;text-align: center;"
| 0.6589 || 3.1504 || 0.2218 || [-1.0281, -0.0159, 0.3253] || 0.157 || 2.7085
|- style="text-align: center;"
| 0.7321 || 3.1504 || 0.2547 || [-1.0017, -0.0078, 0.3107] || 0.1753 || 2.6930
|- style="text-align: center;"
| 0.8134 || 3.1504 || 0.2925 || [-0.9739, 0.0007, 0.2953] || 0.1953 || 2.6771
|- style="text-align: center;"
| 0.9038 || 3.1504 || 0.3361 || [-0.9445, 0.0098, 0.2790] || 0.2174 || 2.6608
|- style="text-align: center;"
| 1.0042 || 3.1504 || 0.3861 || [-0.9137, 0.0193, 0.2619] || 0.2416 || 2.6441
|- style="text-align: center;"
| 1.1158 || 3.1504 || 0.4435 || [-0.8813, 0.0293, 0.2438] || 0.2680 || 2.6272
|- style="text-align: center;"
| '''1.2398''' || '''3.1504''' || '''0.5095''' || '''[-0.8473, 0.0399, 0.2248]''' || '''0.2967''' || '''2.6101'''
|- style="text-align: center;"
| 1.3638 || 3.1504 || 0.5775 || [-0.8153, 0.0499, 0.2069] || 0.3248 || 2.5946
|- style="text-align: center;"
| 1.5002 || 3.1504 || 0.6543 || [-0.7820, 0.0603, 0.1881] || 0.3549 || 2.5791
|- style="text-align: center;"
| 1.6502 || 3.1504 || 0.7412 || [-0.7475, 0.0711, 0.1687] || 0.3869 || 2.5637
|- style="text-align: center;"
| 1.8152 || 3.1504 || 0.8393 || [-0.7120, 0.0824, 0.1486] || 0.4209 || 2.5484
|- style="text-align: center;"
| 1.9967 || 3.1504 || 0.9499 || [-0.6754, 0.0939, 0.1278] || 0.4567 || 2.5335
|- style="text-align: center;"
| 2.1964 || 3.1504 || 1.0746 || [-0.6381, 0.1058, 0.1065] || 0.4942 || 2.5191
|- style="border-bottom: 1pt solid black;text-align: center;"
| 2.4160 || 3.1504 || 1.2148 || [-0.6002, 0.1180, 0.0847] || 0.5331 || 2.5052
|}

As shown in Table 3, if only <math display="inline">{d}_{A}</math> increases, the optimal squared bias <math display="inline">{(\hat{\mu }(\boldsymbol{x}^*)-\tau )}^{2}</math> increases while the process variability <math display="inline">\hat{\sigma }\left( \boldsymbol{x}^*\right)</math> decreases. All of the optimal solutions obtained by the proposed method are plotted as circles and compared with the Pareto optimal solutions generated by the lexicographic weighted Tchebycheff method. Clearly, the obtained solutions are on the Pareto frontier, as shown in Figure 6.

[[File:Draft_Shin_691882792-image7.png|centre|thumb|463x463px|'''Figure 6.''' Plot of sensitivity analysis results with the Pareto frontier for numerical example 1 by changing <math display="inline">{d}_{A}</math>]]

On the other hand, if <math display="inline">{d}_{A}</math> is kept constant and <math display="inline">{d}_{B}</math> is changed by 5% each time, the resulting data are summarized in Table 4 and plotted in Figure 7.

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 4.''' Sensitivity analysis results for numerical example 1 by changing <math display="inline">{d}_{B}</math></div>

{| style="width: 100%;border-collapse: collapse;" 
|- style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"
| <math>{d}_{A}</math> || <math>{d}_{B}</math> || <math>((\hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau )^{2}-{d}_{A})\ast (\hat{\sigma }\left({\boldsymbol{\text{x}}}\right) -{d}_{B})</math> || <math>{\boldsymbol{\text{x}}}</math> || <math>{(\hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) -\tau )}^{2}</math> || <math>\hat{\sigma }\left({\boldsymbol{\text{x}}}^*\right)</math>
|- style="border-top: 1pt solid black;text-align: center;"
| 1.2398 || 2.4377 || 0.0076 || [-0.2082, 0.2495, -0.1539] || 0.9764 || 2.4089
|- style="text-align: center;"
| 1.2398 || 2.5660 || 0.0592 || [-0.4198, 0.1770, -0.0212] || 0.7286 || 2.4501
|- style="text-align: center;"
| 1.2398 || 2.7011 || 0.1394 || [-0.5607, 0.1307, 0.0618] || 0.5746 || 2.4916
|- style="text-align: center;"
| 1.2398 || 2.8432 || 0.2425 || [-0.6726, 0.0948, 0.1262] || 0.4595 || 2.5324
|- style="text-align: center;"
| 1.2398 || 2.9929 || 0.3664 || [-0.7666, 0.0651, 0.1795] || 0.3690 || 2.5721
|- style="text-align: center;"
| 1.2398 || 3.1504 || 0.5095 || [-0.8473, 0.0399, 0.2248] || 0.2967 || 2.6101
|- style="text-align: center;"
| 1.2398 || 3.3079 || 0.6626 || [-0.9141, 0.0192, 0.2621] || 0.2412 || 2.6444
|- style="text-align: center;"
| 1.2398 || 3.4733 || 0.8316 || [-0.9727, 0.0011, 0.2946] || 0.1962 || 2.6764
|- style="text-align: center;"
| 1.2398 || 3.6470 || 1.0162 || [-1.0241, -0.0147, 0.3231] || 0.1597 || 2.7061
|- style="text-align: center;"
| 1.2398 || 3.8293 || 1.2159 || [-1.0692, -0.0285, 0.3480] || 0.1303 || 2.7334
|- style="border-bottom: 1pt solid black;text-align: center;"
| 1.2398 || 4.0208 || 1.4308 || [-1.1088, -0.0406, 0.3698] || 0.1065 || 2.7583
|}
[[File:Draft_Shin_691882792-image8.png|centre|thumb|435x435px|'''Figure 7.''' Plot of sensitivity analysis results with the Pareto frontier for numerical example 1 by changing <math display="inline">{d}_{B}</math>]]

As demonstrated in Table 4, the value of <math display="inline">{(\hat{\mu }(\boldsymbol{x}^*)-\tau )}^{2}</math> declines while <math display="inline">\hat{\sigma }\left( {\boldsymbol{\text{x}}}^*\right)</math> grows if <math display="inline">{d}_{B}</math> is increased and <math display="inline">{d}_{A}</math> is kept constant. However, all of the solution points remain on the Pareto frontier, as shown in Figure 7.

===5.3 Numerical example 2===
In the second example [20], an unbalanced data set is utilized to investigate the relationship between the coating thickness (<math>y</math>), the mould temperature (<math display="inline">{x}_{1}</math>) and the injection flow rate (<math display="inline">{x}_{2}</math>). A 3<sup>2</sup> factorial design with the three levels -1, 0, and +1 is applied, as shown in Table 5.<span id="cite-47"></span>
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 5.''' Experimental data for example 2</div>

{| style="width: 100%;border-collapse: collapse;" 
|- style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"
| '''Experiment number''' || <math>{x}_{1}</math> || <math>{x}_{2}</math> || <math>{y}_{1}</math> || <math>{y}_{2}</math> || <math>{y}_{3}</math> || <math>{y}_{4}</math> || <math>{y}_{5}</math> || <math>{y}_{6}</math> || <math>{y}_{7}</math> || <math>\overline{y}</math> || <math>{\sigma }^{2}</math>
|- style="border-top: 1pt solid black;text-align: center;"
| 1 || -1 || -1 || 84.3 || 57.0 || 56.5 || || || || || 65.93 || 253.06
|- style="text-align: center;"
| 2 || 0 || -1 || 75.7 || 87.1 || 71.8 || 43.8 || 51.6 || || || 66.00 || 318.28
|- style="text-align: center;"
| 3 || 1 || -1 || 65.9 || 47.9 || 63.3 || || || || || 59.03 || 94.65
|- style="text-align: center;"
| 4 || -1 || 0 || 51.0 || 60.1 || 69.7 || 84.8 || 74.7 || || || 68.06 || 170.35
|- style="text-align: center;"
| 5 || 0 || 0 || 53.1 || 36.2 || 61.8 || 68.6 || 63.4 || 48.6 || 42.5 || 53.46 || 139.89
|- style="text-align: center;"
| 6 || 1 || 0 || 46.5 || 65.9 || 51.8 || 48.4 || 64.4 || || || 55.40 || 83.11
|- style="text-align: center;"
| 7 || -1 || 1 || 65.7 || 79.8 || 79.1 || || || || || 74.87 || 63.14
|- style="text-align: center;"
| 8 || 0 || 1 || 54.4 || 63.8 || 56.2 || 48.0 || 64.5 || || || 57.38 || 47.54
|- style="border-bottom: 1pt solid black;text-align: center;"
| 9 || 1 || 1 || 50.7 || 68.3 || 62.9 || || || || || 60.63 || 81.29
|}

Based on Cho and Park [20], a weighted least squares (WLS) method was applied to estimate the process mean and variability functions as:
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">\hat{\mu }\left({\boldsymbol{\text{x}}}\right) =\,55.08\,+\,{{\boldsymbol{\text{x}}}}^{T}{\boldsymbol{\alpha }}_{1}+\,{{\boldsymbol{\text{x}}}}^{T}\boldsymbol{\Gamma }{\boldsymbol{\text{x}}} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (13)
|}

where
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>{\boldsymbol{\alpha }}_{1}=\, \left[ \begin{matrix}-5.76\\-0.52\\\end{matrix}\right], \quad \text{and} \quad \boldsymbol{\Gamma }=\, \left[ \begin{matrix}5.51&-0.92\\-0.92&5.47\\\end{matrix}\right] </math>
|}
|}

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">{\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}\right)=\, 154.26\,+\, {{\boldsymbol{\text{x}}}}^{T}{\boldsymbol{\beta }}_{1}+\,{{\boldsymbol{\text{x}}}}^{T}\boldsymbol{\Delta }{\boldsymbol{\text{x}}} </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (14)
|}

where
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
|<math>{\boldsymbol{\beta }}_{1}=\, \left[ \begin{matrix}-39.34\\-93.09\\\end{matrix}\right], \quad \text{and} \quad \boldsymbol{\Delta }=\, \left[ \begin{matrix}-38.31&22.07\\22.07&17.81\\\end{matrix}\right] </math>
|}
|}
Applying the same logic as in example 1, the ranges of the process bias and variability are calculated as [12.0508, 420.25] and [45.53, 310.39], respectively. The disagreement point components are computed as <math display="inline">{d}_{A}</math> = 63.0436 and <math display="inline">{d}_{B}</math> = 112.0959. Applying Equation (10), the optimal solutions are obtained as <math display="inline">{(\hat{\mu }({\boldsymbol{\text{x}}}^*)-\tau )}^{2}</math> = 23.6256 and <math display="inline">{\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}^*\right) =</math> 58.3974 (a minimal computational sketch is given below). Based on the optimization results of both the proposed method and the conventional MSE model summarized in Table 6, the proposed method provides a considerably smaller MSE than the conventional MSE model in this particular example.

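
As with the first example, this computation can be sketched numerically. The sketch below uses the response models of Equations (13) and (14); the target value of 50 is an assumption (it is consistent with the bias values reported in Table 6 but is not restated in this section), and the cuboidal region <math display="inline">-1\leq {x}_{i}\leq 1</math> implied by the 3<sup>2</sup> design is assumed as the feasible region.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Fitted mean and variance surfaces from Equations (13) and (14)
a1 = np.array([-5.76, -0.52])
G  = np.array([[5.51, -0.92], [-0.92, 5.47]])
b1 = np.array([-39.34, -93.09])
Dm = np.array([[-38.31, 22.07], [22.07, 17.81]])

TAU = 50.0                    # ASSUMED target: consistent with the bias values in Table 6
D_A, D_B = 63.0436, 112.0959  # disagreement point reported in the text

def bias2(x):                 # squared process bias, (mu_hat(x) - tau)^2
    return (55.08 + a1 @ x + x @ G @ x - TAU) ** 2

def var2(x):                  # process variance, sigma_hat^2(x)
    return 154.26 + b1 @ x + x @ Dm @ x

cons = [{"type": "ineq", "fun": lambda x: D_A - bias2(x)},   # player A improves on d_A
        {"type": "ineq", "fun": lambda x: D_B - var2(x)}]    # player B improves on d_B
res = minimize(lambda x: -(D_A - bias2(x)) * (D_B - var2(x)),
               x0=np.array([0.9, 0.4]), method="SLSQP",
               bounds=[(-1, 1), (-1, 1)],                    # ASSUMED cuboidal region of the 3^2 design
               constraints=cons)
print("x* =", np.round(res.x, 4),
      "| squared bias =", round(bias2(res.x), 4),
      "| variance =", round(var2(res.x), 4))
</syntaxhighlight>

Under these assumptions, the optimizer should land close to the proposed-model row of Table 6.
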
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 6.''' The optimization results of example 2</div>

{| style="width: 100%;border-collapse: collapse;" 
|-
|  style="border: 1pt solid black;text-align: center;"|
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{1}^{\ast }</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{x}_{2}^{\ast }</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>\left| \hat{\mu }\left({\boldsymbol{\text{x}}}^*\right)-\tau \right|</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|<math>{\hat{\sigma }}^{2}\left({\boldsymbol{\text{x}}}^*\right)</math>
|  style="border: 1pt solid black;text-align: center;vertical-align: top;"|'''MSE'''
|-
|  style="border: 1pt solid black;text-align: center;"|'''MSE model'''
|  style="border: 1pt solid black;text-align: center;"|0.998
|  style="border: 1pt solid black;text-align: center;"|0.998
|  style="border: 1pt solid black;text-align: center;"|7.93
|  style="border: 1pt solid black;text-align: center;"|45.66
|  style="border: 1pt solid black;text-align: center;"|108.48
|-
|  style="border: 1pt solid black;text-align: center;"|'''Proposed model'''
|  style="border: 1pt solid black;text-align: center;"|1.000
|  style="border: 1pt solid black;text-align: center;"|0.4440
|  style="border: 1pt solid black;text-align: center;"|4.8606
|  style="border: 1pt solid black;text-align: center;"|58.3974
|  style="border: 1pt solid black;text-align: center;"|82.023
|}

A Pareto frontier including all non-dominated solutions can be obtained by applying the lexicographic weighted Tchebycheff approach. As illustrated in Figure 8, the Nash bargaining solution is on the Pareto frontier, which verifies the efficiency of the proposed method. A sketch of one way to trace this frontier is given below.

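
For completeness, the sketch below indicates one way such a frontier could be traced numerically. Because the lexicographic weighted Tchebycheff program is not restated in this section, an augmented weighted Tchebycheff scalarization [56] is used here as a practical stand-in for the lexicographic second stage; the ideal point is taken from the objective ranges quoted above, the target of 50 is again an assumption, and the weights are swept over a grid.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

# Example 2 model functions (Equations (13) and (14)); the target of 50 is again an assumption
a1 = np.array([-5.76, -0.52]); G  = np.array([[5.51, -0.92], [-0.92, 5.47]])
b1 = np.array([-39.34, -93.09]); Dm = np.array([[-38.31, 22.07], [22.07, 17.81]])
TAU = 50.0

def f(x):  # objective vector: (squared bias, variance)
    return np.array([(55.08 + a1 @ x + x @ G @ x - TAU) ** 2,
                     154.26 + b1 @ x + x @ Dm @ x])

z_star = np.array([12.0508, 45.53])   # ideal point taken from the objective ranges quoted in the text
rho = 1e-3                            # small augmentation weight (stand-in for the lexicographic stage)

pareto = []
for w in np.linspace(0.02, 0.98, 25):               # sweep the Tchebycheff weights
    weights = np.array([w, 1.0 - w])

    def scalar(z):                                  # z = (x1, x2, t): minimize t + rho * sum of deviations
        x, t = z[:2], z[2]
        return t + rho * np.sum(f(x) - z_star)

    cons = [{"type": "ineq",                        # enforce w_i * (f_i(x) - z_i*) <= t
             "fun": lambda z, i=i: z[2] - weights[i] * (f(z[:2])[i] - z_star[i])}
            for i in range(2)]
    x0 = np.array([0.5, 0.5])
    t0 = float(np.max(weights * (f(x0) - z_star)))  # feasible starting value for t
    res = minimize(scalar, x0=np.append(x0, t0), method="SLSQP",
                   bounds=[(-1, 1), (-1, 1), (None, None)], constraints=cons)
    pareto.append(f(res.x[:2]))

for point in pareto:
    print(np.round(point, 4))
</syntaxhighlight>

Collecting the (squared bias, variance) pairs over the weight grid and plotting them together with the Nash solution from the previous sketch gives a picture analogous to Figure 8.
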
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">[[File:Ty1.png|centre|thumb|407x407px|'''Figure 8.''' The optimization results plot with the Pareto frontier of example 2]]</div>

===5.4 Sensitivity analysis for numerical example 2===

Applying the same logic to example 2, <math display="inline">{d}_{B}</math> is kept constant while <math display="inline">{d}_{A}</math> is changed by 10% each time. Table 7 exhibits the effect of changes in <math display="inline">{d}_{A}</math>, and Figure 9 demonstrates the efficiency of the calculated solutions.

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 7.''' Sensitivity analysis results for numerical example 2 by changing <math display="inline">{d}_{A}</math></div>

{| style="width: 100%;margin: 1em auto 0.1em auto;border-collapse: collapse;" 
|- style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"
| <math>{d}_{A}</math> || <math>{d}_{B}</math> || <math>((\hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau )^{2}-{d}_{A})\ast ({\hat{\sigma }}^{2}\left({\boldsymbol{\text{x}}}\right) -{d}_{B})</math> || <math>{\boldsymbol{\text{x}}}</math> || <math>{(\hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) -\tau )}^{2}</math> || <math>{\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}^*\right)</math>
|- style="border-top: 1pt solid black;text-align: center;"
| 37.2266 || 112.0959 || 790.0487 || [0.9510, 0.3554] || 19.9778 || 66.2928
|- style="text-align: center;"
| 41.3629 || 112.0959 || 986.2591 || [0.9813, 0.3624] || 21.2437 || 63.0751
|- style="text-align: center;"
| 45.9588 || 112.0959 || 1218.3 || [1.0000, 0.3751] || 22.2248 || 60.7647
|- style="text-align: center;"
| 51.0653 || 112.0959 || 1482.5 || [1.0000, 0.3978] || 22.6267 || 59.9662
|- style="text-align: center;"
| 56.7392 || 112.0959 || 1780.6 || [1.0000, 0.4208] || 23.0925 || 59.1766
|- style="text-align: center;"
| '''63.0436''' || '''112.0959''' || '''2116.7''' || '''[1.0000, 0.4440]''' || '''23.6256''' || '''58.3974'''
|- style="text-align: center;"
| 69.3480 || 112.0959 || 2457.5 || [1.0000, 0.4653] || 24.1686 || 57.7026
|- style="text-align: center;"
| 76.2828 || 112.0959 || 2837.1 || [1.0000, 0.4867] || 24.7721 || 57.0185
|- style="text-align: center;"
| 83.9110 || 112.0959 || 3259.8 || [1.0000, 0.5083] || 25.4386 || 56.346
|- style="text-align: center;"
| 92.3021 || 112.0959 || 3730.5 || [1.0000, 0.5300] || 26.1709 || 55.686
|- style="text-align: center;"
| 101.5323 || 112.0959 || 4254.2 || [1.0000, 0.5518] || 26.9716 || 55.0393
|- style="text-align: center;"
| 111.6856 || 112.0959 || 4836.8 || [1.0000, 0.5738] || 27.8435 || 54.407
|- style="text-align: center;"
| 122.8541 || 112.0959 || 5484.6 || [1.0000, 0.5958] || 28.7892 || 53.7896
|- style="border-bottom: 1pt solid black;text-align: center;"
| 135.1396 || 112.0959 || 6204.7 || [1.0000, 0.6179] || 29.8115 || 53.1879
|}

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">[[File:Draft_Shin_691882792-image9.png|centre|thumb|445x445px|'''Figure 9.''' Plot of sensitivity analysis results with the Pareto frontier for numerical example 2 by changing <math display="inline">{d}_{A}</math>]]</div>

On the other hand, another sensitivity analysis is conducted by changing <math display="inline">{d}_{B}</math> in 10% increments and decrements while holding <math display="inline">{d}_{A}</math> at a fixed value (63.0436), as shown in Table 8 and plotted in Figure 10.

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
'''Table 8.''' Sensitivity analysis results for numerical example 2 by changing <math display="inline">{d}_{B}</math></div>

{| style="width: 100%;border-collapse: collapse;" 
|- style="border-top: 1pt solid black;border-bottom: 1pt solid black;text-align: center;"
| <math>{d}_{A}</math> || <math>{d}_{B}</math> || <math>((\hat{\mu }\left( {\boldsymbol{\text{x}}}\right) -\tau )^{2}-{d}_{A})\ast ({\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}\right) -{d}_{B})</math> || <math>{\boldsymbol{\text{x}}}</math> || <math>{(\hat{\mu }\left( {\boldsymbol{\text{x}}}^*\right) -\tau )}^{2}</math> || <math>{\hat{\sigma }}^{2}\left( {\boldsymbol{\text{x}}}^*\right)</math>
|- style="border-top: 1pt solid black;text-align: center;vertical-align: top;"
| 63.0436 || 48.2536 || 15.4253 || [1.0000, 0.9166] || 52.7429 || 46.7561
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 53.6151 || 102.0493 || [1.0000, 0.8094] || 42.2936 || 48.6971
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 59.5724 || 245.6339 || [1.0000, 0.7284] || 36.1567 || 50.4366
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 66.1915 || 438.1362 || [1.0000, 0.6620] || 32.0892 || 52.0372
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 73.5461 || 677.0693 || [1.0000, 0.6055] || 29.2313 || 53.5218
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 81.7179 || 962.4337 || [1.0000, 0.5567] || 27.1579 || 54.8985
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 90.7977 || 1295.7 || [1.0000, 0.5140] || 25.6262 || 56.1698
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 100.8863 || 1679.3 || [1.0000, 0.4767] || 24.4832 || 57.3359
|- style="text-align: center;vertical-align: top;"
| '''63.0436''' || '''112.0959''' || '''2116.7''' || '''[1.0000, 0.4440]''' || '''23.6256''' || '''58.3974'''
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 123.3066 || 2562.1 || [1.0000, 0.4181] || 23.0342 || 59.2691
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 135.6372 || 3058.4 || [1.0000, 0.3951] || 22.5764 || 60.0593
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 149.2010 || 3609.9 || [1.0000, 0.3748] || 22.2214 || 60.7721
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 164.1211 || 4223.7 || [0.9829, 0.3628] || 21.3147 || 62.9025
|- style="text-align: center;vertical-align: top;"
| 63.0436 || 180.5332 || 4919.9 || [0.9512, 0.3554] || 19.9864 || 66.2698
|- style="border-bottom: 1pt solid black;text-align: center;vertical-align: top;"
| 63.0436 || 198.5865 || 5708.3 || [0.9199, 0.3472] || 18.7938 || 69.5842
|}

<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">[[File:Draft_Shin_691882792-image10.png|centre|thumb|423x423px|'''Figure 10.''' Plot of sensitivity analysis results with the Pareto frontier for numerical example 2 by changing <math display="inline">{d}_{B}</math>]]</div>

In general, for both cases, an increase in the value of <math display="inline">{d}_{i}</math> increases the corresponding bargaining solution value. For example, an increase of <math display="inline">{d}_{A}</math> leads to an increase in the process bias and a decrease in the variability value. This conclusion also makes sense from the perspective of game theory, since it can be explained by disagreement point monotonicity [60], which can be defined as follows:<span id="cite-48"></span>

For two disagreement points <math display="inline">d=\left( {d}_{A},{d}_{B}\right)</math> and <math display="inline">{d}^{'}=\left( {d}_{A}^{'},{d}_{B}^{'}\right)</math>, if <math display="inline">{d}_{i}^{'}\geq {d}_{i}</math> and <math display="inline">{d}_{j}^{'}={d}_{j}</math>, then <math display="inline">{f}_{i}\left( U,{d}^{'}\right) \geq {f}_{i}\left( U,d\right)</math>, where <math display="inline">j\not =i</math> and <math display="inline">i,j\in \lbrace A,B\rbrace</math>,

where <math display="inline">{f}_{i}\left( U,{d}^{'}\right)</math> and <math display="inline">{f}_{i}\left( U,d\right)</math> represent the solution payoffs for player ''i'' after and before the increase of its disagreement point payoff, respectively. More specifically, the larger the disagreement value (<math display="inline">{d}_{i}</math>) a player demands for participating in an agreement, the more that player will get. However, a gain achieved by one player comes at the expense of the other player, because any agreed solution must be an improvement over the disagreement point for both players; otherwise a player would have no incentive to participate in the bargaining game. In the RPD case, the objective of each player is to minimize rather than maximize its utility value, so the smaller the <math display="inline">{d}_{i}</math> a player proposes, the stricter the requirement it is actually imposing for participating in the bargaining game.

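
As a concrete reading of this property in the present setting, consider Table 3: moving from <math display="inline">d=(1.2398,\,3.1504)</math> to <math display="inline">{d}^{'}=(1.3638,\,3.1504)</math> changes the agreed squared bias from 0.2967 to 0.3248 and the agreed standard deviation from 2.6101 to 2.5946; relaxing player A's disagreement value therefore worsens player A's objective at the agreement while improving player B's, which is exactly the minimization-type reading of disagreement point monotonicity described above.
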
==6. Conclusion and future direction==

In a robust design model, the simultaneous minimization of both the process bias and variability is a bi-objective problem with an inherent tradeoff between the two objectives. Most existing methods tackle this tradeoff by either prioritizing one of the two objectives or assigning weights that indicate their relative importance as determined by a DM. However, the DM may struggle to assign weights or priority orders to responses of different types and units. Furthermore, the prioritizing or combining procedure involves a certain degree of subjectivity, since different DMs may have different viewpoints on which objective is more important. Thus, in this paper, a bargaining game-based RPD method is proposed to solve this tradeoff problem by integrating Nash bargaining solution techniques and letting the two objectives (i.e., the process bias and variability) “negotiate”, so that a unique, fair, and efficient solution can be obtained. Such solutions can provide valuable suggestions to the DM, especially when there is no prior information about the relative importance of the process bias and variability. To inspect the efficiency of the obtained solutions, the associated Pareto frontier was generated by applying the lexicographic weighted Tchebycheff method, and the position of each solution was confirmed visually. As validated by the two numerical examples, the proposed method can provide more efficient solutions, in terms of the MSE criterion, than the conventional dual response surface and mean squared error methods. In addition, a number of sensitivity studies were conducted to investigate the relationship between the disagreement point values (<math>d_i</math>) and the agreement solutions.

This research illustrates the possibility of combining the concept of game theory with an RPD model. For further study, the proposed method will be extended to multiple-response optimization problems. The tradeoff among multiple responses can be addressed by applying multilateral bargaining game theory, where each quality response is regarded as a rational player that attempts to reach an agreement with the others on which set of control factors to choose. In such a game, each response proposes a solution set that optimizes its respective estimated response function, subject to the expectations of the other responses.

== Acknowledgment ==
This research was a part of the project titled ‘Marine digital AtoN information management and service system development (2/5) (20210650)’, funded by the Ministry of Oceans and Fisheries, Korea.

== References ==
<div id="1">
[1] Park, G. J., Lee, T. H., Lee, K. H., & Hwang, K. H. Robust design: an overview. AIAA Journal, 44(1): 181-191, 2006.

[2] Myers, W. R., Brenneman, W. A., & Myers, R. H. A dual-response approach to robust parameter design for a generalized linear model. Journal of Quality Technology, 37(2), 130-138, 2005.

[3] Lin, D. K. J., and Tu, W. “Dual response surface optimization.” Journal of Quality Technology 27:34-39, 1995.

[4] Cho, B. R., Philips, M. D., and Kapur, K. C. “Quality improvement by RSM modeling for robust design.” The 5th Industrial Engineering Research Conference, Minneapolis, 650-655, 1996.

[5] Ding, R., Lin, D. K. J., and Wei, D. “Dual response surface optimization: A weighted MSE approach.” Quality Engineering 16(3):377-385, 2004.

[6] Vining, G. G., and Myers, R. H. “Combining Taguchi and response surface philosophies: A dual response approach.” Journal of Quality Technology 22:38-45, 1990.

[7] Myers, R. H. and Carter, W. H. Response Surface Methods for Dual Response Systems, Technometrics, 15(2), 301-307, 1973.

[8] Copeland, K. A. and Nelson, P. R. Dual Response Optimization via Direct Function Minimization, Journal of Quality Technology, 28(3), 331-336, 1996.

[9] Lee, D., Jeong, I., and Kim, K. A Posterior Preference Articulation Approach to Dual-Response Surface Optimization, IIE Transactions, 42(2), 161-171, 2010.

[10] Shin, S. and Cho, B. R. Robust design models for customer-specified bounds on process parameters, Journal of Systems Science and Systems Engineering, 15, 2-18, 2006.
</div>
[11] Leon R.V., Shoemaker A.C., Kackar R.N. Performance Measures Independent of Adjustment: an Explanation and Extension of Taguchi’s Signal-To-Noise Ratios. Technometrics, 29(3), 253-265, 1987.

[12] Box G. Signal-to-noise ratios, performance criteria, and transformations. Technometrics, 30(1): 1-17, 1988.

[13] Nair V N, Abraham B, MacKay J, et al. Taguchi's parameter design: a panel discussion. Technometrics, 34(2): 127-161, 1992.

[14] Tsui K L. An overview of Taguchi method and newly developed statistical methods for robust design. IIE Transactions, 24(5): 44-57, 1992.

[15] Copeland, K. A. and Nelson, P. R. Dual Response Optimization via Direct Function Minimization, Journal of Quality Technology, 28(3), 331-336, 1996.

[16] Shoemaker A C, Tsui K L, Wu C F J. Economical experimentation methods for robust design. Technometrics, 33(4): 415-427, 1991.

[17] Khattree R. Robust parameter design: A response surface approach. Journal of Quality Technology, 28(2): 187-198, 1996.

[18] Pregibon, Daryl. Generalized linear models. The Annals of Statistics, 12(4): 1589–1596, 1984.

[19] Lee S B, Park C. Development of robust design optimization using incomplete data. Computers & Industrial Engineering, 50(3): 345-356, 2006.

[20] Cho B R, Park C. Robust design modeling and optimization with unbalanced data. Computers & Industrial Engineering, 48(2): 173-180, 2005.

[21] Jayaram, J.S.R. and Ibrahim, Y. Multiple response robust design and yield maximization. International Journal of Quality & Reliability Management, 16(9): 826-837, 1999.

[22] Köksoy O, Doganaksoy N. Joint optimization of mean and standard deviation using response surface methods. Journal of Quality Technology, 35(3): 239-252, 2003.

[23] Shin S, Cho B R. Studies on a biobjective robust design optimization problem. IIE Transactions, 41(11): 957-968, 2009.

[24] Le T H, Tang M, Jang J H, et al. Integration of Functional Link Neural Networks into a Parameter Estimation Methodology. Applied Sciences, 11(19): 9178, 2021.

[25] Picheral L, Hadj-Hamou K, Bigeon J. Robust optimization based on the Propagation of Variance method for analytic design models. International Journal of Production Research, 52(24): 7324-7338, 2014.

[26] Mortazavi A, Azarm S, Gabriel S A. Adaptive gradient-assisted robust design optimization under interval uncertainty. Engineering Optimization, 45(11): 1287-1307, 2013.

[27] Bashiri, M., Moslemi, A., & Akhavan Niaki, S. T. Robust multi‐response surface optimization: a posterior preference approach. International Transactions in Operational Research, 27(3), 1751-1770, 2020.

[28] Yang, S., Wang, J., Ren, X., & Gao, T. Bayesian online robust parameter design for correlated multiple responses. Quality Technology & Quantitative Management, 18(5), 620-640, 2021.

<div id="11">
[29] Sohrabi M K, Azgomi H. A survey on the combined use of optimization methods and game theory. Archives of Computational Methods in Engineering, 27(1): 59-80, 2020.

[30] Shoham Y. Computer science and game theory. Communications of the ACM, 51(8): 74-79, 2008.

[31] Manshaei M H, Zhu Q, Alpcan T, et al. Game theory meets network security and privacy. ACM Computing Surveys (CSUR), 45(3): 1-39, 2013.

[32] Pillai P S, Rao S. Resource allocation in cloud computing using the uncertainty principle of game theory. IEEE Systems Journal, 10(2): 637-648, 2014.

[33] Lemaire J. An application of game theory: cost allocation. ASTIN Bulletin: The Journal of the IAA, 14(1): 61-81, 1984.

[34] Barough A S, Shoubi M V, Skardi M J E. Application of game theory approach in solving the construction project conflicts. Procedia-Social and Behavioral Sciences, 58: 1586-1593, 2012.

[35] Gale D, Kuhn H W, Tucker A W. Linear programming and the theory of games. Activity Analysis of Production and Allocation, 13: 317-335, 1951.

[36] Mangasarian O L, Stone H. Two-person nonzero-sum games and quadratic programming. Journal of Mathematical Analysis and Applications, 9(3): 348-355, 1964.

[37] Leboucher C, Shin H S, Siarry P, et al. Convergence proof of an enhanced particle swarm optimization method integrated with evolutionary game theory. Information Sciences, 346: 389-411, 2016.

[38] Annamdas K K, Rao S S. Multi-objective optimization of engineering systems using game theory and particle swarm optimization. Engineering Optimization, 41(8): 737-752, 2009.

[39] Zamarripa, M. A., Aguirre, A. M., Méndez, C. A., & Espuña, A. Mathematical programming and game theory optimization-based tool for supply chain planning in cooperative/competitive environments. Chemical Engineering Research and Design, 91(8): 1588-1600, 2013.

[40] Dai, L., Tang, M., & Shin, S. Stackelberg game approach to a bi-objective robust design optimization. Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería, 37(4), 2021.

[41] Matejaš J, Perić T. A new iterative method for solving multiobjective linear programming problem. Applied Mathematics and Computation, 243: 746-754, 2014.

[42] Doudou, M., Barcelo-Ordinas, J. M., Djenouri, D., Garcia-Vidal, J., Bouabdallah, A., & Badache, N. Game theory framework for MAC parameter optimization in energy-delay constrained sensor networks. ACM Transactions on Sensor Networks (TOSN), 12(2), 1-35, 2016.
</div>
[43] Muthoo A. Bargaining theory with applications. Cambridge University Press, 1999.

[44] Goodpaster G. Rational decision-making in problem-solving negotiation: Compromise, interest-valuation, and cognitive error. Ohio St. J. on Disp. Resol., 8: 299, 1992.

[45] Nash, J. F. The Bargaining Problem. Econometrica, 18(2): 155-162, 1950.

[46] Nash, J. F. Two-Person Cooperative Games. Econometrica, 21(1): 128-140, 1953.

[47] Kalai E, Smorodinsky M. Other solutions to Nash's bargaining problem. Econometrica: Journal of the Econometric Society, 513-518, 1975.

[48] Rubinstein A. Perfect equilibrium in a bargaining model. Econometrica: Journal of the Econometric Society, 97-109, 1982.

[49] Köksoy, O. A nonlinear programming solution to robust multi-response quality problem. Applied Mathematics and Computation, 196(2), 603-612, 2008.

[50] Goethals, P. L., & Cho, B. R. Extending the desirability function to account for variability measures in univariate and multivariate response experiments. Computers & Industrial Engineering, 62(2), 457-468, 2012.

[51] Wu, F. C., & Chyu, C. C. Optimization of robust design for multiple quality characteristics. International Journal of Production Research, 42(2), 337-354, 2004.

[52] Shin, S., & Cho, B. R. Bias-specified robust design optimization and its analytical solutions. Computers & Industrial Engineering, 48(1), 129-140, 2005.

[53] Tang, L. C., & Xu, K. A unified approach for dual response surface optimization. Journal of Quality Technology, 34(4), 437-447, 2002.

[54] Steenackers, G., & Guillaume, P. Bias-specified robust design optimization: A generalized mean squared error approach. Computers & Industrial Engineering, 54(2), 259-268, 2008.

[55] Mandal W A. Weighted Tchebycheff optimization technique under uncertainty. Annals of Data Science, 1-23, 2020.

[56] Dächert K, Gorski J, Klamroth K. An augmented weighted Tchebycheff method with adaptively chosen parameters for discrete bicriteria optimization problems. Computers & Operations Research, 39(12): 2929-2943, 2012.

[57] Steuer R E, Choo E U. An interactive weighted Tchebycheff procedure for multiple objective programming. Mathematical Programming, 26(3): 326-344, 1983.

[58] Rausser G C, Swinnen J, Zusman P. Political power and economic policy: Theory, analysis, and empirical applications. Cambridge University Press, 2011.

[59] Myerson R B. Game Theory: Analysis of Conflict. Harvard University Press, Cambridge, MA, 1991.

[60] Thomson W. Cooperative models of bargaining. In: Handbook of Game Theory with Economic Applications, 2: 1237-1284, 1994.

Document information

Published on 20/06/22
Accepted on 08/06/22
Submitted on 18/03/22

Volume 38, Issue 2, 2022
DOI: 10.23967/j.rimni.2022.06.002
Licence: CC BY-NC-SA license
