Predictive Performance Evaluation of the Kibria-Lukman Estimator
ISSAM DAWOUD1, MOHAMED R. ABONAZEL2, ELSAYED TAG ELDIN3
1Department of Mathematics, Al-Aqsa University, Gaza, PALESTINE
2Department of Applied Statistics and Econometrics, Faculty of Graduate Studies for Statistical
Research, Cairo University, Giza, EGYPT
3Electrical Engineering Department, Faculty of Engineering & Technology, Future University in
Egypt, New Cairo, EGYPT
Abstract: - Regression models are commonly used for prediction, but their predictive performance may be impaired by multicollinearity. To reduce the effect of multicollinearity, different biased estimators have been proposed as alternatives to the ordinary least squares (OLS) estimator, yet the predictive performance of these biased estimators has received little analysis. This paper therefore examines the predictive performance of the recently proposed "new ridge-type" estimator, namely the Kibria-Lukman (KL) estimator. Theoretical comparisons among the predictors based on these estimators are made under the prediction mean squared error (PMSE) criterion in the two-dimensional space, and the results are illustrated by a numerical example. The regions where the KL estimator gives better results than the other estimators are determined.
Key-Words: - Biased Estimator, Ridge Estimator, Liu Estimator, Kibria-Lukman Estimator, Prediction Mean Square Error, Multicollinearity.
Received: July 28, 2021. Revised: June 29, 2022. Accepted: August 11, 2022. Published: September 20, 2022.
1 Introduction
The multiple linear regression model is given by
$$m = X\beta + \varepsilon, \qquad (1)$$
where $m$ is an $n \times 1$ vector of the dependent variable, $\beta$ is a $p \times 1$ vector of unknown parameters, $X$ is an $n \times p$ full column rank matrix of non-stochastic predetermined regressors, and $\varepsilon$ is an $n \times 1$ vector of i.i.d. $(0, \sigma^2)$ random errors.
The ordinary least squares (OLS) estimator of the unknown parameters in (1) is given by
$$\hat{\beta}_{OLS} = (X'X)^{-1}X'm. \qquad (2)$$
To reduce the effect of the multicollinearity problem, Hoerl and Kennard [1] proposed the most common biased estimator, the ordinary ridge regression (ORR) estimator, defined as
$$\hat{\beta}_k = (X'X + kI)^{-1}X'm, \qquad k > 0, \qquad (3)$$
where $k$ is the biasing parameter.
Liu [2] then proposed another alternative biased estimator, the Liu estimator, defined as
$$\hat{\beta}_d = (X'X + I)^{-1}(X'X + dI)\hat{\beta}_{OLS}, \qquad 0 < d < 1, \qquad (4)$$
where $d$ is the biasing parameter.
Recently, Kibria and Lukman [3] proposed a new one-parameter ridge-type estimator, the Kibria-Lukman (KL) estimator, defined as
$$\hat{\beta}_{KL} = (X'X + kI)^{-1}(X'X - kI)\hat{\beta}_{OLS}, \qquad k > 0. \qquad (5)$$
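For concreteness, the four estimators (2)-(5) can be computed directly. The following is a minimal numpy sketch; the simulated data set (sample size, true coefficients, and the correlation 0.95 between the two regressors) is illustrative only and not taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, rho = 50, 2, 0.95
# Two strongly correlated regressors to mimic multicollinearity.
C = np.array([[1.0, rho], [rho, 1.0]])
X = rng.multivariate_normal(np.zeros(p), C, size=n)
beta = np.array([1.0, 2.0])                    # illustrative true coefficients
m = X @ beta + rng.standard_normal(n)          # model (1): m = X beta + eps

XtX, I = X.T @ X, np.eye(p)
k, d = 0.1, 0.9                                # biasing parameters

beta_ols = np.linalg.solve(XtX, X.T @ m)                           # eq. (2)
beta_orr = np.linalg.solve(XtX + k * I, X.T @ m)                   # eq. (3)
beta_liu = np.linalg.solve(XtX + I, (XtX + d * I) @ beta_ols)      # eq. (4)
beta_kl = np.linalg.solve(XtX + k * I, (XtX - k * I) @ beta_ols)   # eq. (5)
print(beta_ols, beta_orr, beta_liu, beta_kl)
```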
Since the predictive performance of regression models is affected by multicollinearity, different biased estimators have been proposed as alternatives to the OLS estimator to reduce its effect; unfortunately, only a few studies have examined the predictive performance of biased estimators, such as [4, 5, 6, 7, 8, 9, 10, 11]. As a consequence, it appears reasonable to evaluate the predictive performance of the recently proposed KL estimator compared with the OLS, ORR and Liu estimators. The rest of this article is organized as follows: In Section 2, we present the evaluation of
the prediction mean squared error (PMSE). In Section 3, the theoretical comparisons of the PMSEs of the above-mentioned estimators in the two-dimensional space are given. A numerical example (an application) demonstrating the theoretical results is given in Section 4. Finally, some concluding remarks are given in Section 5.
2 Evaluation of the Prediction Mean Squared Errors
We recall the PMSEs developed by Friedman and Montgomery [4] for the OLS and ORR estimators and the PMSE of the Liu estimator developed by [5], and then derive the PMSE of the recently proposed KL estimator.
The PMSE is defined as
$$J = E(m_0 - \hat{m}_0)^2 = Var + Bias^2, \qquad (6)$$
where $J$ is the PMSE, $m_0$ is the value to be predicted, $\hat{m}_0$ is the prediction of that value, $Var$ is the variance of the prediction error and $Bias^2$ is its squared bias.
Now, the prediction error variance and bias are given by
$$Var(m_0 - \hat{m}_0) = Var(m_0) + Var(\hat{m}_0) \qquad (7)$$
and
$$Bias = E(m_0 - \hat{m}_0). \qquad (8)$$
For convenience, the canonical form of model (1) is given by
$$m = Z\alpha + \varepsilon, \qquad (9)$$
where $Z = XD$ and $\alpha = D'\beta$. Here, $D$ is the orthogonal matrix such that $Z'Z = D'X'XD = \Lambda = diag(\lambda_1, \lambda_2, \ldots, \lambda_p)$. Then the OLS estimator of $\alpha$ in model (9) is
$$\hat{\alpha}_{OLS} = \Lambda^{-1}Z'm. \qquad (10)$$
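As a sketch of how (9) and (10) might be computed, assuming numpy arrays `X` and `m` for model (1):

```python
import numpy as np

def canonical_form(X, m):
    """Canonical form (9): returns the eigenvalues, D, Z = XD and (10)."""
    lam, D = np.linalg.eigh(X.T @ X)   # X'X = D diag(lam) D'
    Z = X @ D                          # Z'Z = diag(lam_1, ..., lam_p)
    alpha_ols = (Z.T @ m) / lam        # eq. (10): Lambda^{-1} Z'm
    return lam, D, Z, alpha_ols
```

Since $\alpha = D'\beta$, the canonical fit carries back to the original coordinates via $\hat{\beta}_{OLS} = D\hat{\alpha}_{OLS}$.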
The PMSE of the OLS estimator is given by
$$J_{OLS} = Var(m_0 - \hat{m}_0) = \sigma^2\left(1 + \sum_{i=1}^{p}\frac{z_{0i}^2}{\lambda_i}\right), \qquad (11)$$
where $z_0$ is the orthonormalized point at which the prediction $\hat{m}_0$ is made.
The ORR estimator of $\alpha$ is defined by Hoerl and Kennard [1] as
$$\hat{\alpha}_k = (\Lambda + kI)^{-1}Z'm, \qquad k > 0, \qquad (12)$$
and Friedman and Montgomery [4] found the PMSE of the ORR estimator to be
$$J_k = \sigma^2\left(1 + \sum_{i=1}^{p}\frac{\lambda_i z_{0i}^2}{(\lambda_i + k)^2}\right) + k^2\left(\sum_{i=1}^{p}\frac{z_{0i}\alpha_i}{\lambda_i + k}\right)^2. \qquad (13)$$
The Liu estimator of $\alpha$ is defined by Liu [2] as
$$\hat{\alpha}_d = (\Lambda + I)^{-1}(\Lambda + dI)\hat{\alpha}_{OLS}, \qquad 0 < d < 1, \qquad (14)$$
and Özbey and Kaçıranlar [5] found the PMSE of the Liu estimator to be
$$J_d = \sigma^2\left(1 + \sum_{i=1}^{p}\frac{(\lambda_i + d)^2 z_{0i}^2}{\lambda_i(\lambda_i + 1)^2}\right) + (1 - d)^2\left(\sum_{i=1}^{p}\frac{z_{0i}\alpha_i}{\lambda_i + 1}\right)^2. \qquad (15)$$
The recently proposed KL estimator of $\alpha$ (also referred to as the new ridge-type, NRT, estimator) is defined by Kibria and Lukman [3] as
$$\hat{\alpha}_{KL} = (\Lambda + kI)^{-1}(\Lambda - kI)\hat{\alpha}_{OLS}, \qquad k > 0. \qquad (16)$$
The KL estimator has been extended to different regression models, such as [12, 13, 14, 15, 16].
The variance of the prediction error of the KL estimator is
$$Var(m_0 - \hat{m}_0) = Var(m_0) + Var(\hat{m}_0) = \sigma^2\left(1 + \sum_{i=1}^{p}\frac{(\lambda_i - k)^2 z_{0i}^2}{\lambda_i(\lambda_i + k)^2}\right). \qquad (17)$$
The bias of the prediction error of the KL estimator is
$$Bias = E(m_0 - \hat{m}_0) = z_0'\alpha - z_0'E(\hat{\alpha}_{KL}) = 2k\sum_{i=1}^{p}\frac{z_{0i}\alpha_i}{\lambda_i + k}, \qquad (18)$$
so the squared bias is
$$Bias^2 = 4k^2\left(\sum_{i=1}^{p}\frac{z_{0i}\alpha_i}{\lambda_i + k}\right)^2. \qquad (19)$$
By summing up the variance and the squared bias of the KL estimator, we obtain
$$J_{KL} = Var + Bias^2 = \sigma^2\left(1 + \sum_{i=1}^{p}\frac{(\lambda_i - k)^2 z_{0i}^2}{\lambda_i(\lambda_i + k)^2}\right) + 4k^2\left(\sum_{i=1}^{p}\frac{z_{0i}\alpha_i}{\lambda_i + k}\right)^2. \qquad (20)$$
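The four PMSE expressions (11), (13), (15) and (20) are simple functions of the canonical quantities. Below is a minimal Python sketch of how they might be coded; the vector arguments `lam`, `alpha`, `z0` (the eigenvalues, canonical coefficients and prediction point, each of length $p$) and `sigma2` $= \sigma^2$ are assumed supplied by the user.

```python
import numpy as np

def j_ols(z0, lam, sigma2):
    return sigma2 * (1.0 + np.sum(z0**2 / lam))                        # eq. (11)

def j_orr(z0, lam, alpha, sigma2, k):
    var = sigma2 * (1.0 + np.sum(lam * z0**2 / (lam + k)**2))
    bias2 = k**2 * np.sum(z0 * alpha / (lam + k))**2
    return var + bias2                                                 # eq. (13)

def j_liu(z0, lam, alpha, sigma2, d):
    var = sigma2 * (1.0 + np.sum((lam + d)**2 * z0**2 / (lam * (lam + 1.0)**2)))
    bias2 = (1.0 - d)**2 * np.sum(z0 * alpha / (lam + 1.0))**2
    return var + bias2                                                 # eq. (15)

def j_kl(z0, lam, alpha, sigma2, k):
    var = sigma2 * (1.0 + np.sum((lam - k)**2 * z0**2 / (lam * (lam + k)**2)))
    bias2 = 4.0 * k**2 * np.sum(z0 * alpha / (lam + k))**2
    return var + bias2                                                 # eq. (20)
```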
3 Comparisons of Prediction Mean Squared Errors in the Two-Dimensional Space
We discuss here the prediction performance of the recently proposed KL estimator by following the method of [4, 5, 6, 7, 8, 9, 10, 11], so that our inferences are based on the predicted-observation subspace for $p = 2$ (i.e., on the ratio $z_{02}^2/z_{01}^2$). Since non-zero values of $\alpha_1^2$ only increase the intercept values of $J_k$, $J_d$ and $J_{KL}$ and leave the curve for $J_{OLS}$ unchanged, we set $\alpha_1^2$ to zero. The comparisons of $J_{KL}$ with $J_{OLS}$, $J_k$ and $J_d$ are given in the following three theorems.
Theorem 1:
a. If $\alpha_2^2 > \frac{\sigma^2\left[(\lambda_2 + k)^2 - (\lambda_2 - k)^2\right]}{4k^2\lambda_2}$, then $J_{KL} \le J_{OLS}$ if and only if $\frac{z_{02}^2}{z_{01}^2} \le f_1(\alpha_2^2)$.
b. If $\alpha_2^2 < \frac{\sigma^2\left[(\lambda_2 + k)^2 - (\lambda_2 - k)^2\right]}{4k^2\lambda_2}$, then $J_{KL} \le J_{OLS}$ always,
where
$$f_1(\alpha_2^2) = \frac{\frac{\sigma^2}{\lambda_1}\left(1 - \frac{(\lambda_1 - k)^2}{(\lambda_1 + k)^2}\right)}{\frac{4k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{\sigma^2}{\lambda_2}\left(1 - \frac{(\lambda_2 - k)^2}{(\lambda_2 + k)^2}\right)}. \qquad (21)$$
Proof:
The KL estimator gives better results than the OLS estimator under the PMSE criterion when $J_{KL} \le J_{OLS}$, that is, when
$$\sigma^2\left(1 + \frac{(\lambda_1 - k)^2 z_{01}^2}{\lambda_1(\lambda_1 + k)^2} + \frac{(\lambda_2 - k)^2 z_{02}^2}{\lambda_2(\lambda_2 + k)^2}\right) + \frac{4k^2 z_{02}^2\alpha_2^2}{(\lambda_2 + k)^2} \le \sigma^2\left(1 + \frac{z_{01}^2}{\lambda_1} + \frac{z_{02}^2}{\lambda_2}\right). \qquad (22)$$
After rearranging the inequality in (22), we get
$$\frac{z_{02}^2}{z_{01}^2}\left[\frac{4k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{\sigma^2}{\lambda_2}\left(1 - \frac{(\lambda_2 - k)^2}{(\lambda_2 + k)^2}\right)\right] \le \frac{\sigma^2}{\lambda_1}\left(1 - \frac{(\lambda_1 - k)^2}{(\lambda_1 + k)^2}\right). \qquad (23)$$
If both
$$\frac{4k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{\sigma^2}{\lambda_2}\left(1 - \frac{(\lambda_2 - k)^2}{(\lambda_2 + k)^2}\right) \qquad (24)$$
and
$$\frac{\sigma^2}{\lambda_1}\left(1 - \frac{(\lambda_1 - k)^2}{(\lambda_1 + k)^2}\right) \qquad (25)$$
have the same signs, the KL estimator gives better results than the OLS estimator when $z_{02}^2/z_{01}^2 \le f_1(\alpha_2^2)$ holds, where
$$f_1(\alpha_2^2) = \frac{\frac{\sigma^2}{\lambda_1}\left(1 - \frac{(\lambda_1 - k)^2}{(\lambda_1 + k)^2}\right)}{\frac{4k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{\sigma^2}{\lambda_2}\left(1 - \frac{(\lambda_2 - k)^2}{(\lambda_2 + k)^2}\right)}. \qquad (26)$$
Also, if (24) and (25) have opposite signs, the KL estimator always gives better results than the OLS estimator, since $f_1(\alpha_2^2)$ is negative and
$z_{02}^2/z_{01}^2 \ge f_1(\alpha_2^2)$ always holds. Consequently, in that region the KL estimator is better than the OLS estimator.
The positiveness condition of (24) is
$$\alpha_2^2 > \frac{\sigma^2\left[(\lambda_2 + k)^2 - (\lambda_2 - k)^2\right]}{4k^2\lambda_2}, \qquad (27)$$
and (25) is always positive.
The vertical asymptote of the hyperbola $f_1(\alpha_2^2)$ is at the point
$$\alpha_2^2 = \frac{\sigma^2\left[(\lambda_2 + k)^2 - (\lambda_2 - k)^2\right]}{4k^2\lambda_2} = \frac{\sigma^2}{k}. \qquad (28)$$
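As a numerical sanity check of Theorem 1, one can verify that the sign of $J_{KL} - J_{OLS}$ flips exactly where $z_{02}^2/z_{01}^2$ crosses $f_1(\alpha_2^2)$. Below is a sketch using the Section 4 values $\lambda = (1.95, 0.05)$, $k = 0.1$, $\sigma^2 = 1$; the test points are arbitrary.

```python
import numpy as np

lam = np.array([1.95, 0.05])
sigma2, k = 1.0, 0.1
asym = sigma2 * ((lam[1] + k)**2 - (lam[1] - k)**2) / (4 * k**2 * lam[1])  # eq. (28) = 10

def f1(a2sq):  # eq. (21), with alpha_1^2 = 0
    num = (sigma2 / lam[0]) * (1 - (lam[0] - k)**2 / (lam[0] + k)**2)
    den = (4 * k**2 * a2sq / (lam[1] + k)**2
           - (sigma2 / lam[1]) * (1 - (lam[1] - k)**2 / (lam[1] + k)**2))
    return num / den

def j_ols(z0):  # eq. (11)
    return sigma2 * (1 + np.sum(z0**2 / lam))

def j_kl(z0, alpha):  # eq. (20)
    var = sigma2 * (1 + np.sum((lam - k)**2 * z0**2 / (lam * (lam + k)**2)))
    return var + 4 * k**2 * np.sum(z0 * alpha / (lam + k))**2

a2sq = asym + 5.0                                  # case (a): above the asymptote
alpha = np.array([0.0, np.sqrt(a2sq)])
for ratio in (0.5 * f1(a2sq), 2.0 * f1(a2sq)):     # one point on each side of f1
    z0 = np.array([1.0, np.sqrt(ratio)])           # so z02^2 / z01^2 = ratio
    print(ratio <= f1(a2sq), j_kl(z0, alpha) <= j_ols(z0))  # the two flags agree
```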
Theorem 2:
a. If $\alpha_2^2 > \frac{\sigma^2\left[\lambda_2^2 - (\lambda_2 - k)^2\right]}{3k^2\lambda_2}$, then $J_{KL} \le J_k$ if and only if $\frac{z_{02}^2}{z_{01}^2} \le f_2(\alpha_2^2)$.
b. If $\alpha_2^2 < \frac{\sigma^2\left[\lambda_2^2 - (\lambda_2 - k)^2\right]}{3k^2\lambda_2}$, then $J_{KL} \le J_k$ always,
where
$$f_2(\alpha_2^2) = \frac{\frac{\sigma^2\lambda_1}{(\lambda_1 + k)^2} - \frac{\sigma^2(\lambda_1 - k)^2}{\lambda_1(\lambda_1 + k)^2}}{\frac{3k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{\sigma^2\lambda_2}{(\lambda_2 + k)^2} + \frac{\sigma^2(\lambda_2 - k)^2}{\lambda_2(\lambda_2 + k)^2}}. \qquad (29)$$
Proof:
The KL estimator gives better results than the ORR estimator under the PMSE criterion when $J_{KL} \le J_k$, that is, when
$$\sigma^2\left(1 + \frac{(\lambda_1 - k)^2 z_{01}^2}{\lambda_1(\lambda_1 + k)^2} + \frac{(\lambda_2 - k)^2 z_{02}^2}{\lambda_2(\lambda_2 + k)^2}\right) + \frac{4k^2 z_{02}^2\alpha_2^2}{(\lambda_2 + k)^2} \le \sigma^2\left(1 + \frac{\lambda_1 z_{01}^2}{(\lambda_1 + k)^2} + \frac{\lambda_2 z_{02}^2}{(\lambda_2 + k)^2}\right) + \frac{k^2 z_{02}^2\alpha_2^2}{(\lambda_2 + k)^2}. \qquad (30)$$
After rearranging the inequality in (30), we get
$$z_{02}^2\left[\frac{3k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{\sigma^2\lambda_2}{(\lambda_2 + k)^2} + \frac{\sigma^2(\lambda_2 - k)^2}{\lambda_2(\lambda_2 + k)^2}\right] \le z_{01}^2\left[\frac{\sigma^2\lambda_1}{(\lambda_1 + k)^2} - \frac{\sigma^2(\lambda_1 - k)^2}{\lambda_1(\lambda_1 + k)^2}\right]. \qquad (31)$$
If both
$$\frac{3k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{\sigma^2\lambda_2}{(\lambda_2 + k)^2} + \frac{\sigma^2(\lambda_2 - k)^2}{\lambda_2(\lambda_2 + k)^2} \qquad (32)$$
and
$$\frac{\sigma^2\lambda_1}{(\lambda_1 + k)^2} - \frac{\sigma^2(\lambda_1 - k)^2}{\lambda_1(\lambda_1 + k)^2} \qquad (33)$$
have the same signs, the KL estimator gives better results than the ORR estimator when $z_{02}^2/z_{01}^2 \le f_2(\alpha_2^2)$ holds, where
$$f_2(\alpha_2^2) = \frac{\frac{\sigma^2\lambda_1}{(\lambda_1 + k)^2} - \frac{\sigma^2(\lambda_1 - k)^2}{\lambda_1(\lambda_1 + k)^2}}{\frac{3k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{\sigma^2\lambda_2}{(\lambda_2 + k)^2} + \frac{\sigma^2(\lambda_2 - k)^2}{\lambda_2(\lambda_2 + k)^2}}. \qquad (34)$$
Also, if (32) and (33) have opposite signs, the KL estimator always gives better results than the ORR estimator, since $f_2(\alpha_2^2)$ is negative and
$z_{02}^2/z_{01}^2 \ge f_2(\alpha_2^2)$ always holds. Consequently, in that region the KL estimator is better than the ORR estimator.
The positiveness condition of (32) is
$$\alpha_2^2 > \frac{\sigma^2\left[\lambda_2^2 - (\lambda_2 - k)^2\right]}{3k^2\lambda_2}, \qquad (35)$$
and (33) is positive whenever $k < 2\lambda_1$, which holds for the small values of $k$ used in practice.
The vertical asymptote of the hyperbola $f_2(\alpha_2^2)$ is at the point
$$\alpha_2^2 = \frac{\sigma^2\left[\lambda_2^2 - (\lambda_2 - k)^2\right]}{3k^2\lambda_2} = \frac{\sigma^2(2\lambda_2 - k)}{3k\lambda_2}. \qquad (36)$$
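For the numerical values used in Section 4 below ($\lambda_2 = 0.05$, $k = 0.1$), note that $k = 2\lambda_2$, so $\lambda_2^2 = (\lambda_2 - k)^2$ and the asymptote (36) degenerates to $\alpha_2^2 = 0$; the KL-versus-ORR comparison is then a trade-off for every $\alpha_2^2 > 0$. A one-line check (a sketch):

```python
lam2, k, sigma2 = 0.05, 0.1, 1.0
print(sigma2 * (lam2**2 - (lam2 - k)**2) / (3 * k**2 * lam2))  # eq. (36): 0.0
```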
Theorem 3:
a. If $\alpha_2^2 > \frac{\sigma^2\left[(\lambda_2 + d)^2(\lambda_2 + k)^2 - (\lambda_2 + 1)^2(\lambda_2 - k)^2\right]}{\lambda_2\left[4k^2(\lambda_2 + 1)^2 - (1 - d)^2(\lambda_2 + k)^2\right]}$, then
- $J_{KL} \ge J_d$ for $(\lambda_1 + d)^2(\lambda_1 + k)^2 \le (\lambda_1 + 1)^2(\lambda_1 - k)^2$,
- $J_{KL} \le J_d$ if and only if $\frac{z_{02}^2}{z_{01}^2} \le f_3(\alpha_2^2)$ for $(\lambda_1 + d)^2(\lambda_1 + k)^2 \ge (\lambda_1 + 1)^2(\lambda_1 - k)^2$.
b. If $\alpha_2^2 < \frac{\sigma^2\left[(\lambda_2 + d)^2(\lambda_2 + k)^2 - (\lambda_2 + 1)^2(\lambda_2 - k)^2\right]}{\lambda_2\left[4k^2(\lambda_2 + 1)^2 - (1 - d)^2(\lambda_2 + k)^2\right]}$, then
- $J_{KL} \le J_d$ for $(\lambda_1 + d)^2(\lambda_1 + k)^2 \ge (\lambda_1 + 1)^2(\lambda_1 - k)^2$,
- $J_{KL} \le J_d$ if and only if $\frac{z_{02}^2}{z_{01}^2} \ge f_3(\alpha_2^2)$ for $(\lambda_1 + d)^2(\lambda_1 + k)^2 \le (\lambda_1 + 1)^2(\lambda_1 - k)^2$,
where
$$f_3(\alpha_2^2) = \frac{\frac{\sigma^2(\lambda_1 + d)^2}{\lambda_1(\lambda_1 + 1)^2} - \frac{\sigma^2(\lambda_1 - k)^2}{\lambda_1(\lambda_1 + k)^2}}{\frac{4k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{(1 - d)^2\alpha_2^2}{(\lambda_2 + 1)^2} + \frac{\sigma^2(\lambda_2 - k)^2}{\lambda_2(\lambda_2 + k)^2} - \frac{\sigma^2(\lambda_2 + d)^2}{\lambda_2(\lambda_2 + 1)^2}}. \qquad (37)$$
Proof:
The KL estimator gives better results than the Liu estimator under the PMSE criterion when $J_{KL} \le J_d$, that is, when
$$\sigma^2\left(1 + \frac{(\lambda_1 - k)^2 z_{01}^2}{\lambda_1(\lambda_1 + k)^2} + \frac{(\lambda_2 - k)^2 z_{02}^2}{\lambda_2(\lambda_2 + k)^2}\right) + \frac{4k^2 z_{02}^2\alpha_2^2}{(\lambda_2 + k)^2} \le \sigma^2\left(1 + \frac{(\lambda_1 + d)^2 z_{01}^2}{\lambda_1(\lambda_1 + 1)^2} + \frac{(\lambda_2 + d)^2 z_{02}^2}{\lambda_2(\lambda_2 + 1)^2}\right) + \frac{(1 - d)^2 z_{02}^2\alpha_2^2}{(\lambda_2 + 1)^2}. \qquad (38)$$
After rearranging the inequality in (38), we get
$$z_{02}^2\left[\frac{4k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{(1 - d)^2\alpha_2^2}{(\lambda_2 + 1)^2} + \frac{\sigma^2(\lambda_2 - k)^2}{\lambda_2(\lambda_2 + k)^2} - \frac{\sigma^2(\lambda_2 + d)^2}{\lambda_2(\lambda_2 + 1)^2}\right] \le z_{01}^2\left[\frac{\sigma^2(\lambda_1 + d)^2}{\lambda_1(\lambda_1 + 1)^2} - \frac{\sigma^2(\lambda_1 - k)^2}{\lambda_1(\lambda_1 + k)^2}\right]. \qquad (39)$$
If both
$$\frac{4k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{(1 - d)^2\alpha_2^2}{(\lambda_2 + 1)^2} + \frac{\sigma^2(\lambda_2 - k)^2}{\lambda_2(\lambda_2 + k)^2} - \frac{\sigma^2(\lambda_2 + d)^2}{\lambda_2(\lambda_2 + 1)^2} \qquad (40)$$
and
$$\frac{\sigma^2(\lambda_1 + d)^2}{\lambda_1(\lambda_1 + 1)^2} - \frac{\sigma^2(\lambda_1 - k)^2}{\lambda_1(\lambda_1 + k)^2} \qquad (41)$$
have the same signs, the KL estimator gives better results than the Liu estimator when $z_{02}^2/z_{01}^2 \le f_3(\alpha_2^2)$ holds if both are positive (the inequality reverses to $z_{02}^2/z_{01}^2 \ge f_3(\alpha_2^2)$ if both are negative), where
$$f_3(\alpha_2^2) = \frac{\frac{\sigma^2(\lambda_1 + d)^2}{\lambda_1(\lambda_1 + 1)^2} - \frac{\sigma^2(\lambda_1 - k)^2}{\lambda_1(\lambda_1 + k)^2}}{\frac{4k^2\alpha_2^2}{(\lambda_2 + k)^2} - \frac{(1 - d)^2\alpha_2^2}{(\lambda_2 + 1)^2} + \frac{\sigma^2(\lambda_2 - k)^2}{\lambda_2(\lambda_2 + k)^2} - \frac{\sigma^2(\lambda_2 + d)^2}{\lambda_2(\lambda_2 + 1)^2}}. \qquad (42)$$
Also, if (40) is negative and (41) is positive, the KL estimator always gives better results than the Liu estimator, since $f_3(\alpha_2^2)$ is negative and $z_{02}^2/z_{01}^2 \ge f_3(\alpha_2^2)$ always holds; if (40) is positive and (41) is negative, the Liu estimator is always better. Consequently, the regions stated in Theorem 3 follow.
The positiveness condition of (40) is
$$\alpha_2^2 > \frac{\sigma^2\left[(\lambda_2 + d)^2(\lambda_2 + k)^2 - (\lambda_2 + 1)^2(\lambda_2 - k)^2\right]}{\lambda_2\left[4k^2(\lambda_2 + 1)^2 - (1 - d)^2(\lambda_2 + k)^2\right]} \qquad (43)$$
and the positiveness condition of (41) is
$$(\lambda_1 + d)^2(\lambda_1 + k)^2 > (\lambda_1 + 1)^2(\lambda_1 - k)^2. \qquad (44)$$
Of course, the opposite conditions are needed for the negativeness of (40) and (41). The vertical asymptote of the hyperbola $f_3(\alpha_2^2)$ is at the point
$$\alpha_2^2 = \frac{\sigma^2\left[(\lambda_2 + d)^2(\lambda_2 + k)^2 - (\lambda_2 + 1)^2(\lambda_2 - k)^2\right]}{\lambda_2\left[4k^2(\lambda_2 + 1)^2 - (1 - d)^2(\lambda_2 + k)^2\right]}. \qquad (45)$$
Estimation of the biasing parameters $k$ and $d$ is important when the multiple regression model suffers from multicollinearity; we make no attempt to estimate them here and refer the reader to, for example, [1, 2, 3, 17, 18, 19]. Several biased estimators have also been developed in other regression models for handling multicollinearity, such as [20, 21, 22, 23, 24, 25, 26].
4 Application
In this section, we illustrate the theoretical results of this study using the example given by [4] (i.e., $\sigma^2 = 1$, $k = 0.1$, $\lambda_1 = 1.95$ and $\lambda_2 = 0.05$) and by [5] (i.e., $d = 0.9$).
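The constants and asymptotes in (46)-(51) below follow directly from (21), (28), (29), (36), (37) and (45); a short Python sketch that reproduces them from the example values:

```python
l1, l2, k, d, s2 = 1.95, 0.05, 0.1, 0.9, 1.0

# f1: numerator of (21) over the coefficient of alpha_2^2 in its denominator.
num1 = (s2 / l1) * (1 - (l1 - k)**2 / (l1 + k)**2)
print(num1 / (4 * k**2 / (l2 + k)**2))                      # 0.05354, cf. (46)
print(s2 * ((l2 + k)**2 - (l2 - k)**2) / (4 * k**2 * l2))   # 10, cf. (47)

# f2: the same construction from (29); its asymptote (36) is 0 here, cf. (49).
num2 = s2 * l1 / (l1 + k)**2 - s2 * (l1 - k)**2 / (l1 * (l1 + k)**2)
print(num2 / (3 * k**2 / (l2 + k)**2))                      # 0.03478, cf. (48)
print(s2 * (l2**2 - (l2 - k)**2) / (3 * k**2 * l2))         # 0, cf. (49)

# f3: the same construction from (37).
num3 = (s2 * (l1 + d)**2 / (l1 * (l1 + 1)**2)
        - s2 * (l1 - k)**2 / (l1 * (l1 + k)**2))
den3 = 4 * k**2 / (l2 + k)**2 - (1 - d)**2 / (l2 + 1)**2
print(num3 / den3)                                          # 0.03449, cf. (50)
print(s2 * ((l2 + d)**2 * (l2 + k)**2 - (l2 + 1)**2 * (l2 - k)**2)
      / (l2 * (4 * k**2 * (l2 + 1)**2 - (1 - d)**2 * (l2 + k)**2)))  # 8, cf. (51)
```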
First, consider the predictive performances of the KL and OLS estimators. From (21), we get
$$f_1(\alpha_2^2) = \frac{0.05354}{\alpha_2^2 - 10}, \qquad (46)$$
which is a hyperbola with a vertical asymptote at
$$\alpha_2^2 = 10. \qquad (47)$$
We are interested only in points lying in the first quadrant, because both $z_{02}^2/z_{01}^2$ and $\alpha_2^2$ are positive.
Fig. 1: $f_1(\alpha_2^2)$ versus $\alpha_2^2$ values
Figure 1 shows that when $\alpha_2^2 < 10$, the KL estimator gives better results than the OLS estimator; when $\alpha_2^2 > 10$, there is a trade-off between the KL and OLS estimators: if the ratio $z_{02}^2/z_{01}^2$ is smaller than $f_1(\alpha_2^2)$, the KL estimator gives better results than the OLS estimator, otherwise the OLS estimator is better.
Second, consider the predictive performances of the KL and ORR estimators. From (29), we get
$$f_2(\alpha_2^2) = \frac{0.03478}{\alpha_2^2}, \qquad (48)$$
which is a hyperbola with a vertical asymptote at
$$\alpha_2^2 = 0. \qquad (49)$$
Fig. 2: $f_2(\alpha_2^2)$ versus $\alpha_2^2$ values
Figure 2 shows that for all $\alpha_2^2 > 0$ there is a trade-off between the KL and ORR estimators: if the ratio $z_{02}^2/z_{01}^2$ is smaller than $f_2(\alpha_2^2)$, the KL estimator gives better results than the ORR estimator, otherwise the ORR estimator is better.
Finally, consider the predictive performances of the KL and Liu estimators. From (37), we get
$$f_3(\alpha_2^2) = \frac{0.03449}{\alpha_2^2 - 8}, \qquad (50)$$
which is a hyperbola with a vertical asymptote at
$$\alpha_2^2 = 8. \qquad (51)$$
Fig. 3: $f_3(\alpha_2^2)$ versus $\alpha_2^2$ values
Figure 3 shows that when $\alpha_2^2 < 8$, the KL estimator gives better results than the Liu estimator; when $\alpha_2^2 > 8$, there is a trade-off between the KL and Liu estimators: if the ratio $z_{02}^2/z_{01}^2$ is smaller than $f_3(\alpha_2^2)$, the KL estimator gives better results than the Liu estimator, otherwise the Liu estimator is better.
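Figures 1-3 can be regenerated from (46), (48) and (50); a matplotlib sketch (the axis ranges are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

a2 = np.linspace(0.01, 30.0, 600)
with np.errstate(divide="ignore"):      # the hyperbolas blow up at 10 and 8
    curves = {
        r"$f_1$ (KL vs OLS), eq. (46)": 0.05354 / (a2 - 10.0),
        r"$f_2$ (KL vs ORR), eq. (48)": 0.03478 / a2,
        r"$f_3$ (KL vs Liu), eq. (50)": 0.03449 / (a2 - 8.0),
    }
for label, f in curves.items():
    plt.plot(a2, np.where(f > 0, f, np.nan), label=label)  # first-quadrant branch
plt.xlabel(r"$\alpha_2^2$")
plt.ylabel(r"$z_{02}^2 / z_{01}^2$")
plt.ylim(0, 0.2)
plt.legend()
plt.show()
```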
5 Conclusion
We examined the predictive performance of the recently proposed KL estimator and compared it with the OLS, ORR and Liu estimators according to the PMSE criterion at a specific point in the two-dimensional space. The PMSE of the KL estimator was obtained and three theorems were given. The theoretical results were illustrated by a numerical example, and the regions where the KL estimator gives better results than the other estimators were identified. For some $\alpha_2^2$ values, there are trade-offs among the estimators; the OLS estimator is preferable to the KL estimator only when $\alpha_2^2$ is very large. The effectiveness of these techniques is also affected by the location of the prediction point. In the numerical example, a region was established where the KL estimator gives better results than the other estimators, so it is theoretically possible to determine such a region.
References:
[1] A.E. Hoerl, R.W. Kennard, Ridge regression:
biased estimation for nonorthogonal
problems. Technometrics, 12(1):55–67
(1970).
[2] K. Liu, A new class of biased estimate in
linear regression. Communication in
Statistics- Theory and Methods, 22: 393–402
(1993).
[3] B.M. Kibria, A.F. Lukman, A New Ridge-
Type Estimator for the Linear Regression
Model: Simulations and Applications.
Scientifica Article ID 9758378, 1-16 (2020).
[4] D.J. Friedman, D.C. Montgomery, Evaluation
of the predictive performance of biased
regression estimators. Journal of Forecasting
4: 153-163 (1985).
[5] F. Özbey, S. Kaçıranlar, Evaluation of the
Predictive Performance of the Liu Estimator.
Communications in Statistics Theory Methods
44: 1981- 1993 (2015).
[6] I. Dawoud, S. Kaciranlar, The Predictive
Performance Evaluation of Biased Regression
Predictors With Correlated Errors. Journal of
Forecasting 34: 364–378 (2015).
[7] I. Dawoud, S. Kaciranlar, The Prediction of
the Two Parameter Ridge Estimator. Istatistik:
Journal of the Turkish Statistical Association
9: 56–66 (2016).
[8] I. Dawoud, S. Kaciranlar, Evaluation of the
predictive performance of the Liu type
estimator. Communications in Statistics -
Simulation and Computation 46: 2800-2820
(2017).
[9] I. Dawoud, S. Kaciranlar, Evaluation of the
predictive performance of the r-k and r-d class
estimators. Communications in Statistics-
Theory and Methods 46: 4031-4050 (2017).
[10] R. Li, F. Li, J. Huang, Evaluation of the
predictive performance of the principal
component two-parameter estimator.
Concurrency and Computation: Practice and
Experience e4710 (2018).
[11] I. Dawoud, S. Kaciranlar, The Prediction
Performance of the Alternative Biased
Estimators for the Distributed Lag Models.
Iranian Journal of Science and Technology,
Transactions A: Science 44: 85–98 (2020).
[12] A.F. Lukman, Z. Y. Algamal, B.G. Kibria, K.
Ayinde, The KL estimator for the inverse
Gaussian regression model. Concurrency and
Computation: Practice and
Experience, 33(13), e6222 (2021).
[13] A.F. Lukman, I. Dawoud, B.M. Kibria, Z.Y. Algamal, B. Aladeitan, A new ridge-type estimator for the gamma regression model. Scientifica, (2021).
[14] M.N. Akram, B.G. Kibria, M.R. Abonazel, N. Afzal, On the performance of some biased estimators in the gamma regression model: simulation and applications. Journal of Statistical Computation and Simulation, 1-23 (2022). DOI: 10.1080/00949655.2022.2032059.
[15] M.R. Abonazel, I. Dawoud, F.A. Awwad,
A.F. Lukman, Dawoud–Kibria estimator for
beta regression model: simulation and
application. Frontiers in Applied Mathematics
and Statistics, 8:775068 (2022).
[16] I. Dawoud, M.R. Abonazel, Generalized
Kibria-Lukman Estimator: Method,
Simulation, and Application. Frontiers in
Applied Mathematics and Statistics, 8:880086
(2022).
[17] B.M. Kibria, Performance of some new ridge
regression estimators. Communications in
Statistics Simulation and Computation 32:
419-435 (2003).
[18] G. Khalaf, G. Shukur, Choosing ridge
parameters for regression problems.
Communications in Statistics Theory Methods
34: 1177-1182 (2005).
[19] G. Muniz, B.M. Kibria, On some ridge
regression estimators: An empirical
comparison. Communications in Statistics
Simulation and Computation 38: 621-630
(2009).
[20] M.R. Abonazel, I. Dawoud, Developing
robust ridge estimators for Poisson regression
model. Concurrency and Computation:
Practice and Experience, 34: e6979 (2022).
https://doi.org/10.1002/cpe.6979.
[21] I. Dawoud, M.R. Abonazel, Robust Dawoud–
Kibria estimator for handling multicollinearity
and outliers in the linear regression model. J
Stat Comput Simul. 91:3678–92 (2021).
[22] M.N. Akram, M.R. Abonazel, M. Amin, B.M.
Kibria, N. Afzal, A new Stein estimator for
the zero-inflated negative binomial regression
model. Concurrency Computat Pract Exper.
34:e7045 (2022).
[23] M.R. Abonazel, Z.Y. Algamal, F.A. Awwad,
I.M. Taha, A New Two-Parameter Estimator
for Beta Regression Model: Method,
Simulation, and Application. Front. Appl.
Math. Stat. 7: 780322 (2022).
[24] I. Dawoud, M.R. Abonazel, F.A. Awwad, E.
Tag Eldin, A New Tobit Ridge-Type
Estimator of the Censored Regression Model
With Multicollinearity Problem. Front. Appl.
Math. Stat. 8:952142 (2022).
[25] F.A. Awwad, K.A. Odeniyi, I. Dawoud, Z.Y. Algamal, M.R. Abonazel, B.M. Kibria, E. Tag Eldin, New Two-Parameter Estimators for the Logistic Regression Model with Multicollinearity. WSEAS TRANSACTIONS on MATHEMATICS 21:403-414 (2022).
[26] Z.Y. Algamal, M.R. Abonazel, Developing a Liu-type estimator in beta regression model. Concurrency and Computation: Practice and Experience, 34(5):e6685 (2022).
Creative Commons Attribution License 4.0 (Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the Creative Commons Attribution License 4.0: https://creativecommons.org/licenses/by/4.0/deed.en_US