eISSN: 2378-315X

Biometrics & Biostatistics International Journal

Research Article Volume 11 Issue 2

A comparative study of various estimation methods for modified Lindley distribution

Veena G,1 Lishamol Tomy2

1Department of Statistics, St. Thomas College, Palai, India
2Department of Statistics, Deva Matha College, Kuravilangad, India

Correspondence: Lishamol Tomy, Department of Statistics, Deva Matha College, Kuravilangad, Kerala, India

Received: June 07, 2022 | Published: July 2, 2022

Citation: Veena G, Lishamol T. A comparative study of various estimation methods for modified Lindley distribution. Biom Biostat Int J. 2022;11(2):79-81. DOI: 10.15406/bbij.2022.11.00358


Abstract

The purpose of this study is to examine the finite-sample behaviour of estimators of the parameter of the modified Lindley distribution derived by five estimation methods: maximum likelihood, method of moments, ordinary least squares, weighted least squares, and Cramér–von Mises, using Monte Carlo simulations. Bias and mean-squared error are used as the comparison criteria. In both small and large samples, the Cramér–von Mises approach is found to be highly competitive with the maximum likelihood method. To substantiate the conclusion, a statistical analysis of a real data set related to weather is performed.

Keywords: modified Lindley distribution, method of moments, maximum likelihood, bias, mean square error, weighted models

Introduction

In modelling data related to reliability, one of the most sought-after distributions is the Lindley distribution. It was introduced by Lindley,1 and has the advantage of a single parameter together with an increasing hazard rate function (hrf). The Lindley distribution is a mixture of the gamma(2, β) and exponential(β) distributions.

Let Z be a random variable following the Lindley distribution with parameter β. The probability density function (pdf) of the Lindley distribution is defined by Lindley1 as follows

g(z) = \frac{\beta^{2}}{1+\beta}\,(1+z)\,e^{-\beta z}, \qquad z > 0, \; \beta > 0

The distribution function (df) of the Lindley distribution is also defined as follows

G(z) = 1 - \left[ 1 + \frac{\beta z}{1+\beta} \right] e^{-\beta z}, \qquad z > 0, \; \beta > 0

Ghitany2 discussed the application of the Lindley distribution to a real-world dataset, after which the distribution became popular. Many generalizations of the Lindley distribution have since been developed. We direct the readers to Tomy3 and Chesneau4 for a better understanding of the different generalizations and applications of the Lindley distribution.

Among these, the modified Lindley (ML) distribution developed by Chesneau5 is of prime importance. Suppose Y is a random variable following the ML distribution with parameter β; then the pdf of the ML distribution is defined as follows

g(y) = \frac{\beta}{1+\beta}\, e^{-2\beta y} \left[ (1+\beta)\, e^{\beta y} + 2\beta y - 1 \right], \qquad y > 0, \; \beta > 0 \qquad (1)

The df of the ML distribution is also defined as follows:

G(y) = 1 - \left[ 1 + \frac{\beta y}{1+\beta}\, e^{-\beta y} \right] e^{-\beta y}, \qquad y > 0, \; \beta > 0
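For numerical work, the pdf and df above can be coded directly. The paper's computations were done in R; the following is a minimal illustrative sketch in Python (the function names `ml_pdf` and `ml_cdf` are ours, not from the paper):

```python
import numpy as np

def ml_pdf(y, beta):
    """pdf of the modified Lindley distribution, Equation (1)."""
    return (beta / (1 + beta)) * np.exp(-2 * beta * y) * (
        (1 + beta) * np.exp(beta * y) + 2 * beta * y - 1
    )

def ml_cdf(y, beta):
    """df of the modified Lindley distribution."""
    return 1 - (1 + beta * y / (1 + beta) * np.exp(-beta * y)) * np.exp(-beta * y)
```

A quick consistency check is that the numerical derivative of `ml_cdf` reproduces `ml_pdf`.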

The mean and variance of the ML distribution are given as follows:

\mu = \frac{5+4\beta}{4\beta(1+\beta)}, \qquad \sigma^{2} = \frac{(3+4\beta)(5+4\beta)}{16\beta^{2}(1+\beta)^{2}}

Figure 1 illustrates the plots of the pdf of the ML distribution for various values of the parameter β.

Observations –

The shapes of the pdf plots are either decreasing or unimodal, depending on the value of β.

In this paper, we discuss the different methods to estimate the unknown single parameter β of the ML distribution. A simulation study is also conducted to compare the efficiency of different estimators.

The structure of the paper is as follows. Section 2 discusses the different methods of estimating β. In Section 3, we compare the efficiency of the estimates produced by the different methods of estimation. Section 4 shows the application of different estimation methods over a real-world dataset.

Inferential aspects

In this section, we concentrate on the estimation of the parameter β, which is assumed to be unknown. We consider estimating the parameter of the ML distribution using maximum likelihood (ML), method of moments (MOM), least squares (LS), weighted least squares (WLS), and Cramér–von Mises (CVM) estimation.

ML estimation

Consider (y1, y2, ..., yn) to be a random sample from the ML distribution with parameter β, with pdf defined in Equation (1). The log-likelihood (LL) function is given by

LL(\beta) = n \log \beta - n \log(1+\beta) - 2\beta \sum_{i=1}^{n} y_i + \log \left[ \prod_{i=1}^{n} \left( (1+\beta)\, e^{\beta y_i} + 2\beta y_i - 1 \right) \right]

The ML estimate of β, denoted by \hat{\beta}, is defined as

\hat{\beta} = \underset{\beta \in (0,\infty)}{\arg\max}\; LL(\beta)

The ML estimate of β can be obtained by taking the derivative of LL(β) with respect to β and equating it to zero, which is as follows:

\frac{n}{\beta} - \frac{n}{1+\beta} - 2 \sum_{i=1}^{n} y_i + \sum_{i=1}^{n} \frac{e^{\beta y_i} \left( \beta y_i + y_i + 1 \right) + 2 y_i}{(1+\beta)\, e^{\beta y_i} + 2\beta y_i - 1} = 0

As no explicit closed-form solution for β exists, numerical methods are used to obtain the estimate of β.
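The paper carries out this numerical maximization in R (Newton's method and BFGS). As an illustration only, the log-likelihood can be maximized with SciPy's bounded scalar minimizer; the function names and the search bounds here are our own assumptions, and the exponential term is rewritten to avoid overflow:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(beta, y):
    """Negative LL(beta) for the modified Lindley distribution.

    Uses log((1+b)e^{by} + 2by - 1) = by + log((1+b) + (2by-1)e^{-by})
    so large beta*y does not overflow.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    log_term = beta * y + np.log((1 + beta) + (2 * beta * y - 1) * np.exp(-beta * y))
    ll = n * np.log(beta) - n * np.log(1 + beta) - 2 * beta * y.sum() + log_term.sum()
    return -ll

def ml_estimate(y):
    """Maximize LL(beta) numerically over a bounded interval (bounds assumed)."""
    res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 50.0),
                          args=(np.asarray(y, dtype=float),), method="bounded")
    return res.x
```

For a large simulated sample the returned estimate should sit close to the true β, consistent with the small biases reported in the simulation study below.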

MOM estimation

Let (y1, y2, ..., yn) represent n independent observations from the ML distribution with pdf defined in Equation (1). Chesneau5 showed that the unique MOM estimate of β is given by

\hat{\beta}_{MOM} = \frac{-(\bar{y}-1) + \sqrt{(\bar{y}-1)^{2} + 5\bar{y}}}{2\bar{y}}

where \bar{y} is the sample mean.
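Because the MOM estimate is available in closed form, it is a one-line computation from the sample mean. A sketch (the function name is ours); plugging in the theoretical mean \mu(\beta) should recover β exactly:

```python
import numpy as np

def mom_estimate(sample):
    """Closed-form method-of-moments estimate of beta for the ML distribution."""
    m = np.mean(sample)  # sample mean y-bar
    return (-(m - 1) + np.sqrt((m - 1) ** 2 + 5 * m)) / (2 * m)
```

This is the positive root of the quadratic obtained by equating the sample mean to \mu = (5+4\beta)/(4\beta(1+\beta)).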

LS and WLS estimation

Swain6 devised the method of least squares. This approach minimizes the distance between the vector of uniformized order statistics and the associated vector of expected values. Let (y1, y2, ..., yn) represent a random sample from the ML distribution with parameter β, and let y(1), y(2), ..., y(n) be the values of y1, y2, ..., yn arranged in ascending order. The LS function is given as follows:

LS(\beta) = \sum_{i=1}^{n} \left[ G\left( y_{(i)}, \beta \right) - \frac{i}{n+1} \right]^{2}

The LS estimate of β is defined by

\hat{\beta} = \underset{\beta \in (0,\infty)}{\arg\min}\; LS(\beta)

Partially differentiating with respect to the unknown parameter β yields a nonlinear equation involving the df of the ML distribution, which can be solved numerically.

The WLS function is given as follows:

WLS(\beta) = \sum_{i=1}^{n} \eta_i \left[ G\left( y_{(i)}, \beta \right) - \frac{i}{n+1} \right]^{2}

where \eta_i = \frac{(n+1)^{2}(n+2)}{i(n-i+1)}

The WLS estimate of β is defined by

\hat{\beta} = \underset{\beta \in (0,\infty)}{\arg\min}\; WLS(\beta)

As a result, the WLS estimate of β can be derived using a process similar to that for the LS estimate.

CVM estimation

The CVM approach is similar to the two methods described earlier. The CVM function is defined as follows:

CVM(\beta) = \frac{1}{12n} + \sum_{i=1}^{n} \left[ G\left( y_{(i)}, \beta \right) - \frac{2i-1}{2n} \right]^{2}

The CVM estimate of β is defined by

\hat{\beta} = \underset{\beta \in (0,\infty)}{\arg\min}\; CVM(\beta)

As a result, the CVM estimate of β follows the same approach as the WLS and LS estimates.
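The three minimum-distance criteria (LS, WLS, CVM) share the same structure: substitute the df of the ML distribution into the objective and minimize it numerically over β. A sketch for the CVM case, using SciPy's bounded minimizer (the function names and bounds are our own, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ml_cdf(y, beta):
    """df of the modified Lindley distribution."""
    return 1 - (1 + beta * y / (1 + beta) * np.exp(-beta * y)) * np.exp(-beta * y)

def cvm_objective(beta, y_sorted):
    """CVM(beta) evaluated at the ordered sample."""
    n = y_sorted.size
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((ml_cdf(y_sorted, beta) - (2 * i - 1) / (2 * n)) ** 2)

def cvm_estimate(y):
    """Minimize CVM(beta) numerically (search bounds assumed)."""
    y_sorted = np.sort(np.asarray(y, dtype=float))
    res = minimize_scalar(cvm_objective, bounds=(1e-6, 50.0),
                          args=(y_sorted,), method="bounded")
    return res.x
```

Swapping in the LS or WLS objective from the previous subsections changes only `cvm_objective`.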

Simulation study

We conduct a simulation analysis in this section to assess the efficiency of the estimates of the ML model parameter presented in Section 2. We employed the Monte Carlo technique in R software, together with Newton's method and the BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm, developed by Broyden,7 Fletcher,8 Goldfarb,9 and Shanno.10

The bias and mean square error (MSE) were examined in the study:

\text{Bias}(\hat{\beta}) = \frac{1}{N} \sum_{j=1}^{N} \left( \hat{\beta}_j - \beta \right) \quad \text{and} \quad \text{MSE}(\hat{\beta}) = \frac{1}{N} \sum_{j=1}^{N} \left( \hat{\beta}_j - \beta \right)^{2}

where, for each sample size n, \hat{\beta}_j is the estimate of the parameter at the j-th replication obtained using a specific estimation method. R software, version 4.0.5, was used for all calculations.
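Given the N replicate estimates produced by any of the methods, the bias and MSE formulas above reduce to two lines; a sketch (the function name is ours):

```python
import numpy as np

def bias_mse(estimates, beta):
    """Bias and MSE of a vector of replicate estimates against the true beta."""
    est = np.asarray(estimates, dtype=float)
    return est.mean() - beta, np.mean((est - beta) ** 2)
```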

Tables 1–3 show the biases and mean square errors (MSE) of the proposed model's parameter based on N = 5000 replicates. To further analyse the nature of the estimates, various sample sizes (n = 50, 100, 200, and 500) and parameter values β = 0.5, 1, 1.5 are employed.

n     Estimate   Bias        MSE
50    ML         0.017219    0.004665
      MOM        0.017282    0.004002
      OLS        0.016243    0.004623
      WLS        -0.5        0.25
      CVM        0.016524    0.003894
100   ML         0.013654    0.00199
      MOM        0.013417    0.002018
      OLS        0.013752    0.00229
      WLS        -0.5        0.25
      CVM        0.013256    0.002306
200   ML         0.011259    0.000995
      MOM        0.012063    0.001168
      OLS        0.011815    0.001161
      WLS        -0.5        0.25
      CVM        0.010682    0.000978
500   ML         0.010423    0.000425
      MOM        0.011379    0.000505
      OLS        0.011279    0.000503
      WLS        -0.5        0.25
      CVM        0.010103    0.000422

Table 1 Bias and MSE for β = 0.5

n     Estimate   Bias         MSE
50    ML         0.03443951   0.0156625
      MOM        0.03748674   0.0162677
      OLS        0.02567615   0.01804344
      WLS        -1           1
      CVM        0.02774687   0.0181966
100   ML         0.02801196   0.008132881
      MOM        0.0302575    0.00823406
      OLS        0.02086761   0.008980337
      WLS        -1           1
      CVM        0.02191171   0.009037173
200   ML         0.02473895   0.003989069
      MOM        0.02688922   0.004123211
      OLS        0.01902113   0.004393245
      WLS        -1           1
      CVM        0.0195463    0.004416063
500   ML         0.0234787    0.001838779
      MOM        0.02486852   0.001912136

Table 2 Bias and MSE for β = 1

n     Estimate   Bias         MSE
50    ML         0.1781775    0.07687606
      MOM        0.1962871    0.09591677
      OLS        0.1927908    0.09444386
      WLS        -1.5         2.25
      CVM        0.17634      0.07521178
100   ML         0.1664498    0.04858995
      MOM        0.16478      0.04814568
      OLS        0.1813909    0.05871948
      WLS        -1.5         2.25
      CVM        0.02191171   0.009037173
200   ML         0.1617356    0.03610189
      MOM        0.1784734    0.04441311
      OLS        0.1775813    0.04408753
      WLS        -1.5         2.25
      CVM        0.02113903   0.003554184
500   ML         0.1588387    0.02905477
      MOM        0.1753744    0.03550387
      OLS        0.1750164    0.03537724
      WLS        -1.5         2.25
      CVM        0.0155954    0.00281068

Table 3 Bias and MSE for β = 1.5

Some observations from Tables 1–3:

  1. The estimates of the parameter are positively biased for all methods, except for the WLS estimator.
  2. The biases of all the estimators tend to zero for large values of the sample size n.
  3. WLS has the largest MSE among the five estimators considered.
  4. The CVM estimator has the least bias among all the estimators.

Real data analysis

We use a well-known real data set to demonstrate the use of the ML model in real-life circumstances. To accomplish this, we compare the estimates obtained through the different estimation methods for the ML model. As the performance validation criterion, the Kolmogorov–Smirnov (K-S) goodness-of-fit statistic, also denoted by Dn, is used to discover the best method:

D_n = \max_{i=1,2,\ldots,n} \left[ G_{ML}\left( y_{(i)}; \hat{\beta} \right) - \frac{i-1}{n}, \; \frac{i}{n} - G_{ML}\left( y_{(i)}; \hat{\beta} \right) \right]

The p-value of the K-S test statistic is also taken into account: the best method of estimation has the smallest K-S statistic as well as the highest p-value. We refer the readers to Burnham and Anderson11 for definitions and more insights on these criteria. The data set includes 30 measurements of precipitation (in inches) in March in Minneapolis, and was provided by Hinkley.12
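The D_n statistic above can be computed directly from the fitted df and the ordered sample. A sketch (our function name); for a fully specified df it agrees with `scipy.stats.kstest`, which also returns the p-value:

```python
import numpy as np

def ks_statistic(y, cdf):
    """D_n = max_i max( F(y_(i)) - (i-1)/n , i/n - F(y_(i)) )."""
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    i = np.arange(1, n + 1)
    f = cdf(y)  # fitted df evaluated at the order statistics
    return float(np.max(np.maximum(f - (i - 1) / n, i / n - f)))
```

In practice `cdf` would be the ML distribution's df with the estimated \hat{\beta} plugged in.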

To complete the comparison of the estimation methods for the ML distribution, Table 4 contains the parameter estimates obtained using the various methods, together with the corresponding K-S statistics and p-values.

Methods   β           K-S         p-value
MOM       0.685563    0.167852    0.366565
OLS       0.613856    0.129532    0.69541
WLS       0.616286    0.130844    0.683348
CVM       0.604734    0.124597    0.740227
MLE       0.6644      0.1567      0.4532

Table 4 β of different estimation methods for ML distribution with K-S statistic and p-value

We use the following criterion to compare the five estimation approaches for the modified Lindley distribution:

  • The K-S statistic, which measures the greatest difference between the theoretical and empirical distribution functions, together with its p-value

Table 4 demonstrates that the CVM approach meets the above-mentioned criterion, namely, the smallest K-S statistic and the highest p-value. As a result, we can conclude that the CVM estimation approach is the most appropriate among the five estimation methods for the presented data set.

Conclusion

In this study, we compared the estimation of the parameter of the modified Lindley distribution using five well-known estimation methods: maximum likelihood, method of moments, least squares, weighted least squares, and Cramér–von Mises, utilising extensive simulation trials. In comparison to the maximum likelihood method, simulations reveal that Cramér–von Mises is a highly competitive method. This is backed up by the analysis of a real-world climatic data set.

Acknowledgments

None.

Conflicts of interest

The authors declare that there are no conflicts of interest.

References

  1. Lindley DV. Fiducial distributions and Bayes theorem. Journal of the Royal Statistical Society. 1958;20(1):102–107.
  2. Ghitany ME, Atieh B, Nadarajah S. Lindley distribution and its application. Mathematics and Computers in Simulation. 2008;78(4):493–506.
  3. Tomy L. A Retrospective study on Lindley distribution. Biom Biostat Int J. 2018;7(2):163–169.
  4. Chesneau C, Tomy L, Jose M. A Retrospective Study on Applications of the Lindley Distribution. Journal of Non-linear modelling and Analysis. 2022;4(2):291–309.
  5. Chesneau C, Tomy L, Gillariose J. A new modified Lindley distribution with properties and applications. Journal of Statistics and Management Systems. 2021;24(7):1383–1403.
  6. Swain JJ, Venkatraman S, Wilson JR. Least-squares estimation of distribution functions in Johnson’s translation system. J Stat Comput Simul. 1988;29(4):271–297.
  7. Broyden CG. The convergence of a class of double-rank minimization algorithms 1. general considerations. IMA J App Math. 1970;6:76–90.
  8. Fletcher R. A new approach to variable metric algorithms. Comput J. 1970;13(3):317–322.
  9. Goldfarb D. A family of variable-metric methods derived by variational means. Math Comput. 1970;24(109):23–26.
  10. Shanno DF. Conditioning of quasi-Newton methods for function minimization. Math Comput. 1970;24(111):647–656.
  11. Burnham KP, Anderson DR. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. 2nd edn. Springer; 2002.
  12. Hinkley D. On quick choice of power transformations. Applied Statistics. 1977;26(1):67–69.
Creative Commons Attribution License

©2022 Veena, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and build upon your work non-commercially.