eISSN: 2378-315X

Biometrics & Biostatistics International Journal

Research Article Volume 7 Issue 1

Some new results on fuzzy generalized ‘useful’ mean lengths and its bounds

Arif Habib

Department of Public Health, College of Applied Medical Sciences, King Khalid University, Saudi Arabia

Correspondence: Arif Habib, Department of Public Health, College of Applied Medical Sciences, King Khalid University, Abha, Saudi Arabia

Received: December 21, 2017 | Published: February 12, 2018

Citation: Habib A. Some new results on fuzzy generalized ‘useful’ mean lengths and its bounds. Biom Biostat Int J. 2018;7(1):54-58. DOI: 10.15406/bbij.2018.07.00190


Abstract

In this paper, we propose a fuzzy generalized 'useful' parametric mean length and obtain its bounds in terms of a 'useful' fuzzy information measure. The bounds obtained are new, and some known results follow as particular cases of the proposed measure and bounds.

Keywords: fuzzy sets, Shannon's inequality, generalized Shannon's inequality, coding theorem, Kerridge inaccuracy

AMS subject classification: 94A17, 94A24.

Introduction

Fuzzy sets play a significant role in many deployed systems because of their capability to model non-statistical imprecision. Consequently, the characterization and quantification of fuzziness are important issues that affect the management of uncertainty in many system models and designs. The notion of fuzzy sets was proposed by Zadeh1 with a view to tackling problems in which indefiniteness arising from a sort of intrinsic ambiguity plays a fundamental role. Fuzziness, a texture of uncertainty, results from the lack of a sharp boundary of a set; fuzzy set theory thus allows imprecise knowledge to be used to define an event. The concept of entropy has been widely used in different areas, e.g. communication theory, statistical mechanics, pattern recognition, diagnostics, and neural networks.

A fuzzy set is represented as

$$A=\left\{x_i/\mu_A(x_i):i=1,2,\ldots,n\right\},$$

where $\mu_A(x_i)$ gives the degree of belongingness of the element $x_i$ to the set $A$. If $\mu_A(x_i)$ is $0$ or $1$ for every element, there is no uncertainty about the set and it is said to be a crisp set. On the other hand, a fuzzy set $A$ is defined by a characteristic function

$$\mu_A:\{x_1,x_2,\ldots,x_n\}\to[0,1].$$

The function $\mu_A$ associates with each $x_i\in\mathbb{R}^n$ a grade of membership to the set $A$.
A fuzzy set $A^*$ is called a sharpened version of the fuzzy set $A$ if the following conditions are satisfied:

$$\mu_{A^*}(x_i)\le\mu_A(x_i)\ \text{if}\ \mu_A(x_i)\le 0.5,\quad\text{and}\quad\mu_{A^*}(x_i)\ge\mu_A(x_i)\ \text{if}\ \mu_A(x_i)\ge 0.5,\qquad i=1,2,\ldots,n.$$

De Luca and Termini2 formulated a set of properties that are widely accepted as the criteria for defining any fuzzy entropy. In fuzzy set theory, the entropy is a measure of fuzziness which expresses the amount of average ambiguity in deciding whether an element belongs to a set or not. A measure of average fuzziness of a fuzzy set, $H(A)$, should therefore have the following properties to be a valid entropy.

  1. (Sharpness): $H(A)$ is minimum if and only if $A$ is a crisp set, i.e., $\mu_A(x_i)=0$ or $1$ for all $i$.
  2. (Maximality): $H(A)$ is maximum if and only if $A$ is the most fuzzy set, i.e., $\mu_A(x_i)=\frac{1}{2}$ for all $i$.
  3. (Resolution): $H(A^*)\le H(A)$, where $A^*$ is a sharpened version of $A$.
  4. (Symmetry): $H(A)=H(\bar{A})$, where $\bar{A}$ is the complement of the set $A$, i.e., $\mu_{\bar{A}}(x_i)=1-\mu_A(x_i)$.

The importance of fuzzy sets comes from the fact that they can deal with imprecise and inexact information. Their application areas span from the design of fuzzy controllers to robotics and artificial intelligence.

Basic concepts

Let $X$ be a discrete random variable taking on a finite number of possible values $X=(x_1,x_2,\ldots,x_n)$ with respective membership grades $A=\{\mu_A(x_1),\mu_A(x_2),\ldots,\mu_A(x_n)\}\subset[0,1]$, where $\mu_A(x_i)$ gives the degree of belongingness of the element $x_i$ to the set $A$. The function $\mu_A$ associates with each $x_i\in\mathbb{R}^n$ a grade of membership to the set $A$ and is known as the membership function.

Denote

$$X=\begin{bmatrix}x_1&x_2&\cdots&x_n\\\mu_A(x_1)&\mu_A(x_2)&\cdots&\mu_A(x_n)\end{bmatrix} \qquad (2.1)$$

We call the scheme (2.1) a finite fuzzy information scheme. Every finite scheme describes a state of uncertainty. De Luca and Termini2 introduced a quantity which, in a reasonable way, measures the amount of uncertainty (fuzzy entropy) associated with a given finite scheme. This measure is given by

$$H(A)=-\sum_{i=1}^{n}\left[\mu_A(x_i)\log\mu_A(x_i)+(1-\mu_A(x_i))\log(1-\mu_A(x_i))\right] \qquad (2.2)$$

The measure (2.2) serves as a very suitable measure of the fuzzy entropy of the finite information scheme (2.1).
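As a quick numerical illustration of (2.2), the following Python sketch (with hypothetical membership grades) computes $H(A)$ and checks the sharpness, maximality, and resolution properties listed above.

```python
import numpy as np

def fuzzy_entropy(mu, eps=1e-12):
    """De Luca-Termini fuzzy entropy (2.2):
    H(A) = -sum_i [mu_i log mu_i + (1 - mu_i) log(1 - mu_i)].
    `eps` guards the logarithm at the crisp grades 0 and 1, where the
    contribution vanishes in the limit."""
    mu = np.clip(np.asarray(mu, dtype=float), eps, 1.0 - eps)
    return -np.sum(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))

print(fuzzy_entropy([0.0, 1.0, 1.0]))  # sharpness: crisp set, H ~ 0
print(fuzzy_entropy([0.5, 0.5, 0.5]))  # maximality: H = 3 log 2
# resolution: the sharpened version [0.2, 0.8] has lower entropy than [0.3, 0.7]
print(fuzzy_entropy([0.2, 0.8]) <= fuzzy_entropy([0.3, 0.7]))  # True
```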

Let a finite source of $n$ source symbols $X=(x_1,x_2,\ldots,x_n)$ be encoded using an alphabet of $D$ symbols. It has been shown by Feinstein3 that there is a uniquely decipherable instantaneous code with lengths $l_1,l_2,\ldots,l_n$ if and only if the following Kraft4 inequality is satisfied:

$$\sum_{i=1}^{n}D^{-l_i}\le 1 \qquad (2.3)$$
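The Kraft condition (2.3) is straightforward to test numerically; a minimal sketch (the lengths and alphabet size below are illustrative):

```python
def satisfies_kraft(lengths, D=2):
    """Kraft inequality (2.3): sum_i D^(-l_i) <= 1, necessary and sufficient
    for the existence of a uniquely decipherable instantaneous code."""
    return sum(D ** (-l) for l in lengths) <= 1

print(satisfies_kraft([1, 2, 3, 3]))  # True:  1/2 + 1/4 + 1/8 + 1/8 = 1
print(satisfies_kraft([1, 1, 2]))     # False: 1/2 + 1/2 + 1/4 > 1
```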

Belis & Guiasu5 observed that, in the absence of its qualitative character, a source is not completely specified by the probability distribution $P$ over the source alphabet. It can therefore be assumed5 that the source alphabet letters are assigned weights according to their importance or utility in the view of the experimenter.

Let $U=(u_1,u_2,\ldots,u_n)$ be a set of positive real numbers, where $u_i$ is the utility or importance of the source symbol $x_i$. The utility is, in general, independent of the probability $p_i$ of encoding the source symbol $x_i$. The information source is thus given by

$$X=\begin{bmatrix}x_1&x_2&\cdots&x_n\\p_1&p_2&\cdots&p_n\\u_1&u_2&\cdots&u_n\end{bmatrix},\qquad u_i>0,\ p_i\ge 0,\ \sum_{i=1}^{n}p_i=1 \qquad (2.4)$$

Belis & Guiasu5 introduced the following quantitative-qualitative measure of information:

$$H(P,U)=-\sum_{i=1}^{n}u_ip_i\log p_i \qquad (2.5)$$

which is a measure of the average quantity of 'valuable' or 'useful' information provided by the information source (2.4).

Guiasu & Picard6 considered the problem of encoding the letters output by the source (2.4) by means of a single-letter prefix code whose codewords $c_1,c_2,\ldots,c_n$ have lengths $l_1,l_2,\ldots,l_n$, respectively, and satisfy Kraft's inequality (2.3). They introduced the following 'useful' mean length of the code:

$$L(U)=\frac{\sum_{i=1}^{n}u_ip_il_i}{\sum_{i=1}^{n}u_ip_i} \qquad (2.6)$$

Further they derived a lower bound for (2.6).
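A short Python sketch of (2.5) and (2.6); the probabilities, utilities, and Kraft-feasible lengths below are purely illustrative.

```python
import numpy as np

def useful_information(p, u):
    """Belis-Guiasu 'useful' information (2.5): H(P,U) = -sum_i u_i p_i log p_i."""
    p, u = np.asarray(p, float), np.asarray(u, float)
    return -np.sum(u * p * np.log(p))

def useful_mean_length(p, u, l):
    """Guiasu-Picard 'useful' mean length (2.6):
    L(U) = sum_i u_i p_i l_i / sum_i u_i p_i."""
    p, u, l = np.asarray(p, float), np.asarray(u, float), np.asarray(l, float)
    return np.sum(u * p * l) / np.sum(u * p)

p = [0.5, 0.25, 0.125, 0.125]   # source probabilities
u = [4.0, 2.0, 1.0, 1.0]        # utilities of the source symbols
l = [1, 2, 3, 3]                # codeword lengths satisfying (2.3) for D = 2
print(useful_information(p, u))
print(useful_mean_length(p, u, l))
```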

Now, corresponding to (2.5) and (2.6), we have the following fuzzy measures

$$H(A,U)=-\sum_{i=1}^{n}u_i\left\{\mu_A(x_i)+(1-\mu_A(x_i))\right\}\log\left\{\mu_A(x_i)+(1-\mu_A(x_i))\right\} \qquad (2.7)$$

and

$$L(U)=\frac{\sum_{i=1}^{n}u_i\left\{\mu_A(x_i)+(1-\mu_A(x_i))\right\}l_i}{\sum_{i=1}^{n}u_i\left\{\mu_A(x_i)+(1-\mu_A(x_i))\right\}} \qquad (2.8)$$

respectively.

In the next section, fuzzy coding theorems are obtained by considering a new parametric fuzzy entropy function involving utilities and a generalized 'useful' fuzzy codeword mean length. The results obtained here are not only new but also generalize some well-known results available in the literature of information theory.

Information scheme

Consider the following model for a random experiment S,
S=[E;P;U]

where $E=(E_1,E_2,\ldots,E_n)$ is a finite system of events happening with respective probabilities $P=(p_1,p_2,\ldots,p_n)$, $p_i\ge 0$, $\sum p_i=1$, and credited with utilities $U=(u_1,u_2,\ldots,u_n)$, $u_i>0$, $i=1,2,\ldots,n$. Denote the model by $E$, where

$$E=\begin{bmatrix}E_1&E_2&\cdots&E_n\\p_1&p_2&\cdots&p_n\\u_1&u_2&\cdots&u_n\end{bmatrix} \qquad (3.1)$$

We call (3.1) a Utility Information Scheme (UIS). Belis & Guiasu5 proposed a measure of information for this scheme, called 'useful information', given by

$$H(U;P)=-\sum_{i=1}^{n}u_ip_i\log p_i \qquad (3.2)$$

where $H(U;P)$ reduces to Shannon's7 entropy when the utility aspect of the scheme is ignored, i.e., when $u_i=1$ for each $i$.

Guiasu & Picard6 considered the problem of encoding the outcomes in (3.1) by means of a prefix code with codewords $w_1,w_2,\ldots,w_n$ having lengths $l_1,l_2,\ldots,l_n$ and satisfying Kraft's inequality4

$$\sum_{i=1}^{n}D^{-l_i}\le 1 \qquad (3.3)$$

where $D$ is the size of the code alphabet. The useful mean length $L(U)$ of the code was defined as

$$L(U)=\frac{\sum_{i=1}^{n}u_ip_il_i}{\sum_{i=1}^{n}u_ip_i} \qquad (3.4)$$

and the authors obtained bounds for it in terms of H(U;P).

Now, corresponding to (3.2) and (3.4), we have the following fuzzy measures

$$H(A,U)=-\sum_{i=1}^{n}u_i\left\{\mu_A(x_i)+(1-\mu_A(x_i))\right\}\log\left\{\mu_A(x_i)+(1-\mu_A(x_i))\right\} \qquad (3.5)$$

and

$$L(U)=\frac{\sum_{i=1}^{n}u_i\left\{\mu_A(x_i)+(1-\mu_A(x_i))\right\}l_i}{\sum_{i=1}^{n}u_i\left\{\mu_A(x_i)+(1-\mu_A(x_i))\right\}} \qquad (3.6)$$

Longo,8 Gurdial & Pessoa,9 Khan & Autar,10 Autar & Khan11 have studied generalized coding theorems by considering different generalized measures of (3.2) and (3.4) under condition (3.3) of unique decipherability.

In this paper, we study some coding theorems by considering a new function depending on the parameters $\alpha$ and $\beta$ and a utility function. The motivation for studying this new function is that it generalizes some entropy functions already existing in the literature. The function under study is closely related to the Tsallis entropy used in physics.

Coding theorems

Consider a function

$$H_{\alpha,\beta}(A;U)=\frac{\alpha}{1-\alpha}\left[1-\left(\frac{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}\right], \qquad (4.1)$$

where $\alpha>0\ (\ne 1)$, $\beta>0$, $p_i\ge 0$, $i=1,2,\ldots,n$, and $\sum_{i=1}^{n}p_i\le 1$.

  1. When $\beta=1$ and $\alpha\to 1$, (4.1) reduces to a measure of useful information for the incomplete distribution due to Belis & Guiasu.5
  2. When $u_i=1$ for each $i$, i.e., when the utility aspect is ignored, $\sum_{i=1}^{n}p_i=1$, $\beta=1$ and $\alpha\to 1$, the measure (4.1) reduces to Shannon's entropy.7
  3. When $u_i=1$ for each $i$, the measure (4.1) becomes the entropy for the $\beta$-power distribution derived from $P$ studied by Roy.12 We call $H_{\alpha,\beta}(A;U)$ in (4.1) the generalized useful measure of information for the incomplete power distribution $P^{\beta}$.
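The sketch below is a direct transcription of (4.1); the membership grades and utilities are illustrative, and $\alpha$, $\beta$ must respect the stated ranges.

```python
import numpy as np

def H_alpha_beta(mu, u, alpha, beta):
    """Generalized 'useful' fuzzy measure (4.1) for alpha > 0 (!= 1), beta > 0."""
    mu, u = np.asarray(mu, float), np.asarray(u, float)
    num = np.sum((mu ** (alpha * beta) + (1 - mu) ** (alpha * beta)) * u)
    den = np.sum((mu ** beta + (1 - mu) ** beta) * u)
    return (alpha / (1 - alpha)) * (1 - (num / den) ** (1 / alpha))

mu = [0.2, 0.5, 0.8]   # illustrative membership grades
u = [3.0, 1.0, 2.0]    # illustrative utilities
print(H_alpha_beta(mu, u, alpha=2.0, beta=1.0))
```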

Further, consider

$$L_{\alpha,\beta}(U)=\frac{\alpha}{\alpha-1}\left[1-\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}D^{-l_i\left(\frac{\alpha-1}{\alpha}\right)}\right], \qquad (4.2)$$

where $\alpha>0\ (\ne 1)$ and $\sum_{i=1}^{n}p_i\le 1$.

  1. For $\beta=1$ and $\alpha\to 1$, $L_{\alpha,\beta}(U)$ in (4.2) reduces to the useful mean length $L(U)$ of the code given in (3.6).
  2. For $\beta=1$, $u_i=1$ for each $i$, and $\alpha\to 1$, $L_{\alpha,\beta}(U)$ becomes the optimal code length defined by Shannon.7,13,14

Theorem 4.1. For all integers D>1  

$$L_{\alpha,\beta}(U)\ge H_{\alpha,\beta}(A;U) \qquad (4.3)$$

under the condition (3.3). Equality holds if and only if

$$l_i=-\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right) \qquad (4.4)$$

Proof. We use Holder’s Inequality

$$\sum_{i=1}^{n}x_iy_i\ \ge\ \left(\sum_{i=1}^{n}x_i^{p}\right)^{\frac{1}{p}}\left(\sum_{i=1}^{n}y_i^{q}\right)^{\frac{1}{q}} \qquad (4.5)$$

for all $x_i,y_i>0$, $i=1,2,\ldots,n$, when $p<1\ (\ne 0)$ and $\frac{1}{p}+\frac{1}{q}=1$, with equality if and only if there exists a positive number $c$ such that

$$x_i^{p}=c\,y_i^{q}. \qquad (4.6)$$

Setting

$$x_i=\left(\mu_A^{\frac{\alpha\beta}{\alpha-1}}(x_i)+(1-\mu_A(x_i))^{\frac{\alpha\beta}{\alpha-1}}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha-1}}D^{-l_i}$$

and

$$y_i=\left(\mu_A^{\frac{\alpha\beta}{1-\alpha}}(x_i)+(1-\mu_A(x_i))^{\frac{\alpha\beta}{1-\alpha}}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{1-\alpha}}$$

with $p=1-\frac{1}{\alpha}$ and $q=1-\alpha$ in (4.5), and using (3.3), we obtain the result (4.3) after simplification, since $\frac{\alpha}{\alpha-1}>0$ for $\alpha>1$.
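The inequality (4.3) can also be checked numerically: choosing each $l_i$ as the smallest integer satisfying (4.8) below keeps Kraft's inequality (3.3) intact, and $L_{\alpha,\beta}(U)\ge H_{\alpha,\beta}(A;U)$ then holds for the illustrative values in this sketch (a direct transcription of (4.1) and (4.2) as stated).

```python
import numpy as np

def H_alpha_beta(mu, u, alpha, beta):
    """Generalized 'useful' fuzzy measure (4.1), repeated for self-containment."""
    mu, u = np.asarray(mu, float), np.asarray(u, float)
    num = np.sum((mu ** (alpha * beta) + (1 - mu) ** (alpha * beta)) * u)
    den = np.sum((mu ** beta + (1 - mu) ** beta) * u)
    return (alpha / (1 - alpha)) * (1 - (num / den) ** (1 / alpha))

def L_alpha_beta(mu, u, l, alpha, beta, D=2):
    """Generalized 'useful' mean codeword length (4.2)."""
    mu, u, l = np.asarray(mu, float), np.asarray(u, float), np.asarray(l, float)
    v = mu ** beta + (1 - mu) ** beta
    w = (u / np.sum(v * u)) ** (1 / alpha)
    return (alpha / (alpha - 1)) * (1 - np.sum(v * w * D ** (-l * (alpha - 1) / alpha)))

mu, u = np.array([0.2, 0.5, 0.8]), np.array([3.0, 1.0, 2.0])  # illustrative
alpha, beta, D = 2.0, 1.0, 2

# Ideal lengths (4.4), rounded up to integers as in (4.8).
r = (mu ** (alpha * beta) + (1 - mu) ** (alpha * beta)) * u
r /= r.sum()
l = np.ceil(-np.log(r) / np.log(D))

print(np.sum(D ** -l) <= 1)  # Kraft's inequality (3.3) holds
print(L_alpha_beta(mu, u, l, alpha, beta, D)
      >= H_alpha_beta(mu, u, alpha, beta))  # Theorem 4.1: L >= H
```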

Theorem 4.2. For every code with lengths $\{l_i\}$, $i=1,2,\ldots,n$, $L_{\alpha,\beta}(U)$ can be made to satisfy

$$L_{\alpha,\beta}(U)<H_{\alpha,\beta}(A;U)\,D^{\left(\frac{1-\alpha}{\alpha}\right)}+\frac{\alpha}{1-\alpha}\left[1-D^{\left(\frac{1-\alpha}{\alpha}\right)}\right]. \qquad (4.7)$$

Proof. Let $l_i$ be the positive integer satisfying the inequality

$$-\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)\le l_i<-\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)+1 \qquad (4.8)$$

Consider the intervals

$$\delta_i=\left[-\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right),\ -\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)+1\right] \qquad (4.9)$$

of length 1. In every $\delta_i$ there lies exactly one positive integer $l_i$ such that

$$0<-\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)\le l_i<-\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)+1 \qquad (4.10)$$

It can be shown that the sequence $\{l_i\}$, $i=1,2,\ldots,n$, thus defined satisfies (3.3). From (4.10) we have

$$l_i<-\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)+1 \qquad (4.11)$$

so that

$$D^{l_i}<\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)^{-1}D$$

and

$$D^{l_i\left(\frac{\alpha-1}{\alpha}\right)}<\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)^{\frac{1-\alpha}{\alpha}}D^{\left(\frac{\alpha-1}{\alpha}\right)}.$$

Multiplying both sides of (4.11) by $\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right)^{\frac{1}{\alpha}}$, summing over $i=1,2,\ldots,n$, and simplifying gives (4.7).

Theorem 4.3. For every code with lengths $\{l_i\}$, $i=1,2,\ldots,n$, of Theorem 4.1, $L_{\alpha,\beta}(U)$ can be made to satisfy

$$H_{\alpha,\beta}(A;U)\le L_{\alpha,\beta}(U)<H_{\alpha,\beta}(A;U)+\frac{\alpha}{\alpha-1}\left(1-D\right) \qquad (4.12)$$

Proof. Let

$$\bar{l}_i=-\log_D\left(\frac{\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}\right) \qquad (4.13)$$

Clearly $\bar{l}_i$ and $\bar{l}_i+1$ satisfy 'equality' in Hölder's inequality (4.5). Moreover, $\bar{l}_i$ satisfies Kraft's inequality (3.3).

Suppose $l_i$ is the unique integer between $\bar{l}_i$ and $\bar{l}_i+1$; then obviously $l_i$ satisfies (3.3).
Since $\alpha>0\ (\ne 1)$, we have

$$\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}D^{-\bar{l}_i(\alpha-1)/\alpha}$$

$$\le\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}D^{-l_i(\alpha-1)/\alpha}$$

$$<D\left(\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}D^{-\bar{l}_i(\alpha-1)/\alpha}\right) \qquad (4.14)$$

Since

$$\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}D^{-\bar{l}_i(\alpha-1)/\alpha}=\left(\frac{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}$$

Hence, (4.14) becomes

$$\left(\frac{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}\le\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)\left(\frac{u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}D^{-l_i(\alpha-1)/\alpha}<D\left(\frac{\sum_{i=1}^{n}\left(\mu_A^{\alpha\beta}(x_i)+(1-\mu_A(x_i))^{\alpha\beta}\right)u_i}{\sum_{i=1}^{n}\left(\mu_A^{\beta}(x_i)+(1-\mu_A(x_i))^{\beta}\right)u_i}\right)^{\frac{1}{\alpha}}$$

This gives the result (4.12).
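As a sanity check on the proof device (4.13), note that $D^{-\bar{l}_i}$ equals the normalized weight inside the logarithm, so the $\bar{l}_i$ attain equality in Kraft's inequality (3.3), and the integers $l_i=\lceil\bar{l}_i\rceil$ still satisfy it. A short sketch with the same illustrative values as before:

```python
import numpy as np

mu, u = np.array([0.2, 0.5, 0.8]), np.array([3.0, 1.0, 2.0])  # illustrative
alpha, beta, D = 2.0, 1.0, 2

# Normalized weights r_i from (4.13); lbar_i = -log_D(r_i).
r = (mu ** (alpha * beta) + (1 - mu) ** (alpha * beta)) * u
r /= r.sum()
lbar = -np.log(r) / np.log(D)     # real-valued lengths (4.13)
l = np.ceil(lbar)                 # the unique integers in [lbar_i, lbar_i + 1)

print(np.isclose(np.sum(D ** -lbar), 1.0))  # True: equality in (3.3)
print(np.sum(D ** -l) <= 1.0)               # True: integer lengths satisfy (3.3)
```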

Conclusion

Error-correcting codes constitute one of the key ingredients in achieving the high degree of reliability required in the transmission of information. We have therefore found the minimum value of the 'useful' mean lengths subject to a given constraint on codeword lengths. The 'useful' mean lengths have been generalized parametrically, and their bounds have been studied in terms of generalized measures of entropy.

We have established a result that, in a sense, provides a characterization of $H_{\alpha,\beta}(A;U)$ under the condition of unique decipherability. The main objective of information theory is to remove uncertainty and fuzziness. A measure of information is given by the amount of probabilistic uncertainty removed in an experiment (the measure of uncertainty removed is also called a measure of information), while a measure of fuzziness quantifies the vagueness and ambiguity of uncertainties. The results, with proofs, obtained in Theorems 4.1-4.3 not only generalize the existing fuzzy mean lengths and their bounds, but also contain the known results as particular cases of the proposed length. Some new fuzzy coding theorems have also been proved.

Acknowledgments

None.

Conflicts of interest

None.

References

  1. Zadeh LA. Fuzzy sets. Information and Control. 1965;8(3):338‒353.
  2. De Luca A, Termini S. A definition of a non-probabilistic entropy in the setting of fuzzy set theory. Information and Control. 1972;20(4):301‒312.
  3. Feinstein A. Foundations of Information Theory. McGraw-Hill, New York, USA; 1958.
  4. Kraft LG. A device for quantizing, grouping and coding amplitude modulated pulses. MS Thesis, MIT, Cambridge, USA; 1949.
  5. Belis M, Guiasu S. A quantitative-qualitative measure of information in cybernetic systems. IEEE Trans Information Theory. 1968;14(4):593‒594.
  6. Guiasu S, Picard CF. Borne inférieure de la longueur utile de certains codes. CR Acad Sci Paris. 1971;273:248‒251.
  7. Shannon CE. A mathematical theory of communication. Bell System Technical Journal. 1948;27:379‒423.
  8. Longo G. A noiseless coding theorem for sources having utilities. SIAM J Appl Math. 1976;30(4):739‒748.
  9. Gurdial, Pessoa F. On useful information of order α. J Comb Information and Syst Sci. 1977;2:158‒162.
  10. Khan AB, Autar R. On useful information of order α and β. J Math. 1979;5:93‒99.
  11. Autar R, Khan AB. On generalized useful information for incomplete distribution. J Comb Information and Syst Sci. 1989;14(4):187‒191.
  12. Roy LK. Comparison of Rényi entropies of power distribution. ZAMM. 1976;56:217‒218.
  13. Arndt C. Information Measures: Information and its Description in Science and Engineering. Springer, Berlin, Germany; 2001.
  14. Shisha O. Inequalities. Academic Press, New York, USA; 1967.
©2018 Habib. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.