eISSN: 2378-315X

Biometrics & Biostatistics International Journal

Research Article Volume 7 Issue 1

Some new results on fuzzy generalized ‘useful’ mean lengths and its bounds

Arif Habib

Department of Public Health, College of Applied Medical Sciences, King Khalid University, Saudi Arabia

Correspondence: Arif Habib, Department of Public Health, College of Applied Medical Sciences, King Khalid University, Abha, Saudi Arabia

Received: December 21, 2017 | Published: February 12, 2018

Citation: Habib A. Some new results on fuzzy generalized ‘useful’ mean lengths and its bounds. Biom Biostat Int J. 2018;7(1):54-58. DOI: 10.15406/bbij.2018.07.00190


Abstract

In this paper, we propose a fuzzy generalized 'useful' parametric mean length, and bounds for it are obtained in terms of a 'useful' fuzzy measure. The bounds obtained are new, and some known results are particular cases of the proposed measure and bounds.

Keywords: fuzzy sets, Shannon's inequality, generalized Shannon's inequality, coding theorem, Kerridge inaccuracy

AMS subject classification: 94A17, 94A24.

Introduction

Fuzzy sets play a significant role in many deployed systems because of their capability to model non-statistical imprecision. Consequently, characterization and quantification of fuzziness are important issues that affect the management of uncertainty in many system models and designs. The notion of fuzzy sets was proposed by Zadeh1 with a view to tackling problems in which indefiniteness arising from a sort of intrinsic ambiguity plays a fundamental role. Fuzziness, a texture of uncertainty, results from the lack of a sharp distinction of the boundary of a set. The concept of fuzzy sets allows imprecise knowledge to be used to define an event. The concept of entropy has been widely used in different areas, e.g. communication theory, statistical mechanics, pattern recognition, diagnostics and neural networks.

A fuzzy set is represented as

$$A = \{(x_i, \mu_A(x_i)) : i = 1, 2, \ldots, n\},$$

where $\mu_A(x_i)$ gives the degree of belongingness of the element $x_i$ to the set $A$. If the membership grade of every element of $A$ is 0 or 1, there is no uncertainty about it and the set is said to be a crisp set. On the other hand, a fuzzy set $A$ is defined by a characteristic function

$$\mu_A : \{x_1, x_2, \ldots, x_n\} \to [0, 1].$$

The function $\mu_A$ associates with each $x_i \in R^n$ a grade of membership to the set $A$.
A fuzzy set $A^*$ is called a sharpened version of the fuzzy set $A$ if the following conditions are satisfied:

$$\mu_{A^*}(x_i) \le \mu_A(x_i) \text{ if } \mu_A(x_i) \le 0.5, \text{ for all } i = 1, 2, \ldots, n, \text{ and}$$
$$\mu_{A^*}(x_i) \ge \mu_A(x_i) \text{ if } \mu_A(x_i) \ge 0.5, \text{ for all } i = 1, 2, \ldots, n.$$

De Luca and Termini2 formulated a set of properties, and these properties are widely accepted as criteria for defining any fuzzy entropy. In fuzzy set theory, entropy is a measure of fuzziness which expresses the amount of average ambiguity in making a decision whether an element belongs to a set or not. So, a measure of average fuzziness of a fuzzy set, H(A), should have the following properties to be a valid entropy.

  1. (Sharpness): H(A) is minimum if and only if A is a crisp set, i.e., $\mu_A(x_i) = 0$ or $1$ for all $i$.
  2. (Maximality): H(A) is maximum if and only if A is the most fuzzy set, i.e., $\mu_A(x_i) = \frac{1}{2}$ for all $i$.
  3. (Resolution): $H(A^*) \le H(A)$, where $A^*$ is a sharpened version of $A$.
  4. (Symmetry): $H(A) = H(\bar{A})$, where $\bar{A}$ is the complement of the set $A$, i.e., $\mu_{\bar{A}}(x_i) = 1 - \mu_A(x_i)$.

The importance of fuzzy sets comes from the fact that they can deal with imprecise and inexact information. Their application areas span from the design of fuzzy controllers to robotics and artificial intelligence.

Basic concepts

Let X be a discrete random variable taking a finite number of possible values $X = (x_1, x_2, \ldots, x_n)$ with respective membership grades $\mu_A(x_1), \mu_A(x_2), \ldots, \mu_A(x_n) \in [0, 1]$, where $\mu_A(x_i)$ gives the degree of belongingness of the element $x_i$ to the set $A$. The function $\mu_A$ associates with each $x_i \in R^n$ a grade of membership to the set $A$ and is known as the membership function.

Denote

$$X = \begin{pmatrix} x_1 & x_2 & \cdots & x_n \\ \mu_A(x_1) & \mu_A(x_2) & \cdots & \mu_A(x_n) \end{pmatrix} \quad (2.1)$$

We call the scheme (2.1) a finite fuzzy information scheme. Every finite scheme describes a state of uncertainty. De Luca and Termini2 introduced a quantity which, in a reasonable way, measures the amount of uncertainty (fuzzy entropy) associated with a given finite scheme. This measure is given by

$$H(A) = -\sum_{i=1}^{n} \left[ \mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i)) \right] \quad (2.2)$$

The measure (2.2) serves as a very suitable measure of the fuzzy entropy of the finite information scheme (2.1).
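As a quick numerical illustration (not part of the original derivation), the measure (2.2) and the properties listed above can be checked in a few lines of Python; `fuzzy_entropy` is an illustrative name, and base-2 logarithms with the convention $0 \log 0 = 0$ are assumed.

```python
import math

def fuzzy_entropy(mu, base=2):
    """De Luca-Termini fuzzy entropy (2.2) of a list of membership grades."""
    h = 0.0
    for m in mu:
        # Each element contributes -[m log m + (1 - m) log(1 - m)],
        # with the convention 0 log 0 = 0.
        for p in (m, 1.0 - m):
            if p > 0:
                h -= p * math.log(p, base)
    return h
```

The sharpness, maximality and resolution properties show up directly: a crisp set has zero entropy, grades of 0.5 everywhere maximize it (one bit per element), and the sharpened version [0.2, 0.9] of [0.3, 0.8] has lower entropy.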

Let a finite source of n source symbols $X = (x_1, x_2, \ldots, x_n)$ be encoded using an alphabet of D symbols; then it has been shown by Feinstein3 that there is a uniquely decipherable instantaneous code with lengths $l_1, l_2, \ldots, l_n$ iff the following Kraft4 inequality is satisfied

$$\sum_{i=1}^{n} D^{-l_i} \le 1 \quad (2.3)$$
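Condition (2.3) is easy to test for a candidate set of lengths; a minimal sketch (illustrative function names, binary alphabet by default):

```python
def kraft_sum(lengths, D=2):
    """Left-hand side of Kraft's inequality (2.3): sum of D**(-l_i)."""
    return sum(D ** (-l) for l in lengths)

def uniquely_decipherable(lengths, D=2):
    """A uniquely decipherable instantaneous code with these lengths exists
    iff the Kraft sum does not exceed 1."""
    return kraft_sum(lengths, D) <= 1
```

For example, binary lengths (1, 2, 2) meet (2.3) with equality, while (1, 1, 2) violate it, so no instantaneous binary code with those lengths exists.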

Belis & Guiasu5 observed that a source is not completely specified by the probability distribution P over the source alphabet in the absence of qualitative character. So it can be assumed (Belis & Guiasu5) that the source alphabet letters are assigned weights according to their importance or utilities in view of the experimenter.

Let $U = (u_1, u_2, \ldots, u_n)$ be a set of positive real numbers, where $u_i$ is the utility or importance of $X_i$. The utility, in general, is independent of the probability $p_i$ of encoding the source symbol $X_i$. The information source is thus given by

$$X = \begin{pmatrix} X_1 & X_2 & \cdots & X_n \\ p_1 & p_2 & \cdots & p_n \\ u_1 & u_2 & \cdots & u_n \end{pmatrix}, \quad u_i > 0,\ p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1 \quad (2.4)$$

Belis & Guiasu5 introduced the following quantitative-qualitative measure of information

$$H(P, U) = -\sum_{i=1}^{n} u_i p_i \log p_i \quad (2.5)$$

which is a measure of the average quantity of 'valuable' or 'useful' information provided by the information source (2.4).

Guiasu & Picard6 considered the problem of encoding the letters output by the source (2.4) by means of a single-letter prefix code whose codewords $c_1, c_2, \ldots, c_n$ are of lengths $l_1, l_2, \ldots, l_n$ respectively and satisfy the Kraft inequality (2.3); they introduced the following 'useful' mean length of the code

$$L(U) = \frac{\sum_{i=1}^{n} u_i p_i l_i}{\sum_{i=1}^{n} u_i p_i} \quad (2.6)$$

Further they derived a lower bound for (2.6).
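For concreteness, (2.5) and (2.6) can be sketched as follows (Python, base-2 logarithm, illustrative names); when every $u_i = 1$ they collapse to Shannon's entropy and the ordinary mean codeword length.

```python
import math

def useful_information(p, u, base=2):
    """Belis-Guiasu quantitative-qualitative measure (2.5)."""
    return -sum(ui * pi * math.log(pi, base)
                for pi, ui in zip(p, u) if pi > 0)

def useful_mean_length(p, u, lengths):
    """Guiasu-Picard 'useful' mean codeword length (2.6)."""
    num = sum(ui * pi * li for pi, ui, li in zip(p, u, lengths))
    den = sum(ui * pi for pi, ui in zip(p, u))
    return num / den
```

With $p = (1/2, 1/4, 1/4)$, lengths $(1, 2, 2)$ and unit utilities, both quantities equal 1.5 bits; weighting the rarer symbols more heavily (e.g. $u = (1, 2, 4)$) pulls the mean length toward their codeword lengths.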

Now, corresponding to (2.5) and (2.6), we have the following fuzzy measures

$$H(A, U) = -\sum_{i=1}^{n} u_i \left[ \mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i)) \right] \quad (2.7)$$

and

$$L(U) = \frac{\sum_{i=1}^{n} u_i [\mu_A(x_i) + (1 - \mu_A(x_i))] l_i}{\sum_{i=1}^{n} u_i [\mu_A(x_i) + (1 - \mu_A(x_i))]} \quad (2.8)$$

respectively.
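The fuzzy analogues (2.7) and (2.8) replace probabilities by membership grades; a minimal sketch follows (Python, base-2 logarithms, $0 \log 0 := 0$, illustrative names). Note that since $\mu + (1 - \mu) = 1$, the weights in (2.8) reduce to the utilities themselves.

```python
import math

def fuzzy_useful_information(mu, u, base=2):
    """Fuzzy analogue (2.7) of the Belis-Guiasu measure."""
    h = 0.0
    for m, ui in zip(mu, u):
        for p in (m, 1.0 - m):
            if p > 0:
                h -= ui * p * math.log(p, base)
    return h

def fuzzy_useful_mean_length(mu, u, lengths):
    """Fuzzy 'useful' mean codeword length (2.8)."""
    # Since mu + (1 - mu) = 1, each weight u_i * [mu + (1 - mu)] is just u_i.
    w = [ui * (m + (1.0 - m)) for m, ui in zip(mu, u)]
    return sum(wi * li for wi, li in zip(w, lengths)) / sum(w)
```

A crisp set contributes zero 'useful' fuzzy information regardless of the utilities, while a maximally fuzzy element contributes exactly its utility in bits.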

In the next section, fuzzy coding theorems are obtained by considering a new parametric fuzzy entropy function involving utilities and a generalized 'useful' fuzzy codeword mean length. The results obtained here are not only new, but also generalize some well-known results available in the literature of information theory.

Information scheme

Consider the following model for a random experiment S,

$$S = (E; P; U),$$

where $E = (E_1, E_2, \ldots, E_n)$ is a finite system of events happening with respective probabilities $P = (p_1, p_2, \ldots, p_n)$, $p_i \ge 0$, $\sum p_i = 1$, and credited with utilities $U = (u_1, u_2, \ldots, u_n)$, $u_i > 0$, $i = 1, 2, \ldots, n$. Denote the model by E, where

$$E = \begin{pmatrix} E_1 & E_2 & \cdots & E_n \\ p_1 & p_2 & \cdots & p_n \\ u_1 & u_2 & \cdots & u_n \end{pmatrix} \quad (3.1)$$

We call (3.1) a Utility Information Scheme (UIS). Belis & Guiasu5 proposed a measure of information called 'useful information' for this scheme, given by

$$H(U; P) = -\sum_{i=1}^{n} u_i p_i \log p_i \quad (3.2)$$

where H(U;P) reduces to Shannon's7 entropy when the utility aspect of the scheme is ignored, i.e., when $u_i = 1$ for each $i$.

Guiasu & Picard6 considered the problem of encoding the outcomes in (3.1) by means of a prefix code with codewords w1,w2,,wn having lengths l1,l2,,ln and satisfying Kraft’s inequality4

$$\sum_{i=1}^{n} D^{-l_i} \le 1 \quad (3.3)$$

where D is the size of the code alphabet. The 'useful' mean length L(U) of the code was defined as

$$L(U) = \frac{\sum_{i=1}^{n} u_i p_i l_i}{\sum_{i=1}^{n} u_i p_i} \quad (3.4)$$

and the authors obtained bounds for it in terms of HU;P.

Now, corresponding to (3.2) and (3.4), we have the following fuzzy measures

$$H(A, U) = -\sum_{i=1}^{n} u_i \left[ \mu_A(x_i) \log \mu_A(x_i) + (1 - \mu_A(x_i)) \log(1 - \mu_A(x_i)) \right] \quad (3.5)$$

and

$$L(U) = \frac{\sum_{i=1}^{n} u_i [\mu_A(x_i) + (1 - \mu_A(x_i))] l_i}{\sum_{i=1}^{n} u_i [\mu_A(x_i) + (1 - \mu_A(x_i))]} \quad (3.6)$$

Longo,8 Gurdial & Pessoa,9 Khan & Autar,10 and Autar & Khan11 have studied generalized coding theorems by considering different generalized measures of (3.2) and (3.4) under the condition (3.3) of unique decipherability.

In this paper, we study some coding theorems by considering a new function depending on the parameters α and β and a utility function. The motivation for studying this new function is that it generalizes some entropy functions already existing in the literature. The function under study is closely related to Tsallis entropy, which is used in physics.

Coding theorems

Consider a function

$$H_{\alpha,\beta}(A; U) = \frac{\alpha}{1-\alpha} \left[ \left( \frac{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i} \right)^{1/\alpha} - 1 \right], \quad (4.1)$$

where $\alpha > 0\ (\alpha \ne 1)$, $\beta > 0$, $p_i \ge 0$, $i = 1, 2, \ldots, n$ and $\sum_{i=1}^{n} p_i \le 1$.

  1. When β = 1 and α → 1, (4.1) reduces to a measure of 'useful' information for the incomplete distribution due to Belis & Guiasu.5
  2. When $u_i = 1$ for each $i$, i.e., when the utility aspect is ignored, $\sum_{i=1}^{n} p_i = 1$, β = 1 and α → 1, the measure (4.1) reduces to Shannon's entropy.7
  3. When $u_i = 1$ for each $i$, the measure (4.1) becomes the entropy for the β-power distribution derived from P studied by Roy.12 We call $H_{\alpha,\beta}(A; U)$ in (4.1) the generalized 'useful' measure of information for the incomplete power distribution $P^{\beta}$.
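Under the reconstruction of (4.1) written above, the measure can be sketched numerically (Python; `h_alpha_beta` is an illustrative name, not from the paper). For a crisp set the ratio inside the bracket equals 1, so the measure vanishes, in line with the sharpness property.

```python
def h_alpha_beta(mu, u, alpha, beta):
    """Generalized 'useful' fuzzy measure (4.1), as reconstructed in the text."""
    # Numerator sum uses exponent alpha*beta, denominator sum uses beta.
    num = sum(ui * (m ** (alpha * beta) + (1.0 - m) ** (alpha * beta))
              for m, ui in zip(mu, u))
    den = sum(ui * (m ** beta + (1.0 - m) ** beta) for m, ui in zip(mu, u))
    return alpha / (1.0 - alpha) * ((num / den) ** (1.0 / alpha) - 1.0)
```

For example, with α = 2, β = 1 the measure is zero for a crisp set regardless of the utilities, and strictly positive for a maximally fuzzy element.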

 Further consider,

$$L_{\alpha,\beta}(U) = \frac{\alpha}{\alpha-1} \left[ 1 - \sum_{i=1}^{n} \left( \frac{\left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i} \right)^{1/\alpha} D^{-l_i(\alpha-1)/\alpha} \right], \quad (4.2)$$

where $\alpha > 0\ (\alpha \ne 1)$ and $\sum_{i=1}^{n} p_i \le 1$.

  1. For β = 1 and α → 1, $L_{\alpha,\beta}(U)$ in (4.2) reduces to the 'useful' mean length L(U) of the code given in (3.6).
  2. For β = 1, $u_i = 1$ for each $i$ and α → 1, $L_{\alpha,\beta}(U)$ becomes the optimal code length defined by Shannon.7,13,14
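Similarly, (4.2) as reconstructed above can be computed directly; the Python sketch below (illustrative names) shows that the measure grows as the codeword lengths grow, as one expects of a mean length.

```python
def l_alpha_beta(mu, u, lengths, alpha, beta, D=2):
    """Generalized 'useful' mean codeword length (4.2), as reconstructed in the text."""
    # Weights s_i = [mu^beta + (1 - mu)^beta] * u_i, normalized by their sum.
    s = [ui * (m ** beta + (1.0 - m) ** beta) for m, ui in zip(mu, u)]
    S = sum(s)
    total = sum((si / S) ** (1.0 / alpha) * D ** (-li * (alpha - 1.0) / alpha)
                for si, li in zip(s, lengths))
    return alpha / (alpha - 1.0) * (1.0 - total)
```

For a two-symbol source with α = 2, β = 1 and equal utilities, lengthening both codewords from 2 to 3 strictly increases the value, and both values are positive.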

Theorem 4.1. For all integers $D > 1$,

$$L_{\alpha,\beta}(U) \ge H_{\alpha,\beta}(A; U) \quad (4.3)$$

under the condition (3.3). Equality holds if and only if

$$l_i = -\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} \quad (4.4)$$

Proof. We use Hölder's inequality

$$\sum_{i=1}^{n} x_i y_i \ge \left( \sum_{i=1}^{n} x_i^p \right)^{1/p} \left( \sum_{i=1}^{n} y_i^q \right)^{1/q}, \quad (4.5)$$

for all $x_i, y_i > 0$, $i = 1, 2, \ldots, n$, when $p < 1$ ($p \ne 0$) and $\frac{1}{p} + \frac{1}{q} = 1$, with equality if and only if there exists a positive number $c$ such that

$$x_i^p = c\, y_i^q. \quad (4.6)$$

Setting

$$x_i = \frac{\left[ \mu_A^{\alpha\beta/(\alpha-1)}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta/(\alpha-1)} \right] u_i}{\left( \sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i \right)^{1/(\alpha-1)}} D^{-l_i},$$

$$y_i = \frac{\left[ \mu_A^{\alpha\beta/(1-\alpha)}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta/(1-\alpha)} \right] u_i}{\left( \sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i \right)^{1/(1-\alpha)}},$$

$p = 1 - 1/\alpha$ and $q = 1 - \alpha$ in (4.5), and using (3.3), we obtain the result (4.3) after simplification, since $\alpha/(\alpha-1) > 0$ for $\alpha > 1$.
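The equality condition (4.4) yields real-valued 'optimal' lengths. A short sketch (Python; `optimal_lengths` is an illustrative name, and the base-D logarithm follows the reconstruction above): by construction $D^{-l_i}$ equals the normalized weight of symbol $i$, so these lengths meet Kraft's inequality (3.3) with equality.

```python
import math

def optimal_lengths(mu, u, alpha, beta, D=2):
    """Real-valued code lengths from the equality condition (4.4), as reconstructed."""
    t = [ui * (m ** (alpha * beta) + (1.0 - m) ** (alpha * beta))
         for m, ui in zip(mu, u)]
    T = sum(t)
    # l_i = -log_D(t_i / T), so D**(-l_i) = t_i / T and the Kraft sum is exactly 1.
    return [-math.log(ti / T, D) for ti in t]
```

These lengths are generally non-integers; a practical code rounds them up, which is precisely the construction used in the proof of Theorem 4.2 below.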

Theorem 4.2. For every code with lengths $l_i$, $i = 1, 2, \ldots, n$, $L_{\alpha,\beta}(U)$ can be made to satisfy

$$L_{\alpha,\beta}(U) < H_{\alpha,\beta}(A; U)\, D^{(1-\alpha)/\alpha} + \frac{\alpha}{\alpha-1} \left( 1 - D^{(1-\alpha)/\alpha} \right). \quad (4.7)$$

Proof. Let $l_i$ be the positive integer satisfying the inequality

$$-\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} \le l_i < -\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} + 1. \quad (4.8)$$

Consider the intervals

$$\delta_i = \left[ -\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i},\ -\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} + 1 \right] \quad (4.9)$$

of length 1. In every $\delta_i$ there lies exactly one positive integer $l_i$ such that

$$0 < -\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} \le l_i < -\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} + 1. \quad (4.10)$$

It can be shown that the sequence $l_i$, $i = 1, 2, \ldots, n$, thus defined, satisfies (3.3). From (4.10) we have

$$l_i < -\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} + 1, \quad (4.11)$$

so that

$$D^{-l_i} > \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} \cdot D^{-1}$$

and, raising both sides to the power $(\alpha-1)/\alpha > 0$,

$$D^{-l_i(\alpha-1)/\alpha} > \left( \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i} \right)^{(\alpha-1)/\alpha} D^{-(\alpha-1)/\alpha}.$$

Multiplying both sides by $\dfrac{\left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i}{\left( \sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i \right)^{1/\alpha}}$, summing over $i = 1, 2, \ldots, n$, and simplifying gives (4.7).

Theorem 4.3. For every code with lengths $l_i$, $i = 1, 2, \ldots, n$, of Theorem 4.1, $L_{\alpha,\beta}(U)$ can be made to satisfy

$$H_{\alpha,\beta}(A; U) \le L_{\alpha,\beta}(U) < H_{\alpha,\beta}(A; U) + \frac{\alpha}{\alpha-1} \cdot \frac{1}{D}. \quad (4.12)$$

Proof. Set

$$\bar{l}_i = -\log_D \frac{\left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}. \quad (4.13)$$

Clearly $\bar{l}_i$ and $\bar{l}_i + 1$ satisfy 'equality' in Hölder's inequality (4.5). Moreover, $\bar{l}_i$ satisfies Kraft's inequality (3.3).

Suppose $l_i$ is the unique integer between $\bar{l}_i$ and $\bar{l}_i + 1$; then obviously $l_i$ satisfies (3.3). Since $\alpha > 0$ ($\alpha \ne 1$), we have

$$\sum_{i=1}^{n} \left( \frac{\left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i} \right)^{1/\alpha} D^{-\bar{l}_i(\alpha-1)/\alpha} \ge \sum_{i=1}^{n} \left( \frac{\left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i} \right)^{1/\alpha} D^{-l_i(\alpha-1)/\alpha}$$

$$> D^{-(\alpha-1)/\alpha} \sum_{i=1}^{n} \left( \frac{\left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i} \right)^{1/\alpha} D^{-\bar{l}_i(\alpha-1)/\alpha}. \quad (4.14)$$

Since

$$\sum_{i=1}^{n} \left( \frac{\left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i} \right)^{1/\alpha} D^{-\bar{l}_i(\alpha-1)/\alpha} = \frac{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\left( \sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i \right)^{1/\alpha}},$$

(4.14) becomes

$$\frac{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\left( \sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i \right)^{1/\alpha}} \ge \sum_{i=1}^{n} \left( \frac{\left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i}{\sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i} \right)^{1/\alpha} D^{-l_i(\alpha-1)/\alpha}$$

$$> D^{-(\alpha-1)/\alpha} \frac{\sum_{i=1}^{n} \left[ \mu_A^{\alpha\beta}(x_i) + (1 - \mu_A(x_i))^{\alpha\beta} \right] u_i}{\left( \sum_{i=1}^{n} \left[ \mu_A^{\beta}(x_i) + (1 - \mu_A(x_i))^{\beta} \right] u_i \right)^{1/\alpha}},$$

which gives the result (4.12).

Conclusion

Error-correcting codes constitute one of the key ingredients in achieving the high degree of reliability required in the transmission of information. Therefore, we find the minimum value of the 'useful' mean lengths subject to a given constraint on codeword lengths. The 'useful' mean lengths have been generalized parametrically and their bounds studied in terms of generalized measures of entropy.

We establish a result that, in a sense, provides a characterization of $H_{\alpha,\beta}(A; U)$ under the condition of unique decipherability. The main objective of information is to remove uncertainty and fuzziness. The information supplied is measured by the amount of probabilistic uncertainty removed in an experiment; the measure of uncertainty removed is also called a measure of information, while a measure of fuzziness quantifies the vagueness and ambiguity of uncertainties. The results, with proofs, obtained in Theorems 4.1–4.3 not only generalize the existing fuzzy mean lengths and their bounds, but also contain the known results as particular cases of the proposed length. Some new fuzzy coding theorems have also been proved.

Acknowledgments

None.

Conflicts of interest

None.

References

  1. Zadeh LA. Fuzzy sets. Information and Control. 1965;8(3):338‒353.
  2. De Luca A, Termini S. A definition of non-probabilistic entropy in the setting of fuzzy set theory. Information and Control. 1972;20(4):301‒312.
  3. Feinstein A. Foundations of Information Theory. McGraw-Hill, New York, USA; 1958.
  4. Kraft LG. A device for quantizing, grouping and coding amplitude modulated pulses. MS thesis, Massachusetts Institute of Technology, Cambridge, USA; 1949.
  5. Belis M, Guiasu S. A quantitative-qualitative measure of information in cybernetic systems. IEEE Trans Information Theory. 1968;14(4):593‒594.
  6. Guiasu S, Picard CF. Borne inférieure de la longueur utile de certains codes. C R Acad Sci Paris. 1971;273:248‒251.
  7. Shannon CE. A mathematical theory of communication. Bell System Technical Journal. 1948;27:379‒423.
  8. Longo G. A noiseless coding theorem for sources having utilities. SIAM J Appl Math. 1976;30(4):739‒748.
  9. Gurdial, Pessoa F. On useful information of order α. J Comb Information and Syst Sci. 1977;2:158‒162.
  10. Khan AB, Autar R. On useful information of order α and β. J Math. 1979;5:93‒99.
  11. Autar R, Khan AB. On generalized useful information for incomplete distribution. J of Comb Information and Syst Sci. 1989;14(4):187‒191.
  12. Roy LK. Comparison of Renyi entropies of power distribution. ZAMM. 1976;56:217‒218.
  13. Arndt C. Information Measures: Information and its Description in Science and Engineering. Springer, Berlin, Germany; 2001.
  14. Shisha O. Inequalities. Academic Press, New York, USA; 1967.