eISSN: 2574-8092

International Robotics & Automation Journal

Research Article Volume 6 Issue 2

Study on the model of quantification of syndromes and identification of syndrome types classification of hypertension based on facial color spectral decomposition technology

Kai Li,1 Xiaoyu Ma,2 Rong Ni,3 Jiali Liu,1 Bin Wu4

1Center for Chinese Medicine Big Data and Smart Health Research, Zhejiang Chinese Medical University, China
2Institute of Intelligent Media Technology, Communication University of Zhejiang, China
3Micro Medical Group (Zhejiang) Co., Ltd, China
4The Third Affiliated Hospital of Zhejiang Chinese Medical University, China

Correspondence: Kai Li, Center for Chinese Medicine Big Data and Smart Health Research, Zhejiang Chinese Medical University, China, Tel +86-15268500555, Fax 0086-571-86610109

Received: March 29, 2020 | Published: May 15, 2020

Citation: Li K, Ma X, Ni R, et al. Study on the model of quantification of syndromes and identification of syndrome types classification of hypertension based on facial color spectral decomposition technology. Int Rob Auto J. 2020;6(2):68–78. DOI: 10.15406/iratj.2020.06.00203


Abstract

Objective: This paper studies the classification, quantification, and intelligent identification of traditional Chinese medicine (TCM) syndromes of hypertension based on facial image information, using a combined medicine-engineering approach.

Methods: First, we explore combining TCM syndrome quantification with hypertension blood pressure levels and applying chromatographic (color spectral) decomposition technology to the extraction of TCM facial features. Second, building on this and combining it with machine learning algorithms, we construct a hierarchical and quantitative intelligent diagnosis model of TCM syndromes of hypertension. The model realizes intelligent classification and quantification of TCM syndrome types of hypertension based on accurate analysis of the subtle differences in the color features of each facial region across the different TCM syndrome types.

Results: In testing, the syndrome identification accuracy and classification accuracy reached 68%/74% for overabundant liver-fire syndrome, 75%/72% for Yin deficiency with Yang hyperactivity, 70.8%/72% for Yin and Yang deficiency syndrome, and 80%/- for healthy persons, showing a good identification effect. However, excessive accumulation of phlegm-dampness reached only 58.3%/66%, for which the model's identification effect is not obvious.

Conclusions: For the small-sample-size setting of intelligent diagnosis of TCM syndromes of hypertension, the diagnosis model based on color spectral decomposition (CSD) and random forest (RF) is more effective than the diagnosis model based on a deep neural network (DNN). The CSD algorithm for syndrome color extraction and analysis yields a new model for intelligent classification and quantification of TCM syndromes in hypertension. The approach measures syndromes and their evolution indirectly and non-invasively, which supports judging the degree of target organ damage and home rehabilitation for patients with hypertension, giving this study considerable research significance and application prospects.

Key words: Classification and quantification of traditional Chinese medicine syndromes of hypertension, color spectral decomposition technology, quantification of facial image syndromes

Introduction

The intelligent diagnosis of traditional Chinese medicine (TCM) syndromes of hypertension belongs to the field of medical intelligence at the intersection of TCM diagnosis and information science. TCM syndrome is a significant factor affecting whether people suffer from hypertension.1,2 The research method of combining diseases and syndromes, based on syndrome differentiation and treatment, is an inherent and unique part of TCM3 and has a significant effect in the clinical treatment and prevention of hypertension.4–6 Intelligent diagnosis of TCM syndromes of hypertension can indirectly judge the degree of target organ damage and support home rehabilitation of patients with hypertension by measuring the syndromes and the degree of syndrome evolution non-invasively at home, without biochemical examination in hospital, which makes this research significant and promising. At present, however, the TCM diagnosis of hypertension is based on symptoms (the objective characteristics of various parts of the human body in the disease state), and hypertension is classified into four syndromes (overabundant liver-fire syndrome; Yin deficiency with Yang hyperactivity; excessive accumulation of phlegm-dampness; Yin and Yang deficiency syndrome).7 The biggest drawback is that each syndrome has only a name and no severity level, so morbidity cannot be quantified in the way western medicine quantifies blood pressure values, and no timely warning can be given when blood pressure rises.8–13

In recent years, intelligent machine learning image processing algorithms have been widely applied in modern medical image recognition.14,15 Their use in the digital recognition of face, tongue, and pulse images in TCM is still in its infancy,16 where they are mainly used for constitution identification.17 Moreover, sign-image detection algorithms for single diseases are still lacking,18,19 and intelligent detection standards need to be improved. At present, the main method for facial chroma characterization is a three-dimensional index in a specific color space, including RGB, HSV,20 CIE,21 and so on. In many image processing tasks, redundant and sparse representations often yield better results than compact representations. For example, image quality assessment algorithms such as VIF,22 MS-SSIM,23 FR-DOG,24 and MAD25 tend to decompose the image into several sub-bands26,27 in the pre-processing stage. Considering that, a new chroma representation method, namely color spectral decomposition, is developed to transform the three-dimensional color index into a sparse vector, which benefits the extraction of distinguishing features of the facial colors of the various TCM syndromes.

This study aims to realize hierarchical and quantitative intelligent diagnosis of the four TCM syndrome types of hypertension through the analysis of objective syndrome images, and to achieve correspondence and mutual interpretation between the severity level of TCM syndromes of hypertension and the blood pressure values of western medicine. The innovation of this paper is to explore a hierarchical and quantitative intelligent diagnosis model of TCM syndromes of hypertension by establishing a chromatographic analysis algorithm better suited to color extraction and analysis for small-sample syndromes.

Methodology

Based on the previous introduction, this paper proposes the following hypothesis: different facial colors play an important role in the prediction and classification of TCM syndrome types of hypertension.

To verify this hypothesis, this work first determines the facial regions supposed to be most relevant to TCM syndromes via the Analytic Hierarchy Process (AHP) fuzzy comprehensive evaluation method. Intelligent diagnosis models are then designed to extract the distinguishing color features and predict the TCM syndromes from the selected facial regions. Two types of diagnosis model are proposed, and the overall pipeline is shown in Figure 1.

Figure 1 Overall pipeline of TCM diagnosis model.

Intuitively, deep learning techniques might seem an optimal solution for extracting distinguishing color information, considering their powerful capacity for abstraction and representation. However, several factors could restrict the performance of a DNN-based diagnosis model:

  1. Color differences between the various TCM syndromes are subtle in the Red, Green, Blue (RGB) color space. Whether a DNN-based diagnosis model can capture such subtle differences requires further validation.
  2. Due to the difficulty of collecting facial information from patients with hypertension, the training samples are limited, which aggravates the difficulty of extracting the subtle differences in RGB color space.

Considering that, explicit hand-crafted feature extraction might be another feasible way. Specifically, we also develop a color spectral decomposition (CSD) algorithm to capture the subtle distinguishing color features and employ traditional regression tools (e.g., RF) to aggregate the features and predict the TCM syndromes. The AHP evaluation method and the two diagnosis models are described in detail as follows:

Fuzzy evaluation of TCM diagnostic knowledge based on AHP algorithm

Traditional Chinese medicine holds that color changes in different parts of the human body indicate different diseases, and that the face, nose, cheeks, lips, and eyes correspond to different organs for diagnosing and predicting diseases.28 Lingshu·Five Colors points out that the pathological position and severity of a disease can be judged by observing the depth of the complexion: the position of the affected viscera and limbs can be understood from the position of the sickly complexion, and if the sickly complexion is light, the disease is mild, while if it is dark, the disease is serious. Hypertension manifests in different facial parts. Jing Sun et al. (2014)29 proposed that patients with hypertension could be treated according to the characteristics of the forehead, nose, ear, cheek, tongue, eye and hand. Different syndromes of hypertension show different color features in facial regions.30 The identification of TCM syndrome types of hypertension mainly depends on the diagnostic experience of clinical experts. Lacking normative quantitative and objective criteria, it is difficult to achieve intelligent detection and monitoring of the evolution of hypertension based on the combination of diseases and syndromes. Based on previous studies, this paper uses the AHP fuzzy comprehensive evaluation method to determine the regions with significant facial color features of hypertension. The algorithm adopted in this paper is as follows:

The evaluation index system of the facial partition of patients with hypertension is composed of first- and second-level index layers, and the first-level index set is $U = \{U_1, U_2, U_3\}$.

Let the first-level index $U_i$ ($i=1,2,3$) have $M_i$ second-level indexes, denoted as $U_i = \{U_{i1}, U_{i2}, \ldots, U_{ij}\}$ ($i=1,2,3$; $j=1,2,\ldots,m$), where $U_{ij}$ is the $j$-th second-level index of $U_i$.

This paper adopts the AHP method and sets the weight of $U_i$ as $a_i$ ($i=1,2,3$); then the first-level weight set is $A = \{a_1, a_2, a_3\}$, with $0 \le a_i \le 1$ and $\sum_{i=1}^{3} a_i = 1$.

Let the weight of the second-level index $U_{ij}$ be $a_{ij}$ ($i=1,2,3$; $j=1,2,\ldots,m$); then the second-level weight set is $A_i = \{a_{i1}, a_{i2}, \ldots, a_{im_i}\}$, with $0 \le a_{ij} \le 1$ and $\sum_{j=1}^{m_i} a_{ij} = 1$, $i=1,2,3$; $j=1,2,\ldots,m_i$.

Evaluation grades are the basis for the evaluation and measurement of the facial partition. The evaluation set is divided into 5 grades, expressed as $V = \{\text{very fit, relatively fit, fit, barely fit, not fit}\}$ in the indicator system. That is, the weight ratio of the secondary indicators is calculated according to the choices made by the team of clinical hypertension experts.

According to the judgment of the experts, each factor $U_{ij}$ of $U_i$ has a degree of membership for the five review grades, $(r_{ij1}, r_{ij2}, \ldots, r_{ij5})$, and the evaluation results of the $M_i$ factors can be expressed as a fuzzy matrix $R_i$ of order $M_i \times 5$:

$$R_i = \begin{bmatrix} r_{i11} & r_{i12} & \cdots & r_{i15} \\ r_{i21} & r_{i22} & \cdots & r_{i25} \\ \vdots & \vdots & \ddots & \vdots \\ r_{im_i1} & r_{im_i2} & \cdots & r_{im_i5} \end{bmatrix} \qquad (i=1,2,3)$$

$R_i$ is the single-factor evaluation matrix of the first-level fuzzy comprehensive evaluation of $U_i$, where $r_{ijn}$ is the degree of membership of $U_{ij}$ rated as grade $n$ ($n=1,2,\ldots,5$). According to the determined weight set $A_i$, the first-level fuzzy comprehensive evaluation of $U_i$ is:

$$C_i = A_i \circ R_i = (a_{i1}, a_{i2}, \ldots, a_{im_i}) \begin{bmatrix} r_{i11} & r_{i12} & \cdots & r_{i15} \\ r_{i21} & r_{i22} & \cdots & r_{i25} \\ \vdots & \vdots & \ddots & \vdots \\ r_{im_i1} & r_{im_i2} & \cdots & r_{im_i5} \end{bmatrix} = (c_{i1}, c_{i2}, \ldots, c_{i5})$$

where $\circ$ is the fuzzy composition operator.

The single-factor evaluation matrix $R$ of the comprehensive fuzzy evaluation is composed of the first-level fuzzy judgment vectors $C_i$ ($i=1,2,3$). The comprehensive evaluation model is:

$$R = \begin{bmatrix} C_1 \\ C_2 \\ C_3 \end{bmatrix} = \begin{bmatrix} A_1 \circ R_1 \\ A_2 \circ R_2 \\ A_3 \circ R_3 \end{bmatrix} = (c_{im})_{3 \times 5}$$

Therefore, the second-level fuzzy comprehensive evaluation set is:

$$C = A \circ R = (a_1, a_2, a_3) \begin{bmatrix} A_1 \circ R_1 \\ A_2 \circ R_2 \\ A_3 \circ R_3 \end{bmatrix} = (c_1, c_2, \ldots, c_5)$$

Finally, according to the index with the largest corresponding weight in the evaluation results, the most suitable facial regions for feature extraction were determined, namely the cheeks, forehead and nose.
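To make the composition above concrete, the following sketch illustrates the two-level fuzzy comprehensive evaluation with the weighted-average composition operator; the weights and membership values are illustrative placeholders rather than the expert-derived numbers used in this study, and the choice of operator is an assumption since the paper does not fix one.

```python
import numpy as np

# Hypothetical first-level weights A = {a1, a2, a3} (sum to 1).
A = np.array([0.5, 0.3, 0.2])

# Hypothetical second-level weights A_i and expert membership matrices R_i;
# each row of R_i gives the membership of one second-level index over the
# five grades (very fit ... not fit).
A_i = [np.array([0.6, 0.4]),
       np.array([0.3, 0.3, 0.4]),
       np.array([1.0])]
R_i = [np.array([[0.4, 0.3, 0.2, 0.1, 0.0],
                 [0.2, 0.4, 0.2, 0.1, 0.1]]),
       np.array([[0.1, 0.2, 0.4, 0.2, 0.1],
                 [0.3, 0.3, 0.2, 0.1, 0.1],
                 [0.2, 0.2, 0.3, 0.2, 0.1]]),
       np.array([[0.5, 0.2, 0.2, 0.1, 0.0]])]

# First-level evaluation C_i = A_i ∘ R_i (weighted-average composition).
C = np.vstack([a @ r for a, r in zip(A_i, R_i)])   # shape (3, 5)

# Second-level evaluation C = A ∘ R.
result = A @ C                                      # shape (5,)
print("grade memberships:", result)
print("selected grade index:", int(np.argmax(result)))
```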

TCM diagnosis model based on CSD+RF

An elaborately designed feature extraction method, namely color spectral decomposition (CSD), is proposed to explicitly capture the subtle differences in color information between the various TCM syndrome types. The overall procedure is shown in the upper part of Figure 1, and the details are illustrated as follows.31

Facial region extraction: According to the AHP results in Section II, the color information in the cheeks, forehead and nose is most relevant to the TCM syndromes. An Ensemble of Regression Trees32 (implemented with Dlib33) is employed to automatically localize and extract the 4 facial regions, namely A, B, C, and D (the two cheeks, forehead and nose). Considering that the tested facial images are captured by standardized equipment and need no calibration, the Ensemble of Regression Trees is preferable due to its light weight and robustness.

Specifically, the 68 landmarks are localized by Dlib33 (shown in Figure 2a), and based on several of them (the 1st, 3rd, 13th, 15th, 21st, 22nd, 27th, 28th, and 29th landmark points) we can extract color information from the four specific facial regions, as illustrated in Figure 2b.

Figure 2 Face detection and specific facial region extraction.
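As an illustration of the landmark-based extraction described above, the following sketch uses Dlib's pre-trained 68-landmark predictor; the grouping of landmarks into region centers, the patch size, and the model file path are illustrative assumptions rather than the exact definitions used in this study.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard pre-trained Dlib 68-landmark model; the file path is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# The paper's landmark points (1st, 3rd, 13th, 15th, 21st, 22nd, 27th, 28th,
# 29th) converted to 0-based Dlib indices; the grouping into the four region
# centers below is an illustrative assumption.
REGION_LANDMARKS = {
    "A_left_cheek":  [0, 2, 28],
    "B_right_cheek": [12, 14, 28],
    "C_forehead":    [20, 21],
    "D_nose":        [26, 27, 28],
}

def extract_regions(image_bgr, half=24):
    """Detect the face, localize the 68 landmarks, and crop one square patch
    around the mean position of each region's landmarks."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    face = detector(gray, 1)[0]
    pts = np.array([[p.x, p.y] for p in predictor(gray, face).parts()])
    regions = {}
    for name, idx in REGION_LANDMARKS.items():
        cx, cy = pts[idx].mean(axis=0).astype(int)
        regions[name] = image_bgr[cy - half:cy + half, cx - half:cx + half]
    return regions
```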

Color spectral decomposition: After extraction of the facial regions, the proposed color spectral decomposition (CSD) is used to extract the subtle distinguishing color features in the 4 facial regions. Given a pixel with color index $[r,g,b]$ in RGB color space, the CSD method proposed in this paper aims to construct a sparse representation of $[r,g,b]$ that transforms the three-dimensional color index into a spectral vector of dimension $N$, where $N \gg 2$, i.e., $s = T_{CSD}([r,g,b])$.

The color index $[r,g,b]$ is first converted into HSV color space as $[h,s,v]$ ($h$, $s$, $v$ denote the hue, saturation, and luminance respectively;19 the values of $s$, $v$ are in the range $[0,1]$ and the value of $h$ is in the range $[0, 2\pi]$), as demonstrated in Figure 3.

Figure 3 An illustration of HSV color space.

N solid colors with uniformly spaced hue differences are selected as anchor points, denoted as $(p^0, p^{\Delta}, p^{2\Delta}, \ldots, p^{2\pi-\Delta})$, where $\Delta = \frac{2\pi}{N}$. The intensity of $p$ projected onto each of $(p^0, p^{\Delta}, p^{2\Delta}, \ldots, p^{2\pi-\Delta})$ is calculated as described in Figure 4.

Figure 4 An illustration of intensity calculation.

As shown in Figure 4, the red circle denotes all the solid colors in HSV color space, i.e., $h \in [0, 2\pi]$, $s = 1$, $v = 1$. The N solid colors $(p^0, p^{\Delta}, p^{2\Delta}, \ldots, p^{2\pi-\Delta})$ are represented by blue circles. The intensity of $p$ projected to a solid color $p^{k\Delta}$ is defined as a function of the distance between $p$ and $p^{k\Delta}$.

Supposing the color index of $p$ is $[h_p, s_p, 1]$ and that of $p^{k\Delta}$ is $[h_{k\Delta}, 1, 1]$, their distance can be derived as $d_p^k = \sqrt{1 - 2 s_p \cos\theta + s_p^2}$, where $\theta$ is the hue difference between $p$ and $p^{k\Delta}$. The analysis above is restricted to colors with full luminance, i.e., $v_p = 1$.

For an arbitrary $p$ with color index $[h_p, s_p, v_p]$, the distance can be derived as $d_p^k = \sqrt{1 - 2 s_p \cos\theta + s_p^2 + (1 - v_p)^2}$.

A function $f(x) = e^{-\frac{x^2}{\lambda}}$ is employed to map the distance between $p$ and $p^{k\Delta}$ to the intensity of $p$ observed at $p^{k\Delta}$. Therefore, the intensity of $p$ projected to $p^{k\Delta}$, denoted as $s_k$, can be derived as

$$s_k = \exp\left( -\frac{1 - 2 s_p \cos\theta + s_p^2 + (1 - v_p)^2}{\lambda} \right) \qquad (1)$$

$\lambda$ is employed to control the attenuation of intensity. For instance, if the saturation and luminance of $p$ are near 1, its intensity should concentrate on its nearest solid colors; on the contrary, if its saturation and luminance are significantly smaller than 1, its intensity should spread over a larger range of solid colors. In view of that, we set $\lambda = \lambda_0 + \frac{(1 - s_p)^2 + (1 - v_p)^2}{5}$, where $\lambda_0$ is set to 0.1. An N-dimensional spectral vector $s = [s_1, \ldots, s_N]$ is obtained by iterating over all anchor points.
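The following sketch decomposes a single RGB pixel according to Eq. (1), using Python's colorsys for the RGB-to-HSV conversion (its hue output in [0, 1) is rescaled to [0, 2π]); the function name and the example color are illustrative.

```python
import colorsys
import numpy as np

def csd_spectrum(r, g, b, N=100, lambda0=0.1):
    """Color spectral decomposition of one RGB pixel (channel values in
    [0, 255]) into an N-dimensional intensity vector over uniformly spaced
    hue anchors, following Eq. (1)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_p = 2.0 * np.pi * h                       # hue in [0, 2π]
    anchors = np.arange(N) * (2.0 * np.pi / N)  # hues of the N solid colors

    theta = h_p - anchors                       # hue difference to each anchor
    d2 = 1.0 - 2.0 * s * np.cos(theta) + s ** 2 + (1.0 - v) ** 2
    lam = lambda0 + ((1.0 - s) ** 2 + (1.0 - v) ** 2) / 5.0
    return np.exp(-d2 / lam)                    # spectral vector [s_1 ... s_N]

# Example: a skin-like color concentrates its intensity on nearby hue anchors.
spectrum = csd_spectrum(210, 160, 140)
print(spectrum.shape, spectrum.argmax())
```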

Supposing that four image patches $x_A$, $x_B$, $x_C$, $x_D$ represent the 4 facial regions of a facial image, each region is transformed pixel-wise via color spectral decomposition. For instance, all pixels $p_i^j \in x_i$ ($i \in \{A, B, C, D\}$, $j$ denotes the spatial index) with color index $[r_i^j, g_i^j, b_i^j]$ are converted into HSV color space, and their corresponding spectral vectors $s_i^j$ are calculated according to Eq. (1). It should be noted that the hue of facial regions always concentrates in a limited range, approximately from $0.03\pi$ to $0.1\pi$, based on the observation of approximately 250 subjects.

Therefore, the hue of each pixel $p$, denoted as $h_p$, is mapped to the range $[0,1]$ by $\mathrm{map}(h_p) = \frac{h_p - 0.03\pi}{0.07\pi}$. A demonstration of the spectral vectors extracted from the four facial regions is shown in Figure 5, in which the horizontal axis denotes the hue of the solid colors used as anchors and the vertical axis denotes the intensity. N is set to 100 to show the detailed features of the spectral vectors, though it can also take other values. The spectral vectors of the same region are plotted in a single sub-figure.

Figure 5 An illustration of spectral vectors in different regions.

The central spectral vector of region $i$, denoted as $cs^i$, $i \in \{A, B, C, D\}$, can be calculated by simply averaging all spectral vectors in the given region after outlier exclusion, as illustrated in Eq. (2):

$$cs^i = \frac{\sum_{j \in i'} s_i^j}{\#\{i'\}} \qquad (2)$$

$i'$ denotes the region after outlier exclusion and $\#\{i'\}$ denotes the number of pixels in region $i'$. The outlier exclusion procedure eliminates singular spectral vectors that differ markedly from the mean vector; 20% of the pixels (spectral vectors) are excluded. Finally, the kurtosis, skewness, mean and standard deviation of $cs^i$ are extracted as statistical features.
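A sketch of Eq. (2) with outlier exclusion is given below; since the paper does not specify the exclusion criterion, the 20% of spectral vectors farthest (in Euclidean distance) from the region mean are dropped here as an assumption.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def region_features(spectra, drop_ratio=0.2):
    """spectra: (num_pixels, N) array of per-pixel spectral vectors for one
    facial region. Drops the outlier vectors, averages the rest to obtain the
    central spectral vector cs^i of Eq. (2), and returns its four statistics
    (kurtosis, skewness, mean, standard deviation)."""
    mean_vec = spectra.mean(axis=0)
    dist = np.linalg.norm(spectra - mean_vec, axis=1)
    keep = dist.argsort()[: int(len(spectra) * (1.0 - drop_ratio))]
    cs = spectra[keep].mean(axis=0)
    return np.array([kurtosis(cs), skew(cs), cs.mean(), cs.std()])
```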

TCM syndrome classification and quantification based on machine learning: The kurtosis, skewness, mean and standard deviation of the four central spectral vectors ($cs^A$, $cs^B$, $cs^C$, $cs^D$) derived from the four facial regions are concatenated into a feature vector $\vec{sf} = [\mathrm{kurtosis}(cs^A), \mathrm{skewness}(cs^A), \mathrm{mean}(cs^A), \mathrm{std}(cs^A), \mathrm{kurtosis}(cs^B), \ldots]$. These serve as input features and are fed into machine learning tools in order to predict the level of hypertension and the type of syndrome. The overall flowchart is depicted in Figure 6. The random forest algorithm [1] serves as the machine learning tool.

Figure 6 The overall flowchart of the model of quantification of syndromes of hypertension.

The prediction comprises two stages. In the first stage, given the 16-D input features $\vec{sf}$, random forest A classifies the given facial image into one of the four TCM syndrome types or as healthy. In the second stage, a centroid is obtained by averaging all input feature vectors of the same syndrome type and the same level in the training set. The Euclidean distance between the input features of the patient to be predicted and the corresponding centroids of the three levels of the predicted syndrome type is calculated, and the closest level is taken as the level of that patient. It should be noted that if the first-stage prediction is healthy, the second stage is skipped.
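The two-stage prediction can be sketched with scikit-learn's random forest as below; the label encoding (syndrome types 0-3, healthy as 4, levels 0-2), the hyperparameters, and all variable names are illustrative placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_two_stage(X_train, y_type, y_level):
    """X_train: (n, 16) feature vectors sf; y_type: syndrome type (0-3) or 4
    for healthy; y_level: severity level (0-2) of non-healthy samples."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_type)                       # stage-1 classifier
    centroids = {}                                 # (type, level) -> centroid
    for t in np.unique(y_type[y_type != 4]):
        for lv in np.unique(y_level[y_type == t]):
            mask = (y_type == t) & (y_level == lv)
            centroids[(t, lv)] = X_train[mask].mean(axis=0)
    return clf, centroids

def predict_two_stage(clf, centroids, x):
    t = int(clf.predict(x[None, :])[0])            # stage 1: syndrome type
    if t == 4:                                     # healthy: stop early
        return t, None
    levels = [(lv, np.linalg.norm(x - c))          # stage 2: nearest centroid
              for (tt, lv), c in centroids.items() if tt == t]
    return t, min(levels, key=lambda p: p[1])[0]
```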

TCM diagnosis model based on deep neural networks (DNN)

As shown in the lower part of Figure 1, a TCM diagnosis model based on Deep Neural Networks (DNN) is designed as a comparison. Considering the limited training samples, the DNN framework is trained in two stages. Firstly, an Hour-Glass convolutional neural network (CNN) is trained in a self-supervised way to learn a low-dimensional representation in the feature domain. Secondly, the CNN-extracted features are used to classify the TCM syndrome types via two fully connected layers. The detailed pipeline of the DNN-based diagnosis model is shown in Figure 7.

Figure 7 The flowchart of TCM diagnosis model based on DNN.

The input of the DNN is the channel-wise concatenation of regions A, B, C and D. Each region is resized to 128×128×3 in RGB format, so the input size is 128×128×12. In training stage I, the input X is fed into a self-supervised CNN that contains an encoder $P(\cdot)$ and a decoder $Q(\cdot)$, similar to Ronneberger O, et al.32 The encoder comprises 4 down-blocks, each of which contains two convolutional layers with kernel size 3×3 activated by Leaky ReLU.34 A batch normalization layer and a 2×2 max-pooling layer are also included in each down-block. After the 4th down-block, the feature map passes through an inception layer whose convolutional kernel size is 1×1. The decoder $Q(\cdot)$ is the inversion of the encoder and comprises 4 up-blocks. The structure of an up-block is similar to that of a down-block; the difference is that the 1st layer of an up-block is an up-sampling layer acting as the inversion of max-pooling.
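A minimal PyTorch sketch of such an encoder–decoder, assuming illustrative channel widths (12→32→64→128→256) that are not specified in the paper:

```python
import torch
import torch.nn as nn

def down_block(c_in, c_out):
    # two 3x3 convolutions with Leaky ReLU, batch normalization, then 2x2 max-pooling
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.LeakyReLU(0.1),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.LeakyReLU(0.1),
        nn.BatchNorm2d(c_out), nn.MaxPool2d(2))

def up_block(c_in, c_out):
    # up-sampling (inverse of max-pooling) followed by two 3x3 convolutions
    return nn.Sequential(
        nn.Upsample(scale_factor=2),
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.LeakyReLU(0.1),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.LeakyReLU(0.1),
        nn.BatchNorm2d(c_out))

class AutoEncoder(nn.Module):
    """Self-supervised CNN of training stage I: encoder P(.), 1x1 'inception' layer, decoder Q(.)."""
    def __init__(self):
        super().__init__()
        chans = [12, 32, 64, 128, 256]
        self.encoder = nn.Sequential(*[down_block(chans[i], chans[i + 1]) for i in range(4)])
        self.inception = nn.Conv2d(256, 256, kernel_size=1)        # 1x1 conv after the 4th down-block
        self.decoder = nn.Sequential(*[up_block(chans[4 - i], chans[3 - i]) for i in range(4)])

    def forward(self, x):                       # x: (N, 12, 128, 128), regions A-D stacked channel-wise
        feat = self.inception(self.encoder(x))  # (N, 256, 8, 8)
        return self.decoder(feat), feat         # reconstruction for stage I, feature map for stage II
```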

In training stage II, the pre-trained encoder is fine-tuned with the labels indicating the syndrome types. The feature map after the inception layer is max-pooled and min-pooled channel-wise to obtain a feature vector of size 512×1, which is then fed into $g(\cdot)$ to predict the TCM syndromes. The training of the second stage is end-to-end, meaning that the encoder parameters remain trainable in stage II. Since the training samples are limited, data augmentation is applied by rotating the facial regions by 90, 180 and 270 degrees.
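A sketch of the stage-II classification head and rotation-based augmentation, continuing from the auto-encoder sketch above (the 128-unit hidden layer of g(.) and pooling over the spatial dimensions are assumptions):

```python
import torch
import torch.nn as nn

class SyndromeClassifier(nn.Module):
    """Training stage II: reuse the pre-trained encoder, pool the feature map into a
    512-D vector (channel-wise max- and min-pooling), then classify with g(.) = two FC layers."""
    def __init__(self, autoencoder, n_classes=5):
        super().__init__()
        self.encoder, self.inception = autoencoder.encoder, autoencoder.inception
        self.g = nn.Sequential(nn.Linear(512, 128), nn.LeakyReLU(0.1), nn.Linear(128, n_classes))

    def forward(self, x):
        f = self.inception(self.encoder(x))                                   # (N, 256, 8, 8)
        pooled = torch.cat([f.amax(dim=(2, 3)), f.amin(dim=(2, 3))], dim=1)   # (N, 512)
        return self.g(pooled)

def augment(batch):
    # data augmentation: append the 90/180/270-degree rotations of each facial-region stack
    return torch.cat([batch] + [torch.rot90(batch, k, dims=(2, 3)) for k in (1, 2, 3)])
```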

Experimental results

Training protocol

In this paper, a total of 250 samples are tested: 50 patients with overabundant liver-fire syndrome (OLF), 50 with Yin deficiency with Yang hyperactivity (YDYH), 50 with excessive accumulation of phlegm-dampness (EAPD), 50 with Yin and Yang deficiency syndrome (YYD), and 50 healthy persons. The TCM syndrome and blood pressure level distribution of the subjects is shown in Table 1.

TCM syndrome types | Number of samples | Samples per blood pressure level (I / II / III degree)
Overabundant liver-fire syndrome | 50 | 24 / 18 / 8
Yin deficiency with Yang hyperactivity | 50 | 21 / 16 / 13
Excessive accumulation of phlegm-dampness | 50 | 17 / 19 / 14
Yin and Yang deficiency syndrome | 50 | 12 / 21 / 17
Healthy persons (no syndrome) | 50 | - / - / -

Table 1 Distribution of TCM Syndromes and Blood Pressure Levels

For the training of both the CSD+RF and DNN diagnosis models, about 50% of the samples of each syndrome type and level are randomly selected as the training set, and the remaining data serve as the test set; when a group contains an odd number of samples, the training count is rounded up. Thus, the numbers of training samples for the five labels OLF, YDYH, EAPD, YYD and healthy persons are 25, 26, 26, 26 and 25, respectively. In total, 128 samples are used for training and 122 samples for testing.
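A minimal sketch of this split rule (names such as split_half are hypothetical), grouping samples by syndrome type and blood pressure level and rounding the training share up when a group has an odd size:

```python
import math
import random
from collections import defaultdict

def split_half(samples, seed=0):
    """samples: list of (features, syndrome, level) tuples.
    Roughly 50% of every (syndrome, level) group goes to the training set,
    with odd-sized groups rounded up, and the rest to the test set."""
    random.seed(seed)
    groups = defaultdict(list)
    for s in samples:
        groups[(s[1], s[2])].append(s)
    train, test = [], []
    for group in groups.values():
        random.shuffle(group)
        k = math.ceil(len(group) / 2)
        train += group[:k]
        test += group[k:]
    return train, test
```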

Experimental results of diagnosis model based on CSD+RF

The overall flowchart of the proposed CSD+RF diagnosis model is shown in Figure 6, and its detailed prediction performance on the test set is shown in the following confusion matrix (Figure 8), in which the vertical axis represents the ground-truth label and the horizontal axis represents the prediction. OLF, YDYH, EAPD and YYD represent overabundant liver-fire syndrome, Yin deficiency with Yang hyperactivity, excessive accumulation of phlegm-dampness and Yin and Yang deficiency syndrome, respectively.

Figure 8 Confusion matrix of stage I prediction.

As can be seen from the above confusion matrix, the overall accuracy of the stage I classifier on the test set is 70.5%. The prediction accuracy for OLF, YDYH, EAPD, YYD and healthy persons is 68.0%, 75.0%, 58.3%, 70.8% and 80.0%, respectively. The prediction accuracy for healthy persons is the highest, and that for excessive accumulation of phlegm-dampness is the lowest.

After the TCM syndrome type of a patient is classified, the CSD+RF diagnosis model continues to predict the level of the syndrome in the second stage. In the second stage, it is assumed that the prediction results of the first stage are accurate, that is, the input syndrome types are all ground-truth values. The level prediction is then obtained from the distances between the input features of the patient to be predicted and the centroid of each level of the corresponding syndrome type. The prediction accuracy for each syndrome type is shown in Figure 9, in which the vertical axis represents the ground-truth label and the horizontal axis represents the prediction.

Figure 9 Confusion matrix of stage II prediction.

Experimental results of the diagnosis model based on DNN

As a comparison, the overall flowchart of the diagnosis model based on DNN is shown in Figure 7. The training/testing split protocol is the same as for the CSD+RF framework. With data augmentation, there are 512 training samples in total for training stage II. In training stages I and II, the minibatch size is 8 and the optimizer is Adam35 with a learning rate of 0.0005. After training, the stage I training loss is shown in Figure 10 and the results on the test set are shown in Figure 11.
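A sketch of the stage-I optimization loop under the stated settings (minibatch size 8, Adam with learning rate 0.0005); the mean-squared reconstruction loss, epoch count and placeholder batches are assumptions, since the paper does not specify them:

```python
import torch
import torch.nn as nn

ae = AutoEncoder()                                    # from the earlier sketch
optimizer = torch.optim.Adam(ae.parameters(), lr=5e-4)
recon_loss = nn.MSELoss()                             # assumed self-supervised reconstruction objective

train_loader = [torch.rand(8, 12, 128, 128) for _ in range(16)]   # placeholder minibatches of size 8

for epoch in range(50):                               # epoch count is illustrative
    for x in train_loader:
        optimizer.zero_grad()
        x_hat, _ = ae(x)
        loss = recon_loss(x_hat, x)
        loss.backward()
        optimizer.step()
```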

Figure 10 Training loss of the diagnosis model based on DNN in stage I.

Figure 11 Confusion matrix of the diagnosis model based on DNN.

Performance comparison

The performance comparison between the CSD+RF diagnosis model and the DNN diagnosis model, in terms of prediction accuracy on the test set, is shown in Figure 12. In 4 of the 5 classes (OLF, YDYH, YYD, and healthy), the CSD+RF diagnosis model outperforms the DNN diagnosis model by an obvious margin.

Figure 12 Performance comparison between the diagnosis model based on CSD+RF and the diagnosis model based on DNN in terms of accuracy.

Figure 12 indicates that the DNN-based TCM syndrome diagnosis model cannot effectively predict the TCM syndromes in the test set, demonstrating the superiority of the diagnosis model based on CSD+RF.

We believe the limited number of training samples is the main reason for the unsatisfactory accuracy of the DNN-based diagnosis model. Although several tricks (e.g., a pre-training stage and data augmentation) are employed to relieve overfitting, 128 original training samples are still insufficient for the network to learn representative color features relevant to the TCM syndromes. In contrast, the carefully designed CSD+RF diagnosis model can capture distinguishing features of the different TCM syndromes. Such feature extraction is explicit; therefore, it achieves better performance than the DNN-based model when training samples are limited. The effectiveness of the CSD+RF diagnosis model with limited training samples is further examined in the ablation experiments.

Ablation experiment

In order to further validate the effectiveness of the proposed TCM syndrome and hypertension level prediction model, several ablation experiments are conducted. Firstly, the facial region extraction results of several samples are listed to demonstrate that the Ensemble of Regression Trees implemented in Dlib is acceptable for facial region extraction in this work, as shown in Figure 13. The sample information for this part is shown in Table 2.

Figure 13 Several examples of facial specific region extraction.

Serial number | Gender | Age | Blood pressure (SBP/DBP) | Hypertension level | TCM syndrome type
1 | Male | 55 | 148/87 mmHg | I degree | Overabundant liver-fire syndrome
2 | Male | 61 | 148/87 mmHg | I degree | Overabundant liver-fire syndrome
3 | Male | 73 | 151/80 mmHg | I degree | Excessive accumulation of phlegm-dampness
4 | Female | 70 | 153/83 mmHg | I degree | Excessive accumulation of phlegm-dampness
5 | Male | 73 | 142/86 mmHg | I degree | Yin deficiency with Yang hyperactivity
6 | Female | 73 | 145/88 mmHg | I degree | Overabundant liver-fire syndrome
7 | Female | 58 | 149/90 mmHg | I degree | Yin and Yang deficiency syndrome
8 | Female | 50 | 144/89 mmHg | I degree | Yin and Yang deficiency syndrome

Table 2 Sample Data Information

Secondly, some comparisons are illustrated in Figure 14 to validate the effectiveness of the proposed CSD method. The facial regions of 50 samples (10 healthy people and 10 patients with each of the four TCM syndromes) are collected. The central spectral vector of region D is then extracted, and the kurtosis and skewness of the central spectral vectors are calculated (all regions are effective; region D is selected at random to validate the color spectral decomposition method). The distribution of the 50 samples is shown in Figure 14. The horizontal and vertical axes of the upper sub-figure denote the kurtosis and skewness of the central spectral vector of facial region D for each subject. The lower sub-figure shows several central spectral vectors. Figure 14 demonstrates that the proposed color spectral decomposition can effectively extract distinguishing facial chroma features for different types of people.

Figure 14 Feature comparison between healthy people and four TCM syndromes.

Lastly, in order to further validate the effectiveness of the color spectral decomposition, this paper stacks all spectral vectors from region C (region C is also selected at random) and constructs a matrix of size M×100, where 100 is the dimension of the spectral vector and M is the total number of spectral vectors in region C after outlier elimination. The covariance matrix computed for each TCM syndrome is shown in Figure 15, which further validates that the spectral vectors can extract distinguishing features from different TCM syndromes.
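A minimal sketch of this check (NumPy assumed; names hypothetical): stack the outlier-filtered spectral vectors of region C from all subjects of one syndrome into an M×100 matrix and compute its covariance.

```python
import numpy as np

def region_covariance(spectra_per_subject, drop_ratio=0.2):
    """spectra_per_subject: list of (n_pixels, 100) arrays, one per subject of a TCM syndrome.
    Returns the 100x100 covariance matrix of the stacked, outlier-filtered spectral vectors."""
    kept = []
    for spectra in spectra_per_subject:
        mean_vec = spectra.mean(axis=0)
        dist = np.linalg.norm(spectra - mean_vec, axis=1)
        kept.append(spectra[dist.argsort()[: int(len(spectra) * (1 - drop_ratio))]])
    stacked = np.vstack(kept)              # M x 100
    return np.cov(stacked, rowvar=False)   # covariance between the 100 spectral dimensions
```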

Figure 15 Distinguishing covariance matrix between healthy people and Yin and Yang deficiency syndrome.

Conclusion and future work

Conclusion

This study designs a simple and practical color spectral analysis algorithm for feature extraction and analysis of the small-sample, accurately labeled syndrome data collected with TCM four-diagnosis medical equipment. On this basis, the hierarchic and quantitative intelligent diagnosis model of TCM syndromes of hypertension is constructed, and its validity is verified by comparison with a TCM diagnosis model based on deep neural networks. Moreover, this paper draws the following research conclusions.

Conclusion 1: this paper establishes a medical interpretation linking the TCM syndromes of hypertension with the blood pressure values of Western medicine.

Conclusion 2: the syndrome identification and classification accuracies for overabundant liver-fire syndrome, Yin deficiency with Yang hyperactivity, Yin and Yang deficiency syndrome, and healthy persons show a good identification effect, whereas the accuracy for excessive accumulation of phlegm-dampness is lower, and the model's identification effect for this syndrome is not obvious.

Further discussion and limitations of this study

In view of deficiencies such as the low classification and quantification accuracy for excessive accumulation of phlegm-dampness noted in Conclusion 2, we will, in the next step, combine more objective syndrome data, such as inquiry, tongue inspection and meridian diagnosis of TCM, to make up for the shortcoming that evidence and analysis based on complexion data alone are incomplete. At the same time, we will improve the generalization ability and prediction accuracy of the model and further expand the sample size. On the basis of large samples, we will design richer feature extraction methods and optimize the model.

The "CSD + FR" model designed in this paper has a better performance in intelligent diagnosis of hypertension syndrome because the current research is based on the small sample size, which does not mean that the "CSD + FR" model has an edge over the "DNN" model in the large sample size.

Future work

In the future, multi-center research will be carried out based on the Zhejiang Chinese Medical University TCM Big Data and Smart Health Research Center and the Henan University of Chinese Medicine Intelligent Health Equipment Research Center. The results based on the large sample size will be published and compared with the results of this paper. This work contributes to the advancement of objective, quantitative research on TCM diagnosis of major diseases and the internationalization of TCM.

Funding

None.

Acknowledgments

None.

Conflicts of interest

The authors declare that there was no conflict of interest.

References

  1. Wang Liying, Han Xuejie. Establishing Syndrome-Combination Risk Forecasting Model for Hypertension in Prehypertension Population. Journal of Traditional Chinese Medicine. 2018;59(6):470–473.
  2. Wang Jie, He Qingyong. Combination of Diseases and Syndromes in TCM. China Medical Science Press. 2011.
  3. Zhao Huayi. A Present Study on According to Diseases, Based on Recipes. Clinical Journal of Chinese Medicine. 2010;2(10):121–122.
  4. Pelliccia F, Pasceri V, Marezzi G, et al. Randomised, double-blind, placebo-controlled assessment of the efficacy and safety of dietary supplements in prehypertension. Journal of Human Hypertension. 2017;31(10):647–653.
  5. Wang Haiyan, Li Ling, Fan E, et al. Observation of the Clinical Efficacy of Acupoint Thread-Embedding for Prehypertension. Shanghai Journal of Acupuncture and Moxibustion. 2016;35(10):1200–1202.
  6. Ki Yu-jong, Fresh, Han Won-young, et al. Comparative Review of Treatment Based on Syndrome Differentiation and Treatment Based on Syndrome Differentiation from Disease - Emphasis on Hypertension and Headache. Society of Preventive Korean Medicine. 2017;21(1):9–17.
  7. Joint Committee for Guideline Revision. 2018 Chinese Guidelines for Prevention and Treatment of Hypertension-A report of the Revision Committee of Chinese Guidelines for Prevention and Treatment of Hypertension. J Geriatr Cardiol. 2019;16(3):182-214.
  8. Wang J, Xiong X, Liu W, et al. Traditional Chinese medicine syndromes for essential hypertension: A literature analysis of 13,272 patients. Evidence-Based Complementary and Alternative Medicine. 2014:1–19.
  9. Tian P. Convergence: Where west meets east. Nature. 2011;480(7378):84–86.
  10. Wang Bingyuan, Liu Yu, Sun Xizhuo, et al. Prediction model and assessment of probability of incident hypertension: the Rural Chinese Cohort Study. Journal of Human Hypertension. 2020.    
  11. Gu Dongfeng. Guideline on the assessment and management of cardiovascular risk in China. Zhonghua Yu Fang Yi Xue Za Zhi. 2019;53(1):13–32.
  12. Tatsuya Maruhashi, Junko Soga, Noritaka Fujimura, et al. Increased arterial stiffness and cardiovascular risk prediction in controlled hypertensive patients with coronary artery disease: post hoc analysis of FMD-J (Flow-mediated Dilation Japan) Study A. Hypertension Research. 2020.
  13. Yang Wenqing, Li Yunlun, Xie Jun, et al. Exploratory Study of Quantification Diagnostic Standard on Common Traditional Chinese Medicine Syndromes of Hypertension. China Journal of Traditional Chinese Medicine and Pharmacy. 2016;31(5):2008–2012.
  14. Chen Yun. A Study on Medical Image Detection Based on Deep Learning Algorithm, Beijing University of Posts and Telecommunications. 2019.
  15. Choi JY. Improved deep face identification with multi-class pairwise discriminant loss. Electronics Letters. 2017;53(20):1356–1357.
  16. Lu Yunxi. Deep Learning Based TCM Tongue Image Color Correction, Segmentation and Server System. Beijing University of Technology. 2019.
  17. Liang Yumei. Research on Automatic Identification System of TCM Constitution Based on Facial Image Feature. Beijing University of Technology. 2016.
  18. Li Jianhan. Human Vital Signs and Multi-Targets Detection Based on Convolutional Neural Network. Beijing University of Posts and Telecommunications. 2019.
  19. Hanqing Sun, Zheng Liu, Guizhi Wang, et al. Intelligent Analysis of Medical Big Data Based on Deep Learning. IEEE Access. 2019;7:142022–142037.
  20. Sural S, Qian G, Pramanik S. Segmentation and histogram generation using the HSV color space for image retrieval. Proc. IEEE Conf. on Image Processing. 2002:II589–II592.
  21. Kang HR. Computational color technology. Bellingham: Spie Press; 2006.
  22. Sheikh HR, Bovik AC. A visual information fidelity approach to video quality assessment. The First International Workshop on Video Processing and Quality Metrics for Consumer Electronics. 2005.
  23. Wang Z, Simoncelli EP, Bovik AC. Multiscale structural similarity for image quality assessment. The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers. 2003:1398–1402.
  24. Pei SC, Chen LH. Image quality assessment using human visual DOG model fused with random forest. IEEE Transactions on Image Processing. 2015;24(11):3282–3292.
  25. Larson EC, Chandler DM. Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging. 2010;19(1):011006–011006-21.
  26. Antonini M, Barlaud M, Mathieu P, et al. Image coding using wavelet transform. IEEE Transactions on image processing. 1992;1(2):205–220.
  27. Muhammad G, Al-Hammadi MH, Hussain M. Image forgery detection using steerable pyramid transform and local binary pattern. Machine Vision and Applications. 2014;25(4):985–995.
  28. Huifang Deng, Zijie Chen, Shuangqing Zhai. Evolution of theory of facial differentiation of zang-fu viscera in Huangdi Neijing. Chinese journal of traditional Chinese medicine. 2018;33(03):837–840.
  29. Jing Sun, Shouqiang Chen. Exploration and Clinical Practice of Multivariate Inspection Information of Hypertension. Clinical research of traditional Chinese medicine. 2014;6(30):18–20.
  30. Changying Yu. Study on the correlation between hypertension syndrome and tongue features based on image recognition technology. Beijing Chinese medical university. 2019.
  31. Liaw A, Wiener M. Classification and regression by random Forest. R news. 2002;2(3):18–22.
  32. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer: Germany; 2015. p. 234–241.
  33. Beel J, Aizawa A, Breitinger C. Mr. DLib: recommendations-as-a-service (RaaS) for academia. 2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL). 2017:1–2.
  34. Xu B, Wang N, Chen T, et al. Empirical evaluation of rectified activations in convolutional network. Machine Learning. 2015.
  35. Kingma DP, Ba J. Adam: A method for stochastic optimization. ICLR. 2014.
Creative Commons Attribution License

©2020 Li, et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.