Journal of Anesthesia & Critical Care: Open Access
eISSN: 2373-6437

Research Article | Volume 6, Issue 5

Comparison of Clinical Diagnoses Versus Computerized Test Diagnoses using the Maryland Clinical Diagnostics Diagnostic Paradigm (Expert System) for Diagnosing Chronic Pain in the Neck, Back and Limbs

Nelson Hendler,1 Dennis Spurgeon2

1Former Assistant Professor of Neurosurgery, Johns Hopkins University School of Medicine, USA
2Former Dean of Chiropractic Education and Clinical Training, Los Angeles College of Chiropractic (division of Southern California University of Health Sciences), USA

Correspondence: Nelson Hendler, Former Assistant Professor of Neurosurgery, Johns Hopkins University School of Medicine, 117 Willis St., Cambridge, Maryland, 21613, USA, Tel 443-277-0306

Received: May 04, 2016 | Published: December 30, 2016

Citation: Hendler N, Spurgeon D (2016) Comparison of Clinical Diagnoses Versus Computerized Test Diagnoses using the Maryland Clinical Diagnostics Diagnostic Paradigm (Expert System) for Diagnosing Chronic Pain in the Neck, Back and Limbs. J Anesth Crit Care Open Access 6(5): 00242. DOI: 10.15406/jaccoa.2016.06.00242

Abstract

Past research found that 40%-71% of chronic pain patients had overlooked diagnoses, underscoring the need for more accurate diagnostic methodology. In this study, 28 diagnoses were made by the senior author during the initial evaluation of 5 chronic pain patients. Of these, 27 diagnoses made by the senior author were also made by the computer-scored and computer-interpreted Maryland Clinical Diagnostics (MCD) “Diagnostic Paradigm.” The MCD Diagnostic Paradigm matched the senior author’s evaluation 96.37% of the time, and missed 1 diagnosis made by the senior author, for a 3.63% “missed diagnosis” rate, based on the initial clinic evaluation.

Overall there were 4 “Test Only Positives,” i.e. diagnoses made by the Diagnostic Paradigm but not made by the senior author. These patients therefore received the benefit of additional diagnostic studies, and the “Test Only Positive” diagnoses gave the chiropractor an additional diagnosis 16% of the time that he might otherwise have overlooked.
The Pain Validity Test could predict which of the patients would have moderate or severe abnormalities on objective medical tests with 100% accuracy. The Diagnostic Paradigm predicted the specific abnormality on objective medical testing with 98% accuracy.

Keywords: expert system, chronic pain, misdiagnosis, validating pain, accurate diagnosis

Introduction

Past research reports from Mensana Clinic indicate that 40% to 67% of chronic pain patients involved in litigation are misdiagnosed.1,2 When evaluating just the diagnosis of complex regional pain syndrome, type I (CRPS I), formerly called reflex sympathetic dystrophy (RSD), Hendler found that 71% of the patients who were told they had only CRPS I actually had nerve entrapment syndromes, and 26% had a combination of both nerve entrapment syndrome and CRPS I.3 Therefore, 97% of patients diagnosed by other physicians as having CRPS I were misdiagnosed, or only partially diagnosed. In specialized diagnostic situations, the overlooked diagnosis rate for people who survived lightning strikes was 93%, and for people who survived electrical injury, the rate was 98%.4 These errors in diagnosis are costly to the patient and the medical system alike, since they prolong or result in inappropriate treatment.

Psychiatric problems arise as the result of chronic pain.5,6 Hendler reported that 77% of patients seen at Mensana Clinic had coexisting depression and chronic pain, but when questioned about pre-existing depression, 89% of the patients had never had significant depression before the onset of their pain.6 The presence of psychiatric problems, even though a normal response to chronic pain, biases many physicians, which results in a less extensive evaluation.7,8 This physician bias is often compounded by factors such as litigation, an additional negative bias against women with pain complaints, and a bias against men by female physicians.9–13 These biases also influence the length and the extent of an evaluation. Some physicians spend less than 15 minutes with a patient,14 while other “high volume” physicians have reduced the time they spend with their patients by 30%.15 Since some physicians have reduced the length of time spent with patients, an automated history is a desirable efficiency, and may improve the accuracy of diagnosis and treatment, since a comprehensive questionnaire can ask questions overlooked by the time-conscious physician.

Training physicians and transferring knowledge is a cumbersome process. Hansen and his coauthors divide knowledge transfer into computer-based systems, which they call a “codification strategy,” and direct person-to-person contact, which they call the “personalization strategy”.16 The use of the “codification strategy” “…allows many people to search for and retrieve codified knowledge without having to contact the person who originally developed it”.16 Medicine has traditionally relied upon the tutorial, or “personalization strategy,” to provide clinical training. However, some aspects of clinical training do lend themselves to the more efficient and less expensive “codification strategy.” All physicians should recognize the need for an accurate, thorough history to establish diagnoses. History taking does lend itself to computerization, as a way of sharing the knowledge of an experienced physician with a trainee, and of assuring the thoroughness of an evaluation. However, an expert system is more than just an automatic history-taking tool. The interpretation of the answers to the questions, the integration of the answers, and the ability to formulate diagnoses based on that integration and interpretation have traditionally been transmitted through person-to-person training in medicine. This is labor-intensive and subject to personal variation, which generates uneven quality of care. In the past, this process had defied duplication.

Moreover, the selection and interpretation of appropriate laboratory studies is a highly individualized phenomenon. Physicians are poorly trained in recognizing the sensitivity and specificity of a laboratory study, and have a tendency to rely on the results of a laboratory test rather than their own clinical judgment. The fact that a laboratory test lends a degree of objectivity to diagnosis is very appealing in the uncertain world of medicine, and provides a description which is universally understood and less subjective than “clinical judgment.” However, if there are false positive and false negative errors in the results of the laboratory studies, recognizing these errors and eliminating them from diagnostic consideration require clinical judgment. Furthermore, physicians fail to recognize the distinction between anatomical tests and physiological tests, which is a critical issue, since each category of testing provides a different answer to the same question, and the degree of correlation between them is very poor.

One method of eliminating the subjective component of diagnosis is to review outcome results. In this fashion, the efficacy of accurate diagnosis, and by extension proper treatment, can be impartially analyzed. This “results-based” data is being used by a number of hospital systems to market their institutions, especially when compared to another institution offering the same services. Insurance carriers have used outcome studies from various hospitals to determine whether the insurance company will reimburse for a procedure. Evidence-based medicine is quickly becoming a standard for assessing medical treatment.

In the area of the diagnosis and treatment of chronic pain patients, most insurance carriers have abysmal results with claimants involved in workers compensation claims. The reported return-to-work rate for claimants out of work for two years or more is less than one percent (1%). For a comparable group of patients, Mensana Clinic has a return-to-work rate of nineteen and a half percent (19.5%) for workers compensation claimants, and sixty-two and a half percent (62.5%) for auto accident cases, with a 90% reduction in the use of medication and a 45% reduction in doctor visits.8,10 These are quantifiable, objective outcome measures which meet the criteria for evidence-based medicine.

To see whether these diagnostic methods could be duplicated, Maryland Clinical Diagnostics (MCD) developed the “Diagnostic Paradigm” described in the following report. The following research report tests the reliability, specificity and sensitivity of these methods.

Subjects

All subjects who received the Maryland Clinical Diagnostics (MCD) tests were patients at a private chiropractor’s office. All 5 patients included in this study were involved in active litigation. The patients represent new evaluations seen between November 2011 and March 2012. The average age of the patients was 41.2 years. Three subjects were female, and two were male.

Methods

For this study, an evaluation at the senior author’s office consisted of the administration of the MCD Diagnostic Paradigm and the MCD Pain Validity Test, followed immediately by a one-hour clinical evaluation with the chiropractor, who was the senior author. The diagnoses from the initial evaluation by the senior author were compared to the diagnoses generated by the MCD Diagnostic Paradigm, which had been administered just prior to the clinical evaluation and scored within 5 minutes of completion.

The diagnoses were considered a match only if the type of pathology and the spinal level diagnosed by the senior author corresponded with the type of pathology and spinal level generated by the Diagnostic Paradigm.
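To make this matching rule concrete, the sketch below expresses it in Python. The `Diagnosis` tuple and `is_match` function are our own illustration, not MCD’s actual implementation; in the study the correspondence was judged clinically (e.g., “lumbar facet damage L2-4” was matched with “facet syndrome L2-4”), so the exact string comparison here is a simplification.

```python
from typing import NamedTuple

class Diagnosis(NamedTuple):
    """A diagnosis reduced to the two fields the study compared."""
    pathology: str     # e.g. "facet syndrome", "disrupted disc"
    spinal_level: str  # e.g. "L3-4", "C5-6"

def is_match(clinical: Diagnosis, paradigm: Diagnosis) -> bool:
    # A match requires BOTH the pathology type and the spinal level to agree;
    # agreement on only one of the two fields does not count.
    return (clinical.pathology == paradigm.pathology
            and clinical.spinal_level == paradigm.spinal_level)
```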

The MCD Diagnostic Paradigm was designed to detect 60 diagnoses and 44 differential diagnoses, for a total of 104 diagnoses commonly seen in post-traumatic auto accident or workers compensation injuries. A list of the possible diagnoses and differential diagnoses is shown in Appendix A. These diagnoses were selected for programming for computer analysis and scoring, since they were the most frequent ones seen in post-traumatic injuries.

Sources of errors

Two patients had “Patient Input Error.” A “Patient Input Error” is defined as an error in completing the questionnaire. Two of the patients initially neglected to complete sections of the questionnaire dealing with pain in the shoulder, because the questions about shoulder pain were included under arm pain. When this oversight was shared with the patients, they were then able to complete the questionnaire accurately.

Another potential source of error was a “Patient Communication Error.” Included in this report were two patients who had marked questions for areas of the body that they had not discussed with the senior author during the initial evaluation. When questioned after both the clinical evaluation and the MCD Diagnostic Paradigm results were available, the patients said “I forgot to tell the doctor,” “I was too anxious or scared during the clinical interview, so I neglected to mention the problem,” and “I didn’t think it was important.” This error demonstrated the value of a self-administered questionnaire, which reduces the chance of a missed or incomplete history, and therefore of missed diagnoses, through patient communication errors.

Results

There were 28 diagnoses made by the senior author during the initial evaluation of the 5 patients included in the study. Of these, 27 diagnoses made by the senior author were also made by the MCD Diagnostic Paradigm. Therefore, the comparison reported in this article is between the initial clinical evaluations and the Diagnostic Paradigm. The MCD Diagnostic Paradigm matched the senior author’s evaluation 96.37% of the time, and the Diagnostic Paradigm missed 1 diagnosis made by the senior author, for a 3.63% “missed diagnosis” rate, based on the initial clinic evaluation.

Overall there were four Test Only Positives, i.e. diagnoses made by the Diagnostic Paradigm but not made by the senior author. Finding these four diagnoses, in addition to the 28 diagnoses made clinically (16%), is of great benefit to the treating chiropractor, since it improves his or her level of care and assures that the patients get a comprehensive evaluation.
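The quantities reported above (matches, missed diagnoses, and Test Only Positives) can be tallied with simple set comparisons. The following sketch reuses the hypothetical `Diagnosis` and `is_match` helpers from the Methods section; the example data are illustrative, loosely modeled on Patient #2 in Table 1, not the actual study records.

```python
def compare(clinical: set, paradigm: set):
    """Tally matched, missed, and test-only-positive diagnoses."""
    matched = {c for c in clinical
               if any(is_match(c, p) for p in paradigm)}
    missed = clinical - matched            # made only by the clinician
    test_only = {p for p in paradigm
                 if not any(is_match(c, p) for c in clinical)}
    return matched, missed, test_only

# Illustrative example only (not the study's case records):
clinical = {Diagnosis("herniated disc", "L4-L5"),
            Diagnosis("radiculopathy", "L4-L5"),
            Diagnosis("lumbar strain", "L4-L5")}
paradigm = {Diagnosis("herniated disc", "L4-L5"),
            Diagnosis("radiculopathy", "L4-L5"),
            Diagnosis("facet syndrome", "L3-S1")}

matched, missed, test_only = compare(clinical, paradigm)
print(f"match rate: {len(matched) / len(clinical):.1%}")    # 66.7%
print(f"missed rate: {len(missed) / len(clinical):.1%}")    # 33.3%
print(f"test-only positives: {len(test_only)}")             # 1
```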

The only medical laboratory studies performed in this study population were CTs, MRIs and X-rays. Every abnormality found on these tests was predicted by diagnoses generated by the Diagnostic Paradigm. In other words, the tests confirmed diagnoses made by the Diagnostic Paradigm 100% of the time. Other diagnoses made by the Diagnostic Paradigm were not confirmed, since the objective testing needed to confirm them was never performed, i.e. EMG/nerve conduction testing to confirm radiculopathy, provocative discograms to confirm damaged discs, etc. With more extensive physiological testing (provocative discogram, facet blocks, EMG, etc.), a clinician should expect to confirm many more diagnoses, in addition to those confirmed by CT, MRI and X-ray, which are anatomical tests.

In all 5 patients, the Pain Validity Test predicted that the patients would have moderate or severe abnormalities on objective medical testing (Objective Pain Patients). Indeed, all 5 patients had at least one moderate or severe abnormality on objective medical testing. This 100% accuracy is compatible with previously published reports indicating a 94%-95% accuracy level in a larger number of patients.7,11

Discussion

A number of deficits exist in expert systems. In the absurd extreme, if the computerized expert system lists all possible diagnoses, there is 100% sensitivity, but the specificity is very low. Conversely, if the specificity is tightened to such a degree that the system always gets a specific diagnosis but misses other associated diagnoses, the sensitivity of the system is reduced to a level of inaccuracy that approaches or exceeds that of current physician diagnostic skills,1–4 and no benefit accrues from the use of the computerized expert system.
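This trade-off follows directly from the standard definitions of sensitivity and specificity; a brief illustration with invented counts (not data from the present study) is given below.

```python
def sensitivity(tp: int, fn: int) -> float:
    # Fraction of truly present diagnoses the system lists.
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    # Fraction of truly absent diagnoses the system correctly omits.
    return tn / (tn + fp)

# A degenerate system that lists every diagnosis on its panel never misses
# one (fn = 0), so sensitivity is perfect, but it also never rules one out
# (tn = 0), so specificity collapses. Counts here are invented for scale.
print(sensitivity(tp=28, fn=0))   # 1.0
print(specificity(tn=0, fp=76))   # 0.0
```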

After 30 years of work in this area, some authors feel only limited progress has been made in expert systems.17 Engelbrecht feels that the quality of knowledge used to create the system, and the availability of patient data are the two main problems confronting any developer of an expert system, and advocates an electronic medical record system to correct one component of the problem.18 Babic concurs with the value of the longitudinal collection of clinical data, and data mining to develop expert systems.19

The accuracy of any computer-scored and computer-interpreted expert system is a major issue. The expert systems that seem to have the best results are the ones that focus on a narrow and highly specialized area of medicine. One questionnaire consisted of 60 questions covering 32 rheumatologic diseases in 358 patients.20 The correlation rate was 74.4%, with an error rate of 25.6%; 44% of the errors were attributed to “information deficits of the computer using standardized questions.”20 A later version, “RHEUMA,” was studied prospectively in 51 outpatients and achieved a 90% correlation with clinical experts.21 Several groups have approached the diagnosis of jaundice: ICTERUS produced a 70% accuracy rate,22 while ‘Jaundice’ also had a 70% overall accuracy rate.23 An expert system for vertigo was reported, and it generated an accuracy rate of 65%.24 This was later reported as the OtoNeurological Expert (ONE), which generated the same results reported in the earlier article.25 There was a 76% agreement between an expert system and a clinician for the diagnosis of depression.26

When a Computer Assisted Diagnostic Interview (CADI) was used to diagnose a broad range of psychiatric disorders, there was an 85.7% agreement level with three clinicians.27 In a review of twenty charts by a computerized analysis of treatment for hypertension, using HyperCritic, a panel of 18 family practitioners felt the treatment suggested by the computer system was erroneous or possibly erroneous 16% of the time.28 The panel accepted HyperCritic’s critiques as equally beneficial as critiques from 8 human reviewers.28 Others have developed a “to do” list to remind and alert treating physicians about tests they should order, based on input into electronic patient records.29 In the narrow area of managing lipid levels, there was a 93% agreement between management advice given by the expert system and the specialist, after interpretation of laboratory and clinical data.30 However, physicians accept comments from expert systems regarding a patient’s diagnosis only 65% of the time, and are resistant to comments about prescriptions, with only a 35% acceptance level.31 Therefore, there may be more resistance from untrained physicians to the diagnostic studies recommended by the report of the MCD Diagnostic Paradigm than to the diagnoses the report generates. This premise needs to be tested in future research.

Many of the diagnostic studies recommended by the Treatment Algorithm are not commonly used in community medical centers, but have been used for years by major teaching hospitals in the United States. A classic example of this is the widespread use of the MRI for detecting disc damage in the cervical and lumbar spine. However, in 98 patients, the MRI had a 29% false positive rate, i.e. the MRI says there is pathology in a disc in patients who are asymptomatic,32 and a 69%-79% false negative rate, i.e. the MRI says there is no abnormality in patients who are symptomatic and have a positive provocative discogram.33–35 The value of the provocative discogram is clearly demonstrated by the groundbreaking work of Bogduk, who demonstrated pain fibers in the posterior portion of the annulus of an intervertebral disc, which can be damaged and produce pain without any anatomical distortion of the disc.36 He terms this condition “internal disc disruption”.37 Central to understanding the value of the provocative discogram is the concept that pain is a physiological condition, not an anatomical event. While an MRI can detect only anatomical distortions, the provocative discogram, which is a physiological test, is more reliable for diagnosing chronic pain. The same rationale applies to the use of other physiological tests used to make diagnoses in chronic pain patients, such as root blocks, nerve blocks, facet blocks, bone scans, gallium scans, Indium 111 scans, neurometer studies, somatosensory evoked potentials, and flexion-extension X-rays with obliques. As noted in the Results, with more extensive physiological testing a clinician should expect to confirm many more diagnoses, in addition to those confirmed by the anatomical tests (CT, MRI and X-ray). This is why the majority of the recommended tests in the Treatment Algorithm are physiological ones (Tables 1–3).
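For perspective, the cited error rates imply rough operating characteristics for MRI detection of painful disc damage. The back-of-the-envelope arithmetic below is our own, derived only from the false positive and false negative rates quoted above, and assumes those rates generalize.

```python
# Rates quoted from the cited studies (refs 32-35):
fp_rate = 0.29                         # MRI positive in asymptomatic patients
fn_rate_lo, fn_rate_hi = 0.69, 0.79    # MRI negative despite a positive discogram

implied_specificity = 1 - fp_rate                        # ~0.71
implied_sensitivity = (1 - fn_rate_hi, 1 - fn_rate_lo)   # ~0.21 to 0.31

print(f"implied specificity: {implied_specificity:.0%}")
print(f"implied sensitivity: {implied_sensitivity[0]:.0%}-{implied_sensitivity[1]:.0%}")
```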

| Patient | # | Diagnosis by chiropractor | Diagnosis by Diagnostic Paradigm |
|---|---|---|---|
| #1 (LF) | 1 | Lumbar facet damage L2-4 | Facet syndrome L2-4 |
| | 2 | Lumbar disc damage L3-4 | L3-4 disrupted disc |
| | 3 | Subluxation C3 on C4 | C3-4 unstable spinal segment |
| | 4 | -- | Occult fracture, navicular bone |
| #2 (KC) | 1 | Herniated disc L4-L5 | Herniated disc L4-L5 |
| | 2 | L4-L5 radiculopathy | L4-L5 radiculopathy |
| | 3 | Lumbar strain | -- |
| | 4 | -- | L3-S1 lumbar facet syndrome |

Table 1 Comparison of Chiropractor to Diagnostic Paradigm Diagnoses

| Patient | Result of Pain Validity Test | Medical test abnormality |
|---|---|---|
| #1 (LF) | Objective Pain Patient | X-ray: uncovertebral osteophytes, C5-6 |
| | | Subluxation C3 on C4 |
| | | MRI: L2-3 facet degenerative changes |
| | | L3-4 facet degenerative changes |
| | | L3-4 disc protrusion |
| | | X-ray: fracture, base of 4th metacarpal |
| | | X-ray: hip within normal limits |
| | | X-ray: shoulder within normal limits |
| | | X-ray: leg within normal limits |
| #2 (KC) | Objective Pain Patient | MRI: L4-5 annular tear |
| | | Facet arthropathy |
| | | L5-S1 annular tear |

Table 2 Predictive Results of Pain Validity Test, where “Objective Pain Patients” are predicted to have abnormal medical test results with 95% accuracy39

| Patient | Result of Diagnostic Paradigm | Medical test abnormality |
|---|---|---|
| #1 (LF) | C5-6 radiculopathy, facet syndrome | X-ray: uncovertebral osteophytes, C5-6 |
| | C3-4 unstable spinal segment | Subluxation C3 on C4 |
| | Lumbar facet syndrome L2-3 | MRI: L2-3 facet degenerative changes |
| | Lumbar facet syndrome L3-4 | L3-4 facet degenerative changes |
| | L3-4 disrupted disc, unstable L3-4 | L3-4 disc protrusion |
| | Occult fracture, navicular bone | X-ray: old fracture, base of 4th metacarpal |
| | | X-ray: hip within normal limits |
| | | X-ray: shoulder within normal limits |
| | | X-ray: leg within normal limits |

Table 3 Confirmation of Diagnostic Paradigm Diagnoses with Objective Testing

The purpose of an “expert system” is to improve the reliability and accuracy of diagnosis, and to enhance medical care. In our research, we had essentially the same results with the www.MarylandClinicalDiagnostics.com Diagnostic Paradigm and Treatment Algorithm as an earlier article, which reported that the test generated diagnoses with a 96% correlation with the diagnoses of Johns Hopkins Hospital staff members.38 Likewise, the fact that in this research the Pain Validity Test could predict with 100% accuracy who would have moderate or severe abnormalities on objective medical testing corresponds with the published accuracy level of 95%.7,11,39,40 Therefore, we conclude that the tests offered by www.MarylandClinicalDiagnostics.com are a most useful adjunct to evaluating and treating patients in a chiropractic setting.

Conflicts of interest

There is no conflict of interest.

Acknowledgments

None.

Funding

None.

References

  1. Hendler N, Kozikowski J. Overlooked Physical Diagnoses in Chronic Pain Patients Involved in Litigation. Psychosomatics. 1993;34(6):494–501.
  2. Hendler N, Bergson C, Morrison C. Overlooked Physical Diagnoses in Chronic Pain Patients Involved in Litigation, Part 2: The addition of MRI, nerve blocks, 3-D CT, and qualitative flow meter. Psychosomatics. 1996;37(6):509–517.
  3. Hendler N. Differential Diagnosis of Complex Regional Pain Syndrome. Pan–Arab Journal of Neurosurgery. 2002:1–9.
  4. Hendler N. Overlooked Diagnosis in Electric Shock and Lightning Strike Survivors. J Occup and Environ Med. 2005;47(8):796–805.
  5. Hendler N. Depression Caused by Chronic Pain. J Clinical Psychiatry. 1984;45:30–36.
  6. Hendler N. The Four Stages of Pain, in Diagnosis and Treatment of Chronic Pain. Edited by N. Hendler, Long D, and Wise T, John Wright/PSG, Littleton, Colorado, USA. 1982:1–8.
  7. Hendler N. Validating and Treating the complaint of Chronic Pain: The Mensana Clinic Approach, in Clinical Neurosurgery, Edited by P Black, Williams and Wilkens, Baltimore, Maryland, USA. 1989:385–397.
  8. Hendler N. Depression Caused by Chronic Pain. J Clinical Psychiatry. 1984;45:30–36.
  9. Hendler N. Diagnosis and Non–Surgical Management of Chronic Pain. Raven Press, New York, USA. 1981.
  10. Talo S, Hendler N, Brodie J. Effects of Active and Completed Litigation on Treatment Results: Workers Compensation Patients Compared With Other Litigation Patients. J Occupat Med. 1989;31(3):265–270.
  11. Hendler N, Mollett A, Talo S. A Comparison Between the MMPI and the “Mensana Clinic Back Pain Test” For Validating the Complaint of Pain. J Occupational Medicine. 1988;30(2):98–102.
  12. Armitage, KJ, Schneiderman, LJ, Bass RA. Response of Physicians to Medical Complaints in Men and Women. J Amer Med Assoc. 1979;241(20):2186–2187.
  13. Hall JA, Irish JT, Roter DL, et al. Satisfaction, Gender and Communication in Medical Visits. Med Care. 1984;32(12):1216–1231.
  14. Gross DA, Zyzanski SJ, Borawski EA, et al. Patient Satisfaction With Time Spent With Their Physician. J Fam Pract. 1998;47(2):133–137.
  15. Zyzanski SJ, Stange KC, Langa D, et al. Trade–offs in High Volume Primary Care Practice. J Fam Prac. 1998;46(5):397–402.
  16. Hansen M, Nohria N, Tierney T. What’s Your Strategy For Managing Knowledge?. Harvard Business Review. 1999;106–116.
  17. Metaxiotis KS, Samouilidis JE. Expert systems in medicine: academic exercise or practical tool? J Med Eng Technol. 2000;24(2):68–72.
  18. Engelbrecht R. Expert Systems for Medicine–functions and Development. Zentralbl Gynakol. 1997;119(9):428–434.
  19. Babic A. Knowledge Discovery for Advanced Clinical Data Management. Stud Health Technol Inform. 1999;68:409–413.
  20. Schewe S, Herzer P, Kruger K. Prospective Application of an Expert System for the Medical History of Joint Pain. Klin Wochenschr. 1990;68(9):466–471.
  21. Schewe S, Schreiber MA. Stepwise Development of a Clinical Expert System in Rheumatology. Clin Investig. 1993;71(2):139–144.
  22. Molino G, Marzuoli M, Molino F, et al. Validation of ICTERUS, a Knowledge–Based Expert System for Jaundice Diagnosis. Methods Inf Med. 2000;39(4–5):311–318.
  23. Camma C, Garofalo G, Almasio P, et al. Performance Evaluation of the Expert System ‘Jaundice’ in Comparison with that of Three Hepatologists. J Hepatol. 1991;13(3):279–285.
  24. Kentala E, Auramo Y, Juhola M, et al. Comparison Between Diagnoses of Human Experts and a Neurotologic Expert System. Ann Otol Rhinol Laryngol. 1998;107(2):135–140.
  25. Kentala EL, Laurikkala JP, Viikki K, et al. Experiences of Otoneurological Expert System for Vertigo. Scand Audiol Suppl. 2001;52:90–91.
  26. Cawthorpe D. An Evaluation of a Computer-Based Psychiatric Assessment: Evidence of Expanded Use. Cyberpsychol Behav. 2001;4(4):503–510.
  27. Miller PR, Dasher R, Collins R, et al. Inpatient Diagnostic Assessments: 1. Accuracy of Structured vs. Unstructured Interviews. Psychiatry Res. 2001;105(3):255–264.
  28. Van der Lei J, van der Does E, Man in ‘t Veld AJ, et al. Response of General Practitioners to Computer Generated Critiques of Hypertension Therapy. Methods of Information Medicine. 1993;32(2):146–153.
  29. Silverman BG, Andonyadis C, Morales A. Web Based Health Care Agents: the case of reminders and todos, too (R2Do2). Artificial Intell Med. 1999;14(3):295–316.
  30. Sinnott MM, Carr B, Markey J, Brosnan P, et al. Knowledge Based Lipid Management System for General Practitioners. Clin Chim Acta. 1993;222(1–2):71–77.
  31. Kuilboer MM, van der Lei J, de Jongste JC, et al. Simulating an Integrated Critiquing System. J Am Med Inform Assoc. 1999;5(2):194–202.
  32. Jensen MC, Brant–Zawadzki MN, Obuchowski N, et al. Magnetic resonance imaging of the lumbar spine in people without back pain. N Engl J Med. 1994;331(2):69–73.
  33. Braithwaite I, White J, Saifuddin A, et al. Vertebral end–plate (Modic) changes on lumbar spine MRI: correlation with pain reproduction at lumbar discography. Eur Spine J. 1999;7(5):363–368.
  34. Sandhu HS, Sanchez–Caso LP, Parvataneni HK, et al. Association between findings of provocative discography and vertebral endplate signal changes as seen on MRI. J Spinal Disord. 2000;13(5):438–443.
  35. Carragee EJ, Chen Y, Tanner CM, et al. Provocative discography in patients after limited lumbar discectomy: A controlled, randomized study of pain response in symptomatic and asymptomatic subjects. Spine. 2000;25(23):3065–3071.
  36. Bogduk N, McGuirk B. Pain Research and Clinical Management. 2002;13:119–122.
  37. Hendler N, Berzoksky C, Davis RJ. Comparison of Clinical Diagnoses Versus Computerized Test Diagnoses Using the Mensana Clinic Diagnostic Paradigm (Expert System) for Diagnosing Chronic Pain in the Neck, Back and Limbs. Pan Arab Journal of Neurosurgery. 2007;8–17.
  38. Hendler N, Baker A. An Internet questionnaire to predict the presence or absence of organic pathology in chronic back, neck and limb pain patients. Pan Arab Journal of Neurosurgery. 2008;12(1):15–24.
Creative Commons Attribution License

©2016 Hendler et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.