
Computer Aided Diagnosis Methods for Coronary CT Angiography

Computergestützte Diagnoseverfahren für CT Koronarangiographiedaten

Submitted to the Faculty of Engineering of the Universität Erlangen–Nürnberg

in fulfillment of the requirements for the degree of

Doktor-Ingenieur

by

Matthias Teßmann

Erlangen, 2011

Approved as a dissertation by the Faculty of Engineering of the Universität Erlangen–Nürnberg

Date of submission: 21.01.2011
Date of doctoral defense: 22.03.2011
Dean: Prof. Dr.-Ing. Reinhard German
Reviewers: Prof. Dr. Günther Greiner, Prof. Dr. Willi A. Kalender

Revision 1.00. © 2011 Matthias Teßmann. All rights reserved.


Abstract

Cardiovascular diseases are the number one cause of death in the world and, as a consequence, a major health and economic problem. Any measure that supports the clinical process during diagnosis, treatment and aftercare is therefore strongly desirable. Today, medical imaging techniques play a key role for this purpose. Cardiac imaging in particular places high demands on the imaging modality with respect to spatial and temporal resolution. The image quality achievable with computed tomography is almost on par with traditional catheter-based angiography. However, analysis of the data is a manual and time-consuming process, and a quick evaluation of the images is required in order to provide optimal patient care. Especially the identification of small structures, such as plaques contained in the coronary arteries of the heart, is difficult. In this thesis, methods were examined that approach this problem. The foundation of the presented algorithms is a robust segmentation of the coronary arteries within the data. Based on this segmentation, methods have been developed that allow the classification of pathologies along the vessels. A learning-based approach has been implemented and used to identify diseased regions along the arteries. The resulting algorithm is capable of quickly detecting the location of soft and calcified plaques in the data. Besides the detection of the location and the type of plaques, their quantification is important with respect to risk assessment. A fully automatic, threshold-based segmentation and scoring method for calcified plaque is presented that delivers results similar to those obtained by manual segmentation from a radiologist. Finally, a snake-based segmentation algorithm for soft-plaques in CT angiography data has been examined. This approach generates a boundary hull along the whole vessel and extracts a radius distribution curve from that data. Thereby, it is possible to detect and quantify the narrowing of the vessel lumen in the presence of a soft-plaque. Overall, the algorithms presented in this thesis and the software prototypes developed in conjunction with them can contribute significantly to the provision and improvement of computer aided diagnostic methods for the analysis of coronary artery disease in CT data.


Contents

Abstract
Contents
List of Figures
List of Tables
Acknowledgements

Part I: Introduction
1 Motivation
  1.1 Contribution
  1.2 Thesis Outline
2 The Human Heart
  2.1 Anatomy and Function
    2.1.1 Heart Cycle
    2.1.2 Vascular System
  2.2 Cardiac Diseases
3 Cardiac Diagnostics and Imaging
  3.1 Electrocardiography
  3.2 Echocardiography
  3.3 Cardiac Catheterization
  3.4 Computed Tomography
    3.4.1 Basic Principle of CT Imaging
    3.4.2 Hounsfield Units
    3.4.3 Cardiac CT

Part II: Computer Aided Diagnosis Methods for Cardiac CT Data
4 Cardiac CT Preprocessing
  4.1 Heart Isolation
  4.2 Coronary Tree Segmentation
  4.3 Automatic Tree Labeling
    4.3.1 Algorithm
    4.3.2 Results
5 Learning-Based Stenosis Detection
  5.1 Inductive Learning and Pattern Classification
    5.1.1 Decision Stumps
    5.1.2 Boosting
  5.2 Feature Extraction Strategies for Stenosis Representation
    5.2.1 Vascular Sampling
    5.2.2 Multi-Resolution Extension
    5.2.3 Simple Feature Values
    5.2.4 Haar-like Feature Values
  5.3 Implementation
  5.4 Results
    5.4.1 Basic Approach
    5.4.2 Multi-Resolution Extension
6 Coronary Calcium Detection and Quantification
  6.1 Clinical Procedures and Previous Work
  6.2 Automatic Calcium Scoring
    6.2.1 HU-Threshold Determination
    6.2.2 Detecting Calcifications
  6.3 Implementation
  6.4 Results
7 Coronary Soft-Plaque Detection and Quantification
  7.1 Active Contour Models
    7.1.1 Solving the Energy Equation
    7.1.2 Gradient Vector Flow as External Image Energy
  7.2 Detection of Soft-Plaques
  7.3 Implementation
  7.4 Results

Part III: Summary
8 Discussion
  8.1 Learning-Based Stenosis Detection
  8.2 Calcium-Plaque Detection and Quantification
  8.3 Soft-Plaque Detection and Quantification
9 Conclusion

Bibliography
Kurzfassung

List of Figures

2.1 The heart muscle
2.2 The heart chambers
2.3 The heart valves
2.4 The coronary vascular system
2.5 Soft- and calcified plaque CPR
2.6 Soft-plaque on a CTA slice image
2.7 Calcified plaque on a CTA slice image
3.1 Schematic view of a standard ECG
3.2 Standard cardiac ultrasound views
3.3 Coronary angiography images of the major coronary arteries
3.4 Overview of the four CT scanner generations
3.5 Schematic view of the CT scanning process
3.6 Prospective ECG-triggering
3.7 Retrospective ECG-gating
4.1 Heart isolation
4.2 Extracted coronary centerline trees
4.3 Centerline tree and heart isolation mesh
4.4 Branch separation
4.5 Automatic tracing result
4.6 Final result of the branch labeling algorithm
4.7 Prototype software for automatic branch labeling
4.8 Automatic branch labeling MeVisLab network
5.1 Schematic example of a decision stump
5.2 Calcification within a coronary artery
5.3 Soft-plaque within a coronary artery
5.4 Schematic view of the bounding cylinder overlap
5.5 Vessel CPR with bounding cylinders
5.6 Sampling pattern
5.7 Varying calcifications in cardiac vessel CPR cut-outs
5.8 Varying soft-plaques in cardiac vessel CPR cut-outs
5.9 Multi-scale vessel representation
5.10 Result of the multi-scale vessel classification
5.11 Rotational invariance of the simple feature values
5.12 Haar-like feature extraction patterns
5.13 Transformation of the circular to a rectangular image
5.14 Ground truth acquisition tool
5.15 Ground truth acquisition tool used for stenosis detection
5.16 MeVisLab network driving the CA-Measurement software
5.17 The Coronary-CAD Software
5.18 MeVisLab network driving the Coronary-CAD software
5.19 Classification results for calcified plaques
5.20 Classification results for soft-plaques
5.21 Classification results for mixed plaques
6.1 Example of a non-contrast enhanced CT scan
6.2 Contrast variability on CTA data
6.3 Histograms of the contrast variability on CTA data
6.4 Histograms generated from the pre-processing tree
6.5 Smoothed histograms generated from the pre-processing tree
6.6 Illustration of the automatic threshold determination
6.7 Calcium plaque candidate selection
6.8 Calcium plaque detection result
6.9 Manual scoring with the CalcScore prototype
6.10 Automatic scoring with the CalcScore prototype
6.11 Network of the CalcScore prototype
6.12 Network of the automatic calcium scoring algorithm
6.13 Artefacts in CTA scans
6.14 Manual vs. automatically determined thresholds
6.15 Manual vs. automatically generated calcium scores
7.1 Planar cross-section images along the vessel centerline
7.2 GVF field on a vessel MPR slice
7.3 Vessel boundary segmentation
7.4 Contour generation on the whole vessel
7.5 Radius distribution on a coronary artery
7.6 Contour closeup at a soft-plaque
7.7 Software prototype for soft-plaque quantification
7.8 Software prototype CPR view
7.9 Network of the soft-plaque quantification prototype
7.10 Soft-plaque detection results

List of Tables

2.1 Names and abbreviations for the coronary arteries
3.1 Standard HU-to-tissue associations
5.1 Detection rates on a per-vessel basis
5.2 Sensitivities and specificities for per-vessel classification
5.3 Detection rates on a per-lesion basis on training data
5.4 Detection rates on a per-lesion basis on unseen data
5.5 Overall detection rate on a per-lesion basis
5.6 Summary of the detection rates on a per-vessel basis
5.7 Summary of detection results for the lesion based evaluation
6.1 Correlation between manual and automatic calcium scoring

List of Algorithms

5.1 AdaBoost

Acknowledgements

I would like to express my gratitude to my supervisor Prof. Dr. Günther Greiner for supporting me during the course of this thesis. I greatly appreciated his kind advice as well as his profound scientific knowledge and leadership skills. Without his constant encouragement of my work, this thesis would not have been possible. Thank you so much!

Moreover, I greatly appreciate that Prof. Dr. Willi A. Kalender from the Institute of Medical Physics at the University of Erlangen kindly agreed to review this thesis. Thank you.

I also would like to thank my former supervisors Dr. Michael Scheuering, Fernando Vega-Higuera and Dr. Dominik Bernhard as well as my former colleagues Arne Militzer, Jens Kaftan and Thomas Beck from Siemens AG, Healthcare Sector, Forchheim. Their support over the years enabled the success of this thesis in the first place.

Furthermore, I thank all my former colleagues at the Chair of Computer Graphics for providing many valuable discussions and ideas as well as a lot of fun, which made working there a memorable pleasure. Namely, I want to thank Prof. Dr. Marc Stamminger, Dr. Roberto Grosso, Frank Bauer, Quirin Meyer, Marco Winter, Christian Eisenacher, Henry Schäfer, Michael Martinek, Matthias Nießner, Maria Baroti and Elmar Dolgener. In particular, I want to thank Jochen Süßmuth for kindly sharing his LaTeX template, which was used to lay out this thesis.

Moreover, I would like to thank my former supervisor and later colleague PD Dr. Peter Hastreiter for his continuing support and the many fruitful and encouraging discussions we had. I am also very grateful to my former students Paul Kaletta, Daniele Protogerakis, Li Ding, Svitlana Gayetskyy and Yesim Alicioglu for their contributions to this work and our good relationship.

Special thanks go to my friends Alexander Voss and David Moore for always supporting my work by thoroughly reviewing every English text I ever wrote. Additionally, I want to thank Dr. Jan Egger from the University of Marburg and my friend Christoph Dengler for proofreading parts of this thesis.


Likewise, I would like to express my deep love and gratitude to my family, especially to my mother, Angella Graf. Her love and constant support during my whole life taught me that nothing is impossible. Thank you very much. I also would like to thank my father, Fred Teßmann, who died way too early. I deeply regret that he is not able to see this. Finally, I want to thank my beloved girlfriend Johanna Mayers for always supporting and motivating me throughout the years of my work. Matthias Teßmann

PART I

Introduction


CHAPTER 1

Motivation

According to the World Health Organization (WHO), cardiovascular diseases (CVD) are the number one cause of death worldwide [Wor09]. In 2004, an estimated 17.1 million people died from CVDs, which represents about 29% of all global deaths. In Germany alone, 43.4% of all deaths that occurred in 2007 were caused by diseases of the circulatory system [Sta10a]. The WHO estimates that by 2030 almost 23.6 million people worldwide will die of CVD, mainly from heart disease and stroke. Considering these vast numbers, an early and reliable detection, diagnosis, treatment and aftercare of patients suffering from CVDs is vital.

In combination with the clinical standard techniques for risk stratification of patients, the evolution of medical imaging techniques over the last decades has constantly introduced more and more powerful tools for clinical diagnostics. A milestone was the introduction of multi-slice computed tomography (MSCT) in 1998, which nowadays allows cardiac imaging at a very high temporal and spatial resolution. Consequently, CT allows imaging of the coronary vessels at a quality comparable to the gold standard of clinical CVD diagnostics: invasive catheter-based angiography. In contrast, CT angiography (CTA) does not carry the fatality risk associated with conventional angiography.

Nevertheless, considering the current clinical workflow that is necessary for processing a tomographic dataset, the ever increasing number of patients and the associated data also drastically increase the manual effort and the time required by the radiologist to create a reliable diagnosis or risk stratification. It is therefore highly desirable to research and develop methods operating on this data in order to provide automatic diagnostic support. Even simple algorithmic automations, for example the detection of the heart position in the dataset, could lead to a significant increase of efficiency in the radiologist's workflow. Moreover, these computer aided diagnostic (CAD) methods improve the division of work between the radiologist and the computer. While the computer processes the acquired data automatically, the clinician regains time for the interpretation of the information contained in this data. Alternatively, more patients can be diagnosed during the same time, which is also strongly desirable due to the ever increasing cost pressure in the health care system.

Regarding CTA data with respect to the diagnosis of CVD, the development of algorithms supporting the radiologist in the detection and quantification of stenotic lesions within the cardiac vascular tree is of utmost importance. Stenoses are lipid-rich, fibrous or calcified plaques narrowing the lumen of the coronary arteries; they are the major cause of CVD and its associated health risks. Consequently, the assessment of the coronary arteries and the quantification of stenosis extent is the major focus of CVD diagnostics. However, as of today this process is mostly manual, requiring a high degree of clinical experience and time. Additionally, as with most manual diagnosis processes, it suffers from intra- and inter-observer variability. Even though image processing methods have been greatly improved during the last years, only a few semi-automatic methods for stenosis detection and quantification have been developed so far. The major requirements for such methods are simplicity of application, clinical validity and reproducibility.

This thesis presents methods that are capable of detecting and quantifying stenoses in CTA data fully automatically. In contrast to many semi-automatic methods, the algorithms developed during the course of this work do not require any manual interaction from the radiologist except for the validation of the diagnostic results. Therefore, the focus of all techniques was clinical pertinence and validity. For every automatic detection approach, prototype software products have been developed and used for clinical evaluation. Some of these prototypes have already been integrated into the standard CT processing software; others are still actively used within a clinical research context. Additionally, the evaluation of the presented algorithms was not only performed on single, isolated datasets, but rather on a large database of real clinical CT data. Thereby, the validation of the algorithms could benefit from the diagnostic reference data created by radiological experts.

1.1 Contribution

The contribution of this thesis comprises techniques that are capable of detecting and quantifying stenotic lesions in CTA data automatically. All algorithms fulfill the requirement of simple applicability, ensuring clinical validity and reproducibility by evaluation on real-world data, including reference diagnostic results provided by experienced radiologists. The algorithms have been partly published in [TVHF∗08, TVHF∗09, TVHB∗10, TVHB∗11]. In particular, the contributions of this work are:

(1) Automatic Vessel Tree Labeling
Based on an existing pre-processing pipeline for CTA data, which consists of a heart-isolation algorithm and an automatic vessel tree segmentation, a geometry-based approach for the automatic detection and labeling of the three major coronary arteries has been developed. Anatomical knowledge in conjunction with the results obtained from the pre-processing steps is used in order to detect and mark the vessels for automatic extraction. Despite its simplicity, the approach has proven to be robust and fast and to deliver reliable results. The correct labeling of the arteries makes it possible to provide a precise orientation for the radiologist and to create accurate clinical reports.

(2) Learning-Based Stenosis Detection
Since learning-based pattern classification methods have been shown to yield useful results in a variety of industrial and clinical applications, their applicability for stenosis detection on CTA data has been investigated. To this end, a multi-scale feature extraction approach for CTA image data has been developed in order to describe the coronary arteries, including their possible diseases, adequately for an AdaBoost-based classifier. Using the obtained feature values, the classification algorithm is trained with a set of selected sample data. The implemented approach is capable of detecting the position and the extent of calcified and soft-plaques in the CTA data with a high sensitivity and specificity. This method was shown to be especially useful for quickly ruling out the presence of lesions within a CTA scan.

(3) Automatic Calcium-Plaque Detection and Quantification
One major indication for risk assessment of CVD is the cardiac calcium score, which is computed from a threshold-based segmentation of calcified plaques in the data. This can be particularly difficult on CTA data, since the application of contrast agent gives the image contrast a very high variability. In this thesis, an algorithm has been developed that provides a robust determination of an accurate segmentation threshold for calcified plaques in CTA data. Subsequently, calcified plaques can be detected, segmented and quantified according to the clinical standard procedures, and calcium scores can be computed automatically. This approach has been validated against manually acquired data and has proven to yield equivalent clinical results.

(4) Automatic Soft-Plaque Detection and Quantification
Due to CT's low soft-tissue contrast, soft or lipid-rich plaques are hard to detect and measure. Within the course of this thesis, several algorithms based on active contours have been investigated in order to create a model-based description of the coronary arteries. Subsequently, these models can be used to deliver an estimate of soft-plaque location and severity.

1.2 Thesis Outline

The present thesis consists of nine chapters divided into three parts. Part 1 provides a general introduction and motivation, followed by an overview of the anatomy and the function of the human heart in Chapter 2. In Chapter 3, the current clinical imaging and diagnostic techniques used in practice are discussed. Part 2 presents the methods developed during the course of this thesis. Chapter 4 discusses the CT preprocessing pipeline and the method developed for automatic vessel labeling. In Chapter 5, the approach for learning-based stenosis detection is discussed, followed by coronary calcium detection and quantification in Chapter 6. Finally, Chapter 7 introduces techniques that can be used to detect and quantify soft-plaques on CTA data. The thesis closes with Part 3, which discusses the presented work and puts the results into context in Chapter 8, followed by a conclusion and outlook in Chapter 9.


CHAPTER 2

The Human Heart

The heart is one of the most important organs in the human body. It acts as a pump connected to the vascular system and therefore is responsible for the delivery of blood to all parts of the body, supplying oxygen and nutrients to the other organs and the musculature. The average heart weighs between 250 and 300 grams and is a little larger than a human fist [Led05]. The cardiac output, i.e. the volume of blood pumped by the heart within one minute, is about 5.0 litres for the average healthy and unstressed heart of a human male and about 4.5 litres for a human female. Consequently, the daily pump volume amounts to at least 6,480 litres. Due to the high energy turnover of the heart muscle even in periods of rest, a consistent supply of the musculature with oxygen is indispensable for its correct function [Led05]. During phases of physical rest, the oxygen consumption of the heart is already at about 8–12 ml/min/100g, increasing during periods of stress. However, unlike most muscles in the body, the heart cannot incur an oxygen debt (excess post-exercise oxygen consumption). Since the coronary arteriovenous oxygen difference is already about 10–20 Vol.% in stressless phases, it cannot be increased substantially in phases of stress. Consequently, any increased oxygen demand has to be covered solely by an increase of blood flow [Led05]. Hence, any disease or defect of the coronary arteries, which are responsible for fulfilling the heart's oxygen demands, may have fatal consequences.
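As a quick plausibility check of the daily pump volume stated above, using the lower (female) cardiac output figure of 4.5 litres per minute from the text:

    4.5\,\tfrac{\text{litres}}{\text{min}} \times 60\,\tfrac{\text{min}}{\text{h}} \times 24\,\tfrac{\text{h}}{\text{day}} = 6480\,\tfrac{\text{litres}}{\text{day}}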

2.1 Anatomy and Function

The heart is a cone-shaped, muscular hollow organ situated in the human thorax between the left and the right lung. It is bordered below by the diaphragm [Led05, PP07] (Fig. 2.1). The heart itself is enclosed within a fluid-filled sac of connective tissue, the pericardium, which allows the heart muscle to move freely between the surrounding organs. The heart wall itself consists of the endocardium, the myocardium and the epicardium.


The human heart is divided into four subparts. The left atrium and the left ventricle (i.e. the heart chamber) form the left side of the heart, and the right atrium and the right ventricle form the right side (Fig. 2.2). Technically, each side can be viewed as a synchronized series connection of a helper pump (atrium) and a main pump (ventricle). The initially pumping atrium is responsible for a better filling of the ventricle with blood, whereas the subsequently pumping ventricle supplies the connected vascular system.

Figure 2.1: The heart muscle, the nearby great vessels and their location within the human body [PP07]. Image courtesy of Elsevier GmbH, Urban & Fischer Verlag, München.

In order to ensure the correct blood flow direction during a heartbeat, the atria are separated from the chambers by the atrioventricular valves. The atrium and the ventricle of the left side are separated from each other by the mitral valve, which is formed of two cusps. The atrium and the ventricle of the right side are separated by the tricuspid valve, which is formed of three cusps. The cusps of the valves are connected to the papillary muscles by strong tendon fibers [Led05, PP07]. The blood flow from and to the great vessels (truncus pulmonalis and aorta) is regulated by the pocket valves: the pulmonary valve (valva trunci pulmonalis) and the aortic valve (valva aortae). The valve system of the heart is shown in Figure 2.3.

Figure 2.2: Left and right heart chamber [PP07]. Image courtesy of Elsevier GmbH, Urban & Fischer Verlag, München.

Figure 2.3: The heart valves separating the atria and the chambers, thereby ensuring correct blood flow direction [PP07]. Image courtesy of Elsevier GmbH, Urban & Fischer Verlag, München.


2.1.1 Heart Cycle

The oxygen-poor blood from the body is transported into the right atrium of the heart through the vena cava superior (Fig. 2.1). Then the right ventricle is filled by a contraction of the right atrium in conjunction with the opening of the tricuspid valve. With the following beat of the right ventricle and the opening of the pulmonary valve, the blood is pumped into the lung vessels through the pulmonary artery. Having been enriched with oxygen in the lungs, the blood is transported through the pulmonary veins into the left atrium of the heart. Subsequently, it is pumped through the mitral valve into the left ventricle by a contraction of the left atrium. From there, the blood is ejected through the aortic valve into the aorta, reaching the body circulation.

As a pumping device, the heart muscle rhythmically alternates between the states of relaxation (diastole) and contraction (systole). During the diastolic phase, the heart chambers are filled with blood. Subsequently, this blood is ejected into the major arteries during the systolic phase. The central clock for this cyclic process is the sinoatrial node, which is located in the right atrium. The electric stimulation originating within the sinoatrial node first propagates through the myocardium of the atria. Subsequently, it passes the atrioventricular (AV) node, which acts as an electric delay element. Finally, the excitation spreads into the ventricles, causing their contraction. Besides the division into diastolic and systolic phases, the heart cycle can be refined further [Led05]:

1. Contraction phase: The contraction of the myocardium is initiated by an electric potential emerging at the sinoatrial node. This marks the beginning of the systolic phase. The ventricular pressure increases beyond the pressure present within the atria. As a consequence, the atrioventricular valves close, leading to an isovolumetric contraction of the heart.

2. Expulsive phase: As soon as the ventricular pressure increases beyond the pressure in the aorta, the pocket valves open and the blood is ejected into the circulatory system. However, only a part of the end-diastolic blood volume (EDV) is ejected into the aorta; a rest volume (RV) remains within the ventricles. The stroke volume (SV) is then SV = EDV − RV, and the fraction SV/EDV is called the ejection fraction (EF). During the expulsive phase the pressure within the chambers is still increasing, reaching its maximum at the end of the systolic phase.

3. Relaxation phase: Shortly after the ventricular pressure decreases below the pressure of the aorta, the pocket valves close, leading to an isovolumetric relaxation of the muscle. As soon as the pressure drops below the pressure within the atria, the atrioventricular valves reopen and the ventricles start to fill with blood.

4. Filling phase: The atrioventricular valves are open and blood flows into the ventricles, leading to increased ventricular volume and pressure.
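To make the definitions above concrete, consider typical resting values from the physiology literature (illustrative textbook numbers, not taken from this thesis): an end-diastolic volume of 120 ml and a rest volume of 50 ml yield

    SV = EDV - RV = 120\,\text{ml} - 50\,\text{ml} = 70\,\text{ml}, \qquad EF = \frac{SV}{EDV} = \frac{70\,\text{ml}}{120\,\text{ml}} \approx 0.58

so a healthy left ventricle typically ejects somewhat more than half of its end-diastolic volume per beat.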

2.1.2 Vascular System

The coronary arteries (arteriae coronariae cordis) are the vasa privata of the heart and thus are responsible for its optimal oxygen supply. The arteries are embedded within fatty tissue and coated with epicardium. The main and most important branches of the vascular tree are [Led05]:

1. Arteria coronaria sinistra: The left coronary artery originates above the aortic valve and is responsible for the supply of the left chamber, the front 2/3 of the chamber septum and a small part of the right front chamber wall. It branches into the ramus circumflexus (RCX) and the ramus interventricularis anterior (RIVA), which branch further into some minor vessels.

2. Arteria coronaria dextra: The right coronary artery is responsible for the oxygen supply of the right atrium, the right ventricle and 1/3 of the chamber septum as well as the sinoatrial node and the AV node. It branches into the ramus interventricularis posterior (RIVP) and some minor vessels.

The coronary arteries can be further divided into the great coronary arteries (diameter > 400 µm), small arteries (diameter 100–400 µm) and arterioles (diameter < 100 µm) [Led05]. Schematic images of the coronary vascular system are shown in Figure 2.4. Since in technical applications and software products the English names of the coronary vessels are more common than the Latin names, Table 2.1 lists the important translations, including the abbreviations used in practice and throughout this thesis. Additionally, the vascular system of the heart includes the heart veins, which collect the venous blood of the heart. The most important heart veins are the vena cardiaca magna, the vena cardiaca parva, the vena ventriculi sinistri posterior and the vena interventricularis posterior (Fig. 2.4a).

Latin term (abbrev.)                          English term (abbrev.)
Arteria coronaria sinistra (—)                Left main (LM)
Ramus interventricularis anterior (RIVA)      Left anterior descending (LAD)
Ramus circumflexus (RCX)                      Left circumflex (LCX, CX)
Ramus interventricularis posterior (RIVP)     Right coronary artery (RCA)

Table 2.1: Names and abbreviations for the relevant coronary arteries.

2.2 Cardiac Diseases

In principle, the various and numerous diseases of the heart can be grouped into the following categories [Led05]:

• Congenital heart diseases, e.g. a ventricular septum defect,
• acquired heart defects, e.g. aortic or mitral valve stenosis,
• cardiomyopathies, i.e. true defects of the heart muscle that are not a reaction to other cardiac or systemic diseases,
• inflammatory cardiac diseases, e.g. endocarditis or rheumatic fever,
• cardiac arrhythmias,
• cardiac conduction disorders,
• coronary heart disease / coronary vascular disease (CVD).

Since the focus of this thesis is on algorithms for computer aided diagnostics of CVD, this type of disease will be discussed in further detail.


(a) Coronary vascular system in the context of the heart muscle

(b) Detail view of the main coronary arteries showing the most important branches.

Figure 2.4: The coronary vascular system which is responsible for the oxygen supply of the heart muscle [PP07]. Images courtesy of Elsevier GmbH, Urban & Fischer Verlag, München.


In the presence of CVD, the myocardium is not adequately supplied with blood due to an insufficient blood flow within the coronary arteries. This insufficient blood flow is caused by atherosclerotic plaques. Atherosclerosis is the most common pathological vascular process in the human body. The single phases during the genesis of a plaque are understood to be an inflammatory activity within the coronary arteries. The trigger of this process is the gathering of excess low density lipoproteins (LDL) at the inner vessel wall. If these lipoproteins are changed by a chemical process, e.g. oxidation, the endothelium cells of the vessel wall show an inflammatory reaction. As a consequence, white blood cells (monocytes) attach and infiltrate the inside of the vascular wall. There, they are transformed into macrophages and absorb the LDL particles. Together with a further accumulation of monocytes, a lipid stripe with a fibrous coating originates within the vessel.

If the plaque keeps growing, it causes a narrowing of the inner vessel diameter (lumen). These vascular narrowings are called stenoses. Such a stenosis usually appears symptomatically as angina pectoris. Generally, stenoses are haemodynamically relevant only if the diameter of the vascular lumen is reduced by more than 70–75%. However, in most cases the reduction of blood flow is not the cause of further incidents [VBFK06, Ach08]. As an example, only 15% of all heart attacks are caused by stenoses stopping the cardiovascular blood flow. The major risk associated with a plaque is the rupture of its fibrous coating. If the blood in the coronary arteries gets in contact with the lipid core of the plaque, the result is a coagulation of the blood: a thrombus emerges. If this thrombus is big enough to clog the artery, an acute and massive undersupply of the myocardium occurs, resulting in a myocardial infarction. Such a plaque is also called a vulnerable plaque [VBFK06]. Unfortunately, it is currently not possible to reliably identify a vulnerable plaque with any of the known imaging methods [Ach08]. However, the analysis of the components and structure of a plaque enables the clinician to estimate its vulnerability risk and stage of development. In principle, there are three major types of plaques:

1. Soft-plaque: Soft-plaques consist mainly of lipid molecules; their amount of calcium is very low. This is the typical form of plaques up to their median stage of development. During this phase, the probability of a rupture is very high, much higher than that of lesions exhibiting a higher amount of calcium. If the soft-plaque is big enough in size and extent, it can be identified on contrast-enhanced CT as a narrowing of the vessel lumen. However, it is difficult to measure the real extent and composition of the plaque, as the soft-tissue contrast of CT is very low (Fig. 2.6). Visual perception of soft-plaques in CT can be enhanced by vessel segmentation and a curved planar reformation (CPR) view of the corresponding artery (Fig. 2.5a).

2. Calcified plaque: If the amount of calcium contained within a plaque is very high, it is called a calcified plaque. This usually marks an advanced stage of plaque development. The probability of a rupture is far lower than that of a soft-plaque, which may be caused by a stabilizing effect of the calcium deposits. Calcified plaques are relatively easy to detect on CTA and non-contrast enhanced CT images (Figs. 2.7, 2.5b).

3. Positive remodeling: Remodeling is the name for the development of plaques on the outside of the vessel wall, i.e. instead of narrowing the vessel lumen this type of plaque does not cause a stenosis at all. As a consequence, many cases of remodeling are never discovered, as the patients usually remain asymptomatic. However, the danger of rupture of such a plaque still exists. It is very difficult to identify this type of plaque on a CT image.

(a) Soft-plaques

(b) Calcified plaques

Figure 2.5: Example showing CPRs of the coronary arteries from Figures 2.6 and 2.7. The plaques are marked by red circles. Due to the transformed view, the plaques become easier to detect visually than in the traditional slice view. Additionally, several plaques within the artery can be seen at once.


Figure 2.6: Example of a soft-plaque as seen on a CTA image. The plaque is located at the distal part of the CX artery and marked by the red circle. These types of plaques are hard to detect visually due to the low soft-tissue contrast of CT.

Figure 2.7: Example of a calcified plaque as seen on a CTA image. The plaque is located at the distal part of the LAD artery and marked by the red circle. These types of plaques are easy to detect visually due to the high attenuation value of dense calcium deposits.


CHAPTER 3

Cardiac Diagnostics and Imaging

This chapter provides an introduction to the imaging techniques applied in cardiac diagnostics. Besides a general overview, the focus of this chapter is on the principles of computed tomography imaging, as the algorithms presented in this thesis are optimized for the CT imaging modality.

3.1 Electrocardiography

Even though electrocardiography (ECG) is not an imaging technique itself, it is briefly discussed in this section, as it is one of the most important and most widely used standard cardiac diagnostic techniques. Moreover, ECG is used for the synchronization of the heart cycle with other imaging modalities. Basically, electrocardiography measures the activity of the cardiac conduction system by applying electrodes to the body surface. Thereby, the potential difference over the whole heart is measured as an integral vector of the potential differences of the single muscle fibers [Led05]. Figure 3.1 shows a schematic view of a standard ECG with mean time and amplitude values of the single ECG phases. The most relevant parts of an ECG are [Led05]:

• P-wave: The P-wave is a consistent, convex and positive wave with a duration of 50–100 ms. Its maximum amplitude is 0.25 mV. The wave represents the atrial excitation propagation; its initial part reflects the excitation propagation within the right atrium and its terminal part that of the left atrium. Any duration extension, deformation or delayed start can be a sign of an atrial conduction fault or ectopic excitation.

• PQ-duration: The PQ-duration is the time interval from the start of the atrial excitation to the start of the ventricular excitation, which should last for about 120–200 ms. Clinically relevant are extended or shortened durations, as they occur, for example, with narrowed or blocked AV nodes.


• QRS-complex: The QRS-complex should last for 60–120 ms and represents the excitation propagation within the ventricles. Under normal conditions, the Q-spike is ≤ 30 ms long and not deeper than 1/4 of the following R-spike. R and S are slim and sharp spikes. Extended duration or shape deformation indicates a pathology of the heart.

• ST-distance, T-wave: The ST-distance and the T-wave represent the initial and terminal phases of the repolarisation stage. The T-wave is semicircular, positive and not higher than 1/6–2/3 of the R-spike. Any increase or decrease of the ST-distance or the T-wave indicates a pathologic condition.

Figure 3.1: Schematic view of a standard ECG (based on [Led05]).

3.2 Echocardiography

Echocardiography, also known as cardiac ultrasound (US), uses standard ultrasound techniques in order to generate 2-dimensional slice images of the heart, based on the reflection of ultrasound waves in the MHz range at tissue boundaries of varying impedance [KMS06]. The impedance represents a resistance that works against the propagation of the sound waves. Hence, the higher the impedance difference between two tissues, the more sound is reflected at the boundary region. As a consequence, conclusions about distance and material can be drawn from the reflection intensity and the signal runtime. In order to reconstruct a 2-dimensional image from the 1-dimensional wave signal measurement, the sender and the receiver are panned over the scanning surface either electrically or mechanically. Hence, a whole fan of line signals is acquired, resulting in the typical circular-segment form of ultrasound images. Furthermore, 3-dimensional images can be reconstructed by additionally panning along the scan plane. For cardiac diagnostics, US scanning is usually performed transthoracically, i.e. the sensor is placed on the chest and the US waves are measured through the ribs (Fig. 3.2). As an alternative, scanning can be performed transesophageally, whereby the US sensor is mounted on an endoscope and inserted into the patient's esophagus [KMS06, Led05]. Additionally, during the course of a cardiac catheterization, intravascular ultrasound (IVUS) can be used in order to assess the condition of the vessel walls.

Figure 3.2: Standard cardiac ultrasound views used for diagnostics with their corresponding anatomy illustrations [Lyn06].


3.3 Cardiac Catheterization

Classic projective X-ray imaging for cardiac diagnostics is mainly used for invasive coronary angiography, also known as catheter-based angiography or coronary catheterization. It is a minimally invasive procedure for accessing the coronary circulation. The method can be used for both diagnostic and interventional purposes and is currently the gold standard for CVD diagnostics and intervention. Coronary catheterization is performed in a cardiac catheterization lab. With current designs, the patient must lie on a radiolucent table. The X-ray source and detector are mounted on a C-stand, positioned on opposite sides of the patient's chest. The X-ray equipment can be moved freely under motorized control around the patient. Consequently, images can be taken quickly from multiple angles [Led05, KMS06]. In order to create the X-ray images, a physician guides a small tube-like device, called a catheter, typically about 2.0 mm in diameter, through the large arteries of the body until the tip is just within the opening of one of the coronary arteries. By design, the catheter is smaller than the lumen of the artery it is placed in. The catheter itself is designed to be radiodense for visibility, and it allows a clear, watery, blood-compatible radiocontrast agent to be selectively injected and mixed with the blood flowing through the artery. Typically, 3–8 cc of the radiocontrast agent is injected for each image, making the blood flow visible for about 3–5 seconds. Subsequently, the radiocontrast agent is rapidly washed away by the flowing blood into the coronary capillaries and veins. Without the contrast agent injection, the blood and the surrounding heart tissue appear as only mildly shape-changing, and no details of the blood and organ structure are visible. Typical X-ray images taken during coronary catheterization are shown in Figure 3.3. In the early days of clinically applied coronary catheterization, the procedure frequently took several hours and involved severe to fatal complications for about 2–3% of the patients. Due to several procedural improvements over time this number could be reduced. However, major complications still occur in about 1% of all examinations. For the significant number of patients who are not likely to suffer from an obstructive stenosis and thus need no immediate invasive treatment, this risk can be avoided by using computed tomography angiography as an alternative [Ach07].


(a) Coronary angiography image showing mainly the CX and the RCA arteries.

(b) Coronary angiography image showing mainly the CX, LM and the LAD arteries.

Figure 3.3: Coronary angiography images of the major coronary arteries as taken during coronary catheterization [PP07]. Images courtesy of Elsevier GmbH, Urban & Fischer Verlag, München.


3.4 Computed Tomography

Computed tomography (CT) imaging is a tomographic extension of traditional 2-dimensional projective X-ray imaging. Its introduction by Godfrey N. Hounsfield in 1968 provided for the first time a true 3-dimensional representation of objects instead of only a 2-dimensional projection [PB07]. The invention of CT marks the birth of modern tomographic imaging modalities and was recognized with the Nobel prize in 1979. The general structure of a CT scanner consists of an X-ray tube, a detector array and a patient table in between the tube and the detector. The image data acquired by CT consists of a series of individual X-ray projections which are composed into one volumetric dataset.

3.4.1 Basic Principle of CT Imaging

One such X-ray projection represents the intensity profile measured by a detector along a pencil-beam-type X-ray passing through the scanned object. This corresponds to the intensity profile being scanned along a straight line at a given direction [CJS93]. In order to measure the projection data at several different angles, the scanning is repeated at each angular view by rotating both the X-ray tube and the detector. Subsequently, a slice image of the scanned object can be reconstructed from a full rotational scan of these intensity profiles. Afterwards, the table on which the object is positioned is moved forward and the process is repeated for the acquisition of the next slice image of the object. Mathematically, the attenuation measured by the detector can be represented as the line integral over a material dependent attenuation coefficient. The photon intensity emerging from a narrow beam of radiation along the line L can be written as [CJS93]

    I = I_0 \cdot e^{-\int_L \mu(x, y)\, dl} ,    (3.1)

where I_0 is the initial, i.e. unattenuated, radiation intensity, I is the detected attenuated radiation intensity and µ is the material-specific attenuation coefficient. Rearranging yields

    \ln\!\left(\frac{I_0}{I}\right) = \int_L \mu(x, y)\, dl .    (3.2)

Hence, the natural logarithm of the ratio I_0/I is equivalent to the accumulation of the X-ray attenuation coefficients along a line through the object. The path L is determined by the X-ray through the 2-dimensional X-ray absorption distribution µ(x, y), where x and y are the spatial coordinates within the image plane. In order to reconstruct an image from the measured intensity profiles, it is required to calculate the material-specific attenuation coefficient as a position dependent function µ(x, y). This can be achieved using the Radon transform. The Radon transform of the function µ(x, y), denoted p_ϕ(x'), is defined as the line integral of µ along a line parallel to the y'-axis at a distance x' from the origin. Here, (x', y') are the coordinates of (x, y) rotated by the angle ϕ. By definition, the Radon transform of µ is calculated as [Rad86, CJS93]

    p_\varphi(x') \equiv \mathcal{R}[\mu(x, y)] = \int_{-\infty}^{\infty} \mu(x' \cos\varphi - y' \sin\varphi,\ x' \sin\varphi + y' \cos\varphi)\, dy' ,    (3.3)

where

    \begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} .    (3.4)

Consequently, the function p_ϕ(x') is the 1-dimensional projection of µ(x, y) at an angle ϕ. This means that the Radon transform performs the line integral of the 2-dimensional image data along y'. Therefore, the X-ray projection that is acquired by the scanner (Eq. 3.2) is in essence the Radon transform of the object data. In order to reconstruct µ(x, y) from p_ϕ(x'), the projection theorem, also known as the central slice theorem, can be used. If the 1-dimensional Fourier transform is applied to p_ϕ(x') in conjunction with a transformation of the coordinates (x', y') to (x, y), we have [CJS93]

    P_\varphi(\omega) \equiv \mathcal{F}_1\!\left[ p_\varphi(x') \right] = \int\!\!\int_{-\infty}^{\infty} \mu(x, y)\, e^{-i\omega(x \cos\varphi + y \sin\varphi)}\, dx\, dy .    (3.5)


Since the Fourier domain coordinates ω_{x'} and ω_{y'} correspond to ω_{x'} = ω cos ϕ and ω_{y'} = ω sin ϕ, this can be rewritten as

    P_\varphi(\omega) \equiv \mathcal{F}_1\!\left[ p_\varphi(x') \right] = F(\omega, \varphi) ,    (3.6)

where F is the 2-dimensional Fourier transform of µ(x, y), and ω and ϕ represent the polar coordinates in the Fourier domain. Consequently, the Fourier transform of the projection data p_ϕ(x') at a given view angle ϕ is identical to the radial line of the 2-dimensional Fourier transform of the object passing through the origin at the angle ϕ [CJS93]. Therefore, in order to reconstruct the original image function µ(x, y), all projections p_ϕ(x') are transformed by the 1-dimensional Fourier transform, yielding P_ϕ(ω). These values are then used as the frequency-domain data of the 2-dimensional Fourier transform of µ(x, y), and applying the inverse Fourier transform to this data yields the original function µ(x, y). In order to solve this backprojection problem, different reconstruction algorithms have been used over the development of CT scanners. While in the early days algebraic reconstruction techniques (ART) were used, the current standard method for image reconstruction is filtered backprojection [PB07, Kal06].

The first generation of CT scanners was built after the theoretical approach of single pencil-beam-type X-rays. It used a single ray emitter and a single detector on the opposite side of the object. In order to acquire a single data slice, the pencil-beam-type X-ray was translated along the object. Subsequently, the emitter was rotated and the next series of translations was executed [PB07]. All in all, a single examination using this first generation of scanners would take from minutes up to several hours. However, the scanning technology has been extended in the second, third and fourth generation scanner hardware. The currently most widely used third generation CT scanners have a fan-beam emitter and a number of detectors capable of covering the whole object to be scanned. An overview of the principal setup of the four scanner generations is shown in Figure 3.4. With third generation scanners, the emitter/detector system rotates permanently around the patient, while the patient table moves continuously in the perpendicular direction in order to acquire a full data volume [Kal06]. These systems are called spiral or helical CT; their principle of operation is depicted schematically in Figure 3.5. Furthermore, since 1999, multiple layers of detectors and emitters have been combined such that current state-of-the-art CT scanners can acquire up to 64 slices at a time.
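The filtered backprojection principle can be illustrated in a few lines of Python. The following sketch is not part of the original thesis; it simulates parallel-beam projections of a standard test phantom with scikit-image's Radon transform and reconstructs the image with filtered backprojection (the filter_name argument assumes scikit-image ≥ 0.18):

    # Minimal filtered backprojection demo (illustrative sketch, not the
    # reconstruction code of any particular CT scanner).
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    image = shepp_logan_phantom()                 # 2D test object mu(x, y)
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)

    # Forward model: the sinogram holds one projection p_phi(x') per
    # angle, i.e. the Radon transform of the image (cf. Eq. 3.3).
    sinogram = radon(image, theta=theta)

    # Inverse model: filtered backprojection with a ramp filter, the
    # standard reconstruction method mentioned above.
    reconstruction = iradon(sinogram, theta=theta, filter_name='ramp')

    error = np.sqrt(np.mean((reconstruction - image) ** 2))
    print(f"RMS reconstruction error: {error:.4f}")

Increasing the number of view angles reduces the streak artifacts of the backprojection, which mirrors why a full rotational scan of intensity profiles is acquired in practice.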


Figure 3.4: Overview of the four CT scanner generations [Kal06].

Figure 3.5: Schematic view of the CT scanning process [KMS06]. Image courtesy of Elsevier GmbH, Urban & Fischer Verlag, München.


3.4.2 Hounsfield Units

The attenuation coefficients that are calculated during image reconstruction are not used directly in clinical practice. Instead, they are mapped to a CT value measured in Hounsfield units (HU). To this end, two materials have been assigned fixed values [KMS06]: CT_value(µ_Water) = 0 HU and CT_value(µ_Air) = −1000 HU. All values in between are obtained by linear rescaling:

    CT_{\text{value}} = \frac{\mu - \mu_{\text{Water}}}{\mu_{\text{Water}}} \cdot 1000\ [\mathrm{HU}] .    (3.7)

Table 3.1 lists a few of the commonly used CT values and their associated materials in radiology. These values are understood to be approximate guideline values within a clinical context.

Tissue                     CT value [HU]
Air                        −1000
Lung                       −400 to −175
Water                      0
Fatty tissue               −65 ± 5
Flowing blood              55 ± 5
Coagulated blood           80 ± 10
Contrast-enhanced blood    200–500
Bone                       100–1500
Liver                      55 ± 60

Table 3.1: Some of the standard guideline values for the association of CT value and corresponding tissue in radiological practice [KMS06, OFB∗07]. The values for air and water are the assigned fixed points of the scale.
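As a small illustration of Eq. 3.7 (an illustrative sketch, not code from this thesis), the mapping from attenuation coefficients to Hounsfield units is a one-liner; the value chosen for µ_Water below is an assumed, roughly realistic coefficient:

    def to_hounsfield(mu, mu_water):
        """Map a linear attenuation coefficient to a CT value in HU (Eq. 3.7)."""
        return (mu - mu_water) / mu_water * 1000.0

    # Sanity checks against the fixed points of the scale:
    mu_water = 0.19  # assumed attenuation coefficient of water in 1/cm
    print(to_hounsfield(mu_water, mu_water))  # 0.0 HU (water)
    print(to_hounsfield(0.0, mu_water))       # -1000.0 HU (air, mu ~ 0)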

3.4.3 Cardiac CT

Imaging of the human heart is a challenge for any imaging modality. A high temporal resolution is required in order to capture a still image of the continuously beating heart without severe imaging artefacts. This has become clinically possible with the introduction of multi-slice spiral-CT hardware, as these scanners allow rotation times of less than 0.5 seconds. In order to acquire image data of the single phases of the heart cycle, an ECG is recorded in addition to the CT scan. In principle, there are two different


possibilities for the combination of ECG and CT data acquisition: prospective ECG-triggering and retrospective ECG-gating. For prospective ECG-triggering, the mean distance between two R-spikes, called the R-R interval, is determined. Using this distance and a previously defined percentage of this time span (e.g. 80% for data acquired in the diastolic phase), the start time for imaging relative to an R-spike is set. After imaging of the corresponding slice, the patient table is moved forward and the next slice is recorded using the same, previously defined time intervals (Fig. 3.6). This imaging technique is technically easy to implement; however, it strongly suffers from the limitations of sequential imaging.
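In code, the trigger logic reduces to a simple delay computation. The following sketch uses assumed variable names and is purely illustrative (it is not scanner firmware and not part of the thesis):

    import numpy as np

    def prospective_trigger_delay(r_peak_times_s, phase_fraction=0.8):
        """Compute the imaging start delay after an R-spike.

        r_peak_times_s: timestamps of detected R-spikes in seconds.
        phase_fraction: position within the R-R interval, e.g. 0.8 for
        the diastolic phase as mentioned in the text.
        """
        rr_intervals = np.diff(r_peak_times_s)   # individual R-R intervals
        mean_rr = rr_intervals.mean()            # mean R-R interval
        return phase_fraction * mean_rr          # delay after the next R-spike

    # Example: a steady heart rate of 60 bpm gives a 0.8 s trigger delay.
    print(prospective_trigger_delay([0.0, 1.0, 2.0, 3.0]))  # 0.8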

(a)

(b)

Figure 3.6: a) Sequential volume coverage with prospective ECG-triggering and 16slice acquisition. Due to scan time limitations, a scan can be obtained only every other heart cycle at usual heart rates. b) Scan coverage of the heart with a 16-slice CT scanner. 16 image slices are acquired with an individual scan. [OFB∗ 07].

In order to take advantage of modern spiral-CT, retrospective ECG-gating is used. Here, CT imaging is performed continuously, without taking the heartbeat into account. Additionally, an ECG is recorded during the whole scan. After the scan is complete, the recorded ECG data can be used to select the slices suitable for reconstructing an artefact-free image (Fig. 3.7). Consequently, not all of the acquired image data is used for reconstruction. However, even though data is "wasted" by this method, it is more flexible than the prospective triggering approach. Clinically applied CT diagnostics differentiates between two principal imaging techniques: native diagnostic imaging and contrast agent diagnostic imaging.

CHAPTER 3

28

(a)

Cardiac Diagnostics and Imaging

(b)

Figure 3.7: a) Scan with continuos table feed and exposure. b) Stacks of overlapping images can be reconstructed from the scan data in different phases of the cardiac cycle by selection of the data ranges in relation to the R-spikes [OFB∗ 07].

phy (CTA) images are acquired with contrast agent administration. For most diagnostic issues, native CT images are always acquired first. Only then contrast enhanced images are taken [KMS06]. As a contrast agent for CT, and especially for vascular diagnostics, usually water-soluble, non-ionic contrast agents are employed [KMS06]. Special care has to be taken in order to avoid contrast agent administration to allergic patients. Considering diagnosis and risk stratification of CVD, native CT imaging is especially useful for the determination of the cardiac calcium score (CAC score), which has been established as a reliable risk indicator, predicting coronary events even beyond the standard Framingham risk factors such as age, hypertension, family history, hyperlipidemia, and diabetes [DMM51, BAB∗ 06, Ach07, Ach08]. Contrast enhanced CT angiography, however, is becoming more and more a practical alternative to the traditional invasive coronary angiography, avoiding the risk associated with catheterization if it is unnecessary [Ach07].

PART II

Computer Aided Diagnosis Methods for Cardiac CT Data


CHAPTER 4

Cardiac CT Preprocessing

For the efficient analysis of the coronary arteries on a CTA dataset, preprocessing of the image data is required. The first step is to locate the heart within the dataset and isolate it from other structures, such as the ribs, the spine and the lungs, which are usually present in a chest CT scan. This process is called heart isolation and is based on a graph-cut segmentation [BJ01] combined with an automatic detection of the seed region and a "blob" constraint [FLBF∗06].

4.1 Heart Isolation

The type of graph-cut segmentation used [BJ01] requires an initialization with some points known to lie within the region of interest and some points known to lie outside of it. Automatic determination of such seed regions begins by computing the volumetric barycenter of the dataset weighted by the image intensity values. Since the data used are CTA studies, the brightest regions are bone and blood. The location of the barycenter is thus mostly affected by the distribution of these tissues. However, the distribution of bone in the chest is relatively symmetric about the heart, and the largest concentrations of blood are generally found within the chambers of the heart. Consequently, in nearly all cases the barycenter of the data lies within the heart [FLBF∗06]. Given that point within the heart, the pre-segmentation step then consists of determining the ellipsoid of maximum volume contained within the heart. Starting from the detected barycenter, an ellipsoid is progressively grown until it touches the heart's surface. The wall contact points are generally assumed to have intensity values ≥ -224 HU; any voxels with lower intensity values are considered to belong to structures outside the heart. The result is an initial shape estimate of the heart muscle, which is further refined by the graph-cut segmentation and a subsequent dilation of the isolation mask [FLBF∗06]. Moreover, the resulting isolation mask can be used


to guide an active shape model (ASM) for the generation of a heart-hull triangle mesh. This algorithm is provided as part of the SCRCardiacSegmentation MeVisLab module by the SIEMENS AG, Forchheim. The corresponding results of the heart isolation process are illustrated in Figure 4.1; together with the coronary tree segmentation, they form the basis for all further processing steps.
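The intensity-weighted barycenter described above can be sketched in a few lines of NumPy. The array layout and the shifting of CT values to non-negative weights are assumptions made for this illustration, not part of the referenced implementation.

    import numpy as np

    def weighted_barycenter(volume):
        # Shift CT values so that dark (air) voxels carry no negative weight.
        weights = volume.astype(np.float64) - volume.min()
        total = weights.sum()
        # Voxel coordinate grids along each axis.
        grids = np.meshgrid(*[np.arange(n) for n in volume.shape], indexing="ij")
        # Weighted mean of the coordinates = volumetric barycenter.
        return np.array([(g * weights).sum() / total for g in grids])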

Figure 4.1: Example result from the heart isolation phase. On the left side, a slice image of the original CT scan can be seen. On the right top, the result of applying the heart isolation mask is shown. The corresponding heart hull contour is drawn in red. The generated heart hull mesh is shown at the right bottom.


4.2 Coronary Tree Segmentation

After the heart isolation, an automatic vessel tree segmentation is performed as presented in [Tek08] and [GT08]. The segmentation of the coronary vascular tree is carried out in three consecutive steps:

1. automatic aorta detection and segmentation,
2. automatic detection of the ostia points,
3. extraction of a vessel tree model by a local center-axis description.

The aorta detection is performed with a fast variant of the isoperimetric algorithm [Gra06], which is applied to a connected component mask of the aorta [GS06]. The threshold used for the connected component generation is computed based on a segmentation of the left ventricle, which itself is also obtained using the method presented in [Gra06]. After the segmentation of the aorta mask, a front propagation is started using a medialness measure as a cost function in order to detect the ostia points [Tek08]. This medialness measure integrates edge responses along circles of different sizes and is therefore not sensitive to isolated noise at a particular location. The medialness measure proposed in [Tek08] gives strong responses at the center of a vessel; those responses drop rapidly towards the vessel boundaries, and only very small responses are obtained in non-vascular areas. Consequently, small vessels, closely adjacent vascular structures, and vessels near the ventricles, for example, can be detected reliably. From the detected ostia locations the vessel tree segmentation can be started. For the vessel tree segmentation, the same medialness measure, computed from 2-dimensional cross-sectional models, is used. The underlying idea is that an ideal 2-dimensional cross-sectional vessel profile consists of a circular or elliptic bright disc surrounded by a dark area. In order to integrate this assumption, the medialness response $m(\vec{x}_0)$ at a position $\vec{x}_0$ is calculated from a circle $C(\vec{x}_0, R)$ centered at $\vec{x}_0$ with radius R [Tek08]:

$$m(\vec{x}_0) = \max_R \left\{ \frac{1}{N} \sum_{i=0}^{N-1} \Theta\!\left(\vec{x}_0 + R\,\vec{u}(2\pi i/N)\right) \right\} , \quad (4.1)$$

where $\vec{u}(\alpha) = \sin(\alpha)\,\vec{u}_1 + \cos(\alpha)\,\vec{u}_2$, and $\vec{u}_1$ and $\vec{u}_2$ are orthogonal vectors defining a 2D plane. $\Theta$ is the normalized edge response measure along a ray $\vec{u}_\alpha$ starting at $\vec{x}_0$. For a thorough discussion and analysis of the edge


response measure, the reader is referred to [Tek08] and [GT08]. Subsequently, the medialness measure $m(x)$ is used to determine the center-axis curve $C(s)$ between two points $p_0$ and $p_1$ travelling through the center of a vessel. In order to determine the curve, a minimum cost path problem must be solved [DC01, LY07, TdTF∗07], with $E(C)$ as the total energy along the curve C. $E(C)$ can be described by

$$E(C) = \int \left( P(C(s)) + w \right) ds . \quad (4.2)$$

Here, $P(\cdot)$ is called the potential and s is the arc length of the curve. For the extraction of a center-axis representation of a vessel in the image data, the potential function used in the energy functional corresponds to the inverse of the medialness measure, i.e. $P(x) = \frac{1}{m(x)}$. The curve with the minimum total energy can be computed from the minimum accumulative cost, which measures the minimal energy at a point p integrated along a curve starting at the point $p_0$ [Tek08]. Essentially, this corresponds to finding the minimum cost path on a connected graph and can be solved using Dijkstra's algorithm [Dij59]. Examples of the automatic coronary tree extraction algorithm are shown in Fig. 4.2. The result of the automatic propagation method is an unconnected point cloud representing an approximation to all possible vessel centerlines and their corresponding branches within the dataset. In order to obtain a single and smooth vessel centerline, a user-provided seed point has to be set on a point located inside of the vessel within the image data. The algorithm can then backtrack and extract the vessel centerline from the point cloud. In order to avoid the required manual interaction, the seed points for the major arteries can be determined automatically, as presented in the next section. Note that in most cases, all important vascular branches are detected. However, it is possible that errors occur during segmentation due to imaging artefacts (Fig. 4.2c, d). Even though these preprocessing errors are very rare, they need to be taken into consideration for the evaluation, since every automatic stenosis detection method presented in this thesis directly depends on the quality of the vascular tree extraction algorithm. This algorithm is provided as part of the SCRCardiacSegmentation MeVisLab module by the SIEMENS AG, Forchheim.
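The minimum cost path computation of Eq. 4.2 can be illustrated with a plain Dijkstra implementation over a discrete voxel graph. The neighbors and medialness callbacks are hypothetical interfaces standing in for the actual front propagation of [Tek08]; this is a sketch under those assumptions, not the referenced implementation.

    import heapq

    def center_axis(neighbors, medialness, p0, p1, w=0.01):
        # Dijkstra's algorithm with edge cost (P(x) + w) * ds, P(x) = 1/m(x).
        dist, prev = {p0: 0.0}, {}
        heap = [(0.0, p0)]
        done = set()
        while heap:
            d, x = heapq.heappop(heap)
            if x == p1:
                break
            if x in done:
                continue
            done.add(x)
            for y, ds in neighbors(x):  # adjacent node and step length
                c = d + (1.0 / max(medialness(y), 1e-6) + w) * ds
                if c < dist.get(y, float("inf")):
                    dist[y], prev[y] = c, x
                    heapq.heappush(heap, (c, y))
        # Backtrack from p1 to p0 to recover the discrete center-axis curve.
        path = [p1]
        while path[-1] != p0:
            path.append(prev[path[-1]])
        return path[::-1]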


Figure 4.2: Example results from the coronary centerline tree extraction algorithm. The detected ostia points are shown in red. In subfigures c) and d) an incomplete segmentation is shown.


4.3 Automatic Tree Labeling

One of the major drawbacks of the coronary tree extraction algorithm presented in the previous section is the requirement of a manually placed seed point for the extraction of a single vessel centerline from the point cloud representing the vessel tree. Consequently, in order to employ this method in automatic diagnostic algorithms, these seed points have to be determined automatically. This can be achieved by exploiting the information contained within the dataset together with anatomical knowledge.

4.3.1 Algorithm

Since the haemodynamically most relevant stenoses occur within the major coronary arteries (LAD, LCX, RCA), the automatic branch detection and labeling algorithm is currently designed to segment only these three vessels. The basic idea is very simple. The DICOM (Digital Imaging and Communications in Medicine) standard for handling, storing, printing and transmitting information in medical imaging requires a wealth of information to be included in the image data provided with a CT scan. One part of that information is the position and orientation of the patient during the scan process, as well as the resolution and slice spacing of the acquired data, in the form of a modelview matrix M [NEM10]. Consequently, each coronary CT dataset can be oriented in its own 3-dimensional coordinate system, in which the orthogonal image planes are well defined. Within the dataset, the xy-plane corresponds to the axial body plane, the yz-plane corresponds to the coronal plane and the xz-plane corresponds to the sagittal body plane. Once the preprocessed coronary tree is obtained, its associated point cloud can thus be spatially divided into its branches using the axis-aligned bounding box (AABB) of the heart isolation mesh as a reference frame. The AABB of the mesh can easily be calculated by scanning the mesh vertices for their minimum and maximum coordinates in the x-, y- and z-directions (Fig. 4.3). This yields the corner vertices of the bounding box $\vec{b}_{min} = (x_{min}, y_{min}, z_{min})^T$ and $\vec{b}_{max} = (x_{max}, y_{max}, z_{max})^T$. Using the center of the bounding box,

$$\vec{c} = \frac{1}{2}\left(\vec{b}_{min} + \vec{b}_{max}\right) , \quad (4.3)$$


it can be divided into subvolumes by simple axis-aligned planes. For this purpose it is sufficient to use the planes defined by the orthogonal normal vectors of Euclidean space,

$$\vec{n}_{yz\text{-}plane} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} , \quad \vec{n}_{xz\text{-}plane} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} , \quad \vec{n}_{xy\text{-}plane} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} . \quad (4.4)$$

Subsequently, the well-known plane equation

$$\vec{n} \cdot (\vec{x} - \vec{c}) = d \quad (4.5)$$

can be used to split the point cloud into regions of interest. Here, $\vec{n}$ is one of the normal vectors, $\vec{c}$ is the center of the bounding box and d indicates the position of an inserted point $\vec{x}$ with respect to the plane. If d < 0, the point is located on the opposite side of $\vec{n}$; if d = 0, it is located right on the plane; and if d > 0, the point is located on the same side as $\vec{n}$. With the knowledge of orientation provided by the DICOM standard, this allows the classification of points as being on the left or right (sagittal plane), front or back (coronal plane) and upper or lower (axial plane) side of the heart.
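A minimal sketch of the plane test of Eq. 4.5 applied to a centerline point cloud; the variable names (tree_points, c) in the usage comment are hypothetical.

    import numpy as np

    def split_by_plane(points, normal, center):
        # Eq. 4.5: d = n . (x - c); the sign of d gives the side of the plane.
        d = (points - center) @ normal
        return points[d > 0], points[d <= 0]

    # Example: separate the left and right branches with the yz-plane normal.
    # left, right = split_by_plane(tree_points, np.array([1.0, 0.0, 0.0]), c)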

Figure 4.3: The coronary artery centerline tree in the context of the heart isolation mesh. Additionally, the bounding box of the heart isolation mesh which is used as a reference frame for the point cloud separation is shown.


Firstly, the yz-plane is used to separate the left and right parts of the coronary branches: each point of the point cloud representing the vessel tree is inserted into Eq. 4.5 and sorted into a list according to the result. Subsequently, the xy-plane is used to divide the subvolume representing the left branch in order to determine the branches belonging to the LAD and the LCX arteries, respectively. To this end, the plane is translated from the center such that the volume is split with a ratio of 2:3, accounting for the fact that the LM artery usually spawns its branches after 2 to 3 cm [Led05, PP07]. Then, Eq. 4.5 and the xy-plane normal are used to split the point cloud of the left branch into its LAD and LCX parts. The xz-plane is used to remove the bent, distal part of the RCA, which would otherwise be classified as belonging to the LM branch during the first step of the algorithm. Thereby, the original point cloud is separated into three individual point clouds, each representing a coarse approximation to the major arteries. This approximation is shown in Fig. 4.4.

Figure 4.4: The result of the branch separation process. The RCA part of the point cloud is drawn in red, the LAD part in green and the CX part in blue.

After separation of the single branches, a seed point for the vessel tracing algorithm can be determined. For this purpose, the individual branches are scanned point by point and the points are sorted according to their distance to the ostia location. Additionally, the distance to the bounding box corners is considered. Consequently, the ideal segmentation seed for one of


the major coronary arteries is located at a maximum distance from the ostia location and additionally at a minimum distance to its associated bounding box corner, i.e. the right bottom front corner for the RCA, the left bottom front corner for the LAD and the left bottom back corner for the LCX, respectively.

Figure 4.5: Result of the automatic tracing process. The RCA is drawn in red, the LAD in green and the CX in blue, respectively.

Since the preprocessing tree also exhibits many small branches exiting from the major arteries, it is possible that the point satisfying the previously stated constraints is located within such a small branch, which would lead to a wrong segmentation. In order to deal with this issue, a length constraint is added to the point selection algorithm: only branches consisting of at least 150 individual centerline points are considered as vessel candidates. When a point fulfilling all requirements has been detected on one of the branches, it is used as a seed point for the vessel tracing algorithm presented in Section 4.2. The tracing is then triggered automatically for each of the detected branch locations. The result of this operation within the context of the preprocessed tree is shown in Figure 4.5. The individual vessel centerlines are stored and the preprocessed tree can be removed. Figure 4.6 shows an example. These centerlines are then used for further processing. Note that the presented method does not explicitly attempt to classify the LM branch of the coronary vascular tree.
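The seed selection just described can be sketched as follows. The additive combination of the two distance criteria is an assumption, since the text specifies only that both distances are considered together with the length constraint.

    import numpy as np

    def select_seed(branches, ostium, corner, min_points=150):
        # branches: list of (N_i, 3) arrays of centerline points per branch.
        best, best_score = None, -np.inf
        for branch in branches:
            if len(branch) < min_points:
                continue  # length constraint: skip small side branches
            # Far from the ostium, close to the assigned bounding-box corner
            # (additive score is a hypothetical combination rule).
            score = (np.linalg.norm(branch - ostium, axis=1)
                     - np.linalg.norm(branch - corner, axis=1))
            i = int(np.argmax(score))
            if score[i] > best_score:
                best, best_score = branch[i], float(score[i])
        return best  # None if no branch fulfills the constraints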


Figure 4.6: Final result of the automatic branch labeling on the preprocessing tree. The vessel centerlines of the three major coronary arteries have been extracted, labeled and can subsequently be used for further processing. The LM branch of the coronary arteries can be determined by examining the overlapping area of the LAD and CX branches (subfigure b).


However, since after the last step separate segmentations of the LAD and the LCX branches exist, the LM section can easily be determined by evaluating the overlapping segments of the LAD and the LCX branches (Fig. 4.6 b). The presented method for automatic branch labeling and segmentation has proven to be highly robust and reliable. This is true even if the preprocessing tree exhibits severe segmentation errors, such as completely missing or erroneous branches. In these cases, the detection constraints, which are based on anatomic observations, will not be fulfilled by the points on the affected branches and no segmentation occurs. In order to evaluate the quality and the clinical applicability, a prototype application has been created within the MeVisLab [MeV] environment. Screenshots of the software are shown in Figure 4.7. The software simply requires manual loading of a dataset. Once the CT scan is selected, the automatic pre-processing pipeline starts, ultimately resulting in a segmentation and labeling of the three major coronary arteries. The implementation of the prototype software consists of two parts. The automatic branch and seed point detection algorithm is implemented as a C++ MeVisLab module (BranchDetection), taking as input the selected dataset as well as the coronary tree pre-segmentation of the SCRCardiacSegmentation module. The second part is a Python [Pyt] script driving the user interface and connecting and triggering the various other modules, especially the SCRCardiacSegmentation module. The MeVisLab network driving the software prototype is shown in Figure 4.8.

4.3.2 Results

The presented algorithm has been evaluated on an extensive database of 160 CTA scans provided by the SIEMENS AG, Forchheim. The algorithm has proven to be very robust. In about 15% of the cases (23 datasets), the coronary tree pre-segmentation was not able to segment all branches of the coronary tree. However, the branches present in the pre-segmentation were detected and labeled correctly (100% detection rate). Of the remaining 137 datasets, which correspond to 411 coronary artery branches, three misclassifications of the LCX artery occurred. This corresponds to a false detection rate of 0.73% and a detection success rate of 99.27%.


Figure 4.7: The prototype software for evaluation of the automatic branch labeling algorithm. The detected vessel labels and their corresponding segmentation can be evaluated within the context of the heart volume (subimage a) and without (subimage b). Additionally, the segmented centerline points are projected onto a 2-dimensional slice view of the CT dataset.


Figure 4.8: The MeVisLab network behind the prototype software.

The average detection and segmentation time for a single centerline is about 0.1 s. Since the pre-processing, consisting of the heart isolation and the pre-segmentation of the coronary tree, usually takes between 20 and 30 seconds, the time required for the labeling process is negligible for practical applications.


CHAPTER 5

Learning-Based Stenosis Detection

Pattern recognition and learning-based classification systems have proven to be applicable to a wide variety of practical problems in industrial and clinical settings [VJ04, BCF∗06, ZBG∗07, TZB∗06]. Due to their good performance, investigating the capabilities of these algorithms with respect to stenosis classification was a reasonable approach to take in this thesis. The algorithms presented in this chapter have been partly published in [TVHF∗08] and [TVHF∗09].

5.1 Inductive Learning and Pattern Classification

One important class of learning algorithms is that of inductive learning from observations [RN03]. Any algorithm for deterministic learning from observations is given as input the correct value of an unknown function for particular inputs and must try to recover the unknown function or an approximation of it. More formally, an example i is a pair $(\vec{x}_i, f(\vec{x}_i))$, where $\vec{x}_i$ is the input data and $f(\vec{x}_i)$ is the output of the unknown function f applied to $\vec{x}_i$. Hence, the task of pure inductive inference that must be carried out by the learning approach is: given a collection of examples of f, return a function h that is able to approximate f. The complete collection of examples $D = \{(\vec{x}_1, f(\vec{x}_1)), \dots, (\vec{x}_N, f(\vec{x}_N))\}$ is called the training set. It consists of N distinct examples $D_i$, $i \in \{1, \dots, N\}$. The function h is called a hypothesis. The goal of pattern classification is therefore to determine a hypothesis that is capable of associating an example with its corresponding class [RN03]. The task domain $\Omega$, from which the examples are drawn, is partitioned into J disjoint classes $\Omega_j$, $j \in \{1, \dots, J\}$, i.e. $\Omega_\nu \cap \Omega_\mu = \emptyset$ for $\nu \neq \mu$. Consequently, the general problem that has to be solved by numerical classification systems is to find an assignment of a feature vector $\vec{x}_i$ that represents such an example to a


class $\Omega_j$ from the set of available classes. The features of the feature vector are calculated from the input data, and it is generally assumed that these features allow the discrimination of the different structures contained in the data [RN03]. If the numerical feature vector $\vec{x}_i$ is a real-valued K-dimensional vector, the classification problem can be formulated mathematically by the discrete mapping

$$h : \mathbb{R}^K \mapsto j , \quad (5.1)$$

where the function h is the decision function (i.e. the hypothesis) which assigns a class index j to each of the feature vectors $\vec{x}_i$. The design of the decision function, which is also referred to as the classifier, strongly depends on the problem domain. Generally, a classifier is constructed by taking into account observable data, experience and domain knowledge. Using these information sources, a general decision rule is computed which then allows the classification of features that were not part of the sample input data. The process of constructing this decision rule is usually called the learning stage (training) and can be done automatically. If sample data (ground truth) is available, this construction process is also termed supervised learning [RN03].

5.1.1 Decision Stumps

One such decision function is the decision stump [AL92]. It is a very simple form of a classification function based on a decision tree [Qui86]. Unlike a decision tree, however, the stump has only one internal node, which is immediately connected to the terminal nodes, i.e. the decision stump is effectively a one-level decision tree. A decision stump therefore makes a prediction based on the numerical value of just one input feature at a time. In combination with the concept of ensemble learning, however, such simple classifiers can be used as a powerful object recognition tool. A schematic example of a decision stump is shown in Figure 5.1. The problem of finding a set of decision stumps that agree with the training data provided to the algorithm is basically the same as that of devising a complete decision tree [RN03]. Note that while creating a tree that has one path from a set of feature values at its internal nodes leading to a specific classification example at its leaves is a trivial task, such a tree would not be useful for making any predictions about unknown cases.


Figure 5.1: Schematic example of a decision stump discriminating between the two classes vessel healthy and vessel diseased based on the numerical value of a feature.

The problem with such a trivial tree is that it would only memorize observations instead of extracting a classification pattern for extrapolation to unseen examples. The desired behavior can, however, be achieved by constructing a decision tree that tests the most important feature attributes first. Here, most important means the feature that makes the most difference to the classification of an example. The idea behind this strategy is that the correct classification is achieved using the smallest number of tests, so that the derived pattern is expected to correctly classify unseen examples. In order to determine the most informative feature, the training set examples are divided according to the feature values which maximize the information gain on the data [LL06]. Intuitively, information gain is defined as the reduction of uncertainty about the class of an object $\Omega_j$ obtained by knowing the value of a feature $(\vec{x}_i)_k$, where k denotes the k-th component of the vector $\vec{x}_i$. Mathematically, the information gain for a feature value $(\vec{x}_i)_k$ on a set of discrete attributes can be described by [LL06, RN03, Sha48]

$$IG_k = H(\Omega) - H(\Omega \mid (\vec{x}_i)_k) , \quad (5.2)$$

where $H(\Omega)$ is the entropy of the set of possible classes,

$$H(\Omega) = - \sum_{j=1}^{J} P(\Omega_j) \log P(\Omega_j) , \quad (5.3)$$

and $H(\Omega \mid (\vec{x}_i)_k)$ is the conditional entropy,

$$H(\Omega \mid (\vec{x}_i)_k) = - \sum_{k=1}^{K} P((\vec{x}_i)_k) \sum_{j=1}^{J} P(\Omega_j \mid (\vec{x}_i)_k) \log P(\Omega_j \mid (\vec{x}_i)_k) , \quad (5.4)$$


with $P(\Omega_j)$ as the probability of occurrence of class $\Omega_j$ and $P((\vec{x}_i)_k)$ as the probability of occurrence of the feature vector component k of the feature vector $\vec{x}_i$. Consequently, the selected feature value will be

$$k^* = \arg\max_k IG_k . \quad (5.5)$$

By repeatedly selecting the most important features, a decision tree can be constructed recursively. If, instead of recursively partitioning the training set according to the feature attributes, simply the one feature attribute maximizing the information gain over the whole training set is selected, a single decision stump is created. Since a decision stump employs only one test at the root node, it belongs to the class of weak learning algorithms. This means that the classification function modelled by the stump represents a hypothesis that performs just slightly better than random guessing [RN03]. However, combining many such weak learners into one classification function leads to the concept of ensemble learning. Thereby, a whole collection of hypotheses from a hypothesis space is selected and their predictions are combined for the final result. The motivation for ensemble learning is simple: a combined majority vote of many independent weak classifiers is expected to be much less likely to misclassify a new example than a single weak classifier alone.
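A brute-force sketch of training a weighted decision stump is given below. Note that the selection criterion used here is the weighted misclassification error, which is the quantity relevant inside a boosting loop; the information-gain criterion of Eqs. 5.2 to 5.5 could be substituted analogously. All names are illustrative, not part of the MultiBoost framework used in the thesis.

    import numpy as np

    def train_stump(X, y, weights):
        # X: (N, K) feature matrix, y: (N,) labels in {0, 1},
        # weights: (N,) non-negative example weights.
        best, best_err = None, np.inf
        for k in range(X.shape[1]):                   # one feature at a time
            for thr in np.unique(X[:, k]):            # candidate thresholds
                for sign in (1, -1):                  # decision polarity
                    pred = (sign * (X[:, k] - thr) > 0).astype(int)
                    err = float(np.sum(weights * (pred != y)))
                    if err < best_err:
                        best, best_err = (k, thr, sign), err
        return best, best_err

    def predict_stump(stump, X):
        k, thr, sign = stump
        return (sign * (X[:, k] - thr) > 0).astype(int)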

5.1.2 Boosting

One of the most widely used forms of ensemble learning is called boosting [MR03]. The idea of boosting is to use a weighted training set for the learning stage. Each example is therefore associated with a weight $w_i \geq 0$. Initially, all weights are equal, i.e. $w_i = 1$ for all examples i. Then, a first hypothesis $h_1$ is created by one round of learning. The resulting hypothesis will classify some of the examples correctly and some incorrectly. In the next round of learning, the next hypothesis should perform better on the previously misclassified examples. Hence, the weights of these examples are increased, while the weights of the correctly classified examples are decreased. From this new weighted training set, a second hypothesis $h_2$ is created. This process is repeated until T hypotheses have been created, where T is a parameter of the boosting algorithm. The final ensemble hypothesis, also called a strong


hypothesis, is a weighted majority combination of all hypotheses, each individually weighted according to how well it performed on the overall training set. That is, the resulting strong hypothesis $H(\vec{x})$ is

$$H(\vec{x}) = \sum_{t=1}^{T} \alpha_t h_t(\vec{x}) , \quad (5.6)$$

where the $\alpha_t$ are weights depending on the classification quality of their associated hypothesis $h_t$. One of the most prominent ensemble learning algorithms is the AdaBoost algorithm, introduced by Freund and Schapire in 1995 [FS95]. One reason for the popularity of AdaBoost is that if the input learning algorithm is a weak learning algorithm, then AdaBoost will return a hypothesis that classifies the training data perfectly for large enough T. For the following discussion we abbreviate $f(\vec{x}_i) = y_i$. Assume a set of N training examples $(\vec{x}_1, y_1), \dots, (\vec{x}_N, y_N)$ is given. The idea is to find a strong hypothesis H which is consistent with most of the samples, i.e. $H(\vec{x}_i) = y_i$ for most $1 \leq i \leq N$ [FS95]. To this end, Algorithm 5.1 aims to find a final hypothesis for a two-class classification problem with a low error relative to a given distribution D. Initially, this distribution is set to be uniform, i.e. $D(i) = \frac{1}{N}$. Additionally, a set of weights $\vec{w}^t$ is maintained in order to calculate a new distribution $\vec{p}^t$ to be used by the attached weak learning algorithm in each iteration t. After T iterations, the strong hypothesis is returned. Due to its adaptive nature, the error $\epsilon = P_{i \sim D}[H(\vec{x}_i) \neq y_i]$, i.e. the probability of a misclassified training example, is bounded by

$$\epsilon \leq 2^T \prod_{t=1}^{T} \sqrt{\epsilon_t (1 - \epsilon_t)} , \quad (5.7)$$

where $\epsilon_t$ is the classification error on the training set during round t. It can be shown that this error bound converges to zero as the number of training rounds increases. For an elaborate proof and further details, the reader is referred to [FS95, SS99, MR03]. Note that while the algorithm is described for a binary classification task only, it can easily be extended to multi-class classification problems [FS95, ZRZH05, BH07]. Another important property of AdaBoost is that, since it uses only weak learning algorithms for boosting, and specifically in the present case decision


stumps, it acts as a feature selector [MR03, RN03, VJ04]. As each decision stump uses only a single feature value for classification, the adaptive weighting approach naturally reduces the weights applied to those feature values that cannot adequately support the classification task on the target domain. As a consequence, this feature selection mechanism leads to a high learning speed compared to other feature selection methods, and the encoding of feature value importance within the example weights leads to a constant-time evaluation of the weak classifiers [VJ04]. The AdaBoost implementation used in this thesis is based on the freely available MultiBoost framework [CK06]. This framework allows a very flexible implementation and evaluation of various strong and weak learning algorithms and provides extensive support for different feature types and algorithm improvements. Moreover, the basic implementation runs on various platforms, and since it is written solely in C++, it is easily ported to or integrated into new platforms and development environments such as MeVisLab.

Algorithm 5.1: AdaBoost

Input: sequence of N labeled examples $((\vec{x}_1, y_1), \dots, (\vec{x}_N, y_N))$,
       distribution D over the N examples,
       weak learning algorithm WL,
       integer T specifying the number of iterations

Initialize $(w_i)^1 = D(i)$ for all $i = 1, \dots, N$
foreach $t = 1, 2, \dots, T$ do
    Set $\vec{p}^t = \vec{w}^t / \sum_{i=1}^{N} (w_i)^t$
    Get $h_t = WL(\vec{p}^t)$, where $h_t : X \mapsto [0, 1]$
    Calculate the error $\epsilon_t = \sum_{i=1}^{N} p_i^t \, |h_t(\vec{x}_i) - y_i|$
    Set $\beta_t = \frac{\epsilon_t}{1 - \epsilon_t}$
    Calculate the new weights $(w_i)^{t+1} = (w_i)^t \, \beta_t^{1 - |h_t(\vec{x}_i) - y_i|}$
end

Output: the hypothesis
$$H(\vec{x}) = \begin{cases} 1 & \text{if } \sum_{t=1}^{T} \log\!\left(\frac{1}{\beta_t}\right) h_t(\vec{x}) \geq \frac{1}{2} \sum_{t=1}^{T} \log\!\left(\frac{1}{\beta_t}\right) \\ 0 & \text{otherwise} \end{cases}$$
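Algorithm 5.1 translates almost line by line into the following sketch, reusing the hypothetical stump helpers from above; the interfaces are assumptions, and none of the numerical safeguards of the actual MultiBoost framework are reproduced.

    import numpy as np

    def adaboost(X, y, T):
        # y in {0, 1}; initial weights follow the uniform distribution D.
        N = len(y)
        w = np.full(N, 1.0 / N)
        stumps, betas = [], []
        for _ in range(T):
            p = w / w.sum()                       # distribution p^t
            stump, _ = train_stump(X, y, p)       # weak learner WL(p^t)
            pred = predict_stump(stump, X)
            eps = float(np.sum(p * np.abs(pred - y)))
            beta = eps / max(1.0 - eps, 1e-12)    # beta_t = eps_t / (1 - eps_t)
            w = w * beta ** (1.0 - np.abs(pred - y))
            stumps.append(stump)
            betas.append(max(beta, 1e-12))        # avoid log(0) for perfect stumps

        def strong(Xq):
            # Weighted majority vote of Algorithm 5.1's output rule.
            votes = sum(np.log(1.0 / b) * predict_stump(h, Xq)
                        for h, b in zip(stumps, betas))
            threshold = 0.5 * sum(np.log(1.0 / b) for b in betas)
            return (votes >= threshold).astype(int)

        return strong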


5.2 Feature Extraction Strategies for Stenosis Representation

In order to use a classification algorithm such as AdaBoost for lesion detection on CTA data, an adequate representation of the stenosis in terms of a feature vector has to be found. In general, it is convenient to have a pattern description that approximates the shape of the object to be detected. Such patterns, also called shape models, are widely used for object detection in a medical context. Tu et al. [TZB∗06], for instance, use a shape pattern approximating polyps for polyp detection in 3D colonoscopy data. Zheng et al. [ZBG∗07] generate a 3D triangle model of the heart in order to guide a feature extraction pattern for learning-based detection of the heart chambers in CT angiography data. Lamecker et al. [LLS02] use a similar 3D modeling approach for liver segmentation. Lorenz et al. [LK99] presented a framework for the automatic generation of such shape models. Unfortunately, stenoses in CTA scans do not have a typical shape that can be generalized into a form description. In particular, soft-plaques cannot be described by a shape pattern or template, since their form can only be visualized indirectly through the lumen narrowing of the coronary arteries. Moreover, not even calcified plaques can easily be approximated by a shape model, since their position, extent and characteristics are highly variable. It would therefore be very difficult to predict the shape parameters of a lesion, and it is not possible to model the stenoses within the coronary arteries in terms of a transformation of a sampling template. As a consequence, an alternative sampling representation for drawing adequate feature values from the data is required. Major points of interest for stenotic lesion classification include intensity changes within vessel segments. Typical images of plaques in CTA are shown in Figures 5.2 and 5.3. However, especially for soft-plaques, the plaques are not always so clearly visible, and the intensity changes within the slice images are then less discernible. Furthermore, the tissue surrounding the vessel should be excluded as far as possible. In order to meet these requirements, the cylindrical shape of vessels can be exploited to generate a sampling pattern that captures sufficient context information for an accurate description of the vessel condition. The vessel is approximated by a 2-dimensional circular sampling pattern which is extruded into 3D, yielding a cylinder segment. By using many of these small cylinder segments, the shape and the curvature of the artery can be discretely approximated. From these segments, feature values will


be drawn in order to guide the classification. The net effect of this procedure is thus not a classification of the stenosis as such, but rather a classification of the artery segments into the classes healthy, calcified and soft-plaque.

Figure 5.2: Example of a clearly visible calcification within a coronary artery. The vessel wall is marked with the red circle. Subimages a, b and c show the corresponding image slices. The bright spot that is the calcified plaque is marked with the blue circle and can clearly be seen.

5.2.1 Vascular Sampling

In order to create such a sampling pattern, the vessel is enclosed in a series of bounding cylinder segments along its centerline. The bounding cylinders are placed such that each cylinder overlaps the previous one by one half of its height. This ensures that small stenoses, possibly situated just at a segment boundary, will still be classified correctly. This is shown schematically in Fig. 5.4. If a vessel centerline of length L is given, and a maximum diameter of a coronary artery $D_{max}$ is assumed, a series of i bounding cylinder segments


Figure 5.3: Example of a clearly visible soft-plaque. The vessel wall is marked with the red circle. Subimages a, b and c show the corresponding image slices. Note the diffuse intensity variation within slice image b. Whereas the plaque is very distinctive on the CPR image, the slice image is not easily categorized as containing a lesion.

Figure 5.4: Schematic view of the bounding cylinder overlap. A stenosis situated at a bounding cylinder's boundary might not be captured fully by the feature extraction. Introducing overlapping cylinders solves this problem.

can be described by

$$C_i(\varphi, v) = \begin{pmatrix} \frac{D_{max}}{2} \cos(\varphi) \\ \frac{D_{max}}{2} \sin(\varphi) \\ \frac{H}{2} i + v \end{pmatrix} , \quad (5.8)$$

where $\varphi \in [0, 360°]$, $v \in [0, H]$ and $i = 0, \dots, N_{cyl}$. H is the length of a single cylinder segment, and since the cylinder segments are required to be overlapping, $N_{cyl}$ is calculated as $N_{cyl} = 2\frac{L}{H} - 1$. The result of applying this pattern to a whole vessel is shown schematically in Fig. 5.5. A free input parameter of the boundary function is the height of a single


Figure 5.5: Schematic example of an extracted vessel CT image with its bounding cylinders imposed.

cylinder segment, H. This height has to be set to a reasonably small value, since the cylinder is just an approximation of the vessel shape and cannot precisely follow the curvature of the anatomy. Additionally, the probability that different types of stenotic lesions lie within a single cylinder segment has to be reduced. However, due to the variability of the lesions that can occur within a coronary artery, it is reasonable to make this height variable. This approach and its implications for the classification results will be discussed in the next section. Feature values are generated from samples drawn from the inside of the boundary volume, which is partitioned by quantized polar coordinates. Radial sampling is performed in order to exploit the basically radial shape of an ideal vessel. As a result, a spider-web-like sampling pattern is obtained. This is shown schematically in Fig. 5.6.

Figure 5.6: Schematic view of a slice of the sampling pattern that is generated within the bounding cylinders.


Creating the sampling pattern in 3D leaves three degrees of freedom. Assume sampling should be performed on $N_r$ radii, $N_c$ radial segments and $N_s$ slices. Then, the applied method generates $N_r \times N_c \times N_s$ sample positions and can be described by the function

$$s_i(j, k, l) = \begin{pmatrix} r_j \cos(\varphi_k) \\ r_j \sin(\varphi_k) \\ \frac{H}{2} i + v_l \end{pmatrix} , \quad (5.9)$$

$\forall j = 0, \dots, N_r$, $k = 0, \dots, N_c$ and $l = 0, \dots, N_s$, where

$$r_j = \frac{D_{max}}{2 N_r}\, j , \quad (5.10)$$

$$\varphi_k = \frac{2\pi}{N_c}\, k , \quad (5.11)$$

and

$$v_l = \frac{H}{N_s}\, l . \quad (5.12)$$

Experiments have shown that small values for $N_r$, $N_c$ and $N_s$ are sufficient to capture significant features that adequately describe the different types of stenosis present in CTA images. Typical values for which good results were achieved are $N_r = 4$, $N_c = 6$ and $N_s = 6$, where l is increased by 2 at each step, so that only every other image slice within the bounding cylinder is considered for sample drawing. Consequently, only 72 sample positions are used for feature value extraction on a single vessel segment. In contrast to simple 2-dimensional feature extraction on cross-sections of the vessel, the applied cylindrical pattern ensures that a feature set includes not only the relevant image data, but also sufficient context information within a cylinder segment to fully describe a diseased vessel part. Moreover, the anisotropic pattern accounts for the tendency of the coronary arteries to narrow along their path towards the most distal regions of the heart.
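The sampling positions of Eqs. 5.9 to 5.12 can be generated directly from the formulas. The following sketch produces positions in the local cylinder frame (the mapping along the centerline into world coordinates is omitted) and reproduces the 72 positions quoted above for $N_r = 4$, $N_c = 6$, $N_s = 6$ with every other slice; the function signature is a hypothetical choice.

    import numpy as np

    def sample_positions(i, H, D_max, Nr=4, Nc=6, Ns=6, slice_step=2):
        positions = []
        for l in range(0, Ns, slice_step):           # every other slice
            v = H * l / Ns                           # Eq. 5.12
            for j in range(Nr):
                r = D_max * j / (2 * Nr)             # Eq. 5.10
                for k in range(Nc):
                    phi = 2 * np.pi * k / Nc         # Eq. 5.11
                    positions.append((r * np.cos(phi),
                                      r * np.sin(phi),
                                      H * i / 2 + v))  # Eq. 5.9
        return np.array(positions)                   # 4 * 6 * 3 = 72 rows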

5.2.2 Multi-Resolution Extension

Analysis of the first classification results revealed that the fixed cylinder length of the original algorithm is a drawback for the representation of a larger set of lesions. Due to the variability of lesion configuration (Fig. 5.7 and 5.8), especially in extent, keeping the length of the feature cylinder


fixed leads to several misclassifications. In order to overcome this limitation, the feature extraction pattern was extended by a multi-scale approach introducing variable sampling segment lengths.

Figure 5.7: Varying calcifications (marked by arrows) in cardiac vessel CPR cut-outs.

Figure 5.8: Varying soft-plaques (marked by arrows) in cardiac vessel CPR cut-outs.

Based on the ground truth database, the maximum ($S_{max}$) and minimum ($S_{min}$) stenosis lengths are calculated. Subsequently, the computed length values are used to create several series of overlapping bounding cylinders, with each series representing a distinct model of the vessel. For this purpose, the range of stenosis lengths $S_{max} - S_{min}$ is divided into S equidistant parts. Experiments have shown that S = 3 is sufficient in most of the observed cases. Using this approach, a scaling factor $\tau$ is added to the formulation of Eq. 5.8:

$$C_i^\tau(\varphi, v) = \begin{pmatrix} \frac{D_{max}}{2} \cos(\varphi) \\ \frac{D_{max}}{2} \sin(\varphi) \\ \frac{H_\tau}{2} i + v \end{pmatrix} . \quad (5.13)$$

With S as the total number of scaling levels used, the height of the current cylinder is given by interpolation:

$$H_\tau = \frac{\tau}{S-1}\, S_{max} + \left(1 - \frac{\tau}{S-1}\right) S_{min} . \quad (5.14)$$


With $\tau = 0, \dots, S-1$ as the current scale level and L as the total length of the current vessel,

$$N_\tau = 2 \left\lfloor \frac{L}{H_\tau} \right\rfloor - 1 \quad (5.15)$$

overlapping cylinder segments are computed in order to represent the vessel at a given scale. This is shown schematically in Fig. 5.9. Sampling positions can then be generated analogously to the original formulation by taking the current scaling level $\tau$ into account. As a result, the multi-scale sampling approach makes it possible to capture a wider range of lesions with different positions and extents.

Figure 5.9: Schematic view of the multi-scale vessel representation.

Using multiple representations for each vessel also leads to multiple classification results, one for each scaling level (Fig. 5.10). In order to generate the final result, several approaches are possible. The most meaningful is to combine the different results by a majority vote of the individual classification outcomes. Thereby, lesions that are detected at overlapping positions on different scales are assigned a greater weight than lesions that are detected at only a single scaling level. However, a lesion detected at only one scaling level cannot simply be ignored: for clinical applications, a false positive result is still acceptable, while a false negative (i.e. a missed lesion) is not. Therefore, the overall goal of the approach was to keep the false positive detection rate low while absolutely minimizing the false negative results. The detection results are thus accompanied by a confidence factor which directly corresponds to the number of scaling levels an individual plaque is detected at. This factor can then aid the radiologist in estimating the value and the reliability of the automatic diagnostic output.
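How the per-scale results might be merged can be sketched as follows. The interval-overlap merging rule and the normalized confidence are assumptions, as the text above specifies only a majority vote with a per-lesion confidence factor.

    def combine_scales(per_scale, num_scales):
        # per_scale: one list per scale of lesions, each given as
        # (start, end, label) intervals along the vessel centerline.
        merged = []  # entries: [start, end, label, hit_count]
        for lesions in per_scale:
            for start, end, label in lesions:
                for m in merged:
                    # Same lesion type and overlapping interval: merge and
                    # count one more scale at which the lesion was detected.
                    if m[2] == label and start <= m[1] and end >= m[0]:
                        m[0], m[1] = min(m[0], start), max(m[1], end)
                        m[3] += 1
                        break
                else:
                    merged.append([start, end, label, 1])
        # Confidence = fraction of scales at which a lesion was detected.
        return [(s, e, lab, hits / num_scales) for s, e, lab, hits in merged]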

5.2.3 Simple Feature Values

At the generated sampling locations, various image-based feature values are computed. Naturally, the image intensity value at each location, denoted as I, is taken, but also the magnitude of the image gradient $|\vec{G}_I| = |\nabla I|$, with $\nabla I =$


Figure 5.10: Schematic view of the result of a multi-scale vessel classification. Different scale levels might lead to differing classification results (marked in red). The overall classification of a vessel segment is then based on a majority vote of the individual results.

$(g_x, g_y, g_z)$, which is calculated using a 3D Sobel filter [Sch00, Pra07]. The individual components of the image gradient vector, $g_x$, $g_y$ and $g_z$, are also used for the feature vector. Here, $(x, y, z)$ are understood to be the world coordinates of the sampling positions. In order to generate a specific feature value, each coordinate is transformed into voxel coordinates using the modelview matrix attached to the DICOM image. The final value is created by trilinear interpolation on the CT dataset. The values used for building the feature vector provide a wide capture range for intensity discontinuities, as these increase inside the vessel when a stenotic lesion is present in the image data. Moreover, the values computed on the radial sampling pattern exhibit a semi-rotation-invariant behavior. Since the main goal is to determine whether or not a stenosis is located in the current cylinder, the local feature values extracted along the pattern can be sorted according to their magnitude. Thereby, the relevant information is kept (e.g. high intensity peaks representing calcifications, low intensity values representing soft-plaques) while the exact location along the sampling pattern is discarded. A schematic example is shown in Fig. 5.11. By sorting the local feature values according to magnitude, the classifier becomes as insensitive to the lesion position as possible. As a further consequence, the size of the object space that has to be covered by the classifier is drastically decreased, which further increases the overall robustness of the approach.
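A compact sketch of the trilinear sampling and magnitude sorting described above. The SciPy interpolation call stands in for the thesis implementation, and the world-to-voxel transformation is assumed to have been applied already.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def sorted_intensity_features(volume, voxel_coords):
        # volume: 3D CT array; voxel_coords: (N, 3) sample positions in
        # voxel space. order=1 performs trilinear interpolation.
        samples = map_coordinates(volume.astype(np.float64),
                                  voxel_coords.T, order=1)
        # Sorting by magnitude discards the angular sample position but keeps
        # the lesion-indicating values (semi-rotation-invariant feature set).
        return np.sort(samples)[::-1]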


Figure 5.11: Schematic display of the rotational invariance with respect to the simple feature values. In the example, values indicating healthy vessel lumen are shown in grey. Two calcified spots are shown as white circles. The sampling direction is given by the arrow on the top. The original and sorted feature vectors show that the positions of the values change, but the information contained in the feature vector, i.e. the attenuation values denoting a lesion, is kept.

However, most of these local simple features are particularly significant for calcified lesions, as these regions have high attenuation values and usually show a very large gap between their lowest and highest intensity values. Additionally, calcified plaques exhibit high gradient magnitudes and strong variations in gradient direction within the sampling region. In order to capture the reduced contrast values originating from soft-plaques, Haar-like local features, based on an extension of the original integral image, are used in conjunction with the global feature values. These features are particularly useful in regions where the soft-plaques would otherwise be very unclear and hardly discernible from the surrounding tissue.

5.2.4 Haar-like Feature Values

Haar-like features have been successfully used for many object recognition tasks [LM02, MKH05]. They capture a great amount of information while being simple and efficient to calculate. The Haar-like features used here are expressed as differences of sums of pixel intensities within the image. Some of the difference patterns used are shown in Fig. 5.12. The boundary of the pixel areas is given by the bounding cylinder around the vessel centerline from Section 5.2.1. The sum of the pixel values within a rectangular image area can be calculated


Figure 5.12: Schematic display of some of the haar-like feature patterns used. The sum of the pixel intensity values which lie within the white circular area are subtracted from the sum of pixel intensities in the grey area. Due to the pre-calculation of the transformed integral image, these features can be quickly computed in 3D by simple table look-ups.

very quickly by using an intermediate image representation called the integral image [VJ04], which has been extended to 3D by [ZBG∗07]. At each image position (x, y, z), the voxel value of the integral image contains the sum of the voxels above, to the left, before and including the value at (x, y, z) in the original image. For a 2-dimensional image, the integral image can be described by

$$II(x, y) = \sum_{x' \leq x,\, y' \leq y} I(x', y') . \quad (5.16)$$

Here, $II(x, y)$ is the integral image value at position $(x, y)$ and $I(x, y)$ is the intensity value of the original image. Using the recurrences

$$s(x, y) = s(x, y-1) + I(x, y) , \quad (5.17)$$

$$II(x, y) = II(x-1, y) + s(x, y) , \quad (5.18)$$

where $s(x, y)$ is the cumulative row sum with $s(x, -1) = 0$ and $II(-1, y) = 0$, the integral image can be computed in one pass over the original image [VJ04]. The initial formulation of the integral image is based on rectangular images; it therefore cannot be used directly for the circular pattern applied in the presented approach. In order to compute the integral image of the cylindrical shape while keeping the simplicity of the original implementation, the image data enclosed by the cylinder boundary is transformed onto a rectangular grid. This can be achieved by first creating a linear vessel volume image in a manner similar to a stretched CPR [KFW∗02]. Accordingly,


the radial extraction pattern is generated over the whole length of the vessel. The pattern is then mapped to a rectangular grid using polar coordinates (Fig. 5.13). On this grid, the 3D integral image of the vessel can easily be computed. The transformation from polar to cartesian coordinates is pre-computed and stored in an index table. In order to access the value at a sample position in the integral image, a look-up into the index table has to be performed. Subsequently, curved Haar-like feature values on the circular 3D patterns can be computed similarly to the original formulation, requiring only one additional level of indirection.

Figure 5.13: Schematic display of the transformation from the cylindrical extraction pattern to a rectangular image. The rectangular image as generated by the transformation process can be subsequently used in order to compute the integral image of the extraction pattern. White indicates padding with zero values.
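On the rectangular grid produced by this transformation, the 3D integral image reduces to cumulative sums, and a box sum needs eight table look-ups. A minimal sketch, assuming zero padding at the lower boundaries as in Eqs. 5.17/5.18; the function names are illustrative only.

    import numpy as np

    def integral_image_3d(volume):
        # Cumulative sums along all three axes generalize Eqs. 5.17/5.18 to 3D.
        ii = volume.astype(np.int64)
        for axis in range(3):
            ii = np.cumsum(ii, axis=axis)
        return ii

    def box_sum(ii, lo, hi):
        # Sum of all voxels in the half-open box [lo, hi) by inclusion-exclusion.
        x0, y0, z0 = (c - 1 for c in lo)
        x1, y1, z1 = (c - 1 for c in hi)
        def g(x, y, z):
            return int(ii[x, y, z]) if min(x, y, z) >= 0 else 0
        return (g(x1, y1, z1)
                - g(x0, y1, z1) - g(x1, y0, z1) - g(x1, y1, z0)
                + g(x0, y0, z1) + g(x0, y1, z0) + g(x1, y0, z0)
                - g(x0, y0, z0))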

5.3 Implementation

The presented approach was implemented in C++ and integrated into the MeVisLab environment, mainly as the module CAAnalysis. This module implements the feature extraction strategies and acts as the connection to the AdaBoost classifier. It works in conjunction with the automatic branch labeling and centerline tree extraction methods presented in Chapter 4. For data acquisition and training of the classifier, a software tool called CA-Measurement has been developed. The coronary arteries can be displayed as a CPR using the previously segmented centerlines. Furthermore, the tool allows the manual placement of markers which indicate the start and end positions of stenoses on the corresponding centerline, as well as associating the pair of markers with the stenosis type. This is shown in Fig. 5.14.


Figure 5.14: The software tool (CA-Measurement) which was developed for the manual acquisition of the ground truth data. On the right side, the previously extracted centerlines and their corresponding images can be selected. On the left side, stenosis markers can be placed (yellow) on a CPR view of the vessel and stored in the database for further use.

The manually acquired data can then be stored in a stenosis database which is subsequently used for training, evaluation and testing of the algorithm. The training of the strong classifier and an initial evaluation of the algorithm can also be performed directly within the CA-Measurement tool. An example is shown in Fig. 5.15. The MeVisLab network that is used to drive the software is shown in Fig. 5.16. For the practical application and evaluation of the stenosis detection method, a second software tool called Coronary-CAD has been developed. Its use is, just like that of the software prototype for the branch detection algorithm, quite simple. A single button allows the selection and loading of a CTA dataset. Then, the whole algorithm chain, consisting of pre-processing, branch labeling, feature extraction and classification, runs automatically, requiring no user interaction or parameter specification. Finally, the result of the classification is displayed on a 3D volume rendering of the CTA data. Additionally, lesion coverage statistics for the major coronary arteries are computed and displayed. The few other user interface elements


Figure 5.15: The same software can also be used to train and test the strong classifier based on the ground truth data. The screenshot shows the detection result on a coronary artery with massive calcifications.

Figure 5.16: The MeVisLab network driving the CA-Measurement software.


that are integrated into the software are only used to adjust the visualization of the results. This is shown in Figures 5.17 a and b, respectively. The network driving the Coronary-CAD software is shown in Fig. 5.18. This implementation also has the advantage that it can be driven from the command line, thus allowing a batch evaluation of many CTA datasets without requiring a user to control the process. Moreover, for the evaluation of the results, the classification outcome can be compared to the ground truth data automatically, if available.

5.4 Results

The learning-based stenosis detection algorithm has been tested on clinically relevant CTA data. The evaluation has been carried out on the three major coronary arteries, provided they could be segmented properly by the automatic vessel segmentation approach. All centerlines were pre-segmented and stored in a vessel database. With the annotation software tool presented in the previous section, all stenoses contained within the CTA data were annotated manually. Thereby, a ground truth database was generated containing position, size and extent of every lesion present in the CTA images, which was then used for training and evaluation of the algorithm.

5.4.1 Basic Approach

First, the algorithm without the multi-resolution extension has been evaluated using 30 CTA datasets. 10 CTA images with 29 annotated coronary artery centerlines, containing 38 stenoses in 15 diseased vessels, were used as training ground truth. In the training database, 14 vessels were annotated as healthy. This data was used to generate a feature matrix for the training of the classifier. The resulting boosted strong classifier has been validated by performing an automatic diagnosis step on the training database. Subsequently, the evaluation has been performed using 20 unseen CTA datasets. The results of the simple approach were very promising. On the 38 annotated stenoses from the training set, there was only one misclassification, which is equivalent to a training error of 2.63%. For the unseen database, the algorithm was applied to 55 pre-segmented vessel centerlines.


(a) Coronary-CAD software presenting the detection results right after a dataset has been selected.

(b) Additionally, the visualization of the results can be adjusted.

Figure 5.17: The Coronary-CAD software implementing the learning-based stenosis detection for testing and evaluation. The user interface is quite simple: there is only one button to load an image, and means to change the visualization of the results.


Figure 5.18: The MeVisLab network driving the Coronary-CAD software.

Type          Diseased    Healthy    Detected Diseased    Detected Healthy    False Positives    False Negatives
Training      15          14         15                   14                  0                  0
Evaluation    33          22         35                   20                  2                  0

Table 5.1: Detection rates measured on a per-vessel basis.

From these vessels, a total of 22 were healthy and 33 contained one or more lesions. For a coarse per-vessel evaluation, a coronary artery was considered diseased if it contained at least one stenosis. During validation, every vessel containing diseased pathology was marked as such, even though in one case a calcification was misclassified as a soft-plaque. On the evaluation datasets, two false positives but no false negatives were found. Tables 5.1 and 5.2 summarize the results. Sensitivity and specificity rates for the individual datasets and the overall result of the per-vessel classification are shown.


Type           Training Set    Evaluation Set    Overall
Sensitivity    100.00%         100.00%           100.00%
Specificity    100.00%         91.66%            94.73%

Table 5.2: Sensitivities and specificities for a per-vessel classification.

Type             Total    Correctly Detected    False Positives    False Negatives
Soft-Plaque      11       11                    0                  0
Calcification    27       26                    0                  1

Table 5.3: Detection rates measured on a per-lesion basis using the training datasets.

For a per-lesion based evaluation, the detection rates for individual stenosis types and positions on all vessels have been determined. The training database contained a total of 38 annotated stenotic lesions, 27 of which were calcifications and 11 of which were soft-plaques. All 11 soft-plaques were detected and classified correctly. Only one calcification was misclassified as a soft-plaque. The evaluation database contained a total of 68 visible stenoses, 57 of which were calcifications and 11 soft-plaques. Of the 11 soft-plaques, 10 were correctly detected and one was missed. Of the 57 calcifications, 45 were properly identified and 12 were not detected. Additionally, two false positives were detected as calcifications. Tables 5.3, 5.4 and 5.5 summarize the results.

Type             Total    Correctly Detected    False Positives    False Negatives
Soft-Plaque      11       10                    0                  1
Calcification    57       45                    2                  12

Table 5.4: Detection rates measured on a per-lesion basis using unseen data.


Type            Training Set   Evaluation Set   Overall
Soft-Plaque     100.00%        90.90%           95.45%
Calcification   96.29%         78.94%           84.52%

Table 5.5: Overall detection rate on a per-lesion basis.

Because only a restricted portion of the image data has to be processed, classification times can be kept remarkably low. On the testing machine (MacBook Pro, Intel Core 2 Duo 2.4 GHz, 2 GB RAM), the classification of an entire vessel tree with previously segmented centerlines of the left anterior descending, left circumflex and right coronary artery took less than 5 seconds. Although the achieved results were initially good, extending the database of evaluation images quickly revealed the shortcomings of the approach, as the sensitivity and specificity of the algorithm dropped below 70%. Hence, the simple algorithm was extended by the multi-resolution method in order to enhance the detection quality on a larger set of CTA images.

5.4.2 Multi-Resolution Extension

The enlarged database contained 140 pre-segmented and annotated coronary centerlines from 45 CTA images. The annotations included the exact lesion position and type for each vessel. Of the 140 centerlines, 30 were used to train and validate the classifier. The remaining 110 were used for the quality assessment of the algorithm. As with the previous approach, two separate performance measures were evaluated. First, vessel-based detection rates were examined. A vessel was considered diseased if it contained at least one lesion, and the corresponding classification result was accepted as correct if it showed at least one lesion. Applying these criteria, 71 of the 110 segmented coronary arteries had to be considered diseased, whereas 39 were healthy. The classification algorithm was able to detect 68 of the diseased and 30 of the healthy vessels correctly. Three vessels were classified as false negatives and nine healthy vessels as false positives. This corresponds to a sensitivity of 95.8% and a specificity of 76.9%, as summarized in Table 5.6.


Type                   Total          Correct       FP          FN
Diseased vessels (%)   71 (100.0%)    68 (95.8%)    0 (0.0%)    3 (4.2%)
Healthy vessels (%)    39 (100.0%)    30 (76.9%)    9 (23.1%)   0 (0.0%)
Total (%)              110 (100.0%)   98 (89.1%)    9 (8.2%)    3 (2.7%)

Table 5.6: Summary of the detection rates measured on a per-vessel basis. FP and FN denote false positives and false negatives, respectively.

As the second performance measure, lesion-based detection rates were examined. The evaluation included detection rates in terms of lesion type and position. Accordingly, the coverage of the cylinder segment classification obtained from the classification algorithm was compared to the annotated vessel data. The datasets of the diseased vessels contained 54 distinct soft-plaques and 101 distinct calcifications. Of the 54 soft-plaques, 43 were identified correctly; eleven were missed by the classification algorithm and four detections were false positives. Of the 101 calcifications annotated in the data, 95 were detected correctly; six were not detected and another eight detections were false positives, as summarized in Table 5.7. This amounts to an overall correct detection rate of 89.0% and a false positive rate of 7.7%. Even though a quite extensive vessel representation is used for the classification task, evaluation times are still low. Performing CAD on the three major coronary arteries (LAD, LCX, RCA) takes only 5 to 10 seconds on standard PC hardware. Therefore, the time required for classification is only slightly higher than with the simple representation. Together with the coronary tree generation and vessel labeling algorithms, the presented method takes about 30 to 40 seconds to create a diagnosis estimate on a single cardiac CT dataset. Note that this diagnosis estimate can be generated fully automatically, requiring no user input at all. Examples of the detection results and their accuracy are shown in Figures 5.19, 5.20 and 5.21.


Type                Total          Correct        FP           FN
Soft-Plaque (%)     54 (100.0%)    43 (79.6%)     4 (7.4%)     11 (20.4%)
Calcification (%)   101 (100.0%)   95 (94.0%)     8 (7.9%)     6 (5.9%)
Total (%)           155 (100.0%)   138 (89.0%)    12 (7.7%)    17 (10.9%)

Table 5.7: Summary of detection results for the lesion-based evaluation. FP and FN denote false positives and false negatives, respectively.


Figure 5.19: Classification results for calcified coronary artery lesions. a) and b) show detection results of distinct calcified stenoses. On the bottom, the vessels are shown without detection results but with the stenoses marked for visual clarity. Detection results are marked in purple.



Figure 5.20: Classification results for non-calcified coronary artery lesions. a) and b) show detection results of distinct soft-plaque stenoses. On the bottom, the vessels are shown without detection results but with the stenoses marked for visual clarity. Detection results are marked in yellow.


Figure 5.21: Detection results for coronary arteries containing calcified as well as fibrous lesions. On the bottom, the vessels are shown without detection results but with the stenoses marked for visual clarity. Automatically detected calcifications are marked in purple, soft-plaques in yellow.


CHAPTER 6

Coronary Calcium Detection and Quantification

Calcified coronary plaque represents approximately 20% of the total coronary artery plaque burden; the other 80% usually is fibrous or lipid-rich soft-plaque. Therefore, the more coronary calcium is present, the higher the overall atherosclerotic burden, and coronary calcium can serve as a measure of a patient's atherosclerotic burden [PM07]. Calcium appears in the early stages of coronary artery disease and is viewed as a measure of atherosclerosis itself, not simply as a marker of disease like the traditional Framingham risk factors such as age, hypertension, family history, hyperlipidemia, and diabetes [DMM51]. One hypothesis is that calcium in the coronary arteries indicates at least one incidence of a healed soft-plaque rupture with inflammation. Since the detection and quantification of calcified coronary plaque plays a key role in clinical cardiac diagnostics, it is essential to provide means of automatic diagnostic aid to the radiologist. This is even more true for CTA data, as calcium plaque segmentation is difficult in the presence of contrast agent. The method presented in this chapter has proven to provide good results for calcium scoring on CTA data. It has been partly published in [TVHB∗10] and [TVHB∗11].

6.1 Clinical Procedures and Previous Work

The clinical standard procedure for risk stratification of coronary vascular disease (CVD) consists of an extensive physical examination. Well-understood risk factors are determined and taken into account, such as body height and weight, blood pressure, certain blood levels (e.g. HDL and LDL cholesterol), and present diseases and habits such as diabetes or smoking [WDL∗98, DMM51]. Furthermore, since cardiac CT has proven to allow a reliable diagnosis of CVD, a CT study is indicated if the patient shows symptoms but otherwise does not have a high pretest likelihood of coronary stenoses [Ach07]. For returning CVD patients, a coronary CT is acquired in order to judge the progression of the disease.


Currently, one of the most relevant pieces of clinical information with respect to risk assessment on a cardiac CT scan is the cardiac calcium (CAC) score, as stated in the scientific statement of the American Heart Association: "[...] the total amount of coronary calcium [...] predicts coronary disease events beyond standard risk factors." [BAB∗06]. Therefore, a CT study is acquired without application of contrast agent, and an image reconstruction is performed at a rather coarse slice thickness of usually 3 mm in order to identify calcified plaques in the image data. Subsequently, a radiologist has to manually identify coronary calcifications among a set of candidate regions in the data, which are then extracted by thresholding, region growing and component labeling. The calcium scores, such as calcium volume, calcium mass and the Agatston score, can then be calculated using the image data and the resulting segmentation mask [AJH∗90]. This is a reasonable procedure, as soft-tissue contrast in CT is low and high-density structures such as bone or calcified plaques are easily visible and can be segmented by simply applying a threshold-based region growing [AB94]. For non-contrast enhanced CT scans, the clinically used reference threshold for the segmentation of calcified plaques is 130 Hounsfield units (HU). If coronary artery disease is present, the CAC score provides information on the extent of disease. Additionally, the CAC score can determine the likelihood of a significant stenosis and the degree to which the atherosclerotic process is progressing. An example of a native CT scan used for CAC scoring is shown in Fig. 6.1. In addition to the scoring of calcified plaques, soft-plaques, e.g. vascular narrowings caused by fat deposits, can be a major indicator of CVD risk [BAB∗06]. Although invasive coronary angiography is still the gold standard for diagnosis in this case, contrast-enhanced CT coronary angiography has proven able to reliably rule out the presence of soft-plaques and stenoses in general [Ach07, MMS∗08, HSS∗05]. Therefore, CTA is the preferred diagnostic method in the majority of cases where the patient's situation does not warrant the risk of invasive angiography. As a consequence, a cardiac CT patient study usually consists of two separate scans, since the traditional calcium score determined on non-contrast enhanced data cannot easily be related to calcium scores created from contrast-enhanced data. The reasons for this are the different resolutions of the two scans as well as the high contrast variation within CTA datasets that is introduced by the contrast agent.


Figure 6.1: Example of a non-contrast enhanced CT scan used for CAC scoring. The low soft tissue contrast can be seen. Additionally, the image shows two manually marked and segmented calcified lesions in the left main coronary artery.

The higher attenuation values of the contrast agent can overlap with and thus hide calcified plaques or parts thereof that would be visible without the contrast agent. Hence, more and more clinical studies are being published with the goal of establishing an equivalence between traditional CAC scoring and calcium scoring on CTA data, in order to provide additional information for coronary risk stratification and to avoid redundant CT scanning [GHT∗09, HFM∗09]. However, using contrast-enhanced data for manual scoring is more difficult than using non-contrast enhanced data, since the image contrast in CTA data is highly variable (Fig. 6.2). This is due to varying contrast agent absorption rates over time and patient physiology. As a consequence, HU values for vessel lumen vary from patient to patient and from study to study. Moreover, even an intra-patient variation is possible, depending on the patient's physiology and the time of scanning after contrast-medium administration. Hence, scoring thresholds have to be determined individually for each patient by manually inspecting the image contrast values. This drastically increases the effort and time required by the radiologist to process a dataset. Moreover, since the threshold determination and lesion selection process is carried out manually, it is also subject to strong intra- and inter-observer variability.


Figure 6.2: Contrast variability in CTA data. The image shows four slices from different CTA scans that were used for calcium scoring, with the manually assigned segmentation threshold indicated for each: (a) 469 HU, (b) 549 HU, (c) 599 HU, (d) 754.5 HU. This demonstrates the wide variability of contrast distribution in contrast-enhanced CT data.

Since cardiac calcium burden is such an important clinical topic, the detection and quantification of calcified plaques in CT data has already been addressed by several groups. Wesarg et al. [WKF06] present an algorithm for vessel segmentation that is capable of dynamically determining calcium plaques during vessel centerline extraction.


However, this method depends strongly on the user manually placing seed points on the coronary arteries in the data. Moreover, the detected lesions are not segmented and no quantitative analysis is presented. Toumolin et al. [TBGB03] use a level set method to detect the inner and outer walls of vessels on CTA data and thereby identify calcified lesions in the coronary arteries. Rinck et al. [RKRS06] present a similar approach using a shape-based segmentation scheme. The main drawback of their approach is, however, that the algorithm has to be initialized manually with a seed point before and after each lesion. Komatsu et al. [KHO∗05] introduced a "Plaque Map" for CT images in order to facilitate fast and easy detection of plaques in CT data. However, their approach acts only as a visual aid guiding the radiologist through the manual segmentation of stenoses in the coronary arteries. Finally, Saur et al. [SAD∗08] presented a method for automatic calcified plaque detection by fusing and relating non-contrast enhanced CT data with contrast-enhanced data. Although their method is fully automatic and delivers promising results, it still has the drawback of requiring both a non-contrast enhanced and a contrast-enhanced CT scan, leading to possibly unnecessary radiation exposure for the patient. In contrast to these previous works, the method presented in this thesis is fully automatic, requiring no user interaction at all. Moreover, only the CTA input data is needed in order to compute a calcium score for the patient study. It is implemented as a multi-step process. In a preprocessing phase, the vascular tree is automatically extracted from a cardiac CT dataset as presented in Chapter 4. Subsequently, the pre-processing tree is used in an image analysis step to determine the HU value that adequately separates calcified plaque tissue from healthy vessel lumen, i.e. a calcium segmentation threshold is computed automatically. Obtaining this threshold is a prerequisite for the following automatic detection of calcified plaques. Finally, the detected plaque positions can be used to generate the standard score values used in clinical practice.

6.2 Automatic Calcium Scoring

In order to deal with the high contrast variability of CTA data, a threshold for calcium segmentation has to be determined individually for each CT scan that is to be analyzed. This requires information about the intensity distribution within the data, which can be acquired by computing and analyzing the image histogram.


6.2.1 HU-Threshold Determination

However, using the full dataset histogram is not practical for the purpose of calcium threshold calculation. The ideal HU threshold for calcified plaque segmentation corresponds to the lowest intensity value that belongs to calcium but does not fall into the range of contrast agent or bone. Looking at different histograms of full datasets (Fig. 6.3) quickly reveals a lot of variation in the intensity distribution and peak values. Which of these peaks corresponds to the contrast agent distribution in the coronary arteries? Which to the bones? Experiments on the data have shown that this is strongly dependent on the dataset itself, the field of view of the scan and the contrast agent distribution. During the analysis of this problem it turned out that deriving a general rule for the detection of the segmentation threshold based on the full dataset histogram is very difficult, maybe even impossible. As it turns out, the previously generated vascular pre-processing tree can be exploited to calculate a valid segmentation threshold. Instead of creating a histogram based on all voxels contained within the dataset, a histogram H is computed based only on the intensity values of the voxels that are coincident with the points on the pre-processing tree. Hence, each histogram entry x corresponds to the total number of voxels with value x, whereas only voxels on the vascular tree are considered. The basic idea is that, when only the voxel values covered by the segmented tree are taken into account, the HU value at which the histogram function reaches its last maximum can be assumed to be equivalent to the mean value of healthy vessel lumen. Moreover, it can be assumed that the number of voxels belonging to calcified plaques is not significant in comparison to the number of voxels belonging to contrast agent. Since the number of centerline points generated during the preprocessing step varies, the histogram values have to be normalized with respect to the number of entries in order to generate valid results on different datasets. However, as the number of points contained in the pre-processing tree is rather small compared to the total number of possible histogram entries, the created data exhibits a high amount of noise.



Figure 6.3: Four histograms of four different CTA scans. In principle the histograms show the same structure but with a very different distribution of their peak values. Thus, they are not suitable for a segmentation threshold estimation.

This is shown in Figure 6.4. The noise can be reduced by smoothing the histogram curve with a simple 1D box filter, whereby the curve values are repeatedly averaged, yielding smoother results (see Fig. 6.5). The error introduced to the histogram counts by filtering is negligible, since the exact count of histogram entries at a specific intensity value is not important.



Figure 6.4: Four histograms of four different CTA scans corresponding to the histograms shown in Fig. 6.3. It can be seen that the intensity variation is much smaller and there is one distinct peak in the data. However, due to the low number of voxels used to generate the histogram, there is a high amount of noise in the curve data.

In order to find the optimal threshold value for the separation of vessel lumen and coronary calcium, the point on the x-axis of the histogram must be found where the descent after the last peak starts to flatten, i.e. where the derivative H′ of the histogram function approaches 0. This is reasonable since it is assumed that the width of the histogram area around the maximum peak covers the distribution of vascular lumen within the histogram function. Hence, the derivative of the histogram function is computed and its values are examined, starting at the maximum peak position, until they exceed a threshold ε. Based on samples of the ground-truth database, ε = −10⁻³ was empirically found to be the optimal value. Selecting this point yields the desired threshold value τ, which separates vascular lumen from calcified plaques; this is illustrated in Fig. 6.6.



Figure 6.5: Four histograms of four different CTA scans corresponding to the histograms shown in Fig. 6.3 and 6.4. After smoothing the histograms show a very clear curve with an obvious peak. Note, however, that local maxima and plateaus are still possible, as shown in d). This has to be taken into account during analysis.

Note that it is possible for the vessel tree histogram to have multiple local maxima. This situation can arise due to an uneven distribution of contrast agent in the vascular system, which occurs in particular if the time point for scanning has not been carefully selected. The same is possible for patients whose physiology diverges strongly from the norms dictating ideal values for contrast agent administration and scanning time. Hence, it is important to select the peak value representing the global maximum of the histogram curve in order to obtain correct results from the algorithm.


Figure 6.6: Illustration of the automatic threshold determination algorithm based on the vessel histogram. The maximum peak value is marked with the green dot. The determined threshold point is marked by the red dot. The line indicates the threshold value that separates the healthy vessel lumen from the calcified plaques.

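To make the procedure concrete, the following is a minimal numpy sketch of the threshold determination under the assumptions above; the function name, bin width and smoothing parameters are illustrative choices, not the original implementation, and ε has to be scaled to the chosen histogram normalization:

    import numpy as np

    def calcium_threshold(tree_hu, eps=-1e-3, box=15, passes=3):
        """Estimate the calcium segmentation threshold tau from the HU
        values sampled along the vascular pre-processing tree (sketch)."""
        lo, hi = int(tree_hu.min()), int(tree_hu.max())
        edges = np.arange(lo, hi + 2)                # 1 HU wide bins
        h, _ = np.histogram(tree_hu, bins=edges)
        h = h / h.sum()                              # normalize by entry count
        kernel = np.ones(box) / box                  # 1D box filter
        for _ in range(passes):                      # repeated averaging
            h = np.convolve(h, kernel, mode="same")
        peak = int(np.argmax(h))                     # global maximum = lumen mean
        dh = np.gradient(h)                          # derivative H'
        i = peak
        while i < len(dh) and dh[i] >= eps:          # step onto the descent
            i += 1
        while i < len(dh) and dh[i] < eps:           # follow it until it flattens
            i += 1
        return float(edges[min(i, len(edges) - 1)])  # tau in HU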

6.2.2 Detecting Calcifications

After the segmentation threshold has been determined, the pre-processing tree is used to detect the positions of the calcified lesions. Note that in this case the whole pre-processing tree is used instead of only the extracted major coronary arteries as in Chapter 5. This ensures that calcified plaques located within small branching vessels are also considered for score calculation. For the detection of the calcified lesions, the pre-processing tree is processed point by point, as it is safe to assume that, given a successful segmentation, each of these points is located approximately at the center of an artery. At each centerline point, the voxel neighborhood within a spherical radius of 2 mm is searched for voxels that lie above the previously computed threshold value.


The radius is chosen as 2 mm because the maximum diameter of a coronary artery is about 4–5 mm and the voxel resolution of a CTA scan is usually about 0.45 mm. This ensures that calcified plaques which exceed the threshold and are located within the vessel boundary are detected and marked during processing. That is, if a voxel is found to have an HU value > τ, it is marked as a possible calcium candidate. The resulting candidate points, together with the threshold τ, are then used as input seeds for a standard region growing algorithm, as indicated by the clinically established calcium scoring methods [HSS∗05, AJH∗90]. The result of the candidate selection on the pre-processing tree of a CTA dataset is shown in Fig. 6.7.

Figure 6.7: Example of the candidate selection on a pre-processing tree of a CTA dataset. The possible calcium plaque candidates are shown in red.
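A minimal sketch of this candidate search, assuming the centerline points are given in voxel coordinates, the volume is a 3D HU array, and the points lie away from the volume border; all names and signatures are illustrative:

    import numpy as np

    def find_candidates(centerline_pts, volume, spacing, tau, r_mm=2.0):
        """Collect voxels above tau within a 2 mm sphere around each
        centerline point of the pre-processing tree (sketch)."""
        r = np.ceil(r_mm / np.asarray(spacing)).astype(int)
        zz, yy, xx = np.mgrid[-r[0]:r[0]+1, -r[1]:r[1]+1, -r[2]:r[2]+1]
        sphere = ((zz * spacing[0])**2 + (yy * spacing[1])**2 +
                  (xx * spacing[2])**2) <= r_mm**2        # spherical mask
        seeds = []
        for p in centerline_pts:
            z, y, x = np.round(p).astype(int)
            patch = volume[z-r[0]:z+r[0]+1, y-r[1]:y+r[1]+1, x-r[2]:x+r[2]+1]
            for dz, dy, dx in zip(*np.nonzero(sphere & (patch > tau))):
                seeds.append((z - r[0] + dz, y - r[1] + dy, x - r[2] + dx))
        return seeds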

The candidate selection algorithm works on a per-voxel basis. Thus it is possible that single voxels with a high HU value, e.g. originating from image noise, are marked as plaque candidates. Furthermore, in the case of an erroneous pre-segmentation tree, large bright areas, e.g. the ventricles, may be marked as coronary calcium plaque. This can be avoided by a follow-up step analyzing the detected calcium locations.


For this, the original plaque positions together with the segmentation threshold are used as input to a standard region growing algorithm. The resulting segmentation mask is then used as input to a connected components analysis [ST88, DST92]. The connected components analysis facilitates the detection of 1-voxel plaques and excess-volume plaques (≥ 250 mm³), which indicate either image noise or a wrong pre-segmentation that would lead to flooding. Note that ignoring 1-voxel "plaques", even if they might indicate a real, beginning plaque instead of noise, is not an error with respect to the scoring result: the original formulation by Agatston et al. [AJH∗90] proposes to consider only lesions with a pixel area of ≥ 1 mm², which corresponds to at least five voxels at an isotropic xy-resolution of 0.49 mm × 0.49 mm. When applying these thresholds to the connected components analysis, the list of generated calcium markers is purged of erroneously detected plaques, most of which are usually single-voxel lesions. The resulting candidate list, now marking only significant lesions, is shown in Fig. 6.8.

Figure 6.8: Example of the result of the calcium plaque detection after connected components and volume analysis. The resulting position marks a calcified plaque (red). This can be used for a subsequent region growing step.
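A sketch of this purging step using scipy's connected components labeling; the voxel-count and volume limits follow the criteria above, while function and parameter names are illustrative:

    import numpy as np
    from scipy import ndimage

    def filter_lesions(mask, spacing, min_voxels=5, max_vol_mm3=250.0):
        """Remove 1-voxel noise lesions and flooded components from a
        binary calcium segmentation mask (sketch)."""
        labels, n = ndimage.label(mask)            # 3D connected components
        vox_vol = float(np.prod(spacing))          # volume per voxel in mm^3
        counts = ndimage.sum(mask, labels, range(1, n + 1))
        keep = np.zeros(n + 1, dtype=bool)
        for lab, cnt in enumerate(counts, start=1):
            keep[lab] = cnt >= min_voxels and cnt * vox_vol < max_vol_mm3
        return keep[labels]                        # cleaned mask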


Based on the generated marker list and the automatically determined segmentation threshold, the region growing process is repeated to yield the final calcium plaque segmentation mask. Using this mask, the standard calcium score values can be computed. The following scores are calculated based on the segmentation result:

1. The Agatston score is calculated as

   CS_{aga} = a_0 \cdot n_{vox} \cdot f(p) \cdot s_{corr} ,    (6.1)

   where a_0 is the area of one image pixel on the corresponding axial image slice, n_{vox} is the number of pixels per lesion, f(p) is a weight factor between 1 and 4, depending on the peak HU value of a single lesion [AJH∗90], and s_{corr} is a correction factor compensating for the different slice thickness in CTA data. Note that the Agatston score is computed on a per-slice basis, assuming a 3 mm slice spacing in its original formulation. Even though the weighting factors, which depend on HU values and pixel size, were determined empirically [AJH∗90], the Agatston score is the most relevant scoring factor for cardiac calcium risk stratification in clinical practice.

2. The calcium volume is calculated as

   CS_{vol} = a_0 \cdot n_{vox} \cdot s_{th} ,    (6.2)

   where s_{th} is the voxel thickness in mm. The resulting value represents the calcium burden in ml.

3. The calcium mass is calculated as

   CS_{mass} = CS_{vol} \cdot f_{calib} ,    (6.3)

   where f_{calib} is a scanner- and mode-dependent calibration factor that is determined by the CT scanner's system software. The resulting value represents the approximate calcium burden in mg.

The base scores are determined on a per-lesion basis. Per-vessel and overall calcium burden scores can be calculated by summing over all lesions.
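The per-lesion computation can be sketched as follows; the weight bands in agatston_weight are the commonly used 130/200/300/400 HU thresholds from the Agatston literature, and all names and parameters are illustrative assumptions rather than the original implementation:

    import numpy as np

    def agatston_weight(peak_hu):
        """Density weight f(p) from the lesion's peak HU value."""
        if peak_hu >= 400: return 4
        if peak_hu >= 300: return 3
        if peak_hu >= 200: return 2
        return 1

    def lesion_scores(lesion_mask, volume, a0, s_th, s_corr, f_calib):
        """Agatston, volume and mass scores for one segmented lesion
        (sketch; per-slice evaluation as in Eq. 6.1)."""
        aga = 0.0
        for z in range(lesion_mask.shape[0]):        # slice by slice
            n_vox = int(lesion_mask[z].sum())
            if n_vox:
                peak = float(volume[z][lesion_mask[z]].max())
                aga += a0 * n_vox * agatston_weight(peak) * s_corr
        cs_vol = a0 * int(lesion_mask.sum()) * s_th  # Eq. 6.2, in mm^3
        cs_mass = cs_vol * f_calib                   # Eq. 6.3, ~mg
        return aga, cs_vol, cs_mass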


6.3 Implementation

For the calcium scoring project, a comprehensive prototype software application has been created. Its main purpose is to facilitate clinical studies that try to relate traditional CAC scoring on native CT scans to calcium scores determined on CTA data. Therefore, the software provides side-by-side viewing and analysis of a native CT scan and its corresponding contrast-enhanced CTA scan. The radiologist can place manual calcium plaque markers on both datasets and set a scoring threshold for the CTA scan. Subsequently, the native markers can be related to the CTA markers and manual scoring can be carried out. As a result, calcium scoring values are obtained for each marked lesion on the datasets and correlation statistics are computed. A screenshot of the software in use is shown in Figure 6.9. With this software application, the ground-truth reference data was acquired that was used to evaluate the automatic calcium scoring method.

Figure 6.9: The cardiac calcium scoring prototype software CalcScore. The screenshot displays two cardiac CT datasets taken from the same patient. The left dataset is the native scan, the right dataset is the contrast enhanced CTA scan. Both datasets show the same, manually marked lesion on their corresponding slices. Below the datasets, the list of manually assigned lesions and their corresponding scoring results can be seen. The data acquired by the radiologist in this way can be stored for further use and automatic evaluations.


The automatic algorithm presented in the previous section has been integrated into this software prototype in order to facilitate easy evaluation and comparison of the manual and automatic results. Thereby, the manually annotated data can be used to directly evaluate the quality of the automatic scoring visually. Moreover, risk grouping according to clinical standards was implemented. These groupings are not hardcoded, however; all parameters are freely configurable by the radiologist using the software. As an additional feature, the software is capable of performing an unattended batch evaluation of the automatic calcium scoring algorithm using the previously stored manual data, i.e. the calcium plaque markers and the corresponding segmentation threshold. The results presented in the next section were generated using this batch evaluation. A screenshot of the software in automatic mode is shown in Figure 6.10.

Figure 6.10: The panel for the automatic scoring. Automatic scoring can be started by the click of a button. Additionally, the manually determined calcium positions and the corresponding segmentation threshold can be used to generate a manual scoring result that can be compared to the automatic result. The automatic segmentation result mask on the CTA data (right) is overlaid in red color. This facilitates easy manual evaluation of the segmentation result.

The MeVisLab network that implements the software prototype is shown in Figure 6.11. This network is divided into several parts that handle the manual segmentation and data generation.


The automatic calcium plaque segmentation algorithm is implemented as a macro module within this context. By ensuring that the whole software prototype is completely reentrant, the automatic algorithm could be integrated into the main application. This also greatly simplified the deployment of the application into its clinical setting at the Deutsches Herzzentrum München, where it is used extensively to provide data for clinical studies. The content of the AutoCAScoreStandaloneV2 module is shown in Figure 6.12.

Figure 6.11: The main network of the CalcScore prototype. It is comprised of several modules and macro modules that provide manual and automatic segmentation procedures.

The main parts of this module are the SCRCardiacSegmentation module, which generates the pre-processing tree; the GenCalcPositions module, which computes the segmentation threshold and generates the initial list of calcium plaque markers; and the CaScoreV2 module, which performs the connected components analysis, computes the calcium scores on the segmented lesion positions, and generates the final marker list used to display the detected calcium plaque locations.


Figure 6.12: The internal network of the automatic calcium plaque detection and segmentation algorithm.

6.4 Results

The automatic calcium plaque quantification algorithm has been evaluated on a comprehensive database of 53 CTA studies. Datasets exhibiting severe and unpredictable image artifacts, caused for example by strong cardiac motion or pacemaker leads, were excluded from the automatic evaluation (Fig. 6.13). Hence, 46 datasets were selected for the validation of the presented algorithm. All calcified lesions contained in the data were marked manually by a radiologist. Each dataset was manually assigned an individual HU threshold for a region-growing based segmentation of the coronary calcium, even if the data itself contained no calcified plaques; this was done in order to assess the performance of the threshold determination algorithm. The segmentation thresholds for all datasets in the database covered a range from 417 to 1041 HU. Subsequently, a region-growing based segmentation of the ground-truth data was carried out and the resulting segmentation masks were used to generate ground-truth reference calcium scores (Agatston score, calcium mass score and calcium volume score) for evaluation.



Figure 6.13: Example of two of the CTA scans that were rejected for algorithm evaluation due to massive imaging artifacts.


Finally, our method was applied in unattended batch mode on the same database. The automatically detected HU thresholds and calcium scores were then compared to the ground-truth data. The automatic thresholds were found to be very close to those selected manually: the mean absolute difference between the two values was about 72 HU, with a standard deviation of ±73.59 HU. Note that the manual thresholds were set by a single reader only; as a consequence, the intra-observer variability in threshold selection could not be quantified in this study. In 62% of the examined cases, the automatically detected segmentation threshold was lower than the segmentation threshold selected by the radiologist. However, no flooding occurred during region growing, which suggests that most of the manually determined thresholds were higher than necessary. A plot comparing the manually selected values to the automatically detected thresholds is shown in Fig. 6.14. The resulting correlations between manually and automatically generated scores, expressed as Spearman's ρ, and their respective 95% limits of agreement [BA99] are very high for all three scoring types, as shown in Table 6.1. Plots directly comparing volume, mass and Agatston scoring results are shown in Figure 6.15. During manual inspection of the resulting segmentation masks and detected calcium positions, it was found that the automatic algorithm was able to detect all of the manually marked calcified plaques contained in the data. Considering the broad cluster sizes used for risk-grouping of patients in current clinical practice [BAB∗06], the deviation of the values with respect to the expert's ground truth is not significant. Additionally, most of the deviations from the reference scores are caused by the automatically determined segmentation threshold providing a better fit to the data than the manually selected one. Thus, more voxels belonging to a calcified lesion are segmented, which yields an increased score value. Since no flooding occurred during the segmentation phase, it can be concluded that the algorithm is capable of selecting a near-optimal threshold for the inclusion of most voxels belonging to a calcified lesion. It is thus reasonable to assume that the resulting higher score reflects the real calcium burden more accurately.
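For reference, the statistics of Table 6.1 can be computed as in the following sketch; scipy's spearmanr provides Spearman's ρ, and the Bland-Altman limits of agreement follow [BA99] (names are illustrative):

    import numpy as np
    from scipy import stats

    def score_agreement(manual, auto):
        """Spearman's rho and Bland-Altman 95% limits of agreement
        between manual and automatic scores (sketch)."""
        rho, _ = stats.spearmanr(manual, auto)
        diff = np.asarray(auto, float) - np.asarray(manual, float)
        bias = diff.mean()                     # mean difference
        half_width = 1.96 * diff.std(ddof=1)   # limits: bias +/- 1.96 SD
        return rho, bias, half_width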


Figure 6.14: Comparison of manually and automatically generated HU thresholds on the test database (patient number and segmentation threshold on the x- and y-axis, respectively). The line corresponds to the manually selected thresholds. The database was sorted by increasing manual threshold value in order to improve the visual clarity of the plot.

Score Type       ρ       95% limits of agreement
Agatston Score   0.946   0.26 ± 29.78
Calcium Mass     0.951   0.41 ± 10.83
Calcium Volume   0.950   0.91 ± 22.85

Table 6.1: Standard correlation and limits of agreement between manual and automatic scoring.


Figure 6.15: Comparison of manually and automatically generated scores on the test database for (a) calcium mass, (b) calcium volume and (c) Agatston score (patient number and total score on the x- and y-axis, respectively). The line corresponds to the manual scoring result. The patients are sorted by increasing manually determined calcium score in order to improve the visual clarity of the plots and to facilitate easier comparison of the results.


CHAPTER 7

Coronary Soft-Plaque Detection and Quantification

Unlike calcified plaques, soft-plaques exhibit very low intensity values in CTA data. The intensity distribution of a soft-plaque is therefore very similar to the general soft-tissue contrast of CT. As a consequence, a direct segmentation of these plaques is not possible, in contrast to the segmentation of calcified plaques described in Chapter 6. It is, however, possible to detect and segment the vascular boundary region on cross-sections of vessel images using active contour models. By analyzing the properties of the segmented contours, e.g. their area and diameter, conclusions about the state of the vessel can be drawn. Hence, significant soft-plaques can be detected and their extent can be estimated. During the course of this research project, very little literature has been published on the detection of soft-plaques in CTA data. Renard et al. [RY08] present an adaptive region growing approach to segment the inner vessel lumen and the outer vessel wall; they then estimate soft-plaque position and extent by quantifying the difference between the effective cross-sectional areas of the inner vessel lumen and the outer wall. In contrast to the region growing approach, Lankton et al. [LSRT09] present a soft-plaque detection method based on localized active contours. Starting from a user-provided seed point, local energy constraints are minimized in order to segment the vessel and detect soft-plaques on the artery. Their method has been evaluated on 8 cardiac CTA datasets and delivered promising results. Both of these methods require the user to specify, either directly or indirectly, the approximate location of the plaque before segmentation. In contrast to these algorithms, the method examined in this thesis runs fully automatically on previously segmented centerlines of the coronary arteries. The presented approach is based on the extraction of vessel boundary contours using gradient vector flow snakes. Thereby, the shape of the examined vessel is reconstructed and pathologies can be detected along the course of the artery using the generated vessel hull.


7.1 Active Contour Models

Snakes, or active contours, are used extensively in computer vision and image processing applications [KWT88, LL93, MT96, TS93, TAMM04]. Simply put, active contours are curves that deform within images in order to recover desired object shapes [KWT88]. The deformation is controlled by the minimization of an energy functional that represents the active model. The basic snake is a controlled continuity spline that is deformed under the influence of external and internal forces. The internal spline forces impose piecewise smoothness constraints, whereas the external forces draw the curve towards image features. When the position of a snake curve is represented parametrically as c(s) = (x(s), y(s)), the resulting energy functional of the active contour can be written as [KWT88]

E_{snake} = \int_0^1 E_{int}(c(s)) + E_{ext}(c(s)) \, ds .    (7.1)

Here, E_{int} represents the internal and E_{ext} the external energy. The spline energy, i.e. the internal energy, is composed of a first- and a second-order term and is expressed as

E_{int}(c(s)) = \frac{1}{2} \left( \alpha(s) |c'(s)|^2 + \beta(s) |c''(s)|^2 \right) ,    (7.2)

where the first-order term makes the spline act like a string or a rubber band and is controlled by α(s), and the second-order term makes the spline act like a beam and is controlled by the parameter β(s). Adjusting α and β controls the relative importance of the first- and second-order terms. For example, if β is set to zero at some point, the snake curve can become second-order discontinuous and develop a corner [KWT88]. The external energy of the snake is usually derived from the image data. It can additionally be supplemented by a user-controlled constraint term E_{con}, i.e. E_{ext}(c(s)) = E_{image}(c(s)) + E_{con}(c(s)). However, for the present discussion, only the image-based term is taken into account, since the developed algorithm is required to run without any user input. The energy function should be designed such that it takes on its smallest values on the features of interest in the image, e.g. the boundary regions of vascular structures. This can be achieved in different ways. The simplest useful image functional is the image intensity itself, i.e. E^1_{ext}(x, y) = I(x, y), where I(x, y) is a 2-dimensional gray-level image and (x, y) is a position on the image grid [KWT88].


Other common external image functionals are the edge functional and the scale-space functional, expressed as

E^2_{ext}(x, y) = -|\nabla I(x, y)|^2 ,    (7.3)

E^3_{ext}(x, y) = -|\nabla (G_\sigma(x, y) * I(x, y))|^2 ,    (7.4)

where G_\sigma(x, y) is a 2-dimensional Gaussian function with standard deviation σ. Moreover, these functionals can be combined to yield a total image energy [KWT88], i.e.

E_{ext} = \omega_1 E^1_{ext} + \omega_2 E^2_{ext} + \omega_3 E^3_{ext} .    (7.5)

Thereby, a wide range of snake behaviors can be created simply by adjusting the weighting factors. However, these image forces can be problematic, especially in image regions containing concavities. Moreover, the initial snake curve has to be placed very close to the shape that is to be segmented. These drawbacks can be avoided by using an alternative external force called gradient vector flow, which is described in Section 7.1.2.

7.1.1 Solving the Energy Equation

At the minimum of Equation 7.1, the active contour must satisfy the Euler-Lagrange equation

\alpha c''(s) - \beta c''''(s) - \nabla E_{ext}(c(s)) = 0 .    (7.6)

Here, it is assumed that α(s) = α and β(s) = β are constant. This equation can also be considered a force balance equation, i.e.

F_{int}(c(s)) + F_{ext}(c(s)) = 0 .    (7.7)

The internal force in this equation opposes stretching and bending, while the external force pulls the snake towards the desired image features. The force balance equation can be solved by approximating the derivatives with finite differences [KWT88].


Thus, the energy functional of the snake can be expressed in a discrete formulation [KWT88]:

E^*_{snake} = \sum_{i=1}^{N} E_{int}(i) + E_{ext}(i) ,    (7.8)

where the snake is represented by N discrete curve elements which are calculated at a fixed step size h. Let c_i = (x_i, y_i) = (x(ih), y(ih)) be a discrete point on the curve. Consequently, E_{int}(i) can be approximated by

E_{int}(i) = \alpha_i \frac{|c_i - c_{i-1}|^2}{2h^2} + \beta_i \frac{|c_{i-1} - 2c_i + c_{i+1}|^2}{2h^4} ,    (7.9)

where c_0 = c_N. With f_x(i) = \partial E_{ext} / \partial x_i and f_y(i) = \partial E_{ext} / \partial y_i, the corresponding Euler equation can be written as [KWT88]

\alpha_i (c_i - c_{i-1}) - \alpha_{i+1} (c_{i+1} - c_i) + \beta_{i-1} (c_{i-2} - 2c_{i-1} + c_i) - 2\beta_i (c_{i-1} - 2c_i + c_{i+1}) + \beta_{i+1} (c_i - 2c_{i+1} + c_{i+2}) + (f_x(i), f_y(i)) = 0 .    (7.10)

The vector-valued equation can be expressed as

A x + f_x(x, y) = 0 ,    (7.11)

A y + f_y(x, y) = 0 ,    (7.12)

where A is a cyclic pentadiagonal banded matrix of the form

A = \begin{pmatrix}
c_1     & d_1 & e_1    &         &         & a_1     & b_1     \\
b_2     & c_2 & d_2    & e_2     &         &         & a_2     \\
a_3     & b_3 & c_3    & d_3     & e_3     &         &         \\
        & a_4 & b_4    & c_4     & d_4     & e_4     &         \\
        &     & \ddots & \ddots  & \ddots  & \ddots  & \ddots  \\
e_{N-1} &     &        & a_{N-1} & b_{N-1} & c_{N-1} & d_{N-1} \\
d_N     & e_N &        &         & a_N     & b_N     & c_N
\end{pmatrix} ,    (7.13)


with the coefficients

a_i = \frac{\beta_{i-1}}{h^4} ,
b_i = -\frac{2(\beta_i + \beta_{i-1})}{h^4} - \frac{\alpha_i}{h^2} ,
c_i = \frac{\beta_{i+1} + 4\beta_i + \beta_{i-1}}{h^4} + \frac{\alpha_{i+1} + \alpha_i}{h^2} ,
d_i = -\frac{2(\beta_{i+1} + \beta_i)}{h^4} - \frac{\alpha_{i+1}}{h^2} ,
e_i = \frac{\beta_{i+1}}{h^4} .    (7.14)

Thus, the values a_i, b_i, c_i, d_i and e_i correspond directly to the coefficients of the discrete variant of the Euler equation. In order to solve Equations 7.11 and 7.12 numerically, the right-hand sides of the equations are set equal to the product of a step size γ and the negative time derivatives of the left-hand sides [KWT88]. Thereby, f_x and f_y are assumed to be constant during a time step. The internal forces are completely specified by the banded matrix, hence the time derivative can be evaluated at time t rather than at time t − 1 [KWT88]. Consequently, given an initial snake contour curve at t = 0, the energy equation can be solved by iterating

x_t = (A + \gamma I)^{-1} (\gamma x_{t-1} - f_x(x_{t-1}, y_{t-1})) ,    (7.15)

y_t = (A + \gamma I)^{-1} (\gamma y_{t-1} - f_y(x_{t-1}, y_{t-1})) .    (7.16)

Since the matrix A + γI is also a pentadiagonal banded matrix, its inverse can be calculated by LU decomposition in O(n) time [KWT88].
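A minimal numpy sketch of this iteration, assuming constant α and β for brevity; a dense solve replaces the O(n) banded LU, and all names are illustrative:

    import numpy as np

    def build_A(N, alpha, beta, h=1.0):
        """Cyclic pentadiagonal stiffness matrix of Eqs. 7.13/7.14
        for constant alpha and beta (sketch)."""
        a = e = beta / h**4
        b = d = -4.0 * beta / h**4 - alpha / h**2
        c = 6.0 * beta / h**4 + 2.0 * alpha / h**2
        A = np.zeros((N, N))
        for i in range(N):
            A[i, i] = c
            A[i, (i - 1) % N] = b   # wrap-around indices close the curve
            A[i, (i + 1) % N] = d
            A[i, (i - 2) % N] = a
            A[i, (i + 2) % N] = e
        return A

    def snake_step(A, x, y, fx, fy, gamma):
        """One implicit step of Eqs. 7.15/7.16 (dense solve for clarity)."""
        M = A + gamma * np.eye(A.shape[0])
        return (np.linalg.solve(M, gamma * x - fx),
                np.linalg.solve(M, gamma * y - fy))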

7.1.2 Gradient Vector Flow as External Image Energy

Instead of being stated as a standard energy minimization problem, the solution of the snake curve is formulated as a force balance equation. As a consequence, different external forces have been proposed in order to improve the overall snake performance [XP97]. The major drawback of standard external forces such as the image gradient is that the force field generally exhibits zero magnitude in hom*ogeneous image regions. This is, however, an undesirable behavior for a static external image force.


Imagine a free particle as a single-point snake with no internal force applied. In the presence of an external image force, the free particle should be able to move to image regions of interest when placed anywhere within that external force field. With standard external forces, however, the particle must be placed very close to the region of interest in order to show this behavior. Xu and Prince proposed a solution to this general problem by replacing the traditional external force by a new external force field, called the gradient vector flow (GVF) field [XP97, XP98]. The GVF field has the advantage of not only pointing towards the desired object boundary when it is very close to the actual boundary, but also of varying smoothly over hom*ogeneous image regions. Consequently, the forces from the GVF field can capture the points of a snake curve from a long range, regardless of whether the snake curve lies inside or outside the object's boundary. Moreover, in contrast to traditional forces, the GVF field is capable of forcing the snake into concave boundary regions [XP97]. The gradient vector flow force field can be derived by defining an edge map f(x, y), which corresponds to one of the traditional external force fields. In the case of this thesis, f(x, y) = -E^3_{ext}(x, y) = |\nabla (G_\sigma(x, y) * I(x, y))|^2 was chosen. The gradient field ∇f has vectors pointing towards the image edges, but exhibits only a narrow capture range [XP97]. Moreover, in hom*ogeneous regions the image intensities I(x, y) are constant, thus ∇f is zero and no information about nearby or distant edges is available. Based on the edge map, the gradient vector flow field can be constructed. The GVF field is defined to be the vector field v(x, y) = (u(x, y), v(x, y)) that minimizes the energy functional [XP97, LY07]

E_{GVF} = \iint \mu (u_x^2 + u_y^2 + v_x^2 + v_y^2) + |\nabla f|^2 |v - \nabla f|^2 \, dx \, dy .    (7.17)

This formulation follows the principle of creating a smooth result in the absence of data. In particular, when |∇f| is small, the functional is dominated by the partial derivatives of the vector field. When |∇f| is large, the second term dominates the integrand, which reaches its minimum at v = ∇f. The parameter µ is a regularization term which determines the tradeoff between the two terms of the functional.


The GVF field can be calculated, similarly to the snake curve, by solving the following Euler-Lagrange equations:

\mu \nabla^2 u - (u - f_x)(f_x^2 + f_y^2) = 0 ,    (7.18)

\mu \nabla^2 v - (v - f_y)(f_x^2 + f_y^2) = 0 ,    (7.19)

where ∇² is the Laplacian operator. These two equations can be solved by treating u and v as functions of time t and iterating

u_t(x, y, t) = \mu \nabla^2 u(x, y, t) - (u(x, y, t) - f_x(x, y)) \cdot (f_x(x, y)^2 + f_y(x, y)^2) ,    (7.20)

v_t(x, y, t) = \mu \nabla^2 v(x, y, t) - (v(x, y, t) - f_y(x, y)) \cdot (f_x(x, y)^2 + f_y(x, y)^2) .    (7.21)

After v(x, y) is computed, it can be used to replace the external force -∇E_{ext} in the snake equation. The active contour model formulation thus becomes

\alpha c''(s) - \beta c''''(s) + v = 0 ,    (7.24)

and is solved analogously to the traditional snake formulation, i.e. by discretization and iterative solution. When the traditional external force of the snake formulation is replaced by the GVF field force, the snake is also termed a GVF snake. The GVF snake allows a flexible initialization and encourages convergence to boundary concavities. This is especially interesting when extracting contours for plaque detection, as plaques on CTA vessels can exhibit any shape. Another common problem for vessel boundary extraction are branching vessels, which tend to distract standard contouring methods and thus yield massive outliers. In these cases, the GVF snake yields smoother results than traditional snakes, which makes it an ideal candidate for vessel boundary extraction [TAMM04, XP98, LY07].
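The GVF iteration of Eqs. 7.20/7.21 can be sketched in a few lines of numpy; the time step and iteration count are illustrative assumptions, and stability requires dt to be small enough relative to µ:

    import numpy as np

    def gvf(f, mu=0.2, iters=200, dt=0.25):
        """Gradient vector flow field of an edge map f via explicit
        time stepping of Eqs. 7.20/7.21 (sketch)."""
        fy, fx = np.gradient(f)           # edge-map gradient
        u, v = fx.copy(), fy.copy()       # initialize with grad f
        mag2 = fx**2 + fy**2
        def lap(w):                       # 5-point Laplacian, edge padding
            p = np.pad(w, 1, mode="edge")
            return (p[:-2, 1:-1] + p[2:, 1:-1] +
                    p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * w)
        for _ in range(iters):
            u += dt * (mu * lap(u) - (u - fx) * mag2)
            v += dt * (mu * lap(v) - (v - fy) * mag2)
        return u, v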

7.2 Detection of Soft-Plaques

In order to perform an automatic soft-plaque detection on a CTA dataset, the three major coronary arteries are extracted and labeled with the methods presented in Chapter 4. Subsequently, the generated vessel centerlines can be used to create 2-dimensional multi-planar reformations (MPR) of the image data [PB07, Pra07].


The calculated plane segments are perpendicular to the centerline, as shown in Fig. 7.1. This is done for each centerline point in order to create an MPR image stack of the corresponding vessel. In a subsequent step, the GVF field is calculated for each slice of this stack. In order to decrease the computation time and to reduce the number of snake-distracting image elements in the input data, the algorithm from Chapter 6 is adapted to generate a threshold for a coarse image segmentation. Hence, the vessel histogram is created as described in the previous chapter. Instead of detecting the separation point between vessel lumen and calcium plaque, however, the start of the ascent of the histogram curve is detected. The corresponding threshold value represents the minimum HU value at which vascular lumen can occur in the image. The detected threshold value is then used to create a segmentation mask for a simple threshold-based image segmentation [Pra07], which is responsible for the removal of undesired tissue from the CT image. This is important, since the low soft-tissue contrast of CT yields a high gradient variation in the image, which leads to a very unsteady GVF field with a negative overall influence on the resulting snake curve. An example of the GVF field created on a slice image of a vessel is shown in Figure 7.2. In subfigure b), the GVF field is created on the original image; subfigure c) shows the GVF field as it is created after performing the segmentation. The unsteady field that arises when the segmentation step is not applied can clearly be seen: various whirls are produced by the noisy soft-tissue contrast. If the snake curve were placed within such an unsteady field, the curve would be distracted from the vessel boundary and the algorithm would yield a wrong segmentation result. In order to segment the boundary of the vessel in the slice image, an initial contour curve is created at the centerline point on the 2D projection image of the vessel. This initial contour is simply a circle of radius R consisting of N discrete sample elements; R and N are the inputs to the algorithm. Setting an initial radius not greater than 2.5 mm is reasonable, since the diameter of a coronary artery usually does not exceed 5 mm at its widest point [Led05]. Note that due to noise in the centerline segmentation, the center point does not always correspond to the exact center of the coronary artery. This is, however, not a problem, since these small deviations are fully compensated by the GVF field attracting the contour curve towards the vessel boundary.



Figure 7.1: Planar reformation of the image data along the vessel centerline. Subfigure a) shows the reformation plane within the 3D context of the vessel centerline. In subfigure b) the corresponding 2D slice view is shown. The vessel lumen of the coronary artery is located at the center of the image plane.



Figure 7.2: The GVF field that is calculated on a MPR slice of a segmented coronary artery centerline. In subfigure a) the original slice image is shown. Subfigure b) shows the computed GVF field which is created from the whole image data. The field is unsteady and shows a lot of whirls which are created due to the noisy soft-tissue contrast of CT. Subfigure c) shows the GVF field that is calculated from the image after performing a threshold based segmentation. The field is much clearer and shows a concise shape towards the vessel lumen boundary.


Subsequently, the snake energy functional is minimized in order to fit the initial contour curve to the vessel boundary. Due to the GVF field used as the external energy function, the contour curve converges quite quickly to the desired boundary; usually only 1–2 iterations are required to achieve a reasonably good segmentation. In order to reduce the probability of an erroneous segmentation, the procedure is repeated until the contour does not change substantially for at least 5 iterations. This gives the algorithm the possibility to deal with local minima that might capture the curve too early. The maximum number of iterations performed per contour curve is set to 25. An example is shown in Figure 7.3. This process is then repeated over the whole course of the extracted vessel centerline. The result is a stack of contour curves which can be analyzed in order to detect pathologies. The stack of contours in a 3D visualization is shown in Figure 7.4. Once all contour curves have been determined on the vessel, their characteristics can be calculated. Specifically, the area of the closed contour curve and the average radius are calculated. This is achieved by determining the center ⃗c_i of the closed curve of slice i as the average of the individual points of the boundary polygon, i.e.

\vec{c} = \frac{1}{N} \sum_{j=0}^{N} \vec{p}_j ,    (7.25)

where ⃗p_j is the j-th contour curve point and N is the number of contour sample points. Using the calculated center point, the area A is calculated as the sum of the areas of the triangles in the triangle fan formed by the center point and pairs of adjacent contour points. Furthermore, the average radius of the closed contour is computed as the mean of the maximum and minimum radius within the closed polygon. By calculating the area and the average radius for all contours on a vessel, an area and a radius distribution graph over the whole course of the vessel is obtained. Note that since the segmentation is usually noisy, the resulting distribution curves have to be smoothed with a 1D box filter prior to analysis. This noisiness is caused primarily by the large shape variation of the coronary artery itself. Since vessels are blood-filled, hollow soft tissue, their shape is not fixed and their cross-section contour can vary strongly, depending on the current blood flow and heart state.
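These per-contour measures can be sketched as follows, assuming the snake samples of one slice are given as an (N, 2) numpy array; the function name is illustrative:

    import numpy as np

    def contour_metrics(points):
        """Triangle-fan area and average radius of one closed contour
        (sketch; average radius = mean of max and min radius)."""
        c = points.mean(axis=0)                  # centroid as in Eq. 7.25
        r = np.linalg.norm(points - c, axis=1)
        area = 0.0
        n = len(points)
        for j in range(n):                       # fan of triangles around c
            p = points[j] - c
            q = points[(j + 1) % n] - c
            area += 0.5 * abs(p[0] * q[1] - p[1] * q[0])
        return area, 0.5 * (r.max() + r.min())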



Figure 7.3: Vessel boundary segmentation using GVF snakes. In subimage a) the initial contour is shown. Subimage b) shows the segmentation result after a single iteration.



Figure 7.4: Contour generation on the whole vessel. Subimage a) shows the resulting contour stack within the context of the heart muscle. Subfigure b) shows a close-up of the corresponding contour stack. In this particular example, a pathology which could be a soft-plaque in the distal part of the vessel is easily visible even without further algorithmic aid.


Consequently, minor changes to the vascular shape caused by a possible pathology cannot be reliably detected by the presented method. An example of the radius distribution curve obtained on a coronary artery is shown in Figure 7.5. Since the centerline segmentation usually starts at the ostium, the MPR slice stack includes parts of the aorta in its first slices. These slices do not yield valid contours when processed by the algorithm. However, this can easily be compensated for by simply ignoring the segmentation results for the first 1 cm after the start of the centerline. As the figure shows, major soft-plaques that substantially reduce the vessel lumen yield a significant negative peak in the resulting distribution curve.

Figure 7.5: Example of the average radius distribution graph obtained on a coronary artery using the GVF snake contours. The vessel centerline is drawn in red, the radius distribution graph in green. Note the massive drop in radius at the clearly visible soft-plaque in the distal vessel part. At the proximal part of the vessel near the ostium, a narrowing is also visible in the CT and is reflected by a drop in the radius curve. This might be a soft-plaque, but it is not pronounced enough to be detected automatically by the algorithm.
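A minimal sketch of the contour measurements and the box-filter smoothing described above, assuming each slice contour is available as an (N, 2) point array; the helper names are illustrative, not the actual implementation:

```python
import numpy as np

def contour_metrics(points):
    """Area and average radius of a closed contour polygon.

    The center is the mean of the vertices (Eq. 7.25); the area is the
    sum of the triangle areas of the fan spanned by the center and
    adjacent vertex pairs; the average radius is the mean of the
    minimum and maximum vertex distance to the center.
    """
    pts = np.asarray(points, dtype=np.float64)
    center = pts.mean(axis=0)                       # Eq. (7.25)
    a = pts - center
    b = np.roll(pts, -1, axis=0) - center           # next vertex, wrapped
    cross = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]   # fan triangle z-cross
    area = 0.5 * np.abs(cross).sum()
    radii = np.linalg.norm(a, axis=1)
    avg_radius = 0.5 * (radii.min() + radii.max())
    return area, avg_radius

def smooth(curve, width=5):
    """1D box filter used to smooth the noisy distribution curves."""
    kernel = np.ones(width) / width
    return np.convolve(curve, kernel, mode="same")
```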

These negative peaks, which are characteristic of the presence of a soft-plaque, can be detected automatically by scanning the curve data point by point. The area and the radius of the current slice are compared to the slices within a reference frame, which contains the current slice and 0.5 cm before and after it. If the percentage difference between the current value and the surrounding area or radius values exceeds a previously defined threshold, the corresponding segment is marked as a soft-plaque. Experiments on the example data have shown that a threshold of 35% detects all significant stenoses without yielding too many false positive results. One disadvantage of this approach is that very small variations in the vessel that might be plaque are not detected.


However, as clinical studies show, the ability of current CT to depict very small soft-plaques is questionable in itself [Ach07, KDK∗08, Ach08]. Based on the curve computations, the extent of the detected soft-plaque can be approximately quantified as the ratio of the radius at the most narrow point of the vessel lumen, i.e. the plaque location, to the mean of the widest radii of the reference frame. This yields a value $S$ between 0 and 1 that is inversely proportional to the narrowing of the vessel lumen caused by the plaque. The extent can thus be quantified by a soft-plaque score $P_s$, expressed in percent and calculated as $P_s = (1 - S) \cdot 100$. However, it is important to state that this score value merely represents a radius reduction in percent, calculated from geometric observations after contour extraction; a sketch of the corresponding scan is given below. Moreover, it has to be taken into account that at the very center of a strong soft-plaque, the resulting GVF field is very noisy and the resulting contour curve is usually not smooth (Fig. 7.6). Thus, the resulting score value has, at the time of this writing, no clinical significance.
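A minimal sketch of this point-by-point scan, combining the 35% threshold with the score $P_s = (1 - S) \cdot 100$; here the widest radius of the reference frame stands in for the "mean of the widest radii", and all names and the spacing handling are illustrative assumptions:

```python
import numpy as np

def detect_soft_plaques(radii, slice_spacing_cm, threshold=0.35,
                        half_window_cm=0.5, skip_cm=1.0):
    """Scan a smoothed radius distribution for soft-plaque candidates.

    Each slice is compared against a reference frame of +/- 0.5 cm; a
    relative radius drop above `threshold` is flagged. The first
    centimeter (aortic slices near the ostium) is skipped. Returns
    (slice index, Ps) pairs with Ps = (1 - S) * 100.
    """
    r = np.asarray(radii, dtype=np.float64)
    w = max(1, int(round(half_window_cm / slice_spacing_cm)))
    start = max(int(round(skip_cm / slice_spacing_cm)), w)
    findings = []
    for i in range(start, len(r) - w):
        reference = r[i - w:i + w + 1].max()   # widest radius in the frame
        if reference <= 0.0:
            continue
        S = r[i] / reference                   # lumen ratio between 0 and 1
        if 1.0 - S > threshold:
            findings.append((i, (1.0 - S) * 100.0))
    return findings
```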


Figure 7.6: Close-up of the contour at a soft-plaque. Subfigure a) shows a cut-out of the vessel CPR at the soft-plaque. Subfigure b) displays the contour curve at the location of the red marker in the CPR image. It can be seen that in the vicinity of a plaque the contour curve is very noisy.

Nevertheless, the presented algorithm detects soft-plaques reliably and provides at least an estimate of the vessel lumen reduction caused by the pathology. In order to establish a meaningful clinical risk score similar to the calcium scoring values, the developed software product has to be evaluated in long-term clinical studies that can prove the validity of such a score.


7.3 Implementation

A prototype software has been created in order to facilitate the evaluation of the algorithm. To this end, the GVF snake method has been integrated into the automatic branch detection software presented in Chapter 4. After the vessel branches have been segmented, they are automatically processed by the GVF snake algorithm, followed by the radius estimation and the soft-plaque detection. Note that the parameters of the snake algorithm are configured such that calcified plaques are enclosed within the resulting contour. This is required because segmenting calcified plaque would lead to wrong results for the soft-plaque detection; calcified plaques can be segmented and quantified with the algorithm presented in Chapter 6. A screenshot of the software is shown in Figure 7.7.

Figure 7.7: The software prototype for soft-plaque quantification based on the software for automatic branch labeling. The three major coronary arteries are processed automatically and no user interaction is required.

As with the other software products presented in this thesis, the prototype implementation requires very little user interaction. To use the software, the user simply has to load a CTA dataset. The segmentation of the coronary arteries and the soft-plaque detection then run fully automatically.


Once the algorithm has finished processing, buttons allow the user to cycle through all detected soft-plaque lesions, if any are present in the data. Additionally, the CPR views of the coronary arteries with plaque markers can be displayed in order to visually assess the results (Fig. 7.8).

Figure 7.8: The software prototype for soft-plaque quantification displaying the CPR of the RCA artery. The detected soft-plaque is marked with the yellow marker. The red marker indicates the current position of the slice view (lower left image).

The core parts of the implementation are the MeVisLab modules SMSGVF and SMSSnake. These are responsible for computing the GVF field and applying it as the external image force to the snake curve. A screenshot of the underlying network that drives the prototype software is shown in Figure 7.9.
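The MeVisLab modules themselves are not reproduced here. As a minimal NumPy sketch, the GVF iteration of Xu and Prince [XP98] that such a module has to carry out can be written as follows; the parameter values are illustrative:

```python
import numpy as np

def gvf(edge_map, mu=0.2, dt=0.1, iterations=80):
    """Gradient vector flow (GVF) field of a 2D edge map [XP98].

    Explicit Euler iteration of
        u_t = mu * laplace(u) - (fx^2 + fy^2) * (u - fx)
        v_t = mu * laplace(v) - (fx^2 + fy^2) * (v - fy),
    where (fx, fy) is the gradient of the edge map.
    """
    f = edge_map.astype(np.float64)
    fy, fx = np.gradient(f)              # np.gradient returns (d/dy, d/dx)
    mag2 = fx * fx + fy * fy             # squared gradient magnitude
    u, v = fx.copy(), fy.copy()          # initialize with the gradient

    def laplace(a):                      # 4-neighbour Laplacian; periodic
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +   # borders for brevity
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

    for _ in range(iterations):
        u += dt * (mu * laplace(u) - mag2 * (u - fx))
        v += dt * (mu * laplace(v) - mag2 * (v - fy))
    return u, v
```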

7.4 Results

The GVF snake based algorithm for soft-plaque detection has been evaluated on 35 distinct soft-plaques found in 21 coronary arteries from 16 CTA scans. The coronary centerlines were segmented using the automatic segmentation approach presented in Chapter 4. Subsequently, visible plaques contained within the arteries were marked manually for later evaluation of the algorithm.


Figure 7.9: The network driving the software prototype for soft-plaque quantification.

Then, the automatic soft-plaque analysis algorithm was applied to the CT datasets. Of the 35 soft-plaques, 33 were detected correctly with respect to their position, corresponding to a detection rate of 94.3%. The two missed plaques were not marked by the algorithm because their lumen narrowing did not exceed the proposed threshold value of 35%. Additionally, within the 21 coronary arteries, 5 plaques were detected as false positives, which on the testing data corresponds to a false positive rate of about 12%. However, the false positives occurred on only two datasets, both of which exhibited very noisy data. Due to the noise contained within these datasets and the resulting unclear vessel lumen, the vessel contour curves and thus the radius distributions were also very noisy. This could not be compensated for by the smoothing operations and therefore led to wrong detection results. The processing times for the analysis of a full coronary tree are quite low: the tree generation, GVF field computation, snake curve generation and analysis together require about 30–40 seconds. As with the automatic calcium plaque detection algorithm, the majority of the processing time is spent on the generation of the coronary tree segmentation.


The soft-plaque detection itself takes about 5–10 seconds, depending on the length of the centerline that has to be processed. Additionally, the automatic algorithm can be integrated into the standard pre-processing pipeline of the CT software vendors. Since this algorithm is also based on the segmentation of the coronary tree, the method can be combined with the plaque quantification algorithm presented in the previous chapter. Thereby, once the data is available at the clinical workstation, a comprehensive plaque analysis can be presented to the radiologist. This can provide substantial aid to the clinician in creating a faster and more reliable diagnosis. In general, the results delivered by the soft-plaque detection are very accurate, as shown in Figure 7.10, which presents three different results created by the algorithm. It can be seen that the marker is placed at the center of the plaque, which usually represents the position of the smallest diameter of the coronary artery.



Figure 7.10: Examples of soft-plaque detection results on different coronary artery CPRs. The soft-plaques on the coronary arteries are clearly visible; the detected lesions are automatically marked with the yellow marker. With the software prototype, the user can cycle through all detected plaques and examine each plaque and its surrounding tissue in order to create a diagnosis.

PART III

Summary


CHAPTER 8

Discussion

In the following sections, each of the presented automatic algorithms for plaque detection on CTA data is discussed on its own. Even though the presented methods can be used as standalone algorithms, they could benefit from combining their results. For example, the learning-based stenosis detection approach could be used for an initial, fast screening of the data. If pathologies are detected, the calcium plaque quantification or the soft-plaque quantification algorithm can be run on the images as required. Thereby, the results obtained from the learning-based approach can be augmented with additional information and wrong outputs can be corrected. The same approach could also be used to establish an online learning method in an effort to improve the hypothesis used by the classification algorithm. Additionally, using all of the presented methods for data analysis simultaneously and combining their outcomes can increase the overall reliability of the generated results. This is similar to having a manual diagnosis checked by a second reader of the data. Since the application area of CAD methods is critical and false negative results have to be avoided, it is reasonable to let the outcomes of different, independent algorithms increase the overall confidence in the generated data. Consequently, the thorough analysis of the different algorithms and the integration of their results into a single diagnosis estimate should be a priority for future research. The possible and desirable improvement of the presented results should also be kept in mind during the following discussion of the standalone methods.

8.1 Learning-Based Stenosis Detection

The first method presented in this thesis for automatic stenosis detection was a learning-based approach. In order to capture significant properties of different types of stenosis in CTA data for training and classification purposes, a novel approach for feature value extraction on vascular structures has been introduced.


By extending this feature extraction approach with a multi-scale method, it was possible to implement a reliable, automatic detection method for coronary artery stenosis. The algorithm has been evaluated on real clinical datasets and delivered high-quality, accurate detection results. Additionally, the presented method implements a boosting framework for training and classification purposes. This permits a flexible investigation of varying weak- and strong-learning algorithms without having to change the basic stenosis detection logic; a minimal sketch of such a pluggable boosting setup is given at the end of this section. As a drawback of the current setup, the relatively limited database reduces the statistical significance of the evaluated results. Currently, there are about six times as many calcified lesions as soft-plaques in the evaluation database; in this sense, the high detection rate for soft-plaques has to be put into context. Hence, a major concern is to extend the training and evaluation databases in order to improve the robustness and confirm the quality of the algorithm. Consequently, a clinical evaluation under real-world conditions should be carried out in the future. The extension of the training database is also expected to increase the overall detection rate and quality, since a wider distribution of stenosis configurations is captured by the learning process. Another problem is that the class selection method tends to produce too many false positives. This originates from the extensive multi-scale vessel representation and the worst-case selection method employed. However, the main focus of this project was to minimize false negative results. Moreover, the false positive rate is within a range that allows clinical application, as diagnosis performance is improved on the majority of cases. Overall, it could be shown that the presented methods are adequate for coronary computer aided diagnosis. In the current setup, the algorithm could already be used for diagnostic assistance within the scope of a clinical evaluation. By extending the training database and increasing the overall robustness of the approach, the presented technique can be capable of drastically reducing diagnosis times for coronary artery disease analysis. Thereby, the value of CT as an alternative diagnostic method for coronary artery disease could be further increased. The learning-based plaque detection algorithm has been integrated into a clinical diagnosis software prototype.


In conjunction with the automatic vessel tree generation algorithm, the software is capable of generating diagnosis support data without user interaction. This is especially valuable as the application of the software permits quickly ruling out the presence of significant stenosis in the data. The low false negative rate of only about 3% on 110 datasets when performing a whole-vessel classification is very good. Consequently, as the results obtained on the evaluation database are clinically relevant, extensive evaluation within a clinical setting should be started in the near future in order to assess not only the accuracy of the algorithm but also the possible improvements in diagnosis performance. Nevertheless, the presented methods offer room for further research on the application of learning-based algorithms to computer aided diagnosis. As already stated, major tasks for future work should include an extension of the learning and evaluation databases. Furthermore, a thorough analysis of the feature values that have the most impact on the classification result should be performed. A detailed examination of different classifiers and weak learning algorithms and their influence on the quality of the detection rates could also be valuable with respect to the applicability of the method. Another possible point of improvement would be the extension of the learning-based method by an on-line learning approach: misclassifications would be marked by the radiologist during data examination and could thus lead to a direct improvement of future results.
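The thesis implementation is not reproduced here; the following self-contained sketch of discrete AdaBoost with a pluggable weak learner merely illustrates the flexibility mentioned above, with decision stumps as an example weak learner and all names being illustrative:

```python
import numpy as np

def stump_fit(X, y, w):
    """Exhaustively fit the best weighted decision stump; y in {-1, +1}."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best[1:]                      # (feature, threshold, sign)

def stump_predict(h, X):
    j, t, s = h
    return np.where(X[:, j] > t, s, -s)

def adaboost(X, y, weak_fit, weak_predict, rounds=50):
    """Discrete AdaBoost; the weak learner is passed in as a pair of
    functions, so it can be swapped without touching the boosting logic."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        h = weak_fit(X, y, w)
        pred = weak_predict(h, X)
        err = max(w[pred != y].sum(), 1e-12)
        if err >= 0.5:                   # no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / err)
        ensemble.append((alpha, h))
        w *= np.exp(-alpha * y * pred)   # re-weight towards hard samples
        w /= w.sum()
    return ensemble

def strong_predict(ensemble, X, weak_predict):
    votes = sum(a * weak_predict(h, X) for a, h in ensemble)
    return np.sign(votes)
```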

8.2 Calcium-Plaque Detection and Quantification

The next method presented in this thesis was an algorithm for the fully automatic detection, segmentation and quantification of coronary calcium-plaque on CTA data. The evaluation results have shown that the method is adequate for coronary calcium scoring within a clinical context when compared to results obtained by a radiological expert. Considering the broad cluster sizes used for risk-grouping of patients in current clinical practice, the deviation of the result values from the expert's ground truth is not significant. Additionally, most of the deviations from the reference scores are caused by the automatically determined segmentation threshold providing a better fit to the data than the manually selected one. Hence, more voxels belonging to a calcified lesion are segmented, yielding an increased score value. However, since no flooding occurred during the segmentation phase, it can be concluded that the algorithm is capable of selecting the optimal threshold for the inclusion of most voxels belonging to a calcified lesion.


It is thus reasonable to assume that the resulting higher score reflects the real calcium burden more accurately. Moreover, since the results can be reproduced exactly, a major advantage of the presented approach is that it is not susceptible to the intra- and inter-observer variability prevalent in manual scoring. In practice, the parameter values for determining the segmentation threshold – which were selected empirically once by manual inspection of sample histograms – proved to be sufficiently robust for the intended application, and no case could be found where this simple choice failed. However, the quality of the presented method could probably be improved by using a more sophisticated approach to determine the associated parameter values, such as expectation maximization (EM) on the histogram data; a minimal sketch of this idea is given below. A direct comparison of the presented approach to the results of previous works would not be adequate, as almost all of them require manual selection of the plaque positions or manual segmentation of the coronary arteries, while others require a native CT scan in addition to a contrast-enhanced CT scan, which could lead to an avoidable increase of radiation exposure for the patient. Although the presented approach runs fully automatically and without any user interaction, no calcified plaques that were marked by the radiologist in the ground-truth database were missed by the algorithm. This underlines the reliability of the applied method. The main drawback of the evaluation is that it was not possible to integrate a measure of intra-observer variability, as the ground truth was created by only a single radiologist. Carrying out a study including several observers is expected to yield a wider variability within the ground-truth database, thus putting the presented results into context. Performing such a study should be a goal for the immediate future in order to assess the quality of the approach in more detail. Since the presented method can be integrated into the standard pre-processing pipeline carried out by the software components provided by the CT vendors, no notable time delay occurs for the diagnosing radiologist. Hence, diagnostic data can be provided as soon as the image is available at the clinical workstation, imposing no extra workload beyond verifying the presented data. This could lead to a significant reduction of overall diagnosis time in clinical practice, which is highly desirable when dealing with CTA data.
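As a minimal sketch of the EM suggestion above (not the thesis implementation), a two-component Gaussian mixture could be fitted to the HU samples along the segmented vessel tree, placing the threshold between the contrast-enhanced lumen and the calcium component; scikit-learn is assumed here purely for brevity:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def em_threshold(hu_values):
    """Estimate a calcium segmentation threshold via EM.

    Fits a two-component Gaussian mixture to the HU samples
    (contrast-enhanced lumen vs. calcified plaque) and places the
    threshold at the midpoint between the two component means; a
    likelihood-based cut would be a possible refinement.
    """
    X = np.asarray(hu_values, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    lo, hi = np.sort(gmm.means_.ravel())
    return 0.5 * (lo + hi)
```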


The study performed by Glodny et al. [GHT∗09] shows that a calcium volume score can be obtained from CTAs that correlates strongly with the traditional Agatston score. The intra- and inter-observer variability in manual segmentation was found to be very small, but present. They also concluded that the coronary calcium load is underestimated when using contrast-enhanced data. However, the study also attests that scores obtained on CTAs could possibly replace the traditional scores. Hadamitzky et al. [HFM∗09] reach about the same conclusions with respect to the prognostic value of CTA scoring, especially for low-risk patients. The major critique of scoring on CTA data was that the score could not be determined semi-automatically to the same extent as the traditional Agatston score, due to the high contrast variation. As a direct consequence, this leads to increased manual effort and thus increased diagnosis times. This limitation, however, can be compensated by using the algorithm presented in this thesis. Overall, one of the major burdens of studies on the applicability of CTA scoring for coronary risk stratification, the increased manual effort, can be avoided by applying the presented algorithm. Moreover, should those studies finally verify the clinical value of CTA calcium scores, the presented method could become a valuable diagnostic tool.

8.3 Soft-Plaque Detection and Quantification

The final approach presented in this thesis was an automatic soft-plaque detection algorithm based on the application of gradient vector flow snakes. The results of the algorithm are promising: all of the significant stenoses in the test database could be detected by the method. The two missed plaques in the evaluation database that had been marked by the radiologist did not significantly reduce the vascular lumen. A more severe drawback of the evaluation is, however, that the soft-plaque database is very small. There were only 35 plaques available for algorithm assessment, which reduces the statistical significance of the overall results. A major point for future work would thus be to increase the database and extend the evaluation drastically, preferably within a clinical setting. This would be an enormous aid in providing a more concise analysis of the results.


A limitation of the algorithm is that, with the current parameter settings, comparatively many false positive results are produced. One major reason for this behavior is that the coronary arteries are subject to strong deformation within the course of a heartbeat. As a consequence, the radius distribution curve used to detect the soft-plaques has a high variation which cannot be fully compensated by smoothing the data. Another reason is that the snake curves resulting from a particular segmentation are in general very parameter dependent. The set of parameters used for the detection results presented in this thesis was found empirically and delivered good results; a closer examination should, however, be part of future work. Examining methods for the automatic estimation of the optimal parameter setting could also prove very worthwhile. In order to put the results into context, it has to be stated that soft-plaque detection on CTA data is a relatively new topic. The previous works by Renard et al. [RY08] and Lankton et al. [LSRT09] use methods similar to the approach presented in this thesis. The initial results are all very promising; however, their clinical relevance has yet to be demonstrated. The study of Knollmann et al. [KDK∗08] analyzed the quality of CTA with respect to plaque measurements. They compared CTA results to histological specimens in a post-mortem investigation of 30 autopsy cases. Their major conclusions are that CT overestimates plaque area but underestimates lipid cores if present. Moreover, they found that CT measurements were problematic with respect to reproducibility. Overall, they concluded that CTA cannot reliably detect vulnerable plaques and has problems identifying smaller non-calcified plaque areas. On larger plaques, however, there is a correlation between CT and histologic area measurements. Even though the spatial and temporal resolution of CT is high, it is thus not yet high enough for the reliable detection of difficult plaque situations. This is, however, expected to change with future technological evolution. In summary, soft-plaque detection on CTA data is even more difficult than the detection of calcified plaques. Thus, automatic methods that aid the radiologist in identifying a plaque and estimating its clinical significance are becoming more and more important. The method presented in this thesis can thus be a good starting point for further research on the topic and for clinical evaluation of the results. The algorithm is fast and easy to use. It integrates well with the existing image processing and analysis pipeline and does not require user input in order to generate results that can be analyzed.


Consequently, the presented method and the associated software prototype can aid in performing clinical studies on the applicability of CTA for soft-plaque detection and quantification. The presented results have shown that significant plaques can be detected with respect to their position. Further studies and analyses have to be performed in order to assess the quality of the quantification approach. This could also lead to the derivation of a clinically relevant soft-plaque score which could prove as valuable for the risk assessment of coronary artery disease as the cardiac calcium score is today. Moreover, as the image quality of CT increases, the presented algorithms will become more valuable, since their implementation is expected to scale well with imaging improvements. Furthermore, especially when dealing with soft-plaques, the possibilities of advanced imaging techniques such as dual-source CT should be explored in order to provide even more diagnostic quality and improvements in diagnosis speed and reliability.


CHAPTER 9

Conclusion

Cardiovascular diseases are the number one cause of death in the world and the number of cases is ever increasing. According to the Statistisches Bundesamt, in Germany alone 42% of all deaths, a total of 358,908 people, were caused directly or indirectly by CVDs [Sta10b]. Statistically, this corresponds to roughly 1000 deaths per day or about 40 deaths per hour, not counting the acute cases of ischemia or stroke without fatal outcome. These vast numbers make it obvious that CVD is indeed a major health and economic problem. As a consequence, any action taken to support the clinical process during diagnosis, treatment and aftercare is strongly desirable. Medical imaging techniques play a key role for this purpose today. Cardiac imaging in particular places high demands on the imaging modality with respect to spatial and temporal resolution. Despite the permanently decreasing radiation doses, CT stands out as a modality for imaging of the heart and the coronary arteries. The image quality achievable by CT almost keeps pace with that of traditional invasive catheter-based angiography. Unlike the latter, CT does not carry the risk associated with introducing a catheter into the beating heart. However, analysis of the data is a manual and time consuming process, and unlike traditional angiography, CT does not allow direct treatment of diseases. Hence, a fast evaluation of the images is required in order to provide optimal patient care. This is becoming even more important, as patient numbers and thus patient data, as well as time and cost pressure in the clinical environment, are constantly increasing. Simple image processing techniques, e.g. threshold based segmentation, already provide great aid to the radiologist when creating a diagnosis. More advanced algorithms and techniques are on the way to provide further diagnostic support. This class of algorithms, referred to as computer aided diagnosis methods, extends the existing procedures. The major goal of these approaches is to provide a diagnosis estimate to the radiologist automatically, ideally requiring no input other than the image data. The clinician only has to confirm or reject the


automatic results and possibly examine the image parts that might have been missed by the procedure. Thereby, the algorithm not only reduces diagnosis times drastically, it also acts as a second reader of the data, increasing the final reliability of the outcome by reducing observer variability. Especially the identification of small structures, like plaques contained in the coronary arteries of the heart, is a difficult manual process that can benefit substantially from automatic methods. In this thesis, different methods were examined to approach this problem. The foundation of any automatic plaque detection algorithm is a reliable segmentation of the coronary arteries themselves within the data. Based on this segmentation, methods operating on the segmentation results have been developed that allow the classification of pathologies along the vessels. The learning-based approach, trained with ground truth data created by expert knowledge, was used to identify diseased regions along the arteries. The resulting algorithm was capable of quickly identifying the overall state of the vascular system, i.e. whether diseases are present or not, as well as determining the exact location of the plaques. However, as the detailed analysis revealed, there is still much room for improvement. The ground truth reference database should be enlarged and the performance of different classifiers should be examined further. Overall, the approach delivered promising results and showed itself worthwhile for future research. Besides the detection of the location of plaques, their quantification is an important topic with respect to risk assessment of coronary artery disease. Current state-of-the-art clinical research deals with the transfer of the results of established calcium scoring methods on native CT scans to the results obtained on contrast-enhanced CT angiography image data. One of the main reasons for this approach is the reduction of radiation dose: since contrast-enhanced CT allows better imaging of the coronary arteries than native CT, usually two scans are required in order to provide full diagnostic data. However, calcium scoring on CTA is more difficult than on native CT. In this thesis, an image analysis method that runs in conjunction with the coronary vascular tree segmentation was presented that is capable of automatically determining a segmentation threshold and the location of calcified plaques in the data. Thereby, a fully automatic threshold based segmentation and scoring of calcified plaques could be achieved that delivered results similar to those obtained by manual segmentation from a radiologist. The approach was found to produce very reliable results.


Therefore, it is already used to supplement clinical studies on the applicability of CTA scoring for patient risk assessment. This approach is also open for algorithmic improvement: in the current setting, the parameters for the image analysis are determined empirically. It would be a substantial improvement if those parameters could also be derived automatically from the input data, which would increase the reliability of the algorithm even further. Finally, a snake-based segmentation algorithm for soft-plaques in CTA data has been examined. This is a difficult task, since soft-plaque contrast is very low in CT and plaques are usually hardly discernible from the surrounding tissue. The presented approach generates a boundary hull along the whole vessel and extracts a radius distribution curve from that data. Thereby, it was possible to detect the locations where the boundary diameter becomes significantly narrowed, a strong indication for the presence of a soft-plaque. By relating the radii of the healthy vessel parts in the vicinity of the pathology to those present at the most narrow part of the plaque, a score value estimating the severity of the disease could be generated. The first results obtained from the application look very promising and provide a high detection rate. However, as no reference data exists for the score values, it is very difficult to judge the quality of the measurement. Therefore, more research work is required in order to provide a reliable ground truth that enables the comparison of different algorithms. Clinical studies on the topic are also necessary in order to provide a reference for the diagnostic significance of the obtained data. In summary, it could be shown during the course of this thesis that the presented algorithms are capable of providing diagnostic aid to the radiologist when dealing with contrast-enhanced CT angiography data. Thereby, the speed of data analysis could be substantially increased, as the presented methods do not require any user input and can be connected to the standard image processing pipeline. Thus, the generated diagnostic data can be provided as soon as the images are available at the clinical workstation. Hence, the presented methods can be a valuable tool for clinical processing. However, it was also shown that there is still much room for improvement and further research on these topics. The reliability and significance of the generated data must still be improved in order to take the presented methods from study use to everyday clinical use. Automatic methods are expected to become a more and more common tool for providing valuable information to the radiologist.


Despite the quality of these algorithms, however, it should not be forgotten that, for the foreseeable future, the final decision about a diagnosis will always be made by a human.


Bibliography

[AB94] Adams R., Bischof L.: Seeded Region Growing. IEEE Trans. Pattern Anal. Mach. Intell. 16 (June 1994), 641–647.

[Ach07] Achenbach S.: Cardiac CT: state of the art for the detection of coronary arterial stenosis. Journal of Cardiovascular Computed Tomography 1, 1 (2007), 3–20.

[Ach08] Achenbach S.: Can CT detect the vulnerable coronary plaque? The International Journal of Cardiovascular Imaging 24 (2008), 311–312. 10.1007/s10554-007-9281-1.

[AJH∗90] Agatston A. S., Janowitz W. R., Hildner F. J., Zusmer N. R., Viamonte M., Detrano R.: Quantification of Coronary Artery Calcium Using Ultrafast Computed Tomography. J Am Coll Cardiol 15 (1990), 827–832.

[AL92] Ai W. I., Langley P.: Induction of One-Level Decision Trees. In Proceedings of the Ninth International Conference on Machine Learning (1992), Morgan Kaufmann, pp. 233–240.

[BA99] Bland J. M., Altmann D. G.: Measuring agreement in method comparison studies. Stat Methods Med Res 8 (1999), 135–160.

[BAB∗06] Budoff M. J., Achenbach S., Blumenthal R. S., Carr J. J., Goldin J. G., Greenland P., Guerci A. D., Lima J. A., Rader D. J., Rubin G. D., Shaw L. J., Wiegers S. E.: Assessment of Coronary Artery Disease by Cardiac Computed Tomography: A Scientific Statement From the American Heart Association Committee on Cardiovascular Imaging and Intervention, Council on Cardiovascular Radiology and Intervention, and Committee on Cardiac Imaging, Council on Clinical Cardiology. Circulation 114, 16 (2006), 1761–1791.

[BCF∗06] Bottigli U., Cascio D., Fauci F., Golosio B., Magro R., et al.: Massive Lesions Classification using Features based on Morphological Lesion Differences. Proc. of World Academy of Science, Engineering and Technology 12 (2006), 20–24.

[BH07] Bühlmann P., Hothorn T.: Boosting Algorithms: Regularization, Prediction and Model Fitting. Statistical Science 22, 4 (2007), 477–505.

[BJ01] Boykov Y. Y., Jolly M.-P.: Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in N-D Images, 2001.

[CJS93] Cho Z.-H., Jones J., Singh M.: Foundations of Medical Imaging. John Wiley & Sons, Inc., New York, 1993.

[CK06] Casagrande N., Kégl B.: MultiBoost: An Open Source Multi-Class AdaBoost Learner. http://multiboost.sourceforge.net/ (accessed 27.10.2010), 2006.

[DC01] Deschamps T., Cohen L.: Fast extraction of minimal paths in 3D images and applications to virtual endoscopy. Medical Image Analysis 5, 4 (Dec. 2001).

[Dij59] Dijkstra E. W.: A note on two problems in connexion with graphs. Numerische Mathematik 1 (1959), 269–271. 10.1007/BF01386390.

[DMM51] Dawber T. R., Meadors G. F., Moore F. E. J.: Epidemiological Approaches to Heart Disease: The Framingham Study. Am J Public Health Nations Health 41, 3 (1951), 279–286.

[DST92] Dillencourt M. B., Samet H., Tamminen M.: A general approach to connected-component labeling for arbitrary image representations. J. ACM 39 (April 1992), 253–280.

[FLBF∗06] Funka-Lea G., Boykov Y., Florin C., Jolly M.-P., Moreau-Gobard R., Ramaraj R., Rinck D.: Automatic heart isolation for CT coronary visualization using graph-cuts. In IEEE International Symposium on Biomedical Imaging (2006).

[FS95] Freund Y., Schapire R. E.: A decision-theoretic generalization of on-line learning and an application to boosting. In EuroCOLT '95: Proceedings of the Second European Conference on Computational Learning Theory (London, UK, 1995), Springer-Verlag, pp. 23–37.

[GHT∗09] Glodny B., Helmel B., Trieb T., Schenk C., Taferner B., Unterholzer V., Strasak A., Petersen J.: A method for calcium quantification by means of CT coronary angiography using 64-multidetector CT: very high correlation with Agatston and volume scores. Eur Radiol 19 (2009), 1661–1668.

[Gra06] Grady L.: Fast, Quality, Segmentation of Large Volumes – Isoperimetric Distance Trees. In Computer Vision – ECCV 2006, Leonardis A., Bischof H., Pinz A. (Eds.), vol. 3953 of Lecture Notes in Computer Science. Springer Berlin/Heidelberg, 2006, pp. 449–462.

[GS06] Grady L., Schwartz E. L.: Isoperimetric Graph Partitioning for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 28, 3 (2006), 469–475.

[GT08] Gülsün M. A., Tek H.: Robust Vessel Tree Modeling. Proc. MICCAI (2008), 602–611.

[HFM∗09] Hadamitzky M., Freißmuth B., Meyer T., Hein F., Kastrati A., Martinoff S., Schöning A., Hausleiter J.: Prognostic Value of Coronary Computed Tomographic Angiography for Prediction of Cardiac Events in Patients With Suspected Coronary Artery Disease. Int J Cardiovasc Imaging 2, 4 (2009), 404–411.

[HSS∗05] Hoffmann M. H. K., Shi H., Schmitz B. L., Schmid F. T., Lieberknecht M., Schulze R., Ludwig B., Kroschel U., Jahnke N., Haerer W., Brambs H., Aschoff A. J.: Noninvasive Coronary Angiography With Multislice Computed Tomography. JAMA 293, 20 (2005), 2471–2478.

[Kal06] Kalender W.: Computertomographie – Grundlagen, Gerätetechnologie, Bildqualität, Anwendungen. Publicis Corporate Publishing, Erlangen, 2006.

[KDK∗08] Knollmann F., Ducke F., Krist L., Kertesz T., Meyer R., Guski H., Felix R.: Quantification of atherosclerotic coronary plaque components by submillimeter computed tomography. The International Journal of Cardiovascular Imaging 24 (2008), 301–310. 10.1007/s10554-007-9262-4.

[KFW∗02] Kanitsar A., Fleischmann D., Wegenkittl R., Felkel P., Gröller E.: CPR – curved planar reformation. In Visualization, 2002. VIS 2002. IEEE (2002), pp. 37–44.

[KHO∗05] Komatsu S., Hirayama A., Omori Y., Ueda Y., Mizote I., Fujisawa Y., Kiyomoto M., Higashide T., Kodama K.: Detection of Coronary Plaque by Computed Tomography With a Novel Plaque Analysis System, 'Plaque Map', and Comparison With Intravascular Ultrasound and Angioscopy. Circulation 69, 1 (2005), 72–77.

[KMS06] Kauffmann G., Moser E., Sauer R.: Radiologie. Elsevier GmbH, Urban & Fischer Verlag, München, 2006.

[KWT88] Kass M., Witkin A., Terzopoulos D.: Snakes: Active contour models. International Journal of Computer Vision 1, 4 (1988), 321–331.

[Led05] Lederhuber H.: BASICS Kardiologie. Elsevier GmbH, Urban & Fischer Verlag, München, 2005.

[LK99] Lorenz C., Krahnstöver N.: 3D statistical shape models for medical image segmentation. In 3-D Digital Imaging and Modeling, 1999. Proceedings. Second International Conference on (1999).

[LL93] Leymarie F., Levine M. D.: Tracking Deformable Objects in the Plane Using an Active Contour Model. IEEE Trans. Pattern Anal. Mach. Intell. 15 (June 1993), 617–634.

[LL06] Lee C., Lee G. G.: Information gain and divergence-based feature selection for machine learning-based text categorization. Inf. Process. Manage. 42, 1 (2006), 155–165.

[LLS02] Lamecker H., Lange T., Seebass M.: A Statistical Shape Model for the Liver. In MICCAI '02: Proceedings of the 5th International Conference on Medical Image Computing and Computer-Assisted Intervention – Part II (2002), Springer-Verlag, pp. 422–427.

[LM02] Lienhart R., Maydt J.: An Extended Set of Haar-Like Features for Rapid Object Detection. In IEEE ICIP 2002 (2002), pp. 900–903.

[LSRT09] Lankton S., Stillman A., Raggi P., Tannenbaum A.: Soft Plaque Detection and Automatic Vessel Segmentation. In Proceedings of PMMIA, Workshop of MICCAI (2009).

[LY07] Li H., Yezzi A.: Vessels as 4-D Curves: Global Minimal 4-D Paths to Extract 3-D Tubular Surfaces and Centerlines. Medical Imaging, IEEE Transactions on 26, 9 (2007), 1213–1223.

[Lyn06] Lynch P. J.: Heart normal transthoracic echocardiography views. http://de.wikipedia.org/w/index.php?title=Datei:Heart_normal_tte_views.jpg&filetimestamp=20061226035810 (accessed 28.09.2010), 2006.

[MeV] MeVis Medical Solutions AG: MeVisLab, Software for Medical Image Processing and Visualization. http://www.mevislab.de.

[MKH05] Mita T., Kaneko T., Hori O.: Joint Haar-like Features for Face Detection. In Proceedings of the Tenth IEEE International Conference on Computer Vision – Volume 2 (Washington, DC, USA, 2005), ICCV '05, IEEE Computer Society, pp. 1619–1626.

[MMS∗08] Meijboom W. B., Meijs M. F., Schuijf J. D., Cramer M. J., Mollet N. R., van Mieghem C. A., Nieman K., van Werkhoven J. M., Pundziute G., Weustink A. C., de Vos A. M., Pugliese F., Rensing B., Jukema J. W., Bax J. J., Prokop M., Doevendans P. A., Hunink M. G., Krestin G. P., de Feyter P. J.: Diagnostic Accuracy of 64-Slice Computed Tomography Coronary Angiography: A Prospective, Multicenter, Multivendor Study. J Am Coll Cardiol 52, 25 (2008), 2135–2144.

[MR03] Meir R., Rätsch G.: An Introduction to Boosting and Leveraging. In Lecture Notes in Artificial Intelligence (2003), pp. 118–183.

[MT96] McInerney T., Terzopoulos D.: Deformable models in medical image analysis: A survey. Medical Image Analysis 1 (1996), 91–108.

[NEM10] NEMA: DICOM – Digital Imaging and Communications in Medicine. http://dicom.nema.org (accessed 11.10.2010), 2010.

[OFB∗07] Ohnesorge B., Flohr T. G., Becker C., Knez A., Reiser M.: Multislice and Dual-source CT in Cardiac Imaging. Elsevier GmbH, Urban & Fischer Verlag, München, 2007.

[PB07] Preim B., Bartz D.: Visualization in Medicine. Morgan Kaufmann Publishers, Burlington, MA, 2007.

[PM07] Pelberg R., Mazur W.: Cardiac CT Angiography Manual. Springer, 2007.

[PP07] Putz R., Pabst R.: Sobotta – Anatomie des Menschen. Elsevier GmbH, Urban & Fischer Verlag, München, 2007.

[Pra07] Pratt W. K.: Digital Image Processing. Wiley-Interscience, Hoboken, New Jersey, 2007.

[Pyt] Python Software Foundation: The Python Programming Language. http://www.python.org.

[Qui86] Quinlan J. R.: Induction of Decision Trees. Machine Learning 1, 1 (1986), 81–106.

[Rad86] Radon J.: On the Determination of Functions from Their Integral Values along Certain Manifolds. Medical Imaging, IEEE Transactions on 5, 4 (Dec. 1986), 170–176.

[RKRS06] Rinck D., Krüger S., Reimann S., Scheuering M.: Shape-based segmentation and visualization techniques for evaluation of atherosclerotic plaques in coronary artery disease. Vol. 6141, SPIE (2006), p. 61410G.

[RN03] Russell S., Norvig P.: Artificial Intelligence – A Modern Approach. Prentice Hall, Pearson Education, Inc., New Jersey, 2003.

[RY08] Renard F., Yang Y.: Image analysis for detection of coronary artery soft plaques in MDCT images. In Biomedical Imaging: From Nano to Macro, 2008. ISBI 2008. 5th IEEE International Symposium on (2008), pp. 25–28.

[SAD∗08] Saur S., Alkadhi H., Desbiolles L., Szekely G., Cattin P.: Automatic Detection of Calcified Coronary Plaques in Computed Tomography Data Sets. In Proc. of MICCAI (2008), vol. 5241 of Lecture Notes in Computer Science, pp. 170–177.

[Sch00] Scharr H.: Optimale Operatoren in der Digitalen Bildverarbeitung, 2000.

[Sha48] Shannon C.: A Mathematical Theory of Communication. The Bell System Technical Journal 27 (1948), 379–423.

[SS99] Schapire R. E., Singer Y.: Improved Boosting Algorithms Using Confidence-rated Predictions. In Machine Learning (1999), pp. 80–91.

[ST88] Samet H., Tamminen M.: Efficient component labeling of images of arbitrary dimension represented by linear bintrees. Pattern Analysis and Machine Intelligence, IEEE Transactions on 10, 4 (July 1988), 579–586.

[Sta10a] Statistisches Bundesamt Deutschland: Diseases of the circulatory system were the most frequent cause of death in 2007. http://www.destatis.de/jetspeed/portal/cms/Sites/destatis/Internet/DE/Navigation/Statistiken/Gesundheit/Todesursachen/Todesursachen.psml (accessed 21.12.2010), 2010.

[Sta10b] Statistisches Bundesamt Deutschland: Todesursachen. http://www.destatis.de/jetspeed/portal/cms/Sites/destatis/Internet/EN/Content/Statistics/Health/DeathCauses/Current,templateId=renderPrint.psml (accessed 23.09.2010), 2010.

[TAMM04] Tang J., Acton S. T.: Vessel Boundary Tracking for Intravital Microscopy via Multiscale Gradient Vector Flow Snakes. IEEE Trans. Biomed. Eng 51 (2004), 316–324.

[TBGB03] Toumoulin C., Boldak C., Garreau M., Boulmier D.: Coronary Characterization In Multi-Slice Computed Tomography. In IEEE Comp. Cardiol. (2003), pp. 749–752.

[TdTF∗07] Tyrrell J., di Tomaso E., Fuja D., Tong R., Kozak K., Jain R., Roysam B.: Robust 3-D Modeling of Vasculature Imagery Using Superellipsoids. Medical Imaging, IEEE Transactions on 26, 2 (Feb. 2007), 223–237.

[Tek08] Tek H.: Automatic Coronary Tree Modeling. The MIDAS Journal – Grand Challenge Coronary Artery Tracking (MICCAI 2008 Workshop) (2008). http://hdl.handle.net/10380/1426.

[TS93] Terzopoulos D., Szeliski R.: Tracking with Kalman snakes. MIT Press, Cambridge, MA, USA, 1993, pp. 3–20.

[TVHB∗10] Teßmann M., Vega-Higuera F., Bischoff B., Hausleiter J., Greiner G.: Robust Automatic Calcium Scoring for CT Coronary Angiography. In Bildverarbeitung für die Medizin 2010 – Algorithmen, Systeme, Anwendungen (Informatik Aktuell) (2010), Springer Verlag, Berlin, pp. 430–434.

[TVHB∗11] Teßmann M., Vega-Higuera F., Bischoff B., Hausleiter J., Greiner G.: Automatic detection and quantification of coronary calcium on 3D CT angiography data. Computer Science – Research and Development 26 (2011), 117–124. 10.1007/s00450-010-0131-3.

[TVHF∗08] Teßmann M., Vega-Higuera F., Fritz D., Scheuering M., Greiner G.: Learning-Based Detection of Stenotic Lesions in Coronary CT Data. In Proceedings of Vision, Modeling and Visualization (2008), pp. 189–198.

[TVHF∗09] Teßmann M., Vega-Higuera F., Fritz D., Scheuering M., Greiner G.: Multi-scale feature extraction for learning-based classification of coronary artery stenosis. In Proceedings of SPIE (2009), vol. 7260, SPIE, pp. 1–8.

[TZB∗06] Tu Z., Zhou X., Barbu A., Bogoni L., Comaniciu D.: Probabilistic 3D polyp detection in CT images: The role of sample alignment. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2 (2006), 1544–1551.

[VBFK06] Virmani R., Burke A. P., Farb A., Kolodgie F. D.: Pathology of the Vulnerable Plaque. J Am Coll Cardiol 47, 8 Suppl C (2006), C13–18.

[VJ04] Viola P., Jones M. J.: Robust Real-Time Face Detection. International Journal of Computer Vision 57, 2 (May 2004), 137–154.

[WDL∗98] Wilson P. W. F., D'Agostino R. B., Levy D., Belanger A. M., Silbershatz H., Kannel W. B.: Prediction of Coronary Heart Disease Using Risk Factor Categories. Circulation 97 (1998), 1837–1847.

[WKF06] Wesarg S., Khan M. F., Firle E. A.: Localizing Calcifications in Cardiac CT Data Sets Using a New Vessel Segmentation Approach. Journal of Digital Imaging 19, 3 (2006), 249–257.

[Wor09] World Health Organization: Cardiovascular diseases (CVDs) – Fact Sheet No. 317. http://www.who.int/mediacentre/factsheets/fs317/en/index.html (accessed 21.12.2010), 2009.

[XP97] Xu C., Prince J. L.: Gradient Vector Flow: A New External Force for Snakes. In Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97) (Washington, DC, USA, 1997), IEEE Computer Society, pp. 66–.

[XP98] Xu C., Prince J. L.: Snakes, Shapes, and Gradient Vector Flow. IEEE Transactions on Image Processing 7, 3 (1998), 359–369.

[ZBG∗07] Zheng Y., Barbu A., Georgescu B., Scheuering M., Comaniciu D.: Fast Automatic Heart Chamber Segmentation from 3D CT Data Using Marginal Space Learning and Steerable Features. In IEEE International Conference on Computer Vision (2007), pp. 1–8.

[ZRZH05] Zhu J., Rosset S., Zou H., Hastie T.: Multi-class AdaBoost. Tech. rep., 2005.

Computergestützte Diagnoseverfahren für CT Koronarangiographiedaten


Kurzfassung

Diseases of the cardiovascular system are the leading cause of death worldwide. Since coronary artery disease (CAD) represents an enormous health and economic problem, it is essential to support the clinical procedures for diagnosis, treatment and aftercare. Medical imaging of the heart places high demands on the acquisition modality with respect to spatial and temporal resolution. Here, computed tomography has meanwhile reached a level of quality similar to that of traditional catheter-based coronary angiography. Although the clinical picture demands a fast and reliable diagnosis in order to guarantee optimal patient care, the analysis of the image data is still an elaborate manual process. In the presence of CAD, the heart muscle is no longer sufficiently supplied with oxygen due to inadequate blood flow in the coronary arteries. This inadequate blood flow is caused by small arteriosclerotic plaques that obstruct the vessel, creating a stenosis. In this thesis, automatic methods are presented that can support the diagnosing radiologist in this task. The algorithms produce an initial diagnosis without requiring any user interaction. The foundation of the methods is a robust vessel tree segmentation. Building on this, a learning-based method is examined that is capable of automatically detecting pathological regions. Besides the mere detection of plaques, their quantification plays an important role in risk analysis. Therefore, a fully automatic, threshold-based method is presented that can detect calcified plaques and compute a clinically relevant calcium score. The results achieved are comparable to those created manually by radiologists. Finally, a method based on active contours is presented for detecting and quantifying soft-plaques in CT data. A vessel wall hull is created and a radius distribution curve over the whole vessel is generated, from which soft-plaques can be detected and their extent approximately measured.


Together, the methods presented in this thesis and the software prototypes developed alongside them make an important contribution to research and development in the field of computer aided diagnosis for the analysis of coronary artery disease in CT data.
