References
[1] World Health Organization, 2017. Cardiovascular diseases (CVDs). http://www.who.int/mediacentre/factsheets/fs317/en/ (accessed 18 Apr 2018).
[2] Nikolic, G., Bishop, R., Singh, J., 1982. Sudden death recorded during Holter monitoring. Circulation 66, 218-225.
[3] Hong, S., et al., 2019. Opportunities and challenges in deep learning methods on electrocardiogram data: a systematic review. arXiv preprint arXiv:2001.01550.
[4] Hong, S., Zhou, Y., Wu, M., Shang, J., Wang, Q., Li, H., Xie, J., 2019. Combining deep neural networks and engineered features for cardiac arrhythmia detection from ECG recordings. Physiological Measurement 40, 054009.
[5] Hong, S., Wu, M., Zhou, Y., Wang, Q., Shang, J., Li, H., Xie, J., 2017. ENCASE: an ensemble classifier for ECG classification using expert features and deep neural networks, in: 2017 Computing in Cardiology (CinC), IEEE. pp. 1-4.
[6] Clifford, G.D., Liu, C., Moody, B., Li-wei, H.L., Silva, I., Li, Q., Johnson, A., Mark, R.G., 2017. AF classification from a short single lead ECG recording: the PhysioNet/Computing in Cardiology Challenge 2017, in: 2017 Computing in Cardiology (CinC), IEEE. pp. 1-4.
[7] Kalyakulina, A.I., Yusipov, I.I., Moskalenko, V.A., Nikolskiy, A.V., Kozlov, A.A., Kosonogov, K.A., Zolotykh, N.Yu., Ivanchenko, M.V., 2018. New ECG delineation database. arXiv preprint arXiv:1809.03393.
[8] Li, K., Pan, W., Li, Y., Jiang, Q., Liu, G., 2018. A method to detect sleep apnea based on deep neural network and hidden Markov model using single-lead ECG signal. Neurocomputing 294, 94-101.
[9] Attia, Z.I., Sugrue, A., Asirvatham, S.J., Ackerman, M.J., Kapa, S., Friedman, P.A., Noseworthy, P.A., 2018. Noninvasive assessment of dofetilide plasma concentration using a deep learning (neural network) analysis of the surface electrocardiogram: a proof of concept study. PLoS ONE 13, e0201059.
[10] Fotiadou, E., Konopczyński, T., Hesser, J.W., Vullings, R., 2020. End-to-end trained CNN encoder-decoder network for fetal ECG signal denoising. Physiological Measurement.
[11] Santamaria-Granados, L., Munoz-Organero, M., Ramirez-Gonzalez, G., Abdulhay, E., Arunkumar, N., 2018. Using deep convolutional neural network for emotion detection on a physiological signal dataset (AMIGOS). IEEE Access 7, 57-67.
[12] Attia, Z.I., Sugrue, A., Asirvatham, S.J., Ackerman, M.J., Kapa, S., Friedman, P.A., Noseworthy, P.A., 2018. Noninvasive assessment of dofetilide plasma concentration using a deep learning (neural network) analysis of the surface electrocardiogram: a proof of concept study. PLoS ONE 13, e0201059.
[13] Rastgoo, M.N., Nakisa, B., Maire, F., Rakotonirainy, A., Chandran, V., 2019. Automatic driver stress level classification using multimodal deep learning. Expert Systems with Applications 138, 112793.
[14] Kuznetsov, V.V., Moskalenko, V.A., Zolotykh, N.Y., 2020. Electrocardiogram generation and feature extraction using a variational autoencoder. arXiv preprint arXiv:2002.00254.
[15] Lee, J., Oh, K., Kim, B., Yoo, S.K., 2019. Synthesis of electrocardiogram V-lead signals from limb lead measurement using R-peak aligned generative adversarial network. IEEE Journal of Biomedical and Health Informatics.
[16] Karpov, N., Lyashuk, A., Vizgunov, A., 2016. Sentiment analysis using deep learning, in: International Conference on Network Analysis, Springer, Cham. pp. 281-288.
[17] Saadatnejad, S., Oveisi, M., Hashemi, M., 2019. LSTM-based ECG classification for continuous monitoring on personal wearable devices. IEEE Journal of Biomedical and Health Informatics.
[18] Li, R., Zhang, X., Dai, H., Zhou, B., Wang, Z., 2019. Interpretability analysis of heartbeat classification based on heartbeat global sequence features and BiLSTM-attention neural network. IEEE Access 7, 109870-109883.
[19] Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems. pp. 1097-1105.
[20] Zihlmann, M., Perekrestenko, D., Tschannen, M., 2017. Convolutional recurrent neural networks for electrocardiogram classification, in: 2017 Computing in Cardiology (CinC), IEEE. pp. 1-4.
[21] Jun, T.J., et al., 2018. ECG arrhythmia classification using a 2-D convolutional neural network. arXiv preprint arXiv:1804.06812.
[22] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L., 2009. ImageNet: a large-scale hierarchical image database, in: 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE. pp. 248-255.
[23] He, K., et al., 2016. Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770-778.
[24] Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[25] Rajpurkar, P., et al., 2017. Cardiologist-level arrhythmia detection with convolutional neural networks. arXiv preprint arXiv:1707.01836.
[26] Xu, X., Liu, H., 2020. ECG heartbeat classification using convolutional neural networks. IEEE Access 8, 8614-8619.
Appendices
Appendix 1
import argparse
import os
import os.path as osp

import cv2
import matplotlib.pyplot as plt
import numpy as np
import wfdb
from sklearn.preprocessing import scale
from wfdb import rdrecord

# Choose peak-to-peak or centered cutting:
# mode = [20, 20]  # peak-to-peak with margins
mode = 128  # centered window of 128 samples

image_size = 128
output_dir = '../data'

# dpi fix
fig = plt.figure(frameon=False)
dpi = fig.dpi

# fig size / image size
figsize = (image_size / dpi, image_size / dpi)
image_size = (image_size, image_size)


def plot(signal, filename):
    plt.figure(figsize=figsize, frameon=False)
    plt.axis('off')
    plt.subplots_adjust(top=1, bottom=0, right=1, left=0, hspace=0, wspace=0)
    # plt.margins(0, 0)  # use to generate images with no margin
    plt.plot(signal)
    plt.savefig(filename)
    plt.close()

    im_gray = cv2.imread(filename, cv2.IMREAD_GRAYSCALE)
    im_gray = cv2.resize(im_gray, image_size, interpolation=cv2.INTER_LANCZOS4)
    cv2.imwrite(filename, im_gray)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--file', required=True)
    args = parser.parse_args()

    ecg = args.file
    name = osp.basename(ecg)
    record = rdrecord(ecg)
    ann = wfdb.rdann(ecg, extension='atr')
    for sig_name, signal in zip(record.sig_name, record.p_signal.T):
        if not np.all(np.isfinite(signal)):
            continue
        signal = scale(signal)
        for i, (label, peak) in enumerate(zip(ann.symbol, ann.sample)):
            if label == '/':  # '/' (paced beat) is not a valid directory name
                label = '\\'
            print('\r{} [{}/{}]'.format(sig_name, i + 1, len(ann.symbol)), end='')
            if isinstance(mode, list):
                # peak-to-peak: cut between the neighbouring annotations
                if 0 < i < len(ann.sample) - 1:
                    left = ann.sample[i - 1] + mode[0]
                    right = ann.sample[i + 1] - mode[1]
                else:
                    continue
            elif isinstance(mode, int):
                # centered: a fixed-size window around the annotated peak
                left, right = peak - mode // 2, peak + mode // 2
            else:
                raise ValueError('Wrong mode at the beginning of the script')

            if left > 0 and right < len(signal):
                one_dim_data_dir = osp.join(output_dir, '1D', name, sig_name, label)
                two_dim_data_dir = osp.join(output_dir, '2D', name, sig_name, label)
                os.makedirs(one_dim_data_dir, exist_ok=True)
                os.makedirs(two_dim_data_dir, exist_ok=True)

                filename = osp.join(one_dim_data_dir, '{}.npy'.format(peak))
                np.save(filename, signal[left:right])
                filename = osp.join(two_dim_data_dir, '{}.png'.format(peak))
                plot(signal[left:right], filename)
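The script above supports two cutting strategies: a fixed centered window when `mode` is an integer, and peak-to-peak cutting when it is a two-element list of margins. The sketch below re-implements only that window computation so it can be checked in isolation; the `cut_beat` helper, the sine-wave "ECG", and the fake peak positions are illustrative and not part of the appendix script.

```python
import numpy as np

def cut_beat(signal, peaks, i, mode):
    """Return (left, right) bounds for beat i, or None if out of range.

    mode == int  -> window of `mode` samples centered on the peak;
    mode == list -> from previous peak + mode[0] to next peak - mode[1].
    """
    peak = peaks[i]
    if isinstance(mode, list):
        # peak-to-peak cutting needs both neighbouring annotations
        if not (0 < i < len(peaks) - 1):
            return None
        left, right = peaks[i - 1] + mode[0], peaks[i + 1] - mode[1]
    elif isinstance(mode, int):
        left, right = peak - mode // 2, peak + mode // 2
    else:
        raise ValueError('mode must be int or list')
    if left > 0 and right < len(signal):
        return left, right
    return None

signal = np.sin(np.linspace(0, 20 * np.pi, 2000))  # synthetic "ECG"
peaks = np.arange(100, 2000, 200)                  # fake R-peak positions
print(cut_beat(signal, peaks, 3, 128))       # centered: (636, 764)
print(cut_beat(signal, peaks, 3, [20, 20]))  # peak-to-peak: (520, 880)
```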
Appendix 2
import subprocess
import os.path as osp
import multiprocessing as mp
from glob import glob
from tqdm import tqdm

input_dir = '../mit-bih/*.atr'
ecg_data = sorted([osp.splitext(i)[0] for i in glob(input_dir)])
pbar = tqdm(total=len(ecg_data))


def run(file):
    # one generation script per record; progress is reported from the workers
    params = ['python3', 'dataset-generation.py', '--file', file]
    subprocess.check_call(params)
    pbar.update(1)


if __name__ == '__main__':
    p = mp.Pool(processes=mp.cpu_count())
    p.map(run, ecg_data)
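The launcher simply maps `run` over the records with a process pool. The same pattern can be tried without the dataset at hand by letting a harmless `python -c` call stand in for `dataset-generation.py` (this stand-in command is illustrative, not from the appendix):

```python
import multiprocessing as mp
import subprocess
import sys

def run(arg):
    # spawn an external interpreter, as the launcher does for each record
    out = subprocess.check_output(
        [sys.executable, '-c', 'import sys; print(sys.argv[1])', arg])
    return out.decode().strip()

if __name__ == '__main__':
    # map() preserves input order even though workers run concurrently
    with mp.Pool(processes=2) as pool:
        results = pool.map(run, ['100', '101', '102'])
    print(results)  # ['100', '101', '102']
```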
Appendix 3
import json
import os.path as osp
from glob import glob

import pandas as pd

# 1. N - Normal
# 2. V - PVC (Premature ventricular contraction)
# 3. \ - PAB (Paced beat)
# 4. R - RBB (Right bundle branch)
# 5. L - LBB (Left bundle branch)
# 6. A - APB (Atrial premature beat)
# 7. ! - AFW (Ventricular flutter wave)
# 8. E - VEB (Ventricular escape beat)

classes = ['N', 'V', '\\', 'R', 'L', 'A', '!', 'E']
lead = 'MLII'
extension = 'png'  # or `npy` for 1D
data_path = osp.abspath('../data/*/*/*/*/*.{}'.format(extension))
val_size = 0.1  # [0, 1]

output_path = '/'.join(data_path.split('/')[:-5])
random_state = 7

if __name__ == '__main__':
    dataset = []
    files = glob(data_path)

    for file in files:
        # do not shadow the global `lead`: the filter below relies on it
        *_, name, sig_lead, label, filename = file.split('/')
        dataset.append({
            "name": name,
            "lead": sig_lead,
            "label": label,
            "filename": osp.splitext(filename)[0],
            "path": file
        })

    data = pd.DataFrame(dataset)
    data = data[data['lead'] == lead]
    data = data[data['label'].isin(classes)]
    data = data.sample(frac=1, random_state=random_state)

    # stratified validation split: sample val_size of every class separately
    val_ids = []
    for cl in classes:
        val_ids.extend(data[data['label'] == cl].sample(frac=val_size, random_state=random_state).index)

    val = data.loc[val_ids, :]
    train = data[~data.index.isin(val.index)]

    train.to_json(osp.join(output_path, 'train.json'), orient='records')
    val.to_json(osp.join(output_path, 'val.json'), orient='records')

    # map class labels to integer ids for training
    d = {}
    for label in train.label.unique():
        d[label] = len(d)

    with open(osp.join(output_path, 'class-mapper.json'), 'w') as file:
        file.write(json.dumps(d, indent=1))
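The split above samples `val_size` of every class separately, which keeps rare beat types represented in the validation set instead of leaving them to chance. A minimal check of that behaviour on a toy frame (the labels and sizes here are made up for illustration):

```python
import pandas as pd

random_state = 7
val_size = 0.25

# toy dataset: a frequent class and a rare one
data = pd.DataFrame({'label': ['N'] * 8 + ['V'] * 4, 'path': range(12)})

# draw val_size of every class separately, so the rare class is not lost
val_ids = []
for cl in data['label'].unique():
    val_ids.extend(
        data[data['label'] == cl].sample(frac=val_size, random_state=random_state).index)

val = data.loc[val_ids, :]
train = data[~data.index.isin(val.index)]

print(sorted(val['label'].value_counts().to_dict().items()))  # [('N', 2), ('V', 1)]
```

With a plain 25% random split, the four 'V' rows could easily end up entirely in train; the per-class loop guarantees each class contributes its share.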
Appendix 4
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_planes, out_planes, stride=1, groups=1, dilation=1):
    return nn.Conv1d(in_planes, out_planes, kernel_size=17, stride=stride,
                     padding=8, groups=groups, bias=False, dilation=dilation)


def conv_subsumpling(in_planes, out_planes, stride=1):
    # 1x1 convolution used to match dimensions on the identity path
    return nn.Conv1d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)


class BasicBlockHeartNet(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                 base_width=64, dilation=1, norm_layer=None):
        super(BasicBlockHeartNet, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm1d
        if groups != 1 or base_width != 64:
            raise ValueError('BasicBlock only supports groups=1 and base_width=64')
        if dilation > 1:
            raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
        # Both self.conv1 and self.downsample layers downsample the input when stride != 1
        self.conv1 = conv_block(inplanes, planes, stride)
        self.bn1 = norm_layer(inplanes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv_block(planes, planes)
        self.bn2 = norm_layer(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x

        # pre-activation ordering: BN -> ReLU -> conv
        out = self.bn1(x)
        out = self.relu(out)
        out = self.conv1(out)

        out = self.bn2(out)
        out = self.relu(out)
        out = self.conv2(out)

        if self.downsample is not None:
            identity = self.downsample(x)
        if self.stride != 1:
            identity = F.max_pool1d(identity, self.stride)

        out += identity

        return out


class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1,
                 base_width=64, dilation=1, norm_layer=None):
        super(BasicBlock, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm1d
        if groups != 1 or base_width != 64:
            raise ValueError('BasicBlock only supports groups=1 and base_width=64')
        if dilation > 1:
            raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
        # Both self.conv1 and self.downsample layers downsample the input when stride != 1
        self.conv1 = conv_block(inplanes, planes, stride)
        self.bn1 = norm_layer(inplanes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv_block(planes, planes)
        self.bn2 = norm_layer(planes)
        self.dropout = nn.Dropout()
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x

        out = self.bn1(x)
        out = self.relu(out)
        out = self.dropout(out)
        out = self.conv1(out)

        out = self.bn2(out)
        out = self.relu(out)
        out = self.dropout(out)
        out = self.conv2(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity

        return out


class HeartNet(nn.Module):

    def __init__(self, layers=(1, 2, 2, 2, 2, 2, 2, 2, 1), num_classes=1000, zero_init_residual=False,
                 groups=1, width_per_group=64, replace_stride_with_dilation=None,
                 norm_layer=None, block=BasicBlockHeartNet):

        super(HeartNet, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm1d
        self._norm_layer = norm_layer

        self.inplanes = 32
        self.dilation = 1
        if replace_stride_with_dilation is None:
            # each element in the tuple indicates if we should replace
            # the 2x2 stride with a dilated convolution instead
            replace_stride_with_dilation = [False, False, False]
        if len(replace_stride_with_dilation) != 3:
            raise ValueError("replace_stride_with_dilation should be None "
                             "or a 3-element tuple, got {}".format(replace_stride_with_dilation))
        self.groups = groups
        self.base_width = width_per_group
        self.conv1 = conv_block(1, self.inplanes, stride=1)
        self.bn1 = norm_layer(self.inplanes)
        self.relu = nn.ReLU(inplace=True)
        self.layer0 = self._make_layer(block, 64, layers[0])
        self.layer1 = self._make_layer(block, 64, layers[1], stride=2,
                                       dilate=replace_stride_with_dilation[0])
        self.layer2 = self._make_layer(block, 128, layers[2], stride=2,
                                       dilate=replace_stride_with_dilation[0])
        self.layer2_ = self._make_layer(block, 128, layers[3], stride=2,
                                        dilate=replace_stride_with_dilation[0])
        self.layer3 = self._make_layer(block, 256, layers[4], stride=2,
                                       dilate=replace_stride_with_dilation[1])
        self.layer3_ = self._make_layer(block, 256, layers[5], stride=2,
                                        dilate=replace_stride_with_dilation[1])
        self.layer4 = self._make_layer(block, 512, layers[6], stride=2,
                                       dilate=replace_stride_with_dilation[2])
        self.layer4_ = self._make_layer(block, 512, layers[7], stride=2,
                                        dilate=replace_stride_with_dilation[2])