International Journal of Anesthesiology and Resuscitation   2023, Issue (1): 0-0
    
Application of artificial intelligence in identifying critical conscious states during propofol anesthesia
Song Dawei, Yan Fei, Wang Yubo, Zhang Yun, Wang Qiang, Jing Guixia1
1. The First Affiliated Hospital of Xi'an Jiaotong University
Abstract:

Objective To use confident learning to identify and characterize brain functional features associated with consciousness fluctuations in the critical state under propofol anesthesia, and to use these features to classify consciousness states.

Methods A total of 29 patients who underwent thoracic or abdominal surgery at the First Affiliated Hospital of Xi'an Jiaotong University between October 2019 and October 2020 were enrolled. Patients were aged 18‒45 years, with BMI 18‒26 kg/m², American Society of Anesthesiologists (ASA) class Ⅰ or Ⅱ, and no neurological or psychiatric disease or history thereof. Propofol was administered by target-controlled infusion; the initial target concentration was set to 1.0 mg/L and increased in steps of 0.2 mg/L at 6 min intervals until loss of consciousness. Meanwhile, patients performed an auditory button-press task to capture their consciousness state while electroencephalography (EEG) was recorded. The EEG data were preprocessed and divided into 5 s epochs according to the sound stimuli, and each epoch was labeled as conscious or unconscious according to whether the patient responded to the corresponding stimulus. The stage in which conscious and unconscious responses alternated was defined as the critical state. For each epoch, 110 brain functional features were extracted, covering EEG power spectral properties, signal complexity, inter-regional functional connectivity, and brain network properties. Features were screened by forward selection. Confident learning was then used to clean the training set, and three classifier models, linear discriminant analysis (LDA), logistic regression (LR), and support vector machine (SVM), were used to compute the classification accuracy of consciousness states before and after confident learning.

Results Most of the labels cleaned by confident learning were located in the critical state of consciousness fluctuation. Before confident learning, the classification accuracy of the LDA, LR, and SVM models was (85.3±3.7)%, (85.3±3.9)%, and (85.3±3.8)%, respectively; after confident learning, it was (93.5±2.0)%, (92.9±1.8)%, and (93.3±1.0)%, respectively, an average increase of 7.93%. For epochs originally labeled unconscious (no response to auditory stimuli) but re-labeled conscious by confident learning, the α proportion (posterior), PLI‑δ (frontal-parietal), and clustering coefficient‑δ features differed significantly from stable unconscious epochs (P<0.001) but showed no significant difference from stable conscious epochs (P>0.05). Although the fast/slow wave‑α (frontal), permutation entropy‑θ (frontal), permutation entropy‑θ (central), and α proportion (frontal) features differed significantly between the indeterminate state and both the stable conscious and stable unconscious states (P<0.001), these values were markedly closer to the stable conscious state than to the stable unconscious state.

Conclusions Confident learning can effectively improve the classification of different consciousness levels in the critical state, providing methodological support for more accurate intraoperative consciousness monitoring.
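The label-cleaning step described in the Methods can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes scikit-learn, substitutes synthetic data for the 110 EEG features, and applies the core confident-learning rule — flag an epoch when its out-of-sample predicted probability for the opposite class meets or exceeds that class's average self-confidence threshold.

```python
# Minimal confident-learning sketch for cleaning noisy epoch labels.
# Assumptions (not from the paper): scikit-learn, synthetic features in place
# of the 110 EEG features, 10% simulated label noise mimicking the critical state.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X, y_true = make_classification(n_samples=600, n_features=20,
                                n_informative=5, random_state=0)
# Simulate consciousness-fluctuation mislabeling: flip 10% of the labels
# (0 = unconscious, 1 = conscious).
y_noisy = y_true.copy()
flip = rng.choice(len(y_noisy), size=60, replace=False)
y_noisy[flip] = 1 - y_noisy[flip]

# Out-of-sample predicted probabilities via 5-fold cross-validation.
probs = cross_val_predict(LinearDiscriminantAnalysis(), X, y_noisy,
                          cv=5, method="predict_proba")

# Per-class threshold: mean self-confidence of epochs carrying that label.
t = np.array([probs[y_noisy == j, j].mean() for j in (0, 1)])

# Flag an epoch as a likely label error when its confidence in the *other*
# class reaches that class's threshold.
other = 1 - y_noisy
suspect = probs[np.arange(len(y_noisy)), other] >= t[other]
clean_mask = ~suspect  # epochs kept for retraining the classifiers
```

In the study, the epochs flagged in this way were concentrated in the critical state, and retraining the LDA/LR/SVM classifiers on the cleaned set is what raised the reported accuracy from about 85% to about 93%.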

Key words: Artificial intelligence; Electroencephalography; Confident learning; Depth of anesthesia
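For readers unfamiliar with the spectral features the abstract mentions (e.g., the posterior α proportion), the sketch below shows one plausible way to cut a continuous EEG channel into 5 s epochs and compute the relative α-band (8‒12 Hz) power per epoch with Welch's method. The 250 Hz sampling rate, the 1‒45 Hz normalization band, and the synthetic 10 Hz test signal are all assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch (not the authors' pipeline): 5 s epoching and a
# relative alpha-power feature, assuming a 250 Hz sampling rate.
import numpy as np
from scipy.signal import welch

FS = 250          # assumed sampling rate, Hz
EPOCH = 5 * FS    # 5 s epochs, as described in the abstract

def alpha_ratio(epoch, fs=FS):
    """Fraction of 1-45 Hz power falling in the alpha band (8-12 Hz)."""
    f, pxx = welch(epoch, fs=fs, nperseg=2 * fs)
    broad = pxx[(f >= 1) & (f <= 45)].sum()
    alpha = pxx[(f >= 8) & (f <= 12)].sum()
    return alpha / broad

# One minute of synthetic "EEG": a 10 Hz rhythm plus white noise.
t = np.arange(60 * FS) / FS
sig = np.sin(2 * np.pi * 10 * t) \
    + 0.3 * np.random.default_rng(1).standard_normal(t.size)

# Slice into non-overlapping 5 s epochs and compute the feature per epoch.
epochs = sig[: (sig.size // EPOCH) * EPOCH].reshape(-1, EPOCH)
ratios = np.array([alpha_ratio(e) for e in epochs])
```

Because the synthetic signal is dominated by a 10 Hz component, each epoch's alpha ratio comes out high; on real data this feature would vary with the patient's state, which is what makes it useful as one of the 110 candidate features.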