Abstract
OBJECTIVE: To evaluate the application of the prescription review software used in our hospital. METHODS: The same 552 prescriptions were reviewed by pharmacist A, pharmacist B and the software; the results were compared pairwise and Kappa values were calculated to assess the reliability of the software. The results of pharmacist B and of the software were then combined (serial test and parallel test) and compared against the results of pharmacist A; sensitivity, specificity and Youden's index were calculated to assess the validity of the software. Pharmacist A also reviewed the same 552 prescriptions a second time, and the Kappa value between the two rounds was calculated to assess the intra-rater consistency of pharmacist A. RESULTS: The results of pharmacists A and B agreed poorly with those of the software, with Kappa values of 0.0202 and 0.0020 respectively (both P<0.001). The results of pharmacists A and B showed the strongest agreement, with a Kappa value of 0.8430 (0.1<P<0.25, no statistically significant difference). The sensitivity of the software was 69.09% and its specificity 32.58%; the sensitivity of pharmacist B was 84.55% and his/her specificity 97.74%. The parallel test raised sensitivity to 93.64% but left specificity essentially unchanged (32.35%); the serial test lowered sensitivity slightly (54.55%) while raising specificity markedly (97.96%), but its results still differed significantly from those of pharmacist A (P<0.005). The Kappa value between pharmacist A's two rounds of review was 0.9714 (0.5<P<0.75), the strongest consistency observed. CONCLUSIONS: Compared with manual review by our hospital's pharmacists, the review software shows low sensitivity and specificity and poor overall validity. Combining pharmacist and software review does not effectively improve the sensitivity or specificity of the results.
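The statistics reported above follow standard definitions for agreement and screening studies. The minimal Python sketch below (with illustrative counts, not the study's actual data) shows how Cohen's kappa, sensitivity, specificity, Youden's index, and the parallel/serial combinations of two raters' results are typically computed:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both raters flag the prescription as irrational,
    b = rater 1 flags it / rater 2 does not,
    c = rater 1 does not / rater 2 flags it,
    d = neither rater flags it."""
    n = a + b + c + d
    po = (a + d) / n  # observed agreement
    # chance agreement from the marginal totals
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (po - pe) / (1 - pe)

def screening_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity and Youden's index versus a gold standard
    (here, pharmacist A's review would serve as the reference)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    youden = sensitivity + specificity - 1
    return sensitivity, specificity, youden

def combine_parallel(flags_1, flags_2):
    """Parallel test: a prescription is judged irrational if EITHER
    rater flags it (raises sensitivity, tends to lower specificity)."""
    return [x or y for x, y in zip(flags_1, flags_2)]

def combine_serial(flags_1, flags_2):
    """Serial test: a prescription is judged irrational only if BOTH
    raters flag it (raises specificity, tends to lower sensitivity)."""
    return [x and y for x, y in zip(flags_1, flags_2)]
```

For example, `cohens_kappa(50, 0, 0, 50)` (perfect agreement) returns 1.0, while values near 0, as found for the software here, indicate agreement no better than chance.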
Source
China Pharmacy (《中国药房》), 2015, Issue 31, pp. 4330-4332 (3 pages)
Indexed in: CAS; Peking University Core Journal List (北大核心)
Keywords
Prescription review software; Manual review; Rational prescription; Irrational prescription; Kappa value; Consistency