Accurate segmentation of breast ultrasound images is an important precondition for lesion assessment. Existing segmentation approaches suffer from massive parameter counts, sluggish inference, and heavy memory consumption. To tackle these problems, we propose T2KD Attention U-Net (dual-teacher knowledge distillation Attention U-Net), a lightweight semantic segmentation method for breast ultrasound images based on dual-path joint distillation. First, because benign and malignant breast lesions differ in feature representation and semantic information, we designed two teacher models to learn fine-grained features from each class of images. Then we leveraged joint distillation to train a lightweight student model. Finally, we constructed a novel weight-balance loss that focuses on the semantic features of small objects, alleviating the class imbalance between tumor and background. Extensive experiments on Dataset BUSI and Dataset B demonstrated that T2KD Attention U-Net outperformed various knowledge distillation counterparts. Concretely, the accuracy, recall, precision, Dice, and mIoU of the proposed method were 95.26%, 86.23%, 85.09%, 83.59% and 77.78% on Dataset BUSI, respectively, and 97.95%, 92.80%, 88.33%, 88.40% and 82.42% on Dataset B, respectively. Compared with other models, the performance of this model was significantly improved. Meanwhile, compared with the teacher model, the parameter count, size, and complexity of the student model were significantly reduced (2.2×10⁶ vs. 106.1×10⁶ parameters, 8.4 MB vs. 414 MB, 16.59 GFLOPs vs. 205.98 GFLOPs, respectively). In short, the proposed model maintains performance while greatly decreasing the amount of computation, which provides a new option for deployment in clinical medical scenarios.
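Two ingredients of the method above — distilling softened teacher predictions into a student, and a weight-balanced loss that keeps the small tumor foreground from being swamped by background — can be sketched as follows. This is a minimal NumPy illustration of the standard temperature-scaled distillation loss and inverse-frequency class weighting; the function names and exact loss forms are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def softmax(z, T=1.0, axis=-1):
    """Numerically stable softmax with temperature T."""
    z = z / T
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Pixel-wise KL divergence between temperature-softened teacher
    and student predictions, scaled by T^2 as in standard distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-8) - np.log(p_s + 1e-8))).sum(axis=-1)
    return (T ** 2) * kl.mean()

def balance_weights(mask, n_classes=2):
    """Inverse-frequency class weights: the rare tumor class receives a
    larger weight than the dominant background class."""
    freq = np.bincount(mask.ravel(), minlength=n_classes) / mask.size
    w = 1.0 / (freq + 1e-8)
    return w / w.sum()
```

In practice the distillation term would be combined with a supervised segmentation loss (e.g. weighted cross-entropy or Dice) using these class weights.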
Objective To observe the effect of sensory integration training combined with methylphenidate hydrochloride on attention deficit hyperactivity disorder (ADHD). Methods The clinical data of 96 patients with ADHD diagnosed between January 2009 and March 2013 were retrospectively analyzed. The patients were divided into two groups using a random number table. The trial group (n=48) received sensory integration training combined with methylphenidate hydrochloride, while the control group (n=48) received methylphenidate hydrochloride alone. The scores of the sensory integration ability rating scale, the integrated visual and auditory continuous performance test (IVA-CPT), Conners' behavior rating scale, and the Chinese Wechsler Intelligence Scale for Children (C-WISC), as well as adverse reactions, were observed and compared between the two groups. Results The scores of the sensory integration ability rating scale, FRCQ and FAQ (IVA-CPT), and PIQ, VIQ, FIQ and C factor (C-WISC) in both groups were significantly higher after therapy, while the scores of the study, behavior, somatopsychic disturbance, impulsion, hyperactivity index and anxiety factors significantly decreased after treatment (P<0.05). Compared with the control group, the trial group's scores on the sensory integration ability rating scale, IVA-CPT, Conners' behavior rating scale and C-WISC improved markedly, and adverse reactions were significantly fewer (P<0.05). Conclusion Sensory integration training combined with methylphenidate hydrochloride is safe and effective for children with attention deficit hyperactivity disorder.
The conventional fault diagnosis of patient monitors relies heavily on manual experience, resulting in low diagnostic efficiency and ineffective utilization of fault maintenance text data. To address these issues, this paper proposes an intelligent fault diagnosis method for patient monitors based on multi-feature text representation, an improved bidirectional gated recurrent unit (BiGRU), and an attention mechanism. First, the fault text data were preprocessed, and word vectors containing multiple linguistic features were generated by a linguistically-motivated bidirectional encoder representation from Transformers. Then, bidirectional fault features were extracted and weighted by the improved BiGRU and the attention mechanism, respectively. Finally, a weighted loss function was used to reduce the impact of class imbalance on the model. To validate the effectiveness of the proposed method, this paper uses a patient monitor fault dataset for verification, on which the macro F1 value reached 91.11%. The results show that the model built in this study can automatically classify fault text, and may provide decision support for intelligent fault diagnosis of patient monitors in the future.
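The "extracted and weighted" step above — attention pooling the BiGRU's per-token hidden states into one text vector — commonly follows the additive-attention formulation sketched below in NumPy. The names (`attention_pool`, the toy shapes) are illustrative assumptions, not the paper's code.

```python
import numpy as np

def attention_pool(H, W, v):
    """Additive attention over time steps: score each hidden state with
    tanh(H @ W) @ v, softmax-normalize the scores, and return the
    weighted sum of hidden states as the document vector."""
    u = np.tanh(H @ W)                 # (T, d) scored representations
    scores = u @ v                     # (T,) one scalar per time step
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()        # attention weights, sum to 1
    return alpha @ H, alpha            # context vector and weights

# Toy example: 6 time steps of 8-dimensional BiGRU outputs.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 8))
W = rng.normal(size=(8, 8))
v = rng.normal(size=(8,))
ctx, alpha = attention_pool(H, W, v)
```

The context vector `ctx` would then feed a softmax classifier trained with the class-weighted loss the abstract mentions.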
Lung cancer is the tumor disease that poses the greatest threat to human health. Early detection is crucial for improving the survival and recovery rates of lung cancer patients. Existing methods use two-dimensional multi-view frameworks to learn lung nodule features and simply integrate the multi-view features to classify benign and malignant lung nodules. However, these methods fail to capture spatial features effectively and ignore the variability among views. Therefore, this paper proposes a three-dimensional (3D) multi-view convolutional neural network (MVCNN) framework. To further address the differences among views in the multi-view model, a 3D multi-view squeeze-and-excitation convolutional neural network (MVSECNN) model is constructed by introducing a squeeze-and-excitation (SE) module in the feature fusion stage. Finally, statistical methods are used to analyze the model predictions and doctor annotations. On the independent test set, the classification accuracy and sensitivity of the model were 96.04% and 98.59% respectively, higher than those of other state-of-the-art methods. The consistency score between the model predictions and the pathological diagnosis results was 0.948, significantly higher than that between the doctor annotations and the pathological diagnosis results. The proposed methods can effectively learn the spatial heterogeneity of lung nodules and address multi-view differences. At the same time, they can classify benign and malignant lung nodules, which is of great significance for assisting doctors in clinical diagnosis.
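The SE module used in the feature fusion stage recalibrates channels (here, view-specific feature maps) by a learned gate. The following NumPy sketch shows the standard squeeze-and-excitation computation; the function name and the toy shapes are assumptions for illustration, not the MVSECNN code.

```python
import numpy as np

def se_recalibrate(x, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature tensor:
    squeeze = global average pool per channel; excitation = a small
    bottleneck (ReLU then sigmoid); then rescale channels by the gates."""
    z = x.mean(axis=(1, 2))                  # squeeze: (C,)
    s = np.maximum(z @ w1, 0.0)              # reduction layer with ReLU
    g = 1.0 / (1.0 + np.exp(-(s @ w2)))      # sigmoid gates in (0, 1)
    return x * g[:, None, None], g           # recalibrated features, gates
```

In the fusion stage, larger gates would emphasize the more informative views and smaller gates would suppress redundant ones.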
To address the challenges in blood cell recognition caused by diverse morphology, dense distribution, and abundant small-target information, this paper proposes a blood cell detection algorithm: a "You Only Look Once" (YOLO) model based on hybrid attention and deep over-parameterized convolution (HADO-YOLO). First, a hybrid attention mechanism is introduced into the backbone network to enhance the model's sensitivity to detailed features. Second, the standard downsampling convolution layers in the neck network are replaced with deep over-parameterized convolutions to expand the receptive field and improve feature representation. Finally, the detection head is decoupled to enhance the model's robustness in detecting abnormal cells. Experimental results on the Blood Cell Counting Dataset (BCCD) demonstrate that HADO-YOLO achieves a mean average precision of 90.2% and a precision of 93.8%, outperforming the baseline YOLO model. Compared with existing blood cell detection methods, the proposed algorithm achieves state-of-the-art detection performance. In conclusion, HADO-YOLO offers a more efficient and accurate solution for identifying various types of blood cells, providing valuable technical support for future clinical diagnostic applications.
Accurate segmentation of ground glass nodules (GGNs) is clinically important, but it is a challenging task because GGNs in computed tomography images show blurred boundaries, irregular shapes, and uneven intensity. This paper aims to segment GGNs with a fully convolutional residual network, i.e., a residual network based on an atrous spatial pyramid pooling structure and attention mechanism (ResAANet). The network uses the atrous spatial pyramid pooling (ASPP) structure to enlarge the receptive field of feature maps and extract richer features, and utilizes attention mechanisms, residual connections, and long skip connections to fully retain the sensitive features extracted by the convolutional layers. First, we employed 565 GGNs provided by Shanghai Chest Hospital to train and validate ResAANet and obtain a stable model. Then, two groups of data selected from clinical examinations (84 GGNs) and the lung image database consortium (LIDC) dataset (145 GGNs) were employed to validate and evaluate the performance of the proposed method. Finally, we applied a best-threshold method to remove false positive regions and obtain optimized results. The average Dice similarity coefficient (DSC) of the proposed algorithm on the clinical and LIDC datasets reached 83.46% and 83.26% respectively, the average Jaccard index (IoU) reached 72.39% and 71.56% respectively, and the segmentation speed reached 0.1 seconds per image. Compared with other reported methods, the proposed method segments GGNs accurately, quickly and robustly. It can provide doctors with important information such as nodule size or density, assisting doctors in subsequent diagnosis and treatment.
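The receptive-field expansion that ASPP relies on comes from atrous (dilated) convolution: the kernel taps are spaced `dilation` samples apart, so the same number of weights covers a wider span. A minimal 1-D NumPy illustration (not the ResAANet code):

```python
import numpy as np

def dilated_conv1d(x, k, dilation):
    """Valid 1-D atrous convolution: kernel taps are spaced `dilation`
    apart, so a length-3 kernel with dilation 2 spans 5 input samples."""
    span = (len(k) - 1) * dilation        # input samples covered by kernel
    out = np.empty(len(x) - span)
    for i in range(len(out)):
        out[i] = sum(k[j] * x[i + j * dilation] for j in range(len(k)))
    return out
```

ASPP applies several such convolutions in parallel at different dilation rates and concatenates the results, capturing context at multiple scales without extra parameters.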
Objective To systematically review the methodological quality of guidelines on attention-deficit/hyperactivity disorder (ADHD) in children and adolescents, and to compare the similarities and differences of the recommended drugs, in order to provide guidance for clinical practice. Methods Guidelines on ADHD were electronically retrieved from PubMed, EMbase, VIP, WanFang Data, CNKI, NGC (National Guideline Clearinghouse), GIN (Guidelines International Network) and NICE (National Institute for Health and Clinical Excellence) from inception to December 2013. The methodological quality of the included guidelines was evaluated according to the AGREE II instrument, and the differences between recommendations were compared. Results A total of 9 guidelines on ADHD in children and adolescents were included, with development dates ranging from 2004 to 2012. Of the 9 guidelines, 4 were developed in the USA, 3 in Europe and 2 in the UK. The levels of recommendation were Level A for 2 guidelines and Level B for 7 guidelines. The guidelines' scores in the AGREE II domains decreased in the order of "clarity of presentation", "scope and purpose", "stakeholder involvement", "applicability", "rigour of development" and "editorial independence". The three evidence-based guidelines scored highest in the "rigour of development" domain. There were slight differences among the recommendations of the guidelines. Conclusion The overall methodological quality of ADHD guidelines is suboptimal across countries and regions. Scores vary across the 6 domains and 23 items of AGREE II, and evidence-based guidelines score higher than non-evidence-based guidelines. Future guidelines on ADHD in children and adolescents should be improved in "rigour of development" and "applicability", and conflicts of interest should be addressed. Guidelines should be developed using evidence-based methods, and the best evidence is recommended.
The mechanism by which the human brain processes speech is an important source of inspiration for speech enhancement technology. Attention and lateral inhibition are key mechanisms in auditory information processing that can selectively enhance specific information. Building on this, this study introduces a dual-branch U-Net that integrates lateral inhibition and a feedback-driven attention mechanism. Noisy speech input to the first U-Net branch produced selective feedback of high-confidence time-frequency units. The resulting activation-layer gradients, combined with the lateral inhibition mechanism, were used to compute attention maps. These maps were then concatenated to the second U-Net branch, directing the network's focus and achieving selective enhancement of speech signals. The speech enhancement effect was evaluated using five metrics, including the perceptual evaluation of speech quality. The method was compared with five other methods: Wiener, SEGAN, PHASEN, Demucs and GRN. The experimental results demonstrated that, across multiple performance metrics, the proposed method improved speech enhancement in various noise scenarios by 18% to 21% over the baseline network. The improvement was particularly notable at low signal-to-noise ratios, where the proposed method showed a significant advantage over the other methods. The speech enhancement technique based on lateral inhibition and feedback-driven attention holds significant potential for auditory speech enhancement, making it suitable for clinical applications such as cochlear implants and hearing aids.
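The two biologically inspired operations above can be caricatured in a few lines: an attention map derived from activation gradients (as in gradient-based saliency), and lateral inhibition in which each unit suppresses its neighbors. This NumPy sketch is a simplified 1-D analogue under assumed formulations, not the paper's dual-branch implementation.

```python
import numpy as np

def attention_map(grad):
    """Turn activation-layer gradients into a [0, 1] attention map
    by taking magnitudes and normalizing by the maximum."""
    m = np.abs(grad)
    return m / (m.max() + 1e-8)

def lateral_inhibit(a, strength=0.5):
    """Each unit subtracts a fraction of its neighbors' mean (1-D
    analogue of lateral inhibition), sharpening local peaks."""
    padded = np.pad(a, 1, mode="edge")
    neighbors = (padded[:-2] + padded[2:]) / 2.0
    return np.maximum(a - strength * neighbors, 0.0)
```

Applied to a time-frequency representation, the inhibited attention map would emphasize confident speech-dominated units before being concatenated to the second branch.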
The synergistic effect of drug combinations can overcome acquired resistance to single-drug therapy and has great potential for treating complex diseases such as cancer. In this study, to explore the impact of interactions between drug molecules on the effect of anticancer drugs, we proposed a Transformer-based deep learning prediction model, SMILESynergy. First, drug molecules were represented as text using the simplified molecular-input line-entry system (SMILES), and drug molecule isomers were generated through SMILES enumeration for data augmentation. Then, the attention mechanism in the Transformer was used to encode and decode the augmented drug molecules, and finally a multi-layer perceptron (MLP) was attached to obtain the synergy value of the drugs. Experimental results showed that our model achieved a mean squared error of 51.34 in regression analysis and an accuracy of 0.97 in classification analysis, with better predictive performance than the DeepSynergy and MulinputSynergy models. SMILESynergy offers improved predictive performance to help researchers rapidly screen optimal drug combinations and improve cancer treatment outcomes.
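The core of the Transformer encoding step is scaled dot-product attention, which lets every SMILES token attend to every other token. The standard formulation is sketched below in NumPy; the toy shapes are assumptions for illustration, and this is not the SMILESynergy code.

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V: each query
    token mixes the value vectors, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # (n_q, n_k) similarities
    scores = scores - scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)      # rows sum to 1
    return w @ V, w                            # mixed values, weights
```

Stacking such attention layers over SMILES token embeddings, then feeding a pooled representation to an MLP regression head, reflects the encode-then-predict pipeline the abstract describes.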
Glioma is a primary brain tumor with a high incidence rate. High-grade gliomas (HGG) have the highest degree of malignancy and the lowest survival rate. Surgical resection followed by adjuvant chemoradiotherapy is commonly used in clinical treatment, so accurate segmentation of tumor-related areas is of great significance for patient care. To improve the segmentation accuracy of HGG, this paper proposes a multi-modal glioma semantic segmentation network with multi-scale feature extraction and a multi-attention fusion mechanism. The main contributions are: (1) multi-scale residual structures were used to extract features from multi-modal glioma magnetic resonance imaging (MRI); (2) two types of attention modules were used to aggregate features along the channel and spatial dimensions; (3) to improve the segmentation performance of the whole network, a branch classifier was constructed using an ensemble learning strategy to adjust and correct the classification results of the backbone classifier. The experimental results showed that the Dice coefficients of the proposed segmentation method were 0.9097, 0.8773 and 0.8396 for the whole tumor, tumor core and enhancing tumor respectively, and the segmentation results had good boundary continuity in the three-dimensional direction. Therefore, the proposed semantic segmentation network has good segmentation performance for high-grade glioma lesions.