  1. Abd-Shukor SN, Yahaya N, Tamil AM, Botelho MG, Ho TK
    Eur J Dent Educ, 2021 Nov;25(4):744-752.
    PMID: 33368978 DOI: 10.1111/eje.12653
    INTRODUCTION: The application of video-based learning in dentistry has been widely investigated; however, the nature of on-screen enhancements to such videos has been minimally explored in the literature. This study investigated the effectiveness of an in-class and on-demand enhanced video to support learning on removable partial dentures, in terms of knowledge acquisition, perception and clinical skill performance.

    METHODS: Fifty-four dental students enrolled in 2018 were recruited as participants and assigned to two groups. Both groups were given the same lecture and asked to watch the same video in either the enhanced or the non-enhanced version. The enhanced video was modified with contemporaneous subtitles of the presenters' dialogue, text bullet points and summary text pages. Knowledge acquisition from the two types of video was assessed with pre- and post-tests one month after the students watched the video. A questionnaire was used to evaluate the students' perceptions of the learning experience, and a performance test on practical skills was conducted after six weeks. All the students responded to the test (100%).

    RESULTS: The enhanced video demonstration improved the students' short-term knowledge acquisition after they watched the video, with an average score 1.59 points higher in the enhanced group than in the non-enhanced group. A majority of students (70.3%) preferred the video as a replacement for the existing teaching method rather than as a teaching supplement.

    CONCLUSION: The application of the enhanced video demonstration resulted in better theoretical knowledge retention but not better practical performance. Students also preferred watching the video to using conventional learning methods.

    Matched MeSH terms: Video Recording
  2. Yahya M'F, Wang GM, Nimbalkar S
    Am J Orthod Dentofacial Orthop, 2023 Jul;164(1):97-105.
    PMID: 36890012 DOI: 10.1016/j.ajodo.2022.11.013
    INTRODUCTION: The evaluation of the quality of information (QOI) and clarity of information (COI) among oral health-related videos on the video-streaming Web site YouTube is scarce. This study evaluated QOI and COI regarding temporary anchorage devices contained within videos uploaded by dental professionals (DPs) on YouTube.

    METHODS: YouTube videos were systematically acquired with 4 search terms. The top 50 videos per search term by number of views were stored in a YouTube account. A set of inclusion/exclusion criteria was applied, videos were assessed for viewing characteristics, a 4-point scoring system (0-3) was applied to evaluate QOI in 10 predetermined domains, and a 3-point scoring system (0-2) was applied to evaluate COI (a toy aggregation sketch follows this entry). Descriptive statistical analyses and intrarater and interrater reliability tests were performed.

    RESULTS: Strong intrarater and interrater reliability scores were observed. Sixty-three videos from the top 58 most-viewed DPs were viewed 1,395,471 times (range, 414-124,939). Most DPs originated from the United States (20%), and orthodontists (62%) uploaded most of the videos. The mean number of reported domains was 2.03 ± 2.40 (out of 10). The mean overall QOI score per domain was 0.36 ± 0.79 (out of 3). The "Placement of miniscrews" domain scored highest (1.23 ± 0.75). The "Cost of miniscrews placement" domain scored the lowest (0.03 ± 0.25). The mean overall QOI score per DP was 3.59 ± 5.64 (out of 30). The COI in 32 videos was immeasurable, and only 2 avoided using technical words.

    CONCLUSIONS: The QOI related to temporary anchorage devices contained within videos provided by DPs through the YouTube Web site is deficient, particularly in the cost of placement. Orthodontists should be aware of the importance of YouTube as an information resource and ensure that videos related to temporary anchorage devices contain comprehensive and evidence-based information.

    Matched MeSH terms: Video Recording
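    As a toy illustration of the scoring scheme above (not the paper's code, and with invented numbers), the per-domain and per-video QOI aggregates can be computed like this:

      import numpy as np

      # Rows = videos, columns = the 10 predetermined QOI domains, each scored 0-3.
      # All numbers here are invented for illustration only.
      qoi = np.array([
          [3, 2, 0, 0, 1, 0, 0, 0, 2, 0],
          [1, 0, 0, 0, 0, 0, 0, 0, 1, 0],
          [2, 1, 1, 0, 2, 0, 1, 0, 3, 0],
      ])

      domains_reported = (qoi > 0).sum(axis=1)  # domains covered per video
      mean_per_domain = qoi.mean(axis=0)        # mean score per domain (out of 3)
      total_per_video = qoi.sum(axis=1)         # overall QOI per video (out of 30)
      print(domains_reported, mean_per_domain, total_per_video)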
  3. AlHasan AJMS
    Surgery, 2023 Sep;174(3):744-746.
    PMID: 37419760 DOI: 10.1016/j.surg.2023.05.042
    Surgical journals use videos for educational and promotional purposes. YouTube is a suitable social media platform for sharing videos of journal content. The Surgery journal experience on YouTube can be used to learn important information on the nature of video content, the measurement of performance, and the benefits and challenges of using YouTube to disseminate journal content. Video content can be created to deliver information and infotainment. The online performance of videos can be measured using various metrics on YouTube Analytics, including content views and engagement metrics. There are several benefits to the use of YouTube videos by surgical journals, including the dissemination of reliable information, language versatility and diversity, open access and portability, increased visibility for authors and journals, and the humanization of the journal interface. However, challenges also need to be overcome, including viewer discretion where graphic content is concerned, copyright protection, limitations of Internet connection bandwidth, algorithmic barriers imposed by YouTube itself, and violations of biomedical ethics.
    Matched MeSH terms: Video Recording
  4. Aly CA, Abas FS, Ann GH
    Sci Prog, 2021;104(2):368504211005480.
    PMID: 33913378 DOI: 10.1177/00368504211005480
    INTRODUCTION: Action recognition is a challenging time series classification task that has received much attention in the recent past due to its importance in critical applications, such as surveillance, visual behavior study, topic discovery, security, and content retrieval.

    OBJECTIVES: The main objective of the research is to develop robust, high-performance human action recognition techniques. A combination of local and holistic feature extraction methods is used, analyzing which features are most effective to extract, followed by simple, high-performance machine learning algorithms.

    METHODS: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame rate re-sampling, and compact feature vector extraction (a minimal sketch of the shot-boundary step follows this entry). This process is achieved by emphasizing variations and extracting strong patterns in feature vectors before classification.

    RESULTS: The proposed schemes were tested on datasets with cluttered backgrounds, low- or high-resolution videos, different viewpoints, and different camera motion conditions, namely, the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. They produced highly accurate results compared to other works on these four widely used datasets. The First, Second, and Third Schemes achieve recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood-2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann, respectively.

    CONCLUSION: Each of the proposed schemes provides high recognition accuracy compared to other state-of-the-art methods; the Second Scheme in particular gives excellent results comparable to the benchmarked approaches.

    Matched MeSH terms: Video Recording
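    The abstract names shot boundary detection as the first pipeline stage but does not specify the algorithm. A minimal histogram-difference cut detector in Python with OpenCV, with an illustrative threshold that is an assumption rather than the paper's value, might look like this:

      import cv2

      def shot_boundaries(video_path, threshold=0.5, bins=64):
          """Return indices of frames whose hue histogram differs sharply from
          the previous frame's (a crude hard-cut detector)."""
          cap = cv2.VideoCapture(video_path)
          boundaries, prev_hist, idx = [], None, 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
              hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
              hist = cv2.normalize(hist, hist).flatten()
              if prev_hist is not None:
                  # Bhattacharyya distance: ~0 for similar frames, ~1 at hard cuts
                  d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
                  if d > threshold:
                      boundaries.append(idx)
              prev_hist, idx = hist, idx + 1
          cap.release()
          return boundaries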
  5. Pogorelov K, Suman S, Azmadi Hussin F, Saeed Malik A, Ostroukhova O, Riegler M, et al.
    J Appl Clin Med Phys, 2019 Aug;20(8):141-154.
    PMID: 31251460 DOI: 10.1002/acm2.12662
    Wireless capsule endoscopy (WCE) is an effective technology that can be used to make a gastrointestinal (GI) tract diagnosis of various lesions and abnormalities. Because of the long time required to pass through the GI tract, the resulting WCE data stream contains a large number of frames, making a visual check of each and every frame of a complete patient's video footage a tedious job for clinical experts. In this paper, an automated technique for bleeding detection based on color and texture features is proposed. The approach combines color information, an essential feature for the initial detection of frames with bleeding, with texture, which plays an important role in extracting more information from the lesion captured in the frames and allows the system to distinguish finely between borderline cases. The detection algorithm utilizes machine-learning-based classification methods, and it can efficiently distinguish between bleeding and nonbleeding frames and perform pixel-level segmentation of bleeding areas in WCE frames (a toy colour-feature sketch follows this entry). The experimental studies demonstrate the performance of the proposed bleeding detection method in terms of detection accuracy, where it is at least as good as the state-of-the-art approaches. In this research, we have conducted a broad comparison of different state-of-the-art features and classification methods, allowing us to build an efficient and flexible WCE video processing system.
    Matched MeSH terms: Video Recording/methods*
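    The paper combines colour and texture features with machine-learning classification. As a hedged sketch of the colour half only (the red-ratio descriptor and the random-forest classifier are assumptions, not the paper's exact feature set or model):

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def colour_features(frame):
          """frame: HxWx3 RGB array -> small colour descriptor emphasising redness."""
          rgb = frame.astype(np.float32) + 1e-6       # avoid division by zero
          red_ratio = rgb[..., 0] / rgb.sum(axis=-1)  # per-pixel red dominance
          return np.array([red_ratio.mean(), red_ratio.std(),
                           rgb[..., 0].mean(), rgb[..., 1].mean(), rgb[..., 2].mean()])

      # Frame-level training on labelled frames would then look like:
      #   X = np.stack([colour_features(f) for f in frames]); y = labels (1 = bleeding)
      #   clf.fit(X, y)
      clf = RandomForestClassifier(n_estimators=100, random_state=0)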
  6. AlDahoul N, Md Sabri AQ, Mansoor AM
    Comput Intell Neurosci, 2018;2018:1639561.
    PMID: 29623089 DOI: 10.1155/2018/1639561
    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes (a sketch of the optical-flow step follows this entry). The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset and compared in terms of training accuracy, testing accuracy, and learning speed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. The S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). The H-ELM has an average accuracy of 95.9%. On a normal Central Processing Unit (CPU), H-ELM training takes 445 seconds; learning in the S-CNN takes 770 seconds on a high-performance Graphical Processing Unit (GPU).
    Matched MeSH terms: Video Recording*
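    The deep models in this entry take optical flow as one input. One common way to compute dense flow is Farneback's method; the parameter values below are OpenCV's usual example settings, an assumption here since the paper does not list them:

      import cv2

      def flow_magnitude(prev_bgr, next_bgr):
          """Per-pixel motion magnitude between two consecutive frames."""
          prev_g = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
          next_g = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
          flow = cv2.calcOpticalFlowFarneback(prev_g, next_g, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
          return mag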
  7. Narayanan SN, Kumar RS
    Acta Biol Hung, 2018 Dec;69(4):371-384.
    PMID: 30587025 DOI: 10.1556/018.69.2018.4.1
    In the behavioral sciences, many of the oldest tests are still used, almost unchanged, decades after their introduction. The subjective influence of the human observer and the large inter-observer and inter-laboratory differences are substantial in these tests. This makes it necessary to apply technological innovations in behavioral science to obtain new parameters, results and insights. The light-dark box (LDB) test is a characteristic tool used to assess anxiety in rodents. A complete behavioral analysis (including both anxiety and locomotion parameters) is not possible with the traditional LDB test protocol, as it lacks real-time video recording of the test. In the current report, we describe an improved approach to conducting the LDB test using a real-time video tracking system (a minimal tracking sketch follows this entry).
    Matched MeSH terms: Video Recording/methods*
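    The report describes real-time video tracking for the LDB test without giving code. A minimal centroid tracker, assuming the light compartment is visible to an overhead camera and supplied as a rectangular ROI (both assumptions for illustration), could look like:

      import cv2

      def seconds_in_light(video_path, light_roi, fps=30.0):
          """Time the animal's centroid spends inside the light-box ROI (x0, y0, x1, y1)."""
          x0, y0, x1, y1 = light_roi
          cap = cv2.VideoCapture(video_path)
          backsub = cv2.createBackgroundSubtractorMOG2()
          frames_in_light = 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              mask = backsub.apply(frame)              # moving pixels = the animal
              m = cv2.moments(mask, binaryImage=True)
              if m["m00"] > 0:
                  cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                  if x0 <= cx <= x1 and y0 <= cy <= y1:
                      frames_in_light += 1
          cap.release()
          return frames_in_light / fps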
  8. Mori Y, Itoi T, Baron TH, Takada T, Strasberg SM, Pitt HA, et al.
    J Hepatobiliary Pancreat Sci, 2018 Jan;25(1):87-95.
    PMID: 28888080 DOI: 10.1002/jhbp.504
    Since the publication of the Tokyo Guidelines in 2007 and their revision in 2013, appropriate management for acute cholecystitis has been more clearly established. Since the last revision, several manuscripts, especially on alternative endoscopic techniques, have been reported; therefore, additional evaluation and refinement of the 2013 Guidelines are required. We describe a standard drainage method for surgically high-risk patients with acute cholecystitis and the latest endoscopic gallbladder drainage techniques described in the updated Tokyo Guidelines 2018 (TG18). Our study confirmed that percutaneous transhepatic gallbladder drainage should be considered the first alternative to surgical intervention in surgically high-risk patients with acute cholecystitis. Endoscopic transpapillary gallbladder drainage or endoscopic ultrasound-guided gallbladder drainage can also be considered at high-volume institutes with skilled endoscopists. In the endoscopic transpapillary approach, either endoscopic naso-gallbladder drainage or gallbladder stenting can be considered for gallbladder drainage. We also introduce special techniques and the latest outcomes of endoscopic ultrasound-guided gallbladder drainage studies. Free full articles and a mobile app of TG18 are available at: http://www.jshbps.jp/modules/en/index.php?content_id=47. Related clinical questions and references are also included.
    Matched MeSH terms: Video Recording*
  9. Mahmud Mohayuddin N, Azman M, Wan Hamizan AK, Zahedi FD, Carroll TL, Mat Baki M
    J Voice, 2024 Nov;38(6):1439-1449.
    PMID: 35896429 DOI: 10.1016/j.jvoice.2022.06.008
    OBJECTIVE: To explore the use of real-time virtual chromoendoscopy (i-scan) in characterizing the mucosal changes present in subjects with suspected laryngopharyngeal reflux (LPR) and to compare the inter-rater and intra-rater agreement of Reflux Finding Scores (RFS) from both laryngologists and general otolaryngologists (ORL) observing exams using both white light endoscopy (WLE) and i-scan.

    METHODS: This is a cross-sectional study that included 66 subjects: 46 symptomatic and 20 asymptomatic for suspected LPR based on the reflux symptom index (RSI). Subjects underwent flexible video laryngoscopic evaluation of the larynx utilising both WLE and i-scan during one continuous exam. Subjects also underwent 24-hour oropharyngeal pH monitoring (Dx-pH). Two laryngologists and two general otolaryngologists evaluated the anonymized videos independently using the RFS. Dx-pH results were interpreted using the pH graph, report and RYAN score. Subjects were then assigned to one of three groups: no reflux, acid reflux and alkaline reflux.

    RESULTS: For the symptomatic group, no mucosal irregularities or early mucosal lesions were observed except in one subject, who had granulation tissue. The mean RFS using WLE and i-scan were, respectively, 11.8 (SD 6.1) and 11.3 (SD 5.6) in the symptomatic group and 7.3 (SD 5.7) and 7.3 (SD 5.2) in the asymptomatic group. Inter-rater agreement of RFS using WLE and i-scan was good for both groups, with intraclass correlations (ICC) of 0.84 and 0.88 (laryngologists) and 0.85 and 0.81 (ORL). Intra-rater agreement among all four raters was good to excellent and similar for both WLE and i-scan (ICC of 0.80 to 0.99; a toy ICC computation follows this entry). Forty-seven of 66 subjects had evidence of LPR on Dx-pH; more specifically, 39 subjects had "acid reflux" and 8 had "alkaline reflux". Sixteen subjects demonstrated a positive RYAN score, but none of these scores were significantly correlated with their RFS.

    CONCLUSIONS: This study reports the first utilization of real-time video chromoendoscopy with i-scan technology through high-definition flexible endoscopes to attempt to characterize laryngopharyngeal findings in patients suspected of having LPR. Both general otolaryngologists and laryngologists were equally capable of reliably calculating the RFS using both WLE and i-scan; however, no significant improvement in agreement or change in RFS was found when i-scan technology was employed.

    LEVEL OF EVIDENCE: Level 2.

    Matched MeSH terms: Video Recording*
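    The agreement statistics above are intraclass correlations. A toy computation with the pingouin package (the long-format layout and rating values are invented, and the paper does not state which software it used):

      import pandas as pd
      import pingouin as pg

      # One Reflux Finding Score per subject per rater (invented values).
      ratings = pd.DataFrame({
          "subject": [1, 1, 2, 2, 3, 3, 4, 4],
          "rater":   ["A", "B"] * 4,
          "rfs":     [11, 12, 7, 8, 15, 14, 5, 6],
      })
      icc = pg.intraclass_corr(data=ratings, targets="subject",
                               raters="rater", ratings="rfs")
      print(icc[["Type", "ICC", "CI95%"]])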
  10. Goh KL
    Ann Acad Med Singap, 2015 Jan;44(1):34-9.
    PMID: 25703498
    Gastrointestinal (GI) endoscopy has evolved tremendously from the early days, when candlelight was used to illuminate scopes, to the extent that it has now become an integral part of the practice of modern gastroenterology. The first gastroscope, introduced by Adolf Kussmaul in 1868, was a rigid scope. However, this scope suffered from the two drawbacks of poor illumination and a high risk of instrumental perforation. Rudolf Schindler improved on this by inventing the semiflexible gastroscope in 1932. But it was Basil Hirschowitz, using the principle of light conduction in fibreoptics, who allowed us to "see well" for the first time when he invented the flexible gastroscope in 1958. With amazing speed and innovation, instrument companies, chiefly Japanese, improved on the Hirschowitz gastroscope and invented a flexible colonoscope. Walter McCune introduced the technique of endoscopic retrograde cholangiopancreatography (ERCP) in 1968, which has now evolved into a sophisticated procedure. The advent of the digital age in the 1980s saw the invention of the videoendoscope. Videoendoscopes have allowed us to start seeing the gastrointestinal tract (GIT) "better", with high magnification and resolution and optical/digital enhancements. Fusing confocal and light microscopy with endoscopy has allowed us to perform an "optical biopsy" of the GI mucosa. The development of endoscopic ultrasonography has allowed us to see "beyond" the GIT lumen. Seeing better has allowed us to do better. Endoscopists have ventured into newer procedures such as the resection of mucosal and submucosal tumours, and the field of therapeutic GI endoscopy sees no end in sight.
    Matched MeSH terms: Video Recording
  11. Yousefi B, Loo CK
    ScientificWorldJournal, 2014;2014:238234.
    PMID: 24883361 DOI: 10.1155/2014/238234
    Studies in computational neuroscience using functional magnetic resonance imaging have claimed that human action recognition in the mammalian brain proceeds along two separate streams, the dorsal and the ventral stream. These are modelled by two pathways in a bio-inspired model, specialized for motion and form information analysis respectively (Giese and Poggio, 2003). In the form pathway, an active basis model built from Gabor wavelets at different orientations and scales is used to form a dictionary for object (human) recognition (a sketch of such a Gabor dictionary follows this entry). Biological movement optic-flow patterns are also utilized: motion information guides the shared sketch algorithm in the form pathway for adjustment and helps to prevent wrong recognition. A synergetic neural network is utilized to generate prototype templates representing the general characteristic form of every class; with these predefined templates, classification is performed by multi-template matching. As every human action has one action prototype, there is some overlap and consistency among these templates, and fuzzy optical-flow division scoring prevents this from causing misrecognition. We successfully apply the proposed model to human action videos obtained from the KTH human action database. The proposed approach follows the interaction between the dorsal and ventral processing streams in the original model of biological movement recognition. The attained results indicate a promising outcome and improved robustness using the proposed approach.
    Matched MeSH terms: Video Recording
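    A sketch of the Gabor-wavelet dictionary mentioned in this entry, built with OpenCV at a few illustrative scales and orientations (the parameter values are assumptions, not those of the active basis model in the paper):

      import cv2
      import numpy as np

      def gabor_bank(ksize=31, scales=(4.0, 8.0), n_orient=8):
          """Gabor kernels at several orientations and scales, the kind of
          dictionary elements a form pathway selects from."""
          kernels = []
          for sigma in scales:
              for i in range(n_orient):
                  theta = i * np.pi / n_orient
                  k = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                         lambd=10.0, gamma=0.5)
                  kernels.append(k / np.abs(k).sum())  # normalise filter energy
          return kernels

      img = np.random.rand(128, 128).astype(np.float32)  # stand-in video frame
      responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in gabor_bank()]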
  12. Kolivand H, Billinghurst M, Sunar MS
    PLoS One, 2016;11(12):e0166424.
    PMID: 27930663 DOI: 10.1371/journal.pone.0166424
    To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene, mixed into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously (a minimal occlusion-compositing sketch follows this entry). The tracking capability of the algorithm is shown, and the findings are assessed using qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
    Matched MeSH terms: Video Recording
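    A minimal sketch of the phantom idea in this entry: use the Kinect depth map to occlude a rendered virtual object wherever the real surface is closer to the camera. This omits shadow casting entirely and is only a simplification of the paper's method:

      import numpy as np

      def composite_with_occlusion(rgb, depth, virt_rgb, virt_depth, virt_mask):
          """Overlay the virtual object only where it is nearer than the real scene.

          rgb: HxWx3 camera image; depth: HxW Kinect depth (metres);
          virt_rgb/virt_depth: rendered virtual colour and depth buffers;
          virt_mask: HxW boolean mask of pixels the virtual object covers.
          """
          visible = virt_mask & (virt_depth < depth)
          out = rgb.copy()
          out[visible] = virt_rgb[visible]
          return out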
  13. Xuan W, Phongsatha T, Hao L, Tian K
    Front Public Health, 2024;12:1382687.
    PMID: 39011330 DOI: 10.3389/fpubh.2024.1382687
    OBJECTIVE: To enhance individuals' sustained intention to use health science popularization videos, this study investigated the path relationships and influencing mechanisms of health science popularization video factors on users' perceived value, expectancy confirmation, enjoyment, satisfaction, trust, and continuous usage intention based on the cognitive-affective-conative and expectation-confirmation model theoretical framework.

    METHODS: This study adopted a cross-sectional design and collected data using self-administered questionnaires. The hypotheses were analyzed using the smart partial least squares (Smart-PLS) structural equation modeling method with a dataset containing 503 valid responses. Subsequently, comprehensive data analysis was conducted.

    RESULTS: Blogger and video quality factors present in health science popularization videos substantially influenced users' perceived value. From a theoretical perspective, the study expands upon the cognitive-affective-conative and expectation-confirmation model theoretical frameworks, enriches the theoretical model, and offers theoretical references for future research in this domain. From a practical perspective, enhancing the overall quality of health science popularization content significantly influences users' perceived value, emotional engagement, and continued intention to engage with the content.

    Matched MeSH terms: Video Recording
  14. Jirjees F, Hasan S, Krass I, Saidawi W, Al-Juboori MK, Othman AM, et al.
    BMC Public Health, 2024 Aug 28;24(1):2340.
    PMID: 39198786 DOI: 10.1186/s12889-024-19553-z
    Meaningful communication between health service users and providers is essential. However, when stakeholders are unfamiliar with new health services, innovative communication methods are necessary to engage them. The aim of the study was to create, validate, and evaluate a video-vignette to enhance stakeholders' (physicians, pharmacists, and laypeople) engagement with and understanding of an innovative pharmacy-based diabetes screening and prevention program, and to assess the video-vignette's capacity to measure appetite and appeal for such preventive programs. This mixed-methods study consisted of two phases. In phase one, a video-vignette depicting the proposed screening and prevention program was developed and validated following established international guidelines (n = 25). The video-vignette was then evaluated by stakeholders (n = 99). In phase two, the video-vignette's capacity as a communication tool was tested in focus groups and interviews to explore stakeholders' perspectives and engagement on the proposed service (n = 22). Quantitative data were analyzed descriptively, while qualitative data underwent thematic analysis. In total, 146 stakeholders participated. The script was well received and deemed credible and realistic. Furthermore, the video-vignette received high ratings for its value, content, interest, realism, and visual and audio quality. The focus groups and interviews provided valuable insights into the design and delivery of the new service. The video-vignette compellingly portrayed the novel pharmacy-based diabetes screening and prevention service. It facilitated in-depth discussions among stakeholders and significantly enhanced their understanding and appreciation of such health services. The video-vignette also generated significant interest in pharmacy-based diabetes screening and prevention programs, serving as a powerful tool to promote enrollment in these initiatives.
    Matched MeSH terms: Video Recording
  15. Hunt EA, Heine M, Shilkofski NS, Bradshaw JH, Nelson-McMillan K, Duval-Arnould J, et al.
    Emerg Med J, 2015 Mar;32(3):189-94.
    PMID: 24243484 DOI: 10.1136/emermed-2013-202867
    AIM: To assess whether access to a voice activated decision support system (VADSS) containing video clips demonstrating resuscitation manoeuvres was associated with increased compliance with American Heart Association Basic Life Support (AHA BLS) guidelines.
    METHODS: This was a prospective, randomised controlled trial. Subjects with no recent clinical experience were randomised to the VADSS or control group and participated in a 5-min simulated out-of-hospital cardiopulmonary arrest with another 'bystander'. Data on performance for predefined outcome measures based on the AHA BLS guidelines were abstracted from videos and the simulator log.
    RESULTS: 31 subjects were enrolled (VADSS 16 vs control 15), with no significant differences in baseline characteristics. Study subjects in the VADSS group were more likely to direct the bystander to: (1) perform compressions to ventilations at the correct ratio of 30:2 (VADSS 15/16 (94%) vs control 4/15 (27%), p<0.001) and (2) insist the bystander switch compressor and ventilator roles after 2 min (VADSS 12/16 (75%) vs control 2/15 (13%), p=0.001); a plausible reconstruction of the first comparison is sketched after this entry. The VADSS group took longer to initiate chest compressions than the control group: VADSS 159.5 (±53) s versus control 78.2 (±20) s, p<0.001. Mean no-flow fractions were very high in both groups: VADSS 72.2% (±0.1) versus control 75.4% (±8.0), p=0.35.
    CONCLUSIONS: The use of an audio- and video-assisted decision support system during a simulated out-of-hospital cardiopulmonary arrest prompted lay rescuers to follow cardiopulmonary resuscitation (CPR) guidelines but was also associated with an unacceptable delay in starting chest compressions. Future studies should explore: (1) whether video is synergistic with audio prompts, (2) how mobile technologies may be leveraged to spread CPR decision support, and (3) usability testing to avoid unintended consequences.
    KEYWORDS: cardiac arrest; research, operational; resuscitation; resuscitation, effectiveness; resuscitation, research
    Matched MeSH terms: Video Recording*
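    The abstract does not state which statistical test produced the p-values above. Given the small cell counts, one plausible reconstruction of comparison (1) is Fisher's exact test on the reported 2x2 table:

      from scipy.stats import fisher_exact

      # Rows = VADSS, control; columns = (correct 30:2 ratio, incorrect).
      table = [[15, 1],
               [4, 11]]
      odds_ratio, p = fisher_exact(table)
      print(f"odds ratio = {odds_ratio:.1f}, p = {p:.5f}")  # should fall below 0.001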
  16. Omar H, Khan SA, Toh CG
    J Dent Educ, 2013 May;77(5):640-7.
    PMID: 23658411
    Student-generated videos provide an authentic learning experience for students, enhance motivation and engagement, improve communication skills, and improve collaborative learning skills. This article describes the development and implementation of a student-generated video activity as part of a knowledge, observation, simulation, and experience (KOSE) program at the School of Dentistry, International Medical University, Kuala Lumpur, Malaysia. It also reports the perceptions of first-year dental students (n=44) of an activity that introduced them to clinical scenarios involving patients and the dental team, aiming to improve professional behavior and communication skills. The learning activity was divided into three phases: a preparatory phase, a video production phase, and a video-watching phase. Students were organized into five groups and were instructed to generate videos addressing given clinical scenarios. Following the activity, students' perceptions were assessed with a questionnaire. The results showed that 86 percent and 88 percent of the students, respectively, agreed that preparing for the activity enhanced their understanding of the role of dentists in the provision of health care and the role of enhanced teamwork. In addition, 86 percent and 75 percent, respectively, agreed that the activity improved their communication and project management skills. Overall, the dental students perceived the student-generated video activity as a positive experience that enabled them to play the major role in driving their learning process.
    Matched MeSH terms: Video Recording*
  17. Waran V, Bahuri NF, Narayanan V, Ganesan D, Kadir KA
    Br J Neurosurg, 2012 Apr;26(2):199-201.
    PMID: 21970777 DOI: 10.3109/02688697.2011.605482
    The purpose of this study was to validate and assess the accuracy and usefulness of sending short video clips (in 3gp file format) of an entire patient scan series, using mobile telephones running on 3G-MMS technology, to enable consultation between junior doctors in a neurosurgical unit and the consultants on call after office hours.
    Matched MeSH terms: Video Recording/standards*
  18. Raymond AA, Gilmore WV, Scott CA, Fish DR, Smith SJ
    Epileptic Disord, 1999 Jun;1(2):101-6.
    PMID: 10937139
    Video-EEG telemetry is often used to support the diagnosis of non-epileptic seizures (NES). Although rare, some patients may have both epileptic seizures (ES) and NES. It is crucially important to identify such patients to avoid the hazards of inappropriate anticonvulsant withdrawal. To delineate the electroclinical characteristics and diagnostic problems in this group of patients, we studied the clinical, EEG and MRI features of 14 consecutive patients in whom separate attacks, considered to be both NES and ES, were recorded using video-EEG telemetry. Only two patients were drug-reduced during the telemetry. Most patients had their first seizure (ES or NES) in childhood (median age 7 years; range: 6 months-24 years); 8/14 patients were female. Brain MRI was abnormal in 10/14 patients. Interictal EEG abnormalities were present in all patients; 13/14 had epileptiform abnormalities and 1/14 only background abnormalities. Over 70 seizures were recorded in these 14 patients: in 12/14 patients, the first recorded seizure was a NES (p < 0.001), and 7 of these patients had at least one more NES before an ES was recorded. Only 3/14 patients had more than 5 NES before an ES was recorded. Recording a small number of apparent NES in an individual by no means precludes the possibility of additional epilepsy. Particular care should be taken, and multiple (> 5) seizure recordings may be advisable, in patients with a young age of seizure onset, interictal EEG abnormalities, or a clear potential aetiology for epilepsy.
    Matched MeSH terms: Video Recording*
  19. Abdullah B, Rajet KA, Abd Hamid SS, Mohammad WM
    Sleep Breath, 2011 Dec;15(4):747-54.
    PMID: 20957444 DOI: 10.1007/s11325-010-0431-7
    OBJECTIVES: We aimed to evaluate the severity of upper airway obstruction at the retropalatal and retroglossal regions in obstructive sleep apnea (OSA) patients.

    METHODOLOGY: This is a descriptive cross-sectional study at the Sleep Clinic, Department of Otorhinolaryngology-Head and Neck Surgery. Flexible nasopharyngolaryngoscopy was performed in the seated erect and supine positions. The retropalatal and retroglossal regions were continuously recorded during quiet breathing and Mueller's maneuver in both positions. Captured images were measured using Scion Image software and the narrowing rate was calculated (the formula is sketched after this entry). The level of each site was classified based on the Fujita classification, and the severity of obstruction was graded using the Sher scoring system for Mueller's maneuver.

    RESULTS: A total of 59 patients participated in this study. Twenty-nine (49.2%) participants had type 1 (retropalatal) obstruction, 23 (38.9%) had type 2 (retropalatal and retroglossal) obstruction, and seven (11.9%) had type 3 (retroglossal) obstruction. Fifty (84.7%) of the patients had severe obstruction at the retropalatal region in the supine position (SRP), followed by 35 (59.3%) at the retropalatal region in the erect position (ERP), 27 (45.8%) at the retroglossal region in the supine position (SRG), and eight (13.5%) at the retroglossal region in the erect position (ERG). Average oxygen saturation showed a significant association in ERP (P = 0.012) and SRP (P < 0.001), but not in ERG and SRG.

    CONCLUSIONS: Videoendoscopy utilizing flexible nasopharyngolaryngoscopy and Scion Image software is reliable, minimally invasive, and useful as an office procedure for evaluating multilevel obstruction of the upper airway in OSA patients. The retropalatal region showed more severe obstruction than the retroglossal region in both the erect and supine positions.

    Matched MeSH terms: Video Recording/instrumentation*
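    The abstract does not give the narrowing-rate formula. The standard definition (an assumption here) is the percentage reduction in cross-sectional area during Mueller's maneuver relative to quiet breathing:

      def narrowing_rate(area_rest, area_mueller):
          """Percentage narrowing of an airway cross-section (areas in pixels or mm^2)."""
          return 100.0 * (area_rest - area_mueller) / area_rest

      print(narrowing_rate(area_rest=120.0, area_mueller=30.0))  # -> 75.0 (% narrowing)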
  20. Yokoe M, Hata J, Takada T, Strasberg SM, Asbun HJ, Wakabayashi G, et al.
    J Hepatobiliary Pancreat Sci, 2018 Jan;25(1):41-54.
    PMID: 29032636 DOI: 10.1002/jhbp.515
    The Tokyo Guidelines 2013 (TG13) for acute cholangitis and cholecystitis were globally disseminated, and various clinical studies about the management of acute cholecystitis were reported by researchers and clinicians from all over the world. The 1st edition, the Tokyo Guidelines 2007 (TG07), was revised in 2013. With that revision, the TG13 diagnostic criteria for acute cholecystitis provided better specificity and higher diagnostic accuracy. Through our literature search on diagnostic criteria for acute cholecystitis, no new, strong evidence released from 2013 to 2017 was found raising serious or important issues with the TG13 diagnostic criteria. On the other hand, the TG13 severity grading for acute cholecystitis has been validated in numerous studies; in these, the TG13 severity grading was significantly associated with parameters including 30-day overall mortality, length of hospital stay, conversion rates to open surgery, and medical costs. In terms of severity assessment, no breakthrough literature warranting a revision of the severity grading was reported. Consequently, the TG13 diagnostic criteria and severity grading were judged from numerous validation studies to be useful indicators in clinical practice and were adopted as the TG18/TG13 diagnostic criteria and severity grading of acute cholecystitis without any modification. Free full articles and a mobile app of TG18 are available at: http://www.jshbps.jp/modules/en/index.php?content_id=47. Related clinical questions and references are also included.
    Matched MeSH terms: Video Recording*