Background: The Covid-19 pandemic resulted in an abrupt but accelerated shift to e-learning worldwide. Education in a post-pandemic world has to amalgamate the advantages of e-learning with the important pedagogical goals associated with in-person teaching. Although various advanced technologies are at our fingertips today, we are still unable to use their full potential in teaching and learning. In this regard, mobile VR technology is cost-efficient, versatile, and engaging for students. Developing countries have more smartphone users than developed countries, suggesting that developing countries such as Malaysia stand to benefit significantly from mobile-phone-based learning. With that in mind, we propose a pre-protocol to investigate learner motivation and engagement in e-learning with smartphone-integrated VR, based on the learners' VARK (Visual, Auditory, Read/Write, Kinesthetic) learning styles.
Proposed methodology: This study targets students of the same age group within K-12 education (specifically grades 9-12) following a STEM curriculum. The Google Cardboard VR set will be used as the primary technology for its affordability, ease of assembly, and variety of available vendors. A mixed-method data collection approach (survey plus activity log tracking) is suggested to gauge the learners' degree of engagement and motivation in the mobile VR-assisted e-learning context. The students will be taught a topic using mobile VR and then assessed through simple classroom quizzes to determine how well they grasped the concept. The data collected through activity logs (while the topic is taught in mobile VR) and questionnaires will be mapped to each individual learner and organized in a data repository. Further visualization, analysis, and investigation will be performed using SmartPLS, Python, or R.
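The per-learner mapping of questionnaire and activity-log data described above can be sketched as follows. This is an illustration only, not the study's actual pipeline; the field names (`student_id`, `vark_style`, `motivation_score`, `duration_s`) are hypothetical placeholders for whatever instruments the protocol adopts.

```python
# Minimal sketch of a learner-centric data repository: each survey response
# and each VR activity-log event is keyed to one student.
surveys = [
    {"student_id": "S01", "vark_style": "Visual", "motivation_score": 4.2},
    {"student_id": "S02", "vark_style": "Kinesthetic", "motivation_score": 3.8},
]
activity_logs = [
    {"student_id": "S01", "event": "vr_session_start", "duration_s": 310},
    {"student_id": "S01", "event": "quiz_attempt", "duration_s": 120},
    {"student_id": "S02", "event": "vr_session_start", "duration_s": 275},
]

def build_repository(surveys, logs):
    """Map survey answers and VR activity-log events onto each learner."""
    repo = {s["student_id"]: {**s, "events": []} for s in surveys}
    for entry in logs:
        repo[entry["student_id"]]["events"].append(entry)
    return repo

repo = build_repository(surveys, activity_logs)
print(repo["S01"]["vark_style"], len(repo["S01"]["events"]))  # Visual 2
```

A flat per-learner structure like this exports cleanly to CSV for downstream analysis in SmartPLS, Python, or R.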
Conclusions: The study aims to provide context for smartphone and software companies to develop technologies that could facilitate learner motivation and engagement in the post-pandemic era.
This research enhances crowd analysis by focusing on excessive-crowd analysis and crowd density prediction for the Hajj and Umrah pilgrimages. Crowd analysis typically estimates the number of objects within an image or video frame and is regularly solved by estimating a density map generated from object location annotations. However, it suffers from low accuracy when the crowd is far away from the surveillance camera. This research proposes an approach to overcome the problem of estimating the density of a crowd captured by a distant surveillance camera. The proposed approach employs a fully convolutional neural network (FCNN)-based method for crowd analysis, especially the classification of crowd density. This study aims to address the current technological challenges of video analysis in a scenario involving the movement of large numbers of pilgrims at densities ranging between 7 and 8 persons per square meter. To address this challenge, this study develops a new dataset based on the Hajj pilgrimage scenario. To validate the proposed method, the proposed model is compared with existing models on existing datasets. The proposed FCNN-based method achieved final accuracies of 100%, 98%, and 98.16% on the proposed dataset, the UCSD dataset, and the JHU-CROWD dataset, respectively. Additionally, the ResNet-based method obtained final accuracies of 97%, 89%, and 97% on the same three datasets. The proposed Hajj-Crowd-2021 crowd analysis dataset and model outperformed other state-of-the-art datasets and models in most cases.
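The abstract does not specify the network architecture, so the following is only a minimal sketch of what an FCNN-based crowd-density *classifier* could look like in PyTorch. The layer widths, the number of density classes, and the use of a 1x1 convolution plus global pooling in place of dense layers are all assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class CrowdDensityFCNN(nn.Module):
    """Fully convolutional density classifier: no dense layers, so any
    input resolution works (useful for frames from distant cameras)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution replaces a fully connected classification head.
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, x):
        x = self.features(x)
        x = self.classifier(x)
        return self.pool(x).flatten(1)  # (batch, num_classes) logits

model = CrowdDensityFCNN(num_classes=5)
logits = model(torch.randn(2, 3, 128, 128))  # two RGB frames
print(logits.shape)  # torch.Size([2, 5])
```

Because every layer is convolutional, the same weights apply to frames of any size, which fits surveillance footage where crop resolutions vary.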