["Mobile Augmented Reality for Learning Phonetics: A Review (2012\u20132022) 91 Table 3. (continued) Study App name Year Language Audience English 4\u20136 years old (Dalim and Sunar 2019) TeachAR 2019 English\/ English learners Arabic (Hadid et al. 2019) Reader Buddy 2019 Malay International students English\/ Children (Ali and Azmi 2019) Azmi 2019 Bangla (Hossain et al. 2019) Japanese Thai-Nichi Institute AR Children\u2019s 2019 students (Thongchum and Book English 7\u20139 years old Charoenpit 2018) Japanese Beginners (Fan et al. 2018) Kanji AR 2018 Arabic 7 years old (Plecher et al. 2018) English 4\u20136 years old (Hashim et al. 2017) PhonoBlocks 2018 (Abd Majid et al. 2016) Mayo Undergraduate students Dragon Tale 2018 English 3\u20134 years old (Boj\u00f3rquez et al. 2016) English Pre-school children (Mart\u00ednez et al. 2016) ARabic 2017 English Children (He et al. 2014) (Khaled et al. 2013) AR app for 2016 pre-literacy Loter\u00eda Mayo 2016 Leihoa 2016 MAR App 2014 AR AlphaBees 2013 3.2 Languages Used MAR Apps for LP Using MAR app for LP has been explored in many languages such as English, Arabic, Japanese, Chinese and Indonesian. It is also explored in uncommon languages such as Malay, Buginese, Bangla, Hijaiyah, and Makhraj. The languages available in LP MAR apps are illustrated in Fig. 2, along with the number of reviewed studies for each. English stands at the top with 31 studies in total. Arabic comes next with seven studies. Fig. 2. The number of MAR apps collected for each language.","92 R. M. Tolba et al. 3.3 Types of Activities Covered According to the reviewed studies, there are three main activity types used in MAR applications for LP: (1) Learning language\u2019s letters\/character\/alphabets pronunciation. (2) Learning how to pronounce words\/vocabulary of a speci\ufb01c language. (3) Pronuncia- tion option in Translation applications. Not all the reviewed studies focused on one type of activity. Studies such as (Daud et al. 2021), (Hossain et al. 2019), and (Yilmaz et al. 2022) covered two types: letters and vocabulary learning. Most of the studies focused on learning vocabulary pronunciation, as shown in Fig. 3. Fig. 3. Activities used in MAR for LP and number of studies found for each. 3.4 Technical Requirements of MAR Applications for LP The technical requirements to develop MAR application for LP include MAR SDK (Software Development Kit), tracking techniques, and interaction techniques. 3.5 AR SDKs SDKs or devkits are development tools that allow developers to create apps, build virtual objects, and blend them with the real world. From the reviewed studies, the top SDKs that gained researchers\u2019 attention in making MAR applications for LP are ARcore, Android SDK, Vuforia, Aurasma, Wikitude, and Hair SDK (Zhang et al. 2020). It was found that Vuforia is the most used SDK, as shown in Fig. 4. Android is the most used platform, as shown in Fig. 5. The other used platforms were IOS and XML. Fig. 4. Number of studies for each SDK.","Mobile Augmented Reality for Learning Phonetics: A Review (2012\u20132022) 93 Fig. 5. Number of studies for each platform. 3.6 Tracking Techniques To overlay the virtual content onto physical objects in the real world, objects must be tracked in the real world in real-time. Tracking anchors, the virtual content in the correct position to the real world (Yu et al. 2016). MAR tracking techniques could be either Sensor-based or Vision-based tracking. 
Sensor-based tracking is a lightweight MAR implementation approach (Singh and Mantri 2015). It uses mobile device sensors such as accelerometers, gyroscopes, compasses, magnetometers, and GPS. Vision-based tracking, in contrast, uses the mobile device's camera to capture the surrounding environment. The most common vision-based tracking types are marker-based and marker-less tracking (Perry 2021). In marker-based tracking, the virtual content is triggered by printed flashcards or book pages. In marker-less tracking, it can be triggered either by large-scale real-world scenes such as buildings or by small-scale objects placed in the environment (e.g., a table) (Karacan 2021). Marker-based tracking is used in 70% of the reviewed studies.

3.7 Interaction Techniques

Interaction techniques concern how users interact with the virtual objects that appear in the AR environment (Tang and Young 2014). They also provide controls over the virtual objects, such as selection and manipulation functions (e.g., color, shape, and position) (Nizam et al. 2018). Interaction in AR is usually unimodal, allowing the user to interact with AR content through only one modality, such as gesture (Bruhn 2018), speech (Nasution et al. 2019; Dalim and Sunar 2019), touch (Jalaluddin et al. 2020; Khaled et al. 2013), or clicking. Clicking, which includes pressing the menus and buttons of the interface, is the most used interaction technique, appearing in 23 studies, as shown in Fig. 6. Touching, by contrast, involves physical manipulation of the virtual object, from rotating to zooming in and out, but it suffers from issues such as the "fat finger" problem.

Fig. 6. The number of reviewed studies for each interaction technique.

4 Discussion

From the reviewed studies, MAR technology significantly enhanced the learning process. It resolved learners' uncertainty about whether the pronunciation of a word is correct. Learner interaction increased through the provided multimedia content, and the technology can be easily deployed at school or at home (Fan et al. 2020). The provided LP activities helped improve learners' reading skills (Wook et al. 2020). MAR also transforms the abstract language symbols on physical learning materials (e.g., letters, flashcards, objects) into vivid 2D/3D augmented visual representations and auditory sounds (Fan et al. 2020). Despite the benefits of using MAR technology in LP, the number of studies in this field started to decrease in 2021. This could be due to the difficulty of providing rich content containing all the rules needed to learn the phonetics of specific languages, such as French or Russian. It could also be due to improvements in mixed reality headsets that grabbed researchers' attention. Moreover, using MAR technology for LP still suffers from limitations such as unstable marker tracking due to inappropriate marker design or inappropriate interaction design (e.g., children's hands blocking markers during interaction) (Fan et al. 2020). The AR content is fixed in most studies and cannot be updated; only the InglesAR app by (Daniel et al. 2020) offered an option to upload resources to expand the vocabulary in the game. The most commonly used interaction technique (clicking) does not exploit the level of interaction that MAR technology can offer; only two reviewed studies, (Jalaluddin et al. 2020) and (Khaled et al. 2013), used touch interaction to increase the level of interactivity.
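To make the marker-based pipeline of Sect. 3.6 concrete, the sketch below detects a printed fiducial marker in a camera frame and estimates the camera-relative pose that anchors the virtual content. This is a minimal illustration, not code from any reviewed app: it assumes OpenCV with the ArUco contrib module, and the camera intrinsics are placeholders that a real app would obtain by calibration. SDKs such as Vuforia or ARCore perform the same detect-then-anchor step internally.

```python
"""Minimal marker-based tracking sketch (vision-based, marker-based, Sect. 3.6).
Assumes OpenCV with the aruco module; intrinsics are placeholder values."""
import cv2
import numpy as np

MARKER_LEN = 0.05  # marker side length in metres (assumption)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # placeholder intrinsics
dist = np.zeros(5)                                           # placeholder distortion

# 3D corners of the marker in its own coordinate frame (z = 0 plane)
half = MARKER_LEN / 2
obj_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                    [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def track(frame):
    """Detect markers and return the poses that anchor the virtual content."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pre-4.7 functional API; newer OpenCV wraps this in cv2.aruco.ArucoDetector.
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    poses = []
    if ids is not None:
        for c in corners:
            # Solve the marker pose from its four image corners.
            ok, rvec, tvec = cv2.solvePnP(obj_pts, c.reshape(4, 2), K, dist)
            if ok:
                poses.append((rvec, tvec))  # render 2D/3D content at this pose
    return poses
```

The rvec/tvec pair returned for each marker is precisely the "anchor" described above: the rendering engine places the 2D/3D learning content at that pose so it stays registered to the flashcard or book page as the camera moves.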
5 Conclusion

MAR technology increases learning outcomes by using integrated augmented visualizations as learning guidance. It is popular because of its ease of use, affordances, and portability. This paper discussed the use of MAR for learning phonetics. A total of 65 articles published between 2012 and 2022 were reviewed; approximately two-thirds were published after 2015, and half were retrieved from the Google Scholar database. The findings revealed that the most taught foreign language is English, with 31 articles. In addition, MAR has been explored for other languages such as Arabic, Chinese, and Japanese, yet it is still not used for learning common languages such as German, French, and Italian. The preferred development tools were Unity and the Vuforia SDK. Vision-based tracking, especially the marker-based type, is used in all MAR applications for LP, while touch and click are the most used interaction models. Although MAR technology has great benefits, it still suffers from limitations such as instability of marker tracking, inflexibility in updating AR content, and the inability to correct learners' pronunciation as a language teacher would in real life. All these limitations are points of improvement for future research in this field.

References

Abd Majid, N., Yunus, F., Arshad, H., Johari, M.: Development framework based on mobile Augmented Reality for pre-literacy kit. World Acad. Sci. Eng. Technol. Int. J. Educ. Pedagogical Sci. 10(8) (2016)
Ali, S., Azmi, N.: Augmented Reality in learning Malay language. In: 2nd International Conference on Applied Engineering (ICAE) (2019)
Anas, N., Mahayuddin, Z.: Aiding autistic children learn Arabic through developing an engaging user-friendly android app. IIOABJ 8, 87–90 (2017)
Antkowiak, D., Kohlschein, C., Krooß, R., Speicher, M.: Language therapy of Aphasia supported by Augmented Reality applications. In: IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom) (2016)
Arunsirot, N.: Implementing the Augmented Reality technology to enhance English pronunciation of Thai EFL students. KKU Res. J. Humanit. Soc. Sci. (Grad. Stud.) 8(3) (2020)
Beder, P.: Language learning via an android Augmented Reality system. Master thesis, School of Computing, Blekinge Institute of Technology (2012)
Bhatt, Z., Bibi, M., Shabbir, N.: Augmented Reality based multimedia learning for Dyslexic Children. In: 3rd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET) (2020)
Bojórquez, E., Villegas, O., Sánchez, V.: Study on mobile Augmented Reality adoption for Mayo language learning. Mob. Inf. Syst. 2016(2) (2016). Hindawi Publishing Corporation
Booton, S., Hodgkiss, A., Murphy, V.: The impact of mobile application features on children's language and literacy learning: a systematic review. Comput. Assist. Lang. Learn. 1–30 (2021)
Bruhn, T.: Enhancing knowledge-transfer for digital exhibitions via Augmented Reality. Bachelor thesis, TU Dresden, Faculty of Computer Science, Institute of Software and Multimedia Technology (2018)
Chatzopoulos, D., Bermejo, C., Huang, Z., Hui, P.: Mobile Augmented Reality survey: from where we are to where we go. IEEE Access 5, 6917–6950 (2017)
Chen, I.: The application of Augmented Reality in English phonics learning performance of ESL young learners.
In: 1st International Cognitive Cities Conference (IC3) (2018)
Dalim, C., Sunar, M.: Using Augmented Reality with speech input for non-native children's language learning. Int. J. Hum.-Comput. Stud. 134, 44–64 (2019)
Daniel, G., Moreira, M., Mesquita, R.: InglesAR: a collaborative Augmented Reality game for English practicing. In: Proceedings of SBGames, Brazil (2020)
Daud, W., Ghani, M., Rahman, A.: ARabic-Kafa: design and development of educational material for Arabic vocabulary with Augmented Reality technology. J. Lang. Linguist. Stud. 7(4), 1760–1772 (2021)
Engwall, O.: Analysis of and feedback on phonetic features in pronunciation training with a virtual teacher. Comput. Assist. Lang. Learn. 25(1), 37–64 (2012)
Fan, M., Antle, A., Warren, J.: Augmented Reality for early language learning: a systematic review of Augmented Reality application design, instructional strategies, and evaluation outcomes. J. Educ. Comput. Res. 58(6), 1059–1100 (2020)
Fan, M., Sarker, S., Antle, A.: From tangible to augmented: designing a PhonoBlocks reading system using everyday technologies. In: CHI, Canada (2018)
Florentin, A.: A Foreign language learning application using mobile Augmented Reality. Informatica Economică 20(4), 76–87 (2016)
Fung, K., Wan, W.: Augmented Reality and 3D model for children Chinese character recognition: Hong Kong primary school education. In: 27th International Conference on Computers in Education (ICCE 2019), vol. 1, pp. 673–678 (2019)
Hadid, A., Mannion, P., Khoshnevisan, B.: Augmented Reality to the rescue of language learners. Florida J. Educ. Res. 57(2), 81–89 (2019)
Hasbi, M., Tolle, H., Supianto, A.: The development of Augmented Reality educational media using think-pair-share learning model for studying Buginese language. J. Inf. Technol. Comput. Sci. 5(1), 38–56 (2020)
Hashim, N., Majid, N., Arshad, H.: Mobile Augmented Reality application for early Arabic language education: ARabic. In: 8th International Conference on Information Technology (ICIT) (2017)
He, J., Ren, J., Zhu, G.: Mobile-based AR application helps to promote EFL children's vocabulary study. In: IEEE 14th International Conference on Advanced Learning Technology (2014)
Hossain, M., Barman, S., Haque, A.: Augmented Reality for education; AR children's book. In: IEEE Region 10 Conference (TENCON 2019) (2019)
Jalaluddin, I., Ismail, L., Darmi, R.: Developing vocabulary knowledge among low achievers: mobile Augmented Reality (MAR) practicality. Int. J. Inf. Educ. Technol. 10(11), 813–819 (2020)
Jumi, J.: Augmented Reality courseware for English vocabulary pronunciation using phonic reading technique. Bachelor thesis, Faculty of Computer and Mathematical Sciences, Universiti Teknologi Mara (2018)
Karacan, C.: Educational Augmented Reality technology for language learning and teaching: a comprehensive review. Int. J. Educ. 9(2), 68–79 (2021)
Khaled, H., Mohamed, Y., Khalifa, A.: An interactive Augmented Reality alphabet 3-dimensional pop-up book for learning and recognizing the English alphabet. Bachelor of Technology (Hons), Information Communication Technology, Universiti Teknologi Petronas Bandar Seri Iskandar (2013)
Khan, D., Rehman, I., Ullah, S.: Low-cost interactive writing board for primary education using distinct Augmented Reality markers.
Sustainability 11, 5720 (2019)
Khatoony, S.: Exploring Iranian and Turkish faculty members' views toward using Augmented Reality English Trainer (ARET) application for educational purposes: a comparative study in different university faculties. AJELP: Asian J. English Lang. Pedagogy 9(2), 128–152 (2021)
Küçük, S., Yılmaz, R., Göktaş, Y.: Augmented Reality for learning English: achievement, attitude and cognitive load levels of students. Educ. Sci. 39(176), 393–404 (2014)
Mahayuddin, Z., Mamat, N.: Implementing Augmented Reality (AR) on phonics-based literacy among children with autism. Int. J. Adv. Sci. Eng. Inf. Technol. 9(6), 2176 (2019)
Majid, S., Salam, A.: A systematic review of Augmented Reality applications in language learning. Int. J. Emerg. Technol. Learn. (iJET) 16, 18 (2021)
Martínez, A., Benito, R., González, A.: An experience of the application of Augmented Reality to learn English in infant education. IEEE (2017)
Martínez, A., López, A., Benito, I.: Leihoa: a window to Augmented Reality in early childhood education. In: International Symposium on Computers in Education (SIIE) (2016)
Mei, B.: Using Clips in the language classroom. RELC J. 53 (2021)
Munshi, A., Aljojo, N.: Examining subjective involvement on Arabic alphabet Augmented Reality application. Rom. J. Inf. Technol. Automatic Control 30(3), 107–118 (2020)
Nasution, A., Rizki, Y., Nasution, S., Muhammad, R.: An Augmented Reality machine translation agent. In: Proceedings of 2nd International Conference on Science, Engineering and Technology, pp. 163–168 (2019)
Nizam, S., Abidin, R., Hashim, N.: A review of multimodal interaction technique in Augmented Reality environment. Int. J. Adv. Sci. Eng. Inf. Technol. 8(4–2), 1460–1469 (2018)
Nugraha, I., Suminar, A., Octaviana, D., Hidayat, M., Ismail, A.: The application of Augmented Reality in learning English phonetics. J. Phys. Conf. Ser. 1402, 077024 (2019)
Opu, M., Islam, M., Kabir, M.: Learn2Write: Augmented Reality and machine learning-based mobile app to learn writing (2021). https://doi.org/10.3390/computers11010004
Perry, B.: Gamified mobile collaborative location-based language learning. Front. Educ. J. 6 (2021)
Piatykop, O., Pronina, O., Tymofieieva, I., Palii, I.: Using Augmented Reality for early literacy. In: 9th Illia O. Teplytskyi Workshop on Computer Simulation in Education, Ukraine (2021)
Plecher, D., Eichhorn, C., Kindl, J.: Dragon Tale – a serious game for learning Japanese Kanji. In: CHI PLAY 2018 Extended Abstracts, VIC, Australia (2018)
Poompimol, P.: The development and implementation of Augmented Reality materials through mobile applications to improve the English pronunciation proficiency of Prathom 1 students in Nakhon Nayok Province. Master thesis, Language Institute, Thammasat University (2017)
Rozi, I., Larasati, E., Lestari, V.: Developing vocabulary card base on Augmented Reality (AR) for learning English. IOP Conf. Ser. Mater. Sci. Eng. ATASEC 1073, 012061 (2020)
Shaltout, E., Fifi, A., Amin, K.: Augmented Reality based learning environment for children with special needs. In: 15th International Conference on Computer Engineering and Systems (ICCES) (2020)
Sidi, J., Yee, L., Chai, W.: Interactive English phonics learning for kindergarten consonant-vowel-consonant (CVC) word using Augmented Reality. J. Telecommun. Electron. Comput. Eng.
9(3–11), 85–91 (2017)
Singh, G., Mantri, A.: Ubiquitous hybrid tracking techniques for Augmented Reality applications. In: 2nd International Conference on Recent Advances in Engineering & Computational Sciences (RAECS), pp. 1–5 (2015)
Sirat, N., Othman, M., Ayub, A.: Comparison of different student categories using Augmented Reality and conventional methods. Evol. Electr. Electron. Eng. 2(2), 791–796 (2021)
Sorrentino, F., Spano, L., Scateni, R.: Speaky notes: learn languages with Augmented Reality. In: Conference on Interactive Mobile Communication Technologies and Learning (IMCL), pp. 146–150 (2015)
Tang, W., Young, S.: Maparin: creating a friendly and adaptable learning scenario for foreign students in Taiwan. In: 22nd International Conference on Computers in Education, pp. 442–450 (2014)
Thongchum, K., Charoenpit, S.: Conceptual design of Kanji mobile application with Augmented Reality technology for beginner. In: 5th International Conference on Business and Industrial Research (ICBIR) (2018)
Tsai, C.: The effects of Augmented Reality to motivation and performance in EFL vocabulary learning. Int. J. Instruct. 13(4), 987–1000 (2020)
Ulfah, S., Ramdania, D., Fatoni, U.: Augmented Reality using Natural Feature Tracking (NFT) method for learning media of makharijul huruf. IOP Conf. Ser. Mater. Sci. Eng. 874, 012019 (2020)
Welbeck, A.: Teachers' perceptions on using Augmented Reality for language learning in primary years program (PYP) education. Int. J. Emerg. Technol. Learn. (iJET) 15(12), 116–135 (2020)
Wen, Y.: An augmented paper game with socio-cognitive support. IEEE Trans. Learn. Technol. 13(2), 259–268 (2020)
Wook, M., Haggerty, N., Whaley, A.: Effects of video modelling using an Augmented Reality iPad application on phonics performance of students who struggle with reading. Read. Writ. Q. (2020). https://doi.org/10.1080/10573569
Wu, M.: The applications and effects of learning English through Augmented Reality: a case study of Pokémon Go. Comput. Assist. Lang. Learn. 34, 778–812 (2019)
Wulan, N., Rahma, R.: Augmented Reality-based multimedia in early reading learning: introduction of ICT to children. J. Phys. Conf. Ser. 1477, 042071 (2020)
Yilmaz, R., Topu, F., Tulgar, A.: An examination of vocabulary learning and retention levels of pre-school children using Augmented Reality technology in English language learning. Educ. Inf. Technol. 27, 6989–7017 (2022)
Yu, J., Fang, L., Lu, C.: Key technology and application research on mobile Augmented Reality. In: 7th IEEE International Conference on Software Engineering and Service Science (ICSESS), pp. 547–550 (2016)
Zaman, H.: Augmented Reality technology in helping down syndrome learner in basic reading. In: Proceedings of the 4th Mexican Conference on Human-Computer Interaction (MexIHC 2012), pp. 160–176 (2012)
Zhang, L., Cheng, M., Shi, Y., Li, H., Xue, Y.: Application and practice of Augmented Reality technology in the design of K-12 education-assisted products. In: International Conference on Computers, Information Processing and Advanced Education (CIPAE) (2020)

Mooting in Virtual Reality: An Exploratory Study Using Experiential Learning Theory

Justin Cho, Timothy Jung, Kryss Macleod, and Alasdair Swenson
Faculty of Business and Law, Manchester Metropolitan University, Manchester, UK
[email protected]

Abstract.
Mooting, also known as a mock trial, is a form of simulated learning that is frequently used in legal education to teach practical skills. With the increasing advancement of technologies, educators have begun to incorporate digital elements into mooting in order to enhance its educational capabilities. In other areas of education, the use of virtual reality technology to facilitate simulated learning has been shown to improve students' learning. Using experiential learning theory as a foundation, this paper investigates the effects of virtual reality in the context of legal education and proposes a theoretical framework to guide the successful implementation of this novel technology specifically for mooting. Semi-structured interviews were conducted at a UK university. Findings show three main groups of themes that are relevant when implementing virtual reality in mooting: effectiveness, suitability, and emotions.

Keywords: Virtual Reality · Mooting · Education · Experiential learning · Simulation

1 Introduction

The proliferation of novel technologies has introduced new and innovative ways of teaching (Yeh and Wan 2016; Fealy et al. 2019). Their advanced capabilities can drastically enhance the engagement and motivation of students in a learning context (Kavanagh et al. 2017). This study focuses on virtual reality (VR), a virtual environment in which users are fully immersed (Parong and Mayer 2018). Unlike augmented reality, where virtual images are superimposed onto the real world (Azuma 1997), VR immerses the user in a computer-generated virtual world in which the user can interact with others and the environment through a virtual avatar (Carmigniani et al. 2011). The use of VR has increased in many areas of education, such as healthcare and engineering, due to its educational benefits (Kavanagh et al. 2017). However, research on the use of VR in legal education is rare (Thanaraj 2016). This study utilised a mock trial VR application at a UK university to investigate the potential effects of VR in legal education and proposed a theoretical framework built upon Kolb's (1984) experiential learning theory to effectively facilitate future implementations of VR technology into legal education curricula.

2 Literature Review

Simulated learning is a teaching method used in legal education to teach practical skills (Newbery-Jones 2015). The experiential and practical nature of simulation makes it ideal for teaching professional and legal skills that cannot be efficiently taught in the classroom (Philips 2012). Simulation has been found to enhance law-related practical skills such as advocacy and legal reasoning (Daly and Higgins 2011; Knerr et al. 2001), but it has also been found to enhance general soft skills such as teamwork, oral communication, and resilience (Parsons 2017; Apel 2017). Examples of legal simulation exercises include mock trials (or mooting), negotiation, mediation, and case studies (Daly and Higgins 2011; Boyne 2012; Byrnes and Lawrence 2016; Knerr et al. 2001; Waters 2016). However, studies have found that a lack of realism in the simulations leads to reduced engagement by students, ultimately resulting in a hindered learning experience (Waters 2016).
In response, educators have started to use technology to supplement conventional forms of simulation (Newbery-Jones 2015). Generally, technology has been found to increase students' motivation and engagement, helping them engage with the learning experience (Maharg and Nicol 2014). Meanwhile, the use of VR, a novel technology that immerses the user in a virtual environment, has grown substantially in other areas of education such as healthcare and engineering (Kavanagh et al. 2017). In addition to its interactive nature, VR's stimulating audio and visual effects have been found to greatly increase immersion and engagement, significantly enhancing the learning experience (Pantelidis 2009; Yeh and Wan 2016; Parong and Mayer 2018). Furthermore, these qualities of VR allow for more flexible design of learning environments to accommodate the specific learning objectives at hand (Pantelidis 2009; Sala 2016). In addition to providing a sensory-rich and situated context, VR environments also provide a safe means to practice, allowing students to learn without having to bear the consequences of any mistakes they might make (Aczel 2017; Sala 2016). Despite these benefits, research on the use of VR in the context of legal education is scarce (Thanaraj 2016). Therefore, this study aimed to explore the effects of VR in the context of legal education.

Although there is much evidence supporting the use of technology, research has also found that technology should only be used when its capabilities align with the learning objectives (Newbery-Jones 2015). Indeed, simply using technologies as novel mediums that do not fit into existing curricula is ineffective (Resnick 2002). In these scenarios, some studies even found that the novelty of certain technologies took away from the learning experience by distracting the student from the content (Newbery-Jones 2015; Elphick 2018). Therefore, in order to facilitate a learning-centred approach, it was decided that the VR application and the study overall should be designed around a theoretical framework.

As this study explores the use of VR in an educational context, learning theory was used as the theoretical foundation. Various learning theories have been used in the legal simulation and educational VR literature (Kavanagh et al. 2017), such as experiential learning, situated learning, and collaborative learning. To choose the most suitable learning theory for the study, the research context was considered. Reflection and contextualization of abstract legal concepts are key to helping students make the most of legal simulations (Casey 2014; Newbery-Jones 2015). Kolb's (1984) experiential learning cycle is a four-stage cycle that portrays learning as beginning with a concrete experience, followed by reflective observation, abstract conceptualisation, and lastly, active experimentation. This theory was chosen due to its emphasis upon reflection and conceptualisation, fitting the current context. Having chosen Kolb's (1984) theory, a review was conducted to identify themes related to the theory in the context of educational virtual reality. Themes related to each stage of the cycle were gathered and aggregated into a theoretical model portraying Kolb's theory in educational VR.
In order to propose a pilot theoretical framework that would facilitate the effective implementation of VR in legal simulation, this study aimed to extend the theoretical model of Kolb's (1984) theory in educational VR to the context of legal education by identifying new themes relevant to the specific law context.

Fig. 1. Theoretical model of VR in general education based on Kolb's theory

3 Methodology

A single case study at a UK university was conducted. This university was chosen because it has an embedded mock trial module in its law course syllabus, providing an easily accessible pool of suitable participants. A VR mock trial application, the mock trial being the most common form of legal simulation, was designed for students to play through. Semi-structured interviews were then conducted with 5 participants, all currently studying law and with previous experience in mock trials. Originally, the study had planned to collect data until data saturation was reached (Glaser and Strauss 1967); however, due to COVID-19, data collection was restricted to 5 participants. Purposive sampling was used to choose a sample that would provide rich data for the exploratory study, also in view of the sample restrictions caused by COVID-19.

The VR mock trial application was designed using Mozilla Hubs, an online tool for creating virtual environments. Regarding the visual fidelity of the environment, studies have shown that higher visual fidelity in virtual reality training environments positively impacts participants' task performance and the transferability of skills to the real world (Allen et al. 2011; Ragan et al. 2015). With this in mind, the VR environment was designed to visually replicate an actual courtroom as much as possible. The entire experience was recorded and shown to the participants after the trial.

The interviews focused on achieving two main objectives: (1) exploring the effect of VR on the learning experience; and (2) exploring the suitability of VR technology for legal simulation. Objective 1 was formulated to address the main aim: to develop a theoretical framework portraying the effects of VR in legal education. Objective 2, on the other hand, was formulated to ensure that VR as a technology is suitable for facilitating the learning objectives of legal simulation because, as discussed previously, the use of new technologies must align with the content in order to effectively enhance the learning experience. To achieve objective 1, the four stages of Kolb's (1984) cycle were used to map out the learning experience, and participants were asked to provide their thoughts on how VR impacted their learning during each stage. This helped to identify new themes relevant to the legal education context, leading to the extension of Kolb's (1984) theory in educational VR to the legal context. To achieve objective 2, participants were asked to reflect upon the role that VR played at each stage of the cycle and to compare VR mooting with non-VR mooting. Participants were also asked to comment on the feasibility of using VR to facilitate mooting. This allowed for the further identification of themes relevant to the legal education context, focusing more on the technological aspects of the experience.

4 Findings

Objective 1 aimed to explore the effects of VR on the learning experience.
Four main themes arose: knowledge/skills, reflection, learner-centred learning, and lack of feedback. Students generally found that, in terms of knowledge acquisition and putting knowledge into practice, there was not much difference from traditional mooting. However, students found that the experience helped them improve practical skills such as oral communication, advocacy and, more generally, confidence. Participants also noted that being able to reflect upon their performance allowed them to learn from their mistakes and plan for future improvements. The experience was found to be learner-centred and active, engaging participants and making the learning enjoyable. Participants also noted that being able to learn at their own pace relieved much of the pressure of traditional mooting. Lastly, participants commented that although the reflection process helped them identify potential improvements, adding a feedback mechanism from a tutor might make the experience more useful.

Objective 2 explored the suitability of VR technology for legal simulation. Three themes were identified: ease of access, ease of using VR, and clear guidance. Being able to use the VR system at any location and at any time made it convenient to use, increasing access to training opportunities. Furthermore, participants stated that being able to practice mooting on one's own or online with friends made it much easier to arrange moots, also increasing accessibility. Participants generally found that the VR technology itself was not too difficult to use, on average taking around 5 to 10 min to get used to the controls; however, some participants noted that an unstable internet connection slightly hindered their experience. Lastly, the clear guidance of the VR functions helped participants navigate through the experience with ease, allowing them to concentrate on their learning. Unlike traditional moots, having a clearly visible timer and prompts also allowed participants to practice new aspects of their presentation, such as timing and speech speed.

In addition to the seven themes mentioned above, four other themes emerged linked to emotions: sense of presence, authenticity, motivation/engagement, and enjoyment. Participants noted that the sense of being in a courtroom made the experience more realistic and that this positively contributed to their learning. The realism of the experience and the visual effects of the environment added to the authenticity of the scenario, increasing the engagement and excitement of the participants. Some stated that VR mooting felt more realistic than traditional mooting. Furthermore, the novel and exciting method of learning motivated students to learn and engage, especially compared to other forms of online learning such as Zoom classes. Lastly, the combination of a safe and comfortable environment, being able to learn at their own pace, learning within a realistic environment, and the novelty of VR all contributed to making the VR experience very enjoyable.

The themes identified in the findings were compared to those identified in the literature (Fig. 1). Emergent themes in the findings that were not identified in the literature were added, and a new theoretical framework portraying the effects of VR in legal education was proposed (Fig. 2).

Fig. 2.
Proposed theoretical framework portraying VR in legal education based on Kolb's theory

5 Conclusions

Theoretical evidence supporting the potential benefits of VR in legal simulation was identified in the literature. Using Kolb's (1984) learning theory as a theoretical foundation, the theoretical model portraying the effects of VR in general education was extended to the legal education context through the results of the study. A new theoretical framework has been proposed by identifying themes relevant to the specific law context. This proposed framework can inform future research on the implementation of VR in legal education, as well as the wider literature on immersive technology. This study has also provided empirical evidence of the effectiveness of VR in legal simulation for stakeholders such as law firms and universities.

A limitation of this study is the small sample size. Future research should utilise a larger sample across multiple universities to increase accuracy and generalisability. Another limitation is the visual fidelity of the environment. As shown in the literature, higher visual fidelity of virtual environments leads to higher task performance (Ragan et al. 2015). Although a great effort was made to replicate real courtrooms as much as possible, the tools used were very limited. Future research should make use of highly realistic virtual environments and further investigate the impact of visual fidelity on task performance in the context of virtual legal simulation environments. Furthermore, this study identified a new and unexpected branch of themes important to the learning experience: the role of emotions. This should be investigated further in future studies.

References

Allen, J.A., Hays, R.T., Buffardi, L.C.: Maintenance training simulator fidelity and individual differences in transfer of training. Human Factors J. Human Factors Ergonomics Soc. 28(5), 497–509 (1986)
Aczel, P.: Virtual reality and education – world of teachcraft? Perspect. Innov. Econ. Bus. 17(1), 6–22 (2017)
Apel, S.B.: No more casebooks: using simulation-based learning to educate future family law practitioners. Fam. Court. Rev. 49(4), 700–710 (2017)
Azuma, R.T.: A survey of augmented reality. Presence Teleoper. Virtual Environ. 6(4), 355–385 (1997). https://doi.org/10.1162/pres.1997.6.4.355
Boyne, S.: Crisis in the classroom: using simulations to enhance decision-making skills. J. Leg. Educ. 62(2), 311–322 (2012)
Byrnes, R., Lawrence, P.: Bringing diplomacy into the classroom: stimulating student engagement through a simulated treaty negotiation. Leg. Educ. Rev. 26(1&2), 19–46 (2016)
Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., Ivkovic, M.: Augmented reality technologies, systems, and applications. Multimedia Tools Appl. 51(1), 341–377 (2011). https://doi.org/10.1007/s11042-010-0660-6
Casey, T.: Reflective practice in legal education: the stages of reflection. Clin. Law Rev. 20(2), 317–354 (2014)
Daly, Y.M., Higgins, N.: The place and efficacy of simulations in legal education: a preliminary examination. All Ireland J. High. Educ. 3(2), 1–20 (2011)
Edwards, B.L., Bielawski, K.S., Prada, R., Cheok, A.D.: Haptic virtual reality and immersive learning for enhanced organic chemistry instruction.
Virtual Reality 23(4), 363–373 (2019)
Elphick, L.: Adapting law lectures to maximise student engagement: is it time to transform? Leg. Educ. Rev. 28(1), 1–25 (2018)
Fealy, S., et al.: The integration of immersive virtual reality in tertiary nursing and midwifery education: a scoping review. Nurse Educ. Today 79, 14–19 (2019)
Fromm, J., Radianti, J., Wehking, C., Stieglitz, S., Majchrzak, T.A., vom Brocke, J.: More than experience? On the unique opportunities of virtual reality to afford a holistic experiential learning cycle. Internet High. Educ. 50, 100804 (2021)
Glaser, B., Strauss, A.: The Discovery of Grounded Theory: Strategies of Qualitative Research. Wiedenfeld and Nicholson, London (1967)
Jarmon, L., Traphagan, T., Mayrath, M., Trivedi, A.: Virtual world teaching, experiential learning, and assessment: an interdisciplinary communication course in second life. Comput. Educ. 53(1), 169–182 (2009)
Kavanagh, S., Luxton-Reilly, A., Wuensche, B., Plimmer, B.: A systematic review of virtual reality in education. Themes Sci. Technol. Educ. 10(2), 85–119 (2017)
Knerr, C.R., Sommerman, A.S., Rogers, S.K.: Undergraduate appellate simulation in American colleges. J. Legal Stud. Educ. 19, 27–62 (2001)
Koivisto, J., Niemi, H., Multisilta, J., Eriksson, E.: Nursing students' experiential learning processes using an online 3D simulation game. Educ. Inf. Technol. 22(1), 383–398 (2017)
Kolb, D.: Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall, Englewood Cliffs (1984)
Maharg, P., Nicol, E.: Simulation and technology in legal education: a systematic review and future research programme. In: Strevens, C., Grimes, R., Phillips, E. (eds.) Legal Education: Simulation in Theory and Practice, Farnham, UK (2014)
Newbery-Jones, C.: Trying to do the right thing: experiential learning e-learning employability skills in modern legal education. Eur. J. Law Technol. 6(1), 1–26 (2015)
Pantelidis, V.S.: Reasons to use virtual reality in education and training courses and a model to determine when to use virtual reality. Themes Sci. Technol. Educ. 2(1&2), 59–70 (2009)
Parong, J., Mayer, R.E.: Learning science in immersive virtual reality. J. Educ. Psychol. 110(6), 785–797 (2018)
Parsons, L.: Competitive mooting as clinical legal education: can real benefits be derived from an unreal experience? Aust. J. Clin. Educ. 1(1), 1–22 (2017)
Philips, E.: Law games – role play and simulation in teaching legal application and practical skills: a case study. J. Learn. Teach. 3(5), 1–4 (2012)
Ragan, E.D., Bowman, D.A., Kopper, R., Stinson, C., Scerbo, S., McMahan, R.P.: Effects of field of view and visual realism on virtual reality training effectiveness for a visual scanning task. IEEE Trans. Visual Comput. Graphics 21(7), 794–807 (2015)
Resnick, M.: Rethinking learning in the digital age. In: Kirkman, G.S., Cornelius, P.K., Sachs, J.D., Schwab, K. (eds.) The Global Information Technology Report 2001–2002. OUP, Oxford (2002)
Sala, N.M.: Virtual reality and education: overview across different disciplines. In: Choi, D.W., Dailey-Hebert, A., Estes, J.S. (eds.) Emerging Tools and Applications of Virtual Reality in Education. IGI Global, ProQuest Ebook Central (2016).
http://ebookcentral.proquest.com/lib/mmu/detail.action?docID=4448077
Thanaraj, A.: Evaluating the potential of virtual simulations to facilitate professional learning in law: a literature review. World J. Educ. 6(6), 89–100 (2016)
Waters, B.: A part to play: the value of role-play simulation in undergraduate legal education. Law Teach. 50(2), 172–194 (2016)
Yeh, E., Wan, G.: The use of virtual worlds in Foreign language teaching and learning. In: Choi, D.W., Dailey-Hebert, A., Estes, J.S. (eds.) Emerging Tools and Applications of Virtual Reality in Education. IGI Global, ProQuest Ebook Central (2016). http://ebookcentral.proquest.com/lib/mmu/detail.action?docID=4448077
Yin, C., Song, Y., Tabata, Y., Ogata, H., Hwang, G.J.: Developing and implementing a framework of participatory simulation for mobile learning using scaffolding. Educ. Technol. Soc. 16(2), 137–150 (2013)

If You Believe, You Will Receive: VR Interview Training for Pre-employment

Anthony Kong1, Ray Tak-Yin Hui2, and Jeff Kai-Tai Tang3
1 School of Design, The Hong Kong Polytechnic University, Hong Kong, China
[email protected]
2 NUCB Business School, Nagoya University of Commerce and Business, Nagoya, Japan
[email protected]
3 Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
[email protected]

Abstract. The Covid-19 pandemic has accelerated the growing use of augmented and virtual reality in various industries, especially in the education sector. It is worth studying whether VR training benefits technology-accepting learners, i.e., whether "if you believe, you will receive" applies to VR training. In this work, the researchers developed an immersive VR interview room system that allows pre-employment learners to practise in a simulated environment. Pre-recorded interviewer questions are played so that learners get a taste of a realistic interview. We investigate the relationship between learners' perceived usefulness and interview self-efficacy in VR training for human resources management. The experimental results show that the two are positively correlated.

Keywords: VR interview · Human resource · Structural equation modelling · Technology acceptance model · Self-regulatory emotions

1 Introduction

The Covid-19 pandemic has accelerated the digitalisation of human resource management (HRM); companies are actively incorporating digital technologies into the recruitment process (Elena and Kristina 2021). Applications of augmented and virtual reality in HRM and development are increasing; however, there is little empirical research on virtual reality (VR) training in human resources management (Ferreira et al. 2021). This study aims to investigate the relationship between learners' perceived usefulness and interview self-efficacy in VR training. The study contributes not only to HRM technology for industry but also to the future use of educational technology in teaching and learning in academia.

2 Literature Review

Virtual Reality (VR) is an emerging technology that enables people to immerse themselves in and experience a rendered virtual space (Jaynes et al. 2003). Recently, people commonly
call this VR world the "Metaverse", a term popularised after the social media company Facebook rebranded itself as Meta and introduced its "Horizon" platform, which supports multiple users interacting and collaborating in the VR environment via their avatars (Constine 2019). VR entertainment and gaming applications are becoming more common. VR gamification can support interactive dance learning with motion capture (Chan et al. 2010). Wong, Kong and Hui (2017) also found that learners with different levels of openness to experience respond differently to the VR training environment, leading to disparate impacts on learning effectiveness. With muscle sensors, a VR game system can determine whether players are exercising with enough strength, and can thus encourage them to play harder to increase their exercise level (Tang et al. 2016). In schools and higher education institutes, many libraries have started to provide VR services (Suen et al. 2020). Oh and Kong (2021) show that VR creates emotional connection, enhanced presence and deeper immersion. Li et al. (2019) conducted a pilot study to empirically evaluate the effectiveness of VR technology in enhancing social work students' perceived creativity and their competence in working with offenders. The study shows a positive change in self-perceived confidence in handling offenders following the VR training session, which gives us insights into the use of VR technology for improving job interview skills.

2.1 Perceived Usefulness

Karahanna and Straub (1999) pointed out the role of the perceived usefulness and perceived ease-of-use constructs in the technology acceptance model (TAM). Perceived ease of use is positively correlated with perceived usefulness.

2.2 Interview Self-efficacy

Self-efficacy is widely treated as the primary cognitive construct determining learning behavior and performance in virtual settings (e.g., Chen and Hsu 2020; Ding et al. 2020; Francis et al. 2020). Drawing from Bandura's (1986, 1997) social cognitive theory, our study explores how students' perception of the usefulness of VR training affects their personal judgments of their interviewing capabilities, or interview self-efficacy (Petruzziello et al. 2022). We propose that students' perceived usefulness of VR training, serving as a proxy for enactive mastery experience, theorized as the most influential source of efficacy information (Bandura 1986, 1997), may enhance their interview self-efficacy, which is linked to job interview success (Tay et al. 2006).

H1: Perceived usefulness of VR training is positively related to interview self-efficacy.

2.3 Self-regulatory Emotions

Emotions serve as self-regulators, playing important mediating roles in students' learning processes (e.g., Liu et al. 2021; Othman and Othman 2021), since they serve as another source of efficacy information (Bandura 1997). Higgins, Shah and Friedman (1997) stated that goal attainment aimed at promoting positive outcomes (i.e., promotion focus) underlies cheerfulness- and dejection-related emotions, while goal attainment aimed at preventing negative outcomes (i.e., prevention focus) underlies quiescence- and agitation-related emotions, thus leading to unequal impacts on individual perception and behaviour (e.g., Hui et al. 2017).
In this study, we explore the potentially disparate effects of different emotions on the relationship between perceived usefulness of VR training and interview self-efficacy. Extending social cognitive theory, we suggest that promotion-focused emotions (i.e., cheerfulness and dejection) will mediate the positive impacts of perceived usefulness on interview self-efficacy, while prevention-focused emotions (i.e., quiescence and agitation) will explain its potential negative impacts.

H2: (a) Cheerfulness, (b) dejection, (c) quiescence and (d) agitation disparately mediate the relationship between perceived usefulness of VR training and interview self-efficacy.

3 Method

In this study, there were 31 participants, all graduating students in postsecondary education (intending job seekers). A VR prototype was built for the experiment. The live-action interactive video content was recorded in side-by-side 360-degree video format to simulate an actual interview environment (Fig. 1). During the experiment, the participant (interviewee) wore a VR headset and answered 6 questions from the interviewers in the simulated environment. The whole interview process was recorded for further analysis. Two structured questionnaire surveys were administered, one pre-experiment and one post-experiment (Fig. 2).

Fig. 1. VR simulated environment
Fig. 2. The experiment flow

4 Measures

Perceived usefulness (PU) consists of six items, adapted from Davis (1989) and Davis et al. (1989) to reflect participants' perception of the VR training context (α = .87). For regulatory-focus emotion, a 16-item measure adapted from Higgins et al. (1997) was used, with 4 items each for cheerfulness, dejection, quiescence, and agitation (α = .92, .97, .79, and .84, respectively). Adapted from Tay et al. (2006), interview self-efficacy was measured with 5 items in which participants rated how confident they are to perform in an interview (α = .90). Control variables included age, gender, full-time and part-time work experience, interview self-efficacy before training (Tay et al. 2006), and perceived ease of use (Davis 1989; Davis et al. 1989).

5 Hypothesis Testing

All hypotheses were tested using structural equation modelling (SEM), controlling for six demographic variables. Our hypothesized model, which includes perceived usefulness, the four regulatory-focus emotions as mediators, and interview self-efficacy as the outcome, was a good fit for the data [χ2(11) = 29.31, p < .01; CFI = .84; NFI = .83; IFI = .89; RMSEA = .29 (90% CI = .16 to .42)]. Moreover, when the proposed mediators were removed, the overall fit deteriorated significantly (χ2diff(6) = 18.75, exceeding the critical value of 16.81 at df = 6, p < .01). The SEM results are shown in Fig. 3.

Fig. 3. Summary model of hypothesized relationships

For H1, the SEM result shows that the direct relationship between perceived usefulness of VR training and interview self-efficacy is not statistically significant (β = .14, ns); thus H1 is rejected. Although perceived usefulness does not directly affect interview self-efficacy, we further test the indirect effect via emotions (i.e., H2).
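For readers who want to reproduce this style of analysis, the sketch below shows two of the computations reported above in Python: Cronbach's alpha for a multi-item scale, and a percentile-bootstrap test of a single indirect (mediation) path. This is an illustrative sketch only, not the authors' code: the paper fits a full SEM, whereas the sketch tests one usefulness → cheerfulness → self-efficacy path with OLS regressions, and all column names are hypothetical.

```python
"""Illustrative sketch (not the authors' code): Cronbach's alpha and a
percentile-bootstrap test of one indirect path, e.g. perceived usefulness ->
cheerfulness -> interview self-efficacy. Column names are hypothetical."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per questionnaire item, one row per participant."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

def indirect_effect(df: pd.DataFrame) -> float:
    """a*b estimate: a = X->M path, b = M->Y path controlling for X."""
    a = smf.ols("cheerfulness ~ usefulness", data=df).fit().params["usefulness"]
    b = smf.ols("self_efficacy ~ cheerfulness + usefulness",
                data=df).fit().params["cheerfulness"]
    return a * b

def bootstrap_ci(df: pd.DataFrame, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap CI; mediation is supported if the CI excludes 0."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), len(df))      # resample participants
        draws.append(indirect_effect(df.iloc[idx]))
    return np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

With only 31 participants, a bootstrap of this kind is the usual small-sample alternative to the normal-theory (Sobel) test of an indirect effect.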
For H2, the SEM results show that perceived usefulness of VR training is positively related to cheerfulness (β = .90, p < .01) and quiescence (β = .87, p < .01), but negatively related to dejection (β = −1.54, p < .01) and agitation (β = −.61, p < .05). As expected, the results also show that promotion-focused emotions are positively related to interview self-efficacy (cheerfulness: β = .41, p < .05; dejection: β = .31, p < .05), while prevention-focused emotions are negatively related to interview self-efficacy (quiescence: β = −.49, p < .01; agitation: β = −.79, p < .05). The results show that perceived usefulness indirectly affects interview self-efficacy via the four specific emotions and, as expected, these four emotions mediate the perceived-usefulness-interview-self-efficacy relationship differently. Therefore, H2a, H2b, H2c and H2d were supported.

6 Conclusion

The results show that the effect of perceived usefulness is strongly mediated by cheerfulness, which is positively correlated with interview self-efficacy. In other words, "if you believe, you will receive" may apply to VR training. It is therefore suggested that VR training suits learners who accept the technology; otherwise, no matter how well the digital learning content is developed or how advanced the adopted technology is, it may not enhance training performance for learners who resist technology in learning. Future studies can be extended to explore more interactive modes of VR learning, such as virtual coaching (Hui et al. 2021) or face-to-face coaching (Hui and Sue-Chan 2018; Hui et al. 2013, 2019) assisted with augmented reality (AR).

References

Chen, Y.L., Hsu, C.C.: Self-regulated mobile game-based English learning in a virtual reality environment. Comput. Educ. 154, 103910 (2020). https://doi.org/10.1016/j.compedu.2020.103910
Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 13(3), 319–340 (1989)
Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: a comparison of two theoretical models. Manage. Sci. 35(8), 982–1003 (1989)
Ding, D., Brinkman, W.P., Neerincx, M.A.: Simulated thoughts in virtual reality for negotiation training enhance self-efficacy and knowledge. Int. J. Hum.-Comput. Stud. (2020). https://doi.org/10.1016/j.ijhcs.2020.102400
Elena, R.S., Kristina, S.P.: Personnel management innovations in the digital era: case of Russia in Covid-19 pandemic. Acad. Strateg. Manag. J. 20, 1–16 (2021)
Ferreira, P., Meirinhos, V., Rodrigues, A., Marques, A.: Virtual and augmented reality in human resource management and development: a systematic literature review. IBIMA Bus. Rev. 1–18 (2021)
Francis, E.R., Bernard, S., Nowak, M.L., Daniel, S., Bernard, J.A.: Operating room virtual reality immersion improves self-efficacy amongst preclinical physician assistant students. J. Surg. Educ. 77(4), 947–952 (2020)
He, Y., Chen, Q., Kitkuakul, S.: Regulatory focus and technology acceptance: perceived ease of use and usefulness as efficacy. Cogent Bus. Manage. 5(1), 1459006 (2018). https://doi.org/10.1080/23311975.2018.1459006
Higgins, E.T.: Promotion and prevention: regulatory focus as a motivational principle. Adv. Exp. Soc. Psychol.
30, 1–46 (1998)
Higgins, E.T., Shah, J., Friedman, R.: Emotional responses to goal attainment: strength of regulatory focus as moderator. J. Pers. Soc. Psychol. 72(3), 515–525 (1997)
Hui, R.T.Y., Law, K.K., Lau, S.C.P.: Online or offline? Coaching media as mediator of the relationship between coaching style and employee work-related outcomes. Aust. J. Manag. 46(2), 326–345 (2021)
Hui, R.T.Y., Lee, Y.K., Sue-Chan, C.: The interactive effects of coaching styles on students' self-regulatory emotions and academic performance in a peer-assisted learning scheme. In: 2017 IEEE 6th International Conference on Teaching, Assessment, and Learning for Engineering (TALE), pp. 167–174. IEEE, December 2017
Hui, R.T.Y., Sue-Chan, C.: Variations in coaching style and their impact on subordinates' work outcomes. J. Organ. Behav. 39(5), 663–679 (2018). https://doi.org/10.1002/job.2263
Hui, R.T.Y., Sue-Chan, C., Wood, R.E.: The contrasting effects of coaching style on task performance: the mediating roles of subjective task complexity and self-set goal. Hum. Resour. Dev. Q. 24(4), 429–458 (2013)
Hui, R.T.Y., Sue-Chan, C., Wood, R.E.: Performing versus adapting: how leader's coaching style matters in Hong Kong. Int. J. Hum. Resour. Manag. (2019). https://doi.org/10.1080/09585192.2019.1569547
Karahanna, E., Straub, D.: The psychological origins of perceived usefulness and ease-of-use. Inf. Manage. 35(4), 237–250 (1999)
Liu, X.X., Gong, S.Y., Zhang, H.P., Yu, Q.L., Zhou, Z.J.: Perceived teacher support and creative self-efficacy: the mediating roles of autonomous motivation and achievement emotions in Chinese junior high school students. Thinking Skills Creativity 39, 100752 (2021)
Oh, J., Kong, A.: VR and nostalgia: using animation in theme parks to enhance visitor engagement. J. Promot. Manage. 28, 113–127 (2021)
Othman, N.H., Othman, N.: Entrepreneurial emotions on start-up process behavior among university students. Iran. J. Manage. Stud. 14(4), 721–733 (2021)
Petruzziello, G., Chiesa, R., Guglielmi, D., van der Heijden, B.I., de Jong, J.P., Mariani, M.G.: The development and validation of a multi-dimensional job interview self-efficacy scale. Personality Individ. Differ. (2022). https://doi.org/10.1016/j.paid.2021.111221
Tay, C., Ang, S., Van Dyne, L.: Personality, biographical characteristics, and job interview success: a longitudinal study of the mediating effects of interviewing self-efficacy and the moderating effects of internal locus of causality. J. Appl. Psychol. 91(2), 446 (2006)
Wong, E.Y.C., Kong, K.H., Hui, R.T.Y.: The influence of learners' openness to IT experience on the attitude and perceived learning effectiveness with virtual reality technologies. In: 2017 IEEE 6th International Conference on Teaching, Assessment, and Learning for Engineering (TALE), pp. 118–123. IEEE, December 2017

Models, Category and System

Foundational Models for Manipulation Activity Parsing

Daniel Beßler1, Robert Porzel2, Mihai Pomarlan3, and Michael Beetz1
1 Institute for Artificial Intelligence, Bremen University, Bremen, Germany
{danielb,beetz}@uni-bremen.de
2 Digital Media Lab, Bremen University, Bremen, Germany
[email protected]
3 Applied Linguistics Department, Bremen University, Bremen, Germany
[email protected]

Abstract.
Human demonstrations of everyday activities are an important resource to learn the particularities of the corresponding control strategies that are needed to perform such activities with ease and competence. However, such demonstrations need to be annotated such that time segments get associated with the appropriate actions. Previous research in psychology has shown that humans find contact and force events to be particularly significant when adapting control strategies during a task. Based on the psychologically motivated Flanagan model, we present a method to recognize activities from force dynamic events and states. For this, we incorporated the Flanagan model in an ontology, together with Allen's interval algebra to model temporal ordering constraints. We use the ontology to generate the grammar of an activity parser. Due to this parser creation method, the system can also be used as a verification tool for the ontology.

Keywords: Formal ontologies · Activity understanding · Robotics · Virtual reality

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. Jung et al. (Eds.): XR 2022, SPBE, pp. 115–121, 2023. https://doi.org/10.1007/978-3-031-25390-4_10

1 Introduction

Humans are extremely adept at accomplishing a variety of manipulation tasks very competently under an innumerable set of conditions. At the same time, we can recognize and understand what actions other agents are performing when we observe them, despite the large variability of actions with respect to both the selection and ordering of different motion phases as well as to the ensuing effects that these motions cause, e.g., in terms of force dynamic events that subsequently occur. In this work, we investigate an approach to parse such sequences of motions and their effects to yield the higher-level actions that caused them. For this, we employ different knowledge sources. We start with an action model postulated in human psychology (Flanagan et al. 2006), in which actions are decomposed into motion phases with different subgoals. These subgoals are force dynamic events that also generate distinctive sensory feedback in the nervous system. For example, a hand comes into physical contact with a cup of coffee before grasping it from the table. Furthermore, the cup loses contact with the supporting table surface when the agent performs a retracting motion after the coffee has been grasped. When human agents perform actions in reality or in a virtual reality environment, their intentions cannot be monitored directly. Monitoring force events, however, is feasible. Such events can, for example, be monitored in the physics engines of virtual worlds (Haidu et al. 2018), are visually observable by other agents (Fern et al. 2002; Siskind 2003), or can be monitored via ambient intelligence using sensoric materials and smart objects (Cook et al. 2009). As we will show in the work presented herein, the integration of models of motions and force events, tied foundationally to models of actions, states and qualities, enables new and robust reasoning approaches, i.e., the Manipulation Activity Parsing (MAP) system introduced below. Measuring performance on a task such as activity parsing, in turn, informs and evaluates the models used for it (Porzel and Malaka 2004).
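To make the force-event monitoring step concrete, the following is a minimal sketch in Python, assuming a generic physics engine that exposes per-frame sets of contact pairs; the function name, event vocabulary, and example values are illustrative and are not taken from the systems cited above.

```python
# Hypothetical sketch: deriving force-dynamic endpoint events from
# per-frame contact pairs reported by a physics engine.

def contact_endpoints(frames):
    """Yield (time, sign, process, pair) endpoints from frame snapshots.

    `frames` is an iterable of (time, set_of_contact_pairs);
    '-' marks the beginning and '+' the end of a Contact process.
    """
    previous = set()
    for time, current in frames:
        for pair in sorted(current - previous):
            yield (time, "-", "Contact", pair)   # contact begins
        for pair in sorted(previous - current):
            yield (time, "+", "Contact", pair)   # contact ends
        previous = current

# Example: a hand touches a cup for two frames, then releases it.
frames = [(0.0, set()),
          (0.1, {("hand", "cup")}),
          (0.2, {("hand", "cup")}),
          (0.3, set())]
for endpoint in contact_endpoints(frames):
    print(endpoint)
```

Such a stream of endpoint events is exactly the kind of observable, agent-independent input that the parsing approach described below consumes.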
The contributions of this work are:

• a foundational ontology of manipulation actions;
• a method to construct an activity parser from ontological specifications of actions in terms of the events that are expected to occur in them; and
• a procedure to check whether the action modeling is fit for the purpose of parsing, and to improve it if not.

2 A Human Cognition Inspired Model of Manipulation Activities

This section introduces our foundational ontology and adjoined models needed to parse sequences of events into action representations, as inspired by the Flanagan model. The ontology libraries used in our approach have been designed using the principles proposed by Masolo et al. and are created using the DOLCE + DnS Ultralite ontology (DUL) as an overarching foundational framework (Niles and Pease 2001; Masolo et al. 2003). In case an ontology is to be used solely for capturing and representing knowledge for a specific resource within a given community, and if the intended meaning of the terms used within the respective community is generally understood and agreed upon in advance by all its members, then little is to be gained by the employment of a foundational ontology, such as DOLCE, SUMO or DUL (Niles and Pease 2001; Masolo et al. 2003; Mascardi and Cordi 2010). If, however, an ontology is to be extended or re-used in different settings or even ported to new domains and applications, then a foundational ontology becomes indispensable, as previous attempts at building scalable knowledge models have shown (Gangemi and Mika 2003; Gangemi et al. 2003; Oberle et al. 2007). While these matters have been discussed before and are by now widely accepted in the ontology engineering community, a new level of complexity is added for the case of running cognition-enabled robotic platforms, virtual reality simulations and corresponding experiments.

Generally speaking, a foundational ontology constitutes an axiomatic theory about the high-level domain-independent categories in the real world (Guarino 1995), such as object, attribute, event, spatial and temporal connections and the like. The purpose of a foundational ontology is to act as a modelling basis for building individual domain ontologies, e.g., an ontology of actions and one for robotic agents. Equally important is that foundational ontologies provide ontology design patterns that prescribe best-practice modelling, avoid ontological idiosyncrasies, and save a substantial amount of modeling effort (Gangemi 2005).

Specific branches of the KnowRob knowledge model pertaining to everyday activities (Beetz et al. 2018), such as those involved in table setting, have consequently been aligned to the DUL framework. Specifically, a crucial foundational distinction is drawn between an Action and a Process. Any Action is an Event with at least one Agent that isParticipantIn it, and that executes a Task that isDefinedIn a Plan, Workflow or Project. A Process, however, is an Event that is not dependent on agents, tasks, and plans. For example, physical processes, such as melting, can be described without them being part of a plan executed by an agent. The same holds for motions and force-events, e.g., stones moving and meeting each other during an avalanche. Any intentional action, such as grasping, lifting, or placing, requires an agentive participant, but consists of a set of particular processes.
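This foundational distinction can be rendered as a description-logic axiom. The following is a hedged sketch of the constraint stated above, assuming DUL-style role names; the exact role signatures in the released ontology may differ:

\[
\mathit{Action} \sqsubseteq \mathit{Event} \sqcap \exists\,\mathit{hasParticipant}.\mathit{Agent} \sqcap \exists\,\mathit{executesTask}.\bigl(\mathit{Task} \sqcap \exists\,\mathit{isDefinedIn}.(\mathit{Plan} \sqcup \mathit{Workflow} \sqcup \mathit{Project})\bigr)
\]

No such subsumption holds for Process, which is why processes can be asserted without any agentive participant.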
We can, therefore, express that each individual action consists of a specific set of processes, such as motion phases or force events, that are inexorably tied to their respective actions. This modelling is crucial for the automatic generation of our parsing system. Additional axiomatization that is beyond the scope of description logics, e.g., identity constraints on the objects manipulated, will be integrated by means of the Distributed Ontology Language (Mossakowski 2016) using the appropriate first-order logic-based modelling frameworks. For the time being, such axiomatization is either expressed using SWRL rules (Horrocks et al. 2004) or via dedicated Prolog rules.

For expressing the sequential relationships between the processes that make up an action, we aligned Allen's interval calculus (Allen 1983) to DUL as further relations that hold between events. Additionally, we can employ the established pattern that Objects can be linked to a Quality that cannot exist without the object, e.g., colours, which are physically manifested in a Region, e.g., light frequency. This allows us to model the states of specific objects to have the quality of being openable/closable as well as the manifestation of being open or closed as the result of some force-event.

3 The Manipulation Activity Parser (MAP)

We exploit our action model to detect activities in streams of observed processes. Observed processes are force dynamics, states, and motions that are used to define actions in our model. We use temporal constraints in action definitions to build an Endpoint Sequence Graph (ESG) of constituent endpoints that represents their temporal ordering. Activity detection is implemented through a top-down depth-first parser that uses ESGs as grammar. An activity is detected whenever the parser can reach an ESG end node following only nodes that can be unified with the stream of observed event endpoints, which are used as tokens by the parser. This architecture is depicted in Fig. 1.

Fig. 1. MAP uses ESG grammars to detect activities in streams of observed force dynamics, motion, and state endpoints.

The parser, named MAP, builds a parse tree of detected action terms. We say that MAP parse trees are interpretations of observed event endpoints. Multiple interpretations may be provided by the parser. Interpretations are scored according to the number of observed endpoints they explain. We select the interpretation that explains the maximum number of endpoints. MAP parse trees also imply temporal constraints on action constituents, such as that a lifting motion starts when a grasped artefact loses contact with its supporting surface. The motion segmentation represented in MAP parse trees is only partial because endpoints of motions are often not observable, nor do they co-occur with force dynamics endpoints, which are observable by an external viewer.

Tokens processed by the MAP system are sequences of 4-ary terms ordered by time that encode: (1) the time instant at which the endpoint was observed; (2) an event symbol that represents an instance of some process type defined in our model; (3) a 2-ary endpoint term −T or +T, where "−" indicates the beginning and "+" the end of an observed process with type T; and (4) a list of event participant symbols which represent instances of artefacts, locations, and effectors involved in the process. This encoding is illustrated in the sketch below.
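As an illustration, a minimal Python rendering of such 4-ary token terms might look as follows; the field names and example values are hypothetical and are not taken from the MAP implementation.

```python
# Hypothetical rendering of the 4-ary token terms described above.
from collections import namedtuple

Token = namedtuple("Token", ["time", "event", "endpoint", "participants"])

# ("-", "Contact") marks the beginning of a Contact process,
# ("+", "Support") the end of a Support process.
stream = [
    Token(0.8, "event_3", ("-", "Contact"), ["hand_1", "cup_1"]),   # hand touches cup
    Token(1.9, "event_7", ("-", "Motion"), ["hand_1"]),             # lifting motion begins
    Token(2.1, "event_4", ("+", "Support"), ["table_1", "cup_1"]),  # cup leaves the table
]
```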
The tokens can be supplied to the parser either as a stream or by providing an ABox ontology of observed events.

The MAP system exploits ontologies as grammars for detecting activities in observations. For this, it has to infer the temporal ordering of action constituents using time interval calculus. This is implemented through ESGs. ESGs are directed acyclic graphs in which nodes are endpoints of constituent events, and in which an edge is added from one endpoint a to another endpoint b if a < b holds. Any path from endpoint a to another endpoint b implies that a < b (transitivity). Event relations are not explicitly represented in the graph but can be inferred through relations between endpoints.

ESGs are inferred from a TBox ontology in which action types are defined by temporal patterns of observable processes. Given an action definition, we iterate over the set of temporal constraints which can be derived from the model (in our current implementation we only consider existential restrictions). For each temporal constraint, we add edges to the graph according to how the relations were originally defined by Allen. The graph is built through two basic operations: adding edges and merging nodes. An edge between two nodes is only added if there is not yet a path between both nodes which would already imply the < relation through transitivity (this rule is sketched in code below). Further edges are then identified that can be removed from the graph without losing information. Pragmatic shortcomings can be unveiled by ESGs. These are detected whenever an edge is pushed to the graph that would yield a cycle. The semantics of a cycle in ESGs is that some endpoint occurs before and after some other endpoint, which is impossible. This is also the case if two endpoints are merged between which a path existed before.

The parser is implemented through a small set of top-down parsing rules which require additional context about which tokens were processed so far, and the endpoint sequence graph which is unified with input tokens by the parser. The hierarchical rules implemented reflect the constituent structure of actions in our model, i.e., actions have constituents which are actions, processes, or states. The core of the parser is the activity rule, which parses one ESG by matching the input token sequence with the graph structure. Recursive calls are performed for activity constituents which are actions. Nodes from the input ESG are popped according to the parsing rules, and the remaining ESG is returned to be used by subsequent rules.

The activity rule uses a pre-condition ESG and a list of tokens which were recorded to check if all the endpoints that need to occur before the action are present in the sequence of previously recorded tokens. This is used, for example, to check whether some contact is active at the current position in the token stream, whether some artefact is supported, etc. The pre-condition ESG is derived from our model. Pre-condition checking is skipped in case the pre-condition ESG is empty (i.e., in case the action has no endpoint pre-conditions defined). For non-empty pre-condition ESGs, we parse the recorded token sequence, trying to unify it with some endpoint path in the pre-condition ESG. Tokens are recorded in reverse order, i.e., the latest token occurring before the action is the head of the list of recorded tokens, such that this sequence can be directly used for parsing.
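The edge-insertion rule referred to above can be sketched as follows. This is a minimal illustration assuming a plain adjacency-set representation in Python; it is not the actual MAP implementation, and node merging is omitted.

```python
# Minimal sketch of ESG edge insertion with transitivity and cycle checks.

class ESG:
    def __init__(self):
        self.succ = {}  # endpoint -> set of directly succeeding endpoints

    def reachable(self, a, b):
        """Depth-first search: does a path a -> ... -> b already exist?"""
        stack, seen = [a], set()
        while stack:
            node = stack.pop()
            if node == b:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(self.succ.get(node, ()))
        return False

    def add_before(self, a, b):
        """Assert a < b; skip redundant edges, reject cycles."""
        if self.reachable(a, b):
            return  # already implied through transitivity
        if self.reachable(b, a):
            # A cycle would mean b occurs both before and after a,
            # i.e., a pragmatic shortcoming of the action model.
            raise ValueError(f"adding {a} < {b} would create a cycle")
        self.succ.setdefault(a, set()).add(b)
```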
For this backwards parsing of recorded tokens, we reuse some of the parsing rules that we use for forward parsing. Namely, the rules that are used for parsing constituent processes and states of actions can be reused for parsing endpoint pre-conditions. The only exception is that activity context is handled differently by forward and backward parsing.

Activity context is important for the parser so that it does not falsely detect activities in token sequences that represent multiple interleaved activities. For example, the Picking and Placing constituents of a 'Pick & Place' action need to have the same artefact participating in both actions, and Grasping involves performing a grasping motion with the effector that gets into contact with an artefact during the motion. We define activity context as the set of entities that must be participants in every constituent of some activity. Such identity constraints cannot be modelled in DL. For this reason, we have defined some abstract rules that govern how the activity context is inferred from tokens. The rules are declared based on a (hand-coded) classification of actions into "artefact-centric", "effector-centric", and "location-centric". Artefact-centric actions, for example, all have in common that their constituents must have some shared artefact participating in them. This classification is an abstraction to allow reusing the rules for many actions, and to avoid the need to extend the rule base when new actions are defined in our model.

4 Conclusion

In this work, we have introduced a novel action model, inspired by a model from human psychology, and the MAP system, which can yield higher-level activities from sequences of observed lower-level processes and states. The ontology is translated into sequences of event endpoints which are used as grammars by the MAP system. As a side effect, pragmatic shortcomings in the model are discovered that would not be detected by a description logics reasoner. The MAP system was tested under laboratory conditions with human activity data acquired in a virtual environment.

Acknowledgements. The research reported in this paper has been (partially) supported by the German Research Foundation DFG, as part of Collaborative Research Center (Sonderforschungsbereich) 1320 "EASE – Everyday Activity Science and Engineering", University of Bremen (http://www.ease-crc.org/). The research was conducted in subprojects H02, P01 and R01, as well as the FET-Open Project #951846 "MUHAI – Meaning and Understanding for Human-centric AI" funded by the EU Program Horizon 2020.

References

Allen, J.: Maintaining knowledge about temporal intervals. Commun. ACM 26(11), 832–843 (1983)
Beetz, M., Beßler, D., Haidu, A., Pomarlan, M., Bozcuoglu, A.K., Bartels, G.: KnowRob 2.0 – a 2nd generation knowledge processing framework for cognition-enabled robotic agents. In: International Conference on Robotics and Automation (ICRA), Brisbane, Australia (2018)
Cook, D.J., Augusto, J.C., Jakkula, V.R.: Ambient intelligence: technologies, applications, and opportunities. Pervasive Mob. Comput. 5(4), 277–298 (2009)
Fern, A., Siskind, J.M., Givan, R.: Learning temporal, relational, force-dynamic event definitions from video.
In: Proceedings of the Eighteenth National Conference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of Artificial Intelligence, July 28–August 1, 2002, pp. 159–166. Edmonton, Alberta, Canada (2002)
Flanagan, J.R., Bowman, M.C., Johansson, R.S.: Control strategies in object manipulation tasks. Curr. Opin. Neurobiol. 16(6), 650–659 (2006)
Gangemi, A., Mika, P.: Understanding the Semantic Web through descriptions and situations. In: Meersman, R., Tari, Z., Schmidt, D.C. (eds.) On the Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE. OTM 2003. LNCS, vol. 2888, pp. 689–706. Springer, Berlin (2003). https://doi.org/10.1007/978-3-540-39964-3_44
Gangemi, A., Guarino, N., Masolo, C., Oltramari, A.: Sweetening WordNet with DOLCE. AI Mag. 24(3), 13–24 (2003)
Gangemi, A.: Ontology design patterns for semantic web content. In: Gil, Y., Motta, E., Benjamins, V.R., Musen, M.A. (eds.) ISWC 2005. LNCS, vol. 3729, pp. 262–276. Springer, Heidelberg (2005). https://doi.org/10.1007/11574620_21
Guarino, N.: Formal ontology, conceptual analysis and knowledge representation. Int. J. Hum. Comput. Stud. 43(5–6), 625–640 (1995)
Haidu, A., Beßler, D., Bozcuoglu, A.K., Beetz, M.: KnowRobSIM – game engine-enabled knowledge processing for cognition-enabled robot control. In: International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain (2018)
Horrocks, I., Patel-Schneider, P.F., Boley, H., Tabet, S., Grosof, B., Dean, M.: SWRL: a semantic web rule language combining OWL and RuleML. W3C Member Submission (2004)
Mascardi, V., Cordi, V.: Technical Report DISI-TR-06-21, University of Genoa (2010)
Masolo, C., Borgo, S., Gangemi, A., Guarino, N., Oltramari, A.: WonderWeb deliverable D18: ontology library, August 2003
Mossakowski, T.: The distributed ontology, model and specification language – DOL. In: Recent Trends in Algebraic Development Techniques – 23rd IFIP WG 1.3 International Workshop, WADT 2016, Gregynog, UK, 21–24 September 2016, Revised Selected Papers, pp. 5–10 (2016)
Niles, I., Pease, A.: Towards a standard upper ontology. In: Proceedings of the International Conference on Formal Ontology in Information Systems (FOIS '01), pp. 2–9. ACM, New York (2001)
Oberle, D., et al.: DOLCE ergo SUMO: on foundational and domain models in SWIntO (SmartWeb Integrated Ontology). J. Web Semant. Sci. Serv. Agents World Wide Web 5(3), 156–174 (2007)
Porzel, R., Malaka, R.: A task-based approach for ontology evaluation. In: Proceedings of the ECAI 2004 Workshop on Ontology Learning and Population, Valencia, Spain (2004)
Siskind, J.M.: Reconstructing force-dynamic models from video sequences. Artif. Intell. 151(1–2), 91–154 (2003)

Categorising Virtual Reality Content

Ricard A. Gras(B)
Fakultät IV: Medienwissenschaft, Bayreuth Universität, Bayreuth, Germany [email protected]

Abstract. The current second wave of Virtual Reality (VR), initiated after the advent of the Oculus Rift in 2012, has led to millions of head-mounted displays (HMDs) being sold worldwide and to a vast and varied amount of content being produced. In addition, as happened with VR's first wave a few decades ago, an exciting body of research has emerged.
During both waves, much of this research has described VR's pivotal terminology ambiguously, which among other things has led to a lack of agreement on the nomenclature of VR content types. This paper proposes to consider VR as a medium with its own set of core elements, shedding light on its specificities. The text offers a taxonomy of VR content types taking into consideration such core elements and, in particular, the concept of embodied interactivity, proposing possible directions to further advance the medium.

Keywords: Virtual Reality · 360 video · VR film · VR games · Embodied Interactivity

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. Jung et al. (Eds.): XR 2022, SPBE, pp. 122–133, 2023. https://doi.org/10.1007/978-3-031-25390-4_11

1 Introduction – The Detrimental Lack of a Common Ground

Today's VR entertainment sector paints a fast-moving and, in many ways, highly fragmented landscape where impressive VR arcades co-exist with portable yet powerful HMDs for home consumption. The reality right now (both pre-COVID-19 and, all going well, post) is that more people than ever are familiar with VR and/or even own an HMD, for whom plenty of diverse content is easily available. The number of active VR users is estimated to be in the hundreds of millions worldwide. At the same time, VR projects are being released by established artists such as Paul McCartney and Björk, just as art festivals all over are including VR in their programs. It is clear that even though the sales of second-wave HMDs (and associated content) might not have fully met everyone's expectations, VR is advancing on some fronts. As a result, if still far from what could be considered mass adoption, VR is gradually finding its place in the media diet of today's consumers.

In this context, two distinct content types are establishing themselves. On the one hand, there is 360-degree video, which offers film-like experiences. And on the other hand, a diverse set of interactive experiences is sometimes generically referred to as VR apps. Such apps explore several genres, from user-generated content and eSports to Battle Royale-type games and beyond.

Despite the clear differences in terms of the production modus operandi and the ultimate end-user experience that VR apps offer, why do many still refer to these two distinct types of content (i.e., video vs. apps) as if they were the same specimen? Given that the differences in the level of agency afforded to the user (even in terms of distribution methods) are so clear, why hasn't a categorisation of content been created and agreed upon? The need for such clarification has been flagged in the past by Dolan and Perets (2015): "[c]onflation peppers the storyteller's ideology in 360 video and virtual reality. The general public, and many virtual reality evangelizers and practitioners, do not differentiate the distinctions between 360 video and virtual reality. They are two distinct mediums".

Examples of the effect of this degree of confusion can be found in the incongruous way VR content is labelled and marketed across the media, as well as in some educational initiatives that claim to be teaching VR when, in fact, they are teaching the very respectable art of 360-film.
Such a situation could be having negative effects on the understanding of VR as a whole, as well as on the expectations of students wanting to learn the trade, of creators wanting to experiment with the medium, and of first-time users, investors and journalists. Regrettably, this state of affairs has not been addressed by scholars or the media (despite the considerable fanfare surrounding the subject). The fact that, effectively, two different bodies of research on the subject of VR (co-)exist (i.e., studies from VR's first and second waves), plus the diverse backgrounds of the academics studying the subject, might be two of the reasons why a sense of disorientation prevails on that subject. As a result, while some scholars argue that films made for VR ought to be called cinematic VR, others such as Uricchio et al. (2016) utterly disregard the lower levels of interactivity found in 360-video and suggest shifting the focus to game-like VR content. This situation is not helped by the fact that VR itself is hard to describe (oftentimes, the inherently empirical aspect of the experience means that a user has to try it). Indeed, while most people are used to describing the synopsis of a movie or a book, trying to explain the mechanics of a popular VR title such as Super Hot VR (SUPERHOT Team, 2016) is likely to prove a more complex task. Given this state of affairs, this text proposes to describe the key characteristics found in VR and then map content types, suggesting a nomenclature for each of those types.

2 Core Elements in VR

The approach taken when it comes to proposing the following core elements in VR gathers inspiration from studies on the formal elements of films (Bordwell and Thompson) and video-games (Esposito). Such studies have respectively identified a series of components that are invariably found in both cinema (mise-en-scène, cinematography, editing and sound) and video-games (gameplay, a peculiar audio-visual apparatus and, on some occasions, a story).

However, applying any of the above to VR is inadequate, not only because VR is, sensorially speaking, a different medium altogether, but because some of the above elements may not exist or be used in the same way. Additionally, over the past few decades, there has been confusion in terminology within the VR sector, which has been exposed by two young scholars in their respective theses. Bokyung (2016), for instance, claims that across academic studies, the "sense of immersion is often confused with a sense of presence, since there is not yet a universal agreement on definitions of both terms, nor a standard explication of the relationship between them. Therefore, immersion and presence have appeared as interchangeable terms within VR literature". Similarly, Dubbelman (2013), in his PhD thesis, writes, "[i]n academia, the term presence is often used interchangeably with the word immersion".

In light of this, the author proposes to start by identifying the core elements that are specific to VR, either because they are intrinsic to its audio-visual apparatus and/or because they are audience reactions that are unique to the medium: Immersion, understood as the multi-sensorial experience facilitated by HMDs. Presence, i.e., the sense of being there. And embodied interactivity, i.e., those body-centric actions and behaviours by which audiences intend to engage with VR content.
This concept will be examined in more detail later in the text. As much as the concepts of immersion and presence might have been investigated at large, the remit of this text calls for a brief description of both terms with a view to establishing a clear frame of reference with the reader. Such descriptions will be followed by an in-depth study of the concept of embodied interactivity.

2.1 Immersion

By immersion, this study will refer to the particular sensorial experience afforded by HMDs that allows users to gain access to a different reality. As Ayda Sevin (2011) puts it, in VR the screen disappears since the "viewer's visual field is completely filled, and s/he no longer looks through a window. In this case, the physical space and the virtual space coincide." This text, therefore, proposes to view immersion as the direct consequence of technology that offers users a multi-sensorial stereoscopic environment alternative to reality.

It is important to celebrate VR's explicit level of sensorial immersion, whatever the type of VR discussed, as an essential building block of the medium, because without it we would likely be discussing a different platform altogether. Immersion, therefore, is not to be confounded with the sense of presence (examined later), nor with the proverbial suspension of disbelief attainable across other art-forms: what Matuszkiewicz and Weidle (2019) describe as the "emotional transportation readers or viewers might experience when 'immersing' themselves" into (non-VR) content. In the context of VR, user immersion is a perceptual occurrence, functionality, or feature: a direct consequence of the use of technology that is unique to VR and that cannot be found in other media.

2.2 Presence

The concept of presence holds a complex, multi-layered meaning. As the studies cited in this section describe, researchers have found that different types of presence exist. And yet, given its significance, a general understanding of it has not been fully achieved. Research efforts investigating this concept (often also referred to as telepresence) have confirmed that audiences, when in VR, feel like they are actually in a given location or scene. Pillai et al. (2013) have successfully summarised this state of affairs. There is a large and exciting body of research in this area, which has combined proxemics with neuroscience. Across many decades, Slater has made key contributions in this area, finding strong evidence of the unique sense of presence that VR offers to users. Not surprisingly, an assortment of definitions can be found across this extensive body of research. According to Lee et al. (2004), presence is "a product of an unconscious effort to correctly register oneself into the virtual environment in a consistent manner". Acclaimed VR creator Aaron Koblin illustrated the feeling of presence when he stated, "being a part of virtual reality is not an ephemeral experience between your imagination and the storyteller. It's actually a much more visceral presence and experience when you're a part of the environment".
In 2000, a non-profit organisation aptly called the International Society for Presence Research also provided a definition for the term:

"A psychological state or subjective perception in which even though part or all of an individual's current experience is generated by and/or filtered through human-made technology, part or all of the individual's perception fails to accurately acknowledge the role of the technology in the experience (…). This experience, identified by some scholars as "first order" mediated experience, is the "normal" or "natural" way we perceive the physical world and provides a subjective sensation of being present in our environment".

This inherent sense of presence found in VR has profound and multiple repercussions when it comes to conceptualising and producing content for the medium. As suggested by studies by Llobera, Slater and Blom (2013), the feeling of being there has considerable implications for VR producers, from the placing of objects in space to the way users expect to interact with avatars.

2.3 Embodied Interactivity

Interactivity as a concept has drawn plenty of interest from researchers, especially Jensen. Such levels of interest have, however, triggered a multiplicity of definitions. Steuer (1992) describes interactivity as "the degree to which users can participate in modifying the form and content of a mediated environment in real time." Jensen (1998) proposes a similar approach when describing it as "a measure of a media's potential ability to let the user exert an influence on the content and/or form of the mediated communication".

Technically speaking, interactivity is available in VR by default, even though moving one's head is far from the voluntary, decision-type interactions found at the end of The London Heist (a game exclusively available on PlayStation VR, SCEE, U.K. 2016). In this piece, the user is forced to choose what to do with her/his final bullet: shoot a shrewd mafia boss, kill an erratic former colleague, or choose not to act. Either of the two possible actions, as well as user inaction, renders a different ending to the experience (this title will be analysed later in the text). Nevertheless, when conceptualising new VR content, when and how much interactivity to add (whatever the type of interactivity that might be) is a critical question for all creators to address, and one which will profoundly affect the nature of the experience eventually delivered. On that note, a key question arises: generally speaking, how is the interactivity found in VR different from that found in video-games? Beyond the peripherals that are used with both games and VR headsets (joysticks, etc.), the answer to such a question seems to lie in the manner in which the audiovisual apparatus of VR lets us incorporate our bodies into the experience. In other words, it lies in the particular way in which VR audiences can interact with content corporeally, in space. This usually takes place through head movements, via the use of our hands and/or peripherals, or by moving/walking: actions which are all tracked and have an effect on the digital space presented to us in VR.
Such seamless incorporation of the (potentially full) human anatomy as a conduit for interacting spatially inside the digital realm, and the synchronous sensorial effect it yields, are key when it comes to understanding VR as a unique medium. The author proposes to refer to such in-VR user actions as embodied interactivity. This term has been explored in past studies by Byers (2019) and also by Debarba et al. (2017). The author proposes to adapt the term and redefine it as the group of heterogeneous, non-mutually-exclusive, body-centric affordances by which VR users engage with content.

The core elements outlined in previous sections may well help towards the understanding of the main characteristics that VR works contain as a whole, especially when it comes to comparing VR with other media and/or creating content. However, referring to these characteristics is barely helpful when it comes to attempting to categorise VR content in any way (precisely because such elements are permanently present across all content types). In order to move forward, the author proposes to investigate the nature and levels of embodied interactivity offered in each content type. As illustrated across the text so far, VR allows for different types of actions, from head movements that change the view to the more meaningful decision-type actions described in relation to The London Heist (ibid.). A study by Roussou et al. (2006) addresses this issue by identifying three levels of interactivity using a level approach: "[S]patial navigation is the lowest form of interactivity; [M]anipulation of the environment counts as a basic medium level of interactivity; [and] the ability to modify system of operation is the top level of interactivity". Similarly, Zhang et al. (2019) also argue that three levels of interactivity exist: Low (when the user only controls "where to look"); Medium ("a balance between system automation and user-controlled actions"); and High (when the "user chooses and initiates as many actions as possible, while still maintaining narrative flow").

Both aforementioned studies coincide in proposing precisely three levels or degrees of interactivity. Both also stress the significance of the quality of the interaction rather than its quantity or level of physical intensity. However, such investigations into the subject do not provide a categorisation of existing content or suggest ways in which such different levels may inform the choice of a suitable denomination for VR content types.

2.4 Further Considerations on Previous Research

As cited across the text, scholars from different disciplines have investigated the way users relate to VR content. Over and above the long tradition of studies that focus on the scientific measuring of user responses in VR (proxemics, haptics, etc.), an alternative discourse has placed attention on the particular user reaction triggered by immersive media from an experiential point of view. Studies following the latter approach, although less common in quantitative terms, provide a useful sense of perspective to anyone investigating the role that VR is playing in today's media landscape. In that context, the concept of teleacting has emerged, an idea that lacks a fully-defined description.
Teleact or teleacting has been used in investigations as divergent as Marqués Rodilla's essay on cybersex and Arslan's study on remote robotics. The term seems to be a fitting way to illustrate the way VR users relate to the experience facilitated by HMDs. However, in terms of the focus of this paper, proposing the use of this term (given the lack of consensus gathered in academic circles) seems a purpose-defeating suggestion, in the sense that teleacting (as its etymology implies) draws attention to distance and therefore, in many ways, accentuates the more technological and impersonal part of the VR experience. The author argues that such a term lacks the ability to recognise the special communion between our real body and its synthetic in-world representation (and actions) distinctly found in VR.

On that subject, an alternative approach is proposed by Allen and Tucker (2018) through their concept of "storydoing or storyliving", which revisits, or rather expands, the concept of storification popularised by Ruth Aylett. In a 2018 industry report entitled Immersive Content Formats for Future Audiences, Allen and Tucker propose that VR creators adopt a more "user-centric" approach in which the user can be involved in the co-creation of narrative. Another study that explores the relationship that users adopt in relation to VR comes from Reyes and Dettori (2019), who noted that one of the characteristics of interactive VR films is to introduce "a shift from the authorial point of view of classical media, literature, cinema and theatre". Interestingly, these latter studies investigate VR by making recurrent references to narrativistic and filmic terms whilst also drawing attention to the idea of (co)authorship, therefore acknowledging the explicitly active role that users play in immersive media.

The aforementioned studies are valuable sources when it comes to both facilitating the understanding of the uniqueness of the VR experience and also, as will be described later in the text, the key role the VR user holds as an active driver of such experience.

3 Methodology

Generically speaking, throughout this text the author will consider VR as a medium defined by the experience it offers rather than by its technological features. The latter discourse, to an extent dominant in academia, has been challenged by Steuer (1992), who proposes a similar approach to the one offered by this paper. According to this author, focusing merely on technology when discussing VR could lead to an "irrational definition of the medium itself". Additionally, the aim of this text is not to dissect all types of interactivity available or potentially available in VR. Such an analysis would be bound to be affected by the fluid nature of the VR ecosystem, which is highly likely to continuously come up with haptic appliances and other innovations. Moreover, such an analysis would require a conscientious study that takes into consideration all creative and technical aspects at play, especially the ones that have a direct impact on the experience and the amount/nature of interactions (single-player vs. multi-player, and so on). This paper will propose definitions for a series of recurrent terms related to VR with a view to using them precisely in a generic taxonomy for the main VR content types.
Nevertheless, the object of this study is not to investigate VR-specific genres but to provide a high-level overview. The text will often use words such as producer (the creator of a VR work, regardless of its category or genre) and proposes to understand work as any individual piece of VR content. On the methodology employed, a considerable number of VR entertainment titles have been analysed (a small selection of which can be found in Table 2). The selection of such works is meant to illustrate and provide actual examples of the points discussed in this text. Most of these works are popular across the VR sector.

4 Results

4.1 Levels of Embodied Interactivity

In this section, the author proposes to adopt a level approach, just like the ones described in the studies from Sect. 2. Although the following proposal coincides with the number of levels proposed, it suggests focusing not on the technology aspect and the degrees of complexity that derive from the implementation of such technology; instead, the following new levels are based on the explicit actions that VR affords to end-users:

By Basic level, the author proposes to refer to the user's ability to control the view by moving her/his head, similarly to what Zhang et al. (2019) suggest. Such agency, always made possible by VR's audiovisual apparatus by default, is the nethermost level. Such levels of interactivity clearly correlate with the more linear tradition of media.

By Standard level, the author refers to those affordances that invite audiences to interact with in-scene content explicitly, regardless of the amount or intensity. Usually, such experiences allow a certain degree of spatial freedom and offer users real-time interactions with objects, avatars or the environment. Nevertheless, this level should be understood in a wide sense, since it can cover narrative-heavy, action-adventure experiences as much as more gameplay-intense works. The content pertaining to this level is strongly aligned with the tradition of video-games.

At the Open level, the author proposes to include those works that require some degree of authorship from the user. The use of the term open is also indicative that user input might occur in different ways, often creatively. Examples of this can be found in open-world works where users have to craft their own experience, such as Lucid Trips (Vogel et al. 2017), as well as in works with significant social elements such as The Tempest (Tender Claws 2020). In the latter, users are invited to use their avatar to adopt a role in a live theatre play. Works that fall within this open category are bound to be associated with more hybrid experiences that may or may not feed on a variety of media traditions. Moreover, Open VR works are likely to venture into more innovative creative experiences (in creative and content format terms).

Figure 1 below summarises the proposed core elements of VR whilst illustrating the proposed level approach in regard to the concept of embodied interactivity:

Fig. 1. Summary of the proposed core elements of VR

4.2 Mapping VR Content by Type

By mapping the different types of embodied interaction against existing VR content, a series of content types can be identified using a simple nomenclature applicable to currently available VR content (Table 1).

Table 1. Generic mapping of VR content by type.
Embodied interactivity | 360 film | VR Games | Interactive VR
Basic                  | +        | +        | +
Standard               | –        | +        | +
Open                   | –        | –        | +

The following definitions are offered to further clarify each content type:

360 Video or VR Video. By 360 video or VR video, this text proposes to refer to live recordings and animations that contain stereoscopic or monoscopic images in motion, accessible via HMDs (and VR-ready smartphones, tablets and PCs). The author suggests including 180-degree video in this category as well. Predictably, audiences will make use of the terms video and film in a similar way to how such terms are used in the context of cinema, i.e. taking into consideration aspects such as length, production values, etc. In any case, the delivery aspect of VR video content, irrelevant as it is in the context of this paper, provides an illustration of the high levels of accessibility of this specific type of VR content: nowadays, VR video can be found as a downloadable or streamable option across services as popular as YouTube.

Remarkable examples of 360 video works are The Limit (Rodriguez 2018) and Pearl (Osborne 2016). Additionally, beyond projects made with Eevo (www.eevo.com) and works such as Zena (Reyes 2018) and 11.11.18 (Schrevens et al. 2019), at present the author has not found enough exponents of interactive VR films to investigate this area further.

VR Games. By VR games, this text suggests referring to content that allows the user to have experiences similar to the ones facilitated by video-games, yet in VR. Successful VR games are Beat Saber (Hyperbolic Magnetism 2019) and Super Hot VR (ibid.). As found across traditional video-games, such titles may also have story-telling aspirations, as seen with the example of The London Heist (ibid.). As briefly described earlier, in The London Heist (ibid.), users are given the chance to embody a mafioso, who happens to be held hostage. After going through a number of flashback scenes, often highly interactive (e.g. trying to steal a diamond), the user is given a weapon and asked to make a choice in order to save her/his life. Depending on this final choice, a different ending is rendered. Regardless of the branching aspect, this title offers an effective combination of story-led, linear experiences with intense game-like challenges and could be seen as a key paradigm case in VR.

Interactive VR Experiences. By interactive VR experiences, I refer to works that are not 360 videos nor games and that explore new or hybrid ways to invite the user into taking part in the modification and/or the creation of the experience. A key aspect of such interactive VR experiences is the novel and innovative way in which they often bring the user and her/his body into the experience. Additionally, another key aspect is the fact that such works tend to allow the user to personalise the experience to some extent. Examples of interactive VR experiences are Notes on Blindness (Middleton et al. 2016), Lucid Trips (ibid.) and The Tempest (ibid.).

Table 2 maps the different types of embodied interactivity against all referenced VR titles:

Table 2. Mapping VR examples by type.
Embodied interactivity | 360 film         | VR Games                                  | Interactive VR
Basic                  | The Limit, Pearl | –                                         | –
Standard               | –                | SuperHot VR, Beat Saber, The London Heist | –
Open                   | –                | –                                         | Notes on Blindness, Lucid Trips, The Tempest

5 Conclusion: The Significance of Embodied Interactivity

After describing the context, this text identified a series of core elements related to the medium and, specifically, the relevance of the user's body in VR. Thereupon, the author put forward and developed the concept of embodied interactivity. As described across Sect. 2.3, this idea can help identify, understand, and map the different generic types of experiences available in VR. Having established three different types or levels of embodied interactivity (Basic, Standard and Open), the text suggests a nomenclature for VR work types with the aim of addressing the confusion that exists on the subject, offering a generic categorisation that is straightforward and easy to employ.

The author proposes such a nomenclature so that the VR sector (understood in its widest sense, from academia and manufacturers to the content industry) addresses the need to establish common ground in relation to the key recurrent terms. Correspondingly, the author proposes the abandonment of widespread denominations such as immersive VR (which is, essentially, an oxymoron) and cinematic VR (since cinematic is a quality that may or may not be found, adeptly or not, across several types of VR content).

This text also maps such categories against existing VR works for illustration purposes. Such examples demonstrate actual uses of core elements in content whilst stressing the difficulty of typecasting, in detail, works that fall within the interactive VR category. Predictably, the advent of new functionalities and enhanced social features and experiences is likely to challenge the proposals made in this paper as well as further categorisations.

The question now remains to what extent VR can champion a change of paradigm when it comes to promoting a completely new role for audiences in relation to content: a change of trend where we all, as users, are able to fully employ our bodies to exercise the new privileges afforded to us in the immersive realm. Even though the advent of the Internet era and the popularisation of video-games opened audiences to a breadth of new affordances, there is a sense of poetic justice in the singular way by which VR may reverse the passive role that, during the past century, mass media forced on users (or, more appropriately, viewers). Immersive media converts the human anatomy into an active intermediary, a perfect interface, that allows us to engage with the digital world we inhabit with a unique combination of immediacy and intuitiveness. The question is how present and future producers will interpret this powerfully liberating sense of corporeality and maximise the creative potential that VR holds.

Acknowledgements. Special thanks to Dr. Jochen Koubek, Dr. M. Cecilia Reyes, Dr. Joan Llobera and Nico Nonne.

References

Aylett, R., Louchart, S.: Towards a narrative theory of virtual reality. Virtual Reality 7, 2–9 (2003). https://doi.org/10.1007/s10055-003-0114-9
Allen, C., Tucker, D.: Immersive content formats for future audiences. Industry report (2018).
https:\/\/www.immerseuk.org\/wpcontent\/uploads\/2018\/07\/Immersive_Content_F ormats_for_Future_Audiences.pdf Arslan, M.S.: Improving performance of a remote robotic teleoperation over the internet. Middle East Technical University (2005) Byers, K.: Embodied Interaction \u2013 Introduction (2019) Bokyung, K.: Virtual reality as an artistic medium: a study on creative projects using contemporary head-mounted displays. Master\u2019s thesis\u2014Media Lab Helsinki, p. 29 (2016) Bordwell, D., Thompson, K.: Film Art: An Introduction. McGraw-Hill, New York (2004) Dolan, D., Perets, M.: Rede\ufb01ning the Axiom of Story: The VR and 360 Video Complex. Techcrunch (2015). http:\/\/tcrn.ch\/1OttLQw. Accessed 26 Sept 2020 Debarba, H., Bovet, S., Salomon, R., Blanke, O., Herbelin, B., Boulic, R.: Characterizing \ufb01rst and third person viewpoints and their alternation for embodied interaction in virtual reality (2017) Dubbelman, T.: Narratives of Being There: Computer Games, Presence and Fictional Worlds, p. 4 (2013) Eevo Inc. www.eevo.com. Accessed 23 Feb 2022 Esposito, N.: A Short and Simple De\ufb01nition of What a Videogame Is (2005) Hyperbolic Magnetism. Beat Saber VR (2019) International Society for Presence Research website (2002). https:\/\/ispr.info\/about-presence-2\/ about-presence\/. Accessed 11 June 2020 Jensen, J.: \u2018Interactivity\u2019 \u2014 Tracking a New Concept in Media and Communication Studies (1998) Koblin, A.: Aaron Koblin on VR Storytelling. https:\/\/www.dfrobot.com\/blog-427.html https:\/\/ispr.info\/about-presence-2\/about-presence\/. Accessed 10 Sept 2020 Lee, S., Kim, G., Rizzo, A., Park, H.: Formation of spatial presence: by form or content (2004) Llobera, J., Slater, M., Blom, J.: Telling stories within immersive virtual environment. Article in Leonardo, October 2013. https:\/\/doi.org\/10.1162\/LEON_a_00643 Marqu\u00e9s Rodilla, C.: En torno a los avatares del placer virtual. Contrastes. Revista Internacional De Filosof\u00eda (2004). https:\/\/doi.org\/10.24310\/Contrastescontrastes.v0i0.1857 Matuszkiewicz, K., Wiedle, F.: At the threshold into new worlds: virtual reality games beyond narratives. Eur. J. Media Stud. 8(2), 5\u201323, 12 (2019). Autumn 2019 Osborne, P.: Pearl VR (2016) Pillai, J., Schmidt, C., Richir, S.: Achieving presence through evoked reality. Front. Psychol. (2013). https:\/\/doi.org\/10.3389\/fpsyg.2013.00086 Reyes, M.S.: Zena VR (2018) Reyes, M., Dettori, G.: Developing a media hybridization based on interactive narrative and cinematic virtual reality (2019) Rodriguez, R.: The Limit VR (2018) Roussou, M., Oliver, M., Slater, M.: The virtual playground: an educational virtual reality envi- ronment for evaluating interactivity and conceptual learning. Virtual Reality 10, 227\u2013240 (2006) Schrevens, D., Tixador, S.: 11.11.18 VR Experience (2019) Sevin, A.: Computer screen, Virtual Reality and the Frame (incl. in \u2018Image, Time and Motion\u2019 by Aytemiz et al.), p. 126 (2011). ISBN: 978-90-816021-5-0 Slater, M.: Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R Soc. Lond. B Biol. Sci. 364, 3549\u20133557 (2009).https:\/\/doi.org\/ 10.1098\/rstb.2009.0138 Sony Computer Entertainment Europe (SCEE): The London Heist VR (2016) Steuer, J.: De\ufb01ning virtual reality: dimensions determining telepresence. J. Commun. 
42(4), 73–93 (1992)
SuperHot Team: Superhot VR (2016)
Tender Claws: The Tempest VR (2020)
Uricchio, W., et al.: Virtually there: documentary meets virtual reality. In: Rafsky, S. (ed.) Virtually There: Documentary Meets Virtual Reality, Cambridge (2016). http://opendoclab.mit.edu/wp/wp-content/uploads/2016/11/MIT_OpenDocLab_VirtuallyThereConference.pdf
Vogel, S.L., VR Nerds: Lucid Trips (2017)
Zhang, L., Bowman, D.A., Jones, C.N.: Exploring effects of interactivity on learning with interactive storytelling in immersive virtual reality. In: 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1–8 (2019). https://doi.org/10.1109/VS-Games.2019.8864531

CDT-GEM: Conversational Digital Twin for Geographic Emergency Management

Seungyoub Ssin1, Junseong Bang2, and Woontack Woo1,3(B)
1 KI-ITC ARRC, Daejeon, Republic of Korea [email protected]
2 Intelligent Convergence Research Laboratory, ETRI, Daejeon, Republic of Korea [email protected]
3 UVR Lab, KAIST, Daejeon, Republic of Korea

Abstract. The endless evolution of cities aiming for smartization is combined with modern ICT (Information & Communication Technology) techniques to extract and analyze meaningful data generated throughout various places in the city and to realize advanced visualization. However, despite these efforts, when an emergency or a disaster suddenly occurs in a city area, the supply of tools that support intelligent responses to minimize damage is still lacking. The trend among urban researchers is to design urban management tools that combine digital twin technology with XR and voice-recognition AI technology that can link with multiple IoT (Internet of Things) devices and integrate and manage the extracted sensing data. In this paper, we introduce CDT-GEM, a digital twin system that can inform managers of the situation at problem spots when a crisis occurs and show how citizens can communicate with the emergency management system in real time.

Keywords: Urban digital twin · Metaverse · AR · VR · MR · XR · Smart city

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 T. Jung et al. (Eds.): XR 2022, SPBE, pp. 134–138, 2023. https://doi.org/10.1007/978-3-031-25390-4_12

1 Introduction

Human experience accumulates through times of crisis and evolves into predictions so that history does not repeat itself. Some crises are at the individual level (e.g., diagnosis of life-threatening diseases) or the organizational level (e.g., businesses facing bankruptcy), while others are at the social or global level (e.g., the COVID-19 pandemic) (Beghetto 2021). Beghetto says a crisis can cause severe problems and anxiety, but it can also serve as an essential catalyst for creative action and innovative outcomes, because our typical reasoning and actions may no longer help us in times of crisis. However, it is unreasonable to monitor crises of various kinds, such as fire, terrorism, flooding, gas leakage, earthquake, heavy snow, collapse, etc., with traditional CCTV or SNS methods that do not cover the entire problem area (Laufs et al. 2020). Moreover, the spread of a powerful disease such as the COVID-19 pandemic shows a current crisis of human life, causing a chain of problems such as economic damage (Nicola et al. 2020). It is essential to identify the cause quickly in such a time of crisis, and it is best to respond
soon and get out of the crisis in the short term with minimal damage (Christophe and Bénédicte 2003). Lacinák and Ristvej introduced an Emergency Solution that enables the managers of cities aiming to become smart cities to manage urban disasters and safety using a GIS-based system (Lacinák and Ristvej 2017). In such a smart city, managers research and develop the Disaster City Digital Twin as a technology for monitoring spaces out of sight and predicting possible future disaster situations (Fan et al. 2021). This shows that combining disaster management, where damage can be reduced only when real-time operation is guaranteed, with digital twin technology, which can visually deliver real-time information, is inevitable.

In this paper, we introduce a demonstration system called CDT-GEM as a tool from which both managers and citizens can get help in times of crisis. Unlike existing digital twins, CDT-GEM is equipped with an AI chatbot that helps users make decisions (Cheng and Jiang 2020) (Fig. 1).

Fig. 1. The concept of CDT-GEM

2 The Architecture of CDT-GEM

The CDT-GEM architecture allows managers to monitor emergencies occurring in urban spaces and exchanges real-time information between many citizens and managers in remote places. Figure 2 shows the architecture of CDT-GEM, consisting of Virtual IoTs, Real IoTs, a Cloud System, Edge Computing, and Users; each component is connected to various devices.

Fig. 2. The architecture of CDT-GEM

(a) Virtual IoTs: A server system unit that creates data virtually. It can be used when city managers want to run tests that mix natural phenomena with virtual simulations in the city center. When real IoT devices cannot be deployed or cannot provide rich enough information, virtual IoT can create useful data that supplements the real IoT data to design innovative city services.

(b) Real IoTs: The system area in which existing IoT systems are combined; it manages the IoT devices that already exist. RI1 and RI2 denote existing real IoT systems that are connected. 360 IoTs is a technology that expresses the location of objects using 360 images [8]. BoT IoTs represent information received from a mobile device's speech recognition system as data, which can be used as sensing data.

(c) Cloud System: A system that provides cloud services using large-capacity server systems such as Microsoft Azure, AWS, and Google Cloud, and accepts the data generated by the (a) virtual and (b) real IoT systems as input. In this system area, the input data is extracted and then parsed, combined, and analyzed.

(d) Edge Computing: The part where city managers and citizens use extended reality (XR) HMDs, such as Microsoft HoloLens. It outputs the information finally received from the Edge Computing layer as information that users can easily understand and use. Spatiotemporal information entered through the network is expressed either through a macroscopic visualization method called the city miniature or through a microscopic visualization method using 360 technology. City miniatures visualize geographic information using city maps. The 360 technology provides monitoring such as CCTV and remote visits to building interiors, for example for non-face-to-face performances.
The Tabletop system is an interaction system that provides a touch function on a two-dimensional plane, making it easy to control information outside the viewport, which is inconvenient to handle with the HMD's NUI. It provides menu selection and information display, two-dimensional object movement, geographic map zoom in/out, and interaction functions such as drag & drop. The Bot system, operated through a mobile device, processes natural language input from users, converts the input into event-type text, provides guidance to people in need, and allows administrators to issue the necessary commands, as illustrated in the sketch after this list. Location and status information generated by the mobile device can be converted into IoT information and stored in the system.

(e) Users: The group of city managers and citizens in charge of recognizing and handling problems by connecting to the Edge Computing area with various devices, delivering information to external managers, and providing the necessary data.
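To make the data flow above concrete, the following is a minimal sketch of how the Bot system described in (b) and (d) might turn a citizen's spoken report into event-type text and store it as an IoT-style sensing datum. All names here (EventRecord, classify_event, the keyword rules) are hypothetical illustrations under our own assumptions, not part of the CDT-GEM implementation.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical event categories a CDT-GEM-like bot might recognize.
EVENT_KEYWORDS = {
    "fire": ["fire", "smoke", "burning"],
    "flood": ["flood", "water rising"],
    "gas_leak": ["gas", "leak", "smell"],
}

@dataclass
class EventRecord:
    """Event-type text plus location/status, stored like IoT sensing data."""
    event_type: str
    raw_text: str
    latitude: float
    longitude: float
    timestamp: str

def classify_event(utterance: str) -> str:
    """Naive keyword matcher standing in for the bot's NLP stage."""
    lowered = utterance.lower()
    for event_type, keywords in EVENT_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return event_type
    return "unknown"

def ingest_citizen_report(utterance: str, lat: float, lon: float) -> EventRecord:
    """Convert a natural-language report into an event record (a BoT IoT datum)."""
    return EventRecord(
        event_type=classify_event(utterance),
        raw_text=utterance,
        latitude=lat,
        longitude=lon,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    record = ingest_citizen_report("I smell gas near the station", 36.3504, 127.3845)
    print(asdict(record))  # event_type='gas_leak', ready for the cloud pipeline
```

In a deployed system the keyword matcher would be replaced by the speech-recognition and natural-language components the paper describes; the point of the sketch is only the conversion of free text into a structured, storable event.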
3 Conclusion

In emergency situations, digital twin technology is needed to obtain the spatiotemporal information required to understand the situation at problem spots. However, the interface must be improved to connect city managers and citizens flexibly in a cyberspace composed of digital twins. In addition, an AI-based chatbot system can serve as the digital twin's interactive interface, quickly and conveniently delivering high-value information to users by analyzing the entire input data. Moreover, it is possible to design a platform for emergency management based on geographic information using augmented reality technology. In the proposed CDT-GEM, 3D augmented reality technology and chatbot functions are flexibly connected through edge computing to manage crisis situations from various angles. Using 5G edge technology, cloud technology, XR HMDs, and an AI chatbot system, an integrated approach can be configured as a user-friendly technology. The integrated system can be widely used in the smart crisis management field. This paper presents essential technologies for crisis management and shows a system that can be used to design smart services with these techniques.

Acknowledgements. This work was supported by the KAIST Institute for Information Technology Convergence (KI-ICT) and a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C2011459). This research was also supported and funded by the Korean National Police Agency. [Pol-Bot Development for Conversational Police Knowledge Services/PR09-01-000-20].

References

Beghetto, R.A.: How times of crisis serve as a catalyst for creative action: an agentic perspective. Front. Psychol. 11, 600685 (2021). https://doi.org/10.3389/fpsyg.2020.600685
Laufs, J., Borrion, H., Bradford, B.: Security and the smart city: a systematic review. Sustain. Cities Soc. 55, 102023 (2020). https://doi.org/10.1016/j.scs.2020.102023
Nicola, M., et al.: The socio-economic implications of the coronavirus pandemic (COVID-19): a review. Int. J. Surg. 78, 185–193 (2020). https://doi.org/10.1016/j.ijsu.2020.04.018
Christophe, R.-D., Bénédicte, V.: The difficulties of improvising in a crisis situation - a case study. Int. Stud. Manag. Organ. 33(1), 86–115 (2003). https://doi.org/10.1080/00208825.2003.11043675
Lacinák, M., Ristvej, J.: Smart city, safety and security. Procedia Eng. 192, 522–527 (2017). https://doi.org/10.1016/j.proeng.2017.06.090
Fan, C., Zhang, C., Yahja, A., Mostafavi, A.: Disaster city digital twin: a vision for integrating artificial and human intelligence for disaster management. Int. J. Inf. Manag. 56, 102049 (2021). https://doi.org/10.1016/j.ijinfomgt.2019.102049
Cheng, Y., Jiang, H.: AI-powered mental health chatbots: examining users' motivations, active communicative action and engagement after mass-shooting disasters. J. Cont. Crisis Manag. 28(3), 339–354 (2020). https://doi.org/10.1111/1468-5973.12319

Tourism and Cultural Heritage

Towards User Experience Guidelines for Mobile Augmented Reality Storytelling with Historical Images in Urban Cultural Heritage

Silviu Vert1(B), Oana Rotaru1, Diana Andone2, Miruna Antonica1, Adina Borobar1, Ciprian Orhei1, and Victor Holotescu1
1 Multimedia Center, Politehnica University of Timisoara, Timișoara, Romania [email protected], {oana.rotaru,felicia.antonica,adina.borobar,ciprian.orhei,victor.holotescu}@student.upt.ro
2 eLearning Center, Politehnica University of Timisoara, Timișoara, Romania [email protected]

Abstract. Augmented reality is a novel and engaging way for visitors to experience cultural heritage as locals or tourists. Mobile augmented reality applications with historical images showing landmarks or events of the past are a popular category. Using research methods such as literature review, competitive analysis, semi-structured interviews, and usability evaluations, we derived a set of practical user experience guidelines for designing mobile augmented reality storytelling applications with historical images in the context of urban cultural heritage. To demonstrate how these guidelines could be applied, we present a redesigned prototype for such an application, developed for Timisoara (Romania), European Capital of Culture in 2023.

Keywords: User experience · Mobile augmented reality · Cultural heritage

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
T. Jung et al. (Eds.): XR 2022, SPBE, pp. 141–147, 2023. https://doi.org/10.1007/978-3-031-25390-4_13

1 Introduction

Cultural heritage is experienced in many forms today, from visiting a museum exhibition to surfing the museum's website or mobile app or using augmented reality applications to enhance the places that one visits.

One of the most popular use cases for digital cultural heritage is mobile augmented reality applications with historical images that show landmarks or events from the past, aligned with the current surroundings of the user. This produces an impressive view of the passing of time and reinforces the value of the local cultural heritage.

To showcase this, we developed the Spotlight Timisoara AR application, which superimposes old photos of buildings, from the archive of the National Museum of Banat, on the current landmarks, as seen through the lens of the smartphone. Spotlight Heritage Timisoara is a digital cultural initiative of the Politehnica University of Timisoara realised in partnership with the National Museum of Banat. Spotlight Heritage Timisoara reveals, through digital storytelling, the city of Timisoara through stories of cultural and historical heritage, technical development, communities, and neighbourhoods,
interwoven with the personal stories of the inhabitants of yesterday and today (Vert et al. 2021).

In this chapter, we present our user research towards establishing practical user experience guidelines for designing mobile augmented reality storytelling applications with historical images in the context of urban cultural heritage. We demonstrate these guidelines by showing their implementation in an improved prototype of the Spotlight Timisoara AR application.

2 Related Work

The domain of cultural heritage can benefit from mobile augmented reality applications, as they are a proper medium for preserving, documenting, and exploring all the values that cultural heritage holds (Tiriteu and Vert 2020). In short, augmented reality is a great way to meet the needs of cultural heritage. Using augmented reality in the context of cultural heritage maximises visitor satisfaction and offers a unique and personalized experience to each visitor (Perra et al. 2019). If the user experience and usability of the application are also carefully considered, cultural heritage becomes even more accessible and more likely to be noticed.

Recently, we surveyed the scientific literature for research findings on usability testing of mobile augmented reality applications for cultural heritage (Tiriteu and Vert 2020). One of the conclusions was that most of the surveyed papers addressed the usability of their applications via questionnaires, and only a small number of studies employed other usability testing methods (e.g., focus groups, user testing, interviews). This means that usability evaluation of mobile augmented reality applications for cultural heritage needs a broader and more multifaceted usability framework. The same conclusion was reached in another study (Konstantakis and Caridakis 2020), where the authors pointed out the need to design novel evaluation metrics to support user experience evaluation of current cultural heritage applications.

Other researchers studied how to translate tourist requirements into mobile augmented reality application engineering (Han et al. 2019) and proposed a general user experience model for augmented reality applications in urban heritage tourism (Han et al. 2019). Vi et al. (2019) performed a thorough analysis of various resources from research and industry and developed a set of eleven user experience guidelines for designing extended reality applications for head-mounted displays. Although our research targets handheld augmented reality applications, we found their guidelines to be the concept closest to our research.

3 Methodology

Our methodology for establishing user experience guidelines for designing mobile augmented reality storytelling applications with historical images in the context of urban cultural heritage consisted of a mix of research methods: literature review, competitive analysis, semi-structured interviews, and usability evaluations.

The literature review was performed on academic databases (Web of Science, IEEE, Scopus, Google Scholar), using appropriate keywords, to derive user experience aspects related to the intersection of augmented reality technologies and cultural heritage. In addition, we reviewed recommendations for developing augmented reality applications from key players (Google, Apple, Wikitude, Vuforia) and case studies from independent user experience experts.

Next, we performed a competitive analysis of similar applications, such as StreetMuseum and StreetMuseum Londinium from London, Chicago 0,0 Riverwalk from Chicago, or Augmented Reality by PhillyHistory.org from Philadelphia (Vert 2021). Our research was limited by the fact that we could not test the applications in the geographical areas where the augmented reality experiences could be triggered, and because some of the applications were no longer available in the app stores.

To find out user needs, expectations, and past experiences, we also performed 10 semi-structured interviews with technology-savvy tourists, city tourism and cultural heritage stakeholders, artists, augmented reality developers, and user experience experts.

Our research was complemented by a usability evaluation of the first iteration of our Spotlight Timisoara AR app, which consisted of mini semi-structured interviews, user observation sessions with a think-aloud protocol, System Usability Scale questionnaires, and Product Reaction Cards. The evaluation was done with 5 participants, in the form of an on-site evaluation, in front of landmarks where augmented reality experiences were available within the application. (The scoring of the System Usability Scale questionnaires is sketched below.)
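For readers unfamiliar with how System Usability Scale questionnaires are scored, the following is a short sketch of the standard SUS computation (odd items contribute score − 1, even items contribute 5 − score, and the sum is scaled by 2.5 to a 0–100 range). The example responses are invented for illustration; they are not data from our evaluation.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution: score - 1);
    even-numbered items are negatively worded (contribution: 5 - score).
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd item)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Invented example: one participant's questionnaire.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```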
4 Findings

The following guidelines were developed using the methodology explained above and are demonstrated in an improved prototype of the Spotlight Timisoara AR application.

4.1 User Safety in Public Urban Areas

Experiencing AR in crowded urban areas can be risky, since these applications tend to be entertaining and distracting. Such applications should warn users of the possible consequences of not being fully aware of their surroundings (Fig. 1a). In this sense, they should also break long AR experiences into shorter ones, so that users are prompted to check their surroundings more often.

4.2 Gentle Guidance of the User

The AR application should guide users in a way that does not restrict their view of the real world or their current task. The limited display area of the smartphone needs to be well managed. For example, the AR application can use soft visual guiding arrows to indicate directions (Fig. 1b), as in the sketch below.
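As an illustration of such guidance, the following is a minimal sketch of the math behind an on-screen guiding arrow: given the device's compass heading and the bearing from the user to the next point of interest, it computes the signed angle by which to rotate the arrow. The function names and the flat-Earth bearing approximation are our own simplifications, not part of the Spotlight Timisoara AR implementation.

```python
import math

def bearing_to_target(lat: float, lon: float,
                      target_lat: float, target_lon: float) -> float:
    """Approximate compass bearing (degrees, 0 = north) from user to target.

    Uses a local flat-Earth approximation, which is adequate for the
    few-hundred-metre distances typical of urban AR tours.
    """
    d_lat = target_lat - lat
    d_lon = (target_lon - lon) * math.cos(math.radians(lat))
    return math.degrees(math.atan2(d_lon, d_lat)) % 360.0

def arrow_rotation(device_heading: float, target_bearing: float) -> float:
    """Signed rotation for the guiding arrow, in degrees.

    0 means 'straight ahead'; positive means the arrow points right,
    negative means it points left. Result is normalized to (-180, 180].
    """
    diff = (target_bearing - device_heading) % 360.0
    return diff if diff <= 180.0 else diff - 360.0

# Invented example: user in central Timisoara, device heading east (90 degrees).
bearing = bearing_to_target(45.7578, 21.2287, 45.7594, 21.2269)
print(round(arrow_rotation(90.0, bearing)))  # negative -> arrow points left
```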