4 Summary and Outlook

In industry virtual reality scenarios, the geometry for the scenario is usually provided by CAD or modelling tools. In this paper we have presented our approach for authoring the data preparation chain for geometry processing and behaviour enrichment. The behaviour enrichment of large amounts of objects is handled through a general object selection mechanism and an iteration mechanism allowing processing of homogeneous objects. The framework enables users to quickly set up interactive virtual reality scenarios. Once a processing chain is set up, it can be applied to updated versions of the geometry and to other base geometries with little effort.

In future we plan to extend our approach in different directions. One direction is to allow summarizing components with integrated modules into high-level building blocks, providing a higher level of abstraction to the user, while still supporting fine-grained data processing modules. Another direction is to ease the authoring process by increasing the graphical capabilities. Currently, selecting objects from an X3D scene must be done manually, e.g. by name. Graphically presenting a loaded X3D scene and allowing selection to happen through the visual representation would further ease the authoring process.
AR-Based Vehicular Safety Information System for Forward Collision Warning

Hye Sun Park and Kyong-Ho Kim

Human-Vehicle Interaction Research Center, ETRI, Daejeon, Korea
[email protected]

Abstract. This paper proposes an AR (augmented reality) based vehicular safety information system that provides warning information allowing drivers to easily avoid obstacles without being visually distracted. The proposed system consists of four stages: fusion data based object tracking, collision threat assessment, AR registration, and a warning display strategy. It is shown experimentally that the proposed system can predict the threat of a collision from a tracked forward obstacle even during the nighttime and under bad weather conditions. The system can provide safety information for avoiding collisions by projecting information directly into the driver's field of view. The proposed system is expected to help drivers by conveniently providing safety information and allowing them to safely avoid forward obstacles.

Keywords: AR (augmented reality), vehicular safety information, forward collision, warning system, data fusion, object tracking, threat assessment, warning strategy.

1 Introduction

To avoid collisions with stationary obstacles, other moving vehicles, or pedestrians, drivers have to be aware of the possibility of a collision and be ready to start braking early enough. In addition, when following other vehicles, drivers need to keep a safe distance to allow for proper braking. An understanding of how drivers maintain such a safe distance, the type of visual information they use, and what visual factors affect their performance is clearly important for improving road safety. A driver has to rely on direct visual information to know how rapidly they are closing in on a forward vehicle. Therefore, if this information is poor, there is a danger of the driver not braking sufficiently in time. In addition, a system for the rapid detection of neighboring objects such as vehicles and pedestrians, a quick estimation of the threat of an obstacle, and a convenient way to avoid predicted collisions is needed. Automobile manufacturers are highly concerned about problems related to motor vehicle safety, and are making greater efforts to solve them, for example, adaptive cruise control (ACC) [1], antilock brake systems (ABSs) [2, 3], collision-warning systems (CWSs) [4], and emergency automatic brakes (EABs). AR-based driving support systems (AR-DSSs) have also been recently developed [5, 6]. These AR-DSSs differ from traditional in-vehicle collision avoidance systems (CASs) in that they provide warning signals overlapping with real physical objects. Compared to a
traditional CAS, an AR-DSS attempts to support the direct perception of merging traffic rather than the generation of a warning signal. Therefore, an AR-DSS can lower the switching costs associated with a traditional CAS by providing signals that align with a driver's perceptual awareness [5]. Accordingly, an active visual-based safety information system for preventing collisions has become one of the major research topics in the field of safe driving. Thus, this paper proposes an AR-based vehicular safety information system that provides visual-based collision warning information matched to the driver's viewpoint. We expect that the proposed system will contribute significantly to a reduction in the number of driving accidents and their severity.

2 AR-Based Vehicular Safety Information System

The proposed system consists of four stages: fusion data based object tracking, collision threat assessment, AR registration, and a warning display strategy. An I/O flowchart of the proposed system is presented in Fig. 1. Once a driver starts driving, the system continuously detects and tracks forward objects and classifies the collision threat level. Simultaneously, the system tracks the driver's eye movement and presents potential collision threats on a see-through display; the results are then matched with the driver's visual viewpoint to help the driver identify and avoid obstacles. Unlike a conventional ABS, the goal in this paper is to provide the forward-object location based on the driver's viewpoint through an interactive AR design for maintaining a safe distance from forward objects and preventing collisions. Thus, two modules, i.e., collision threat assessment and the warning display strategy, are described in detail below.

Fig. 1. The proposed system configuration

2.1 Fusion Data Based Object Tracking

To track forward objects accurately and robustly, the proposed system uses both video and radar information, which provide important clues regarding the ongoing traffic activity in the driver's path. Fig. 2 shows how tracking is performed based on fusion data.

Fig. 2. Sensor data fusion for object tracking

In vision-based object tracking, all objects on a road can be deemed potential obstacles. In this system, we first extract all obstacles using their geometric properties. We also classify them into significant and less significant objects, which are triggered under certain circumstances. Significant objects are obtained using specialized detectors (i.e., vehicle and pedestrian detectors) [7, 8]. In low-visibility environments, the proposed system detects multiple forward obstacles using three radar systems, and then recognizes and classifies them based on fusion with a night-vision camera through the processing shown in the flowchart in Fig. 2.
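The paper gives the fusion pipeline only as a flowchart, so the following Python sketch merely illustrates the general idea behind such a step: confirming vision-based detections with radar range measurements through a simple nearest-neighbour association. The function names, data layout and gating threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical minimal fusion step: associate each vision detection with the
# nearest radar return, gated by a distance threshold. A detection confirmed
# by radar keeps the (more accurate) radar range; an unconfirmed one is kept
# but flagged, so a later stage can treat it as less reliable.

GATE_M = 2.0  # association gate in metres (illustrative value)

def fuse(radar_ranges, vision_objects):
    """radar_ranges: list of ranges in metres; vision_objects: list of dicts
    with an estimated 'range' (m) and a 'label' ('vehicle'/'pedestrian')."""
    fused = []
    for obj in vision_objects:
        # pick the radar return closest to the vision range estimate
        best = min(radar_ranges, key=lambda r: abs(r - obj["range"]),
                   default=None)
        if best is not None and abs(best - obj["range"]) < GATE_M:
            fused.append({**obj, "range": best, "confirmed": True})
        else:
            fused.append({**obj, "confirmed": False})
    return fused

print(fuse([28.4, 61.0], [{"range": 29.1, "label": "vehicle"}]))
```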
2.2 Collision Threat Assessment

The threat assessment measure of the proposed system is defined in Eq. (1). This measure is based on the basic assumption that a threatening object is in the same lane as the host vehicle and is the closest object ahead. The proposed system estimates the collision possibility using the velocity and the distance between the host vehicle and the obstacle, which is referred to as the TTC (time to collision) in this paper. To measure the TTC, an experimental DB is first generated, and the optimal threshold value is then extracted using this DB.

    TTC = D_{c2o} / V_c                                                  (1)

where D_{c2o} is defined as the distance between the host car and the obstacle, and V_c is the velocity of the host car in Eq. (1).

Table 1. The compiled DB under various conditions

  Type        Driving condition                         The compiled DB
  Vehicle     Velocity (V): 60 km/h                     Acquired images including more
              Distance (D): 100 m                       than 500 vehicles, taken during
              Road type: highway, public road           an 18-hour period
  Pedestrian  Velocity (V): 40 km/h                     Acquired images including more
              Distance (D): 15 m                        than 800 pedestrians, taken
              Road type: crossway, residential street   during a 12-hour period
The proposed system divides a threat into three levels according to the TTC value. To measure the TTC value of each of the three levels, an experimental DB of various driving conditions is first generated, as shown in Table 1. Next, the optimal threshold value is extracted using the DB, as shown in Table 2. In general, the TTC is defined as the time remaining until a collision between two vehicles that will occur if the collision course and speed difference are maintained [9]. The TTC has been one of the well-recognized safety indicators for traffic conflicts on highways [10-12]. However, the proposed system provides warning information fitted to the driver's viewpoint through an interactive AR design, and is applied to public road environments for both vehicles and pedestrians. Therefore, the TTC values used by the proposed system are extracted through various experiments.

Table 2. TTC threshold value of each of the three levels (m/s)

  Level            Vehicle      Pedestrian
  1 (Danger)       0.0 – 0.3    0.0 – 1.1
  2 (Warning)      0.3 – 0.7    1.1 – 6.0
  3 (Attention)    0.7 – 5.0    6.0 – 10.0
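To show how Eq. (1) and the thresholds of Table 2 combine, here is a minimal Python sketch. The threshold bands mirror Table 2 as rendered above; since the table layout is partly garbled in this copy, treat the exact numbers as an assumption rather than the authors' values.

```python
def time_to_collision(d_c2o_m, v_c_mps):
    """Eq. (1): TTC = D_c2o / V_c (distance to obstacle over host velocity)."""
    if v_c_mps <= 0:
        return float("inf")  # host not moving: no closing, no collision time
    return d_c2o_m / v_c_mps

# Threshold bands per Table 2 as reconstructed above (assumed values).
LEVELS = {
    "vehicle":    [(0.0, 0.3, 1), (0.3, 0.7, 2), (0.7, 5.0, 3)],
    "pedestrian": [(0.0, 1.1, 1), (1.1, 6.0, 2), (6.0, 10.0, 3)],
}

def threat_level(obstacle_type, ttc):
    for lo, hi, level in LEVELS[obstacle_type]:
        if lo <= ttc < hi:
            return level  # 1 = Danger, 2 = Warning, 3 = Attention
    return None  # beyond the Attention band: no warning issued

ttc = time_to_collision(30.0, 15.0)       # e.g. 30 m ahead at 15 m/s -> 2.0
print(ttc, threat_level("vehicle", ttc))  # 2.0 -> level 3 (Attention)
```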
2.3 AR Registration

For the registration, the calibration parameters are generated offline through an expression of the relations among the three coordinate systems of the vehicle, driver, and display. The system then detects and tracks the driver's head and eyes in real time. The coordinates of the target objects are transformed into display coordinates matching the driver's viewpoint. A flowchart of this AR registration module is shown in Fig. 3.

Fig. 3. AR registration module
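The paper gives this module only as a flowchart, so the following is a minimal geometric sketch of the core idea, under the simplifying assumption that the eye position, the obstacle position and the display plane are already calibrated into one common vehicle frame: the obstacle is mapped to 2D display coordinates by intersecting the eye-to-obstacle ray with the display plane. All coordinates and names below are hypothetical.

```python
import numpy as np

# Minimal registration sketch (not the authors' calibration): intersect the
# eye->object ray with the plane of a see-through display. display_x and
# display_y are assumed to be orthonormal in-plane axes of the panel.

def to_display(eye, obj, display_origin, display_x, display_y):
    normal = np.cross(display_x, display_y)        # display plane normal
    ray = obj - eye                                # eye -> object direction
    denom = ray @ normal
    if abs(denom) < 1e-9:
        return None                                # ray parallel to the panel
    t = ((display_origin - eye) @ normal) / denom  # ray/plane intersection
    if t <= 0:
        return None                                # panel behind the eye
    hit = eye + t * ray
    rel = hit - display_origin                     # 2D panel coordinates (m)
    return float(rel @ display_x), float(rel @ display_y)

eye = np.array([0.0, 0.0, 1.2])      # driver eye position (vehicle frame)
obj = np.array([30.0, 1.0, 0.8])     # tracked obstacle position
origin = np.array([0.8, -0.4, 1.0])  # display corner (vehicle frame)
u = np.array([0.0, 1.0, 0.0])
v = np.array([0.0, 0.0, 1.0])
print(to_display(eye, obj, origin, u, v))
```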
2.4 Warning Display Strategy

To improve the driver's cognition of the displayed information, an interactive UX design is needed. For this, the information provided should not only be easier to understand, but also more intuitively acceptable, by considering the driver's characteristics, the type of information provided, and the driving conditions. The system expresses information differently depending on both the threat level from the previous module and the study results from [13]. Table 3 shows the AR-display design and a representative scene on the see-through display according to the three levels for each obstacle type. In the AR-display design, the color and line thickness are set based on the ISO rules [14]. In addition, the design type was determined through the study results of [13] and the HUD concept design in [15].

Table 3. AR-display design for the three levels of each obstacle type (for each level of the vehicle and pedestrian rows, the original table pairs a design type, Type 1 to Type 3, with a real scene and its display image)

3 Experiment Results

To provide driving-safety information using the proposed AR-HUD, various sensors and devices were attached to the experimental test vehicle, as shown in Fig. 4. The two cameras used for forward obstacle recognition are GS2-FW-14S5 models from Point Grey Research Co., which are 12 mm cameras with a resolution of 1384 x 1036 and can obtain images at a speed of 30 fps. In addition, we used IEEE 1394b for the interface. To cover multi-target tracking, two SRRs (short-range radar) and one LRR (long-range radar) are used in environments with poor visibility, such as under rainy conditions and at night. Both radar models are a Delphi ESR at 77 GHz with a CAN interface. The IR camera is a PathfindIR model from FLIR Co., and has a resolution of 320 x 240 with a speed of 30 fps using an RCA interface.

Fig. 4. Experimental test vehicle

To show our AR-based vehicle safety information system, we used a 22-inch transparent Samsung LCD display with a transparency of 20%. This LCD display has low transparency, and thus AR-based vehicle safety information cannot be seen very well at nighttime. To solve this problem, it is necessary to develop a large-area transparent OLED-based display. Fig. 5 shows images of pedestrians detected by the proposed system based on the estimated optimal TTC value shown on the display.

Fig. 5. Experiment results: (A) stopping in a crosswalk, (B) jaywalkers crossing a public road, (C) jaywalkers crossing a residential street

To evaluate each module, the experimental DB was generated from various driving environments, including a simulated road environment and actual roads (a highway, public roads, and residential streets). For vehicle recognition in the daytime, a total of 10,320 frames were obtained from the experimental stereo camera. For pedestrian recognition, a total of 3,270 frames were acquired. Furthermore, a total of 5,400 frames were obtained from the IR camera for recognition of both vehicles and pedestrians during the nighttime. Fig. 6 shows the real road test region. As indicated in Fig. 6, the test region includes public roads, residential streets, and crossways for recognition of both vehicles and pedestrians in the daytime. In contrast, Fig. 7 shows the test-bed used for obstacle recognition during the nighttime.
Fig. 6. Experiment test region on real roads

Fig. 7. Experiment test-bed for nighttime recognition

The recognition rate of the driving-safety information obtained by the proposed system during the daytime is 85.01%, and the system has a recognition speed of 15 fps for both vehicles and pedestrians. During the nighttime, the recognition rate and recognition speed are 77% and 10 fps for both vehicles and pedestrians.

4 Conclusions

This paper proposed an AR-based vehicular safety information system for forward collision warning. This paper showed that 1) a forward obstacle can be successfully detected and tracked by fusing radar and two types of vision data, 2) fusion-based forward obstacle tracking is robust compared to single-sensor based obstacle detection, and objects can be reliably detected, 3) collision threat assessments can be efficiently classified into threat levels by measuring the collision possibility of each obstacle, 4) AR registration can provide warning information without visual distraction by matching the driver's viewpoint, and 5) a warning strategy can conveniently provide safety information considering both the obstacle and human-vision attributes. The experiment results show that the proposed system achieves an 81.01% recognition rate. We expect that the proposed system will provide suitable information according to the driver's viewpoint as a way to reduce traffic accidents.
Acknowledgement. This work was supported by the Industrial Strategic Technology Development Program, Development of Driver-View based in-Vehicle AR Display System Technology Development (10040927), funded by the Ministry of Knowledge Economy (MKE, Korea). The first author would like to thank the institutes that participated in this study for their helpful experiments.

References

1. Ganci, P., Potts, S., Okurowski, F.: A forward looking automotive radar sensor. In: IEEE Intelligent Vehicles Symposium, Detroit, USA, pp. 321-325 (September 1995)
2. Lin, C.M., Hsu, C.F.: Neural-network hybrid control for antilock braking systems. IEEE Transactions on Neural Networks 14(2), 351-359 (2003)
3. Mirzaei, A., Moallem, M., Mirzaeian, B.: Optimal design of a hybrid controller for antilock braking systems. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Monterey, CA, USA, pp. 905-910 (July 2005)
4. Dagan, E., Mano, O., Stein, G.P., Shashua, A.: Forward collision warning with a single camera. In: IEEE Intelligent Vehicles Symposium, Parma, Italy, pp. 37-42 (June 2004)
5. Fu, W.T., Gasper, J., Kim, S.W.: Effects of an in-car augmented reality system on improving safety of younger and older drivers. In: International Symposium on Mixed and Augmented Reality, Adelaide, Australia, pp. 59-66 (October 2013)
6. Ferreira, M., Gomes, P., Silveria, M.K., Vieira, F.: Augmented reality driving supported by vehicular ad hoc networking. In: International Symposium on Mixed and Augmented Reality, Adelaide, Australia, pp. 253-254 (October 2013)
7. Park, H.S., Park, M.W., Won, K.H., Kim, K.H., Jung, S.K.: In-vehicle AR-HUD system to provide driving-safety information. ETRI Journal 35(6), 1038-1047 (2013)
8. Won, K.H., Jung, S.K.: Billboard sweep stereo for obstacle detection in road scenes. Electronics Letters 48(24), 1528-1530 (2012)
9. Hayward, J.C.: Near miss determination through use of a scale of danger. Highway Research Board (384), 24-34 (1972)
10. Farah, H., Bekhor, S., Polus, A.: Risk evaluation by modeling of passing behavior on two-lane rural highways. Accident Analysis and Prevention 41(4), 887-894 (2009)
11. Svensson, A.: A method for analyzing the traffic process in a safety perspective. Doctoral Dissertation, University of Lund, Lund, Sweden, vol. 166, pp. 1-174 (October 1998)
12. Vogel, K.: A comparison of headway and time to collision as safety indicators. Accident Analysis and Prevention 35(3), 427-433 (2003)
13. Park, H., Kim, K.-h.: Efficient information representation method for driver-centered AR-HUD system. In: Marcus, A. (ed.) DUXU 2013, Part III. LNCS, vol. 8014, pp. 393-400. Springer, Heidelberg (2013)
14. ISO International Standards, ISO 15008: Road vehicles - Ergonomic aspects of transport information and control systems - Specifications and compliance procedures for in-vehicle visual presentation (February 11, 2009)
15. BMW Concept Video, BMW Head Up Display HUD - Simulation Augmented Reality, http://www.youtube.com/watch?feature=player_detailpage&v=33DME4SHTSI
An Augmented Reality Framework for Supporting and Monitoring Operators during Maintenance Tasks

Guido Maria Re and Monica Bordegoni

Politecnico di Milano, Dipartimento di Meccanica, Via La Masa, 1, 20156 Milano, Italy
{guidomaria.re,monica.bordegoni}@polimi.it

Abstract. The paper proposes a framework for supporting maintenance services in industrial environments through the use of a mobile device and Augmented Reality (AR) technologies. 3D visual instructions about the task to carry out are represented in the real world by means of AR and they are visible through the mobile device. In addition to the solutions proposed so far, the framework introduces the possibility to monitor the operator's work from a remote location. The mobile device stores information for each maintenance step that has been completed and makes it available on a remote database. Supervisors can consequently check the maintenance activity from a remote PC at any time. The paper also presents a prototype system, developed according to the framework, and an initial case study in the field of the food industry.

Keywords: Augmented Reality, Framework, Maintenance tasks, Remote Supervision.

1 Introduction

Maintenance operations in a factory are necessary duties in order to provide continuous functioning of the machinery and of the production. In several cases, operators are trained in order to acquire the skills necessary to intervene on the machines on a scheduled time and to operate by following proper procedures. However, the outcome and the time planned to achieve these maintenance operations are always uncertain. The uncertainties are due to difficulties that the operator must face to complete the maintenance task, such as the functional and mechanical complexity of the machine.

The level of uncertainty increases when the maintenance operation is not routine work, because it is a fortuitous or compelling event that the operator is not used to carrying out. In other cases, an operator carries out a maintenance task even though his background is not sufficient to accomplish it autonomously or accurately, such as when the operator gets confused because he deals with several similar machines, or when an unskilled operator performs the task. This usually happens to avoid the intervention of an expert operator, which could be costly and require a long wait. Thus, an instruction manual traditionally supports the operator in accomplishing the maintenance activity.

However, maintenance operations accomplished with lack of depth or without complying with the protocols could lead to functioning problems of the machine.
A malfunctioning machine can be dangerous to the people working in the factory, or it can lead to additional maintenance due to unexpected machine failures. From these considerations it turns out that the complexity of the machine, inexperience, negligence and the human predisposition to errors affect the maintenance effectiveness. Consequently, these issues negatively influence a machine, by affecting its production in a plant and its working life, and, in a long-term perspective, they lead to an increase in industrial costs.

According to the above-mentioned considerations, current research trends are oriented towards reducing maintenance costs by improving the operator's performance at work. In particular, one of the main trends aims at reducing the time and money spent on training, by providing supports for instructions that are more accessible and easy to understand also for unskilled operators.

A great advantage in the field of maintenance is offered by Augmented Reality (AR), which is an emerging technology coming from Computer Science. AR enables the user to see and interact with virtual contents seamlessly integrated in the real environment [1, 2]. In the case of maintenance operations, the virtual contents are the instructions to perform, which can be represented as text or as three-dimensional objects. Hence, the instructions are provided in a way that is more direct, accessible and easy to understand than the approach based on a traditional paper manual.

In this research work, the authors describe a framework that aims at extending the AR solutions for supporting maintenance tasks proposed so far. The framework combines a method to provide maintenance instructions to the operator by means of a mobile device with a solution to record and monitor the performed tasks in a remote location. The mobile device shows the instructions to the operator by using AR and, at the same time, sends data and pictures regarding the ongoing maintenance task to a remote PC through a wireless network. The advantage of this framework is twofold. Operators have an intuitive support for maintenance at their disposal, while supervisors can visualize the maintenance history of a machine and check the operators' work remotely. In particular, the remote PC can be used to evaluate whether the tasks have been carried out in accordance with the protocols, whether the operator made a mistake, and whether the maintenance has been accomplished on schedule.

The paper is organized as follows. The most relevant research works carried out in the field of AR for maintenance are reported in Section 2. Then, the developed framework is described in Section 3, while an initial case study is presented in Section 4. The paper ends with a discussion and an outlook on future developments.

2 Background

AR technology has been successfully experimented with in the field of maintenance [3], and nowadays the first industrial cases and applications are coming out [4]. The advantage of applying AR in this field is the reduction of the operator's abstraction process to understand the instructions. In fact, the instructions are represented by means of virtual objects directly within the real world, so that paper manuals are no longer required. Comparative tests demonstrated the improvement of the operator's work in some manual activities by using AR in comparison with other supports for providing instructions. Tang et al. demonstrated how AR reduces user errors during manual
operations and the mental workload to understand a given task [5]. Henderson and Feiner showed that AR reduces the time to understand, localize and focus on a task during a maintenance phase [6]. In summary, AR increases the effectiveness of the operator's activity and consequently speeds up the whole workflow.

Many AR applications conceived for conducting maintenance operations are based on immersive visualization devices, such as Head Mounted Displays (HMDs). The first research focused on maintenance using AR was carried out within the context of the KARMA project [7], which provided maintenance instructions for a laser printer through an HMD tracked by an ultrasonic system. A case study in the automotive domain is described in [8] for the doorlock assembly into a car door, while an immersive AR support is proposed for a military vehicle in [9]. Lastly, the immersive AR solution presented in [10] enables the operator to manipulate maintenance information by means of an on-site authoring interface. In this way, it is possible to record and share knowledge and experience on equipment maintenance with other operators and technicians.

However, HMDs have ergonomic and economic issues that impede their wide deployment in industry, even though they are an effective means to give AR instructions. They are a relatively expensive technology that does not provide a good compromise between graphic quality and comfort for the user [11]. Moreover, their use is unsuitable for long periods, such as an entire working day [12].

Mobile devices are currently the most interesting support for AR applications. Billinghurst et al. evaluated the use of mobiles as an AR support for assembly purposes [13]. Klinger et al. created a versatile mobile AR solution for maintenance in various scenarios and tested it in a nuclear power plant [14]. Ishii et al. also tackled maintenance and inspection tasks in this very delicate environment in [15]. As a negative aspect, mobile AR has the disadvantage of reducing the manual ability of the operator during its use, compared with the HMD case, since the operator has to hold the device. For this reason, Goose et al. proposed the use of vocal commands in order to obtain location-based AR information during the maintenance of a plant [16]. Nevertheless, from an industrial point of view, mobile devices are currently the most attractive and promising solution for supporting maintenance tasks by means of AR. They are powerful enough to provide augmentation, and they are cheap and highly available on the market, due to their high-volume production for the mass market. In addition, since these devices are easy to handle and carry, and they are currently present in everyday life, they are considered more socially acceptable than HMDs.

This work aims at extending the use of AR in maintenance by monitoring the operator's activity from a remote location. Some research works have partially dealt with this idea by integrating AR in tele-assistance. Boulanger demonstrated it by developing an immersive system for collaborative tele-training on how to repair an ATM machine [17]. Reitmayr et al., instead, integrated a simultaneous localization and mapping (SLAM) system in an online solution for annotations in unknown environments [18]. The integration of mobile devices into maintenance activities increases the effectiveness
of remote assistance by experts because it is as if the expert were collaborating on-site.

The research described in this paper distinguishes itself from the others because it presents a framework to record the work done for future monitoring. Actually, the cited works focus only on the support and supervision of the operator in real time, and they do not allow recording his work in order to check it afterwards. By the time this paper was written, only Fite-Georgel et al. had proposed research with a similar approach to monitor the accomplished work [19]. Their solution is a system to check undocumented discrepancies between the designed model of a plant and the final object. However, it works only offline and it has not been conceived for maintenance purposes.

3 Framework Description

The developed framework enables the visualization of maintenance instructions through a mobile device and the remote monitoring of the accomplished work. In this section, an overview of the framework is first given, and subsequently the two main modules of which the system is made up are described in detail.

3.1 Overview

Figure 1 provides a schematic representation of the framework and shows its two modules. The first one is the Maintenance Module, and it is based on an AR solution to display the instructions to accomplish on a mobile device. Thus, the instructions, which are traditionally provided by a paper manual, are stored in a database as digital information and loaded automatically by the AR solution when they are required. For each maintenance step, the module saves data about how the maintenance operation is going and, if the device is connected to a Wi-Fi network, sends them to a remote storage server.

The data stored on the server are visible at any time by means of the Monitoring Module. In this way, a supervisor can check the entire maintenance history carried out on a specific machine.

Fig. 1. Schematic representation of the framework
3.2 Maintenance

The Maintenance Module is basically an application that provides a mobile AR visualization of the instructions. Besides this, a Wi-Fi client is integrated in the application, and it sends the data about each accomplished step to the remote server.

Figure 2 shows the tasks of the Maintenance Module. The camera embedded in the mobile device frames the machine that requires the maintenance service and provides a video stream to the AR application. Specific algorithms estimate the position of the camera with respect to the mechanical component from the video stream. This task, also referred to as tracking, allows the module to represent the virtual contents precisely in the real world, with proper perspective and spatial coherency.

The instructions for the tasks to carry out are stored in configuration files and loaded during the initialization of the AR application. They are textual information about how to perform the task and the spatial position of the machine components on which the operator should intervene. The instructions are rendered graphically, using also the tracking data. The graphic result is superimposed onto the video stream and shown on the display of the device.

Once a step is finished, the module automatically saves the maintenance information and makes it available to the remote PC through the Wi-Fi client.

Fig. 2. The tasks that the Maintenance Module performs in order to provide the AR visualization and the data for the remote database

Data Communication. Every time the operator presses the button to move to the next maintenance task, the application saves the data of the last operation concluded and sends them to the remote server. This approach, if scaled to several AR mobile maintenance devices, forms a cloud-computing network, which is referred to as the cloud in this work.
The transmitted data are two pictures of the machine at the end of the task plus additional textual information. The two pictures capture the same view and differ in that one also shows the augmented content. In this way, the supervisor can check the correctness of the operation by comparing it against the AR instructions as well. The other pieces of information are complementary data that complete the description of the operation, and they are the following:

• Operator's name
• Date
• Time
• Machine ID number
• Typology of maintenance (ordinary, extraordinary, intervention for breakdown/error)
• Name of the maintenance operation
• Step number
• Step description
• Time taken to execute the task

These data are saved in a file and organized according to a simple XML-like structure, so as to have an effective communication protocol to exchange information between the Maintenance and Monitoring Modules.
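Since the paper specifies the fields but not the tag names or file layout, the following Python sketch shows one plausible encoding of a single step record; every tag, value and file name below is an assumption made for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical per-step record mirroring the field list above. The paper
# only says the structure is "XML-like"; this layout is an assumption.
step = {
    "operator": "M. Rossi", "date": "2014-03-12", "time": "14:05:31",
    "machine_id": "PKG-07", "maintenance_type": "ordinary",
    "operation": "Blade cleaning", "step_number": 3,
    "description": "Remove the blade guard", "duration_s": 42,
}

root = ET.Element("maintenance_step")
for tag, value in step.items():
    ET.SubElement(root, tag).text = str(value)
# the two photographs (plain and augmented) are referenced, not embedded
ET.SubElement(root, "photo", kind="plain").text = "step03_plain.jpg"
ET.SubElement(root, "photo", kind="augmented").text = "step03_ar.jpg"

ET.ElementTree(root).write("step03.xml", encoding="utf-8",
                           xml_declaration=True)
```

A record of this shape is what the Monitoring Module's parser, described next, would read back from the cloud storage.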
3.3 Monitoring

The Monitoring Module consists of a software application that allows the supervisor to check the maintenance data stored in the cloud. A parser retrieves the maintenance data saved in the XML files and makes them available to the module. Then, a GUI collects all the data and enables the supervisor to visualize and navigate through the pictures and the maintenance information of each task.

4 Case Study

The case study presented in this section describes an initial test of the framework within the context of the food industry. In particular, the machine used for the study is dedicated to food packaging, and it requires particular attention and a periodic maintenance service in order to provide a safe packaging process for the product.

Several maintenance operations on this kind of machine must be carried out daily, for hygienic reasons. Food companies usually involve normal operators, without any particular skills or knowledge about the machine, to take care of it. The reason lies in the necessity to avoid the constant need for a skilled operator, but it involves a higher risk of uncertainty in the outcome of the operation.

The system developed according to the framework is described in the following. Then, the case study is presented.

4.1 System Description

The system used for the case study is described here according to the two modules of the framework. This section takes into account both hardware and software components.

Maintenance. The Maintenance Module used by the operator is an AR application, consisting of a GUI designed for maintenance purposes, which runs on a mobile device. The mobile device used for this case study is a Windows-based tablet PC. The tablet is equipped with a 1.80 GHz processor, 2 GB RAM, a 10.1 in color touch-screen display and a 640x480 camera working at 30 Hz.

Once the operator has selected the right maintenance service to perform on the machine through the GUI, the application provides him with the AR instructions. All the tasks have been taken directly from the paper manual, while the machine components, which are visible as augmented contents in the scene, have been exported from the CAD model of the machine. Each set of instructions for a specific maintenance service is saved in a separate file.

A very stable marker-based tracking solution has been chosen to detect the camera pose and subsequently to properly represent the virtual content in the real environment. Thus, tracking is performed by placing squared, black-and-white markers on the machine. The tracking algorithms are from the library called ARToolKitPlus [20]. These algorithms detect the markers placed in the environment and retrieve the camera pose by means of mathematical considerations on the four corners of each marker in the scene.

The visualization of the AR contents in the real world is a merging process between the video stream of the camera on the tablet and the virtual objects, which are rendered with the right perspective according to the tracking data. OpenSceneGraph¹ is the computer graphics library used for this purpose, and it updates the visualization at every new camera frame.

¹ OpenSceneGraph library: http://www.openscenegraph.org/

The interface has been specifically designed for AR use on a mobile device. Thus, some considerations regarding how to best represent and manage the AR content for the operator have been taken into account. Actually, the instructions to execute a task have to be provided by the system in a way that is simple to understand and interact with. The guidelines presented in [21] have been used as a starting point. Figure 3 shows the result of the following considerations.

The first consideration regards the visibility of the virtual objects in the working environment. The objects are 3D or 2D elements, and each of them has a precise purpose, such as indicating the point on which the user has to work or showing the action to perform. For this reason, animations applied to them in order to show how to deal with a component can increase the understanding of the user. In addition, these virtual elements must be easy for the operator to recognize in the scene.
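The marker-based tracking and rendering described above run once per camera frame. The following schematic Python sketch shows that per-frame flow; grab_frame(), detect_marker() and draw_overlay() are stand-ins for the camera driver, the ARToolKitPlus detector and the OpenSceneGraph renderer (both C++ libraries), and are stubbed here, so this is purely illustrative rather than the authors' implementation.

```python
# Schematic AR loop of the Maintenance Module: grab a frame, estimate the
# camera pose from a fiducial marker, then render the step's instruction
# anchored to that pose. All three helpers below are illustrative stubs.

def grab_frame():
    return "frame"  # placeholder for a 640x480 camera image

def detect_marker(frame):
    # stand-in result: a marker id and a 4x4 camera pose, or None if lost
    return 7, [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -0.5], [0, 0, 0, 1]]

def draw_overlay(frame, pose, instruction):
    print(f"render '{instruction}' at pose row 3 = {pose[2]}")

# hypothetical mapping from marker ids to the current step's instruction
instructions = {7: "Unscrew the four bolts of the guard"}

def ar_step():
    frame = grab_frame()
    detection = detect_marker(frame)
    if detection is None:
        return  # marker lost: show the plain video stream only
    marker_id, pose = detection
    if marker_id in instructions:
        draw_overlay(frame, pose, instructions[marker_id])

ar_step()
```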