Increasing the Transparency of Unmanned Systems
R. Kilgore and M. Voshell

Fig. 1. (a) When key planning constraints are spatial in nature, automated planner outputs may be intuitively presented in a map display; (b) However, when key constraints are not directly spatial (e.g., the travelling speed of a vehicle; the efficacy of onboard sensor payloads), map-based displays of automated outcomes are much less intuitive

In geospatial displays that use standard military symbology (MIL-STD-2525C; [19]), vehicle icons typically encode only spatial locations and gross platform differences (e.g., whether a vehicle is friendly or foe, ground or air-based, fixed-wing or rotary). In this case, to understand how individual vehicle differences have affected an automated tasking response, the operator must perform multiple drill-down searches through vehicle details, for example clicking on individual vehicles to identify their sensor payloads and travel speeds, as in Figure 2(a). With this approach, the operator must mentally consider and compare other vehicles to the one selected by the automation, using a serial exploration process that is time consuming and places a significant load on working memory.

Fig. 2. (a) Traditional drill-down display, with hidden data accessed serially through pop-up windows; (b) An example of an ecological display alternative, with data provided in parallel through explicit visual cues—in this case time-to-location coded as icon size, and sensor efficacy coded as icon opacity

In contrast, ecological display approaches, such as Figure 2(b), can support operators' direct perception of the non-spatial considerations that led to an automated planner's decision—in this case, by visually mapping calculations of sensor/target pairing efficacy (icon opacity) and time-to-target estimates based on platform speed and distance (icon size). This mapping "makes visible the invisible," while also increasing the perceptual salience of the most promising vehicle options (i.e., those that can get to the target both quickly and with ideal equipment). Combined, this enables the operator to rapidly consider alternative choices across the vehicle set in parallel, and intuitively interpret automated planning outcomes.
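To make this kind of mapping concrete, the sketch below shows one plausible way icon size and opacity could be computed from the non-spatial attributes described above. It is an illustrative example only, not the authors' implementation; the class name, method names, and normalisation ranges are all assumptions.

    // Illustrative sketch only: maps non-spatial planning attributes onto icon
    // size (from time-to-target) and opacity (from sensor/target pairing
    // efficacy), in the spirit of Figure 2(b). Ranges are arbitrary assumptions.
    public final class IconStyle {
        public final double sizeScale; // 1.0 = nominal icon size
        public final double opacity;   // 0.0 (transparent) .. 1.0 (opaque)

        public IconStyle(double sizeScale, double opacity) {
            this.sizeScale = sizeScale;
            this.opacity = opacity;
        }

        /**
         * @param distanceNm      distance from vehicle to target, in nautical miles
         * @param speedKts        vehicle travel speed, in knots
         * @param pairingEfficacy sensor/target pairing efficacy, normalised to 0..1
         */
        public static IconStyle compute(double distanceNm, double speedKts,
                                        double pairingEfficacy) {
            double timeToTargetHours = distanceNm / Math.max(speedKts, 1e-6);
            // Shorter time-to-target yields a larger icon; the 0.25..3.0 hour window
            // and the 0.5..2.0 scale range are illustration values only.
            double t = clamp((3.0 - timeToTargetHours) / (3.0 - 0.25), 0.0, 1.0);
            double sizeScale = 0.5 + 1.5 * t;
            // A better sensor/target pairing yields a more opaque icon.
            double opacity = clamp(0.2 + 0.8 * pairingEfficacy, 0.0, 1.0);
            return new IconStyle(sizeScale, opacity);
        }

        private static double clamp(double v, double lo, double hi) {
            return Math.max(lo, Math.min(hi, v));
        }
    }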
While this example focuses on geospatial displays, we have similarly applied a range of chained visual transformations (including manipulations of hue, saturation, blur, and animation effects; see [20]) across mission timelines, asset/task link diagrams, and health and status views.

Beyond visually encoding the key system and environmental attributes that drive automation outcomes, we have also explored methods to visually represent automated behaviors themselves, and particularly the ways in which these may differ across heterogeneous vehicle teams. For example, differences in platform type and onboard sensing and processing capabilities can have a profound impact on how different vehicles within a team respond to abnormal events, such as a lost communications link. While better-equipped vehicles may be able to continue autonomously for some time on a pre-filed course in the absence of communications, it is also typical for many vehicles to continue at their current heading and altitude indefinitely, or to abandon an established flight plan after only a short period of time and proceed directly to a pre-configured emergency landing location.

Fig. 3. (a) A typical display, communicating only the location (and time) of a critical event (e.g., lost communications), and forcing the operator to reason about future vehicle behavior; (b) Example of an ecological display alternative, using explicit visual cues to inform and augment the operator's mental modeling of vehicle state

Unfortunately, if they show anything at all, supervisory control displays often simply reflect the location, and possibly time, of a system state change (e.g., a comms link switching from "active" to "lost"), and not the impact of this event, as in Figure 3(a). This forces the operator to anticipate how the particular vehicle will respond to this new situation and invites significant opportunity for operator surprise in the event of an incorrect or misapplied mental model [1]. In contrast, an ecological approach such as that shown in Figure 3(b) increases system transparency by explicitly representing the processes governing system behavior. In this particular example, the display not only indicates the lost communications event time and location, but also explicitly represents anticipated behavior based on the vehicle's loaded operating protocol (proceeding directly to an emergency landing site), the estimated progress against that plan in the time since the event, and the expected behavior should communications be regained (an immediate redirection to the next waypoint).
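The behavior projection behind such a display can be quite simple. Purely as an illustration, and assuming a lost-link protocol of proceeding straight toward the emergency landing site at a known speed, the estimated progress and time of arrival could be derived roughly as follows; the class, method names, and the straight-line model are hypothetical, not the authors' implementation.

    // Illustrative sketch only: projects expected progress of a vehicle that lost
    // its comms link, assuming its loaded protocol is "proceed directly to the
    // emergency landing site". Names and the straight-line model are assumptions.
    public final class LostCommsProjection {

        /** Fraction (0..1) of the leg from the event location to the emergency
         *  site expected to be completed, given the time elapsed since the event. */
        public static double expectedProgress(double legDistanceNm, double speedKts,
                                              double minutesSinceEvent) {
            double travelledNm = speedKts * (minutesSinceEvent / 60.0);
            return Math.min(1.0, travelledNm / Math.max(legDistanceNm, 1e-6));
        }

        /** Estimated minutes until the vehicle reaches the emergency landing site. */
        public static double minutesToEmergencySite(double legDistanceNm, double speedKts,
                                                    double minutesSinceEvent) {
            double remainingNm = legDistanceNm
                * (1.0 - expectedProgress(legDistanceNm, speedKts, minutesSinceEvent));
            return (remainingNm / Math.max(speedKts, 1e-6)) * 60.0;
        }
    }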
4.2 Presenting Information in Context

Beyond simply increasing the perceptual availability of task-critical information, ecological design techniques emphasize situating this information in context. As with more traditional process control systems [9,10], unmanned system displays benefit when health-and-status data and automated planning outcomes (e.g., available pounds of fuel; engine speed; altitude; time-on-station) are provided against the framing of expected values, nominal minimum/maximum ranges, and critical limits (e.g., total fuel capacity and minimum-remaining fuel requirements; normal and red-line RPM levels; aircraft flight performance envelopes). Additionally, useful temporal context can be provided by showing changes in data values over time (e.g., a trailing graph of engine pressure) or by calculating and then graphically representing instantaneous rates of change ("engine temperature is 280 degrees, but RISING RAPIDLY"). Such visual depictions of range and temporal context aid the operator in interpreting how current system operations compare to expected behaviors and critical safety boundaries, and support timely perception of when such boundaries may be breached.

Supervisory control displays can also be improved by presenting information attributes not only within the context of their own expected limits, but also within the context of other information that pertains to related system functions (with the structure of these relationships identified through the previously described AH modeling process [5,6]). Unfortunately, many supervisory control displays artificially disperse related system information over discrete, stovepiped views (e.g., maps, timelines, health-and-status dashboards), both as a matter of convention and convenience. This approach inadvertently serves to mask critical relationships that occur across view boundaries—for example, relationships such as those between engine RPM, altitude, wind speed, and the aeronautical distance of a mission leg, all of which directly impact fuel consumption and, with it, available time on station.

In contrast, EID methods purposefully seek to integrate these diverse representation modalities within coordinated display perspectives that explicitly reflect these complex relationships. Figure 4(a) shows how such an approach could support common fuel or power management tasks (which are often performed in-the-head during re-planning, relying on heuristics and estimations that are subject to calculation error). The left-most image depicts the estimated fuel to be consumed by each leg of the mission flight (green shaded segments) against the context of overall fuel capacity (full set of squares), the amount of fuel that is currently available (sum of all shaded squares), anticipated fuel reserves (dark grey), and the minimum amount of remaining reserve fuel that is required by mission safety doctrine (red line). If this particular vehicle is allocated to a pop-up task (center image), the fuel cost of this activity is added to the display (indicated by light blue squares) and the total fuel consumption is visibly pushed beyond the minimum safe reserve amount required (indicated by red squares). As the operator directly manipulates elements of a coordinated mission plan display (not shown here, but see Figure 6 for an example)—perhaps by increasing the altitude of the first mission leg and reducing the travel speed of the third—the efficiency gains anticipated by these changes are represented directly within the context of the fuel display. The coordinated behaviors enable the operator to intuitively sense the maximum gains to be had in manipulating attributes of a particular mission leg, as well as when the combined impact of some set of changes is sufficient to overcome the negative impact of the pop-up task on the fuel safety margin.

Fig. 4. (a) Example of an ecological fuel management display (left), showing the relative impact of a pop-up task on available fuel reserves (center), as well as efficiency gains as altitude and time-on-station variables are manipulated in a coordinated flight plan display (not pictured); (b) Example of an ecological mission coordination display, showing the relative impact of two different vehicle retasking options on overall team and mission efficacy as these plan alternatives are selected on a map (not pictured)
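The bookkeeping behind such a fuel display is straightforward. As a hedged illustration (not the authors' implementation), the sketch below sums per-leg fuel estimates plus the cost of a pop-up task and checks the total against the required reserve; the class and method names and the simple linear burn-rate model are assumptions.

    // Illustrative sketch only: sums planned fuel consumption (mission legs plus
    // an optional pop-up task) and checks it against the required reserve, in the
    // spirit of Figure 4(a). Names and the linear burn-rate model are assumptions.
    public final class FuelPlanCheck {

        /** Estimated fuel for one leg, given an assumed burn rate in lbs/hour. */
        public static double legFuelLbs(double legDistanceNm, double speedKts,
                                        double burnRateLbsPerHour) {
            double hours = legDistanceNm / Math.max(speedKts, 1e-6);
            return hours * burnRateLbsPerHour;
        }

        /** True if the planned consumption would cut into the minimum reserve. */
        public static boolean breachesReserve(double fuelAvailableLbs,
                                              double minReserveLbs,
                                              double popUpTaskFuelLbs,
                                              double... legsFuelLbs) {
            double planned = popUpTaskFuelLbs;
            for (double leg : legsFuelLbs) {
                planned += leg;
            }
            return fuelAvailableLbs - planned < minReserveLbs;
        }
    }

In a display like Figure 4(a), adjusting the altitude or speed of a leg would simply change that leg's estimate, and the reserve check would be re-evaluated against the new total.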
In a similar example of context, Figure 4(b) shows a mission coordination display that presents the relative timing of vehicle activities with respect to established goals and windows of opportunity. Continuing the example of the pop-up task, automated recommendations for vehicle retasking strategies (and their resultant path plan modifications) may be depicted in a map view (not shown here). As the operator explores alternative retasking plans by selecting them in the map view, this coordinated display provides a depiction of the relative impact on current vehicle tasking, against the temporal context of acceptable servicing windows (e.g., the time during which the current tasks must be completed for the mission to be of value). In this example, assigning Vehicle A to the new pop-up task (left) results in a delay to the primary mission, but one that is within acceptable bounds. In contrast, assigning Vehicle C not only pushes that vehicle's primary task out of the acceptable window, it also negatively impacts Vehicle B, which must perform a coordinated task within a similar period of time. Such explicit context enables the operator to readily assess automated behaviors.
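The underlying feasibility question (does a retasking option still let every affected task finish inside its servicing window?) can be expressed very compactly. The sketch below is a hypothetical illustration of that check; the names and the minutes-from-now representation are assumptions, not the authors' implementation.

    // Illustrative sketch only: checks a retasking option against acceptable
    // servicing windows, as in Figure 4(b). Times are minutes from now; all names
    // and the representation are assumptions.
    public final class ServicingWindowCheck {

        /** True if one task, delayed by delayMin, still completes before its
         *  servicing window closes. */
        public static boolean staysInWindow(double plannedCompletionMin,
                                            double delayMin,
                                            double windowCloseMin) {
            return plannedCompletionMin + delayMin <= windowCloseMin;
        }

        /** True if every task affected by a retasking option still completes
         *  within its window after the delays the option induces. */
        public static boolean optionIsAcceptable(double[] plannedCompletionsMin,
                                                 double[] delaysMin,
                                                 double[] windowClosesMin) {
            for (int i = 0; i < plannedCompletionsMin.length; i++) {
                if (!staysInWindow(plannedCompletionsMin[i], delaysMin[i],
                                   windowClosesMin[i])) {
                    return false;
                }
            }
            return true;
        }
    }

In the example above, assigning Vehicle A would pass such a check, while assigning Vehicle C would fail it both for C's own task and for Vehicle B's coordinated task.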
4.3 Managing Operator Attention

One of the central design strategies of EID is to create display figures whose emergent visual behaviors—driven by mapping graphical sub-elements of the figures to specific low-level attributes of the dynamic work domain—reflect higher-order system properties [5,6]. When designed well, these mappings (which can be as simple as the scale and opacity icon transformation strategies shown in Figure 2b) modulate the perceptual salience of elements across the display, automatically directing the operator's attention towards critical system process information and causing less critical information to recede into the background.

An example of this salience mapping approach can be seen in Figure 5(a), where a vehicle is traveling out of the range of its primary emergency landing site, at which point the operator must confirm a secondary site. When this transition point is far in the future, the boundary is flagged as a simple stroke across the planned path. As the vehicle approaches this point, however, the stroke gradually grows in size and salience, and additional cues (all of which would otherwise clutter up the display) are incrementally added to increase the salience of the pending alert, clarify the specific nature of the alert type, and recommend a secondary site for selection, as in Figure 5(b).

Fig. 5. Example of an ecological display using variable perceptual salience cues to direct operator attention while managing clutter; (a) a simple marker (green arc) flags an upcoming event boundary; (b) as the vehicle approaches the marker (both in space and in time) the cue becomes more salient and additional information regarding the anticipated automation behavior is provided—in this case signaling that the aircraft is about to head out of range of the primary emergency landing location and the operator must confirm a secondary location

Unfortunately, it is impractical to support all management of operator attention through emergent display features—both because display designs would quickly become overwhelmingly complex, and because not all requirements for directing operator attention can be known a priori. Through our interactions with operators—both during our analyses of the work domain and in our subsequent walkthroughs of our design prototypes—we learned that it is often the relative amount of time until key events (e.g., "check the available fuel and confirm the emergency landing site location when we are five minutes outside of the search area") that is more critical to cueing and directing operator attention than absolute timing (e.g., "check the fuel at 1345"), particularly when the future time in question is not easily calculated from available display information. This is particularly true in directing operators' prospective memory, or the memory to recall and perform a task in the future. Unfortunately, supervisory control displays rarely capture and manage such relative times explicitly. Instead, they must be calculated and then later recalled by the operator, often via physical …
Applying Augmented Reality to the Concept Development Stage
G.M. Mair, A. Robinson, and J. Storr

… system was highly intuitive and allowed for early evaluation of concepts without the need to build prototypes. This, in turn, saves time and money, further adding value for the designer.

The implementation of augmented reality in this context will add great value. As augmented reality technology advances and the capability of mobile devices increases, the value can only grow in the coming years.
Authoring of Automatic Data Preparation and Scene Enrichment for Maritime Virtual Reality Applications

Benjamin Mesing and Uwe von Lukas
Fraunhofer Institute for Computer Graphics Research IGD, 18059 Rostock, Germany

Abstract. When realizing virtual reality scenarios for the maritime sector, a key challenge is dealing with the huge amount of data. Manually adding interactive behaviour to provide a rich interactive experience requires a lot of time and effort. Additionally, even though shipyards today often use PDM or PLM systems to manage and aggregate the data, the export to a visualisation format is not without problems and often requires some post-processing. We present a framework that combines the capability of processing large amounts of data for preparing virtual reality scenarios with the ability to enrich the data with dynamic aspects such as interactive door-opening behaviour. An authoring interface allows non-expert users to orchestrate the data preparation chain and realise individual scenarios easily.

1 Introduction

At our institute we have done research within the field of virtual reality in the maritime sector for more than ten years. During this time, we have identified two key factors that need to be addressed when developing virtual reality applications:

1. The amount of data is huge and, when exported, often comes divided into a large number of files. For example, a ship of middle complexity consists of one million individual parts, often split across tens of thousands of files. This data needs to be converted and optimised for visualisation.

2. The enrichment of scenes with dynamic aspects, e.g. for more realistic design reviews or training scenarios, requires large numbers of objects to be handled in a similar way. For example, for realistic lighting conditions, each lamp designed within CAD must be assigned a light source within the visualisation. Manual processing is time-consuming and expensive, and is a show-stopper for many VR applications in the maritime industry.

We address those issues with an extensible data processing framework capable of processing 3D geometry and performing geometry-specific operations such as the calculation of bounding boxes. The framework supports the notion of modules performing the actual processing and offers a selection of predefined modules for basic operations. Additionally, an authoring interface is provided that allows the orchestration of the modules.
2 Related Work

Authoring of dynamic virtual reality scenarios has received increased attention during the last fifteen years. Specific authoring applications exist for a number of application domains, with a tight focus on the respective domain. Examples are the "High-Level Tool for Curators of 3D Virtual Visits" of Chitarro et al. [3] and the authoring approach for a "mixed reality assembly instructor for hierarchical structures" of Zauner et al. [11], as well as commercial software products like the Unity editor environment.

Generic authoring approaches are much harder to design and often provide more generic building blocks. Kapadia et al. introduce an approach for authoring behaviour in a simple behaviour description language [6]. To this end, various constraints and settings can be specified. The focus lies on the authoring of the behaviour itself.

Many approaches are grounded in the idea of object-based behaviour [2,5,4]. Here an object consists of geometry and behaviour, i.e. both form a unit. When authoring virtual worlds based on those approaches, the workflow is usually to create at least the interactive objects of the world within an authoring environment. The authoring environment is specific to the creation of behaviour objects and often provides only limited geometric modelling capabilities. Backman suggests a slightly different approach [1]. His authoring framework for virtual environments is also based on the notion of objects, where the link between physical behaviour and geometry is maintained through a link definition. For visual authoring, however, he utilises an existing 3D modelling tool, and the object properties responsible for the behaviour are defined as annotations to the 3D objects.

A comprehensive approach to the authoring of "Compelling Scenarios in Virtual Reality" is presented by Springer et al. [10]. The authors describe a system consisting of several stages. The system addresses automatic scenario generation by creating objects in a predefined way based on, e.g., 2D terrain images. Further, scenario editing is provided, supporting the creation of additional objects. The geometry is loaded from external files, while the object behaviour must be implemented in the presented application. Finally, an immersive viewer displays the scenario and can be coupled with the scenario editor, supporting live editing of the scenarios. The idea of a tight coupling between scenario editor and immersive environment is also pursued by Lee et al. [7]. The authors describe an authoring system for VR scenarios that allows the authoring to be done within VR. The approach is named "immersive authoring and testing" and, according to the authors, avoids frequent switches between the desktop and the VR system.

Zook et al. [12] go a step further in automatic scenario generation. They propose the creation of training scenarios based on computer-stored world knowledge, learning objectives and learner attributes. For a specific domain, the approach can generate a large number of different training scenarios for training different objectives in various combinations. The approach requires a high initial effort to store the world knowledge and learning objectives in a computer-processable way.
3 Data Processing and Enrichment Framework

Current state-of-the-art authoring approaches provide means either to create dynamic scenes from the ground up or to manually enrich existing geometric models with behaviour. In the latter case, the geometry usually gets exported from a CAD system or a 3D modelling tool. This approach works well when only a limited set of objects needs to be interactive and when the basic 3D geometry does not change over time. In 2006 we presented a first approach for defining an automatic enrichment process that enriches geometry with behaviour to address this issue [8]. However, this approach was limited in that it allowed addressing geometric objects solely by name and was closely tied to the VRML programming language.

We present a framework that allows applying specific behaviour to a large number of objects based on custom selection mechanisms. The work presented in this paper is based on our previous work but features an open architecture and additionally incorporates flexible mechanisms for data processing. Apart from the addition of geometric behaviour, the framework also supports the pre-processing of geometric data, such as data conversion or geometry cleanup. It can be used to define a fully automated data processing chain from CAD model to interactive virtual reality scenario.

The new architecture consists of a generic data processing platform which can handle basically any kind of data. The data processing flow can be defined using a graphical authoring environment, enabling non-IT experts to set up the data conversion chain for a VR session. The platform has a strong focus on 3D content and behaviour enrichment.

3.1 Data Processing Framework

In this section we discuss the data processing framework in more detail. The basic components of the framework are:

Modules perform the actual data processing: they receive data and operate on it, usually transforming it in some way. Each module contains a set of typed in-slots for the data that gets processed, a set of typed out-slots for the data generated by the module, and a set of attributes to configure various parameters of the module.

Components are a special form of modules. They can also perform data processing and, additionally, can contain other modules or components, called inner modules. In addition to the normal in- and out-slots, they can also contain internal in- and out-slots. Internal slots are utilised to release data available to the component to its inner modules, and to receive data generated by the inner modules back into the surrounding component.

Routes describe the data flow between modules and components, i.e. they connect the out-slots of one module with the in-slots of another module.
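To make these notions concrete, the sketch below renders the module/route abstraction in Java, the language in which module behaviour is implemented (Section 3.2). The interface, the class names, and the single-slot simplification are hypothetical; this is not the framework's actual API.

    // Hypothetical, simplified sketch of modules and routes; not the framework's API.
    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Consumer;

    interface Module {
        void setAttribute(String name, String value);            // configuration parameter
        void connectOut(String outSlot, Consumer<Object> route); // attach a route to an out-slot
        void receive(String inSlot, Object data);                // data arriving at an in-slot
        void execute();                                          // perform the processing
    }

    // A route forwards data from one module's out-slot to another module's in-slot.
    final class Route implements Consumer<Object> {
        private final Module target;
        private final String inSlot;

        Route(Module target, String inSlot) {
            this.target = target;
            this.inSlot = inSlot;
        }

        public void accept(Object data) {
            target.receive(inSlot, data);
        }
    }

    // A minimal example module with one in-slot and one out-slot: it upper-cases
    // the string it receives and forwards the result along its route.
    final class UpperCaseModule implements Module {
        private final Map<String, String> attributes = new HashMap<>();
        private Consumer<Object> out;
        private Object input;

        public void setAttribute(String name, String value) { attributes.put(name, value); }
        public void connectOut(String outSlot, Consumer<Object> route) { this.out = route; }
        public void receive(String inSlot, Object data) { this.input = data; }

        public void execute() {
            if (input != null && out != null) {
                out.accept(input.toString().toUpperCase());
            }
        }
    }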
Figure 1 illustrates the usage of modules and routes. The Creator module creates an X3D scene based on the X3D code specified in its attribute and sends the scene to its out-slot. The Writer module receives the scene and writes it to an X3D file.

Fig. 1. A simple data processing chain: the module Creator creates an X3D scene and the module Writer writes the scene to the hard drive

An example of a component is given in Fig. 2. The ForEach component receives a list of objects at its objectsIn slot. When executed, it creates instances of its inner modules for each element of the list, and sends the object to be processed through its inner out-slot listObject. In the example, a list of X3D objects is processed and a Transform node is inserted around each of the nodes. The resulting objects are collected by the ForEach component and released via its out-slot objectsOut once all objects have been processed.

Fig. 2. The component Processor receives the list of objects and for each of those objects runs its inner modules. The resulting objects are collected by the component and released via the out-slot objectsOut once all objects have been processed.

The platform offers an authoring environment which allows the user to visually combine the data processing modules, define the attributes, and connect the modules via routes. The illustrations of data processing modules throughout this paper have been exported from the authoring environment. With the predefined modules provided by the framework, common data processing tasks for the preparation of geometry can be realised. This includes the iteration over a collection of objects, the selection of specific items, and the processing of files with external tools like geometry converters or optimisers.

3.2 Creating Custom Modules

The open architecture of the data processing framework allows users to develop modules and components for their specific application domains. When developing a new module or component, its interface must be described within an XML definition and the behaviour must be implemented in the Java programming language. This includes the information about attributes, in- and out-slots supported by the new module, its name, and the Java classes implementing the data processing. This task is suitable for IT experts only.
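As a rough illustration of what the Java side of such a custom module might look like, the hypothetical class below wraps a Transform node around an incoming X3D fragment. The slot handling, the string-based X3D representation, and all names are assumptions made for the sketch, and the accompanying XML interface description is omitted.

    // Hypothetical sketch of a custom module implementation; not the framework's API.
    public final class TransformWrapperModule {
        private String translation = "0 0 0"; // attribute, declared in the XML definition

        public void setAttribute(String name, String value) {
            if ("translation".equals(name)) {
                translation = value;
            }
        }

        /** In-slot "nodeIn": receives an X3D node as a string. The returned string
         *  is what the framework would release on the out-slot "nodeOut". */
        public String process(String x3dNode) {
            return "<Transform translation='" + translation + "'>\n"
                 + x3dNode + "\n"
                 + "</Transform>";
        }
    }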
Once the custom modules have been developed, the end user can utilise them in the same way as the modules predefined by the framework. In fact, the predefined modules are in no way different from custom modules. For example, the ForEach component given in Fig. 2 is just an ordinary component: the functionality to iterate over the list of objects and send each item to the inner slots is implemented within the component implementation. A module designer could decide to remove the available ForEach component and provide their own implementation. Figure 3 gives an overview of the roles involved.

Fig. 3. The structure of the modules and components, as well as the type system, is defined by the framework developer. The module designer can define custom modules for their specific application domain, which can be used by the end user.

3.3 Geometry Modification and Behaviour Enrichment

The main focus of the platform is the preparation of geometry for interactive virtual reality scenarios. This is reflected by special support for the X3D language. In particular, a special type for X3D data and a set of modules specific to processing X3D geometry are provided.

The most important modules are presented hereafter. The X3DCreator module allows reading whole X3D files as well as X3D code fragments. The code can consist of any possible top-level node and can thus be used to create, e.g., TouchSensors, Script nodes, or additional geometry as required for the realisation of arbitrary behaviour. The X3DWriter module can write out X3D code to a file. The X3DNodeSelector supports the selection of specific nodes from an X3D scene. Selection can happen either based on the node names using wildcard expressions, or using XPath expressions.
Using XPath as the selection criterion is very flexible and can, e.g., be utilised to select nodes based on associated meta information, which is stored in separate Metadata nodes in X3D. Finally, the NodeInserter can insert one node inside or around another one. This can be required, e.g., if a Transform node must be added to allow the movement of an object, or if a TouchSensor should be added to allow interaction with the object. Fig. 4 shows how different modules are combined to add a Transform node around all doors within a scene.

3.4 Maritime Enrichment Scenario

We have used the platform to realise data processing for different maritime application scenarios. This section describes the realisation of one such scenario.

In this scenario we received a ship model consisting of approximately 20,000 geometry files and more than 100,000 objects from the shipyard Flensburger Schiffbau Gesellschaft. To allow for an interactive walk-through, we needed to add the functionality to open and close the doors within the model. We set up the data processing chain shown in Fig. 5. The chain sets up a VR scene and automatically enriches it with the interactive behaviour. It first scans the content of the directories where the geometry files are located and then creates an X3D Group node containing an Inline node for each of the files. A filter then selects all nodes whose name matches the pattern 216* (indicating a door). The list of nodes is passed to the ForEach component, where one module calculates the bounding box for each of the doors and a second module inserts the behaviour to open and close the doors. The route from the Inline module to the ForEach component is merely to make the grouping node available to the ForEach component. The resulting file is then written to the specified X3D file by the X3DNodeWriter.

Fig. 5. Data processing chain to enrich a ship scene with the behaviour to allow opening and closing of doors
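In the framework this chain is assembled graphically in the authoring environment. Purely as an illustration of the same steps in code form (scan the geometry directory, create an Inline node per file, select the 216* door nodes, wrap them so behaviour can be attached, and write the result), the sketch below uses hypothetical stand-ins; it is not the framework's API, and the actual door animation behaviour is omitted.

    // Hypothetical, code-style rendering of the door-enrichment steps of Fig. 5;
    // all class, directory, and file names are illustrative stand-ins.
    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public final class DoorEnrichmentChain {
        public static void main(String[] args) throws IOException {
            List<String> inlineNodes = new ArrayList<>();

            // 1. Scan the geometry directory and create one Inline node per file.
            try (DirectoryStream<Path> files =
                     Files.newDirectoryStream(Paths.get("geometry"), "*.x3d")) {
                for (Path file : files) {
                    inlineNodes.add("<Inline url='\"" + file.getFileName() + "\"'/>");
                }
            }

            // 2.-3. Select door nodes (file names matching 216*) and wrap each one
            // in a Transform so that opening/closing behaviour can be inserted;
            // other nodes are added to the grouping node unchanged.
            StringBuilder scene = new StringBuilder("<Group>\n");
            for (String node : inlineNodes) {
                boolean isDoor = node.contains("url='\"216");
                if (isDoor) {
                    scene.append("  <Transform> <!-- door behaviour inserted here -->\n")
                         .append("    ").append(node).append("\n")
                         .append("  </Transform>\n");
                } else {
                    scene.append("  ").append(node).append("\n");
                }
            }
            scene.append("</Group>\n");

            // 4. Write the enriched scene to the target X3D file.
            Files.write(Paths.get("ship_enriched.x3d"), scene.toString().getBytes());
        }
    }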
Fig. 6. Walk-through of an enriched 3D scene: when the user approaches, the door opens automatically

Figure 6 illustrates what such a walk-through looks like: when the user approaches a door, it opens automatically.

Since the model was too large to be displayed fluently as a whole, we set up additional data processing steps to implement on-demand loading of objects. Objects are shown only when the user approaches within a certain distance, and when the user moves away, the objects are hidden again. The modules for this data processing consist of the BoundingBox creator module, which calculates the bounding boxes of the individual objects, and an OnDemandLoader module, which inserts a ProximitySensor node for each object and shows or hides the object as appropriate. More information on this industry scenario can be found in our paper [9].

The implementation of the behaviour enrichment with our data processing framework has two major advantages. First, it can be executed multiple times, making it applicable to new versions of the CAD geometry or, with usually only a few adjustments such as the name pattern, to entirely new ships. Second, it allows applying the behaviour to large numbers of objects in very little time.